Bug #54431


set_port() fails on Windows rbd clients on IPv6-only clusters

Added by brent s. about 2 years ago. Updated 11 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
pacific quincy
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Whenever I try to perform any cluster interaction with the rbd tool on Windows Server 2019, I get the following error:

2022-02-28T16:17:06.511Eastern Standard Time 4 -1 ../src/msg/msg_types.h: In function 'void entity_addr_t::set_port(int)' thread 4 time 2022-02-28T16:17:06.495704Eastern Standard Time
../src/msg/msg_types.h: 364: ceph_abort_msg("abort() called")

 ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

(and, of course, a failure to actually connect), leaving Windows hosts completely unusable as cluster clients.

In particular, I am testing with rbd ls [POOLNAME], which works perfectly fine on a cluster node and lists my test image in the specified pool:

[00:50:29] root@[REDACTED_H1]:~ # rbd ls [REDACTED_P1]_rbd_hyperv_0
test

The key has, I believe, been generated with the necessary capabilities:

client.[REDACTED_H2]
        key: [REDACTED_K1]
        caps: [mgr] profile rbd pool=[REDACTED_P1]_rbd_hyperv_0 network [REDACTED_PREFIX1]:10:12:64:82/128
        caps: [mon] profile rbd network [REDACTED_PREFIX1]:10:12:64:82/128
        caps: [osd] profile rbd pool=[REDACTED_P1]_rbd_hyperv_0 network [REDACTED_PREFIX1]:10:12:64:82/128

The client was installed with the Windows MSI installer, using the latest build available from the Cloudbase page (https://cloudbase.it/ceph-for-windows/) as of February 24, 2022.

The ceph.conf file used on the client is as follows:

[global]
    log to stderr = true
    ; Uncomment the following in order to use the Windows Event Log
    ;log to syslog = true

    run dir = C:/ProgramData/ceph/out
    crash dir = C:/ProgramData/ceph/out

    ; Use the following to change the cephfs client log level
    ; debug client = 2
[client]
    keyring = C:/ProgramData/ceph/keyring
    ;log file = C:/ProgramData/ceph/out/$name.$pid.log
    admin socket = C:/ProgramData/ceph/out/$name.$pid.asok

    ;client_permissions = true
    ; client_mount_uid = 1000
    ; client_mount_gid = 1000
[global]
    ; None of these apparently works.
    ;mon host = [[REDACTED_PREFIX1]:10:12:64:14]
    ;mon host = [REDACTED_H1]:3300,
    mon host = [[REDACTED_H1]]
    ms_bind_ipv6 = true
    ms_bind_ipv4 = false
    ;public_network = [REDACTED_PREFIX1]::/64
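For comparison with the commented-out mon host attempts above: Ceph's documented syntax for IPv6 monitor endpoints brackets the literal address, optionally with v2:/v1: protocol prefixes and explicit ports. A sketch using a placeholder documentation address (2001:db8::14 is illustrative only, not the real monitor):

```ini
[global]
    ; Hypothetical example -- substitute the actual monitor address.
    mon host = [v2:[2001:db8::14]:3300,v1:[2001:db8::14]:6789]
    ms_bind_ipv6 = true
    ms_bind_ipv4 = false
```

Even with this form, the abort reported here suggests the Windows build fails before the address is usable, so the syntax alone may not be the root cause.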

Interestingly, though perhaps unrelated to this issue, the log is created with a "client.admin." prefix, despite the client name in the keyring file being wholly different. Specifying "id" (either at runtime via --id or in the configuration file itself) does not change this behavior.

Please let me know if you would like me to run any further tests.

Actions #1

Updated by Lucian Petrut 11 months ago

  • Status changed from New to Resolved
  • Backport set to pacific quincy
  • Pull request ID set to 43957