Bug #43816
cephadm: Unable to use IPv6 on "cephadm bootstrap"
Status:
Resolved
Priority:
Normal
Assignee:
Matthew Oliver
Category:
cephadm (binary)
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
octopus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I'm trying to create a new cluster using IPv6, so I've executed the following command:
cephadm --verbose bootstrap \
  --mon-ip [fde4:8dba:82e1:0:5054:ff:fe94:d39c] \
  --skip-dashboard \
  --output-keyring /etc/ceph/ceph.client.admin.keyring \
  --output-config /etc/ceph/ceph.conf
This command produced the following error:
INFO:cephadm:Verifying IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 3300 ...
WARNING:cephadm:Cannot bind to IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 3300: [Errno -2] Name or service not known
INFO:cephadm:Verifying IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 6789 ...
WARNING:cephadm:Cannot bind to IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 6789: [Errno -2] Name or service not known
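The `[Errno -2]` comes from name resolution: the bracketed form is not a valid numeric IPv6 literal, so the resolver treats the whole string, brackets included, as a hostname. A minimal sketch (using the loopback address `::1` in place of the address above) reproduces the behaviour:

```python
import socket

# A bare IPv6 literal is recognized as a numeric address and
# resolves without any DNS lookup.
print(socket.getaddrinfo("::1", 3300)[0][0])  # AddressFamily.AF_INET6

# The bracketed form passed on the command line is not a valid
# literal, so getaddrinfo falls back to hostname resolution and
# raises socket.gaierror (the bug log shows
# "[Errno -2] Name or service not known" on glibc).
try:
    socket.getaddrinfo("[::1]", 3300)
except socket.gaierror as e:
    print("lookup failed:", e)
```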
To fix this I've applied the following patch to my local `cephadm`:
--- a/src/cephadm/cephadm
+++ b/src/cephadm/cephadm
@@ -171,6 +171,8 @@ def check_ip_port(ip, port):
     logger.info('Verifying IP %s port %d ...' % (ip, port))
     if ip.startswith('[') or '::' in ip:
         s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
+        if ip.startswith('[') and ip.endswith(']'):
+            ip = ip[1:-1]
     else:
         s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     try:
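The fix boils down to stripping the URL-style brackets before handing the address to `socket.bind()`, which expects a bare IPv6 literal. A standalone sketch of the patched logic (the helper names `unwrap_ipv6` and `pick_family` are illustrative, not taken from cephadm):

```python
import socket

def unwrap_ipv6(ip):
    """Strip URL-style brackets from an IPv6 literal, if present."""
    if ip.startswith('[') and ip.endswith(']'):
        return ip[1:-1]
    return ip

def pick_family(ip):
    """Mirror the check_ip_port heuristic: brackets or '::' imply IPv6."""
    if ip.startswith('[') or '::' in ip:
        return socket.AF_INET6
    return socket.AF_INET

print(unwrap_ipv6('[fde4:8dba:82e1:0:5054:ff:fe94:d39c]'))
# -> fde4:8dba:82e1:0:5054:ff:fe94:d39c
```

Note that `bind()` on an `AF_INET6` socket needs the unwrapped form; the brackets exist only to disambiguate the address from a port separator on command lines and in URLs.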
With the patch above, the error was gone, and the execution continues until "orchestrator set backend cephadm":
INFO:cephadm:Enabling cephadm module...
DEBUG:cephadm:['/usr/bin/podman', 'run', '--rm', '--net=host', '-e', 'CONTAINER_IMAGE=ceph/daemon-base:latest-master-devel', '-e', 'NODE_NAME=node1.octopusipv6.com', '-v', '/var/log/ceph/98e3b906-3ee9-11ea-8fb9-525400fe0277:/var/log/ceph:z', '-v', '/tmp/ceph-tmpwlpgzkvy:/etc/ceph/ceph.client.admin.keyring:z', '-v', '/tmp/ceph-tmpt01cdw1d:/etc/ceph/ceph.conf:z', '--entrypoint', '/usr/bin/ceph', 'ceph/daemon-base:latest-master-devel', 'mgr', 'module', 'enable', 'cephadm']
DEBUG:cephadm:Running command: /usr/bin/podman run --rm --net=host -e CONTAINER_IMAGE=ceph/daemon-base:latest-master-devel -e NODE_NAME=node1.octopusipv6.com -v /var/log/ceph/98e3b906-3ee9-11ea-8fb9-525400fe0277:/var/log/ceph:z -v /tmp/ceph-tmpwlpgzkvy:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpt01cdw1d:/etc/ceph/ceph.conf:z --entrypoint /usr/bin/ceph ceph/daemon-base:latest-master-devel mgr module enable cephadm
INFO:cephadm:Setting orchestrator backend to cephadm...
DEBUG:cephadm:['/usr/bin/podman', 'run', '--rm', '--net=host', '-e', 'CONTAINER_IMAGE=ceph/daemon-base:latest-master-devel', '-e', 'NODE_NAME=node1.octopusipv6.com', '-v', '/var/log/ceph/98e3b906-3ee9-11ea-8fb9-525400fe0277:/var/log/ceph:z', '-v', '/tmp/ceph-tmpwlpgzkvy:/etc/ceph/ceph.client.admin.keyring:z', '-v', '/tmp/ceph-tmpt01cdw1d:/etc/ceph/ceph.conf:z', '--entrypoint', '/usr/bin/ceph', 'ceph/daemon-base:latest-master-devel', 'orchestrator', 'set', 'backend', 'cephadm']
DEBUG:cephadm:Running command: /usr/bin/podman run --rm --net=host -e CONTAINER_IMAGE=ceph/daemon-base:latest-master-devel -e NODE_NAME=node1.octopusipv6.com -v /var/log/ceph/98e3b906-3ee9-11ea-8fb9-525400fe0277:/var/log/ceph:z -v /tmp/ceph-tmpwlpgzkvy:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpt01cdw1d:/etc/ceph/ceph.conf:z --entrypoint /usr/bin/ceph ceph/daemon-base:latest-master-devel orchestrator set backend cephadm
At this point "cephadm bootstrap" hangs indefinitely and never completes.
Additional information:
node1:~ # ceph -s
  cluster:
    id:     98e3b906-3ee9-11ea-8fb9-525400fe0277
    health: HEALTH_WARN
            1 stray host(s) with 2 service(s) not managed by cephadm
            2 stray service(s) not managed by cephadm
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node1.octopusipv6.com (age 9m)
    mgr: eyxhdl(active, since 9m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
node1:~ # cephadm ls
[
    {
        "style": "cephadm:v1",
        "name": "crash",
        "fsid": "98e3b906-3ee9-11ea-8fb9-525400fe0277",
        "enabled": true,
        "state": "running",
        "container_id": "3bb0ab826b33daf22729bf98de688c60417d3c68f9e1910253ad5e52819bd948",
        "container_image_name": "docker.io/ceph/daemon-base:latest-master-devel",
        "container_image_id": "dc878b80ff921af1adab46171e05b821931d34ee5d15d473cf8671341f5b7bfe",
        "version": "15.0.0-9720-gd1274aa"
    },
    {
        "style": "cephadm:v1",
        "name": "mgr.eyxhdl",
        "fsid": "98e3b906-3ee9-11ea-8fb9-525400fe0277",
        "enabled": true,
        "state": "running",
        "container_id": "91155aed2aa8984bebc580d0a8d560494fe760d64528abe9532cd22f0fc2b006",
        "container_image_name": "docker.io/ceph/daemon-base:latest-master-devel",
        "container_image_id": "dc878b80ff921af1adab46171e05b821931d34ee5d15d473cf8671341f5b7bfe",
        "version": "15.0.0-9720-gd1274aa"
    },
    {
        "style": "cephadm:v1",
        "name": "mon.node1.octopusipv6.com",
        "fsid": "98e3b906-3ee9-11ea-8fb9-525400fe0277",
        "enabled": true,
        "state": "running",
        "container_id": "4bfa4fcfc9bb513bd2bde33d3eb08e253639f159f68ec08d978750ae7c302cfe",
        "container_image_name": "docker.io/ceph/daemon-base:latest-master-devel",
        "container_image_id": "dc878b80ff921af1adab46171e05b821931d34ee5d15d473cf8671341f5b7bfe",
        "version": "15.0.0-9720-gd1274aa"
    }
]
node1:~ # cephadm logs --name mgr.eyxhdl
INFO:cephadm:Inferring fsid 98e3b906-3ee9-11ea-8fb9-525400fe0277
debug 2020-01-24T20:40:56.498+0000 7f907038c040 0 set uid:gid to 167:167 (ceph:ceph)
debug 2020-01-24T20:40:56.498+0000 7f907038c040 0 ceph version 15.0.0-9720-gd1274aa (d1274aae3a04a9cc903ef08e108254cf23b6d85d) octopus (dev), process ceph-mgr, pid 1
debug 2020-01-24T20:40:56.498+0000 7f907038c040 0 pidfile_write: ignore empty --pid-file
debug 2020-01-24T20:40:56.534+0000 7f907038c040 1 mgr[py] Loading python module 'alerts'
debug 2020-01-24T20:40:56.614+0000 7f907038c040 1 mgr[py] Loading python module 'ansible'
debug 2020-01-24T20:40:56.766+0000 7f907038c040 1 mgr[py] Loading python module 'balancer'
debug 2020-01-24T20:40:56.822+0000 7f907038c040 1 mgr[py] Loading python module 'cephadm'
debug 2020-01-24T20:40:56.938+0000 7f907038c040 1 mgr[py] Loading python module 'crash'
debug 2020-01-24T20:40:56.998+0000 7f907038c040 1 mgr[py] Loading python module 'dashboard'
debug 2020-01-24T20:40:57.490+0000 7f907038c040 1 mgr[py] Loading python module 'deepsea'
debug 2020-01-24T20:40:57.622+0000 7f907038c040 1 mgr[py] Loading python module 'devicehealth'
debug 2020-01-24T20:40:57.714+0000 7f907038c040 1 mgr[py] Loading python module 'diskprediction_local'
debug 2020-01-24T20:40:57.846+0000 7f907038c040 1 mgr[py] Loading python module 'influx'
debug 2020-01-24T20:40:57.922+0000 7f907038c040 1 mgr[py] Loading python module 'insights'
debug 2020-01-24T20:40:57.966+0000 7f907038c040 1 mgr[py] Loading python module 'iostat'
debug 2020-01-24T20:40:58.010+0000 7f907038c040 1 mgr[py] Loading python module 'k8sevents'
debug 2020-01-24T20:40:58.478+0000 7f907038c040 1 mgr[py] Loading python module 'localpool'
debug 2020-01-24T20:40:58.610+0000 7f907038c040 1 mgr[py] Loading python module 'orchestrator_cli'
debug 2020-01-24T20:40:58.690+0000 7f907038c040 1 mgr[py] Loading python module 'pg_autoscaler'
debug 2020-01-24T20:40:58.762+0000 7f907038c040 1 mgr[py] Loading python module 'progress'
debug 2020-01-24T20:40:58.826+0000 7f907038c040 1 mgr[py] Loading python module 'prometheus'
debug 2020-01-24T20:40:59.202+0000 7f907038c040 1 mgr[py] Loading python module 'rbd_support'
debug 2020-01-24T20:40:59.270+0000 7f907038c040 1 mgr[py] Loading python module 'restful'
debug 2020-01-24T20:40:59.510+0000 7f907038c040 1 mgr[py] Loading python module 'rook'
debug 2020-01-24T20:41:00.018+0000 7f907038c040 1 mgr[py] Loading python module 'selftest'
debug 2020-01-24T20:41:00.066+0000 7f907038c040 1 mgr[py] Loading python module 'status'
debug 2020-01-24T20:41:00.122+0000 7f907038c040 1 mgr[py] Loading python module 'telegraf'
debug 2020-01-24T20:41:00.174+0000 7f907038c040 1 mgr[py] Loading python module 'telemetry'
debug 2020-01-24T20:41:00.302+0000 7f907038c040 1 mgr[py] Loading python module 'test_orchestrator'
debug 2020-01-24T20:41:00.386+0000 7f907038c040 1 mgr[py] Loading python module 'volumes'
debug 2020-01-24T20:41:00.634+0000 7f907038c040 1 mgr[py] Loading python module 'zabbix'
debug 2020-01-24T20:41:00.686+0000 7f905d0b9700 0 ms_deliver_dispatch: unhandled message 0x560c3c908800 mon_map magic: 0 v1 from mon.0 v2:[fde4:8dba:82e1:0:5054:ff:fe94:d39c]:3300/0
debug 2020-01-24T20:41:01.266+0000 7f905d0b9700 1 mgr handle_mgr_map Activating!
debug 2020-01-24T20:41:01.266+0000 7f905d0b9700 1 mgr handle_mgr_map I am now activating
debug 2020-01-24T20:41:01.282+0000 7f904af24700 1 mgr[py] Upgrading module configuration for Mimic
debug 2020-01-24T20:41:01.282+0000 7f904af24700 0 [balancer] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.282+0000 7f904af24700 1 mgr load Constructed class from module: balancer
debug 2020-01-24T20:41:01.282+0000 7f904af24700 0 [crash] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.282+0000 7f904af24700 1 mgr load Constructed class from module: crash
debug 2020-01-24T20:41:01.286+0000 7f904af24700 0 [devicehealth] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.286+0000 7f904af24700 1 mgr load Constructed class from module: devicehealth
debug 2020-01-24T20:41:01.286+0000 7f904af24700 0 [iostat] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.286+0000 7f904af24700 1 mgr load Constructed class from module: iostat
debug 2020-01-24T20:41:01.286+0000 7f904af24700 0 [orchestrator_cli] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.290+0000 7f904af24700 1 mgr load Constructed class from module: orchestrator_cli
debug 2020-01-24T20:41:01.290+0000 7f904af24700 0 [pg_autoscaler] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.290+0000 7f904af24700 1 mgr load Constructed class from module: pg_autoscaler
debug 2020-01-24T20:41:01.290+0000 7f904af24700 0 [progress] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.290+0000 7f904af24700 1 mgr load Constructed class from module: progress
debug 2020-01-24T20:41:01.294+0000 7f904af24700 0 [rbd_support] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.294+0000 7f904af24700 1 mgr load Constructed class from module: rbd_support
debug 2020-01-24T20:41:01.294+0000 7f904af24700 0 [restful] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.294+0000 7f904af24700 1 mgr load Constructed class from module: restful
debug 2020-01-24T20:41:01.298+0000 7f904af24700 0 [status] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.298+0000 7f904af24700 1 mgr load Constructed class from module: status
debug 2020-01-24T20:41:01.298+0000 7f904af24700 0 [telemetry] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.298+0000 7f904af24700 1 mgr load Constructed class from module: telemetry
debug 2020-01-24T20:41:01.298+0000 7f904af24700 0 [volumes] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:01.302+0000 7f9039f53700 0 [restful] [WARNING] [root] server not running: no certificate configured
debug 2020-01-24T20:41:01.486+0000 7f904af24700 1 mgr load Constructed class from module: volumes
debug 2020-01-24T20:41:03.270+0000 7f903ff5f700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:05.270+0000 7f903ff5f700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:07.270+0000 7f903ff5f700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
ignoring --setuser ceph since I am not root
ignoring --setgroup ceph since I am not root
debug 2020-01-24T20:41:08.722+0000 7fb565654040 0 ceph version 15.0.0-9720-gd1274aa (d1274aae3a04a9cc903ef08e108254cf23b6d85d) octopus (dev), process ceph-mgr, pid 1
debug 2020-01-24T20:41:08.722+0000 7fb565654040 0 pidfile_write: ignore empty --pid-file
debug 2020-01-24T20:41:08.762+0000 7fb565654040 1 mgr[py] Loading python module 'alerts'
debug 2020-01-24T20:41:08.846+0000 7fb565654040 1 mgr[py] Loading python module 'ansible'
debug 2020-01-24T20:41:09.002+0000 7fb565654040 1 mgr[py] Loading python module 'balancer'
debug 2020-01-24T20:41:09.058+0000 7fb565654040 1 mgr[py] Loading python module 'cephadm'
debug 2020-01-24T20:41:09.170+0000 7fb565654040 1 mgr[py] Loading python module 'crash'
debug 2020-01-24T20:41:09.234+0000 7fb565654040 1 mgr[py] Loading python module 'dashboard'
debug 2020-01-24T20:41:09.694+0000 7fb565654040 1 mgr[py] Loading python module 'deepsea'
debug 2020-01-24T20:41:09.850+0000 7fb565654040 1 mgr[py] Loading python module 'devicehealth'
debug 2020-01-24T20:41:09.950+0000 7fb565654040 1 mgr[py] Loading python module 'diskprediction_local'
debug 2020-01-24T20:41:10.078+0000 7fb565654040 1 mgr[py] Loading python module 'influx'
debug 2020-01-24T20:41:10.126+0000 7fb565654040 1 mgr[py] Loading python module 'insights'
debug 2020-01-24T20:41:10.174+0000 7fb565654040 1 mgr[py] Loading python module 'iostat'
debug 2020-01-24T20:41:10.218+0000 7fb565654040 1 mgr[py] Loading python module 'k8sevents'
debug 2020-01-24T20:41:10.646+0000 7fb565654040 1 mgr[py] Loading python module 'localpool'
debug 2020-01-24T20:41:10.766+0000 7fb565654040 1 mgr[py] Loading python module 'orchestrator_cli'
debug 2020-01-24T20:41:10.850+0000 7fb565654040 1 mgr[py] Loading python module 'pg_autoscaler'
debug 2020-01-24T20:41:10.930+0000 7fb565654040 1 mgr[py] Loading python module 'progress'
debug 2020-01-24T20:41:11.002+0000 7fb565654040 1 mgr[py] Loading python module 'prometheus'
debug 2020-01-24T20:41:11.418+0000 7fb565654040 1 mgr[py] Loading python module 'rbd_support'
debug 2020-01-24T20:41:11.502+0000 7fb565654040 1 mgr[py] Loading python module 'restful'
debug 2020-01-24T20:41:11.766+0000 7fb565654040 1 mgr[py] Loading python module 'rook'
debug 2020-01-24T20:41:12.270+0000 7fb565654040 1 mgr[py] Loading python module 'selftest'
debug 2020-01-24T20:41:12.318+0000 7fb565654040 1 mgr[py] Loading python module 'status'
debug 2020-01-24T20:41:12.378+0000 7fb565654040 1 mgr[py] Loading python module 'telegraf'
debug 2020-01-24T20:41:12.426+0000 7fb565654040 1 mgr[py] Loading python module 'telemetry'
debug 2020-01-24T20:41:12.558+0000 7fb565654040 1 mgr[py] Loading python module 'test_orchestrator'
debug 2020-01-24T20:41:12.638+0000 7fb565654040 1 mgr[py] Loading python module 'volumes'
debug 2020-01-24T20:41:12.886+0000 7fb565654040 1 mgr[py] Loading python module 'zabbix'
debug 2020-01-24T20:41:12.938+0000 7fb552594700 0 ms_deliver_dispatch: unhandled message 0x5600a0a0c800 mon_map magic: 0 v1 from mon.0 v2:[fde4:8dba:82e1:0:5054:ff:fe94:d39c]:3300/0
debug 2020-01-24T20:41:13.586+0000 7fb552594700 1 mgr handle_mgr_map Activating!
debug 2020-01-24T20:41:13.586+0000 7fb552594700 1 mgr handle_mgr_map I am now activating
debug 2020-01-24T20:41:13.598+0000 7fb5403ff700 1 mgr[py] Upgrading module configuration for Mimic
debug 2020-01-24T20:41:13.606+0000 7fb5403ff700 0 [balancer] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.606+0000 7fb5403ff700 1 mgr load Constructed class from module: balancer
debug 2020-01-24T20:41:13.606+0000 7fb5403ff700 0 [cephadm] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.614+0000 7fb5403ff700 1 mgr load Constructed class from module: cephadm
debug 2020-01-24T20:41:13.618+0000 7fb5403ff700 0 [crash] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.618+0000 7fb5403ff700 1 mgr load Constructed class from module: crash
debug 2020-01-24T20:41:13.618+0000 7fb5403ff700 0 [devicehealth] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.618+0000 7fb5403ff700 1 mgr load Constructed class from module: devicehealth
debug 2020-01-24T20:41:13.622+0000 7fb5403ff700 0 [iostat] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.622+0000 7fb5403ff700 1 mgr load Constructed class from module: iostat
debug 2020-01-24T20:41:13.622+0000 7fb5403ff700 0 [orchestrator_cli] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.622+0000 7fb5403ff700 1 mgr load Constructed class from module: orchestrator_cli
debug 2020-01-24T20:41:13.622+0000 7fb5403ff700 0 [pg_autoscaler] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.622+0000 7fb5403ff700 1 mgr load Constructed class from module: pg_autoscaler
debug 2020-01-24T20:41:13.626+0000 7fb5403ff700 0 [progress] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.626+0000 7fb5403ff700 1 mgr load Constructed class from module: progress
debug 2020-01-24T20:41:13.626+0000 7fb5403ff700 0 [rbd_support] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.630+0000 7fb5403ff700 1 mgr load Constructed class from module: rbd_support
debug 2020-01-24T20:41:13.630+0000 7fb5403ff700 0 [restful] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.630+0000 7fb5403ff700 1 mgr load Constructed class from module: restful
debug 2020-01-24T20:41:13.630+0000 7fb5403ff700 0 [status] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.630+0000 7fb5403ff700 1 mgr load Constructed class from module: status
debug 2020-01-24T20:41:13.634+0000 7fb5403ff700 0 [telemetry] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.634+0000 7fb5403ff700 1 mgr load Constructed class from module: telemetry
debug 2020-01-24T20:41:13.634+0000 7fb5403ff700 0 [volumes] [WARNING] [root] setting log level based on debug_mgr: WARNING (1/5)
debug 2020-01-24T20:41:13.638+0000 7fb52cc29700 0 [restful] [WARNING] [root] server not running: no certificate configured
debug 2020-01-24T20:41:13.642+0000 7fb5403ff700 1 mgr load Constructed class from module: volumes
debug 2020-01-24T20:41:14.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:16.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:18.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:20.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:22.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:24.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:26.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:28.622+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:30.626+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:32.626+0000 7fb53543a700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
debug 2020-01-24T20:41:34.626+0000 7fb53543a700 1 mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
debug 2020-01-24T20:41:34.626+0000 7fb53543a700 0 log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
debug 2020-01-24T20:41:36.626+0000 7fb53543a700 0 log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
debug 2020-01-24T20:41:38.626+0000 7fb53543a700 0 log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
...
node1:~ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:fe:02:77 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.100/24 brd 192.168.121.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fefe:277/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:94:d3:9c brd ff:ff:ff:ff:ff:ff
    inet 10.20.151.201/24 brd 10.20.151.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fde4:8dba:82e1:0:1dda:49da:3ac1:ea76/64 scope global temporary dynamic
       valid_lft 3059sec preferred_lft 3059sec
    inet6 fde4:8dba:82e1:0:5054:ff:fe94:d39c/64 scope global dynamic mngtmpaddr
       valid_lft 3059sec preferred_lft 3059sec
    inet6 fe80::5054:ff:fe94:d39c/64 scope link
       valid_lft forever preferred_lft forever
Updated by Ricardo Marques about 4 years ago
This command produced the following error:
> INFO:cephadm:Verifying IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 3300 ...
> WARNING:cephadm:Cannot bind to IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 3300: [Errno -2] Name or service not known
> INFO:cephadm:Verifying IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 6789 ...
> WARNING:cephadm:Cannot bind to IP [fde4:8dba:82e1:0:5054:ff:fe94:d39c] port 6789: [Errno -2] Name or service not known
This warning will be fixed by https://github.com/ceph/ceph/pull/34180
Updated by Sage Weil about 4 years ago
- Status changed from New to Pending Backport
- Backport set to octopus
Updated by Nathan Cutler about 4 years ago
- Copied to Backport #44845: octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap" added
Updated by Sebastian Wagner about 4 years ago
- Related to Bug #45016: mgr: `ceph tell mgr mgr_status` hangs added
Updated by Sebastian Wagner about 4 years ago
- Status changed from Pending Backport to New
- Pull request ID set to 34180
This covers PR 34180.
Updated by Matthew Oliver almost 4 years ago
- Assignee set to Matthew Oliver
It seems to work once I've applied https://github.com/ceph/ceph/pull/35633 and added the --ipv6 flag to bootstrap.
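For readers hitting the same issue, the working invocation would then look something like the following CLI fragment (a sketch based on the comment above, assuming the fix from PR 35633 is applied; the address is the reporter's, and note the mon IP is passed without brackets):

```
cephadm --verbose bootstrap \
  --mon-ip fde4:8dba:82e1:0:5054:ff:fe94:d39c \
  --ipv6 \
  --skip-dashboard
```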
Updated by Sebastian Wagner over 3 years ago
- Status changed from New to Resolved