Bug #21427
"ceph-create-keys:ceph-mon is not in quorum: u'probing'" in ceph-deploy-luminous
Status: Closed
Description
This seems to be OVH-specific.
luminous 12.2.1
Run: http://pulpito.ceph.com/yuriw-2017-09-18_20:49:19-ceph-deploy-luminous-distro-basic-ovh/
Jobs: all
Logs: http://qa-proxy.ceph.com/teuthology/yuriw-2017-09-18_20:49:19-ceph-deploy-luminous-distro-basic-ovh/1645652/teuthology.log
2017-09-18T21:25:43.366 INFO:teuthology.orchestra.run.ovh023.stderr:INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
2017-09-18T21:25:44.506 INFO:teuthology.orchestra.run.ovh023.stderr:INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
2017-09-18T21:25:45.508 INFO:teuthology.orchestra.run.ovh023.stderr:ceph-mon was not able to join quorum within 0 seconds
2017-09-18T21:25:45.521 INFO:tasks.ceph_deploy:Error encountered, logging exception before tearing down ceph-deploy
2017-09-18T21:25:45.522 INFO:tasks.ceph_deploy:Traceback (most recent call last):
  File "/home/teuthworker/src/github.com_ceph_ceph_luminous/qa/tasks/ceph_deploy.py", line 306, in build_ceph_cluster
    '--id', remote.shortname])
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 193, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 423, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 155, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 177, in _raise_for_status
    node=self.hostname, label=self.label
CommandFailedError: Command failed on ovh023 with status 1: 'sudo ceph-create-keys --cluster ceph --id ovh023'
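The failing step is essentially a polling loop: ceph-create-keys repeatedly queries the monitor's status and gives up if it never joins quorum. A minimal sketch of that pattern, for illustration only (wait_for_quorum and its injected get_status callable are hypothetical names, not the real tool's code; in the logs the status comes from `ceph --admin-daemon .../ceph-mon.<id>.asok mon_status`):

```python
import time

def wait_for_quorum(get_status, timeout=60, interval=1.0):
    """Poll a monitor's status until it reports an in-quorum state.

    `get_status` stands in for running mon_status against the admin
    socket and parsing the JSON. A monitor wedged in 'probing', as on
    the OVH nodes above, never satisfies the condition, so the caller
    eventually gives up with the error seen in the teuthology log.
    """
    state = None
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = get_status()["state"]
        if state in ("leader", "peon"):  # monitor is in quorum
            return state
        time.sleep(interval)
    raise RuntimeError("ceph-mon is not in quorum: %r" % state)
```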
Updated by Yuri Weinstein over 6 years ago
Updated by David Galloway over 6 years ago
Success on VPS:
2017-09-18T21:31:31.032 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm095.asok mon_status
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] ********************************************************************************
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] status for monitor: mon.vpm095
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] {
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "election_epoch": 0,
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "extra_probe_peers": [
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "172.21.2.73:6789/0"
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] ],
2017-09-18T21:31:31.167 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "feature_map": {
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "mon": {
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "group": {
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "features": "0x1ffddff8eea4fffb",
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "num": 1,
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "release": "luminous"
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] }
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] }
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] },
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "features": {
2017-09-18T21:31:31.168 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "quorum_con": "0",
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "quorum_mon": [],
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "required_con": "0",
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "required_mon": []
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] },
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "monmap": {
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "created": "2017-09-18 21:31:28.662719",
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "epoch": 0,
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "features": {
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "optional": [],
2017-09-18T21:31:31.169 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "persistent": []
2017-09-18T21:31:31.170 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] },
2017-09-18T21:31:31.170 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "fsid": "4489beaa-a808-41ab-804b-ede6ae4ba0ff",
2017-09-18T21:31:31.170 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "modified": "2017-09-18 21:31:28.662719",
2017-09-18T21:31:31.171 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "mons": [
2017-09-18T21:31:31.171 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] {
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "addr": "172.21.2.95:6789/0",
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "name": "vpm095",
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "public_addr": "172.21.2.95:6789/0",
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "rank": 0
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] },
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] {
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "addr": "0.0.0.0:0/1",
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "name": "vpm073",
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "public_addr": "0.0.0.0:0/1",
2017-09-18T21:31:31.172 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "rank": 1
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] }
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] ]
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] },
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "name": "vpm095",
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "outside_quorum": [
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "vpm095"
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] ],
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "quorum": [],
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "rank": 0,
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "state": "probing",
2017-09-18T21:31:31.173 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] "sync_provider": []
2017-09-18T21:31:31.174 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] }
2017-09-18T21:31:31.174 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095][DEBUG ] ********************************************************************************
Failure on OVH:
2017-09-18T21:12:16.564 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ovh023.asok mon_status
2017-09-18T21:12:16.712 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] ********************************************************************************
2017-09-18T21:12:16.712 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] status for monitor: mon.ovh023
2017-09-18T21:12:16.712 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] {
2017-09-18T21:12:16.713 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "election_epoch": 0,
2017-09-18T21:12:16.713 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "extra_probe_peers": [
2017-09-18T21:12:16.713 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "158.69.68.150:6789/0"
2017-09-18T21:12:16.713 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] ],
2017-09-18T21:12:16.713 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "feature_map": {
2017-09-18T21:12:16.713 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "mon": {
2017-09-18T21:12:16.713 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "group": {
2017-09-18T21:12:16.714 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "features": "0x1ffddff8eea4fffb",
2017-09-18T21:12:16.751 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "num": 1,
2017-09-18T21:12:16.751 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "release": "luminous"
2017-09-18T21:12:16.752 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] }
2017-09-18T21:12:16.752 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] }
2017-09-18T21:12:16.752 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] },
2017-09-18T21:12:16.752 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "features": {
2017-09-18T21:12:16.753 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "quorum_con": "0",
2017-09-18T21:12:16.753 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "quorum_mon": [],
2017-09-18T21:12:16.753 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "required_con": "0",
2017-09-18T21:12:16.753 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "required_mon": []
2017-09-18T21:12:16.753 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] },
2017-09-18T21:12:16.754 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "monmap": {
2017-09-18T21:12:16.754 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "created": "2017-09-18 21:12:14.248886",
2017-09-18T21:12:16.754 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "epoch": 0,
2017-09-18T21:12:16.754 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "features": {
2017-09-18T21:12:16.754 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "optional": [],
2017-09-18T21:12:16.755 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "persistent": []
2017-09-18T21:12:16.755 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] },
2017-09-18T21:12:16.757 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "fsid": "c99c8161-0fd5-4be7-a026-c5ee1c0d39bd",
2017-09-18T21:12:16.758 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "modified": "2017-09-18 21:12:14.248886",
2017-09-18T21:12:16.758 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "mons": [
2017-09-18T21:12:16.758 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] {
2017-09-18T21:12:16.758 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "addr": "[::1]:6789/0",
2017-09-18T21:12:16.758 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "name": "ovh023",
2017-09-18T21:12:16.759 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "public_addr": "[::1]:6789/0",
2017-09-18T21:12:16.759 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "rank": 0
2017-09-18T21:12:16.759 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] },
2017-09-18T21:12:16.759 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] {
2017-09-18T21:12:16.760 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "addr": "0.0.0.0:0/1",
2017-09-18T21:12:16.761 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "name": "ovh007",
2017-09-18T21:12:16.761 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "public_addr": "0.0.0.0:0/1",
2017-09-18T21:12:16.761 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "rank": 1
2017-09-18T21:12:16.761 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] }
2017-09-18T21:12:16.762 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] ]
2017-09-18T21:12:16.762 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] },
2017-09-18T21:12:16.762 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "name": "ovh023",
2017-09-18T21:12:16.763 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "outside_quorum": [
2017-09-18T21:12:16.763 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "ovh023"
2017-09-18T21:12:16.763 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] ],
2017-09-18T21:12:16.766 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "quorum": [],
2017-09-18T21:12:16.766 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "rank": 0,
2017-09-18T21:12:16.766 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "state": "probing",
2017-09-18T21:12:16.766 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] "sync_provider": []
2017-09-18T21:12:16.766 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] }
2017-09-18T21:12:16.766 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][DEBUG ] ********************************************************************************
2017-09-18T21:12:16.766 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][INFO ] monitor: mon.ovh023 is running
2017-09-18T21:12:16.767 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ovh023.asok mon_status
The difference is that the OVH nodes have IPv6 addresses, and those are what's getting used. See public_addr in the failed output above and compare it with the VPS output.
FAIL:
2017-09-18T21:07:46.008 INFO:teuthology.orchestra.run.ovh023:Running: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy new ovh023.front.sepia.ceph.com ovh007.front.sepia.ceph.com'
2017-09-18T21:07:46.690 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
2017-09-18T21:07:46.708 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] Invoked (1.5.39): ./ceph-deploy new ovh023.front.sepia.ceph.com ovh007.front.sepia.ceph.com
2017-09-18T21:07:46.708 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] ceph-deploy options:
2017-09-18T21:07:46.708 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] cluster_network : None
2017-09-18T21:07:46.708 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] fsid : None
2017-09-18T21:07:46.709 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf object at 0x7fabdc3c2780>
2017-09-18T21:07:46.709 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] public_network : None
2017-09-18T21:07:46.709 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] verbose : False
2017-09-18T21:07:46.709 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] overwrite_conf : False
2017-09-18T21:07:46.709 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] cluster : ceph
2017-09-18T21:07:46.743 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] func : <function new at 0x7fabdcaeb730>
2017-09-18T21:07:46.744 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] mon : ['ovh023.front.sepia.ceph.com', 'ovh007.front.sepia.ceph.com']
2017-09-18T21:07:46.744 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] username : None
2017-09-18T21:07:46.745 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] ssh_copykey : True
2017-09-18T21:07:46.745 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] quiet : False
2017-09-18T21:07:46.746 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] ceph_conf : None
2017-09-18T21:07:46.746 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.cli][INFO ] default_release : False
2017-09-18T21:07:46.747 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
2017-09-18T21:07:46.747 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
2017-09-18T21:07:46.809 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023.front.sepia.ceph.com][DEBUG ] connection detected need for sudo
2017-09-18T21:07:46.863 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023.front.sepia.ceph.com][DEBUG ] connected to host: ovh023.front.sepia.ceph.com
2017-09-18T21:07:46.945 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip link show
2017-09-18T21:07:46.955 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip addr show
2017-09-18T21:07:46.963 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh023.front.sepia.ceph.com][DEBUG ] IP addresses found: ['158.69.67.93']
2017-09-18T21:07:46.964 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Resolving host ovh023.front.sepia.ceph.com
2017-09-18T21:07:46.964 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Monitor ovh023 at ::1
2017-09-18T21:07:46.964 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][INFO ] Monitors are IPv6, binding Messenger traffic on IPv6
2017-09-18T21:07:46.965 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
2017-09-18T21:07:47.011 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh007.front.sepia.ceph.com][DEBUG ] connected to host: ovh023
2017-09-18T21:07:47.020 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh007.front.sepia.ceph.com][INFO ] Running command: ssh -CT -o BatchMode=yes ovh007.front.sepia.ceph.com
2017-09-18T21:07:47.551 INFO:teuthology.orchestra.run.ovh023.stderr:Warning: Permanently added 'ovh007.front.sepia.ceph.com,158.69.68.150' (ECDSA) to the list of known hosts.
2017-09-18T21:07:47.977 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh007.front.sepia.ceph.com][DEBUG ] connection detected need for sudo
2017-09-18T21:07:48.026 INFO:teuthology.orchestra.run.ovh023.stderr:Warning: Permanently added 'ovh007.front.sepia.ceph.com,158.69.68.150' (ECDSA) to the list of known hosts.
2017-09-18T21:07:48.461 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh007.front.sepia.ceph.com][DEBUG ] connected to host: ovh007.front.sepia.ceph.com
2017-09-18T21:07:48.536 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh007.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip link show
2017-09-18T21:07:48.548 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh007.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip addr show
2017-09-18T21:07:48.560 INFO:teuthology.orchestra.run.ovh023.stderr:[ovh007.front.sepia.ceph.com][DEBUG ] IP addresses found: ['158.69.68.150']
2017-09-18T21:07:48.562 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Resolving host ovh007.front.sepia.ceph.com
2017-09-18T21:07:48.562 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Monitor ovh007 at 158.69.68.150
2017-09-18T21:07:48.563 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Monitor initial members are ['ovh023', 'ovh007']
2017-09-18T21:07:48.563 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Monitor addrs are ['[::1]', '158.69.68.150']
2017-09-18T21:07:48.563 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Creating a random mon key...
2017-09-18T21:07:48.563 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
2017-09-18T21:07:48.563 INFO:teuthology.orchestra.run.ovh023.stderr:[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
PASS:
2017-09-18T21:27:35.880 INFO:teuthology.orchestra.run.vpm073:Running: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy new vpm095.front.sepia.ceph.com vpm073.front.sepia.ceph.com'
2017-09-18T21:27:36.413 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
2017-09-18T21:27:36.413 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] Invoked (1.5.39): ./ceph-deploy new vpm095.front.sepia.ceph.com vpm073.front.sepia.ceph.com
2017-09-18T21:27:36.413 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] ceph-deploy options:
2017-09-18T21:27:36.413 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] quiet : False
2017-09-18T21:27:36.413 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] ssh_copykey : True
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] verbose : False
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] cluster_network : None
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] func : <function new at 0x7f7d68c772f0>
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf object at 0x7f7d68c78a20>
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] cluster : ceph
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] default_release : False
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] ceph_conf : None
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] fsid : None
2017-09-18T21:27:36.414 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] mon : ['vpm095.front.sepia.ceph.com', 'vpm073.front.sepia.ceph.com']
2017-09-18T21:27:36.415 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] overwrite_conf : False
2017-09-18T21:27:36.415 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] public_network : None
2017-09-18T21:27:36.415 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.cli][INFO ] username : None
2017-09-18T21:27:36.415 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
2017-09-18T21:27:36.415 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
2017-09-18T21:27:36.494 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095.front.sepia.ceph.com][DEBUG ] connected to host: vpm073
2017-09-18T21:27:36.508 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095.front.sepia.ceph.com][INFO ] Running command: ssh -CT -o BatchMode=yes vpm095.front.sepia.ceph.com
2017-09-18T21:27:36.893 INFO:teuthology.orchestra.run.vpm073.stderr:Warning: Permanently added 'vpm095.front.sepia.ceph.com,172.21.2.95' (ECDSA) to the list of known hosts.
2017-09-18T21:27:37.216 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095.front.sepia.ceph.com][DEBUG ] connection detected need for sudo
2017-09-18T21:27:37.285 INFO:teuthology.orchestra.run.vpm073.stderr:Warning: Permanently added 'vpm095.front.sepia.ceph.com,172.21.2.95' (ECDSA) to the list of known hosts.
2017-09-18T21:27:37.614 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095.front.sepia.ceph.com][DEBUG ] connected to host: vpm095.front.sepia.ceph.com
2017-09-18T21:27:37.678 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip link show
2017-09-18T21:27:37.696 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip addr show
2017-09-18T21:27:37.699 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm095.front.sepia.ceph.com][DEBUG ] IP addresses found: ['172.21.2.95']
2017-09-18T21:27:37.699 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Resolving host vpm095.front.sepia.ceph.com
2017-09-18T21:27:37.706 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Monitor vpm095 at 172.21.2.95
2017-09-18T21:27:37.707 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
2017-09-18T21:27:37.761 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm073.front.sepia.ceph.com][DEBUG ] connection detected need for sudo
2017-09-18T21:27:37.837 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm073.front.sepia.ceph.com][DEBUG ] connected to host: vpm073.front.sepia.ceph.com
2017-09-18T21:27:37.887 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm073.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip link show
2017-09-18T21:27:37.894 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm073.front.sepia.ceph.com][INFO ] Running command: sudo /usr/sbin/ip addr show
2017-09-18T21:27:37.897 INFO:teuthology.orchestra.run.vpm073.stderr:[vpm073.front.sepia.ceph.com][DEBUG ] IP addresses found: ['172.21.2.73']
2017-09-18T21:27:37.898 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Resolving host vpm073.front.sepia.ceph.com
2017-09-18T21:27:37.918 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Monitor vpm073 at 172.21.2.73
2017-09-18T21:27:37.918 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Monitor initial members are ['vpm095', 'vpm073']
2017-09-18T21:27:37.918 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.21.2.95', '172.21.2.73']
2017-09-18T21:27:37.919 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Creating a random mon key...
2017-09-18T21:27:37.919 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
2017-09-18T21:27:37.919 INFO:teuthology.orchestra.run.vpm073.stderr:[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
Updated by David Galloway over 6 years ago
Is this possibly a bug in ceph-deploy? From what I'm parsing from the job output and https://github.com/ceph/ceph-deploy/blob/29224648961bdcb3a240a0e5f748a675940d9931/ceph_deploy/new.py, get_public_network_ip is basically returning the IPv6 equivalent of 127.0.0.1, which would be [::1].
The OVH nodes do have public IPv6 addresses and should be able to reach each other over those addresses, but we don't manage IPv6 DNS records at the moment. So if any name resolution is going on, that's not going to work.
We're not seeing this on the VPSes only because they don't have IPv6 addresses.
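The diagnosis above can be checked in isolation: if the resolver (for example an /etc/hosts entry mapping the node's own name to the IPv6 loopback) returns ::1 first, then simply taking the first getaddrinfo result yields '::1', matching the "Monitor ovh023 at ::1" line in the FAIL output. A rough sketch under that assumption — resolve_mon_addr is an illustrative name, not ceph-deploy's actual function:

```python
import socket

def resolve_mon_addr(hostname):
    """Loosely mimic a fallback to plain name resolution: take the
    first address getaddrinfo returns for the host. If the local
    resolver maps the hostname to the IPv6 loopback, this hands back
    '::1', which a monitor cannot usefully bind as its public_addr.
    """
    family, socktype, proto, canonname, sockaddr = socket.getaddrinfo(hostname, None)[0]
    return sockaddr[0]  # the bare address string, e.g. '::1' or '158.69.68.150'
```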
Updated by Vasu Kulkarni over 6 years ago
- Project changed from ovh to 18
Thanks, David, for the debugging. I guess we can fix this in ceph-deploy.
Updated by David Galloway over 6 years ago
- Project changed from 18 to ovh
We need an IPv6 version of this check, https://github.com/ceph/ceph-deploy/blob/61098a63235d5cbab7417d491d7744a4f7dc0e23/ceph_deploy/util/net.py#L41-L47, in the same function, to ensure [::1] isn't returned.
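For illustration, a loopback check that covers both address families might look like the following. This is only a sketch of the kind of guard described above, not the actual ceph-deploy patch, and is_loopback is a hypothetical helper name:

```python
import ipaddress

def is_loopback(addr):
    """Address-family-agnostic loopback check: catches '::1' as well
    as anything in 127.0.0.0/8, so a resolver handing back the IPv6
    loopback would be rejected just like 127.0.0.1 is today.
    """
    addr = addr.strip("[]")  # tolerate bracketed IPv6 literals like '[::1]'
    try:
        return ipaddress.ip_address(addr).is_loopback
    except ValueError:  # not an IP literal (e.g. a bare hostname)
        return False
```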