Feature #6017

ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no mons are explicitly listed

Added by Sage Weil over 7 years ago. Updated over 7 years ago.

Status:
Resolved
Priority:
Normal
Category:
-
% Done:
0%
Source:
other

Description

For mon status, use

ceph daemon mon.`hostname` mon_status

which goes through the admin socket instead of over the network, skips the normal authentication, and does not require the mons to be in quorum.
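
For illustration, a minimal sketch (not part of ceph-deploy) of consuming the JSON that `mon_status` returns to decide whether a monitor is in quorum; the sample payload is abridged from the log output later in this ticket:

```python
import json

def in_quorum(mon_status_json: str) -> bool:
    """Return True if this monitor's rank appears in the quorum list."""
    status = json.loads(mon_status_json)
    return status["rank"] in status["quorum"]

# Abridged sample payload, taken from the logs further down this ticket.
sample = '{"name": "node2", "rank": 1, "state": "peon", "quorum": [0, 1]}'
print(in_quorum(sample))  # True: rank 1 is in the quorum list
```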

History

#1 Updated by Neil Levine over 7 years ago

  • Status changed from New to 12

#2 Updated by Sage Weil over 7 years ago

how about

ceph-deploy mon create-initial

which will

1. do mon create on each mon in the mon_initial_quorum set in ceph.conf
2. wait for them to form a quorum
3. gatherkeys
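
Step 1 could be sketched roughly like this (an illustration only; assuming the initial monitor set is read from ceph.conf under a key like `mon initial members`):

```python
from configparser import ConfigParser
from io import StringIO

def initial_mons(conf_text: str) -> list:
    """Parse the initial monitor names out of a ceph.conf-style string."""
    parser = ConfigParser()
    parser.read_file(StringIO(conf_text))
    raw = parser.get("global", "mon initial members", fallback="")
    return [name.strip() for name in raw.split(",") if name.strip()]

conf = """
[global]
fsid = 72ad4e11-8bac-488b-ad03-698c91f9a173
mon initial members = node1, node2
"""
print(initial_mons(conf))  # ['node1', 'node2']
```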

making ceph-deploy do #2 explicitly will help us narrow down what step is failing.

also, we will know that we are doing mon create on the initial quorum set (some users don't match up the nodes listed for 'new' with the ones they do mon create on).

doing this all at once will also let us add checks that catch any pitfalls with hostnames not matching initial quorum member names.

for example, if someone does

ceph-deploy new host1 host2 host3
ceph-deploy mon create host1

but host1's actual hostname is 'myhost1', then the mon will not consider itself part of the initial quorum and the cluster won't come up. in the general case we don't know that the 'host1' argument to mon create needs to match the hostname, but with the combined command we know we are trying to create the initial mon set.

(perhaps a simple "warning: remote hostname X does not match argument Y" might raise a flag to help catch this type of problem in general...)
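
The suggested warning could be sketched like this (`check_hostname` is a hypothetical helper, not ceph-deploy's actual code):

```python
import warnings

def check_hostname(provided: str, remote_hostname: str) -> bool:
    """Warn when the name given on the command line differs from the
    hostname reported by the remote machine (hypothetical helper,
    sketching the warning suggested above)."""
    if provided != remote_hostname:
        warnings.warn(
            f"remote hostname {remote_hostname!r} does not match "
            f"argument {provided!r}"
        )
        return False
    return True

print(check_hostname("host1", "myhost1"))  # False, after emitting a warning
print(check_hostname("node1", "node1"))    # True
```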

#3 Updated by Neil Levine over 7 years ago

  • Assignee set to Alfredo Deza

#4 Updated by Neil Levine over 7 years ago

  • Story points set to 2.00

#5 Updated by Neil Levine over 7 years ago

  • Target version changed from v0.68 - continued to v0.69

#6 Updated by Alfredo Deza over 7 years ago

  • Status changed from 12 to In Progress

Issue #6132 has been resolved; it adds a prominent warning message when the provided hostname does not match the remote one.

#7 Updated by Alfredo Deza over 7 years ago

  • Status changed from In Progress to 12

currently blocked: I cannot correctly implement the mon_status check on remote hosts because I get output like:

2013-09-05 16:35:43.859174 7ff57588d700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2013-09-05 16:35:43.859179 7ff57588d700  0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound

issue #6242 was opened

Update: Tamil helped me out, it was a permissions issue. Getting back onto this again.

#8 Updated by Alfredo Deza over 7 years ago

  • Description updated (diff)

#9 Updated by Alfredo Deza over 7 years ago

  • Description updated (diff)

#10 Updated by Alfredo Deza over 7 years ago

Added a helper to check mon_status on the remote host; it will be used to make sure the monitor is actually running correctly, with the added benefit of being able to output the info as shown below.

Merged into ceph-deploy's master branch with hash: d5c53c2

$ ceph-deploy mon create node1
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 12.04 precise
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] remote hostname: node1
[node1][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO  ] create a done file to avoid re-doing the mon deployment
[node1][INFO  ] create the init path if it does not exist
[node1][INFO  ] Running command: initctl emit ceph-mon cluster=ceph id=node1
[node1][INFO  ] Running command: ceph daemon mon.node1 mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] election_epoch: 1
[node1][DEBUG ] name: node1
[node1][DEBUG ] outside_quorum: []
[node1][DEBUG ] rank: 0
[node1][DEBUG ] monmap: {u'epoch': 1, u'mons': [{u'name': u'node1', u'rank': 0, u'addr': u'192.168.111.100:6789/0'}], u'modified': u'0.000000', u'fsid': u'26890633-5e46-4bc0-abde-d14987fdd484', u'created': u'0.000000'}
[node1][DEBUG ] state: leader
[node1][DEBUG ] extra_probe_peers: []
[node1][DEBUG ] sync_provider: []
[node1][DEBUG ] quorum: [0]
[node1][DEBUG ] ********************************************************************************
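
The banner block above could be produced by something like the following (an illustrative sketch; ceph-deploy's actual helper differs, and this version sorts the keys):

```python
def format_mon_status(name, status):
    """Render a mon_status dict as a bordered key/value banner, similar
    to the log output above (sketch; not ceph-deploy's actual code)."""
    border = "*" * 80
    body = [f"{key}: {value}" for key, value in sorted(status.items())]
    return [border, f"status for monitor: mon.{name}", *body, border]

for row in format_mon_status("node1", {"rank": 0, "quorum": [0], "state": "leader"}):
    print(row)
```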

#11 Updated by Alfredo Deza over 7 years ago

And now we have to back out of this change because of, once again, the infamous pushy bug (#5975), which cannot be fixed.

Previously we just omitted the stdout/stderr capturing on the remote end (which triggered this behavior), but for mon_status it is imperative that we capture this information. Without capturing it there is simply no way to implement this.

Here is the output attempting to use it on CentOS:


[node2][INFO  ] Running command: ceph daemon mon.node2 mon_status
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] election_epoch: 0
[node2][DEBUG ] name: node2
[node2][DEBUG ] outside_quorum: [u'node2']
[node2][DEBUG ] rank: 0
[node2][DEBUG ] monmap: {u'epoch': 0, u'mons': [{u'name': u'node2', u'rank': 0, u'addr': u'192.168.111.101:6789/0'}, {u'name': u'node1', u'rank': 1, u'addr': u'0.0.0.0:0/1'}], u'modified': u'0.000000', u'fsid': u'17988028-fbe4-4cfe-8afe-681c12b6bbee', u'created': u'0.000000'}
[node2][DEBUG ] state: probing
[node2][DEBUG ] extra_probe_peers: [u'192.168.111.100:6789/0']
[node2][DEBUG ] sync_provider: []
[node2][DEBUG ] quorum: []
[node2][DEBUG ] ********************************************************************************

^CTraceback (most recent call last):
  File "/Users/alfredo/.virtualenvs/ceph-deploy/lib/python2.7/site-packages/pushy/protocol/baseconnection.py", line 252, in close
    self.__istream.close()
  File "/Users/alfredo/.virtualenvs/ceph-deploy/lib/python2.7/site-packages/pushy/protocol/baseconnection.py", line 88, in close
    self.__lock.acquire(2)
KeyboardInterrupt

#12 Updated by Alfredo Deza over 7 years ago

And I just made this work with execnet.

... and it doesn't hang at all

ceph-deploy mon create node2
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node2
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.3 Final
[node2][DEBUG ] determining if provided host has same hostname in remote
[node2][DEBUG ] deploying mon to node2
[node2][DEBUG ] remote hostname: node2
[node2][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO  ] create a done file to avoid re-doing the mon deployment
[node2][INFO  ] create the init path if it does not exist
[node2][INFO  ] locating `service` executable...
[node2][INFO  ] found `service` executable: /sbin/service
[node2][INFO  ] Running command: /sbin/service ceph start mon.node2
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] { "name": "node2",
[node2][DEBUG ]   "rank": 0,
[node2][DEBUG ]   "state": "probing",
[node2][DEBUG ]   "election_epoch": 0,
[node2][DEBUG ]   "quorum": [],
[node2][DEBUG ]   "outside_quorum": [
[node2][DEBUG ]         "node2"],
[node2][DEBUG ]   "extra_probe_peers": [
[node2][DEBUG ]         "192.168.111.100:6789\/0"],
[node2][DEBUG ]   "sync_provider": [],
[node2][DEBUG ]   "monmap": { "epoch": 0,
[node2][DEBUG ]       "fsid": "17988028-fbe4-4cfe-8afe-681c12b6bbee",
[node2][DEBUG ]       "modified": "0.000000",
[node2][DEBUG ]       "created": "0.000000",
[node2][DEBUG ]       "mons": [
[node2][DEBUG ]             { "rank": 0,
[node2][DEBUG ]               "name": "node2",
[node2][DEBUG ]               "addr": "192.168.111.101:6789\/0"},
[node2][DEBUG ]             { "rank": 1,
[node2][DEBUG ]               "name": "node1",
[node2][DEBUG ]               "addr": "0.0.0.0:0\/1"}]}}
[node2][DEBUG ]
[node2][DEBUG ] ********************************************************************************

#13 Updated by Alfredo Deza over 7 years ago

  • Status changed from 12 to In Progress

#14 Updated by Alfredo Deza over 7 years ago

Pull request to implement the new mon_status : https://github.com/ceph/ceph-deploy/pull/71

Was merged into ceph-deploy master branch with hash: 5a87cb8

This took a massive amount of effort because it was just not possible with `pushy`: we had to swap the library for `execnet` while attempting to keep the behavior transparent.

#15 Updated by Alfredo Deza over 7 years ago

This is how it looks for monitors that never reach quorum; it retries five times per monitor with incremental wait times:

ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 12.04 precise
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] remote hostname: node1
[node1][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][INFO  ] creating path: /var/lib/ceph/mon/ceph-node1
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][INFO  ] create the monitor keyring file
[node1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][INFO  ] ceph-mon: mon.noname-a 192.168.111.100:6789/0 is local, renaming to mon.node1
[node1][INFO  ] ceph-mon: set fsid to 72ad4e11-8bac-488b-ad03-698c91f9a173
[node1][INFO  ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][INFO  ] create a done file to avoid re-doing the mon deployment
[node1][INFO  ] create the init path if it does not exist
[node1][INFO  ] Running command: sudo initctl emit ceph-mon cluster=ceph id=node1
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] {
[node1][DEBUG ]   "election_epoch": 0,
[node1][DEBUG ]   "extra_probe_peers": [
[node1][DEBUG ]     "192.168.111.101:6789/0" 
[node1][DEBUG ]   ],
[node1][DEBUG ]   "monmap": {
[node1][DEBUG ]     "created": "0.000000",
[node1][DEBUG ]     "epoch": 0,
[node1][DEBUG ]     "fsid": "72ad4e11-8bac-488b-ad03-698c91f9a173",
[node1][DEBUG ]     "modified": "0.000000",
[node1][DEBUG ]     "mons": [
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "192.168.111.100:6789/0",
[node1][DEBUG ]         "name": "node1",
[node1][DEBUG ]         "rank": 0
[node1][DEBUG ]       },
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "0.0.0.0:0/1",
[node1][DEBUG ]         "name": "node2",
[node1][DEBUG ]         "rank": 1
[node1][DEBUG ]       }
[node1][DEBUG ]     ]
[node1][DEBUG ]   },
[node1][DEBUG ]   "name": "node1",
[node1][DEBUG ]   "outside_quorum": [
[node1][DEBUG ]     "node1" 
[node1][DEBUG ]   ],
[node1][DEBUG ]   "quorum": [],
[node1][DEBUG ]   "rank": 0,
[node1][DEBUG ]   "state": "probing",
[node1][DEBUG ]   "sync_provider": []
[node1][DEBUG ] }
[node1][DEBUG ] ********************************************************************************
[node1][INFO  ] monitor: mon.node1 is running
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.3 Final
[node2][DEBUG ] determining if provided host has same hostname in remote
[node2][DEBUG ] deploying mon to node2
[node2][DEBUG ] remote hostname: node2
[node2][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][INFO  ] creating path: /var/lib/ceph/mon/ceph-node2
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done
[node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][INFO  ] create the monitor keyring file
[node2][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node2 --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][INFO  ] ceph-mon: mon.noname-b 192.168.111.101:6789/0 is local, renaming to mon.node2
[node2][INFO  ] ceph-mon: set fsid to 72ad4e11-8bac-488b-ad03-698c91f9a173
[node2][INFO  ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node2 for mon.node2
[node2][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][INFO  ] create a done file to avoid re-doing the mon deployment
[node2][INFO  ] create the init path if it does not exist
[node2][INFO  ] locating `service` executable...
[node2][INFO  ] found `service` executable: /sbin/service
[node2][INFO  ] Running command: sudo /sbin/service ceph start mon.node2
[node2][DEBUG ] === mon.node2 ===
[node2][DEBUG ] Starting Ceph mon.node2 on node2...
[node2][DEBUG ] Starting ceph-create-keys on node2...
[node2][WARNIN] No data was received after 7 seconds, disconnecting...
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] {
[node2][DEBUG ]   "election_epoch": 0,
[node2][DEBUG ]   "extra_probe_peers": [
[node2][DEBUG ]     "192.168.111.100:6789/0" 
[node2][DEBUG ]   ],
[node2][DEBUG ]   "monmap": {
[node2][DEBUG ]     "created": "0.000000",
[node2][DEBUG ]     "epoch": 0,
[node2][DEBUG ]     "fsid": "72ad4e11-8bac-488b-ad03-698c91f9a173",
[node2][DEBUG ]     "modified": "0.000000",
[node2][DEBUG ]     "mons": [
[node2][DEBUG ]       {
[node2][DEBUG ]         "addr": "192.168.111.101:6789/0",
[node2][DEBUG ]         "name": "node2",
[node2][DEBUG ]         "rank": 0
[node2][DEBUG ]       },
[node2][DEBUG ]       {
[node2][DEBUG ]         "addr": "0.0.0.0:0/1",
[node2][DEBUG ]         "name": "node1",
[node2][DEBUG ]         "rank": 1
[node2][DEBUG ]       }
[node2][DEBUG ]     ]
[node2][DEBUG ]   },
[node2][DEBUG ]   "name": "node2",
[node2][DEBUG ]   "outside_quorum": [
[node2][DEBUG ]     "node2" 
[node2][DEBUG ]   ],
[node2][DEBUG ]   "quorum": [],
[node2][DEBUG ]   "rank": 0,
[node2][DEBUG ]   "state": "probing",
[node2][DEBUG ]   "sync_provider": []
[node2][DEBUG ] }
[node2][DEBUG ] ********************************************************************************
[node2][INFO  ] monitor: mon.node2 is running
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[ceph_deploy.mon][WARNIN] mon.node2 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[ceph_deploy.mon][WARNIN] mon.node2 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[ceph_deploy.mon][WARNIN] mon.node2 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[ceph_deploy.mon][WARNIN] mon.node2 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[ceph_deploy.mon][WARNIN] mon.node2 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] node1
[ceph_deploy.mon][ERROR ] node2
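
The retry loop behaves roughly like this sketch (illustrative only; the wait schedule of 5, 10, 10, 15, 20 seconds is read off the log above, and `get_status` is a hypothetical callable returning a mon_status-like dict):

```python
import time

def wait_for_quorum(get_status, tries=5, waits=(5, 10, 10, 15, 20), sleep=time.sleep):
    """Poll mon_status until the monitor reports its own rank in quorum,
    retrying with the incremental waits seen in the log above (a sketch,
    not ceph-deploy's actual implementation)."""
    for attempt in range(tries):
        status = get_status()
        if status["rank"] in status["quorum"]:
            return True
        print(f"monitor is not yet in quorum, tries left: {tries - attempt}")
        print(f"waiting {waits[attempt]} seconds before retrying")
        sleep(waits[attempt])
    return False

# Demo: second poll reports quorum; sleep is stubbed out.
statuses = iter([{"rank": 0, "quorum": []}, {"rank": 0, "quorum": [0, 1]}])
print(wait_for_quorum(lambda: next(statuses), sleep=lambda s: None))  # True
```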

#16 Updated by Alfredo Deza over 7 years ago

And a successful run for two nodes looks like this:

$ ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 12.04 precise
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] remote hostname: node1
[node1][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][INFO  ] creating path: /var/lib/ceph/mon/ceph-node1
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][INFO  ] create the monitor keyring file
[node1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][INFO  ] ceph-mon: mon.noname-a 192.168.111.100:6789/0 is local, renaming to mon.node1
[node1][INFO  ] ceph-mon: set fsid to 72ad4e11-8bac-488b-ad03-698c91f9a173
[node1][INFO  ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][INFO  ] create a done file to avoid re-doing the mon deployment
[node1][INFO  ] create the init path if it does not exist
[node1][INFO  ] Running command: sudo initctl emit ceph-mon cluster=ceph id=node1
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] {
[node1][DEBUG ]   "election_epoch": 0,
[node1][DEBUG ]   "extra_probe_peers": [
[node1][DEBUG ]     "192.168.111.101:6789/0" 
[node1][DEBUG ]   ],
[node1][DEBUG ]   "monmap": {
[node1][DEBUG ]     "created": "0.000000",
[node1][DEBUG ]     "epoch": 0,
[node1][DEBUG ]     "fsid": "72ad4e11-8bac-488b-ad03-698c91f9a173",
[node1][DEBUG ]     "modified": "0.000000",
[node1][DEBUG ]     "mons": [
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "192.168.111.100:6789/0",
[node1][DEBUG ]         "name": "node1",
[node1][DEBUG ]         "rank": 0
[node1][DEBUG ]       },
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "0.0.0.0:0/1",
[node1][DEBUG ]         "name": "node2",
[node1][DEBUG ]         "rank": 1
[node1][DEBUG ]       }
[node1][DEBUG ]     ]
[node1][DEBUG ]   },
[node1][DEBUG ]   "name": "node1",
[node1][DEBUG ]   "outside_quorum": [
[node1][DEBUG ]     "node1" 
[node1][DEBUG ]   ],
[node1][DEBUG ]   "quorum": [],
[node1][DEBUG ]   "rank": 0,
[node1][DEBUG ]   "state": "probing",
[node1][DEBUG ]   "sync_provider": []
[node1][DEBUG ] }
[node1][DEBUG ] ********************************************************************************
[node1][INFO  ] monitor: mon.node1 is running
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.3 Final
[node2][DEBUG ] determining if provided host has same hostname in remote
[node2][DEBUG ] deploying mon to node2
[node2][DEBUG ] remote hostname: node2
[node2][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][INFO  ] creating path: /var/lib/ceph/mon/ceph-node2
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done
[node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][INFO  ] create the monitor keyring file
[node2][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node2 --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][INFO  ] ceph-mon: mon.noname-b 192.168.111.101:6789/0 is local, renaming to mon.node2
[node2][INFO  ] ceph-mon: set fsid to 72ad4e11-8bac-488b-ad03-698c91f9a173
[node2][INFO  ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node2 for mon.node2
[node2][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][INFO  ] create a done file to avoid re-doing the mon deployment
[node2][INFO  ] create the init path if it does not exist
[node2][INFO  ] locating `service` executable...
[node2][INFO  ] found `service` executable: /sbin/service
[node2][INFO  ] Running command: sudo /sbin/service ceph start mon.node2
[node2][DEBUG ] === mon.node2 ===
[node2][DEBUG ] Starting Ceph mon.node2 on node2...
[node2][DEBUG ] Starting ceph-create-keys on node2...
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] {
[node2][DEBUG ]   "election_epoch": 4,
[node2][DEBUG ]   "extra_probe_peers": [
[node2][DEBUG ]     "192.168.111.100:6789/0" 
[node2][DEBUG ]   ],
[node2][DEBUG ]   "monmap": {
[node2][DEBUG ]     "created": "0.000000",
[node2][DEBUG ]     "epoch": 1,
[node2][DEBUG ]     "fsid": "72ad4e11-8bac-488b-ad03-698c91f9a173",
[node2][DEBUG ]     "modified": "0.000000",
[node2][DEBUG ]     "mons": [
[node2][DEBUG ]       {
[node2][DEBUG ]         "addr": "192.168.111.100:6789/0",
[node2][DEBUG ]         "name": "node1",
[node2][DEBUG ]         "rank": 0
[node2][DEBUG ]       },
[node2][DEBUG ]       {
[node2][DEBUG ]         "addr": "192.168.111.101:6789/0",
[node2][DEBUG ]         "name": "node2",
[node2][DEBUG ]         "rank": 1
[node2][DEBUG ]       }
[node2][DEBUG ]     ]
[node2][DEBUG ]   },
[node2][DEBUG ]   "name": "node2",
[node2][DEBUG ]   "outside_quorum": [],
[node2][DEBUG ]   "quorum": [
[node2][DEBUG ]     0,
[node2][DEBUG ]     1
[node2][DEBUG ]   ],
[node2][DEBUG ]   "rank": 1,
[node2][DEBUG ]   "state": "peon",
[node2][DEBUG ]   "sync_provider": []
[node2][DEBUG ] }
[node2][DEBUG ] ********************************************************************************
[node2][INFO  ] monitor: mon.node2 is running
[node1][INFO  ] Running command: sudo ceph daemon mon.node1 mon_status
[ceph_deploy.mon][INFO  ] mon.node1 monitor has reached quorum!
[node2][INFO  ] Running command: sudo ceph daemon mon.node2 mon_status
[ceph_deploy.mon][INFO  ] mon.node2 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.client.admin.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-mds.keyring

Note how gatherkeys is run only when there is quorum.
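
That gating — run gatherkeys only once every initial monitor reports itself in quorum — can be sketched as (hypothetical helper; ceph-deploy's actual implementation differs):

```python
def all_in_quorum(statuses):
    """True when every monitor's mon_status shows its own rank in the
    quorum list. `statuses` maps monitor name -> mon_status-like dict."""
    return all(s["rank"] in s["quorum"] for s in statuses.values())

statuses = {
    "node1": {"rank": 0, "quorum": [0, 1]},
    "node2": {"rank": 1, "quorum": [0, 1]},
}
if all_in_quorum(statuses):
    print("all initial monitors are running and have formed quorum")
    # ... only now proceed to gatherkeys
```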

#17 Updated by Alfredo Deza over 7 years ago

  • Status changed from In Progress to Fix Under Review

#18 Updated by Alfredo Deza over 7 years ago

  • Status changed from Fix Under Review to Resolved

Merged into ceph-deploy master branch with hash: e66d751

\o/
