Bug #5489
ceph-deploy: mon destroy throws inappropriate message
Description
I tried to build a 3-node cluster using ceph-deploy on CentOS, but forgot to disable the iptables service on one of the nodes, so that node was stuck at ceph-create-keys.
Soon after I created the monitors (using the "mon create" command), I tried to destroy the monitor running on the node where iptables was still enabled, and saw an inappropriate error.
[ubuntu@burnupi05 ceph-deploy]$ ./ceph-deploy mon destroy burnupi21
unparseable JSON monremoveburnupi21
Traceback (most recent call last):
  File "./ceph-deploy", line 8, in <module>
    load_entry_point('ceph-deploy==0.1', 'console_scripts', 'ceph-deploy')()
  File "/home/ubuntu/ceph-dep/ceph-deploy/ceph_deploy/cli.py", line 112, in main
    return args.func(args)
  File "/home/ubuntu/ceph-dep/ceph-deploy/ceph_deploy/mon.py", line 238, in mon
    mon_destroy(args)
  File "/home/ubuntu/ceph-dep/ceph-deploy/ceph_deploy/mon.py", line 222, in mon_destroy
    cluster=args.cluster,
  File "/home/ubuntu/ceph-dep/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/proxy.py", line 255, in <lambda>
    (conn.operator(type_, self, args, kwargs))
  File "/home/ubuntu/ceph-dep/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/connection.py", line 66, in operator
    return self.send_request(type_, (object, args, kwargs))
  File "/home/ubuntu/ceph-dep/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/baseconnection.py", line 323, in send_request
    return self.__handle(m)
  File "/home/ubuntu/ceph-dep/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/baseconnection.py", line 639, in __handle
    raise e
pushy.protocol.proxy.ExceptionProxy: Command '['sudo', 'ceph', '--cluster=ceph', '-n', 'mon.', '-k', '/var/lib/ceph/mon/ceph-burnupi21/keyring', 'mon', 'remove', 'burnupi21']' returned non-zero exit status 22
We may have to handle this scenario more gracefully.
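The graceful handling suggested here could look something like the following sketch: wrap the remote command execution so that a non-zero exit status is reported with the command, its output, and a troubleshooting hint, instead of surfacing a raw pushy/subprocess traceback. The helper name and message wording are illustrative, not ceph-deploy's actual API.

```python
import subprocess

def run_or_explain(cmd, hint):
    """Run a command; on failure raise a RuntimeError with a clear,
    actionable message instead of an opaque
    "returned non-zero exit status N" traceback."""
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as err:
        output = err.output.decode().strip() or 'no output'
        raise RuntimeError(
            'command %r failed with exit status %d: %s (%s)'
            % (' '.join(cmd), err.returncode, output, hint))
```

For the `mon destroy` case, the hint could point at the likely cause seen in this report, e.g. `run_or_explain(['sudo', 'ceph', ..., 'mon', 'remove', 'burnupi21'], 'is the monitor reachable? check that iptables allows the monitor port')`.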
History
#1 Updated by Tamilarasi muthamizhan over 10 years ago
test setup: burnupi05, burnupi21, burnupi63
burnupi21 is the one stuck at ceph-create-keys.
#2 Updated by Ian Colle over 10 years ago
- Priority changed from Normal to High
#3 Updated by Alfredo Deza over 10 years ago
- Status changed from New to In Progress
- Assignee set to Alfredo Deza
#4 Updated by Alfredo Deza over 10 years ago
- Status changed from In Progress to Need More Info
Tamil, is the problem the lack of a timeout with graceful handling, or is it just the error message that would need to change?
#5 Updated by Tamilarasi muthamizhan over 10 years ago
I think the answer is both :)
#6 Updated by Alfredo Deza over 10 years ago
- Status changed from Need More Info to Resolved
With the addition of more logging and better output from the remote node this should no longer be an issue.
Tamil, if you are still experiencing issues, could you please re-open with the output of what you are seeing?