Bug #49467
cephadm bootstrap process fails in open_ports on octopus branch: TypeError: __init__() got an unexpected keyword argument 'verbose_on_failure'
Status: Closed
Description
There is no problem up to v15.2.8, but an error occurs during the bootstrap process in open_ports on the latest octopus branch.
[root@master1 vagrant]# ./cephadm bootstrap --mon-ip=192.168.70.100 ...
Enabling firewalld port 9283/tcp in current zone...
Traceback (most recent call last):
  File "./cephadm", line 6153, in <module>
    r = args.func()
  File "./cephadm", line 1410, in _default_image
    return func()
  File "./cephadm", line 3093, in command_bootstrap
    config=config, keyring=mgr_keyring, ports=[9283])
  File "./cephadm", line 2176, in deploy_daemon
    fw.open_ports(ports)
  File "./cephadm", line 2384, in open_ports
    out, err, ret = call([self.cmd, '--permanent', '--query-port', tcp_port], verbose_on_failure=False)
  File "./cephadm", line 964, in call
    **kwargs
TypeError: __init__() got an unexpected keyword argument 'verbose_on_failure'
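The shape of the failure can be seen in a minimal sketch (simplified for illustration; names and structure are not the actual cephadm source): a `call()` helper that forwards extra keyword arguments straight into `subprocess.Popen` will raise exactly this `TypeError` when handed a kwarg that `Popen.__init__` does not recognise.

```python
import subprocess

def call(command, **kwargs):
    # Simplified stand-in for cephadm's call() helper: any extra
    # keyword arguments are forwarded unchanged to subprocess.Popen.
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        **kwargs,  # 'verbose_on_failure' ends up here
    )
    out, err = process.communicate()
    return out, err, process.returncode

# Popen.__init__ has no 'verbose_on_failure' parameter, so passing it
# raises the same TypeError seen in the bootstrap traceback:
try:
    call(['true'], verbose_on_failure=False)
except TypeError as exc:
    print(exc)
```

This matches the traceback: the exception is raised from `Popen.__init__` at the `**kwargs` forwarding site inside `call()`, before the firewalld command is ever executed.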
Updated by Sebastian Wagner about 3 years ago
- Subject changed from cephadm bootstrap process fails in open_ports on octopus branch to cephadm bootstrap process fails in open_ports on octopus branch: TypeError: __init__() got an unexpected keyword argument 'verbose_on_failure'
- Description updated (diff)
Updated by Sebastian Wagner about 3 years ago
- Status changed from New to Fix Under Review
Updated by Nathan Cutler about 3 years ago
- Status changed from Fix Under Review to Resolved
- Target version set to v15.2.10
Updated by Robert Toole about 3 years ago
Is there a workaround that can be used for this issue until 15.2.10 is released?
My situation is that I installed cephadm from the 15.2.7 repository because of this issue and successfully bootstrapped a cluster; however, it pulled the 15.2.9 docker image. The cluster was initially healthy, and I did not notice the version mismatch between the system's cephadm and the cephadm inside the docker container. I had to reboot a node for hardware repairs, and now my cluster, while running, is spamming the logs with the following error:
[ERR] cephadm exited with an error code: 1, stderr: Reconfig daemon node-exporter.nas-backup ...
firewalld ready
Traceback (most recent call last):
  File "<stdin>", line 6153, in <module>
  File "<stdin>", line 1412, in _default_image
  File "<stdin>", line 3435, in command_deploy
  File "<stdin>", line 2173, in deploy_daemon
  File "<stdin>", line 2406, in update_firewalld
  File "<stdin>", line 2386, in open_ports
  File "<stdin>", line 966, in call
TypeError: __init__() got an unexpected keyword argument 'verbose_on_failure'
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1021, in _remote_connection
    yield (conn, connr)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1168, in _run_cephadm
    code, '\n'.join(err)))
orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr: Reconfig daemon node-exporter.nas-backup ...
firewalld ready
Traceback (most recent call last):
  File "<stdin>", line 6153, in <module>
  File "<stdin>", line 1412, in _default_image
  File "<stdin>", line 3435, in command_deploy
  File "<stdin>", line 2173, in deploy_daemon
  File "<stdin>", line 2406, in update_firewalld
  File "<stdin>", line 2386, in open_ports
  File "<stdin>", line 966, in call
TypeError: __init__() got an unexpected keyword argument 'verbose_on_failure'
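The resolution targeted at v15.2.10 makes `call()` accept the flag itself so it never reaches `subprocess.Popen`. A hedged sketch of that shape (simplified names and behavior; not the actual cephadm source):

```python
import subprocess

def call(command, verbose_on_failure=True, **kwargs):
    # 'verbose_on_failure' is now a named parameter consumed by
    # call() itself, so only Popen-compatible kwargs are forwarded.
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        **kwargs,
    )
    out, err = process.communicate()
    if process.returncode != 0 and verbose_on_failure:
        print('command failed with exit code %d: %r'
              % (process.returncode, command))
    return out.decode(), err.decode(), process.returncode

# The kwarg is now valid, so the firewalld port-query path no longer
# raises TypeError:
out, err, rc = call(['echo', 'ok'], verbose_on_failure=False)
```

For the version-mismatch situation described above, the general principle is that the on-host cephadm script must understand the same arguments the orchestrator module passes to it, which is why mixing a 15.2.7 script with a 15.2.9 container image triggers this error.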