Bug #55602
Adding OSDs does not work as documented
Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
root@reesi001:/# ceph -v
ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)

root@reesi001:/# ceph orch daemon add osd ivan01:data_devices=/dev/sdc,/dev/sdd,db_devices=/dev/journals/sdc,/dev/journals/sdd,osds_per_device=1
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1701, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 171, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 433, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 107, in <lambda>
    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)  # noqa: E731
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 96, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 800, in _daemon_add_osd
    raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 228, in raise_if_exception
    raise e
RuntimeError: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:12006e978235e769c604367d9c0b3662ccac864ba30d0d4c7ca6f28331ad729a -e NODE_NAME=ivan01 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a:/var/run/ceph:z -v /var/log/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a:/var/log/ceph:z -v /var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp3wjx70xz:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6p6wjbjq:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:12006e978235e769c604367d9c0b3662ccac864ba30d0d4c7ca6f28331ad729a lvm batch --no-auto data_devices=/dev/sdc /dev/sdd db_devices=/dev/journals/sdc /dev/journals/sdd osds_per_device=1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr                              [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr                              [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr                              [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr                              [--report] [--yes]
/usr/bin/docker: stderr                              [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr                              [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr                              [--no-systemd]
/usr/bin/docker: stderr                              [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr                              [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr                              [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr                              [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr                              [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr                              [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr                              [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr                              [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr                              [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr                              [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr                              [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: argument DEVICES: invalid <ceph_volume.util.arg_validators.ValidBatchDevice object at 0x7f46d987d320> value: 'data_devices=/dev/sdc'
Traceback (most recent call last):
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 8634, in <module>
    main()
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 8622, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 1886, in _infer_config
    return func(ctx)
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 1827, in _infer_fsid
    return func(ctx)
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 1914, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 1814, in _validate_fsid
    return func(ctx)
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 5562, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd())
  File "/var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/cephadm.ea61f01e22cb1466a53bc70973048e0ee2b182669fd1a1f6635fd7298a83a6cb", line 1616, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:12006e978235e769c604367d9c0b3662ccac864ba30d0d4c7ca6f28331ad729a -e NODE_NAME=ivan01 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a:/var/run/ceph:z -v /var/log/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a:/var/log/ceph:z -v /var/lib/ceph/28f7427e-5558-4ffd-ae1a-51ec3042759a/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp3wjx70xz:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6p6wjbjq:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:12006e978235e769c604367d9c0b3662ccac864ba30d0d4c7ca6f28331ad729a lvm batch --no-auto data_devices=/dev/sdc /dev/sdd db_devices=/dev/journals/sdc /dev/journals/sdd osds_per_device=1 --yes --no-systemd
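Note: in the log above, the mgr passes the key=value tokens (data_devices=/dev/sdc etc.) straight through to ceph-volume as positional DEVICES arguments, which is what ceph-volume rejects. On 17.2.0 the inline form that is accepted is the plain device-path syntax (a hedged example; it cannot express the db_devices mapping from the command above):

```
ceph orch daemon add osd ivan01:/dev/sdc
```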
Updated by Guillaume Abrioux almost 2 years ago
The changes that allow this were merged after 17.2.0 was released, which means they will be included in the next release, 17.2.1.
I guess the documentation is ahead of the released code.
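For anyone hitting this on 17.2.0, a possible workaround (a sketch assuming cephadm's documented OSD service-spec format; the service_id is illustrative, not from this report) is to express the same layout as a spec file and apply it with `ceph orch apply osd -i osd_spec.yml`:

```yaml
# Hypothetical spec file reproducing the failing command's intent.
service_type: osd
service_id: ivan01_osds   # illustrative name
placement:
  hosts:
    - ivan01
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
  db_devices:
    paths:
      - /dev/journals/sdc
      - /dev/journals/sdd
  osds_per_device: 1
```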