Bug #45394

cephadm: fail to create/preview OSDs via drive group

Added by Kiefer Chang almost 4 years ago. Updated almost 4 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
cephadm
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
Yes
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Creating OSDs with the following drive group config fails:

# cat /tmp/dg2.yaml    
service_type: osd
service_id: dashboard-admin-1586509411014
host_pattern: '*'
data_devices:
  rotational: true

# bin/ceph orch apply osd -i /tmp/dg2.yaml

*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-05-06T06:54:13.828+0000 7f7ef2635700 -1 WARNING: all dangerous and experimental features are enabled.
2020-05-06T06:54:13.852+0000 7f7ef2635700 -1 WARNING: all dangerous and experimental features are enabled.
No pending deployments.

Error in the mgr log:


2020-05-06T06:47:35.474+0000 7fc35fc0f700 10 log_client  logged 2020-05-06T06:47:35.042119+0000 mgr.x (mgr.4688) 45 : cephadm [ERR] Failed to apply osd.dashboard-admin-1586509411014 spec DriveGroupSpec(name=dashboard-admin-1586509411014->encrypted=True, placement=PlacementSpec(host_pattern='*'), service_id='dashboard-admin-1586509411014', service_type='osd', data_devices=DeviceSelection(rotational=True, all=False), db_devices=DeviceSelection(size='10G', rotational=False, all=False), wal_devices=DeviceSelection(size='15G', rotational=False, all=False), osd_id_claims={}, unmanaged=False): cephadm exited with an error code: 1, stderr:INFO:cephadm:/bin/podman:stderr usage: ceph-volume [-h] [--cluster CLUSTER] [--log-level LOG_LEVEL]
INFO:cephadm:/bin/podman:stderr                    [--log-path LOG_PATH]
INFO:cephadm:/bin/podman:stderr ceph-volume: error: unrecognized arguments: CEPH_VOLUME_OSDSPEC_AFFINITY=dashboard-admin-1586509411014
Traceback (most recent call last):
  File "<stdin>", line 4633, in <module>
  File "<stdin>", line 1062, in _infer_fsid
  File "<stdin>", line 1139, in _infer_image
  File "<stdin>", line 2893, in command_ceph_volume
  File "<stdin>", line 839, in call_throws
RuntimeError: Failed command: /bin/podman run --rm --net=host --ipc=host --privileged --group-add=disk -e CONTAINER_IMAGE=quay.io/ceph-ci/ceph:master -e NODE_NAME=mgr0 -v /var/log/ceph/949cf12d-5708-4c89-b94f-16c081c69bcd:/var/log/ceph:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmp1k1mrzhk:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpzyo710w0:/var/lib/ceph/bootstrap-osd/ceph.keyring:z --entrypoint /usr/sbin/ceph-volume quay.io/ceph-ci/ceph:master CEPH_VOLUME_OSDSPEC_AFFINITY=dashboard-admin-1586509411014 lvm batch --no-auto /dev/sdc /dev/sdd --wal-devices /dev/sdb --dmcrypt --yes --no-systemd
Traceback (most recent call last):
  File "/ceph/src/pybind/mgr/cephadm/module.py", line 2524, in _apply_all_services
    if self._apply_service(spec):
  File "/ceph/src/pybind/mgr/cephadm/module.py", line 2469, in _apply_service
    return False if create_func(spec) else True # type: ignore
  File "/ceph/src/pybind/mgr/cephadm/module.py", line 560, in wrapper
    return AsyncCompletion(value=f(*args, **kwargs), name=f.__name__)
  File "/ceph/src/pybind/mgr/cephadm/module.py", line 2104, in create_osds
    ret_msg = self._create_osd(host, cmd,
  File "/ceph/src/pybind/mgr/cephadm/module.py", line 2209, in _create_osd
    raise RuntimeError(
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:/bin/podman:stderr usage: ceph-volume [-h] [--cluster CLUSTER] [--log-level LOG_LEVEL]
INFO:cephadm:/bin/podman:stderr                    [--log-path LOG_PATH]
INFO:cephadm:/bin/podman:stderr ceph-volume: error: unrecognized arguments: CEPH_VOLUME_OSDSPEC_AFFINITY=dashboard-admin-1586509411014

Related to this PR? https://github.com/ceph/ceph/pull/34856

Previewing OSDs fails too, with the same error in the mgr log.
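The `unrecognized arguments` error follows from where the token lands: because `CEPH_VOLUME_OSDSPEC_AFFINITY=...` appears after the image name in the `podman run` command, it is delivered to ceph-volume as an ordinary command-line argument instead of an environment variable. A minimal sketch with a hypothetical parser (illustrative only, not ceph-volume's actual code) reproduces the same failure mode:

```python
import argparse

# Hypothetical stand-in for ceph-volume's top-level parser. The env-style
# KEY=VALUE token arrives as a plain argument, which the parser does not
# recognize -- the same complaint as in the log above.
parser = argparse.ArgumentParser(prog="ceph-volume")
parser.add_argument("--cluster")
parser.add_argument("--log-level")
parser.add_argument("subcommand")

ns, unrecognized = parser.parse_known_args(
    ["lvm", "CEPH_VOLUME_OSDSPEC_AFFINITY=dashboard-admin-1586509411014"]
)
print(unrecognized)  # the stray token ends up in the unrecognized list
```

With `parse_args` instead of `parse_known_args`, argparse would print `error: unrecognized arguments: CEPH_VOLUME_OSDSPEC_AFFINITY=...` and exit, matching the stderr in the log.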

#1

Updated by Kiefer Chang almost 4 years ago

  • Subject changed from "cephadm: fail to create OSDs via drive group" to "cephadm: fail to create/preview OSDs via drive group"
  • Description updated (diff)
#2

Updated by Kiefer Chang almost 4 years ago

  • Description updated (diff)
#3

Updated by Joshua Schmid almost 4 years ago

I could imagine that you're seeing this because the container images are not fully up to date yet. They're probably missing this commit: https://github.com/ceph/ceph/pull/34436

Edit: actually, this is due to the way we pass the environment variable from cephadm to podman.

Fix is on the way.
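For context, the ordering matters because everything after the image name in `podman run` becomes part of the container's command line; an environment variable has to be injected with `-e` before the image. A simplified sketch of assembling the command (hypothetical helper, not cephadm's actual code):

```python
def build_ceph_volume_cmd(image, env, cv_args):
    """Sketch of building the podman command line for ceph-volume.

    podman treats everything after `image` as arguments to the container
    entrypoint, so environment variables must go through -e *before* the
    image name -- appending KEY=VALUE after it hands the token to
    ceph-volume, which rejects it as an unrecognized argument.
    """
    cmd = ["/bin/podman", "run", "--rm", "--entrypoint", "/usr/sbin/ceph-volume"]
    for key, value in env.items():
        cmd += ["-e", f"{key}={value}"]  # env flags precede the image
    cmd.append(image)
    cmd += cv_args  # only real ceph-volume arguments follow the image
    return cmd

cmd = build_ceph_volume_cmd(
    "quay.io/ceph-ci/ceph:master",
    {"CEPH_VOLUME_OSDSPEC_AFFINITY": "dashboard-admin-1586509411014"},
    ["lvm", "batch", "--no-auto", "/dev/sdc", "/dev/sdd"],
)
print(cmd)
```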

#4

Updated by Joshua Schmid almost 4 years ago

  • Status changed from New to In Progress
  • Assignee set to Joshua Schmid
#5

Updated by Joshua Schmid almost 4 years ago

  • Pull request ID set to 34944
#6

Updated by Sebastian Wagner almost 4 years ago

  • Status changed from In Progress to Pending Backport
#7

Updated by Sebastian Wagner almost 4 years ago

  • Status changed from Pending Backport to Resolved
  • Target version set to v15.2.4