Bug #53524: CEPHADM_APPLY_SPEC_FAIL is very verbose

Added by Sebastian Wagner over 2 years ago. Updated over 1 year ago.

Status: New
Priority: Normal
Target version: -
% Done: 0%
Tags: low-hanging-fruit
Regression: No
Severity: 3 - minor

Description

root@service-01-08020:~# ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.hybrid; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.hybrid
    osd.hybrid: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr                              [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr                              [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr                              [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr                              [--report] [--yes]
/usr/bin/docker: stderr                              [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr                              [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr                              [--no-systemd]
/usr/bin/docker: stderr                              [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr                              [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr                              [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr                              [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr                              [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr                              [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr                              [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr                              [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr                              [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr                              [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr                              [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8331, in <module>
    main()
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8319, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1735, in _infer_config
    return func(ctx)
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1676, in _infer_fsid
    return func(ctx)
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1763, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1663, in _validate_fsid
    return func(ctx)
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 5285, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd())
  File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1465, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
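
The only actionable piece of the output above is a single ceph-volume line:

    ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda

Everything else (the full docker invocation, the argparse usage banner, and the cephadm traceback) is noise at the ceph health detail level and belongs in the cephadm log instead. As an aside, wiping the partition table on the listed device (for example with sgdisk --zap-all /dev/sda, or ceph orch device zap <host> /dev/sda --force; exact flags vary by release) clears the underlying failure, but the verbosity problem remains for any failing spec.

A minimal sketch of one possible fix, in Python. It assumes the health detail text is currently built from cephadm's raw stderr; the helper name summarize_stderr and the truncation policy are illustrative only, not the actual cephadm/mgr code:

    def summarize_stderr(stderr: str, max_lines: int = 2) -> str:
        """Reduce a cephadm stderr dump to its last meaningful error lines."""
        # Hypothetical helper, not the real cephadm code.
        lines = [ln for ln in stderr.splitlines() if ln.strip()]
        # Prefer explicit error lines over usage banners and traceback frames.
        errors = [ln for ln in lines
                  if 'error:' in ln or ln.startswith('RuntimeError')]
        keep = errors[-max_lines:] if errors else lines[-max_lines:]
        if len(keep) < len(lines):
            keep.append('(output truncated; see the cephadm log for full details)')
        return '\n'.join(keep)

Applied to the dump above, this would reduce the CEPHADM_APPLY_SPEC_FAIL detail to the "GPT headers found" line plus the final RuntimeError line, with a pointer to the full log for the rest.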

#1

Updated by Redouane Kachach Elhichou almost 2 years ago

  • Tags set to low-hanging-fruit

#2

Updated by Redouane Kachach Elhichou almost 2 years ago

  • Assignee set to Guillaume Abrioux

#3

Updated by Redouane Kachach Elhichou almost 2 years ago

  • Project changed from Orchestrator to ceph-volume

#4

Updated by Laura Flores over 1 year ago

  • Tag list set to low-hanging-fruit
