
Bug #47010

'ceph-volume raw prepare' fails to prepare because ceph-osd cannot acquire lock on device

Added by Sébastien Han over 3 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: octopus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

'ceph-volume raw prepare' fails to prepare the OSD because ceph-osd cannot acquire a lock on the underlying device.

Error example:


[2020-08-13 09:02:59,953][ceph_volume.main][INFO  ] Running command: ceph-volume --log-path /var/log/ceph/set1-data-0-db956 raw prepare --bluestore --data /dev/xvdc
[2020-08-13 09:02:59,955][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/loop1 /dev/loop1 loop
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/xvdc  /dev/xvdc  disk
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/xvda  /dev/xvda  disk
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/xvda1 /dev/xvda1 part
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/loop4 /dev/loop4 loop
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/loop2 /dev/loop2 loop
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/loop0 /dev/loop0 loop
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/xvdb  /dev/xvdb  disk
[2020-08-13 09:03:00,240][ceph_volume.process][INFO  ] stdout /dev/loop3 /dev/loop3 loop
[2020-08-13 09:03:00,245][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-08-13 09:03:00,623][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-08-13 09:03:00,623][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-08-13 09:03:00,623][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-08-13 09:03:00,624][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/xvdc
[2020-08-13 09:03:00,937][ceph_volume.process][INFO  ] stdout NAME="xvdc" KNAME="xvdc" MAJ:MIN="202:32" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="10G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-08-13 09:03:00,938][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/xvdc
[2020-08-13 09:03:01,317][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/xvdc
[2020-08-13 09:03:01,726][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-08-13 09:03:01,726][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-08-13 09:03:01,726][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-08-13 09:03:01,726][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/xvdc".
[2020-08-13 09:03:01,726][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/xvdc
[2020-08-13 09:03:02,099][ceph_volume.process][INFO  ] stderr unable to read label for /dev/xvdc: (2) No such file or directory
[2020-08-13 09:03:02,100][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/xvdc
[2020-08-13 09:03:02,457][ceph_volume.process][INFO  ] stderr unable to read label for /dev/xvdc: (2) No such file or directory
[2020-08-13 09:03:02,457][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/xvdc
[2020-08-13 09:03:02,764][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/xvdc
[2020-08-13 09:03:02,764][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/vbd-51744/block/xvdc
[2020-08-13 09:03:02,764][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-08-13 09:03:02,765][ceph_volume.process][INFO  ] stdout MAJOR=202
[2020-08-13 09:03:02,765][ceph_volume.process][INFO  ] stdout MINOR=32
[2020-08-13 09:03:02,765][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-08-13 09:03:02,765][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-08-13 09:03:02,765][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=2000653
[2020-08-13 09:03:02,765][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2020-08-13 09:03:03,219][ceph_volume.process][INFO  ] stdout AQDHATVfm9H8DBAAtTFIF3jifDidssG+ldCjGw==
[2020-08-13 09:03:03,219][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 48455871-3b6a-4312-b01e-d29253d01648
[2020-08-13 09:03:03,821][ceph_volume.process][INFO  ] stdout 0
[2020-08-13 09:03:03,822][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2020-08-13 09:03:04,109][ceph_volume.process][INFO  ] stdout AQDIATVfCLNtBhAANBawx1iEjC+IinLcHo8IlA==
[2020-08-13 09:03:04,110][ceph_volume.process][INFO  ] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[2020-08-13 09:03:04,358][ceph_volume.util.system][INFO  ] CEPH_VOLUME_SKIP_RESTORECON environ is set, will not call restorecon
[2020-08-13 09:03:04,359][ceph_volume.process][INFO  ] Running command: /usr/bin/chown -R ceph:ceph /dev/xvdc
[2020-08-13 09:03:04,640][ceph_volume.process][INFO  ] Running command: /usr/bin/ln -s /dev/xvdc /var/lib/ceph/osd/ceph-0/block
[2020-08-13 09:03:04,942][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[2020-08-13 09:03:06,009][ceph_volume.process][INFO  ] stderr got monmap epoch 1
[2020-08-13 09:03:06,054][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQDHATVfm9H8DBAAtTFIF3jifDidssG+ldCjGw==
[2020-08-13 09:03:06,433][ceph_volume.process][INFO  ] stdout creating /var/lib/ceph/osd/ceph-0/keyring
[2020-08-13 09:03:06,433][ceph_volume.process][INFO  ] stdout added entity osd.0 auth(key=AQDHATVfm9H8DBAAtTFIF3jifDidssG+ldCjGw==)
[2020-08-13 09:03:06,449][ceph_volume.process][INFO  ] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[2020-08-13 09:03:06,749][ceph_volume.process][INFO  ] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[2020-08-13 09:03:07,082][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 48455871-3b6a-4312-b01e-d29253d01648 --setuser ceph --setgroup ceph
[2020-08-13 09:03:07,748][ceph_volume.process][INFO  ] stderr 2020-08-13 09:03:07.723 7f5c3389da80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[2020-08-13 09:03:07,748][ceph_volume.process][INFO  ] stderr 2020-08-13 09:03:07.731 7f5c3389da80 -1 bdev(0x557770992700 /var/lib/ceph/osd/ceph-0//block) _lock flock failed on /var/lib/ceph/osd/ceph-0//block
[2020-08-13 09:03:07,748][ceph_volume.process][INFO  ] stderr 2020-08-13 09:03:07.731 7f5c3389da80 -1 bdev(0x557770992700 /var/lib/ceph/osd/ceph-0//block) open failed to lock /var/lib/ceph/osd/ceph-0//block: (11) Resource temporarily unavailable
[2020-08-13 09:03:07,749][ceph_volume.process][INFO  ] stderr 2020-08-13 09:03:07.731 7f5c3389da80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) mkfs failed, (11) Resource temporarily unavailable
[2020-08-13 09:03:07,749][ceph_volume.process][INFO  ] stderr 2020-08-13 09:03:07.731 7f5c3389da80 -1 OSD::mkfs: ObjectStore::mkfs failed with error (11) Resource temporarily unavailable
[2020-08-13 09:03:07,749][ceph_volume.process][INFO  ] stderr 2020-08-13 09:03:07.731 7f5c3389da80 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-0/: (11) Resource temporarily unavailable
[2020-08-13 09:03:07,749][ceph_volume.devices.raw.prepare][ERROR ] raw prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/prepare.py", line 61, in safe_prepare
    self.prepare()
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/prepare.py", line 104, in prepare
    tmpfs,
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/prepare.py", line 38, in prepare_bluestore
    db=db
  File "/usr/lib/python2.7/site-packages/ceph_volume/util/prepare.py", line 463, in osd_mkfs_bluestore
    raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 48455871-3b6a-4312-b01e-d29253d01648 --setuser ceph --setgroup ceph
[2020-08-13 09:03:07,755][ceph_volume.devices.raw.prepare][INFO  ] will rollback OSD ID creation
[2020-08-13 09:03:07,756][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2020-08-13 09:03:09,011][ceph_volume.process][INFO  ] stderr purged osd.0
[2020-08-13 09:03:09,012][ceph_volume.process][INFO  ] stderr
[2020-08-13 09:03:09,041][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 150, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/main.py", line 32, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/prepare.py", line 136, in main
    self.safe_prepare(self.args)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/prepare.py", line 61, in safe_prepare
    self.prepare()
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/prepare.py", line 104, in prepare
    tmpfs,
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/raw/prepare.py", line 38, in prepare_bluestore
    db=db
  File "/usr/lib/python2.7/site-packages/ceph_volume/util/prepare.py", line 463, in osd_mkfs_bluestore
    raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 48455871-3b6a-4312-b01e-d29253d01648 --setuser ceph --setgroup ceph
: exit status 1}
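
The failing step is the final ceph-osd --mkfs call: BlueStore takes a non-blocking exclusive flock on the device behind /var/lib/ceph/osd/ceph-0/block ("_lock flock failed") and gets back (11) Resource temporarily unavailable (EAGAIN), i.e. some other process already holds a lock on /dev/xvdc. A minimal Python sketch of that failure mode follows; it is illustrative only, not ceph-volume or BlueStore code, and the device path is simply the one from this report:

import errno
import fcntl
import os

def try_exclusive_lock(dev_path):
    # Open the block device and attempt a non-blocking exclusive flock,
    # the same kind of lock BlueStore takes on its 'block' device
    # before running mkfs.
    fd = os.open(dev_path, os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError as e:
        os.close(fd)
        if e.errno == errno.EAGAIN:
            # This is the "(11) Resource temporarily unavailable" seen
            # in the log: another process already holds the lock.
            raise RuntimeError('%s is already locked by another process' % dev_path)
        raise
    # The lock is held for as long as fd stays open.
    return fd

# Hypothetical usage: a second caller on the same device hits the
# RuntimeError while the first still holds its fd open.
# fd = try_exclusive_lock('/dev/xvdc')

Whichever process holds the lock, the effect is what the log shows: mkfs aborts with EAGAIN, prepare fails, and ceph-volume rolls back the newly allocated OSD id (osd purge-new osd.0).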

Related issues

Copied to ceph-volume - Backport #47256: octopus: 'ceph-volume raw prepare' fails to prepare because ceph-osd cannot acquire lock on device (Resolved)

History

#1 Updated by Jan Fajerski over 3 years ago

  • Status changed from New to Pending Backport
  • Backport set to octopus
  • Pull request ID set to 36700

#2 Updated by Nathan Cutler over 3 years ago

  • Copied to Backport #47256: octopus: 'ceph-volume raw prepare' fails to prepare because ceph-osd cannot acquire lock on device added

#3 Updated by Nathan Cutler over 3 years ago

Backported to nautilus by https://github.com/ceph/ceph/pull/36926

#4 Updated by Nathan Cutler over 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
