Bug #43981

ceph-volume fails when rerunning lvm create on already existing OSDs

Added by Guillaume Abrioux 4 months ago. Updated 3 months ago.

Status: Resolved
Priority: Normal
Target version: -
% Done: 0%
Backport: nautilus, mimic
Regression: Yes
Severity: 3 - minor
Pull request ID: 33086

Description

Typical failure:

[root@osd0 ~]# ceph-volume lvm create --bluestore --data test_group/data-lv1
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b8250f96-0995-404f-b578-47891f0993f5
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /sbin/restorecon /var/lib/ceph/osd/ceph-0
Running command: /bin/chown -h ceph:ceph /dev/test_group/data-lv1
Running command: /bin/chown -R ceph:ceph /dev/dm-0
Running command: /bin/ln -s /dev/test_group/data-lv1 /var/lib/ceph/osd/ceph-0/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
stderr: got monmap epoch 1
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQC/xjpeQDoYNRAAWt5ZtaM4piNHF8EAVJlNmQ==
stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth(key=AQC/xjpeQDoYNRAAWt5ZtaM4piNHF8EAVJlNmQ==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b8250f96-0995-404f-b578-47891f0993f5 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: test_group/data-lv1
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/test_group/data-lv1 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /bin/ln -snf /dev/test_group/data-lv1 /var/lib/ceph/osd/ceph-0/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /bin/chown -R ceph:ceph /dev/dm-0
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /bin/systemctl enable ceph-volume@lvm-0-b8250f96-0995-404f-b578-47891f0993f5
stderr: Created symlink → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@0
Running command: /bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: test_group/data-lv1
[root@osd0 ~]#
[root@osd0 ~]# ceph-volume lvm create --bluestore --data test_group/data-lv1
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8ceecb53-5d5d-470f-90b9-15dd8dbf35ad
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5
Running command: /sbin/restorecon /var/lib/ceph/osd/ceph-5
Running command: /bin/chown -h ceph:ceph /dev/test_group/data-lv1
Running command: /bin/chown -R ceph:ceph /dev/dm-0
Running command: /bin/ln -s /dev/test_group/data-lv1 /var/lib/ceph/osd/ceph-5/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap
stderr: got monmap epoch 1
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-5/keyring --create-keyring --name osd.5 --add-key AQDsxjpeMG9CFRAApd6odOQ49SLwXCtFSdmHKw==
stdout: creating /var/lib/ceph/osd/ceph-5/keyring
added entity osd.5 auth(key=AQDsxjpeMG9CFRAApd6odOQ49SLwXCtFSdmHKw==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid 8ceecb53-5d5d-470f-90b9-15dd8dbf35ad --setuser ceph --setgroup ceph
stderr: 2020-02-05T13:45:29.361+0000 7f61d6479ec0 -1 bluestore(/var/lib/ceph/osd/ceph-5/) _open_fsid (2) No such file or directory
stderr: 2020-02-05T13:45:29.361+0000 7f61d6479ec0 -1 bluestore(/var/lib/ceph/osd/ceph-5/) mkfs fsck found fatal error: (2) No such file or directory
stderr: 2020-02-05T13:45:29.361+0000 7f61d6479ec0 -1 OSD::mkfs: ObjectStore::mkfs failed with error (2) No such file or directory
stderr: 2020-02-05T13:45:29.361+0000 7f61d6479ec0 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-5/: (2) No such file or directory
--> Was unable to complete a new OSD, will rollback changes
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.5 --yes-i-really-mean-it
stderr: purged osd.5
--> RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid 8ceecb53-5d5d-470f-90b9-15dd8dbf35ad --setuser ceph --setgroup ceph
[root@osd0 ~]#
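For context: the second run allocates a fresh OSD id (5) and fsid for an LV that already contains a BlueStore OSD from the first run, and `ceph-osd --mkfs` then aborts with `_open_fsid (2) No such file or directory`, triggering the rollback. One way to guard against this, in the spirit of the eventual fix (the actual change is in PR 33086; the sketch below is illustrative, not the patch itself), is to consult `ceph-volume lvm list --format json` and refuse to prepare an LV that is already listed. The helper name `lv_already_osd` and the exact JSON field names are assumptions for illustration:

```python
import json

def lv_already_osd(lvm_list_json: str, lv: str) -> bool:
    """Return True if `lv` ("vg/lv") already appears in the output of
    `ceph-volume lvm list --format json`.

    The JSON layout sketched here (OSD id -> list of device dicts with
    vg_name/lv_name keys) is an illustration; verify against the schema
    of the ceph-volume release in use.
    """
    listing = json.loads(lvm_list_json)
    for osd_id, devices in listing.items():
        for dev in devices:
            vg = dev.get("vg_name")
            name = dev.get("lv_name")
            if vg and name and f"{vg}/{name}" == lv:
                return True
    return False

# Illustrative output resembling the first (successful) run above:
sample = json.dumps({
    "0": [{"vg_name": "test_group", "lv_name": "data-lv1", "type": "block"}]
})
print(lv_already_osd(sample, "test_group/data-lv1"))  # True: refuse to re-create
```

If re-creating the OSD on the same LV is actually intended, the usual route is to wipe it first with `ceph-volume lvm zap test_group/data-lv1` before running `lvm create` again.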

Related issues

Copied to ceph-volume - Backport #44047: nautilus: ceph-volume fails when rerunning lvm create on already existing OSDs Resolved
Copied to ceph-volume - Backport #44048: mimic: ceph-volume fails when rerunning lvm create on already existing OSDs Resolved

History

#1 Updated by Jan Fajerski 4 months ago

  • Status changed from New to In Progress
  • Assignee set to Guillaume Abrioux

#2 Updated by Jan Fajerski 4 months ago

  • Status changed from In Progress to Pending Backport
  • Backport set to nautilus, mimic

#3 Updated by Nathan Cutler 4 months ago

  • Copied to Backport #44047: nautilus: ceph-volume fails when rerunning lvm create on already existing OSDs added

#4 Updated by Nathan Cutler 4 months ago

  • Copied to Backport #44048: mimic: ceph-volume fails when rerunning lvm create on already existing OSDs added

#5 Updated by Jan Fajerski 3 months ago

  • Pull request ID set to 33086

#7 Updated by Nathan Cutler 3 months ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
