Bug #51163

ceph-volume lvm batch to create bluestore osds

Added by Zhike Li almost 3 years ago. Updated almost 3 years ago.

Status: Need More Info
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: ceph-ansible
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] **********************************************************
Tuesday 08 June 2021  13:45:31 +0800 (0:00:00.237)       0:07:55.990 ********** 
changed: [ceph-node4]
changed: [ceph-node3]
changed: [ceph-node2]
fatal: [ceph-node1]: FAILED! => changed=true 
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
  - /dev/sdh
  - /dev/sdj
  - /dev/sdk
  - /dev/sdl
  - /dev/sdm
  - /dev/nvme0n1
  - /dev/nvme1n1
  - --db-devices
  - /dev/nvme2n1
  - --wal-devices
  - /dev/nvme3n1
  delta: '0:00:46.020625'
  end: '2021-06-08 13:46:18.011201'
  msg: non-zero return code
  rc: 1
  start: '2021-06-08 13:45:31.990576'
  stderr: |-
    --> passed data devices: 14 physical, 0 LVM
    --> relative data size: 1.0
    --> passed block_db devices: 1 physical, 0 LVM
    --> passed block_wal devices: 1 physical, 0 LVM
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6d20d8c8-fdc2-41e0-98d7-b2777f5002f5
    Running command: /sbin/vgcreate --force --yes ceph-9be9e493-5f79-4920-86c4-9b47cc2fe2d0 /dev/nvme1n1
     stdout: Physical volume "/dev/nvme1n1" successfully created.
     stdout: Volume group "ceph-9be9e493-5f79-4920-86c4-9b47cc2fe2d0" successfully created
    Running command: /sbin/lvcreate --yes -l 763089 -n osd-block-6d20d8c8-fdc2-41e0-98d7-b2777f5002f5 ceph-9be9e493-5f79-4920-86c4-9b47cc2fe2d0
     stdout: Logical volume "osd-block-6d20d8c8-fdc2-41e0-98d7-b2777f5002f5" created.
    Running command: /sbin/lvcreate --yes -l 54506 -n osd-wal-d02bd532-3e7d-44d3-89ef-cb721c19bde2 ceph-7634c4cc-08e9-4293-9be7-88591cedad07
     stdout: Logical volume "osd-wal-d02bd532-3e7d-44d3-89ef-cb721c19bde2" created.
    Running command: /sbin/lvcreate --yes -l 54506 -n osd-db-eb170f6c-7e4c-4991-8e85-fbc30abac8d5 ceph-f5ecf715-07b9-4d07-8700-d60187ca1938
     stdout: Logical volume "osd-db-eb170f6c-7e4c-4991-8e85-fbc30abac8d5" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-48
    Running command: /sbin/restorecon /var/lib/ceph/osd/ceph-48
    Running command: /bin/chown -h ceph:ceph /dev/ceph-9be9e493-5f79-4920-86c4-9b47cc2fe2d0/osd-block-6d20d8c8-fdc2-41e0-98d7-b2777f5002f5
    Running command: /bin/chown -R ceph:ceph /dev/dm-18
    Running command: /bin/ln -s /dev/ceph-9be9e493-5f79-4920-86c4-9b47cc2fe2d0/osd-block-6d20d8c8-fdc2-41e0-98d7-b2777f5002f5 /var/lib/ceph/osd/ceph-48/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-48/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-48/keyring --create-keyring --name osd.48 --add-key AQAfBL9gxa3tLxAAg8Q3Xh/G3Gj7Bnwk9C0/hg==
     stdout: creating /var/lib/ceph/osd/ceph-48/keyring
     stdout: added entity osd.48 auth(key=AQAfBL9gxa3tLxAAg8Q3Xh/G3Gj7Bnwk9C0/hg==)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-48/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-48/
    Running command: /bin/chown -h ceph:ceph /dev/ceph-7634c4cc-08e9-4293-9be7-88591cedad07/osd-wal-d02bd532-3e7d-44d3-89ef-cb721c19bde2
    Running command: /bin/chown -R ceph:ceph /dev/dm-19
    Running command: /bin/chown -h ceph:ceph /dev/ceph-f5ecf715-07b9-4d07-8700-d60187ca1938/osd-db-eb170f6c-7e4c-4991-8e85-fbc30abac8d5
    Running command: /bin/chown -R ceph:ceph /dev/dm-20
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 48 --monmap /var/lib/ceph/osd/ceph-48/activate.monmap --keyfile - --bluestore-block-wal-path /dev/ceph-7634c4cc-08e9-4293-9be7-88591cedad07/osd-wal-d02bd532-3e7d-44d3-89ef-cb721c19bde2 --bluestore-block-db-path /dev/ceph-f5ecf715-07b9-4d07-8700-d60187ca1938/osd-db-eb170f6c-7e4c-4991-8e85-fbc30abac8d5 --osd-data /var/lib/ceph/osd/ceph-48/ --osd-uuid 6d20d8c8-fdc2-41e0-98d7-b2777f5002f5 --setuser ceph --setgroup ceph
     stderr: 2021-06-08 13:46:16.552 ffff9cce1550 -1 bluestore(/var/lib/ceph/osd/ceph-48/) _read_fsid unparsable uuid
     stderr: /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff9cce1550 time 2021-06-08 13:46:16.622989
     stderr: /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaade532c3c]
     stderr: 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 5: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 8: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 10: (main()+0x15a8) [0xaaaade537b70]
     stderr: 11: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 12: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: 2021-06-08 13:46:16.612 ffff9cce1550 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff9cce1550 time 2021-06-08 13:46:16.622989
     stderr: /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaade532c3c]
     stderr: 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 5: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 8: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 10: (main()+0x15a8) [0xaaaade537b70]
     stderr: 11: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 12: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: *** Caught signal (Aborted) **
     stderr: in thread ffff9cce1550 thread_name:ceph-osd
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: [0xffff9de4066c]
     stderr: 2: (gsignal()+0x4c) [0xffff9cfd50e8]
     stderr: 3: (abort()+0x11c) [0xffff9cfd6760]
     stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaade532c90]
     stderr: 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 8: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 11: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 13: (main()+0x15a8) [0xaaaade537b70]
     stderr: 14: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 15: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: 2021-06-08 13:46:16.612 ffff9cce1550 -1 *** Caught signal (Aborted) **
     stderr: in thread ffff9cce1550 thread_name:ceph-osd
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: [0xffff9de4066c]
     stderr: 2: (gsignal()+0x4c) [0xffff9cfd50e8]
     stderr: 3: (abort()+0x11c) [0xffff9cfd6760]
     stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaade532c90]
     stderr: 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 8: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 11: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 13: (main()+0x15a8) [0xaaaade537b70]
     stderr: 14: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 15: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
     stderr: -73> 2021-06-08 13:46:16.552 ffff9cce1550 -1 bluestore(/var/lib/ceph/osd/ceph-48/) _read_fsid unparsable uuid
     stderr: -1> 2021-06-08 13:46:16.612 ffff9cce1550 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff9cce1550 time 2021-06-08 13:46:16.622989
     stderr: /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaade532c3c]
     stderr: 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 5: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 8: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 10: (main()+0x15a8) [0xaaaade537b70]
     stderr: 11: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 12: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: 0> 2021-06-08 13:46:16.612 ffff9cce1550 -1 *** Caught signal (Aborted) **
     stderr: in thread ffff9cce1550 thread_name:ceph-osd
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: [0xffff9de4066c]
     stderr: 2: (gsignal()+0x4c) [0xffff9cfd50e8]
     stderr: 3: (abort()+0x11c) [0xffff9cfd6760]
     stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaade532c90]
     stderr: 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 8: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 11: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 13: (main()+0x15a8) [0xaaaade537b70]
     stderr: 14: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 15: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
     stderr: -73> 2021-06-08 13:46:16.552 ffff9cce1550 -1 bluestore(/var/lib/ceph/osd/ceph-48/) _read_fsid unparsable uuid
     stderr: -1> 2021-06-08 13:46:16.612 ffff9cce1550 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff9cce1550 time 2021-06-08 13:46:16.622989
     stderr: /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaade532c3c]
     stderr: 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 5: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 8: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 10: (main()+0x15a8) [0xaaaade537b70]
     stderr: 11: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 12: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: 0> 2021-06-08 13:46:16.612 ffff9cce1550 -1 *** Caught signal (Aborted) **
     stderr: in thread ffff9cce1550 thread_name:ceph-osd
     stderr: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
     stderr: 1: [0xffff9de4066c]
     stderr: 2: (gsignal()+0x4c) [0xffff9cfd50e8]
     stderr: 3: (abort()+0x11c) [0xffff9cfd6760]
     stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaade532c90]
     stderr: 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaade532e08]
     stderr: 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaadeaba734]
     stderr: 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaadeac9964]
     stderr: 8: (BlueFS::mount()+0x180) [0xaaaadead64a0]
     stderr: 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaade9c8e08]
     stderr: 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaade9ca0cc]
     stderr: 11: (BlueStore::mkfs()+0x4d4) [0xaaaadea260c4]
     stderr: 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaade58b0ac]
     stderr: 13: (main()+0x15a8) [0xaaaade537b70]
     stderr: 14: (__libc_start_main()+0xf0) [0xffff9cfc15d4]
     stderr: 15: (()+0x4cab9c) [0xaaaade56ab9c]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
    --> Was unable to complete a new OSD, will rollback changes
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.48 --yes-i-really-mean-it
     stderr: purged osd.48
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 9, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 39, in __init__
        self.main(self.argv)
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 151, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 415, in main
        self._execute(plan)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 434, in _execute
        c.create(argparse.Namespace(**args))
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
        prepare_step.safe_prepare(args)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
        self.prepare()
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/prepare.py", line 394, in prepare
        osd_fsid,
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
        db=db
      File "/usr/lib/python2.7/site-packages/ceph_volume/util/prepare.py", line 480, in osd_mkfs_bluestore
        raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
    RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 48 --monmap /var/lib/ceph/osd/ceph-48/activate.monmap --keyfile - --bluestore-block-wal-path /dev/ceph-7634c4cc-08e9-4293-9be7-88591cedad07/osd-wal-d02bd532-3e7d-44d3-89ef-cb721c19bde2 --bluestore-block-db-path /dev/ceph-f5ecf715-07b9-4d07-8700-d60187ca1938/osd-db-eb170f6c-7e4c-4991-8e85-fbc30abac8d5 --osd-data /var/lib/ceph/osd/ceph-48/ --osd-uuid 6d20d8c8-fdc2-41e0-98d7-b2777f5002f5 --setuser ceph --setgroup ceph
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
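
For reference, the argument list above collapses to the following single shell command, which may help when reproducing the failure manually on ceph-node1 (a sketch assembled from the captured cmd list):

ceph-volume --cluster ceph lvm batch --bluestore --yes \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh \
  /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/nvme0n1 /dev/nvme1n1 \
  --db-devices /dev/nvme2n1 --wal-devices /dev/nvme3n1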

Files

ceph-osd.48.log (54.7 KB) - Zhike Li, 06/16/2021 01:55 PM
Actions #1

Updated by Igor Fedotov almost 3 years ago

  • Status changed from New to Need More Info

There should be a relevant OSD log under /var/log/ceph... Could you please share it?

Actions #2

Updated by Zhike Li almost 3 years ago

Thank you very much for your attention to the problem I encountered. Since the log file is too large (more than 1000 KB), I have sent it to your email. I hope we can make progress together!

Actions #3

Updated by Sebastian Wagner almost 3 years ago

  • Description updated (diff)
Actions #4

Updated by Igor Fedotov almost 3 years ago

Zhike Li wrote:

Thank you very much for your attention to the problem I encountered. Since the log file is too large (more than 1000 KB), I have sent it to your email. I hope we can make progress together!

The email still doesn't have the desired log - I meant the OSD log for osd.48, sorry for the unclear request.

Actions #5

Updated by Zhike Li almost 3 years ago

Thank you for your attention! This is my configuration of group_vars/osds.yml:

devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
  - /dev/sdh
  - /dev/sdj
  - /dev/sdk
  - /dev/sdl
  - /dev/sdm
  - /dev/nvme0n1
  - /dev/nvme1n1

#devices: []

# Declare devices to be used as block.db devices
dedicated_devices:
  - /dev/nvme2n1

#dedicated_devices: []

bluestore_wal_devices:
  - /dev/nvme3n1

#bluestore_wal_devices:

Attached is the log from creating an SSD-type OSD with LVM; it is similar to the logs from the other failed OSD deployments.

Actions #6

Updated by Igor Fedotov almost 3 years ago

Zhike Li wrote:

Thank you for your attention! This is my configuration of group_vars/osds.yml:

devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
  - /dev/sdh
  - /dev/sdj
  - /dev/sdk
  - /dev/sdl
  - /dev/sdm
  - /dev/nvme0n1
  - /dev/nvme1n1

#devices: []

# Declare devices to be used as block.db devices
dedicated_devices:
  - /dev/nvme2n1

#dedicated_devices: []

bluestore_wal_devices:
  - /dev/nvme3n1

#bluestore_wal_devices:

Attached is the log from creating an SSD-type OSD with LVM; it is similar to the logs from the other failed OSD deployments.

If the OSD creation procedure leaves the OSD in a state that still allows running ceph-bluestore-tool against it, could you please run the following command and share the resulting OSD log (named fsck.log):

CEPH_ARGS="--debug-bluefs 20 --debug-bdev 20 --log-file fsck.log" ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-48 --command fsck

Additionally, you might want to check the dmesg output for relevant read errors - it looks like reads are failing for an unknown reason...
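
For example, a quick scan of the kernel log for block-layer read errors might look like this (a generic sketch; the grep patterns are illustrative and should be adjusted to your devices):

dmesg -T | grep -iE 'nvme|i/o error|blk_update_request'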

Actions #7

Updated by Neha Ojha almost 3 years ago

  • Project changed from Ceph to bluestore

Please feel free to move it to ceph-volume if it isn't bluestore-related.
