sysadmin@ceph-test:~$ sudo setenforce 0
sysadmin@ceph-test:~$ sudo sh -c 'CEPH_VOLUME_DEBUG=true ceph-volume --cluster test lvm batch --bluestore /dev/vdb'

Total OSDs: 1

  Type            Path                      LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/vdb                  63.00 GB        100.0%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) yes
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-1cc81d7c-a153-462a-8080-ec3d217c7180 /dev/vdb
 stdout: Physical volume "/dev/vdb" successfully created.
 stdout: Volume group "ceph-1cc81d7c-a153-462a-8080-ec3d217c7180" successfully created
Running command: /usr/sbin/lvcreate --yes -l 63 -n osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e ceph-1cc81d7c-a153-462a-8080-ec3d217c7180
 stdout: Wiping ceph_bluestore signature on /dev/ceph-1cc81d7c-a153-462a-8080-ec3d217c7180/osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e.
 stdout: Logical volume "osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster test --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/test.keyring -i - osd new e3ebb6e0-82c8-4088-a6bd-abd729a575bb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/test-0
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/test-0
Running command: /bin/chown -h ceph:ceph /dev/ceph-1cc81d7c-a153-462a-8080-ec3d217c7180/osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e
Running command: /bin/chown -R ceph:ceph /dev/dm-1
Running command: /bin/ln -s /dev/ceph-1cc81d7c-a153-462a-8080-ec3d217c7180/osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e /var/lib/ceph/osd/test-0/block
Running command: /bin/ceph --cluster test --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/test.keyring mon getmap -o /var/lib/ceph/osd/test-0/activate.monmap
 stderr: got monmap epoch 1
Running command: /bin/ceph-authtool /var/lib/ceph/osd/test-0/keyring --create-keyring --name osd.0 --add-key AQAcgzBeTlc5BxAApXJgwyoRAHtrL9kk1tbs9w==
 stdout: creating /var/lib/ceph/osd/test-0/keyring
 stdout: added entity osd.0 auth(key=AQAcgzBeTlc5BxAApXJgwyoRAHtrL9kk1tbs9w==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/test-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/test-0/
Running command: /bin/ceph-osd --cluster test --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/test-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/test-0/ --osd-uuid e3ebb6e0-82c8-4088-a6bd-abd729a575bb --setuser ceph --setgroup ceph
 stderr: 2020-01-28 18:53:20.438 7f17de7b3c00 -1 bluestore(/var/lib/ceph/osd/test-0/) _read_fsid unparsable uuid
 stderr: terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::bad_get>'
 stderr: what(): boost::bad_get: failed value get using boost::get
 stderr: *** Caught signal (Aborted) **
 stderr: in thread 7f17de7b3c00 thread_name:ceph-osd
 stderr: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
 stderr: 1: (()+0x13520) [0x7f17dee75520]
 stderr: 2: (gsignal()+0x141) [0x7f17de93b081]
 stderr: 3: (abort()+0x121) [0x7f17de926535]
 stderr: 4: (()+0x9a643) [0x7f17decba643]
 stderr: 5: (()+0xa5fd6) [0x7f17decc5fd6]
 stderr: 6: (()+0xa6041) [0x7f17decc6041]
 stderr: 7: (()+0xa6295) [0x7f17decc6295]
 stderr: 8: (()+0x49a92c) [0x56027edc792c]
 stderr: 9: (Option::size_t const md_config_t::get_val<Option::size_t const>(ConfigValues const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x51) [0x56027eedeea1]
 stderr: 10: (BlueStore::_set_cache_sizes()+0x174) [0x56027f3fba44]
 stderr: 11: (BlueStore::_open_bdev(bool)+0x1c5) [0x56027f3fe845]
 stderr: 12: (BlueStore::mkfs()+0x6e0) [0x56027f484620]
 stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1b3) [0x56027eef9b23]
 stderr: 14: (main()+0x1821) [0x56027eea68d1]
 stderr: 15: (__libc_start_main()+0xeb) [0x7f17de927bbb]
 stderr: 16: (_start()+0x2a) [0x56027eed903a]
 stderr: 2020-01-28 18:53:20.486 7f17de7b3c00 -1 *** Caught signal (Aborted) **
 stderr: in thread 7f17de7b3c00 thread_name:ceph-osd
 stderr: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
 stderr: 1: (()+0x13520) [0x7f17dee75520]
 stderr: 2: (gsignal()+0x141) [0x7f17de93b081]
 stderr: 3: (abort()+0x121) [0x7f17de926535]
 stderr: 4: (()+0x9a643) [0x7f17decba643]
 stderr: 5: (()+0xa5fd6) [0x7f17decc5fd6]
 stderr: 6: (()+0xa6041) [0x7f17decc6041]
 stderr: 7: (()+0xa6295) [0x7f17decc6295]
 stderr: 8: (()+0x49a92c) [0x56027edc792c]
 stderr: 9: (Option::size_t const md_config_t::get_val<Option::size_t const>(ConfigValues const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x51) [0x56027eedeea1]
 stderr: 10: (BlueStore::_set_cache_sizes()+0x174) [0x56027f3fba44]
 stderr: 11: (BlueStore::_open_bdev(bool)+0x1c5) [0x56027f3fe845]
 stderr: 12: (BlueStore::mkfs()+0x6e0) [0x56027f484620]
 stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1b3) [0x56027eef9b23]
 stderr: 14: (main()+0x1821) [0x56027eea68d1]
 stderr: 15: (__libc_start_main()+0xeb) [0x7f17de927bbb]
 stderr: 16: (_start()+0x2a) [0x56027eed903a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: -5> 2020-01-28 18:53:20.438 7f17de7b3c00 -1 bluestore(/var/lib/ceph/osd/test-0/) _read_fsid unparsable uuid
 stderr: 0> 2020-01-28 18:53:20.486 7f17de7b3c00 -1 *** Caught signal (Aborted) **
 stderr: in thread 7f17de7b3c00 thread_name:ceph-osd
 stderr: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
 stderr: 1: (()+0x13520) [0x7f17dee75520]
 stderr: 2: (gsignal()+0x141) [0x7f17de93b081]
 stderr: 3: (abort()+0x121) [0x7f17de926535]
 stderr: 4: (()+0x9a643) [0x7f17decba643]
 stderr: 5: (()+0xa5fd6) [0x7f17decc5fd6]
 stderr: 6: (()+0xa6041) [0x7f17decc6041]
 stderr: 7: (()+0xa6295) [0x7f17decc6295]
 stderr: 8: (()+0x49a92c) [0x56027edc792c]
 stderr: 9: (Option::size_t const md_config_t::get_val<Option::size_t const>(ConfigValues const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x51) [0x56027eedeea1]
 stderr: 10: (BlueStore::_set_cache_sizes()+0x174) [0x56027f3fba44]
 stderr: 11: (BlueStore::_open_bdev(bool)+0x1c5) [0x56027f3fe845]
 stderr: 12: (BlueStore::mkfs()+0x6e0) [0x56027f484620]
 stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1b3) [0x56027eef9b23]
 stderr: 14: (main()+0x1821) [0x56027eea68d1]
 stderr: 15: (__libc_start_main()+0xeb) [0x7f17de927bbb]
 stderr: 16: (_start()+0x2a) [0x56027eed903a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--> Was unable to complete a new OSD, will rollback changes
Running command: /bin/ceph --cluster test --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/test.keyring osd purge-new osd.0 --yes-i-really-mean-it
 stderr: purged osd.0
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 38, in __init__
    self.main(self.argv)
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 149, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 40, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
    self.execute()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
    self.strategy.execute()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute
    Create(command).main()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 69, in main
    self.create(args)
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 26, in create
    prepare_step.safe_prepare(args)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare
    self.prepare()
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare
    osd_fsid,
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
    db=db
  File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore
    raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster test --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/test-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/test-0/ --osd-uuid e3ebb6e0-82c8-4088-a6bd-abd729a575bb --setuser ceph --setgroup ceph
sysadmin@ceph-test:~$ sudo setenforce 1
sysadmin@ceph-test:~$
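One thing worth noting from the transcript: the rollback only ran `osd purge-new osd.0`, and no `lvremove`/`vgremove` appears in the log, so the volume group created on /dev/vdb at the start is presumably still present. A minimal cleanup sketch before retrying (the VG name is taken from the vgcreate line above; whether this is needed depends on what the rollback actually left behind, so inspect first):

```shell
# Inspect what LVM state survived the failed run
sudo pvs
sudo vgs
sudo lvs

# If the ceph-* volume group from the log is still listed, remove it.
# Alternatively, "ceph-volume lvm zap --destroy /dev/vdb" performs a
# similar cleanup through ceph-volume itself.
sudo vgremove --force ceph-1cc81d7c-a153-462a-8080-ec3d217c7180
sudo pvremove /dev/vdb
```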