 stderr: 12: (__libc_start_main()+0xf3) [0x7fb69739a813]
 stderr: 13: (_start()+0x2e) [0x560adc532d0e]
 stderr: 2019-04-18 17:10:17.092 7fb69bb41080 -1 *** Caught signal (Aborted) **
 stderr: in thread 7fb69bb41080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7fb6986d3d80]
 stderr: 2: (gsignal()+0x10f) [0x7fb6973ae93f]
 stderr: 3: (abort()+0x127) [0x7fb697398c95]
 stderr: 4: (()+0x65c458) [0x560adc534458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x560adcb32487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x560adc9dc393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x560adc9fde66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x560adca31297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x560adca40fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x560adc5547ce]
 stderr: 11: (main()+0x1bd1) [0x560adc44caa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7fb69739a813]
 stderr: 13: (_start()+0x2e) [0x560adc532d0e]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: -485> 2019-04-18 17:10:16.571 7fb69bb41080 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
 stderr: 0> 2019-04-18 17:10:17.092 7fb69bb41080 -1 *** Caught signal (Aborted) **
 stderr: in thread 7fb69bb41080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7fb6986d3d80]
 stderr: 2: (gsignal()+0x10f) [0x7fb6973ae93f]
 stderr: 3: (abort()+0x127) [0x7fb697398c95]
 stderr: 4: (()+0x65c458) [0x560adc534458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x560adcb32487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x560adc9dc393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x560adc9fde66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x560adca31297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x560adca40fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x560adc5547ce]
 stderr: 11: (main()+0x1bd1) [0x560adc44caa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7fb69739a813]
 stderr: 13: (_start()+0x2e) [0x560adc532d0e]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--> Was unable to complete a new OSD, will rollback changes
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.2 --yes-i-really-mean-it
 stderr: purged osd.2
  stdout_lines: <omitted>
fatal: [e24-h07-740xd.alias.bos.scalelab.redhat.com]: FAILED!
=> changed=true
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --osds-per-device
  - '2'
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
  - /dev/nvme4n1
  delta: '0:00:04.804547'
  end: '2019-04-18 17:10:17.781972'
  msg: non-zero return code
  rc: 1
  start: '2019-04-18 17:10:12.977425'
  stderr: |-
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 38, in __init__
        self.main(self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
        self.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
        self.strategy.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute
        Create(command).main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 69, in main
        self.create(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
        prepare_step.safe_prepare(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare
        self.prepare()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare
        osd_fsid,
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
        db=db
      File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore
        raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
    RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid ad7bd3f1-3f52-4d80-93b2-80e62b811338 --setuser ceph --setgroup ceph
  stdout: |-
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-e2aa3f76-70bc-4df5-96e7-4789fdd731db /dev/nvme0n1
     stdout: Physical volume "/dev/nvme0n1" successfully created.
 stdout: Volume group "ceph-e2aa3f76-70bc-4df5-96e7-4789fdd731db" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-be465e22-5edb-459f-9bc5-a4334c938fe7 /dev/nvme1n1
 stdout: Physical volume "/dev/nvme1n1" successfully created.
 stdout: Volume group "ceph-be465e22-5edb-459f-9bc5-a4334c938fe7" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-87c57b05-6352-4e7b-b95f-dc71aee8b626 /dev/nvme2n1
 stdout: Physical volume "/dev/nvme2n1" successfully created.
 stdout: Volume group "ceph-87c57b05-6352-4e7b-b95f-dc71aee8b626" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-79d2c9f1-305c-4a23-8a17-bb8aa0b9eb92 /dev/nvme3n1
 stdout: Physical volume "/dev/nvme3n1" successfully created.
 stdout: Volume group "ceph-79d2c9f1-305c-4a23-8a17-bb8aa0b9eb92" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-0cf78ba5-2b0e-4e19-b331-459d7f62139d /dev/nvme4n1
 stdout: Physical volume "/dev/nvme4n1" successfully created.
 stdout: Volume group "ceph-0cf78ba5-2b0e-4e19-b331-459d7f62139d" successfully created
Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-c3c23617-f56b-4094-8a4c-4f76e0a8dd9b ceph-e2aa3f76-70bc-4df5-96e7-4789fdd731db
 stdout: Logical volume "osd-data-c3c23617-f56b-4094-8a4c-4f76e0a8dd9b" created.
Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-2abb1c9d-e24f-4101-b1c1-0ac8f7d05319 ceph-e2aa3f76-70bc-4df5-96e7-4789fdd731db
 stdout: Logical volume "osd-data-2abb1c9d-e24f-4101-b1c1-0ac8f7d05319" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ad7bd3f1-3f52-4d80-93b2-80e62b811338
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-4
Running command: /bin/chown -h ceph:ceph /dev/ceph-e2aa3f76-70bc-4df5-96e7-4789fdd731db/osd-data-c3c23617-f56b-4094-8a4c-4f76e0a8dd9b
Running command: /bin/chown -R ceph:ceph /dev/dm-3
Running command: /bin/ln -s /dev/ceph-e2aa3f76-70bc-4df5-96e7-4789fdd731db/osd-data-c3c23617-f56b-4094-8a4c-4f76e0a8dd9b /var/lib/ceph/osd/ceph-4/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
 stderr: got monmap epoch 1
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-4/keyring --create-keyring --name osd.4 --add-key AQB3r7hclSWHGhAA3JP5+/Egyjot37Gf549QEQ==
 stdout: creating /var/lib/ceph/osd/ceph-4/keyring
added entity osd.4 auth(key=AQB3r7hclSWHGhAA3JP5+/Egyjot37Gf549QEQ==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid ad7bd3f1-3f52-4d80-93b2-80e62b811338 --setuser ceph --setgroup ceph
 stdout: /usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = long unsigned int; _Alloc = mempool::pool_allocator<(mempool::pool_index_t)1, long unsigned int>; std::vector<_Tp, _Alloc>::reference = long unsigned int&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
 stderr: 2019-04-18 17:10:16.669 7fcb2a66a080 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
 stderr: *** Caught signal (Aborted) **
 stderr: in thread 7fcb2a66a080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7fcb271fbd80]
 stderr: 2: (gsignal()+0x10f) [0x7fcb25ed693f]
 stderr: 3: (abort()+0x127) [0x7fcb25ec0c95]
 stderr: 4: (()+0x65c458) [0x556bcfba7458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x556bd01a5487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x556bd004f393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x556bd0070e66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x556bd00a4297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x556bd00b3fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x556bcfbc77ce]
 stderr: 11: (main()+0x1bd1) [0x556bcfabfaa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7fcb25ec2813]
 stderr: 13: (_start()+0x2e) [0x556bcfba5d0e]
 stderr: 2019-04-18 17:10:17.195 7fcb2a66a080 -1 *** Caught signal (Aborted) **
 stderr: in thread 7fcb2a66a080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7fcb271fbd80]
 stderr: 2: (gsignal()+0x10f) [0x7fcb25ed693f]
 stderr: 3: (abort()+0x127) [0x7fcb25ec0c95]
 stderr: 4: (()+0x65c458) [0x556bcfba7458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x556bd01a5487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x556bd004f393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x556bd0070e66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x556bd00a4297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x556bd00b3fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x556bcfbc77ce]
 stderr: 11: (main()+0x1bd1) [0x556bcfabfaa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7fcb25ec2813]
 stderr: 13: (_start()+0x2e) [0x556bcfba5d0e]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: -485> 2019-04-18 17:10:16.669 7fcb2a66a080 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
 stderr: 0> 2019-04-18 17:10:17.195 7fcb2a66a080 -1 *** Caught signal (Aborted) **
 stderr: in thread 7fcb2a66a080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7fcb271fbd80]
 stderr: 2: (gsignal()+0x10f) [0x7fcb25ed693f]
 stderr: 3: (abort()+0x127) [0x7fcb25ec0c95]
 stderr: 4: (()+0x65c458) [0x556bcfba7458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x556bd01a5487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x556bd004f393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x556bd0070e66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x556bd00a4297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x556bd00b3fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x556bcfbc77ce]
 stderr: 11: (main()+0x1bd1) [0x556bcfabfaa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7fcb25ec2813]
 stderr: 13: (_start()+0x2e) [0x556bcfba5d0e]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--> Was unable to complete a new OSD, will rollback changes
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.4 --yes-i-really-mean-it
 stderr: purged osd.4
  stdout_lines: <omitted>
fatal: [e23-h05-740xd.alias.bos.scalelab.redhat.com]: FAILED!
=> changed=true
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --osds-per-device
  - '2'
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
  - /dev/nvme4n1
  delta: '0:00:09.650571'
  end: '2019-04-18 17:10:22.573013'
  msg: non-zero return code
  rc: 1
  start: '2019-04-18 17:10:12.922442'
  stderr: |-
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 38, in __init__
        self.main(self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
        self.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
        self.strategy.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute
        Create(command).main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 69, in main
        self.create(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
        prepare_step.safe_prepare(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare
        self.prepare()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare
        osd_fsid,
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
        db=db
      File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore
        raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
    RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid d9b36014-9d2a-4cc4-9d96-36bd79826ec3 --setuser ceph --setgroup ceph
  stdout: |-
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-f5eb4073-202a-4612-a63f-31e94b3e222a /dev/nvme0n1
     stdout: Physical volume "/dev/nvme0n1" successfully created.
 stdout: Volume group "ceph-f5eb4073-202a-4612-a63f-31e94b3e222a" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-6989c594-acac-44d0-89b8-1c0d0985bf3e /dev/nvme1n1
 stdout: Physical volume "/dev/nvme1n1" successfully created.
 stdout: Volume group "ceph-6989c594-acac-44d0-89b8-1c0d0985bf3e" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-439b663b-b499-4298-aad4-6cea51e33e46 /dev/nvme2n1
 stdout: Physical volume "/dev/nvme2n1" successfully created.
 stdout: Volume group "ceph-439b663b-b499-4298-aad4-6cea51e33e46" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-c92899a6-b962-41af-a5d5-1a1590c7e594 /dev/nvme3n1
 stdout: Physical volume "/dev/nvme3n1" successfully created.
 stdout: Volume group "ceph-c92899a6-b962-41af-a5d5-1a1590c7e594" successfully created
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-f402fedb-97eb-4979-a739-5159669ec2b9 /dev/nvme4n1
 stdout: Physical volume "/dev/nvme4n1" successfully created.
 stdout: Volume group "ceph-f402fedb-97eb-4979-a739-5159669ec2b9" successfully created
Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-b9c86a6e-21a8-4166-99d3-d80554849fa1 ceph-f5eb4073-202a-4612-a63f-31e94b3e222a
 stdout: Logical volume "osd-data-b9c86a6e-21a8-4166-99d3-d80554849fa1" created.
Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-fd32237c-9803-4d00-926e-1742fe0777f9 ceph-f5eb4073-202a-4612-a63f-31e94b3e222a
 stdout: Logical volume "osd-data-fd32237c-9803-4d00-926e-1742fe0777f9" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d9b36014-9d2a-4cc4-9d96-36bd79826ec3
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-1
Running command: /bin/chown -h ceph:ceph /dev/ceph-f5eb4073-202a-4612-a63f-31e94b3e222a/osd-data-b9c86a6e-21a8-4166-99d3-d80554849fa1
Running command: /bin/chown -R ceph:ceph /dev/dm-3
Running command: /bin/ln -s /dev/ceph-f5eb4073-202a-4612-a63f-31e94b3e222a/osd-data-b9c86a6e-21a8-4166-99d3-d80554849fa1 /var/lib/ceph/osd/ceph-1/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
 stderr: got monmap epoch 1
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQB3r7hc5ZDeDxAAuQmv17q58owwit/bYnNvbQ==
 stdout: creating /var/lib/ceph/osd/ceph-1/keyring
added entity osd.1 auth(key=AQB3r7hc5ZDeDxAAuQmv17q58owwit/bYnNvbQ==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid d9b36014-9d2a-4cc4-9d96-36bd79826ec3 --setuser ceph --setgroup ceph
 stdout: /usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = long unsigned int; _Alloc = mempool::pool_allocator<(mempool::pool_index_t)1, long unsigned int>; std::vector<_Tp, _Alloc>::reference = long unsigned int&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
 stderr: 2019-04-18 17:10:21.455 7f863a092080 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
 stderr: *** Caught signal (Aborted) **
 stderr: in thread 7f863a092080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7f8636c23d80]
 stderr: 2: (gsignal()+0x10f) [0x7f86358fe93f]
 stderr: 3: (abort()+0x127) [0x7f86358e8c95]
 stderr: 4: (()+0x65c458) [0x5566360fd458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x5566366fb487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x5566365a5393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x5566365c6e66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x5566365fa297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x556636609fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x55663611d7ce]
 stderr: 11: (main()+0x1bd1) [0x556636015aa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7f86358ea813]
 stderr: 13: (_start()+0x2e) [0x5566360fbd0e]
 stderr: 2019-04-18 17:10:21.976 7f863a092080 -1 *** Caught signal (Aborted) **
 stderr: in thread 7f863a092080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7f8636c23d80]
 stderr: 2: (gsignal()+0x10f) [0x7f86358fe93f]
 stderr: 3: (abort()+0x127) [0x7f86358e8c95]
 stderr: 4: (()+0x65c458) [0x5566360fd458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x5566366fb487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x5566365a5393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x5566365c6e66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x5566365fa297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x556636609fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x55663611d7ce]
 stderr: 11: (main()+0x1bd1) [0x556636015aa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7f86358ea813]
 stderr: 13: (_start()+0x2e) [0x5566360fbd0e]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: -485> 2019-04-18 17:10:21.455 7f863a092080 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
 stderr: 0> 2019-04-18 17:10:21.976 7f863a092080 -1 *** Caught signal (Aborted) **
 stderr: in thread 7f863a092080 thread_name:ceph-osd
 stderr: ceph version 14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable)
 stderr: 1: (()+0x12d80) [0x7f8636c23d80]
 stderr: 2: (gsignal()+0x10f) [0x7f86358fe93f]
 stderr: 3: (abort()+0x127) [0x7f86358e8c95]
 stderr: 4: (()+0x65c458) [0x5566360fd458]
 stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x5566366fb487]
 stderr: 6: (BlueStore::_open_alloc()+0x193) [0x5566365a5393]
 stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x5566365c6e66]
 stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x5566365fa297]
 stderr: 9: (BlueStore::mkfs()+0x141f) [0x556636609fbf]
 stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x55663611d7ce]
 stderr: 11: (main()+0x1bd1) [0x556636015aa1]
 stderr: 12: (__libc_start_main()+0xf3) [0x7f86358ea813]
 stderr: 13: (_start()+0x2e) [0x5566360fbd0e]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--> Was unable to complete a new OSD, will rollback changes
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1 --yes-i-really-mean-it
 stderr: purged osd.1
stdout_lines:

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
e23-h05-740xd.alias.bos.scalelab.redhat.com : ok=88  changed=11 unreachable=0 failed=1
e23-h07-740xd.alias.bos.scalelab.redhat.com : ok=86  changed=11 unreachable=0 failed=1
e24-h05-740xd.alias.bos.scalelab.redhat.com : ok=86  changed=10 unreachable=0 failed=1
e24-h07-740xd.alias.bos.scalelab.redhat.com : ok=86  changed=11 unreachable=0 failed=1
e24-h17-740xd.alias.bos.scalelab.redhat.com : ok=153 changed=10 unreachable=0 failed=0
e24-h19-740xd.alias.bos.scalelab.redhat.com : ok=142 changed=8  unreachable=0 failed=0
e24-h21-740xd.alias.bos.scalelab.redhat.com : ok=143 changed=9  unreachable=0 failed=0

INSTALLER STATUS ***************************************************************
Install Ceph Monitor : Complete (0:01:10)
Install Ceph Manager : Complete (0:00:25)
Install Ceph OSD     : In Progress (0:01:20)
    This phase can be restarted by running: roles/ceph-osd/tasks/main.yml

Thursday 18 April 2019 17:10:22 +0000 (0:00:09.829) 0:03:34.509 ********
===============================================================================
ceph-handler : restart ceph mon daemon(s) - non container --------------- 32.01s
/usr/share/ceph-ansible/roles/ceph-handler/handlers/main.yml:29
ceph-common : install redhat ceph packages ------------------------------ 18.77s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:20
ceph-config : generate ceph configuration file: ceph.conf --------------- 13.97s
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:83
ceph-config : generate ceph configuration file: ceph.conf --------------- 10.72s
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:83
ceph-common : install redhat dependencies ------------------------------- 10.33s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:2
ceph-osd : use ceph-volume lvm batch to create bluestore osds ------------ 9.83s
/usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/lvm-batch.yml:3
gather and delegate facts ------------------------------------------------ 6.86s
/usr/share/ceph-ansible/site.yml:38
ceph-mgr : install ceph-mgr package on RedHat or SUSE -------------------- 2.53s
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2
ceph-mgr : systemd start mgr --------------------------------------------- 2.36s
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/start_mgr.yml:33
ceph-osd : read information about the devices ---------------------------- 1.96s
/usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:35
ceph-mon : create ceph mgr keyring(s) ------------------------------------ 1.82s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:33
ceph-validate : validate provided configuration -------------------------- 1.62s
/usr/share/ceph-ansible/roles/ceph-validate/tasks/main.yml:2
ceph-facts : is ceph running already? ------------------------------------ 1.39s
/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:53
ceph-handler : check for a ceph mon socket ------------------------------- 1.26s
/usr/share/ceph-ansible/roles/ceph-handler/tasks/check_socket_non_container.yml:2
ceph-handler : check for a ceph osd socket ------------------------------- 1.21s
/usr/share/ceph-ansible/roles/ceph-handler/tasks/check_socket_non_container.yml:30
ceph-facts : get default crush rule value from ceph configuration -------- 1.21s
/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:257
ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created --- 1.07s
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:22
ceph-common : install redhat dependencies -------------------------------- 1.05s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:2
ceph-facts : set_fact ceph_current_status (convert to json) -------------- 1.00s
/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:84
ceph-validate : validate devices is actually a device -------------------- 0.97s
/usr/share/ceph-ansible/roles/ceph-validate/tasks/check_devices.yml:4