Bug #25098


Bluestore OSD failed to start with `bluefs_types.h: 54: FAILED assert(pos <= end)`

Added by benoit hudzia over 5 years ago. Updated over 4 years ago.

Status: Resolved
Priority: Normal
% Done: 0%
Source: Community (user)
Regression: No
Severity: 3 - minor
Pull request ID: 30109

Description

This occurs sometimes and is hard to catch:

1. We zap the device.
2. We run the ceph-volume lvm activation with a cache and DB on an SSD (a separate disk).
3. The OSD activates successfully; then we try to start it.
4. We keep getting the stack trace below.

Note: we have another OSD being set up in parallel on the same node, and it starts without a hitch.

Could this be related to the OSDs being set up and activated in parallel?

Logs:

Zapping device /dev/sdc , All DATA WILL BE LOST
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 6.0558 s, 173 MB/s
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
--> Zapping: /dev/sdc
Running command: /usr/sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes are found for /dev/sdc
Running command: wipefs --all /dev/sdc
 stdout: /dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdc: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdc: calling ioclt to re-read partition table: Success
Running command: dd if=/dev/zero of=/dev/sdc bs=1M count=10
 stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB) copied
 stderr: , 0.00540907 s, 1.9 GB/s
--> Zapping successful for: /dev/sdc
Preparing cached /dev/sdc for bluestore with 
Zapping cache
-->  KeyError: 'ceph.cluster_name'
--> Zapping: /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-wal
--> Zapping: /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-db
-->  KeyError: 'ceph.cluster_name'
Creating volume with cache
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ab3eea63-4e3c-40a8-80ae-426a69fc90e0
Running command: vgcreate --force --yes ceph-7ecdff2f-917e-480a-9af0-32efb26c5604 /dev/sdc
 stdout: Physical volume "/dev/sdc" successfully created.
 stdout: Volume group "ceph-7ecdff2f-917e-480a-9af0-32efb26c5604" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-ab3eea63-4e3c-40a8-80ae-426a69fc90e0 ceph-7ecdff2f-917e-480a-9af0-32efb26c5604
 stdout: Logical volume "osd-block-ab3eea63-4e3c-40a8-80ae-426a69fc90e0" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: chown -h ceph:ceph /dev/ceph-7ecdff2f-917e-480a-9af0-32efb26c5604/osd-block-ab3eea63-4e3c-40a8-80ae-426a69fc90e0
Running command: chown -R ceph:ceph /dev/mapper/ceph--7ecdff2f--917e--480a--9af0--32efb26c5604-osd--block--ab3eea63--4e3c--40a8--80ae--426a69fc90e0
Running command: ln -s /dev/ceph-7ecdff2f-917e-480a-9af0-32efb26c5604/osd-block-ab3eea63-4e3c-40a8-80ae-426a69fc90e0 /var/lib/ceph/osd/ceph-1/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
 stderr: got monmap epoch 3
Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQBGj1dbWqmuEhAAjGQ2C7SnDP0J4NLZFCG49g==
 stdout: creating /var/lib/ceph/osd/ceph-1/keyring
 stdout: added entity osd.1 auth auth(auid = 18446744073709551615 key=AQBGj1dbWqmuEhAAjGQ2C7SnDP0J4NLZFCG49g== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Running command: chown -h ceph:ceph /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-wal
Running command: chown -R ceph:ceph /dev/mapper/inaugurator-ab3eea63--4e3c--40a8--80ae--426a69fc90e0--wal
Running command: chown -h ceph:ceph /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-db
Running command: chown -R ceph:ceph /dev/mapper/inaugurator-ab3eea63--4e3c--40a8--80ae--426a69fc90e0--db
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --bluestore-block-wal-path /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-wal --bluestore-block-db-path /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-db --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid ab3eea63-4e3c-40a8-80ae-426a69fc90e0 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/sdc
Starting OSD service 
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-7ecdff2f-917e-480a-9af0-32efb26c5604/osd-block-ab3eea63-4e3c-40a8-80ae-426a69fc90e0 --path /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/ceph-7ecdff2f-917e-480a-9af0-32efb26c5604/osd-block-ab3eea63-4e3c-40a8-80ae-426a69fc90e0 /var/lib/ceph/osd/ceph-1/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Running command: chown -R ceph:ceph /dev/mapper/ceph--7ecdff2f--917e--480a--9af0--32efb26c5604-osd--block--ab3eea63--4e3c--40a8--80ae--426a69fc90e0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-wal /var/lib/ceph/osd/ceph-1/block.wal
Running command: chown -h ceph:ceph /dev/inaugurator/ab3eea63-4e3c-40a8-80ae-426a69fc90e0-wal
Running command: chown -R ceph:ceph /dev/mapper/inaugurator-ab3eea63--4e3c--40a8--80ae--426a69fc90e0--wal
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block.wal
Running command: chown -R ceph:ceph /dev/mapper/inaugurator-ab3eea63--4e3c--40a8--80ae--426a69fc90e0--wal
Running command: systemctl enable ceph-volume@lvm-1-ab3eea63-4e3c-40a8-80ae-426a69fc90e0
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-ab3eea63-4e3c-40a8-80ae-426a69fc90e0.service, pointing to /usr/lib/systemd/system/ceph-volume@.service.
Running command: systemctl start ceph-osd@1
 stderr: Running in chroot, ignoring request.
--> ceph-volume lvm activate successful for osd ID: 1
osd.1 belongs to no class, 
set osd(s) 1 to class '79631a28-8d7f-4ca4-a3f6-b02e0f8de8ec'
OSD ID 1 for device : /dev/sdc , UUID: ab3eea63-4e3c-40a8-80ae-426a69fc90e0
starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.6/rpm/el7/BUILD/ceph-12.2.6/src/os/bluestore/bluefs_types.h: In function 'static void bluefs_fnode_t::_denc_finish(ceph::buffer::ptr::iterator&, __u8*, __u8*, char**, uint32_t*)' thread 7f637a5fed80 time 2018-07-24 20:43:05.884337
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.6/rpm/el7/BUILD/ceph-12.2.6/src/os/bluestore/bluefs_types.h: 54: FAILED assert(pos <= end)
 ceph version 12.2.6 (488df8a1076c4f5fc5b8d18a90463262c438740f) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x5570da469bf0]
 2: (bluefs_super_t::decode(ceph::buffer::list::iterator&)+0x776) [0x5570da3ffd66]
 3: (BlueFS::_open_super()+0xfe) [0x5570da3df17e]
 4: (BlueFS::mount()+0xe3) [0x5570da3f7283]
 5: (BlueStore::_open_db(bool)+0x1847) [0x5570da311147]
 6: (BlueStore::_mount(bool)+0x40e) [0x5570da34240e]
 7: (OSD::init()+0x3bd) [0x5570d9ef732d]
 8: (main()+0x2d07) [0x5570d9dfc927]
 9: (__libc_start_main()+0xf5) [0x7f6376abe445]
 10: (()+0x4b8fe3) [0x5570d9e9afe3]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2018-07-24 20:43:05.888052 7f637a5fed80 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.6/rpm/el7/BUILD/ceph-12.2.6/src/os/bluestore/bluefs_types.h: In function 'static void bluefs_fnode_t::_denc_finish(ceph::buffer::ptr::iterator&, __u8*, __u8*, char**, uint32_t*)' thread 7f637a5fed80 time 2018-07-24 20:43:05.884337
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.6/rpm/el7/BUILD/ceph-12.2.6/src/os/bluestore/bluefs_types.h: 54: FAILED assert(pos <= end)

 ceph version 12.2.6 (488df8a1076c4f5fc5b8d18a90463262c438740f) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x5570da469bf0]
 2: (bluefs_super_t::decode(ceph::buffer::list::iterator&)+0x776) [0x5570da3ffd66]
 3: (BlueFS::_open_super()+0xfe) [0x5570da3df17e]
 4: (BlueFS::mount()+0xe3) [0x5570da3f7283]
 5: (BlueStore::_open_db(bool)+0x1847) [0x5570da311147]
 6: (BlueStore::_mount(bool)+0x40e) [0x5570da34240e]
 7: (OSD::init()+0x3bd) [0x5570d9ef732d]
 8: (main()+0x2d07) [0x5570d9dfc927]
 9: (__libc_start_main()+0xf5) [0x7f6376abe445]
 10: (()+0x4b8fe3) [0x5570d9e9afe3]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

     0> 2018-07-24 20:43:05.888052 7f637a5fed80 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.6/rpm/el7/BUILD/ceph-12.2.6/src/os/bluestore/bluefs_types.h: In function 'static void bluefs_fnode_t::_denc_finish(ceph::buffer::ptr::iterator&, __u8*, __u8*, char**, uint32_t*)' thread 7f637a5fed80 time 2018-07-24 20:43:05.884337
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.6/rpm/el7/BUILD/ceph-12.2.6/src/os/bluestore/bluefs_types.h: 54: FAILED assert(pos <= end)

 ceph version 12.2.6 (488df8a1076c4f5fc5b8d18a90463262c438740f) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x5570da469bf0]
 2: (bluefs_super_t::decode(ceph::buffer::list::iterator&)+0x776) [0x5570da3ffd66]
 3: (BlueFS::_open_super()+0xfe) [0x5570da3df17e]
 4: (BlueFS::mount()+0xe3) [0x5570da3f7283]
 5: (BlueStore::_open_db(bool)+0x1847) [0x5570da311147]
 6: (BlueStore::_mount(bool)+0x40e) [0x5570da34240e]
 7: (OSD::init()+0x3bd) [0x5570d9ef732d]
 8: (main()+0x2d07) [0x5570d9dfc927]
 9: (__libc_start_main()+0xf5) [0x7f6376abe445]
 10: (()+0x4b8fe3) [0x5570d9e9afe3]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.


Related issues: 1 (0 open, 1 closed)

Related to bluestore - Bug #18389: crash when opening bluefs superblock (Can't reproduce, 01/02/2017)

Actions #1

Updated by benoit hudzia over 5 years ago

Output of ceph-volume.log; it seems OK, nothing strange:

[2018-07-25 17:51:53,741][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm activate 1 44edbcd5-7061-4b57-91b2-db308523210a
[2018-07-25 17:51:53,743][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-3a96db34-67d8-4fec-ae6e-8ddd1bf4aed3/osd-block-44edbcd5-7061-4b57-91b2-db308523210a,ceph.block_uuid=sjna4W-VkVA-naSW-1ygQ-YelS-lhIa-SQFHpX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ea0df043-7b25-4447-a43d-e9b2af8fe069,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-db,ceph.db_uuid=O4uBae-Vbxc-x1pm-uVBK-fW2T-8vd8-DSKC51,ceph.encrypted=0,ceph.osd_fsid=44edbcd5-7061-4b57-91b2-db308523210a,ceph.osd_id=1,ceph.type=block,ceph.vdo=0,ceph.wal_device=/dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-wal,ceph.wal_uuid=KH0SMU-Ykaj-qu38-nI4w-Fiwm-lMxa-AJCi8W";"/dev/ceph-3a96db34-67d8-4fec-ae6e-8ddd1bf4aed3/osd-block-44edbcd5-7061-4b57-91b2-db308523210a";"osd-block-44edbcd5-7061-4b57-91b2-db308523210a";"ceph-3a96db34-67d8-4fec-ae6e-8ddd1bf4aed3";"sjna4W-VkVA-naSW-1ygQ-YelS-lhIa-SQFHpX
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-4d76af5c-478a-4e5e-8535-2a8ef90a0371/osd-block-8c51de4a-4ad2-4cbe-b48e-0cbadc864f46,ceph.block_uuid=cmXuwv-0jXq-93ac-cwvY-wXDt-5LUd-2s1Dgx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ea0df043-7b25-4447-a43d-e9b2af8fe069,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/inaugurator/8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-db,ceph.db_uuid=gwLHrF-GQGt-Y6Ro-YaP6-mTO5-5TON-LigcCf,ceph.encrypted=0,ceph.osd_fsid=8c51de4a-4ad2-4cbe-b48e-0cbadc864f46,ceph.osd_id=0,ceph.type=block,ceph.vdo=0,ceph.wal_device=/dev/inaugurator/8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-wal,ceph.wal_uuid=ibFWOe-bdTE-dm82-B4Ea-8N18-L0CL-Fig1Q1";"/dev/ceph-4d76af5c-478a-4e5e-8535-2a8ef90a0371/osd-block-8c51de4a-4ad2-4cbe-b48e-0cbadc864f46";"osd-block-8c51de4a-4ad2-4cbe-b48e-0cbadc864f46";"ceph-4d76af5c-478a-4e5e-8535-2a8ef90a0371";"cmXuwv-0jXq-93ac-cwvY-wXDt-5LUd-2s1Dgx
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-3a96db34-67d8-4fec-ae6e-8ddd1bf4aed3/osd-block-44edbcd5-7061-4b57-91b2-db308523210a,ceph.block_uuid=sjna4W-VkVA-naSW-1ygQ-YelS-lhIa-SQFHpX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ea0df043-7b25-4447-a43d-e9b2af8fe069,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-db,ceph.db_uuid=O4uBae-Vbxc-x1pm-uVBK-fW2T-8vd8-DSKC51,ceph.encrypted=0,ceph.osd_fsid=44edbcd5-7061-4b57-91b2-db308523210a,ceph.type=db,ceph.vdo=0,ceph.wal_device=/dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-wal,ceph.wal_uuid=KH0SMU-Ykaj-qu38-nI4w-Fiwm-lMxa-AJCi8W";"/dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-db";"44edbcd5-7061-4b57-91b2-db308523210a-db";"inaugurator";"O4uBae-Vbxc-x1pm-uVBK-fW2T-8vd8-DSKC51
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-3a96db34-67d8-4fec-ae6e-8ddd1bf4aed3/osd-block-44edbcd5-7061-4b57-91b2-db308523210a,ceph.block_uuid=sjna4W-VkVA-naSW-1ygQ-YelS-lhIa-SQFHpX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ea0df043-7b25-4447-a43d-e9b2af8fe069,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=44edbcd5-7061-4b57-91b2-db308523210a,ceph.osd_id=1,ceph.type=wal,ceph.vdo=0,ceph.wal_device=/dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-wal,ceph.wal_uuid=KH0SMU-Ykaj-qu38-nI4w-Fiwm-lMxa-AJCi8W";"/dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-wal";"44edbcd5-7061-4b57-91b2-db308523210a-wal";"inaugurator";"KH0SMU-Ykaj-qu38-nI4w-Fiwm-lMxa-AJCi8W
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-4d76af5c-478a-4e5e-8535-2a8ef90a0371/osd-block-8c51de4a-4ad2-4cbe-b48e-0cbadc864f46,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ea0df043-7b25-4447-a43d-e9b2af8fe069,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/inaugurator/8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-db,ceph.db_uuid=gwLHrF-GQGt-Y6Ro-YaP6-mTO5-5TON-LigcCf,ceph.encrypted=0,ceph.osd_fsid=8c51de4a-4ad2-4cbe-b48e-0cbadc864f46,ceph.osd_id=0,ceph.type=db,ceph.vdo=0,ceph.wal_uuid=ibFWOe-bdTE-dm82-B4Ea-8N18-L0CL-Fig1Q1";"/dev/inaugurator/8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-db";"8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-db";"inaugurator";"gwLHrF-GQGt-Y6Ro-YaP6-mTO5-5TON-LigcCf
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-4d76af5c-478a-4e5e-8535-2a8ef90a0371/osd-block-8c51de4a-4ad2-4cbe-b48e-0cbadc864f46,ceph.block_uuid=cmXuwv-0jXq-93ac-cwvY-wXDt-5LUd-2s1Dgx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ea0df043-7b25-4447-a43d-e9b2af8fe069,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=8c51de4a-4ad2-4cbe-b48e-0cbadc864f46,ceph.osd_id=0,ceph.type=wal,ceph.vdo=0,ceph.wal_device=/dev/inaugurator/8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-wal,ceph.wal_uuid=ibFWOe-bdTE-dm82-B4Ea-8N18-L0CL-Fig1Q1";"/dev/inaugurator/8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-wal";"8c51de4a-4ad2-4cbe-b48e-0cbadc864f46-wal";"inaugurator";"ibFWOe-bdTE-dm82-B4Ea-8N18-L0CL-Fig1Q1
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ";"/dev/inaugurator/cache_flag_8959492f-82ff-4a1b-af19-86ac5873a4f2";"cache_flag_8959492f-82ff-4a1b-af19-86ac5873a4f2";"inaugurator";"RhAaLK-YXnc-0LI3-TCJ6-2LgY-0qsh-hg0dWx
[2018-07-25 17:51:53,856][ceph_volume.process][INFO  ] stdout ";"/dev/inaugurator/containers";"containers";"inaugurator";"4zNH75-uKtT-gk7G-nwS0-qJbN-VPjQ-poDMu1
[2018-07-25 17:51:53,857][ceph_volume.process][INFO  ] stdout ";"/dev/inaugurator/data";"data";"inaugurator";"sFHPnH-webh-s9Ac-vQZB-er96-mcIB-IYquGd
[2018-07-25 17:51:53,857][ceph_volume.process][INFO  ] stdout ";"";"docker_thinpool";"inaugurator";"963UzL-XMyV-UuTS-X0zB-KcwP-Quqd-WJd29I
[2018-07-25 17:51:53,857][ceph_volume.process][INFO  ] stdout ";"/dev/inaugurator/log";"log";"inaugurator";"MlgKeA-LzIW-H8Yw-xLEp-7igA-Jtd3-Hp0P9K
[2018-07-25 17:51:53,857][ceph_volume.process][INFO  ] stdout ";"/dev/inaugurator/root";"root";"inaugurator";"4PC2Um-8SgD-Mb0U-lirn-Ur2K-qz7V-VAXMw4
[2018-07-25 17:51:53,857][ceph_volume.process][INFO  ] stdout ";"/dev/inaugurator/swap";"swap";"inaugurator";"qeRN25-8mz0-OeDb-M64o-ZHMA-9mGJ-h0gwFB
[2018-07-25 17:51:53,857][ceph_volume.process][INFO  ] stdout ";"/dev/inaugurator/temp";"temp";"inaugurator";"FD5Aqc-EA4P-LLQd-65pY-1rcR-HPLS-ve2sqM
[2018-07-25 17:51:53,875][ceph_volume.process][INFO  ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[2018-07-25 17:51:53,927][ceph_volume.devices.lvm.activate][DEBUG ] Found block device (osd-block-44edbcd5-7061-4b57-91b2-db308523210a) with encryption: False
[2018-07-25 17:51:53,927][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -t PARTUUID="O4uBae-Vbxc-x1pm-uVBK-fW2T-8vd8-DSKC51" -o device
[2018-07-25 17:51:54,071][ceph_volume.devices.lvm.activate][DEBUG ] Found block device (osd-block-44edbcd5-7061-4b57-91b2-db308523210a) with encryption: False
[2018-07-25 17:51:54,077][ceph_volume.process][INFO  ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-3a96db34-67d8-4fec-ae6e-8ddd1bf4aed3/osd-block-44edbcd5-7061-4b57-91b2-db308523210a --path /var/lib/ceph/osd/ceph-1
[2018-07-25 17:51:54,289][ceph_volume.process][INFO  ] Running command: ln -snf /dev/ceph-3a96db34-67d8-4fec-ae6e-8ddd1bf4aed3/osd-block-44edbcd5-7061-4b57-91b2-db308523210a /var/lib/ceph/osd/ceph-1/block
[2018-07-25 17:51:54,337][ceph_volume.process][INFO  ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[2018-07-25 17:51:54,385][ceph_volume.process][INFO  ] Running command: chown -R ceph:ceph /dev/mapper/ceph--3a96db34--67d8--4fec--ae6e--8ddd1bf4aed3-osd--block--44edbcd5--7061--4b57--91b2--db308523210a
[2018-07-25 17:51:54,433][ceph_volume.process][INFO  ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[2018-07-25 17:51:54,480][ceph_volume.process][INFO  ] Running command: ln -snf /dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-wal /var/lib/ceph/osd/ceph-1/block.wal
[2018-07-25 17:51:54,527][ceph_volume.process][INFO  ] Running command: chown -h ceph:ceph /dev/inaugurator/44edbcd5-7061-4b57-91b2-db308523210a-wal
[2018-07-25 17:51:54,578][ceph_volume.process][INFO  ] Running command: chown -R ceph:ceph /dev/mapper/inaugurator-44edbcd5--7061--4b57--91b2--db308523210a--wal
[2018-07-25 17:51:54,625][ceph_volume.process][INFO  ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block.wal
[2018-07-25 17:51:54,673][ceph_volume.process][INFO  ] Running command: chown -R ceph:ceph /dev/mapper/inaugurator-44edbcd5--7061--4b57--91b2--db308523210a--wal
[2018-07-25 17:51:54,722][ceph_volume.process][INFO  ] Running command: systemctl enable ceph-volume@lvm-1-44edbcd5-7061-4b57-91b2-db308523210a
[2018-07-25 17:51:54,780][ceph_volume.process][INFO  ] stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-44edbcd5-7061-4b57-91b2-db308523210a.service, pointing to /usr/lib/systemd/system/ceph-volume@.service.
[2018-07-25 17:51:54,780][ceph_volume.process][INFO  ] Running command: systemctl start ceph-osd@1
Actions #2

Updated by benoit hudzia over 5 years ago

Dump of the BlueFS superblock:

1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000148461 s, 27.6 MB/s
[root@stratonode3 ~]# hexdump -C /tmp/foo
00000000  2f 17 cd ef 20 4c 56 4d  32 20 78 5b 35 41 25 72  |/... LVM2 x[5A%r|
00000010  30 4e 2a 3e 01 00 00 00  00 10 00 00 00 00 00 00  |0N*>............|
00000020  00 f0 0f 00 00 00 00 00  00 50 01 00 00 00 00 00  |.........P......|
00000030  5a 07 00 00 00 00 00 00  5c 23 73 b4 00 00 00 00  |Z.......\#s.....|
00000040  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000

Actions #3

Updated by benoit hudzia over 5 years ago

A 1 MB dump from the device: `dd if=/dev/sdc of=/tmp/foo bs=1024K skip=1 count=1`

00000000  62 6c 75 65 73 74 6f 72  65 20 62 6c 6f 63 6b 20  |bluestore block |
00000010  64 65 76 69 63 65 0a 34  34 65 64 62 63 64 35 2d  |device.44edbcd5-|
00000020  37 30 36 31 2d 34 62 35  37 2d 39 31 62 32 2d 64  |7061-4b57-91b2-d|
00000030  62 33 30 38 35 32 33 32  31 30 61 0a 02 01 16 01  |b308523210a.....|
00000040  00 00 44 ed bc d5 70 61  4b 57 91 b2 db 30 85 23  |..D...paKW...0.#|
00000050  21 0a 00 00 00 c1 d1 01  00 00 6e ac 58 5b 6c 28  |!.........n.X[l(|
00000060  00 11 04 00 00 00 6d 61  69 6e 08 00 00 00 06 00  |......main......|
00000070  00 00 62 6c 75 65 66 73  01 00 00 00 31 09 00 00  |..bluefs....1...|
00000080  00 63 65 70 68 5f 66 73  69 64 24 00 00 00 65 61  |.ceph_fsid$...ea|
00000090  30 64 66 30 34 33 2d 37  62 32 35 2d 34 34 34 37  |0df043-7b25-4447|
000000a0  2d 61 34 33 64 2d 65 39  62 32 61 66 38 66 65 30  |-a43d-e9b2af8fe0|
000000b0  36 39 0a 00 00 00 6b 76  5f 62 61 63 6b 65 6e 64  |69....kv_backend|
000000c0  07 00 00 00 72 6f 63 6b  73 64 62 05 00 00 00 6d  |....rocksdb....m|
000000d0  61 67 69 63 14 00 00 00  63 65 70 68 20 6f 73 64  |agic....ceph osd|
000000e0  20 76 6f 6c 75 6d 65 20  76 30 32 36 09 00 00 00  | volume v026....|
000000f0  6d 6b 66 73 5f 64 6f 6e  65 03 00 00 00 79 65 73  |mkfs_done....yes|
00000100  07 00 00 00 6f 73 64 5f  6b 65 79 28 00 00 00 41  |....osd_key(...A|
00000110  51 42 69 72 46 68 62 58  6f 45 52 47 68 41 41 32  |QBirFhbXoERGhAA2|
00000120  48 75 37 45 52 58 6a 2f  54 34 6e 70 74 37 67 46  |Hu7ERXj/T4npt7gF|
00000130  7a 2f 5a 64 41 3d 3d 05  00 00 00 72 65 61 64 79  |z/ZdA==....ready|
00000140  05 00 00 00 72 65 61 64  79 06 00 00 00 77 68 6f  |....ready....who|
00000150  61 6d 69 01 00 00 00 31  c1 34 e7 18 00 00 00 00  |ami....1.4......|
00000160  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00010000  08 05 f9 01 00 00 ea 0d  f0 43 7b 25 44 47 a4 3d  |.........C{%DG.=|
00010010  e9 b2 af 8f e0 69 01 00  00 00 00 00 00 00 00 00  |.....i..........|
00010020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00010040  00 00 00 00 00 00 fe f7  01 00 00 00 00 00 0f 00  |................|
00010050  00 00 01 00 00 00 00 00  00 00 1a 00 00 00 69 6e  |..............in|
00010060  69 74 69 61 6c 20 66 65  61 74 75 72 65 20 73 65  |itial feature se|
00010070  74 28 7e 76 2e 31 38 29  02 00 00 00 00 00 00 00  |t(~v.18)........|
00010080  0d 00 00 00 70 67 69 6e  66 6f 20 6f 62 6a 65 63  |....pginfo objec|
00010090  74 03 00 00 00 00 00 00  00 0e 00 00 00 6f 62 6a  |t............obj|
000100a0  65 63 74 20 6c 6f 63 61  74 6f 72 04 00 00 00 00  |ect locator.....|
000100b0  00 00 00 10 00 00 00 6c  61 73 74 5f 65 70 6f 63  |.......last_epoc|

Actions #4

Updated by Radoslaw Zarzynski over 5 years ago

Also reported by Alfredo Deza on 14.0.0-2808-gccbb825 (ccbb82553b5f130a109835907d4e09cd6e9cd993). A log is available: https://paste.fedoraproject.org/paste/5OqZV~nJA5w3taDf9huLSA

Actions #5

Updated by Radoslaw Zarzynski over 5 years ago

  • Status changed from New to 12
Actions #6

Updated by Alfredo Deza over 5 years ago

Full log output from our latest tests:

2018-09-04 23:01:14.307 7fb97b4158c0  0 set uid:gid to 167:167 (ceph:ceph)
2018-09-04 23:01:14.307 7fb97b4158c0  0 ceph version 14.0.0-2808-gccbb825 (ccbb82553b5f130a109835907d4e09cd6e9cd993) nautilus (dev), process ceph-osd, pid 29840
2018-09-04 23:01:14.307 7fb97b4158c0  0 pidfile_write: ignore empty --pid-file
2018-09-04 23:01:14.336 7fb97b4158c0  0 load: jerasure load: lrc load: isa
2018-09-04 23:01:14.336 7fb97b4158c0  1 bdev create path /var/lib/ceph/osd/test-0/block type kernel
2018-09-04 23:01:14.336 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open path /var/lib/ceph/osd/test-0/block
2018-09-04 23:01:14.339 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open size 11530141696 (0x2af400000, 11 GiB) block_size 4096 (4 KiB) rotational
2018-09-04 23:01:14.339 7fb97b4158c0  1 bluestore(/var/lib/ceph/osd/test-0) _set_cache_sizes cache_size 1073741824 meta 0.4 kv 0.4 data 0.2
2018-09-04 23:01:14.339 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) close
2018-09-04 23:01:14.593 7fb97b4158c0  1 bluestore(/var/lib/ceph/osd/test-0) _mount path /var/lib/ceph/osd/test-0
2018-09-04 23:01:14.595 7fb97b4158c0  1 bdev create path /var/lib/ceph/osd/test-0/block type kernel
2018-09-04 23:01:14.595 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open path /var/lib/ceph/osd/test-0/block
2018-09-04 23:01:14.595 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open size 11530141696 (0x2af400000, 11 GiB) block_size 4096 (4 KiB) rotational
2018-09-04 23:01:14.598 7fb97b4158c0  1 bluestore(/var/lib/ceph/osd/test-0) _set_cache_sizes cache_size 1073741824 meta 0.4 kv 0.4 data 0.2
2018-09-04 23:01:14.598 7fb97b4158c0  1 bdev create path /var/lib/ceph/osd/test-0/block type kernel
2018-09-04 23:01:14.598 7fb97b4158c0  1 bdev(0x55920da5a380 /var/lib/ceph/osd/test-0/block) open path /var/lib/ceph/osd/test-0/block
2018-09-04 23:01:14.598 7fb97b4158c0  1 bdev(0x55920da5a380 /var/lib/ceph/osd/test-0/block) open size 11530141696 (0x2af400000, 11 GiB) block_size 4096 (4 KiB) rotational
2018-09-04 23:01:14.598 7fb97b4158c0  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/test-0/block size 11 GiB
2018-09-04 23:01:14.598 7fb97b4158c0  1 bluefs mount
2018-09-04 23:01:14.603 7fb97b4158c0 -1 /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.0-2808-gccbb825/rpm/el7/BUILD/ceph-14.0.0-2808-gccbb825/src/os/bluestore/bluefs_types.h: In function 'static void bluefs_fnode_t::_denc_finish(ceph::buffer::ptr::const_iterator&, __u8*, __u8*, char**, uint32_t*)' thread 7fb97b4158c0 time 2018-09-04 23:01:14.602067
/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.0-2808-gccbb825/rpm/el7/BUILD/ceph-14.0.0-2808-gccbb825/src/os/bluestore/bluefs_types.h: 57: FAILED ceph_assert(pos <= end)

 ceph version 14.0.0-2808-gccbb825 (ccbb82553b5f130a109835907d4e09cd6e9cd993) nautilus (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x150) [0x5592033f20df]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x5592033f2264]
 3: (bluefs_super_t::decode(ceph::buffer::list::iterator_impl<true>&)+0x78b) [0x559203a12c4b]
 4: (BlueFS::_open_super()+0xc6) [0x5592039f80e6]
 5: (BlueFS::mount()+0xa6) [0x559203a064c6]
 6: (BlueStore::_open_db(bool, bool)+0x15bc) [0x5592039218ec]
 7: (BlueStore::_mount(bool, bool)+0x459) [0x55920394db59]
 8: (OSD::init()+0x3e8) [0x559203526118]
 9: (main()+0x132d) [0x5592033f570d]
 10: (__libc_start_main()+0xf5) [0x7fb976e15c05]
 11: (()+0x5127e5) [0x5592034d97e5]

2018-09-04 23:01:14.603 7fb97b4158c0 -1 *** Caught signal (Aborted) **
 in thread 7fb97b4158c0 thread_name:ceph-osd

 ceph version 14.0.0-2808-gccbb825 (ccbb82553b5f130a109835907d4e09cd6e9cd993) nautilus (dev)
 1: (()+0xf5e0) [0x7fb977e125e0]
 2: (gsignal()+0x37) [0x7fb976e291f7]
 3: (abort()+0x148) [0x7fb976e2a8e8]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a4) [0x5592033f2133]
 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x5592033f2264]
 6: (bluefs_super_t::decode(ceph::buffer::list::iterator_impl<true>&)+0x78b) [0x559203a12c4b]
 7: (BlueFS::_open_super()+0xc6) [0x5592039f80e6]
 8: (BlueFS::mount()+0xa6) [0x559203a064c6]
 9: (BlueStore::_open_db(bool, bool)+0x15bc) [0x5592039218ec]
 10: (BlueStore::_mount(bool, bool)+0x459) [0x55920394db59]
 11: (OSD::init()+0x3e8) [0x559203526118]
 12: (main()+0x132d) [0x5592033f570d]
 13: (__libc_start_main()+0xf5) [0x7fb976e15c05]
 14: (()+0x5127e5) [0x5592034d97e5]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
  -131> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command abort hook 0x55920d53a250
  -130> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command perfcounters_dump hook 0x55920d53a250
  -129> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command 1 hook 0x55920d53a250
  -128> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command perf dump hook 0x55920d53a250
  -127> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command perfcounters_schema hook 0x55920d53a250
  -126> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command perf histogram dump hook 0x55920d53a250
  -125> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command 2 hook 0x55920d53a250
  -124> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command perf schema hook 0x55920d53a250
  -123> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command perf histogram schema hook 0x55920d53a250
  -122> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command perf reset hook 0x55920d53a250
  -121> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command config show hook 0x55920d53a250
  -120> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command config help hook 0x55920d53a250
  -119> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command config set hook 0x55920d53a250
  -118> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command config unset hook 0x55920d53a250
  -117> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command config get hook 0x55920d53a250
  -116> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command config diff hook 0x55920d53a250
  -115> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command config diff get hook 0x55920d53a250
  -114> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command log flush hook 0x55920d53a250
  -113> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command log dump hook 0x55920d53a250
  -112> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command log reopen hook 0x55920d53a250
  -111> 2018-09-04 23:01:14.268 7fb97b4158c0  5 asok(0x55920d842000) register_command dump_mempools hook 0x55920d5882c8
  -110> 2018-09-04 23:01:14.279 7fb97b4158c0 10 monclient: get_monmap_and_config
  -109> 2018-09-04 23:01:14.299 7fb97b4158c0 10 monclient: build_initial_monmap
  -108> 2018-09-04 23:01:14.299 7fb97331c700  2 Event(0x55920d8a5c80 nevent=5000 time_id=1).set_owner idx=0 owner=140434478319360
  -107> 2018-09-04 23:01:14.299 7fb972b1b700  2 Event(0x55920d547680 nevent=5000 time_id=1).set_owner idx=1 owner=140434469926656
  -106> 2018-09-04 23:01:14.299 7fb97231a700  2 Event(0x55920d8a4480 nevent=5000 time_id=1).set_owner idx=2 owner=140434461533952
  -105> 2018-09-04 23:01:14.302 7fb97b4158c0  1  Processor -- start
  -104> 2018-09-04 23:01:14.302 7fb97b4158c0  1 --  start start
  -103> 2018-09-04 23:01:14.302 7fb97b4158c0 10 monclient: init
  -102> 2018-09-04 23:01:14.302 7fb97b4158c0  5 adding auth protocol: cephx
  -101> 2018-09-04 23:01:14.302 7fb97b4158c0 10 monclient: auth_supported 2 method cephx
  -100> 2018-09-04 23:01:14.302 7fb97b4158c0  2 auth: KeyRing::load: loaded key file /var/lib/ceph/osd/test-0/keyring
   -99> 2018-09-04 23:01:14.302 7fb97b4158c0 10 monclient: _reopen_session rank -1
   -98> 2018-09-04 23:01:14.302 7fb97b4158c0 10 monclient(hunting): picked mon.noname-a con 0x55920d980700 addr 192.168.3.10:6789/0
   -97> 2018-09-04 23:01:14.302 7fb97b4158c0  1 --  --> 192.168.3.10:6789/0 -- auth(proto 0 26 bytes epoch 0) v1 -- 0x55920d5fad80 con 0
   -96> 2018-09-04 23:01:14.302 7fb97b4158c0 10 monclient(hunting): _renew_subs
   -95> 2018-09-04 23:01:14.302 7fb97b4158c0 10 monclient(hunting): authenticate will time out at 2018-09-04 23:06:14.303078
   -94> 2018-09-04 23:01:14.303 7fb972b1b700  1 -- 192.168.3.100:47082/702231899 learned_addr learned my addr 192.168.3.100:47082/702231899 (peer_addr_for_me 192.168.3.100:47082/0)
   -93> 2018-09-04 23:01:14.303 7fb972b1b700  2 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_CONNECTING_WAIT_ACK_SEQ pgs=0 cs=0 l=0)._process_connection got newly_acked_seq 0 vs out_seq 0
   -92> 2018-09-04 23:01:14.304 7fb972b1b700  5 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=134 cs=1 l=1). rx mon.0 seq 1 0x55920d5fad80 auth_reply(proto 2 0 (0) Success) v1
   -91> 2018-09-04 23:01:14.304 7fb971b19700  1 -- 192.168.3.100:47082/702231899 <== mon.0 192.168.3.10:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (3088870390 0 0) 0x55920d5fad80 con 0x55920d980700
   -90> 2018-09-04 23:01:14.304 7fb971b19700 10 monclient(hunting): my global_id is 14227
   -89> 2018-09-04 23:01:14.304 7fb971b19700  1 -- 192.168.3.100:47082/702231899 --> 192.168.3.10:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x55920d5fb8c0 con 0
   -88> 2018-09-04 23:01:14.304 7fb972b1b700  5 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=134 cs=1 l=1). rx mon.0 seq 2 0x55920d5fb8c0 auth_reply(proto 2 0 (0) Success) v1
   -87> 2018-09-04 23:01:14.304 7fb972b1b700  5 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=134 cs=1 l=1). rx mon.0 seq 3 0x55920d5fb680 mon_map magic: 0 v1
   -86> 2018-09-04 23:01:14.304 7fb971b19700  1 -- 192.168.3.100:47082/702231899 <== mon.0 192.168.3.10:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (150198195 0 0) 0x55920d5fb8c0 con 0x55920d980700
   -85> 2018-09-04 23:01:14.304 7fb971b19700  1 monclient: found mon.noname-a
   -84> 2018-09-04 23:01:14.304 7fb971b19700 10 monclient: _send_mon_message to mon.noname-a at 192.168.3.10:6789/0
   -83> 2018-09-04 23:01:14.304 7fb971b19700  1 -- 192.168.3.100:47082/702231899 --> 192.168.3.10:6789/0 -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x55920d8a5400 con 0
   -82> 2018-09-04 23:01:14.304 7fb971b19700 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2018-09-04 23:00:44.305647)
   -81> 2018-09-04 23:01:14.304 7fb971b19700 10 monclient: _send_mon_message to mon.noname-a at 192.168.3.10:6789/0
   -80> 2018-09-04 23:01:14.304 7fb971b19700  1 -- 192.168.3.100:47082/702231899 --> 192.168.3.10:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- 0x55920d5fad80 con 0
   -79> 2018-09-04 23:01:14.304 7fb971b19700  1 -- 192.168.3.100:47082/702231899 <== mon.0 192.168.3.10:6789/0 3 ==== mon_map magic: 0 v1 ==== 149+0+0 (2507971757 0 0) 0x55920d5fb680 con 0x55920d980700
   -78> 2018-09-04 23:01:14.304 7fb971b19700 10 monclient: handle_monmap mon_map magic: 0 v1
   -77> 2018-09-04 23:01:14.304 7fb971b19700 10 monclient:  got monmap 1, mon.noname-a is now rank -1
   -76> 2018-09-04 23:01:14.304 7fb971b19700 10 monclient: dump:
epoch 1
fsid de649134-fe80-43d3-b952-62aca7e949fb
last_changed 2018-09-04 19:50:12.844115
created 2018-09-04 19:50:12.844115
0: 192.168.3.10:6789/0 mon.mon0

   -75> 2018-09-04 23:01:14.304 7fb97b4158c0  5 monclient: authenticate success, global_id 14227
   -74> 2018-09-04 23:01:14.305 7fb972b1b700  5 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=134 cs=1 l=1). rx mon.0 seq 4 0x55920d98c200 config(0 keys) v1
   -73> 2018-09-04 23:01:14.305 7fb972b1b700  5 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=134 cs=1 l=1). rx mon.0 seq 5 0x55920d5fad80 mon_map magic: 0 v1
   -72> 2018-09-04 23:01:14.305 7fb972b1b700  5 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=134 cs=1 l=1). rx mon.0 seq 6 0x55920d5fbb00 auth_reply(proto 2 0 (0) Success) v1
   -71> 2018-09-04 23:01:14.305 7fb971b19700  1 -- 192.168.3.100:47082/702231899 <== mon.0 192.168.3.10:6789/0 4 ==== config(0 keys) v1 ==== 4+0+0 (0 0 0) 0x55920d98c200 con 0x55920d980700
   -70> 2018-09-04 23:01:14.305 7fb971b19700 10 monclient: handle_config config(0 keys) v1
   -69> 2018-09-04 23:01:14.305 7fb971b19700  1 -- 192.168.3.100:47082/702231899 <== mon.0 192.168.3.10:6789/0 5 ==== mon_map magic: 0 v1 ==== 149+0+0 (2507971757 0 0) 0x55920d5fad80 con 0x55920d980700
   -68> 2018-09-04 23:01:14.305 7fb971b19700 10 monclient: handle_monmap mon_map magic: 0 v1
   -67> 2018-09-04 23:01:14.305 7fb971b19700 10 monclient:  got monmap 1, mon.mon0 is now rank 0
   -66> 2018-09-04 23:01:14.305 7fb971b19700 10 monclient: dump:
epoch 1
fsid de649134-fe80-43d3-b952-62aca7e949fb
last_changed 2018-09-04 19:50:12.844115
created 2018-09-04 19:50:12.844115
0: 192.168.3.10:6789/0 mon.mon0

   -65> 2018-09-04 23:01:14.305 7fb971b19700  1 -- 192.168.3.100:47082/702231899 <== mon.0 192.168.3.10:6789/0 6 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (3274556671 0 0) 0x55920d5fbb00 con 0x55920d980700
   -64> 2018-09-04 23:01:14.305 7fb971b19700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2018-09-04 23:00:44.306200)
   -63> 2018-09-04 23:01:14.305 7fb97b4158c0 10 monclient: get_monmap_and_config success
   -62> 2018-09-04 23:01:14.305 7fb97b4158c0 10 monclient: shutdown
   -61> 2018-09-04 23:01:14.305 7fb97b4158c0  1 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN pgs=134 cs=1 l=1).mark_down
   -60> 2018-09-04 23:01:14.305 7fb97b4158c0  2 -- 192.168.3.100:47082/702231899 >> 192.168.3.10:6789/0 conn(0x55920d980700 legacy :-1 s=STATE_OPEN pgs=134 cs=1 l=1)._stop
   -59> 2018-09-04 23:01:14.305 7fb970316700  4 set_mon_vals no callback set
   -58> 2018-09-04 23:01:14.306 7fb97b4158c0  1 -- 192.168.3.100:47082/702231899 shutdown_connections
   -57> 2018-09-04 23:01:14.306 7fb97b4158c0  5 -- 192.168.3.100:47082/702231899 shutdown_connections mark down 192.168.3.10:6789/0 0x55920d980700
   -56> 2018-09-04 23:01:14.306 7fb97b4158c0  5 -- 192.168.3.100:47082/702231899 shutdown_connections delete 0x55920d980700
   -55> 2018-09-04 23:01:14.306 7fb97b4158c0  1 -- 192.168.3.100:47082/702231899 shutdown_connections
   -54> 2018-09-04 23:01:14.306 7fb97b4158c0  1 -- 192.168.3.100:47082/702231899 wait complete.
   -53> 2018-09-04 23:01:14.306 7fb97b4158c0  1 -- 192.168.3.100:47082/702231899 >> - conn(0x55920d980000 msgr2 :-1 s=STATE_NONE pgs=0 cs=0 l=0).mark_down
   -52> 2018-09-04 23:01:14.306 7fb97b4158c0  2 -- 192.168.3.100:47082/702231899 >> - conn(0x55920d980000 msgr2 :-1 s=STATE_NONE pgs=0 cs=0 l=0)._stop
   -51> 2018-09-04 23:01:14.307 7fb97b4158c0  0 set uid:gid to 167:167 (ceph:ceph)
   -50> 2018-09-04 23:01:14.307 7fb97b4158c0  0 ceph version 14.0.0-2808-gccbb825 (ccbb82553b5f130a109835907d4e09cd6e9cd993) nautilus (dev), process ceph-osd, pid 29840
   -49> 2018-09-04 23:01:14.307 7fb97b4158c0  0 pidfile_write: ignore empty --pid-file
   -48> 2018-09-04 23:01:14.325 7fb97b4158c0  5 asok(0x55920d842000) init /var/run/ceph/test-osd.0.asok
   -47> 2018-09-04 23:01:14.325 7fb97b4158c0  5 asok(0x55920d842000) bind_and_listen /var/run/ceph/test-osd.0.asok
   -46> 2018-09-04 23:01:14.328 7fb97b4158c0  5 asok(0x55920d842000) register_command 0 hook 0x55920d53f530
   -45> 2018-09-04 23:01:14.328 7fb97b4158c0  5 asok(0x55920d842000) register_command version hook 0x55920d53f530
   -44> 2018-09-04 23:01:14.328 7fb97b4158c0  5 asok(0x55920d842000) register_command git_version hook 0x55920d53f530
   -43> 2018-09-04 23:01:14.328 7fb97b4158c0  5 asok(0x55920d842000) register_command help hook 0x55920d53a640
   -42> 2018-09-04 23:01:14.328 7fb97b4158c0  5 asok(0x55920d842000) register_command get_command_descriptions hook 0x55920d53a1b0
   -41> 2018-09-04 23:01:14.328 7fb97b4158c0  5 object store type is bluestore
   -40> 2018-09-04 23:01:14.328 7fb971318700  5 asok(0x55920d842000) entry start
   -39> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.3.100:0/0 learned_addr learned my addr 192.168.3.100:0/0 (peer_addr_for_me 192.168.3.100:0/0)
   -38> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.3.100:6800/29840 _finish_bind bind my_addrs is 192.168.3.100:6800/29840
   -37> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.4.200:0/0 learned_addr learned my addr 192.168.4.200:0/0 (peer_addr_for_me 192.168.4.200:0/0)
   -36> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.4.200:6800/29840 _finish_bind bind my_addrs is 192.168.4.200:6800/29840
   -35> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.3.100:0/0 learned_addr learned my addr 192.168.3.100:0/0 (peer_addr_for_me 192.168.3.100:0/0)
   -34> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.3.100:6801/29840 _finish_bind bind my_addrs is 192.168.3.100:6801/29840
   -33> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.4.200:0/0 learned_addr learned my addr 192.168.4.200:0/0 (peer_addr_for_me 192.168.4.200:0/0)
   -32> 2018-09-04 23:01:14.331 7fb97b4158c0  1 -- 192.168.4.200:6801/29840 _finish_bind bind my_addrs is 192.168.4.200:6801/29840
   -31> 2018-09-04 23:01:14.331 7fb97b4158c0 10 monclient: build_initial_monmap
   -30> 2018-09-04 23:01:14.336 7fb97b4158c0  0 load: jerasure load: lrc load: isa
   -29> 2018-09-04 23:01:14.336 7fb97b4158c0  5 adding auth protocol: cephx
   -28> 2018-09-04 23:01:14.336 7fb97b4158c0  5 adding auth protocol: cephx
   -27> 2018-09-04 23:01:14.336 7fb97b4158c0  1 bdev create path /var/lib/ceph/osd/test-0/block type kernel
   -26> 2018-09-04 23:01:14.336 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open path /var/lib/ceph/osd/test-0/block
   -25> 2018-09-04 23:01:14.339 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open size 11530141696 (0x2af400000, 11 GiB) block_size 4096 (4 KiB) rotational
   -24> 2018-09-04 23:01:14.339 7fb97b4158c0  1 bluestore(/var/lib/ceph/osd/test-0) _set_cache_sizes cache_size 1073741824 meta 0.4 kv 0.4 data 0.2
   -23> 2018-09-04 23:01:14.339 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) close
   -22> 2018-09-04 23:01:14.593 7fb97b4158c0  5 asok(0x55920d842000) register_command objecter_requests hook 0x55920d53a7b0
   -21> 2018-09-04 23:01:14.593 7fb97b4158c0  1 -- 192.168.3.100:6800/29840 start start
   -20> 2018-09-04 23:01:14.593 7fb97b4158c0  1 --  start start
   -19> 2018-09-04 23:01:14.593 7fb97b4158c0  1 --  start start
   -18> 2018-09-04 23:01:14.593 7fb97b4158c0  1 -- 192.168.3.100:6801/29840 start start
   -17> 2018-09-04 23:01:14.593 7fb97b4158c0  1 -- 192.168.4.200:6801/29840 start start
   -16> 2018-09-04 23:01:14.593 7fb97b4158c0  1 -- 192.168.4.200:6800/29840 start start
   -15> 2018-09-04 23:01:14.593 7fb97b4158c0  1 --  start start
   -14> 2018-09-04 23:01:14.593 7fb97b4158c0  2 osd.0 0 init /var/lib/ceph/osd/test-0 (looks like hdd)
   -13> 2018-09-04 23:01:14.593 7fb97b4158c0  2 osd.0 0 journal /var/lib/ceph/osd/test-0/journal
   -12> 2018-09-04 23:01:14.593 7fb97b4158c0  1 bluestore(/var/lib/ceph/osd/test-0) _mount path /var/lib/ceph/osd/test-0
   -11> 2018-09-04 23:01:14.595 7fb97b4158c0  1 bdev create path /var/lib/ceph/osd/test-0/block type kernel
   -10> 2018-09-04 23:01:14.595 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open path /var/lib/ceph/osd/test-0/block
    -9> 2018-09-04 23:01:14.595 7fb97b4158c0  1 bdev(0x55920da5a000 /var/lib/ceph/osd/test-0/block) open size 11530141696 (0x2af400000, 11 GiB) block_size 4096 (4 KiB) rotational
    -8> 2018-09-04 23:01:14.598 7fb97b4158c0  1 bluestore(/var/lib/ceph/osd/test-0) _set_cache_sizes cache_size 1073741824 meta 0.4 kv 0.4 data 0.2
    -7> 2018-09-04 23:01:14.598 7fb97b4158c0  1 bdev create path /var/lib/ceph/osd/test-0/block type kernel
    -6> 2018-09-04 23:01:14.598 7fb97b4158c0  1 bdev(0x55920da5a380 /var/lib/ceph/osd/test-0/block) open path /var/lib/ceph/osd/test-0/block
    -5> 2018-09-04 23:01:14.598 7fb97b4158c0  1 bdev(0x55920da5a380 /var/lib/ceph/osd/test-0/block) open size 11530141696 (0x2af400000, 11 GiB) block_size 4096 (4 KiB) rotational
    -4> 2018-09-04 23:01:14.598 7fb97b4158c0  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/test-0/block size 11 GiB
    -3> 2018-09-04 23:01:14.598 7fb97b4158c0  1 bluefs mount
    -2> 2018-09-04 23:01:14.603 7fb97b4158c0 -1 /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.0-2808-gccbb825/rpm/el7/BUILD/ceph-14.0.0-2808-gccbb825/src/os/bluestore/bluefs_types.h: In function 'static void bluefs_fnode_t::_denc_finish(ceph::buffer::ptr::const_iterator&, __u8*, __u8*, char**, uint32_t*)' thread 7fb97b4158c0 time 2018-09-04 23:01:14.602067
/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.0-2808-gccbb825/rpm/el7/BUILD/ceph-14.0.0-2808-gccbb825/src/os/bluestore/bluefs_types.h: 57: FAILED ceph_assert(pos <= end)

 ceph version 14.0.0-2808-gccbb825 (ccbb82553b5f130a109835907d4e09cd6e9cd993) nautilus (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x150) [0x5592033f20df]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x5592033f2264]
 3: (bluefs_super_t::decode(ceph::buffer::list::iterator_impl<true>&)+0x78b) [0x559203a12c4b]
 4: (BlueFS::_open_super()+0xc6) [0x5592039f80e6]
 5: (BlueFS::mount()+0xa6) [0x559203a064c6]
 6: (BlueStore::_open_db(bool, bool)+0x15bc) [0x5592039218ec]
 7: (BlueStore::_mount(bool, bool)+0x459) [0x55920394db59]
 8: (OSD::init()+0x3e8) [0x559203526118]
 9: (main()+0x132d) [0x5592033f570d]
 10: (__libc_start_main()+0xf5) [0x7fb976e15c05]
 11: (()+0x5127e5) [0x5592034d97e5]

    -1> 2018-09-04 23:01:14.603 7fb97b4158c0 -1 *** Caught signal (Aborted) **
 in thread 7fb97b4158c0 thread_name:ceph-osd

 ceph version 14.0.0-2808-gccbb825 (ccbb82553b5f130a109835907d4e09cd6e9cd993) nautilus (dev)
 1: (()+0xf5e0) [0x7fb977e125e0]
 2: (gsignal()+0x37) [0x7fb976e291f7]
 3: (abort()+0x148) [0x7fb976e2a8e8]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a4) [0x5592033f2133]
 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x5592033f2264]
 6: (bluefs_super_t::decode(ceph::buffer::list::iterator_impl<true>&)+0x78b) [0x559203a12c4b]
 7: (BlueFS::_open_super()+0xc6) [0x5592039f80e6]
 8: (BlueFS::mount()+0xa6) [0x559203a064c6]
 9: (BlueStore::_open_db(bool, bool)+0x15bc) [0x5592039218ec]
 10: (BlueStore::_mount(bool, bool)+0x459) [0x55920394db59]
 11: (OSD::init()+0x3e8) [0x559203526118]
 12: (main()+0x132d) [0x5592033f570d]
 13: (__libc_start_main()+0xf5) [0x7fb976e15c05]
 14: (()+0x5127e5) [0x5592034d97e5]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 rgw_sync
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 kinetic
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/test-osd.0.log
--- end dump of recent events ---
Actions #7

Updated by Radoslaw Zarzynski over 5 years ago

The problem appeared during the decoding of the BlueFS superblock, deep in the low-level decoding bits.

void bluefs_super_t::decode(bufferlist::const_iterator& p)
{
  DECODE_START(1, p);
  decode(uuid, p);
  decode(osd_uuid, p);
  decode(version, p);
  decode(block_size, p);
  decode(log_fnode, p);
  DECODE_FINISH(p);
}
#define DENC_HELPERS                                                    \
  /* ... */                                                             \
  /* decode */                                                          \
  static void _denc_start(buffer::ptr::const_iterator& p,               \
                          __u8 *struct_v,                               \
                          __u8 *struct_compat,                          \
                          char **start_pos,                             \
                          uint32_t *struct_len) {                       \
    denc(*struct_v, p);                                                 \
    denc(*struct_compat, p);                                            \
    denc(*struct_len, p);                                               \
    *start_pos = const_cast<char*>(p.get_pos());                        \
  }                                                                     \
  static void _denc_finish(buffer::ptr::const_iterator& p,              \
                           __u8 *struct_v, __u8 *struct_compat,         \
                           char **start_pos,                            \
                           uint32_t *struct_len) {                      \
    const char *pos = p.get_pos();                                      \
    char *end = *start_pos + *struct_len;                               \
    ceph_assert(pos <= end);                                            \
    if (pos < end) {                                                    \
      p.advance(end - pos);                                             \
    }                                                                   \
  }

// Helpers for versioning the encoding.  These correspond to the
// {ENCODE,DECODE}_{START,FINISH} macros.

#define DENC_START(v, compat, p)                                        \
  __u8 struct_v = v;                                                    \
  __u8 struct_compat = compat;                                          \
  char *_denc_pchar;                                                    \
  uint32_t _denc_u32;                                                   \
  _denc_start(p, &struct_v, &struct_compat, &_denc_pchar, &_denc_u32);  \
  do {

#define DENC_FINISH(p)                                                  \
  } while (false);                                                      \
  _denc_finish(p, &struct_v, &struct_compat, &_denc_pchar, &_denc_u32);

The likely reason is that struct_len (obtained from the on-disk data) is smaller than the amount of data consumed in bluefs_super_t::decode, leading to the failure of ceph_assert(pos <= end). Why this happened is an open question.
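
To make the failure mode concrete, here is a minimal standalone sketch (illustrative only, not the actual Ceph denc code) of the length bookkeeping above. It shows how a struct_len smaller than what the decoder actually consumes, e.g. a fully zeroed block where struct_len == 0, trips the pos <= end assertion:

#include <cassert>
#include <cstdint>
#include <cstring>

struct Cursor {
  const char* pos;
};

// Analogue of _denc_start: read struct_v, struct_compat and struct_len,
// then remember where the payload begins.
static const char* denc_start(Cursor& c, uint32_t* struct_len) {
  c.pos += 2;                           // struct_v + struct_compat
  std::memcpy(struct_len, c.pos, 4);    // on-disk payload length
  c.pos += 4;
  return c.pos;                         // start_pos of the payload
}

// Analogue of _denc_finish: verify we did not decode past the declared end.
static void denc_finish(Cursor& c, const char* start_pos, uint32_t struct_len) {
  const char* end = start_pos + struct_len;
  assert(c.pos <= end);                 // the failing ceph_assert
  if (c.pos < end)
    c.pos = end;                        // skip fields added by newer versions
}

int main() {
  char zeroed[4096] = {};               // a fully zeroed on-disk block:
  Cursor c{zeroed};                     // struct_v = compat = struct_len = 0

  uint32_t struct_len = 0;
  const char* start = denc_start(c, &struct_len);

  c.pos += 16;                          // decode(uuid, p) consumes 16 bytes
  denc_finish(c, start, struct_len);    // pos > start + 0 -> assert fires
}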

Alfredo, has the OSD seen any RADOS workload, or did it fail right after deployment?

Actions #8

Updated by Igor Fedotov over 5 years ago

Just a thought: maybe the superblock has been overwritten by the parallel deployment or something?

Actions #9

Updated by Radoslaw Zarzynski over 5 years ago

  • Related to Bug #18389: crash when opening bluefs superblock added
Actions #10

Updated by Alfredo Deza over 5 years ago

I was able to narrow down this problem: the OSD was deployed with its block.db on a different device that didn't come up after a reboot. The OSD tried to start without the block.db link, and crashed.

Once the block.db device was made available, the OSD started without a problem.

This doesn't require any workload; it can easily be reproduced by creating an OSD that has its block.db on a different device, then stopping the OSD, removing the block.db link, and starting the OSD again.

Actions #11

Updated by Radoslaw Zarzynski over 5 years ago

It looks like BlueStore did the mkfs stage with the block.db on a separate device, which then disappeared.

int BlueStore::_open_db(bool create, bool to_repair_db)
{
  // ...

  if (do_bluefs) {
    // ...

    bfn = path + "/block.db";
    if (::stat(bfn.c_str(), &st) == 0) {
      r = bluefs->add_block_device(BlueFS::BDEV_DB, bfn,
            create && cct->_conf->bdev_enable_discard);

      // ...

      if (create) {
        bluefs->add_block_extent(
          BlueFS::BDEV_DB,
          SUPER_RESERVED,
          bluefs->get_block_device_size(BlueFS::BDEV_DB) - SUPER_RESERVED);
      }
      bluefs_shared_bdev = BlueFS::BDEV_SLOW;
      bluefs_single_shared_device = false;
    } else {
      r = -errno;
      if (::lstat(bfn.c_str(), &st) == -1) {
        r = 0;
        bluefs_shared_bdev = BlueFS::BDEV_DB;
      } else {
        derr << __func__ << " " << bfn << " symlink exists but target unusable: " 
             << cpp_strerror(r) << dendl;
        goto free_bluefs;
      }
    }

    // shared device
    bfn = path + "/block";
    // never trim here
    r = bluefs->add_block_device(bluefs_shared_bdev, bfn, false);
    // ...

    if (create) {
      // note: we always leave the first SUPER_RESERVED (8k) of the device unused
      // ...
      bluefs->add_block_extent(bluefs_shared_bdev, start, initial);
      bluefs_extents.insert(start, initial);
    }

The second 4 KiB block on the main device, which stores the superblock, was, not surprisingly, zeroed (thanks, Alfredo, for the dump):

00000000  62 6c 75 65 73 74 6f 72  65 20 62 6c 6f 63 6b 20  |bluestore block |
00000010  64 65 76 69 63 65 0a 37  35 32 37 65 36 36 37 2d  |device.7527e667-|
00000020  32 62 37 33 2d 34 32 61  36 2d 61 30 30 35 2d 33  |2b73-42a6-a005-3|
00000030  36 30 37 30 64 32 31 34  34 61 66 0a 02 01 16 01  |6070d2144af.....|
00000040  00 00 75 27 e6 67 2b 73  42 a6 a0 05 36 07 0d 21  |..u'.g+sB...6..!|
00000050  44 af 00 00 40 af 02 00  00 00 10 fa 8f 5b 1c 1d  |D...@........[..|
00000060  83 22 04 00 00 00 6d 61  69 6e 08 00 00 00 06 00  |."....main......|
00000070  00 00 62 6c 75 65 66 73  01 00 00 00 31 09 00 00  |..bluefs....1...|
00000080  00 63 65 70 68 5f 66 73  69 64 24 00 00 00 64 65  |.ceph_fsid$...de|
00000090  36 34 39 31 33 34 2d 66  65 38 30 2d 34 33 64 33  |649134-fe80-43d3|
000000a0  2d 62 39 35 32 2d 36 32  61 63 61 37 65 39 34 39  |-b952-62aca7e949|
000000b0  66 62 0a 00 00 00 6b 76  5f 62 61 63 6b 65 6e 64  |fb....kv_backend|
000000c0  07 00 00 00 72 6f 63 6b  73 64 62 05 00 00 00 6d  |....rocksdb....m|
000000d0  61 67 69 63 14 00 00 00  63 65 70 68 20 6f 73 64  |agic....ceph osd|
000000e0  20 76 6f 6c 75 6d 65 20  76 30 32 36 09 00 00 00  | volume v026....|
000000f0  6d 6b 66 73 5f 64 6f 6e  65 03 00 00 00 79 65 73  |mkfs_done....yes|
00000100  07 00 00 00 6f 73 64 5f  6b 65 79 28 00 00 00 41  |....osd_key(...A|
00000110  51 44 2f 2b 59 39 62 44  6d 63 65 4a 42 41 41 78  |QD/+Y9bDmceJBAAx|
00000120  6e 63 4a 68 76 6b 65 32  37 31 77 50 59 70 4e 69  |ncJhvke271wPYpNi|
00000130  6d 56 4f 6d 51 3d 3d 05  00 00 00 72 65 61 64 79  |mVOmQ==....ready|
00000140  05 00 00 00 72 65 61 64  79 06 00 00 00 77 68 6f  |....ready....who|
00000150  61 6d 69 01 00 00 00 30  c8 a0 aa 51 00 00 00 00  |ami....0...Q....|
00000160  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00002000

This would mean that struct_len == 0, which explains the assert. Why the block.db symlink disappeared is still unknown. However, because of that, the fuse in BlueStore::_open_db failed to trigger:

      r = -errno;
      if (::lstat(bfn.c_str(), &st) == -1) {
        r = 0;
        bluefs_shared_bdev = BlueFS::BDEV_DB;
      } else {
        derr << __func__ << " " << bfn << " symlink exists but target unusable: " 
             << cpp_strerror(r) << dendl;
        goto free_bluefs;
      }
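
For illustration, a small standalone sketch (the path and messages are hypothetical) of why this check acts as a fuse: ::stat() follows symlinks while ::lstat() does not, so a missing symlink is silently treated as "no external block.db", and only a dangling symlink reaches the error branch:

#include <sys/stat.h>
#include <cstdio>

int main() {
  // Hypothetical path, mirroring the layout from the logs above.
  const char* bfn = "/var/lib/ceph/osd/ceph-1/block.db";
  struct stat st;

  if (::stat(bfn, &st) == 0) {
    // stat() follows the symlink: it exists and the target is usable.
    std::puts("external block.db found -> add_block_device(BDEV_DB, ...)");
  } else if (::lstat(bfn, &st) == -1) {
    // Not even a symlink (Alfredo's case after the reboot): BlueStore
    // silently assumes a single shared device, then trips over the
    // zeroed superblock region on the main device.
    std::puts("no symlink -> treated as 'no external block.db'");
  } else {
    // Dangling symlink: only this state reaches the error branch.
    std::puts("symlink exists but target unusable");
  }
  return 0;
}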

Thanks for your help, Alfredo!

Actions #12

Updated by Sage Weil over 5 years ago

We discussed this in standup. I think there are 2 things going on here:

1. The symlink disappeared in Alfredo's test. This is probably an issue with the test or the test environment, since ceph-volume should always create the symlink if the device is found.

2. BlueStore relies on the presence of the symlink to decide whether an external block.db is present/needed. There should be an explicit check in BlueStore for either a valid BlueFS superblock (in the second block) or a fully zeroed second block (indicating that an external block.db is required). This (or something similar) would give us an explicit indication, on the main block device itself, of whether an external block.db is required.
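
As a rough illustration of that idea (a sketch under stated assumptions, not the actual fix that eventually landed): such a check could read the superblock region from the main device and classify it before handing it to the decoder. SuperState and classify_super are hypothetical names; the layout assumption (4-byte struct_len after the 1-byte struct_v and struct_compat) follows the DENC excerpt in comment #7:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>

enum class SuperState { Valid, Zeroed, Garbage };

// Classify the (assumed 4 KiB) BlueFS superblock region read from the
// second block of the main device.
SuperState classify_super(const char* buf, std::size_t len) {
  // A fully zeroed second block would mean mkfs put the BlueFS superblock
  // on an external block.db, i.e. one is required and must be present.
  if (std::all_of(buf, buf + len, [](char b) { return b == 0; }))
    return SuperState::Zeroed;

  // Otherwise insist on a plausible superblock before decoding, instead
  // of asserting deep inside the denc machinery: the encoded struct_len
  // must fit within the block.
  uint32_t struct_len = 0;
  std::memcpy(&struct_len, buf + 2, sizeof(struct_len));
  if (struct_len != 0 && struct_len <= len - 6)
    return SuperState::Valid;   // safe to hand to bluefs_super_t::decode

  return SuperState::Garbage;   // refuse to mount, with a clear error
}

With something like this, mount could fail with "external block.db required but missing" on Zeroed and with a corruption error on Garbage, so only Valid would ever reach bluefs_super_t::decode.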

Actions #13

Updated by Radoslaw Zarzynski over 5 years ago

  • Status changed from 12 to In Progress

Adding the check to BlueStore.
IIUC, ceph-volume started creating the OSD's directory dynamically, and this also applies to the symlink we use as a fuse. If so, the link might be absent because of e.g. a hardware failure. We might also want to verify that there is no race between starting the OSD and creating the symlink.

Actions #14

Updated by Sage Weil over 5 years ago

  • Assignee set to Radoslaw Zarzynski
Actions #15

Updated by Radoslaw Zarzynski about 5 years ago

  • Severity changed from 2 - major to 3 - minor

This is about the OSD's behavior in a malfunctioning environment (a disappearing block.db symlink). We still need to terminate the OSD, but with a more meaningful message.

Actions #16

Updated by Sage Weil about 5 years ago

Current status:

We want a more concrete source of truth for whether the db and/or wal partitions should exist: something other than the presence or absence of the symlinks.

Actions #17

Updated by Radoslaw Zarzynski about 5 years ago

  • Status changed from In Progress to Fix Under Review
Actions #19

Updated by Kefu Chai over 4 years ago

  • Status changed from Fix Under Review to Resolved
  • Pull request ID set to 30109