Bug #22535


OSD crashes with FAILED assert(used_blocks.size() > count) during the first start after upgrade 12.2.1 -> 12.2.2

Added by Vladimir Drobyshevskiy over 6 years ago. Updated about 6 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
luminous
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hello!

I'm trying to upgrade a Ceph Luminous cluster from 12.2.1 to 12.2.2. The mon/mds/mgr restarts went flawlessly, but after the first OSD restart I ran into a problem.

I have "bluestore fsck on mount = true" set, so the OSD initiates an fsck on start. It takes a long time and then crashes with:

@
2017-12-24 18:54:07.158729 7fa6baf27e00 1 bluefs fsck
2017-12-24 18:54:07.158744 7fa6baf27e00 1 bluestore(/var/lib/ceph/osd/ceph-0) _fsck walking object keyspace
2017-12-24 19:40:08.939488 7fa6baf27e00 1 bluestore(/var/lib/ceph/osd/ceph-0) _fsck checking shared_blobs
2017-12-24 19:40:08.940645 7fa6baf27e00 1 bluestore(/var/lib/ceph/osd/ceph-0) _fsck checking for stray omap data
2017-12-24 19:40:14.739627 7fa6baf27e00 1 bluestore(/var/lib/ceph/osd/ceph-0) _fsck checking deferred events
2017-12-24 19:40:14.744003 7fa6baf27e00 1 bluestore(/var/lib/ceph/osd/ceph-0) _fsck checking freelist vs allocated
2017-12-24 19:40:26.217813 7fa6baf27e00 -1 /build/ceph-12.2.2/src/os/bluestore/BlueStore.cc: In function 'int BlueStore::_fsck(bool, bool)' thread 7fa6baf27e00 time 2017-12-24 19:40:26.215431
/build/ceph-12.2.2/src/os/bluestore/BlueStore.cc: 6122: FAILED assert(used_blocks.size() > count)

ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x55e7f9f7e892]
2: (BlueStore::_fsck(bool, bool)+0x7e3c) [0x55e7f9e41c8c]
3: (BlueStore::_mount(bool)+0x20a) [0x55e7f9e4284a]
4: (OSD::init()+0x3e2) [0x55e7f99ae4e2]
5: (main()+0x2f07) [0x55e7f98c01d7]
6: (__libc_start_main()+0xf0) [0x7fa6b838f830]
7: (_start()+0x29) [0x55e7f994b7f9]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
@

The full ceph-osd.log from "systemctl restart ceph-osd@x" is attached. I believe it may be connected with bug #22464.

If I disable fsck, then the OSD starts normally.
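For reference, the option mentioned above is set in ceph.conf; a minimal sketch (the `[osd]` section placement is the usual convention, and setting it to false is what "disable fsck" amounts to here):

```ini
[osd]
# run a BlueStore consistency check every time the OSD mounts its store;
# set to false to skip the check (as done above to let the OSD start)
bluestore fsck on mount = true
```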

I have stopped the upgrade process until I know my data is safe. What should I do in this situation?

Thanks!

It's a small all-flash cluster running Ubuntu 16.04, kernel 4.10.0-38.

@
root@e001n02:/var/log# ceph -s
cluster:
id: 6a8ffc55-fa2e-48dc-a71c-647e1fff749b
health: HEALTH_WARN
too many PGs per OSD (252 > max 200)
mon e001n01 is low on available space

services:
mon: 3 daemons, quorum e001n01,e001n02,e001n03
mgr: e001n01(active), standbys: e001n02
mds: onefs-1/1/1 up {0=e001n03=up:active}, 2 up:standby
osd: 4 osds: 4 up, 4 in
data:
pools: 3 pools, 336 pgs
objects: 383k objects, 1497 GB
usage: 4306 GB used, 2421 GB / 6728 GB avail
pgs: 336 active+clean
@

Pools use snappy compression.


Files

ceph-osd.0.log (221 KB) Vladimir Drobyshevskiy, 12/24/2017 06:24 PM

Related issues: 1 (0 open, 1 closed)

Copied to bluestore - Backport #22633: luminous: OSD crashes with FAILED assert(used_blocks.size() > count) during the first start after upgrade 12.2.1 -> 12.2.2 (Resolved, Prashant D)
Actions #1

Updated by Igor Fedotov over 6 years ago

  • Status changed from New to Need More Info

Could you please set the bluestore debug level to 10 and provide a log of the reproduced issue?

Actions #2

Updated by Vladimir Drobyshevskiy over 6 years ago

Igor Fedotov wrote:

Could you please set the bluestore debug level to 10 and provide a log of the reproduced issue?

Sure. Here is a link to log file: https://cloud.mail.ru/public/BQEo/vsVhQqL8A

Please be aware it is a 47 MB bz2 archive and around 780 MB raw.

Actions #3

Updated by Igor Fedotov over 6 years ago

Submitted https://github.com/ceph/ceph/pull/19718 to fix that in master.
IMO the root cause is an out-of-range access to fsck's used_blocks bitmap. It looks like legacy stores could initialize the freelist manager with the device's block size (=4K), while fsck in v12.2.2+ uses min_alloc_size (=16K) to estimate the number of used blocks. Additionally, the device size isn't aligned to that new min_alloc_size. For an unknown reason, boost's dynamic_bitset doesn't properly handle out-of-range access, resulting in that odd assert later when size < count.

Actions #4

Updated by Sage Weil over 6 years ago

  • Backport set to luminous
Actions #5

Updated by Sage Weil over 6 years ago

  • Priority changed from Normal to Urgent
Actions #6

Updated by Kefu Chai over 6 years ago

  • Status changed from Need More Info to Pending Backport
  • Target version deleted (v12.2.2)
Actions #7

Updated by Nathan Cutler over 6 years ago

  • Copied to Backport #22633: luminous: OSD crashes with FAILED assert(used_blocks.size() > count) during the first start after upgrade 12.2.1 -> 12.2.2 added
Actions #8

Updated by Nathan Cutler about 6 years ago

  • Status changed from Pending Backport to Resolved