Bug #50346

OSD crash FAILED ceph_assert(!is_scrubbing())

Added by 玮文 胡 about 3 years ago. Updated almost 3 years ago.

Status: Resolved
Priority: Urgent
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport: pacific
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When I saw the PG_NOT_SCRUBBED warning, I set the OSD flag "nodeep-scrub", set the config option osd_max_scrubs to 2, and ran:

for pg in $(ceph health detail | awk '{print $2}' | tail -n +3); do ceph pg scrub $pg; done
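
The preceding steps correspond roughly to the following commands (a sketch assuming the standard Ceph CLI; the exact invocations were not included in the report):

# skip deep scrubs cluster-wide (the "nodeep-scrub" flag mentioned above)
ceph osd set nodeep-scrub
# allow each OSD to run two concurrent scrubs (the default is 1)
ceph config set osd osd_max_scrubs 2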

I intended to accelerate scrubbing to resolve this warning. After some minutes, one OSD crashed. "ceph crash info" shows:

{
    "assert_condition": "!is_scrubbing()",
    "assert_file": "/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.0/rpm/el8/BUILD/ceph-16.2.0/src/osd/PG.cc",
    "assert_func": "bool PG::sched_scrub()",
    "assert_line": 1339,
    "assert_msg": "/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.0/rpm/el8/BUILD/ceph-16.2.0/src/osd/PG.cc: In function 'bool PG::sched_scrub()' thread 7fa63b19c700 time 2021-04-14T06:50:16.690936+0000\n/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.0/rpm/el8/BUILD/ceph-16.2.0/src/osd/PG.cc: 1339: FAILED ceph_assert(!is_scrubbing())\n",
    "assert_thread_name": "safe_timer",
    "backtrace": [
        "/lib64/libpthread.so.0(+0x12b20) [0x7fa644996b20]",
        "gsignal()",
        "abort()",
        "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5641acec9d49]",
        "/usr/bin/ceph-osd(+0x568f12) [0x5641acec9f12]",
        "(PG::sched_scrub()+0x561) [0x5641ad07a201]",
        "(OSD::sched_scrub()+0x8e3) [0x5641acfc4063]",
        "(OSD::tick_without_osd_lock()+0x678) [0x5641acfd59d8]",
        "(Context::complete(int)+0xd) [0x5641ad00917d]",
        "(SafeTimer::timer_thread()+0x1b7) [0x5641ad64e807]",
        "(SafeTimerThread::entry()+0x11) [0x5641ad64fde1]",
        "/lib64/libpthread.so.0(+0x814a) [0x7fa64498c14a]",
        "clone()" 
    ],
    "ceph_version": "16.2.0",
    "crash_id": "2021-04-14T06:50:16.721313Z_ba82e6bc-e025-4c14-9431-8522393cb79d",
    "entity_name": "osd.6",
    "os_id": "centos",
    "os_name": "CentOS Linux",
    "os_version": "8",
    "os_version_id": "8",
    "process_name": "ceph-osd",
    "stack_sig": "dc53b29bcd5e6e90adf9cd40bff50b2b558b52cc78ef2c401896ad21b883bfa5",
    "timestamp": "2021-04-14T06:50:16.721313Z",
    "utsname_hostname": "gpu014",
    "utsname_machine": "x86_64",
    "utsname_release": "5.4.0-56-generic",
    "utsname_sysname": "Linux",
    "utsname_version": "#62-Ubuntu SMP Mon Nov 23 19:20:19 UTC 2020" 
}

The OSD was then restarted automatically and seems OK now.
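
For reference, the report above comes from the built-in crash module; the crash can be listed and retrieved with, e.g.:

ceph crash ls    # list recorded crashes and their IDs
ceph crash info 2021-04-14T06:50:16.721313Z_ba82e6bc-e025-4c14-9431-8522393cb79d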


Files

osd.6.log.gz (146 KB) - 玮文 胡, 04/14/2021 07:16 AM
pg (201 KB) - Neha Ojha, 06/24/2021 10:15 PM

Related issues: 1 (0 open, 1 closed)

Copied to RADOS - Backport #51371: pacific: OSD crash FAILED ceph_assert(!is_scrubbing()) (Resolved, assignee: Ronen Friedman)
#1

Updated by 玮文 胡 about 3 years ago

#2

Updated by Neha Ojha about 3 years ago

  • Assignee set to Ronen Friedman

Ronen, can you please take a look at this bug.

#3

Updated by Ronen Friedman almost 3 years ago

We are using two distinct conditions to decide whether a candidate PG is already being scrubbed: the OSD checks pg->is_scrub_active(), while the PG asserts on the PG state flag. There is a time window in which the state flag is already set but is_scrub_active() is not yet true. is_reserving() covers most of that window, but the ceph_assert fires just before the is_reserving() check.

A clean refactoring of this code path for Q is almost ready. For v16, I'll try to recreate the condition and provide a minimal fix.

#4

Updated by 玮文 胡 almost 3 years ago

I have encountered this 4 more times on a 20-OSD cluster now running 16.2.3. If needed, I can provide more info.

#5

Updated by Andrej Filipcic almost 3 years ago

On a 60-node, 1500-HDD cluster running the 16.2.4 release, this issue became very frequent, especially when RBD writes exceed ~5 GB/s, which is most of the time. There are ~5-10 OSD crashes per hour. I had to disable scrub and deep-scrub to keep the cluster healthy. CephFS does not seem to create as much pressure as RBD. Are there any workarounds?
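
Disabling scrub and deep-scrub cluster-wide corresponds to setting the noscrub and nodeep-scrub OSD flags, e.g.:

ceph osd set noscrub
ceph osd set nodeep-scrub
# and to re-enable later, once a fixed release is running:
ceph osd unset noscrub
ceph osd unset nodeep-scrub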

#6

Updated by Andrej Filipcic almost 3 years ago

A related crash happened when I disabled scrubbing:

-1> 2021-06-14T11:17:15.373+0200 7fb9916f5700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.4/rpm/el8/BUILD/ceph-16.2.4/src/osd/scrub_machine.cc: In function 'void Scrub::ScrubMachine::assert_not_active() const' thread 7fb9916f5700 time 2021-06-14T11:17:15.364312+0200
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.4/rpm/el8/BUILD/ceph-16.2.4/src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_cast<const NotActive*>())
ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x55f12c369064]
2: /usr/bin/ceph-osd(+0x56927e) [0x55f12c36927e]
3: /usr/bin/ceph-osd(+0x9df81f) [0x55f12c7df81f]
4: (PgScrubber::replica_scrub_op(boost::intrusive_ptr<OpRequest>)+0x4bf) [0x55f12c7cfc0f]
5: (PG::replica_scrub(boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x62) [0x55f12c51eb22]
6: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x7bb) [0x55f12c5e3e0b]
7: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x309) [0x55f12c46d649]
8: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x68) [0x55f12c6ca808]
9: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xa58) [0x55f12c48d618]
10: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0x55f12caf5794]
11: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0x55f12caf8434]
12: /lib64/libpthread.so.0(+0x814a) [0x7fb9b4f0514a]
13: clone()
0> 2021-06-14T11:17:15.377+0200 7fb9916f5700 -1 *** Caught signal (Aborted) **
in thread 7fb9916f5700 thread_name:tp_osd_tp
ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable)
1: /lib64/libpthread.so.0(+0x12b20) [0x7fb9b4f0fb20]
2: gsignal()
3: abort()
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x55f12c3690b5]
5: /usr/bin/ceph-osd(+0x56927e) [0x55f12c36927e]
6: /usr/bin/ceph-osd(+0x9df81f) [0x55f12c7df81f]
7: (PgScrubber::replica_scrub_op(boost::intrusive_ptr<OpRequest>)+0x4bf) [0x55f12c7cfc0f]
8: (PG::replica_scrub(boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x62) [0x55f12c51eb22]
9: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x7bb) [0x55f12c5e3e0b]
10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x309) [0x55f12c46d649]
11: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x68) [0x55f12c6ca808]
12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xa58) [0x55f12c48d618]
13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0x55f12caf5794]
14: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0x55f12caf8434]
15: /lib64/libpthread.so.0(+0x814a) [0x7fb9b4f0514a]
16: clone()
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

and the OSD cannot recover any more:
2021-06-14T11:17:51.960+0200 7f7bf3ed3080 1 bdev(0x5591c05ea400 /var/lib/ceph/osd/ceph-168/block) open size 16000896466944 (0xe8d7fc00000, 15 TiB) block_size 4096 (4 KiB) rotational discard not supported
2021-06-14T11:17:51.960+0200 7f7bf3ed3080 1 bluestore(/var/lib/ceph/osd/ceph-168) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
2021-06-14T11:17:51.960+0200 7f7bf3ed3080 1 bdev(0x5591c05ea800 /var/lib/ceph/osd/ceph-168/block) open path /var/lib/ceph/osd/ceph-168/block
2021-06-14T11:17:51.960+0200 7f7bf3ed3080 1 bdev(0x5591c05ea800 /var/lib/ceph/osd/ceph-168/block) open size 16000896466944 (0xe8d7fc00000, 15 TiB) block_size 4096 (4 KiB) rotational discard not supported
2021-06-14T11:17:51.960+0200 7f7bf3ed3080 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-168/block size 15 TiB
2021-06-14T11:17:51.960+0200 7f7bf3ed3080 1 bluefs mount
2021-06-14T11:17:51.961+0200 7f7bf3ed3080 1 bluefs _init_alloc shared, id 1, capacity 0xe8d7fc00000, block size 0x10000
2021-06-14T11:17:51.983+0200 7f7bf3ed3080 1 bluefs mount shared_bdev_used = 0
2021-06-14T11:17:51.983+0200 7f7bf3ed3080 -1 bluestore(/var/lib/ceph/osd/ceph-168) _open_db erroring opening db:
2021-06-14T11:17:51.983+0200 7f7bf3ed3080 1 bluefs umount
2021-06-14T11:17:51.983+0200 7f7bf3ed3080 1 bdev(0x5591c05ea800 /var/lib/ceph/osd/ceph-168/block) close
2021-06-14T11:17:52.235+0200 7f7bf3ed3080 1 bdev(0x5591c05ea400 /var/lib/ceph/osd/ceph-168/block) close
2021-06-14T11:17:52.489+0200 7f7bf3ed3080 -1 osd.168 0 OSD:init: unable to mount object store

#7

Updated by Neha Ojha almost 3 years ago

  • Priority changed from Normal to Urgent
#8

Updated by Ronen Friedman almost 3 years ago

  • Status changed from New to In Progress
  • Component(RADOS) OSD added
#9

Updated by 玮文 胡 almost 3 years ago

Andrej Filipcic wrote:

On a 60-node, 1500-HDD cluster running the 16.2.4 release, this issue became very frequent, especially when RBD writes exceed ~5 GB/s, which is most of the time. There are ~5-10 OSD crashes per hour. I had to disable scrub and deep-scrub to keep the cluster healthy. CephFS does not seem to create as much pressure as RBD. Are there any workarounds?

Setting osd_max_scrubs to 1 seems to do the trick. I set it to 1 and the OSDs haven't crashed since. We are running primarily CephFS.
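
This workaround amounts to reverting osd_max_scrubs to its default of 1, e.g. (assuming the cluster uses the monitors' centralized config; the "ceph tell" form pushes the value to already-running OSDs without a restart):

ceph config set osd osd_max_scrubs 1
ceph tell osd.* config set osd_max_scrubs 1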

#10

Updated by Ronen Friedman almost 3 years ago

  • Status changed from In Progress to Fix Under Review
  • Pull request ID set to 41944
#11

Updated by Neha Ojha almost 3 years ago

  • Backport set to pacific
  • Pull request ID changed from 41944 to 41993
#12

Updated by Neha Ojha almost 3 years ago

Andrej Filipcic wrote:

A related crash happened when I disabled scrubbing: [...]

and the OSD cannot recover any more: [...]

I have created https://tracker.ceph.com/issues/51338 for this. If you have any relevant logs, please attach them to the tracker.

#13

Updated by Neha Ojha almost 3 years ago

#14

Updated by Neha Ojha almost 3 years ago

  • Status changed from Fix Under Review to Pending Backport
#15

Updated by Backport Bot almost 3 years ago

  • Copied to Backport #51371: pacific: OSD crash FAILED ceph_assert(!is_scrubbing()) added
#17

Updated by Loïc Dachary almost 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
