Bug #49427

closed

FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT))

Added by Brad Hubbard about 3 years ago. Updated about 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/bhubbard-2021-02-22_23:51:15-rados-master-distro-basic-smithi/5904732

rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default
thrashosds-health} mon_election/connectivity msgr/async-v2only objectstore/bluestore-bitmap
rados tasks/rados_api_tests validater/lockdep}

2021-02-23T00:40:55.023+0000 7f653a802700 20 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] finish_ctx 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head 0x5610a457b600 op modify
2021-02-23T00:40:55.025+0000 7f653e009700 20 osd.0 op_wq(1) _process 60.9 to_process <> waiting <> waiting_peering {}
2021-02-23T00:40:55.025+0000 7f653e009700 20 osd.0 op_wq(1) _process OpSchedulerItem(60.9 PGRecovery(pgid=60.9 epoch_queued=416 reserved_pushes=1) prio 5 cost 20971520 e416 reserved_pushes 1) queued
2021-02-23T00:40:55.025+0000 7f653e009700 20 osd.0 op_wq(1) _process 60.9 to_process <OpSchedulerItem(60.9 PGRecovery(pgid=60.9 epoch_queued=416 reserved_pushes=1) prio 5 cost 20971520 e416 reserved_pushes 1)> waiting <> waiting_peering {}
2021-02-23T00:40:55.025+0000 7f653e009700 20 osd.0 op_wq(1) _process OpSchedulerItem(60.9 PGRecovery(pgid=60.9 epoch_queued=416 reserved_pushes=1) prio 5 cost 20971520 e416 reserved_pushes 1) pg 0x5610a307c000
2021-02-23T00:40:55.025+0000 7f653e009700 10 osd.0 416 do_recovery starting 1 pg[60.9( v 284'52 lc 81'27 (0'0,284'52] local-lis/les=414/415 n=4 ec=48/48 lis/c=414/409 les/c/f=415/412/0 sis=414) [0,7] r=0 lpr=414 pi=[409,414)/1 crt=284'52 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=16 mbc={255={(1+1)=4}}]
2021-02-23T00:40:55.025+0000 7f653e009700 10 osd.0 pg_epoch: 416 pg[60.9( v 284'52 lc 81'27 (0'0,284'52] local-lis/les=414/415 n=4 ec=48/48 lis/c=414/409 les/c/f=415/412/0 sis=414) [0,7] r=0 lpr=414 pi=[409,414)/1 crt=284'52 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=16 mbc={255={(1+1)=4}}] recovery raced and were queued twice, ignoring!
2021-02-23T00:40:55.025+0000 7f653e009700 10 osd.0 416 do_recovery started 0/1 on pg[60.9( v 284'52 lc 81'27 (0'0,284'52] local-lis/les=414/415 n=4 ec=48/48 lis/c=414/409 les/c/f=415/412/0 sis=414) [0,7] r=0 lpr=414 pi=[409,414)/1 crt=284'52 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=16 mbc={255={(1+1)=4}}]
2021-02-23T00:40:55.025+0000 7f653e009700 10 osd.0 416 release_reserved_pushes(1), recovery_ops_reserved 3 -> 2
2021-02-23T00:40:55.025+0000 7f653a001700 10 osd.0 op_wq(1) _process dequeue future request at 2021-02-23T00:40:55.074352+0000
2021-02-23T00:40:55.027+0000 7f653a802700 -1 /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-1024-ga3591378/rpm/el8/BUILD/ceph-17.0.0-1024-ga3591378/src/osd/PrimaryLogPG.cc: In function 'ObjectContextRef PrimaryLogPG::get_object_context(const hobject_t&, bool, const std::map<std::__cxx11::basic_string<char>, ceph::buffer::v15_2_0::list>*)' thread 7f653a802700 time 2021-02-23T00:40:55.024443+0000
/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-1024-ga3591378/rpm/el8/BUILD/ceph-17.0.0-1024-ga3591378/src/osd/PrimaryLogPG.cc: 11423: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT))

 ceph version 17.0.0-1024-ga3591378 (a3591378a59c621ac597987facb3fb30707c218f) quincy (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x56108a9df99e]
 2: ceph-osd(+0x568bb8) [0x56108a9dfbb8]
 3: (PrimaryLogPG::get_object_context(hobject_t const&, bool, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, ceph::buffer::v15_2_0::list, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, ceph::buffer::v15_2_0::list> > > const*)+0x6ff) [0x56108abf6b5f]
 4: (PrimaryLogPG::get_prev_clone_obc(std::shared_ptr<ObjectContext>)+0xc1) [0x56108abf76c1]
 5: (PrimaryLogPG::dec_refcount_by_dirty(PrimaryLogPG::OpContext*)+0x12b) [0x56108abf788b]
 6: (PrimaryLogPG::finish_ctx(PrimaryLogPG::OpContext*, int, int)+0x1327) [0x56108abf9ff7]
 7: (PrimaryLogPG::prepare_transaction(PrimaryLogPG::OpContext*)+0x25c) [0x56108ac4805c]
 8: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x326) [0x56108ac4a066]
 9: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x2d50) [0x56108ac53980]
 10: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xd1c) [0x56108ac5acdc]
 11: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x309) [0x56108aae4ac9]
 12: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x68) [0x56108ad40f38]
 13: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xa58) [0x56108ab04038]
 14: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0x56108b168d64]
 15: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0x56108b16ba04]
 16: (Thread::_entry_func(void*)+0xd) [0x56108b15a86d]
 17: /lib64/libpthread.so.0(+0x82de) [0x7f6561e4a2de]
 18: clone()
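For readers unfamiliar with the assert, its three-way condition can be modeled roughly as follows. This is a simplified sketch with hypothetical types and names (PgLogModel, obc_build_is_legal), not the real PrimaryLogPG code: building an object context without supplied attrs is only tolerated when the object is not in the PG's missing set, or when its newest log entry is a LOST_REVERT.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Hypothetical, simplified model of the predicate behind the failed
// ceph_assert in PrimaryLogPG::get_object_context().
enum class LogOp { MODIFY, LOST_REVERT };

struct PgLogModel {
  std::set<std::string> missing;          // objects pending recovery
  std::map<std::string, LogOp> log_tail;  // newest log entry per object
};

bool obc_build_is_legal(const PgLogModel& log,
                        const std::string& soid,
                        bool have_attrs) {
  if (have_attrs)
    return true;                          // attrs supplied by the caller
  if (!log.missing.count(soid))
    return true;                          // object is not missing locally
  auto it = log.log_tail.find(soid);
  return it != log.log_tail.end() &&
         it->second == LogOp::LOST_REVERT;  // the one tolerated special case
}
```

In this crash, the prior clone was in the missing set with no attrs and no LOST_REVERT entry, so all three disjuncts were false.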

Related issues (2): 0 open, 2 closed

Related to RADOS - Bug #50806: osd/PrimaryLogPG.cc: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Resolved - Myoungwon Oh

Copied to RADOS - Backport #49911: pacific: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Resolved
Actions #1

Updated by Brad Hubbard about 3 years ago

/a/bhubbard-2021-02-23_02:25:14-rados-master-distro-basic-smithi/5905669

Actions #2

Updated by Brad Hubbard about 3 years ago

  • Description updated (diff)
Actions #3

Updated by Neha Ojha about 3 years ago

dec_refcount_by_dirty is related to tiering/dedup, which was added fairly recently in https://github.com/ceph/ceph/pull/35899/files#diff-04d5a3324b706d64a3230221d2eb08f6aa66829f93ca151ebb71896fb743ada7R3336

2021-02-23T00:40:55.023+0000 7f653a802700 10 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 
active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] get_object_context: 0x5610a2fde2c0 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head rwstate(none n=0 w=0) oi: 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head(407'66
 client.5653.0:6404 dirty|manifest s 10 uv 65 alloc_hint [0 0 0] manifest(chunked {2=(len: 2 oid: 205:be9fd356:test-rados-api-smithi003-44693-70::602c57ffb51af99d6f3b54c0ee9587bb110fb990:0 offset: 0 flags: has_reference),6=(len: 2 oid:
 205:3d6586d2:test-rados-api-smithi003-44693-70::aab9b7bd17003da21256cca5c7d2d7ff4ea384b6:0 offset: 0 flags: has_reference),8=(len: 2 oid: 205:a6a1a418:test-rados-api-smithi003-44693-70::253420c1158bc6382093d409ce2e9cff5806e980:0 offse
t: 0 flags: has_reference)})) exists: 1 ssc: 0x5610a3f6ab00 snapset: 20=[]:{20=[20]}
2021-02-23T00:40:55.023+0000 7f653a802700 10 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 
active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] find_object_context 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head @head oi=60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head(407'66 client.5653.0:6404 dirty|manife
st s 10 uv 65 alloc_hint [0 0 0] manifest(chunked {2=(len: 2 oid: 205:be9fd356:test-rados-api-smithi003-44693-70::602c57ffb51af99d6f3b54c0ee9587bb110fb990:0 offset: 0 flags: has_reference),6=(len: 2 oid: 205:3d6586d2:test-rados-api-smi
thi003-44693-70::aab9b7bd17003da21256cca5c7d2d7ff4ea384b6:0 offset: 0 flags: has_reference),8=(len: 2 oid: 205:a6a1a418:test-rados-api-smithi003-44693-70::253420c1158bc6382093d409ce2e9cff5806e980:0 offset: 0 flags: has_reference)}))
2021-02-23T00:40:55.023+0000 7f653a802700 25 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 
active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] do_op oi 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head(407'66 client.5653.0:6404 dirty|manifest s 10 uv 65 alloc_hint [0 0 0] manifest(chunked {2=(len: 2 oid: 205:be9fd3
56:test-rados-api-smithi003-44693-70::602c57ffb51af99d6f3b54c0ee9587bb110fb990:0 offset: 0 flags: has_reference),6=(len: 2 oid: 205:3d6586d2:test-rados-api-smithi003-44693-70::aab9b7bd17003da21256cca5c7d2d7ff4ea384b6:0 offset: 0 flags:
 has_reference),8=(len: 2 oid: 205:a6a1a418:test-rados-api-smithi003-44693-70::253420c1158bc6382093d409ce2e9cff5806e980:0 offset: 0 flags: has_reference)}))
2021-02-23T00:40:55.023+0000 7f653a802700 20 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 
active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] do_op obc obc(60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head rwstate(write n=1 w=0))
2021-02-23T00:40:55.023+0000 7f653a802700 10 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] execute_ctx 0x5610a457b600
2021-02-23T00:40:55.023+0000 7f653a802700 10 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] execute_ctx 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head [writefull 0~10 in=10b] ov 407'66 av 416'67 snapc 20=[20] snapset 20=[]:{20=[20]}
2021-02-23T00:40:55.023+0000 7f653a802700 10 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] do_osd_op 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head [writefull 0~10 in=10b]
2021-02-23T00:40:55.023+0000 7f653a802700 10 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] do_osd_op  writefull 0~10 in=10b
2021-02-23T00:40:55.023+0000 7f653a802700 20 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] make_writeable 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head snapset=20=[]:{20=[20]}  snapc=20=[20]
2021-02-23T00:40:55.023+0000 7f653a802700 20 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] make_writeable 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head done, snapset=20=[]:{20=[20]}
2021-02-23T00:40:55.023+0000 7f653a802700 20 osd.0 pg_epoch: 416 pg[60.0( v 407'66 lc 81'27 (0'0,407'66] local-lis/les=414/415 n=7 ec=48/48 lis/c=414/409 les/c/f=415/411/0 sis=414) [0,6] r=0 lpr=414 pi=[409,414)/1 crt=407'66 mlcod 0'0 active+recovery_wait+degraded m=18 mbc={255={(1+1)=6}}] finish_ctx 60:005cdd9b:test-rados-api-smithi003-44693-70::foo:head 0x5610a457b600 op modify
.
.
2021-02-23T00:40:55.027+0000 7f653a802700 -1 /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-1024-ga3591378/rpm/el8/BUILD/ceph-17.0.0-1024-ga3591378/src/osd/PrimaryLogPG.cc: In function 'ObjectContextRef PrimaryLogPG::get_object_context(const hobject_t&, bool, const std::map<std::__cxx11::basic_string<char>, ceph::buffer::v15_2_0::list>*)' thread 7f653a802700 time 2021-02-23T00:40:55.024443+0000
/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-1024-ga3591378/rpm/el8/BUILD/ceph-17.0.0-1024-ga3591378/src/osd/PrimaryLogPG.cc: 11423: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT))

 ceph version 17.0.0-1024-ga3591378 (a3591378a59c621ac597987facb3fb30707c218f) quincy (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x56108a9df99e]
 2: ceph-osd(+0x568bb8) [0x56108a9dfbb8]
 3: (PrimaryLogPG::get_object_context(hobject_t const&, bool, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, ceph::buffer::v15_2_0::list, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, ceph::buffer::v15_2_0::list> > > const*)+0x6ff) [0x56108abf6b5f]
 4: (PrimaryLogPG::get_prev_clone_obc(std::shared_ptr<ObjectContext>)+0xc1) [0x56108abf76c1]
 5: (PrimaryLogPG::dec_refcount_by_dirty(PrimaryLogPG::OpContext*)+0x12b) [0x56108abf788b]
 6: (PrimaryLogPG::finish_ctx(PrimaryLogPG::OpContext*, int, int)+0x1327) [0x56108abf9ff7]
Actions #4

Updated by Neha Ojha about 3 years ago

  • Assignee set to Myoungwon Oh
Actions #5

Updated by Samuel Just about 3 years ago

Most likely, the problem is that the object being dirtied is present, but the prior clone is missing pending recovery.
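A rough sketch of the kind of guard this diagnosis suggests, with hypothetical names (prev_clone_if_recovered, CloneRef) — the real code paths are PrimaryLogPG::get_prev_clone_obc()/dec_refcount_by_dirty(): before dereferencing the previous clone of a dirtied head, check whether that clone is still in the missing set (pending recovery) and, if so, defer rather than assert.

```cpp
#include <cassert>
#include <optional>
#include <set>
#include <string>

// Hypothetical sketch: only hand back a reference to the previous clone
// when it is actually recovered locally; otherwise signal the caller to
// defer the refcount work instead of tripping the get_object_context assert.
struct CloneRef { std::string oid; };

std::optional<CloneRef> prev_clone_if_recovered(
    const std::string& prev_clone_oid,
    const std::set<std::string>& missing) {
  if (missing.count(prev_clone_oid))
    return std::nullopt;   // clone still pending recovery: defer
  return CloneRef{prev_clone_oid};
}
```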

Actions #6

Updated by Myoungwon Oh about 3 years ago

  • Status changed from New to In Progress
Actions #7

Updated by Kefu Chai about 3 years ago

  • Status changed from In Progress to Resolved
Actions #8

Updated by Sage Weil about 3 years ago

  • Status changed from Resolved to Pending Backport
  • Backport set to pacific

need this in pacific too: /a/sage-2021-03-20_15:11:51-rados-wip-sage2-testing-2021-03-20-0832-pacific-distro-basic-smithi/5983830

Actions #9

Updated by Backport Bot about 3 years ago

  • Copied to Backport #49911: pacific: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) added
Actions #10

Updated by Sage Weil about 3 years ago

  • Pull request ID set to 39670
Actions #11

Updated by Neha Ojha about 3 years ago

@Myoungwon Oh this new failure looks very similar to the issue tracked in this ticket?

2021-04-03T12:45:16.005 INFO:tasks.ceph.osd.6.smithi183.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-2788-g0d634484/rpm/el8/BUILD/ceph-17.0.0-2788-g0d634484/src/osd/PrimaryLogPG.cc: 11504: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT))
2021-04-03T12:45:16.102 INFO:tasks.ceph.osd.6.smithi183.stderr: ceph version 17.0.0-2788-g0d634484 (0d6344840d6904e667a339b95a33d5effbffaf9f) quincy (dev)
2021-04-03T12:45:16.102 INFO:tasks.ceph.osd.6.smithi183.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x6b006a]
2021-04-03T12:45:16.102 INFO:tasks.ceph.osd.6.smithi183.stderr: 2: ceph-osd(+0x5a8272) [0x6b0272]
2021-04-03T12:45:16.102 INFO:tasks.ceph.osd.6.smithi183.stderr: 3: (PrimaryLogPG::get_object_context(hobject_t const&, bool, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, ceph::buffer::v15_2_0::list, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, ceph::buffer::v15_2_0::list> > > const*)+0x6ff) [0x8c857f]
2021-04-03T12:45:16.103 INFO:tasks.ceph.osd.6.smithi183.stderr: 4: (PrimaryLogPG::get_manifest_ref_count(std::shared_ptr<ObjectContext>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&)+0x33d) [0x8ca30d]
2021-04-03T12:45:16.103 INFO:tasks.ceph.osd.6.smithi183.stderr: 5: (cls_get_manifest_ref_count(void*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)+0x5d) [0x766e9d]
2021-04-03T12:45:16.103 INFO:tasks.ceph.osd.6.smithi183.stderr: 6: /usr/lib64/rados-classes/libcls_cas.so(+0xbd14) [0x1b9b5d14]
2021-04-03T12:45:16.103 INFO:tasks.ceph.osd.6.smithi183.stderr: 7: ceph-osd(+0x743ffc) [0x84bffc]
2021-04-03T12:45:16.103 INFO:tasks.ceph.osd.6.smithi183.stderr: 8: (ClassHandler::ClassMethod::exec(void*, ceph::buffer::v15_2_0::list&, ceph::buffer::v15_2_0::list&)+0x5e) [0x84c36e]
2021-04-03T12:45:16.104 INFO:tasks.ceph.osd.6.smithi183.stderr: 9: (PrimaryLogPG::do_osd_ops(PrimaryLogPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)+0x17e1) [0x906541]
2021-04-03T12:45:16.104 INFO:tasks.ceph.osd.6.smithi183.stderr: 10: (PrimaryLogPG::prepare_transaction(PrimaryLogPG::OpContext*)+0x177) [0x919b27]
2021-04-03T12:45:16.104 INFO:tasks.ceph.osd.6.smithi183.stderr: 11: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x31d) [0x91bc0d]
2021-04-03T12:45:16.104 INFO:tasks.ceph.osd.6.smithi183.stderr: 12: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x2ddc) [0x92559c]
2021-04-03T12:45:16.104 INFO:tasks.ceph.osd.6.smithi183.stderr: 13: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xd1c) [0x92c79c]
2021-04-03T12:45:16.105 INFO:tasks.ceph.osd.6.smithi183.stderr: 14: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x309) [0x7b8a79]
2021-04-03T12:45:16.105 INFO:tasks.ceph.osd.6.smithi183.stderr: 15: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x68) [0xa12718]
2021-04-03T12:45:16.105 INFO:tasks.ceph.osd.6.smithi183.stderr: 16: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xa58) [0x7d5cd8]
2021-04-03T12:45:16.105 INFO:tasks.ceph.osd.6.smithi183.stderr: 17: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0xe36fc4]
2021-04-03T12:45:16.105 INFO:tasks.ceph.osd.6.smithi183.stderr: 18: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0xe39c64]
2021-04-03T12:45:16.106 INFO:tasks.ceph.osd.6.smithi183.stderr: 19: (Thread::_entry_func(void*)+0xd) [0xe28acd]
2021-04-03T12:45:16.106 INFO:tasks.ceph.osd.6.smithi183.stderr: 20: /lib64/libpthread.so.0(+0x814a) [0x6cc014a]
2021-04-03T12:45:16.106 INFO:tasks.ceph.osd.6.smithi183.stderr: 21: clone()

rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_api_tests validater/valgrind}

/a/kchai-2021-04-03_11:44:43-rados-wip-kefu-testing-2021-04-03-1318-distro-basic-smithi/6017847

Actions #13

Updated by Neha Ojha about 3 years ago

Myoungwon Oh wrote:

https://github.com/ceph/ceph/pull/40606

Thanks for the fix, let's use https://tracker.ceph.com/issues/50192 to track it.

Actions #14

Updated by Loïc Dachary about 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".

Actions #15

Updated by Yuri Weinstein about 3 years ago

Neha Ojha wrote:

Myoungwon Oh wrote:

https://github.com/ceph/ceph/pull/40606

merged

Actions #16

Updated by Neha Ojha almost 3 years ago

  • Related to Bug #50806: osd/PrimaryLogPG.cc: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) added