Bug #12200

assert(hinfo.get_total_chunk_size() == (uint64_t)st.st_size)

Added by David Zafman over 8 years ago. Updated over 8 years ago.

Status: Resolved
Priority: Normal
Assignee: David Zafman
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Corruption of an EC pool shard crashes the OSD during a deep scrub. Specifically, the on-disk file size of one of the shards is larger than expected:

$ find dev -name '*foo*' -ls
279010   56 -rw-r--r--   1 dzafman  dzafman     51201 Jul  1 16:22 dev/osd4/current/1.6s1_head/foo__head_7FC1F406__1_ffffffffffffffff_1
279011   56 -rw-r--r--   1 dzafman  dzafman     51200 Jul  1 16:12 dev/osd2/current/1.6s2_head/foo__head_7FC1F406__1_ffffffffffffffff_2
279009   56 -rw-r--r--   1 dzafman  dzafman     51200 Jul  1 16:12 dev/osd3/current/1.6s0_head/foo__head_7FC1F406__1_ffffffffffffffff_0
2015-07-01 16:22:40.244766 7f3fa17ea700 10 osd.4 26 dequeue_op 0x7f3fb402cfe0 prio 127 cost 0 latency 0.000140 replica scrub(pg: 1.6s1,from:0'0,to:20'1,epoch:26,start:0//0//-1,end:MAX,chunky:1,deep:1,seed:4294967295,version:6) v6 pg pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active]
2015-07-01 16:22:40.244799 7f3fa17ea700 10 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] handle_message: replica scrub(pg: 1.6s1,from:0'0,to:20'1,epoch:26,start:0//0//-1,end:MAX,chunky:1,deep:1,seed:4294967295,version:6) v6
2015-07-01 16:22:40.244814 7f3fa17ea700  7 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] replica_scrub
2015-07-01 16:22:40.244824 7f3fa17ea700 10 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] build_scrub_map_chunk [0//0//-1,MAX)  seed 4294967295
2015-07-01 16:22:40.244835 7f3fa17ea700 10 filestore(/home/dzafman/ceph/src/dev/osd4) collection_list_partial: 1.6s1_head
2015-07-01 16:22:40.244843 7f3fa17ea700 20 _collection_list_partial 0//0//-1 32-64 ls.size 0
2015-07-01 16:22:40.244937 7f3fa17ea700 20  prefixes 60000000,604F1CF7
2015-07-01 16:22:40.244947 7f3fa17ea700 20 filestore(/home/dzafman/ceph/src/dev/osd4) objects: [6//head//1/ffffffffffffffff/1,7fc1f406/foo/head//1/ffffffffffffffff/1]
2015-07-01 16:22:40.244957 7f3fa17ea700 10 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] be_scan_list scanning 1 objects deeply
2015-07-01 16:22:40.245000 7f3fa17ea700 10 filestore(/home/dzafman/ceph/src/dev/osd4) stat 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1 = 0 (size 51201)
2015-07-01 16:22:40.245014 7f3fa17ea700 15 filestore(/home/dzafman/ceph/src/dev/osd4) getattrs 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1
2015-07-01 16:22:40.245075 7f3fa17ea700 20 filestore(/home/dzafman/ceph/src/dev/osd4) fgetattrs 36 getting '_'
2015-07-01 16:22:40.245085 7f3fa17ea700 20 filestore(/home/dzafman/ceph/src/dev/osd4) fgetattrs 36 getting 'hinfo_key'
2015-07-01 16:22:40.245205 7f3fa17ea700 10 filestore(/home/dzafman/ceph/src/dev/osd4) getattrs 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1 = 0
2015-07-01 16:22:40.245214 7f3fa17ea700 15 filestore(/home/dzafman/ceph/src/dev/osd4) read 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1 0~524288
2015-07-01 16:22:40.245280 7f3fa17ea700 10 filestore(/home/dzafman/ceph/src/dev/osd4) FileStore::read 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1 0~51201/524288
2015-07-01 16:22:40.245301 7f3fa17ea700  0 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] _scan_list  7fc1f406/foo/head//1 got -5 on read, read_error
2015-07-01 16:22:40.245323 7f3fa17ea700 10 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] get_hash_info: Getting attr on 7fc1f406/foo/head//1
2015-07-01 16:22:40.245337 7f3fa17ea700 10 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] get_hash_info: not in cache 7fc1f406/foo/head//1
2015-07-01 16:22:40.245379 7f3fa17ea700 10 filestore(/home/dzafman/ceph/src/dev/osd4) stat 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1 = 0 (size 51201)
2015-07-01 16:22:40.245386 7f3fa17ea700 10 osd.4 pg_epoch: 26 pg[1.6s1( v 20'1 (0'0,20'1] local-les=26 n=1 ec=19 les/c 26/26 25/25/25) [3,4,2] r=1 lpr=25 pi=19-24/4 luod=0'0 crt=0'0 lcod 0'0 active] get_hash_info: found on disk, size 51201
2015-07-01 16:22:40.245397 7f3fa17ea700 15 filestore(/home/dzafman/ceph/src/dev/osd4) getattr 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1 'hinfo_key'
2015-07-01 16:22:40.245413 7f3fa17ea700 10 filestore(/home/dzafman/ceph/src/dev/osd4) getattr 1.6s1_head/7fc1f406/foo/head//1/ffffffffffffffff/1 'hinfo_key' = 30
2015-07-01 16:22:40.261902 7f3fa17ea700 -1 osd/ECBackend.cc: In function 'ECUtil::HashInfoRef ECBackend::get_hash_info(const hobject_t&)' thread 7f3fa17ea700 time 2015-07-01 16:22:40.245421
osd/ECBackend.cc: 1482: FAILED assert(hinfo.get_total_chunk_size() == (uint64_t)st.st_size)

 ceph version 9.0.1-1111-g075fb9f (075fb9f9e07f5a97bda4f8a4a23cba4df5bc826d)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x1a0342b]
 2: (ECBackend::get_hash_info(hobject_t const&)+0x65c) [0x180040a]
 3: (ECBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x43c) [0x180255e]
 4: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x444) [0x16e599a]
 5: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x3a9) [0x1590fb5]
 6: (PG::replica_scrub(std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x63d) [0x1591e63]
 7: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xa32) [0x1624320]
 8: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x47f) [0x1391075]
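
For reference, a minimal sketch of the check that fires here, using simplified stand-in types (only the assert expression is quoted from osd/ECBackend.cc:1482 above): get_hash_info() stats the shard and decodes the hinfo_key xattr, and the pre-fix code simply asserts that the recorded total chunk size equals the on-disk size, so the one extra byte (51201 vs 51200) aborts the whole OSD.

    // Hypothetical, simplified sketch; not the actual ECBackend code.
    #include <cassert>
    #include <cstdint>
    #include <sys/stat.h>

    struct HashInfo {                      // stands in for ECUtil::HashInfo
      uint64_t total_chunk_size = 0;       // recorded when the shard was written
      uint64_t get_total_chunk_size() const { return total_chunk_size; }
    };

    // Pre-fix behaviour: any size mismatch takes down the whole OSD.
    void check_shard(const HashInfo &hinfo, const char *shard_path) {
      struct stat st;
      if (::stat(shard_path, &st) == 0) {
        // osd/ECBackend.cc:1482 in the report above
        assert(hinfo.get_total_chunk_size() == (uint64_t)st.st_size);
      }
    }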

Related issues

Duplicated by Ceph - Bug #8588: In the erasure-coded pool, primary OSD will crash at decoding if any data chunk's size is changed (Duplicate, 06/11/2014)

Associated revisions

Revision da2987d7 (diff)
Added by David Zafman over 8 years ago

osd: Fix ECBackend to handle mismatch of total chunk size

Fixes: #12200

Signed-off-by: David Zafman <>
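
A rough sketch of the shape this fix takes, based on the commit title above and the post-fix log in note #8 below (simplified and hypothetical; the real get_hash_info() takes an hobject_t, uses a cache, and decodes the hinfo_key xattr): on a size mismatch it logs the problem and returns an empty HashInfoRef instead of asserting, so the deep-scrub path can mark the shard as errored rather than crash.

    // Hypothetical, simplified sketch of the post-fix get_hash_info() shape.
    #include <sys/stat.h>
    #include <cstdint>
    #include <iostream>
    #include <memory>

    struct HashInfo {                      // stands in for ECUtil::HashInfo
      uint64_t total_chunk_size = 0;
      uint64_t get_total_chunk_size() const { return total_chunk_size; }
    };
    using HashInfoRef = std::shared_ptr<HashInfo>;   // stands in for ECUtil::HashInfoRef

    HashInfoRef get_hash_info(const char *shard_path, const HashInfo &decoded) {
      struct stat st;
      if (::stat(shard_path, &st) != 0)
        return HashInfoRef();              // object missing: nothing to return
      if (decoded.get_total_chunk_size() != (uint64_t)st.st_size) {
        // Report the inconsistency and hand back a null ref instead of asserting;
        // the scrub caller then records a shard error (compare the note #8 log).
        std::cerr << "get_hash_info: Mismatch of total_chunk_size "
                  << decoded.get_total_chunk_size() << std::endl;
        return HashInfoRef();
      }
      return std::make_shared<HashInfo>(decoded);
    }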

History

#1 Updated by David Zafman over 8 years ago

  • Description updated (diff)

#2 Updated by Kefu Chai over 8 years ago

Yeah, I believe xinze would be interested in this.

#3 Updated by Kefu Chai over 8 years ago

  • Assignee set to Kefu Chai

#4 Updated by David Zafman over 8 years ago

I pushed a branch wip-12000-12200 which includes the fix for this.

#5 Updated by David Zafman over 8 years ago

  • Status changed from New to In Progress

#6 Updated by David Zafman over 8 years ago

  • Assignee changed from Kefu Chai to David Zafman

#7 Updated by David Zafman over 8 years ago

  • Status changed from In Progress to 7

#8 Updated by David Zafman over 8 years ago

Changing get_hash_info() to return ECUtil::HashInfoRef() (a null HashInfoRef) can cause this assert in submit_transaction():

2015-08-04 17:27:30.876250 7f86967fc700 10 osd.3 23 dequeue_op 0x7f864801d990 prio 63 cost 4096 latency 0.000269 osd_op(client.4189.0:1 obj-size-18967-0 [writefull 0~4096] 2.81384b9f ondisk+write+known_if_redirected e23) v5 pg pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean]
2015-08-04 17:27:30.876321 7f86967fc700 20 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] op_has_sufficient_caps pool=2 (pool-jerasure ) owner=0 need_read_cap=0 need_write_cap=1 need_class_read_cap=0 need_class_write_cap=0 -> yes
2015-08-04 17:27:30.876333 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] handle_message: osd_op(client.4189.0:1 obj-size-18967-0 [writefull 0~4096] 2.81384b9f ondisk+write+known_if_redirected e23) v5
2015-08-04 17:27:30.876343 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] do_op osd_op(client.4189.0:1 obj-size-18967-0 [writefull 0~4096] 2.81384b9f ondisk+write+known_if_redirected e23) v5 may_write -> write-ordered flags ondisk+write+known_if_redirected
2015-08-04 17:27:30.876380 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] get_object_context: found obc in cache: 0x7f866c0031a0
2015-08-04 17:27:30.876422 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] get_object_context: 0x7f866c0031a0 2/81384b9f/obj-size-18967-0/head rwstate(none n=0 w=0) oi: 2/81384b9f/obj-size-18967-0/head(19'1 client.4164.0:1 wrlock_by=unknown.0.0:0 dirty|data_digest|omap_digest s 4096 uv 1 dd d54493fd od ffffffff) ssc: 0x7f866c004890 snapset: 0=[]:[]+head
2015-08-04 17:27:30.876438 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] find_object_context 2/81384b9f/obj-size-18967-0/head @head oi=2/81384b9f/obj-size-18967-0/head(19'1 client.4164.0:1 wrlock_by=unknown.0.0:0 dirty|data_digest|omap_digest s 4096 uv 1 dd d54493fd od ffffffff)
2015-08-04 17:27:30.876466 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] execute_ctx 0x7f866c0090d0
2015-08-04 17:27:30.876477 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] do_op 2/81384b9f/obj-size-18967-0/head [writefull 0~4096] ov 19'1 av 23'2 snapc 0=[] snapset 0=[]:[]+head
2015-08-04 17:27:30.876486 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] do_osd_op 2/81384b9f/obj-size-18967-0/head [writefull 0~4096]
2015-08-04 17:27:30.876499 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] do_osd_op  writefull 0~4096
2015-08-04 17:27:30.876554 7f86967fc700 20 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] make_writeable 2/81384b9f/obj-size-18967-0/head snapset=0x7f866c0048d0  snapc=0=[]
2015-08-04 17:27:30.876590 7f86967fc700 20 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] make_writeable 2/81384b9f/obj-size-18967-0/head done, snapset=0=[]:[]+head
2015-08-04 17:27:30.876598 7f86967fc700 20 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] finish_ctx 2/81384b9f/obj-size-18967-0/head 0x7f866c0090d0 op modify
2015-08-04 17:27:30.876614 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean]  set mtime to 2015-08-04 17:27:30.875016
2015-08-04 17:27:30.876645 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean]  final snapset 0=[]:[]+head in 2/81384b9f/obj-size-18967-0/head
2015-08-04 17:27:30.876676 7f86967fc700 20 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean]  zeroing write result code 0
2015-08-04 17:27:30.876722 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] new_repop rep_tid 2 on osd_op(client.4189.0:1 obj-size-18967-0 [writefull 0~4096] 2.81384b9f ondisk+write+known_if_redirected e23) v5
2015-08-04 17:27:30.876738 7f86967fc700  7 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] issue_repop rep_tid 2 o 2/81384b9f/obj-size-18967-0/head
2015-08-04 17:27:30.876776 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] get_hash_info: Getting attr on 2/81384b9f/obj-size-18967-0/head
2015-08-04 17:27:30.876800 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] get_hash_info: not in cache 2/81384b9f/obj-size-18967-0/head
2015-08-04 17:27:30.876849 7f86967fc700 10 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] get_hash_info: found on disk, size 10
2015-08-04 17:27:30.876920 7f86967fc700  0 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] get_hash_info: Mismatch of total_chunk_size 2048
2015-08-04 17:27:30.876928 7f86967fc700 -1 osd.3 pg_epoch: 23 pg[2.0s0( v 19'1 (0'0,19'1] local-les=23 n=1 ec=18 les/c 23/23 22/22/22) [3,1,2] r=0 lpr=22 crt=0'0 lcod 0'0 mlcod 0'0 active+clean] submit_transaction: get_hash_info(2/81384b9f/obj-size-18967-0/head) returned a null pointer and there is no  way to recover from such an error in this  context
2015-08-04 17:27:30.891247 7f86967fc700 -1 osd/ECBackend.cc: In function 'virtual void ECBackend::submit_transaction(const hobject_t&, const eversion_t&, PGBackend::PGTransaction*, const eversion_t&, const eversion_t&, const std::vector<pg_log_entry_t>&, boost::optional<pg_hit_set_history_t>&, Context*, Context*, Context*, ceph_tid_t, osd_reqid_t, OpRequestRef)' thread 7f86967fc700 time 2015-08-04 17:27:30.876937
osd/ECBackend.cc: 1308: FAILED assert(0)

 ceph version 9.0.2-1063-g111ceb9 (111ceb96b5933c690b31e6758182f198fb28e385)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x1a05199]
 2: (ECBackend::submit_transaction(hobject_t const&, eversion_t const&, PGBackend::PGTransaction*, eversion_t const&, eversion_t const&, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> > const&, boost::optional<pg_hit_set_history_t>&, Context*, Context*, Context*, unsigned long, osd_reqid_t, std::tr1::shared_ptr<OpRequest>)+0x3fd) [0x17fe867]
 3: (ReplicatedPG::issue_repop(ReplicatedPG::RepGather*)+0x820) [0x1659fc8]
 4: (ReplicatedPG::execute_ctx(ReplicatedPG::OpContext*)+0x1b50) [0x162c5de]
 5: (ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>&)+0x4592) [0x1627590]
 6: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x895) [0x1622915]
 7: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x47f) [0x139178d]
 8: (PGQueueable::RunVis::operator()(std::tr1::shared_ptr<OpRequest>&)+0x5d) [0x133fadd]
 9: (void boost::detail::variant::invoke_visitor<PGQueueable::RunVis>::internal_visit<std::tr1::shared_ptr<OpRequest> >(std::tr1::shared_ptr<OpRequest>&, int)+0x29) [0x1442505]
 10: (boost::detail::variant::invoke_visitor<PGQueueable::RunVis>::result_type boost::detail::variant::visitation_impl_invoke_impl<boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*, std::tr1::shared_ptr<OpRequest> >(int, boost::detail::variant::invoke_visitor<PGQueueable::RunVis>&, void*, std::tr1::shared_ptr<OpRequest>*, mpl_::bool_<false>)+0x40) [0x143e75d]
 11: (boost::detail::variant::invoke_visitor<PGQueueable::RunVis>::result_type boost::detail::variant::visitation_impl_invoke<boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*, std::tr1::shared_ptr<OpRequest>, boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>::has_fallback_type_>(int, boost::detail::variant::invoke_visitor<PGQueueable::RunVis>&, void*, std::tr1::shared_ptr<OpRequest>*, boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>::has_fallback_type_, int)+0x35) [0x143955b]
 12: (boost::detail::variant::invoke_visitor<PGQueueable::RunVis>::result_type boost::detail::variant::visitation_impl<mpl_::int_<0>, boost::detail::variant::visitation_impl_step<boost::mpl::l_iter<boost::mpl::l_item<mpl_::long_<3l>, std::tr1::shared_ptr<OpRequest>, boost::mpl::l_item<mpl_::long_<2l>, PGSnapTrim, boost::mpl::l_item<mpl_::long_<1l>, PGScrub, boost::mpl::l_end> > > >, boost::mpl::l_iter<boost::mpl::l_end> >, boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*, boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>::has_fallback_type_>(int, int, boost::detail::variant::invoke_visitor<PGQueueable::RunVis>&, void*, mpl_::bool_<false>, boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>::has_fallback_type_, mpl_::int_<0>*, boost::detail::variant::visitation_impl_step<boost::mpl::l_iter<boost::mpl::l_item<mpl_::long_<3l>, std::tr1::shared_ptr<OpRequest>, boost::mpl::l_item<mpl_::long_<2l>, PGSnapTrim, boost::mpl::l_item<mpl_::long_<1l>, PGScrub, boost::mpl::l_end> > > >, boost::mpl::l_iter<boost::mpl::l_end> >*)+0x79) [0x142d4c8]
 13: (boost::detail::variant::invoke_visitor<PGQueueable::RunVis>::result_type boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>::internal_apply_visitor_impl<boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*>(int, int, boost::detail::variant::invoke_visitor<PGQueueable::RunVis>&, void*)+0x40) [0x1417014]
 14: (boost::detail::variant::invoke_visitor<PGQueueable::RunVis>::result_type boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>::internal_apply_visitor<boost::detail::variant::invoke_visitor<PGQueueable::RunVis> >(boost::detail::variant::invoke_visitor<PGQueueable::RunVis>&)+0x46) [0x13fce84]
 15: (PGQueueable::RunVis::result_type boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>::apply_visitor<PGQueueable::RunVis>(PGQueueable::RunVis&)+0x36) [0x13e193e]
 16: (PGQueueable::RunVis::result_type boost::apply_visitor<PGQueueable::RunVis, boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> >(PGQueueable::RunVis&, boost::variant<std::tr1::shared_ptr<OpRequest>, PGSnapTrim, PGScrub, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>&)+0x23) [0x13c176c]
 17: (PGQueueable::run(OSD*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x44) [0x13aa258]
 18: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x70d) [0x1390b2d]
 19: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b8) [0x19f7732]
 20: (ShardedThreadPool::WorkThreadSharded::entry()+0x25) [0x19f8fb5]
 21: (Thread::entry_wrapper()+0xa8) [0x19ed082]
 22: (Thread::_entry_func(void*)+0x18) [0x19ecfd0]
 23: (()+0x8182) [0x7f86cdfab182]
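
For completeness, a minimal sketch of the guard this note describes, using simplified stand-in types (hypothetical; the real submit_transaction() takes many more arguments, as the backtrace above shows): a null HashInfoRef on the client write path cannot be recovered from, so the condition is logged and followed by assert(0).

    // Hypothetical, simplified sketch; not the actual ECBackend code.
    #include <cassert>
    #include <cstdint>
    #include <iostream>
    #include <memory>

    struct HashInfo { uint64_t total_chunk_size = 0; };
    using HashInfoRef = std::shared_ptr<HashInfo>;

    void submit_transaction(const char *oid, HashInfoRef hinfo) {
      if (!hinfo) {
        // Matches the log above: a client write cannot proceed without the hash
        // info, so the OSD logs the condition and aborts (osd/ECBackend.cc:1308).
        std::cerr << "submit_transaction: get_hash_info(" << oid
                  << ") returned a null pointer and there is no way to recover\n";
        assert(0);
      }
      // ... otherwise build and queue the erasure-coded transaction using hinfo ...
    }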

#9 Updated by David Zafman over 8 years ago

  • Status changed from 7 to Resolved

da2987d79db679e7b44d7886462c81d34994af26

#10 Updated by David Zafman almost 7 years ago

  • Duplicated by Bug #8588: In the erasure-coded pool, primary OSD will crash at decoding if any data chunk's size is changed added
