Bug #20960

ceph_test_rados: mismatched version (due to pg import/export)

Added by Sage Weil over 6 years ago. Updated almost 2 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2017-08-09T14:04:23.836 INFO:tasks.rados.rados.0.smithi143.stderr:3296: oid 205 oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo version is 358 and expected 482
2017-08-09T14:04:23.837 INFO:tasks.rados.rados.0.smithi143.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.2-533-g98d5369/rpm/el7/BUILD/ceph-12.1.2-533-g98d5369/src/test/osd/RadosModel.h: In function 'virtual void ReadOp::_finish(TestOp::CallbackInfo*)' thread 7f6b2cff9700 time 2017-08-09 14:04:23.824433
2017-08-09T14:04:23.837 INFO:tasks.rados.rados.0.smithi143.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.2-533-g98d5369/rpm/el7/BUILD/ceph-12.1.2-533-g98d5369/src/test/osd/RadosModel.h: 1368: FAILED assert(version == old_value.version)
2017-08-09T14:04:23.837 INFO:tasks.rados.rados.0.smithi143.stderr: ceph version 12.1.2-533-g98d5369 (98d5369995e4ef1258e99f10a5039e5ed3f01ca9) luminous (rc)
2017-08-09T14:04:23.837 INFO:tasks.rados.rados.0.smithi143.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x7f6b4b69acd0]
2017-08-09T14:04:23.837 INFO:tasks.rados.rados.0.smithi143.stderr: 2: (ReadOp::_finish(TestOp::CallbackInfo*)+0x1ba) [0x7f6b552dad8a]
2017-08-09T14:04:23.837 INFO:tasks.rados.rados.0.smithi143.stderr: 3: (librados::C_AioComplete::finish(int)+0x22) [0x7f6b54dd6912]
2017-08-09T14:04:23.837 INFO:tasks.rados.rados.0.smithi143.stderr: 4: (Context::complete(int)+0x9) [0x7f6b54db4399]
2017-08-09T14:04:23.838 INFO:tasks.rados.rados.0.smithi143.stderr: 5: (Finisher::finisher_thread_entry()+0x198) [0x7f6b4b698328]
2017-08-09T14:04:23.838 INFO:tasks.rados.rados.0.smithi143.stderr: 6: (()+0x7dc5) [0x7f6b540a5dc5]
2017-08-09T14:04:23.838 INFO:tasks.rados.rados.0.smithi143.stderr: 7: (clone()+0x6d) [0x7f6b496d273d]

/a/sage-2017-08-09_13:09:07-rados-wip-sage-testing-20170808b-distro-basic-smithi/1501666

similar error at
/a/sage-2017-08-08_21:51:30-rados-wip-sage-testing-distro-basic-smithi/1498737
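For context, ceph_test_rados records the version returned by each acked write and asserts that a later read of the object returns the same version. A simplified Python model of the failing invariant (hypothetical names, not the actual RadosModel.h code):

```python
# Toy model of the version-consistency check that ceph_test_rados
# enforces in ReadOp::_finish. This is a hypothetical sketch in
# Python; the real check is the C++ ceph_assert shown in the logs.

class ObjectModel:
    def __init__(self):
        # expected user version per oid, updated on every acked write
        self.expected = {}

    def on_write_ack(self, oid, version):
        # a committed, acked write fixes the version the test expects
        self.expected[oid] = version

    def on_read(self, oid, version):
        # mirrors: ceph_assert(version == old_value.version)
        want = self.expected.get(oid)
        if want is not None and version != want:
            raise AssertionError(
                f"oid {oid} version is {version} and expected {want}")

m = ObjectModel()
m.on_write_ack(205, 482)    # second write sets uv 482
m.on_read(205, 482)         # consistent read: ok
try:
    m.on_read(205, 358)     # stale version resurfaces -> assert fires
    failed = False
except AssertionError:
    failed = True
```

The failure in this ticket is the last case: a previously acked version 482 is lost and an older version 358 comes back.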

History

#1 Updated by Sage Weil over 6 years ago

second write to the object sets uv 482

2017-08-09 14:01:07.750069 7f18b0c98700 10 osd.6 pg_epoch: 120 pg[2.4( v 120'482 (31'101,120'482] local-lis/les=104/105 n=35 ec=22/22 lis/c 104/22 les/c/f 105/23/0 104/104/104) [6] r=0 lpr=104 pi=[22,104)/1 crt=120'482 lcod 120'481 mlcod 120'481 active+undersized+degraded] get_object_context: 0x7f18e34783c0 2:2d767ac2:::smithi143478691-205 oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo:head rwstate(none n=0 w=0) oi: 2:2d767ac2:::smithi143478691-205 oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo:head(120'482 client.4213.0:11405 dirty|omap_digest s 3546706 uv 482 od ffffffff alloc_hint [0 0 0]) exists: 1 ssc: 0x7f18e1df6360 snapset: 0=[]:{}

then the osd stops; pg 2.4 is exported from osd.6 and imported into osd.2
2017-08-09T14:02:20.872 INFO:teuthology.orchestra.run.smithi074:Running: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-6 --journal-path /var/lib/ceph/osd/ceph-6/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op remove --pgid 2.4'
2017-08-09T14:02:29.789 INFO:teuthology.orchestra.run.smithi143:Running: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op import --file /home/ubuntu/cephtest/ceph.data/exp.2.4.6'

then osd.6 restarts,
2017-08-09 14:04:01.719747 7fecd7809d00  0 ceph version 12.1.2-533-g98d5369 (98d5369995e4ef1258e99f10a5039e5ed3f01ca9) luminous (rc), process (unknown), pid 8514

and is backfilled by osd.5 with the older version 358
2017-08-09 14:04:12.152021 7fecb36b1700 10 osd.6 pg_epoch: 197 pg[2.4( v 197'449 (31'130,197'449] lb 2:2cc2ec44:::smithi143478691-359 oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo:head (bitwise) local-lis/les=195/196 n=31 ec=22/22 lis/c 195/22 les/c/f 196/23/0 194/195/195) [6,5]/[5] r=-1 lpr=195 pi=[22,195)/2 luod=0'0 crt=197'449 active+remapped] handle_push ObjectRecoveryInfo(2:2d767ac2:::smithi143478691-205 oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo:head@72'358, size: 2252402, copy_subset: [0~2252402], clone_subset: {}, snapset: 0=[]:[])ObjectRecoveryProgress(!first, data_recovered_to:2156282, data_complete:false, omap_recovered_to:, omap_complete:true, error:false)

#2 Updated by Sage Weil over 6 years ago

  • Subject changed from ceph_test_rados: mismatched version to ceph_test_rados: mismatched version (due to pg import/export)

meanwhile on osd.2, the startup is

2017-08-09 14:04:24.272275 7fb838dc8d00  0 ceph version 12.1.2-533-g98d5369 (98d5369995e4ef1258e99f10a5039e5ed3f01ca9) luminous (rc), process (unknown), pid 543055

pg 2.4 loads,
2017-08-09 14:04:25.987898 7fb838dc8d00 10 osd.2 pg_epoch: 144 pg[2.4( v 120'482 (31'101,120'482] local-lis/les=104/105 n=35 ec=22/22 lis/c 104/22 les/c/f 105/23/0 104/0/104) [] r=-1 lpr=0 crt=120'482 lcod 0'0 unknown] handle_loaded

note that at this point osd.5 is the primary....

on osd.5,

2017-08-09 14:03:01.827395 7fc26a05b700  1 osd.5 pg_epoch: 162 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 162/162/162) [5] r=0 lpr=162 pi=[22,162)/1 crt=103'430 lcod 0'0 mlcod 0'0 unknown NOTIFY] start_peering_interval up [] -> [5], acting [] -> [5], acting_primary ? -> 5, up_primary ? -> 5, role -1 -> 0, features acting 2305244844532236283 upacting 2305244844532236283
...
2017-08-09 14:03:01.827539 7fc26a05b700 10 osd.5 pg_epoch: 162 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 162/162/162) [5] r=0 lpr=162 pi=[22,162)/1 crt=103'430 lcod 0'0 mlcod 0'0 peering] build_prior all_probe 5,6
2017-08-09 14:03:01.827546 7fc26a05b700 10 osd.5 pg_epoch: 162 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 162/162/162) [5] r=0 lpr=162 pi=[22,162)/1 crt=103'430 lcod 0'0 mlcod 0'0 peering] build_prior maybe_rw interval:104, acting: 6
2017-08-09 14:03:01.827549 7fc26a05b700 10 osd.5 pg_epoch: 162 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 162/162/162) [5] r=0 lpr=162 pi=[22,162)/1 crt=103'430 lcod 0'0 mlcod 0'0 peering] build_prior  prior osd.6 is down
2017-08-09 14:03:01.827553 7fc26a05b700 10 osd.5 pg_epoch: 162 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 162/162/162) [5] r=0 lpr=162 pi=[22,162)/1 crt=103'430 lcod 0'0 mlcod 0'0 peering] build_prior  possibly went active+rw, insufficient up; including down osds
2017-08-09 14:03:01.827560 7fc26a05b700 10 osd.5 pg_epoch: 162 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 162/162/162) [5] r=0 lpr=162 pi=[22,162)/1 crt=103'430 lcod 0'0 mlcod 0'0 peering] build_prior final: probe 5 down 6 blocked_by {6=0} pg_down

and a bit later,
2017-08-09 14:04:04.821055 7fc26985a700  1 osd.5 pg_epoch: 194 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/194/194) [6,5] r=1 lpr=194 pi=[22,194)/2 crt=103'430 lcod 0'0 unknown] start_peering_interval up [5] -> [6,5], acting [5] -> [6,5], acting_primary 5 -> 6, up_primary 5 -> 6, role 0 -> 1, features acting 2305244844532236283 upacting 2305244844532236283
...
2017-08-09 14:04:05.958285 7fc26a05b700  1 osd.5 pg_epoch: 195 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/195/195) [6,5]/[5] r=0 lpr=195 pi=[22,195)/2 crt=103'430 lcod 0'0 mlcod 0'0 remapped NOTIFY] start_peering_interval up [6,5] -> [6,5], acting [6,5] -> [5], acting_primary 6 -> 5, up_primary 6 -> 6, role 1 -> 0, features acting 2305244844532236283 upacting 2305244844532236283
...
2017-08-09 14:04:05.959014 7fc26a05b700 10 osd.5 pg_epoch: 195 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/195/195) [6,5]/[5] r=0 lpr=195 pi=[22,195)/2 crt=103'430 lcod 0'0 mlcod 0'0 remapped+peering] build_prior all_probe 5,6
2017-08-09 14:04:05.959026 7fc26a05b700 10 osd.5 pg_epoch: 195 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/195/195) [6,5]/[5] r=0 lpr=195 pi=[22,195)/2 crt=103'430 lcod 0'0 mlcod 0'0 remapped+peering] build_prior maybe_rw interval:162, acting: 5
2017-08-09 14:04:05.959037 7fc26a05b700 10 osd.5 pg_epoch: 195 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/195/195) [6,5]/[5] r=0 lpr=195 pi=[22,195)/2 crt=103'430 lcod 0'0 mlcod 0'0 remapped+peering] build_prior maybe_rw interval:104, acting: 6
2017-08-09 14:04:05.959047 7fc26a05b700 10 osd.5 pg_epoch: 195 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/195/195) [6,5]/[5] r=0 lpr=195 pi=[22,195)/2 crt=103'430 lcod 0'0 mlcod 0'0 remapped+peering] build_prior final: probe 5,6 down  blocked_by {}

but osd.6 is empty!
2017-08-09 14:04:05.960008 7fc26a05b700 10 osd.5 pg_epoch: 195 pg[2.4( v 103'430 (31'101,103'430] local-lis/les=22/23 n=37 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/195/195) [6,5]/[5] r=0 lpr=195 pi=[22,195)/2 crt=103'430 lcod 0'0 mlcod 0'0 remapped+peering]  got osd.6 2.4( empty local-lis/les=0/0 n=0 ec=22/22 lis/c 22/22 les/c/f 23/23/0 194/195/195)

which gives osd.5 the wrong idea and it goes active with its old state.
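The faulty inference can be modeled in a few lines (a toy model, not the real PG::build_prior/PastIntervals code):

```python
# Toy model of the peering decision described above (hypothetical;
# the real logic lives in PG::build_prior and PastIntervals).

def can_go_active(prior_maybe_rw_osds, probed_infos):
    """prior_maybe_rw_osds: OSDs in intervals that may have gone rw.
    probed_infos: osd -> last_update tuple, or None if the OSD
    reports an empty pg.

    Current behavior: any reply from a maybe_went_rw OSD -- even an
    "empty" one -- counts as a completed probe, so peering proceeds."""
    for osd in prior_maybe_rw_osds:
        if osd not in probed_infos:
            return False  # still waiting for that probe
    return True

# osd.6 may have gone rw in interval 104, but after the pg export
# and --op remove it reports an empty pg. Peering proceeds anyway,
# letting osd.5 go active with its old state (v 103'430).
stale_ok = can_go_active({6}, {5: (103, 430), 6: None})
```

An empty reply from a member of a maybe_went_rw interval is indistinguishable here from "nothing ever happened on that OSD", which is exactly the wrong idea osd.5 gets.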

this is a bug, either
1. in thrashosds.py: we should start the importing osd first and wait for it to peer before starting the exporting osd (whose now-empty pg may give peer osds the wrong idea). or,
2. in peering: we should be alarmed if a member of a maybe_went_rw interval is empty for the PG and not proceed.

option 2 seems dangerous without a lot more thinking about the implications.
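Option 1 amounts to reordering the thrasher's revive sequence. A hypothetical sketch of that ordering (the helper names are invented for illustration and are not the actual thrashosds.py/teuthology API):

```python
# Hypothetical sketch of option 1: after a pg export/import, revive
# the importing osd first and wait for the pg to peer before reviving
# the exporting osd, so the exporter's empty pg is seen only after
# authority has been re-established. The FakeCluster below just
# records the call order; none of these names exist in thrashosds.py.

class FakeCluster:
    def __init__(self):
        self.events = []

    def start_osd(self, osd):
        self.events.append(('start', osd))

    def wait_for_pg_peered(self, pgid):
        self.events.append(('peered', pgid))

def revive_after_import(cluster, pgid, exporting_osd, importing_osd):
    cluster.start_osd(importing_osd)     # pg data now lives here
    cluster.wait_for_pg_peered(pgid)     # let it assert authority
    cluster.start_osd(exporting_osd)     # safe: empty pg seen post-peering

c = FakeCluster()
revive_after_import(c, '2.4', exporting_osd=6, importing_osd=2)
```

In the failing run the order was effectively reversed: osd.6 (the exporter) came back and was probed while osd.2 (the importer, holding v 120'482) was still down.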

#3 Updated by Greg Farnum over 6 years ago

I'm not really sure how we could reasonably handle this scenario on the Ceph side. Seems like we should adjust the test case.

#4 Updated by Patrick Donnelly about 4 years ago

  • Status changed from 12 to New

#5 Updated by Sage Weil about 4 years ago

  • Priority changed from Urgent to Normal

#6 Updated by Neha Ojha almost 4 years ago

  • Project changed from Ceph to RADOS

/a/nojha-2020-05-19_23:54:26-rados-wip-cephadm-test-distro-basic-smithi/5070712

#7 Updated by Neha Ojha almost 4 years ago

Has started appearing more frequently recently - /a/nojha-2020-05-21_19:33:40-rados-wip-32601-distro-basic-smithi/5077147

#8 Updated by Neha Ojha over 3 years ago

/a/yuriw-2020-06-04_18:03:48-rados-wip-yuri2-testing-2020-06-03-2341-MASTER-distro-basic-smithi/5118028

#9 Updated by Kefu Chai over 3 years ago

2020-06-08T11:51:16.668 INFO:tasks.rados.rados.0.smithi104.stderr:10488: oid 9152 version is 8009 and expected 8808
2020-06-08T11:51:16.668 INFO:tasks.rados.rados.0.smithi104.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.0.0-2335-g2cb9872a203/rpm/el8/BUILD/ceph-16.0.0-2335-g2cb9872a203/src/test/osd/RadosModel.h: In function 'virtual void ReadOp::_finish(TestOp::CallbackInfo*)' thread 7fcb167fc700 time 2020-06-08T11:51:16.662931+0000
2020-06-08T11:51:16.669 INFO:tasks.rados.rados.0.smithi104.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.0.0-2335-g2cb9872a203/rpm/el8/BUILD/ceph-16.0.0-2335-g2cb9872a203/src/test/osd/RadosModel.h: 1397: FAILED ceph_assert(version == old_value.version)
2020-06-08T11:51:16.669 INFO:tasks.rados.rados.0.smithi104.stderr: ceph version 16.0.0-2335-g2cb9872a203 (2cb9872a203194b8c6ee2bf947577ee78c7dd961) pacific (dev)
2020-06-08T11:51:16.669 INFO:tasks.rados.rados.0.smithi104.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x7fcb29d563a8]
2020-06-08T11:51:16.669 INFO:tasks.rados.rados.0.smithi104.stderr: 2: (()+0x2885c2) [0x7fcb29d565c2]
2020-06-08T11:51:16.669 INFO:tasks.rados.rados.0.smithi104.stderr: 3: (ReadOp::_finish(TestOp::CallbackInfo*)+0x215) [0x55590cdccd35]
2020-06-08T11:51:16.670 INFO:tasks.rados.rados.0.smithi104.stderr: 4: (()+0xa48bb) [0x7fcb332a68bb]
2020-06-08T11:51:16.670 INFO:tasks.rados.rados.0.smithi104.stderr: 5: (()+0xbf325) [0x7fcb332c1325]
2020-06-08T11:51:16.670 INFO:tasks.rados.rados.0.smithi104.stderr: 6: (()+0xc157a) [0x7fcb332c357a]
2020-06-08T11:51:16.670 INFO:tasks.rados.rados.0.smithi104.stderr: 7: (()+0xc5e5a) [0x7fcb332c7e5a]
2020-06-08T11:51:16.670 INFO:tasks.rados.rados.0.smithi104.stderr: 8: (()+0xc2b23) [0x7fcb2863fb23]
2020-06-08T11:51:16.671 INFO:tasks.rados.rados.0.smithi104.stderr: 9: (()+0x82de) [0x7fcb2918e2de]
2020-06-08T11:51:16.671 INFO:tasks.rados.rados.0.smithi104.stderr: 10: (clone()+0x43) [0x7fcb27d1c133]

/a/kchai-2020-06-08_10:56:36-rados-wip-kefu-testing-2020-06-08-1713-distro-basic-smithi/5128820

#10 Updated by Neha Ojha over 3 years ago

  • Priority changed from Normal to High

/a/dis-2020-06-28_18:43:20-rados-wip-msgr21-fix-reuse-rebuildci-distro-basic-smithi/5186890

#11 Updated by Neha Ojha over 3 years ago

  • Status changed from New to Can't reproduce
  • Priority changed from High to Normal

The thrash_cache_writeback_proxy_none failure has a different root cause; opened a new tracker for it: https://tracker.ceph.com/issues/46323.

#12 Updated by Aishwarya Mathuria almost 2 years ago

2022-04-06T21:56:25.978 INFO:tasks.rados.rados.0.smithi176.stdout:12930: copy_from oid 6278 from oid 4813 current snap is 0
2022-04-06T21:56:25.978 INFO:tasks.rados.rados.0.smithi176.stdout:12924:  expect (ObjNum 8518 snap 0 seq_num 8518)
2022-04-06T21:56:26.085 INFO:tasks.rados.rados.0.smithi176.stdout:12922:  expect (ObjNum 7426 snap 0 seq_num 7426)
2022-04-06T21:56:26.173 INFO:tasks.rados.rados.0.smithi176.stderr:12927: oid 4442 version is 9193 and expected 9195
2022-04-06T21:56:26.173 INFO:tasks.rados.rados.0.smithi176.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-11491-g37ef971d/rpm/el8/BUILD/ceph-17.0.0-11491-g37ef971d/src/test/osd/RadosModel.h: In function 'virtual void ReadOp::_finish(TestOp::CallbackInfo*)' thread 7f035e866700 time 2022-04-06T21:56:26.171796+0000
2022-04-06T21:56:26.174 INFO:tasks.rados.rados.0.smithi176.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-11491-g37ef971d/rpm/el8/BUILD/ceph-17.0.0-11491-g37ef971d/src/test/osd/RadosModel.h: 1526: FAILED ceph_assert(version == old_value.version)
2022-04-06T21:56:26.174 INFO:tasks.rados.rados.0.smithi176.stderr: ceph version 17.0.0-11491-g37ef971d (37ef971db5d69256a78734330cbd85e2b14fd088) quincy (dev)
2022-04-06T21:56:26.174 INFO:tasks.rados.rados.0.smithi176.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x7f0365171604]
2022-04-06T21:56:26.174 INFO:tasks.rados.rados.0.smithi176.stderr: 2: /usr/lib64/ceph/libceph-common.so.2(+0x284825) [0x7f0365171825]
2022-04-06T21:56:26.175 INFO:tasks.rados.rados.0.smithi176.stderr: 3: (ReadOp::_finish(TestOp::CallbackInfo*)+0x215) [0x55e4f302a345]
2022-04-06T21:56:26.175 INFO:tasks.rados.rados.0.smithi176.stderr: 4: /lib64/librados.so.2(+0xac246) [0x7f0366628246]
2022-04-06T21:56:26.175 INFO:tasks.rados.rados.0.smithi176.stderr: 5: /lib64/librados.so.2(+0xc7065) [0x7f0366643065]
2022-04-06T21:56:26.175 INFO:tasks.rados.rados.0.smithi176.stderr: 6: /lib64/librados.so.2(+0xc9158) [0x7f0366645158]
2022-04-06T21:56:26.175 INFO:tasks.rados.rados.0.smithi176.stderr: 7: /lib64/librados.so.2(+0xce4fa) [0x7f036664a4fa]
2022-04-06T21:56:26.176 INFO:tasks.rados.rados.0.smithi176.stderr: 8: /lib64/libstdc++.so.6(+0xc2ba3) [0x7f0363c28ba3]
2022-04-06T21:56:26.176 INFO:tasks.rados.rados.0.smithi176.stderr: 9: /lib64/libpthread.so.0(+0x81cf) [0x7f03647ec1cf]
2022-04-06T21:56:26.176 INFO:tasks.rados.rados.0.smithi176.stderr: 10: clone()

/a/yuriw-2022-04-06_16:35:43-rados-wip-yuri5-testing-2022-04-05-1720-distro-default-smithi/6780064
