Bug #12469
RadosModel.h: read returned error code -2 (hammer)
Description
While running integration tests of the hammer backports for v0.94.3, we ran into a failure at
http://pulpito.ceph.com/loic-2015-07-20_16:39:35-rados-hammer-backports---basic-multi/
The details of the commits in this run, applied on top of hammer v0.94.2, can be found at http://tracker.ceph.com/issues/11990#teuthology-run-commit-1e841b08cc9534e654de50967f06113fb7383b0chammer-backports-July-20
Detailed traceback below:
2015-07-23T05:10:40.562 INFO:teuthology.orchestra.run.burnupi10:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd reweight 4 0.70314715106'
2015-07-23T05:10:41.180 INFO:teuthology.orchestra.run.burnupi10.stderr:reweighted osd.4 to 0.703147 (8460812)
2015-07-23T05:10:41.417 INFO:tasks.rados.rados.0.burnupi10.stdout:617: finishing write tid 1 to burnupi1018920-31
2015-07-23T05:10:41.417 INFO:tasks.rados.rados.0.burnupi10.stdout:618: expect (ObjNum 201 snap 46 seq_num 201)
2015-07-23T05:10:44.593 INFO:tasks.rados.rados.0.burnupi10.stdout:617: finishing write tid 2 to burnupi1018920-31
2015-07-23T05:10:44.593 INFO:tasks.rados.rados.0.burnupi10.stdout:617: finishing write tid 3 to burnupi1018920-31
2015-07-23T05:10:44.593 INFO:tasks.rados.rados.0.burnupi10.stdout:617: finishing write tid 5 to burnupi1018920-31
2015-07-23T05:10:44.593 INFO:tasks.rados.rados.0.burnupi10.stdout:617: finishing write tid 6 to burnupi1018920-31
2015-07-23T05:10:44.594 INFO:tasks.rados.rados.0.burnupi10.stdout:update_object_version oid 31 v 85 (ObjNum 245 snap 60 seq_num 245) dirty exists
2015-07-23T05:10:44.594 INFO:tasks.rados.rados.0.burnupi10.stdout:617: left oid 31 (ObjNum 245 snap 60 seq_num 245)
2015-07-23T05:10:44.594 INFO:tasks.rados.rados.0.burnupi10.stdout:620: expect (ObjNum 241 snap 58 seq_num 241)
2015-07-23T05:10:45.898 INFO:tasks.rados.rados.0.burnupi10.stdout:update_object_version oid 7 v 93 (ObjNum 166 snap 38 seq_num 166) dirty exists
2015-07-23T05:10:45.898 INFO:tasks.rados.rados.0.burnupi10.stdout:617: done (11 left)
2015-07-23T05:10:45.898 INFO:tasks.rados.rados.0.burnupi10.stdout:618: done (10 left)
2015-07-23T05:10:45.898 INFO:tasks.rados.rados.0.burnupi10.stdout:619: done (9 left)
2015-07-23T05:10:45.898 INFO:tasks.rados.rados.0.burnupi10.stdout:620: done (8 left)
2015-07-23T05:10:45.898 INFO:tasks.rados.rados.0.burnupi10.stdout:621: done (7 left)
2015-07-23T05:10:45.899 INFO:tasks.rados.rados.0.burnupi10.stdout:622: done (6 left)
2015-07-23T05:10:45.899 INFO:tasks.rados.rados.0.burnupi10.stdout:624: done (5 left)
2015-07-23T05:10:45.899 INFO:tasks.rados.rados.0.burnupi10.stdout:625: rollback oid 25 current snap is 61
2015-07-23T05:10:45.899 INFO:tasks.rados.rados.0.burnupi10.stdout:rollback oid 25 to 59
2015-07-23T05:10:45.899 INFO:tasks.rados.rados.0.burnupi10.stdout:626: write oid 7 current snap is 61
2015-07-23T05:10:45.899 INFO:tasks.rados.rados.0.burnupi10.stdout:626: seq_num 246 ranges {517847=413147,1537111=726365,2713101=644542,3717846=1}
2015-07-23T05:10:45.913 INFO:tasks.rados.rados.0.burnupi10.stdout:626: writing burnupi1018920-7 from 517847 to 930994 tid 1
2015-07-23T05:10:45.940 INFO:tasks.rados.rados.0.burnupi10.stdout:626: writing burnupi1018920-7 from 1537111 to 2263476 tid 2
2015-07-23T05:10:45.963 INFO:tasks.rados.rados.0.burnupi10.stdout:626: writing burnupi1018920-7 from 2713101 to 3357643 tid 3
2015-07-23T05:10:45.966 INFO:tasks.rados.rados.0.burnupi10.stdout:626: writing burnupi1018920-7 from 3717846 to 3717847 tid 4
2015-07-23T05:10:45.966 INFO:tasks.rados.rados.0.burnupi10.stdout:627: write oid 47 current snap is 61
2015-07-23T05:10:45.966 INFO:tasks.rados.rados.0.burnupi10.stdout:627: seq_num 247 ranges {461653=420023,1533867=547984,2461652=1}
2015-07-23T05:10:45.982 INFO:tasks.rados.rados.0.burnupi10.stdout:627: writing burnupi1018920-47 from 461653 to 881676 tid 1
2015-07-23T05:10:46.003 INFO:tasks.rados.rados.0.burnupi10.stdout:627: writing burnupi1018920-47 from 1533867 to 2081851 tid 2
2015-07-23T05:10:46.006 INFO:tasks.rados.rados.0.burnupi10.stdout:627: writing burnupi1018920-47 from 2461652 to 2461653 tid 3
2015-07-23T05:10:46.007 INFO:tasks.rados.rados.0.burnupi10.stdout:update_object_version oid 25 v 48 (ObjNum 219 snap 51 seq_num 219) dirty exists
2015-07-23T05:10:46.007 INFO:tasks.rados.rados.0.burnupi10.stdout:625: done (7 left)
2015-07-23T05:10:46.008 INFO:tasks.rados.rados.0.burnupi10.stdout:628: copy_from oid 30 from oid 8 current snap is 61
2015-07-23T05:10:46.008 INFO:tasks.rados.rados.0.burnupi10.stdout:627: finishing write tid 1 to burnupi1018920-47
2015-07-23T05:10:46.008 INFO:tasks.rados.rados.0.burnupi10.stdout:629: read oid 9 snap -1
2015-07-23T05:10:46.008 INFO:tasks.rados.rados.0.burnupi10.stdout:629: expect deleted
2015-07-23T05:10:46.008 INFO:tasks.rados.rados.0.burnupi10.stdout:630: read oid 14 snap -1
2015-07-23T05:10:46.008 INFO:tasks.rados.rados.0.burnupi10.stdout:630: expect (ObjNum 191 snap 46 seq_num 191)
2015-07-23T05:10:46.008 INFO:tasks.rados.rados.0.burnupi10.stdout:631: delete oid 25 current snap is 61
2015-07-23T05:10:46.009 INFO:tasks.rados.rados.0.burnupi10.stdout:627: finishing write tid 2 to burnupi1018920-47
2015-07-23T05:10:46.009 INFO:tasks.rados.rados.0.burnupi10.stdout:627: finishing write tid 3 to burnupi1018920-47
2015-07-23T05:10:46.010 INFO:tasks.rados.rados.0.burnupi10.stderr:630: Error: oid 14 read returned error code -2
2015-07-23T05:10:46.010 INFO:tasks.rados.rados.0.burnupi10.stderr:./test/osd/RadosModel.h: In function 'virtual void ReadOp::_finish(TestOp::CallbackInfo*)' thread 7f212ffff700 time 2015-07-23 05:10:46.009769
2015-07-23T05:10:46.010 INFO:tasks.rados.rados.0.burnupi10.stderr:./test/osd/RadosModel.h: 1094: FAILED assert(0)
2015-07-23T05:10:46.029 INFO:tasks.rados.rados.0.burnupi10.stderr: ceph version 0.94.2-197-g1e841b0 (1e841b08cc9534e654de50967f06113fb7383b0c)
2015-07-23T05:10:46.029 INFO:tasks.rados.rados.0.burnupi10.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x4df68b]
2015-07-23T05:10:46.029 INFO:tasks.rados.rados.0.burnupi10.stderr: 2: (ReadOp::_finish(TestOp::CallbackInfo*)+0xec) [0x4d001c]
2015-07-23T05:10:46.029 INFO:tasks.rados.rados.0.burnupi10.stderr: 3: (()+0xb3dad) [0x7f214deb0dad]
2015-07-23T05:10:46.030 INFO:tasks.rados.rados.0.burnupi10.stderr: 4: (()+0x8faf9) [0x7f214de8caf9]
2015-07-23T05:10:46.030 INFO:tasks.rados.rados.0.burnupi10.stderr: 5: (()+0x153298) [0x7f214df50298]
2015-07-23T05:10:46.030 INFO:tasks.rados.rados.0.burnupi10.stderr: 6: (()+0x8182) [0x7f214d9d1182]
2015-07-23T05:10:46.030 INFO:tasks.rados.rados.0.burnupi10.stderr: 7: (clone()+0x6d) [0x7f214c35738d]
2015-07-23T05:10:46.030 INFO:tasks.rados.rados.0.burnupi10.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2015-07-23T05:10:46.030 INFO:tasks.rados.rados.0.burnupi10.stderr:terminate called after throwing an instance of 'ceph::FailedAssertion'
2015-07-23T05:10:46.085 ERROR:teuthology.run_tasks:Manager failed: rados
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 125, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_hammer/tasks/rados.py", line 196, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 331, in get
    raise self._exception
CommandCrashedError: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --pool unique_pool_0'
Related issues
History
#1 Updated by Abhishek Lekshmanan over 8 years ago
Rescheduled a run against the hammer branch (as per Loïc's comments) to check whether the bug is reproducible against plain hammer as well:
http://pulpito.ceph.com/abhi-2015-07-26_15:39:50-rados-hammer---basic-multi/
#2 Updated by Loïc Dachary over 8 years ago
run=loic-2015-07-20_16:39:35-rados-hammer-backports---basic-multi
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
echo $filter
rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/fastclose.yaml thrashers/pggrow.yaml workloads/snaps-few-objects.yaml}
teuthology-openstack --verbose --key-name loic --priority 50 --suite rados --filter="$filter" --suite-branch hammer --distro ubuntu --ceph hammer
teuthology-openstack --verbose --key-name loic --priority 50 --suite rados --filter="$filter" --suite-branch hammer --distro ubuntu --ceph hammer-backports
- pass hammer run
- pass hammer-backports run
#3 Updated by Loïc Dachary over 8 years ago
- Subject changed from Failed assert(0) in rados-hammer-multi run to RadosModel.h: read returned error code -2
- Status changed from New to 12
https://github.com/ceph/ceph/blob/hammer/src/test/osd/RadosModel.h#L1091
if (!(err == -ENOENT && old_value.deleted())) {
  cerr << num << ": Error: oid " << oid << " read returned error code " << err << std::endl;
  assert(0);
}
The error code -2 means err == -ENOENT, so for the assert to fire old_value.deleted() must have been false: the test model expected the object to exist, yet the read reported it as missing.
#4 Updated by Loïc Dachary over 8 years ago
I don't see anything in the commits on the hammer integration branch that could cause this kind of problem (they are almost entirely rgw / librbd / build changes). Since a similar error (#7985) was rejected a while ago by Sam, it's worth asking him why it was rejected before digging further.
#5 Updated by Loïc Dachary over 8 years ago
- Subject changed from RadosModel.h: read returned error code -2 to RadosModel.h: read returned error code -2 (hammer)
- Target version deleted (v0.94.3)
- Release set to hammer
#6 Updated by Sage Weil over 8 years ago
This appears to be caused by bad past_intervals on PG 1.16, resulting from the ceph-objectstore-tool import/export:
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871127 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(156-164 up [](-1) acting [](-1))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871157 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(155-155 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871177 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(154-154 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871197 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(153-153 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871217 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(152-152 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871236 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(151-151 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871256 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(150-150 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871275 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(149-149 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871299 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(148-148 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871319 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(147-147 up [](5) acting [](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871339 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(146-146 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871360 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(145-145 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871390 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(144-144 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871410 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(143-143 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871430 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(142-142 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871451 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(141-141 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871471 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(140-140 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871491 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(139-139 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871518 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(138-138 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871539 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(137-137 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871560 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(136-136 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871580 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(135-135 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871600 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(134-134 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871620 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(133-133 up [0](5) acting [0](5))
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871640 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior interval(104-132 up [5,0](5) acting [5,0](5) maybe_went_rw)
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871661 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior prior osd.0 is down
remote/plana11/log/ceph-osd.5.log.gz:2015-07-23 05:10:25.871679 7fb97a47d700 10 osd.5 pg_epoch: 165 pg[1.16( empty local-les=0 n=0 ec=5 les/c 104/104 165/165/165) [5] r=0 lpr=165 pi=104-164/25 crt=0'0 mlcod 0'0 peering] PriorSet: build_prior final: probe 5 down 0 blocked_by {}
This prior set is totally wrong.
#7 Updated by Loïc Dachary over 8 years ago
It's a bug in ceph-objectstore-tool and is not a blocker for a hammer release.
#8 Updated by Loïc Dachary over 8 years ago
- Assignee set to David Zafman
Feel free to unassign yourself if that's not for you :-)
#9 Updated by David Zafman about 8 years ago
- Related to Backport #15171: hammer: osd: corruption when min_read_recency_for_promote > 1 added
#10 Updated by David Zafman about 8 years ago
- Assignee deleted (David Zafman)
Added the relation to #15171 at this point only because it exhibits the same symptom.
#11 Updated by Sage Weil almost 7 years ago
- Status changed from 12 to Can't reproduce