
Bug #13176

test/ceph_objectstore_tool.py: common/Mutex.cc: In function 'void Mutex::Lock(bool)'

Added by Loïc Dachary over 8 years ago. Updated over 8 years ago.

Status: Resolved
Priority: Urgent
Assignee: Loïc Dachary
Category: -
Target version: -
% Done: 0%
Source: other
Regression: No
Severity: 3 - minor

Description

http://jenkins.ceph.dachary.org/job/ceph/LABELS=centos-7&&x86_64/7811/console

common/Mutex.cc: In function 'void Mutex::Lock(bool)' thread 7f2039bf2700 time 2015-09-17 02:44:03.194060
common/Mutex.cc: 105: FAILED assert(r == 0)
 ceph version 9.0.3-1683-g5767521 (5767521500d3174170a354ce50c429d36b581be1)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x7f2041252bab]
 2: (Mutex::Lock(bool)+0x19c) [0x7f2041212cfc]
 3: (FileStore::_journaled_ahead(FileStore::OpSequencer*, FileStore::Op*, Context*)+0x1ea) [0x7f204107a77a]
 4: (Context::complete(int)+0x9) [0x7f2040de8199]
 5: (Finisher::finisher_thread_entry()+0x158) [0x7f204119bfe8]
 6: (()+0x7df5) [0x7f203f22bdf5]
 7: (clone()+0x6d) [0x7f203e0bc1ad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
terminate called after throwing an instance of 'ceph::FailedAssertion'

Related issues

Related to Ceph - Bug #13897: (jewel) rados/objectstore/objectstore.yaml common/Mutex.cc: 105: FAILED assert(r == 0) Resolved 11/27/2015
Duplicated by Ceph - Bug #13113: common/Mutex.cc: 105: FAILED assert(r == 0) Duplicate 09/16/2015

Associated revisions

Revision 47f4a039 (diff)
Added by Loic Dachary over 8 years ago

ceph-objectstore-tool: delete ObjectStore::Sequencer after umount

An ObjectStore::Sequencer provided to an ObjectStore must not be
deallocated before umount. The ObjectStore::Sequencer may hold a pointer
to the instance with no reference counting. If a Context completes after
the ObjectStore::Sequencer is deleted, it could try to use it and fail.

http://tracker.ceph.com/issues/13176 Fixes: #13176

Signed-off-by: Loic Dachary <>
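
In other words (a minimal sketch, assuming a Ceph-like API; run_tool and the exact calls are illustrative, not the tool's actual code), the Sequencer must outlive umount:

    // The Sequencer handed to the store can be referenced by in-flight
    // completions (such as C_JournaledAhead) until umount() has drained
    // the journal and the finishers.
    void run_tool(ObjectStore *fs) {
      ObjectStore::Sequencer *osr = new ObjectStore::Sequencer("tool");
      // ... queue transactions against *osr ...
      // WRONG: deleting osr here lets a late completion lock a destroyed
      // mutex, which is the failed assert seen in this bug.
      fs->umount();   // drains everything that may still point at osr
      delete osr;     // safe only after umount
    }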

History

#1 Updated by Loïc Dachary over 8 years ago

  • Priority changed from Normal to Urgent

David, does that ring a bell? I'm setting it as Urgent because it creates noise in make check (it happened twice in the past 72 hours, so not too much noise ;-).

#4 Updated by David Zafman over 8 years ago

I have never seen this.

#5 Updated by Loïc Dachary over 8 years ago

And again: http://jenkins.ceph.dachary.org/job/ceph/LABELS=centos-7&&x86_64/7923/console
It's weird that it was quiet for a few days and is now coming back...

#8 Updated by Loïc Dachary over 8 years ago

It looks like running the test about 100 times triggers the problem reliably.

#9 Updated by Loïc Dachary over 8 years ago

This is an interesting occurrence: http://jenkins.ceph.dachary.org/job/ceph/LABELS=centos-7&&x86_64/7976/console
It happens when ceph_objectstore_tool is used, but as part of another test, not the ceph_objectstore_tool.py test.

#10 Updated by Loïc Dachary over 8 years ago

For the archive, and to get a better idea of the frequency, I'm recording all occurrences at http://jenkins.ceph.dachary.org/job/ceph/LABELS=centos-7&&x86_64/7979/

#13 Updated by Loïc Dachary over 8 years ago

Running the test in a loop to reproduce it. If it does not happen in ~50 runs, then it's probably a concurrency problem when tests are run with -j12.

while : ; do test/ceph_objectstore_tool.py > /tmp/out 2>&1 || break ; echo -n . ; done

#18 Updated by Loïc Dachary over 8 years ago

Running ceph_objectstore_tool.py in a loop does not reproduce the problem. However

while : ; do timeout 3600 make -j12 check > out3 2>&1 || break ; echo -n . ; done

reproduces the issue after a dozen runs or so. Since the problem also happens when ceph_objectstore_tool is called in a context different from ceph_objectstore_tool.py (but not as often), a concurrency problem (tests stepping on each other because they use the same directory or port) is less likely than a race condition that reveals itself when the conditions are right. Such a race condition would also explain why the problem cannot be reproduced by running test/ceph_objectstore_tool.py alone: in that case the load of the machine does not vary significantly.

#19 Updated by Loïc Dachary over 8 years ago

The Mutex::Lock called from FileStore::_journaled_ahead is inlined from OpSequencer::dequeue_journal. There seems to be a window of opportunity for the following to happen (see the sketch after this list):

  • a new C_JournaledAhead is queued
  • shutdown() calls flush(), which returns from ondisk_finisher.wait_for_empty() although _journaled_ahead is still running (or about to run) from C_JournaledAhead
  • shutdown() gets the OpSequencer lock before _journaled_ahead
  • _journaled_ahead fails to get the OpSequencer lock and aborts
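
A minimal sketch of that window, using a toy finisher (illustrative names, not Ceph's actual Finisher class): wait_for_empty() can return as soon as the queue is drained, even though the callback that was just dequeued is still executing.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>

    struct MiniFinisher {
      std::mutex m;
      std::condition_variable cv;
      std::queue<std::function<void()>> q;
      bool stop = false;  // set (under m) by a shutdown path, not shown

      void queue(std::function<void()> c) {
        std::lock_guard<std::mutex> l(m);
        q.push(std::move(c));
        cv.notify_all();
      }
      void wait_for_empty() {  // what flush() relies on
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [this] { return q.empty(); });
        // Returns as soon as the queue is drained; the callback popped
        // below may still be running, or about to run.
      }
      void run() {  // the finisher thread
        for (;;) {
          std::function<void()> c;
          {
            std::unique_lock<std::mutex> l(m);
            cv.wait(l, [this] { return stop || !q.empty(); });
            if (q.empty()) return;
            c = std::move(q.front());
            q.pop();
            cv.notify_all();  // wakes wait_for_empty() here ...
          }
          c();  // ... while the completion still runs; if it now locks a
                // mutex that shutdown() already destroyed, we abort.
        }
      }
    };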

#20 Updated by Loïc Dachary over 8 years ago

The Mutex::Lock called from FileStore::_journaled_ahead is inlined from the ondisk_finisher's Finisher::queue. There seems to be a window of opportunity for the following to happen:

  • a new C_JournaledAhead is queued
  • shutdown() calls flush(), which calls ondisk_finisher.wait_for_empty(), which holds the ondisk_finisher lock while C_JournaledAhead is still in the queue
  • C_JournaledAhead is dequeued and calls FileStore::_journaled_ahead, which calls ondisk_finisher.queue(ondisk), which aborts because it tries to get the ondisk_finisher lock

#21 Updated by Loïc Dachary over 8 years ago

With dequeue_journal marked noinline, so that it shows up in the backtrace instead of being inlined into _journaled_ahead:

    void __attribute__ ((noinline))  dequeue_journal(list<Context*> *to_queue) {

and preparing a cluster with
MON=1 OSD=3 ./vstart.sh -X -d -n -l mon osd
./rados -p rbd put group /etc/group
./stop.sh

then running
while : ; do ./ceph-objectstore-tool --debug-filestore=20 --data-path dev/osd0 --journal-path dev/osd0.journal group set-bytes /etc/group > out10 2>&1 || break ; done

and compiling / running tests in parallel to vary the load of the machine, the crash eventually reproduces:
common/Mutex.cc: In function 'void Mutex::Lock(bool)' thread 7fce5bf7b700 time 2015-09-29 11:19:03.222057
common/Mutex.cc: 105: FAILED assert(r == 0)
 ceph version 9.0.3-1870-gfd861bb (fd861bb2504298590c18814be787d93a195c1877)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x7fce635dc6eb]
 2: (Mutex::Lock(bool)+0x19c) [0x7fce6359c76c]
 3: (FileStore::OpSequencer::dequeue_journal(std::list<Context*, std::allocator<Context*> >*)+0x22) [0x7fce6342b202]
 4: (FileStore::_journaled_ahead(FileStore::OpSequencer*, FileStore::Op*, Context*)+0x93) [0x7fce63403a53]
 5: (Context::complete(int)+0x9) [0x7fce631718f9]
 6: (Finisher::finisher_thread_entry()+0x158) [0x7fce63525a58]
 7: (()+0x7df5) [0x7fce615b4df5]
 8: (clone()+0x6d) [0x7fce604451ad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

#22 Updated by Loïc Dachary over 8 years ago

  • Assignee changed from David Zafman to Loïc Dachary

The recent-events dump at the time of the assert shows _journaled_ahead firing, then umount/flush starting, and the finisher thread hitting the assert moments later:
   -38> 2015-09-29 11:58:07.466903 7ff939901800 10 filestore(dev/osd0) list_collections
   -37> 2015-09-29 11:58:07.467198 7ff939901800  5 filestore(dev/osd0) queue_transactions new 0x7ff93e27a3c0 osr(do_set_bytes 0x7fff89152930)
   -36> 2015-09-29 11:58:07.467223 7ff939901800 10 journal _op_journal_transactions_prepare 0x7fff89152a50
   -35> 2015-09-29 11:58:07.467242 7ff939901800 10 journal op_submit_start 18001
   -34> 2015-09-29 11:58:07.467247 7ff939901800  5 filestore(dev/osd0) queue_transactions (writeahead) 18001 0x7fff89152a50
   -33> 2015-09-29 11:58:07.467251 7ff939901800 10 journal op_journal_transactions 18001
   -32> 2015-09-29 11:58:07.467253 7ff939901800  5 journal submit_entry seq 18001 len 790 (0x7ff93e288b50)
   -31> 2015-09-29 11:58:07.467263 7ff939901800 10 journal op_submit_finish 18001
   -30> 2015-09-29 11:58:07.467273 7ff933226700 20 journal write_thread_entry woke up
   -29> 2015-09-29 11:58:07.467281 7ff933226700 10 journal room 104853503 max_size 104857600 pos 73748480 header.start 73748480 top 4096
   -28> 2015-09-29 11:58:07.467287 7ff933226700 10 journal check_for_full at 73748480 : 4096 < 104853503
   -27> 2015-09-29 11:58:07.467290 7ff933226700 15 journal prepare_single_write 1 will write 73748480 : seq 18001 len 790 -> 4096 (head 40 pre_pad 0 ebl 790 post_pad 3226 tail 40) (ebl alignment -1)
   -26> 2015-09-29 11:58:07.467311 7ff933226700 20 journal prepare_multi_write queue_pos now 73752576
   -25> 2015-09-29 11:58:07.467318 7ff933226700 15 journal do_write writing 73748480~4096
   -24> 2015-09-29 11:58:07.467330 7ff933226700 10 journal align_bl total memcopy: 4096
   -23> 2015-09-29 11:58:07.467913 7ff933226700 20 journal do_write latency 0.000592
   -22> 2015-09-29 11:58:07.467924 7ff933226700 20 journal do_write queueing finishers through seq 18001
   -21> 2015-09-29 11:58:07.467927 7ff933226700 10 journal queue_completions_thru seq 18001 queueing seq 18001 0x7ff93e288b50 lat 0.000669
   -20> 2015-09-29 11:58:07.467949 7ff933226700  5 journal put_throttle finished 1 ops and 790 bytes, now 0 ops and 0 bytes
   -19> 2015-09-29 11:58:07.467969 7ff933226700 20 journal write_thread_entry going to sleep
   -18> 2015-09-29 11:58:07.467966 7ff932a25700  5 filestore(dev/osd0) _journaled_ahead 0x7ff93e2300d0 seq 18001 osr(do_set_bytes 0x7fff89152930) 0x7fff89152a50
   -17> 2015-09-29 11:58:07.468000 7ff932a25700  5 filestore(dev/osd0) queue_op 0x7ff93e2300d0 seq 18001 osr(do_set_bytes 0x7fff89152930) 776 bytes   (queue has 1 ops and 776 bytes)
   -16> 2015-09-29 11:58:07.468072 7ff931a23700 10 journal op_apply_start 18001 open_ops 0 -> 1
   -15> 2015-09-29 11:58:07.468080 7ff931a23700  5 filestore(dev/osd0) _do_op 0x7ff93e2300d0 seq 18001 osr(do_set_bytes 0x7fff89152930)/0x7fff89152930 start
   -14> 2015-09-29 11:58:07.468091 7ff931a23700 10 filestore(dev/osd0) _do_transaction on 0x7fff89152a50
   -13> 2015-09-29 11:58:07.468120 7ff931a23700 15 filestore(dev/osd0) touch 0.0_head/0/148c0040/group/head
   -12> 2015-09-29 11:58:07.468137 7ff931a23700 10 filestore(dev/osd0) touch 0.0_head/0/148c0040/group/head = 0
   -11> 2015-09-29 11:58:07.468148 7ff931a23700 15 filestore(dev/osd0) truncate 0.0_head/0/148c0040/group/head size 0
   -10> 2015-09-29 11:58:07.468208 7ff931a23700 10 filestore(dev/osd0) truncate 0.0_head/0/148c0040/group/head size 0 = 0
    -9> 2015-09-29 11:58:07.468222 7ff931a23700 15 filestore(dev/osd0) write 0.0_head/0/148c0040/group/head 0~428
    -8> 2015-09-29 11:58:07.468257 7ff931a23700 10 filestore(dev/osd0) write 0.0_head/0/148c0040/group/head 0~428 = 428
    -7> 2015-09-29 11:58:07.468266 7ff931a23700 10 journal op_apply_finish 18001 open_ops 1 -> 0, max_applied_seq 18000 -> 18001
    -6> 2015-09-29 11:58:07.468269 7ff931a23700 10 filestore(dev/osd0) _do_op 0x7ff93e2300d0 seq 18001 r = 0, finisher 0x7ff93e288af0 0
    -5> 2015-09-29 11:58:07.468274 7ff931a23700 10 filestore(dev/osd0) _finish_op 0x7ff93e2300d0 seq 18001 osr(do_set_bytes 0x7fff89152930)/0x7fff89152930 lat 0.001058
    -4> 2015-09-29 11:58:07.468341 7ff939901800  5 filestore(dev/osd0) umount dev/osd0
    -3> 2015-09-29 11:58:07.468347 7ff939901800 10 filestore(dev/osd0) flush
    -2> 2015-09-29 11:58:07.468349 7ff939901800 10 journal waiting for completions to empty
    -1> 2015-09-29 11:58:07.468352 7ff939901800 10 journal flush waiting for finisher
     0> 2015-09-29 11:58:07.470718 7ff932a25700 -1 common/Mutex.cc: In function 'void Mutex::Lock(bool)' thread 7ff932a25700 time 2015-09-29 11:58:07.468491

#23 Updated by Loïc Dachary over 8 years ago

When things go as expected, the following happens:

2015-09-29 12:05:11.247606 7f0b43f9d800 10 filestore(dev/osd0) list_collections
2015-09-29 12:05:11.247796 7f0b43f9d800  5 filestore(dev/osd0) queue_transactions new 0x7f0b48a1e3c0 osr(do_set_bytes 0x7fff94aa3790)
2015-09-29 12:05:11.247816 7f0b43f9d800 10 journal _op_journal_transactions_prepare 0x7fff94aa38b0
2015-09-29 12:05:11.247830 7f0b43f9d800 10 journal op_submit_start 18002
2015-09-29 12:05:11.247834 7f0b43f9d800  5 filestore(dev/osd0) queue_transactions (writeahead) 18002 0x7fff94aa38b0
2015-09-29 12:05:11.247837 7f0b43f9d800 10 journal op_journal_transactions 18002
2015-09-29 12:05:11.247838 7f0b43f9d800  5 journal submit_entry seq 18002 len 790 (0x7f0b48a2cb50)
2015-09-29 12:05:11.247846 7f0b43f9d800 10 journal op_submit_finish 18002
2015-09-29 12:05:11.247880 7f0b3d8c2700 20 journal write_thread_entry woke up
2015-09-29 12:05:11.247894 7f0b3d8c2700 10 journal room 104849407 max_size 104857600 pos 73752576 header.start 73748480 top 4096
2015-09-29 12:05:11.247900 7f0b3d8c2700 10 journal check_for_full at 73752576 : 4096 < 104849407
2015-09-29 12:05:11.247902 7f0b3d8c2700 15 journal prepare_single_write 1 will write 73752576 : seq 18002 len 790 -> 4096 (head 40 pre_pad 0 ebl 790 post_pad 3226 tail 40) (ebl alignment -1)
2015-09-29 12:05:11.247925 7f0b3d8c2700 20 journal prepare_multi_write queue_pos now 73756672
2015-09-29 12:05:11.247927 7f0b3d8c2700 15 journal do_write writing 73752576~4096
2015-09-29 12:05:11.247938 7f0b3d8c2700 10 journal align_bl total memcopy: 4096
2015-09-29 12:05:11.248495 7f0b3d8c2700 20 journal do_write latency 0.000566
2015-09-29 12:05:11.248508 7f0b3d8c2700 20 journal do_write queueing finishers through seq 18002
2015-09-29 12:05:11.248512 7f0b3d8c2700 10 journal queue_completions_thru seq 18002 queueing seq 18002 0x7f0b48a2cb50 lat 0.000669
2015-09-29 12:05:11.248533 7f0b3d8c2700  5 journal put_throttle finished 1 ops and 790 bytes, now 0 ops and 0 bytes
2015-09-29 12:05:11.248558 7f0b3d8c2700 20 journal write_thread_entry going to sleep
2015-09-29 12:05:11.248538 7f0b3d0c1700  5 filestore(dev/osd0) _journaled_ahead 0x7f0b489d40d0 seq 18002 osr(do_set_bytes 0x7fff94aa3790) 0x7fff94aa38b0
2015-09-29 12:05:11.248585 7f0b3d0c1700  5 filestore(dev/osd0) queue_op 0x7f0b489d40d0 seq 18002 osr(do_set_bytes 0x7fff94aa3790) 776 bytes   (queue has 1 ops and 776 bytes)
2015-09-29 12:05:11.248604 7f0b3c0bf700 10 journal op_apply_start 18002 open_ops 0 -> 1
2015-09-29 12:05:11.248612 7f0b3c0bf700  5 filestore(dev/osd0) _do_op 0x7f0b489d40d0 seq 18002 osr(do_set_bytes 0x7fff94aa3790)/0x7fff94aa3790 start
2015-09-29 12:05:11.248623 7f0b3c0bf700 10 filestore(dev/osd0) _do_transaction on 0x7fff94aa38b0
2015-09-29 12:05:11.248646 7f0b3c0bf700 15 filestore(dev/osd0) touch 0.0_head/0/148c0040/group/head
2015-09-29 12:05:11.248663 7f0b3c0bf700 10 filestore(dev/osd0) touch 0.0_head/0/148c0040/group/head = 0
2015-09-29 12:05:11.248671 7f0b3c0bf700 15 filestore(dev/osd0) truncate 0.0_head/0/148c0040/group/head size 0
2015-09-29 12:05:11.248710 7f0b3c0bf700 10 filestore(dev/osd0) truncate 0.0_head/0/148c0040/group/head size 0 = 0
2015-09-29 12:05:11.248723 7f0b3c0bf700 15 filestore(dev/osd0) write 0.0_head/0/148c0040/group/head 0~428
2015-09-29 12:05:11.248753 7f0b3c0bf700 10 filestore(dev/osd0) write 0.0_head/0/148c0040/group/head 0~428 = 428
2015-09-29 12:05:11.248761 7f0b3c0bf700 10 journal op_apply_finish 18002 open_ops 1 -> 0, max_applied_seq 18001 -> 18002
2015-09-29 12:05:11.248763 7f0b3c0bf700 10 filestore(dev/osd0) _do_op 0x7f0b489d40d0 seq 18002 r = 0, finisher 0x7f0b48a2caf0 0
2015-09-29 12:05:11.248768 7f0b3c0bf700 10 filestore(dev/osd0) _finish_op 0x7f0b489d40d0 seq 18002 osr(do_set_bytes 0x7fff94aa3790)/0x7fff94aa3790 lat 0.000958
2015-09-29 12:05:11.248814 7f0b43f9d800  5 filestore(dev/osd0) umount dev/osd0
2015-09-29 12:05:11.248822 7f0b43f9d800 10 filestore(dev/osd0) flush
2015-09-29 12:05:11.248823 7f0b43f9d800 10 journal waiting for completions to empty
2015-09-29 12:05:11.248825 7f0b43f9d800 10 journal flush waiting for finisher
2015-09-29 12:05:11.248827 7f0b43f9d800 10 journal flush done
2015-09-29 12:05:11.248829 7f0b43f9d800 10 filestore(dev/osd0) flush draining ondisk finisher
2015-09-29 12:05:11.248830 7f0b43f9d800 10 filestore(dev/osd0) _flush_op_queue draining op tp
2015-09-29 12:05:11.248832 7f0b43f9d800 10 filestore(dev/osd0) _flush_op_queue waiting for apply finisher
2015-09-29 12:05:11.248833 7f0b43f9d800 10 filestore(dev/osd0) flush complete
2015-09-29 12:05:11.248839 7f0b43f9d800 10 filestore(dev/osd0) start_sync
2015-09-29 12:05:11.248840 7f0b43f9d800 10 filestore(dev/osd0) sync waiting
2015-09-29 12:05:11.248862 7f0b3e0c3700 20 filestore(dev/osd0) sync_entry force_sync set
2015-09-29 12:05:11.248886 7f0b3e0c3700 10 journal commit_start max_applied_seq 18002, open_ops 0
2015-09-29 12:05:11.248889 7f0b3e0c3700 10 journal commit_start blocked, all open_ops have completed
2015-09-29 12:05:11.248890 7f0b3e0c3700 10 journal commit_start committing 18002, still blocked
2015-09-29 12:05:11.248892 7f0b3e0c3700 10 journal commit_start
2015-09-29 12:05:11.248902 7f0b3e0c3700 15 filestore(dev/osd0) sync_entry committing 18002
2015-09-29 12:05:11.248905 7f0b3e0c3700 10 journal commit_started committing 18002, unblocking
2015-09-29 12:05:11.248915 7f0b3e0c3700 20 filestore dbobjectmap: seq is 9
2015-09-29 12:05:11.249414 7f0b3e0c3700 15 genericfilestorebackend(dev/osd0) syncfs: doing a full sync (syncfs(2) if possible)
2015-09-29 12:05:11.289237 7f0b3e0c3700 10 filestore(dev/osd0) sync_entry commit took 0.040342, interval was 0.043783
2015-09-29 12:05:11.289270 7f0b3e0c3700 10 journal commit_finish thru 18002
2015-09-29 12:05:11.289274 7f0b3e0c3700  5 journal committed_thru 18002 (last_committed_seq 18000)
2015-09-29 12:05:11.289278 7f0b3e0c3700 10 journal header: block_size 4096 alignment 4096 max_size 104857600
2015-09-29 12:05:11.289280 7f0b3e0c3700 10 journal header: start 73756672
2015-09-29 12:05:11.289281 7f0b3e0c3700 10 journal  write_pos 73756672
2015-09-29 12:05:11.289283 7f0b3e0c3700 10 journal committed_thru done
2015-09-29 12:05:11.289296 7f0b3e0c3700 15 filestore(dev/osd0) sync_entry committed to op_seq 18002
2015-09-29 12:05:11.289308 7f0b3e0c3700 20 filestore(dev/osd0) sync_entry waiting for max_interval 5.000000
2015-09-29 12:05:11.289313 7f0b43f9d800 10 filestore(dev/osd0) sync done
2015-09-29 12:05:11.289321 7f0b43f9d800 10 filestore(dev/osd0) do_force_sync
2015-09-29 12:05:11.289330 7f0b3e0c3700 20 filestore(dev/osd0) sync_entry force_sync set
2015-09-29 12:05:11.289335 7f0b3e0c3700 10 journal commit_start max_applied_seq 18002, open_ops 0
2015-09-29 12:05:11.289337 7f0b3e0c3700 10 journal commit_start blocked, all open_ops have completed
2015-09-29 12:05:11.289340 7f0b3e0c3700 10 journal commit_start nothing to do
2015-09-29 12:05:11.289342 7f0b3e0c3700 10 journal commit_start
2015-09-29 12:05:11.289496 7f0b43f9d800 10 journal journal_stop
2015-09-29 12:05:11.289553 7f0b43f9d800  1 journal close dev/osd0.journal
2015-09-29 12:05:11.289567 7f0b3d8c2700 20 journal write_thread_entry woke up
2015-09-29 12:05:11.289578 7f0b3d8c2700 20 journal prepare_multi_write queue_pos now 73756672
2015-09-29 12:05:11.289609 7f0b3d8c2700 15 journal do_write writing 73756672~0 + header
2015-09-29 12:05:11.289784 7f0b3d8c2700 20 journal do_write latency 0.000168
2015-09-29 12:05:11.289792 7f0b3d8c2700 20 journal do_write queueing finishers through seq 18002
2015-09-29 12:05:11.289798 7f0b3d8c2700  5 journal put_throttle finished 0 ops and 0 bytes, now 0 ops and 0 bytes
2015-09-29 12:05:11.289802 7f0b3d8c2700 10 journal write_thread_entry finish
2015-09-29 12:05:11.289902 7f0b43f9d800 15 journal do_write writing 73756672~0 + header
2015-09-29 12:05:11.290125 7f0b43f9d800 20 journal do_write latency 0.000211
2015-09-29 12:05:11.290136 7f0b43f9d800 20 journal do_write queueing finishers through seq 18002
2015-09-29 12:05:11.290139 7f0b43f9d800 20 journal write_header_sync finish

#24 Updated by Loïc Dachary over 8 years ago

The value of r is -22:

#define    EINVAL        22    /* Invalid argument */
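
This is consistent with locking a mutex that has already been destroyed. A minimal sketch of that failure mode (strictly undefined behavior, so the exact error is not guaranteed; on glibc it typically comes back as EINVAL):

    #include <cassert>
    #include <cerrno>
    #include <cstdio>
    #include <pthread.h>

    int main() {
      pthread_mutex_t m;
      pthread_mutex_init(&m, nullptr);
      pthread_mutex_destroy(&m);       // the Sequencer's mutex is gone
      int r = pthread_mutex_lock(&m);  // what the late completion does
      std::printf("pthread_mutex_lock returned %d (EINVAL is %d)\n", r, EINVAL);
      assert(r == 0);                  // fails like common/Mutex.cc:105
      return 0;
    }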

#25 Updated by Loïc Dachary over 8 years ago

  • Status changed from 12 to In Progress

#26 Updated by Loïc Dachary over 8 years ago

  • Status changed from In Progress to Fix Under Review

#27 Updated by Loïc Dachary over 8 years ago

  • Status changed from Fix Under Review to 7

#28 Updated by Loïc Dachary over 8 years ago

An ObjectStore::Sequencer provided to an ObjectStore must not be deallocated before umount. The ObjectStore::Sequencer may hold a pointer to the instance with no reference counting. If a Context completes after the ObjectStore::Sequencer is deleted, it could try to use it and fail.

#29 Updated by Loïc Dachary over 8 years ago

  • Status changed from 7 to Resolved

#30 Updated by Loïc Dachary over 8 years ago

  • Related to Bug #13897: (jewel) rados/objectstore/objectstore.yaml common/Mutex.cc: 105: FAILED assert(r == 0) added
