Bug #42828

rbd journal err assert(ictx->journal != __null) when release exclusive_lock

Added by liuzhong chen over 3 years ago. Updated over 3 years ago.



ceph version: 12.2.12
1. Create an rbd image with the journaling feature enabled.
2. Test the rbd image with two fio clients; the fio command is as follows:

fio --name test --rw=randwrite --bs=4k --runtime=3600 --ioengine=rbd --clientname=admin --pool=poolclz --rbdname=rbdclz --iodepth=128 --numjobs=1 --direct=1 --group_reporting --time_based=1  --eta-newline 1 --fsync=1

3. Wait for a while; a coredump happens.
/root/rpmbuild/BUILD/ceph-12.2.12-1/src/librbd/io/ 86: FAILED assert(ictx->journal != __null)

coredump file:
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
Core was generated by `/usr/libexec/qemu-kvm -name guest=instance-00000082,debug-threads=on -S -object'.
Program terminated with signal 6, Aborted.
#0  0x00007f42a6f261f7 in raise () from /lib64/
Missing separate debuginfos, use: debuginfo-install qemu-kvm-2.9.0-16.el7.centos.es_4.5.1.2.x86_64
(gdb) bt
#0  0x00007f42a6f261f7 in raise () from /lib64/
#1  0x00007f42a6f278e8 in abort () from /lib64/
#2  0x00007f429c02b744 in ceph::__ceph_assert_fail(char const*, char const*, int, char const*) () from /usr/lib64/ceph/
#3  0x00007f42ac35c4d3 in librbd::io::AioCompletion::complete() () from /lib64/
#4  0x00007f42ac35d14f in librbd::io::AioCompletion::complete_request(long) () from /lib64/
#5  0x00007f42ac35e511 in librbd::io::(anonymous namespace)::C_FlushJournalCommit<librbd::ImageCtx>::finish(int) () from /lib64/
#6  0x00007f42ac29c829 in Context::complete(int) () from /lib64/
#7  0x00007f42ac2aaac4 in ContextWQ::process(Context*) () from /lib64/
#8  0x00007f429c03385e in ThreadPool::worker(ThreadPool::WorkThread*) () from /usr/lib64/ceph/
#9  0x00007f429c034780 in ThreadPool::WorkThread::entry() () from /usr/lib64/ceph/
#10 0x00007f42a72bbe25 in start_thread () from /lib64/
#11 0x00007f42a6fe934d in clone () from /lib64/ 

The reason is that pre-release of the exclusive_lock does not wait for all AIO to complete. When an AIO flush completes just as the journal has been closed, it trips the assert(ictx->journal != __null).
I wonder whether we need to wait for all AIO to complete in pre-release of the exclusive_lock, or whether anything else needs to be considered.

rbd.log.bak (629 KB) liuzhong chen, 11/19/2019 05:29 AM


#1 Updated by liuzhong chen over 3 years ago

The following is my method to solve this problem; I wonder if there is a better, already-existing API for this.
I have tested this method and it passed.

commit 9560f9744fa4b86a49f717fcbe85eeb976cc9a51
Author: Chen liuzhong <>
Date:   Mon Nov 18 16:22:53 2019 +0800

    Add image flush when pre-release exclusive_lock

diff --git a/src/librbd/exclusive_lock/ b/src/librbd/exclusive_lock/
index 504de0f..66863e8 100644
--- a/src/librbd/exclusive_lock/
+++ b/src/librbd/exclusive_lock/
@@ -219,6 +219,26 @@ void PreReleaseRequest<I>::handle_flush_notifies(int r) {
   ldout(cct, 10) << dendl;
 
   assert(r == 0);
+  send_image_flush();
+}
+
+template <typename I>
+void PreReleaseRequest<I>::send_image_flush() {
+  CephContext *cct = m_image_ctx.cct;
+  ldout(cct, 10) << dendl;
+
+  using klass = PreReleaseRequest<I>;
+  Context *ctx =
+    create_context_callback<klass, &klass::handle_image_flush>(this);
+  m_image_ctx.flush(ctx);
+}
+
+template <typename I>
+void PreReleaseRequest<I>::handle_image_flush(int r) {
+  CephContext *cct = m_image_ctx.cct;
+  ldout(cct, 10) << dendl;
+
+  assert(r == 0);
   send_close_journal();
 }

diff --git a/src/librbd/exclusive_lock/PreReleaseRequest.h b/src/librbd/exclusive_lock/PreReleaseRequest.h
index 34ac07a..0c6051c 100644
--- a/src/librbd/exclusive_lock/PreReleaseRequest.h
+++ b/src/librbd/exclusive_lock/PreReleaseRequest.h
@@ -93,6 +93,9 @@ private:
   void send_flush_notifies();
   void handle_flush_notifies(int r);
 
+  void send_image_flush();
+  void handle_image_flush(int r);
+
   void send_close_journal();
   void handle_close_journal(int r);

#2 Updated by liuzhong chen over 3 years ago

@jesse please review, thank you!

#3 Updated by liuzhong chen over 3 years ago

@Jason Dillaman please review this, thank you!

#4 Updated by Jason Dillaman over 3 years ago

  • Status changed from New to Need More Info

Is there any reason why the exclusive lock was being released? Was the fio test run complete and therefore the image was being closed w/ outstanding IO? I think there was an expectation that the pre-release state machine would block all IO at an earlier step and blocking IO should issue a flush. Any chance you could collect logs with "debug rbd = 20" from just prior to the crash?

#5 Updated by liuzhong chen over 3 years ago

The reason the exclusive lock is released is that two fio clients write to the image concurrently via libaio, so the two clients contend for the exclusive_lock.
The pre-release state machine does block all IO in the 'send_block_writes' step, and IO in the 'io_work_queue' will not be dequeued, but ops that are already in the 'op_work_queue' and not yet completed may cause this error.
I will test again with "debug_rbd = 20/20" and upload the log shortly. Thank you.

#6 Updated by liuzhong chen over 3 years ago

I test again and upload the rbd log as follows, crash time is : 2019-11-19 13:27:02.360991

/home/clz/code/cephes/ceph/src/librbd/io/ In function 'void librbd::io::AioCompletion::complete()' thread 7f4800ff9700 time 2019-11-19 13:27:02.360991
/home/clz/code/cephes/ceph/src/librbd/io/ 86: FAILED assert(ictx->journal != __null)

#7 Updated by liuzhong chen over 3 years ago

@Jason Dillaman Yes, I see the image_ctx flush in block_writes, and it works.
But the journal flush_event method is asynchronous: the C_FlushJournalCommit may be queued onto op_work_queue after the on_blocked context, which causes this crash.

typename Journal<I>::Future Journal<I>::wait_event(Mutex &lock, uint64_t tid,
                                                   Context *on_safe) {
  ...
  Event &event = m_events[tid];
  if (event.safe) {
    // journal entry already safe
    ldout(cct, 20) << this << " " << __func__ << ": "
                   << "journal entry already safe" << dendl;
    m_image_ctx.op_work_queue->queue(on_safe, event.ret_val);
    return Future();
  }
  ...
  return event.futures.back();
}

So my earlier patch will not work either.

#8 Updated by Jason Dillaman over 3 years ago

Can you retest this w/ the current luminous branch? I think this might have been fixed by commit ef4fdf78da2d2fb0a05e59409b26d5f9597187c0 [1]


#9 Updated by liuzhong chen over 3 years ago

@Jason Dillaman I have understood and tested the patch; it can solve this issue. Please close this issue, thank you.

#10 Updated by Jason Dillaman over 3 years ago

  • Status changed from Need More Info to Resolved
