Bug #42828


rbd journal err assert(ictx->journal != __null) when releasing exclusive_lock

Added by liuzhong chen over 4 years ago. Updated over 4 years ago.

Status: Resolved
Priority: Normal
Assignee: -
% Done: 0%
Regression: No
Severity: 3 - minor
ceph-qa-suite: rbd

Description

ceph version: 12.2.12
Reproduction:
1. Create an RBD image with the journaling feature enabled.
2. Run two fio clients against the image; the fio command is as follows:

fio --name test --rw=randwrite --bs=4k --runtime=3600 --ioengine=rbd --clientname=admin --pool=poolclz --rbdname=rbdclz --iodepth=128 --numjobs=1 --direct=1 --group_reporting --time_based=1  --eta-newline 1 --fsync=1

3. Wait a while; a coredump occurs:
/root/rpmbuild/BUILD/ceph-12.2.12-1/src/librbd/io/AioCompletion.cc: 86: FAILED assert(ictx->journal != __null)

Coredump backtrace:
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/libexec/qemu-kvm -name guest=instance-00000082,debug-threads=on -S -object'.
Program terminated with signal 6, Aborted.
#0  0x00007f42a6f261f7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install qemu-kvm-2.9.0-16.el7.centos.es_4.5.1.2.x86_64
(gdb) bt
#0  0x00007f42a6f261f7 in raise () from /lib64/libc.so.6
#1  0x00007f42a6f278e8 in abort () from /lib64/libc.so.6
#2  0x00007f429c02b744 in ceph::__ceph_assert_fail(char const*, char const*, int, char const*) () from /usr/lib64/ceph/libceph-common.so.0
#3  0x00007f42ac35c4d3 in librbd::io::AioCompletion::complete() () from /lib64/librbd.so.1
#4  0x00007f42ac35d14f in librbd::io::AioCompletion::complete_request(long) () from /lib64/librbd.so.1
#5  0x00007f42ac35e511 in librbd::io::(anonymous namespace)::C_FlushJournalCommit<librbd::ImageCtx>::finish(int) () from /lib64/librbd.so.1
#6  0x00007f42ac29c829 in Context::complete(int) () from /lib64/librbd.so.1
#7  0x00007f42ac2aaac4 in ContextWQ::process(Context*) () from /lib64/librbd.so.1
#8  0x00007f429c03385e in ThreadPool::worker(ThreadPool::WorkThread*) () from /usr/lib64/ceph/libceph-common.so.0
#9  0x00007f429c034780 in ThreadPool::WorkThread::entry() () from /usr/lib64/ceph/libceph-common.so.0
#10 0x00007f42a72bbe25 in start_thread () from /lib64/libpthread.so.0
#11 0x00007f42a6fe934d in clone () from /lib64/libc.so.6 

The reason is that the exclusive_lock pre-release path does not wait for all AIO to complete: if an AIO flush completes just as the journal is being closed, the assert(ictx->journal != __null) error is triggered.
I wonder whether we need to wait for all AIO to complete in the exclusive_lock pre-release path, or whether there is anything else to consider.
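
To make the race concrete, here is a minimal standalone C++ sketch (illustrative only, not Ceph code; ImageCtx and Journal are stripped-down stand-ins): a completion delivered on a worker thread dereferences ictx->journal after the release path has already closed the journal, so the assert fires.

#include <cassert>
#include <chrono>
#include <thread>

struct Journal {};                    // stand-in for librbd::Journal<ImageCtx>

struct ImageCtx {
  Journal *journal = nullptr;         // set while the journal is open
};

// Stand-in for the journaling path of librbd::io::AioCompletion::complete(),
// which assumes the journal is still open when a completion arrives.
void complete_aio(ImageCtx *ictx) {
  assert(ictx->journal != nullptr);   // the FAILED assert at AioCompletion.cc:86
}

int main() {
  Journal journal;
  ImageCtx ictx;
  ictx.journal = &journal;

  // A worker thread delivers a late flush completion, like C_FlushJournalCommit.
  std::thread worker([&] {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    complete_aio(&ictx);              // journal already closed -> abort
  });

  // The pre-release path closes the journal without waiting for the completion.
  ictx.journal = nullptr;
  worker.join();
}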


Files

rbd.log.bak (629 KB) — liuzhong chen, 11/19/2019 05:29 AM
Actions #1

Updated by liuzhong chen over 4 years ago

The following is my method to solve this problem; I wonder if there is a better, already-existing API for this.
I have tested this method and it passed.

commit 9560f9744fa4b86a49f717fcbe85eeb976cc9a51
Author: Chen liuzhong <liuzhong.chen@easystack.cn>
Date:   Mon Nov 18 16:22:53 2019 +0800

    Add image flush when pre-release exclusive_lock

diff --git a/src/librbd/exclusive_lock/PreReleaseRequest.cc b/src/librbd/exclusive_lock/PreReleaseRequest.cc
index 504de0f..66863e8 100644
--- a/src/librbd/exclusive_lock/PreReleaseRequest.cc
+++ b/src/librbd/exclusive_lock/PreReleaseRequest.cc
@@ -219,6 +219,26 @@ void PreReleaseRequest<I>::handle_flush_notifies(int r) {
   ldout(cct, 10) << dendl;

   assert(r == 0);
+  send_image_flush();
+}
+
+template <typename I>
+void PreReleaseRequest<I>::send_image_flush(){
+  CephContext *cct = m_image_ctx.cct;
+  ldout(cct, 10) << dendl;
+
+  using klass = PreReleaseRequest<I>;
+  Context *ctx =
+    create_context_callback<klass, &klass::handle_image_flush>(this);
+  m_image_ctx.flush(ctx);
+}
+
+template <typename I>
+void PreReleaseRequest<I>::handle_image_flush(int r){
+  CephContext *cct = m_image_ctx.cct;
+  ldout(cct, 10) << dendl;
+
+  assert(r == 0);
   send_close_journal();
 }

diff --git a/src/librbd/exclusive_lock/PreReleaseRequest.h b/src/librbd/exclusive_lock/PreReleaseRequest.h
index 34ac07a..0c6051c 100644
--- a/src/librbd/exclusive_lock/PreReleaseRequest.h
+++ b/src/librbd/exclusive_lock/PreReleaseRequest.h
@@ -93,6 +93,9 @@ private:
   void send_flush_notifies();
   void handle_flush_notifies(int r);

+  void send_image_flush();
+  void handle_image_flush(int r);
+
   void send_close_journal();
   void handle_close_journal(int r);

Actions #2

Updated by liuzhong chen over 4 years ago

@jesse please review, thank you!

Actions #3

Updated by liuzhong chen over 4 years ago

@Jason Borden Dillaman please review this, thank you!

Actions #4

Updated by Jason Dillaman over 4 years ago

  • Status changed from New to Need More Info

Is there any reason why the exclusive lock was being released? Was the fio test run complete and therefore the image was being closed w/ outstanding IO? I think there was an expectation that the pre-release state machine would block all IO at an earlier step and blocking IO should issue a flush. Any chance you could collect logs with "debug rbd = 20" from just prior to the crash?

Actions #5

Updated by liuzhong chen over 4 years ago

The exclusive lock is released because two fio clients write to the image concurrently via libaio, and the two clients contend for the exclusive_lock.
The pre-release state machine blocks all IO in the 'send_block_writes' step, and IO in the 'io_work_queue' will not be dequeued, but ops that are already in the 'op_work_queue' and not yet completed can cause this error.
I will test again with "debug_rbd = 20/20" and upload the log later. Thank you.
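
As a rough illustration of why blocking writes alone is not enough (hypothetical stand-in code, not the real ContextWQ): blocking the io_work_queue stops new IO from being dequeued, but a completion already sitting in the separate op_work_queue still runs afterwards.

#include <deque>
#include <functional>
#include <iostream>

// Toy single-threaded work queue; the real ContextWQ is thread-pool based.
struct WorkQueue {
  std::deque<std::function<void()>> items;
  bool blocked = false;
  void drain() {
    while (!blocked && !items.empty()) {
      auto fn = std::move(items.front());
      items.pop_front();
      fn();
    }
  }
};

int main() {
  WorkQueue io_work_queue, op_work_queue;

  // A journal flush commit that was queued before the lock release started.
  op_work_queue.items.push_back([] {
    std::cout << "C_FlushJournalCommit runs after the journal is closed\n";
  });

  io_work_queue.blocked = true;  // send_block_writes(): no new IO dequeued...
  io_work_queue.drain();         // ...so nothing runs here,
  op_work_queue.drain();         // but the op_work_queue item still fires.
}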

Actions #6

Updated by liuzhong chen over 4 years ago

I tested again and uploaded the rbd log; the crash time is 2019-11-19 13:27:02.360991:

/home/clz/code/cephes/ceph/src/librbd/io/AioCompletion.cc: In function 'void librbd::io::AioCompletion::complete()' thread 7f4800ff9700 time 2019-11-19 13:27:02.360991
/home/clz/code/cephes/ceph/src/librbd/io/AioCompletion.cc: 86: FAILED assert(ictx->journal != __null)

Actions #7

Updated by liuzhong chen over 4 years ago

@Jason Borden Dillaman Yes, I see that image_ctx is flushed in block_writes, and that works.
But the journal flush_event method is asynchronous: the C_FlushJournalCommit may be put into the op_work_queue after the on_blocked context, which causes this crash.

typename Journal<I>::Future Journal<I>::wait_event(Mutex &lock, uint64_t tid,
                                                   Context *on_safe) {
  .......
  if (event.safe) {
    // journal entry already safe
    ldout(cct, 20) << this << " " << __func__ << ": " 
                   << "journal entry already safe" << dendl;
    m_image_ctx.op_work_queue->queue(on_safe, event.ret_val);
    return Future();
  }

  event.on_safe_contexts.push_back(create_async_context_callback(m_image_ctx,
                                                                 on_safe));
  return event.futures.back();
}

So my earlier patch will not work either.
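
A sketch of the ordering hazard (illustrative only, with hypothetical callback bodies): both contexts pass through the same FIFO op_work_queue, and because the state machine's on_blocked context is queued first, the journal can be closed before the flush commit callback runs.

#include <deque>
#include <functional>
#include <iostream>

int main() {
  std::deque<std::function<void()>> op_work_queue;  // FIFO, single worker
  bool journal_open = true;

  // block_writes() completes and requeues the pre-release state machine,
  // which goes on to send_close_journal().
  op_work_queue.push_back([&] { journal_open = false; });

  // Journal::wait_event() saw event.safe and queued the commit afterwards.
  op_work_queue.push_back([&] {
    std::cout << "C_FlushJournalCommit: journal_open=" << journal_open
              << " (assert(ictx->journal != __null) would fail)\n";
  });

  while (!op_work_queue.empty()) {
    auto fn = std::move(op_work_queue.front());
    op_work_queue.pop_front();
    fn();
  }
}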

Actions #8

Updated by Jason Dillaman over 4 years ago

Can you retest this w/ the current luminous branch? I think this might have been fixed by commit ef4fdf78da2d2fb0a05e59409b26d5f9597187c0 [1]

[1] https://tracker.ceph.com/issues/40574

Actions #9

Updated by liuzhong chen over 4 years ago

@Jason Borden Dillaman I have studied and tested the patch (https://github.com/ceph/ceph/pull/28773), and it solves this issue. Please close this issue, thank you.

Actions #10

Updated by Jason Dillaman over 4 years ago

  • Status changed from Need More Info to Resolved