Bug #16556


LibCephFS.InterProcessLocking failing on master and jewel

Added by John Spray almost 8 years ago. Updated over 4 years ago.

Status:
New
Priority:
Urgent
Assignee:
-
Category:
Testing
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):


Related issues 3 (0 open, 3 closed)

Related to CephFS - Bug #17832: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4 (Resolved, 11/09/2016)

Has duplicate CephFS - Bug #17181: "[ FAILED ] LibCephFS.ThreesomeInterProcessRecordLocking" in smoke (Duplicate, Yuri Weinstein, 08/31/2016)

Has duplicate CephFS - Bug #17525: "[ FAILED ] LibCephFS.ThreesomeInterProcessRecordLocking" in smoke (Duplicate, 10/06/2016)

Actions #1

Updated by Kefu Chai almost 8 years ago

2016-06-30T05:02:37.114 INFO:tasks.workunit.client.0.mira088.stdout:[ RUN      ] LibCephFS.InterProcessLocking
2016-06-30T05:02:37.431 INFO:tasks.workunit.client.0.mira088.stderr:*** Error in `ceph_test_libcephfs': double free or corruption (!prev): 0x0000000002f7b970 ***
2016-06-30T05:02:37.467 INFO:tasks.workunit.client.0.mira088.stdout:/srv/autobuild-ceph/gitbuilder.git/build/out~/ceph-11.0.0-122-g082a073/src/test/libcephfs/flock.cc:519: Failure
2016-06-30T05:02:37.468 INFO:tasks.workunit.client.0.mira088.stdout:Value of: status
2016-06-30T05:02:37.468 INFO:tasks.workunit.client.0.mira088.stdout:  Actual: 134
2016-06-30T05:02:37.468 INFO:tasks.workunit.client.0.mira088.stdout:Expected: 0
2016-06-30T05:02:37.469 INFO:tasks.workunit.client.0.mira088.stdout:[  FAILED  ] LibCephFS.InterProcessLocking (353 ms)
2016-06-30T05:02:37.469 INFO:tasks.workunit.client.0.mira088.stdout:[ RUN      ] LibCephFS.ThreesomeInterProcessLocking
2016-06-30T05:02:37.900 INFO:tasks.workunit.client.0.mira088.stderr:*** Error in `ceph_test_libcephfs': double free or corruption (!prev): 0x0000000002f7b970 ***
2016-06-30T05:02:37.910 INFO:tasks.workunit.client.0.mira088.stderr:*** Error in `ceph_test_libcephfs': double free or corruption (!prev): 0x0000000002f7b970 ***
2016-06-30T05:02:37.930 INFO:tasks.workunit.client.0.mira088.stdout:/srv/autobuild-ceph/gitbuilder.git/build/out~/ceph-11.0.0-122-g082a073/src/test/libcephfs/flock.cc:629: Failure
2016-06-30T05:02:37.930 INFO:tasks.workunit.client.0.mira088.stdout:Value of: status
2016-06-30T05:02:37.930 INFO:tasks.workunit.client.0.mira088.stdout:  Actual: 134
2016-06-30T05:02:37.930 INFO:tasks.workunit.client.0.mira088.stdout:Expected: 0
2016-06-30T05:02:37.931 INFO:tasks.workunit.client.0.mira088.stdout:[  FAILED  ] LibCephFS.ThreesomeInterProcessLocking (463 ms)
Actions #2

Updated by John Spray almost 8 years ago

Jeff points out that we can also get it to blow up with just a passing test, e.g. bin/ceph_test_libcephfs --gtest_filter=LibCephFS.Fchown

Actions #3

Updated by Kefu Chai almost 8 years ago

LibCephFS.Fchown is fixed by https://github.com/ceph/ceph/pull/10081,

but we still have

[ RUN      ] LibCephFS.InterProcessLocking
2016-07-01 11:34:35.308910 7f69b4701500 -1 WARNING: the following dangerous and experimental features are enabled: *
2016-07-01 11:34:35.308926 7f69b4701500 -1 WARNING: the following dangerous and experimental features are enabled: *
/var/ceph/ceph/src/common/lockdep.cc: In function 'int lockdep_will_lock(const char*, int, bool)' thread 7f69927fc700 time 2016-07-01 11:34:35.315443
/var/ceph/ceph/src/common/lockdep.cc: 330: FAILED assert(0)
 ceph version Development (no_version)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x7f69ab97cf70]
 2: (lockdep_will_lock(char const*, int, bool)+0xb65) [0x7f69abb9fe7e]
 3: (Mutex::_will_lock()+0x3b) [0x7f69ab92b61d]
 4: (Mutex::Lock(bool)+0x50) [0x7f69ab92b398]
 5: (SimpleMessenger::reaper_entry()+0x149) [0x7f69aba4dfd9]
 6: (SimpleMessenger::ReaperThread::entry()+0x1c) [0x7f69aba556b6]
 7: (Thread::entry_wrapper()+0xc1) [0x7f69abb63025]
 8: (Thread::_entry_func(void*)+0x18) [0x7f69abb62f5a]
 9: (()+0x7464) [0x7f699f09a464]
 10: (clone()+0x6d) [0x7f699e543e5d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
/var/ceph/ceph/src/test/libcephfs/flock.cc:466: Failure
Value of: sem_timedwait(&s.sem[1%2], abstime(ts, waitSlowMs))
  Actual: -1
Expected: 0
[  FAILED  ] LibCephFS.InterProcessLocking (5490 ms)

and LibCephFS.ThreesomeInterProcessLocking has the same failure.

If we single out LibCephFS.InterProcessLocking and run it on its own, we get:

[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from LibCephFS
[ RUN      ] LibCephFS.InterProcessLocking
/var/ceph/ceph/src/test/libcephfs/flock.cc:466: Failure
Value of: sem_timedwait(&s.sem[1%2], abstime(ts, waitSlowMs))
  Actual: -1
Expected: 0
[  FAILED  ] LibCephFS.InterProcessLocking (5212 ms)

Actions #4

Updated by John Spray almost 8 years ago

Thanks Kefu. I guess the lockdep one is either a cephfs or msgr issue, so we'll keep this ticket open to look into it.

Actions #5

Updated by Greg Farnum over 7 years ago

  • Category set to Testing
Actions #6

Updated by Kefu Chai over 7 years ago

Tested with the latest master; it still fails.

@Greg Farnum, do we have a fix for this issue now?

Actions #7

Updated by John Spray over 7 years ago

Looking into what's happening in the case of running LibCephFS.InterProcessLocking on its own, I see that the forked process is getting stuck during initialisation like this:

#0  0x00007fb33f07d29d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fb33f07789d in pthread_mutex_lock () from /lib64/libpthread.so.0
#2  0x00007fb349e941f6 in lockdep_register (name=0xc9def8 "PerfCountersCollection")
    at /home/john/ceph.tmp/src/common/lockdep.cc:176
#3  0x00007fb349c1a032 in Mutex::_register (this=0xc9de48) at /home/john/ceph.tmp/src/common/Mutex.h:53
#4  0x00007fb349c19c31 in Mutex::Mutex (this=0xc9de48, n="PerfCountersCollection", r=false, ld=true, bt=false, 
    cct=0x0) at /home/john/ceph.tmp/src/common/Mutex.cc:65
#5  0x00007fb349c14c06 in PerfCountersCollection::PerfCountersCollection (this=0xc9de40, cct=0xc99cc0)
    at /home/john/ceph.tmp/src/common/perf_counters.cc:34
#6  0x00007fb349e3f8f5 in CephContext::CephContext (this=0xc99cc0, module_type_=8, init_flags_=0)
    at /home/john/ceph.tmp/src/common/ceph_context.cc:490
#7  0x00007fb349e38003 in common_preinit (iparams=..., code_env=CODE_ENVIRONMENT_LIBRARY, flags=0, 
    data_dir_option=0x0) at /home/john/ceph.tmp/src/common/common_init.cc:46
#8  0x00007fb349ae9536 in ceph_create (cmount=0x7ffc51456488, id=0x0) at /home/john/ceph.tmp/src/libcephfs.cc:318
#9  0x00000000004dcdde in process_ConcurrentLocking (s=...) at /home/john/ceph.tmp/src/test/libcephfs/flock.cc:393
#10 0x00000000004dec3d in LibCephFS_InterProcessLocking_Test::TestBody (this=0xc8ee80)
    at /home/john/ceph.tmp/src/test/libcephfs/flock.cc:450

I believe this is happening because of the global variables in lockdep.cc, which is linked into libcommon and thereby duplicated when we have librados and libcephfs in the same process.

I have a patch to work around this by separating the librados-requiring cephfs test out into its own compilation unit.
https://github.com/ceph/ceph/pull/10452

See also mailing list thread "static member variable in libcommon and double free"

Actions #8

Updated by Kefu Chai over 7 years ago

new dependency Messenger::DispatchQueue::lockradosclient (3) -> SimpleMessenger::lock (8)

#0  lockdep_will_lock (name=0x7eac30 "SimpleMessenger::lock", id=8, force_backtrace=false) at /var/ceph/ceph/src/common/lockdep.cc:311
#1  0x00007fffef15cabd in Mutex::_will_lock (this=0x7cd0e0) at /var/ceph/ceph/src/common/Mutex.h:56
#2  0x00007fffef15c838 in Mutex::Lock (this=0x7cd0e0, no_lockdep=false) at /var/ceph/ceph/src/common/Mutex.cc:95
#3  0x00007fffef291ca3 in SimpleMessenger::reaper_entry (this=0x7ccb60) at /var/ceph/ceph/src/msg/simple/SimpleMessenger.cc:200
#4  0x00007fffef299380 in SimpleMessenger::ReaperThread::entry (this=0x7cd0a8) at /var/ceph/ceph/src/msg/simple/SimpleMessenger.h:196
#5  0x00007fffef3a2dab in Thread::entry_wrapper (this=0x7cd0a8) at /var/ceph/ceph/src/common/Thread.cc:89
#6  0x00007fffef3a2ce0 in Thread::_entry_func (arg=0x7cd0a8) at /var/ceph/ceph/src/common/Thread.cc:69
#7  0x00007fffe4561464 in start_thread (arg=0x7fffdcd61700) at pthread_create.c:333
#8  0x00007fffe33f330d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

So, at that moment, SimpleMessenger::lock follows Messenger::DispatchQueue::lockradosclient (this is what we have in librados), but with the new dependency, Messenger::DispatchQueue::lockradosclient would follow SimpleMessenger::lock (and this is what we have in libcephfs). Hence a cycle is created.
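
To make the cycle concrete, here is a minimal, hypothetical C++ sketch (not Ceph code; the two mutexes merely stand in for Messenger::DispatchQueue::lock and SimpleMessenger::lock) of the two opposite acquisition orders that a lockdep-style checker records as edges in its follows[][] DAG and then flags as a cycle:

// Minimal, hypothetical sketch of the lock-ordering cycle described above.
// A lockdep-style checker records each nested acquisition as an edge in a
// "follows" DAG and asserts when an edge would close a cycle.
#include <mutex>
#include <iostream>

std::mutex dispatch_queue_lock;   // stands in for Messenger::DispatchQueue::lock
std::mutex messenger_lock;        // stands in for SimpleMessenger::lock

void librados_style_path() {
  // records the edge: messenger_lock follows dispatch_queue_lock
  std::lock_guard<std::mutex> a(dispatch_queue_lock);
  std::lock_guard<std::mutex> b(messenger_lock);
}

void libcephfs_style_path() {
  // records the reverse edge: dispatch_queue_lock follows messenger_lock,
  // closing a cycle in the DAG -- the point where lockdep_will_lock() asserts
  std::lock_guard<std::mutex> a(messenger_lock);
  std::lock_guard<std::mutex> b(dispatch_queue_lock);
}

int main() {
  // Run the two paths sequentially so the sketch itself never deadlocks;
  // with two concurrent threads the opposite orders can deadlock for real.
  librados_style_path();
  libcephfs_style_path();
  std::cout << "both orders used; a lockdep-style checker would flag the cycle\n";
}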

We should either split them into different tests, as in https://github.com/ceph/ceph/pull/10452, or hide the non-public symbols so the static variables in the two copies of libcommon in libcephfs and librados won't interfere with each other.
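
For the second option, here is a rough sketch of what hiding that state could look like (illustrative only; the variable names are hypothetical and this is not the actual Ceph change): marking libcommon's internal lockdep state with GCC's hidden visibility attribute so it is not exported from librados.so or libcephfs.so, and the two embedded copies cannot interpose on each other at dynamic-link time.

// Hypothetical sketch: keep libcommon's internal lockdep state private to the
// shared object it is embedded in, so the copy in librados.so and the copy in
// libcephfs.so never alias each other.  Compiles stand-alone; in real use
// these definitions would live in something like lockdep.cc.
#include <pthread.h>
#include <map>
#include <string>

// Hidden symbols are not exported from the .so, so the dynamic linker cannot
// merge or interpose them across the two libraries.
__attribute__((visibility("hidden")))
pthread_mutex_t lockdep_mutex = PTHREAD_MUTEX_INITIALIZER;

__attribute__((visibility("hidden")))
std::map<std::string, int> lock_ids;   // stands in for lockdep's name -> id map

int main() {
  // Nothing interesting happens at runtime; the point is the attribute,
  // which can be checked with: nm -D <library> | grep lockdep_mutex
  pthread_mutex_lock(&lockdep_mutex);
  lock_ids["example_lock"] = 1;
  pthread_mutex_unlock(&lockdep_mutex);
  return 0;
}

An alternative with the same effect is to compile libcommon with -fvisibility=hidden and export only the public API.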

Actions #9

Updated by Kefu Chai over 7 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Kefu Chai
Actions #10

Updated by Kefu Chai over 7 years ago

John,

I marked PR#10472 as a fix for this issue, and it does fix it, but I would like to keep this issue open, because:

By inspecting the backtraces above, we found that librados was in the middle of doing something when fork() was called:

  • in #16556-7, the parent acquired the global "lockdep_mutex", and before it released the lock, the fork() call returned, leaving the child process with a mutex in an undefined state. In your case, the child is blocked when trying to acquire that mutex (see the sketch after this list).
  • in #16556-8, the parent had a dependency from "SimpleMessenger::lock" to another lock, "Messenger::DispatchQueue::lockradosclient", but before that dependency was dismissed, the fork() call returned, leaving the child process (at least) an arc in lockdep's "follows[][]" DAG. When the child was reaping the messenger thread, it tried to create a dependency from "Messenger::DispatchQueue::lockradosclient" to "SimpleMessenger::lock", which made lockdep believe we have a cycle.
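
The first scenario can be reproduced outside of Ceph in a few lines; this is a hypothetical stand-alone sketch (the mutex only stands in for the global lockdep_mutex), showing how a mutex held by another thread at fork() time leaves the child blocked forever:

// Hypothetical reproduction of the first scenario: a helper thread holds a
// mutex when fork() happens, so the child inherits the mutex in the locked
// state and blocks forever trying to take it, just like the child stuck in
// lockdep_register() in note #7.  Build with -pthread.
#include <pthread.h>
#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

void* holder(void*) {
  pthread_mutex_lock(&shared_lock);   // acquire and never release
  pause();
  return nullptr;
}

int main() {
  pthread_t t;
  pthread_create(&t, nullptr, holder, nullptr);
  sleep(1);                           // crude: make sure the lock is held

  pid_t pid = fork();
  if (pid == 0) {
    // Only the forking thread exists in the child, so nobody will ever
    // unlock shared_lock: this call never returns.
    pthread_mutex_lock(&shared_lock);
    printf("child: never reached\n");
    _exit(0);
  }
  waitpid(pid, nullptr, 0);           // the parent also waits here forever
  return 0;
}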

But this could still happen in other places in our code even with your change (https://github.com/ceph/ceph/pull/10452) and mine: we cannot be 100% sure that the child process will not inherit some mess left behind by its parent, whether through a rados/cephfs/rbd call or another library API, directly or indirectly. Maybe, as the fork(2) manpage puts it:

After a fork(2) in a multithreaded program, the child can safely call only async-signal-safe functions (see signal(7)) until such time as it calls execve(2).

Brad also urged caution when fork(2) is not followed by execve(2). Maybe we should re-exec the forked process and do something like:

// pseudo code here
int main(int argc, char* argv[]) {
  if (argc < 2 || strcmp(argv[1], "process-concurrent-locking") != 0) {
    // normal invocation:
    // prepare for the tests
    // perform the tests
  } else {
    // re-exec'ed by the test below: run only the child-side workload,
    // starting from a clean, freshly exec'ed process
    process_ConcurrentLocking(s);
  }
}

TEST(LibCephFS, InterProcessLocking) {
  // ...
  pid_t pid = fork();
  if (pid == 0) {
    // exec ourselves immediately, so nothing inherited from the
    // multithreaded parent (locks, lockdep state, ...) is ever touched
    char* const args[] = { argv0, (char*)"process-concurrent-locking", NULL };
    execv(argv0, args);  // argv0 = path of the test binary, saved from main()
    _exit(127);          // only reached if exec fails
  } else if (pid < 0) {
    // die
  }
  // do the lock
}

What do you think?

Actions #11

Updated by Kefu Chai over 7 years ago

  • Status changed from Fix Under Review to 12
  • Assignee deleted (Kefu Chai)
Actions #12

Updated by Brad Hubbard over 7 years ago

It's not only Ceph's locks, mutexes, etc. that we need to be aware of or concerned with. I have seen multiple occurrences in the past of multithreaded processes calling fork() and then calling async-signal-unsafe functions such as malloc (via new), which uses mutexes to lock its arenas. If the parent happens to be holding one of those arena locks when the child is forked, and the child then attempts to call malloc, it gets stuck on the lock with no possibility of it ever being released. I believe you shouldn't do anything between fork() and exec() that you wouldn't do in a signal handler (and that's not much).
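
As a concrete illustration of that rule (a hypothetical sketch, not Ceph code): between fork() and exec() the child restricts itself to async-signal-safe calls such as write(2), _exit(2) and the exec family, and avoids anything that might allocate memory or take a lock.

// Hypothetical sketch of the "treat the post-fork child like a signal
// handler" rule: the child only uses async-signal-safe calls before exec.
#include <unistd.h>
#include <sys/wait.h>

int main() {
  pid_t pid = fork();
  if (pid == 0) {
    // Unsafe here in a multithreaded parent: new/malloc, printf, std::string,
    // anything that may take a lock another parent thread held at fork time.

    // Safe: write(2) is async-signal-safe, and execv() replaces the process
    // image entirely, discarding whatever lock state was inherited.
    const char msg[] = "child: exec'ing\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    char* const args[] = { (char*)"/bin/echo", (char*)"child ran cleanly", nullptr };
    execv("/bin/echo", args);
    _exit(127);                       // only reached if exec fails
  }
  waitpid(pid, nullptr, 0);
  return 0;
}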

Actions #13

Updated by John Spray over 7 years ago

  • Has duplicate Bug #17181: "[ FAILED ] LibCephFS.ThreesomeInterProcessRecordLocking" in smoke added
Actions #14

Updated by John Spray over 7 years ago

Actions #15

Updated by Loïc Dachary over 7 years ago

  • Has duplicate Bug #17525: "[ FAILED ] LibCephFS.ThreesomeInterProcessRecordLocking" in smoke added
Actions #16

Updated by Yuri Weinstein over 7 years ago

  • Related to Bug #17832: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4 added
Actions #17

Updated by Nathan Cutler over 7 years ago

  • Subject changed from LibCephFS.InterProcessLocking failing on master to LibCephFS.InterProcessLocking failing on master and jewel
Actions #18

Updated by Patrick Donnelly over 4 years ago

  • Status changed from 12 to New
Actions
