Bug #44294


mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"

Added by Patrick Donnelly about 4 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Category:
-
Target version:
% Done:

0%

Source:
Development
Tags:
Backport:
nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

From the LRC (long-running cluster) testing Octopus:

root@reesi001:~# ceph crash info 2020-02-24T03:31:36.013391Z_9904f466-37f3-43c7-a983-32ebc5a939d8
{
    "assert_condition": "_head.empty()",
    "assert_file": "/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/huge/release/15.1.0-1093-g42bf1cc/rpm/el8/BUILD/ceph-15.1.0-1093-g42bf1cc/src/include/elist.h",
    "assert_func": "elist<T>::~elist() [with T = MDSIOContextBase*]",
    "assert_line": 91,
    "assert_msg": "/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/huge/release/15.1.0-1093-g42bf1cc/rpm/el8/BUILD/ceph-15.1.0-1093-g42bf1cc/src/include/elist.h: In function 'elist<T>::~elist() [with T = MDSIOContextBase*]' thread 7f4160f97700 time 2020-02-24T03:31:36.009479+0000\n/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/huge/release/15.1.0-1093-g42bf1cc/rpm/el8/BUILD/ceph-15.1.0-1093-g42bf1cc/src/include/elist.h: 91: FAILED ceph_assert(_head.empty())\n",
    "assert_thread_name": "MR_Finisher",
    "backtrace": [
        "(()+0x12dc0) [0x7f416e131dc0]",
        "(abort()+0x203) [0x7f416cbdfdd1]",
        "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7f416f2fae77]",
        "(()+0x279040) [0x7f416f2fb040]",
        "(()+0x414ad7) [0x562b74206ad7]",
        "(()+0x3a06c) [0x7f416cbf806c]",
        "(on_exit()+0) [0x7f416cbf81a0]",
        "(()+0x4a7640) [0x562b74299640]",
        "(()+0x12dc0) [0x7f416e131dc0]",
        "(gsignal()+0x10f) [0x7f416cbf58df]",
        "(abort()+0x127) [0x7f416cbdfcf5]",
        "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7f416f2fae77]",
        "(()+0x279040) [0x7f416f2fb040]",
        "(()+0x2cd631) [0x562b740bf631]",
        "(MDSContext::complete(int)+0x56) [0x562b74206206]",
        "(MDSIOContextBase::complete(int)+0x197) [0x562b74206507]",
        "(Finisher::finisher_thread_entry()+0x1a5) [0x7f416f38b5a5]",
        "(()+0x82de) [0x7f416e1272de]",
        "(clone()+0x43) [0x7f416ccba133]" 
    ],
    "ceph_version": "15.1.0-1093-g42bf1cc",
    "crash_id": "2020-02-24T03:31:36.013391Z_9904f466-37f3-43c7-a983-32ebc5a939d8",
    "entity_name": "mds.reesi002",
    "os_id": "centos",
    "os_name": "CentOS Linux",
    "os_version": "8 (Core)",
    "os_version_id": "8",
    "process_name": "ceph-mds",
    "stack_sig": "cbbfe537c26445d934c542d66875091cdcd9e38f80b5f433e8d97b5853ff604e",
    "timestamp": "2020-02-24T03:31:36.013391Z",
    "utsname_hostname": "reesi002",
    "utsname_machine": "x86_64",
    "utsname_release": "4.4.0-116-generic",
    "utsname_sysname": "Linux",
    "utsname_version": "#140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018" 
}

Related issues: 2 (0 open, 2 closed)

Related to CephFS - Bug #44295: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2) (Resolved; Patrick Donnelly)

Copied to CephFS - Backport #46778: nautilus: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())" (Duplicate; Patrick Donnelly)
Update #1

Updated by Patrick Donnelly about 4 years ago

  • Related to Bug #44295: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2) added
Update #2

Updated by Zheng Yan about 4 years ago

  • Status changed from New to Resolved
  • Pull request ID set to 33538

Resolved, but the fix also introduced a new bug. The newest ticket is https://tracker.ceph.com/issues/44680

Update #3

Updated by Mark Nelson almost 4 years ago

We may have hit a new manifestation of this while testing ceph.dir.pin.distributed for the IO500 challenge:

2020-07-07T13:59:07.557+0000 7f7dd17f5600 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,lmdb,rocksdb
2020-07-07T13:59:07.559+0000 7f7dd17f5600 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,lmdb,rocksdb
2020-07-07T13:59:07.559+0000 7f7dd17f5600  0 ceph version 16.0.0-3204-ge9cc7d863a (e9cc7d863ac8d63caefd191d9b7942e51c5bf780) pacific (dev), process ceph-mds, pid 25530
2020-07-07T13:59:07.560+0000 7f7dd17f5600  0 pidfile_write: ignore empty --pid-file
2020-07-07T13:59:07.560+0000 7f7dd17f5600 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,lmdb,rocksdb
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x107
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x646
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x647
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x648
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x649
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64a
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64b
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64c
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64d
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64e
2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64f
2020-07-07T15:27:20.984+0000 7f7dccf8b700 -1 /home/fedora/src/ceph/ceph/src/include/elist.h: In function 'elist<T>::~elist() [with T = MDLockCache*]' thread 7f7dccf8b700 time 2020-07-07T15:27:20.979768+0000
/home/fedora/src/ceph/ceph/src/include/elist.h: 91: FAILED ceph_assert(_head.empty())

 ceph version 16.0.0-3204-ge9cc7d863a (e9cc7d863ac8d63caefd191d9b7942e51c5bf780) pacific (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x124) [0x7f7dd262a774]
 2: (()+0x2508ff) [0x7f7dd262a8ff]
 3: (std::_Rb_tree<client_t, std::pair<client_t const, Capability>, std::_Select1st<std::pair<client_t const, Capability> >, std::less<client_t>, mempool::pool_allocator<(mempool::pool_index_t)18, std::pair<client_t const, Capability> > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<client_t const, Capability> >)+0x15a) [0x562ad7020a4a]
 4: (CInode::remove_client_cap(client_t)+0x221) [0x562ad6ffdf01]
 5: (CInode::clear_client_caps_after_export()+0x3b) [0x562ad6ffe28b]
 6: (Migrator::finish_export_inode_caps(CInode*, int, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >&)+0x534) [0x562ad6f86804]
 7: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x282) [0x562ad6f87402]
 8: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x41a) [0x562ad6f8759a]
 9: (Migrator::export_finish(CDir*)+0x438) [0x562ad6f89988]
 10: (Migrator::handle_export_notify_ack(boost::intrusive_ptr<MExportDirNotifyAck const> const&)+0x383) [0x562ad6f8abe3]
 11: (Migrator::dispatch(boost::intrusive_ptr<Message const> const&)+0x204) [0x562ad6f8b1f4]
 12: (MDSRank::_dispatch(boost::intrusive_ptr<Message const> const&, bool)+0x5b7) [0x562ad6d9f097]
 13: (MDSRankDispatcher::ms_dispatch(boost::intrusive_ptr<Message const> const&)+0x4f) [0x562ad6d9f68f]
 14: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x118) [0x562ad6d7b288]
 15: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0x448) [0x7f7dd2842848]
 16: (DispatchQueue::entry()+0x5ef) [0x7f7dd283ffef]
 17: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f7dd28f7ecd]
 18: (()+0x9432) [0x7f7dd1e68432]
 19: (clone()+0x43) [0x7f7dd19c29d3]

2020-07-07T15:27:20.986+0000 7f7dccf8b700 -1 *** Caught signal (Aborted) **
 in thread 7f7dccf8b700 thread_name:ms_dispatch

 ceph version 16.0.0-3204-ge9cc7d863a (e9cc7d863ac8d63caefd191d9b7942e51c5bf780) pacific (dev)
 1: (()+0x14a90) [0x7f7dd1e73a90]
 2: (gsignal()+0x145) [0x7f7dd18fda25]
 3: (abort()+0x127) [0x7f7dd18e6895]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x7f7dd262a7be]
 5: (()+0x2508ff) [0x7f7dd262a8ff]
 6: (std::_Rb_tree<client_t, std::pair<client_t const, Capability>, std::_Select1st<std::pair<client_t const, Capability> >, std::less<client_t>, mempool::pool_allocator<(mempool::pool_index_t)18, std::pair<client_t const, Capability> > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<client_t const, Capability> >)+0x15a) [0x562ad7020a4a]
 7: (CInode::remove_client_cap(client_t)+0x221) [0x562ad6ffdf01]
 8: (CInode::clear_client_caps_after_export()+0x3b) [0x562ad6ffe28b]
 9: (Migrator::finish_export_inode_caps(CInode*, int, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >&)+0x534) [0x562ad6f86804]
 10: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x282) [0x562ad6f87402]
 11: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x41a) [0x562ad6f8759a]
 12: (Migrator::export_finish(CDir*)+0x438) [0x562ad6f89988]
 13: (Migrator::handle_export_notify_ack(boost::intrusive_ptr<MExportDirNotifyAck const> const&)+0x383) [0x562ad6f8abe3]
 14: (Migrator::dispatch(boost::intrusive_ptr<Message const> const&)+0x204) [0x562ad6f8b1f4]
 15: (MDSRank::_dispatch(boost::intrusive_ptr<Message const> const&, bool)+0x5b7) [0x562ad6d9f097]
 16: (MDSRankDispatcher::ms_dispatch(boost::intrusive_ptr<Message const> const&)+0x4f) [0x562ad6d9f68f]
 17: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x118) [0x562ad6d7b288]
 18: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0x448) [0x7f7dd2842848]
 19: (DispatchQueue::entry()+0x5ef) [0x7f7dd283ffef]
 20: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f7dd28f7ecd]
 21: (()+0x9432) [0x7f7dd1e68432]
 22: (clone()+0x43) [0x7f7dd19c29d3]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
   -63> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command assert hook 0x562ad93c40f0
   -62> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command abort hook 0x562ad93c40f0
   -61> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command leak_some_memory hook 0x562ad93c40f0
   -60> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command perfcounters_dump hook 0x562ad93c40f0
   -59> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command 1 hook 0x562ad93c40f0
   -58> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command perf dump hook 0x562ad93c40f0
   -57> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command perfcounters_schema hook 0x562ad93c40f0
   -56> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command perf histogram dump hook 0x562ad93c40f0
   -55> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command 2 hook 0x562ad93c40f0
   -54> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command perf schema hook 0x562ad93c40f0
   -53> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command perf histogram schema hook 0x562ad93c40f0
   -52> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command perf reset hook 0x562ad93c40f0
   -51> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command config show hook 0x562ad93c40f0
   -50> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command config help hook 0x562ad93c40f0
   -49> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command config set hook 0x562ad93c40f0
   -48> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command config unset hook 0x562ad93c40f0
   -47> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command config get hook 0x562ad93c40f0
   -46> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command config diff hook 0x562ad93c40f0
   -45> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command config diff get hook 0x562ad93c40f0
   -44> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command injectargs hook 0x562ad93c40f0
   -43> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command log flush hook 0x562ad93c40f0
   -42> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command log dump hook 0x562ad93c40f0
   -41> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command log reopen hook 0x562ad93c40f0
   -40> 2020-07-07T13:59:07.555+0000 7f7dd17f5600  5 asok(0x562ad945a000) register_command dump_mempools hook 0x562ada030068
   -39> 2020-07-07T13:59:07.557+0000 7f7dd17f5600 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,lmdb,rocksdb
   -38> 2020-07-07T13:59:07.559+0000 7f7dd17f5600 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,lmdb,rocksdb
   -37> 2020-07-07T13:59:07.559+0000 7f7dd17f5600  0 ceph version 16.0.0-3204-ge9cc7d863a (e9cc7d863ac8d63caefd191d9b7942e51c5bf780) pacific (dev), process ceph-mds, pid 25530
   -36> 2020-07-07T13:59:07.560+0000 7f7dd17f5600  0 pidfile_write: ignore empty --pid-file
   -35> 2020-07-07T13:59:07.560+0000 7f7dd17f5600 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,lmdb,rocksdb
   -34> 2020-07-07T13:59:07.569+0000 7f7dd17f5600  1 finished global_init_daemonize
   -33> 2020-07-07T13:59:08.283+0000 7f7dccf8b700  4 mgrc handle_mgr_map Got map version 4
   -32> 2020-07-07T13:59:08.283+0000 7f7dccf8b700  4 mgrc handle_mgr_map Active mgr is now [v2:10.0.1.1:6800/18339,v1:10.0.1.1:6801/18339]
   -31> 2020-07-07T13:59:08.283+0000 7f7dccf8b700  4 mgrc reconnect Starting new session with [v2:10.0.1.1:6800/18339,v1:10.0.1.1:6801/18339]
   -30> 2020-07-07T13:59:08.284+0000 7f7dccf8b700  4 mgrc handle_mgr_configure stats_period=5
   -29> 2020-07-07T13:59:08.284+0000 7f7dccf8b700  4 mgrc handle_mgr_configure updated stats threshold: 5
   -28> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x107
   -27> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x646
   -26> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x647
   -25> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x648
   -24> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x649
   -23> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64a
   -22> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64b
   -21> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64c
   -20> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64d
   -19> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64e
   -18> 2020-07-07T14:00:12.639+0000 7f7dccf8b700  0 mds.7.cache creating system inode with ino:0x64f
   -17> 2020-07-07T14:29:08.284+0000 7f7dccf8b700  4 mgrc ms_handle_reset ms_handle_reset con 0x562ad94d9000
   -16> 2020-07-07T14:29:08.284+0000 7f7dccf8b700  4 mgrc reconnect Terminating session with v2:10.0.1.1:6800/18339
   -15> 2020-07-07T14:29:08.284+0000 7f7dccf8b700  4 mgrc reconnect Starting new session with [v2:10.0.1.1:6800/18339,v1:10.0.1.1:6801/18339]
   -14> 2020-07-07T14:29:08.284+0000 7f7dccf8b700  4 mgrc handle_mgr_configure stats_period=5
   -13> 2020-07-07T14:44:08.284+0000 7f7dccf8b700  4 mgrc ms_handle_reset ms_handle_reset con 0x562ae12fd400
   -12> 2020-07-07T14:44:08.284+0000 7f7dccf8b700  4 mgrc reconnect Terminating session with v2:10.0.1.1:6800/18339
   -11> 2020-07-07T14:44:08.284+0000 7f7dccf8b700  4 mgrc reconnect Starting new session with [v2:10.0.1.1:6800/18339,v1:10.0.1.1:6801/18339]
   -10> 2020-07-07T14:44:08.284+0000 7f7dccf8b700  4 mgrc handle_mgr_configure stats_period=5
    -9> 2020-07-07T14:59:08.284+0000 7f7dccf8b700  4 mgrc ms_handle_reset ms_handle_reset con 0x562ad94d9000
    -8> 2020-07-07T14:59:08.284+0000 7f7dccf8b700  4 mgrc reconnect Terminating session with v2:10.0.1.1:6800/18339
    -7> 2020-07-07T14:59:08.284+0000 7f7dccf8b700  4 mgrc reconnect Starting new session with [v2:10.0.1.1:6800/18339,v1:10.0.1.1:6801/18339]
    -6> 2020-07-07T14:59:08.284+0000 7f7dccf8b700  4 mgrc handle_mgr_configure stats_period=5
    -5> 2020-07-07T15:14:08.285+0000 7f7dccf8b700  4 mgrc ms_handle_reset ms_handle_reset con 0x562ae08b6c00
    -4> 2020-07-07T15:14:08.285+0000 7f7dccf8b700  4 mgrc reconnect Terminating session with v2:10.0.1.1:6800/18339
    -3> 2020-07-07T15:14:08.285+0000 7f7dccf8b700  4 mgrc reconnect Starting new session with [v2:10.0.1.1:6800/18339,v1:10.0.1.1:6801/18339]
    -2> 2020-07-07T15:14:08.285+0000 7f7dccf8b700  4 mgrc handle_mgr_configure stats_period=5
    -1> 2020-07-07T15:27:20.984+0000 7f7dccf8b700 -1 /home/fedora/src/ceph/ceph/src/include/elist.h: In function 'elist<T>::~elist() [with T = MDLockCache*]' thread 7f7dccf8b700 time 2020-07-07T15:27:20.979768+0000
/home/fedora/src/ceph/ceph/src/include/elist.h: 91: FAILED ceph_assert(_head.empty())

 ceph version 16.0.0-3204-ge9cc7d863a (e9cc7d863ac8d63caefd191d9b7942e51c5bf780) pacific (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x124) [0x7f7dd262a774]
 2: (()+0x2508ff) [0x7f7dd262a8ff]
 3: (std::_Rb_tree<client_t, std::pair<client_t const, Capability>, std::_Select1st<std::pair<client_t const, Capability> >, std::less<client_t>, mempool::pool_allocator<(mempool::pool_index_t)18, std::pair<client_t const, Capability> > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<client_t const, Capability> >)+0x15a) [0x562ad7020a4a]
 4: (CInode::remove_client_cap(client_t)+0x221) [0x562ad6ffdf01]
 5: (CInode::clear_client_caps_after_export()+0x3b) [0x562ad6ffe28b]
 6: (Migrator::finish_export_inode_caps(CInode*, int, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >&)+0x534) [0x562ad6f86804]
 7: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x282) [0x562ad6f87402]
 8: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x41a) [0x562ad6f8759a]
 9: (Migrator::export_finish(CDir*)+0x438) [0x562ad6f89988]
 10: (Migrator::handle_export_notify_ack(boost::intrusive_ptr<MExportDirNotifyAck const> const&)+0x383) [0x562ad6f8abe3]
 11: (Migrator::dispatch(boost::intrusive_ptr<Message const> const&)+0x204) [0x562ad6f8b1f4]
 12: (MDSRank::_dispatch(boost::intrusive_ptr<Message const> const&, bool)+0x5b7) [0x562ad6d9f097]
 13: (MDSRankDispatcher::ms_dispatch(boost::intrusive_ptr<Message const> const&)+0x4f) [0x562ad6d9f68f]
 14: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x118) [0x562ad6d7b288]
 15: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0x448) [0x7f7dd2842848]
 16: (DispatchQueue::entry()+0x5ef) [0x7f7dd283ffef]
 17: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f7dd28f7ecd]
 18: (()+0x9432) [0x7f7dd1e68432]
 19: (clone()+0x43) [0x7f7dd19c29d3]

     0> 2020-07-07T15:27:20.986+0000 7f7dccf8b700 -1 *** Caught signal (Aborted) **
 in thread 7f7dccf8b700 thread_name:ms_dispatch

 ceph version 16.0.0-3204-ge9cc7d863a (e9cc7d863ac8d63caefd191d9b7942e51c5bf780) pacific (dev)
 1: (()+0x14a90) [0x7f7dd1e73a90]
 2: (gsignal()+0x145) [0x7f7dd18fda25]
 3: (abort()+0x127) [0x7f7dd18e6895]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x7f7dd262a7be]
 5: (()+0x2508ff) [0x7f7dd262a8ff]
 6: (std::_Rb_tree<client_t, std::pair<client_t const, Capability>, std::_Select1st<std::pair<client_t const, Capability> >, std::less<client_t>, mempool::pool_allocator<(mempool::pool_index_t)18, std::pair<client_t const, Capability> > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<client_t const, Capability> >)+0x15a) [0x562ad7020a4a]
 7: (CInode::remove_client_cap(client_t)+0x221) [0x562ad6ffdf01]
 8: (CInode::clear_client_caps_after_export()+0x3b) [0x562ad6ffe28b]
 9: (Migrator::finish_export_inode_caps(CInode*, int, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >&)+0x534) [0x562ad6f86804]
 10: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x282) [0x562ad6f87402]
 11: (Migrator::finish_export_dir(CDir*, int, std::map<inodeno_t, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > >, std::less<inodeno_t>, std::allocator<std::pair<inodeno_t const, std::map<client_t, Capability::Import, std::less<client_t>, std::allocator<std::pair<client_t const, Capability::Import> > > > > >&, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int*)+0x41a) [0x562ad6f8759a]
 12: (Migrator::export_finish(CDir*)+0x438) [0x562ad6f89988]
 13: (Migrator::handle_export_notify_ack(boost::intrusive_ptr<MExportDirNotifyAck const> const&)+0x383) [0x562ad6f8abe3]
 14: (Migrator::dispatch(boost::intrusive_ptr<Message const> const&)+0x204) [0x562ad6f8b1f4]
 15: (MDSRank::_dispatch(boost::intrusive_ptr<Message const> const&, bool)+0x5b7) [0x562ad6d9f097]
 16: (MDSRankDispatcher::ms_dispatch(boost::intrusive_ptr<Message const> const&)+0x4f) [0x562ad6d9f68f]
 17: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x118) [0x562ad6d7b288]
 18: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0x448) [0x7f7dd2842848]
 19: (DispatchQueue::entry()+0x5ef) [0x7f7dd283ffef]
 20: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f7dd28f7ecd]
 21: (()+0x9432) [0x7f7dd1e68432]
 22: (clone()+0x43) [0x7f7dd19c29d3]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 0 lockdep
   0/ 0 context
   0/ 0 crush
   0/ 0 mds
   0/ 0 mds_balancer
   0/ 0 mds_locker
   0/ 0 mds_log
   0/ 0 mds_log_expire
   0/ 0 mds_migrator
   0/ 0 buffer
   0/ 0 timer
   0/ 0 filer
   0/ 1 striper
   0/ 0 objecter
   0/ 0 rados
   0/ 0 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 rbd_rwl
   0/ 0 journaler
   0/ 0 objectcacher
   0/ 5 immutable_obj_cache
   0/ 0 client
   0/ 0 osd
   0/ 0 optracker
   0/ 0 objclass
   0/ 0 filestore
   0/ 0 journal
   0/ 0 ms
   0/ 0 mon
   0/ 0 monc
   0/ 0 paxos
   0/ 0 tp
   0/ 0 auth
   1/ 5 crypto
   0/ 0 finisher
   1/ 1 reserver
   0/ 0 heartbeatmap
   0/ 0 perfcounter
   0/ 0 rgw
   1/ 5 rgw_sync
   1/10 civetweb
   1/ 5 javaclient
   0/ 0 asok
   0/ 0 throttle
   0/ 0 refs
   1/ 5 compressor
   5/ 5 bluestore
   0/ 0 bluefs
   0/ 0 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
   5/ 5 prioritycache
   0/ 5 test
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
--- pthread ID / name mapping for recent threads ---
  7f7dccf8b700 / ms_dispatch
  7f7dd17f5600 / ceph-mds
  max_recent     10000
  max_new         1000
  log_file /tmp/cbt/ceph/log/mds.d.log
--- end dump of recent events ---

Update #4

Updated by Patrick Donnelly over 3 years ago

  • Status changed from Resolved to Pending Backport
  • Backport set to nautilus

This should have been flagged for backport.

Update #5

Updated by Patrick Donnelly over 3 years ago

  • Copied to Backport #46778: nautilus: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())" added
Update #6

Updated by Patrick Donnelly over 3 years ago

  • Status changed from Pending Backport to Resolved
Update #7

Updated by Nathan Cutler over 3 years ago

This issue and #44295 were both fixed by the same PR, https://github.com/ceph/ceph/pull/33538, which was backported to nautilus via #44295.
