Bug #5239

closed

osd: Segmentation fault in ceph-osd / tcmalloc

Added by Emil Renner Berthing almost 11 years ago. Updated over 10 years ago.

Status: Can't reproduce
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression:
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

We're still experiencing segmentation faults in the ceph-osd daemons from the 0.61.2-1~bpo70+1 Debian packages.
It appears to happen inside tcmalloc when used by LevelDB. It happens across all the OSD servers, and it seems to happen more often under load.

The issue was reported on the mailing list here:
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/15146

Initially we thought it was related to our use of very large objects, but the daemons keep crashing even when the only data in the cluster comes from the following runs of rados bench:

while true; do rados -p benchmarks -b 4096 bench 3600 write -t 64 --no-cleanup; sleep 1; done

and
while true; do rados -p benchmarks -b 4194304 bench 3600 write -t 64 --no-cleanup; sleep 1; done

Here are some stats on the cluster:
- each server has 64 GB of RAM,
- there are 12 OSDs per server and now 216 OSDs in total (earlier we only had 132 OSDs),
- each OSD uses around 1.5 GB of memory,
- there are now 33792 PGs (earlier we had 18432 PGs),
- all drives are 4 TB, with an xfs-formatted sdx1 and a 10 GB journal at sdx2,
- the filesystems are mounted as xfs (rw,noatime,attr2,noquota),
- we don't use snapshots.

Backtrace from the coredump:

Core was generated by `/usr/bin/ceph-osd -i 130 --pid-file /var/run/ceph/osd.130.pid -c /etc/ceph/ceph'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007f6a64e3eefb in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
(gdb) backtrace
#0 0x00007f6a64e3eefb in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000000000853a89 in reraise_fatal (signum=11) at global/signal_handler.cc:58
#2 handle_fatal_signal (signum=11) at global/signal_handler.cc:104
#3 <signal handler called>
#4 0x00007f6a640596f3 in do_malloc (size=364131408) at src/tcmalloc.cc:1059
#5 cpp_alloc (nothrow=false, size=364131408) at src/tcmalloc.cc:1354
#6 tc_new (size=364131408) at src/tcmalloc.cc:1530
#7 0x00007f6a59e90c10 in ?? ()
#8 0x0000000015b43450 in ?? ()
#9 0x00007f6a63e09b21 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#10 0x00007f6a63e06ba8 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#11 0x00007f6a63df24d4 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#12 0x0000000000840977 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::lower_bound (this=0x1ec1b6c0, prefix=..., to=...) at os/LevelDBStore.h:204
#13 0x000000000083f351 in LevelDBStore::get (this=<optimized out>, prefix=..., keys=..., out=0x7f6a59e90f60) at os/LevelDBStore.cc:106
#14 0x0000000000838449 in DBObjectMap::_lookup_map_header (this=this@entry=0x207b600, hoid=...) at os/DBObjectMap.cc:1080
#15 0x00000000008386f4 in DBObjectMap::lookup_create_map_header (this=this@entry=0x207b600, hoid=..., t=...) at os/DBObjectMap.cc:1146
#16 0x0000000000838c61 in DBObjectMap::set_keys (this=0x207b600, hoid=..., set=..., spos=0x7f6a59e91400) at os/DBObjectMap.cc:504
#17 0x00000000007f4380 in FileStore::_omap_setkeys (this=this@entry=0x2092000, cid=..., hoid=..., aset=..., spos=...) at os/FileStore.cc:4754
#18 0x000000000080f720 in FileStore::_do_transaction (this=this@entry=0x2092000, t=..., op_seq=op_seq@entry=22064536, trans_num=trans_num@entry=0) at os/FileStore.cc:2586
#19 0x0000000000812999 in FileStore::_do_transactions (this=this@entry=0x2092000, tls=..., op_seq=22064536, handle=handle@entry=0x7f6a59e91b80) at os/FileStore.cc:2151
#20 0x0000000000812b2e in FileStore::_do_op (this=0x2092000, osr=<optimized out>, handle=...) at os/FileStore.cc:1985
#21 0x00000000008f52ea in ThreadPool::worker (this=0x2092a08, wt=0x20a6480) at common/WorkQueue.cc:119
#22 0x00000000008f6590 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:316
#23 0x00007f6a64e36b50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#24 0x00007f6a63372a7d in clone () from /lib/x86_64-linux-gnu/libc.so.6
#25 0x0000000000000000 in ?? ()
(gdb)

The log from the same crashed server is attached.


Files

ceph-osd.130.log.gz (230 KB): log from crashed OSD daemon (Emil Renner Berthing, 06/03/2013 08:37 AM)
ceph-osd.5.log (2.88 MB): log of new type of crashes (Emil Renner Berthing, 06/28/2013 06:08 AM)

Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #5301: mon: leveldb crash in tcmalloc (Can't reproduce, 06/11/2013)

Actions #1

Updated by Sage Weil almost 11 years ago

  • Priority changed from Normal to Urgent
Actions #2

Updated by Ian Colle almost 11 years ago

  • Assignee set to Anonymous

Gary, can you please take a look at this?

Actions #3

Updated by Sage Weil almost 11 years ago

  • Assignee deleted (Anonymous)

This is either heap corruption or a buggy tcmalloc, I think.

Are there known problems with wheezy's tcmalloc version?

Can you try with the latest cuttlefish branch?

Actions #4

Updated by Sage Weil almost 11 years ago

  • Status changed from New to Need More Info
Actions #5

Updated by Emil Renner Berthing almost 11 years ago

All our OSD nodes have now been updated to packages built from the latest cuttlefish branch, commit 7d549cb82ab8e..

I'll of course keep you updated if we see any more crashes.

Actions #6

Updated by Emil Renner Berthing almost 11 years ago

No, unfortunately the latest cuttlefish branch didn't fix it. We had another crash about 6 hours after we upgraded.

This is with packages built from commit 7d549cb82ab8e.. We can try upgrading to 0.61.3-1~bpo70+1 from your repos, but I see from the git history that it is basically the same only with version numbers changed.

Core was generated by `/usr/bin/ceph-osd -i 28 --pid-file /var/run/ceph/osd.28.pid -c /etc/ceph/ceph.c'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f61ac4aeefb in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
(gdb) backtrace 
#0  0x00007f61ac4aeefb in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
#1  0x0000000000854fc9 in reraise_fatal (signum=11) at global/signal_handler.cc:58
#2  handle_fatal_signal (signum=11) at global/signal_handler.cc:104
#3  <signal handler called>
#4  0x00007f61ab6c96f3 in do_malloc (size=414072480) at src/tcmalloc.cc:1059
#5  cpp_alloc (nothrow=false, size=414072480) at src/tcmalloc.cc:1354
#6  tc_new (size=414072480) at src/tcmalloc.cc:1530
#7  0x00007f619fcfbc00 in ?? ()
#8  0x0000000018ae3ea0 in ?? ()
#9  0x00007f61ab479b21 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#10 0x00007f61ab476ba8 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#11 0x00007f61ab4624d4 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#12 0x0000000000841ec7 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::lower_bound (this=0x31114740, prefix=..., to=...) at os/LevelDBStore.h:241
#13 0x0000000000840b0c in LevelDBStore::get (this=0x313da40, prefix=..., keys=..., out=0x7f619fcfbf60) at os/LevelDBStore.cc:160
#14 0x00000000008385a9 in DBObjectMap::_lookup_map_header (this=this@entry=0x315b600, hoid=...) at os/DBObjectMap.cc:1080
#15 0x0000000000838854 in DBObjectMap::lookup_create_map_header (this=this@entry=0x315b600, hoid=..., t=...) at os/DBObjectMap.cc:1146
#16 0x0000000000838dc1 in DBObjectMap::set_keys (this=0x315b600, hoid=..., set=..., spos=0x7f619fcfc400) at os/DBObjectMap.cc:504
#17 0x00000000007f41b0 in FileStore::_omap_setkeys (this=this@entry=0x3172000, cid=..., hoid=..., aset=..., spos=...) at os/FileStore.cc:4754
#18 0x000000000080f620 in FileStore::_do_transaction (this=this@entry=0x3172000, t=..., op_seq=op_seq@entry=56503531, trans_num=trans_num@entry=0) at os/FileStore.cc:2586
#19 0x0000000000812899 in FileStore::_do_transactions (this=this@entry=0x3172000, tls=..., op_seq=56503531, handle=handle@entry=0x7f619fcfcb80) at os/FileStore.cc:2151
#20 0x0000000000812a2e in FileStore::_do_op (this=0x3172000, osr=<optimized out>, handle=...) at os/FileStore.cc:1985
#21 0x00000000008f680a in ThreadPool::worker (this=0x3172a08, wt=0x316bd60) at common/WorkQueue.cc:119
#22 0x00000000008f7ab0 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:316
#23 0x00007f61ac4a6b50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#24 0x00007f61aa9e2a7d in clone () from /lib/x86_64-linux-gnu/libc.so.6
#25 0x0000000000000000 in ?? ()
(gdb) 
Actions #7

Updated by Emil Renner Berthing almost 11 years ago

Would it be helpful to try and build packages that don't use tcmalloc (using the --without-tcmalloc configure option)?

This might confirm if indeed there are problems with wheezy's tcmalloc as Gary suggests.

Actions #8

Updated by Emil Renner Berthing almost 11 years ago

Sorry. s/Gary/Sage/

Actions #9

Updated by Emil Renner Berthing almost 11 years ago

It turns out that the Debian wheezy libgoogle-perftools-dev and ceph packages depend on libgoogle-perftools4, whereas the libgoogle-perftools-dev package in Ubuntu precise depends on libgoogle-perftools0.

In other words, ceph-osd from the Debian packages links against libtcmalloc.so.4 from version 2.0 of the Google Performance Tools, but (I think) ceph-osd in the Ubuntu packages links against libtcmalloc.so.0 from version 1.7. I don't have an Ubuntu installation to check this, so please correct me if I'm wrong.

Unfortunately Debian wheezy doesn't have a libgoogle-perftools0 package, but we can check whether this really is the source of the problem either by packaging our own version 1.7 of the Google Performance Tools or by building ceph --without-tcmalloc. What do you think?
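
For reference, a quick way to check which tcmalloc a ceph-osd binary actually links against is to inspect its runtime linkage (standard ldd/dpkg commands; the output shown in comments is only what one would expect on wheezy, not verified output):

ldd /usr/bin/ceph-osd | grep tcmalloc
# expected on wheezy: libtcmalloc.so.4 => ...
dpkg -S libtcmalloc.so.4
# expected on wheezy: libgoogle-perftools4: ...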

Actions #10

Updated by Sage Weil almost 11 years ago

Running without tcmalloc would be a very helpful data point, yes. You can get non-tcmalloc packages built for precise from http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-notcmalloc/ ... I'm not sure if they'll install cleanly on wheezy, but that may save you some effort building yourself if they do.

If not,

CEPH_EXTRA_CONFIGURE_ARGS=--without-tcmalloc dpkg-buildpackage

in the ceph root dir to build some debs.

Actions #11

Updated by Sage Weil almost 11 years ago

  • Subject changed from Segmentation fault in ceph-osd to osd: Segmentation fault in ceph-osd / tcmalloc
Actions #12

Updated by Emil Renner Berthing almost 11 years ago

Ok, all our OSD nodes are now running v0.61.3, but built --without-tcmalloc.

We'll try different workloads during the coming days to see if we can make them crash again.

Actions #13

Updated by Sage Weil almost 11 years ago

  • Priority changed from Urgent to High

Any luck?

Actions #14

Updated by Emil Renner Berthing almost 11 years ago

Yes, now we seem to have provoked two different errors. Both of them have happened at least twice each, but on different servers.

The first type is an uncaught signal:

Core was generated by `/usr/bin/ceph-osd -i 15 --pid-file /var/run/ceph/osd.15.pid -c /etc/ceph/ceph.c'.
Program terminated with signal 6, Aborted.
#0  0x00007f469db16efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
42    ../nptl/sysdeps/unix/sysv/linux/pt-raise.c: No such file or directory.
(gdb) backtrace 
#0  0x00007f469db16efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
#1  0x0000000000854759 in reraise_fatal (signum=6) at global/signal_handler.cc:58
#2  handle_fatal_signal (signum=6) at global/signal_handler.cc:104
#3  <signal handler called>
#4  0x00007f469c1fa475 in *__GI_raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#5  0x00007f469c1fd6f0 in *__GI_abort () at abort.c:92
#6  0x00007f469c23ef7a in __malloc_assert (assertion=<optimized out>, file=<optimized out>, line=<optimized out>, function=<optimized out>) at malloc.c:351
#7  0x00007f469c2416e2 in _int_malloc (av=<optimized out>, bytes=<optimized out>) at malloc.c:4485
#8  0x00007f466d9ae5c0 in ?? ()
#9  0x00007f468da01c20 in ?? ()
#10 0x00007f469cd39b33 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#11 0x00007f469cd36ba8 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#12 0x00007f469cd224d4 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#13 0x0000000000841657 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::lower_bound (this=0x7f46772cb010, prefix=..., to=...) at os/LevelDBStore.h:241
#14 0x000000000084029c in LevelDBStore::get (this=0x7f469402eee0, prefix=..., keys=..., out=0x7f468da01f80) at os/LevelDBStore.cc:160
#15 0x0000000000837d39 in DBObjectMap::_lookup_map_header (this=this@entry=0x7f4694079170, hoid=...) at os/DBObjectMap.cc:1080
#16 0x0000000000837fe4 in DBObjectMap::lookup_create_map_header (this=this@entry=0x7f4694079170, hoid=..., t=...) at os/DBObjectMap.cc:1146
#17 0x0000000000838551 in DBObjectMap::set_keys (this=0x7f4694079170, hoid=..., set=..., spos=0x7f468da02420) at os/DBObjectMap.cc:504
#18 0x00000000007f3940 in FileStore::_omap_setkeys (this=this@entry=0x7f4694027af0, cid=..., hoid=..., aset=..., spos=...) at os/FileStore.cc:4754
#19 0x000000000080edb0 in FileStore::_do_transaction (this=this@entry=0x7f4694027af0, t=..., op_seq=op_seq@entry=71130559, trans_num=trans_num@entry=0) at os/FileStore.cc:2586
#20 0x0000000000812029 in FileStore::_do_transactions (this=this@entry=0x7f4694027af0, tls=..., op_seq=71130559, handle=handle@entry=0x7f468da02ba0) at os/FileStore.cc:2151
#21 0x00000000008121be in FileStore::_do_op (this=0x7f4694027af0, osr=<optimized out>, handle=...) at os/FileStore.cc:1985
#22 0x00000000008f5f9a in ThreadPool::worker (this=0x7f46940284f8, wt=0x7f469402c4d0) at common/WorkQueue.cc:119
#23 0x00000000008f7240 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:316
#24 0x00007f469db0eb50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#25 0x00007f469c2a2a7d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#26 0x0000000000000000 in ?? ()
(gdb) 

And the second type is another segmentation fault:

Core was generated by `/usr/bin/ceph-osd -i 59 --pid-file /var/run/ceph/osd.59.pid -c /etc/ceph/ceph.c'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007ffe28f16efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
42    ../nptl/sysdeps/unix/sysv/linux/pt-raise.c: No such file or directory.
(gdb) backtrace 
#0  0x00007ffe28f16efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
#1  0x0000000000854759 in reraise_fatal (signum=11) at global/signal_handler.cc:58
#2  handle_fatal_signal (signum=11) at global/signal_handler.cc:104
#3  <signal handler called>
#4  __strncmp_ssse3 () at ../sysdeps/x86_64/multiarch/../strcmp.S:2257
#5  0x00007ffe27eb8828 in std::string::reserve(unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007ffe27eb8ab5 in std::string::append(char const*, unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x00007ffe28134c03 in leveldb::Block::Iter::Seek(leveldb::Slice const&) () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#8  0x00007ffe28139b33 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#9  0x00007ffe28139b33 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#10 0x00007ffe28136ba8 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#11 0x00007ffe281224d4 in ?? () from /usr/lib/x86_64-linux-gnu/libleveldb.so.1
#12 0x0000000000841657 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::lower_bound (this=0x7ffdb34009c0, prefix=..., to=...) at os/LevelDBStore.h:241
#13 0x000000000084029c in LevelDBStore::get (this=0x7ffe207011c0, prefix=..., keys=..., out=0x7ffe1b12af80) at os/LevelDBStore.cc:160
#14 0x0000000000837d39 in DBObjectMap::_lookup_map_header (this=this@entry=0x7ffe2078f500, hoid=...) at os/DBObjectMap.cc:1080
#15 0x0000000000837fe4 in DBObjectMap::lookup_create_map_header (this=this@entry=0x7ffe2078f500, hoid=..., t=...) at os/DBObjectMap.cc:1146
#16 0x0000000000838551 in DBObjectMap::set_keys (this=0x7ffe2078f500, hoid=..., set=..., spos=0x7ffe1b12b420) at os/DBObjectMap.cc:504
#17 0x00000000007f3940 in FileStore::_omap_setkeys (this=this@entry=0x7ffe2001d4c0, cid=..., hoid=..., aset=..., spos=...) at os/FileStore.cc:4754
#18 0x000000000080edb0 in FileStore::_do_transaction (this=this@entry=0x7ffe2001d4c0, t=..., op_seq=op_seq@entry=65830337, trans_num=trans_num@entry=0) at os/FileStore.cc:2586
#19 0x0000000000812029 in FileStore::_do_transactions (this=this@entry=0x7ffe2001d4c0, tls=..., op_seq=65830337, handle=handle@entry=0x7ffe1b12bba0) at os/FileStore.cc:2151
#20 0x00000000008121be in FileStore::_do_op (this=0x7ffe2001d4c0, osr=<optimized out>, handle=...) at os/FileStore.cc:1985
#21 0x00000000008f5f9a in ThreadPool::worker (this=0x7ffe2001dec8, wt=0x7ffe207027d0) at common/WorkQueue.cc:119
#22 0x00000000008f7240 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:316
#23 0x00007ffe28f0eb50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#24 0x00007ffe276a2a7d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#25 0x0000000000000000 in ?? ()
(gdb) 

Do you want the logs for any of these crashes?
Do you think these are related or should I create new issues for them?

Actions #15

Updated by Emil Renner Berthing almost 11 years ago

I just looked into LevelDB packaging in wheezy and precise. Again it seems that Debian ships a newer version of LevelDB than Ubuntu: commit dd0d562 in wheezy vs. commit 3c8be in precise. We can try installing the Ubuntu version of LevelDB to rule out this difference if you want.

Btw. we have a pool of mostly 4k objects (47744072 in all) created by the benchmark tool. It seems that a good way to provoke these crashes is to change the crush map for that pool.
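
For anyone trying to reproduce this, changing the crush map is normally done by dumping, decompiling, editing and re-injecting it (standard ceph/crushtool invocations; the file names here are arbitrary):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt, e.g. change the rule or placement used by the benchmarks pool
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new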

Actions #16

Updated by Sage Weil almost 11 years ago

Ah. Can you please try the Ubuntu leveldb package and see if the problem persists? Thanks!

Actions #17

Updated by Emil Renner Berthing almost 11 years ago

Ok, I tried the Ubuntu leveldb package, but in Ubuntu leveldb is only built as a static library. So what I did was to build the Ubuntu leveldb package, install the static library on the build machine, and then rebuild the ceph v0.61.3 packages. Hence the resulting packages don't depend on Debian's libleveldb1 but have the older version statically linked in.

This build has now been running on all OSD nodes for a few hours while we tried to make them crash again. We've tried changing the crush map of the benchmark pool while running the benchmark and killing random OSD daemons, but they still haven't crashed. So we're quite confident that this actually fixed the issue for us.

Maybe you could do something similar when building the official Debian packages to solve this issue for other Debian users?
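
Roughly, the rebuild described above looks like the sketch below (assumptions: the precise source package is named leveldb at version 0+20120125.git3c8be10, and a precise deb-src entry is available on the wheezy build machine; exact steps may differ):

# fetch and build precise's leveldb on the wheezy build machine
apt-get source leveldb
cd leveldb-0+20120125.git3c8be10
dpkg-buildpackage -us -uc
dpkg -i ../libleveldb-dev_*.deb

# then rebuild the ceph source package so it picks up the static libleveldb.a
cd ../ceph-0.61.3
dpkg-buildpackage -us -uc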

Actions #18

Updated by Emil Renner Berthing almost 11 years ago

Argh. I spoke too soon. We just had another crash this morning while deleting the benchmark pool. Using the statically linked LevelDB at the same version as Ubuntu is definitely more stable, but it seems it isn't the root cause :(

Core was generated by `/usr/bin/ceph-osd -i 99 --pid-file /var/run/ceph/osd.99.pid -c /etc/ceph/ceph.c'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007fe79c586efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
42    ../nptl/sysdeps/unix/sysv/linux/pt-raise.c: No such file or directory.
(gdb) backtrace 
#0  0x00007fe79c586efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
#1  0x000000000085ec49 in reraise_fatal (signum=11) at global/signal_handler.cc:58
#2  handle_fatal_signal (signum=11) at global/signal_handler.cc:104
#3  <signal handler called>
#4  0x00007fe79b7a7afc in ?? () from /usr/lib/libtcmalloc.so.4
#5  0x00007fe79b318828 in std::string::reserve(unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007fe79b318ab5 in std::string::append(char const*, unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x00000000009ece84 in leveldb::Block::Iter::Next() ()
#8  0x00000000009e6a6e in leveldb::(anonymous namespace)::TwoLevelIterator::Next() ()
#9  0x00000000009e6a6e in leveldb::(anonymous namespace)::TwoLevelIterator::Next() ()
#10 0x00000000009e4a90 in leveldb::(anonymous namespace)::MergingIterator::Next() ()
#11 0x00000000009d6a56 in leveldb::(anonymous namespace)::DBIter::FindNextUserEntry(bool, std::string*) ()
#12 0x00000000009d6d8d in leveldb::(anonymous namespace)::DBIter::Seek(leveldb::Slice const&) ()
#13 0x000000000084bb47 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::lower_bound (this=0x44b3ff90, prefix=..., to=...) at os/LevelDBStore.h:241
#14 0x000000000084a78c in LevelDBStore::get (this=0x1f79900, prefix=..., keys=..., out=0x7fe79123dde0) at os/LevelDBStore.cc:160
#15 0x0000000000842299 in DBObjectMap::_lookup_map_header (this=this@entry=0x1f97080, hoid=...) at os/DBObjectMap.cc:1080
#16 0x00000000008482c9 in DBObjectMap::lookup_map_header (this=this@entry=0x1f97080, hoid=...) at os/DBObjectMap.h:404
#17 0x0000000000844c1c in DBObjectMap::clear (this=0x1f97080, hoid=..., spos=0x7fe79123e400) at os/DBObjectMap.cc:575
#18 0x000000000080af4f in FileStore::lfn_unlink (this=this@entry=0x1fae000, cid=..., o=..., spos=...) at os/FileStore.cc:350
#19 0x000000000080b0a0 in FileStore::_remove (this=this@entry=0x1fae000, cid=..., oid=..., spos=...) at os/FileStore.cc:2882
#20 0x000000000081a74a in FileStore::_do_transaction (this=this@entry=0x1fae000, t=..., op_seq=op_seq@entry=32504370, trans_num=trans_num@entry=0) at os/FileStore.cc:2408
#21 0x000000000081c599 in FileStore::_do_transactions (this=this@entry=0x1fae000, tls=..., op_seq=32504370, handle=handle@entry=0x7fe79123eb80) at os/FileStore.cc:2151
#22 0x000000000081c72e in FileStore::_do_op (this=0x1fae000, osr=<optimized out>, handle=...) at os/FileStore.cc:1985
#23 0x000000000090048a in ThreadPool::worker (this=0x1faea08, wt=0x1fc4940) at common/WorkQueue.cc:119
#24 0x0000000000901730 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:316
#25 0x00007fe79c57eb50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#26 0x00007fe79ab02a7d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#27 0x0000000000000000 in ?? ()
(gdb) 
Actions #19

Updated by Sage Weil almost 11 years ago

  • Priority changed from High to Urgent
Actions #20

Updated by Sage Weil almost 11 years ago

sandon put wheezy on these mira for us to test this locally: mira09456

Actions #21

Updated by Sage Weil almost 11 years ago

  • Status changed from Need More Info to 12
Actions #22

Updated by Sage Weil almost 11 years ago

  • Assignee set to Sage Weil
Actions #23

Updated by Sage Weil almost 11 years ago

  • Status changed from 12 to 7
  • Priority changed from Urgent to High

Latest is that the precise version of boost appears to have resolved this... so far.

Actions #24

Updated by Emil Renner Berthing almost 11 years ago

Yes, boost1.46 from Ubuntu does seem to make a difference. The last build has been running for 5 days now with far fewer segmentation faults, even though the workload hasn't changed.

However, during these 5 days we did see 4 crashes, but with very different backtraces:

Core was generated by `/usr/bin/ceph-osd -i 5 --pid-file /var/run/ceph/osd.5.pid -c /etc/ceph/ceph.con'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007fc8e9a26efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
42      ../nptl/sysdeps/unix/sysv/linux/pt-raise.c: No such file or directory.
(gdb) backtrace 
#0  0x00007fc8e9a26efb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
#1  0x0000000000860439 in reraise_fatal (signum=11) at global/signal_handler.cc:58
#2  handle_fatal_signal (signum=11) at global/signal_handler.cc:104
#3  <signal handler called>
#4  0x000000001d6aaf20 in ?? ()
#5  0x00000000006454ac in ~ptr (this=0x1d6aaf40, __in_chrg=<optimized out>) at ./include/buffer.h:159
#6  destroy (__p=0x1d6aaf40, this=<optimized out>) at /usr/include/c++/4.7/ext/new_allocator.h:123
#7  std::_List_base<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_clear (this=this@entry=0x2349a5a0)
    at /usr/include/c++/4.7/bits/list.tcc:78
#8  0x00000000006a8375 in ~_List_base (this=0x2349a5a0, __in_chrg=<optimized out>)
    at /usr/include/c++/4.7/bits/stl_list.h:379
#9  ~list (this=0x2349a5a0, __in_chrg=<optimized out>) at /usr/include/c++/4.7/bits/stl_list.h:436
#10 ~list (this=0x2349a5a0, __in_chrg=<optimized out>) at ./include/buffer.h:304
#11 Message::~Message (this=this@entry=0x2349a540, __in_chrg=<optimized out>) at ./msg/Message.h:351
#12 0x0000000000720d50 in ~MOSDPing (this=0x2349a540, __in_chrg=<optimized out>) at ./messages/MOSDPing.h:64
#13 MOSDPing::~MOSDPing (this=0x2349a540, __in_chrg=<optimized out>) at ./messages/MOSDPing.h:64
#14 0x00000000009a8ed4 in ~intrusive_ptr (this=0x7fc8da4b6bd0, __in_chrg=<optimized out>)
    at /usr/include/boost/smart_ptr/intrusive_ptr.hpp:101
#15 ~QueueItem (this=0x7fc8da4b6bc0, __in_chrg=<optimized out>) at msg/DispatchQueue.h:43
#16 DispatchQueue::entry (this=0x25a8668) at msg/DispatchQueue.cc:127
#17 0x00000000008f75ed in DispatchQueue::DispatchThread::entry (this=<optimized out>) at msg/DispatchQueue.h:104
#18 0x00007fc8e9a1eb50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#19 0x00007fc8e7d82a7d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#20 0x0000000000000000 in ?? ()
(gdb) 

For completeness, our latest wheezy packages were again built with leveldb 0+20120125.git3c8be10 from Ubuntu precise statically linked in, but this time they were also built against the boost1.46 packages from Ubuntu.

Actions #25

Updated by Sage Weil over 10 years ago

  • Status changed from 7 to Need More Info
Actions #26

Updated by Sage Weil over 10 years ago

  • Assignee deleted (Sage Weil)
Actions #27

Updated by Sage Weil over 10 years ago

  • Status changed from Need More Info to Can't reproduce

Let us know if this is still happening for you. Thanks!

Actions #28

Updated by Emil Renner Berthing over 10 years ago

Since the last update here, we've been running our own builds of the cuttlefish branch, built exactly as described above. That is with leveldb 0+20120125.git3c8be10 from Ubuntu precise statically linked and using the boost 1.46.1-7ubuntu3 packages from Ubuntu.

Since then we've had almost no crashes due to segmentation faults, and none of them looked like the problems above.
Actually we've had no problems at all with the cluster since updating to our own build of cuttlefish 0.61.7.

However, when upgrading to cuttlefish 0.61.8 we grew bold and tried your 0.61.8-1~bpo70+1 build and these crashes came back. Therefore we've just upgraded the cluster to our own build of 0.61.8 again using the Ubuntu leveldb and boost packages. Hopefully this will make the problems go away again.

Here is a sample backtrace from a crash, though it is very similar to the ones in messages #6 and #14.

Core was generated by `/usr/bin/ceph-osd -i 56 --pid-file /var/run/ceph/osd.56.pid -c /etc/ceph/ceph.c'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f90ed1beefb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
(gdb) #0  0x00007f90ed1beefb in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
#1  0x00000000008641e9 in reraise_fatal (signum=11) at global/signal_handler.cc:58
#2  handle_fatal_signal (signum=11) at global/signal_handler.cc:104
#3  <signal handler called>
#4  0x00007f90ec3d99ed in SampleAllocation (this=<optimized out>, k=174825472) at src/sampler.h:142
#5  SampleAllocation (this=<optimized out>, k=174825472) at src/thread_cache.h:332
#6  do_malloc_pages (size=174825472, heap=<optimized out>) at src/tcmalloc.cc:1036
#7  do_malloc (size=174821360) at src/tcmalloc.cc:1071
#8  cpp_alloc (nothrow=false, size=174821360) at src/tcmalloc.cc:1354
#9  tc_new (size=174821360) at src/tcmalloc.cc:1530
#10 0x00007f90e2c579a0 in ?? ()
#11 0x000000000a6b8ff0 in ?? ()
#12 0x00000000009f1e21 in leveldb::(anonymous namespace)::TwoLevelIterator::Seek(leveldb::Slice const&) ()
#13 0x00000000009efb28 in leveldb::(anonymous namespace)::MergingIterator::Seek(leveldb::Slice const&) ()
#14 0x00000000009e1fe4 in leveldb::(anonymous namespace)::DBIter::Seek(leveldb::Slice const&) ()
#15 0x0000000000850ed7 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::lower_bound (this=0x1c203ad0, prefix=..., 
    to=...) at os/LevelDBStore.h:241
#16 0x000000000084fb1c in LevelDBStore::get (this=0x171b540, prefix=..., keys=..., out=0x7f90e2c57d00)
    at os/LevelDBStore.cc:160
#17 0x00000000008475f9 in DBObjectMap::_lookup_map_header (this=this@entry=0x17351e0, hoid=...)
    at os/DBObjectMap.cc:1080
#18 0x000000000084d659 in DBObjectMap::lookup_map_header (this=this@entry=0x17351e0, hoid=...) at os/DBObjectMap.h:404
#19 0x0000000000848fb6 in DBObjectMap::rm_keys (this=0x17351e0, hoid=..., to_clear=..., spos=0x7f90e2c58400)
    at os/DBObjectMap.cc:696
#20 0x0000000000803361 in FileStore::_omap_rmkeys (this=this@entry=0x1752000, cid=..., hoid=..., keys=..., spos=...)
    at os/FileStore.cc:4825
#21 0x000000000081e530 in FileStore::_do_transaction (this=this@entry=0x1752000, t=..., 
    op_seq=op_seq@entry=157733680, trans_num=trans_num@entry=0) at os/FileStore.cc:2673
#22 0x00000000008218b9 in FileStore::_do_transactions (this=this@entry=0x1752000, tls=..., op_seq=157733680, 
    handle=handle@entry=0x7f90e2c58b80) at os/FileStore.cc:2152
#23 0x0000000000821a4e in FileStore::_do_op (this=0x1752000, osr=<optimized out>, handle=...) at os/FileStore.cc:1986
#24 0x0000000000907a8a in ThreadPool::worker (this=0x1752a08, wt=0x174bce0) at common/WorkQueue.cc:119
#25 0x0000000000908d30 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:316
#26 0x00007f90ed1b6b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#27 0x00007f90eb51aa7d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#28 0x0000000000000000 in ?? ()
(gdb) quit

Actions #29

Updated by Emil Renner Berthing over 10 years ago

For future reference, the culprit seems to be the version of leveldb in Debian wheezy. We've built a newer leveldb package from the Ubuntu Saucy package and then built ceph packages to link against that. This makes the cluster much more stable, but to get rid of the problems completely you need to rewrite all the old leveldb stores with the ceph-kvstore-tool from newer versions of ceph.
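
If anyone else needs to do that rewrite, it would look roughly like the sketch below (assumptions: the OSD's omap leveldb lives under current/omap and ceph-kvstore-tool provides a store-copy command; stop the OSD first, and check the exact tool syntax for your ceph version):

# stop the affected OSD (example id: 56)
service ceph stop osd.56
# copy the leveldb into a freshly written store, then swap the directories
ceph-kvstore-tool /var/lib/ceph/osd/ceph-56/current/omap store-copy /var/lib/ceph/osd/ceph-56/current/omap.new
mv /var/lib/ceph/osd/ceph-56/current/omap /var/lib/ceph/osd/ceph-56/current/omap.old
mv /var/lib/ceph/osd/ceph-56/current/omap.new /var/lib/ceph/osd/ceph-56/current/omap
service ceph start osd.56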
