Bug #19803 (open)

osd_op_reply for stat does not contain data (ceph-mds crashes with unhandled buffer::end_of_buffer exception)

Added by Andreas Gerstmayr about 7 years ago. Updated almost 7 years ago.

Status: New
Priority: Normal
Assignee: -
Category: EC Pools
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

our MDS crashes reproducibly after a few hours when we extract lots of zip archives (with many small files) to CephFS from 8 different clients in parallel.
Unfortunately, the bad operation gets persisted in the MDS journal, so when another MDS becomes active and replays the journal (clientreplay_start), it crashes as well.

Initial crash:

terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
  what():  buffer::end_of_buffer
*** Caught signal (Aborted) **
 in thread 7f30090a1700 thread_name:ceph-mds
 ceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7)
 1: (()+0x53677a) [0x55d7ecf6a77a]
 2: (()+0xf370) [0x7f300b6e9370]
 3: (gsignal()+0x37) [0x7f300a70b1d7]
 4: (abort()+0x148) [0x7f300a70c8c8]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x165) [0x7f300b00f9d5]
 6: (()+0x5e946) [0x7f300b00d946]
 7: (()+0x5e973) [0x7f300b00d973]
 8: (()+0xb52c5) [0x7f300b0642c5]
 9: (()+0x7dc5) [0x7f300b6e1dc5]
 10: (clone()+0x6d) [0x7f300a7cd73d]
2017-04-28 00:49:58.691715 7f30090a1700 -1 *** Caught signal (Aborted) **
 in thread 7f30090a1700 thread_name:ceph-mds
 ceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7)
 1: (()+0x53677a) [0x55d7ecf6a77a]
 2: (()+0xf370) [0x7f300b6e9370]
 3: (gsignal()+0x37) [0x7f300a70b1d7]
 4: (abort()+0x148) [0x7f300a70c8c8]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x165) [0x7f300b00f9d5]
 6: (()+0x5e946) [0x7f300b00d946]
 7: (()+0x5e973) [0x7f300b00d973]
 8: (()+0xb52c5) [0x7f300b0642c5]
 9: (()+0x7dc5) [0x7f300b6e1dc5]
 10: (clone()+0x6d) [0x7f300a7cd73d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- begin dump of recent events ---
-10000> 2017-04-28 00:49:33.714423 7f3004126700  2 Event(0x55d7f7930680 nevent=5000 time_id=1101).wakeup
-9999> 2017-04-28 00:49:33.714473 7f3004126700  1 -- 10.250.21.12:6800/3482923004 --> 10.250.21.15:6839/3187 -- osd_op(unknown.0.134963:3091941 27.f2c81b03 100067f8439.00000000 [create 0~0,setxattr parent (528),setxattr layout (30)] snapc 0=[] ondisk+write+known_if_redirected+full_force e72474) v7 -- 0x55d8310c6c00 con 0
terminate called recursively
-9998> 2017-04-28 00:49:33.714495 7f30090a1700  5 -- 10.250.21.12:6800/3482923004 >> 10.250.21.15:6813/28389 conn(0x55d7f837d000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=142080 cs=1 l=1). rx osd.104 seq 22898 0x55d7f7b1a840 osd_op_reply(3091489 100067f8dd1.00000000 [create 0~0,setxattr (528),setxattr (30)] v0'0 uv234058 ondisk = 0) v7
-9997> 2017-04-28 00:49:33.714506 7f3004126700  1 -- 10.250.21.12:6800/3482923004 --> 10.250.21.12:6831/313676 -- osd_op(unknown.0.134963:3091942 1.f2c81b03 100067f8439.00000000 [create 0~0,setxattr parent (528)] snapc 0=[] ondisk+write+known_if_redirected+full_force e72474) v7 -- 0x55d82f736000 con 0
-9996> 2017-04-28 00:49:33.714514 7f3004126700  2 Event(0x55d7f7930680 nevent=5000 time_id=1101).wakeup
terminate called recursively
-9995> 2017-04-28 00:49:33.714512 7f30090a1700  1 -- 10.250.21.12:6800/3482923004 <== osd.104 10.250.21.15:6813/28389 22898 ==== osd_op_reply(3091489 100067f8dd1.00000000 [create 0~0,setxattr (528),setxattr (30)] v0'0 uv234058 ondisk = 0) v7 ==== 224+0+0 (2199786887 0 0) 0x55d7f7b1a840 con 0x55d7f837d000
-9994> 2017-04-28 00:49:33.714511 7f30098a2700  5 -- 10.250.21.12:6800/3482923004 >> 10.250.21.11:6825/46129 conn(0x55d7f82d7000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=113646 cs=1 l=1). rx osd.20 seq 45062 0x55d7faed4b00 osd_op_reply(3091378 100067f83d2.00000000 [create 0~0,setxattr (528),setxattr (30)] v0'0 uv231835 ondisk = 0) v7
-9993> 2017-04-28 00:49:33.714530 7f30098a2700  1 -- 10.250.21.12:6800/3482923004 <== osd.20 10.250.21.11:6825/46129 45062 ==== osd_op_reply(3091378 100067f83d2.00000000 [create 0~0,setxattr (528),setxattr (30)] v0'0 uv231835 ondisk = 0) v7 ==== 224+0+0 (3127545423 0 0) 0x55d7faed4b00 con 0x55d7f82d7000
-9992> 2017-04-28 00:49:33.714550 7f3004126700  1 -- 10.250.21.12:6800/3482923004 --> 10.250.21.15:6839/3187 -- osd_op(unknown.0.134963:3091943 27.e38456d3 100067f8bf4.00000000 [create 0~0,setxattr parent (528),setxattr layout (30)] snapc 0=[] ondisk+write+known_if_redirected+full_force e72474) v7 -- 0x55d82f0fe2c0 con 0
/entrypoint.sh: line 339: 619559 Aborted                 /usr/bin/ceph-mds ${CEPH_OPTS} -d -i ${MDS_NAME} --setuser ceph --setgroup ceph

The following logs/stacktraces occur when replaying the journal:

2017-04-28 00:50:54.788216 7f85301a3700  1 mds.0.134974 handle_mds_map i am now mds.0.134974
2017-04-28 00:50:54.788225 7f85301a3700  1 mds.0.134974 handle_mds_map state change up:reconnect --> up:rejoin
2017-04-28 00:50:54.788237 7f85301a3700  1 mds.0.134974 rejoin_start
2017-04-28 00:50:54.788449 7f85301a3700  1 mds.0.134974 rejoin_joint_start
2017-04-28 00:50:54.879546 7f852b99a700  1 mds.0.134974 rejoin_done
2017-04-28 00:50:55.790116 7f85301a3700  1 mds.0.134974 handle_mds_map i am now mds.0.134974
2017-04-28 00:50:55.790125 7f85301a3700  1 mds.0.134974 handle_mds_map state change up:rejoin --> up:clientreplay
2017-04-28 00:50:55.790138 7f85301a3700  1 mds.0.134974 recovery_done -- successful recovery!
2017-04-28 00:50:55.790323 7f85301a3700  1 mds.0.134974 clientreplay_start
terminate called recursively
terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
what():  buffer::end_of_buffer
*** Caught signal (Aborted) **
in thread 7f853311a700 thread_name:ceph-mds
/entrypoint.sh: line 339: 600855 Aborted                 /usr/bin/ceph-mds ${CEPH_OPTS} -d -i ${MDS_NAME} --setuser ceph --setgroup ceph

Backtrace of the core dump:

(gdb) bt
#0  0x00007f7afcace1d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007f7afcacf8c8 in __GI_abort () at abort.c:90
#2  0x00007f7afd3d2965 in __gnu_cxx::__verbose_terminate_handler () at ../../../../libstdc++-v3/libsupc++/vterminate.cc:50
#3  0x00007f7afd3d0946 in __cxxabiv1::__terminate (handler=<optimized out>) at ../../../../libstdc++-v3/libsupc++/eh_terminate.cc:38
#4  0x00007f7afd3d0973 in std::terminate () at ../../../../libstdc++-v3/libsupc++/eh_terminate.cc:48
#5  0x00007f7afd4272c5 in std::(anonymous namespace)::execute_native_thread_routine (__p=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/thread.cc:92
#6  0x00007f7afdaa4dc5 in start_thread (arg=0x7f7afbc65700) at pthread_create.c:308
#7  0x00007f7afcb9073d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Running with gdb, showing backtraces of all exceptions:

2017-04-28 14:32:50.666872 7ffff1544700  1 mds.0.153707 reconnect_done
2017-04-28 14:32:51.661987 7ffff1544700  1 mds.0.153707 handle_mds_map i am now mds.0.153707
2017-04-28 14:32:51.661990 7ffff1544700  1 mds.0.153707 handle_mds_map state change up:reconnect --> up:rejoin
2017-04-28 14:32:51.661999 7ffff1544700  1 mds.0.153707 rejoin_start
2017-04-28 14:32:51.662189 7ffff1544700  1 mds.0.153707 rejoin_joint_start
2017-04-28 14:32:51.686178 7fffecd3b700  1 mds.0.153707 rejoin_done
2017-04-28 14:32:52.667188 7ffff1544700  1 mds.0.153707 handle_mds_map i am now mds.0.153707
2017-04-28 14:32:52.667195 7ffff1544700  1 mds.0.153707 handle_mds_map state change up:rejoin --> up:clientreplay
2017-04-28 14:32:52.667207 7ffff1544700  1 mds.0.153707 recovery_done -- successful recovery!
2017-04-28 14:32:52.667394 7ffff1544700  1 mds.0.153707 clientreplay_start
[Switching to Thread 0x7ffff44bb700 (LWP 759717)]
Catchpoint 1 (exception thrown), __cxxabiv1::__cxa_throw (obj=0x55555ebe11f0, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
62    {
#0  __cxxabiv1::__cxa_throw (obj=0x55555ebe11f0, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
#1  0x0000555555cdfa05 in ceph::buffer::list::iterator_impl<false>::copy (this=0x7ffff44b9730, len=<optimized out>, dest=0x7ffff44b9720 "") at /usr/src/debug/ceph-11.2.0/src/common/buffer.cc:1158
#2  0x0000555555ad1dbb in decode_raw<ceph_le64> (p=..., t=...) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:61
#3  decode (p=..., v=<synthetic pointer>) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:107
#4  Objecter::C_Stat::finish (this=0x55555edfaf50, r=0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.h:1348
#5  0x00005555557bead9 in Context::complete (this=0x55555edfaf50, r=<optimized out>) at /usr/src/debug/ceph-11.2.0/src/include/Context.h:70
#6  0x0000555555aa94aa in Objecter::handle_osd_op_reply (this=this@entry=0x55555ec02700, m=m@entry=0x555560facb00) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:3411
#7  0x0000555555abc9db in Objecter::ms_dispatch (this=0x55555ec02700, m=0x555560facb00) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:973
#8  0x0000555555d0f546 in ms_fast_dispatch (m=0x555560facb00, this=0x55555ec02000) at /usr/src/debug/ceph-11.2.0/src/msg/Messenger.h:564
#9  DispatchQueue::fast_dispatch (this=0x55555ec02150, m=m@entry=0x555560facb00) at /usr/src/debug/ceph-11.2.0/src/msg/DispatchQueue.cc:71
#10 0x0000555555d492f9 in AsyncConnection::process (this=0x555563458000) at /usr/src/debug/ceph-11.2.0/src/msg/async/AsyncConnection.cc:769
#11 0x0000555555bb9459 in EventCenter::process_events (this=this@entry=0x55555eb7e080, timeout_microseconds=<optimized out>, timeout_microseconds@entry=30000000) at /usr/src/debug/ceph-11.2.0/src/msg/async/Event.cc:405
#12 0x0000555555bbbe5a in NetworkStack::__lambda0::operator() (__closure=0x55555eb560d0) at /usr/src/debug/ceph-11.2.0/src/msg/async/Stack.cc:46
#13 0x00007ffff5c7d230 in std::(anonymous namespace)::execute_native_thread_routine (__p=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/thread.cc:84
#14 0x00007ffff62fadc5 in start_thread (arg=0x7ffff44bb700) at pthread_create.c:308
#15 0x00007ffff53e673d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
[Switching to Thread 0x7ffff34b9700 (LWP 759719)]
Catchpoint 1 (exception thrown), __cxxabiv1::__cxa_throw (obj=0x55555ebe2480, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
62    {
#0  __cxxabiv1::__cxa_throw (obj=0x55555ebe2480, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
#1  0x0000555555cdfa05 in ceph::buffer::list::iterator_impl<false>::copy (this=0x7ffff34b7730, len=<optimized out>, dest=0x7ffff34b7720 "") at /usr/src/debug/ceph-11.2.0/src/common/buffer.cc:1158
#2  0x0000555555ad1dbb in decode_raw<ceph_le64> (p=..., t=...) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:61
#3  decode (p=..., v=<synthetic pointer>) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:107
#4  Objecter::C_Stat::finish (this=0x55555edfbe30, r=0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.h:1348
#5  0x00005555557bead9 in Context::complete (this=0x55555edfbe30, r=<optimized out>) at /usr/src/debug/ceph-11.2.0/src/include/Context.h:70
#6  0x0000555555aa94aa in Objecter::handle_osd_op_reply (this=this@entry=0x55555ec02700, m=m@entry=0x555560fac000) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:3411
#7  0x0000555555abc9db in Objecter::ms_dispatch (this=0x55555ec02700, m=0x555560fac000) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:973
#8  0x0000555555d0f546 in ms_fast_dispatch (m=0x555560fac000, this=0x55555ec02000) at /usr/src/debug/ceph-11.2.0/src/msg/Messenger.h:564
#9  DispatchQueue::fast_dispatch (this=0x55555ec02150, m=m@entry=0x555560fac000) at /usr/src/debug/ceph-11.2.0/src/msg/DispatchQueue.cc:71
#10 0x0000555555d492f9 in AsyncConnection::process (this=0x55556345b000) at /usr/src/debug/ceph-11.2.0/src/msg/async/AsyncConnection.cc:769
#11 0x0000555555bb9459 in EventCenter::process_events (this=this@entry=0x55555eb7ea80, timeout_microseconds=<optimized out>, timeout_microseconds@entry=30000000) at /usr/src/debug/ceph-11.2.0/src/msg/async/Event.cc:405
#12 0x0000555555bbbe5a in NetworkStack::__lambda0::operator() (__closure=0x55555eb560e0) at /usr/src/debug/ceph-11.2.0/src/msg/async/Stack.cc:46
#13 0x00007ffff5c7d230 in std::(anonymous namespace)::execute_native_thread_routine (__p=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/thread.cc:84
#14 0x00007ffff62fadc5 in start_thread (arg=0x7ffff34b9700) at pthread_create.c:308
#15 0x00007ffff53e673d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
  what():  buffer::end_of_buffer
terminate called recursively
[Switching to Thread 0x7ffff3cba700 (LWP 759718)]
Catchpoint 1 (exception thrown), __cxxabiv1::__cxa_throw (obj=0x55555ebe2240, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
62    {
#0  __cxxabiv1::__cxa_throw (obj=0x55555ebe2240, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
#1  0x0000555555cdfa05 in ceph::buffer::list::iterator_impl<false>::copy (this=0x7ffff3cb8730, len=<optimized out>, dest=0x7ffff3cb8720 "") at /usr/src/debug/ceph-11.2.0/src/common/buffer.cc:1158
#2  0x0000555555ad1dbb in decode_raw<ceph_le64> (p=..., t=...) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:61
#3  decode (p=..., v=<synthetic pointer>) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:107
#4  Objecter::C_Stat::finish (this=0x55555edfb420, r=0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.h:1348
#5  0x00005555557bead9 in Context::complete (this=0x55555edfb420, r=<optimized out>) at /usr/src/debug/ceph-11.2.0/src/include/Context.h:70
#6  0x0000555555aa94aa in Objecter::handle_osd_op_reply (this=this@entry=0x55555ec02700, m=m@entry=0x5555607842c0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:3411
#7  0x0000555555abc9db in Objecter::ms_dispatch (this=0x55555ec02700, m=0x5555607842c0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:973
#8  0x0000555555d0f546 in ms_fast_dispatch (m=0x5555607842c0, this=0x55555ec02000) at /usr/src/debug/ceph-11.2.0/src/msg/Messenger.h:564
#9  DispatchQueue::fast_dispatch (this=0x55555ec02150, m=m@entry=0x5555607842c0) at /usr/src/debug/ceph-11.2.0/src/msg/DispatchQueue.cc:71
#10 0x0000555555d492f9 in AsyncConnection::process (this=0x55555ed65000) at /usr/src/debug/ceph-11.2.0/src/msg/async/AsyncConnection.cc:769
#11 0x0000555555bb9459 in EventCenter::process_events (this=this@entry=0x55555eb7e680, timeout_microseconds=<optimized out>, timeout_microseconds@entry=30000000) at /usr/src/debug/ceph-11.2.0/src/msg/async/Event.cc:405
#12 0x0000555555bbbe5a in NetworkStack::__lambda0::operator() (__closure=0x55555eb560f0) at /usr/src/debug/ceph-11.2.0/src/msg/async/Stack.cc:46
#13 0x00007ffff5c7d230 in std::(anonymous namespace)::execute_native_thread_routine (__p=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/thread.cc:84
#14 0x00007ffff62fadc5 in start_thread (arg=0x7ffff3cba700) at pthread_create.c:308
#15 0x00007ffff53e673d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff34b9700 (LWP 759719)]
0x00007ffff53241d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56      return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb)

Running with gdb, debug_mds=20, showing backtraces of all exceptions:

2017-04-28 15:06:27.391885 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f9745 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B06.gml auth v164 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758298]
2017-04-28 15:06:27.391892 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f9745 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B06.gml auth v164 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758298]
2017-04-28 15:06:27.391897 7ffff1544700 15 mds.0 RecoveryQueue::enqueue RecoveryQueue::enqueue [inode 100067f9745 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B06.gml auth v164 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758298]
2017-04-28 15:06:27.391901 7ffff1544700 10 mds.0.cache.ino(100067f9745) auth_pin by 0x55555ed4e9b8 on [inode 100067f9745 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B06.gml auth v164 ap=2+0 recovering dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758298] now 2+0
2017-04-28 15:06:27.391906 7ffff1544700 15 mds.0.cache.dir(100067f971d) adjust_nested_auth_pins 1/1 on [dir 100067f971d /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/ [2,head] auth v=273 cv=0/0 ap=0+123+123 state=1610612738|complete f(v0 m2017-04-28 00:48:49.795657 68=68+0) n(v0 rc2017-04-28 00:48:49.795657 b1133693 68=68+0) hs=68+0,ss=0+0 dirty=68 | child=1 dirty=1 0x555563db16b8] by 0x555564758298 count now 0 + 123
2017-04-28 15:06:27.391915 7ffff44bb700  0 -- 10.250.21.11:6800/2668642040 >> - conn(0x5555655ce000 :6800 s=STATE_ACCEPTING_WAIT_BANNER_ADDR pgs=0 cs=0 l=0).fault with nothing to send and in the half  accept state just closed
2017-04-28 15:06:27.391914 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f9746 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_NODATA_B08.gml auth v216 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758880]
2017-04-28 15:06:27.391919 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f9746 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_NODATA_B08.gml auth v216 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758880]
2017-04-28 15:06:27.391924 7ffff1544700 15 mds.0 RecoveryQueue::enqueue RecoveryQueue::enqueue [inode 100067f9746 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_NODATA_B08.gml auth v216 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758880]
2017-04-28 15:06:27.391927 7ffff1544700 10 mds.0.cache.ino(100067f9746) auth_pin by 0x55555ed4e9b8 on [inode 100067f9746 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_NODATA_B08.gml auth v216 ap=2+0 recovering dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564758880] now 2+0
2017-04-28 15:06:27.391932 7ffff1544700 15 mds.0.cache.dir(100067f971d) adjust_nested_auth_pins 1/1 on [dir 100067f971d /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/ [2,head] auth v=273 cv=0/0 ap=0+124+124 state=1610612738|complete f(v0 m2017-04-28 00:48:49.795657 68=68+0) n(v0 rc2017-04-28 00:48:49.795657 b1133693 68=68+0) hs=68+0,ss=0+0 dirty=68 | child=1 dirty=1 0x555563db16b8] by 0x555564758880 count now 0 + 124
2017-04-28 15:06:27.391938 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f9748 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B10.gml auth v240 ap=1+0 dirtyparent s=7972 n(v0 b7972 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759450]
2017-04-28 15:06:27.391945 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f9748 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B10.gml auth v240 ap=1+0 dirtyparent s=7972 n(v0 b7972 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759450]
[Switching to Thread 0x7ffff34b9700 (LWP 760057)]
Catchpoint 1 (exception thrown), __cxxabiv1::__cxa_throw (obj=0x55555ebe22d0, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
62    {
#0  __cxxabiv1::__cxa_throw (obj=0x55555ebe22d0, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
#1  0x0000555555cdfa05 in ceph::buffer::list::iterator_impl<false>::copy (this=0x7ffff34b7730, len=<optimized out>, dest=0x7ffff34b7720 "") at /usr/src/debug/ceph-11.2.0/src/common/buffer.cc:1158
#2  0x0000555555ad1dbb in decode_raw<ceph_le64> (p=..., t=...) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:61
#3  decode (p=..., v=<synthetic pointer>) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:107
#4  Objecter::C_Stat::finish (this=0x55556383f560, r=0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.h:1348
#5  0x00005555557bead9 in Context::complete (this=0x55556383f560, r=<optimized out>) at /usr/src/debug/ceph-11.2.0/src/include/Context.h:70
#6  0x0000555555aa94aa in Objecter::handle_osd_op_reply (this=this@entry=0x55555ec02700, m=m@entry=0x5555639e9340) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:3411
#7  0x0000555555abc9db in Objecter::ms_dispatch (this=0x55555ec02700, m=0x5555639e9340) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:973
#8  0x0000555555d0f546 in ms_fast_dispatch (m=0x5555639e9340, this=0x55555ec02000) at /usr/src/debug/ceph-11.2.0/src/msg/Messenger.h:564
#9  DispatchQueue::fast_dispatch (this=0x55555ec02150, m=m@entry=0x5555639e9340) at /usr/src/debug/ceph-11.2.0/src/msg/DispatchQueue.cc:71
#10 0x0000555555d492f9 in AsyncConnection::process (this=0x55555edee800) at /usr/src/debug/ceph-11.2.0/src/msg/async/AsyncConnection.cc:769
#11 0x0000555555bb9459 in EventCenter::process_events (this=this@entry=0x55555eb7ea80, timeout_microseconds=<optimized out>, timeout_microseconds@entry=30000000) at /usr/src/debug/ceph-11.2.0/src/msg/async/Event.cc:405
#12 0x0000555555bbbe5a in NetworkStack::__lambda0::operator() (__closure=0x55555eb560e0) at /usr/src/debug/ceph-11.2.0/src/msg/async/Stack.cc:46
#13 0x00007ffff5c7d230 in std::(anonymous namespace)::execute_native_thread_routine (__p=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/thread.cc:84
#14 0x00007ffff62fadc5 in start_thread (arg=0x7ffff34b9700) at pthread_create.c:308
#15 0x00007ffff53e673d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

2017-04-28 15:06:27.391949 7ffff1544700 15 mds.0 RecoveryQueue::enqueue RecoveryQueue::enqueue [inode 100067f9748 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B10.gml auth v240 ap=1+0 dirtyparent s=7972 n(v0 b7972 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759450]
2017-04-28 15:06:27.391953 7ffff1544700 10 mds.0.cache.ino(100067f9748) auth_pin by 0x55555ed4e9b8 on [inode 100067f9748 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B10.gml auth v240 ap=2+0 recovering dirtyparent s=7972 n(v0 b7972 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759450] now 2+0
2017-04-28 15:06:27.391957 7ffff1544700 15 mds.0.cache.dir(100067f971d) adjust_nested_auth_pins 1/1 on [dir 100067f971d /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/ [2,head] auth v=273 cv=0/0 ap=0+125+125 state=1610612738|complete f(v0 m2017-04-28 00:48:49.795657 68=68+0) n(v0 rc2017-04-28 00:48:49.795657 b1133693 68=68+0) hs=68+0,ss=0+0 dirty=68 | child=1 dirty=1 0x555563db16b8] by 0x555564759450 count now 0 + 125
2017-04-28 15:06:27.391964 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f9749 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/GENERAL_QUALITY.xml auth v242 ap=1+0 dirtyparent s=3210 n(v0 b3210 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759a38]
2017-04-28 15:06:27.391981 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f9749 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/GENERAL_QUALITY.xml auth v242 ap=1+0 dirtyparent s=3210 n(v0 b3210 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759a38]
2017-04-28 15:06:27.391985 7ffff1544700 15 mds.0 RecoveryQueue::enqueue RecoveryQueue::enqueue [inode 100067f9749 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/GENERAL_QUALITY.xml auth v242 ap=1+0 dirtyparent s=3210 n(v0 b3210 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759a38]
2017-04-28 15:06:27.391989 7ffff1544700 10 mds.0.cache.ino(100067f9749) auth_pin by 0x55555ed4e9b8 on [inode 100067f9749 [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/GENERAL_QUALITY.xml auth v242 ap=2+0 recovering dirtyparent s=3210 n(v0 b3210 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x555564759a38] now 2+0
2017-04-28 15:06:27.391993 7ffff1544700 15 mds.0.cache.dir(100067f971d) adjust_nested_auth_pins 1/1 on [dir 100067f971d /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/ [2,head] auth v=273 cv=0/0 ap=0+126+126 state=1610612738|complete f(v0 m2017-04-28 00:48:49.795657 68=68+0) n(v0 rc2017-04-28 00:48:49.795657 b1133693 68=68+0) hs=68+0,ss=0+0 dirty=68 | child=1 dirty=1 0x555563db16b8] by 0x555564759a38 count now 0 + 126
2017-04-28 15:06:27.392001 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f974a [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B8A.gml auth v178 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a020]
2017-04-28 15:06:27.392007 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f974a [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B8A.gml auth v178 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a020]
2017-04-28 15:06:27.392011 7ffff1544700 15 mds.0 RecoveryQueue::enqueue RecoveryQueue::enqueue [inode 100067f974a [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B8A.gml auth v178 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a020]
2017-04-28 15:06:27.392015 7ffff1544700 10 mds.0.cache.ino(100067f974a) auth_pin by 0x55555ed4e9b8 on [inode 100067f974a [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B8A.gml auth v178 ap=2+0 recovering dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a020] now 2+0
2017-04-28 15:06:27.392021 7ffff1544700 15 mds.0.cache.dir(100067f971d) adjust_nested_auth_pins 1/1 on [dir 100067f971d /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/ [2,head] auth v=273 cv=0/0 ap=0+127+127 state=1610612738|complete f(v0 m2017-04-28 00:48:49.795657 68=68+0) n(v0 rc2017-04-28 00:48:49.795657 b1133693 68=68+0) hs=68+0,ss=0+0 dirty=68 | child=1 dirty=1 0x555563db16b8] by 0x55556475a020 count now 0 + 127
2017-04-28 15:06:27.392027 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f974b [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B05.gml auth v244 ap=1+0 dirtyparent s=7970 n(v0 b7970 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a608]
2017-04-28 15:06:27.392034 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f974b [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B05.gml auth v244 ap=1+0 dirtyparent s=7970 n(v0 b7970 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a608]
2017-04-28 15:06:27.392038 7ffff1544700 15 mds.0 RecoveryQueue::enqueue RecoveryQueue::enqueue [inode 100067f974b [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B05.gml auth v244 ap=1+0 dirtyparent s=7970 n(v0 b7970 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a608]
2017-04-28 15:06:27.392042 7ffff1544700 10 mds.0.cache.ino(100067f974b) auth_pin by 0x55555ed4e9b8 on [inode 100067f974b [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B05.gml auth v244 ap=2+0 recovering dirtyparent s=7970 n(v0 b7970 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475a608] now 2+0
2017-04-28 15:06:27.392046 7ffff1544700 15 mds.0.cache.dir(100067f971d) adjust_nested_auth_pins 1/1 on [dir 100067f971d /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/ [2,head] auth v=273 cv=0/0 ap=0+128+128 state=1610612738|complete f(v0 m2017-04-28 00:48:49.795657 68=68+0) n(v0 rc2017-04-28 00:48:49.795657 b1133693 68=68+0) hs=68+0,ss=0+0 dirty=68 | child=1 dirty=1 0x555563db16b8] by 0x55556475a608 count now 0 + 128
2017-04-28 15:06:27.392053 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f974c [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B11.gml auth v200 ap=1+0 dirtyparent s=7969 n(v0 b7969 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475abf0]
terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
  what():  buffer::end_of_buffer
2017-04-28 15:06:27.392059 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f974c [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B11.gml auth v200 ap=1+0 dirtyparent s=7969 n(v0 b7969 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475abf0]

2017-04-28 15:06:27.392063 7ffff1544700 15 mds.0 RecoveryQueue::enqueue RecoveryQueue::enqueue [inode 100067f974c [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B11.gml auth v200 ap=1+0 dirtyparent s=7969 n(v0 b7969 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475abf0]
2017-04-28 15:06:27.392067 7ffff1544700 10 mds.0.cache.ino(100067f974c) auth_pin by 0x55555ed4e9b8 on [inode 100067f974c [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_DETFOO_B11.gml auth v200 ap=2+0 recovering dirtyparent s=7969 n(v0 b7969 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475abf0] now 2+0
2017-04-28 15:06:27.392071 7ffff1544700 15 mds.0.cache.dir(100067f971d) adjust_nested_auth_pins 1/1 on [dir 100067f971d /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/ [2,head] auth v=273 cv=0/0 ap=0+129+129 state=1610612738|complete f(v0 m2017-04-28 00:48:49.795657 68=68+0) n(v0 rc2017-04-28 00:48:49.795657 b1133693 68=68+0) hs=68+0,ss=0+0 dirty=68 | child=1 dirty=1 0x555563db16b8] by 0x55556475abf0 count now 0 + 129
[Switching to Thread 0x7ffff44bb700 (LWP 760055)]
Catchpoint 1 (exception thrown), __cxxabiv1::__cxa_throw (obj=0x55555ebe1e50, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
62    {
#0  __cxxabiv1::__cxa_throw (obj=0x55555ebe1e50, tinfo=0x5555561312b0 <typeinfo for ceph::buffer::end_of_buffer>, dest=0x5555557f41e0 <ceph::buffer::end_of_buffer::~end_of_buffer()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:62
#1  0x0000555555cdfa05 in ceph::buffer::list::iterator_impl<false>::copy (this=0x7ffff44b9730, len=<optimized out>, dest=0x7ffff44b9720 "") at /usr/src/debug/ceph-11.2.0/src/common/buffer.cc:1158
#2  0x0000555555ad1dbb in decode_raw<ceph_le64> (p=..., t=...) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:61
#3  decode (p=..., v=<synthetic pointer>) at /usr/src/debug/ceph-11.2.0/src/include/encoding.h:107
#4  Objecter::C_Stat::finish (this=0x5555612d6770, r=0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.h:1348
#5  0x00005555557bead9 in Context::complete (this=0x5555612d6770, r=<optimized out>) at /usr/src/debug/ceph-11.2.0/src/include/Context.h:70
#6  0x0000555555aa94aa in Objecter::handle_osd_op_reply (this=this@entry=0x55555ec02700, m=m@entry=0x555561dd23c0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:3411
#7  0x0000555555abc9db in Objecter::ms_dispatch (this=0x55555ec02700, m=0x555561dd23c0) at /usr/src/debug/ceph-11.2.0/src/osdc/Objecter.cc:973
#8  0x0000555555d0f546 in ms_fast_dispatch (m=0x555561dd23c0, this=0x55555ec02000) at /usr/src/debug/ceph-11.2.0/src/msg/Messenger.h:564
#9  DispatchQueue::fast_dispatch (this=0x55555ec02150, m=m@entry=0x555561dd23c0) at /usr/src/debug/ceph-11.2.0/src/msg/DispatchQueue.cc:71
#10 0x0000555555d492f9 in AsyncConnection::process (this=0x55556545a800) at /usr/src/debug/ceph-11.2.0/src/msg/async/AsyncConnection.cc:769
#11 0x0000555555bb9459 in EventCenter::process_events (this=this@entry=0x55555eb7e080, timeout_microseconds=<optimized out>, timeout_microseconds@entry=30000000) at /usr/src/debug/ceph-11.2.0/src/msg/async/Event.cc:405
#12 0x0000555555bbbe5a in NetworkStack::__lambda0::operator() (__closure=0x55555eb560d0) at /usr/src/debug/ceph-11.2.0/src/msg/async/Stack.cc:46
#13 0x00007ffff5c7d230 in std::(anonymous namespace)::execute_native_thread_routine (__p=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/thread.cc:84
#14 0x00007ffff62fadc5 in start_thread (arg=0x7ffff44bb700) at pthread_create.c:308
#15 0x00007ffff53e673d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
2017-04-28 15:06:27.392078 7ffff1544700  7 mds.0.locker file_recover (ifile *->scan) on [inode 100067f974d [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B10.gml auth v246 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile *->scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475b1d8]
2017-04-28 15:06:27.392083 7ffff1544700 10 mds.0.cache queue_file_recover [inode 100067f974d [2,head] /archive/S2A_MSIL1C_20170410T052641_N0204_R105_T46WFA_20170410T052644.SAFE/GRANULE/L1C_T46WFA_A009397_20170410T052644/QI_DATA/MSK_SATURA_B10.gml auth v246 ap=1+0 dirtyparent s=428 n(v0 b428 1=1+0) (ifile scan) (iversion lock) cr={1638767=0-4194304@1} | dirtyparent=1 dirty=1 authpin=1 0x55556475b1d8]

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff34b9700 (LWP 760057)]
0x00007ffff53241d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56      return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);

Ceph version: 11.2.0
MDS setup: 1 active MDS, 5 standby MDSes
Clients: Linux kernel client, kernel version 4.2.0-16-generic (Ubuntu)

Please let me know if you need any additional debug info.
