Bug #51280

mds: "FAILED ceph_assert(r == 0 || r == -2)"

Added by Patrick Donnelly almost 3 years ago. Updated about 2 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Xiubo Li
Category:
-
Target version:
% Done:

0%

Source:
Q/A
Tags:
Backport:
pacific,octopus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS
Labels (FS):
crash, qa-failure
Pull request ID:
42011
Crash signature (v1):
Crash signature (v2):

Description

    -3> 2021-06-16T23:37:26.998+0000 7fe145b22700 10 MDSIOContextBase::complete: 23C_IO_MDC_TruncateFinish
    -2> 2021-06-16T23:37:26.998+0000 7fe145b22700 10 MDSContext::complete: 23C_IO_MDC_TruncateFinish
    -1> 2021-06-16T23:37:26.999+0000 7fe145b22700 -1 /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5307-g06fb5cf0/rpm/el8/BUILD/ceph-17.0.0-5307-g06fb5cf0/src/mds/MDCache.cc: In function 'virtual void C_IO_MDC_TruncateFinish::finish(int)' thread 7fe145b22700 time 2021-06-16T23:37:26.999873+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5307-g06fb5cf0/rpm/el8/BUILD/ceph-17.0.0-5307-g06fb5cf0/src/mds/MDCache.cc: 6451: FAILED ceph_assert(r == 0 || r == -2)

 ceph version 17.0.0-5307-g06fb5cf0 (06fb5cf0031e099ece537a86a27543dc4010ce0c) quincy (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x7fe15458400c]
 2: /usr/lib64/ceph/libceph-common.so.2(+0x27d214) [0x7fe154584214]
 3: ceph-mds(+0x3693a1) [0x5632175693a1]
 4: (MDSContext::complete(int)+0x56) [0x5632176c79a6]
 5: (MDSIOContextBase::complete(int)+0xa3) [0x5632176c7cd3]
 6: (Finisher::finisher_thread_entry()+0x18c) [0x7fe154627a7c]
 7: (Thread::_entry_func(void*)+0xd) [0x7fe15467ad2d]
 8: /lib64/libpthread.so.0(+0x814a) [0x7fe15331f14a]
 9: clone()

From: /ceph/teuthology-archive/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/6175669/remote/smithi025/log/ceph-mds.e.log.gz

Looks like #44295 but that was during shutdown. This MDS is active.
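
For context, the -2 in the assert is -ENOENT, so the callback only tolerates success or "object already gone"; any other error value (for example -ETIMEDOUT, discussed below) trips the assert and aborts the daemon. A minimal standalone sketch of that check (not the Ceph code itself, just an illustration of the errno values involved; truncate_finish() is a hypothetical stand-in for C_IO_MDC_TruncateFinish::finish):

    // Standalone illustration of the failing check from MDCache.cc:6451.
    #include <cassert>
    #include <cerrno>
    #include <iostream>

    void truncate_finish(int r) {
        // Mirrors "ceph_assert(r == 0 || r == -2)": only success or
        // -ENOENT (-2) are tolerated.
        assert(r == 0 || r == -ENOENT);
        std::cout << "truncate finished, r=" << r << std::endl;
    }

    int main() {
        truncate_finish(0);           // ok
        truncate_finish(-ENOENT);     // ok: object already gone
        truncate_finish(-ETIMEDOUT);  // -110: any other error aborts, as in this crash
    }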


Related issues

Related to CephFS - Bug #44295: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2) Resolved
Copied to CephFS - Bug #51357: osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS crash Resolved
Copied to CephFS - Backport #51414: pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)" Resolved
Copied to CephFS - Backport #51415: octopus: mds: "FAILED ceph_assert(r == 0 || r == -2)" Resolved

History

#1 Updated by Patrick Donnelly almost 3 years ago

  • Related to Bug #44295: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2) added

#2 Updated by Xiubo Li almost 3 years ago

It seems the same "23C_IO_MDC_TruncateFinish" was called twice:

2021-06-16T23:22:45.400+0000 7fe145b22700 10 MDSIOContextBase::complete: 23C_IO_MDC_TruncateFinish
2021-06-16T23:22:45.400+0000 7fe145b22700 10 MDSContext::complete: 23C_IO_MDC_TruncateFinish
2021-06-16T23:22:45.400+0000 7fe145b22700 10 mds.0.cache truncate_inode_finish [inode 0x10000003122 [2,head] /client.0/tmp/tmp.PPAumFs3be/p7/d3/f4 auth v367 ap=1 DIRTYPARENT truncating(4194304 to 279610) s=279610 n(v0 rc2021-06-16T23:21:46.854122+0000 b279610 1=1+0) (iversion lock) caps={4598=pAsLsXsFc/-@11} | ptrwaiter=0 request=0 lock=0 caps=1 truncating=1 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x56321f27f080]
2021-06-16T23:22:45.400+0000 7fe145b22700 15 mds.0.cache.ino(0x10000003122) project_inode 0x10000003122
2021-06-16T23:22:45.400+0000 7fe145b22700 10 mds.0.cache.dir(0x1000000312b) pre_dirty 385

...

2021-06-16T23:37:26.967+0000 7fe14b32d700 20 set_next_seq: mds.metrics: current sequence number 244, setting next sequence number 245
2021-06-16T23:37:26.998+0000 7fe145b22700 10 MDSIOContextBase::complete: 23C_IO_MDC_TruncateFinish
2021-06-16T23:37:26.998+0000 7fe145b22700 10 MDSContext::complete: 23C_IO_MDC_TruncateFinish
2021-06-16T23:37:26.999+0000 7fe145b22700 -1 /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5307-g06fb5cf0/rpm/el8/BUILD/ceph-17.0.0-5307-g06fb5cf0/src/mds/MDCache.cc: In function 'virtual void C_IO_MDC_TruncateFinish::finish(int)' thread 7fe145b22700 time 2021-06-16T23:37:26.999873+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5307-g06fb5cf0/rpm/el8/BUILD/ceph-17.0.0-5307-g06fb5cf0/src/mds/MDCache.cc: 6451: FAILED ceph_assert(r == 0 || r == -2)

#3 Updated by Xiubo Li almost 3 years ago

Xiubo Li wrote:

It seems the same "23C_IO_MDC_TruncateFinish" was called twice:

[...]

Sorry, it is not.

I checked the whole log: unlike all the others, the "trimtrunc" for "10000003280.00000000:head" did not get a reply from the OSD before the finisher callback was called:

2021-06-16T23:22:51.207+0000 7fe14bb2e700  1 -- [v2:172.21.15.25:6836/1748841430,v1:172.21.15.25:6837/1748841430] --> [v2:172.21.15.100:6816/38134,v1:172.21.15.100:6818/38134] -- osd_op(unknown.0.5:3136 5.7s1 5:f4126cad:::10000003280.00000000:head [trimtrunc 2@431624] snapc 1=[] RETRY=3 ondisk+retry+write+known_if_redirected+full_force e178) v8 -- 0x563219dff400 con 0x56321fdcec00

...

2021-06-16T23:37:26.998+0000 7fe145b22700 10 MDSIOContextBase::complete: 23C_IO_MDC_TruncateFinish
2021-06-16T23:37:26.998+0000 7fe145b22700 10 MDSContext::complete: 23C_IO_MDC_TruncateFinish
2021-06-16T23:37:26.999+0000 7fe145b22700 -1 /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5307-g06fb5cf0/rpm/el8/BUILD/ceph-17.0.0-5307-g06fb5cf0/src/mds/MDCache.cc: In function 'virtual void C_IO_MDC_TruncateFinish::finish(int)' thread 7fe145b22700 time 2021-06-16T23:37:26.999873+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5307-g06fb5cf0/rpm/el8/BUILD/ceph-17.0.0-5307-g06fb5cf0/src/mds/MDCache.cc: 6451: FAILED ceph_assert(r == 0 || r == -2)

I am afraid this is a similar or the same issue as https://tracker.ceph.com/issues/44295.

#4 Updated by Xiubo Li almost 3 years ago

On the osd.7 side, the request was blocked and a backoff was added for the session:

2021-06-16T23:22:52.635+0000 7f3b98070700 10 bluefs _flush 0x55a805a6a240 ignoring, length 1431 < min_flush_size 524288
2021-06-16T23:22:52.635+0000 7f3b98070700 10 bluefs _flush 0x55a805a6a240 ignoring, length 1431 < min_flush_size 524288
2021-06-16T23:22:52.635+0000 7f3b84048700 10 osd.7 179 dequeue_op 0x55a81cdda6e0 prio 63 cost 0 latency 1.426534 osd_op(mds.0.5:3136 5.7s1 5.b536482f (undecoded) ondisk+retry+write+known_if_redirected+full_force e178) v8 pg pg[5.7s1( v 147'1441 (0'0,147'1441] local-lis/les=144/145 n=1327 ec=28/28 lis/c=144/28 les/c/f=145/29/0 sis=178) [1,7,5,0]/[NONE,7,NONE,0]p7(1) async=[1(0),5(2)] r=1 lpr=178 pi=[28,178)/1 crt=147'1441 lcod 147'1440 mlcod 0'0 undersized+degraded+remapped+peered mbc={0={(0+0)=1,(0+1)=452,(1+0)=751,(1+1)=123},1={(1+0)=1290,(1+1)=37},2={(0+1)=1313,(0+2)=14},3={(1+0)=1327}}]
2021-06-16T23:22:52.635+0000 7f3b84048700 10 osd.7 179 send_incremental_map 178 -> 179 to 0x55a813469800 v2:172.21.15.25:6836/1748841430
2021-06-16T23:22:52.635+0000 7f3b84048700  1 -- [v2:172.21.15.100:6816/38134,v1:172.21.15.100:6818/38134] --> [v2:172.21.15.25:6836/1748841430,v1:172.21.15.25:6837/1748841430] -- osd_map(179..179 src has 1..179) v4 -- 0x55a829907dc0 con 0x55a813469800
2021-06-16T23:22:52.635+0000 7f3b84048700 10 osd.7 179 maybe_share_map con 0x55a813469800 v2:172.21.15.25:6836/1748841430 map epoch 178 -> 179 (shared)
2021-06-16T23:22:52.635+0000 7f3b84048700 10 osd.7 pg_epoch: 179 pg[5.7s1( v 147'1441 (0'0,147'1441] local-lis/les=144/145 n=1327 ec=28/28 lis/c=144/28 les/c/f=145/29/0 sis=178) [1,7,5,0]/[NONE,7,NONE,0]p7(1) async=[1(0),5(2)] r=1 lpr=178 pi=[28,178)/1 crt=147'1441 lcod 147'1440 mlcod 0'0 undersized+degraded+remapped+peered mbc={0={(0+0)=1,(0+1)=452,(1+0)=751,(1+1)=123},1={(1+0)=1290,(1+1)=37},2={(0+1)=1313,(0+2)=14},3={(1+0)=1327}}] add_backoff session 0x55a81c0c0d20 added Backoff(0x55a822c8d080 5.7s1 3 new [5:e0000000::::0,MAX)  session 0x55a81c0c0d20 pg 0x55a8074ba000)
2021-06-16T23:22:52.635+0000 7f3b84048700  1 -- [v2:172.21.15.100:6816/38134,v1:172.21.15.100:6818/38134] --> [v2:172.21.15.25:6836/1748841430,v1:172.21.15.25:6837/1748841430] -- osd_backoff(5.7s1 block id 3 [5:e0000000::::0,MAX) e179) v1 -- 0x55a8218c0f00 con 0x55a813469800
2021-06-16T23:22:52.635+0000 7f3b84048700 10 osd.7 179 dequeue_op 0x55a81cdda6e0 finish

And mds.e received the backoff request and acknowledged it:

2021-06-16T23:22:52.634+0000 7fe14bb2e700  1 -- [v2:172.21.15.25:6836/1748841430,v1:172.21.15.25:6837/1748841430] <== osd.7 v2:172.21.15.100:6816/38134 61 ==== osd_backoff(5.7s1 block id 3 [5:e0000000::::0,MAX) e179) v1 ==== 115+0+0 (crc 0 0 0) 0x56321e6fa500 con 0x56321fdcec00
2021-06-16T23:22:52.634+0000 7fe14bb2e700  1 -- [v2:172.21.15.25:6836/1748841430,v1:172.21.15.25:6837/1748841430] --> [v2:172.21.15.100:6816/38134,v1:172.21.15.100:6818/38134] -- osd_backoff(5.7s1 ack-block id 3 [5:e0000000::::0,MAX) e179) v1 -- 0x56321ef4f680 con 0x56321fdcec00

Then the "trimtrunc" for "10000003280.00000000:head" request was stuck.

#5 Updated by Xiubo Li almost 3 years ago

From mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timed out after about 15 minutes, which matches the default "rados osd op timeout: 15m" set in "qa/cephfs/conf/mds.yaml". So the 'r' value should be "-ETIMEDOUT".

In this case, should we fail the operation instead of asserting in the mds daemon?
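
To make the failure mode concrete, here is a small self-contained sketch (not Objecter code; complete_with_timeout() and the shortened timeout are only stand-ins for the configured "rados osd op timeout") of how a stuck op ends up delivering -ETIMEDOUT to the finish callback, which is exactly the value the assert above does not tolerate:

    // Illustrative only: an op that never completes is handed -ETIMEDOUT
    // after the timeout, mimicking the stuck trimtrunc request.
    #include <cerrno>
    #include <chrono>
    #include <functional>
    #include <future>
    #include <iostream>

    void complete_with_timeout(std::future<int> op,
                               std::chrono::seconds timeout,
                               const std::function<void(int)> &on_finish) {
        if (op.wait_for(timeout) == std::future_status::ready)
            on_finish(op.get());
        else
            on_finish(-ETIMEDOUT);  // -110: the value 'r' is expected to carry here
    }

    int main() {
        std::promise<int> never_completed;  // stands in for the stuck OSD op
        // Timeout shortened to 1s for the demo; the MDS default is 15m.
        complete_with_timeout(never_completed.get_future(),
                              std::chrono::seconds(1),
                              [](int r) { std::cout << "r = " << r << std::endl; });
    }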

#6 Updated by Patrick Donnelly almost 3 years ago

Xiubo Li wrote:

From mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timed out after about 15 minutes, which matches the default "rados osd op timeout: 15m" set in "qa/cephfs/conf/mds.yaml". So the 'r' value should be "-ETIMEDOUT".

In this case, should we fail the operation instead of asserting in the mds daemon?

The MDS doesn't know how to retry OSD requests in general, so if we get this error the only safe thing to do is restart the MDS. I think it would help to print the error message (ETIMEDOUT) and respawn the MDS rather than trigger this assert.

That won't help fix the test failure though...

#7 Updated by Xiubo Li almost 3 years ago

Patrick Donnelly wrote:

Xiubo Li wrote:

From mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timed out after about 15 minutes, which matches the default "rados osd op timeout: 15m" set in "qa/cephfs/conf/mds.yaml". So the 'r' value should be "-ETIMEDOUT".

In this case, should we fail the operation instead of asserting in the mds daemon?

The MDS doesn't know how to retry OSD requests in general, so if we get this error the only safe thing to do is restart the MDS. I think it would help to print the error message (ETIMEDOUT) and respawn the MDS rather than trigger this assert.

Okay, will do that.

That won't help fix the test failure though...

Yeah, agreed, it should be an OSD bug. I will spend more time trying to fix it, but I am not familiar with the OSD-related code and may need much more time on it. Or should we raise a separate OSD bug with the core team?

#8 Updated by Patrick Donnelly almost 3 years ago

Xiubo Li wrote:

Patrick Donnelly wrote:

Xiubo Li wrote:

From mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timed out after about 15 minutes, which matches the default "rados osd op timeout: 15m" set in "qa/cephfs/conf/mds.yaml". So the 'r' value should be "-ETIMEDOUT".

In this case, should we fail the operation instead of asserting in the mds daemon?

The MDS doesn't know how to retry OSD requests in general, so if we get this error the only safe thing to do is restart the MDS. I think it would help to print the error message (ETIMEDOUT) and respawn the MDS rather than trigger this assert.

Okay, will do that.

That won't help fix the test failure though...

Yeah, agreed, it should be an OSD bug. I will spend more time trying to fix it, but I am not familiar with the OSD-related code and may need much more time on it. Or should we raise a separate OSD bug with the core team?

I think so, at least to hear their thoughts.

#9 Updated by Xiubo Li almost 3 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Xiubo Li
  • Pull request ID set to 42011

For now, on the MDS side I have just added one improvement, which respawns the MDS daemon instead of crashing it. For the OSD-side bug, I will raise a separate PR with the core team.
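
For reference, a minimal sketch of the idea (this is not the actual change in PR 42011; respawn_mds() is a hypothetical stand-in for the MDS respawn path): log the unexpected error and restart the daemon instead of hitting ceph_assert():

    #include <cerrno>
    #include <cstdlib>
    #include <cstring>
    #include <iostream>

    // Hypothetical stand-in for the MDS respawn path.
    [[noreturn]] void respawn_mds() {
        std::cerr << "respawning MDS instead of asserting" << std::endl;
        std::exit(1);  // the real daemon would re-exec itself
    }

    void truncate_finish(int r) {
        if (r != 0 && r != -ENOENT) {
            // e.g. r == -ETIMEDOUT when "rados osd op timeout" fires
            std::cerr << "truncate finish failed: " << strerror(-r)
                      << " (" << r << ")" << std::endl;
            respawn_mds();
        }
        // ... the normal truncate-finish path would continue here ...
    }

    int main() {
        truncate_finish(-ETIMEDOUT);  // prints the error and "respawns"
    }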

#10 Updated by Xiubo Li almost 3 years ago

  • Copied to Bug #51357: osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS crash added

#11 Updated by Patrick Donnelly over 2 years ago

  • Status changed from Fix Under Review to Pending Backport

#12 Updated by Backport Bot over 2 years ago

  • Copied to Backport #51414: pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)" added

#13 Updated by Backport Bot over 2 years ago

  • Copied to Backport #51415: octopus: mds: "FAILED ceph_assert(r == 0 || r == -2)" added

#14 Updated by Loïc Dachary about 2 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
