Bug #41935

closed

ceph mdss keep on crashing

Added by Kenneth Waegeman over 4 years ago. Updated over 4 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS
Labels (FS):
crash, multimds
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I updated Ceph to 14.2.3 yesterday. Everything was running fine, but today the MDS daemons started crashing. I tried restarting all of them, but the issue remains.

[root@mds03 ~]# ceph -s
  cluster:
    id:     92bfcf0a-1d39-43b3-b60f-44f01b630e47
    health: HEALTH_WARN
            insufficient standby MDS daemons available

  services:
    mon: 3 daemons, quorum mds01,mds02,mds03 (age 22h)
    mgr: mds02(active, since 25h), standbys: mds01, mds03
    mds: ceph_fs:2 {0=mds02=up:active,1=mds03=up:active(laggy or crashed)}
    osd: 535 osds: 535 up, 535 in

  data:
    pools:   3 pools, 3328 pgs
    objects: 370.95M objects, 666 TiB
    usage:   1.0 PiB used, 2.2 PiB / 3.2 PiB avail
    pgs:     3317 active+clean
             9    active+clean+scrubbing+deep
             2    active+clean+scrubbing
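The `mds:` line in the status output is what flags the failing daemon: rank 1 (`mds03`) is marked `(laggy or crashed)`. As a minimal sketch, assuming the plain-text `ceph -s` format shown above, the following Python picks out which MDS daemons carry that flag (the function name and regex are illustrative, not part of Ceph):

```python
import re

def laggy_mds_daemons(status_text):
    """Return names of MDS daemons marked '(laggy or crashed)'
    in the plain-text mds line of `ceph -s` output."""
    # Each rank entry looks like: 1=mds03=up:active(laggy or crashed)
    matches = re.findall(
        r"(\d+)=([\w.-]+)=up:(\w+)\(laggy or crashed\)", status_text)
    return [name for _rank, name, _state in matches]

mds_line = "mds: ceph_fs:2 {0=mds02=up:active,1=mds03=up:active(laggy or crashed)}"
print(laggy_mds_daemons(mds_line))  # ['mds03']
```

The same information is available in structured form from `ceph status --format json`, which avoids parsing the human-readable layout, though the exact JSON schema varies between releases.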

These are the stack traces I see in the mds logs:


Related issues (1): 0 open, 1 closed

Is duplicate of CephFS - Bug #41948: nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode does not cleanup unneeded client_snap_caps) (Resolved, assignee: Zheng Yan)
