Bug #18680
multimds: cluster can assign active mds beyond max_mds during failures
Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Development
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDSMonitor
Labels (FS):
multimds
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
The thrasher sets max_mds 3 -> 1 and then deactivates mds.a and mds.b. Without waiting for mds.a and mds.b to fully stop, the thrasher kills mds.b. Eventually mds.a reactivates and takes over mds.b's rank, leaving 2 actives even though max_mds is 1.
Logs are on teuthology here: /home/pdonnell/748363
(There is a bug in the thrasher causing an infinite loop at the end. It is unrelated to this issue.)
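The race can be sketched with a toy model. This is not actual MDSMonitor code; the class, method names, and state transitions below are simplified assumptions made only to illustrate how a "stopping" daemon can be promoted into a failed rank without the active count being re-checked against max_mds.

```python
# Toy model (hypothetical, simplified) of the rank-assignment race.
# Assumption: the monitor vacates a rank when an MDS begins stopping,
# and fills a failed rank with any available daemon, including one
# that is still stopping, without re-checking max_mds.

class FSMap:
    def __init__(self, max_mds, active_ranks):
        self.max_mds = max_mds
        self.active = dict(active_ranks)  # rank -> mds name
        self.stopping = {}                # mds name -> rank it held

    def set_max_mds(self, n):
        self.max_mds = n

    def deactivate(self, rank):
        # Begin stopping: the rank is vacated, but the daemon lingers
        # in the stopping state until its metadata is exported.
        name = self.active.pop(rank)
        self.stopping[name] = rank

    def fail(self, name):
        # The daemon dies mid-stop; its rank needs a replacement.
        return self.stopping.pop(name)

    def replace(self, rank, name):
        # Bug: no check that len(self.active) < self.max_mds before
        # promoting a replacement, so a stopping daemon reactivates.
        self.stopping.pop(name, None)
        self.active[rank] = name

# Replay the sequence from the report:
fsmap = FSMap(max_mds=3, active_ranks={0: "mds.c", 1: "mds.a", 2: "mds.b"})
fsmap.set_max_mds(1)                 # thrasher shrinks the cluster
fsmap.deactivate(1)                  # mds.a begins stopping
fsmap.deactivate(2)                  # mds.b begins stopping
failed_rank = fsmap.fail("mds.b")    # mds.b is killed before it stops
fsmap.replace(failed_rank, "mds.a")  # mds.a reactivates into mds.b's rank
# Result: 2 actives while max_mds is 1.
```

Under this model, a fix would be to have the replacement path reject candidates whenever the active count already meets max_mds.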