Bug #36396

mds: handle duplicated uuid in multi-mds cluster

Added by Zheng Yan about 1 month ago. Updated about 1 month ago.

Status: New
Priority: Low
Assignee: -
Category: -
Target version: -
Start date: 10/11/2018
Due date:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:

Description

The current code can't detect the following case:

clientA opens a session with mds.0 using a uuid
clientB opens a session with mds.1 using the same uuid
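
A minimal standalone sketch of why this slips through, assuming each MDS rank only consults its own local uuid-to-session map when a client opens a session (MDSRank, open_session and the other names here are illustrative, not the actual Ceph classes):

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Illustrative model only: each rank tracks sessions by uuid locally.
struct MDSRank {
    int rank;
    std::map<std::string, std::string> sessions_by_uuid;  // uuid -> client

    // Returns false only if *this* rank already holds a session with the uuid.
    bool open_session(const std::string& uuid, const std::string& client) {
        return sessions_by_uuid.emplace(uuid, client).second;
    }
};

int main() {
    std::vector<MDSRank> ranks = {{0, {}}, {1, {}}};

    // clientA opens a session on mds.0; clientB reuses the same uuid on mds.1.
    bool a_ok = ranks[0].open_session("uuid-1234", "clientA");
    bool b_ok = ranks[1].open_session("uuid-1234", "clientB");

    // Both calls succeed because the duplicate check is per rank, so the
    // duplicated uuid goes undetected unless it is tracked cluster-wide.
    std::cout << "clientA accepted on mds.0: " << a_ok << "\n"
              << "clientB accepted on mds.1: " << b_ok << "\n";
}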


Related issues

Related to fs - Feature #26974: mds: provide mechanism to allow new instance of an application to cancel old MDS session (Resolved, 08/21/2018)

History

#1 Updated by Zheng Yan about 1 month ago

  • Related to Feature #26974: mds: provide mechanism to allow new instance of an application to cancel old MDS session added

#2 Updated by Jeff Layton about 1 month ago

I think we want to kick clientA's session out at this point and let clientB take over (i.e. last one wins).

The question is what should happen to clientA. Ideally, I think we'd want to treat it as something like a forcible umount and start returning ENOTCONN from most libcephfs calls.
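
A rough client-side sketch of that idea, assuming a hypothetical revoked-session flag checked at the top of libcephfs-style entry points (ClientMount, open_file and the rest are invented for illustration, not the real libcephfs API):

#include <atomic>
#include <cerrno>
#include <iostream>

struct ClientMount {
    std::atomic<bool> session_revoked{false};

    // Called when the MDS revokes our session because a newer instance of the
    // application opened a session with the same uuid ("last one wins").
    void handle_session_revoked() { session_revoked = true; }

    // Most entry points would check the flag first; a revoked session then
    // behaves like a forcible umount and fails with -ENOTCONN.
    int open_file(const char* /*path*/, int /*flags*/) {
        if (session_revoked)
            return -ENOTCONN;
        // ... normal MDS request would happen here ...
        return 3;  // pretend file descriptor for the sketch
    }
};

int main() {
    ClientMount clientA;
    std::cout << clientA.open_file("/a", 0) << "\n";   // works while the session is valid
    clientA.handle_session_revoked();                  // clientB took over the uuid
    std::cout << clientA.open_file("/a", 0) << "\n";   // now returns -ENOTCONN
}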

#3 Updated by Patrick Donnelly about 1 month ago

A simple solution is to evict both sessions when we detect this and log an error.
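
A small sketch of that approach, assuming a hypothetical cluster-wide uuid registry (UuidRegistry, SessionRef and the callback are invented names, not actual MDS code): on a duplicate, log an error and evict both the existing and the new session.

#include <functional>
#include <iostream>
#include <map>
#include <string>

struct SessionRef { int mds_rank; std::string client; };

struct UuidRegistry {
    std::map<std::string, SessionRef> by_uuid;

    // Register a session's uuid; if the uuid is already present, log an error
    // and evict both the existing session and the new one.
    bool add(const std::string& uuid, const SessionRef& s,
             const std::function<void(const SessionRef&)>& evict) {
        auto it = by_uuid.find(uuid);
        if (it != by_uuid.end()) {
            std::cerr << "error: uuid " << uuid << " duplicated on mds."
                      << it->second.mds_rank << " (" << it->second.client
                      << ") and mds." << s.mds_rank << " (" << s.client << ")\n";
            evict(it->second);
            evict(s);
            by_uuid.erase(it);
            return false;
        }
        by_uuid.emplace(uuid, s);
        return true;
    }
};

int main() {
    UuidRegistry reg;
    auto evict = [](const SessionRef& s) {
        std::cout << "evicting " << s.client << " on mds." << s.mds_rank << "\n";
    };
    reg.add("uuid-1234", {0, "clientA"}, evict);  // accepted
    reg.add("uuid-1234", {1, "clientB"}, evict);  // duplicate: both evicted
}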
