mds: handle duplicated uuid in multi-mds cluster
The current code can't detect the following case:
clientA opens an mds.0 session with some uuid
clientB opens an mds.1 session with the same uuid
#2 Updated by Jeff Layton about 1 month ago
I think we want to kick clientA's session out at this point and let clientB take over (i.e. last one wins).
The question is what should happen to clientA. Ideally, I think we'd want to treat it like a forcible umount and start returning ENOTCONN from most libcephfs calls.
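The "last one wins" behavior described above could be sketched roughly as follows. This is a hypothetical illustration, not Ceph's actual MDS code: the names SessionMap, Session, open_session, and do_call are invented for this sketch, and the evicted flag stands in for the real eviction/blocklisting machinery.

```cpp
#include <cassert>
#include <cerrno>
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch of "last one wins" for duplicate client uuids.
// All names here are illustrative; they are not Ceph's real API.
struct Session {
    int client_id;
    bool evicted = false;  // once kicked out, calls return -ENOTCONN
};

class SessionMap {
    // uuid -> currently active session
    std::map<std::string, std::shared_ptr<Session>> by_uuid;

public:
    // Open a session. If the uuid is already in use by another session,
    // evict the old one (forcible-umount semantics) and let the new
    // client take over.
    std::shared_ptr<Session> open_session(const std::string& uuid,
                                          int client_id) {
        auto it = by_uuid.find(uuid);
        if (it != by_uuid.end()) {
            it->second->evicted = true;  // old client starts seeing ENOTCONN
        }
        auto s = std::make_shared<Session>(Session{client_id});
        by_uuid[uuid] = s;
        return s;
    }

    // Stand-in for a libcephfs call: fails once the session is evicted.
    static int do_call(const Session& s) {
        return s.evicted ? -ENOTCONN : 0;
    }
};
```

Usage under this sketch: after clientB opens a session with clientA's uuid, clientA's handle is marked evicted and its subsequent calls fail with -ENOTCONN, while clientB's succeed.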