Bug #3845 (Closed)
mds: standby_for_rank not getting cleared on takeover
Description
This is the mdsmap after mds.a was active and given rank 0, then killed, and another mds (mds.b-s-r0) that had standby_for_rank=0 took over for mds.a. Notice that mds.b-s-r0 now has rank=0 and standby_for_rank=0. Not sure if this just needs to be cleared in the struct at the MDS or if something needs to be updated at the monitor.
dumped mdsmap epoch 11
{ "epoch": 11,
"flags": 0,
"created": "2013-01-17 09:18:38.790612",
"modified": "2013-01-17 09:19:26.593309",
"tableserver": 0,
"root": 0,
"session_timeout": 60,
"session_autoclose": 300,
"max_file_size": 1099511627776,
"last_failure": 8,
"last_failure_osd_epoch": 5,
"compat": { "compat": {},
"ro_compat": {},
"incompat": { "1": "base v0.20",
"2": "client writeable ranges",
"3": "default file layouts on dirs",
"4": "dir inode in separate object"}},
"max_mds": 1,
"in": [
0],
"up": { "0": 4098},
"failed": [],
"stopped": [],
"info": { "4098": { "gid": 4098,
"name": "b-s-r0",
"rank": 0,
"incarnation": 2,
"state": "up:active",
"state_seq": 15,
"addr": "10.214.131.29:6800\/2879",
"standby_for_rank": 0,
"standby_for_name": "",
"export_targets": []}},
"data_pools": [
0],
"metadata_pool": 1}
Updated by Greg Farnum over 11 years ago
- Project changed from CephFS to Ceph
- Category changed from 47 to 1
This is a monitor thing; the MDS is only involved in relaying the config setting over on boot-up.
Updated by Sage Weil over 11 years ago
I don't think it matters. It's a fixed lifecycle from standby -> active -> dead, so the leftover standby_* fields just tell you where the mds came from / why it was the one to take over. We could clear the fields for cosmetic reasons, but I'm not sure it matters.
Updated by Loïc Dachary over 9 years ago
- Project changed from Ceph to CephFS
- Category deleted (Monitor)
Updated by Greg Farnum almost 8 years ago
- Status changed from New to Closed
A bunch of this got rejiggered in John's multi-fs and follow-on work; it's probably gone.