Bug #52975
closed
MDSMonitor: no active MDS after cluster deployment
Added by Igor Fedotov over 2 years ago.
Updated over 2 years ago.
Category: Correctness/Safety
Description
This happens starting with v16.2.6 when CephFS volume creation and enabling allow_standby_replay mode occur before the MDS daemons are created.
E.g., the attached vstart patch produces a new cluster with all MDSes marked as standby.
This behavior isn't present in 16.2.5.
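The problematic ordering can be sketched with plain `ceph` CLI commands. This is a hedged reproduction sketch, not the attached vstart patch: it assumes a fresh v16.2.6 cluster with no MDS daemons running yet, and the filesystem name `a` and the MDS unit id are illustrative placeholders.

```shell
# Reproduction sketch (assumptions: fresh v16.2.6 cluster, no MDS daemons
# started yet; filesystem name "a" is arbitrary).

# 1. Create the CephFS volume and enable standby-replay BEFORE any MDS
#    daemon has ever registered with the monitors.
ceph fs volume create a
ceph fs set a allow_standby_replay true

# 2. Only now bring up the MDS daemons (the reproducer does this via
#    vstart.sh; on a packaged install it would be something like):
systemctl start ceph-mds@<id>

# 3. Observe the symptom: every MDS remains in standby and no rank
#    ever becomes active.
ceph fs status a
ceph mds stat
```

The order matters because the fix under review changed how the MDSMonitor uses per-MDS compat information when selecting a replacement; daemons that appear after the filesystem's flags were set are never promoted out of standby.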
- Assignee set to Venky Shankar
- Category set to Correctness/Safety
- Target version set to v17.0.0
- Component(FS) MDS added
- Backport set to pacific,octopus
Thanks for the reproducer Igor.
commit cbd9a7b354abb06cd395753f93564bdc687cdb04 ("mon,mds: use per-MDS compat to inform replacement") seems to be the change that broke this.
- Subject changed from No active MDS after cluster deployment to MDSMonitor: no active MDS after cluster deployment
- Status changed from New to In Progress
- Assignee changed from Venky Shankar to Patrick Donnelly
- Source set to Development
- Backport changed from pacific,octopus to pacific
- Component(FS) MDSMonitor added
- Component(FS) MDS deleted
- Status changed from In Progress to Fix Under Review
- Priority changed from Normal to Urgent
- Pull request ID set to 43851
- Status changed from Fix Under Review to Pending Backport
- Copied to Backport #53232: pacific: MDSMonitor: no active MDS after cluster deployment added
- Has duplicate Bug #52094: Tried out Quincy: All MDS Standby added
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".