Feature #20611
closed
MDSMonitor: do not show cluster health warnings for file system intentionally marked down
Added by Patrick Donnelly almost 7 years ago.
Updated over 5 years ago.
Category:
Administration/Usability
Description
Here's what you see currently:
$ ceph fs set a down true
a marked down.
$ ceph status
  cluster:
    id:     e3d43918-f643-442b-bacc-5a1c1d9a8a7a
    health: HEALTH_ERR
            1 filesystem is offline

  services:
    mon: 3 daemons, quorum a,b,c (age 100s)
    mgr: x(active, since 96s)
    mds: a-0/0/0 up , 3 up:standby
    osd: 3 osds: 3 up (since 64s), 3 in (since 64s)

  data:
    pools:   2 pools, 16 pgs
    objects: 22 objects, 2.2 KiB
    usage:   3.2 GiB used, 27 GiB / 30 GiB avail
    pgs:     16 active+clean
Taking an MDS down for hardware maintenance, etc., should trigger a health warning because such actions, even if intentional, degrade the MDS cluster.
I think we should show a warning here unless the user's clear intention was to permanently shrink the MDS cluster or remove the filesystem entirely. Specifically, I think we should show:
- HEALTH_WARN if there are fewer MDSs active than max_mds for a filesystem
- HEALTH_ERR if there are no MDSs online for a filesystem
Maybe we could add some detail to the HEALTH_WARN telling the user what to do to remove the warning (decrease max_mds or delete the filesystem).
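For illustration, here is roughly what clearing that hypothetical warning could look like with the existing fs commands (a sketch only, not a proposed UX; "a" is the file system from the description):

$ ceph fs set a max_mds 1                 # shrink the intended size of the MDS cluster
$ ceph fs rm a --yes-i-really-mean-it     # or remove the file system entirely (after it is marked down/failed)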
Douglas Fuller wrote:
Taking an MDS down for hardware maintenance, etc., should trigger a health warning because such actions, even if intentional, degrade the MDS cluster.
One ERR message saying the file system is offline should be sufficient. The message should make clear which file system(s) are offline (rather than referring to the MDS cluster).
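For example, a per-file-system detail line (illustrative wording, not verbatim output) might read:

$ ceph health detail
HEALTH_ERR 1 filesystem is offline
    fs a is offline because no MDS is active for it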
- Category changed from 90 to Administration/Usability
- Target version set to v13.0.0
- Release deleted (master)
See https://github.com/ceph/ceph/pull/16608, which implements the opposite of this behavior. Whenever a filesystem is marked down, data is inaccessible. That should be HEALTH_ERR, even if intentional.
- Status changed from New to Fix Under Review
- Status changed from Fix Under Review to New
- Assignee deleted (Douglas Fuller)
- Priority changed from High to Normal
- Target version changed from v13.0.0 to v14.0.0
- Parent task deleted (#20606)
- Labels (FS) multifs added
Doug, I was just thinking about this, and a valid reason not to want a HEALTH_ERR is if you have dozens or hundreds of Ceph file systems, one for each "tenant"/user/use-case/application/whatever, but only activate them (i.e. assign MDSs) when the corresponding application is online.
This seemed to be a direction Rook wanted to go [1] but will not proceed with because multifs is not yet stable.
I propose we retarget this to 14.0.0 and add it to multifs.
[1] https://github.com/rook/rook/issues/1027
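A sketch of that workflow, assuming "ceph fs set <fs> down" is the mechanism used to park and reactivate a tenant's file system (the name "tenant1" is made up):

$ ceph fs set tenant1 down true      # tenant's application goes offline; park its file system without health noise
$ ceph fs set tenant1 down false     # application returns; standbys take over the ranks again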
- Blocks Feature #22477: multifs: remove multifs experimental warnings added
- Description updated (diff)
- Status changed from New to In Progress
- Assignee set to Patrick Donnelly
- Priority changed from Normal to High
Suggest we silence the health warning only when the cluster is marked down (not failed).
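That is, roughly (a sketch of the distinction, using the example file system "a"):

$ ceph fs set a down true    # deliberate: operator takes the file system offline, so suppress the warning
$ ceph fs fail a             # failure path (e.g. disaster recovery): keep raising HEALTH_ERR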
- Status changed from In Progress to Fix Under Review
- Pull request ID set to 26012
- Status changed from Fix Under Review to Resolved