Bug #43397

FS_DEGRADED to cluster log despite --no-mon-health-to-clog

Added by Sage Weil over 4 years ago. Updated about 4 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2019-12-19T20:32:38.485 INFO:teuthology.orchestra.run.smithi160:> sudo ceph --cluster ceph --mon-client-directed-command-retry 5 tell 'mon.*' injectargs -- --no-mon-health-to-clog
2019-12-19T20:32:38.605 INFO:teuthology.orchestra.run.smithi160.stdout:{}
2019-12-19T20:32:38.605 INFO:teuthology.orchestra.run.smithi160.stderr:mon_health_to_clog = 'false'
2019-12-19T20:32:38.614 INFO:teuthology.misc:Shutting down mds daemons...
2019-12-19T20:32:38.615 DEBUG:tasks.ceph.mds.a:waiting for process to exit
2019-12-19T20:32:38.615 INFO:teuthology.orchestra.run:waiting for 300
2019-12-19T20:32:38.629 INFO:tasks.ceph.mds.a:Stopped
2019-12-19T20:32:38.630 INFO:teuthology.misc:Shutting down osd daemons...
2019-12-19T20:32:38.630 DEBUG:tasks.ceph.osd.1:waiting for process to exit
2019-12-19T20:32:38.630 INFO:teuthology.orchestra.run:waiting for 300
2019-12-19T20:32:38.694 INFO:tasks.ceph.osd.1:Stopped
2019-12-19T20:32:38.695 DEBUG:tasks.ceph.osd.0:waiting for process to exit
2019-12-19T20:32:38.695 INFO:teuthology.orchestra.run:waiting for 300
2019-12-19T20:32:38.759 INFO:tasks.ceph.osd.0:Stopped
2019-12-19T20:32:38.760 INFO:teuthology.misc:Shutting down mgr daemons...
2019-12-19T20:32:38.760 DEBUG:tasks.ceph.mgr.x:waiting for process to exit
2019-12-19T20:32:38.760 INFO:teuthology.orchestra.run:waiting for 300
2019-12-19T20:32:38.822 INFO:tasks.ceph.mgr.x:Stopped
2019-12-19T20:32:38.823 INFO:teuthology.misc:Shutting down mon daemons...
2019-12-19T20:32:38.823 DEBUG:tasks.ceph.mon.a:waiting for process to exit
2019-12-19T20:32:38.823 INFO:teuthology.orchestra.run:waiting for 300
2019-12-19T20:32:38.873 INFO:tasks.ceph.mon.a:Stopped
2019-12-19T20:32:38.873 INFO:tasks.ceph:Checking cluster log for badness...
2019-12-19T20:32:38.873 INFO:teuthology.orchestra.run.smithi160:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'MDS in read-only mode' | egrep -v 'force file system read-only' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_FULL\)' | egrep -v '\(MDS_READ_ONLY\)' | egrep -v '\(POOL_FULL\)' | head -n 1
2019-12-19T20:32:38.942 INFO:teuthology.orchestra.run.smithi160.stdout:2019-12-19T20:27:25.239720+0000 mon.a (mon.0) 179 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)
2019-12-19T20:32:38.942 WARNING:tasks.ceph:Found errors (ERR|WRN|SEC) in cluster log
2019-12-19T20:32:38.943 INFO:teuthology.orchestra.run.smithi160:> sudo egrep '\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'MDS in read-only mode' | egrep -v 'force file system read-only' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_FULL\)' | egrep -v '\(MDS_READ_ONLY\)' | egrep -v '\(POOL_FULL\)' | head -n 1
2019-12-19T20:32:39.012 INFO:teuthology.orchestra.run.smithi160:> sudo egrep '\[ERR\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'MDS in read-only mode' | egrep -v 'force file system read-only' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_FULL\)' | egrep -v '\(MDS_READ_ONLY\)' | egrep -v '\(POOL_FULL\)' | head -n 1
2019-12-19T20:32:39.082 INFO:teuthology.orchestra.run.smithi160:> sudo egrep '\[WRN\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'MDS in read-only mode' | egrep -v 'force file system read-only' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_FULL\)' | egrep -v '\(MDS_READ_ONLY\)' | egrep -v '\(POOL_FULL\)' | head -n 1
2019-12-19T20:32:39.155 INFO:teuthology.orchestra.run.smithi160.stdout:2019-12-19T20:27:25.239720+0000 mon.a (mon.0) 179 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)
2019-12-19T20:32:39.155 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-0 on ubuntu@smithi160.front.sepia.ceph.com
2019-12-19T20:32:39.155 INFO:teuthology.orchestra.run.smithi160:> sync && sudo umount -f /var/lib/ceph/osd/ceph-0
2019-12-19T20:32:39.372 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-1 on ubuntu@smithi160.front.sepia.ceph.com
2019-12-19T20:32:39.372 INFO:teuthology.orchestra.run.smithi160:> sync && sudo umount -f /var/lib/ceph/osd/ceph-1

/a/sage-2019-12-19_19:10:50-rados-master-distro-basic-smithi/4615437
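The "Checking cluster log for badness" step above is just a grep pipeline over /var/log/ceph/ceph.log with a whitelist of tolerated health checks. A minimal standalone sketch of that check (whitelist copied verbatim from the run above; the function name check_cluster_log is hypothetical, not a real teuthology helper). Note that FS_DEGRADED is not in the whitelist, which is why this run failed:

```shell
# Sketch of teuthology's cluster-log "badness" check, using the
# same whitelist as in the log above. Prints the first offending
# line, or nothing if the log is clean.
check_cluster_log() {
    egrep '\[ERR\]|\[WRN\]|\[SEC\]' "$1" \
      | egrep -v '\(MDS_ALL_DOWN\)' \
      | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' \
      | egrep -v 'MDS in read-only mode' \
      | egrep -v 'force file system read-only' \
      | egrep -v 'overall HEALTH_' \
      | egrep -v '\(OSDMAP_FLAGS\)' \
      | egrep -v '\(OSD_FULL\)' \
      | egrep -v '\(MDS_READ_ONLY\)' \
      | egrep -v '\(POOL_FULL\)' \
      | head -n 1
}
```

Running this against a log containing the FS_DEGRADED warning reproduces the failure, since a [WRN] line survives the whitelist filters.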

History

#1 Updated by Neha Ojha about 4 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 32549

#2 Updated by Sage Weil about 4 years ago

  • Status changed from Fix Under Review to Resolved
