Bug #53316

Updated by Kotresh Hiremath Ravishankar over 2 years ago

The warning is seen in the following teuthology run:

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-17_19:02:43-fs-wip-yuri10-testing-2021-11-17-0856-pacific-distro-basic-smithi/6510429/

 Failure: 

 ------------ 

 description: fs/mirror/{begin cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} 
   mount/fuse objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distros$/{centos_8} 
   tasks/mirror} 
 duration: 6992.47478890419 
 failure_reason: '"2021-11-18T07:55:10.982539+0000 osd.1 (osd.1) 3 : cluster [WRN] 
   slow request osd_op(client.10227.0:889 64.3f 64:ffccef2f:::100000001f8.00000076:head 
   [write 0~4194304 in=4194304b] snapc 1=[] ondisk+write+known_if_redirected e385) 
   initiated 2021-11-18T07:54:30.841218+0000 currently waiting for sub ops" in cluster 
   log' 
 flavor: default 
 owner: scheduled_yuriw@teuthology 
 success: false 

 ------------ 


 ------------------- 

 2021-11-18T08:48:30.704 DEBUG:teuthology.orchestra.run.smithi150:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'overall HEALTH_' | egrep -v '\(FS_DEGRADED\)' | egrep -v '\(MDS_FAILED\)' | egrep -v '\(MDS_DEGRADED\)' | egrep -v '\(FS_WITH_FAILED_MDS\)' | egrep -v '\(MDS_DAMAGE\)' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(FS_INLINE_DATA_DEPRECATED\)' | egrep -v 'Reduced data availability' | egrep -v 'Degraded data redundancy' | head -n 1 
 2021-11-18T08:48:30.860 INFO:teuthology.orchestra.run.smithi150.stdout:2021-11-18T07:55:10.982539+0000 osd.1 (osd.1) 3 : cluster [WRN] slow request osd_op(client.10227.0:889 64.3f 64:ffccef2f:::100000001f8.00000076:head [write 0~4194304 in=4194304b] snapc 1=[] ondisk+write+known_if_redirected e385) initiated 2021-11-18T07:54:30.841218+0000 currently waiting for sub ops 
 2021-11-18T08:48:30.860 WARNING:tasks.ceph:Found errors (ERR|WRN|SEC) in cluster log 
 2021-11-18T08:48:30.861 DEBUG:teuthology.orchestra.run.smithi150:> sudo egrep '\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'overall HEALTH_' | egrep -v '\(FS_DEGRADED\)' | egrep -v '\(MDS_FAILED\)' | egrep -v '\(MDS_DEGRADED\)' | egrep -v '\(FS_WITH_FAILED_MDS\)' | egrep -v '\(MDS_DAMAGE\)' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(FS_INLINE_DATA_DEPRECATED\)' | egrep -v 'Reduced data availability' | egrep -v 'Degraded data redundancy' | head -n 1 
 2021-11-18T08:48:30.888 DEBUG:teuthology.orchestra.run.smithi150:> sudo egrep '\[ERR\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'overall HEALTH_' | egrep -v '\(FS_DEGRADED\)' | egrep -v '\(MDS_FAILED\)' | egrep -v '\(MDS_DEGRADED\)' | egrep -v '\(FS_WITH_FAILED_MDS\)' | egrep -v '\(MDS_DAMAGE\)' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(FS_INLINE_DATA_DEPRECATED\)' | egrep -v 'Reduced data availability' | egrep -v 'Degraded data redundancy' | head -n 1 
 2021-11-18T08:48:30.958 DEBUG:teuthology.orchestra.run.smithi150:> sudo egrep '\[WRN\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'overall HEALTH_' | egrep -v '\(FS_DEGRADED\)' | egrep -v '\(MDS_FAILED\)' | egrep -v '\(MDS_DEGRADED\)' | egrep -v '\(FS_WITH_FAILED_MDS\)' | egrep -v '\(MDS_DAMAGE\)' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(FS_INLINE_DATA_DEPRECATED\)' | egrep -v 'Reduced data availability' | egrep -v 'Degraded data redundancy' | head -n 1 
 2021-11-18T08:48:31.026 INFO:teuthology.orchestra.run.smithi150.stdout:2021-11-18T07:55:10.982539+0000 osd.1 (osd.1) 3 : cluster [WRN] slow request osd_op(client.10227.0:889 64.3f 64:ffccef2f:::100000001f8.00000076:head [write 0~4194304 in=4194304b] snapc 1=[] ondisk+write+known_if_redirected e385) initiated 2021-11-18T07:54:30.841218+0000 currently waiting for sub ops 

 -----------------------------
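The MDS/FS patterns excluded via egrep -v in the scrape above come from the test's cluster-log ignorelist (the overrides/{whitelist_health} fragment in the description); anything not covered by that list, such as this OSD "slow request" warning, fails the run. Below is a minimal sketch of a yaml override that would cover it, assuming the ceph task's log-ignorelist key; the placement and pattern are illustrative only, not a proposed fix:

 overrides:
   ceph:
     log-ignorelist:
       # hypothetical entry: tolerate transient OSD slow request warnings
       - slow request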
