
Bug #46178

slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49)

Added by Sebastian Wagner over 3 years ago. Updated over 3 years ago.

Status:
Duplicate
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Saw this error yesterday for the first time:

http://pulpito.ceph.com/swagner-2020-06-23_13:15:09-rados:cephadm-wip-swagner3-testing-2020-06-23-1058-distro-basic-smithi/5172444

2020-06-23T14:14:24.479 INFO:tasks.cephadm:Deploying osd.1 on smithi140 with /dev/vg_nvme/lv_3...
...
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532712+0000 osd.1 (osd.1) 951804 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:02:28.014118+0000 currently delayed
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532721+0000 osd.1 (osd.1) 951805 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T01:02:28.112645+0000 currently delayed
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532732+0000 osd.1 (osd.1) 951806 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:17:28.017258+0000 currently delayed
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532741+0000 osd.1 (osd.1) 951807 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T01:17:28.116826+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532749+0000 osd.1 (osd.1) 951808 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:32:28.021231+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532758+0000 osd.1 (osd.1) 951809 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T01:32:28.117176+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532770+0000 osd.1 (osd.1) 951810 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:47:28.021867+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532795+0000 osd.1 (osd.1) 951811 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:02:28.024273+0000 currently delayed
2020-06-24T01:44:38.779 INFO:ceph.osd.1.smithi140.stdout:Jun 24 01:44:38 smithi140 bash[20025]: debug 2020-06-24T01:44:38.512+0000 7f660a6f2700 -1 osd.1 49 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+read+ignore_overlay+known_if_redirected e49)
2020-06-24T01:44:39.499 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.476728+0000 mgr.x (mgr.34109) 20737 : cluster [DBG] pgmap v20741: 33 pgs: 3 creating+peering, 30 active+clean; 780 B data, 3.4 MiB used, 707 GiB / 715 GiB avail
2020-06-24T01:44:39.500 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515272+0000 osd.1 (osd.1) 951812 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:17:28.028627+0000 currently delayed
2020-06-24T01:44:39.500 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515294+0000 osd.1 (osd.1) 951813 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:32:28.033173+0000 currently delayed
2020-06-24T01:44:39.500 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515312+0000 osd.1 (osd.1) 951814 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:47:28.037863+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515329+0000 osd.1 (osd.1) 951815 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T19:02:28.037117+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515344+0000 osd.1 (osd.1) 951816 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T19:17:28.041383+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515363+0000 osd.1 (osd.1) 951817 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T19:32:28.045582+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515379+0000 osd.1 (osd.1) 951818 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overla

Now it happened again:

http://pulpito.ceph.com/swagner-2020-06-24_11:29:20-rados:cephadm-wip-swagner-testing-2020-06-24-1032-distro-basic-smithi/5175427/

2020-06-24T11:56:54.575 INFO:tasks.cephadm:Deploying osd.1 on smithi118 with /dev/vg_nvme/lv_3...
...
7f44d2a96700 -1 osd.1 49 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+read+ignore_overlay+known_if_redirected e49)
0000 mgr.x (mgr.34103) 1527 : cluster [DBG] pgmap v1531: 33 pgs: 3 creating+peering, 30 active+clean; 780 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
0000 osd.1 (osd.1) 5934 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T11:58:48.006893+0000 currently delayed
0000 osd.1 (osd.1) 5935 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T12:13:48.005126+0000 currently delayed
0000 osd.1 (osd.1) 5936 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T12:28:48.005918+0000 currently delayed
0000 osd.1 (osd.1) 5937 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T12:43:48.009047+0000 currently delayed

Unfortunately, I don't know where this comes from.
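For anyone triaging similar runs, the striking detail in the excerpts above is how old these ops already are when the warning fires. A throwaway sketch (the log-line format is assumed from the excerpts above, not taken from any Ceph API) that computes the delay from a single cluster-log line:

```python
import re
from datetime import datetime

# Parse one "[WRN] slow request ..." cluster-log entry and report how long
# the op had been delayed at the moment the warning was emitted.
LINE_RE = re.compile(
    r"cluster (?P<reported>\S+) .*slow request (?P<op>osd_op\(.*?\)) "
    r"initiated (?P<initiated>\S+) currently (?P<state>\w+)"
)

def parse_ts(ts):
    # Timestamps look like 2020-06-24T01:44:37.532712+0000
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

def delay_seconds(line):
    m = LINE_RE.search(line)
    if m is None:
        return None
    return (parse_ts(m.group("reported"))
            - parse_ts(m.group("initiated"))).total_seconds()

line = ("cluster 2020-06-24T01:44:37.532712+0000 osd.1 (osd.1) 951804 : cluster "
        "[WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) "
        "ondisk+retry+read+ignore_overlay+known_if_redirected e49) "
        "initiated 2020-06-23T17:02:28.014118+0000 currently delayed")
print(delay_seconds(line) / 3600)  # ≈ 8.7 hours for the first entry above
```

The same read op (client.34367.0:13) has been retried every 15 minutes since the previous afternoon, which points at the op being stuck behind PG state rather than at a slow disk.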


Related issues

Duplicates RADOS - Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean. Resolved

History

#1 Updated by Sebastian Wagner over 3 years ago

http://pulpito.ceph.com/swagner-2020-06-24_11:30:44-rados:cephadm-wip-swagner3-testing-2020-06-24-1025-distro-basic-smithi/5175480/

2020-06-24T12:17:15.031 INFO:tasks.cephadm:Deploying osd.4 on smithi086 with /dev/vg_nvme/lv_4...
...
2020-06-24T12:39:57.962 INFO:ceph.mon.a.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[25319]: cluster 2020-06-24T12:39:57.395830+0000 osd.4 (osd.4) 2973 : cluster [WRN] slow request osd_op(client.25525.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:28:56.101558+0000 currently delayed
2020-06-24T12:39:57.962 INFO:ceph.mon.a.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[25319]: cluster 2020-06-24T12:39:57.395850+0000 osd.4 (osd.4) 2974 : cluster [WRN] slow request osd_op(client.26062.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:34:08.034626+0000 currently delayed
2020-06-24T12:39:57.962 INFO:ceph.mon.a.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[25319]: cluster 2020-06-24T12:39:57.395860+0000 osd.4 (osd.4) 2975 : cluster [WRN] slow request osd_op(client.26623.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:39:20.069032+0000 currently delayed
2020-06-24T12:39:57.962 INFO:ceph.mon.a.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[25319]: cluster 2020-06-24T12:39:57.395871+0000 osd.4 (osd.4) 2976 : cluster [WRN] slow request osd_op(client.24382.0:64 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e53) initiated 2020-06-24T12:18:35.915363+0000 currently delayed
2020-06-24T12:39:57.962 INFO:ceph.mon.a.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[25319]: cluster 2020-06-24T12:39:57.395881+0000 osd.4 (osd.4) 2977 : cluster [WRN] slow request osd_op(client.24970.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:23:43.738782+0000 currently delayed
2020-06-24T12:39:57.963 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[29527]: audit 2020-06-24T12:39:56.865059+0000 mgr.x (mgr.34109) 1017 : audit [DBG] from='client.26683 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2020-06-24T12:39:57.963 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[29527]: cluster 2020-06-24T12:39:57.395830+0000 osd.4 (osd.4) 2973 : cluster [WRN] slow request osd_op(client.25525.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:28:56.101558+0000 currently delayed
2020-06-24T12:39:57.963 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[29527]: cluster 2020-06-24T12:39:57.395850+0000 osd.4 (osd.4) 2974 : cluster [WRN] slow request osd_op(client.26062.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:34:08.034626+0000 currently delayed
2020-06-24T12:39:57.963 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[29527]: cluster 2020-06-24T12:39:57.395860+0000 osd.4 (osd.4) 2975 : cluster [WRN] slow request osd_op(client.26623.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:39:20.069032+0000 currently delayed
2020-06-24T12:39:57.963 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[29527]: cluster 2020-06-24T12:39:57.395871+0000 osd.4 (osd.4) 2976 : cluster [WRN] slow request osd_op(client.24382.0:64 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e53) initiated 2020-06-24T12:18:35.915363+0000 currently delayed
2020-06-24T12:39:57.964 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:57 smithi073 bash[29527]: cluster 2020-06-24T12:39:57.395881+0000 osd.4 (osd.4) 2977 : cluster [WRN] slow request osd_op(client.24970.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:23:43.738782+0000 currently delayed
2020-06-24T12:39:58.188 INFO:ceph.mon.b.smithi086.stdout:Jun 24 12:39:57 smithi086 bash[25326]: audit 2020-06-24T12:39:56.865059+0000 mgr.x (mgr.34109) 1017 : audit [DBG] from='client.26683 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2020-06-24T12:39:58.189 INFO:ceph.mon.b.smithi086.stdout:Jun 24 12:39:57 smithi086 bash[25326]: cluster 2020-06-24T12:39:57.395830+0000 osd.4 (osd.4) 2973 : cluster [WRN] slow request osd_op(client.25525.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:28:56.101558+0000 currently delayed
2020-06-24T12:39:58.189 INFO:ceph.mon.b.smithi086.stdout:Jun 24 12:39:57 smithi086 bash[25326]: cluster 2020-06-24T12:39:57.395850+0000 osd.4 (osd.4) 2974 : cluster [WRN] slow request osd_op(client.26062.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:34:08.034626+0000 currently delayed
2020-06-24T12:39:58.189 INFO:ceph.mon.b.smithi086.stdout:Jun 24 12:39:57 smithi086 bash[25326]: cluster 2020-06-24T12:39:57.395860+0000 osd.4 (osd.4) 2975 : cluster [WRN] slow request osd_op(client.26623.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:39:20.069032+0000 currently delayed
2020-06-24T12:39:58.190 INFO:ceph.mon.b.smithi086.stdout:Jun 24 12:39:57 smithi086 bash[25326]: cluster 2020-06-24T12:39:57.395871+0000 osd.4 (osd.4) 2976 : cluster [WRN] slow request osd_op(client.24382.0:64 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e53) initiated 2020-06-24T12:18:35.915363+0000 currently delayed
2020-06-24T12:39:58.190 INFO:ceph.mon.b.smithi086.stdout:Jun 24 12:39:57 smithi086 bash[25326]: cluster 2020-06-24T12:39:57.395881+0000 osd.4 (osd.4) 2977 : cluster [WRN] slow request osd_op(client.24970.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:23:43.738782+0000 currently delayed
2020-06-24T12:39:58.688 INFO:ceph.osd.4.smithi086.stdout:Jun 24 12:39:58 smithi086 bash[28147]: debug 2020-06-24T12:39:58.351+0000 7f56a45d9700 -1 osd.4 54 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.24382.0:64 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e53)
2020-06-24T12:39:58.965 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:58 smithi073 bash[29527]: cluster 2020-06-24T12:39:57.710162+0000 mgr.x (mgr.34109) 1018 : cluster [DBG] pgmap v751: 97 pgs: 3 creating+peering, 94 active+clean; 1.9 KiB data, 4.0 MiB used, 707 GiB / 715 GiB avail
2020-06-24T12:39:58.966 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:58 smithi073 bash[29527]: cluster 2020-06-24T12:39:58.351337+0000 osd.4 (osd.4) 2978 : cluster [WRN] slow request osd_op(client.25525.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:28:56.101558+0000 currently delayed
2020-06-24T12:39:58.966 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:58 smithi073 bash[29527]: cluster 2020-06-24T12:39:58.351343+0000 osd.4 (osd.4) 2979 : cluster [WRN] slow request osd_op(client.26062.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:34:08.034626+0000 currently delayed
2020-06-24T12:39:58.966 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:58 smithi073 bash[29527]: cluster 2020-06-24T12:39:58.351347+0000 osd.4 (osd.4) 2980 : cluster [WRN] slow request osd_op(client.26623.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:39:20.069032+0000 currently delayed
2020-06-24T12:39:58.966 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:58 smithi073 bash[29527]: cluster 2020-06-24T12:39:58.351351+0000 osd.4 (osd.4) 2981 : cluster [WRN] slow request osd_op(client.24382.0:64 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e53) initiated 2020-06-24T12:18:35.915363+0000 currently delayed
2020-06-24T12:39:58.966 INFO:ceph.mon.c.smithi073.stdout:Jun 24 12:39:58 smithi073 bash[29527]: cluster 2020-06-24T12:39:58.351355+0000 osd.4 (osd.4) 2982 : cluster [WRN] slow request osd_op(client.24970.0:62 4.d 4.a204812d (undecoded) ondisk+write+known_if_redirected e54) initiated 2020-06-24T12:23:43.738782+0000 currently delayed
2020-06-24T12:39:58.967 INFO:ceph.mon.a.smithi073.stdout:Jun 24 12:39:58 smithi073 bash[25319]: cluster 2020-06-24T12:39:57.710162+0000 mgr.x (mgr.34109) 1018 : cluster [DBG] pgmap v751: 97 pgs: 3 creating+peering, 94 active+clean; 1.9 KiB data, 4.0 MiB used, 707 GiB / 715 GiB avail

#2 Updated by Sebastian Wagner over 3 years ago

  • Priority changed from Normal to High

#5 Updated by Neha Ojha over 3 years ago

  • Priority changed from High to Urgent

#6 Updated by Neha Ojha over 3 years ago

/a/yuvalif-2020-06-23_14:40:15-rgw-wip-yuval-test-35331-35155-distro-basic-smithi/5173465

Seems very likely to have the same root cause as https://tracker.ceph.com/issues/46180

2020-06-24T06:38:10.611 INFO:tasks.ceph.ceph_manager.ceph:PG 2.2 is not active+clean
2020-06-24T06:38:10.618 INFO:tasks.ceph.ceph_manager.ceph:{'pgid': '2.2', 'version': "0'0", 'reported_seq': '3', 'reported_epoch': '19', 'state': 'creating+peering', 'last_fresh': '2020-06-24T06:18:01.513416+0000', 'last_change': '2020-06-24T06:17:59.508158+0000', 'last_active': '2020-06-24T06:17:59.221347+0000', 'last_peered': '2020-06-24T06:17:59.221347+0000', 'last_clean': '2020-06-24T06:17:59.221347+0000', 'last_became_active': '0.000000', 'last_became_peered': '0.000000', 'last_unstale': '2020-06-24T06:18:01.513416+0000', 'last_undegraded': '2020-06-24T06:18:01.513416+0000', 'last_fullsized': '2020-06-24T06:18:01.513416+0000', 'mapping_epoch': 17, 'log_start': "0'0", 'ondisk_log_start': "0'0", 'created': 17, 'last_epoch_clean': 0, 'parent': '0.0', 'parent_split_bits': 0, 'last_scrub': "0'0", 'last_scrub_stamp': '2020-06-24T06:17:59.221347+0000', 'last_deep_scrub': "0'0", 'last_deep_scrub_stamp': '2020-06-24T06:17:59.221347+0000', 'last_clean_scrub_stamp': '2020-06-24T06:17:59.221347+0000', 'log_size': 0, 'ondisk_log_size': 0, 'stats_invalid': False, 'dirty_stats_invalid': False, 'omap_stats_invalid': False, 'hitset_stats_invalid': False, 'hitset_bytes_stats_invalid': False, 'pin_stats_invalid': False, 'manifest_stats_invalid': False, 'snaptrimq_len': 0, 'stat_sum': {'num_bytes': 0, 'num_objects': 0, 'num_object_clones': 0, 'num_object_copies': 0, 'num_objects_missing_on_primary': 0, 'num_objects_missing': 0, 'num_objects_degraded': 0, 'num_objects_misplaced': 0, 'num_objects_unfound': 0, 'num_objects_dirty': 0, 'num_whiteouts': 0, 'num_read': 0, 'num_read_kb': 0, 'num_write': 0, 'num_write_kb': 0, 'num_scrub_errors': 0, 'num_shallow_scrub_errors': 0, 'num_deep_scrub_errors': 0, 'num_objects_recovered': 0, 'num_bytes_recovered': 0, 'num_keys_recovered': 0, 'num_objects_omap': 0, 'num_objects_hit_set_archive': 0, 'num_bytes_hit_set_archive': 0, 'num_flush': 0, 'num_flush_kb': 0, 'num_evict': 0, 'num_evict_kb': 0, 'num_promote': 0, 'num_flush_mode_high': 0, 
'num_flush_mode_low': 0, 'num_evict_mode_some': 0, 'num_evict_mode_full': 0, 'num_objects_pinned': 0, 'num_legacy_snapsets': 0, 'num_large_omap_objects': 0, 'num_objects_manifest': 0, 'num_omap_bytes': 0, 'num_omap_keys': 0, 'num_objects_repaired': 0}, 'up': [3, 6], 'acting': [3, 6], 'avail_no_missing': [], 'object_location_counts': [], 'blocked_by': [6], 'up_primary': 3, 'acting_primary': 3, 'purged_snaps': []}
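The `blocked_by` field in that dump is the interesting bit: PG 2.2 is stuck in creating+peering waiting on osd.6. A minimal sketch (key names taken from the dump above; the input shape is a hypothetical pared-down list of pg-dump dicts) that flags stuck PGs and what they report being blocked by:

```python
# Given pg-dump-style dicts (as in the excerpt above), list PGs that are
# not active+clean together with their reported blockers.
def stuck_pgs(pg_stats):
    stuck = []
    for pg in pg_stats:
        if pg["state"] != "active+clean":
            stuck.append((pg["pgid"], pg["state"], pg.get("blocked_by", [])))
    return stuck

pgs = [
    {"pgid": "2.2", "state": "creating+peering", "blocked_by": [6],
     "up": [3, 6], "acting": [3, 6]},
    {"pgid": "2.3", "state": "active+clean", "blocked_by": []},
]
print(stuck_pgs(pgs))  # [('2.2', 'creating+peering', [6])]
```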

2020-06-24T06:51:53.330 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-yuval-test-35331-35155/qa/tasks/ceph.py", line 1830, in task
    healthy(ctx=ctx, config=dict(cluster=config['cluster']))
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-yuval-test-35331-35155/qa/tasks/ceph.py", line 1419, in healthy
    manager.wait_for_clean()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-yuval-test-35331-35155/qa/tasks/ceph_manager.py", line 2518, in wait_for_clean
    'wait_for_clean: failed before timeout expired'
AssertionError: wait_for_clean: failed before timeout expired

#7 Updated by Sebastian Wagner over 3 years ago

  • Related to Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean. added

#10 Updated by Ilya Dryomov over 3 years ago

  • Related to deleted (Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean.)

#11 Updated by Ilya Dryomov over 3 years ago

  • Duplicates Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean. added

#12 Updated by Ilya Dryomov over 3 years ago

  • Status changed from New to Duplicate
