Bug #59049


WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event

Added by Laura Flores about 1 year ago. Updated 12 months ago.

Status:
Pending Backport
Priority:
Urgent
Assignee:
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
backport_processed
Backport:
reef
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/yuriw-2023-03-10_22:46:37-rados-reef-distro-default-smithi/7203358

2023-03-11T10:28:47.074 DEBUG:teuthology.orchestra.run.smithi149:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(PG_' | egrep -v '\(SLOW_OPS\)' | egrep -v 'overall HEALTH' | egrep -v 'slow request' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1
2023-03-11T10:28:47.263 INFO:teuthology.orchestra.run.smithi149.stdout:1678530338.9736485 osd.3 (osd.3) 104 : cluster [WRN] WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
2023-03-11T10:28:47.263 WARNING:tasks.ceph:Found errors (ERR|WRN|SEC) in cluster log
2023-03-11T10:28:47.264 DEBUG:teuthology.orchestra.run.smithi149:> sudo egrep '\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(PG_' | egrep -v '\(SLOW_OPS\)' | egrep -v 'overall HEALTH' | egrep -v 'slow request' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1
2023-03-11T10:28:47.280 DEBUG:teuthology.orchestra.run.smithi149:> sudo egrep '\[ERR\]' /var/log/ceph/ceph.log | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(PG_' | egrep -v '\(SLOW_OPS\)' | egrep -v 'overall HEALTH' | egrep -v 'slow request' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1
2023-03-11T10:28:47.338 DEBUG:teuthology.orchestra.run.smithi149:> sudo egrep '\[WRN\]' /var/log/ceph/ceph.log | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(PG_' | egrep -v '\(SLOW_OPS\)' | egrep -v 'overall HEALTH' | egrep -v 'slow request' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1
2023-03-11T10:28:47.392 INFO:teuthology.orchestra.run.smithi149.stdout:1678530338.9736485 osd.3 (osd.3) 104 : cluster [WRN] WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event

/a/yuriw-2023-03-10_22:46:37-rados-reef-distro-default-smithi/7203358/remote/smithi149/log/ceph-osd.3.log.gz

2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 op_wq(3) _process OpSchedulerItem(2.23 PGScrubDigestUpdate(pgid=2.23epoch_queued=258 scrub-token=0) class_id 1 cost 52428800 e258) queued
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 op_wq(3) _process 2.23 to_process <OpSchedulerItem(2.23 PGScrubDigestUpdate(pgid=2.23epoch_queued=258 scrub-token=0) class_id 1 cost 52428800 e258)> waiting <> waiting_peering {}
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 op_wq(3) _process OpSchedulerItem(2.23 PGScrubDigestUpdate(pgid=2.23epoch_queued=258 scrub-token=0) class_id 1 cost 52428800 e258) pg 0x55e1ee71d000
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 pg_epoch: 258 pg[2.23( v 258'779 (258'754,258'779] local-lis/les=89/90 n=605 ec=13/13 lis/c=89/89 les/c/f=90/90/0 sis=89) [3,1] r=0 lpr=89 crt=258'779 lcod 258'778 mlcod 258'778 active+clean+scrubbing+deep [ 2.23:  REQ_SCRUB ]  MUST_DEEP_SCRUB MUST_SCRUB planned REQ_SCRUB] forward_scrub_event: DigestUpdate queued at: 258
2023-03-11T10:25:38.967+0000 7f90a9fb1640 10 osd.3 pg_epoch: 258 pg[2.23( v 258'779 (258'754,258'779] local-lis/les=89/90 n=605 ec=13/13 lis/c=89/89 les/c/f=90/90/0 sis=89) [3,1] r=0 lpr=89 crt=258'779 lcod 258'778 mlcod 258'778 active+clean+scrubbing+deep [ 2.23:  REQ_SCRUB ]  MUST_DEEP_SCRUB MUST_SCRUB planned REQ_SCRUB] scrubber<Act/WaitReplicas>: scrubber event -->> digest_update_notification epoch: 258
2023-03-11T10:25:38.967+0000 7f90a9fb1640 10 osd.3 pg_epoch: 258 pg[2.23( v 258'779 (258'754,258'779] local-lis/les=89/90 n=605 ec=13/13 lis/c=89/89 les/c/f=90/90/0 sis=89) [3,1] r=0 lpr=89 crt=258'779 lcod 258'778 mlcod 258'778 active+clean+scrubbing+deep [ 2.23:  REQ_SCRUB ]  MUST_DEEP_SCRUB MUST_SCRUB planned REQ_SCRUB] scrubber<Act/WaitReplicas>: FSM: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
2023-03-11T10:25:38.967+0000 7f90a9fb1640  0 log_channel(cluster) log [WRN] : WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
2023-03-11T10:25:38.967+0000 7f90a9fb1640 10 osd.3 pg_epoch: 258 pg[2.23( v 258'779 (258'754,258'779] local-lis/les=89/90 n=605 ec=13/13 lis/c=89/89 les/c/f=90/90/0 sis=89) [3,1] r=0 lpr=89 crt=258'779 lcod 258'778 mlcod 258'778 active+clean+scrubbing+deep [ 2.23:  REQ_SCRUB ]  MUST_DEEP_SCRUB MUST_SCRUB planned REQ_SCRUB] scrubber<Act/WaitReplicas>: scrubber event --<< digest_update_notification
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 op_wq(3) _process 2.23 to_process <> waiting <> waiting_peering {}
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 op_wq(3) _process OpSchedulerItem(2.23 PGScrubMapsCompared(pgid=2.23epoch_queued=258 scrub-token=0) class_id 1 qos_cost 577 cost 52428800 e258) queued
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 op_wq(3) _process 2.23 to_process <OpSchedulerItem(2.23 PGScrubMapsCompared(pgid=2.23epoch_queued=258 scrub-token=0) class_id 1 qos_cost 577 cost 52428800 e258)> waiting <> waiting_peering {}
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 op_wq(3) _process OpSchedulerItem(2.23 PGScrubMapsCompared(pgid=2.23epoch_queued=258 scrub-token=0) class_id 1 qos_cost 577 cost 52428800 e258) pg 0x55e1ee71d000
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20 osd.3 pg_epoch: 258 pg[2.23( v 258'779 (258'754,258'779] local-lis/les=89/90 n=605 ec=13/13 lis/c=89/89 les/c/f=90/90/0 sis=89) [3,1] r=0 lpr=89 crt=258'779 lcod 258'778 mlcod 258'778 active+clean+scrubbing+deep [ 2.23:  REQ_SCRUB ]  MUST_DEEP_SCRUB MUST_SCRUB planned REQ_SCRUB] forward_scrub_event: MapsCompared queued at: 258
2023-03-11T10:25:38.967+0000 7f90a9fb1640 10 osd.3 pg_epoch: 258 pg[2.23( v 258'779 (258'754,258'779] local-lis/les=89/90 n=605 ec=13/13 lis/c=89/89 les/c/f=90/90/0 sis=89) [3,1] r=0 lpr=89 crt=258'779 lcod 258'778 mlcod 258'778 active+clean+scrubbing+deep [ 2.23:  REQ_SCRUB ]  MUST_DEEP_SCRUB MUST_SCRUB planned REQ_SCRUB] scrubber<Act/WaitReplicas>: scrubber event -->> send_maps_compared epoch: 258
2023-03-11T10:25:38.967+0000 7f90a9fb1640 20  scrubberFSM  event: --vvvv---- MapsCompared
2023-03-11T10:25:38.967+0000 7f90a9fb1640 10 osd.3 pg_epoch: 258 pg[2.23( v 258'779 (258'754,258'779] local-lis/les=89/90 n=605 ec=13/13 lis/c=89/89 les/c/f=90/90/0 sis=89) [3,1] r=0 lpr=89 crt=258'779 lcod 258'778 mlcod 258'778 active+clean+scrubbing+deep [ 2.23:  REQ_SCRUB ]  MUST_DEEP_SCRUB MUST_SCRUB planned REQ_SCRUB] scrubber<Act/WaitDigestUpdate>: FSM: -- state -->> Act/WaitDigestUpdate
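The OSD log above shows the scrubber FSM still in Act/WaitReplicas when the DigestUpdate event is delivered: it logs the cluster warning, discards the event, and only afterwards receives the MapsCompared event that moves it to Act/WaitDigestUpdate. Below is a minimal, self-contained boost::statechart sketch of that pattern, a state that warns on and discards an event arriving out of order. State and event names here are illustrative only, not the actual Ceph scrubber code.

#include <boost/statechart/state_machine.hpp>
#include <boost/statechart/simple_state.hpp>
#include <boost/statechart/event.hpp>
#include <boost/statechart/transition.hpp>
#include <boost/statechart/custom_reaction.hpp>
#include <boost/mpl/list.hpp>
#include <iostream>

namespace sc = boost::statechart;
namespace mpl = boost::mpl;

// Illustrative events named after the ones seen in the log.
struct DigestUpdate : sc::event<DigestUpdate> {};
struct MapsCompared : sc::event<MapsCompared> {};

struct WaitReplicas;
struct WaitDigestUpdate;

struct ScrubMachine : sc::state_machine<ScrubMachine, WaitReplicas> {};

// WaitReplicas expects MapsCompared. A DigestUpdate arriving here is
// out of order, so it is logged as a warning and discarded, which is
// the observable behavior reported in this ticket.
struct WaitReplicas : sc::simple_state<WaitReplicas, ScrubMachine> {
  using reactions = mpl::list<
      sc::custom_reaction<DigestUpdate>,
      sc::transition<MapsCompared, WaitDigestUpdate>>;

  sc::result react(const DigestUpdate&) {
    std::cerr << "WaitReplicas::react(const DigestUpdate&): "
                 "Unexpected DigestUpdate event\n";
    return discard_event();
  }
};

struct WaitDigestUpdate : sc::simple_state<WaitDigestUpdate, ScrubMachine> {};

int main() {
  ScrubMachine fsm;
  fsm.initiate();
  fsm.process_event(DigestUpdate{});  // arrives early: warning, discarded
  fsm.process_event(MapsCompared{});  // normal path: enter WaitDigestUpdate
}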


Related issues 1 (0 open, 1 closed)

Copied to RADOS - Backport #61345: reef: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event (Resolved; assignee: Samuel Just)
