Bug #62076
reef: Test failure: test_grow_shrink (tasks.cephfs.test_failover.TestMultiFilesystems)
Description
/a/yuriw-2023-07-14_23:37:57-fs-wip-yuri8-testing-2023-07-14-0803-reef-distro-default-smithi/7338327
The standby MDS (mds.b) was in up:creating when the monitors assigned it an active rank for file system `alpha`. mds.b remained in up:creating for over a minute, which caused the test to fail since it expects 3 MDSs in up:active.
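For context, the failing check boils down to polling the fsmap: the test waits a bounded time for all three ranks of `alpha` to reach up:active and gives up after the timeout. A minimal sketch of that kind of wait (hypothetical helpers, assuming the usual `ceph fs dump --format=json` layout; not the actual teuthology code):

import json
import subprocess
import time

def count_active_ranks(fs_name):
    # Count MDS ranks reporting up:active for one filesystem,
    # using `ceph fs dump --format=json` (assumed layout).
    out = subprocess.run(["ceph", "fs", "dump", "--format=json"],
                         capture_output=True, text=True, check=True).stdout
    dump = json.loads(out)
    for fs in dump.get("filesystems", []):
        if fs["mdsmap"]["fs_name"] == fs_name:
            return sum(1 for info in fs["mdsmap"]["info"].values()
                       if info["state"] == "up:active")
    return 0

def wait_for_active(fs_name, want, timeout=60):
    # Fail, as the test does, if `want` active ranks are not seen in time.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if count_active_ranks(fs_name) >= want:
            return
        time.sleep(2)
    raise RuntimeError(f"{fs_name}: expected {want} up:active MDSs within {timeout}s")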
MDS log file: ./remote/smithi042/log/ceph-mds.b.log.gz
Things go quiet after this log entry:
-626> 2023-07-15T03:40:59.620+0000 7f4403fa5700 1 -- [v2:172.21.15.42:6834/3295642961,v1:172.21.15.42:6836/3295642961] <== osd.0 v2:172.21.15.16:6808/131189 7 ==== osd_op_reply(24 202.00000006 [delete] v366'4 uv3 ondisk = -2 ((2) No such file or directory)) v8 ==== 156 +0+0 (crc 0 0 0) 0x5579a526ab40 con 0x5579a526c800
Then it's mostly beacon messages to the monitors and mgrreport messages.
Updated by Patrick Donnelly 10 months ago
- Project changed from CephFS to RADOS
- Category changed from Correctness/Safety to Correctness/Safety
Stuck op:
2023-07-15T03:40:59.612+0000 7f43fb794700 1 -- [v2:172.21.15.42:6834/3295642961,v1:172.21.15.42:6836/3295642961] --> [v2:172.21.15.42:6800/101270,v1:172.21.15.42:6801/101270] -- osd_op(unknown.0.516:29 56.19 56:9ec52373:::61a.00000000:head [omap-set-header in=274b] snapc 0=[] ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8 -- 0x5579a5359000 con 0x5579a5276c00
...
2023-07-15T03:42:04.793+0000 7f4403fa5700 1 -- [v2:172.21.15.42:6834/3295642961,v1:172.21.15.42:6836/3295642961] <== osd.6 v2:172.21.15.42:6800/101270 2 ==== osd_op_reply(9 61a.00000000 [create,setxattr (62),setxattr (30)] v0'0 uv0 ondisk = -108 ((108) Cannot send after transport endpoint shutdown)) v8 ==== 240+0+0 (crc 0 0 0) 0x5579a526ab40 con 0x5579a5276c00
OSD says:
2023-07-15T03:40:59.613+0000 7ff3f6d01700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean] check_laggy mnow 1406.601684570s > readable_until 1406.601074219s
2023-07-15T03:40:59.613+0000 7ff3f6d01700 20 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] prepare_stats_for_publish reporting purged_snaps []
2023-07-15T03:40:59.613+0000 7ff3f6d01700 15 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] publish_stats_to_osd 366:10
2023-07-15T03:40:59.613+0000 7ff3f6d01700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] check_laggy not readable
2023-07-15T03:40:59.613+0000 7ff3f6d01700 10 osd.6 366 dequeue_op osd_op(mds.2.516:9 56.19 56:9ec52373:::61a.00000000:head [create,setxattr parent (62) in=68b,setxattr layout (30) in=36b] snapc 0=[] ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8 finish
2023-07-15T03:40:59.613+0000 7ff3f6d01700 20 osd.6 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 RenewLease) class_id 2 prio 255 cost 10 e366)> waiting <> waiting_peering {}
2023-07-15T03:40:59.613+0000 7ff3f6d01700 20 osd.6 op_wq(1) _process OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLeaseAck epoch 366 from osd.2 pg_lease_ack(ruub 1406.601318359s)) class_id 2 prio 255 cost 10 e366) queued
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 RenewLease) class_id 2 prio 255 cost 10 e366),OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLeaseAck epoch 366 from osd.2 pg_lease_ack(ruub 1406.601318359s)) class_id 2 prio 255 cost 10 e366)> waiting <> waiting_peering {}
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 op_wq(1) _process OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 RenewLease) class_id 2 prio 255 cost 10 e366) pg 0x55cf2f63a000
2023-07-15T03:40:59.613+0000 7ff3fad09700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] do_peering_event: epoch_sent: 366 epoch_requested: 366 RenewLease
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] recalc_readable_until peer osd.2 ruub 1406.601074219s
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] recalc_readable_until readable_until[_ub] 1406.601074219s (sent 1406.601806641s)
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 366 get_con_osd_cluster to osd.2 from_epoch 366
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 366 get_nextmap_reserved map_reservations: {366=1}
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 366 release_map epoch: 366
2023-07-15T03:40:59.613+0000 7ff3fad09700 1 -- [v2:172.21.15.42:6803/101270,v1:172.21.15.42:6805/101270] --> [v2:172.21.15.16:6802/131191,v1:172.21.15.16:6803/131191] -- pg_lease(56.19 pg_lease(ru 1406.601074219s ub 1406.601806641s int 0.000000000s) e366/366) v1 -- 0x55cf30df1b00 con 0x55cf2ecc8000
2023-07-15T03:40:59.613+0000 7ff3fad09700 10 log is not dirty
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLeaseAck epoch 366 from osd.2 pg_lease_ack(ruub 1406.601318359s)) class_id 2 prio 255 cost 10 e366)> waiting <> waiting_peering {}
2023-07-15T03:40:59.613+0000 7ff3fc50c700 15 osd.6 366 enqueue_peering_evt 56.19 epoch_sent: 366 epoch_requested: 366 RenewLease
2023-07-15T03:40:59.613+0000 7ff3fad09700 20 osd.6 op_wq(1) _process OpSchedulerItem(56.19 PGOpItem(op=osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8) class_id 3 prio 127 qos_cost 212087 cost 274 e366) queued
2023-07-15T03:40:59.613+0000 7ff3fc50c700 20 osd.6 op_wq(1) _enqueue OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 RenewLease) class_id 2 prio 255 cost 10 e366)
2023-07-15T03:40:59.614+0000 7ff3f6d01700 20 osd.6 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLeaseAck epoch 366 from osd.2 pg_lease_ack(ruub 1406.601318359s)) class_id 2 prio 255 cost 10 e366),OpSchedulerItem(56.19 PGOpItem(op=osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8) class_id 3 prio 127 qos_cost 212087 cost 274 e366)> waiting <> waiting_peering {}
2023-07-15T03:40:59.614+0000 7ff3f6d01700 20 osd.6 op_wq(1) _process OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLeaseAck epoch 366 from osd.2 pg_lease_ack(ruub 1406.601318359s)) class_id 2 prio 255 cost 10 e366) pg 0x55cf2f63a000
2023-07-15T03:40:59.614+0000 7ff3f6d01700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] do_peering_event: epoch_sent: 366 epoch_requested: 366 MLeaseAck epoch 366 from osd.2 pg_lease_ack(ruub 1406.601318359s)
2023-07-15T03:40:59.614+0000 7ff3f6d01700 20 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] recalc_readable_until peer osd.2 ruub 1406.601318359s
2023-07-15T03:40:59.614+0000 7ff3f6d01700 20 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] recalc_readable_until readable_until[_ub] 1406.601318359s (sent 1406.601806641s)
2023-07-15T03:40:59.614+0000 7ff3f6d01700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] recheck_readable still laggy (mnow 1406.601928711s >= readable_until 1406.601318359s)
2023-07-15T03:40:59.614+0000 7ff3f6d01700 10 log is not dirty
2023-07-15T03:40:59.614+0000 7ff3f6d01700 20 osd.6 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGOpItem(op=osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8) class_id 3 prio 127 qos_cost 212087 cost 274 e366)> waiting <> waiting_peering {}
2023-07-15T03:40:59.614+0000 7ff3f6d01700 20 osd.6 op_wq(1) _process OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 RenewLease) class_id 2 prio 255 cost 10 e366) queued
2023-07-15T03:40:59.614+0000 7ff3fad09700 20 osd.6 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGOpItem(op=osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8) class_id 3 prio 127 qos_cost 212087 cost 274 e366),OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 RenewLease) class_id 2 prio 255 cost 10 e366)> waiting <> waiting_peering {}
2023-07-15T03:40:59.614+0000 7ff3fad09700 20 osd.6 op_wq(1) _process OpSchedulerItem(56.19 PGOpItem(op=osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8) class_id 3 prio 127 qos_cost 212087 cost 274 e366) pg 0x55cf2f63a000
2023-07-15T03:40:59.614+0000 7ff3fad09700 10 osd.6 366 dequeue_op osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8 prio 127 cost 274 latency 0.000685 osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8 pg pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy]
2023-07-15T03:40:59.614+0000 7ff3fad09700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] _handle_message: osd_op(mds.2.516:29 56.19 56.cec4a379 (undecoded) ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8
2023-07-15T03:40:59.614+0000 7ff3fad09700 20 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] do_op: op osd_op(mds.2.516:29 56.19 56:9ec52373:::61a.00000000:head [omap-set-header in=274b] snapc 0=[] ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8
2023-07-15T03:40:59.614+0000 7ff3fad09700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] check_laggy not readable
2023-07-15T03:40:59.614+0000 7ff3fad09700 10 osd.6 366 dequeue_op osd_op(mds.2.516:29 56.19 56:9ec52373:::61a.00000000:head [omap-set-header in=274b] snapc 0=[] ondisk+write+known_if_redirected+full_force+supports_pool_eio e366) v8 finish
The pg just went laggy, blocking that op.
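To make the laggy gate concrete: the primary may only service ops while its read lease, extended by pg_lease_ack messages from the replicas, is still valid; once monotonic "now" passes readable_until, check_laggy parks the op and the PG reports active+clean+laggy until a fresh ack extends the lease. A toy model of that logic (a sketch of the idea only, not the actual PeeringState code):

class PGLeaseModel:
    # Toy model of the primary's read-lease gate; mirrors the idea behind the
    # check_laggy()/recheck_readable() lines in the osd.6 log, not the real C++.

    def __init__(self):
        self.readable_until = 0.0        # extended by pg_lease_ack from replicas
        self.waiting_for_readable = []   # ops parked while the PG is laggy

    def check_laggy(self, op, mnow):
        # "check_laggy mnow X > readable_until Y": the lease has expired,
        # so the op is parked and the PG goes active+clean+laggy.
        if mnow > self.readable_until:
            self.waiting_for_readable.append(op)
            return False
        return True  # readable: the op can be serviced

    def on_lease_ack(self, ruub, mnow):
        # A pg_lease_ack extends readable_until; if we are readable again the
        # parked ops are requeued, otherwise we stay laggy
        # ("recheck_readable still laggy" in the log above).
        self.readable_until = max(self.readable_until, ruub)
        if mnow < self.readable_until:
            requeued, self.waiting_for_readable = self.waiting_for_readable, []
            return requeued
        return []

# With the numbers from the osd.6 log: mnow 1406.601684570 > readable_until
# 1406.601074219, so the omap-set-header op gets parked.
pg = PGLeaseModel()
pg.on_lease_ack(ruub=1406.601074219, mnow=1406.600000000)
pg.check_laggy("mds.2.516:29 omap-set-header", mnow=1406.601684570)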
Here's what osd.2 is doing towards the end, repeatedly:
2023-07-15T03:42:04.123+0000 7fcf3acae700 1 -- [v2:172.21.15.16:6802/131191,v1:172.21.15.16:6803/131191] <== osd.6 v2:172.21.15.42:6803/101270 1292211 ==== pg_lease(56.19 pg_lease(ru 1471.105590820s ub 1471.110839844s int 0.000000000s) e366/366) v1 ==== 58+0+0 (crc 0 0 0) 0x562314084c00 con 0x562312393400
2023-07-15T03:42:04.123+0000 7fcf1a057700 1 -- [v2:172.21.15.16:6802/131191,v1:172.21.15.16:6803/131191] --> [v2:172.21.15.42:6803/101270,v1:172.21.15.42:6805/101270] -- pg_lease_ack(56.19 pg_lease_ack(ruub 1471.109252930s) e366/366) v1 -- 0x562314491760 con 0x562312393400
2023-07-15T03:42:04.123+0000 7fcf1a057700 10 log is not dirty
2023-07-15T03:42:04.123+0000 7fcf3acae700 15 osd.2 366 enqueue_peering_evt 56.19 epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.105590820s ub 1471.110839844s int 0.000000000s)
2023-07-15T03:42:04.123+0000 7fcf3acae700 20 osd.2 op_wq(1) _enqueue OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.105590820s ub 1471.110839844s int 0.000000000s)) class_id 2 prio 255 cost 10 e366)
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.104858398s ub 1471.109375000s int 0.000000000s)) class_id 2 prio 255 cost 10 e366)> waiting <> waiting_peering {}
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 op_wq(1) _process OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.104980469s ub 1471.109497070s int 0.000000000s)) class_id 2 prio 255 cost 10 e366) queued
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 op_wq(1) _process 56.19 to_process <OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.104858398s ub 1471.109375000s int 0.000000000s)) class_id 2 prio 255 cost 10 e366),OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.104980469s ub 1471.109497070s int 0.000000000s)) class_id 2 prio 255 cost 10 e366)> waiting <> waiting_peering {}
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 op_wq(1) _process OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.104858398s ub 1471.109375000s int 0.000000000s)) class_id 2 prio 255 cost 10 e366) pg 0x562314322000
2023-07-15T03:42:04.123+0000 7fcf3acae700 1 -- [v2:172.21.15.16:6802/131191,v1:172.21.15.16:6803/131191] <== osd.6 v2:172.21.15.42:6803/101270 1292212 ==== pg_lease(56.19 pg_lease(ru 1471.107055664s ub 1471.111206055s int 0.000000000s) e366/366) v1 ==== 58+0+0 (crc 0 0 0) 0x562314084c00 con 0x562312393400
2023-07-15T03:42:04.123+0000 7fcf1a057700 10 osd.2 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=1 lpr=365 crt=0'0 mlcod 0'0 active mbc={}] do_peering_event: epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.104858398s ub 1471.109375000s int 0.000000000s)
2023-07-15T03:42:04.123+0000 7fcf3acae700 15 osd.2 366 enqueue_peering_evt 56.19 epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.107055664s ub 1471.111206055s int 0.000000000s)
2023-07-15T03:42:04.123+0000 7fcf3acae700 20 osd.2 op_wq(1) _enqueue OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.107055664s ub 1471.111206055s int 0.000000000s)) class_id 2 prio 255 cost 10 e366)
2023-07-15T03:42:04.123+0000 7fcf1a057700 10 osd.2 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=1 lpr=365 crt=0'0 mlcod 0'0 active mbc={}] proc_lease pg_lease(ru 1471.104858398s ub 1471.109375000s int 0.000000000s)
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=1 lpr=365 crt=0'0 mlcod 0'0 active mbc={}] peer_clock_delta_ub 0.073277198s -> ru 1471.031372070s
2023-07-15T03:42:04.123+0000 7fcf3acae700 1 -- [v2:172.21.15.16:6802/131191,v1:172.21.15.16:6803/131191] <== osd.6 v2:172.21.15.42:6803/101270 1292213 ==== pg_lease(56.19 pg_lease(ru 1471.107177734s ub 1471.111206055s int 0.000000000s) e366/366) v1 ==== 58+0+0 (crc 0 0 0) 0x562314084c00 con 0x562312393400
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=1 lpr=365 crt=0'0 mlcod 0'0 active mbc={}] proc_lease readable_until now 1471.031372070s
2023-07-15T03:42:04.123+0000 7fcf3acae700 15 osd.2 366 enqueue_peering_evt 56.19 epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.107177734s ub 1471.111206055s int 0.000000000s)
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=1 lpr=365 crt=0'0 mlcod 0'0 active mbc={}] peer_clock_delta_lb 0.072879992s -> ruub 1471.036499023s
2023-07-15T03:42:04.123+0000 7fcf3acae700 20 osd.2 op_wq(1) _enqueue OpSchedulerItem(56.19 PGPeeringEvent(epoch_sent: 366 epoch_requested: 366 MLease epoch 366 from osd.6 pg_lease(ru 1471.107177734s ub 1471.111206055s int 0.000000000s)) class_id 2 prio 255 cost 10 e366)
2023-07-15T03:42:04.123+0000 7fcf1a057700 20 osd.2 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=1 lpr=365 crt=0'0 mlcod 0'0 active mbc={}] proc_lease readable_until_ub now 1471.036499023s
From: /teuthology/yuriw-2023-07-14_23:37:57-fs-wip-yuri8-testing-2023-07-14-0803-reef-distro-default-smithi/7338327/remote/smithi016/log/ceph-osd.2.log.gz
I'm firmly in the wilderness of RADOS. I think our friends should have a look!
Updated by Patrick Donnelly 10 months ago
There was an upmap command for this pg which may be relevant:
2023-07-15T03:40:36.536+0000 7fda2e3db700 7 mon.a@0(leader).osd e364 prepare_update mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "56.19", "id": [4, 6]} v 0) v1 from mgr.4101 172.21.15.42:0/100607
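For reference, the equivalent admin CLI for that mgr-issued mon command is `ceph osd pg-upmap-items 56.19 4 6`, i.e. remap this PG's placement from osd.4 to osd.6. A small sketch for replaying and inspecting the mapping on a throwaway test cluster (assumes the usual `ceph pg map` JSON output):

import json
import subprocess

def ceph_json(*args):
    # Run a ceph CLI command and decode its JSON output.
    out = subprocess.run(["ceph", *args, "--format=json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# Equivalent of the mon_command above: remap pg 56.19 from osd.4 to osd.6.
subprocess.run(["ceph", "osd", "pg-upmap-items", "56.19", "4", "6"], check=True)

# Confirm where the PG now maps (up/acting sets and primaries).
print(ceph_json("pg", "map", "56.19"))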
Updated by Radoslaw Zarzynski 10 months ago
- Assignee set to Laura Flores
Hi Laura. Could you please take a look if you have free cycles?
Updated by Laura Flores 10 months ago
Hey Venky, Patrick, has this failure reproduced at all since the tracker was opened?
Updated by Laura Flores 10 months ago
Snippet from the cluster log, which is basically what Patrick already described above:
2023-07-15T03:40:42.553786+0000 mon.a (mon.0) 4083 : cluster 1 daemon mds.a assigned to filesystem alpha as rank 1 (now has 2 ranks)
2023-07-15T03:40:42.553862+0000 mon.a (mon.0) 4084 : cluster 1 Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
2023-07-15T03:40:42.553886+0000 mon.a (mon.0) 4085 : cluster 1 Cluster is now healthy
2023-07-15T03:40:42.554152+0000 mon.a (mon.0) 4086 : cluster 0 fsmap alpha:1 bravo:2 {alpha:0=c=up:active,bravo:0=d=up:active,bravo:1=b=up:active} 1 up:standby
2023-07-15T03:40:42.558608+0000 mon.a (mon.0) 4087 : cluster 0 fsmap alpha:2 bravo:2 {alpha:0=c=up:active,alpha:1=a=up:creating,bravo:0=d=up:active,bravo:1=b=up:active}
2023-07-15T03:40:42.571929+0000 mon.a (mon.0) 4088 : cluster 1 daemon mds.a is now active in filesystem alpha as rank 1
2023-07-15T03:40:43.558859+0000 mon.a (mon.0) 4090 : cluster 0 mds.? [v2:172.21.15.16:6834/1379235747,v1:172.21.15.16:6835/1379235747] up:active
2023-07-15T03:40:43.558943+0000 mon.a (mon.0) 4091 : cluster 0 fsmap alpha:2 bravo:2 {alpha:0=c=up:active,alpha:1=a=up:active,bravo:0=d=up:active,bravo:1=b=up:active}
2023-07-15T03:40:43.706600+0000 mgr.x (mgr.4101) 800 : cluster 0 pgmap v1052: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 3.0 KiB/s wr, 10 op/s; 0 B/s, 0 objects/s recovering
2023-07-15T03:40:45.707046+0000 mgr.x (mgr.4101) 801 : cluster 0 pgmap v1053: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 2.9 KiB/s wr, 9 op/s; 0 B/s, 0 objects/s recovering
2023-07-15T03:40:47.708011+0000 mgr.x (mgr.4101) 802 : cluster 0 pgmap v1054: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 1.2 KiB/s wr, 4 op/s; 0 B/s, 0 objects/s recovering
2023-07-15T03:40:49.708487+0000 mgr.x (mgr.4101) 803 : cluster 0 pgmap v1055: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 1.1 KiB/s wr, 3 op/s; 0 B/s, 0 objects/s recovering
2023-07-15T03:40:49.768299+0000 mon.a (mon.0) 4095 : cluster 1 stopping daemon mds.b
2023-07-15T03:40:49.768547+0000 mon.a (mon.0) 4096 : cluster 0 fsmap alpha:2 bravo:2 {alpha:0=c=up:active,alpha:1=a=up:active,bravo:0=d=up:active,bravo:1=b=up:active}
2023-07-15T03:40:49.771714+0000 mon.a (mon.0) 4097 : cluster 0 fsmap alpha:2 bravo:2 {alpha:0=c=up:active,alpha:1=a=up:active,bravo:0=d=up:active,bravo:1=b=up:stopping}
2023-07-15T03:40:51.530241+0000 mon.a (mon.0) 4100 : cluster 3 Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2023-07-15T03:40:51.534035+0000 mon.a (mon.0) 4102 : cluster 0 fsmap alpha:2 bravo:2 {alpha:0=c=up:active,alpha:1=a=up:active,bravo:0=d=up:active,bravo:1=b=up:stopping}
2023-07-15T03:40:51.708862+0000 mgr.x (mgr.4101) 804 : cluster 0 pgmap v1056: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 1023 B/s wr, 3 op/s; 0 B/s, 0 objects/s recovering
2023-07-15T03:40:53.709698+0000 mgr.x (mgr.4101) 805 : cluster 0 pgmap v1057: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 1023 B/s wr, 3 op/s; 0 B/s, 0 objects/s recovering
2023-07-15T03:40:55.710225+0000 mgr.x (mgr.4101) 806 : cluster 0 pgmap v1058: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 0 B/s wr, 0 op/s
2023-07-15T03:40:57.710953+0000 mgr.x (mgr.4101) 807 : cluster 0 pgmap v1059: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 170 B/s wr, 0 op/s
2023-07-15T03:40:58.325103+0000 mon.a (mon.0) 4104 : cluster 1 daemon mds.b finished stopping rank 1 in filesystem bravo (now has 1 ranks)
2023-07-15T03:40:58.601239+0000 mon.a (mon.0) 4105 : cluster 0 mds.? [v2:172.21.15.42:6834/4076550945,v1:172.21.15.42:6836/4076550945] down:stopped
2023-07-15T03:40:58.601309+0000 mon.a (mon.0) 4106 : cluster 0 fsmap alpha:2 bravo:1 {alpha:0=c=up:active,alpha:1=a=up:active,bravo:0=d=up:active}
2023-07-15T03:40:59.605216+0000 mon.a (mon.0) 4107 : cluster 0 mds.? [v2:172.21.15.42:6834/3295642961,v1:172.21.15.42:6836/3295642961] up:boot
2023-07-15T03:40:59.605270+0000 mon.a (mon.0) 4108 : cluster 1 daemon mds.b assigned to filesystem alpha as rank 2 (now has 3 ranks)
2023-07-15T03:40:59.605357+0000 mon.a (mon.0) 4109 : cluster 1 Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
2023-07-15T03:40:59.605379+0000 mon.a (mon.0) 4110 : cluster 1 Cluster is now healthy
2023-07-15T03:40:59.605571+0000 mon.a (mon.0) 4111 : cluster 0 fsmap alpha:2 bravo:1 {alpha:0=c=up:active,alpha:1=a=up:active,bravo:0=d=up:active} 1 up:standby
2023-07-15T03:40:59.608501+0000 mon.a (mon.0) 4112 : cluster 0 fsmap alpha:3 bravo:1 {alpha:0=c=up:active,alpha:1=a=up:active,alpha:2=b=up:creating,bravo:0=d=up:active}
2023-07-15T03:40:59.711520+0000 mgr.x (mgr.4101) 808 : cluster 0 pgmap v1060: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 170 B/s wr, 0 op/s
2023-07-15T03:41:01.711888+0000 mgr.x (mgr.4101) 809 : cluster 0 pgmap v1061: 201 pgs: 201 active+clean; 584 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 170 B/s wr, 0 op/s
2023-07-15T03:41:03.712643+0000 mgr.x (mgr.4101) 810 : cluster 0 pgmap v1062: 201 pgs: 1 active+clean+laggy, 200 active+clean; 585 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 1.2 KiB/s wr, 3 op/s
2023-07-15T03:41:05.713051+0000 mgr.x (mgr.4101) 811 : cluster 0 pgmap v1063: 201 pgs: 1 active+clean+laggy, 200 active+clean; 585 KiB data, 312 MiB used, 720 GiB / 720 GiB avail; 1.2 KiB/s wr, 3 op/s
... long bit saying that 1 pg is "active+clean+laggy"...
2023-07-15T03:42:01.728267+0000 mgr.x (mgr.4101) 839 : cluster 0 pgmap v1091: 201 pgs: 1 active+clean+laggy, 200 active+clean; 585 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:03.729140+0000 mgr.x (mgr.4101) 840 : cluster 0 pgmap v1092: 201 pgs: 1 active+clean+laggy, 200 active+clean; 585 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:04.124607+0000 mon.a (mon.0) 4128 : cluster 0 osdmap e367: 8 total, 8 up, 8 in
2023-07-15T03:42:04.798719+0000 mon.a (mon.0) 4129 : cluster 3 Health check failed: 1 filesystem is degraded (FS_DEGRADED)
2023-07-15T03:42:04.798753+0000 mon.a (mon.0) 4130 : cluster 3 Health check failed: 1 filesystem has a failed mds daemon (FS_WITH_FAILED_MDS)
2023-07-15T03:42:04.798773+0000 mon.a (mon.0) 4131 : cluster 4 Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2023-07-15T03:42:04.798786+0000 mon.a (mon.0) 4132 : cluster 3 Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2023-07-15T03:42:04.805377+0000 mon.a (mon.0) 4134 : cluster 0 fsmap alpha:0/2 bravo:1 {bravo:0=d=up:active}, 2 failed
2023-07-15T03:42:05.729550+0000 mgr.x (mgr.4101) 841 : cluster 0 pgmap v1094: 201 pgs: 1 active+clean+laggy, 200 active+clean; 585 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:05.803283+0000 mon.a (mon.0) 4136 : cluster 1 Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
2023-07-15T03:42:05.803313+0000 mon.a (mon.0) 4137 : cluster 1 Health check cleared: FS_WITH_FAILED_MDS (was: 1 filesystem has a failed mds daemon)
2023-07-15T03:42:05.803326+0000 mon.a (mon.0) 4138 : cluster 1 Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
2023-07-15T03:42:05.803340+0000 mon.a (mon.0) 4139 : cluster 1 Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
2023-07-15T03:42:05.803354+0000 mon.a (mon.0) 4140 : cluster 1 Cluster is now healthy
2023-07-15T03:42:05.806111+0000 mon.a (mon.0) 4141 : cluster 0 mds.? [v2:172.21.15.42:6834/447356980,v1:172.21.15.42:6836/447356980] up:boot
2023-07-15T03:42:05.806165+0000 mon.a (mon.0) 4142 : cluster 0 mds.? [v2:172.21.15.16:6832/827201626,v1:172.21.15.16:6834/827201626] up:boot
2023-07-15T03:42:05.806242+0000 mon.a (mon.0) 4143 : cluster 0 mds.? [v2:172.21.15.16:6833/1591114195,v1:172.21.15.16:6835/1591114195] up:boot
2023-07-15T03:42:05.806415+0000 mon.a (mon.0) 4145 : cluster 0 fsmap bravo:1 {0=d=up:active} 3 up:standby
2023-07-15T03:42:06.807959+0000 mon.a (mon.0) 4148 : cluster 0 osdmap e368: 8 total, 8 up, 8 in
2023-07-15T03:42:07.730230+0000 mgr.x (mgr.4101) 842 : cluster 0 pgmap v1096: 169 pgs: 169 active+clean; 580 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:07.813458+0000 mon.a (mon.0) 4151 : cluster 0 osdmap e369: 8 total, 8 up, 8 in
2023-07-15T03:42:09.730673+0000 mgr.x (mgr.4101) 843 : cluster 0 pgmap v1098: 105 pgs: 105 active+clean; 580 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:11.731143+0000 mgr.x (mgr.4101) 844 : cluster 0 pgmap v1099: 105 pgs: 105 active+clean; 580 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:11.891964+0000 mon.a (mon.0) 4163 : cluster 0 osdmap e370: 8 total, 8 up, 8 in
2023-07-15T03:42:12.821338+0000 mon.a (mon.0) 4164 : cluster 3 Health check failed: 1 filesystem is degraded (FS_DEGRADED)
2023-07-15T03:42:12.821376+0000 mon.a (mon.0) 4165 : cluster 4 Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2023-07-15T03:42:12.828110+0000 mon.a (mon.0) 4167 : cluster 0 fsmap bravo:0/1 3 up:standby, 1 failed
2023-07-15T03:42:13.731936+0000 mgr.x (mgr.4101) 845 : cluster 0 pgmap v1101: 105 pgs: 105 active+clean; 580 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:13.826199+0000 mon.a (mon.0) 4169 : cluster 1 Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
2023-07-15T03:42:13.826233+0000 mon.a (mon.0) 4170 : cluster 1 Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
2023-07-15T03:42:13.826271+0000 mon.a (mon.0) 4171 : cluster 1 Cluster is now healthy
2023-07-15T03:42:13.828611+0000 mon.a (mon.0) 4172 : cluster 0 mds.? [v2:172.21.15.42:6835/1511194975,v1:172.21.15.42:6837/1511194975] up:boot
2023-07-15T03:42:13.828742+0000 mon.a (mon.0) 4174 : cluster 0 fsmap 4 up:standby
2023-07-15T03:42:14.885801+0000 mon.a (mon.0) 4177 : cluster 0 osdmap e371: 8 total, 8 up, 8 in
2023-07-15T03:42:15.732472+0000 mgr.x (mgr.4101) 846 : cluster 0 pgmap v1103: 73 pgs: 73 active+clean; 577 KiB data, 312 MiB used, 720 GiB / 720 GiB avail
2023-07-15T03:42:15.887221+0000 mon.a (mon.0) 4180 : cluster 0 osdmap e372: 8 total, 8 up, 8 in
2023-07-15T03:42:16.686603+0000 client.admin (client.?) 0 : cluster 1 Ended test tasks.cephfs.test_failover.TestMultiFilesystems.test_grow_shrink
Updated by Laura Flores 10 months ago
Rerunning the same test 10x here to check for any reproducers:
http://pulpito.front.sepia.ceph.com/lflores-2023-07-28_19:47:15-fs-wip-yuri8-testing-2023-07-14-0803-reef-distro-default-smithi/
Updated by Laura Flores 10 months ago
The laggy PG (56.19) only shows up on osd.6:
/a/yuriw-2023-07-14_23:37:57-fs-wip-yuri8-testing-2023-07-14-0803-reef-distro-default-smithi/7338327/remote/smithi042/log/ceph-osd.6.log.gz
Investigating events that led up to it.
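One quick way to pull the relevant lease/laggy events out of the gzipped OSD log while doing that (simple sketch; adjust the path to the archived ceph-osd.6.log.gz):

import gzip

LOG = "ceph-osd.6.log.gz"  # path under .../remote/smithi042/log/ in the archive
KEYWORDS = ("check_laggy", "recheck_readable", "pg_lease", "proc_lease")

# Print only the pg 56.19 lines that touch the lease / laggy machinery,
# which is where the timeline of the stuck op shows up.
with gzip.open(LOG, "rt", errors="replace") as f:
    for line in f:
        if "56.19" in line and any(k in line for k in KEYWORDS):
            print(line.rstrip())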
Updated by Laura Flores 10 months ago
2023-07-15T03:40:59.613+0000 7ff3f6d01700 10 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean] check_laggy mnow 1406.601684570s > readable_until 1406.601074219s
2023-07-15T03:40:59.613+0000 7ff3f6d01700 20 osd.6 pg_epoch: 366 pg[56.19( empty local-lis/les=365/366 n=0 ec=353/353 lis/c=365/365 les/c/f=366/366/0 sis=365) [6,2] r=0 lpr=366 crt=0'0 mlcod 0'0 active+clean+laggy] prepare_stats_for_publish reporting purged_snaps []
From osd.6's log, the PG was marked laggy here when "mnow" was determined to be greater than "readable_until", and osd.6 was the primary for the PG at the time.
Updated by Laura Flores 10 months ago
The failure did not reproduce in the reruns above, so we know the issue is transient.
Updated by Laura Flores 10 months ago
So the upmap command mapped pg 56.19 from osd.4 to osd.6. Then on osd.6 it went into a laggy state.
Updated by Radoslaw Zarzynski 9 months ago
- Status changed from New to In Progress