Bug #58467

osd: only one OSD daemon on a node fails to reply to heartbeats

Added by yite gu over 1 year ago. Updated about 1 year ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

osd.9 log file:

2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.147:6804 osd.0 since back 2023-01-15T20:31:08.051853+0000 front 2023-01-15T20:31:08.051869+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.46:6804 osd.1 since back 2023-01-15T20:31:08.051903+0000 front 2023-01-15T20:31:08.051901+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.173:6804 osd.5 since back 2023-01-15T20:31:08.051825+0000 front 2023-01-15T20:31:08.051820+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.174:6804 osd.7 since back 2023-01-15T20:31:08.051943+0000 front 2023-01-15T20:31:08.051911+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.201:6804 osd.8 since back 2023-01-15T20:31:08.051908+0000 front 2023-01-15T20:31:08.051860+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.175:6804 osd.10 since back 2023-01-15T20:31:08.051946+0000 front 2023-01-15T20:31:08.051890+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.178:6804 osd.11 since back 2023-01-15T20:31:08.051849+0000 front 2023-01-15T20:31:08.051963+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.183:6804 osd.14 since back 2023-01-15T20:31:08.051838+0000 front 2023-01-15T20:31:08.051884+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.189:6804 osd.15 since back 2023-01-15T20:31:08.051916+0000 front 2023-01-15T20:31:08.051907+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.196:6804 osd.16 since back 2023-01-15T20:31:08.051865+0000 front 2023-01-15T20:31:08.051878+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:33.824+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.147:6804 osd.0 since back 2023-01-15T20:31:08.051853+0000 front 2023-01-15T20:31:08.051869+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:33.824+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.46:6804 osd.1 since back 2023-01-15T20:31:08.051903+0000 front 2023-01-15T20:31:08.051901+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:33.824+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.173:6804 osd.5 since back 2023-01-15T20:31:08.051825+0000 front 2023-01-15T20:31:08.051820+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:33.824+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.174:6804 osd.7 since back 2023-01-15T20:31:08.051943+0000 front 2023-01-15T20:31:08.051911+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:33.824+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.201:6804 osd.8 since back 2023-01-15T20:31:08.051908+0000 front 2023-01-15T20:31:08.051860+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:33.824+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.175:6804 osd.10 since back 2023-01-15T20:31:08.051946+0000 front 2023-01-15T20:31:08.051890+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)
2023-01-15T20:31:33.824+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: no reply from 172.16.1.178:6804 osd.11 since back 2023-01-15T20:31:08.051849+0000 front 2023-01-15T20:31:08.051963+0000 (oldest deadline 2023-01-15T20:31:32.151841+0000)

The mon log shows osd.9 being marked down:
2023-01-15T20:31:32.223+0000 7fe4ad7bb700  1 mon.c@0(leader).osd e3962 prepare_failure osd.9 [v2:172.16.1.133:6800/39,v1:172.16.1.133:6801/39] from osd.14 is reporting failure:1
2023-01-15T20:31:32.223+0000 7fe4ad7bb700  0 log_channel(cluster) log [DBG] : osd.9 reported failed by osd.14
2023-01-15T20:31:33.161+0000 7fe4ad7bb700  1 mon.c@0(leader).osd e3962 prepare_failure osd.9 [v2:172.16.1.133:6800/39,v1:172.16.1.133:6801/39] from osd.16 is reporting failure:1
2023-01-15T20:31:33.161+0000 7fe4ad7bb700  0 log_channel(cluster) log [DBG] : osd.9 reported failed by osd.16
2023-01-15T20:31:34.320+0000 7fe4ad7bb700  1 mon.c@0(leader).osd e3962 prepare_failure osd.9 [v2:172.16.1.133:6800/39,v1:172.16.1.133:6801/39] from osd.7 is reporting failure:1
2023-01-15T20:31:34.320+0000 7fe4ad7bb700  0 log_channel(cluster) log [DBG] : osd.9 reported failed by osd.7
2023-01-15T20:31:34.320+0000 7fe4ad7bb700  1 mon.c@0(leader).osd e3962  we have enough reporters to mark osd.9 down
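The "enough reporters" decision follows Ceph's failure-reporting rule: with the defaults mon_osd_min_down_reporters = 2 and mon_osd_reporter_subtree_level = host, reports must come from at least two distinct hosts, which here happens only once osd.7 (node06) joins osd.14 and osd.16 (both node05). A rough sketch of that counting, not the actual OSDMonitor code:

```python
import re

# Rough sketch of the monitor's reporter counting (the real OSDMonitor logic
# also weights reporters and applies a grace period before marking down).
MIN_DOWN_REPORTERS = 2  # mon_osd_min_down_reporters default
HOST_OF = {"osd.14": "node05", "osd.16": "node05", "osd.7": "node06"}  # from `ceph osd tree`

def reporting_subtrees(mon_log_lines, target="osd.9"):
    """Distinct hosts whose OSDs reported `target` as failed."""
    pat = re.compile(rf"prepare_failure {re.escape(target)} .* from (osd\.\d+) is reporting failure:1")
    return {HOST_OF[m.group(1)] for line in mon_log_lines if (m := pat.search(line))}

lines = [
    "... prepare_failure osd.9 [...] from osd.14 is reporting failure:1",
    "... prepare_failure osd.9 [...] from osd.16 is reporting failure:1",
]
print(len(reporting_subtrees(lines)) >= MIN_DOWN_REPORTERS)  # False: both reporters on node05
lines.append("... prepare_failure osd.9 [...] from osd.7 is reporting failure:1")
print(len(reporting_subtrees(lines)) >= MIN_DOWN_REPORTERS)  # True: node05 + node06
```

This matches the mon log above: the "we have enough reporters" line appears only after osd.7's report arrives.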

osd.9 is on node07:

# ceph osd tree
ID   CLASS  WEIGHT     TYPE NAME              STATUS  REWEIGHT  PRI-AFF
 -1         134.21826  root default                                    
 -5          14.71190      host admin01-xncm                           
  1    hdd   14.71190          osd.1              up   1.00000  1.00000
 -3          14.71190      host admin02-xncm                           
  0    hdd   14.71190          osd.0              up   1.00000  1.00000
-16          34.93149      host node05-xncm                            
  8    ssd    6.98630          osd.8              up   1.00000  1.00000
 11    ssd    6.98630          osd.11             up   1.00000  1.00000
 14    ssd    6.98630          osd.14             up   1.00000  1.00000
 15    ssd    6.98630          osd.15             up   1.00000  1.00000
 16    ssd    6.98630          osd.16             up   1.00000  1.00000
-13          34.93149      host node06-xncm                            
  3    ssd    6.98630          osd.3              up   1.00000  1.00000
  5    ssd    6.98630          osd.5              up   1.00000  1.00000
  7    ssd    6.98630          osd.7              up   1.00000  1.00000
 10    ssd    6.98630          osd.10             up   1.00000  1.00000
 13    ssd    6.98630          osd.13             up   1.00000  1.00000
-10          34.93149      host node07-xncm                            
  2    ssd    6.98630          osd.2              up   1.00000  1.00000
  4    ssd    6.98630          osd.4              up   1.00000  1.00000
  6    ssd    6.98630          osd.6              up   1.00000  1.00000
  9    ssd    6.98630          osd.9            down   1.00000  1.00000
 12    ssd    6.98630          osd.12             up   1.00000  1.00000

If this were a network problem, the other OSDs on node07 should also fail to reply to heartbeats, but only osd.9 stops replying.
BlueStore reports no slow ops on osd.9.
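For reference, each heartbeat_check line carries enough information to quantify the stall: taking the first osd.0 line from the log above, the last reply was roughly 24.8 s old and about 0.7 s past the grace deadline when the message was emitted. A quick parsing sketch (not a Ceph tool):

```python
import re
from datetime import datetime

# One heartbeat_check line from the osd.9 log above.
line = ("2023-01-15T20:31:32.833+0000 7f11a6302700 -1 osd.9 3962 heartbeat_check: "
        "no reply from 172.16.1.147:6804 osd.0 "
        "since back 2023-01-15T20:31:08.051853+0000 "
        "front 2023-01-15T20:31:08.051869+0000 "
        "(oldest deadline 2023-01-15T20:31:32.151841+0000)")

def ts(s):
    """Parse a Ceph log timestamp like 2023-01-15T20:31:32.833+0000."""
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z")

now = ts(line.split()[0])
last_back = ts(re.search(r"since back (\S+)", line).group(1))
deadline = ts(re.search(r"oldest deadline (\S+)\)", line).group(1))

print(round((now - last_back).total_seconds(), 1))  # seconds since the last back reply
print(round((now - deadline).total_seconds(), 1))   # seconds past the oldest deadline
```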

ceph version: 15.2.7


Files

ceph-osd.9.log (954 KB) ceph-osd.9.log yite gu, 01/16/2023 08:43 AM
ceph-osd.12.log (958 KB) ceph-osd.12.log yite gu, 01/19/2023 09:04 AM
Actions #1

Updated by yite gu over 1 year ago

Actions #2

Updated by Radoslaw Zarzynski over 1 year ago

This is what struck me at first glance:

2023-01-15T20:32:41.183+0000 7f11a6302700  0 log_channel(cluster) log [WRN] : slow request osd_op(client.10141568.0:84038298 3.c 3:30095f50:::rbd_header.55ecaae37275fc:head [watch ping cookie 18446462598732840963 gen 55] snapc 0=[] ondisk+write+known_if_redirected e3962) initiated 2023-01-15T20:31:24.384026+0000 currently delayed
2023-01-15T20:32:41.183+0000 7f11a6302700  0 log_channel(cluster) log [WRN] : slow request osd_op(client.10141568.0:84038306 3.c 3:30095f50:::rbd_header.55ecaae37275fc:head [watch ping cookie 18446462598732840963 gen 55] snapc 0=[] ondisk+write+known_if_redirected e3962) initiated 2023-01-15T20:31:29.504055+0000 currently delayed
2023-01-15T20:32:41.183+0000 7f11a6302700  0 log_channel(cluster) log [WRN] : slow request osd_op(client.10141568.0:84038314 3.c 3:30095f50:::rbd_header.55ecaae37275fc:head [watch ping cookie 18446462598732840963 gen 55] snapc 0=[] ondisk+write+known_if_redirected e3962) initiated 2023-01-15T20:31:34.624055+0000 currently delayed
2023-01-15T20:32:41.183+0000 7f11a6302700 -1 osd.9 3962 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.10141568.0:84038219 3.c 3:32b718d5:::rbd_data.55ecaae37275fc.0000000000000a00:head [stat out=16b,set-alloc-hint object_size 4194304 write_size 4194304,write 1060864~4096 in=4096b] snapc 0=[] ondisk+write+known_if_redirected e3962)

So osd.9 is seeing slow ops. It is possible for an OSD to fail to respond to heartbeats in time due to starvation (CPU, memory, swapping, etc.). Could you please take a look at CPU / memory utilization of osd.9?

Sometimes starvation can be self-induced (e.g. a terribly costly PG deletion). To check for that, collecting logs with debug_osd=20 would be helpful.

To exclude the network completely, could you please also provide a log with debug_ms=5?
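Both debug levels can be raised at runtime with `ceph tell`, without restarting the daemon (revert afterwards, since debug_osd=20 is very verbose; the revert values below assume the stock defaults from the Ceph logging docs):

```shell
# Raise debug levels on osd.9 at runtime
ceph tell osd.9 config set debug_osd 20
ceph tell osd.9 config set debug_ms 5

# Revert to the defaults once the logs are captured
ceph tell osd.9 config set debug_osd 1/5
ceph tell osd.9 config set debug_ms 0/5
```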

Actions #3

Updated by Radoslaw Zarzynski over 1 year ago

  • Status changed from New to Need More Info
Actions #4

Updated by yite gu over 1 year ago

Radoslaw Zarzynski wrote:

This is what struck me at first glance:

[...]

So osd.9 is seeing slow ops. It is possible for an OSD to fail to respond to heartbeats in time due to starvation (CPU, memory, swapping, etc.). Could you please take a look at CPU / memory utilization of osd.9?

Sometimes starvation can be self-induced (e.g. a terribly costly PG deletion). To check for that, collecting logs with debug_osd=20 would be helpful.

To exclude the network completely, could you please also provide a log with debug_ms=5?

1. My cluster workload is very low:

2023-01-15T20:30:52.764+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120384: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 7.6 MiB/s wr, 247 op/s
2023-01-15T20:30:54.764+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120385: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 5.3 MiB/s wr, 168 op/s
2023-01-15T20:30:56.764+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120386: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 5.0 MiB/s wr, 153 op/s
2023-01-15T20:30:57.370+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:30:57.379+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:30:57.388+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:30:58.765+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120387: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 24 KiB/s rd, 6.1 MiB/s wr, 225 op/s
2023-01-15T20:31:00.765+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120388: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 24 KiB/s rd, 3.7 MiB/s wr, 112 op/s
2023-01-15T20:31:02.766+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120389: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 30 KiB/s rd, 5.4 MiB/s wr, 161 op/s
2023-01-15T20:31:04.766+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120390: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 30 KiB/s rd, 3.3 MiB/s wr, 134 op/s
2023-01-15T20:31:06.767+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120391: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 30 KiB/s rd, 3.0 MiB/s wr, 129 op/s
2023-01-15T20:31:08.767+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120392: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 30 KiB/s rd, 4.0 MiB/s wr, 171 op/s
2023-01-15T20:31:10.767+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120393: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 5.6 KiB/s rd, 2.9 MiB/s wr, 99 op/s
2023-01-15T20:31:12.768+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120394: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 5.6 KiB/s rd, 5.5 MiB/s wr, 157 op/s
2023-01-15T20:31:14.768+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120395: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 4.1 MiB/s wr, 121 op/s
2023-01-15T20:31:16.769+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120396: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 3.8 MiB/s wr, 112 op/s
2023-01-15T20:31:18.769+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120397: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 5.0 MiB/s wr, 154 op/s
2023-01-15T20:31:20.770+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120398: 65 pgs: 65 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 4.0 MiB/s wr, 112 op/s
2023-01-15T20:31:22.400+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:22.408+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:22.416+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:22.423+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:22.431+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:22.442+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:22.451+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:22.770+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120399: 65 pgs: 1 active+clean+laggy, 64 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 4.5 MiB/s wr, 132 op/s
2023-01-15T20:31:24.771+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120400: 65 pgs: 1 active+clean+laggy, 64 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 2.0 MiB/s wr, 78 op/s
2023-01-15T20:31:26.771+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120401: 65 pgs: 1 active+clean+laggy, 64 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 0 B/s rd, 1.7 MiB/s wr, 66 op/s
2023-01-15T20:31:27.209+0000 7f0e51d3d700  0 log_channel(audit) log [DBG] : from='client.14753867 -' entity='client.admin' cmd=[{"prefix": "osd pool stats", "pool_name": "device_health_metrics", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2023-01-15T20:31:27.467+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:27.476+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:28.772+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120402: 65 pgs: 2 active+clean+laggy, 63 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 36 KiB/s rd, 2.1 MiB/s wr, 129 op/s
2023-01-15T20:31:30.772+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120403: 65 pgs: 2 active+clean+laggy, 63 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 36 KiB/s rd, 967 KiB/s wr, 86 op/s
2023-01-15T20:31:32.773+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120404: 65 pgs: 2 active+clean+laggy, 63 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 60 KiB/s rd, 1.1 MiB/s wr, 122 op/s
2023-01-15T20:31:34.773+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120405: 65 pgs: 2 active+clean+laggy, 63 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 60 KiB/s rd, 879 KiB/s wr, 107 op/s
2023-01-15T20:31:36.001+0000 7f0e5ef09700  0 [rbd_support ERROR root] execute_task: [errno 39] RBD image has snapshots (error deleting image from trash)
2023-01-15T20:31:36.313+0000 7f0e51d3d700  0 log_channel(audit) log [DBG] : from='client.14707768 -' entity='client.admin' cmd=[{"prefix": "osd pool stats", "pool_name": "replicapool-hdd", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2023-01-15T20:31:36.735+0000 7f0e51d3d700  0 log_channel(audit) log [DBG] : from='client.14766747 -' entity='client.admin' cmd=[{"prefix": "osd pool stats", "pool_name": "replicapool-ssd", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2023-01-15T20:31:36.774+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120408: 65 pgs: 2 stale+active+clean, 2 active+clean+laggy, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 36 KiB/s rd, 574 KiB/s wr, 60 op/s
2023-01-15T20:31:38.774+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120409: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 271 KiB/s rd, 4.3 MiB/s wr, 480 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:40.774+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120410: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 235 KiB/s rd, 4.0 MiB/s wr, 425 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:42.775+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120411: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 248 KiB/s rd, 6.8 MiB/s wr, 526 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:44.775+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120412: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 203 KiB/s rd, 6.0 MiB/s wr, 449 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:46.776+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120413: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 184 KiB/s rd, 5.4 MiB/s wr, 408 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:48.777+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120414: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 166 KiB/s rd, 7.0 MiB/s wr, 407 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:50.777+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120415: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 9.0 KiB/s rd, 4.5 MiB/s wr, 126 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:52.777+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120416: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 9.0 KiB/s rd, 5.3 MiB/s wr, 154 op/s; 110013/3428473 objects degraded (3.209%)
2023-01-15T20:31:54.778+0000 7f0e56d41700  0 log_channel(cluster) log [DBG] : pgmap v120417: 65 pgs: 4 active+undersized+degraded, 61 active+clean; 4.8 TiB data, 13 TiB used, 121 TiB / 134 TiB avail; 682 B/s rd, 3.4 MiB/s wr, 88 op/s; 110013/3428475 objects degraded (3.209%)
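Summarizing the pgmap samples above supports the low-workload claim; a throwaway parsing sketch (only the first three samples inlined here):

```python
import re

# First few pgmap samples from the mgr log above (message bodies abbreviated).
pgmap_lines = [
    "... pgmap v120384: 65 pgs: 65 active+clean; ... 0 B/s rd, 7.6 MiB/s wr, 247 op/s",
    "... pgmap v120385: 65 pgs: 65 active+clean; ... 0 B/s rd, 5.3 MiB/s wr, 168 op/s",
    "... pgmap v120386: 65 pgs: 65 active+clean; ... 0 B/s rd, 5.0 MiB/s wr, 153 op/s",
]

# Extract the client op/s figure from each sample.
ops = [int(m.group(1)) for line in pgmap_lines
       if (m := re.search(r"(\d+) op/s", line))]
print(max(ops), round(sum(ops) / len(ops)))  # peak and mean client op/s
```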

2. osd.9 process info:

# ps aux | grep ceph-osd | grep " 9 " 
167        65145  0.8  0.4 3951888 2436744 ?     Ssl   2022 1884:51 ceph-osd --foreground --id 9 --fsid 9ef91db1-94d1-4678-b6c6-33ca95c4f969 --setuser ceph --setgroup ceph --crush-location=root=default host=node07-xncm --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --ms-learn-addr-from-peer=false

# top -H -p 65145

top - 10:41:19 up 154 days, 23:26,  1 user,  load average: 5.90, 5.34, 4.65
Threads:  69 total,   0 running,  69 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.2 us,  1.8 sy,  0.0 ni, 95.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 52781190+total, 12726225+free, 33176265+used, 68786976 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 18857004+avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                                                                                                                                                 
  65641 167       20   0 3951888 2.328g  32440 S  0.3  0.5  42:45.11 bstore_aio                                                                                                                                                                                              
  65798 167       20   0 3951888 2.328g  32440 S  0.3  0.5  46:41.55 bstore_kv_final                                                                                                                                                                                         
  65835 167       20   0 3951888 2.328g  32440 S  0.3  0.5  85:51.00 tp_osd_tp                                                                                                                                                                                               
  65145 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.09 ceph-osd                                                                                                                                                                                                
  65196 167       20   0 3951888 2.328g  32440 S  0.0  0.5   7:14.30 log                                                                                                                                                                                                     
  65198 167       20   0 3951888 2.328g  32440 S  0.0  0.5 312:33.41 msgr-worker-0                                                                                                                                                                                           
  65199 167       20   0 3951888 2.328g  32440 S  0.0  0.5 412:09.92 msgr-worker-1                                                                                                                                                                                           
  65201 167       20   0 3951888 2.328g  32440 S  0.0  0.5 230:26.06 msgr-worker-2                                                                                                                                                                                           
  65215 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:52.38 service                                                                                                                                                                                                 
  65216 167       20   0 3951888 2.328g  32440 S  0.0  0.5  13:10.83 admin_socket                                                                                                                                                                                            
  65509 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 signal_handler                                                                                                                                                                                          
  65634 167       20   0 3951888 2.328g  32440 S  0.0  0.5  22:58.29 OpHistorySvc                                                                                                                                                                                            
  65635 167       20   0 3951888 2.328g  32440 S  0.0  0.5   1:36.68 ceph-osd                                                                                                                                                                                                
  65636 167       20   0 3951888 2.328g  32440 S  0.0  0.5   3:17.38 safe_timer                                                                                                                                                                                              
  65637 167       20   0 3951888 2.328g  32440 S  0.0  0.5  15:17.37 safe_timer                                                                                                                                                                                              
  65638 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 safe_timer                                                                                                                                                                                              
  65639 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 safe_timer                                                                                                                                                                                              
  65640 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 fn_anonymous                                                                                                                                                                                            
  65642 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 bstore_discard                                                                                                                                                                                          
  65645 167       20   0 3951888 2.328g  32440 S  0.0  0.5   3:56.60 rocksdb:low0                                                                                                                                                                                            
  65646 167       20   0 3951888 2.328g  32440 S  0.0  0.5   3:34.71 rocksdb:low1                                                                                                                                                                                            
  65647 167       20   0 3951888 2.328g  32440 S  0.0  0.5   3:45.16 rocksdb:high0                                                                                                                                                                                           
  65776 167       20   0 3951888 2.328g  32440 S  0.0  0.5  18:48.99 bstore_aio                                                                                                                                                                                              
  65777 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 bstore_discard                                                                                                                                                                                          
  65778 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ceph-osd                                                                                                                                                                                                
  65794 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:04.74 rocksdb:dump_st                                                                                                                                                                                         
  65795 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.31 rocksdb:pst_st                                                                                                                                                                                          
  65796 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.22 cfin                                                                                                                                                                                                    
  65797 167       20   0 3951888 2.328g  32440 S  0.0  0.5 105:28.95 bstore_kv_sync                                                                                                                                                                                          
  65799 167       20   0 3951888 2.328g  32440 S  0.0  0.5  63:34.77 bstore_mempool                                                                                                                                                                                          
  65800 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:18.71 ms_dispatch                                                                                                                                                                                             
  65801 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_local                                                                                                                                                                                                
  65802 167       20   0 3951888 2.328g  32440 S  0.0  0.5   8:11.53 safe_timer                                                                                                                                                                                              
  65803 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:05.01 fn_anonymous                                                                                                                                                                                            
  65804 167       20   0 3951888 2.328g  32440 S  0.0  0.5  10:36.31 safe_timer                                                                                                                                                                                              
  65805 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:01.09 ms_dispatch                                                                                                                                                                                             
  65806 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_local                                                                                                                                                                                                
  65807 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_dispatch                                                                                                                                                                                             
  65808 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_local                                                                                                                                                                                                
  65809 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_dispatch                                                                                                                                                                                             
  65810 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_local                                                                                                                                                                                                
  65811 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_dispatch                                                                                                                                                                                             
  65812 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_local                                                                                                                                                                                                
  65813 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_dispatch                                                                                                                                                                                             
  65814 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_local                                                                                                                                                                                                
  65815 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_dispatch                                                                                                                                                                                             
  65816 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 ms_local                                                                                                                                                                                                
  65817 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 fn_anonymous                                                                                                                                                                                            
  65818 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:00.00 finisher                                                                                                                                                                                                
  65819 167       20   0 3951888 2.328g  32440 S  0.0  0.5   0:17.10 safe_timer         

# kubectl top pod -n rook-ceph 
NAME                                                     CPU(cores)   MEMORY(bytes)   
csi-rbdplugin-4d2xm                                      0m           129Mi           
csi-rbdplugin-4k7g4                                      0m           223Mi           
csi-rbdplugin-bh4lt                                      0m           88Mi            
csi-rbdplugin-bnwd5                                      1m           240Mi           
csi-rbdplugin-ftggt                                      0m           145Mi           
csi-rbdplugin-gth6k                                      0m           137Mi           
csi-rbdplugin-mpmcd                                      0m           135Mi           
csi-rbdplugin-provisioner-77c585c547-b9qg7               0m           266Mi           
csi-rbdplugin-provisioner-77c585c547-dm8jz               1m           329Mi           
csi-rbdplugin-t4wmc                                      0m           136Mi           
csi-rbdplugin-vgxgm                                      0m           131Mi           
rook-ceph-crashcollector-admin01.xncm-7df58448f5-kt94f   0m           7Mi             
rook-ceph-crashcollector-admin02.xncm-84858c5dc4-hwhjl   0m           8Mi             
rook-ceph-crashcollector-node03.xncm-59f8554b48-s4vsv    0m           6Mi             
rook-ceph-crashcollector-node05.xncm-6d7c596654-xjhpc    0m           6Mi             
rook-ceph-crashcollector-node06.xncm-5f5c7b9bd5-brvkp    0m           6Mi             
rook-ceph-crashcollector-node07.xncm-77cf7fb44-fntnf     0m           6Mi             
rook-ceph-mgr-a-554cf69bb6-nlzwv                         12m          510Mi           
rook-ceph-mon-c-b94b859b7-56jm8                          12m          1292Mi          
rook-ceph-mon-f-7f597fb4d4-f5zx6                         15m          1127Mi          
rook-ceph-mon-g-d4c9dd74-2s9hp                           13m          1115Mi          
rook-ceph-operator-7c648cff88-vrrrv                      15m          177Mi           
rook-ceph-osd-0-7db45fc87b-slmf8                         29m          3703Mi          
rook-ceph-osd-1-745459dfcd-2mnjp                         23m          3662Mi          
rook-ceph-osd-10-5ccf6dc695-ns5hd                        23m          2500Mi          
rook-ceph-osd-11-97fb9f66f-4srwg                         25m          2692Mi          
rook-ceph-osd-12-5b84c7dd57-tcqd4                        23m          2504Mi          
rook-ceph-osd-13-65859754bd-hjcc4                        33m          2511Mi          
rook-ceph-osd-14-96dc46844-497rc                         20m          2564Mi          
rook-ceph-osd-15-699547d9df-529jb                        20m          2443Mi          
rook-ceph-osd-16-7fb7cbdc65-t4c8p                        24m          2623Mi          
rook-ceph-osd-2-867699c84d-8qzxd                         26m          2616Mi          
rook-ceph-osd-3-b7ccfff49-5jh7j                          19m          2464Mi          
rook-ceph-osd-4-7f8df65546-jcj7g                         23m          2673Mi          
rook-ceph-osd-5-5cd4c6c667-7tsln                         22m          2651Mi          
rook-ceph-osd-6-855df8fccb-s9kxc                         20m          2490Mi          
rook-ceph-osd-7-5687ccc6c7-j7j2m                         19m          2578Mi          
rook-ceph-osd-8-67db76bbb4-fwst4                         21m          2557Mi          
rook-ceph-osd-9-fb5c48569-cpq7d                          19m          2418Mi          
rook-ceph-tools-5b59768dcd-ptncz                         0m           92Mi  

3. I have not executed any PG deletion.

4. I have set debug_ms=5; I will provide the log when the problem occurs again.

Actions #5

Updated by yite gu over 1 year ago

This problem happened again, but this time the affected OSD is osd.12. Other OSDs report heartbeat no-reply as below.
osd.15

2023-01-19T08:01:04.926+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:05.974+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:06.967+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:07.974+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:09.019+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:10.023+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:11.001+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:12.046+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:13.005+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:13.973+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:14.949+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:15.923+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:16.951+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:17.973+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:19.016+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:19.968+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:20.966+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:21.930+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:22.907+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:23.927+0000 7f6818865700 -1 osd.15 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 (oldest deadline 2023-01-19T08:01:04.573182+0000)
2023-01-19T08:01:24.315+0000 7f67fb82b700  1 osd.15 pg_epoch: 4052 pg[3.b( v 4051'33695733 (4046'33685731,4051'33695733] local-lis/les=3804/3805 n=27471 ec=469/469 lis/c=3804/3804 les/c/f=3805/3805/0 sis=4052 pruub=12.115981475s) [7,15] r=1 lpr=4052 pi=[3804,4052)/1 luod=0'0 lua=4016'33409183 crt=4051'33695733 lcod 4051'33695732 mlcod 0'0 active pruub 19959580.002023022s@ mbc={}] start_peering_interval up [7,15,2] -> [7,15], acting [7,15,2] -> [7,15], acting_primary 7 -> 7, up_primary 7 -> 7, role 1 -> 1, features acting 4540138292836696063 upacting 4540138292836696063

osd.5
2023-01-19T08:01:03.898+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:04.917+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:05.876+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:06.868+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:07.862+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:08.820+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:09.808+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:10.827+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:11.837+0000 7f75b7f34700 -1 osd.5 4051 heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back 2023-01-19T08:00:40.321346+0000 front 2023-01-19T08:00:40.321406+0000 (oldest deadline 2023-01-19T08:01:03.221281+0000)
2023-01-19T08:01:11.837+0000 7f75b7f34700  0 log_channel(cluster) log [WRN] : slow request osd_op(client.4089151.0:325453613 3.1c 3:3bf33282:::rbd_data.55ecaa5d9f52be.0000000000000223:head [stat out=16b,set-alloc-hint object_size 4194304 write_size 4194304,write 3317760~28672 in=28672b] snapc 0=[] ondisk+write+known_if_redirected e4051) initiated 2023-01-19T08:00:41.671469+0000 currently waiting for sub ops
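The `heartbeat_check` lines above can be parsed mechanically to see how long the peer had been silent and how far past the deadline each report fired; a minimal sketch over one line from the osd.15 log (the field layout is taken directly from the log text above and assumed stable):

```python
# Parse one heartbeat_check log line and compute the silence window
# and how far past the oldest deadline the check fired.
import re
from datetime import datetime

LINE = ("2023-01-19T08:01:04.926+0000 7f6818865700 -1 osd.15 4051 "
        "heartbeat_check: no reply from 172.16.1.129:6804 osd.12 since back "
        "2023-01-19T08:00:39.873357+0000 front 2023-01-19T08:00:39.873373+0000 "
        "(oldest deadline 2023-01-19T08:01:04.573182+0000)")

def ts(s):
    # Timestamps in the log look like 2023-01-19T08:01:04.926+0000
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z")

m = re.search(r"no reply from (\S+) (osd\.\d+) since back (\S+) "
              r"front (\S+) \(oldest deadline (\S+)\)", LINE)
addr, peer, back, front, deadline = m.groups()
now = ts(LINE.split()[0])  # timestamp of the log line itself

silent_for = (now - ts(back)).total_seconds()         # time since last reply
past_deadline = (now - ts(deadline)).total_seconds()  # overshoot past deadline
print(f"{peer}: silent {silent_for:.2f}s, {past_deadline:.2f}s past deadline")
```

For this line the peer had been silent for about 25 s, well past the default 20 s grace window (`osd_heartbeat_grace`).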

Actions #6

Updated by yite gu over 1 year ago

osd.12 log file with debug_ms=5

Actions #7

Updated by yite gu over 1 year ago

It is recommended to adjust the upload file size limit to 10M :)

Actions #8

Updated by Radoslaw Zarzynski about 1 year ago

Thanks for the log!

I think it's not just about heartbeats but rather a general slowness. Client IO is affected as well:

2023-01-19T08:01:15.606+0000 7efcf61af700  0 log_channel(cluster) log [WRN] : slow request osd_op(client.3509703.0:356457799 3.18 3:1b30d70e:::rbd_data.55ecaa3712adf7.0000000000003a65:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2531328~4096 in=4096b] snapc 0=[] ondisk+write+known_if_redirected e4051) initiated 2023-01-19T08:00:41.199305+0000 currently waiting for sub ops
2023-01-19T08:01:15.606+0000 7efcf61af700 -1 osd.12 4051 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.3509703.0:356457799 3.18 3:1b30d70e:::rbd_data.55ecaa3712adf7.0000000000003a65:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2531328~4096 in=4096b] snapc 0=[] ondisk+write+known_if_redirected e4051)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.147:6804 osd.0 since back 2023-01-19T08:00:40.386741+0000 front 2023-01-19T08:00:40.386655+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.46:6804 osd.1 since back 2023-01-19T08:00:40.386663+0000 front 2023-01-19T08:00:40.386671+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.173:6804 osd.5 since back 2023-01-19T08:00:40.386755+0000 front 2023-01-19T08:00:40.386785+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.175:6804 osd.10 since back 2023-01-19T08:00:40.386749+0000 front 2023-01-19T08:00:40.386773+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.178:6804 osd.11 since back 2023-01-19T08:00:40.386771+0000 front 2023-01-19T08:00:40.386874+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.176:6804 osd.13 since back 2023-01-19T08:00:40.386804+0000 front 2023-01-19T08:00:40.386814+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.183:6804 osd.14 since back 2023-01-19T08:00:40.386830+0000 front 2023-01-19T08:00:40.386823+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.189:6804 osd.15 since back 2023-01-19T08:00:40.386834+0000 front 2023-01-19T08:00:40.386856+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)
2023-01-19T08:01:16.561+0000 7efcf61af700 -1 osd.12 4051 heartbeat_check: no reply from 172.16.1.196:6804 osd.16 since back 2023-01-19T08:00:40.386842+0000 front 2023-01-19T08:00:40.386850+0000 (oldest deadline 2023-01-19T08:01:06.286774+0000)

Could you please provide output from `dump_historic_slow_ops`?
And write a bit about the cluster / HW? Number of OSDs per node, CPU etc.
Also bumping debug_osd=20 could help to see what those slow ops were doing.
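The requested dump can be pulled over the OSD admin socket (e.g. `ceph daemon osd.12 dump_historic_slow_ops`) and summarized; a sketch against an assumed output shape — the sample payload below is fabricated for illustration, and the `ops`/`duration`/`description` field names are assumptions based on similar dump commands, not a verified schema:

```python
import json

# Fabricated sample in the rough shape of `dump_historic_slow_ops` output;
# field names here are assumptions, not a schema guarantee.
sample = json.dumps({
    "ops": [
        {"description": "osd_op(client.1.0:1 3.18 ...)", "duration": 34.4},
        {"description": "osd_op(client.1.0:2 3.1c ...)", "duration": 12.1},
    ]
})

ops = json.loads(sample)["ops"]
slowest = max(ops, key=lambda o: o["duration"])
print(f"{len(ops)} slow ops, worst {slowest['duration']:.1f}s: "
      f"{slowest['description']}")
```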

Actions #9

Updated by yite gu about 1 year ago

Radoslaw Zarzynski wrote:

Thanks for the log!

I think it's not just about heartbeats but rather a general slowness. Client IO is affected as well:

[...]

Could you please provide output from `dump_historic_slow_ops`?
And write a bit about the cluster / HW? Number of OSDs per node, CPU etc.
Also bumping debug_osd=20 could help to see what those slow ops were doing.

To return to normal, I kicked node07-xncm out of the cluster; no heartbeat no-reply problems have been reported since.
1. One slow op is due to waiting for a sub op.
2. The other slow op delay is due to the PG being laggy: the op is queued to waiting_for_readable. We should include the queue name of a delayed op in the log.

bool PrimaryLogPG::check_laggy(OpRequestRef& op)
{
  if (!HAVE_FEATURE(recovery_state.get_min_upacting_features(),
                    SERVER_OCTOPUS)) {
    dout(20) << __func__ << " not all upacting has SERVER_OCTOPUS" << dendl;
    return true;
  }
  if (state_test(PG_STATE_WAIT)) {
    dout(10) << __func__ << " PG is WAIT state" << dendl;
  } else if (!state_test(PG_STATE_LAGGY)) {
    auto mnow = osd->get_mnow();
    auto ru = recovery_state.get_readable_until();
    if (mnow <= ru) {
      // not laggy
      return true;
    }
    dout(10) << __func__
             << " mnow " << mnow
             << " > readable_until " << ru << dendl;

    if (!is_primary()) {
      osd->reply_op_error(op, -EAGAIN);
      return false;
    }

    // go to laggy state
    state_set(PG_STATE_LAGGY);
    publish_stats_to_osd();
  }
  dout(10) << __func__ << " not readable" << dendl;
  waiting_for_readable.push_back(op);
  op->mark_delayed("waiting for readable");
  return false;
}
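For readability, the decision flow above can be restated as a small Python sketch (the names mirror the C++; this is an illustration of the branch structure, not Ceph code):

```python
def check_laggy(state, mnow, readable_until, is_primary):
    """Mirror of the decision in PrimaryLogPG::check_laggy (sketch).

    Returns True when the op may proceed, False when it is queued on
    waiting_for_readable (or bounced with -EAGAIN on a replica).
    """
    if "WAIT" in state:
        pass                      # WAIT state: fall through and queue the op
    elif "LAGGY" not in state:
        if mnow <= readable_until:
            return True           # lease still valid: not laggy, proceed
        if not is_primary:
            return False          # replica replies -EAGAIN instead of queueing
        state.add("LAGGY")        # primary enters the laggy state
    return False                  # op queued as "waiting for readable"
```

Once the lease (`readable_until`) expires because heartbeats stop flowing, every incoming op lands in `waiting_for_readable`, which is why the laggy PG shows up as slow requests "currently waiting for sub ops" elsewhere.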

3. 5 OSDs per node.
4. 2 vCPUs and 4 GB of memory per OSD.
5. The storage devices are NVMe SSDs.

Actions #10

Updated by yite gu about 1 year ago

osd.12 received no more packets from other OSDs after osd.11 sent a message at 2023-01-19T08:00:42:

2023-01-19T08:00:42.408+0000 7efcfb229700  1 -- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] <== osd.11 v2:172.16.1.178:6802/39 197546 ==== osd_repop(client.3509703.0:356457833 3.17 e4051/4033) v3 ==== 1151+0+41715 (crc 0 0 0) 0x556ec63e2300 con 0x556e4c30e800
2023-01-19T08:00:42.408+0000 7efcdb179700  1 -- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] --> [v2:172.16.1.178:6802/39,v1:172.16.1.178:6803/39] -- osd_repop_reply(client.3509703.0:356457833 3.17 e4051/4033 ondisk, result = 0) v2 -- 0x556ea4050c40 con 0x556e4c30e800
2023-01-19T08:00:42.408+0000 7efcdb179700  5 --2- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] >> [v2:172.16.1.178:6802/39,v1:172.16.1.178:6803/39] conn(0x556e4c30e800 0x556e4dcf1400 crc :-1 s=READY pgs=7673 cs=0 l=0 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x556ea4050c40 type=113 osd_repop_reply(client.3509703.0:356457833 3.17 e4051/4033 ondisk, result = 0) v2
2023-01-19T08:00:42.408+0000 7efcfb229700  5 --2- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] >> [v2:172.16.1.178:6802/39,v1:172.16.1.178:6803/39] conn(0x556e4c30e800 0x556e4dcf1400 crc :-1 s=READY pgs=7673 cs=0 l=0 rev1=1 rx=0 tx=0).write_message sending message m=0x556ea4050c40 seq=197536 osd_repop_reply(client.3509703.0:356457833 3.17 e4051/4033 ondisk, result = 0) v2
2023-01-19T08:00:44.255+0000 7efcfb229700  5 --1- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> v1:169.254.169.29:0/892635534 conn(0x556e85317400 0x556e61df6800 :6801 s=READ_FOOTER_AND_DISPATCH pgs=27559 cs=1 l=1). rx client.10141568 seq 70672 0x556e52c3e780 osd_op(client.10141568.0:87541286 3.18 3.291a55b8 (undecoded) ondisk+write+known_if_redirected e4051) v8
2023-01-19T08:00:44.255+0000 7efcfb229700  1 -- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] <== client.10141568 v1:169.254.169.29:0/892635534 70672 ==== osd_op(client.10141568.0:87541286 3.18 3.291a55b8 (undecoded) ondisk+write+known_if_redirected e4051) v8 ==== 226+0+0 (unknown 1831960680 0 0) 0x556e52c3e780 con 0x556e85317400
2023-01-19T08:00:44.255+0000 7efcde980700  1 -- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] --> v1:169.254.169.29:0/892635534 -- osd_op_reply(87541286 rbd_header.55ecaa326b668d [watch ping cookie 18446462598732841095] v0'0 uv24156599 ondisk = 0) v8 -- 0x556e4c9c18c0 con 0x556e85317400
2023-01-19T08:00:44.987+0000 7efce8193700  1 -- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] --> [v2:172.16.1.158:6800/593,v1:172.16.1.158:6801/593] -- mgrreport(unknown.12 +0-0 packed 1222 daemon_metrics=2) v9 -- 0x556e5c0fb200 con 0x556e66606800
2023-01-19T08:00:44.987+0000 7efce8193700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.16.1.158:6800/593,v1:172.16.1.158:6801/593] conn(0x556e66606800 0x556e65ccf400 secure :-1 s=READY pgs=190341 cs=0 l=1 rev1=1 rx=0x556e5a77acf0 tx=0x556e7c2cc1e0).send_message enqueueing message m=0x556e5c0fb200 type=1794 mgrreport(osd.12 +0-0 packed 1222 daemon_metrics=2) v9
2023-01-19T08:00:44.987+0000 7efce8193700  1 -- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] --> [v2:172.16.1.158:6800/593,v1:172.16.1.158:6801/593] -- pg_stats(2 pgs tid 0 v 0) v2 -- 0x556eba93e600 con 0x556e66606800
2023-01-19T08:00:44.987+0000 7efce8193700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.16.1.158:6800/593,v1:172.16.1.158:6801/593] conn(0x556e66606800 0x556e65ccf400 secure :-1 s=READY pgs=190341 cs=0 l=1 rev1=1 rx=0x556e5a77acf0 tx=0x556e7c2cc1e0).send_message enqueueing message m=0x556eba93e600 type=87 pg_stats(2 pgs tid 0 v 0) v2
2023-01-19T08:00:44.987+0000 7efcfaa28700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.16.1.158:6800/593,v1:172.16.1.158:6801/593] conn(0x556e66606800 0x556e65ccf400 secure :-1 s=READY pgs=190341 cs=0 l=1 rev1=1 rx=0x556e5a77acf0 tx=0x556e7c2cc1e0).write_message sending message m=0x556e5c0fb200 seq=130 mgrreport(osd.12 +0-0 packed 1222 daemon_metrics=2) v9
2023-01-19T08:00:44.987+0000 7efcfaa28700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.16.1.158:6800/593,v1:172.16.1.158:6801/593] conn(0x556e66606800 0x556e65ccf400 secure :-1 s=READY pgs=190341 cs=0 l=1 rev1=1 rx=0x556e5a77acf0 tx=0x556e7c2cc1e0).write_message sending message m=0x556eba93e600 seq=131 pg_stats(2 pgs tid 0 v 0) v2
2023-01-19T08:00:45.756+0000 7efcd8974700  1 -- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] --> [v2:172.16.1.175:6802/39,v1:172.16.1.175:6803/39] -- pg_lease(3.14 pg_lease(ru 13408272.327709109s ub 13408280.327885832s int 16s) e4051/4051) v1 -- 0x556e84af7080 con 0x556eec603000
2023-01-19T08:00:45.756+0000 7efcd8974700  5 --2- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] >> [v2:172.16.1.175:6802/39,v1:172.16.1.175:6803/39] conn(0x556eec603000 0x556ed6dfa800 crc :-1 s=READY pgs=7179 cs=0 l=0 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x556e84af7080 type=133 pg_lease(3.14 pg_lease(ru 13408272.327709109s ub 13408280.327885832s int 16s) e4051/4051) v1
2023-01-19T08:00:45.756+0000 7efcd8974700  1 -- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] --> [v2:172.16.1.189:6802/39,v1:172.16.1.189:6803/39] -- pg_lease(3.14 pg_lease(ru 13408272.327709109s ub 13408280.327885832s int 16s) e4051/4051) v1 -- 0x556ecec10780 con 0x556e53579c00
2023-01-19T08:00:45.756+0000 7efcd8974700  5 --2- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] >> [v2:172.16.1.189:6802/39,v1:172.16.1.189:6803/39] conn(0x556e53579c00 0x556e65dc3200 crc :-1 s=READY pgs=8238 cs=0 l=0 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x556ecec10780 type=133 pg_lease(3.14 pg_lease(ru 13408272.327709109s ub 13408280.327885832s int 16s) e4051/4051) v1
2023-01-19T08:00:45.756+0000 7efcfa227700  5 --2- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] >> [v2:172.16.1.175:6802/39,v1:172.16.1.175:6803/39] conn(0x556eec603000 0x556ed6dfa800 crc :-1 s=READY pgs=7179 cs=0 l=0 rev1=1 rx=0 tx=0).write_message sending message m=0x556e84af7080 seq=302815 pg_lease(3.14 pg_lease(ru 13408272.327709109s ub 13408280.327885832s int 16s) e4051/4051) v1
2023-01-19T08:00:45.756+0000 7efcfaa28700  5 --2- [v2:172.16.1.129:6808/3000039,v1:172.16.1.129:6809/3000039] >> [v2:172.16.1.189:6802/39,v1:172.16.1.189:6803/39] conn(0x556e53579c00 0x556e65dc3200 crc :-1 s=READY pgs=8238 cs=0 l=0 rev1=1 rx=0 tx=0).write_message sending message m=0x556ecec10780 seq=214943 pg_lease(3.14 pg_lease(ru 13408272.327709109s ub 13408280.327885832s int 16s) e4051/4051) v1
2023-01-19T08:00:46.285+0000 7efcd6970700  1 -- 172.16.1.129:0/39 --> [v2:172.16.1.147:6806/822068,v1:172.16.1.147:6807/822068] -- osd_ping(ping e4051 up_from 4030 ping_stamp 2023-01-19T08:00:46.286774+0000/13408264.857620493s send_stamp 13408264.857620493s delta_ub 10447727.501328001s) v5 -- 0x556e43969800 con 0x556e40da2000
2023-01-19T08:00:46.285+0000 7efcd6970700  5 --2- 172.16.1.129:0/39 >> [v2:172.16.1.147:6806/822068,v1:172.16.1.147:6807/822068] conn(0x556e40da2000 0x556e76aba300 crc :-1 s=READY pgs=31 cs=0 l=1 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x556e43969800 type=70 osd_ping(ping e4051 up_from 4030 ping_stamp 2023-01-19T08:00:46.286774+0000/13408264.857620493s send_stamp 13408264.857620493s delta_ub 10447727.501328001s) v5

Not until 2023-01-19T08:05:08.321 did osd.12 receive packets from other daemons again:

2023-01-19T08:05:08.321+0000 7efcecb6f700  1 -- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] <== mon.0 v2:172.18.24.168:3300/0 1 ==== mon_map magic: 0 v1 ==== 383+0+0 (secure 0 0 0) 0x556e50724000 con 0x556e50d92c00
2023-01-19T08:05:08.321+0000 7efcfb229700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.18.24.168:3300/0,v1:172.18.24.168:6789/0] conn(0x556e50d92c00 0x556e7ecaf200 secure :-1 s=READY pgs=2522795 cs=0 l=1 rev1=1 rx=0x556e4be53170 tx=0x556e465d08a0).write_message sending message m=0x556e57535500 seq=4 auth(proto 2 2 bytes epoch 0) v1
2023-01-19T08:05:08.321+0000 7efcecb6f700  1 -- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] <== mon.0 v2:172.18.24.168:3300/0 2 ==== config(7 keys) v1 ==== 241+0+0 (secure 0 0 0) 0x556e43968a80 con 0x556e50d92c00
2023-01-19T08:05:08.321+0000 7efcfb229700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.18.24.168:3300/0,v1:172.18.24.168:6789/0] conn(0x556e50d92c00 0x556e7ecaf200 secure :-1 s=READY pgs=2522795 cs=0 l=1 rev1=1 rx=0x556e4be53170 tx=0x556e465d08a0).write_message sending message m=0x556e5732cb60 seq=5 osd_failure(failed timeout osd.0 [v2:172.16.1.147:6800/822068,v1:172.16.1.147:6801/822068] for 267sec e4053 v4053) v4
2023-01-19T08:05:08.321+0000 7efcfb229700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.18.24.168:3300/0,v1:172.18.24.168:6789/0] conn(0x556e50d92c00 0x556e7ecaf200 secure :-1 s=READY pgs=2522795 cs=0 l=1 rev1=1 rx=0x556e4be53170 tx=0x556e465d08a0).write_message sending message m=0x556e5732dd40 seq=6 osd_failure(failed timeout osd.1 [v2:172.16.1.46:6800/663994,v1:172.16.1.46:6801/663994] for 267sec e4053 v4053) v4
2023-01-19T08:05:08.321+0000 7efcfb229700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.18.24.168:3300/0,v1:172.18.24.168:6789/0] conn(0x556e50d92c00 0x556e7ecaf200 secure :-1 s=READY pgs=2522795 cs=0 l=1 rev1=1 rx=0x556e4be53170 tx=0x556e465d08a0).write_message sending message m=0x556ec382bba0 seq=7 osd_failure(failed timeout osd.5 [v2:172.16.1.173:6800/39,v1:172.16.1.173:6801/39] for 267sec e4053 v4053) v4
2023-01-19T08:05:08.321+0000 7efcfb229700  5 --2- [v2:172.16.1.129:6800/39,v1:172.16.1.129:6801/39] >> [v2:172.18.24.168:3300/0,v1:172.18.24.168:6789/0] conn(0x556e50d92c00 0x556e7ecaf200 secure :-1 s=READY pgs=2522795 cs=0 l=1 rev1=1 rx=0x556e4be53170 tx=0x556e465d08a0).write_message sending message m=0x556e5f650000 seq=8 osd_failure(failed timeout osd.10 [v2:172.16.1.175:6800/39,v1:172.16.1.175:6801/39] for 267sec e4053 v4053) v4

I think this is very likely a TCP-layer problem.
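The length of the silence window lines up with the `for 267sec` in the osd_failure messages above; a quick check of the gap between the last inbound message osd.12 saw and the first one after recovery (both timestamps taken from the log excerpts):

```python
# Gap between the last message osd.12 received (2023-01-19T08:00:42.408)
# and the first message after recovery (2023-01-19T08:05:08.321).
from datetime import datetime

last_rx = datetime.fromisoformat("2023-01-19T08:00:42.408+00:00")
first_rx_again = datetime.fromisoformat("2023-01-19T08:05:08.321+00:00")
gap = (first_rx_again - last_rx).total_seconds()
print(f"osd.12 received nothing for {gap:.3f}s")
# ~266 s, consistent with the "for 267sec" in the osd_failure reports
# (the monitor measures its own, slightly different window)
```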

Actions #11

Updated by yite gu about 1 year ago

Hi, Radoslaw,
my OSD pod network uses Cilium, so I used `cilium monitor -t drop` to capture dropped packets on the osd.16 pod; the result is as below:

Thu Feb 2 13:40:13 UTC 2023
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 40691->6: 00:00:00:00:40:11 -> 08:00:45:00:01:2f UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 40691->6: 08:c0:eb:8f:ce:fa -> 08:c0:eb:8f:e2:92 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 25478->6: 08:c0:00:00:40:11 -> 08:00:45:00:00:66 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 25478->6: 08:c0:00:00:40:11 -> 08:00:45:00:00:66 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 25478->6: ad:de:00:00:40:11 -> 08:00:45:00:00:66 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 25478->6: 08:c0:eb:8f:ce:fa -> 0c:42:a1:1f:fa:2e UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 40691->6: 00:00:00:00:40:11 -> 08:00:45:00:01:2f UnknownEthernetType
...
...
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 19325->6: 08:c0:eb:8f:ce:fa -> 08:c0:eb:8f:e2:92 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 19325->6: 08:c0:eb:8f:ce:fa -> 08:c0:eb:8f:e2:92 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 19325->6: 08:c0:eb:8f:ce:fa -> 08:c0:eb:8f:e2:92 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 23763->6: 08:c0:eb:8f:ce:fa -> 08:c0:eb:5f:3f:8e UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 23763->6: 08:c0:eb:8f:ce:fa -> 08:c0:eb:5f:3f:8e UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 23763->6: 5f:66:72:6f:6f:74 -> 3d:30:20:63:61:70 UnknownEthernetType
Thu Feb 2 13:45:03 UTC 2023
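Bursts like the one above can be summarized by identity pair to see which endpoints are losing the most traffic; a minimal sketch over a few of the captured lines (sample lines reproduced from the capture above, `cilium monitor -t drop` line format assumed stable):

```python
import re
from collections import Counter

# A few drop lines as emitted by `cilium monitor -t drop` (sample).
drops = """\
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 40691->6: 00:00:00:00:40:11 -> 08:00:45:00:01:2f UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 25478->6: 08:c0:00:00:40:11 -> 08:00:45:00:00:66 UnknownEthernetType
xx drop (FIB lookup failed) flow 0x0 to endpoint 0, identity 25478->6: ad:de:00:00:40:11 -> 08:00:45:00:00:66 UnknownEthernetType
"""

# Count drops per (source identity, destination identity) pair.
counts = Counter(
    re.search(r"identity (\d+)->(\d+)", line).groups()
    for line in drops.splitlines()
)
print(counts.most_common())
```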

The osd.12 heartbeat timeouts happen when its heartbeat peers drop packets. The network I/O path is as below:
osd pod -> tcp protocol stack -> cilium -> xdp -> host ethernet -> wire -> host ethernet -> xdp -> cilium -> tcp protocol stack -> osd pod

This is a Cilium problem.

Actions #12

Updated by Radoslaw Zarzynski about 1 year ago

  • Status changed from Need More Info to Closed

Closing per comment #11.
