Support #17892

open

frequent blocked request impacting Ceph client IO

Added by thomas danan over 7 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

Hi,

We have a production cluster that is suffering from intermittent blocked requests (25 requests are blocked > 32 sec). The blocked request occurrences are frequent and affect all OSDs.
In the OSD daemon logs, I can see related messages:

2016-11-11 18:25:29.917518 7fd28b989700 0 log_channel(cluster) log [WRN] : slow request 30.429723 seconds old, received at 2016-11-11 18:24:59.487570: osd_op(client.2406272.1:336025615 rbd_data.66e952ae8944a.0000000000350167 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~524288] 0.8d3c9da5 snapc 248=[248,216] ondisk+write e201514) currently waiting for subops from 210,499,821

So I guess the issue is related to the replication process when writing new data to the cluster. Again, it is never the same secondary OSDs that show up in the OSD daemon logs.
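If it helps, the next time it happens we can dump the blocked ops on one of the OSDs listed as "waiting for subops", via the admin socket on the node hosting it. A rough example (osd.210 is simply taken from the log line above):

ceph health detail                        # lists which OSDs currently have blocked requests
ceph daemon osd.210 dump_ops_in_flight    # ops currently in flight on that OSD
ceph daemon osd.210 dump_historic_ops     # recent slow ops with per-event timestamps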
As a result of these blocked requests, we are experiencing very high IO write latency on the Ceph client side (it can reach up to 1 hour!).
We have checked network health as well as disk health, but we were not able to find any issue.
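Beyond those checks, we could also look at the per-OSD latencies reported by the cluster itself, in case one disk or OSD is consistently slower than the others:

ceph osd perf    # fs_commit_latency / fs_apply_latency (ms) for every OSD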

I wanted to know if this issue has already been observed, or if you have ideas on how to investigate or work around (WA) it.
Many thanks...

Thomas

The cluster is composed of 37 DNs, 851 OSDs, and 5 MONs.
The Ceph clients access the cluster via RBD.
The cluster is running Hammer version 0.94.5.

cluster 1a26e029-3734-4b0e-b86e-ca2778d0c990
health HEALTH_WARN
25 requests are blocked > 32 sec
1 near full osd(s)
noout flag(s) set
monmap e3: 5 mons at {NVMBD1CGK190D00=10.137.81.13:6789/0,nvmbd1cgy050d00=10.137.78.226:6789/0,nvmbd1cgy070d00=10.137.78.232:6789/0,nvmbd1cgy090d00=10.137.78.228:6789/0,nvmbd1cgy130d00=10.137.78.218:6789/0}
election epoch 664, quorum 0,1,2,3,4 nvmbd1cgy130d00,nvmbd1cgy050d00,nvmbd1cgy090d00,nvmbd1cgy070d00,NVMBD1CGK190D00
osdmap e205632: 851 osds: 850 up, 850 in
flags noout
pgmap v25919096: 10240 pgs, 1 pools, 197 TB data, 50664 kobjects
597 TB used, 233 TB / 831 TB avail
10208 active+clean
32 active+clean+scrubbing+deep
client io 97822 kB/s rd, 205 MB/s wr, 2402 op/s
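Note also the "1 near full osd(s)" warning above. I am not sure it is related to the blocked requests, but we can list per-OSD utilisation to identify the near-full OSD (assuming this Hammer build already has the command):

ceph osd df    # per-OSD size, raw use and %USE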

Files

ceph-osd.789.log.gz (124 KB) thomas danan, 11/14/2016 12:30 PM
