Bug #12865 (closed): OSDs crashing and not resolving blocked requests after network split

Added by Christian Theune over 8 years ago. Updated over 8 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Source: other
Regression: No
Severity: 2 - major

Description

I saw a couple of instances of this but don't have much data. :(

We reconfigured our network in recent weeks (6 OSD hosts in 2 racks of 3 hosts each, with 8 OSDs on each host) and introduced a new interconnect. Whenever we made a short but substantial change that caused many OSDs to lose their connectivity for a while (we had instances lasting a few seconds and a few minutes), we ended up with a) a few crashed OSDs and b) a few OSDs that had blocked requests and never continued. We had to restart those OSDs to get the requests unblocked (which apparently worked fine, as we did not receive errors from the clients of those requests).
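
For reference, a minimal sketch of the kind of checks and restarts described above. The OSD id (osd.5), the grep pattern, and the sysvinit-style restart command are placeholders for illustration; the exact commands depend on the distribution and init system in use.

# Cluster-wide view of blocked/slow requests and which OSDs are involved.
ceph health detail | grep -i -e blocked -e slow

# Inspect the operations currently stuck on a suspect OSD
# (run on the host carrying that OSD, via its admin socket).
ceph daemon osd.5 dump_ops_in_flight

# Restart the affected OSD to clear the stuck requests
# (sysvinit style, as was common on 0.94-era clusters).
service ceph restart osd.5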

Here's a little output that I could gather in the aftermath:

An affected OSD with a somewhat longer outage (heartbeats started to fail around 20:32):

2015-08-27 20:32:28.757653 7fc4e4c09700  1 mon.cartman06@0(electing).elector(2591) init, last seen epoch 2591
2015-08-27 20:32:32.875632 7fc4e4c09700  0 log_channel(cluster) log [INF] : mon.cartman06@0 won leader election with quorum 0,1,2
2015-08-27 20:32:32.891264 7fc4e4c09700  0 log_channel(cluster) log [INF] : HEALTH_WARN; noout flag(s) set
2015-08-27 20:32:32.915914 7fc4e7cc4700  0 log_channel(cluster) log [INF] : monmap e35: 3 mons at {cartman06=172.22.4.42:6789/0,cartman07=172.22.4.43:6789/0,cartman08=172.22.4.44:6789/0}
2015-08-27 20:32:32.916067 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657669: 14755 pgs: 14755 active+clean; 10364 GB data, 22121 GB used, 26184 GB / 48802 GB avail; 10020 kB/s rd, 23012 kB/s wr, 2355 op/s
2015-08-27 20:32:32.916148 7fc4e7cc4700  0 log_channel(cluster) log [INF] : mdsmap e1: 0/0/1 up
2015-08-27 20:32:32.916623 7fc4e7cc4700  0 log_channel(cluster) log [INF] : osdmap e559882: 48 osds: 48 up, 48 in
2015-08-27 20:32:32.916892 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559882 prepare_failure osd.1 172.22.4.43:6807/31094 from osd.40 172.22.4.54:6803/991 is reporting failure:1
2015-08-27 20:32:32.916911 7fc4e7cc4700  0 log_channel(cluster) log [DBG] : osd.1 172.22.4.43:6807/31094 reported failed by osd.40 172.22.4.54:6803/991
2015-08-27 20:32:32.916969 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559882 prepare_failure osd.15 172.22.4.43:6804/19153 from osd.40 172.22.4.54:6803/991 is reporting failure:1
2015-08-27 20:32:32.916984 7fc4e7cc4700  0 log_channel(cluster) log [DBG] : osd.15 172.22.4.43:6804/19153 reported failed by osd.40 172.22.4.54:6803/991
2015-08-27 20:32:32.917021 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559882 prepare_failure osd.16 172.22.4.43:6803/27129 from osd.40 172.22.4.54:6803/991 is reporting failure:1
2015-08-27 20:32:32.917040 7fc4e7cc4700  0 log_channel(cluster) log [DBG] : osd.16 172.22.4.43:6803/27129 reported failed by osd.40 172.22.4.54:6803/991
2015-08-27 20:32:32.917077 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559882 prepare_failure osd.17 172.22.4.43:6805/8366 from osd.40 172.22.4.54:6803/991 is reporting failure:1
2015-08-27 20:32:32.917091 7fc4e7cc4700  0 log_channel(cluster) log [DBG] : osd.17 172.22.4.43:6805/8366 reported failed by osd.40 172.22.4.54:6803/991
2015-08-27 20:32:32.917125 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559882 prepare_failure osd.18 172.22.4.43:6811/27679 from osd.40 172.22.4.54:6803/991 is reporting failure:1
2015-08-27 20:32:32.917139 7fc4e7cc4700  0 log_channel(cluster) log [DBG] : osd.18 172.22.4.43:6811/27679 reported failed by osd.40 172.22.4.54:6803/991
2015-08-27 20:32:32.917176 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559882 prepare_failure osd.19 172.22.4.43:6801/15377 from osd.40 172.22.4.54:6803/991 is reporting failure:1
2015-08-27 20:32:32.917202 7fc4e7cc4700  0 log_channel(cluster) log [DBG] : osd.19 172.22.4.43:6801/15377 reported failed by osd.40 172.22.4.54:6803/991
2015-08-27 20:32:32.917245 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559882 prepare_failure osd.21 172.22.4.43:6814/15585 from osd.40 172.22.4.54:6803/991 is reporting failure:1
2015-08-27 20:32:32.917259 7fc4e7cc4700  0 log_channel(cluster) log [DBG] : osd.21 172.22.4.43:6814/15585 reported failed by osd.40 172.22.4.54:6803/991

...

2015-08-27 20:39:25.210028 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.33 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210034 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.34 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210053 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.35 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210058 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.36 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210063 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.37 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210067 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.38 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210072 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.40 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210077 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.41 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210082 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.42 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210087 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.43 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210092 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.44 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210097 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.45 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210101 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.46 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.210105 7f0ff7510700 -1 osd.5 559908 heartbeat_check: no reply from osd.47 since back 2015-08-27 20:38:28.313443 front 2015-08-27 20:39:24.257819 (cutoff 2015-08-27 20:39:15.209786)
2015-08-27 20:39:25.492355 7f0feca4b700  0 -- 172.22.8.17:6803/9020065 >> 172.22.8.16:6802/7005972 pipe(0x7f1052a6f000 sd=69 :33399 s=2 pgs=807 cs=1 l=0 c=0x7f1014fe8580).fault with nothing to send, going to standby
2015-08-27 20:39:25.534613 7f0fdcf21700  0 log_channel(cluster) log [WRN] : map e559913 wrongly marked me down
2015-08-27 20:39:26.625173 7f0fb641d700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.49:0/2024316 pipe(0x7f103f4ac000 sd=102 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f103e517b80).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:26.631561 7f0fbd0f2700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.37:0/1016169 pipe(0x7f10148ec000 sd=103 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f105060de40).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:26.741622 7f0fde915700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.52:0/3015459 pipe(0x7f10151a6000 sd=165 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f1024753ce0).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:26.824222 7f0fb651e700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.50:0/3001004 pipe(0x7f103f4b8000 sd=101 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f10247522c0).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:27.531600 7f0fbdcfe700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.41:0/1025254 pipe(0x7f1013b73000 sd=286 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f1052df8580).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:27.642725 7f0fbd7f9700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.52:0/3009511 pipe(0x7f105991f000 sd=289 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f105a2ba420).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:28.038621 7f0fd270c700  0 -- 172.22.4.53:6800/20065 submit_message osd_map(559904..559915 src has 559347..559915) v3 remote, 172.22.4.52:0/2023435, failed lossy con, dropping message 0x7f105314c6c0
2015-08-27 20:39:28.213428 7f0ff7510700  0 log_channel(cluster) log [WRN] : 3 slow requests, 3 included below; oldest blocked for > 15.354212 secs
2015-08-27 20:39:28.213440 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 15.354212 seconds old, received at 2015-08-27 20:39:12.859145: osd_op(client.170315765.0:2008057 rbd_data.1239f2ae8944a.0000000000000386 [set-alloc-hint object_size 4194304 write_size 4194304,write 1753088~8192] 73.60f305
2015-08-27 20:39:28.213450 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 15.353479 seconds old, received at 2015-08-27 20:39:12.859878: osd_op(client.170315765.0:2008058 rbd_data.1239f2ae8944a.0000000000000386 [set-alloc-hint object_size 4194304 write_size 4194304,write 1982464~4096] 73.60f305
2015-08-27 20:39:28.213458 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 15.308287 seconds old, received at 2015-08-27 20:39:12.905069: osd_op(client.169712444.0:4008884 rbd_data.1c52e32ae8944a.0000000000000425 [set-alloc-hint object_size 4194304 write_size 4194304,write 561152~4096] 83.6309b6
2015-08-27 20:39:28.230140 7f0fcd702700  0 log_channel(cluster) log [INF] : 115.156 scrub starts
2015-08-27 20:39:28.252963 7f0fcd702700  0 log_channel(cluster) log [INF] : 115.156 scrub ok
2015-08-27 20:39:28.396700 7f0fbaecf700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.39:0/3031740 pipe(0x7f101e0d2000 sd=343 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f105603c580).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:28.739522 7f0fe78ad700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.52:0/3026394 pipe(0x7f101e0d7000 sd=310 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f1053037ce0).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:29.413015 7f0fbf661700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.39:0/1021836 pipe(0x7f103a00b000 sd=349 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f10149b5600).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:29.779607 7f0fbc6e8700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.38:0/1030221 pipe(0x7f10547bf000 sd=356 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f1043ad2dc0).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:31.948158 7f0fb7918700  0 -- 172.22.4.53:6800/20065 >> 172.22.4.50:0/1002237 pipe(0x7f102bd8c000 sd=371 :6800 s=0 pgs=0 cs=0 l=1 c=0x7f103de56c60).accept replacing existing (lossy) channel (new one lossy=1)
2015-08-27 20:39:32.657517 7f0fcff07700  0 -- 172.22.4.53:6800/20065 submit_message osd_map(559904..559915 src has 559347..559915) v3 remote, 172.22.4.5:0/1019764, failed lossy con, dropping message 0x7f103518f680
2015-08-27 20:39:38.230282 7f0fcd702700  0 log_channel(cluster) log [INF] : 115.25e scrub starts
2015-08-27 20:39:38.240839 7f0fcd702700  0 log_channel(cluster) log [INF] : 115.25e scrub ok
2015-08-27 20:39:40.227356 7f0fcd702700  0 log_channel(cluster) log [INF] : 115.2c0 scrub starts
2015-08-27 20:39:40.594350 7f0fcd702700  0 log_channel(cluster) log [INF] : 115.2c0 scrub ok
2015-08-27 20:39:43.215838 7f0ff7510700  0 log_channel(cluster) log [WRN] : 3 slow requests, 3 included below; oldest blocked for > 30.356639 secs
2015-08-27 20:39:43.215849 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 30.356639 seconds old, received at 2015-08-27 20:39:12.859145: osd_op(client.170315765.0:2008057 rbd_data.1239f2ae8944a.0000000000000386 [set-alloc-hint object_size 4194304 write_size 4194304,write 1753088~8192] 73.60f305
2015-08-27 20:39:43.215855 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 30.355905 seconds old, received at 2015-08-27 20:39:12.859878: osd_op(client.170315765.0:2008058 rbd_data.1239f2ae8944a.0000000000000386 [set-alloc-hint object_size 4194304 write_size 4194304,write 1982464~4096] 73.60f305
2015-08-27 20:39:43.215860 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 30.310714 seconds old, received at 2015-08-27 20:39:12.905069: osd_op(client.169712444.0:4008884 rbd_data.1c52e32ae8944a.0000000000000425 [set-alloc-hint object_size 4194304 write_size 4194304,write 561152~4096] 83.6309b6
2015-08-27 20:39:45.216128 7f0ff7510700  0 log_channel(cluster) log [WRN] : 6 slow requests, 3 included below; oldest blocked for > 32.356936 secs
2015-08-27 20:39:45.216138 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 15.397693 seconds old, received at 2015-08-27 20:39:29.818387: osd_op(client.174020460.0:304806 rbd_data.a59dd852ae8944a.00000000000001a4 [set-alloc-hint object_size 4194304 write_size 4194304,write 798720~4096] 251.598d7
2015-08-27 20:39:45.216144 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 15.398517 seconds old, received at 2015-08-27 20:39:29.817564: osd_op(client.174020460.0:304798 rbd_data.a59dd852ae8944a.0000000000000606 [set-alloc-hint object_size 4194304 write_size 4194304,write 3969024~8192] 251.aa9a
2015-08-27 20:39:45.216149 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 15.397956 seconds old, received at 2015-08-27 20:39:29.818124: osd_op(client.174020460.0:304802 rbd_data.a59dd852ae8944a.0000000000000606 [set-alloc-hint object_size 4194304 write_size 4194304,write 4087808~4096] 251.aa9a
2015-08-27 20:40:04.219759 7f0fcd702700  0 log_channel(cluster) log [INF] : 130.179 scrub starts
2015-08-27 20:40:05.029598 7f0fcd702700  0 log_channel(cluster) log [INF] : 130.179 scrub ok
2015-08-27 20:40:13.220243 7f0ff7510700  0 log_channel(cluster) log [WRN] : 3 slow requests, 3 included below; oldest blocked for > 60.361009 secs
2015-08-27 20:40:13.220258 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 60.361009 seconds old, received at 2015-08-27 20:39:12.859145: osd_op(client.170315765.0:2008057 rbd_data.1239f2ae8944a.0000000000000386 [set-alloc-hint object_size 4194304 write_size 4194304,write 1753088~8192] 73.60f305
2015-08-27 20:40:13.220267 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 60.360276 seconds old, received at 2015-08-27 20:39:12.859878: osd_op(client.170315765.0:2008058 rbd_data.1239f2ae8944a.0000000000000386 [set-alloc-hint object_size 4194304 write_size 4194304,write 1982464~4096] 73.60f305
2015-08-27 20:40:13.220275 7f0ff7510700  0 log_channel(cluster) log [WRN] : slow request 60.315084 seconds old, received at 2015-08-27 20:39:12.905069: osd_op(client.169712444.0:4008884 rbd_data.1c52e32ae8944a.0000000000000425 [set-alloc-hint object_size 4194304 write_size 4194304,write 561152~4096] 83.6309b6
2015-08-27 20:40:44.329513 7f0fc4db1700 -1 osd.5 559915 *** Got signal Terminated ***

After that the OSD shuts down and restarts cleanly.

Could this be related to TCP timeouts or something similar becoming so long that the requests simply did not continue? OTOH, shouldn't this have failed over to a secondary? (We had the "noout" flag set at that point to avoid excessive recovery traffic.)
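
For context, these are the knobs that govern how quickly peers report an unresponsive OSD and what the noout flag changes. A minimal sketch: the option names are real, but the defaults quoted in the comments are the upstream values of that era and are assumptions as far as this cluster's configuration is concerned.

# How long a peer waits for heartbeat replies before reporting an OSD down
# (upstream default: 20 s), and how often heartbeats are sent (default: 6 s).
ceph daemon osd.5 config get osd_heartbeat_grace
ceph daemon osd.5 config get osd_heartbeat_interval

# With noout set, an unresponsive OSD is still marked down but stays "in",
# so its PGs are not re-mapped to other OSDs and no recovery traffic starts.
ceph osd set noout
# Clear it again once the network is stable:
ceph osd unset noout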

Here's output from a mon:

2015-08-27 20:40:44.329061 7fc4e4c09700  0 log_channel(cluster) log [INF] : osd.5 marked itself down
2015-08-27 20:40:45.025784 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657990: 14755 pgs: 14755 active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 385 kB/s rd, 1286 kB/s wr, 131 op/s
2015-08-27 20:40:45.058829 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559916 e559916: 48 osds: 46 up, 48 in
2015-08-27 20:40:45.114525 7fc4e7cc4700  0 log_channel(cluster) log [INF] : osdmap e559916: 48 osds: 46 up, 48 in
2015-08-27 20:40:45.155859 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657991: 14755 pgs: 14194 active+clean, 561 stale+active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 252 kB/s rd, 148 kB/s wr, 35 op/s
2015-08-27 20:40:46.072285 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559917 e559917: 48 osds: 46 up, 48 in
2015-08-27 20:40:46.113885 7fc4e7cc4700  0 log_channel(cluster) log [INF] : osdmap e559917: 48 osds: 46 up, 48 in
2015-08-27 20:40:46.124958 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657992: 14755 pgs: 14194 active+clean, 561 stale+active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail
2015-08-27 20:40:46.710580 7fc4e540a700  0 mon.cartman06@0(leader).data_health(2594) update_stats avail 95% total 7935 MB, used 261 MB, avail 7576 MB
2015-08-27 20:40:46.830278 7fc4e4c09700  1 mon.cartman06@0(leader).pg v51657992  ignoring stats from non-active osd.
2015-08-27 20:40:48.200518 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559918 e559918: 48 osds: 47 up, 48 in
2015-08-27 20:40:48.208119 7fc4e7cc4700  0 log_channel(cluster) log [INF] : osd.0 172.22.4.42:6801/8929 boot
2015-08-27 20:40:48.218093 7fc4e7cc4700  0 log_channel(cluster) log [INF] : osdmap e559918: 48 osds: 47 up, 48 in
2015-08-27 20:40:48.259580 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657993: 14755 pgs: 14194 active+clean, 561 stale+active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail
2015-08-27 20:40:49.213460 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559919 e559919: 48 osds: 47 up, 48 in
2015-08-27 20:40:49.249755 7fc4e7cc4700  0 log_channel(cluster) log [INF] : osdmap e559919: 48 osds: 47 up, 48 in
2015-08-27 20:40:49.264822 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657994: 14755 pgs: 14194 active+clean, 561 stale+active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail
2015-08-27 20:40:50.335322 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657995: 14755 pgs: 14187 active+clean, 11 active+undersized+degraded, 557 stale+active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 3975 B/s rd, 287 kB/s wr, 77 op/s; 2288/5696218 objects degraded (0.040%)
2015-08-27 20:40:51.368875 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657996: 14755 pgs: 14133 active+clean, 110 active+undersized+degraded, 512 stale+active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 4067 kB/s rd, 4100 kB/s wr, 487 op/s; 19940/5696222 objects degraded (0.3
2015-08-27 20:40:53.490041 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657997: 14755 pgs: 14117 active+clean, 146 active+undersized+degraded, 492 stale+active+clean; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 4057 kB/s rd, 4476 kB/s wr, 549 op/s; 26188/5696224 objects degraded (0.4
2015-08-27 20:40:54.686980 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657998: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 31001 kB/s wr, 2181 op/s; 98847/5696218 objects degraded (1.735%); 15839 kB/s, 3 objects/s recove
2015-08-27 20:40:55.819226 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51657999: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 42253 kB/s wr, 2869 op/s; 98847/5696218 objects degraded (1.735%); 21123 kB/s, 5 objects/s recove
2015-08-27 20:40:56.849452 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51658000: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 159 kB/s rd, 4375 kB/s wr, 375 op/s; 98847/5696218 objects degraded (1.735%)
2015-08-27 20:40:58.917642 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51658001: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 2730 kB/s rd, 10651 kB/s wr, 1482 op/s; 98847/5696240 objects degraded (1.735%)
2015-08-27 20:41:00.000367 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51658002: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 4621 kB/s rd, 16153 kB/s wr, 2642 op/s; 98848/5696258 objects degraded (1.735%)
2015-08-27 20:41:01.076701 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51658003: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 2911 kB/s rd, 11681 kB/s wr, 2265 op/s; 98848/5696266 objects degraded (1.735%)
2015-08-27 20:41:04.172559 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51658004: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 4751 kB/s rd, 6808 kB/s wr, 1345 op/s; 98848/5696280 objects degraded (1.735%)
2015-08-27 20:41:05.210424 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51658005: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 5620 kB/s rd, 7056 kB/s wr, 1412 op/s; 98848/5696286 objects degraded (1.735%)
2015-08-27 20:41:06.284712 7fc4e7cc4700  0 log_channel(cluster) log [INF] : pgmap v51658006: 14755 pgs: 14224 active+clean, 531 active+undersized+degraded; 10364 GB data, 22122 GB used, 26183 GB / 48802 GB avail; 1861 kB/s rd, 5746 kB/s wr, 973 op/s; 98849/5696294 objects degraded (1.735%)
2015-08-27 20:41:07.324822 7fc4e7cc4700  1 mon.cartman06@0(leader).osd e559920 e559920: 48 osds: 48 up, 48 in
2015-08-27 20:41:07.362172 7fc4e7cc4700  0 log_channel(cluster) log [INF] : osd.5 172.22.4.53:6800/17967 boot
#1

Updated by Loïc Dachary over 8 years ago

  • Status changed from New to Need More Info

The next time it happens, it would be helpful if you could attach the output of ceph report to this issue, along with the end of the log of a crashed OSD, so we get the backtrace and some context about when it happened.
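
In case it helps the next reporter, a minimal sketch of gathering the requested data; osd.5 and the log path are placeholders for whichever daemon crashed.

# Cluster-wide state dump to attach to this issue.
ceph report > ceph-report.json

# Tail of the crashed OSD's log, which should include the backtrace.
tail -n 1000 /var/log/ceph/ceph-osd.5.log > osd.5-log-tail.txt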

#2

Updated by Christian Theune over 8 years ago

FYI: we're currently at 0.94.5 and we did some substantial cleanup over the past year (basically rebuilding the whole cluster on the fly, switching to XFS for OSDs, introducing new SSDs for journals, ...).

I haven't seen any further instances of this, so feel free to close it if you like.

#3

Updated by Sage Weil over 8 years ago

  • Status changed from Need More Info to Can't reproduce