Bug #14468

osd: dup op detection doesn't handle guarded writes

Added by Josh Durgin over 8 years ago. Updated over 7 years ago.

Status: Resolved
Priority: High
Assignee: Josh Durgin
Category: OSD
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Ops 17682 and 17686 appear out of order to osd.3 in this log:

2016-01-19 22:19:48.440297 7f0af7f5d700  1 -- 10.8.129.6:6800/22727 <== client.4128 10.8.129.5:0/2005532 158 ==== osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5 ==== 273+0+1019904 (2469515895 0 0) 0x5248c80 con 0x4e31de0
2016-01-19 22:19:48.440384 7f0af7f5d700 20 osd.3 6 should_share_map client.4128 10.8.129.5:0/2005532 6
2016-01-19 22:19:48.440400 7f0af7f5d700 15 osd.3 6 enqueue_op 0x3ca2100 prio 63 cost 1019904 latency 0.014189 osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5
2016-01-19 22:19:48.440426 7f0b0258c700 10 osd.3 6 dequeue_op 0x3ca2100 prio 63 cost 1019904 latency 0.014215 osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5 pg pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]
2016-01-19 22:19:48.440460 7f0b0258c700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] op_has_sufficient_caps pool=0 (rbd ) owner=0 need_read_cap=1 need_write_cap=1 need_class_read_cap=0 need_class_write_cap=0 -> yes
2016-01-19 22:19:48.440452 7f0af7f5d700  1 -- 10.8.129.6:6800/22727 <== client.4128 10.8.129.5:0/2005532 159 ==== osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5 ==== 235+0+10240 (2723890840 0 0) 0x5249680 con 0x4e31de0
2016-01-19 22:19:48.440475 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] handle_message: 0x3ca2100
2016-01-19 22:19:48.440484 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_op osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5 may_write may_read -> write-ordered flags ack+ondisk+write+known_if_redirected
2016-01-19 22:19:48.440509 7f0af7f5d700 20 osd.3 6 should_share_map client.4128 10.8.129.5:0/2005532 6
2016-01-19 22:19:48.440507 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: found obc in cache: 0x4600840
2016-01-19 22:19:48.440515 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: 0x4600840 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 rwstate(none n=0 w=0) oi: 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0(0'0 unknown.0.0:0 wrlock_by=unknown.0.0:0 s 0 uv 0) ssc: 0x69032c0 snapset: 0=[]:[]
2016-01-19 22:19:48.440520 7f0af7f5d700 15 osd.3 6 enqueue_op 0x3ca4200 prio 63 cost 10240 latency 0.000101 osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5
2016-01-19 22:19:48.440530 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] find_object_context 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 @head oi=516ec378/rbd_data.10221a27709e.00000000000003f8/head//0(0'0 unknown.0.0:0 wrlock_by=unknown.0.0:0 s 0 uv 0)
2016-01-19 22:19:48.440544 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: obc NOT found in cache: 516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0
2016-01-19 22:19:48.440553 7f0b0258c700 15 filestore(/var/lib/ceph/osd/ceph-3) getattr 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0 '_'
2016-01-19 22:19:48.440589 7f0b0258c700 10 filestore(/var/lib/ceph/osd/ceph-3) error opening file /var/lib/ceph/osd/ceph-3/current/0.8_head/rbd\udata.10221a27709e.00000000000003f8__snapdir_516EC378__0 with flags=2: (2) No such file or directory
2016-01-19 22:19:48.440599 7f0b0258c700 10 filestore(/var/lib/ceph/osd/ceph-3) getattr 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0 '_' = -2
2016-01-19 22:19:48.440603 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: no obc for soid 516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0 and !can_create
2016-01-19 22:19:48.440624 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] execute_ctx 0x4e29600
2016-01-19 22:19:48.440637 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_op 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] ov 0'0 av 6'2220 snapc 0=[] snapset 0=[]:[]
2016-01-19 22:19:48.440651 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]  taking ondisk_read_lock
2016-01-19 22:19:48.440658 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_osd_op 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904]
2016-01-19 22:19:48.440668 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_osd_op  stat
2016-01-19 22:19:48.440675 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] stat oi object does not exist
2016-01-19 22:19:48.440683 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]  dropping ondisk_read_lock
2016-01-19 22:19:48.440693 7f0b0258c700  1 -- 10.8.129.6:6800/22727 --> 10.8.129.5:0/2005532 -- osd_op_reply(17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] v0'0 uv2219 ondisk = -2 ((2) No such file or directory)) v6 -- ?+0 0x4d183c0 con 0x4e31de0
2016-01-19 22:19:48.440713 7f0b0258c700 10 osd.3 6 dequeue_op 0x3ca2100 finish
2016-01-19 22:19:48.440725 7f0affd87700 10 osd.3 6 dequeue_op 0x3ca4200 prio 63 cost 10240 latency 0.000305 osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5 pg pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]
2016-01-19 22:19:48.440839 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] op_has_sufficient_caps pool=0 (rbd ) owner=0 need_read_cap=0 need_write_cap=1 need_class_read_cap=0 need_class_write_cap=0 -> yes
2016-01-19 22:19:48.440920 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] handle_message: 0x3ca4200
2016-01-19 22:19:48.440932 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_op osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5 may_write -> write-ordered flags ack+ondisk+write+known_if_redirected
2016-01-19 22:19:48.440960 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: found obc in cache: 0x4600840
2016-01-19 22:19:48.440972 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: 0x4600840 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 rwstate(none n=0 w=0) oi: 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0(0'0 unknown.0.0:0 wrlock_by=unknown.0.0:0 s 0 uv 0) ssc: 0x69032c0 snapset: 0=[]:[]
2016-01-19 22:19:48.440988 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] find_object_context 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 @head oi=516ec378/rbd_data.10221a27709e.00000000000003f8/head//0(0'0 unknown.0.0:0 wrlock_by=unknown.0.0:0 s 0 uv 0)
2016-01-19 22:19:48.441003 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: obc NOT found in cache: 516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0
2016-01-19 22:19:48.441015 7f0affd87700 15 filestore(/var/lib/ceph/osd/ceph-3) getattr 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0 '_'
2016-01-19 22:19:48.441051 7f0affd87700 10 filestore(/var/lib/ceph/osd/ceph-3) error opening file /var/lib/ceph/osd/ceph-3/current/0.8_head/rbd\udata.10221a27709e.00000000000003f8__snapdir_516EC378__0 with flags=2: (2) No such file or directory
2016-01-19 22:19:48.441062 7f0affd87700 10 filestore(/var/lib/ceph/osd/ceph-3) getattr 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0 '_' = -2
2016-01-19 22:19:48.441066 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] get_object_context: no obc for soid 516ec378/rbd_data.10221a27709e.00000000000003f8/snapdir//0 and !can_create
2016-01-19 22:19:48.441089 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] execute_ctx 0x59faa00
2016-01-19 22:19:48.441102 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_op 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] ov 0'0 av 6'2220 snapc 0=[] snapset 0=[]:[]
2016-01-19 22:19:48.441131 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_osd_op 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240]
2016-01-19 22:19:48.441141 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_osd_op  set-alloc-hint object_size 4194304 write_size 4194304
2016-01-19 22:19:48.441157 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] do_osd_op  write 0~10240
2016-01-19 22:19:48.441172 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] make_writeable 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 snapset=0x6903300  snapc=0=[]
2016-01-19 22:19:48.441183 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]  setting DIRTY flag
2016-01-19 22:19:48.441195 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] make_writeable 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 done, snapset=0=[]:[]+head
2016-01-19 22:19:48.441204 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] finish_ctx 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 0x59faa00 op modify  
2016-01-19 22:19:48.441215 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]  set mtime to 2016-01-19 22:19:48.380899
2016-01-19 22:19:48.441233 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=75 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]  final snapset 0=[]:[]+head in 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0
2016-01-19 22:19:48.441263 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]  zeroing write result code 0
2016-01-19 22:19:48.441277 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean]  op order client.4128 tid 17686 (first)
2016-01-19 22:19:48.441286 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] new_repop rep_tid 7846 on osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5
2016-01-19 22:19:48.441300 7f0affd87700  7 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] issue_repop rep_tid 7846 o 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0
2016-01-19 22:19:48.441295 7f0af704e700  1 -- 10.8.129.6:6800/22727 >> :/0 pipe(0x5ca6000 sd=141 :6800 s=0 pgs=0 cs=0 l=0 c=0x4e319c0).accept sd=141 10.8.129.5:50287/0
2016-01-19 22:19:48.441342 7f0affd87700 20 osd.3 6 share_map_peer 0x3a9bb20 already has epoch 6
2016-01-19 22:19:48.441347 7f0affd87700  1 -- 10.8.129.6:6801/22727 --> 10.8.129.6:6809/22732 -- osd_repop(client.4128.0:17686 0.8 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 v 6'2220) v1 -- ?+11085 0x536e000 con 0x3a9bb20
2016-01-19 22:19:48.441364 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2219 (0'0,6'2219] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] append_log log((0'0,6'2219], crt=6'2217) [6'2220 (0'0) modify   516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 by client.4128.0:17686 2016-01-19 22:19:48.380899]
2016-01-19 22:19:48.441387 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 lua=6'2219 crt=6'2217 lcod 6'2218 mlcod 6'2218 active+clean] add_log_entry 6'2220 (0'0) modify   516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 by client.4128.0:17686 2016-01-19 22:19:48.380899
2016-01-19 22:19:48.441409 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 lua=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] append_log  adding 1 keys
2016-01-19 22:19:48.441424 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 lua=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] append_log: trimming to 6'2218 entries 6'2218 (6'2217) modify   710f5458/rbd_data.10221a27709e.00000000000003f1/head//0 by client.4128.0:17608 2016-01-19 22:19:47.686938
2016-01-19 22:19:48.441455 7f0affd87700 10 write_log with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, dirty_divergent_priors: 0, writeout_from: 6'2220, trimmed: 
2016-01-19 22:19:48.441592 7f0af704e700 10 osd.3 6  new session 0x43cc000 con=0x4e319c0 addr=10.8.129.5:0/2005532
2016-01-19 22:19:48.441634 7f0af704e700 10 osd.3 6  session 0x43cc000 client.0 has caps osdcap[grant(rwx)] 'allow rwx'
2016-01-19 22:19:48.441643 7f0af704e700  0 -- 10.8.129.6:6800/22727 >> 10.8.129.5:0/2005532 pipe(0x5ca6000 sd=141 :6800 s=0 pgs=0 cs=0 l=1 c=0x4e319c0).accept replacing existing (lossy) channel (new one lossy=1)
2016-01-19 22:19:48.441749 7f0b0dda3700  1 osd.3 6 ms_handle_reset con 0x4e31de0 session 0x5472280
2016-01-19 22:19:48.443828 7f0affd87700  5 filestore(/var/lib/ceph/osd/ceph-3) queue_transactions existing osr(0.8 0x3a443e0)/0x3a443e0
2016-01-19 22:19:48.443845 7f0affd87700 10 journal op_submit_start 15282
2016-01-19 22:19:48.443847 7f0affd87700  5 filestore(/var/lib/ceph/osd/ceph-3) queue_transactions (parallel) 15282 0x6177d40
2016-01-19 22:19:48.443849 7f0affd87700 10 journal op_journal_transactions 15282 0x6177d40
2016-01-19 22:19:48.443857 7f0affd87700  5 journal submit_entry seq 15282 len 12654 (0x43b07e0)
2016-01-19 22:19:48.443881 7f0affd87700  5 filestore(/var/lib/ceph/osd/ceph-3) queue_op 0x7578c60 seq 15282 osr(0.8 0x3a443e0) 12640 bytes   (queue has 1 ops and 12640 bytes)
2016-01-19 22:19:48.443888 7f0affd87700 10 journal op_submit_finish 15282
2016-01-19 22:19:48.443894 7f0b12f1b700 20 journal write_thread_entry woke up
2016-01-19 22:19:48.443895 7f0b11718700 10 journal op_apply_start 15282 open_ops 0 -> 1
2016-01-19 22:19:48.443904 7f0b11718700  5 filestore(/var/lib/ceph/osd/ceph-3) _do_op 0x7578c60 seq 15282 osr(0.8 0x3a443e0)/0x3a443e0 start
2016-01-19 22:19:48.443906 7f0b12f1b700 10 journal room 39092223 max_size 104857600 pos 42934272 header.start 82026496 top 4096
2016-01-19 22:19:48.443909 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) _do_transaction on 0x6177d40
2016-01-19 22:19:48.443891 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 lua=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] eval_repop repgather(0x57ae840 6'2220 rep_tid=7846 committed?=0 applied?=0 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5) wants=ad
2016-01-19 22:19:48.443910 7f0b12f1b700 10 journal check_for_full at 42934272 : 16384 < 39092223
2016-01-19 22:19:48.443913 7f0affd87700 10 osd.3 6 dequeue_op 0x3ca4200 finish
2016-01-19 22:19:48.443912 7f0b12f1b700 15 journal prepare_single_write 1 will write 42934272 : seq 15282 len 12654 -> 16384 (head 40 pre_pad 0 ebl 12654 post_pad 3650 tail 40) (ebl alignment -1)
2016-01-19 22:19:48.443920 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) _omap_setkeys 0.8_head/8//head//0
2016-01-19 22:19:48.443950 7f0b12f1b700 20 journal prepare_multi_write queue_pos now 42950656
2016-01-19 22:19:48.443952 7f0b12f1b700 15 journal do_write writing 42934272~16384
2016-01-19 22:19:48.443958 7f0b11718700 10 filestore oid: 8//head//0 not skipping op, *spos 15282.0.0
2016-01-19 22:19:48.443961 7f0b11718700 10 filestore  > header.spos 0.0.0
2016-01-19 22:19:48.443969 7f0b12f1b700 10 journal align_bl total memcopy: 16384
2016-01-19 22:19:48.443991 7f0b11718700 20 filestore(/var/lib/ceph/osd/ceph-3) _omap_setkeys 0.8_head/8//head//0 = 0
2016-01-19 22:19:48.443998 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) _omap_setkeys 0.8_head/8//head//0
2016-01-19 22:19:48.444018 7f0b11718700 10 filestore oid: 8//head//0 not skipping op, *spos 15282.0.1
2016-01-19 22:19:48.444020 7f0b11718700 10 filestore  > header.spos 0.0.0
2016-01-19 22:19:48.444041 7f0b11718700 20 filestore(/var/lib/ceph/osd/ceph-3) _omap_setkeys 0.8_head/8//head//0 = 0
2016-01-19 22:19:48.444047 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) _omap_setkeys 0.8_head/8//head//0
2016-01-19 22:19:48.444064 7f0b11718700 10 filestore oid: 8//head//0 not skipping op, *spos 15282.0.2
2016-01-19 22:19:48.444066 7f0b11718700 10 filestore  > header.spos 0.0.0
2016-01-19 22:19:48.444120 7f0b11718700 20 filestore(/var/lib/ceph/osd/ceph-3) _omap_setkeys 0.8_head/8//head//0 = 0
2016-01-19 22:19:48.444125 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) touch 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0
2016-01-19 22:19:48.444303 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) touch 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 = 0
2016-01-19 22:19:48.444315 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) set_alloc_hint 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 object_size 4194304 write_size 4194304
2016-01-19 22:19:48.444322 7f0b11718700 20 filestore(/var/lib/ceph/osd/ceph-3) set_alloc_hint hint 1048576 ret -95
2016-01-19 22:19:48.444324 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) set_alloc_hint 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 object_size 4194304 write_size 4194304 = -95
2016-01-19 22:19:48.444329 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) write 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 0~10240
2016-01-19 22:19:48.444364 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) write 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 0~10240 = 10240
2016-01-19 22:19:48.444392 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) setattrs 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0
2016-01-19 22:19:48.444414 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) setattrs 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 = 0
2016-01-19 22:19:48.444433 7f0b11718700 20 filestore(/var/lib/ceph/osd/ceph-3) fgetattrs 78 getting '_'
2016-01-19 22:19:48.444444 7f0b11718700 15 filestore(/var/lib/ceph/osd/ceph-3) setattrs 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0
2016-01-19 22:19:48.444460 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) setattrs 0.8_head/516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 = 0
2016-01-19 22:19:48.444465 7f0b11718700 10 journal op_apply_finish 15282 open_ops 1 -> 0, max_applied_seq 15281 -> 15282
2016-01-19 22:19:48.444467 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) _do_op 0x7578c60 seq 15282 r = 0, finisher 0x43b2e60 0x51ece40
2016-01-19 22:19:48.444471 7f0b11718700 10 filestore(/var/lib/ceph/osd/ceph-3) _finish_op 0x7578c60 seq 15282 osr(0.8 0x3a443e0)/0x3a443e0
2016-01-19 22:19:48.444501 7f0b10f17700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 lua=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] op_applied: 7846
2016-01-19 22:19:48.444535 7f0b10f17700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 lua=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] op_applied version 6'2220
2016-01-19 22:19:48.444692 7f0b12f1b700 20 journal do_write latency 0.000738
2016-01-19 22:19:48.444699 7f0b12f1b700 20 journal do_write queueing finishers through seq 15282
2016-01-19 22:19:48.444701 7f0b12f1b700 10 journal queue_completions_thru seq 15282 queueing seq 15282 0x43b07e0 lat 0.000828
2016-01-19 22:19:48.444722 7f0b12f1b700  5 journal put_throttle finished 1 ops and 12654 bytes, now 0 ops and 0 bytes
2016-01-19 22:19:48.444726 7f0b12f1b700 20 journal write_thread_entry going to sleep
2016-01-19 22:19:48.444714 7f0b1271a700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] op_commit: 7846
2016-01-19 22:19:48.445156 7f0af8765700  0 -- 10.8.129.6:6801/22727 >> 10.8.129.6:6809/22732 pipe(0x4af5000 sd=105 :58065 s=2 pgs=44 cs=53 l=0 c=0x3a9bb20).injecting socket failure
2016-01-19 22:19:48.445218 7f0af8765700  1 -- 10.8.129.6:6801/22727 <== osd.5 10.8.129.6:6809/22732 13855 ==== osd_repop_reply(client.4128.0:17686 0.8 ondisk, result = 0) v1 ==== 83+0+0 (1015328748 0 0) 0x5741680 con 0x3a9bb20
2016-01-19 22:19:48.445253 7f0af8765700 10 osd.3 6 handle_replica_op osd_repop_reply(client.4128.0:17686 0.8 ondisk, result = 0) v1 epoch 6
2016-01-19 22:19:48.445259 7f0af8765700 20 osd.3 6 should_share_map osd.5 10.8.129.6:6809/22732 6
2016-01-19 22:19:48.445265 7f0af8765700 15 osd.3 6 enqueue_op 0x3a8f000 prio 196 cost 0 latency 0.000077 osd_repop_reply(client.4128.0:17686 0.8 ondisk, result = 0) v1
2016-01-19 22:19:48.445249 7f0af6e4c700  1 -- 10.8.129.6:6801/22727 >> 10.8.129.6:6809/22732 pipe(0x4af5000 sd=105 :58065 s=2 pgs=44 cs=53 l=0 c=0x3a9bb20).do_sendmsg error (32) Broken pipe
2016-01-19 22:19:48.445276 7f0af6e4c700  0 -- 10.8.129.6:6801/22727 >> 10.8.129.6:6809/22732 pipe(0x4af5000 sd=105 :58065 s=2 pgs=44 cs=53 l=0 c=0x3a9bb20).fault with nothing to send, going to standby
2016-01-19 22:19:48.445290 7f0b0258c700 10 osd.3 6 dequeue_op 0x3a8f000 prio 196 cost 0 latency 0.000102 osd_repop_reply(client.4128.0:17686 0.8 ondisk, result = 0) v1 pg pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean]
2016-01-19 22:19:48.445323 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] handle_message: 0x3a8f000
2016-01-19 22:19:48.445340 7f0b0258c700  7 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] sub_op_modify_reply: tid 7846 op  ack_type 4 from 5
2016-01-19 22:19:48.445362 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] repop_all_applied: repop tid 7846 all applied 
2016-01-19 22:19:48.445371 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] eval_repop repgather(0x57ae840 6'2220 rep_tid=7846 committed?=0 applied?=1 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5) wants=ad
2016-01-19 22:19:48.445387 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean]  sending ack on repgather(0x57ae840 6'2220 rep_tid=7846 committed?=0 applied?=1 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5) 0x609a3c0
2016-01-19 22:19:48.445400 7f0b0258c700  1 -- 10.8.129.6:6800/22727 --> 10.8.129.5:0/2005532 -- osd_op_reply(17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] v6'2220 uv2220 ack = 0) v6 -- ?+0 0x609a3c0 con 0x4e31de0
2016-01-19 22:19:48.445412 7f0b0258c700  0 -- 10.8.129.6:6800/22727 submit_message osd_op_reply(17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] v6'2220 uv2220 ack = 0) v6 remote, 10.8.129.5:0/2005532, failed lossy con, dropping message 0x609a3c0
2016-01-19 22:19:48.445422 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 luod=6'2219 crt=6'2218 lcod 6'2218 mlcod 6'2218 active+clean] repop_all_committed: repop tid 7846 all committed 
2016-01-19 22:19:48.445431 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2218 active+clean] eval_repop repgather(0x57ae840 6'2220 rep_tid=7846 committed?=1 applied?=1 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5) wants=ad
2016-01-19 22:19:48.445445 7f0b0258c700 15 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2218 active+clean] log_op_stats osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5 inb 10240 outb 0 rlat 0.005002 lat 0.005025
2016-01-19 22:19:48.445460 7f0b0258c700 15 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2218 active+clean] publish_stats_to_osd 6:2014
2016-01-19 22:19:48.445470 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2218 active+clean]  sending commit on repgather(0x57ae840 6'2220 rep_tid=7846 committed?=1 applied?=1 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5) 0x45f7b80
2016-01-19 22:19:48.445483 7f0b0258c700  1 -- 10.8.129.6:6800/22727 --> 10.8.129.5:0/2005532 -- osd_op_reply(17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] v6'2220 uv2220 ondisk = 0) v6 -- ?+0 0x45f7b80 con 0x4e31de0
2016-01-19 22:19:48.445491 7f0b0258c700  0 -- 10.8.129.6:6800/22727 submit_message osd_op_reply(17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] v6'2220 uv2220 ondisk = 0) v6 remote, 10.8.129.5:0/2005532, failed lossy con, dropping message 0x45f7b80
2016-01-19 22:19:48.445489 7f0af8b69700  1 -- 10.8.129.6:6801/22727 >> :/0 pipe(0x4af0000 sd=135 :6801 s=0 pgs=0 cs=0 l=0 c=0x45c1440).accept sd=135 10.8.129.6:55919/0
2016-01-19 22:19:48.445507 7f0b0258c700 15 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2218 active+clean] do_osd_op_effects client.4128 con 0x4e31de0
2016-01-19 22:19:48.445517 7f0b0258c700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  removing repgather(0x57ae840 6'2220 rep_tid=7846 committed?=1 applied?=1 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5)
2016-01-19 22:19:48.445530 7f0b0258c700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]    q front is repgather(0x57ae840 6'2220 rep_tid=7846 committed?=1 applied?=1 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5)
2016-01-19 22:19:48.445542 7f0b0258c700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] remove_repop repgather(0x57ae840 6'2220 rep_tid=7846 committed?=1 applied?=1 lock=0 op=osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 ack+ondisk+write+known_if_redirected e6) v5)
2016-01-19 22:19:48.445554 7f0b0258c700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  obc obc(516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 rwstate(write n=1 w=0))
2016-01-19 22:19:48.445606 7f0b0258c700 10 osd.3 6 dequeue_op 0x3a8f000 finish
2016-01-19 22:19:48.445628 7f0af8b69700 10 osd.3 6  new session 0x5ec3480 con=0x45c1440 addr=10.8.129.6:6809/22732
2016-01-19 22:19:48.445657 7f0af8b69700 10 osd.3 6  session 0x5ec3480 osd.5 has caps osdcap[grant(*)] 'allow *'
2016-01-19 22:19:48.445663 7f0af8b69700  0 -- 10.8.129.6:6801/22727 >> 10.8.129.6:6809/22732 pipe(0x4af0000 sd=135 :6801 s=0 pgs=0 cs=0 l=0 c=0x45c1440).accept connect_seq 54 vs existing 53 state standby
2016-01-19 22:19:48.445692 7f0b0c5a0700  1 osd.3 6 ms_handle_reset con 0x45c1440 session 0x5ec3480
2016-01-19 22:19:48.452092 7f0af704e700  1 -- 10.8.129.6:6800/22727 <== client.4128 10.8.129.5:0/2005532 1 ==== osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 RETRY=1 ack+ondisk+retry+write+known_if_redirected e6) v5 ==== 273+0+1019904 (3541945240 0 0) 0x5ec1900 con 0x4e319c0
2016-01-19 22:19:48.452202 7f0af704e700 20 osd.3 6 should_share_map client.4128 10.8.129.5:0/2005532 6
2016-01-19 22:19:48.452221 7f0af704e700 15 osd.3 6 enqueue_op 0x3a8eb00 prio 63 cost 1019904 latency 0.010149 osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 RETRY=1 ack+ondisk+retry+write+known_if_redirected e6) v5
2016-01-19 22:19:48.452265 7f0affd87700 10 osd.3 6 dequeue_op 0x3a8eb00 prio 63 cost 1019904 latency 0.010193 osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 RETRY=1 ack+ondisk+retry+write+known_if_redirected e6) v5 pg pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]
2016-01-19 22:19:48.452283 7f0af704e700  1 -- 10.8.129.6:6800/22727 <== client.4128 10.8.129.5:0/2005532 2 ==== osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 RETRY=1 ack+ondisk+retry+write+known_if_redirected e6) v5 ==== 235+0+10240 (1738519202 0 0) 0x5ec2a80 con 0x4e319c0
2016-01-19 22:19:48.452310 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] op_has_sufficient_caps pool=0 (rbd ) owner=0 need_read_cap=1 need_write_cap=1 need_class_read_cap=0 need_class_write_cap=0 -> yes
2016-01-19 22:19:48.452326 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] handle_message: 0x3a8eb00
2016-01-19 22:19:48.452338 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] do_op osd_op(client.4128.0:17682 rbd_data.10221a27709e.00000000000003f8 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] 0.516ec378 RETRY=1 ack+ondisk+retry+write+known_if_redirected e6) v5 may_write may_read -> write-ordered flags ack+ondisk+retry+write+known_if_redirected
2016-01-19 22:19:48.452360 7f0af704e700 20 osd.3 6 should_share_map client.4128 10.8.129.5:0/2005532 6
2016-01-19 22:19:48.452370 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] get_object_context: found obc in cache: 0x4600840
2016-01-19 22:19:48.452373 7f0af704e700 15 osd.3 6 enqueue_op 0x3a8f100 prio 63 cost 10240 latency 0.000130 osd_op(client.4128.0:17686 rbd_data.10221a27709e.00000000000003f8 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~10240] 0.516ec378 RETRY=1 ack+ondisk+retry+write+known_if_redirected e6) v5
2016-01-19 22:19:48.452379 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] get_object_context: 0x4600840 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 rwstate(none n=0 w=0) oi: 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0(6'2220 client.4128.0:17686 wrlock_by=unknown.0.0:0 dirty|data_digest s 10240 uv 2220 dd 94de4cde) ssc: 0x69032c0 snapset: 0=[]:[]+head
2016-01-19 22:19:48.452395 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] find_object_context 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 @head oi=516ec378/rbd_data.10221a27709e.00000000000003f8/head//0(6'2220 client.4128.0:17686 wrlock_by=unknown.0.0:0 dirty|data_digest s 10240 uv 2220 dd 94de4cde)
2016-01-19 22:19:48.452424 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] execute_ctx 0x48e4400
2016-01-19 22:19:48.452436 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] do_op 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904] ov 6'2220 av 6'2221 snapc 0=[] snapset 0=[]:[]+head
2016-01-19 22:19:48.452450 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  taking ondisk_read_lock
2016-01-19 22:19:48.452458 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] do_osd_op 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 3174400~1019904]
2016-01-19 22:19:48.452468 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] do_osd_op  stat
2016-01-19 22:19:48.452478 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] stat oi has 10240 2016-01-19 22:19:48.380899
2016-01-19 22:19:48.452487 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] do_osd_op  set-alloc-hint object_size 4194304 write_size 4194304
2016-01-19 22:19:48.452498 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] do_osd_op  write 3174400~1019904
2016-01-19 22:19:48.452511 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] make_writeable 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 snapset=0x6903300  snapc=0=[]
2016-01-19 22:19:48.452525 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] make_writeable 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 done, snapset=0=[]:[]+head
2016-01-19 22:19:48.452536 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] finish_ctx 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0 0x48e4400 op modify  
2016-01-19 22:19:48.452547 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  set mtime to 2016-01-19 22:19:48.374323
2016-01-19 22:19:48.452564 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  final snapset 0=[]:[]+head in 516ec378/rbd_data.10221a27709e.00000000000003f8/head//0
2016-01-19 22:19:48.452579 7f0affd87700 10 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  dropping ondisk_read_lock
2016-01-19 22:19:48.452588 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  zeroing write result code 0
2016-01-19 22:19:48.452598 7f0affd87700 20 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean]  op order client.4128 tid 17682 last was 17686
2016-01-19 22:19:48.452608 7f0affd87700 -1 osd.3 pg_epoch: 6 pg[0.8( v 6'2220 (0'0,6'2220] local-les=4 n=76 ec=1 les/c 4/4 3/3/3) [3,5] r=0 lpr=3 crt=6'2218 lcod 6'2219 mlcod 6'2219 active+clean] bad op order, already applied 17686 > this 17682
2016-01-19 22:19:48.455393 7f0affd87700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::execute_ctx(ReplicatedPG::OpContext*)' thread 7f0affd87700 time 2016-01-19 22:19:48.452616
osd/ReplicatedPG.cc: 2351: FAILED assert(0 == "out of order op")

 ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0xb21595]
 2: (ReplicatedPG::execute_ctx(ReplicatedPG::OpContext*)+0x1c7b) [0x873bcb]
 3: (ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>&)+0x45d7) [0x8782f7]
 4: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x68a) [0x81053a]
 5: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x405) [0x661f35]
 6: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x330) [0x6624a0]
 7: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x86f) [0xb10d1f]
 8: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xb12e50]
 9: (()+0x7dc5) [0x7f0b1eabadc5]
 10: (clone()+0x6d) [0x7f0b1d59d21d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

osd.3 is the only OSD to see the first attempt at 17682: because the op does a stat before writing, osd.3 returns -ENOENT to the client without recording the error in the pg log.

Then op 17686 arrives, without a stat, and proceeds as usual.

The client had ms fault injection enabled and discarded the reply for 17682, so it resends the op. When the retry arrives, osd.3 hits the 'out of order op' assert.

This occurred on hammer, but seems to apply equally to master.
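
For illustration, here is a rough librados C++ sketch of the pattern that triggers this. librbd builds these compound ops through its own internal plumbing, so this is only an approximation: assert_exists() stands in for the stat guard seen in the log, and the object name, offsets and sizes are copied from the log but otherwise arbitrary.

#include <rados/librados.hpp>

// Rough sketch only -- not librbd code. Two writes to the same object are
// pipelined: the first is guarded (fails with -ENOENT while the object does
// not exist, and that failure is not recorded in the pg log), the second is
// unguarded and creates the object. If the guarded op's reply is lost and the
// op is retried after the unguarded one commits, the OSD sees the older tid
// after the newer one and hits the "out of order op" assert.
void pipeline_guarded_and_unguarded(librados::IoCtx& ioctx)
{
  const std::string oid = "rbd_data.10221a27709e.00000000000003f8";

  librados::ObjectWriteOperation guarded;      // like op 17682
  guarded.assert_exists();                     // stand-in for the stat guard
  guarded.set_alloc_hint(4194304, 4194304);
  librados::bufferlist a;
  a.append(std::string(1019904, 'a'));
  guarded.write(3174400, a);

  librados::ObjectWriteOperation unguarded;    // like op 17686
  unguarded.set_alloc_hint(4194304, 4194304);
  librados::bufferlist b;
  b.append(std::string(10240, 'b'));
  unguarded.write(0, b);

  librados::AioCompletion* ca = librados::Rados::aio_create_completion();
  librados::AioCompletion* cb = librados::Rados::aio_create_completion();
  ioctx.aio_operate(oid, ca, &guarded);        // both in flight at once
  ioctx.aio_operate(oid, cb, &unguarded);
  ca->wait_for_complete();                     // get_return_value() may be -ENOENT
  cb->wait_for_complete();
  ca->release();
  cb->release();
}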

#1

Updated by Sage Weil about 8 years ago

  • Status changed from New to 12

It seems like the short answer is "don't pipeline guarded and non-guarded ops".

We can make the crash go away by skipping the out-of-order check/assert when the op has a guard. That hides the symptom, but it may still bite users who pipeline these ops.
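
A minimal conceptual sketch of that idea (not the actual ReplicatedPG code; the assert in the backtrace fires in execute_ctx, but the shape of the check is the same):

#include <cassert>
#include <cstdint>
#include <map>

// Conceptual sketch only. The OSD remembers the highest tid it has applied for
// each client and expects new tids to be larger; the log above shows exactly
// this check failing ("already applied 17686 > this 17682"). The suggestion is
// to tolerate the replay when the op carries a guard, since a guarded op can
// legitimately be retried after a later op already created the object.
std::map<uint64_t, uint64_t> last_applied_tid;   // client gid -> last tid

void check_op_order(uint64_t client, uint64_t tid, bool op_has_guard)
{
  auto it = last_applied_tid.find(client);
  if (it != last_applied_tid.end() && tid <= it->second) {
    if (!op_has_guard)
      assert(0 == "out of order op");            // current behaviour
    return;                                      // proposed: ignore the guarded replay
  }
  last_applied_tid[client] = tid;
}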

In order to make this actually work, we need to persist the write in the pg log but not change the object. That seems like it's not worth it.

We could partially mitigate this by making a guarded op automagically fail if the last op was the same client id and a higher tid, but that wouldn't work if there were multiple clients (client a guarded_op_1, client a op_2, client b op_3, client a reconnects and sends guarded_op_1). I think it's not worth it.

Josh, how important is it that librbd pipeline these ops?

#2

Updated by Josh Durgin about 8 years ago

With the object map it becomes more important, since we drop the guard sooner and have more of a chance to hit the race. It's also confusing semantics for librados users - if we keep it, we at least need to document it well.

IIRC when I talked to Sam about this before, we mentioned two independent changes that would help here: 1) making the dup op log per-object and 2) keeping a longer non-persistent dup op log, e.g. with the object context cache.
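
Roughly, (1) plus (2) could look like the hypothetical sketch below (not Ceph code): a small, bounded, in-memory log of recent request ids and results attached to each object context, recording failures as well as successes so a replayed request gets its original answer back instead of being re-executed.

#include <cstdint>
#include <deque>
#include <optional>
#include <utility>

// Hypothetical per-object dup-op log. Bounded and non-persistent, so it only
// helps while the object context stays cached -- the trade-off mentioned above.
struct ReqId {
  uint64_t client;
  uint64_t tid;
  bool operator==(const ReqId& o) const { return client == o.client && tid == o.tid; }
};

class PerObjectDupOpLog {
  std::deque<std::pair<ReqId, int>> entries;   // (reqid, result), newest last
  size_t max_entries;
public:
  explicit PerObjectDupOpLog(size_t max = 32) : max_entries(max) {}

  void record(ReqId r, int result) {           // record successes and failures
    entries.emplace_back(r, result);
    if (entries.size() > max_entries)
      entries.pop_front();
  }

  std::optional<int> lookup(ReqId r) const {   // replayed request? return old result
    for (const auto& e : entries)
      if (e.first == r)
        return e.second;
    return std::nullopt;
  }
};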

#3

Updated by Greg Farnum about 8 years ago

It's not the length of the log that's the problem, though. Apparently we just don't remember ops which have a possible-but-failed write? I don't really see how that semantic works out well for anybody. It doesn't need to be pipelined ops from a single client to be a problem — systems which expose ack prior to commit and then have OSD failure will get in trouble; racing guarded ops from multiple clients can get different results on replay than they did on the "first" (invisible) go-round.

#4

Updated by Josh Durgin about 8 years ago

Right, what I mentioned was assuming we stored the result of failed ops as well.

#5

Updated by Sage Weil about 8 years ago

I think storing failed ops would be problematic too. At least, it would break the current ability we have to return data from a guard, e.g.

assert-unlocked-or-tell-me-who-has-it-locked (read op)
set-lock-to-me (write op)
write-something (write op)

AFAICS this capability is incompatible with pipelining a guarded op + a write op.

So my question is still how important is it for librbd to do the pipelining. I can't think of a user of the above off the top of my head, but switching gears to allow pipelining will be a lot of work in rados to support, so my bias is not to do it...

#6

Updated by Sage Weil about 8 years ago

  • Priority changed from Urgent to High
#7

Updated by Josh Durgin about 8 years ago

For now let's disable the optimization in librbd:

http://tracker.ceph.com/issues/15353

#8

Updated by Jason Dillaman about 8 years ago

Just as a follow-up: there isn't an optimization in librbd where we drop the guard. We only drop the guard when we execute the copy-up. If we have pipelined IO to the same object, there is a chance of intermingling guarded ops and copy-ups:

osd_op_reply(9109 rbd_data.100e71ea1109.00000000000000a6 [stat,write 588800~1032192] v0'0 uv576 ondisk = -2 ((2) No such file or directory)) v7 -- ?+0 0x7fb31a36d8c0 con 0x7fb31b555380
osd_op_reply(9110 rbd_data.100e71ea1109.00000000000000a6 [stat,write 1620992~1032192] v0'0 uv576 ondisk = -2 ((2) No such file or directory)) v7 -- ?+0 0x7fb31bb29080 con 0x7fb31b555380
osd_op_reply(9111 rbd_data.100e71ea1109.00000000000000a6 [stat,write 2653184~1032192] v0'0 uv576 ondisk = -2 ((2) No such file or directory)) v7 -- ?+0 0x7fb31a36cb00 con 0x7fb31b555380
osd_op_reply(9112 rbd_data.100e71ea1109.00000000000000a6 [stat,write 3685376~64512] v0'0 uv576 ondisk = -2 ((2) No such file or directory)) v7 -- ?+0 0x7fb31bb28dc0 con 0x7fb31b555380
osd_op_reply(9113 rbd_data.100e71ea1109.00000000000000a6 [write 588800~1032192] v8'577 uv577 ack = 0) v7 -- ?+0 0x7fb3192662c0 con 0x7fb31b555380

In the case above, 9109 is the original request that kicked off the copy-up and 9113 is the unguarded copy-up (it's missing the exec call since the parent object was empty). In a single-client environment, librbd could track in-flight ops to each object and prevent such races. That won't help in a multi-client environment, however.
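
A sketch of that single-client idea (hypothetical, not librbd code): serialize sends per object so a guarded op and a copy-up to the same object are never in flight at once. Locking is omitted.

#include <deque>
#include <functional>
#include <map>
#include <string>

// Hypothetical per-object serializer. submit() dispatches immediately if the
// object has nothing in flight; complete() (called from the op's completion
// callback) kicks off the next queued op for that object.
class PerObjectSerializer {
  std::map<std::string, std::deque<std::function<void()>>> queues;
public:
  void submit(const std::string& oid, std::function<void()> send) {
    auto& q = queues[oid];
    q.push_back(std::move(send));
    if (q.size() == 1)
      q.front()();            // nothing in flight -> dispatch now
  }

  void complete(const std::string& oid) {
    auto& q = queues[oid];
    q.pop_front();
    if (!q.empty())
      q.front()();            // start the next queued op
    else
      queues.erase(oid);
  }
};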

#9

Updated by Josh Durgin almost 8 years ago

  • Assignee set to Josh Durgin
#10

Updated by Josh Durgin over 7 years ago

  • Status changed from 12 to Resolved