Bug #21407: backoff causes out of order op

Added by Sage Weil over 6 years ago. Updated about 6 years ago.

Status: Resolved
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Backport: luminous
Regression: No
Severity: 3 - minor

Description

- receive ops a and b

2017-09-15 16:22:14.734503 7f9d2ad2c700  1 -- 172.21.15.134:6809/3757 <== client.4184 172.21.15.73:0/1941813039 40 ==== osd_op(client.4184.0:567 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8 ==== 527+0+494245 (2630269462 0 1056019856) 0x55a3a819ea00 con 0x55a3a7d92200
2017-09-15 16:22:14.734520 7f9d2ad2c700 15 osd.7 23 enqueue_op 0x55a3a7f35300 prio 63 cost 494245 latency 0.002316 epoch 23 osd_op(client.4184.0:567 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8
2017-09-15 16:22:14.734524 7f9d2ad2c700 20 osd.7 op_wq(0) _enqueue 2.10 PGQueueable(0x55a3a7f35300 prio 63 cost 494245 e23)
2017-09-15 16:22:14.734536 7f9d4e6b3700 20 osd.7 op_wq(0) _process 2.10 item PGQueueable(0x55a3a7f35300 prio 63 cost 494245 e23) queued
2017-09-15 16:22:14.734544 7f9d4e6b3700 20 osd.7 op_wq(0) _process 2.10 item PGQueueable(0x55a3a7f35300 prio 63 cost 494245 e23) pg 0x55a3a8199000
2017-09-15 16:22:14.734544 7f9d2ad2c700  1 -- 172.21.15.134:6809/3757 <== client.4184 172.21.15.73:0/1941813039 41 ==== osd_op(client.4184.0:568 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8 ==== 565+0+364 (360996996 0 83188795) 0x55a3a819e400 con 0x55a3a7d92200
2017-09-15 16:22:14.734552 7f9d2ad2c700 15 osd.7 23 enqueue_op 0x55a3a7be7780 prio 63 cost 364 latency 0.000020 epoch 23 osd_op(client.4184.0:568 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8

- pg not peered, put on waiting_for_peered
2017-09-15 16:22:14.734547 7f9d4e6b3700 10 osd.7 23 dequeue_op 0x55a3a7f35300 prio 63 cost 494245 latency 0.002343 osd_op(client.4184.0:567 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8 pg pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18)
 [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 unknown]
2017-09-15 16:22:14.734562 7f9d4e6b3700 20 osd.7 23 share_map client.4184 172.21.15.73:0/1941813039 23
2017-09-15 16:22:14.734569 7f9d4e6b3700 20 osd.7 23 should_share_map client.4184 172.21.15.73:0/1941813039 23
2017-09-15 16:22:14.734575 7f9d4e6b3700 10 osd.7 pg_epoch: 23 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 unknown] can_handle_while_inactive: 0x55a3a7f35300
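
For context, a minimal sketch of the parking step seen above: can_handle_while_inactive() rejects the client write, so the op goes onto the PG's waiting_for_peered list, to be requeued once peering completes. All types below (MiniPG, OpRequest) are illustrative stand-ins, not the actual Ceph classes:

    #include <list>
    #include <memory>

    struct OpRequest {};
    using OpRequestRef = std::shared_ptr<OpRequest>;

    struct MiniPG {
      bool peered = false;
      std::list<OpRequestRef> waiting_for_peered;  // FIFO: preserves client order

      bool can_handle_while_inactive(const OpRequestRef&) const {
        return false;  // a client write cannot run before peering completes
      }

      void do_request(OpRequestRef op) {
        if (!peered && !can_handle_while_inactive(op)) {
          waiting_for_peered.push_back(op);  // to be requeued later, in order
          return;
        }
        // ... normal dispatch would happen here ...
      }
    };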

- pg peering, gets another op, sends backoff
2017-09-15 16:22:15.507082 7f9d4e6b3700 10 osd.7 24 dequeue_op 0x55a3a84d0700 prio 63 cost 472310 latency 0.002396 osd_op(client.4184.0:650 2.10 2.c5eef50 (undecoded) ondisk+write+known_if_redirected e24) v8 pg pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 peering]
2017-09-15 16:22:15.507109 7f9d4e6b3700 20 osd.7 24 share_map client.4184 172.21.15.73:0/1941813039 24
2017-09-15 16:22:15.507114 7f9d4e6b3700 20 osd.7 24 should_share_map client.4184 172.21.15.73:0/1941813039 24
2017-09-15 16:22:15.507137 7f9d4e6b3700 10 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 peering] add_backoff session 0x55a3a56da200 added Backoff(0x55a3a7b6ca20 2.10 2 new [2:08000000::::head,2:10000000::::head)  session 0x55a3a56da200 pg 0x55a3a8199000)

- client acks.
2017-09-15 16:22:15.507519 7f9d2ad2c700  1 -- 172.21.15.134:6809/3757 <== client.4184 172.21.15.73:0/1941813039 57 ==== osd_backoff(2.10 ack-block id 2 [2:08000000::::head,2:10000000::::head) e24) v1 ==== 115+0+0 (1667154979 0 0) 0x55a3a7b30a00 con 0x55a3a7d92200
...
2017-09-15 16:22:15.507584 7f9d4a6ab700 10 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 peering] handle_backoff backoff ack id 2 [2:08000000::::head,2:10000000::::head)
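
The Backoff(...) dumps show the lifecycle: the backoff is created in state "new" when sent, handle_backoff moves it to "acked" once the client's ack-block arrives, and release_backoffs tears it down later. A rough sketch of that state machine, with illustrative names (MiniBackoff is not the real class):

    #include <cstdint>

    enum class BackoffState {
      NEW,       // sent to the client, not yet acknowledged
      ACKED,     // client confirmed it will hold ops in the range
      DELETING   // being torn down after release_backoffs
    };

    struct MiniBackoff {
      uint64_t id = 0;
      BackoffState state = BackoffState::NEW;
      // [begin, end): the hobject range the client must not send ops for

      void handle_ack() { state = BackoffState::ACKED; }
    };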

- flush finishes, queues event, requeues ops
2017-09-15 16:22:15.692369 7f9d576c5700 10 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 peering] flushed
...
2017-09-15 16:22:15.692425 7f9d4f6b5700 10 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 peering] handle_peering_event: epoch_sent: 24 epoch_requested: 24 FlushedEvt

- ...which triggers requeueing of waiting_for_peered (***)
...
2017-09-15 16:22:15.692497 7f9d4f6b5700 20 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 peering] requeue_ops 0x55a3a7f35300
...
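
This is the suspect step: requeue_ops() pushes waiting_for_peered back into the op queue even though the PG is still in peering. A hedged sketch, reusing the MiniPG stand-in from above (on_flushed here is illustrative scaffolding, not the real signature):

    #include <list>

    void on_flushed(MiniPG& pg, std::list<OpRequestRef>& op_queue) {
      // BUG, per this ticket: there is no "is the PG peered yet?" guard,
      // so op a re-enters the queue while its backoff is still acked.
      op_queue.splice(op_queue.end(), pg.waiting_for_peered);
    }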

- dequeue op a, backoff still there
2017-09-15 16:22:15.692958 7f9d4a6ab700 10 osd.7 24 dequeue_op 0x55a3a7f35300 prio 63 cost 494245 latency 0.960754 osd_op(client.4184.0:567 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8 pg pg[2.10( v 22'40 (0'0,22'40] local-lis/les=18/19 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 peering]
2017-09-15 16:22:15.692968 7f9d4e6b3700 20 osd.7 op_wq(0) _process 2.10 item PGQueueable(0x55a3a7be7780 prio 63 cost 364 e24) queued
2017-09-15 16:22:15.692971 7f9d57ec6700 20 bluefs _flush_range caching tail of 0xbd7 and padding block with 0x429
2017-09-15 16:22:15.692975 7f9d4a6ab700 10 check_backoff session 0x55a3a56da200 has backoff Backoff(0x55a3a7b6ca20 2.10 2 acked [2:08000000::::head,2:10000000::::head)  session 0x55a3a56da200 pg 0x55a3a8199000) for osd_op(client.4184.0:567 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8
2017-09-15 16:22:15.692982 7f9d4a6ab700 10 osd.7 24 dequeue_op 0x55a3a7f35300 finish
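
So op a is dequeued while the session still holds the acked backoff, and dequeue_op finishes without doing any work: the op is discarded on the assumption that the client will resend it after the backoff is released. Roughly (MiniSession and the exact-id matching are gross simplifications of the real range-based check):

    #include <set>
    #include <string>

    using ObjectId = std::string;

    struct MiniSession {
      std::set<ObjectId> backed_off;  // simplified: exact ids, not ranges
      bool has_backoff(const ObjectId& oid) const {
        return backed_off.count(oid) > 0;
      }
    };

    // Returns true if the op was processed, false if it was dropped.
    bool dequeue_op(MiniSession& session, const ObjectId& oid) {
      if (session.has_backoff(oid)) {
        return false;  // "dequeue_op ... finish" with no work: op discarded
      }
      // ... process op ...
      return true;
    }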

- pg finishes peering, backoff released
2017-09-15 16:22:15.693189 7f9d4f6b5700 10 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=23/24 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 activating+degraded] release_backoffs [2:08000000::::head,2:10000000::::head)
2017-09-15 16:22:15.693196 7f9d4f6b5700 20 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=23/24 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 activating+degraded] release_backoffs ? -1 2:08000000::::head 0x55a3a7b6ca20
2017-09-15 16:22:15.693204 7f9d4eeb4700 15 bluestore(/var/lib/ceph/osd/ceph-7) _omap_setkeys 2.0_head #2:00000000::::head#
2017-09-15 16:22:15.693206 7f9d4f6b5700 20 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=23/24 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 activating+degraded] release_backoffs checking 2:08000000::::head 0x55a3a7b6ca20
2017-09-15 16:22:15.693215 7f9d4eeb4700 10 bluestore(/var/lib/ceph/osd/ceph-7) _omap_setkeys 2.0_head #2:00000000::::head# = 0
2017-09-15 16:22:15.693216 7f9d4f6b5700 20 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=23/24 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 activating+degraded] release_backoffs checking  0x55a3a7b6ca20
2017-09-15 16:22:15.693218 7f9d4eeb4700 10 bluestore(/var/lib/ceph/osd/ceph-7) _txc_calc_cost 0x55a3a825e840 cost 5292 (1 ios * 4000 + 1292 bytes)
2017-09-15 16:22:15.693222 7f9d4f6b5700 10 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=23/24 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 activating+degraded] release_backoffs Backoff(0x55a3a7b6ca20 2.10 2 acked [2:08000000::::head,2:10000000::::head)  session 0x55a3a56da200 pg 0x55a3a8199000)

- op b dequeued, processed
2017-09-15 16:22:15.693348 7f9d4e6b3700 10 osd.7 24 dequeue_op 0x55a3a7be7780 prio 63 cost 364 latency 0.958815 osd_op(client.4184.0:568 2.10 2.51226b0 (undecoded) ondisk+retry+write+known_if_redirected e23) v8 pg pg[2.10( v 22'40 (0'0,22'40] local-lis/les=23/24 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 activating+degraded]
2017-09-15 16:22:15.693368 7f9d4e6b3700 20 osd.7 pg_epoch: 24 pg[2.10( v 22'40 (0'0,22'40] local-lis/les=23/24 n=7 ec=23/18 lis/c 18/18 les/c/f 19/19/0 18/23/18) [7,1] r=0 lpr=23 pi=[18,23)/1 crt=22'40 lcod 0'0 mlcod 0'0 activating+degraded] 1 flushes_in_progress pending waiting for active on 0x55a3a7be7780
...

- ...and we're out of order, badness!

/a/sage-2017-09-15_15:50:19-rados-wip-sage-testing2-2017-09-14-1256-distro-basic-smithi/1635718
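
To make the interleaving concrete, here is a self-contained toy reproduction of the reordering; it contains no Ceph code, just the queue/backoff behavior described above (tids 567/568 are the two ops from the log):

    #include <cstdio>
    #include <deque>
    #include <string>

    int main() {
      bool backoff_live = true;           // acked by client, not yet released
      std::deque<std::string> q;
      q.push_back("op a (tid 567)");      // requeued from waiting_for_peered
      q.push_back("op b (tid 568)");      // was queued right behind it

      while (!q.empty()) {
        std::string op = q.front();
        q.pop_front();
        if (backoff_live) {
          std::printf("dropped:   %s (backoff still in force)\n", op.c_str());
          backoff_live = false;           // released before the next dequeue
          continue;                       // client resends op a later...
        }
        std::printf("processed: %s\n", op.c_str());  // ...after op b already ran
      }
      return 0;
    }

Op a is dropped while the backoff is live, op b runs, and op a's eventual resend lands after it: out of order.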


Related issues: 1 (0 open, 1 closed)

Copied to RADOS - Backport #21794: luminous: backoff causes out of order op (Resolved; assignee: Shinobu Kinjo)
#1

Updated by Sage Weil over 6 years ago

  • Description updated
#2

Updated by Sage Weil over 6 years ago

The problem seems to be that we requeue waiting_for_peered before we are actually peered; that happens from on_flushed().

Strangely, start_flush() is done twice on primary PGs: once after processing the master log, and again when we activate the PG. I think this is to ensure the OSD doesn't share PG log entries with replicas before those entries are persisted, which seems fine.

But the waiting_for_peered requeueing seems to have been written without that double flush in mind.
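
Illustratively, the invariant that comment implies could be expressed as a guard of roughly this shape (not the actual fix, just the shape of the missing check, again using the MiniPG stand-in from the description):

    void on_flushed_guarded(MiniPG& pg, std::list<OpRequestRef>& op_queue) {
      if (!pg.peered) {
        return;  // first flush, right after merging the master log: too early
      }
      op_queue.splice(op_queue.end(), pg.waiting_for_peered);
    }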

#4

Updated by Josh Durgin over 6 years ago

  • Status changed from 12 to 7
#5

Updated by Sage Weil over 6 years ago

  • Status changed from 7 to Pending Backport
  • Backport set to luminous
#6

Updated by Nathan Cutler over 6 years ago

  • Copied to Backport #21794: luminous: backoff causes out of order op added
#7

Updated by Nathan Cutler about 6 years ago

  • Status changed from Pending Backport to Resolved