Bug #913
closed
krbd: handle race between notify and rbd device shutdown
Added by Sage Weil about 13 years ago.
Updated about 11 years ago.
Description
The notify fires from a work queue. We need to cancel the event and drain that work queue before closing down our device.
- Story points set to 2
- Target version changed from v2.6.39 to v3.0
- Target version changed from v3.0 to v3.1
- Target version changed from v3.1 to v3.2
- Target version deleted (v3.2)
- Project changed from Linux kernel client to rbd
- Category deleted (rbd)
- Subject changed from rbd: handle race between notify and rbd device shutdown to krbd: handle race between notify and rbd device shutdown
- Assignee set to Alex Elder
- Status changed from New to Resolved
This is very old and, provided I understand it correctly, it is resolved
in the current rbd code.
When a mapped rbd image is created, rbd_dev_release() is registered
as the function to call when the last reference to the block
device is dropped.
The first thing rbd_dev_release() does is synchronously drop
the watch request on the header object, so before any of the
rest of the final teardown proceeds, the watch request is gone.
And I think any incoming notify requests that might have
been sent prior to that will be done by then, and we won't
get them thereafter.
So... I'm going to call this resolved.