Bug #913
krbd: handle race between notify and rbd device shutdown
Status: Closed
Description
The notify fires in a work queue. We need to cancel the event and drain that work before closing down our device.
Updated by Sage Weil about 13 years ago
- Story points set to 2
- Position set to 1
- Position changed from 1 to 560
Updated by Sage Weil almost 13 years ago
- Target version changed from v2.6.39 to v3.0
Updated by Sage Weil almost 13 years ago
- Position deleted (559)
- Position set to 1
- Position changed from 1 to 565
Updated by Sage Weil almost 13 years ago
- Target version changed from v3.0 to v3.1
Updated by Sage Weil over 12 years ago
- Position deleted (571)
- Position set to 13
Updated by Sage Weil almost 12 years ago
- Project changed from Linux kernel client to rbd
- Category deleted (rbd)
Updated by Josh Durgin almost 12 years ago
- Subject changed from rbd: handle race between notify and rbd device shutdown to krbd: handle race between notify and rbd device shutdown
Updated by Alex Elder about 11 years ago
- Status changed from New to Resolved
This is very old, and, provided I understand it, it is resolved
in the current rbd code.
When an rbd image is mapped, rbd_dev_release() is registered as
the function to call when the last reference to the block device
is dropped.
The first thing rbd_dev_release() does is synchronously drop
the watch request on the header object, so the watch request is
gone before any of the remaining teardown proceeds.
And I think any notify requests sent before that point will
have completed by then, and we won't receive any thereafter.
So... I'm going to call this resolved.