Feature #14201: cope with a pool being removed from under a mapped image
Status: Closed
% Done: 0%
Description
1. [root@c8 /]# uname -r
4.1.1
2. [root@ceph0 ~]# ceph -v
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
3. create image xx
[root@ceph-ci ~]# rbd create xx/xx --image-format 2 --size 1024
[root@ceph-ci ~]# rbd ls xx
xx
4. map image
[root@c8 /]# rbd map xx/xx
/dev/rbd0
[root@c8 /]# rbd showmapped
id pool image snap device
0 xx xx - /dev/rbd0
5. delete pool
[root@ceph-ci ~]# ceph osd pool delete xx xx --yes-i-really-really-mean-it
pool 'xx' removed
6. [root@c8 /]# rbd showmapped ----- at this point, image xx has already been deleted, but the mapping info still exists
id pool image snap device
0 xx xx - /dev/rbd0
Updated by Ilya Dryomov about 8 years ago
- Tracker changed from Bug to Feature
- Subject changed from mapped image can't feel changes when the pool of this mapped image is deleted to cope with a pool being removed from under a mapped image
We currently don't cope very well with this - see #11960. When #9779 is finished, there will be a message printed to dmesg, saying that the pool is gone.
Leaving this around as a feature to test and make sure that each rbd operation after the pool is removed fails gracefully with an -EIO. The device won't be torn down though - only the operator can do that.
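The expected failure mode can be sketched as a small Python probe. This is only an illustration of the behavior described above, not part of the kernel work: the device path and the helper names are assumptions, and the probe simply expects a raw read on the mapped device to fail with -EIO once the pool is gone.

```python
import errno
import os

def classify_io_error(err):
    """Map an OSError errno from a block-device read to the two outcomes
    this tracker cares about: -EIO once the pool is removed (the graceful
    failure being tested), anything else unexpected for this scenario."""
    return "pool gone (-EIO)" if err == errno.EIO else "unexpected"

def probe_mapped_device(dev_path, length=4096):
    """Issue one raw read against a mapped rbd device (hypothetical path,
    e.g. /dev/rbd0). After the pool is removed, the kernel client should
    fail the request with -EIO rather than hang."""
    fd = os.open(dev_path, os.O_RDONLY)
    try:
        os.read(fd, length)
        return "ok"
    except OSError as e:
        return classify_io_error(e.errno)
    finally:
        os.close(fd)
```

Running `probe_mapped_device("/dev/rbd0")` before and after `ceph osd pool delete` would show the transition from "ok" to the -EIO classification.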
Updated by Ilya Dryomov almost 8 years ago
- Status changed from New to In Progress
- Assignee set to Ilya Dryomov
Updated by Ilya Dryomov almost 8 years ago
- Related to Feature #9779: libceph: sync up with objecter added
Updated by Ilya Dryomov almost 8 years ago
- Status changed from In Progress to Resolved
Done in 4.7. In-flight requests are failed with -EIO, and the filesystem remounts itself read-only. The filesystem can then be unmounted and the rbd device unmapped. dmesg shows:
libceph: tid 607 pool does not exist
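A monitoring script watching for this condition could match the log line quoted above like so. This is a sketch: the regex is derived only from the example message in this tracker, and the function name is illustrative.

```python
import re

# Matches the libceph message emitted when a request targets a removed
# pool, e.g. "libceph: tid 607 pool does not exist" (format taken from
# the example in this tracker).
POOL_GONE_RE = re.compile(r"libceph: tid (\d+) pool does not exist")

def failed_tids(dmesg_text):
    """Return the transaction ids of requests that failed because their
    pool was removed, in the order they appear in the log text."""
    return [int(m.group(1)) for m in POOL_GONE_RE.finditer(dmesg_text)]
```

Feeding it the output of `dmesg` would yield the tids of the -EIOed requests, which an operator could use as a cue to unmount and unmap the device.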