Support #42676
Can't delete cache pool
Status: Closed
Reported by: luminoue
Description
When I remove a cache tier, some errors occur.
The steps are as follows:
1. ceph osd tier cache-mode cache forward --yes-i-really-mean-it
2. rados -p cache cache-flush-evict-all
When I perform step 2, the following errors occur:
```
failed to evict /rbd_header.dcdf36b8b4567: (16) Device or resource busy
rbd_header.2d1e256b8b4567
failed to evict /rbd_header.2d1e256b8b4567: (16) Device or resource busy
rbd_header.dcad46f615160
failed to evict /rbd_header.dcad46f615160: (16) Device or resource busy
rbd_header.ea8166b8b4567
failed to evict /rbd_header.ea8166b8b4567: (16) Device or resource busy
rbd_header.67c3d6b8b4567
failed to evict /rbd_header.67c3d6b8b4567: (16) Device or resource busy
rbd_header.2dbc366b8b4567
failed to evict /rbd_header.2dbc366b8b4567: (16) Device or resource busy
```
If I force the next step anyway (ceph osd tier remove-overlay {pool}), all the VMs become read-only and can't reboot. I can't find a good solution. Any ideas?
My Ceph version is 12.2.10.
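For reference, the documented sequence for removing a writeback cache tier is roughly the following (pool names `cache` and `{storagepool}` are placeholders; this is a sketch of the standard procedure, not a fix for the eviction errors above):

```shell
# 1. Switch the cache tier to forward mode so new I/O bypasses it
ceph osd tier cache-mode cache forward --yes-i-really-mean-it

# 2. Flush dirty objects and evict everything from the cache pool
rados -p cache cache-flush-evict-all

# 3. Verify the cache pool is actually empty before going further
rados -p cache ls

# 4. Only once empty: detach the overlay and remove the tier relationship
ceph osd tier remove-overlay {storagepool}
ceph osd tier remove {storagepool} cache
```

The read-only VMs reported above are what happens when step 4 is forced while objects (here, watched rbd_header objects) are still stuck in the cache pool.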
Updated by Greg Farnum over 4 years ago
- Tracker changed from Bug to Support
- Project changed from Ceph to rbd
- Category deleted (OSD)
As you have to maintain watches on the RBD header images, they can't be evicted. I suspect you can't do a migration live in that case?
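The watches Greg refers to can be inspected directly. As a sketch (the object name is taken from the error log above; `{pool}/{image}` is a placeholder):

```shell
# List the clients holding a watch on one of the stuck header objects
rados -p cache listwatchers rbd_header.dcdf36b8b4567

# Equivalently, show the watchers of an RBD image by name
rbd status {pool}/{image}
```

An object with an active watch cannot be evicted from the cache tier until the watching client (e.g. a running VM attached via librbd) disconnects, which is why the flush-evict step fails with EBUSY while the VMs are up.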
Updated by Jason Dillaman over 4 years ago
- Status changed from New to Closed
This is a duplicate of #14865 -- which had a proposed PR [1] which was never merged into the OSDs. We don't support migration on luminous so there is nothing for RBD to help with here if the OSDs cannot support eviction of a watched object.