Bug #11960
Kernel panic when deleting a pool which contains a mapped RBD
Status: Closed
Description
Hello all,
I've found a reproducible bug in Ceph relating to mapped RBD images.
Steps to reproduce:
1. Create pool
2. Create RBD
3. Map RBD
4. Mount RBD
5. Write data to mount
6. Delete pool
7. Stop OSD cluster daemons (stop ceph-all)
8. Server which has the RBD mapped will panic
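The steps above can be sketched as the following command sequence. Pool name, image name, sizes, and mount point are illustrative assumptions, not from the original report; the ceph/rbd invocations follow the Ceph 0.94 (hammer) CLI and `stop ceph-all` assumes Upstart on Ubuntu 14.04:

```shell
# Hypothetical repro sketch - run only on a disposable test cluster.
ceph osd pool create testpool 64                  # 1. create pool (64 PGs, arbitrary)
rbd create testpool/testimg --size 1024           # 2. create a 1 GiB RBD image
rbd map testpool/testimg                          # 3. map it (appears as e.g. /dev/rbd0)
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/rbdtest
mount /dev/rbd0 /mnt/rbdtest                      # 4. mount it
dd if=/dev/zero of=/mnt/rbdtest/data bs=1M count=100   # 5. write some data
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it   # 6. delete pool
stop ceph-all                                     # 7. stop daemons (Upstart)
# 8. the node that still has /dev/rbd0 mapped panics shortly afterwards
```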
We can reproduce this reliably. Here is our dev environment:
- Ubuntu 14.04 LTS, 3.16.0-38-generic (linux-generic-lts-utopic kernel)
- Ceph 0.94.1-1trusty packages
- megaraid_sas kernel module (default) on OSDs
I realise it is not a good idea to delete a pool without first removing the RBD images - but deleting them can take time, and I'm not always 100% sure whether they are mapped. The default behaviour should be to warn the user (dmesg?) - assuming this is a bug...
Kind Regards,
Alex.