Bug #11960


Kernel panic when deleting a pool which contains a mapped RBD

Added by Alex Leake almost 9 years ago. Updated almost 8 years ago.

Status: Closed
Priority: Urgent
Assignee:
Category: -
Target version: -
% Done: 0%
Source: Community (dev)
Tags: rbd, panic, delete
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: rbd
Crash signature (v1):
Crash signature (v2):

Description

Hello all,

I've found a reproducible bug in Ceph relating to mapped RBD images.

Steps to reproduce:

1. Create pool
2. Create RBD
3. Map RBD
4. Mount RBD
5. Write data to mount
6. Delete pool
7. Stop OSD cluster daemons (stop ceph-all)
8. Server which has RBD mapped will panic
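The steps above can be sketched as a shell transcript. Pool name, image name, size, and mount point are illustrative, not from the report; this needs a live Ceph cluster and, per the report, will panic the client, so only run it on a disposable test box:

```shell
# Assumed names: pool "testpool", image "testimg", mount point /mnt/testimg
ceph osd pool create testpool 64                       # 1. create pool
rbd create testpool/testimg --size 1024                # 2. create a 1 GiB RBD image
rbd map testpool/testimg                               # 3. map it (appears as e.g. /dev/rbd0)
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/testimg
mount /dev/rbd0 /mnt/testimg                           # 4. mount it
dd if=/dev/zero of=/mnt/testimg/data bs=1M count=100   # 5. write data
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it   # 6. delete pool
stop ceph-all                                          # 7. stop daemons (Upstart, Ubuntu 14.04)
# 8. the client that still has /dev/rbd0 mapped panics
```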

We can reproduce this reliably. Here is our dev environment:

- Ubuntu 14.04 LTS, 3.16.0-38-generic (linux-generic-lts-utopic kernel)
- Ceph 0.94.1-1trusty packages
- megaraid_sas kernel module (default) on OSDs

I realise it is not a good idea to delete a pool without first removing the RBD images - but deleting them can take time, and I'm not always 100% sure whether or not they are mapped. The default behaviour should be to warn the user (dmesg?) - assuming this is a bug...
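Until the kernel client handles this gracefully, one workaround is to check each client for mapped images of the pool before deleting it. A minimal sketch, assuming the hammer-era `rbd showmapped` column layout (id, pool, image, snap, device); the pool name and the helper function are illustrative:

```shell
POOL=testpool   # illustrative pool name

# Print the device paths of mapped images belonging to a given pool,
# reading `rbd showmapped` output on stdin (skip the header row,
# match the pool column, emit the device column).
mapped_devices() {
    awk -v p="$1" 'NR > 1 && $2 == p { print $NF }'
}

# On each client, before deleting the pool (requires a live cluster):
#   rbd showmapped | mapped_devices "$POOL" | while read dev; do
#       umount "$dev" 2>/dev/null   # unmount first if mounted
#       rbd unmap "$dev"
#   done
#   rbd ls "$POOL" | xargs -r -I{} rbd rm "$POOL/{}"
#   ceph osd pool delete "$POOL" "$POOL" --yes-i-really-really-mean-it
```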

Kind Regards,
Alex.


Files

after_removal.jpg (37.5 KB), Alex Leake, 06/16/2015 01:44 PM
panic.jpg (90.5 KB), Alex Leake, 06/16/2015 01:44 PM

Related issues (0 open, 2 closed)

Related to Linux kernel client - Bug #8568: libceph: kernel BUG at net/ceph/osd_client.c:885 (Closed, Ilya Dryomov, 06/10/2014)
Related to Linux kernel client - Feature #9779: libceph: sync up with objecter (Resolved, Ilya Dryomov, 10/14/2014)