Bug #275 (closed)

Unable to remove module when monitors or mds'es are down

Added by Wido den Hollander almost 14 years ago. Updated over 13 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%


Description

When a cluster fails, you can unmount the filesystem with "umount -lf /path/to/ceph"; that works fine.

But then "rmmod ceph" fails with the message "ceph is still in use".

[38079.360211] ceph: mon1 [2001:16f8:10:2::c3c3:2e5c]:6789 connection failed
[38089.380202] ceph: mon0 [2001:16f8:10:2::c3c3:3f9b]:6789 connection failed

Those messages keep coming back in dmesg.

Imho you should be able to remove the module when no filesystem is mounted.
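
A minimal sketch of the sequence being described, assuming a mount point of /path/to/ceph (the exact path is just a placeholder):

umount -lf /path/to/ceph   # lazy + force unmount; appears to succeed
rmmod ceph                 # fails, reporting that ceph is still in use
dmesg | tail               # the mon0/mon1 "connection failed" lines keep appearing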

Actions #1

Updated by Yehuda Sadeh almost 14 years ago

I'm not sure the -l is appropriate here. It just gives you the illusion that it actually did something, but it did nothing, as it's still waiting for the open files to be released. Doing just 'umount -f' will probably give you 'device is busy', as there are hung operations that we're waiting on.
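
One rough way to see what the -l is (and isn't) doing, assuming the usual sysfs layout for module reference counts:

umount -l /path/to/ceph
grep ceph /proc/mounts        # the mount is gone from the namespace...
cat /sys/module/ceph/refcnt   # ...but the module's reference count is still non-zero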

Actions #2

Updated by Sage Weil almost 14 years ago

Yeah, you should really do umount -f. If that fails with 'filesystem busy', then kill -9 on the running procs should work. If it doesn't, that's a bug. And if umount -f (no -l) succeeds but rmmod fails, that's also a bug. The -l makes it unclear which might be broken...
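
A sketch of that sequence, assuming the same /path/to/ceph mount point; using fuser to kill the processes holding the mount is just one way to do the kill -9 step:

umount -f /path/to/ceph    # force unmount, without -l
fuser -km /path/to/ceph    # if umount reports 'busy': SIGKILL the procs using the mount
umount -f /path/to/ceph    # retry the force unmount
rmmod ceph                 # should succeed once nothing is mounted; if not, it's a bug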

Actions #3

Updated by Sage Weil almost 14 years ago

  • Status changed from New to Can't reproduce