Bug #7142
list_lockers() never returns after cluster restart and health_ok (librbdpy)
Status:
Resolved
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
Support
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
The cluster was shut off and then all nodes were restarted (mons came up first, then all OSDs at once). While the OSDs were stabilizing, we started the list_lockers() call as seen in (https://inktank.zendesk.com/agent/#/tickets/465), and the threads remained hung for a very long time (hours) even after all OSDs were OK.
Previously a similar issue came up when the cluster had a full OSD: http://tracker.ceph.com/issues/6070
History
#1 Updated by Sage Weil about 9 years ago
- Project changed from Ceph to rbd
#2 Updated by Josh Durgin about 9 years ago
- Status changed from New to Resolved
commit:609f4c56718d8279895b02b8163bbe1976c02bfb