Bug #12340
ceph-helpers.sh: test_get_last_scrub_stamp: wait_for_clean fails
Status:
Closed
% Done:
0%
Source:
other
Regression:
No
Severity:
3 - minor
Description
osd.0 is not up, so the cluster cannot become clean:
"osds": [ { "osd": 0, "uuid": "d0bf3a6f-f6c7-47d6-be01-3b66da6c1aed", "up": 0, "in": 1, "weight": 1.000000,
The debug output shows:
... ../qa/workunits/ceph-helpers.sh:555: wait_for_osd: grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0 up in weight 1 up_from 5 up_thru 6 down_at 0 last_clean_interval [0,0) 127.0.0.1:6804/29933 127.0.0.1:6805/29933 127.0.0.1:6806/29933 127.0.0.1:6807/29933 exists,up d0bf3a6f-f6c7-47d6-be01-3b66da6c1aed
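The grep succeeds, so wait_for_osd returns 0 (the status line below). For context, a minimal sketch of the kind of retry loop such a helper runs; this is an illustration with an assumed retry budget, not the actual wait_for_osd implementation:

wait_for_osd_up() {          # hypothetical name; illustration only
    local id=$1
    local tries=20           # assumed retry budget, not the helper's
    while [ $tries -gt 0 ] ; do
        # same check the trace above shows at ceph-helpers.sh:555
        ceph osd dump | grep "osd.$id up" && return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}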
../qa/workunits/ceph-helpers.sh:558: wait_for_osd: status=0
Shortly after that the test calls wait_for_clean, and the ceph report displayed 120 seconds later (the excerpt at the top) shows that osd.0 is no longer up.
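The race is easier to see against a sketch of what wait_for_clean does: poll until the cluster is clean, and dump a cluster report once the timeout expires. The sketch below is a simplified illustration, not the real helper: it approximates "clean" with HEALTH_OK, and the 120-second default only mirrors the wait observed above:

wait_for_clean_sketch() {
    local timeout=${TIMEOUT:-120}              # mirrors the 120s wait above
    local deadline=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -lt "$deadline" ] ; do
        # crude approximation: HEALTH_OK implies all PGs are active+clean
        ceph health | grep -q HEALTH_OK && return 0
        sleep 1
    done
    ceph report                                # the report quoted at the top
    return 1
}

If osd.0 drops between the wait_for_osd check and this loop, the health check never passes, the loop falls through to the report, and that report shows "up": 0 for osd.0, which is exactly the failure above.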