Bug #12340 (closed)

ceph-helpers.sh: test_get_last_scrub_stamp: wait_for_clean fails

Added by Loïc Dachary almost 9 years ago. Updated over 8 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

osd.0 is not up, so the cluster cannot become clean:

        "osds": [
            {
                "osd": 0,
                "uuid": "d0bf3a6f-f6c7-47d6-be01-3b66da6c1aed",
                "up": 0,
                "in": 1,
                "weight": 1.000000,
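A state like the one above can be detected with a plain shell check. The helper below is a hypothetical sketch, not part of ceph-helpers.sh: it greps a report fragment for the `"up": 0` field quoted above, as a crude stand-in for real JSON parsing.

```shell
#!/bin/sh
# Hypothetical helper (not from ceph-helpers.sh): report whether a JSON
# report fragment on stdin contains an OSD marked down ("up": 0).
has_down_osd() {
    grep -q '"up": 0'
}

# Example against a fragment like the one quoted in the description:
if echo '{ "osd": 0, "up": 0, "in": 1 }' | has_down_osd ; then
    echo "cluster cannot become clean: an OSD is down"
fi
```

A real check would use `ceph report` output and a JSON parser, but the grep is enough to illustrate why wait_for_clean cannot succeed while any OSD reports `"up": 0`.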

The debug output shows:

...
../qa/workunits/ceph-helpers.sh:555: wait_for_osd:  grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0 up   in  weight 1 up_from 5 up_thru 6 down_at 0 last_clean_interval [0,0) 127.0.0.1:6804/29933 127.0.0.1:6805/29933 127.0.0.1:6806/29933 127.0.0.1:6807/29933 exists,up d0bf3a6f-f6c7-47d6-be01-3b66da6c1aed

Shortly after that, the test calls wait_for_clean, and the ceph report displayed 120 seconds later shows the OSD is no longer up.

../qa/workunits/ceph-helpers.sh:558: wait_for_osd: status=0
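Both wait_for_osd and wait_for_clean follow the same poll-until-timeout pattern. The function below is a minimal sketch of that pattern under assumed names (`wait_for` is not the real helper; the real wait_for_clean polls cluster state with a 120-second budget):

```shell
#!/bin/sh
# Hypothetical sketch of the polling pattern used by wait_for_osd /
# wait_for_clean in ceph-helpers.sh: retry a predicate once per second
# until it succeeds or the timeout (120 seconds in the real helper)
# expires.
wait_for() {
    timeout=$1 ; shift
    i=0
    while [ "$i" -lt "$timeout" ] ; do
        if "$@" ; then
            return 0    # predicate succeeded; cluster state reached
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1            # timed out; caller dumps a report for debugging
}

# Example: the predicate is immediately true, so no retries are needed.
wait_for 5 true && echo "clean"
```

This structure explains the failure mode in the report: wait_for_osd's predicate (grep 'osd.0 up') was true at the moment it ran, but the OSD went down again before wait_for_clean's predicate could ever succeed, so the second loop ran out its full timeout.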


Files

test.txt (503 KB) — full output of the failed test (Loïc Dachary, 07/15/2015 08:48 PM)
#2

Updated by Loïc Dachary over 8 years ago

  • Status changed from Need More Info to Can't reproduce

This has not happened in a long time.

