Bug #45701
rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health check
Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
teuthology
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
/a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083267
2020-05-22T23:10:40.068 INFO:tasks.cephadm:Deploying osd.5 on smithi076 with /dev/vg_nvme/lv_2...
2020-05-22T23:10:40.068 INFO:teuthology.orchestra.run.smithi076:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:c3321b7686f181e1bcb805a1fb24baced390ae4c shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 689a6aee-9c7f-11ea-a06a-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_2
2020-05-22T23:10:40.101 INFO:ceph.osd.4.smithi076.stdout:-- Logs begin at Fri 2020-05-22 22:48:29 UTC. --
2020-05-22T23:10:40.297 INFO:ceph.mon.smithi076.smithi076.stdout:May 22 23:10:40 smithi076 bash[26535]: cluster 2020-05-22T23:10:39.069248+0000 mon.smithi071 (mon.0) 378 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
2020-05-22T23:17:27.028 INFO:teuthology.orchestra.run.smithi071.stdout:{"status":"HEALTH_WARN","checks":{"CEPHADM_REFRESH_FAILED":{"severity":"HEALTH_WARN","summary":{"message":"failed to probe daemons or devices","count":1},"muted":false}},"mutes":[]}
2020-05-22T23:17:27.028 INFO:tasks.cephadm:Teardown begin
2020-05-22T23:17:27.029 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_py2/teuthology/contextutil.py", line 34, in nested
    yield vars
  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-yuri-master_5.22.20/qa/tasks/cephadm.py", line 1129, in task
    healthy(ctx=ctx, config=config)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-yuri-master_5.22.20/qa/tasks/ceph.py", line 1423, in healthy
    manager.wait_until_healthy(timeout=300)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-yuri-master_5.22.20/qa/tasks/ceph_manager.py", line 2894, in wait_until_healthy
    'timeout expired in wait_until_healthy'
AssertionError: timeout expired in wait_until_healthy
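The failure mode above is that `wait_until_healthy` keeps polling cluster health, and because the CEPHADM_REFRESH_FAILED check holds the status at HEALTH_WARN, the 300-second timeout expires. A minimal sketch of that status check against the health JSON captured in the log (the `is_healthy` helper is illustrative, not the actual teuthology code):

```python
import json

# Health JSON copied verbatim from the teuthology log above.
health_json = (
    '{"status":"HEALTH_WARN","checks":{"CEPHADM_REFRESH_FAILED":'
    '{"severity":"HEALTH_WARN","summary":{"message":"failed to probe '
    'daemons or devices","count":1},"muted":false}},"mutes":[]}'
)

def is_healthy(raw):
    """Return True only when `ceph health` reports HEALTH_OK."""
    return json.loads(raw)["status"] == "HEALTH_OK"

# With the warning above present, the poll loop never sees HEALTH_OK,
# so the assertion in wait_until_healthy fires after the timeout.
print(is_healthy(health_json))  # False
```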
Related issues
History
#1 Updated by Brad Hubbard almost 4 years ago
/a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085552
/a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085512
/a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085548
/a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085520
#2 Updated by Brad Hubbard almost 4 years ago
- Duplicates Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 429 (Too Many Requests) added
#3 Updated by Brad Hubbard almost 4 years ago
- Status changed from New to Duplicate
#4 Updated by Deepika Upadhyay over 3 years ago
cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
#5 Updated by Brad Hubbard over 3 years ago
Deepika, https://tracker.ceph.com/issues/45701#note-4 is actually https://tracker.ceph.com/issues/45421