Bug #64344
rados/cephadm/dashboard: test that expects a HOST_MAINTENANCE_MODE scenario fails due to warning in cluster log
Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Following the merge of https://github.com/ceph/ceph/pull/41479 and https://github.com/ceph/ceph/pull/54312, warnings such as HOST_IN_MAINTENANCE are now raised in situations that our test environments deliberately create (e.g. placing a host into maintenance mode). Their appearance in the cluster log causes otherwise-passing runs to fail, so these warnings need to be whitelisted.
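
The fix would presumably add the new health check to the log ignorelist of the affected suites. A minimal sketch of such an override fragment in a teuthology suite yaml (the exact file and any surrounding entries are assumptions, not taken from this report):

```yaml
overrides:
  ceph:
    log-ignorelist:
      # Expected while the dashboard e2e test places a host into
      # maintenance mode (HOST_IN_MAINTENANCE in the cluster log).
      - \(HOST_IN_MAINTENANCE\)
```

log-ignorelist entries are matched as regular expressions against cluster log lines, which is why the parentheses around the health-check code are escaped.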
/a/yuriw-2024-02-06_00:23:51-rados-wip-yuri10-testing-2024-02-02-1149-pacific-distro-default-smithi/7548081
2024-02-06T17:47:39.315 INFO:tasks.workunit.client.0.smithi007.stdout: ✓ should enter host into maintenance
2024-02-06T17:47:39.983 INFO:journalctl@ceph.mon.a.smithi007.stdout:Feb 06 17:47:39 smithi007 ceph-56b84c6a-c516-11ee-95b6-87774f69a715-mon-a[39287]: cluster 2024-02-06T17:47:38.220823+0000 mgr.a (mgr.14146) 328 : cluster [DBG] pgmap v285: 0 pgs: ; 0 B data, 871 MiB used, 267 GiB / 268 GiB avail
2024-02-06T17:47:39.983 INFO:journalctl@ceph.mon.a.smithi007.stdout:Feb 06 17:47:39 smithi007 ceph-56b84c6a-c516-11ee-95b6-87774f69a715-mon-a[39287]: audit 2024-02-06T17:47:38.557896+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14146 172.21.15.7:0/3734661507' entity='mgr.a'
2024-02-06T17:47:39.983 INFO:journalctl@ceph.mon.a.smithi007.stdout:Feb 06 17:47:39 smithi007 ceph-56b84c6a-c516-11ee-95b6-87774f69a715-mon-a[39287]: cluster 2024-02-06T17:47:38.558260+0000 mgr.a (mgr.14146) 329 : cluster [DBG] pgmap v286: 0 pgs: ; 0 B data, 871 MiB used, 267 GiB / 268 GiB avail
2024-02-06T17:47:40.983 INFO:journalctl@ceph.mon.a.smithi007.stdout:Feb 06 17:47:40 smithi007 ceph-56b84c6a-c516-11ee-95b6-87774f69a715-mon-a[39287]: cluster 2024-02-06T17:47:39.556426+0000 mon.a (mon.0) 371 : cluster [WRN] Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)
2024-02-06T17:47:41.983 INFO:journalctl@ceph.mon.a.smithi007.stdout:Feb 06 17:47:41 smithi007 ceph-56b84c6a-c516-11ee-95b6-87774f69a715-mon-a[39287]: cluster 2024-02-06T17:47:40.558525+0000 mgr.a (mgr.14146) 330 : cluster [DBG] pgmap v287: 0 pgs: ; 0 B data, 871 MiB used, 267 GiB / 268 GiB avail
2024-02-06T17:47:42.983 INFO:journalctl@ceph.mon.a.smithi007.stdout:Feb 06 17:47:42 smithi007 ceph-56b84c6a-c516-11ee-95b6-87774f69a715-mon-a[39287]: audit 2024-02-06T17:47:41.658557+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14146 172.21.15.7:0/3734661507' entity='mgr.a'
2024-02-06T17:47:42.983 INFO:journalctl@ceph.mon.a.smithi007.stdout:Feb 06 17:47:42 smithi007 ceph-56b84c6a-c516-11ee-95b6-87774f69a715-mon-a[39287]: cluster 2024-02-06T17:47:41.658914+0000 mgr.a (mgr.14146) 331 : cluster [DBG] pgmap v288: 0 pgs: ; 0 B data, 871 MiB used, 267 GiB / 268 GiB avail
2024-02-06T17:47:43.205 INFO:tasks.workunit.client.0.smithi007.stdout: ✓ should exit host from maintenance
2024-02-06T17:47:43.206 INFO:tasks.workunit.client.0.smithi007.stdout:
2024-02-06T17:47:43.207 INFO:tasks.workunit.client.0.smithi007.stdout:
2024-02-06T17:47:43.207 INFO:tasks.workunit.client.0.smithi007.stdout: 7 passing (51s)
2024-02-06T17:47:43.207 INFO:tasks.workunit.client.0.smithi007.stdout:
2024-02-06T17:47:43.215 INFO:tasks.workunit.client.0.smithi007.stdout:
2024-02-06T17:47:43.216 INFO:tasks.workunit.client.0.smithi007.stdout: (Results)
2024-02-06T17:47:43.219 INFO:tasks.workunit.client.0.smithi007.stdout:
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Tests: 7 │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Passing: 7 │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Failing: 0 │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Pending: 0 │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Skipped: 0 │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Screenshots: 0 │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Video: false │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Duration: 50 seconds │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: │ Spec Ran: orchestrator/01-hosts.e2e-spec.ts │
2024-02-06T17:47:43.221 INFO:tasks.workunit.client.0.smithi007.stdout: └────────────────────────────────────────────────────────────────────────────────────────────────┘
...
2024-02-06T17:51:07.298 INFO:teuthology.orchestra.run.smithi007.stdout:2024-02-06T17:47:39.556426+0000 mon.a (mon.0) 371 : cluster [WRN] Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)