Bug #49456

cephadm dashboard test: failed to connect to the server

Added by Sebastian Wagner 2 months ago. Updated 2 months ago.

Status: New
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

https://pulpito.ceph.com/swagner-2021-02-22_13:53:31-rados:cephadm-wip-swagner3-testing-2021-02-22-1135-distro-basic-smithi/5903621

2021-02-24T11:30:37.463 INFO:teuthology.orchestra.run.smithi036.stderr:Ceph Dashboard is now available at:
2021-02-24T11:30:37.463 INFO:teuthology.orchestra.run.smithi036.stderr:
2021-02-24T11:30:37.463 INFO:teuthology.orchestra.run.smithi036.stderr:      URL: https://smithi036.front.sepia.ceph.com:8443/
2021-02-24T11:30:37.464 INFO:teuthology.orchestra.run.smithi036.stderr:     User: admin
2021-02-24T11:30:37.464 INFO:teuthology.orchestra.run.smithi036.stderr: Password: v3h0cxe749
2021-02-24T11:30:37.464 INFO:teuthology.orchestra.run.smithi036.stderr:
2021-02-24T11:30:37.464 INFO:teuthology.orchestra.run.smithi036.stderr:You can access the Ceph CLI with:

... snip ...

2021-02-24T11:38:03.016 INFO:tasks.workunit.client.0.smithi036.stdout:Cypress could not verify that this server is running:
2021-02-24T11:38:03.017 INFO:tasks.workunit.client.0.smithi036.stdout:
2021-02-24T11:38:03.017 INFO:tasks.workunit.client.0.smithi036.stdout:  > https://ceph-76535e7c-7693-11eb-9035-001a4aab830c-mgr.a:8443/
2021-02-24T11:38:03.017 INFO:tasks.workunit.client.0.smithi036.stdout:
2021-02-24T11:38:03.017 INFO:tasks.workunit.client.0.smithi036.stdout:We are verifying this server because it has been configured as your `baseUrl`.
2021-02-24T11:38:03.018 INFO:tasks.workunit.client.0.smithi036.stdout:
2021-02-24T11:38:03.018 INFO:tasks.workunit.client.0.smithi036.stdout:Cypress automatically waits until your server is accessible before running tests.
2021-02-24T11:38:03.018 INFO:tasks.workunit.client.0.smithi036.stdout:
2021-02-24T11:38:03.018 INFO:tasks.workunit.client.0.smithi036.stdout:We will try connecting to it 3 more times...
2021-02-24T11:38:03.287 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:03 smithi036 conmon[31056]: audit 2021-02-24T11:38:02.081910+0000
2021-02-24T11:38:03.287 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:03 smithi036 conmon[31056]:  mon.a (mon.0) 261 : audit [INF] from='mgr.14144 172.21.15.36:0/788023904' entity='mgr.a'
2021-02-24T11:38:03.287 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:03 smithi036 conmon[31056]: audit 2021-02-24T11:38:02.084066+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14144 172.21.15.36:0/788023904' entity='mgr.a'
2021-02-24T11:38:03.288 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:03 smithi036 conmon[31056]: audit 2021-02-24T11:38:02.085244+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14144 172.21.15.36:0/788023904' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2021-02-24T11:38:03.288 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:03 smithi036 conmon[31056]: cluster 2021-02-24T11:38:02.510634+0000 mgr.a (mgr.14144) 266 : cluster [DBG] pgmap v238: 0 pgs: ; 0 B data, 15 MiB used, 268 GiB / 268 GiB avail
2021-02-24T11:38:05.786 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:05 smithi036 conmon[31056]: cluster 2021-02-24T11:38:04.510900+0000 mgr.a (mgr.14144) 267 : cluster [DBG] pgmap v239: 0 pgs: ; 0 B data, 15 MiB used, 268 GiB / 268 GiB avail
2021-02-24T11:38:06.023 INFO:tasks.workunit.client.0.smithi036.stdout:We will try connecting to it 2 more times...
2021-02-24T11:38:07.036 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:06 smithi036 conmon[31056]: audit 2021-02-24T11:38:05.576848+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14144 172.21.15.36:0/788023904' entity='mgr.a'
2021-02-24T11:38:07.037 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:06 smithi036 conmon[31056]: audit 2021-02-24T11:38:05.893489+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14144 172.21.15.36:0/788023904' entity='mgr.a'
2021-02-24T11:38:07.037 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:06 smithi036 conmon[31056]: audit 2021-02-24T11:38:05.896394+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14144 172.21.15.36:0/788023904' entity='mgr.a'
2021-02-24T11:38:08.037 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:07 smithi036 conmon[31056]: cluster 2021-02-24T11:38:06.511227+0000 mgr.a (mgr.14144) 268 : cluster [DBG] pgmap v240: 0 pgs: ; 0 B data, 15 MiB used, 268 GiB / 268 GiB avail
2021-02-24T11:38:09.025 INFO:tasks.workunit.client.0.smithi036.stdout:We will try connecting to it 1 more time...
2021-02-24T11:38:09.025 INFO:tasks.workunit.client.0.smithi036.stdout:
2021-02-24T11:38:09.538 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:09 smithi036 conmon[31056]: cluster 2021-02-24T11:38:08.511549+0000 mgr.a (mgr.14144
2021-02-24T11:38:09.539 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:09 smithi036 conmon[31056]: ) 269 : cluster [DBG] pgmap v241: 0 pgs: ; 0 B data, 15 MiB used, 268 GiB / 268 GiB avail
2021-02-24T11:38:12.037 INFO:journalctl@ceph.mon.a.smithi036.stdout:Feb 24 11:38:11 smithi036 conmon[31056]: cluster 2021-02-24T11:38:10.511979+0000 mgr.a (mgr.14144) 270 : cluster [DBG] pgmap v242: 0 pgs: ; 0 B data, 15 MiB used, 268 GiB / 268 GiB avail
2021-02-24T11:38:13.034 INFO:tasks.workunit.client.0.smithi036.stdout:Cypress failed to verify that your server is running.
2021-02-24T11:38:13.034 INFO:tasks.workunit.client.0.smithi036.stdout:
2021-02-24T11:38:13.035 INFO:tasks.workunit.client.0.smithi036.stdout:Please start this server and then run Cypress again.
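
For context: Cypress resolves and connects to the configured baseUrl before running any specs, and the baseUrl here points at the container hostname (ceph-76535e7c-7693-11eb-9035-001a4aab830c-mgr.a) rather than at smithi036 itself. Below is a minimal sketch of the kind of pre-flight reachability check that would surface this earlier; the loop is hypothetical (it is not what test_dashboard_e2e.sh does), and the URL is taken verbatim from the log above:

#!/usr/bin/env bash
# Hypothetical pre-flight check: poll the URL Cypress will use as baseUrl
# and fail fast with a clear error instead of letting Cypress burn through
# its three connection attempts.
BASE_URL="https://ceph-76535e7c-7693-11eb-9035-001a4aab830c-mgr.a:8443/"
for _ in $(seq 1 30); do
    # -k: the dashboard serves a self-signed certificate
    if curl -ks -o /dev/null --max-time 5 "$BASE_URL"; then
        echo "dashboard reachable at $BASE_URL"
        exit 0
    fi
    sleep 2
done
echo "dashboard not reachable at $BASE_URL after ~60s" >&2
exit 1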

History

#1 Updated by Kefu Chai 2 months ago

  • Priority changed from Normal to High

/a/kchai-2021-02-24_13:00:59-rados-wip-kefu-testing-2021-02-24-1742-distro-basic-smithi/5911838/

#2 Updated by Ken Dreyer 2 months ago

  • Subject changed from cephadm dashboard test: failed to connecto the the server to cephadm dashboard test: failed to connect to the server

#3 Updated by Kefu Chai 2 months ago

  • File consoleText.txt added

#4 Updated by Kefu Chai 2 months ago

  • File deleted (consoleText.txt)

#5 Updated by Kefu Chai 2 months ago

qa/workunits/cephadm/test_dashboard_e2e.sh (for SEO)

#6 Updated by Ernesto Puerta 2 months ago

How often does this one happen? We rarely see it in the non-cephadm dashboard e2e tests... It might be due to the differing cypress.json settings: the dashboard-only config lives in src/pybind/mgr/dashboard/frontend/cypress.json, while this one is generated via the CLI in qa/workunits/cephadm/test_dashboard_e2e.sh (see the sketch after this list):

  • retries is 1 in the dashboard config, but 0 in the cephadm one.
  • all dashboard e2e tests run with a 120 s timeout, while most cephadm ones have no timeout at all (infinite?), except for the OSD test, which has a 300 s timeout.
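
To make the difference concrete, here is a hedged sketch of what aligning the cephadm-generated config with the dashboard one could look like. "retries" and "defaultCommandTimeout" are real Cypress config keys (per-test retries landed in Cypress 5.0); the values mirror the comment above, and the heredoc itself is illustrative rather than the actual workunit code:

# Illustrative only: regenerate cypress.json with dashboard-style settings,
# so a single missed connection or slow command no longer fails the run
# outright.
cat > cypress.json <<'EOF'
{
  "retries": 1,
  "defaultCommandTimeout": 120000
}
EOF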
