Bug #57386 (closed)

cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did

Added by Yaarit Hatuka over 1 year ago. Updated 3 months ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Development
Tags: backport_processed
Backport: quincy, pacific
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Happened in a Quincy run:

http://qa-proxy.ceph.com/teuthology/yuriw-2022-09-01_16:26:28-rados-wip-lflores-testing-2-2022-08-26-2240-quincy-distro-default-smithi/7005514/teuthology.log

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=96808b212a3260b0ed0435d4ab1f1a1b796d377d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
2022-09-01T19:11:59.952 INFO:tasks.workunit.client.0.smithi119.stdout:      1) should edit host labels
2022-09-01T19:12:00.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: pgmap v317: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:00.405 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2022-09-01T19:12:00.406 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2022-09-01T19:12:00.406 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2022-09-01T19:12:01.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:01 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:01.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:01 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:01.405 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:01 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:02.411 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:02 smithi119 ceph-mon[102611]: pgmap v318: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:02.411 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:02 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:02.693 INFO:tasks.workunit.client.0.smithi119.stdout:      ✓ should enter host into maintenance
2022-09-01T19:12:03.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:03 smithi119 ceph-mon[102611]: pgmap v319: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:03.405 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:03 smithi119 ceph-mon[102611]: Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)
2022-09-01T19:12:05.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:05 smithi119 ceph-mon[102611]: pgmap v320: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:06.775 INFO:tasks.workunit.client.0.smithi119.stdout:      ✓ should exit host from maintenance
2022-09-01T19:12:06.776 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.779 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.780 INFO:tasks.workunit.client.0.smithi119.stdout:  6 passing (3m)
2022-09-01T19:12:06.780 INFO:tasks.workunit.client.0.smithi119.stdout:  1 failing
2022-09-01T19:12:06.781 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.781 INFO:tasks.workunit.client.0.smithi119.stdout:  1) Hosts page
2022-09-01T19:12:06.781 INFO:tasks.workunit.client.0.smithi119.stdout:       when Orchestrator is available
2022-09-01T19:12:06.782 INFO:tasks.workunit.client.0.smithi119.stdout:         should edit host labels:
2022-09-01T19:12:06.782 INFO:tasks.workunit.client.0.smithi119.stdout:     AssertionError: Timed out retrying after 120000ms: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did.
2022-09-01T19:12:06.782 INFO:tasks.workunit.client.0.smithi119.stdout:      at HostsPageHelper.editLabels (https://172.21.15.119:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-spec.ts:134:24)
2022-09-01T19:12:06.783 INFO:tasks.workunit.client.0.smithi119.stdout:      at Context.eval (https://172.21.15.119:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-spec.ts:360:19)
2022-09-01T19:12:06.783 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.783 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.784 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.790 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.791 INFO:tasks.workunit.client.0.smithi119.stdout:  (Results)
2022-09-01T19:12:06.795 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.801 INFO:tasks.workunit.client.0.smithi119.stdout:  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
2022-09-01T19:12:06.802 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Tests:        7                                                                                │
2022-09-01T19:12:06.802 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Passing:      6                                                                                │
2022-09-01T19:12:06.803 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Failing:      1                                                                                │
2022-09-01T19:12:06.803 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Pending:      0                                                                                │
2022-09-01T19:12:06.803 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Skipped:      0                                                                                │
2022-09-01T19:12:06.804 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Screenshots:  1                                                                                │
2022-09-01T19:12:06.804 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Video:        false                                                                            │
2022-09-01T19:12:06.805 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Duration:     2 minutes, 50 seconds                                                            │
2022-09-01T19:12:06.805 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Spec Ran:     orchestrator/01-hosts.e2e-spec.ts                                                │
2022-09-01T19:12:06.806 INFO:tasks.workunit.client.0.smithi119.stdout:  └────────────────────────────────────────────────────────────────────────────────────────────────┘
2022-09-01T19:12:06.806 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.806 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.807 INFO:tasks.workunit.client.0.smithi119.stdout:  (Screenshots)
2022-09-01T19:12:06.807 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.808 INFO:tasks.workunit.client.0.smithi119.stdout:  -  /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/dashboard/frontend/cypress/s     (1280x720)
2022-09-01T19:12:06.808 INFO:tasks.workunit.client.0.smithi119.stdout:     creenshots/orchestrator/01-hosts.e2e-spec.ts/Hosts page -- when Orchestrator is
2022-09-01T19:12:06.809 INFO:tasks.workunit.client.0.smithi119.stdout:     available -- should edit host labels (failed).png
2022-09-01T19:12:06.809 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.811 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.812 INFO:tasks.workunit.client.0.smithi119.stdout:Important notice: Your Applitools visual tests are currently running with a testConcurrency value of 5.
2022-09-01T19:12:06.812 INFO:tasks.workunit.client.0.smithi119.stdout:This means that only up to 5 visual tests can run in parallel, and therefore the execution might be slower.
2022-09-01T19:12:06.812 INFO:tasks.workunit.client.0.smithi119.stdout:If your Applitools license supports a higher concurrency level, learn how to configure it here: https://www.npmjs.com/package/@applitools/eyes-cypress#concurrency.
2022-09-01T19:12:06.813 INFO:tasks.workunit.client.0.smithi119.stdout:Need a higher concurrency in your account? Email us @ sdr@applitools.com with your required concurrency level.
2022-09-01T19:12:06.814 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.827 INFO:tasks.workunit.client.0.smithi119.stderr:tput: No value for $TERM and no -T specified
2022-09-01T19:12:06.828 INFO:tasks.workunit.client.0.smithi119.stdout:================================================================================
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:  (Run Finished)
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:       Spec                                              Tests  Passing  Failing  Pending  Skipped
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:  │ ✖  orchestrator/01-hosts.e2e-spec.ts        02:50        7        6        1        -        - │
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:  └────────────────────────────────────────────────────────────────────────────────────────────────┘
2022-09-01T19:12:06.831 INFO:tasks.workunit.client.0.smithi119.stdout:    ✖  1 of 1 failed (100%)                     02:50        7        6        1        -        -
2022-09-01T19:12:06.831 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.884 DEBUG:teuthology.orchestra.run:got remote process result: 1
2022-09-01T19:12:06.885 INFO:tasks.workunit:Stopping ['cephadm/test_dashboard_e2e.sh'] on client.0...
2022-09-01T19:12:06.886 DEBUG:teuthology.orchestra.run.smithi119:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2022-09-01T19:12:06.904 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:06 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:06.904 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:06 smithi119 ceph-mon[102611]: pgmap v321: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:07.904 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:07 smithi119 ceph-mon[102611]: Health check cleared: HOST_IN_MAINTENANCE (was: 1 host is in maintenance mode)
2022-09-01T19:12:07.905 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:07 smithi119 ceph-mon[102611]: Cluster is now healthy
2022-09-01T19:12:08.687 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/run_tasks.py", line 103, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/run_tasks.py", line 82, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_96808b212a3260b0ed0435d4ab1f1a1b796d377d/qa/tasks/workunit.py", line 135, in task
    coverage_and_limits=not config.get('no_coverage_and_limits', None))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_96808b212a3260b0ed0435d4ab1f1a1b796d377d/qa/tasks/workunit.py", line 427, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/remote.py", line 510, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=96808b212a3260b0ed0435d4ab1f1a1b796d377d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
2022-09-01T19:12:08.924 ERROR:teuthology.run_tasks: Sentry event: https://sentry.ceph.com/organizations/ceph/?query=fd17b73f9cd04ac3b49a313fa3fdce96
2022-09-01T19:12:08.927 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2022-09-01T19:12:08.945 INFO:tasks.cephadm:Teardown begin
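
For context, the assertion that timed out is a standard Cypress content check, issued by HostsPageHelper.editLabels in 01-hosts.e2e-spec.ts after the label "foo" is submitted in the edit-labels modal. A minimal, hypothetical sketch of the pattern (the real helper's surrounding steps differ) looks like this:

    // Hypothetical sketch, not the actual helper; the real spec lives at
    // src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/01-hosts.e2e-spec.ts
    it('should edit host labels', () => {
      // ...open the "Edit Labels" modal for the host and submit the label 'foo'...

      // cy.contains(selector, content) keeps retrying until an element
      // matching 'cd-modal .badge' whose text matches /^foo$/ appears.
      // If it never renders before the 120000ms timeout, Cypress fails with
      // "Expected to find content: '/^foo$/' within the selector:
      // 'cd-modal .badge' but never did", the error seen in this report.
      cy.contains('cd-modal .badge', /^foo$/).should('exist');
    });

In this run the "foo" badge never rendered inside the modal within the two-minute retry window, producing the AssertionError above.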

Looks related to:
https://tracker.ceph.com/issues/53781
https://tracker.ceph.com/issues/53499


Related issues 3 (0 open, 3 closed)

Related to Dashboard - Bug #57207: AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. (Resolved; Nizamudeen A)

Copied to Dashboard - Backport #57828: quincy: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did (Resolved; Nizamudeen A)

Copied to Dashboard - Backport #57829: pacific: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did (Resolved; Nizamudeen A)
