Bug #57386


cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did

Added by Yaarit Hatuka over 1 year ago. Updated 3 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Development
Tags:
backport_processed
Backport:
quincy, pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Happened in a Quincy run:

http://qa-proxy.ceph.com/teuthology/yuriw-2022-09-01_16:26:28-rados-wip-lflores-testing-2-2022-08-26-2240-quincy-distro-default-smithi/7005514/teuthology.log

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=96808b212a3260b0ed0435d4ab1f1a1b796d377d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
2022-09-01T19:11:59.952 INFO:tasks.workunit.client.0.smithi119.stdout:      1) should edit host labels
2022-09-01T19:12:00.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: pgmap v317: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:00.405 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2022-09-01T19:12:00.406 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2022-09-01T19:12:00.406 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:11:59 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2022-09-01T19:12:01.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:01 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:01.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:01 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:01.405 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:01 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:02.411 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:02 smithi119 ceph-mon[102611]: pgmap v318: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:02.411 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:02 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:02.693 INFO:tasks.workunit.client.0.smithi119.stdout:      ✓ should enter host into maintenance
2022-09-01T19:12:03.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:03 smithi119 ceph-mon[102611]: pgmap v319: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:03.405 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:03 smithi119 ceph-mon[102611]: Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)
2022-09-01T19:12:05.404 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:05 smithi119 ceph-mon[102611]: pgmap v320: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:06.775 INFO:tasks.workunit.client.0.smithi119.stdout:      ✓ should exit host from maintenance
2022-09-01T19:12:06.776 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.779 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.780 INFO:tasks.workunit.client.0.smithi119.stdout:  6 passing (3m)
2022-09-01T19:12:06.780 INFO:tasks.workunit.client.0.smithi119.stdout:  1 failing
2022-09-01T19:12:06.781 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.781 INFO:tasks.workunit.client.0.smithi119.stdout:  1) Hosts page
2022-09-01T19:12:06.781 INFO:tasks.workunit.client.0.smithi119.stdout:       when Orchestrator is available
2022-09-01T19:12:06.782 INFO:tasks.workunit.client.0.smithi119.stdout:         should edit host labels:
2022-09-01T19:12:06.782 INFO:tasks.workunit.client.0.smithi119.stdout:     AssertionError: Timed out retrying after 120000ms: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did.
2022-09-01T19:12:06.782 INFO:tasks.workunit.client.0.smithi119.stdout:      at HostsPageHelper.editLabels (https://172.21.15.119:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-spec.ts:134:24)
2022-09-01T19:12:06.783 INFO:tasks.workunit.client.0.smithi119.stdout:      at Context.eval (https://172.21.15.119:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-spec.ts:360:19)
2022-09-01T19:12:06.783 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.783 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.784 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.790 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.791 INFO:tasks.workunit.client.0.smithi119.stdout:  (Results)
2022-09-01T19:12:06.795 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.801 INFO:tasks.workunit.client.0.smithi119.stdout:  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
2022-09-01T19:12:06.802 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Tests:        7                                                                                │
2022-09-01T19:12:06.802 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Passing:      6                                                                                │
2022-09-01T19:12:06.803 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Failing:      1                                                                                │
2022-09-01T19:12:06.803 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Pending:      0                                                                                │
2022-09-01T19:12:06.803 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Skipped:      0                                                                                │
2022-09-01T19:12:06.804 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Screenshots:  1                                                                                │
2022-09-01T19:12:06.804 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Video:        false                                                                            │
2022-09-01T19:12:06.805 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Duration:     2 minutes, 50 seconds                                                            │
2022-09-01T19:12:06.805 INFO:tasks.workunit.client.0.smithi119.stdout:  │ Spec Ran:     orchestrator/01-hosts.e2e-spec.ts                                                │
2022-09-01T19:12:06.806 INFO:tasks.workunit.client.0.smithi119.stdout:  └────────────────────────────────────────────────────────────────────────────────────────────────┘
2022-09-01T19:12:06.806 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.806 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.807 INFO:tasks.workunit.client.0.smithi119.stdout:  (Screenshots)
2022-09-01T19:12:06.807 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.808 INFO:tasks.workunit.client.0.smithi119.stdout:  -  /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/dashboard/frontend/cypress/s     (1280x720)
2022-09-01T19:12:06.808 INFO:tasks.workunit.client.0.smithi119.stdout:     creenshots/orchestrator/01-hosts.e2e-spec.ts/Hosts page -- when Orchestrator is
2022-09-01T19:12:06.809 INFO:tasks.workunit.client.0.smithi119.stdout:     available -- should edit host labels (failed).png
2022-09-01T19:12:06.809 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.811 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.812 INFO:tasks.workunit.client.0.smithi119.stdout:Important notice: Your Applitools visual tests are currently running with a testConcurrency value of 5.
2022-09-01T19:12:06.812 INFO:tasks.workunit.client.0.smithi119.stdout:This means that only up to 5 visual tests can run in parallel, and therefore the execution might be slower.
2022-09-01T19:12:06.812 INFO:tasks.workunit.client.0.smithi119.stdout:If your Applitools license supports a higher concurrency level, learn how to configure it here: https://www.npmjs.com/package/@applitools/eyes-cypress#concurrency.
2022-09-01T19:12:06.813 INFO:tasks.workunit.client.0.smithi119.stdout:Need a higher concurrency in your account? Email us @ sdr@applitools.com with your required concurrency level.
2022-09-01T19:12:06.814 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.827 INFO:tasks.workunit.client.0.smithi119.stderr:tput: No value for $TERM and no -T specified
2022-09-01T19:12:06.828 INFO:tasks.workunit.client.0.smithi119.stdout:================================================================================
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:  (Run Finished)
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.829 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:       Spec                                              Tests  Passing  Failing  Pending  Skipped
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:  │ ✖  orchestrator/01-hosts.e2e-spec.ts        02:50        7        6        1        -        - │
2022-09-01T19:12:06.830 INFO:tasks.workunit.client.0.smithi119.stdout:  └────────────────────────────────────────────────────────────────────────────────────────────────┘
2022-09-01T19:12:06.831 INFO:tasks.workunit.client.0.smithi119.stdout:    ✖  1 of 1 failed (100%)                     02:50        7        6        1        -        -
2022-09-01T19:12:06.831 INFO:tasks.workunit.client.0.smithi119.stdout:
2022-09-01T19:12:06.884 DEBUG:teuthology.orchestra.run:got remote process result: 1
2022-09-01T19:12:06.885 INFO:tasks.workunit:Stopping ['cephadm/test_dashboard_e2e.sh'] on client.0...
2022-09-01T19:12:06.886 DEBUG:teuthology.orchestra.run.smithi119:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2022-09-01T19:12:06.904 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:06 smithi119 ceph-mon[102611]: from='mgr.14150 172.21.15.119:0/876211869' entity='mgr.a'
2022-09-01T19:12:06.904 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:06 smithi119 ceph-mon[102611]: pgmap v321: 1 pgs: 1 active+clean; 577 KiB data, 16 MiB used, 268 GiB / 268 GiB avail
2022-09-01T19:12:07.904 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:07 smithi119 ceph-mon[102611]: Health check cleared: HOST_IN_MAINTENANCE (was: 1 host is in maintenance mode)
2022-09-01T19:12:07.905 INFO:journalctl@ceph.mon.a.smithi119.stdout:Sep 01 19:12:07 smithi119 ceph-mon[102611]: Cluster is now healthy
2022-09-01T19:12:08.687 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/run_tasks.py", line 103, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/run_tasks.py", line 82, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_96808b212a3260b0ed0435d4ab1f1a1b796d377d/qa/tasks/workunit.py", line 135, in task
    coverage_and_limits=not config.get('no_coverage_and_limits', None))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_96808b212a3260b0ed0435d4ab1f1a1b796d377d/qa/tasks/workunit.py", line 427, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/remote.py", line 510, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=96808b212a3260b0ed0435d4ab1f1a1b796d377d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
2022-09-01T19:12:08.924 ERROR:teuthology.run_tasks: Sentry event: https://sentry.ceph.com/organizations/ceph/?query=fd17b73f9cd04ac3b49a313fa3fdce96
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/run_tasks.py", line 103, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/run_tasks.py", line 82, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_96808b212a3260b0ed0435d4ab1f1a1b796d377d/qa/tasks/workunit.py", line 135, in task
    coverage_and_limits=not config.get('no_coverage_and_limits', None))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_96808b212a3260b0ed0435d4ab1f1a1b796d377d/qa/tasks/workunit.py", line 427, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/remote.py", line 510, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b58f2c18636eb10faa77ed3614abd00cb85dfc2c/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=96808b212a3260b0ed0435d4ab1f1a1b796d377d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
2022-09-01T19:12:08.927 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2022-09-01T19:12:08.945 INFO:tasks.cephadm:Teardown begin

Looks related to:
https://tracker.ceph.com/issues/53781
https://tracker.ceph.com/issues/53499
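
For context on the failure mode: the e2e test edits a host's labels and then waits for a badge reading "foo" to appear inside the modal, retrying the content match against the regex `/^foo$/` until the 120000 ms timeout. The sketch below is a minimal, plain-TypeScript illustration of that content check, not Cypress's actual implementation; `containsContent` and `badgeTexts` are hypothetical names used only for illustration.

```typescript
// Illustrative stand-in for a Cypress-style `contains(/^foo$/)` content check.
// NOT the real Cypress implementation; names here are hypothetical.
function containsContent(texts: string[], matcher: RegExp): boolean {
  // Match the regex against each candidate element's trimmed text.
  return texts.some((t) => matcher.test(t.trim()));
}

// The test expects a badge labelled "foo" inside 'cd-modal .badge'.
// If that label never renders, every retry returns false and the
// assertion eventually fails with the timeout error seen above.
const badgeTexts: string[] = ["mon", "mgr"]; // "foo" never rendered
console.log(containsContent(badgeTexts, /^foo$/)); // false
console.log(containsContent(["foo"], /^foo$/));    // true
```

Because the check is anchored with `^` and `$`, a badge whose text is anything other than exactly "foo" (after trimming) will never satisfy it, which is why the retry loop runs to the full timeout rather than failing fast.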


Related issues 3 (0 open, 3 closed)

Related to Dashboard - Bug #57207: AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. (Resolved, Nizamudeen A)

Copied to Dashboard - Backport #57828: quincy: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did (Resolved, Nizamudeen A)
Copied to Dashboard - Backport #57829: pacific: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did (Resolved, Nizamudeen A)
Actions #1

Updated by Yaarit Hatuka over 1 year ago

  • Tags set to test-failure
  • Tags deleted (test-failure)
Actions #2

Updated by Kefu Chai over 1 year ago

pacific also: /a/kchai-2022-09-03_01:50:21-rados-wip-pacific-update-fio-kefu-8-distro-default-smithi/7009265/

Actions #3

Updated by Kefu Chai over 1 year ago

  • Backport changed from quincy to quincy, pacific
Actions #4

Updated by Nitzan Mordechai over 1 year ago

/a/yuriw-2022-09-03_14:52:22-rados-wip-yuri-testing-2022-09-02-0945-quincy-distro-default-smithi/7009632

Actions #5

Updated by Laura Flores over 1 year ago

  • Related to Bug #57207: AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. added
Actions #6

Updated by Laura Flores over 1 year ago

/a/yuriw-2022-09-04_23:20:10-rados-wip-yuri10-testing-2022-09-04-0811-quincy-distro-default-smithi/7011701

Actions #7

Updated by Matan Breizman over 1 year ago

/a/yuriw-2022-09-14_13:16:11-rados-wip-yuri6-testing-2022-09-13-1352-distro-default-smithi/7032198

Actions #8

Updated by Laura Flores over 1 year ago

/a/yuriw-2022-09-23_20:38:59-rados-wip-yuri6-testing-2022-09-23-1008-quincy-distro-default-smithi/7042734

Actions #9

Updated by Kamoltat (Junior) Sirivadhna over 1 year ago

yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046226

Actions #10

Updated by Nizamudeen A over 1 year ago

  • Status changed from New to Pending Backport
  • Assignee set to Nizamudeen A
  • Pull request ID set to 48163
Actions #11

Updated by Backport Bot over 1 year ago

  • Copied to Backport #57828: quincy: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did added
Actions #12

Updated by Backport Bot over 1 year ago

  • Copied to Backport #57829: pacific: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did added
Actions #13

Updated by Backport Bot over 1 year ago

  • Tags set to backport_processed
Actions #14

Updated by Laura Flores about 1 year ago

/a/yuriw-2023-02-03_23:37:14-rados-wip-yuri4-testing-2023-02-03-1341-pacific-distro-default-smithi/7148404

Actions #15

Updated by Laura Flores about 1 year ago

/a/yuriw-2023-03-13_19:57:13-rados-wip-yuri6-testing-2023-03-12-0918-pacific-distro-default-smithi/7205989

Actions #16

Updated by Laura Flores almost 1 year ago

/a/yuriw-2023-04-04_21:18:37-rados-wip-yuri3-testing-2023-04-04-0833-pacific-distro-default-smithi/7231811

Actions #17

Updated by Laura Flores 12 months ago

/a/yuriw-2023-04-25_14:15:40-rados-pacific-release-distro-default-smithi/7251538

Actions #18

Updated by Laura Flores 12 months ago

/a/yuriw-2023-04-25_18:56:08-rados-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/7252619

Actions #19

Updated by Laura Flores 12 months ago

/a/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/7264439

Actions #20

Updated by Laura Flores 12 months ago

/a/yuriw-2023-04-26_01:16:19-rados-wip-yuri11-testing-2023-04-25-1605-pacific-distro-default-smithi/7253763

Actions #21

Updated by Laura Flores 11 months ago

/a/yuriw-2023-05-17_19:39:18-rados-wip-yuri5-testing-2023-05-09-1324-pacific-distro-default-smithi/7276763

Actions #22

Updated by Laura Flores 10 months ago

/a/lflores-2023-07-05_17:10:12-rados-wip-yuri8-testing-2023-06-22-1309-pacific-distro-default-smithi/7327310

Actions #23

Updated by Laura Flores 9 months ago

/a/yuriw-2023-08-02_20:21:03-rados-wip-yuri3-testing-2023-08-01-0825-pacific-distro-default-smithi/7358598

Actions #24

Updated by Laura Flores 8 months ago

/a/yuriw-2023-08-21_23:10:07-rados-pacific-release-distro-default-smithi/7375110

Actions #25

Updated by Laura Flores 6 months ago

/a/yuriw-2023-10-25_14:34:26-rados-wip-yuri5-testing-2023-10-24-0737-pacific-distro-default-smithi/7436962

Actions #26

Updated by Laura Flores 5 months ago

/a/yuriw-2023-11-16_22:27:54-rados-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/7461192

Actions #27

Updated by Nitzan Mordechai 4 months ago

/a/yuriw-2023-12-26_16:04:49-rados-wip-yuri5-testing-2023-12-15-0747-pacific-distro-default-smithi/7501355
/a/yuriw-2023-12-26_16:04:49-rados-wip-yuri5-testing-2023-12-15-0747-pacific-distro-default-smithi/7501364
/a/yuriw-2023-12-26_16:04:49-rados-wip-yuri5-testing-2023-12-15-0747-pacific-distro-default-smithi/7501380

Actions #28

Updated by Nitzan Mordechai 4 months ago

/a/yuriw-2023-12-27_21:01:10-rados-wip-yuri7-testing-2023-12-27-1008-pacific-distro-default-smithi/7502386
/a/yuriw-2023-12-27_21:01:10-rados-wip-yuri7-testing-2023-12-27-1008-pacific-distro-default-smithi/7502334

Actions #29

Updated by Laura Flores 3 months ago

/a/yuriw-2024-01-25_17:23:47-rados-pacific-release-distro-default-smithi/7532536
/a/yuriw-2024-01-25_17:23:47-rados-pacific-release-distro-default-smithi/7532319

Actions #30

Updated by Nizamudeen A 3 months ago

  • Status changed from Pending Backport to Resolved