Bug #58862
openunittest run-tox-mgr failed
Status:
New
Priority:
Normal
Assignee:
-
Category:
build
Target version:
-
% Done:
0%
Source:
Community (dev)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Ceph env
- Branch: pacific
- CommitID: 4b4d7c4a77505502c9acbf47943386e2e3c70621
Docker env
OS: CentOS 8.5.2111
Kernel: 5.4.0-100-generic
Python: 3.8.16 (via pyenv)
Pip: 23.0.1 (via pyenv)
Test log
The following tests passed:
run-tox-mgr-dashboard-check
run-tox-mgr-dashboard-openapi
run-tox-mgr-dashboard-lint
run-tox-mgr-dashboard-py3
80% tests passed, 1 tests failed out of 5
Total Test time (real) = 123.46 sec
The following tests FAILED:
1 - run-tox-mgr (Failed)
Errors while running CTest
Test Detail log
1: =================================== FAILURES ===================================
1: ___________________ TestMonitoring.test_alertmanager_config ____________________
1:
1: self = <cephadm.tests.test_services.TestMonitoring object at 0x7f6e253d5490>
1: mock_get = <MagicMock name='get' id='140110953497024'>
1: _run_cephadm = <MagicMock name='_run_cephadm' id='140110953500928'>
1: cephadm_module = <cephadm.module.CephadmOrchestrator object at 0x7f6e1fa24a00>
1:
1: @patch("cephadm.serve.CephadmServe._run_cephadm")
1: @patch("mgr_module.MgrModule.get")
1: def test_alertmanager_config(self, mock_get, _run_cephadm,
1: cephadm_module: CephadmOrchestrator):
1: _run_cephadm.return_value = ('{}', '', 0)
1: mock_get.return_value = {"services": {"dashboard": "http://[::1]:8080"}}
1:
1: with with_host(cephadm_module, 'test'):
1: with with_service(cephadm_module, AlertManagerSpec()):
1: y = dedent(self._get_config('http://localhost:8080')).lstrip()
1: > _run_cephadm.assert_called_with(
1: 'test',
1: 'alertmanager.test',
1: 'deploy',
1: [
1: '--name', 'alertmanager.test',
1: '--meta-json', '{"service_name": "alertmanager", "ports": [9093, 9094], "ip": null, "deployed_by": [], "rank": null, "rank_generation": null, "extra_container_args": null}',
1: '--config-json', '-', '--tcp-ports', '9093 9094'
1: ],
1: stdin=json.dumps({"files": {"alertmanager.yml": y}, "peers": []}),
1: image='')
1:
1: cephadm/tests/test_services.py:326:
1: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1:
1: self = <MagicMock name='_run_cephadm' id='140110953500928'>
1: args = ('test', 'alertmanager.test', 'deploy', ['--name', 'alertmanager.test', '--meta-json', '{"service_name": "alertmanager..., "deployed_by": [], "rank": null, "rank_generation": null, "extra_container_args": null}', '--config-json', '-', ...])
1: kwargs = {'image': '', 'stdin': '{"files": {"alertmanager.yml": "# This file is generated by cephadm.\\n# See https://prometheu...ceph-dashboard\'\\n webhook_configs:\\n - url: \'http://localhost:8080/api/prometheus_receiver\'\\n"}, "peers": []}'}
1: expected = (('test', 'alertmanager.test', 'deploy', ['--name', 'alertmanager.test', '--meta-json', '{"service_name": "alertmanage...eph-dashboard\'\\n webhook_configs:\\n - url: \'http://localhost:8080/api/prometheus_receiver\'\\n"}, "peers": []}'})
1: actual = call('test', 'alertmanager.test', 'deploy', ['--name', 'alertmanager.test', '--meta-json', '{"service_name": "alertman...\'\\n webhook_configs:\\n - url: \'http://ip6-localhost:8080/api/prometheus_receiver\'\\n"}, "peers": []}', image='')
1: _error_message = <function NonCallableMock.assert_called_with.<locals>._error_message at 0x7f6e1fc26e50>
1: cause = None
1:
1: def assert_called_with(self, /, *args, **kwargs):
1: """assert that the last call was made with the specified arguments.
1:
1: Raises an AssertionError if the args and keyword args passed in are
1: different to the last call to the mock."""
1: if self.call_args is None:
1: expected = self._format_mock_call_signature(args, kwargs)
1: actual = 'not called.'
1: error_message = ('expected call not found.\nExpected: %s\nActual: %s'
1: % (expected, actual))
1: raise AssertionError(error_message)
1:
1: def _error_message():
1: msg = self._format_mock_failure_message(args, kwargs)
1: return msg
1: expected = self._call_matcher((args, kwargs))
1: actual = self._call_matcher(self.call_args)
1: if expected != actual:
1: cause = expected if isinstance(expected, Exception) else None
1: > raise AssertionError(_error_message()) from cause
1: E AssertionError: expected call not found.
1: E Expected: _run_cephadm('test', 'alertmanager.test', 'deploy', ['--name', 'alertmanager.test', '--meta-json', '{"service_name": "alertmanager", "ports": [9093, 9094], "ip": null, "deployed_by": [], "rank": null, "rank_generation": null, "extra_container_args": null}', '--config-json', '-', '--tcp-ports', '9093 9094'], stdin='{"files": {"alertmanager.yml": "# This file is generated by cephadm.\\n# See https://prometheus.io/docs/alerting/configuration/ for documentation.\\n\\nglobal:\\n resolve_timeout: 5m\\n http_config:\\n tls_config:\\n insecure_skip_verify: true\\n\\nroute:\\n receiver: \'default\'\\n routes:\\n - group_by: [\'alertname\']\\n group_wait: 10s\\n group_interval: 10s\\n repeat_interval: 1h\\n receiver: \'ceph-dashboard\'\\n\\nreceivers:\\n- name: \'default\'\\n webhook_configs:\\n- name: \'ceph-dashboard\'\\n webhook_configs:\\n - url: \'http://localhost:8080/api/prometheus_receiver\'\\n"}, "peers": []}', image='')
1: E Actual: _run_cephadm('test', 'alertmanager.test', 'deploy', ['--name', 'alertmanager.test', '--meta-json', '{"service_name": "alertmanager", "ports": [9093, 9094], "ip": null, "deployed_by": [], "rank": null, "rank_generation": null, "extra_container_args": null}', '--config-json', '-', '--tcp-ports', '9093 9094'], stdin='{"files": {"alertmanager.yml": "# This file is generated by cephadm.\\n# See https://prometheus.io/docs/alerting/configuration/ for documentation.\\n\\nglobal:\\n resolve_timeout: 5m\\n http_config:\\n tls_config:\\n insecure_skip_verify: true\\n\\nroute:\\n receiver: \'default\'\\n routes:\\n - group_by: [\'alertname\']\\n group_wait: 10s\\n group_interval: 10s\\n repeat_interval: 1h\\n receiver: \'ceph-dashboard\'\\n\\nreceivers:\\n- name: \'default\'\\n webhook_configs:\\n- name: \'ceph-dashboard\'\\n webhook_configs:\\n - url: \'http://ip6-localhost:8080/api/prometheus_receiver\'\\n"}, "peers": []}', image='')
1:
1: /root/.pyenv/versions/3.8.16/lib/python3.8/unittest/mock.py:913: AssertionError
1: ------------------------------ Captured log setup ------------------------------
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option ssh_config_file = None
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option device_cache_timeout = 1800
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option device_enhanced_scan = False
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option daemon_cache_timeout = 600
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option facts_cache_timeout = 60
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option host_check_interval = 600
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option mode = root
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_base = quay.io/ceph/ceph
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_prometheus = quay.io/prometheus/prometheus:v2.33.4
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_grafana = quay.io/ceph/ceph-grafana:8.3.5
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_alertmanager = quay.io/prometheus/alertmanager:v0.23.0
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_node_exporter = quay.io/prometheus/node-exporter:v1.3.1
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_haproxy = docker.io/library/haproxy:2.3
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_keepalived = docker.io/arcts/keepalived
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_image_snmp_gateway = docker.io/maxwo/snmp-notifier:v1.2.1
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option warn_on_stray_hosts = True
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option warn_on_stray_daemons = True
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option warn_on_failed_host_check = True
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option log_to_cluster = True
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option allow_ptrace = False
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option container_init = True
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option prometheus_alerts_path = /etc/prometheus/ceph/ceph_default_alerts.yml
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option migration_current = None
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option config_dashboard = True
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option manage_etc_ceph_ceph_conf = False
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option manage_etc_ceph_ceph_conf_hosts = *
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option registry_url = None
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option registry_username = None
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option registry_password = None
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option registry_insecure = False
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option use_repo_digest = True
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option config_checks_enabled = False
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option default_registry = docker.io
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option max_count_per_host = 10
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option autotune_memory_target_ratio = 0.7
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option autotune_interval = 600
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option max_osd_draining_count = 10
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option log_level =
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option log_to_file = False
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: DEBUG tests:module.py:561 mgr option log_to_cluster_level = info
1: DEBUG cephadm.inventory:inventory.py:83 Loaded inventory {}
1: INFO cephadm.migrations:migrations.py:56 Found migration_current of "None". Setting to last migration.
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config set' -> 0 in 0.000s
1: ------------------------------ Captured log call -------------------------------
1: INFO tests:module.py:1540 Added host test
1: DEBUG cephadm.serve:serve.py:193 _refresh_hosts_and_daemons
1: DEBUG cephadm.serve:serve.py:273 checking test
1: DEBUG cephadm.serve:serve.py:286 host test (1::4) ok
1: DEBUG cephadm.serve:serve.py:209 refreshing test daemons
1: DEBUG cephadm.serve:serve.py:350 Refreshed host test daemons (0)
1: DEBUG cephadm.serve:serve.py:222 refreshing test devices
1: DEBUG cephadm.serve:serve.py:390 Refreshed host test devices (0) networks (0)
1: DEBUG tests:osd.py:339 Finding OSDSpecs for host: <test>
1: DEBUG tests:osd.py:281 Generating OSDSpec previews for []
1: DEBUG cephadm.serve:serve.py:411 Loading OSDSpec previews to HostCache for host <test>
1: DEBUG cephadm.serve:serve.py:228 Refreshing test facts
1: DEBUG cephadm.serve:serve.py:234 refreshing OSDSpec previews for test
1: DEBUG tests:osd.py:339 Finding OSDSpecs for host: <test>
1: DEBUG tests:osd.py:281 Generating OSDSpec previews for []
1: DEBUG cephadm.serve:serve.py:411 Loading OSDSpec previews to HostCache for host <test>
1: DEBUG cephadm.serve:serve.py:401 Refreshed OSDSpec previews for host <test>
1: DEBUG cephadm.serve:serve.py:243 autotuning memory for test
1: DEBUG tests:mgr_module.py:1351 mon_command: 'config get' -> 0 in 0.000s
1: INFO tests:module.py:2690 Saving service alertmanager spec with placement count:1
1: DEBUG tests:module.py:530 _kick_serve_loop
1: DEBUG cephadm.serve:serve.py:509 _apply_all_services
1: DEBUG cephadm.serve:serve.py:588 Applying service alertmanager spec
1: DEBUG cephadm.schedule:schedule.py:356 Combine hosts with existing daemons [] + new hosts [DaemonPlacement(daemon_type='alertmanager', hostname='test', network='', name='', ip=None, ports=[9093, 9094], rank=None, rank_generation=None)]
1: DEBUG cephadm.serve:serve.py:645 Add [DaemonPlacement(daemon_type='alertmanager', hostname='test', network='', name='', ip=None, ports=[9093, 9094], rank=None, rank_generation=None)], remove []
1: DEBUG cephadm.serve:serve.py:693 Hosts that will receive new daemons: [DaemonPlacement(daemon_type='alertmanager', hostname='test', network='', name='', ip=None, ports=[9093, 9094], rank=None, rank_generation=None)]
1: DEBUG cephadm.serve:serve.py:694 Daemons that will be removed: []
1: DEBUG cephadm.serve:serve.py:768 Placing alertmanager.test on host test
1: INFO cephadm.serve:serve.py:1136 Deploying daemon alertmanager.test on test
1: =============================== warnings summary ===============================
1: cephadm/module.py:73
1: cephadm/module.py:73
1: /home/data/ceph/src/pybind/mgr/cephadm/module.py:73: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
1: if StrictVersion(remoto.__version__) <= StrictVersion('1.2'):
1:
1: -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
1: =========================== short test summary info ============================
1: FAILED cephadm/tests/test_services.py::TestMonitoring::test_alertmanager_config
1: ============ 1 failed, 1190 passed, 4 skipped, 2 warnings in 13.53s ============
1: py3: exit 1 (14.54 seconds) /home/data/ceph/src/pybind/mgr> pytest --doctest-modules mgr_util.py tests/ cephadm/ mds_autoscaler/ nfs/ orchestrator/ insights/ pg_autoscaler/ progress/ snap_schedule pid=584809
1: py3: FAIL ✖ in 14.6 seconds
1: mypy: commands[0] /home/data/ceph/src/pybind/mgr> mypy --config-file=../../mypy.ini -m cephadm -m dashboard -m devicehealth -m mds_autoscaler -m mgr_module -m mgr_util -m mirroring -m nfs -m orchestrator -m pg_autoscaler -m progress -m prometheus -m rook -m snap_schedule -m stats -m test_orchestrator -m volumes
1: Success: no issues found in 17 source files
1: mypy: OK ✔ in 0.76 seconds
1: flake8: commands[0] /home/data/ceph/src/pybind/mgr> flake8 --config=tox.ini cephadm cli_api nfs orchestrator prometheus
1: flake8: commands[1] /home/data/ceph/src/pybind/mgr> bash -c 'test $(git ls-files cephadm | grep ".py$" | grep -v tests | xargs grep "docker.io" | wc -l) == 13'
1: flake8: OK ✔ in 0.97 seconds
1: jinjalint: commands[0] /home/data/ceph/src/pybind/mgr> jinja-ninja cephadm/templates
1: py3: FAIL code 1 (14.60=setup[0.06]+cmd[14.54] seconds)
1: mypy: OK (0.76=setup[0.02]+cmd[0.74] seconds)
1: flake8: OK (0.97=setup[0.02]+cmd[0.90,0.05] seconds)
1: jinjalint: OK (0.06=setup[0.01]+cmd[0.05] seconds)
1: evaluation failed :( (16.45 seconds)
3/5 Test #1: run-tox-mgr ......................***Failed 18.64 sec
2: .......................... [ 18%]
2: tests/test_api_auditing.py ........ [ 20%]
2: tests/test_auth.py ...... [ 22%]
2: tests/test_ceph_service.py .......... [ 24%]
2: tests/test_cephfs.py .... [ 25%]
2: tests/test_controllers.py .................... [ 29%]
2: tests/test_daemon.py .... [ 30%]
2: tests/test_docs.py .......... [ 33%]
2: tests/test_erasure_code_profile.py ... [ 33%]
2: tests/test_exceptions.py ............ [ 36%]
2: tests/test_feature_toggles.py .. [ 36%]
2: tests/test_grafana.py ............ [ 39%]
2: tests/test_home.py ....... [ 41%]
2: tests/test_host.py ................... [ 45%]
2: tests/test_iscsi.py ................................. [ 53%]
2: tests/test_nfs.py .............. [ 56%]
2: tests/test_notification.py .... [ 57%]
2: tests/test_orchestrator.py .... [ 58%]
2: tests/test_osd.py ........... [ 60%]
2: tests/test_plugin_debug.py .... [ 61%]
3:
3: --------------------------------------------------------------------
3: Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
3:
3: lint: commands[5] /home/data/ceph/src/pybind/mgr/dashboard> rstcheck --report info --debug -- README.rst HACKING.rst
3: lint: OK (62.99=setup[0.09]+cmd[0.78,0.61,1.01,0.43,59.74,0.33] seconds)
3: congratulations :) (63.06 seconds)
4/5 Test #3: run-tox-mgr-dashboard-lint ....... Passed 64.51 sec
2: tests/test_pool.py ..... [ 62%]
2: tests/test_prometheus.py ................ [ 66%]
2: tests/test_rbd_mirroring.py ............ [ 69%]
2: tests/test_rbd_service.py ........ [ 71%]
2: tests/test_rest_client.py ......... [ 73%]
2: tests/test_rest_tasks.py ........ [ 74%]
2: tests/test_rgw.py ................. [ 78%]
2: tests/test_rgw_client.py ................................. [ 86%]
2: tests/test_settings.py ................. [ 90%]
2: tests/test_ssl.py .. [ 90%]
2: tests/test_sso.py ..... [ 91%]
2: tests/test_task.py ............... [ 95%]
2: tests/test_tools.py ............... [ 98%]
2: tests/test_versioning.py ...... [100%]
2:
2: ---------- coverage: platform linux, python 3.8.16-final-0 -----------
2: Name Stmts Miss Cover
2: -------------------------------------------------------------------------------
2: /home/data/ceph/src/pybind/ceph_argparse.py 908 702 23%
2: /home/data/ceph/src/pybind/mgr/mgr_util.py 454 294 35%
2: /home/data/ceph/src/pybind/mgr/orchestrator/__init__.py 5 0 100%
2: /home/data/ceph/src/pybind/mgr/orchestrator/_interface.py 683 428 37%
2: /home/data/ceph/src/pybind/mgr/orchestrator/module.py 770 592 23%
2: /home/data/ceph/src/pybind/mgr/tests/__init__.py 145 94 35%
2: api/__init__.py 1 0 100%
2: api/doc.py 28 0 100%
2: awsauth.py 67 50 25%
2: cherrypy_backports.py 86 63 27%
2: ci/check_grafana_dashboards.py 105 87 17%
2: controllers/__init__.py 12 0 100%
2: controllers/_api_router.py 8 0 100%
2: controllers/_auth.py 15 0 100%
2: controllers/_base_controller.py 205 32 84%
2: controllers/_docs.py 75 6 92%
2: controllers/_endpoint.py 50 5 90%
2: controllers/_helpers.py 67 10 85%
2: controllers/_permissions.py 27 4 85%
2: controllers/_rest_controller.py 130 5 96%
2: controllers/_router.py 49 18 63%
2: controllers/_task.py 61 1 98%
2: controllers/_ui_router.py 8 0 100%
2: controllers/_version.py 24 0 100%
2: controllers/auth.py 70 22 69%
2: controllers/cephfs.py 250 172 31%
2: controllers/cluster.py 15 2 87%
2: controllers/cluster_configuration.py 71 49 31%
2: controllers/crush_rule.py 34 13 62%
2: controllers/daemon.py 22 1 95%
2: controllers/docs.py 236 66 72%
2: controllers/erasure_code_profile.py 32 10 69%
2: controllers/frontend_logging.py 9 1 89%
2: controllers/grafana.py 38 2 95%
2: controllers/health.py 132 95 28%
2: controllers/home.py 99 22 78%
2: controllers/host.py 196 16 92%
2: controllers/iscsi.py 720 177 75%
2: controllers/logs.py 36 17 53%
2: controllers/mgr_modules.py 85 54 36%
2: controllers/monitor.py 28 15 46%
2: controllers/nfs.py 115 8 93%
2: controllers/orchestrator.py 28 2 93%
2: controllers/osd.py 302 112 63%
2: controllers/perf_counters.py 48 5 90%
2: controllers/pool.py 175 91 48%
2: controllers/prometheus.py 64 8 88%
2: controllers/rbd.py 332 210 37%
2: controllers/rbd_mirroring.py 368 142 61%
2: controllers/rgw.py 291 143 51%
2: controllers/role.py 95 61 36%
2: controllers/saml2.py 77 48 38%
2: controllers/service.py 55 15 73%
2: controllers/settings.py 47 2 96%
2: controllers/summary.py 44 0 100%
2: controllers/task.py 17 0 100%
2: controllers/telemetry.py 19 6 68%
2: controllers/user.py 135 97 28%
2: exceptions.py 65 2 97%
2: grafana.py 80 12 85%
2: module.py 344 230 33%
2: plugins/__init__.py 39 4 90%
2: plugins/debug.py 52 6 88%
2: plugins/feature_toggles.py 88 8 91%
2: plugins/interfaces.py 48 5 90%
2: plugins/lru_cache.py 30 24 20%
2: plugins/motd.py 64 28 56%
2: plugins/pluggy.py 42 4 90%
2: plugins/plugin.py 26 2 92%
2: plugins/ttl_cache.py 39 10 74%
2: rest_client.py 273 154 44%
2: security.py 37 0 100%
2: services/__init__.py 1 0 100%
2: services/access_control.py 596 104 83%
2: services/auth.py 151 62 59%
2: services/ceph_service.py 242 96 60%
2: services/cephfs.py 131 103 21%
2: services/cluster.py 16 4 75%
2: services/exception.py 88 11 88%
2: services/iscsi_cli.py 38 11 71%
2: services/iscsi_client.py 149 72 52%
2: services/iscsi_config.py 74 14 81%
2: services/orchestrator.py 155 50 68%
2: services/osd.py 18 0 100%
2: services/progress.py 38 28 26%
2: services/rbd.py 377 216 43%
2: services/rgw_client.py 408 118 71%
2: services/sso.py 131 28 79%
2: services/tcmu_service.py 68 57 16%
2: settings.py 149 7 95%
2: tools.py 532 30 94%
2: -------------------------------------------------------------------------------
2: TOTAL 12857 5575 57%
2:
2:
2: ======================= 438 passed in 120.59s (0:02:00) ========================
2: py3: OK (121.18=setup[0.05]+cmd[121.13] seconds)
2: congratulations :) (121.25 seconds)
5/5 Test #2: run-tox-mgr-dashboard-py3 ........ Passed 123.45 sec
The following tests passed:
run-tox-mgr-dashboard-check
run-tox-mgr-dashboard-openapi
run-tox-mgr-dashboard-lint
run-tox-mgr-dashboard-py3
80% tests passed, 1 tests failed out of 5
Total Test time (real) = 123.46 sec
The following tests FAILED:
1 - run-tox-mgr (Failed)
Errors while running CTest
Output from these tests are in: /home/data/ceph/build/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
Updated by bugwz about 1 year ago
I found the problem here, but I don't know how to fix it.
    @patch("cephadm.serve.CephadmServe._run_cephadm")
    @patch("mgr_module.MgrModule.get")
    def test_alertmanager_config(self, mock_get, _run_cephadm,
                                 cephadm_module: CephadmOrchestrator):
        _run_cephadm.return_value = ('{}', '', 0)
        mock_get.return_value = {"services": {"dashboard": "http://[::1]:8080"}}

        with with_host(cephadm_module, 'test'):
            with with_service(cephadm_module, AlertManagerSpec()):
                y = dedent(self._get_config('http://localhost:8080')).lstrip()
                _run_cephadm.assert_called_with(
                    'test',
                    'alertmanager.test',
                    'deploy',
                    [
                        '--name', 'alertmanager.test',
                        '--meta-json', '{"service_name": "alertmanager", "ports": [9093, 9094], "ip": null, "deployed_by": [], "rank": null, "rank_generation": null, "extra_container_args": null}',
                        '--config-json', '-', '--tcp-ports', '9093 9094'
                    ],
                    stdin=json.dumps({"files": {"alertmanager.yml": y}, "peers": []}),
                    image='')
and the /etc/hosts on my machine is:
127.0.0.1 localhost
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
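Editor's note: the expected/actual diff in the failure differs only in the webhook URL hostname (localhost vs ip6-localhost), which points at reverse DNS of ::1 being driven by the local /etc/hosts (here ::1 maps to ip6-localhost, not localhost). A minimal sketch of how such a lookup could be pinned in a test with unittest.mock follows; the actual resolution call inside cephadm is an assumption, and socket.getnameinfo is used here only to illustrate the technique:

```python
import socket
from unittest import mock

def hostname_for(addr: str) -> str:
    """Reverse-resolve an IP address. On glibc systems this consults
    /etc/hosts first, so '::1' may come back as 'localhost' or
    'ip6-localhost' depending on the machine."""
    return socket.getnameinfo((addr, 0), 0)[0]

# Pinning the lookup makes the result independent of the host's /etc/hosts:
with mock.patch("socket.getnameinfo", return_value=("localhost", "0")):
    print(hostname_for("::1"))  # always "localhost" under the patch
```

Alternatively, adding localhost as a name for ::1 in /etc/hosts (as many distributions do by default) would presumably make the unpatched test pass, but patching the lookup keeps the test independent of the environment.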
Updated by Ilya Dryomov about 1 year ago
- Target version changed from v16.2.12 to v16.2.13
Updated by John Mulligan 10 months ago
Hi, I recently made some changes to these tests on ceph's main branch. Would you be willing to try running the test on main and see if you still encounter a similar issue? FWIW, I doubt my changes will be backported to pacific. Thanks.