Ceph : Issues (https://tracker.ceph.com/), feed generated 2020-10-01
Dashboard - Bug #47714 (New): mgr/dashboard: Implement an expert setting
https://tracker.ceph.com/issues/47714 (2020-10-01, Stephan Müller)
<p>To simplify forms, implement an expert setting that, when disabled, hides all non-mandatory form fields as a first step.</p>
<p>What should it look like?<br />Maybe an expert toggle at the top right of the form and on the top panel, as it will have an impact on what a user sees in the future.</p>
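<p>For illustration only, a minimal sketch of the filtering idea in Python (the real implementation would live in the dashboard's Angular frontend; all names here are hypothetical):</p>
<pre>
# Hypothetical sketch: hide non-mandatory fields unless expert mode is enabled.
from typing import List, NamedTuple

class FormField(NamedTuple):
    name: str
    required: bool

def visible_fields(fields: List[FormField], expert_mode: bool) -> List[FormField]:
    """With expert mode disabled, only mandatory fields are shown."""
    if expert_mode:
        return list(fields)
    return [f for f in fields if f.required]

fields = [FormField('name', True), FormField('compression_mode', False)]
assert [f.name for f in visible_fields(fields, expert_mode=False)] == ['name']
</pre>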
<p>Tip: before getting too deep into the implementation, please ask in the stand-up if that's the path we want to go down.</p>

Dashboard - Bug #46757 (New): mgr/dashboard: Only show identify action if inventory device can blink
https://tracker.ceph.com/issues/46757 (2020-07-29, Stephan Müller)
<p>If a device can't blink but is managed by cephadm, the "Identify" action is still shown on the inventory page. The problem is that the command doesn't raise an error in the dashboard when it fails.</p>
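<p>A minimal sketch of the proposed gating, assuming the orchestrator could report a per-device capability flag (the field name below is hypothetical, not the real inventory schema):</p>
<pre>
# Hypothetical sketch: only offer the "Identify" action when the device
# reports that its ident light can actually be driven.
def identify_action_available(device: dict) -> bool:
    """'ident_support' is an assumed flag; real inventory data may differ."""
    return bool(device.get('ident_support'))

assert identify_action_available({'path': '/dev/vdb', 'ident_support': True})
assert not identify_action_available({'path': '/dev/vdb'})
</pre>
<p>I observed the following error by running `ceph -W cephadm` in parallel with the execution:</p>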
<pre>
2020-07-29T08:53:30.649950-0500 mgr.x [ERR] executing blink(([DeviceLightLoc(host='osd0', dev='/dev/vdb', path='/dev/vdb')],)) failed.
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
2020-07-29T08:53:30.653157-0500 mgr.x [ERR] _Promise failed
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/orchestrator/_interface.py", line 292, in _finalize
next_result = self._on_complete(self._value)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 102, in <lambda>
return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1599, in blink_device_light
return blink(locs)
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 73, in forall_hosts_wrapper
return CephadmOrchestrator.instance._worker_pool.map(do_work, vals)
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
</pre>

Dashboard - Bug #46667 (Resolved): mgr/dashboard: Handle buckets without a realm_id
https://tracker.ceph.com/issues/46667 (2020-07-22, Stephan Müller)
<p>The dashboard should not fail hard on buckets without a set realm_id; it should handle them gracefully.</p>
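<p>A minimal sketch of the graceful handling, under the assumption that an empty or missing realm_id can be treated as a default (illustrative only; not the real RGW client code):</p>
<pre>
# Hypothetical sketch: fall back to a default when realm_id is unset,
# instead of passing an empty value on to RGW and failing with a 400.
def effective_realm_id(bucket_info: dict) -> str:
    realm = bucket_info.get('realm_id')
    return realm if realm else 'default'

assert effective_realm_id({}) == 'default'
assert effective_realm_id({'realm_id': ''}) == 'default'
assert effective_realm_id({'realm_id': 'my-realm'}) == 'my-realm'
</pre>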
<p>The API fails with something like this:</p>
<pre>
RGW REST API failed request with status code 400
(b'{
  "Code":"InvalidLocationConstraint",
  "Message":"The specified location-constraint is not valid",
  "BucketName":"test",
  "RequestId":"tx00000000000000001b64c-005f16b722-137187-my-store",
  "HostId":"137187-my-store-my-store"
}')
</pre>

Orchestrator - Tasks #46551 (Resolved): cephadm: Add a better hint for how to add a host
https://tracker.ceph.com/issues/46551 (2020-07-15, Stephan Müller)
<p>Currently:</p>
<pre>
master:~ # ceph orch host add mgr0 192.168.121.230
Error ENOENT: Failed to connect to mgr0 (192.168.121.230).
Check that the host is reachable and accepts connections using the cephadm SSH key
you may want to run:
> ceph cephadm get-ssh-config > ssh_config
> ceph config-key get mgr/cephadm/ssh_identity_key > key
> ssh -F ssh_config -i key root@mgr0
</pre>
<p>What actually needs to be done:</p>
<pre>
master:~ # ceph config-key get mgr/cephadm/ssh_identity_pub > key.pub
master:~ # ssh-copy-id -i "key.pub" root@mgr0
</pre>
<p>What the message should look like in the end:</p>
<pre>
master:~ # ceph orch host add mgr0 192.168.121.230
Error ENOENT: Failed to connect to mgr0 (192.168.121.230).
Check that the host is reachable and accepts connections using the cephadm SSH key
you may want to add the SSH key to the host:
> ceph config-key get mgr/cephadm/ssh_identity_pub > ~/cephadm_ssh_key.pub
> ssh-copy-id -i ~/cephadm_ssh_key.pub root@mgr0
you may want to check that everything works, before rerunning the command:
> ceph cephadm get-ssh-config > ssh_config
> ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_ssh_key
> ssh -F ssh_config -i ~/cephadm_ssh_key root@mgr0
</pre>

Orchestrator - Support #46547 (Resolved): cephadm: Exception adding host via FQDN if host was alr...
https://tracker.ceph.com/issues/46547 (2020-07-15, Stephan Müller)
<p>To reproduce, you need nodes that have a subdomain (unlike in the current Vagrantfile). I used sesdev to find this issue.</p>
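<p>A minimal sketch of one possible fix, comparing only the short host name so that an FQDN argument matches the name the remote node reports (a sketch under that assumption, not the actual cephadm check):</p>
<pre>
# Hypothetical sketch: treat 'node1' and 'node1.pacific.test' as the same
# host when validating the hostname reported by the remote node.
def hostnames_match(expected: str, actual: str) -> bool:
    """Compare only the first label, ignoring any domain suffix."""
    return expected.split('.')[0] == actual.split('.')[0]

assert hostnames_match('node1.pacific.test', 'node1')
assert not hostnames_match('node2.pacific.test', 'node1')
</pre>
<p>The failing check:</p>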
<pre>
master:~ # ceph orch host add node1.pacific.test
Error ENOENT: New host node1.pacific.test (node1.pacific.test) failed check: [
'INFO:cephadm:podman|docker (/usr/bin/podman) is present',
'INFO:cephadm:systemctl is present', 'INFO:cephadm:lvcreate is present',
'INFO:cephadm:Unit chronyd.service is enabled and running',
'INFO:cephadm:Hostname "node1.pacific.test" matches what is expected.',
'ERROR: hostname "node1" does not match expected hostname "node1.pacific.test"'
]
</pre>
<p>With `ceph -W cephadm` running, one observes:</p>
<pre>
2020-07-15T13:24:21.159126+0200 mgr.node1.zybwkb [ERR] _Promise failed
Traceback (most recent call last):
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 277, in _finalize
next_result = self._on_complete(self._value)
File "/usr/share/ceph/mgr/cephadm/module.py", line 132, in <lambda>
return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
File "/usr/share/ceph/mgr/cephadm/module.py", line 1098, in add_host
return self._add_host(spec)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1087, in _add_host
spec.hostname, spec.addr, err))
orchestrator._interface.OrchestratorError: New host node1.pacific.test (node1.pacific.test) failed check: ['INFO:cephadm:podman|docker (/usr/bin/podman) is present', 'INFO:cephadm:systemctl is present', 'INFO:cephadm:lvcreate is present', 'INFO:cephadm:Unit chronyd.service is enabled and running', 'INFO:cephadm:Hostname "node1.pacific.test" matches what is expected.', 'ERROR: hostname "node1" does not match expected hostname "node1.pacific.test"']
</pre>

Ceph - Documentation #45874 (Fix Under Review): doc: Extend resolving conflict section in "Submit...
https://tracker.ceph.com/issues/45874 (2020-06-04, Stephan Müller)
<p>Currently it's not clear how to easily continue with the backport script when a conflict is encountered.</p>

Orchestrator - Bug #45724 (Resolved): check-host should not fail using FQDN, or not that hard
https://tracker.ceph.com/issues/45724 (2020-05-27, Stephan Müller)
<p>I would suggest either identifying that it's an FQDN, or answering "Host not found. Use 'ceph orch host ls' to see all managed hosts."</p>
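<p>A minimal sketch of the suggested behaviour, assuming the address lookup happens against the orchestrator's host inventory (illustrative names, not the actual cephadm code path):</p>
<pre>
# Hypothetical sketch: raise a friendly error for unknown hosts instead of
# letting a None address reach the SSH connection code as a TypeError.
class OrchestratorError(Exception):
    pass

def resolve_host_addr(inventory: dict, host: str) -> str:
    if host not in inventory:
        raise OrchestratorError(
            "Host %s not found. Use 'ceph orch host ls' to see all managed hosts." % host)
    return inventory[host]

try:
    resolve_host_addr({'node3': '192.168.121.103'}, 'node3.ses7.com')
except OrchestratorError as e:
    print(e)  # Host node3.ses7.com not found. Use 'ceph orch host ls' ...
</pre>
<p>The current behaviour:</p>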
<pre>
# ceph cephadm check-host node3.ses7.com
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 63, in wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1482, in check_host
error_ok=True, no_fsid=True)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1569, in _run_cephadm
conn, connr = self._get_connection(addr)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1521, in _get_connection
n = self.ssh_user + '@' + host
TypeError: must be str, not NoneType
</pre>

Dashboard - Bug #44753 (New): mgr/dashboard: Secure the Alertmanager receiver endpoint
https://tracker.ceph.com/issues/44753 (2020-03-25, Stephan Müller)
<p>Currently it is possible to send push notifications to the dashboard unauthenticated, and the notifications are not verified to actually come from an Alertmanager instance.</p>
<p>To see what's configurable, see <a class="external" href="https://prometheus.io/docs/alerting/configuration/#http_config">https://prometheus.io/docs/alerting/configuration/#http_config</a></p>
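<p>For example, Alertmanager's http_config supports basic_auth, so the receiver could require matching credentials. A minimal sketch of such a check (a sketch under that assumption; not the dashboard's actual auth code):</p>
<pre>
# Hypothetical sketch: verify an HTTP basic-auth header on the receiver
# endpoint against credentials configured in Alertmanager's http_config.
import base64
import hmac

def authorized(auth_header: str, user: str, password: str) -> bool:
    expected = 'Basic ' + base64.b64encode(f'{user}:{password}'.encode()).decode()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(auth_header or '', expected)

token = 'Basic ' + base64.b64encode(b'alertmanager:secret').decode()
assert authorized(token, 'alertmanager', 'secret')
assert not authorized('', 'alertmanager', 'secret')
</pre>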
<p>Removing the endpoint is not a solution to be considered, as the Ceph orchestrator configures every Alertmanager instance to talk to the dashboard's receiver.</p>
<p>The receiver is at the moment the only part that can handle multiple Alertmanager instances.</p>

Dashboard - Bug #44224 (New): mgr/dashboard: Timeouts for rbd.py calls
https://tracker.ceph.com/issues/44224 (2020-02-20, Stephan Müller)
<p>As the corner cases are not implemented in many rbd.py methods, they can hang without a response on certain pools (mostly pools in a bad state).</p>
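<p>A minimal sketch of a timeout wrapper, assuming the calls can be pushed into a worker thread (illustrative only; the openATTIC approach linked below differs in detail):</p>
<pre>
# Hypothetical sketch: run an rbd call in a worker thread and give up after
# a timeout, so that a bad pool cannot hang the dashboard request forever.
# Note: the worker thread itself keeps running; only the caller is unblocked.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

_pool = ThreadPoolExecutor(max_workers=4)

def call_with_timeout(fn, *args, timeout=5.0, **kwargs):
    future = _pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=timeout)
    except TimeoutError:
        raise RuntimeError('%s timed out after %ss' % (fn.__name__, timeout))

# Hypothetical usage: call_with_timeout(rbd_inst.list, ioctx, timeout=10)
</pre>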
<p>Once this is implemented, remove the workaround that was introduced to fix <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>For details on the known issues, see <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a>.</p>
<p>For details about the discussion, look at the PR that fixed <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>Make sure that <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a> is still unaddressed before starting with this issue.</p>
<p>For details on how this was implemented in openATTIC, look <a href="https://bitbucket.org/openattic/openattic/pull-requests/682/add-librados-command-name-to-external/diff" class="external">here</a>.</p>

Dashboard - Bug #44223 (Duplicate): mgr/dashboard: Timeouts for rbd.py calls
https://tracker.ceph.com/issues/44223 (2020-02-20, Stephan Müller)
<p>As the corner cases are not implemented in many rbd methods, they can hang without a response on certain pools (mostly pools in a bad state).</p>
<p>Once this is implemented, remove the workaround that was introduced to fix <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>For details on the known issues, see <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a>.</p>
<p>For details about the discussion, look at the PR that fixed <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>Make sure that <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a> is still unaddressed before starting with this issue.</p>

Dashboard - Bug #43938 (Duplicate): mgr/dashboard: Test failure in test_safe_to_destroy in tasks....
https://tracker.ceph.com/issues/43938 (2020-01-31, Stephan Müller)
<p>There is an API test failure on master; I'm not sure where it came from.</p>
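<p>The log below shows the endpoint answering "3 pgs have unknown state; cannot draw any conclusions", so the test seems to race the PG statistics. A hedged sketch of a wait helper the test could use first (the callback and its return shape are assumptions, not the existing QA helper API):</p>
<pre>
# Hypothetical sketch: poll until no PGs are in 'unknown' state before
# asserting on the safe-to-destroy response.
import time

def wait_for_no_unknown_pgs(get_pg_states, timeout=60, interval=2):
    """get_pg_states() is assumed to return a dict like {'unknown': 3, ...}."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_pg_states().get('unknown', 0) == 0:
            return
        time.sleep(interval)
    raise AssertionError('PGs still in unknown state after %ss' % timeout)
</pre>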
<pre>
2020-01-29 15:05:17,789.789 INFO:__main__:Stopped test: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest) in 1.567141s
2020-01-29 15:05:17,789.789 INFO:__main__:
2020-01-29 15:05:17,790.790 INFO:__main__:======================================================================
2020-01-29 15:05:17,790.790 INFO:__main__:FAIL: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest)
2020-01-29 15:05:17,790.790 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,790.790 INFO:__main__:Traceback (most recent call last):
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_osd.py", line 113, in test_safe_to_destroy
2020-01-29 15:05:17,790.790 INFO:__main__: 'stored_pgs': [],
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 343, in assertJsonBody
2020-01-29 15:05:17,790.790 INFO:__main__: self.assertEqual(body, data)
2020-01-29 15:05:17,790.790 INFO:__main__:AssertionError: {u'is_safe_to_destroy': False, u'message': u'[errno -11] 3 pgs have unknown stat [truncated]... != {'active': [], 'is_safe_to_destroy': True, 'stored_pgs': [], 'safe_to_destroy': [truncated]...
2020-01-29 15:05:17,790.790 INFO:__main__:+ {'active': [],
2020-01-29 15:05:17,791.791 INFO:__main__:- {u'is_safe_to_destroy': False,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^^ ^^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'is_safe_to_destroy': True,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^ ^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:- u'message': u'[errno -11] 3 pgs have unknown state; cannot draw any conclusions'}
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'missing_stats': [],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'safe_to_destroy': [13],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'stored_pgs': []}
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,792.792 INFO:__main__:Ran 93 tests in 1125.270s
2020-01-29 15:05:17,792.792 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:FAILED (failures=1)
2020-01-29 15:05:17,792.792 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:======================================================================
2020-01-29 15:05:17,792.792 INFO:__main__:FAIL: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest)
2020-01-29 15:05:17,792.792 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,792.792 INFO:__main__:Traceback (most recent call last):
2020-01-29 15:05:17,792.792 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_osd.py", line 113, in test_safe_to_destroy
2020-01-29 15:05:17,793.793 INFO:__main__: 'stored_pgs': [],
2020-01-29 15:05:17,793.793 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 343, in assertJsonBody
2020-01-29 15:05:17,793.793 INFO:__main__: self.assertEqual(body, data)
2020-01-29 15:05:17,793.793 INFO:__main__:AssertionError: {u'is_safe_to_destroy': False, u'message': u'[errno -11] 3 pgs have unknown stat [truncated]... != {'active': [], 'is_safe_to_destroy': True, 'stored_pgs': [], 'safe_to_destroy': [truncated]...
2020-01-29 15:05:17,793.793 INFO:__main__:+ {'active': [],
2020-01-29 15:05:17,793.793 INFO:__main__:- {u'is_safe_to_destroy': False,
2020-01-29 15:05:17,793.793 INFO:__main__:? ^^ ^^^^
2020-01-29 15:05:17,793.793 INFO:__main__:
2020-01-29 15:05:17,793.793 INFO:__main__:+ 'is_safe_to_destroy': True,
2020-01-29 15:05:17,793.793 INFO:__main__:? ^ ^^^
2020-01-29 15:05:17,793.793 INFO:__main__:
2020-01-29 15:05:17,794.794 INFO:__main__:- u'message': u'[errno -11] 3 pgs have unknown state; cannot draw any conclusions'}
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'missing_stats': [],
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'safe_to_destroy': [13],
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'stored_pgs': []}
2020-01-29 15:05:17,794.794 INFO:__main__:
Using guessed paths /home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/ ['/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/cython_modules/lib.3', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/src/pybind']
</pre>

Dashboard - Bug #43594 (Resolved): mgr/dashboard: E2E pools page failure
https://tracker.ceph.com/issues/43594 (2020-01-14, Stephan Müller)
<p>On current master, the pools page E2E tests fail:</p>
<pre>
npm run e2e:ci -- --specs e2e/pools/pools.e2e-spec.ts
> ceph-dashboard@0.0.0 e2e:ci /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend
> npm run env_build && npm run e2e:update && ng e2e --dev-server-target --webdriverUpdate=false "--specs" "e2e/pools/pools.e2e-spec.ts"
> ceph-dashboard@0.0.0 env_build /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend
> cp src/environments/environment.tpl.ts src/environments/environment.prod.ts && cp src/environments/environment.tpl.ts src/environments/environment.ts && node ./environment.build.js
Environment variables have been set
> ceph-dashboard@0.0.0 e2e:update /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend
> npx webdriver-manager update --gecko=false --versions.chrome=$(google-chrome --version | awk '{ print $3 }')
[10:50:42] I/update - chromedriver: file exists /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.70.zip
[10:50:42] I/update - chromedriver: unzipping chromedriver_78.0.3904.70.zip
[10:50:42] I/update - chromedriver: setting permissions to 0755 for /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.70
[10:50:42] I/update - chromedriver: chromedriver_78.0.3904.70 up to date
[10:50:42] I/update - selenium standalone: file exists /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.141.59.jar
[10:50:42] I/update - selenium standalone: selenium-server-standalone-3.141.59.jar up to date
[10:50:56] I/launcher - Running 1 instances of WebDriver
[10:50:56] I/direct - Using ChromeDriver directly...
Activated Protractor Screenshoter Plugin, ver. 0.10.3 (c) 2016 - 2020 Andrej Zachar and contributors
Creating reporter at .protractor-report/
Jasmine started
Pools page
breadcrumb and tab tests
✓ should open and show breadcrumb (0.181 sec)
✓ should show two tabs (0.066 sec)
✓ should show pools list tab at first (0.075 sec)
✓ should show overall performance as a second tab (0.08 sec)
✗ should create a pool (15 secs)
- Failed: No element found using locator: By(css selector, input[name=pgNum])
at elementArrayFinder.getWebElements.then (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:814:27)
at process._tickCallback (internal/process/next_tick.js:68:7)Error:
at ElementArrayFinder.applyAction_ (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:459:27)
at ElementArrayFinder.(anonymous function).args [as sendKeys] (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:91:29)
at ElementFinder.(anonymous function).args [as sendKeys] (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:831:22)
at PoolPageHelper.<anonymous> (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/e2e/pools/pools.po.ts:44:34)
at step (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/tslib/tslib.js:136:27)
at Object.next (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/tslib/tslib.js:117:57)
at fulfilled (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/tslib/tslib.js:107:62)
at process._tickCallback (internal/process/next_tick.js:68:7)
From asynchronous test:
Error:
at Suite.<anonymous> (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/e2e/pools/pools.e2e-spec.ts:34:3)
at apply (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/lodash/lodash.js:476:27)
at Env.wrapper [as describe] (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/lodash/lodash.js:5317:16)
at Object.<anonymous> (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/e2e/pools/pools.e2e-spec.ts:3:1)
at Module._compile (internal/modules/cjs/loader.js:689:30)
**************************************************
* Failures *
**************************************************
1) Pools page should create a pool
- Failed: No element found using locator: By(css selector, input[name=pgNum])
Executed 5 of 7 specs (1 FAILED) (2 SKIPPED) in 19 secs.
[10:51:46] I/launcher - 0 instance(s) of WebDriver still running
[10:51:46] I/launcher - chrome #01 failed 1 test(s)
[10:51:46] I/launcher - overall: 1 failed spec(s)
[10:51:46] E/launcher - Process exited with error code 1
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! ceph-dashboard@0.0.0 e2e:ci: `npm run env_build && npm run e2e:update && ng e2e --dev-server-target --webdriverUpdate=false "--specs" "e2e/pools/pools.e2e-spec.ts"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the ceph-dashboard@0.0.0 e2e:ci script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/albatros/.npm/_logs/2020-01-14T09_51_46_846Z-debug.log
</pre>

mgr - Feature #40365 (Resolved): mgr: Add get_rates_from_data from the dashboard to the mgr_util.py
https://tracker.ceph.com/issues/40365 (2019-06-14, Stephan Müller)
<p>Other modules need this too.</p>
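<p>For reference, a minimal sketch of the rate computation being shared (the dashboard's actual signature may differ; this only illustrates turning counter samples into rates):</p>
<pre>
# Illustrative sketch: convert (timestamp, counter) samples into
# (timestamp, rate) pairs using the difference of consecutive samples.
from typing import List, Tuple

def get_rates_from_data(data: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    rates = []
    for (t0, v0), (t1, v1) in zip(data, data[1:]):
        if t1 > t0:  # skip duplicate or out-of-order timestamps
            rates.append((t1, (v1 - v0) / (t1 - t0)))
    return rates

assert get_rates_from_data([(0, 0), (10, 50)]) == [(10, 5.0)]
</pre>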
<p>Origin: <a class="external" href="https://github.com/ceph/ceph/pull/28153#discussion_r285974000">https://github.com/ceph/ceph/pull/28153#discussion_r285974000</a></p>

Dashboard - Bug #39579 (Resolved): mgr/dashboard: Fix run-tox script to accept cli arguments again
https://tracker.ceph.com/issues/39579 (2019-05-03, Stephan Müller)
<p>A regression was introduced by <a href="https://github.com/ceph/ceph/commit/9426f1f2045d0ae0f319530c3dc3a9240d838d07#diff-cc2ee9d8e56f3a2cd98b8148935d3829L37" class="external">this change</a>, causing the script to no longer accept command line arguments. Therefore, the command described in hacking.rst to run only a single tox test ("WITH_PYTHON2=OFF ./run-tox.sh pytest tests/test_rgw_client.py::RgwClientTest::test_ssl_verify") did not work anymore and caused tox to <a href="https://paste.opensuse.org/view//64659695" class="external">fail</a>.</p>
<pre>
Traceback (most recent call last):
File "/usr/bin/tox", line 11, in <module>
load_entry_point('tox==3.7.0', 'console_scripts', 'tox')()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 47, in cmdline
main(args)
File "/usr/lib/python3.7/site-packages/tox/session.py", line 54, in main
retcode = build_session(config).runcommand()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 467, in runcommand
return self.subcommand_test()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 590, in subcommand_test
self.run_sequential()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 609, in run_sequential
self.runtestenv(venv)
File "/usr/lib/python3.7/site-packages/tox/session.py", line 728, in runtestenv
self.hook.tox_runtest(venv=venv, redirect=redirect)
File "/usr/lib/python3.7/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/usr/lib/python3.7/site-packages/pluggy/manager.py", line 68, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/lib/python3.7/site-packages/pluggy/manager.py", line 62, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/usr/lib/python3.7/site-packages/tox/venv.py", line 597, in tox_runtest
venv.test(redirect=redirect)
File "/usr/lib/python3.7/site-packages/tox/venv.py", line 468, in test
if argv[0].startswith("-"):
IndexError: list index out of range
</pre>

Dashboard - Bug #39300 (Resolved): mgr/dashboard: Can't login with a bigger time difference betwe...
https://tracker.ceph.com/issues/39300 (2019-04-15, Stephan Müller)
<p>With a time difference of -7h relative to the backend, I couldn't log in. The log threw the error `AMT: user info changed after token was issued, iat=%s lastUpdate=%s`, which can be found at line 150 in `dashboard/services/auth.py`. As a quick fix, I removed line 146 in the same file, which requires `user.lastUpdate <= token['iat']` to be true in order to log in.</p>
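<p>A less drastic fix might be to tolerate a bounded clock skew instead of dropping the check entirely. A minimal sketch of that idea (the leeway value and function are assumptions, not the dashboard's actual auth code):</p>
<pre>
# Hypothetical sketch: accept the token if the user's last update happened
# within a configurable leeway of the token's issued-at (iat) claim.
LEEWAY_SECONDS = 30  # illustrative default

def token_still_valid(last_update: float, issued_at: float,
                      leeway: float = LEEWAY_SECONDS) -> bool:
    """Reject the token only if user info changed well after it was issued."""
    return last_update <= issued_at + leeway

assert token_still_valid(last_update=100.0, issued_at=95.0)
assert not token_still_valid(last_update=200.0, issued_at=95.0)
</pre>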