Ceph : Issues (https://tracker.ceph.com/, 2020-07-29T14:03:08Z, Ceph)
Dashboard - Bug #46757 (New): mgr/dashboard: Only show identify action if inventory device can blink (https://tracker.ceph.com/issues/46757, 2020-07-29T14:03:08Z, Stephan Müller)
<p>If a device can't blink but is managed by cephadm, the "Identify" action is still shown on the inventory page. The problem is that the command doesn't raise an error in the dashboard if it fails. I observed the following error by running `ceph -W cephadm` in parallel with the execution.</p>
<pre>
2020-07-29T08:53:30.649950-0500 mgr.x [ERR] executing blink(([DeviceLightLoc(host='osd0', dev='/dev/vdb', path='/dev/vdb')],)) failed.
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
2020-07-29T08:53:30.653157-0500 mgr.x [ERR] _Promise failed
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/orchestrator/_interface.py", line 292, in _finalize
next_result = self._on_complete(self._value)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 102, in <lambda>
return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1599, in blink_device_light
return blink(locs)
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 73, in forall_hosts_wrapper
return CephadmOrchestrator.instance._worker_pool.map(do_work, vals)
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
</pre>

Dashboard - Bug #46667 (Resolved): mgr/dashboard: Handle buckets without a realm_id (https://tracker.ceph.com/issues/46667, 2020-07-22T12:13:42Z, Stephan Müller)
<p>The dashboard should not fail hard; it should handle buckets without a set realm_id.</p>
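<p>A minimal sketch of the defensive handling suggested above. The function name and the shape of the bucket dict are hypothetical, not the dashboard's actual API:</p>

```python
# Sketch: tolerate buckets that were created without a realm_id instead of
# failing hard. `bucket_realm` and the dict layout are illustrative only.
def bucket_realm(bucket_info):
    """Return the bucket's realm_id, or None if it was never set."""
    realm_id = bucket_info.get("realm_id")
    if not realm_id:
        return None  # no realm set: treat as "default", don't raise
    return realm_id

print(bucket_realm({"bucket": "test"}))        # None, no exception
print(bucket_realm({"realm_id": "my-realm"}))  # my-realm
```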
<p>The API fails with something like this:</p>
<pre>
RGW REST API failed request with status code 400
(b'{
"Code":"InvalidLocationConstraint",
"Message":"The specified location-constraint is not valid",
"BucketName":"test",
"RequestId":"tx00000000000000001b64c-005f16b722-137187-my-store",
"HostId":"137187-my-store-my-store"
}')
</pre>

Dashboard - Bug #46660 (Resolved): mgr/dashboard: Regression on table error handling (https://tracker.ceph.com/issues/46660, 2020-07-21T14:47:00Z, Stephan Müller)
<p>A regression was introduced by <a class="issue tracker-1 status-3 priority-5 priority-high3 closed" title="Bug: mgr/dashboard: OSD page is slow at loading all the inline pages and tabs (Resolved)" href="https://tracker.ceph.com/issues/45017">#45017</a> (PR mgr/dashboard: Migrate Tabs from ngx-bootstrap to ng-bootstrap #35290). With the new module, some tables got wrapped into the new tab usage; however, some components access the table via ViewChild and still declare it as static, even though it is now dynamic.</p>
<p>The error occurs on the host and pools listing.</p>

Dashboard - Bug #46303 (Resolved): mgr/dashboard: ExpressionChangedAfterItHasBeenCheckedError in ... (https://tracker.ceph.com/issues/46303, 2020-07-01T14:22:14Z, Stephan Müller)
<p>ExpressionChangedAfterItHasBeenCheckedError in the device selection modal of the OSD creation form. It looks like it is related to the modal switch <a class="issue tracker-2 status-3 priority-4 priority-default closed child" title="Feature: mgr/dashboard: Use ng-bootstrap for Modal (Resolved)" href="https://tracker.ceph.com/issues/45759">#45759</a>.</p>

Dashboard - Bug #46135 (Resolved): mgr/dashboard: Typeahead regression in the silence matcher (https://tracker.ceph.com/issues/46135, 2020-06-22T08:26:03Z, Stephan Müller)
<p>This regression was introduced by PR #35300, which migrated the typeahead usage from ngx-bootstrap to ng-bootstrap's typeahead module.</p>
<p>The regression is that the typeahead no longer opens on a click into the input field. Another regression is that the suggestions no longer overlap the modal.</p>

Orchestrator - Bug #45724 (Resolved): check-host should not fail using FQDN, or at least not that hard (https://tracker.ceph.com/issues/45724, 2020-05-27T09:26:51Z, Stephan Müller)
<p>I would suggest either identifying that the name is an FQDN or answering "Host not found. Use 'ceph orch host ls' to see all managed hosts."</p>
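<p>A minimal sketch of the suggested behaviour, assuming a hypothetical set of managed hosts; the error message mirrors the suggestion above rather than cephadm's actual code path:</p>

```python
# Sketch: answer with a helpful OrchestratorError for an unknown host name
# (e.g. an FQDN that was added by short name) instead of crashing on a
# None ssh_user. `managed_hosts` is an illustrative stand-in for the inventory.
class OrchestratorError(Exception):
    pass

def check_host(host, managed_hosts):
    if host not in managed_hosts:
        raise OrchestratorError(
            "Host not found. Use 'ceph orch host ls' to see all managed hosts.")
    return "ok"

try:
    check_host("node3.ses7.com", managed_hosts={"node3"})
except OrchestratorError as e:
    print(e)  # friendly hint instead of a TypeError traceback
```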
<pre>
# ceph cephadm check-host node3.ses7.com
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 63, in wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1482, in check_host
error_ok=True, no_fsid=True)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1569, in _run_cephadm
conn, connr = self._get_connection(addr)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1521, in _get_connection
n = self.ssh_user + '@' + host
TypeError: must be str, not NoneType
</pre>

Dashboard - Bug #44620 (Resolved): mgr/dashboard: Pool form max size (https://tracker.ceph.com/issues/44620, 2020-03-16T12:07:27Z, Stephan Müller)
<p>Currently the pool form's max size is determined by the "max_size" of the selected rule or the maximum number of available OSDs. That number can be wrong if the failure domain of the rule is not OSD.</p>
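<p>A sketch of why the OSD count is the wrong bound: with a non-OSD failure domain, the usable size is limited by the number of distinct failure-domain buckets the rule selects from. The host-to-OSDs mapping below is illustrative:</p>

```python
# Sketch: the maximum sensible pool size depends on the rule's failure
# domain, not on the raw OSD count. With failure domain "host", each
# replica must land on a different host.
def max_pool_size(osds_per_bucket, failure_domain="host"):
    if failure_domain == "osd":
        return sum(len(osds) for osds in osds_per_bucket.values())
    return len(osds_per_bucket)  # one replica per bucket (e.g. per host)

cluster = {"host0": [0, 1], "host1": [2, 3], "host2": [4]}
print(max_pool_size(cluster, "osd"))   # 5
print(max_pool_size(cluster, "host"))  # 3
```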
<p>I'm also currently not sure whether "max_size" and "min_size" are useful values to show; at least pools created on a vstart cluster always show the same min and max size values. Please make sure that those values can still be used safely.</p>

Dashboard - Bug #43938 (Duplicate): mgr/dashboard: Test failure in test_safe_to_destroy in tasks.... (https://tracker.ceph.com/issues/43938, 2020-01-31T15:14:52Z, Stephan Müller)
<p>There is an API test failure in master; I'm not sure where it came from.</p>
<pre>
2020-01-29 15:05:17,789.789 INFO:__main__:Stopped test: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest) in 1.567141s
2020-01-29 15:05:17,789.789 INFO:__main__:
2020-01-29 15:05:17,790.790 INFO:__main__:======================================================================
2020-01-29 15:05:17,790.790 INFO:__main__:FAIL: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest)
2020-01-29 15:05:17,790.790 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,790.790 INFO:__main__:Traceback (most recent call last):
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_osd.py", line 113, in test_safe_to_destroy
2020-01-29 15:05:17,790.790 INFO:__main__: 'stored_pgs': [],
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 343, in assertJsonBody
2020-01-29 15:05:17,790.790 INFO:__main__: self.assertEqual(body, data)
2020-01-29 15:05:17,790.790 INFO:__main__:AssertionError: {u'is_safe_to_destroy': False, u'message': u'[errno -11] 3 pgs have unknown stat [truncated]... != {'active': [], 'is_safe_to_destroy': True, 'stored_pgs': [], 'safe_to_destroy': [truncated]...
2020-01-29 15:05:17,790.790 INFO:__main__:+ {'active': [],
2020-01-29 15:05:17,791.791 INFO:__main__:- {u'is_safe_to_destroy': False,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^^ ^^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'is_safe_to_destroy': True,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^ ^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:- u'message': u'[errno -11] 3 pgs have unknown state; cannot draw any conclusions'}
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'missing_stats': [],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'safe_to_destroy': [13],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'stored_pgs': []}
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,792.792 INFO:__main__:Ran 93 tests in 1125.270s
2020-01-29 15:05:17,792.792 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:FAILED (failures=1)
2020-01-29 15:05:17,792.792 INFO:__main__:
Using guessed paths /home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/ ['/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/cython_modules/lib.3', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/src/pybind']
</pre>

Dashboard - Bug #43765 (Resolved): mgr/dashboard: Dashboard breaks on the selection of a bad pool (https://tracker.ceph.com/issues/43765, 2020-01-23T13:34:10Z, Stephan Müller)
<p>If a pool is created with a wrong rule that is oversized for the cluster, it gets into the "creating+incomplete" state.<br />This won't break the dashboard yet, but when clicking on the pool to get its details, every API request is left unanswered. The first unanswered call is "/api/pool/$badPool/configuration".</p>
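<p>One way to keep such an unanswered call from hanging the details view is to bound the request time; this is a generic sketch, not the dashboard's actual request code, and the timeout value is arbitrary:</p>

```python
# Sketch: bound a potentially stuck backend call so one bad pool can't
# block the dashboard forever. Returns None when no answer arrives in time.
import concurrent.futures

def fetch_with_timeout(fn, timeout=5.0):
    """Run fn, returning None instead of waiting indefinitely for a result."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            return None  # treat an unanswered call as "no data available"

print(fetch_with_timeout(lambda: {"pool": "ok"}))  # {'pool': 'ok'}
```

<p>The frontend can then render a "no data" state for that tab instead of blocking every subsequent API request.</p>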
<p>To reproduce this, you can create an erasure code profile with a greater value for k than you have OSDs.</p>

Dashboard - Bug #40330 (Resolved): mgr/dashboard: Warning about stale data makes it hard to click... (https://tracker.ceph.com/issues/40330, 2019-06-13T12:29:53Z, Stephan Müller)
<p>The warning about stale data in the datatable makes the content move up and down, making it hard to hit a certain row.</p>

Dashboard - Bug #39579 (Resolved): mgr/dashboard: Fix run-tox script to accept cli arguments again (https://tracker.ceph.com/issues/39579, 2019-05-03T10:30:12Z, Stephan Müller)
<p>A regression was introduced by <a href="https://github.com/ceph/ceph/commit/9426f1f2045d0ae0f319530c3dc3a9240d838d07#diff-cc2ee9d8e56f3a2cd98b8148935d3829L37" class="external">this change</a>, causing the script to no longer accept command line arguments. Therefore the command described in hacking.rst to run only a single tox test ("WITH_PYTHON2=OFF ./run-tox.sh pytest tests/test_rgw_client.py::RgwClientTest::test_ssl_verify") did not work anymore and caused tox to <a href="https://paste.opensuse.org/view//64659695" class="external">fail</a>.</p>
<pre>
Traceback (most recent call last):
File "/usr/bin/tox", line 11, in <module>
load_entry_point('tox==3.7.0', 'console_scripts', 'tox')()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 47, in cmdline
main(args)
File "/usr/lib/python3.7/site-packages/tox/session.py", line 54, in main
retcode = build_session(config).runcommand()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 467, in runcommand
return self.subcommand_test()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 590, in subcommand_test
self.run_sequential()
File "/usr/lib/python3.7/site-packages/tox/session.py", line 609, in run_sequential
self.runtestenv(venv)
File "/usr/lib/python3.7/site-packages/tox/session.py", line 728, in runtestenv
self.hook.tox_runtest(venv=venv, redirect=redirect)
File "/usr/lib/python3.7/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/usr/lib/python3.7/site-packages/pluggy/manager.py", line 68, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/lib/python3.7/site-packages/pluggy/manager.py", line 62, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/usr/lib/python3.7/site-packages/tox/venv.py", line 597, in tox_runtest
venv.test(redirect=redirect)
File "/usr/lib/python3.7/site-packages/tox/venv.py", line 468, in test
if argv[0].startswith("-"):
IndexError: list index out of range
</pre>

Dashboard - Bug #39300 (Resolved): mgr/dashboard: Can't login with a bigger time difference betwe... (https://tracker.ceph.com/issues/39300, 2019-04-15T15:45:19Z, Stephan Müller)
<p>With a time difference of -7h to the backend, I couldn't log in. The log threw the error `AMT: user info changed after token was issued, iat=%s lastUpdate=%s`, which can be found at line 150 in `dashboard/services/auth.py`. As a quick fix, I removed line 146 in the same file, which requires that `user.lastUpdate <= token['iat']` be true in order to log in.</p>

Dashboard - Bug #39299 (New): mgr/dashboard: Pools API should provide times in UTC that will be c... (https://tracker.ceph.com/issues/39299, 2019-04-15T15:42:18Z, Stephan Müller)
<p>Pool -> details -> 'create_time' attribute provides the local server time instead of UTC time.</p>

Dashboard - Bug #39297 (Resolved): mgr/dashboard: Logs provided by the API should provide timesta... (https://tracker.ceph.com/issues/39297, 2019-04-15T15:40:33Z, Stephan Müller)
<p>Log timestamps provide the local server time instead of UTC time.</p>

Dashboard - Bug #39295 (Resolved): mgr/dashboard: RGW Bucket API should provide times in UTC that... (https://tracker.ceph.com/issues/39295, 2019-04-15T15:38:09Z, Stephan Müller)
<p>RGW -> Bucket -> details -> 'modification time' attribute provides the local server time instead of UTC time.</p>
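<p>The three UTC issues above share the same fix idea: serve timezone-aware UTC timestamps so the frontend can convert to the browser's local time. A minimal sketch, assuming a Python backend with POSIX timestamps (the function name is illustrative):</p>

```python
# Sketch: emit timestamps as timezone-aware UTC in ISO-8601 form rather
# than naive local server time, so clients can localize them correctly.
from datetime import datetime, timezone

def timestamp_utc(ts):
    """Convert a POSIX timestamp to an ISO-8601 string in UTC."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

print(timestamp_utc(0))  # 1970-01-01T00:00:00+00:00
```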