Ceph : Issueshttps://tracker.ceph.com/https://tracker.ceph.com/favicon.ico2020-07-29T14:03:08ZCeph
Redmine Dashboard - Bug #46757 (New): mgr/dashboard: Only show identify action if inventory device can blinkhttps://tracker.ceph.com/issues/467572020-07-29T14:03:08ZStephan Müller
<p>If a device can't blink but is managed by cephadm, the "Identify" action is still shown on the inventory page. The problem is that the command doesn't surface an error in the dashboard when it fails. I observed the following error by running `ceph -W cephadm` in parallel with the execution.</p>
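<p>A minimal sketch, in Python, of hiding the action when blinking is unsupported. The inventory schema assumed here (an 'lsm_data' field with 'ledSupport'/'IDENTsupport' keys) is illustrative, not the actual cephadm output:</p>

```python
def can_blink(device: dict) -> bool:
    """True if the inventory entry claims ident-LED support.
    'lsm_data'/'ledSupport'/'IDENTsupport' are hypothetical key names."""
    lsm = device.get('lsm_data') or {}
    led = lsm.get('ledSupport') or {}
    return led.get('IDENTsupport') == 'Supported'


def visible_actions(device: dict) -> list:
    """Build the action list; only offer "Identify" when blinking works."""
    actions = ['Edit']
    if can_blink(device):
        actions.append('Identify')
    return actions
```

<p>With a check like this, the "Identify" entry would simply be absent for devices that cannot blink, instead of offering an action that fails silently.</p>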
<pre>
2020-07-29T08:53:30.649950-0500 mgr.x [ERR] executing blink(([DeviceLightLoc(host='osd0', dev='/dev/vdb', path='/dev/vdb')],)) failed.
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
2020-07-29T08:53:30.653157-0500 mgr.x [ERR] _Promise failed
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/orchestrator/_interface.py", line 292, in _finalize
next_result = self._on_complete(self._value)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 102, in <lambda>
return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1599, in blink_device_light
return blink(locs)
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 73, in forall_hosts_wrapper
return CephadmOrchestrator.instance._worker_pool.map(do_work, vals)
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
</pre> Dashboard - Bug #46667 (Resolved): mgr/dashboard: Handle buckets without a realm_idhttps://tracker.ceph.com/issues/466672020-07-22T12:13:42ZStephan Müller
<p>The dashboard should not fail hard on buckets without a set realm_id; it should handle them gracefully.</p>
<p>The API fails with something like this:</p>
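<p>A hedged sketch of degrading gracefully instead of failing hard. The RGWError class and the fetch callback are placeholders for the dashboard's actual RGW client code, not its real API:</p>

```python
class RGWError(Exception):
    """Placeholder for the dashboard's RGW REST error type (hypothetical)."""
    def __init__(self, status_code, code):
        super().__init__(code)
        self.status_code = status_code
        self.code = code


def get_bucket_safe(fetch, bucket):
    """Return fetch(bucket); on an RGW client error (like the 400
    InvalidLocationConstraint below) return a stub entry instead of
    letting the whole bucket listing fail."""
    try:
        return fetch(bucket)
    except RGWError as e:
        if 400 <= e.status_code < 500:
            return {'bucket': bucket, 'error': e.code}
        raise
```

<p>The point of the sketch: one misconfigured bucket yields one flagged row rather than breaking the entire bucket list.</p>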
<pre>
RGW REST API failed request with status code 400
(b'{
"Code":"InvalidLocationConstraint",
"Message":"The specified location-constr' b'aint is not valid",
"BucketName":"test",
"RequestId":"tx00000000000000001b64c-' b'005f16b722-137187-my-store",
"HostId":"137187-my-store-my-store"
}')
</pre> Dashboard - Bug #44224 (New): mgr/dashboard: Timeouts for rbd.py callshttps://tracker.ceph.com/issues/442242020-02-20T10:10:30ZStephan Müller
<p>As corner cases are not handled in many rbd.py methods, they can hang or fail without a response on specific pools (mostly pools in a bad state).</p>
<p>Once this is implemented, remove the workaround that was introduced to fix <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>For details on the known issues, see <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a>.</p>
<p>For the related discussion, see the PR that fixed <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>Make sure that <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a> is still unaddressed before starting on this issue.</p>
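<p>One hedged approach, sketched in Python: run the potentially hanging librbd call in a worker thread and give up after a deadline. The function and class names are illustrative, not the dashboard's actual API; note that a stuck worker thread can only be abandoned, not killed:</p>

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

_pool = ThreadPoolExecutor(max_workers=4)


class ViewCacheTimeout(Exception):
    """Raised when a wrapped call exceeds its deadline (hypothetical name)."""


def call_with_timeout(fn, *args, timeout=5.0):
    """Run fn(*args) in a worker thread; raise instead of hanging forever."""
    future = _pool.submit(fn, *args)
    try:
        return future.result(timeout=timeout)
    except TimeoutError:
        # The worker thread may still be blocked inside librbd; we can
        # only stop waiting for it, not terminate it.
        raise ViewCacheTimeout(f'{fn.__name__} timed out after {timeout}s')
```

<p>The caller can then show a "pool not responding" state instead of blocking the whole dashboard on one bad pool.</p>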
<p>For details on how this was implemented in openATTIC, see <a href="https://bitbucket.org/openattic/openattic/pull-requests/682/add-librados-command-name-to-external/diff" class="external">here</a>.</p> Dashboard - Bug #44223 (Duplicate): mgr/dashboard: Timeouts for rbd.py callshttps://tracker.ceph.com/issues/442232020-02-20T10:05:49ZStephan Müller
<p>As corner cases are not handled in many rbd methods, they can hang or fail without a response on specific pools (mostly pools in a bad state).</p>
<p>Once this is implemented, remove the workaround that was introduced to fix <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>For details on the known issues, see <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a>.</p>
<p>For the related discussion, see the PR that fixed <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>Make sure that <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a> is still unaddressed before starting on this issue.</p> Dashboard - Bug #43938 (Duplicate): mgr/dashboard: Test failure in test_safe_to_destroy in tasks....https://tracker.ceph.com/issues/439382020-01-31T15:14:52ZStephan Müller
<p>There is an API test failure in master; I'm not sure where it came from.</p>
<pre>
2020-01-29 15:05:17,789.789 INFO:__main__:Stopped test: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest) in 1.567141s
2020-01-29 15:05:17,789.789 INFO:__main__:
2020-01-29 15:05:17,790.790 INFO:__main__:======================================================================
2020-01-29 15:05:17,790.790 INFO:__main__:FAIL: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest)
2020-01-29 15:05:17,790.790 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,790.790 INFO:__main__:Traceback (most recent call last):
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_osd.py", line 113, in test_safe_to_destroy
2020-01-29 15:05:17,790.790 INFO:__main__: 'stored_pgs': [],
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 343, in assertJsonBody
2020-01-29 15:05:17,790.790 INFO:__main__: self.assertEqual(body, data)
2020-01-29 15:05:17,790.790 INFO:__main__:AssertionError: {u'is_safe_to_destroy': False, u'message': u'[errno -11] 3 pgs have unknown stat [truncated]... != {'active': [], 'is_safe_to_destroy': True, 'stored_pgs': [], 'safe_to_destroy': [truncated]...
2020-01-29 15:05:17,790.790 INFO:__main__:+ {'active': [],
2020-01-29 15:05:17,791.791 INFO:__main__:- {u'is_safe_to_destroy': False,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^^ ^^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'is_safe_to_destroy': True,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^ ^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:- u'message': u'[errno -11] 3 pgs have unknown state; cannot draw any conclusions'}
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'missing_stats': [],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'safe_to_destroy': [13],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'stored_pgs': []}
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,792.792 INFO:__main__:Ran 93 tests in 1125.270s
2020-01-29 15:05:17,792.792 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:FAILED (failures=1)
2020-01-29 15:05:17,792.792 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:======================================================================
2020-01-29 15:05:17,792.792 INFO:__main__:FAIL: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest)
2020-01-29 15:05:17,792.792 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,792.792 INFO:__main__:Traceback (most recent call last):
2020-01-29 15:05:17,792.792 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_osd.py", line 113, in test_safe_to_destroy
2020-01-29 15:05:17,793.793 INFO:__main__: 'stored_pgs': [],
2020-01-29 15:05:17,793.793 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 343, in assertJsonBody
2020-01-29 15:05:17,793.793 INFO:__main__: self.assertEqual(body, data)
2020-01-29 15:05:17,793.793 INFO:__main__:AssertionError: {u'is_safe_to_destroy': False, u'message': u'[errno -11] 3 pgs have unknown stat [truncated]... != {'active': [], 'is_safe_to_destroy': True, 'stored_pgs': [], 'safe_to_destroy': [truncated]...
2020-01-29 15:05:17,793.793 INFO:__main__:+ {'active': [],
2020-01-29 15:05:17,793.793 INFO:__main__:- {u'is_safe_to_destroy': False,
2020-01-29 15:05:17,793.793 INFO:__main__:? ^^ ^^^^
2020-01-29 15:05:17,793.793 INFO:__main__:
2020-01-29 15:05:17,793.793 INFO:__main__:+ 'is_safe_to_destroy': True,
2020-01-29 15:05:17,793.793 INFO:__main__:? ^ ^^^
2020-01-29 15:05:17,793.793 INFO:__main__:
2020-01-29 15:05:17,794.794 INFO:__main__:- u'message': u'[errno -11] 3 pgs have unknown state; cannot draw any conclusions'}
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'missing_stats': [],
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'safe_to_destroy': [13],
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'stored_pgs': []}
2020-01-29 15:05:17,794.794 INFO:__main__:
Using guessed paths /home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/ ['/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/cython_modules/lib.3', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/src/pybind']
</pre> Dashboard - Feature #43930 (New): mgr/dashboard: Make user creation with password change on logon...https://tracker.ceph.com/issues/439302020-01-31T11:37:12ZStephan Müller
<p>After <a class="issue tracker-2 status-5 priority-6 priority-high2 closed child" title="Feature: mgr/dashboard: Enforce password change upon first login (Closed)" href="https://tracker.ceph.com/issues/24655">#24655</a> has been resolved, I can create a new user without a password and check "User must change password at next logon".</p>
<p>However, I can't click the logon button without typing a password, even though the user has no password yet, as it is supposed to be set on the next logon.</p>
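<p>For the password-generation option discussed below, a sketch of automatically producing a compliant password. The concrete rule set assumed here (lowercase, uppercase, digit, symbol; length 12) is an assumption, not the dashboard's actual password policy:</p>

```python
import secrets
import string


def generate_password(length: int = 12) -> str:
    """Generate a random password with at least one character from each
    assumed character class (policy is illustrative)."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, '!@#$%&*']
    # Guarantee one character from each class, then fill the rest randomly.
    chars = [secrets.choice(c) for c in classes]
    alphabet = ''.join(classes)
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return ''.join(chars)
```

<p>Using the `secrets` module (rather than `random`) keeps the generated password cryptographically unpredictable.</p>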
<p>There are three ways to solve this:<br />1. Allow logon without a password.<br />2. Allow passwords that don't satisfy the password rules if the user has to change the password on first logon anyway.<br />3. Add a button that automatically generates a password complying with the rules and copies it to the clipboard.</p> Dashboard - Bug #43384 (Duplicate): mgr/dashboard: Pool size is calculated with data with differe...https://tracker.ceph.com/issues/433842019-12-19T14:01:20ZStephan Müller
<p>Currently we use the following calculation in the dashboard to calculate the usage:</p>
<pre><code>const avail = stats.bytes_used.latest + stats.max_avail.latest;<br /> pool['usage'] = avail > 0 ? stats.bytes_used.latest / avail : avail;</code></pre>
<p>[pool-list.component.ts:253-4]</p>
<p>The problem is that "max_avail" does not show the raw available space: the calculation divides the available space by the number of replicas for a replicated pool and by "(k+m)/k" for an EC pool.</p>
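<p>A worked sketch of the relationship just described, assuming max_avail is the raw available space divided by the pool's redundancy multiplier (field names are illustrative):</p>

```python
def raw_used_rate(pool: dict) -> float:
    """Data redundancy multiplier: 'size' for a replicated pool,
    (k+m)/k for an erasure-coded pool."""
    if pool['type'] == 'replicated':
        return pool['size']
    return (pool['k'] + pool['m']) / pool['k']


def max_avail(raw_avail: float, pool: dict) -> float:
    """What the cluster reports as available for this pool."""
    return raw_avail / raw_used_rate(pool)
```

<p>So 300 GiB of raw free space appears as 100 GiB for a size-3 replicated pool but as 200 GiB for a k=2, m=1 EC pool, which is why the dashboard's usage percentage can disagree with user expectations based on raw capacity.</p>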
<p>The calculation can be looked up at the following locations:<br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6033">https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6033</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6040">https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6040</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6054">https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6054</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L909">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L909</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L920">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L920</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L924">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L924</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L947">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L947</a></p>
<p>This problem was found by a user who notified us about the percentage mismatch; here is the link to the mail:<br /><a class="external" href="http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-December/037680.html">http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-December/037680.html</a></p>
<p>I also found this thread on the new mailing list regarding the usage percentage and max_avail topic:<br /><a class="external" href="https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NH2LMMX5KVRWCURI3BARRUAETKE2T2QN/#JDHXOQKWF6NZLQMOGEPAQCLI44KB54A3">https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NH2LMMX5KVRWCURI3BARRUAETKE2T2QN/#JDHXOQKWF6NZLQMOGEPAQCLI44KB54A3</a></p> Dashboard - Bug #42243 (Duplicate): mgr/dashboard: Fix unit test that is failing in a negative ti...https://tracker.ceph.com/issues/422432019-10-09T08:54:36ZStephan Müller
<p>Found a test that fails in a timezone with a negative UTC offset (Washington, -05:00):</p>
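<p>As the failure below shows, the expected pattern only accepts a '+' before the UTC offset. A sketch of a pattern that also matches negative offsets, using a [+-] character class:</p>

```python
import re

# Same shape as the test's expected pattern, but the offset sign may be
# '+' or '-' so the assertion also passes west of UTC.
PATTERN = re.compile(r'^image01_[\d-]+T[\d.:]+[+-][\d:]+$')
```

<p>The timestamp from the failing run, "image01_2019-10-09T03:40:18.664-05:00", now matches alongside positive-offset timestamps.</p>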
<pre>
● RbdSnapshotListComponent › snapshot modal dialog › should display suggested snapshot name
expect(received).toMatch(expected)
Expected pattern: /^image01_[\d-]+T[\d.:]+\+[\d:]+$/
Received string: "image01_2019-10-09T03:40:18.664-05:00"
204 | it('should display suggested snapshot name', () => {
205 | component.openCreateSnapshotModal();
> 206 | expect(component.modalRef.content.snapName).toMatch(
| ^
207 | RegExp(`^${component.rbdName}_[\\d-]+T[\\d.:]+\\+[\\d:]+\$`)
208 | );
209 | });
at src/app/ceph/block/rbd-snapshot-list/rbd-snapshot-list.component.spec.ts:206:51
at ZoneDelegate.Object.<anonymous>.ZoneDelegate.invoke (node_modules/zone.js/dist/zone.js:391:26)
at ProxyZoneSpec.Object.<anonymous>.ProxyZoneSpec.onInvoke (node_modules/zone.js/dist/proxy.js:129:39)
at ZoneDelegate.Object.<anonymous>.ZoneDelegate.invoke (node_modules/zone.js/dist/zone.js:390:52)
at Zone.Object.<anonymous>.Zone.run (node_modules/zone.js/dist/zone.js:150:43)
at Object.testBody.length (node_modules/jest-preset-angular/zone-patch/index.js:52:27)
</pre> Dashboard - Feature #42232 (New): mgr/dashboard: CephFs directory size calculationhttps://tracker.ceph.com/issues/422322019-10-08T15:02:25ZStephan Müller
<p>Add a button to calculate the size of the currently selected directory.</p> Dashboard - Feature #40332 (Duplicate): mgr/dashboard: Make PGs in pool creation form optional if...https://tracker.ceph.com/issues/403322019-06-13T12:37:47ZStephan Müller
<p>Make PGs in pool creation form optional if pg autoscaler is in use</p>
<p><a class="external" href="https://github.com/ceph/ceph/tree/master/src/pybind/mgr/pg_autoscaler">https://github.com/ceph/ceph/tree/master/src/pybind/mgr/pg_autoscaler</a></p> Dashboard - Bug #40331 (New): mgr/dashboard: Details should not display information that is shown...https://tracker.ceph.com/issues/403312019-06-13T12:31:31ZStephan Müller
<p>"Details" about table items should not display redundant information. Details should not be provided if the table already displays all available information.</p>
<p>We need "permanent" notifications for persistent issues instead of repeatedly popping up new notifications (e.g. when the backend is unreachable).</p>
<p>Unifying the polling behavior means that all API calls should be handled similarly on failure.</p>
<p>The idea is that the dashboard can recover from connection issues automatically, but it should either not send out a notification for every failure after the initial one, or it should increase the polling interval on each failure.</p>
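<p>A hedged sketch of the second option: exponential backoff on the polling interval with an upper bound, resetting on success. The concrete intervals are placeholders, not a proposed configuration:</p>

```python
class Backoff:
    """Track consecutive failures and derive the next polling interval."""

    def __init__(self, base=5.0, factor=2.0, cap=60.0):
        self.base, self.factor, self.cap = base, factor, cap
        self.failures = 0

    def record(self, ok: bool) -> None:
        """Reset the counter on success, bump it on failure."""
        self.failures = 0 if ok else self.failures + 1

    def interval(self) -> float:
        """Next polling delay in seconds: base * factor^failures, capped."""
        return min(self.base * self.factor ** self.failures, self.cap)
```

<p>A single shared instance of something like this would also give a natural place to mute follow-up notifications while <code>failures &gt; 0</code>.</p>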
<p>IMHO, muting the notifications that would be triggered after the initial failure sounds like the best idea.</p> Dashboard - Bug #36445 (Resolved): Missing requirement "python-werkzeug" for running the dashboar...https://tracker.ceph.com/issues/364452018-10-15T14:31:45ZStephan Müller
<p>With a freshly built Docker container, I got the following message from <strong>ceph -s</strong> after sourcing <strong>run-backend-api-tests.sh</strong>:<br /><pre>
Module 'restful' has failed dependency: No module named 'werkzeug'
</pre></p>
<p>I then found out that only the pip2 version of werkzeug was missing. After installing it and recreating the cluster, the message that led to the warning state disappeared.</p> Dashboard - Feature #25165 (Rejected): mgr/dashboard: Manage Ceph pool snapshotshttps://tracker.ceph.com/issues/251652018-07-30T14:42:38ZStephan Müller
<p>Currently there is no way to create or delete pool snapshots.</p>