Ceph: Issues (Redmine)
https://tracker.ceph.com/ (feed generated 2020-10-01T10:03:53Z)

Dashboard - Bug #47714 (New): mgr/dashboard: Implement an expert setting
https://tracker.ceph.com/issues/47714 (2020-10-01T10:03:53Z, Stephan Müller)
<p>To simplify forms, implement an expert setting that, when disabled, hides non-mandatory fields in the forms as a first step.</p>
<p>What should it look like?<br />Perhaps an expert toggle at the top right of the form and in the top panel, since it will affect what a user sees in the future.</p>
<p>Tip: before getting too deep into the implementation, please ask in the stand-up if that's the path we want to go down.</p>
Dashboard - Bug #46757 (New): mgr/dashboard: Only show identify action if inventory device can blink
https://tracker.ceph.com/issues/46757 (2020-07-29T14:03:08Z, Stephan Müller)
<p>If a device can't blink but is managed by cephadm, the "Identify" action will still be shown on the inventory page. The problem is that the command doesn't surface an error in the dashboard when it fails. I observed the following error by running `ceph -W cephadm` in parallel with the execution.</p>
<pre>
2020-07-29T08:53:30.649950-0500 mgr.x [ERR] executing blink(([DeviceLightLoc(host='osd0', dev='/dev/vdb', path='/dev/vdb')],)) failed.
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
2020-07-29T08:53:30.653157-0500 mgr.x [ERR] _Promise failed
Traceback (most recent call last):
File "/ceph/src/pybind/mgr/orchestrator/_interface.py", line 292, in _finalize
next_result = self._on_complete(self._value)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 102, in <lambda>
return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1599, in blink_device_light
return blink(locs)
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 73, in forall_hosts_wrapper
return CephadmOrchestrator.instance._worker_pool.map(do_work, vals)
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib64/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/ceph/src/pybind/mgr/cephadm/utils.py", line 67, in do_work
return f(*arg)
File "/ceph/src/pybind/mgr/cephadm/module.py", line 1591, in blink
raise OrchestratorError(
orchestrator._interface.OrchestratorError: Unable to affect ident light for osd0:/dev/vdb. Command: lsmcli local-disk-ident-led-on --path /dev/vdb
</pre>
Dashboard - Bug #46667 (Resolved): mgr/dashboard: Handle buckets without a realm_id
https://tracker.ceph.com/issues/46667 (2020-07-22T12:13:42Z, Stephan Müller)
<p>The dashboard should not fail hard on buckets without a set realm_id, but handle them gracefully.</p>
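A defensive pattern for this could look like the sketch below (hedged: `RgwApiError` and the `fetch_bucket` callable are hypothetical stand-ins for the dashboard's actual RGW client, shown only to illustrate degrading gracefully instead of failing hard):

```python
# Sketch only: RgwApiError and fetch_bucket are hypothetical stand-ins,
# not the dashboard's real RGW client API.
class RgwApiError(Exception):
    def __init__(self, status_code, body):
        super().__init__(body)
        self.status_code = status_code
        self.body = body


def get_bucket_safe(fetch_bucket, bucket_name):
    """Return bucket details, degrading gracefully when the RGW call fails.

    Instead of letting a 400 (e.g. InvalidLocationConstraint for buckets
    without a realm_id) break the whole bucket listing, return the fields
    we still know plus a flag the UI can render as "details unavailable".
    """
    try:
        return fetch_bucket(bucket_name)
    except RgwApiError as e:
        if e.status_code == 400:
            return {'bucket': bucket_name, 'details_available': False}
        raise  # anything else is still a real error
```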
<p>The API fails with something like this:</p>
<pre>
RGW REST API failed request with status code 400
(b'{
"Code":"InvalidLocationConstraint",
"Message":"The specified location-constr' b'aint is not valid",
"BucketName":"test",
"RequestId":"tx00000000000000001b64c-' b'005f16b722-137187-my-store",
"HostId":"137187-my-store-my-store"
}')
</pre>
Dashboard - Bug #46660 (Resolved): mgr/dashboard: Regression on table error handling
https://tracker.ceph.com/issues/46660 (2020-07-21T14:47:00Z, Stephan Müller)
<p>Regression was introduced by <a class="issue tracker-1 status-3 priority-5 priority-high3 closed" title="Bug: mgr/dashboard: OSD page is slow at loading all the inline pages and tabs (Resolved)" href="https://tracker.ceph.com/issues/45017">#45017</a> (PR mgr/dashboard: Migrate Tabs from ngx-bootstrap to ng-bootstrap #35290). With the new module, some tables got wrapped into the new tab markup; however, some components need to access the table as a ViewChild and still query it as a static ViewChild, although it is now rendered dynamically.</p>
<p>The error occurs on the host and pools listings.</p>
Dashboard - Bug #46303 (Resolved): mgr/dashboard: ExpressionChangedAfterItHasBeenCheckedError in ...
https://tracker.ceph.com/issues/46303 (2020-07-01T14:22:14Z, Stephan Müller)
<p>ExpressionChangedAfterItHasBeenCheckedError in the device selection modal of the OSD creation form. It looks like it has something to do with the modal switch <a class="issue tracker-2 status-3 priority-4 priority-default closed child" title="Feature: mgr/dashboard: Use ng-bootstrap for Modal (Resolved)" href="https://tracker.ceph.com/issues/45759">#45759</a>.</p>
Dashboard - Bug #46135 (Resolved): mgr/dashboard: Typeahead regression in the silence matcher
https://tracker.ceph.com/issues/46135 (2020-06-22T08:26:03Z, Stephan Müller)
<p>This regression was introduced by PR #35300, which updated the typeahead usage from ngx-bootstrap to ng-bootstrap's typeahead module.</p>
<p>The regression is that the typeahead no longer opens on click into the<br />input field. Another regression is that the suggestions no longer overlap<br />the modal.</p>
Dashboard - Bug #44753 (New): mgr/dashboard: Secure the Alertmanager receiver endpoint
https://tracker.ceph.com/issues/44753 (2020-03-25T14:10:42Z, Stephan Müller)
<p>Currently it is possible to send push notifications to the dashboard unauthenticated, and the notifications are not verified to actually come from an Alertmanager instance.</p>
<p>To see what's configurable, see <a class="external" href="https://prometheus.io/docs/alerting/configuration/#http_config">https://prometheus.io/docs/alerting/configuration/#http_config</a></p>
<p>Removing the endpoint is not a solution to be considered, as the Ceph orchestrator configures every Alertmanager instance to talk to the dashboard's receiver.</p>
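Since Alertmanager's http_config supports basic_auth for the receivers it calls, one option is to require matching credentials on the dashboard's receiver endpoint. The check itself is framework-agnostic; the sketch below is an illustration under assumptions (endpoint wiring, option storage, and the name `receiver_authorized` are made up, not the dashboard's actual code):

```python
import base64
import hmac

def receiver_authorized(auth_header, username, password):
    """Validate a Basic auth header as Alertmanager would send it when
    http_config/basic_auth is configured for the webhook receiver."""
    if not auth_header or not auth_header.startswith('Basic '):
        return False
    try:
        decoded = base64.b64decode(auth_header[len('Basic '):]).decode('utf-8')
    except ValueError:  # covers bad base64 padding and bad UTF-8
        return False
    expected = '{}:{}'.format(username, password)
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(decoded, expected)
```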
<p>The receiver is at the moment the only part that can handle multiple Alertmanager instances.</p>
Dashboard - Bug #44620 (Resolved): mgr/dashboard: Pool form max size
https://tracker.ceph.com/issues/44620 (2020-03-16T12:07:27Z, Stephan Müller)
<p>Currently the pool form's maximum size is determined by the "max_size" of the selected rule or the maximum number of available OSDs. That number can be wrong if the failure domain of the rule is not OSD.</p>
<p>I'm also currently not sure if "max_size" and "min_size" are useful values to show; at least pools created on a vstart cluster always show the same min and max size values. Please make sure that those values can still be used safely.</p>
Dashboard - Bug #44617 (New): mgr/dashboard: Some notifications are not shown in the notification...
https://tracker.ceph.com/issues/44617 (2020-03-16T10:02:31Z, Stephan Müller)
<p>Some notifications only pop up, but are not stored in the notifications table.<br />Observed this behavior for the erasure code profile creation and deletion as well as for the CRUSH rule creation and deletion.<br />It's possible that this also affects other pages.</p>
Dashboard - Bug #44224 (New): mgr/dashboard: Timeouts for rbd.py calls
https://tracker.ceph.com/issues/44224 (2020-02-20T10:10:30Z, Stephan Müller)
<p>As corner cases are not handled in many rbd.py methods, they can fail without ever returning a response on specific pools (mostly bad pools).</p>
<p>Once this is implemented, remove the workaround that was introduced to fix <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>For details on the known issue, see <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a>.</p>
<p>For details on the discussion, see the PR that fixed <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>Before starting on this issue, make sure that <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a> is still unaddressed.</p>
<p>For details on how this was implemented in openATTIC, look <a href="https://bitbucket.org/openattic/openattic/pull-requests/682/add-librados-command-name-to-external/diff" class="external">here</a></p>
Dashboard - Bug #44223 (Duplicate): mgr/dashboard: Timeouts for rbd.py calls
https://tracker.ceph.com/issues/44223 (2020-02-20T10:05:49Z, Stephan Müller)
<p>As corner cases are not handled in many rbd methods, they can fail without ever returning a response on specific pools (mostly bad pools).</p>
<p>Once this is implemented, remove the workaround that was introduced to fix <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>For details on the known issue, see <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a>.</p>
<p>For details on the discussion, see the PR that fixed <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: mgr/dashboard: Dashboard breaks on the selection of a bad pool (Resolved)" href="https://tracker.ceph.com/issues/43765">#43765</a>.</p>
<p>Before starting on this issue, make sure that <a class="issue tracker-1 status-6 priority-4 priority-default closed" title="Bug: pybind/rbd: config_list hangs if given an pool with a bad pg state (Rejected)" href="https://tracker.ceph.com/issues/43771">#43771</a> is still unaddressed.</p>
Dashboard - Bug #43938 (Duplicate): mgr/dashboard: Test failure in test_safe_to_destroy in tasks....
https://tracker.ceph.com/issues/43938 (2020-01-31T15:14:52Z, Stephan Müller)
<p>There is an API test failure in master; not sure where it came from.</p>
<pre>
2020-01-29 15:05:17,789.789 INFO:__main__:Stopped test: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest) in 1.567141s
2020-01-29 15:05:17,789.789 INFO:__main__:
2020-01-29 15:05:17,790.790 INFO:__main__:======================================================================
2020-01-29 15:05:17,790.790 INFO:__main__:FAIL: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest)
2020-01-29 15:05:17,790.790 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,790.790 INFO:__main__:Traceback (most recent call last):
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_osd.py", line 113, in test_safe_to_destroy
2020-01-29 15:05:17,790.790 INFO:__main__: 'stored_pgs': [],
2020-01-29 15:05:17,790.790 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 343, in assertJsonBody
2020-01-29 15:05:17,790.790 INFO:__main__: self.assertEqual(body, data)
2020-01-29 15:05:17,790.790 INFO:__main__:AssertionError: {u'is_safe_to_destroy': False, u'message': u'[errno -11] 3 pgs have unknown stat [truncated]... != {'active': [], 'is_safe_to_destroy': True, 'stored_pgs': [], 'safe_to_destroy': [truncated]...
2020-01-29 15:05:17,790.790 INFO:__main__:+ {'active': [],
2020-01-29 15:05:17,791.791 INFO:__main__:- {u'is_safe_to_destroy': False,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^^ ^^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'is_safe_to_destroy': True,
2020-01-29 15:05:17,791.791 INFO:__main__:? ^ ^^^
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,791.791 INFO:__main__:- u'message': u'[errno -11] 3 pgs have unknown state; cannot draw any conclusions'}
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'missing_stats': [],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'safe_to_destroy': [13],
2020-01-29 15:05:17,791.791 INFO:__main__:+ 'stored_pgs': []}
2020-01-29 15:05:17,791.791 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,792.792 INFO:__main__:Ran 93 tests in 1125.270s
2020-01-29 15:05:17,792.792 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:FAILED (failures=1)
2020-01-29 15:05:17,792.792 INFO:__main__:
2020-01-29 15:05:17,792.792 INFO:__main__:======================================================================
2020-01-29 15:05:17,792.792 INFO:__main__:FAIL: test_safe_to_destroy (tasks.mgr.dashboard.test_osd.OsdTest)
2020-01-29 15:05:17,792.792 INFO:__main__:----------------------------------------------------------------------
2020-01-29 15:05:17,792.792 INFO:__main__:Traceback (most recent call last):
2020-01-29 15:05:17,792.792 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_osd.py", line 113, in test_safe_to_destroy
2020-01-29 15:05:17,793.793 INFO:__main__: 'stored_pgs': [],
2020-01-29 15:05:17,793.793 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 343, in assertJsonBody
2020-01-29 15:05:17,793.793 INFO:__main__: self.assertEqual(body, data)
2020-01-29 15:05:17,793.793 INFO:__main__:AssertionError: {u'is_safe_to_destroy': False, u'message': u'[errno -11] 3 pgs have unknown stat [truncated]... != {'active': [], 'is_safe_to_destroy': True, 'stored_pgs': [], 'safe_to_destroy': [truncated]...
2020-01-29 15:05:17,793.793 INFO:__main__:+ {'active': [],
2020-01-29 15:05:17,793.793 INFO:__main__:- {u'is_safe_to_destroy': False,
2020-01-29 15:05:17,793.793 INFO:__main__:? ^^ ^^^^
2020-01-29 15:05:17,793.793 INFO:__main__:
2020-01-29 15:05:17,793.793 INFO:__main__:+ 'is_safe_to_destroy': True,
2020-01-29 15:05:17,793.793 INFO:__main__:? ^ ^^^
2020-01-29 15:05:17,793.793 INFO:__main__:
2020-01-29 15:05:17,794.794 INFO:__main__:- u'message': u'[errno -11] 3 pgs have unknown state; cannot draw any conclusions'}
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'missing_stats': [],
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'safe_to_destroy': [13],
2020-01-29 15:05:17,794.794 INFO:__main__:+ 'stored_pgs': []}
2020-01-29 15:05:17,794.794 INFO:__main__:
Using guessed paths /home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/ ['/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/build/lib/cython_modules/lib.3', '/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/src/pybind']
</pre>
Dashboard - Bug #43765 (Resolved): mgr/dashboard: Dashboard breaks on the selection of a bad pool
https://tracker.ceph.com/issues/43765 (2020-01-23T13:34:10Z, Stephan Müller)
<p>If a pool is created with a wrong rule that is oversized for the cluster, it gets into the "creating+incomplete" state.<br />This won't break the dashboard yet, but when clicking on the pool to get its details, every API request will be left unanswered. The first unanswered call is "/api/pool/$badPool/configuration".</p>
<p>To reproduce this, you can create an erasure code profile with a value of k greater than the number of OSDs you have.</p>
Dashboard - Bug #43594 (Resolved): mgr/dashboard: E2E pools page failure
https://tracker.ceph.com/issues/43594 (2020-01-14T09:58:44Z, Stephan Müller)
<p>On the current master the pools page fails:</p>
<pre>
npm run e2e:ci -- --specs e2e/pools/pools.e2e-spec.ts
> ceph-dashboard@0.0.0 e2e:ci /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend
> npm run env_build && npm run e2e:update && ng e2e --dev-server-target --webdriverUpdate=false "--specs" "e2e/pools/pools.e2e-spec.ts"
> ceph-dashboard@0.0.0 env_build /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend
> cp src/environments/environment.tpl.ts src/environments/environment.prod.ts && cp src/environments/environment.tpl.ts src/environments/environment.ts && node ./environment.build.js
Environment variables have been set
> ceph-dashboard@0.0.0 e2e:update /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend
> npx webdriver-manager update --gecko=false --versions.chrome=$(google-chrome --version | awk '{ print $3 }')
[10:50:42] I/update - chromedriver: file exists /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.70.zip
[10:50:42] I/update - chromedriver: unzipping chromedriver_78.0.3904.70.zip
[10:50:42] I/update - chromedriver: setting permissions to 0755 for /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.70
[10:50:42] I/update - chromedriver: chromedriver_78.0.3904.70 up to date
[10:50:42] I/update - selenium standalone: file exists /srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.141.59.jar
[10:50:42] I/update - selenium standalone: selenium-server-standalone-3.141.59.jar up to date
[10:50:56] I/launcher - Running 1 instances of WebDriver
[10:50:56] I/direct - Using ChromeDriver directly...
Activated Protractor Screenshoter Plugin, ver. 0.10.3 (c) 2016 - 2020 Andrej Zachar and contributors
Creating reporter at .protractor-report/
Jasmine started
Pools page
breadcrumb and tab tests
✓ should open and show breadcrumb (0.181 sec)
✓ should show two tabs (0.066 sec)
✓ should show pools list tab at first (0.075 sec)
✓ should show overall performance as a second tab (0.08 sec)
✗ should create a pool (15 secs)
- Failed: No element found using locator: By(css selector, input[name=pgNum])
at elementArrayFinder.getWebElements.then (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:814:27)
at process._tickCallback (internal/process/next_tick.js:68:7)Error:
at ElementArrayFinder.applyAction_ (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:459:27)
at ElementArrayFinder.(anonymous function).args [as sendKeys] (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:91:29)
at ElementFinder.(anonymous function).args [as sendKeys] (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/protractor/built/element.js:831:22)
at PoolPageHelper.<anonymous> (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/e2e/pools/pools.po.ts:44:34)
at step (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/tslib/tslib.js:136:27)
at Object.next (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/tslib/tslib.js:117:57)
at fulfilled (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/tslib/tslib.js:107:62)
at process._tickCallback (internal/process/next_tick.js:68:7)
From asynchronous test:
Error:
at Suite.<anonymous> (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/e2e/pools/pools.e2e-spec.ts:34:3)
at apply (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/lodash/lodash.js:476:27)
at Env.wrapper [as describe] (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/node_modules/lodash/lodash.js:5317:16)
at Object.<anonymous> (/srv/cephmgr/ceph-dev/src/pybind/mgr/dashboard/frontend/e2e/pools/pools.e2e-spec.ts:3:1)
at Module._compile (internal/modules/cjs/loader.js:689:30)
**************************************************
* Failures *
**************************************************
1) Pools page should create a pool
- Failed: No element found using locator: By(css selector, input[name=pgNum])
Executed 5 of 7 specs (1 FAILED) (2 SKIPPED) in 19 secs.
[10:51:46] I/launcher - 0 instance(s) of WebDriver still running
[10:51:46] I/launcher - chrome #01 failed 1 test(s)
[10:51:46] I/launcher - overall: 1 failed spec(s)
[10:51:46] E/launcher - Process exited with error code 1
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! ceph-dashboard@0.0.0 e2e:ci: `npm run env_build && npm run e2e:update && ng e2e --dev-server-target --webdriverUpdate=false "--specs" "e2e/pools/pools.e2e-spec.ts"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the ceph-dashboard@0.0.0 e2e:ci script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/albatros/.npm/_logs/2020-01-14T09_51_46_846Z-debug.log
</pre>
Dashboard - Bug #43384 (Duplicate): mgr/dashboard: Pool size is calculated with data with differe...
https://tracker.ceph.com/issues/43384 (2019-12-19T14:01:20Z, Stephan Müller)
<p>Currently we use the following calculation in the dashboard to calculate the usage:</p>
<pre><code>const avail = stats.bytes_used.latest + stats.max_avail.latest;<br /> pool['usage'] = avail > 0 ? stats.bytes_used.latest / avail : avail;</code></pre>
<p>[pool-list.component.ts:253-4]</p>
<p>The problem is that "max_avail" is calculated somewhat confusingly: it does not show the real (raw) available space, because the raw free space is divided by the number of replicas for a replicated pool and by "(m+k)/k" for an EC pool.</p>
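The scaling described above can be illustrated with invented numbers (hedged: `max_avail` below merely mirrors the division described in this report; it is not the real OSDMap/PGMap code):

```python
# Illustration with invented numbers: max_avail already accounts for
# redundancy, so it is smaller than the raw free space.
def max_avail(raw_avail_bytes, replicas=None, k=None, m=None):
    if replicas is not None:               # replicated pool: divide by size
        return raw_avail_bytes // replicas
    return raw_avail_bytes * k // (k + m)  # EC pool: divide by (k+m)/k

raw = 300 * 1024 ** 3                      # 300 GiB raw free space
assert max_avail(raw, replicas=3) == 100 * 1024 ** 3  # size=3 replicated
assert max_avail(raw, k=4, m=2) == 200 * 1024 ** 3    # EC 4+2

# The dashboard's ratio therefore compares logical used bytes against
# logical capacity (used + max_avail), not against raw space:
used = 50 * 1024 ** 3
usage = used / (used + max_avail(raw, replicas=3))    # 50 / 150
```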
<p>To look up the calculation, go to the following locations:<br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6033">https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6033</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6040">https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6040</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6054">https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L6054</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L909">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L909</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L920">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L920</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L924">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L924</a><br /><a class="external" href="https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L947">https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc#L947</a></p>
<p>This problem was found by a user who notified us about the percentage mismatch; here is the link to the mail:<br /><a class="external" href="http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-December/037680.html">http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-December/037680.html</a></p>
<p>Also found this thread on the new mailing list regarding the used percentage and max_avail topic:<br /><a class="external" href="https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NH2LMMX5KVRWCURI3BARRUAETKE2T2QN/#JDHXOQKWF6NZLQMOGEPAQCLI44KB54A3">https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NH2LMMX5KVRWCURI3BARRUAETKE2T2QN/#JDHXOQKWF6NZLQMOGEPAQCLI44KB54A3</a></p>