Bug #42055
closed: [osd] OSD space usage can reach 100%
% Done:
0%
Source:
Community (dev)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
OS: CentOS 7.6.1807, Ceph: ceph-13.2.5
Reproduce steps:
1. Create a pool and an image in that pool.
2. Write a test script with the following steps:
* a. Create a rados client.
* b. Connect to the rados cluster.
* c. Create an I/O context.
* d. Set the full-try flag on the I/O context using "rados_set_osdmap_full_try".
* e. Open the image.
* f. Get the watchers of the image.
* g. Add each watcher to the blacklist with an expire time of 30s.
3. Run the test script in a loop 100000 times.
4. Use 'ceph osd df' to display the OSD usage rate. The usage rate of the OSDs keeps increasing.
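The test script above can be sketched in C with the librados/librbd APIs. This is a minimal, cluster-dependent sketch, not the reporter's actual script: the pool name "testpool", image name "testimage", and the ceph.conf path are placeholders, and the blacklist step is issued here via `rados_mon_command` (one way to add an address to the OSD blacklist with a 30s expiry):

```c
#include <stdio.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void) {
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;

    /* a. create a rados client (cluster handle) */
    if (rados_create(&cluster, NULL) < 0) return 1;
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");

    /* b. connect to the rados cluster */
    if (rados_connect(cluster) < 0) return 1;

    /* c. create an I/O context ("testpool" is a placeholder) */
    if (rados_ioctx_create(cluster, "testpool", &ioctx) < 0) return 1;

    /* d. set the full-try flag so requests are sent even when the
     *    cluster is marked full */
    rados_set_osdmap_full_try(ioctx);

    /* e. open the image ("testimage" is a placeholder) */
    if (rbd_open(ioctx, "testimage", &image, NULL) < 0) return 1;

    /* f. list the watchers of the image */
    rbd_image_watcher_t watchers[8];
    size_t num_watchers = 8;
    if (rbd_watchers_list(image, watchers, &num_watchers) == 0) {
        /* g. blacklist each watcher address for 30 seconds */
        for (size_t i = 0; i < num_watchers; i++) {
            char cmd[512];
            snprintf(cmd, sizeof(cmd),
                     "{\"prefix\": \"osd blacklist\","
                     " \"blacklistop\": \"add\","
                     " \"addr\": \"%s\", \"expire\": 30.0}",
                     watchers[i].addr);
            const char *cmds[1] = { cmd };
            rados_mon_command(cluster, cmds, 1, "", 0,
                              NULL, NULL, NULL, NULL);
        }
        rbd_watchers_list_cleanup(watchers, num_watchers);
    }

    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}
```

Each run of this program creates a fresh client, which forces the monitors to publish blacklist changes in new osdmap epochs; looping it 100000 times produces a long chain of osdmap epochs that the OSDs keep on disk.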
The extra space on the OSDs is consumed by stored osdmaps.
I have a question:
Is there a function to trim the osdmaps stored on the OSDs?