Bug #42055

[osd] OSD space usage rate will reach 100%

Added by haitao chen over 4 years ago. Updated over 4 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
OSDMap
Target version:
% Done:
0%
Source:
Community (dev)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

OS: CentOS 7.6.1807; Ceph: ceph-13.2.5
Reproduce steps:
1. Create a pool and an image in that pool.

2. Write a test script that performs the steps listed below (a sketch of such a script follows these steps):

* a. Create a RADOS client.
* b. Connect to the RADOS cluster.
* c. Create an I/O context.
* d. Set the full-try flag on the I/O context using "rados_set_osdmap_full_try".
* e. Open the image.
* f. Get the watchers of this image.
* g. Add the watchers to the blacklist with an expire time of 30s.

3. Run the test script in a loop, 100000 times.

4. Use 'ceph osd df' to display the OSD usage. The usage rate of the OSDs keeps increasing.
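
For reference, below is a minimal sketch of such a test script against the mimic-era librados/librbd C API. The pool name "testpool" and image name "testimg" are placeholders I chose, blacklisting is done via rados_mon_command with the "osd blacklist" mon command (one way to do step g; the report does not say how it was done), and error handling is abbreviated. Build with something like: cc repro.c -o repro -lrados -lrbd

#include <stdio.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    rbd_image_watcher_t watchers[16];
    size_t nwatchers = 16;
    char cmd[512];
    const char *cmds[1];
    char *outbuf = NULL, *outs = NULL;
    size_t outbuf_len = 0, outs_len = 0;

    /* a, b: create a RADOS client and connect to the cluster */
    if (rados_create(&cluster, NULL) < 0)
        return 1;
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    if (rados_connect(cluster) < 0)
        return 1;

    /* c, d: create an I/O context and set the full-try flag */
    if (rados_ioctx_create(cluster, "testpool", &ioctx) < 0)
        return 1;
    rados_set_osdmap_full_try(ioctx);

    /* e, f: open the image and list its watchers */
    if (rbd_open(ioctx, "testimg", &image, NULL) < 0)
        return 1;
    if (rbd_watchers_list(image, watchers, &nwatchers) < 0)
        return 1;

    /* g: blacklist each watcher for 30 seconds. Every "osd blacklist add"
     * commits a new osdmap epoch, which is why looping this script inflates
     * the osdmaps stored on every OSD. */
    for (size_t i = 0; i < nwatchers; i++) {
        snprintf(cmd, sizeof(cmd),
                 "{\"prefix\": \"osd blacklist\", \"blacklistop\": \"add\", "
                 "\"addr\": \"%s\", \"expire\": 30.0}", watchers[i].addr);
        cmds[0] = cmd;
        rados_mon_command(cluster, cmds, 1, NULL, 0,
                          &outbuf, &outbuf_len, &outs, &outs_len);
        rados_buffer_free(outbuf);
        rados_buffer_free(outs);
        outbuf = outs = NULL;
    }

    rbd_watchers_list_cleanup(watchers, nwatchers);
    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}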

The OSD space is consumed by osdmaps.
My question is: is there a function to trim the osdmaps on an OSD?

#1

Updated by Greg Farnum over 4 years ago

Can you clarify? It sounds like you deliberately made the OSDMap very large via blacklisting (which, yes, can happen) and then managed to fill up a (I presume very small) OSD with those maps?

There are many mechanisms to deal with that but if you are using very small fake disks they won't work with default configurations and you'll need to update them.

#2

Updated by haitao chen over 4 years ago

Greg Farnum wrote:

Can you clarify? It sounds like you deliberately made the OSDMap very large via blacklisting (which, yes, can happen) and then managed to fill up a (I presume very small) OSD with those maps?

There are many mechanisms to deal with that but if you are using very small fake disks they won't work with default configurations and you'll need to update them.

Hi Greg,
Can you show me the default configurations? Thanks a lot.

Another question: is there a configuration option to trim osdmaps on the OSD?

#3

Updated by Greg Farnum over 4 years ago

  • Status changed from New to Closed

haitao chen wrote:

Greg Farnum wrote:

Can you clarify? It sounds like you deliberately made the OSDMap very large via blacklisting (which, yes, can happen) and then managed to fill up a (I presume very small) OSD with those maps?

There are many mechanisms to deal with that but if you are using very small fake disks they won't work with default configurations and you'll need to update them.

Hi Greg,
Can you show me the default configurations? Thanks a lot.

It's defined in the "ceph.git/src/common/options.cc" file and you can get it out of a running daemon with "ceph daemon osd.0 config help" (for osd.0, or whatever other one you specify).
That said, there are a lot of different values controlling disk space usage, and it'll probably be easier to just allocate disks of reasonable size. We do a lot of testing on 100GiB partitions and that should work fine.
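
For example (illustrative only; these option names exist in the mimic-era options.cc, but the relevant set varies by release):

  ceph daemon osd.0 config help osd_map_cache_size
  ceph daemon osd.0 config get osd_map_cache_size
  ceph daemon mon.a config get mon_min_osdmap_epochs

The last one is a mon-side option: as I understand it, the monitors keep at least that many epochs, which in turn bounds how far the OSDs can trim.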

Another question: is there a configuration option to trim osdmaps on the OSD?

They generally trim as much as is safe to do, although I think there's a minimum number they keep around in case clients connect with an old map.
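
One way to see what an OSD is currently retaining (assuming access to its admin socket; field names as of mimic):

  ceph daemon osd.0 status

The output includes "oldest_map" and "newest_map"; the span between them is the set of epochs still stored on that OSD.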
