Feature #11692


online change of mon_osd_full_ratio and mon_osd_nearfull_ratio doesn't take effect.

Added by cory gu almost 9 years ago. Updated over 8 years ago.

Status: Resolved
Priority: Normal
Assignee: Kefu Chai
Category: Monitor
Target version: -
% Done: 0%
Source: Community (user)

Description

I wanted to verify the behavior of a Ceph cluster in the full-OSD case, so I changed the default full ratios with the following commands:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config set mon_osd_full_ratio 0.20
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config set mon_osd_nearfull_ratio 0.15

Then I created a big file on the associated disk; df -lh shows the disk is over 20% used. I expected the Ceph cluster to give an OSD full warning.
However, ceph -s does not show any OSD full warnings.
The observation is that an online change of mon_osd_full_ratio and mon_osd_nearfull_ratio doesn't take effect.

Currently we can change the full ratio settings at runtime using

ceph pg set_full_ratio 0.20
ceph pg set_nearfull_ratio 0.15

but this might confuse our users: as per http://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/#storage-capacity, we are using

        mon osd full ratio = .80
        mon osd nearfull ratio = .70

for setting the full ratios, but adjusting them does not work once the PGMap has already been created.

Maybe we can have PGMonitor inherit from md_config_obs_t, watch for changes to "mon osd full ratio" and "mon osd nearfull ratio", and add the new settings to the pending proposal.
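
A rough sketch of that idea follows, assuming the md_config_obs_t observer interface of that era (get_tracked_conf_keys() / handle_conf_change()); the pending_inc fields and the propose_pending() call are illustrative stand-ins rather than the actual PGMonitor code:

    // Illustrative sketch only: PGMonitor as a config observer.
    class PGMonitor : public PaxosService,
                      public md_config_obs_t {
    public:
      // Tell the config framework which keys to watch; the observer
      // API expects a NULL-terminated array of key names.
      const char **get_tracked_conf_keys() const {
        static const char *KEYS[] = {
          "mon_osd_full_ratio",
          "mon_osd_nearfull_ratio",
          NULL
        };
        return KEYS;
      }

      // Invoked by the config framework when a tracked key changes.
      // Fold the new ratios into the pending PGMap increment so the
      // change goes through a normal Paxos proposal and reaches the
      // whole cluster once the quorum agrees.
      void handle_conf_change(const md_config_t *conf,
                              const std::set<std::string> &changed) {
        if (changed.count("mon_osd_full_ratio"))
          pending_inc.full_ratio = conf->mon_osd_full_ratio;        // assumed field
        if (changed.count("mon_osd_nearfull_ratio"))
          pending_inc.nearfull_ratio = conf->mon_osd_nearfull_ratio; // assumed field
        propose_pending();  // queue the increment for the quorum
      }
    };

The monitor would also have to register itself with the config framework (e.g. g_conf->add_observer(this)) at startup for handle_conf_change() to be called.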

#1

Updated by Kefu Chai almost 9 years ago

  • Status changed from New to 12

Cory, you might want to try out

ceph pg set_full_ratio 0.20
ceph pg set_nearfull_ratio 0.15

But I am not sure why we are using different settings for changing the full ratios at runtime, though.

#2

Updated by Kefu Chai almost 9 years ago

  • Assignee set to Kefu Chai
#3

Updated by cory gu almost 9 years ago

Hi Kefu,
Thank you for your reply. I just tried your suggested commands, and they work.
A few questions here:
Those settings are at the PG level; does this mean all pools have a unified full ratio setting? We can't set the full ratio pool by pool?

#4

Updated by Kefu Chai almost 9 years ago

cory gu wrote:

A few questions here:
Those settings are at the PG level; does this mean all pools have a unified full ratio setting?

It's a cluster-wide setting: all pools share the same full ratio settings in the same cluster.

We can't set the full ratio pool by pool?

No, we can't, not at this moment. Just out of curiosity, why would you want this to be a pool-specific setting?

#5

Updated by Kefu Chai almost 9 years ago

Kefu Chai wrote:

Cory, you might want to try out
[...]

But I am not sure why we are using different settings for changing the full ratios at runtime, though.

To identify the important settings that are supposed to be propagated to all OSDs in the cluster, we added the pg set_{,near}full_ratio commands.

#6

Updated by Kefu Chai almost 9 years ago

  • Status changed from 12 to Rejected

These options work as expected, so I am closing this issue. Please feel free to reopen it if you think otherwise, thanks!

#7

Updated by Jan Schermer almost 9 years ago

Sorry for hijacking this issue, but IMO it should be possible to set different thresholds on each OSD, not cluster-wide.

For example:
1) The OSDs are not the same size: leaving 15% free on a 400 GB OSD is 60 GB, and that's probably all right, but on a 1600 GB OSD 15% translates to 240 GB, which is probably a bit too much wasted space.
2) While in a production cluster the OSDs should have their filesystems to themselves, in a lab environment or a PoC it is conceivable that I would, for example, run them on the root filesystems of some VMs with other apps living alongside them. Uniformity is a non-issue in this case, and I am forced to commit more resources because of it.

#8

Updated by Kefu Chai over 8 years ago

  • Description updated (diff)
  • Status changed from Rejected to 12
#9

Updated by Kefu Chai over 8 years ago

On second thought: before the new setting can be applied to the new osdmap and sent to the OSD nodes, it needs to be agreed on by the quorum. That might be why we made this more explicit.

#10

Updated by Greg Farnum over 8 years ago

Yeah, I don't envision any good way to make it possible to change these values by updating the runtime config.

When thinking about alternatives, I think we could extend the config framework so that it can spit out more information when the user tries to inject a change. Then we could either mark these (and, perhaps, other non-dynamic changes) as unchangeable or at least return some kind of warning message to the user.

#11

Updated by Kefu Chai over 8 years ago

Right, that's a good idea. If no observer is tracking a config option, changing it at runtime would lead to a warning.
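
A minimal sketch of that check, with hypothetical names (ConfigRegistry, observers, set_runtime) rather than the actual Ceph config framework API:

    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical sketch: count the observers registered for each
    // config key and warn on runtime changes to keys nobody tracks,
    // since nothing will react to the new value until a restart.
    struct ConfigRegistry {
      std::map<std::string, int> observers;       // key -> observer count
      std::map<std::string, std::string> values;  // key -> current value

      void set_runtime(const std::string &key, const std::string &value) {
        values[key] = value;
        if (observers[key] == 0)
          std::cerr << "warning: '" << key << "' has no runtime observer; "
                       "the change will not take effect until restart\n";
      }
    };

    int main() {
      ConfigRegistry reg;
      // Nobody watches the full ratio at runtime, so injecting a new
      // value triggers the warning instead of failing silently.
      reg.set_runtime("mon_osd_full_ratio", "0.20");
    }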

#12

Updated by Kefu Chai over 8 years ago

  • Tracker changed from Bug to Feature
#13

Updated by Kefu Chai over 8 years ago

  • Status changed from 12 to Fix Under Review
#14

Updated by Sage Weil over 8 years ago

  • Status changed from Fix Under Review to Resolved