Feature #40986


cephfs qos: implement cephfs qos based on token bucket algorithm

Added by songbo wang almost 5 years ago. Updated almost 2 years ago.

Status:
Fix Under Review
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Community (dev)
Tags:
Backport:
Reviewed:
Affected Versions:
Component(FS):
Client, Common/Protocol, MDS
Labels (FS):
Pull request ID:

Description

The basic idea is as follows:

• Set QoS info as one of the dir's xattrs; all clients that can access the same dir share the same QoS setting.
• Follow the quota config flow: when the MDS receives the QoS setting, it also broadcasts it to all clients.
• The limit can be changed online.

[support]:
limit && burst config
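
The ticket does not include the bucket itself, so here is a minimal sketch of what a per-client token bucket with those two knobs could look like (limit = steady refill rate, burst = bucket capacity). Class and member names are illustrative, not taken from the PR:

#include <algorithm>
#include <chrono>

// Illustrative token bucket: "limit" is the refill rate in tokens per
// second, "burst" is the bucket capacity. Hypothetical names only.
class TokenBucket {
public:
  TokenBucket(double limit, double burst)
    : rate(limit), capacity(burst), tokens(burst),
      last(std::chrono::steady_clock::now()) {}

  // Reserve `need` tokens. Returns the time (seconds) the caller must
  // wait before issuing the request; 0 means it may proceed at once.
  double reserve(double need) {
    refill();
    tokens -= need;                       // may go negative (debt)
    return tokens >= 0 ? 0.0 : -tokens / rate;
  }

  // Online limit change, mirroring the xattr/broadcast flow above.
  void set_limit(double limit, double burst) {
    refill();
    rate = limit;
    capacity = burst;
    tokens = std::min(tokens, capacity);
  }

private:
  void refill() {
    auto now = std::chrono::steady_clock::now();
    double elapsed = std::chrono::duration<double>(now - last).count();
    last = now;
    tokens = std::min(capacity, tokens + elapsed * rate);
  }

  double rate;      // limit: tokens added per second
  double capacity;  // burst: maximum tokens the bucket can hold
  double tokens;    // currently available tokens (negative = debt)
  std::chrono::steady_clock::time_point last;
};

In this sketch reserve() lets the token count go negative, so a request larger than the burst is still admitted but has to wait out its debt; that is essentially the stall described under [problems] below.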

[usage]:
setfattr -n ceph.qos.limit.iops -v 200 /mnt/cephfs/testdirs/
setfattr -n ceph.qos.burst.read_bps -v 200 /mnt/cephfs/testdirs/
getfattr -n ceph.qos.limit.iops /mnt/cephfs/testdirs/
getfattr -n ceph.qos /mnt/cephfs/testdirs/

[problems]:
Because there is no queue in the CephFS I/O path, if the bps limit is lower than the request's block size, the whole client blocks until it has accumulated enough tokens.
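
For a rough sense of scale (the numbers are purely illustrative, not from the PR): a single 4 MiB request against a 200 B/s bps limit needs 4,194,304 / 200 ≈ 20,972 seconds (almost six hours) worth of tokens, and since there is no queue, every other request from that client stalls behind it for that whole time.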

Actions #1

Updated by songbo wang almost 5 years ago

I think there are two kinds of design:
1. All clients use the same QoS setting, just as implemented in this PR.
There may be multiple mount points, and limiting the total IO would also
limit how many mount points can be used effectively, so in my
implementation the total IO & BPS are not limited.

2. All clients share a specific QoS setting. I think there are two
kinds of use cases in detail:
2.1 Set a total limit; every client is limited to the average:
total_limit/clients_num.
2.2 Set a total limit; the MDS decides each client's limitation by
its historical IO & BPS (see the sketch below).
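
Neither variant is spelled out further here; a minimal sketch of how an MDS-side split could look for 2.1 (equal shares) and 2.2 (shares weighted by recent IO) follows. All names (ClientStats, recent_iops, etc.) are hypothetical, not actual Ceph code:

#include <cstddef>
#include <vector>

// Hypothetical per-client view the MDS might keep.
struct ClientStats {
  double recent_iops = 0;   // observed IOPS over some sliding window
};

// 2.1: every client gets the same share of the total limit.
double share_average(double total_limit, std::size_t clients_num) {
  return clients_num ? total_limit / clients_num : total_limit;
}

// 2.2: shares weighted by each client's historical IO, falling back to
// the average when no history is available yet.
std::vector<double> share_by_history(double total_limit,
                                     const std::vector<ClientStats>& clients) {
  double sum = 0.0;
  for (const auto& c : clients)
    sum += c.recent_iops;
  std::vector<double> shares(clients.size(), 0.0);
  for (std::size_t i = 0; i < clients.size(); ++i) {
    shares[i] = sum > 0 ? total_limit * clients[i].recent_iops / sum
                        : share_average(total_limit, clients.size());
  }
  return shares;
}

Either way the MDS would have to rebroadcast per-client limits whenever a client mounts, unmounts, or its history changes, which is the extra cost compared with design 1.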

Actions #2

Updated by Greg Farnum almost 5 years ago

  • Project changed from Ceph to CephFS
Actions #3

Updated by Patrick Donnelly almost 5 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to songbo wang
  • Target version set to v15.0.0
  • Start date deleted (07/26/2019)
  • Source set to Community (dev)
  • Component(FS) Client, Common/Protocol, MDS added
Actions #4

Updated by Patrick Donnelly over 4 years ago

  • Target version changed from v15.0.0 to v16.0.0
Actions #5

Updated by Patrick Donnelly over 3 years ago

  • Target version changed from v16.0.0 to v17.0.0
Actions #6

Updated by xianpao chen almost 3 years ago

I'm also interested in the status of QoS for CephFS. Is there any available and mature CephFS QoS mechanism?

Actions #7

Updated by Patrick Donnelly almost 2 years ago

  • Target version deleted (v17.0.0)