Bug #57532
Noticed discrepancies in the performance of mClock built-in profiles
History
#1 Updated by Srinivasa Bharath Kanta about 1 year ago
From the following data, I noticed that:
1. In Case 1, the observed client IO shares for the high_client, high_recovery, and balanced profiles come out to roughly 80%, 50%, and 50% of the baseline, but as per the requirements the reservations should be 50%, 30%, and 40% (these shares are derived from the averages below; see the sketch after the Case-2 data).
2. In Case 2, the balanced profile's share is higher than the provided requirement.
Baseline data with the noscrub, nodeep-scrub, norecover, and nobackfill flags set:
avg=2046.40
avg=2163.40
avg=2089.77
avg=2188.73
Case-1:
-------
Data with the noscrub and nodeep-scrub flags set (background operations were nil):
High Client:
avg=1613.91
avg=1654.76
avg=1756.19
avg=1762.92
High Recovery:
avg=1194.26
avg=1050.96
avg=1191.49
avg=1067.76
Balanced:
avg=1203.38
avg=1097.60
avg=1172.83
avg=1223.66
Case-2:
--------
All flags unset:
High Client:
675.55
709.15
755.74
635.84
High Recovery:
669.11
772.88
785.91
922.74
Balanced:
1059.94
1271.86
1142.54
1223.66
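For reference, the percentages quoted in point 1 above can be reproduced from the per-run averages by treating the all-flags-set run as the baseline and dividing each profile's mean client throughput by the baseline mean. A minimal sketch (not part of the original report; the grouping and labels are mine):

```python
# Sketch: reproduce the client-throughput shares quoted in point 1 from the
# per-run averages above. The run with noscrub/nodeep-scrub/norecover/
# nobackfill set (no competing background work) is taken as the baseline.

baseline = [2046.40, 2163.40, 2089.77, 2188.73]

case1 = {  # noscrub and nodeep-scrub set
    "high_client":   [1613.91, 1654.76, 1756.19, 1762.92],
    "high_recovery": [1194.26, 1050.96, 1191.49, 1067.76],
    "balanced":      [1203.38, 1097.60, 1172.83, 1223.66],
}
case2 = {  # all flags unset
    "high_client":   [675.55, 709.15, 755.74, 635.84],
    "high_recovery": [669.11, 772.88, 785.91, 922.74],
    "balanced":      [1059.94, 1271.86, 1142.54, 1223.66],
}

def mean(xs):
    return sum(xs) / len(xs)

base = mean(baseline)
for label, case in (("Case-1", case1), ("Case-2", case2)):
    for profile, runs in case.items():
        print(f"{label} {profile:13s}: {mean(runs) / base:.0%} of baseline")
```

Run against the figures above, Case-1 comes out near 80%, 53%, and 55%, roughly the 80%/50%/50% cited in point 1.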
Version-Release number of selected component (if applicable):
ceph version 17.2.3-21.el9cp
#2 Updated by Aishwarya Mathuria about 1 year ago
Hi Bharath, could you also add the mClock configuration values from the osd config show command here?
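A minimal sketch of collecting just the mClock-related options from the running OSD configuration via `ceph config show` (the osd.0 id is only an example; assumes the ceph CLI is available):

```python
# Sketch: collect the mClock-related options from a running OSD.
# Assumes the `ceph` CLI is available and that osd.0 is a valid daemon id.
import subprocess

def mclock_config(osd_id="osd.0"):
    out = subprocess.run(
        ["ceph", "config", "show", osd_id],
        capture_output=True, text=True, check=True,
    ).stdout
    # Keep only lines whose option name mentions mclock.
    return [line for line in out.splitlines() if "mclock" in line]

if __name__ == "__main__":
    print("\n".join(mclock_config()))
```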
#3 Updated by Aishwarya Mathuria about 1 year ago
- Assignee set to Sridhar Seshasayee
#4 Updated by Aishwarya Mathuria about 1 year ago
As Sridhar has mentioned in the BZ, the Case 2 results are due to the max limit setting for best effort clients. This will be fixed in the following PR: https://github.com/ceph/ceph/pull/48226/files
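For context on why the limit matters: mClock gives each operation class a reservation (floor) and a limit (ceiling), typically expressed as fractions of the OSD's IOPS capacity, so a very high limit on the best-effort class lets it compete with client ops above their reservation. A rough illustration with hypothetical numbers (not the actual profile values):

```python
# Hypothetical illustration only (values are not the real profile settings):
# each op class gets a reservation (guaranteed floor) and a limit (ceiling)
# as fractions of the OSD's IOPS capacity. A best-effort limit close to the
# full capacity means best-effort ops can absorb throughput that clients
# would otherwise receive beyond their reservation.
osd_capacity_iops = 2000  # assumed measured OSD capacity

classes = {
    "client":                 (0.50, 1.00),  # (reservation, limit)
    "background_recovery":    (0.25, 0.75),
    "background_best_effort": (0.00, 1.00),  # high limit -> competes with clients
}

for name, (res, lim) in classes.items():
    print(f"{name:24s} floor = {res * osd_capacity_iops:6.0f} IOPS, "
          f"ceiling = {lim * osd_capacity_iops:6.0f} IOPS")
```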
#5 Updated by Radoslaw Zarzynski 12 months ago
- Duplicates Bug #57529: mclock backfill is getting higher priority than WPQ added
#6 Updated by Radoslaw Zarzynski 12 months ago
- Status changed from New to Duplicate
Marked as duplicate per comment #4.