Bug #58894 (closed)
[pg-autoscaler][mgr] does not throw warn to increase PG count on pools with autoscale_mode set to warn
% Done: 0%
Source:
Tags: backport_processed
Backport: pacific,quincy,reef
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Here, pools test, test1, and test3 should be emitting health warnings like "PG TOO FEW PLEASE SCALE", but none are raised:
2023-02-26T06:42:38.655+0000 7fbb4c335640 0 log_channel(cluster) log [DBG] : pgmap v210496: 190 pgs: 190 active+clean; 17 GiB data, 73 GiB used, 302 GiB / 375 GiB avail
2023-02-26T06:42:40.656+0000 7fbb4c335640 0 log_channel(cluster) log [DBG] : pgmap v210497: 190 pgs: 190 active+clean; 17 GiB data, 73 GiB used, 302 GiB / 375 GiB avail
2023-02-26T06:42:40.749+0000 7fbb3b213640 0 [pg_autoscaler INFO root] _maybe_adjust
2023-02-26T06:42:40.756+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.756+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.563247399286174e-06 of space, bias 1.0, pg target 0.002281623699643087 quantized to 1 (current 1)
2023-02-26T06:42:40.756+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.757+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5662077498223599e-07 of space, bias 4.0, pg target 0.000313241549964472 quantized to 16 (current 16)
2023-02-26T06:42:40.757+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.757+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2023-02-26T06:42:40.757+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.757+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0156728342410679e-08 of space, bias 1.0, pg target 5.0783641712053395e-06 quantized to 32 (current 32)
2023-02-26T06:42:40.757+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.758+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.7586359738521156e-08 of space, bias 1.0, pg target 1.3793179869260579e-05 quantized to 32 (current 32)
2023-02-26T06:42:40.758+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.758+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2023-02-26T06:42:40.760+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.760+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2042019810224254e-08 of space, bias 4.0, pg target 2.408403962044851e-05 quantized to 32 (current 32)
2023-02-26T06:42:40.760+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.761+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'test' root_id -1 using 0.10212092923454837 of space, bias 1.0, pg target 51.060464617274185 quantized to 64 (current 8)
2023-02-26T06:42:40.761+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.761+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'test1' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 4)
2023-02-26T06:42:40.762+0000 7fbb3b213640 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 402590269440
2023-02-26T06:42:40.762+0000 7fbb3b213640 0 [pg_autoscaler INFO root] Pool 'test3' root_id -1 using 0.03071969710843491 of space, bias 1.0, pg target 13.793144001687274 quantized to 32 (current 1)
2023-02-26T06:42:42.658+0000 7fbb4c335640 0 log_channel(cluster) log [DBG] : pgmap v210498: 190 pgs: 190 active+clean; 17 GiB data, 73 GiB used, 302 GiB / 375 GiB avail
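For context, the autoscaler's warn-mode health check essentially compares each pool's quantized pg target against its current pg_num using a ratio threshold (the mgr pg_autoscaler `threshold` option, default 3.0). A minimal sketch of that heuristic, applied to the figures in the log above; the helper name `too_few_pgs` is illustrative, not the module's actual API:

```python
# Sketch of the pg_autoscaler warn heuristic: when pg_autoscale_mode is
# 'warn', a pool whose quantized pg target is at least `threshold` times
# its current pg_num is expected to raise a POOL_TOO_FEW_PGS warning.
# `too_few_pgs` is a hypothetical helper, not the real module API.

THRESHOLD = 3.0  # default of the mgr/pg_autoscaler 'threshold' option

def too_few_pgs(pg_target: int, pg_num: int,
                threshold: float = THRESHOLD) -> bool:
    return pg_target >= pg_num * threshold

# (pool, quantized pg target, current pg_num) taken from the log above
pools = [
    ("test",  64, 8),   # target is 8x current -> should warn
    ("test1", 32, 4),   # target is 8x current -> should warn
    ("test3", 32, 1),   # target is 32x current -> should warn
    (".mgr",   1, 1),   # target equals current -> ok
]

for name, target, current in pools:
    verdict = "POOL_TOO_FEW_PGS expected" if too_few_pgs(target, current) else "ok"
    print(f"{name}: pg_target={target} pg_num={current} -> {verdict}")
```

By this check, test, test1, and test3 all cross the threshold, which is why the absence of any health warning in the log indicates a bug in the warn path rather than in the target calculation.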
Updated by Kamoltat (Junior) Sirivadhna about 1 year ago
- Status changed from New to Fix Under Review
- Pull request ID set to 50334
Updated by Neha Ojha about 1 year ago
- Status changed from Fix Under Review to Pending Backport
- Backport set to pacific,quincy
Updated by Backport Bot about 1 year ago
- Copied to Backport #59179: pacific: [pg-autoscaler][mgr] does not throw warn to increase PG count on pools with autoscale_mode set to warn added
Updated by Backport Bot about 1 year ago
- Copied to Backport #59180: quincy: [pg-autoscaler][mgr] does not throw warn to increase PG count on pools with autoscale_mode set to warn added
Updated by Yuri Weinstein 12 months ago
Updated by Kamoltat (Junior) Sirivadhna 12 months ago
- Status changed from Pending Backport to Resolved
All backports merged.
Updated by Radoslaw Zarzynski 8 months ago
- Status changed from Resolved to Pending Backport
- Backport changed from pacific,quincy to pacific,quincy,reef
Oops, it looks like this tracker missed a reef backport (the patches are absent in v18.2.0).
Updated by Kamoltat (Junior) Sirivadhna 8 months ago
- Related to Backport #62820: reef: [pg-autoscaler][mgr] does not throw warn to increase PG count on pools with autoscale_mode set to warn added
Updated by Kamoltat (Junior) Sirivadhna 8 months ago
- Status changed from Pending Backport to Resolved
All backports have been resolved.