Bug #61922
[pg_autoscaler] PG auto-scaler configs on individual pools is changed after set & unset of "noautoscale" flag
Status:
Resolved
Priority:
Normal
Assignee:
Category:
pg_autoscaler module
Target version:
-
% Done:
100%
Source:
Tags:
backport_processed
Backport:
reef, quincy, pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Description
Description of problem: Setting or unsetting the "noautoscale" flag overwrites the PG auto-scaler mode on every pool, discarding each pool's previous per-pool setting. After `ceph osd pool set noautoscale` every pool reports "off", and after `ceph osd pool unset noautoscale` every pool reports "on", even for pools that were previously configured as "warn" or "off".

Before setting the flag:

[ceph: root@ceph-node1-installer /]# ceph osd pool autoscale-status
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr                 10353k               4.0   53920G        0.0000                                 1.0   1                   off        False
cephfs.cephfs.meta   308.2M               4.0   53920G        0.0000                                 4.0   16                  warn       False
cephfs.cephfs.data   71680M               4.0   53920G        0.0052                                 1.0   32                  warn       False
.rgw.root            2562                 4.0   53920G        0.0000                                 1.0   32                  on         False
default.rgw.log      3729                 4.0   53920G        0.0000                                 1.0   32                  on         False
default.rgw.control  0                    4.0   53920G        0.0000                                 1.0   32                  off        False
default.rgw.meta     3546k                4.0   53920G        0.0000                                 4.0   32                  on         False

After setting the flag, every pool is "off":

[ceph: root@ceph-node1-installer /]# ceph osd pool set noautoscale
noautoscale is set, all pools now have autoscale off

[ceph: root@ceph-node1-installer /]# ceph osd pool autoscale-status
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr                 10353k               4.0   53920G        0.0000                                 1.0   1                   off        False
cephfs.cephfs.meta   308.2M               4.0   53920G        0.0000                                 4.0   16                  off        False
cephfs.cephfs.data   71680M               4.0   53920G        0.0052                                 1.0   32                  off        False
.rgw.root            2562                 4.0   53920G        0.0000                                 1.0   32                  off        False
default.rgw.log      3729                 4.0   53920G        0.0000                                 1.0   32                  off        False
default.rgw.control  0                    4.0   53920G        0.0000                                 1.0   32                  off        False
default.rgw.meta     3546k                4.0   53920G        0.0000                                 4.0   32                  off        False

After unsetting the flag, every pool is "on", including the pools that were previously "warn" (cephfs.cephfs.meta, cephfs.cephfs.data) or "off" (.mgr, default.rgw.control), so the original per-pool configuration is lost:

# ceph osd pool unset noautoscale
noautoscale is unset, all pools now have autoscale on

[ceph: root@ceph-node1-installer /]# ceph osd pool autoscale-status
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr                 10353k               4.0   53920G        0.0000                                 1.0   1                   on         False
cephfs.cephfs.meta   308.2M               4.0   53920G        0.0000                                 4.0   16                  on         False
cephfs.cephfs.data   71680M               4.0   53920G        0.0052                                 1.0   32                  on         False
.rgw.root            2562                 4.0   53920G        0.0000                                 1.0   32                  on         False
default.rgw.log      3729                 4.0   53920G        0.0000                                 1.0   32                  on         False
default.rgw.control  0                    4.0   53920G        0.0000                                 1.0   32                  on         False
default.rgw.meta     3546k                4.0   53920G        0.0000                                 4.0   32                  on         False
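The root cause can be illustrated with a small sketch. This is not Ceph's actual pg_autoscaler code (the real fix is in PR 52442); the `AutoscalerState` class, its attributes, and the mode strings are hypothetical stand-ins showing the difference between a global flag that *overwrites* per-pool modes (the buggy behavior, which cannot be undone) and one that merely *masks* them (so unsetting the flag restores each pool's configured mode):

```python
class AutoscalerState:
    """Illustrative model of per-pool autoscale modes plus a global flag.

    Hypothetical sketch, not Ceph's implementation: per-pool modes are
    "on", "off", or "warn"; `noautoscale` is the cluster-wide flag.
    """

    def __init__(self, pool_modes):
        self.pool_modes = dict(pool_modes)  # pool name -> configured mode
        self.noautoscale = False

    def set_noautoscale_buggy(self, flag):
        # Buggy approach: rewrite every pool's stored mode in place.
        # The previous "on"/"off"/"warn" values are destroyed, so
        # unsetting the flag can only restore a uniform "on".
        for pool in self.pool_modes:
            self.pool_modes[pool] = "off" if flag else "on"
        self.noautoscale = flag

    def set_noautoscale_fixed(self, flag):
        # Fixed approach: flip only the cluster-wide flag and leave the
        # per-pool configuration untouched.
        self.noautoscale = flag

    def effective_mode(self, pool):
        # The flag masks, but does not destroy, the per-pool setting.
        return "off" if self.noautoscale else self.pool_modes[pool]
```

With the fixed approach, a set/unset round trip leaves pools such as cephfs.cephfs.meta back in their original "warn" mode; with the buggy approach they come back "on", which matches the autoscale-status output above.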
Updated by Kamoltat (Junior) Sirivadhna 10 months ago
- Pull request ID set to 52442
Attaching PR ID: https://github.com/ceph/ceph/pull/52442
Updated by Kamoltat (Junior) Sirivadhna 10 months ago
- Status changed from New to Fix Under Review
Updated by Kamoltat (Junior) Sirivadhna 7 months ago
- Status changed from Fix Under Review to Pending Backport
- Backport set to reef, quincy, pacific
main is merged -> pending backport
Updated by Backport Bot 7 months ago
- Copied to Backport #62976: pacific: [pg_autoscaler] PG auto-scaler configs on individual pools is changed after set & unset of "noautoscale" flag added
Updated by Backport Bot 7 months ago
- Copied to Backport #62977: reef: [pg_autoscaler] PG auto-scaler configs on individual pools is changed after set & unset of "noautoscale" flag added
Updated by Backport Bot 7 months ago
- Copied to Backport #62978: quincy: [pg_autoscaler] PG auto-scaler configs on individual pools is changed after set & unset of "noautoscale" flag added
Updated by Yuri Weinstein 6 months ago
Updated by Konstantin Shalygin about 1 month ago
- Status changed from Pending Backport to Resolved
- % Done changed from 0 to 100