Bug #41735

Updated by Sage Weil over 4 years ago

Old pools have auto_scale on and ceph health still shows HEALTH_WARN (20 < 30) 


<pre>
 sh-4.2# ceph osd pool autoscale-status 
  POOL                                   SIZE    TARGET SIZE    RATE    RAW CAPACITY     RATIO    TARGET RATIO    BIAS    PG_NUM    NEW PG_NUM    AUTOSCALE  
  rook-ceph-cephfilesystem-data0           0                   3.0          297.0G    0.0000                   1.0         4                on         
  .rgw.root                                0                   3.0          297.0G    0.0000                   1.0         8                on         
  rook-ceph-cephblockpool              576.9k                  3.0          297.0G    0.0000                   1.0         4                on         
  rook-ceph-cephfilesystem-metadata     1549k                  3.0          297.0G    0.0000                   1.0         4                on         

 sh-4.2# ceph version 
 ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable) 


 sh-4.2# ceph pg ls 
 PG    OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG STATE          SINCE VERSION REPORTED    UP          ACTING      SCRUB_STAMP                  DEEP_SCRUB_STAMP            
 1.0         2          0           0         0       0             0            0     0 active+clean     90m      42'4     401:849 [0,2,1]p0 [0,2,1]p0 2019-09-09 23:25:58.767705 2019-09-06 23:04:48.764917  
 1.1         1          0           0         0       0             0            0     0 active+clean     91m      39'2    401:1180 [2,1,0]p2 [2,1,0]p2 2019-09-09 23:22:17.830001 2019-09-09 23:15:15.200786  
 1.2         4          0           0         0      36           345           27     0 active+clean     91m      42'8    401:6455 [0,1,2]p0 [0,1,2]p0 2019-09-09 23:25:20.698986 2019-09-09 23:25:20.698986  
 1.3         2          0           0         0      17             0            0     0 active+clean     91m      42'2    401:4809 [1,0,2]p1 [1,0,2]p1 2019-09-09 23:25:09.661832 2019-09-09 23:15:21.986925  
 2.0         9          0           0         0    1526             0            0     0 active+clean     90m      40'8 402:11312 [0,2,1]p0 [0,2,1]p0 2019-09-09 23:26:01.722927 2019-09-06 23:04:51.466968  
 2.1         4          0           0         0     160             0            0     0 active+clean     91m      40'6     401:462 [1,2,0]p1 [1,2,0]p1 2019-09-09 23:25:54.683444 2019-09-06 23:04:51.466968  
 2.2         2          0           0         0       0             0            0     0 active+clean     91m      42'4     401:464 [0,1,2]p0 [0,1,2]p0 2019-09-09 23:25:23.719986 2019-09-06 23:04:51.466968  
 2.3         7          0           0         0     600         13860           30     0 active+clean     91m      40'7     401:468 [0,1,2]p0 [0,1,2]p0 2019-09-09 23:24:06.789026 2019-09-09 23:24:06.789026  
 3.0         0          0           0         0       0             0            0     0 active+clean     91m       0'0     401:455 [2,0,1]p2 [2,0,1]p2 2019-09-09 23:15:33.106745 2019-09-09 23:15:33.106745  
 3.1         0          0           0         0       0             0            0     0 active+clean     91m       0'0     401:458 [1,0,2]p1 [1,0,2]p1 2019-09-09 23:15:36.899952 2019-09-06 23:04:55.747801  
 3.2         0          0           0         0       0             0            0     0 active+clean     91m       0'0     401:461 [1,2,0]p1 [1,2,0]p1 2019-09-09 23:21:59.619765 2019-09-09 23:21:59.619765  
 3.3         0          0           0         0       0             0            0     0 active+clean     91m       0'0     401:455 [0,1,2]p0 [0,1,2]p0 2019-09-09 23:15:58.340066 2019-09-06 23:04:55.747801  
 4.0         0          0           0         0       0             0            0     0 active+clean     17h       0'0     401:395 [1,0,2]p1 [1,0,2]p1 2019-09-09 07:08:50.651610 2019-09-06 23:09:43.671929  
 4.1         0          0           0         0       0             0            0     0 active+clean     16h       0'0     401:395 [2,0,1]p2 [2,0,1]p2 2019-09-09 08:47:42.989819 2019-09-06 23:09:43.671929  
 4.2         0          0           0         0       0             0            0     0 active+clean     15h       0'0     401:395 [2,0,1]p2 [2,0,1]p2 2019-09-09 09:02:26.803052 2019-09-09 09:02:26.803052  
 4.3         0          0           0         0       0             0            0     0 active+clean      7h       0'0     401:395 [2,0,1]p2 [2,0,1]p2 2019-09-09 16:57:00.042792 2019-09-09 16:57:00.042792  
 4.4         0          0           0         0       0             0            0     0 active+clean     14h       0'0     401:395 [1,2,0]p1 [1,2,0]p1 2019-09-09 10:45:22.251071 2019-09-06 23:09:43.671929  
 4.5         0          0           0         0       0             0            0     0 active+clean      9h       0'0     401:395 [0,2,1]p0 [0,2,1]p0 2019-09-09 14:57:47.728906 2019-09-06 23:09:43.671929  
 4.6         0          0           0         0       0             0            0     0 active+clean      4h       0'0     401:395 [2,1,0]p2 [2,1,0]p2 2019-09-09 20:05:52.983982 2019-09-06 23:09:43.671929  
 4.7         0          0           0         0       0             0            0     0 active+clean     12h       0'0     401:395 [1,2,0]p1 [1,2,0]p1 2019-09-09 12:14:59.441725 2019-09-06 23:09:43.671929  

 * NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilisation. See http://docs.ceph.com/docs/master/dev/placement-group/#omap-statistics for further details. 
</pre>
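For reference, the "(20 < 30)" figure in the subject line can be reconstructed from the output above. This is a sketch of the arithmetic, not Ceph's implementation: the per-pool pg_num values and the replica count (RATE 3.0) are read from the autoscale-status table, the 3-OSD count is inferred from the [0,1,2] up/acting sets in `ceph pg ls`, and 30 is the default `mon_pg_warn_min_per_osd` in 14.2.x.

```python
# Reproduce the "too few PGs per OSD (20 < min 30)" arithmetic.
# Inputs are taken from the autoscale-status output above;
# num_osds is inferred from the [0,1,2] up/acting sets.
pools = {
    "rook-ceph-cephfilesystem-data0": {"pg_num": 4, "size": 3},
    ".rgw.root": {"pg_num": 8, "size": 3},
    "rook-ceph-cephblockpool": {"pg_num": 4, "size": 3},
    "rook-ceph-cephfilesystem-metadata": {"pg_num": 4, "size": 3},
}
num_osds = 3
mon_pg_warn_min_per_osd = 30  # default in Nautilus (14.2.x)

# Each PG places one replica per copy, so count pg_num * size.
pg_replicas = sum(p["pg_num"] * p["size"] for p in pools.values())
pgs_per_osd = pg_replicas // num_osds
print(pgs_per_osd)  # 20

if pgs_per_osd < mon_pg_warn_min_per_osd:
    print(f"HEALTH_WARN: too few PGs per OSD "
          f"({pgs_per_osd} < min {mon_pg_warn_min_per_osd})")
```

Since all four pools have `pg_autoscale_mode` on but the autoscaler proposes no NEW PG_NUM, the cluster stays at 20 PGs per OSD and the warning persists.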
