Mike Sirs, 04/03/2018 09:13 PM

h1. How to fix 'too many PGs' luminous
How I fixed it on 12.2.4 Luminous.
The warning "too many PGs per OSD (380 > max 200)" may lead to many blocked requests.
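To see exactly what your cluster reports, a quick check with the standard ceph CLI (nothing here is specific to my setup) looks like this:

<pre><code class="text">
ceph -s
ceph health detail   # prints the 'too many PGs per OSD' warning with the current numbers
</code></pre>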
First you need to set the following in ceph.conf:
<pre><code class="text">
[global]
mon_max_pg_per_osd = 800  # < depends on your number of PGs
osd_max_pg_per_osd_hard_ratio = 10  # < default is 2; set it to at least 5
mon_allow_pool_delete = true  # < without it you can't delete a pool
</code></pre>
# restart all MONs and OSDs, one by one
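For the restarts, here is a rough sketch assuming the daemons are managed by systemd; adjust it to your deployment, and note that osd.12 is just an example id:

<pre><code class="text">
# on each monitor host
systemctl restart ceph-mon@$(hostname -s)

# on each OSD host, one OSD at a time, e.g. osd.12
systemctl restart ceph-osd@12
</code></pre>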
# check the values ({$id} <- replace it with your mon/osd id)
<pre><code class="text">
ceph --admin-daemon /var/run/ceph/ceph-mon.{$id}.asok config get mon_max_pg_per_osd
ceph --admin-daemon /var/run/ceph/ceph-osd.{$id}.asok config get osd_max_pg_per_osd_hard_ratio
</code></pre>
# now look at the pools and their pg_num
<pre><code class="text">
rados lspools
ceph osd pool get .users.email pg_num
</code></pre>
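You can also check how many PGs each OSD currently holds, which is what the warning is really about; on Luminous the PGS column of ceph osd df shows it:

<pre><code class="text">
ceph osd df   # the PGS column shows the placement group count per OSD
</code></pre>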
In my case the default pg_num was 128 or something like that (my cluster is 4 years old; it has seen a lot of upgrades and a lot of changes).
You can reduce it like this:
!!!BE VERY CAREFUL!!!
<pre><code class="text">
ceph osd pool create .users.email.new 8
rados cppool .users.email .users.email.new
ceph osd pool delete .users.email .users.email --yes-i-really-really-mean-it
ceph osd pool rename .users.email.new .users.email
ceph osd pool application enable .users.email rgw
</code></pre>
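After the swap it is worth verifying the renamed pool and the overall health (standard ceph CLI again, just a sanity check):

<pre><code class="text">
ceph osd pool get .users.email pg_num   # should now report the new, lower pg_num
rados df                                # compare object counts to make sure the copy worked
ceph -s                                 # the 'too many PGs' warning should shrink or disappear
</code></pre>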
If that wasn't enough, try to find another pool whose pg_num you can cut the same way.
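A small shell loop like this one (just a sketch) prints pg_num for every pool, so you can spot the next candidate:

<pre><code class="text">
for pool in $(rados lspools); do
    echo -n "$pool: "
    ceph osd pool get "$pool" pg_num
done
</code></pre>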
That's it, I hope it will help someone.