Bug #22735

about mon_max_pg_per_osd

Added by jiantao zhu about 6 years ago. Updated about 6 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
OSD
% Done:
0%
Source:
Community (user)
Tags:
mon_max_pg_per_osd
Regression:
No
Severity:
2 - major

Description

I have only two OSDs and one mon, with mon_max_pg_per_osd = 200 set in my environment, yet I was able to create two pools with 402 PGs in total. Is this a bug?

[root@xenserver-001 block]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME              STATUS REWEIGHT PRI-AFF
-1       0.24979 root default
-3       0.24979     host xenserver-001
 0   hdd 0.12489         osd.0              up  1.00000 1.00000
 1   hdd 0.12489         osd.1              up  1.00000 1.00000

[root@xenserver-001 block]# ceph daemon osd.0 config show | grep mon_max_pg_per_osd
"mon_max_pg_per_osd": "200",

[root@xenserver-001 block]# ceph daemon osd.1 config show | grep mon_max_pg_per_osd
"mon_max_pg_per_osd": "200",

[root@xenserver-001 block]# ceph -s
  cluster:
    id:     305177d0-ec44-4675-9d2d-90d454a056bd
    health: HEALTH_WARN
            too many PGs per OSD (201 > max 200)

  services:
    mon: 1 daemons, quorum xenserver-001
    mgr: xenserver-001(active)
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   2 pools, 402 pgs
    objects: 0 objects, 0 bytes
    usage:   2110 MB used, 253 GB / 255 GB avail
    pgs:     402 active+clean
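
The warning arithmetic appears to be simply total PGs (weighted by replica count) divided by in OSDs. A minimal sketch in Python, assuming replicated pools of size 1 as set in the ceph.conf below:

# Per-OSD PG arithmetic behind "too many PGs per OSD (201 > max 200)".
# Assumes each PG counts once per replica (pool size).
total_pgs = 402                 # "pools: 2 pools, 402 pgs"
pool_size = 1                   # "osd pool default size = 1"
num_osds  = 2                   # "osd: 2 osds: 2 up, 2 in"
pgs_per_osd = total_pgs * pool_size / num_osds
print(pgs_per_osd)              # 201.0 -> exceeds mon_max_pg_per_osd = 200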

There is another problem as well; see:

[root@xenserver-001 block]# ceph osd pool create R_X-4ceb0f8a-1539-40a4-bee2-450a025b04e 1280 1280 replicated
Error ERANGE: pg_num 1280 size 1 would mean 1408 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)
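
The numbers in that error line up as follows. A small sketch, assuming the extra 128 PGs come from a pre-existing pool (which would match osd_pool_default_pg_num = 128 below); note the message counts num_in_osds as 3 even though ceph osd tree shows only two OSDs:

# Mon-side check implied by the ERANGE message above.
requested_pgs = 1280 * 1        # pg_num 1280, size 1
existing_pgs  = 128             # assumption: one pre-existing pool of 128 PGs
projected     = requested_pgs + existing_pgs   # "1408 total pgs"
limit         = 200 * 3         # mon_max_pg_per_osd 200 * num_in_osds 3 = 600
print(projected > limit)        # True -> Error ERANGE, pool creation refused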

[root@xenserver-001 block]# cat /etc/ceph/ceph.conf
[global]
fsid = 305177d0-ec44-4675-9d2d-90d454a056bd
mon_initial_members = xenserver-001
mon_host = 192.168.212.210
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd pool default min size = 1
mon_allow_pool_delete = true
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128

History

#1 Updated by John Spray about 6 years ago

  • Status changed from New to Closed

The factor osd_max_pg_per_osd_hard_ratio (default 2) is applied to the PG count limit before PG creation is actually prevented.
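
Concretely, a worked sketch of the two thresholds this implies, assuming the default hard ratio of 2:

# Warn vs. hard-stop thresholds, assuming osd_max_pg_per_osd_hard_ratio = 2.
mon_max_pg_per_osd = 200
hard_ratio = 2
warn_limit = mon_max_pg_per_osd               # 200 -> HEALTH_WARN above this
hard_limit = mon_max_pg_per_osd * hard_ratio  # 400 -> creation blocked above this
pgs_per_osd = 201                             # the reporter's cluster
print(pgs_per_osd > warn_limit)               # True  -> warning only
print(pgs_per_osd > hard_limit)               # False -> PGs still created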
