Bug #3780
pg_num inappropriately low on new pools
Status: Won't Fix
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression:
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Version: 0.48.2-0ubuntu2~cloud0
On a Ceph cluster with 18 OSDs, new object pools are being created with a pg_num of 8. Upstream recommends roughly 100 PGs per OSD: http://article.gmane.org/gmane.comp.file-systems.ceph.devel/10242
I've worked around this by removing and recreating the pools with a higher pg_num before the cluster was put into use (see the sketch below), but since we aim for fully automated deployment (using Juju and MaaS), this is suboptimal.
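
For reference, a minimal sketch of the sizing rule the upstream thread points at, assuming the commonly cited heuristic of (OSDs * 100) / replica count, rounded up to a power of two; the pool name, replica count, and the exact delete syntax below are illustrative assumptions, not taken from this report:

    import math

    def suggested_pg_num(num_osds: int, replicas: int, pgs_per_osd: int = 100) -> int:
        """Round (num_osds * pgs_per_osd) / replicas up to the next power of two."""
        raw = (num_osds * pgs_per_osd) / replicas
        return 1 << math.ceil(math.log2(raw))

    # With the 18 OSDs from this cluster and an assumed 3 replicas, this yields 1024.
    pg_num = suggested_pg_num(18, 3)

    # The manual workaround described above, expressed as ceph CLI calls
    # (pool name is hypothetical; pool delete syntax differs between releases):
    print("ceph osd pool delete data")
    print(f"ceph osd pool create data {pg_num}")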