
Bug #40193

Changes to pg_num and other pool settings are ignored

Added by Nathan Fish almost 2 years ago. Updated almost 2 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression:
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

root@dc-3558-422:~# ceph osd pool set cephfs_data_cscf-home pg_num 64
set pool 1 pg_num to 64
root@dc-3558-422:~# ceph osd pool get cephfs_data_cscf-home pg_num
pg_num: 16
root@dc-3558-422:~# ceph osd pool set cephfs_data_cscf-home pgp_num 64
Error EINVAL: specified pgp_num 64 > pg_num 16

Nautilus 14.2.1 on Ubuntu 18.04.2 (HWE kernel 4.18)

mon log snippet while running "ceph osd pool set cephfs_data_cscf-home pg_num 64" with "debug mon = 15":

http://paste.ubuntu.com/p/cbpZFX29Z6/

I had the autoscaler enabled; I disabled it, but that did not fix the problem. This worked on a previous deployment of this cluster running 14.2.0. Should I roll back to 14.2.0 and try to reproduce it?
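For reference, the state described above can be inspected with standard Ceph CLI commands (a sketch; the pool name is taken from the transcript, and output will vary per cluster):

```shell
# Current pg_num/pgp_num as the monitors see them
ceph osd pool get cephfs_data_cscf-home pg_num
ceph osd pool get cephfs_data_cscf-home pgp_num

# Whether the autoscaler is still managing this pool
ceph osd pool autoscale-status

# The cluster-wide require_osd_release flag, which gates pg_num
# changes in Nautilus (relevant to the duplicate bug noted below)
ceph osd dump | grep require_osd_release
```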


Related issues

Duplicates RADOS - Bug #39570: nautilus with require_osd_release < nautilus cannot increase pg_num Resolved 05/02/2019
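Per the duplicate bug, Nautilus monitors silently ignore pg_num increases while require_osd_release is still set to a pre-nautilus release. If every OSD has in fact been upgraded, the usual remedy is (a sketch; verify your OSD versions first, as this flag cannot be rolled back):

```shell
# Confirm all OSDs report a Nautilus version before proceeding
ceph osd versions

# Only safe once every OSD is running Nautilus
ceph osd require-osd-release nautilus

# Retry the change afterwards
ceph osd pool set cephfs_data_cscf-home pg_num 64
```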

History

#1 Updated by Greg Farnum almost 2 years ago

  • Project changed from Ceph to RADOS

#2 Updated by Nathan Fish almost 2 years ago

By the way, I tried rolling back to 14.2.0 (with existing cluster state) and also kernel 4.15. Neither made any difference.

#3 Updated by Josh Durgin almost 2 years ago

  • Duplicates Bug #39570: nautilus with require_osd_release < nautilus cannot increase pg_num added

#4 Updated by Josh Durgin almost 2 years ago

  • Status changed from New to Duplicate

#5 Updated by Greg Farnum almost 2 years ago

Also check your cluster health — according to the monitor it's got "736 pgs creating" which is unusual, and probably related to whatever's not working here.
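The stuck "pgs creating" state Greg mentions can be examined with standard health commands (illustrative only; output depends on the cluster):

```shell
ceph status          # cluster summary, including any "N pgs creating" count
ceph health detail   # per-item detail for health warnings
ceph pg dump_stuck   # list PGs stuck in states such as creating or peering
```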
