Bug #36285: openqa/workunits/cephtool/test.sh test fails setting pg_num to 97
Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
/a/dzafman-2018-09-26_22:31:44-rados-wip-zafman-testing-distro-basic-smithi/3074562
The portion of the test shown here is nothing new.
2018-09-27T11:28:37.353 INFO:tasks.workunit.client.0.smithi072.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2024: test_mon_osd_pool_set: ceph osd pool get pool_getset pg_num
2018-09-27T11:28:37.353 INFO:tasks.workunit.client.0.smithi072.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2024: test_mon_osd_pool_set: sed -e 's/pg_num: //'
2018-09-27T11:28:37.726 INFO:tasks.workunit.client.0.smithi072.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2024: test_mon_osd_pool_set: old_pgs=1
2018-09-27T11:28:37.727 INFO:tasks.workunit.client.0.smithi072.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2025: test_mon_osd_pool_set: ceph osd stat --format json
2018-09-27T11:28:37.727 INFO:tasks.workunit.client.0.smithi072.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2025: test_mon_osd_pool_set: jq .num_osds
2018-09-27T11:28:38.084 INFO:tasks.workunit.client.0.smithi072.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2025: test_mon_osd_pool_set: new_pgs=97
2018-09-27T11:28:38.084 INFO:tasks.workunit.client.0.smithi072.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2026: test_mon_osd_pool_set: ceph osd pool set pool_getset pg_num 97
2018-09-27T11:28:38.357 INFO:tasks.workunit.client.0.smithi072.stderr:Error E2BIG: specified pg_num 97 is too large (creating 87 new PGs on ~1 OSDs exceeds per-OSD max with mon_osd_max_split_count of 32)
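The E2BIG comes from the monitor's per-OSD split limit: a pg_num increase is rejected when the number of newly created PGs exceeds num_osds * mon_osd_max_split_count. Here is a minimal sketch of just that arithmetic (the function name and return shape are hypothetical; the real check lives in the monitor code, and the message text is copied from the log above):

```python
# Sketch of the monitor-side split-limit check that produces the E2BIG
# in the log. This only models the arithmetic, not the real code path.

def check_pg_num_increase(cur_pg_num, new_pg_num, num_osds,
                          mon_osd_max_split_count=32):
    """Reject a pg_num increase whose number of new PGs exceeds
    num_osds * mon_osd_max_split_count."""
    new_pgs = new_pg_num - cur_pg_num
    limit = num_osds * mon_osd_max_split_count
    if new_pgs > limit:
        return (False,
                f"specified pg_num {new_pg_num} is too large "
                f"(creating {new_pgs} new PGs on ~{num_osds} OSDs "
                f"exceeds per-OSD max with mon_osd_max_split_count "
                f"of {mon_osd_max_split_count})")
    return (True, "")

# Numbers from the error message: the pool is at pg_num 10 and the
# monitor sees ~1 OSD, so 97 - 10 = 87 new PGs > 1 * 32 = 32.
ok, err = check_pg_num_increase(10, 97, 1)
assert not ok and "87 new PGs" in err

# With the 3 OSDs the test apparently counted, and pg_num 1 as
# `ceph osd pool get` reported, the increase would fit exactly:
# 97 - 1 = 96 <= 3 * 32 = 96.
ok, _ = check_pg_num_increase(1, 97, 3)
assert ok
```

The two calls show why the failure is surprising: under either consistent view of the cluster (pg_num 1 with 3 OSDs, or pg_num 10 with enough OSDs) the request would be within the limit; the rejection only happens because the monitor simultaneously sees pg_num 10 and ~1 OSD.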
Updated by Greg Farnum over 5 years ago
Did all the OSDs actually come up? Note the last line there: specified pg_num 97 is too large (creating 87 new PGs on ~1 OSDs exceeds per-OSD max with mon_osd_max_split_count of 32)
Josh suggests this might be because we shifted to creating pools with 1 PG following the pg-merging code, but I'm not sure that matches since it already has 10?
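The numbers in the log quantify that mismatch. The test derived new_pgs=97 right after piping `ceph osd stat --format json` through `jq .num_osds` (test.sh:2025), which is consistent with a formula like old_pgs + num_osds * 32 (an assumption; the exact expression is not shown in the log). A short reconstruction of that arithmetic:

```python
# Hypothetical reconstruction of the test's arithmetic from the logged
# values old_pgs=1 and new_pgs=97. The formula below is an assumption
# inferred from the jq .num_osds call at test.sh:2025.
old_pgs = 1
new_pgs = 97
mon_osd_max_split_count = 32

# If new_pgs = old_pgs + num_osds * 32, the test saw 3 OSDs:
num_osds_seen_by_test = (new_pgs - old_pgs) // mon_osd_max_split_count
assert num_osds_seen_by_test == 3

# But the monitor's error says "creating 87 new PGs on ~1 OSDs",
# i.e. it computed 97 - 10 = 87 against a current pg_num of 10,
# not the pg_num of 1 that `ceph osd pool get` returned:
assert new_pgs - 10 == 87
```

So `ceph osd stat` and `ceph osd pool get` on the client disagree with the monitor's own view (~1 OSD, pg_num 10) at the moment the check ran, which is what both Greg's and Josh's comments are probing at.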