Bug #47389
ceph fs volume create fails to create pool
Status:
Need More Info
Priority:
Normal
Assignee:
Category:
Administration/Usability
Target version:
-
% Done:
0%
Source:
Development
Tags:
Backport:
pacific,octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
VolumeClient, mgr/volumes
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
➜  build (master) ✔ ceph fs volume create test_fs_1
Error EINVAL: pool size is smaller than the crush rule min size

➜  build (master) ✔ ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "choose_firstn",
                "num": 0,
                "type": "osd"
            },
            {
                "op": "emit"
            }
        ]
    }
]
There is no `--size` parameter as there is for `ceph osd pool create`. However, the default pool size does not appear to fall between the rule's min_size (1) and max_size (10), for whatever reason.
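One way to narrow this down, assuming the EINVAL comes from the monitor comparing the new pool's size against the crush rule's min_size/max_size bounds, is to check what size a pool created by `ceph fs volume create` would actually inherit. The commands below are a diagnostic sketch; the value 3 is only an illustrative default, not something taken from this report:

ceph config get mon osd_pool_default_size       # size the new pools inherit, since no --size is passed
ceph osd crush rule dump replicated_rule        # the min_size/max_size bounds the monitor checks against
ceph config set global osd_pool_default_size 3  # illustrative fix if the default resolves outside those bounds
ceph fs volume create test_fs_1                 # retry the volume create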
Updated by Patrick Donnelly over 3 years ago
- Status changed from New to Triaged
- Assignee set to Kotresh Hiremath Ravishankar
- Backport set to octopus,nautilus
Updated by Kotresh Hiremath Ravishankar over 3 years ago
Hi Joshua,
I tried on the master branch and it works for me. HEAD was at 240c46a75a44cb9363cf994cb264e9d7048c98a1, dated "Sat Sep 19 11:36:43 2020 +0200".
root@hrk-3020:# bin/ceph fs volume create test_fs_2
Error EINVAL: Creation of multiple filesystems is disabled. To enable this experimental feature, use 'ceph fs flag set enable_multiple true'

root@hrk-3020:# bin/ceph fs flag set enable_multiple true
Warning! This feature is experimental. It may cause problems up to and including data loss. Consult the documentation at ceph.com, and if unsure, do not proceed. Add --yes-i-really-mean-it if you are certain.

root@hrk-3020:# bin/ceph fs volume create test_fs_2
new fs with metadata pool 12 and data pool 13

root@hrk-3020:# bin/ceph fs status
test_fs_1 - 0 clients
=========
RANK  STATE   MDS  ACTIVITY    DNS  INOS  DIRS  CAPS
 0    active   a   Reqs: 0 /s   10    13    12     0
         POOL            TYPE     USED  AVAIL
cephfs.test_fs_1.meta  metadata  96.0k  98.9G
cephfs.test_fs_1.data    data        0  98.9G
test_fs_2 - 0 clients
=========
RANK  STATE   MDS  ACTIVITY    DNS  INOS  DIRS  CAPS
 0    active   b   Reqs: 0 /s   10    13    12     0
         POOL            TYPE     USED  AVAIL
cephfs.test_fs_2.meta  metadata  32.0k  98.9G
cephfs.test_fs_2.data    data        0  98.9G
STANDBY MDS
     c
MDS version: ceph version Development (no_version) pacific (dev)
Could you please try once again and let me know if it's consistently reproducible, or provide logs?
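If it still reproduces on your end, a rough sketch of how the mgr-side logs could be gathered (the debug level and log locations are assumptions about a typical setup, not specifics of your cluster):

ceph config set mgr debug_mgr 20   # verbose mgr logging; remember to lower it again afterwards
ceph fs volume create test_fs_1    # reproduce the failure
# the pool creation attempt from mgr/volumes should then show up in the active mgr's log,
# e.g. /var/log/ceph/ceph-mgr.<id>.log on a packaged install, or build/out/mgr.*.log in a vstart cluster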
Updated by Kotresh Hiremath Ravishankar over 3 years ago
- Status changed from Triaged to Need More Info
Updated by Patrick Donnelly over 3 years ago
- Target version changed from v16.0.0 to v17.0.0
- Backport changed from octopus,nautilus to pacific,octopus,nautilus