Bug #19934
ceph fs set cephfs standby_count_wanted 0 fails on jewel upgrade
Status:
Closed
% Done:
0%
Regression:
No
Severity:
3 - minor
Description
2017-05-15T18:45:45.334 INFO:teuthology.orchestra.run.smithi055:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs set cephfs standby_count_wanted 0'
2017-05-15T18:45:45.484 INFO:teuthology.orchestra.run.smithi055.stderr:Invalid command: standby_count_wanted not in max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_multimds|allow_dirfrags
2017-05-15T18:45:45.484 INFO:teuthology.orchestra.run.smithi055.stderr:fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_multimds|allow_dirfrags <val> {<confirm>} : set mds parameter <var> to <val>
2017-05-15T18:45:45.484 INFO:teuthology.orchestra.run.smithi055.stderr:Error EINVAL: invalid command

Job description: rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml}}

/a/sage-2017-05-15_17:07:41-rados-wip-sage-testing2---basic-smithi/1180525
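The error above comes from a mon that still speaks the jewel-era `fs set` vocabulary, which does not include `standby_count_wanted`. A minimal shell sketch of one possible workaround for test tooling: check the variable name against the mon's advertised list before issuing the command. The `fs_set_if_supported` helper is hypothetical, and the hardcoded `allowed` list is copied from the jewel error message in this report, not queried from a live cluster:

```shell
# Jewel-era variable list, taken verbatim from the "Invalid command" output above.
# A real guard would obtain this from the running mon instead of hardcoding it.
allowed="max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_multimds|allow_dirfrags"

# Hypothetical helper: only issue `ceph fs set` when the variable is supported.
fs_set_if_supported() {
    var="$1"
    val="$2"
    case "|$allowed|" in
        *"|$var|"*)
            # Supported: echo the command that would be run (replace echo with
            # the real invocation in actual tooling).
            echo "ceph fs set cephfs $var $val"
            ;;
        *)
            # Unsupported on this mon version: skip instead of failing with EINVAL.
            echo "skipping $var: not supported by this mon"
            ;;
    esac
}

fs_set_if_supported standby_count_wanted 0
```

Run against the jewel-era list, the helper skips `standby_count_wanted` rather than tripping the EINVAL seen in the teuthology log.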