Bug #19934 (Closed): ceph fs set cephfs standby_count_wanted 0 fails on jewel upgrade
% Done: 0%
Regression: No
Severity: 3 - minor
Description
2017-05-15T18:45:45.334 INFO:teuthology.orchestra.run.smithi055:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs set cephfs standby_count_wanted 0'
2017-05-15T18:45:45.484 INFO:teuthology.orchestra.run.smithi055.stderr:Invalid command: standby_count_wanted not in max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_multimds|allow_dirfrags
2017-05-15T18:45:45.484 INFO:teuthology.orchestra.run.smithi055.stderr:fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_multimds|allow_dirfrags <val> {<confirm>} : set mds parameter <var> to <val>
2017-05-15T18:45:45.484 INFO:teuthology.orchestra.run.smithi055.stderr:Error EINVAL: invalid command

description: rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml}}

/a/sage-2017-05-15_17:07:41-rados-wip-sage-testing2---basic-smithi/1180525
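The EINVAL comes from the jewel monitor: its "fs set" command table predates standby_count_wanted, which is why the error message only lists the jewel-era keys. Below is a minimal sketch of one way a test could probe whether the monitors already understand the setting before issuing the command; the helper name and the reliance on standby_count_wanted appearing in the JSON output of "ceph fs dump" are illustrative assumptions, not part of the QA framework and not the change that eventually resolved this ticket.

import json
import subprocess

def mon_supports_standby_count(cluster='ceph'):
    # Hypothetical probe: on luminous and later the FSMap printed by
    # 'ceph fs dump' carries a 'standby_count_wanted' field per filesystem;
    # on jewel it does not (assumption for illustration).
    out = subprocess.check_output(
        ['ceph', '--cluster', cluster, 'fs', 'dump', '--format=json'])
    fsmap = json.loads(out.decode('utf-8'))
    return any('standby_count_wanted' in fs.get('mdsmap', {})
               for fs in fsmap.get('filesystems', []))

A test could then simply skip the "fs set" call when the probe returns False on a still-jewel cluster.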
Updated by John Spray almost 7 years ago
- Project changed from Ceph to CephFS
- Category set to Testing
Updated by John Spray almost 7 years ago
- Assignee set to Patrick Donnelly
commit a4cb10900d3472fbf06db913dfe0680dabf55c7e
Author: Patrick Donnelly <pdonnell@redhat.com>
Date:   Tue May 9 22:58:08 2017 -0400

    qa: turn off spurious standby health warning

    Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>

diff --git a/qa/tasks/cephfs/filesystem.py b/qa/tasks/cephfs/filesystem.py
index 273bb39..cb2420f 100644
--- a/qa/tasks/cephfs/filesystem.py
+++ b/qa/tasks/cephfs/filesystem.py
@@ -456,6 +456,8 @@ class Filesystem(MDSCluster):
             data_pool_name, pgs_per_fs_pool.__str__())
         self.mon_manager.raw_cluster_cmd('fs', 'new',
                                          self.name, self.metadata_pool_name, data_pool_name)
+        # Turn off spurious standby count warnings from modifying max_mds in tests.
+        self.mon_manager.raw_cluster_cmd('fs', 'set', self.name, 'standby_count_wanted', '0')
         self.getinfo(refresh = True)
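This commit is what starts issuing the new "fs set" subcommand unconditionally at filesystem creation, so in the jewel-x upgrade suite the still-jewel monitor rejects it with the EINVAL shown above. One possible mitigation, sketched here under the assumption that raw_cluster_cmd raises teuthology's CommandFailedError on a non-zero exit (its usual behaviour), would be to treat the setting as best-effort; this is an illustration, not necessarily the change that closed the ticket.

from teuthology.exceptions import CommandFailedError

def set_standby_count_wanted(fs, count='0'):
    # Best-effort: silence the spurious standby health warning in tests.
    # Pre-luminous monitors do not know 'standby_count_wanted' and answer
    # EINVAL, so swallow the failure on mixed-version (upgrade) clusters.
    try:
        fs.mon_manager.raw_cluster_cmd(
            'fs', 'set', fs.name, 'standby_count_wanted', count)
    except CommandFailedError:
        pass

Here "fs" stands for a Filesystem instance as in the patched qa/tasks/cephfs/filesystem.py, which already exposes mon_manager and name.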
Updated by Kefu Chai almost 7 years ago
- Status changed from New to Resolved