Bug #43946

mimic: EINVAL on 'osd dump'

Added by Sage Weil 2 months ago. Updated about 2 months ago.

Status:
Resolved
Priority:
High
Assignee:
-
Category:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

2020-02-02T09:50:36.864 INFO:teuthology.orchestra.run.smithi117:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early osd dump --format=json
2020-02-02T09:50:37.130 INFO:teuthology.orchestra.run.smithi117.stderr:no valid command found; 10 closest matches:
2020-02-02T09:50:37.130 INFO:teuthology.orchestra.run.smithi117.stderr:osd utilization
2020-02-02T09:50:37.130 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool application get {<poolname>} {<app>} {<key>}
2020-02-02T09:50:37.131 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool application rm <poolname> <app> <key>
2020-02-02T09:50:37.131 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool application set <poolname> <app> <key> <value>
2020-02-02T09:50:37.131 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool application disable <poolname> <app> {--yes-i-really-mean-it}
2020-02-02T09:50:37.131 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool application enable <poolname> <app> {--yes-i-really-mean-it}
2020-02-02T09:50:37.131 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool get-quota <poolname>
2020-02-02T09:50:37.131 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool set-quota <poolname> max_objects|max_bytes <val>
2020-02-02T09:50:37.131 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool set <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_
max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recove
ry_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites <val> {--yes-i-really-mean-it}
2020-02-02T09:50:37.132 INFO:teuthology.orchestra.run.smithi117.stderr:osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|auid|target_max_objects|
target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|d
eep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites
2020-02-02T09:50:37.132 INFO:teuthology.orchestra.run.smithi117.stderr:Error EINVAL: invalid command
2020-02-02T09:50:37.139 DEBUG:teuthology.orchestra.run:got remote process result: 22
2020-02-02T09:50:37.140 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 32, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/ceph.py", line 410, in cephfs_setup
    ec_profile=config.get('cephfs_ec_profile', None))
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/cephfs/filesystem.py", line 433, in __init__
    super(Filesystem, self).__init__(ctx)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/cephfs/filesystem.py", line 249, in __init__
    super(MDSCluster, self).__init__(ctx)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/cephfs/filesystem.py", line 198, in __init__
    self.mon_manager = ceph_manager.CephManager(self.admin_remote, ctx=ctx, logger=log.getChild('ceph_manager'))
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/ceph_manager.py", line 1324, in __init__
    pools = self.list_pools()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/ceph_manager.py", line 1717, in list_pools
    osd_dump = self.get_osd_dump_json()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/ceph_manager.py", line 2290, in get_osd_dump_json
    out = self.raw_cluster_cmd('osd', 'dump', '--format=json')
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sage-testing-2020-02-01-1055/qa/tasks/ceph_manager.py", line 1358, in raw_cluster_cmd
    stdout=StringIO(),
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 198, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 433, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 158, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 180, in _raise_for_status
    node=self.hostname, label=self.label
CommandFailedError: Command failed on smithi117 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early osd dump --format=json'

/a/sage-2020-02-01_20:59:41-rados-wip-sage-testing-2020-02-01-1055-distro-basic-smithi/4725065

History

#1 Updated by Sage Weil 2 months ago

  • Subject changed from EINVAL on 'osd dump' during mimic->octopus upgrade to mimic: EINVAL on 'osd dump'
  • Status changed from New to Triaged
  • Priority changed from Urgent to Normal

This is actually on the mimic version, shortly after cluster creation, but the mgr was active and should have provided commands:

2020-02-02 09:50:22.219 7f2fceb47700  4 mon.b@0(leader).mgr e3 mkfs or daemon transitioned to available, loading commands

#2 Updated by Sage Weil about 2 months ago

/a/sage-2020-02-05_03:10:48-rados-wip-sage2-testing-2020-02-04-1448-distro-basic-smithi/4733167

#3 Updated by Neha Ojha about 2 months ago

I think the problem here is that the test is using wip-sage2-testing-2020-02-04-1448 as the qa branch, which has https://github.com/ceph/ceph/pull/32989 in it. The "--log-early" argument is not understood by mimic; it was added in 933d5084cb66f299a7bf60f0a2a6382c0bd3cb2f and was backported only as far as nautilus.
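The failure mode above suggests a fix along these lines: gate the `--log-early` flag on the installed release, passing it only where the ceph CLI understands it. The sketch below is a hypothetical illustration, not the actual teuthology API; the helper names and release list are assumptions.

```python
# Hypothetical sketch: only pass --log-early when the installed ceph CLI
# accepts it. The flag was added after mimic and backported only as far
# back as nautilus, so mimic clusters must not receive it.

# Ceph release names in chronological order (illustrative subset).
RELEASES = ['luminous', 'mimic', 'nautilus', 'octopus']

def supports_log_early(release):
    """Return True if this ceph release accepts the --log-early flag."""
    return RELEASES.index(release) >= RELEASES.index('nautilus')

def build_ceph_args(release, *command):
    """Assemble a ceph CLI argument list, gating flags on the release."""
    args = ['ceph', '--cluster', 'ceph']
    if supports_log_early(release):
        args.append('--log-early')
    args.extend(command)
    return args
```

With this guard, a mimic cluster would run `ceph --cluster ceph osd dump --format=json` without the unrecognized flag, avoiding the EINVAL from the mon's command dispatch.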

#4 Updated by Patrick Donnelly about 2 months ago

  • Target version set to v15.0.0

/ceph/teuthology-archive/pdonnell-2020-02-06_01:51:32-fs-wip-pdonnell-testing-20200205.204218-distro-basic-smithi/4736783/teuthology.log

#5 Updated by Sage Weil about 2 months ago

/a/sage-2020-02-06_19:01:25-rados-wip-sage2-testing-2020-02-05-1649-distro-basic-smithi/4738937

#6 Updated by Sage Weil about 2 months ago

  • Priority changed from Normal to High

#7 Updated by Sage Weil about 2 months ago

  • Status changed from Triaged to Fix Under Review
  • Pull request ID set to 33130

#8 Updated by Sage Weil about 2 months ago

  • Status changed from Fix Under Review to Resolved
