Bug #49672 (closed)

nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi

Added by Yuri Weinstein about 3 years ago. Updated almost 3 years ago.

Status: Resolved
Priority: Urgent
Assignee: Neha Ojha
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: fs
Component(FS):
Labels (FS):
Pull request ID: 39960
Crash signature (v1):
Crash signature (v2):

Description

This is for the 14.2.17 release.

Run: https://pulpito.ceph.com/yuriw-2021-03-08_16:51:42-fs-nautilus-distro-basic-smithi/
Jobs: 28
Logs: http://qa-proxy.ceph.com/teuthology/yuriw-2021-03-08_16:51:42-fs-nautilus-distro-basic-smithi/5946803/teuthology.log

2021-03-08T17:13:27.632 INFO:teuthology.orchestra.run.smithi039.stderr:2021-03-08 17:13:27.632411 7fb1757fa700  1 -- 172.21.15.39:0/633374342 <== mon.0 172.21.15.39:6789/0 8 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0  v0) v1 ==== 72+0+66328 (1092875540 0 1538068772) 0x7fb16c0035f0 con 0x7fb17817de90
2021-03-08T17:13:27.757 INFO:teuthology.orchestra.run.smithi039.stderr:no valid command found; 10 closest matches:
2021-03-08T17:13:27.757 INFO:teuthology.orchestra.run.smithi039.stderr:fs set_default <fs_name>
2021-03-08T17:13:27.757 INFO:teuthology.orchestra.run.smithi039.stderr:fs set-default <fs_name>
2021-03-08T17:13:27.758 INFO:teuthology.orchestra.run.smithi039.stderr:fs add_data_pool <fs_name> <pool>
2021-03-08T17:13:27.758 INFO:teuthology.orchestra.run.smithi039.stderr:fs rm_data_pool <fs_name> <pool>
2021-03-08T17:13:27.758 INFO:teuthology.orchestra.run.smithi039.stderr:fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_multimds|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose <val> {<confirm>}
2021-03-08T17:13:27.758 INFO:teuthology.orchestra.run.smithi039.stderr:fs flag set enable_multiple <val> {--yes-i-really-mean-it}
2021-03-08T17:13:27.759 INFO:teuthology.orchestra.run.smithi039.stderr:fs ls
2021-03-08T17:13:27.759 INFO:teuthology.orchestra.run.smithi039.stderr:fs get <fs_name>
2021-03-08T17:13:27.759 INFO:teuthology.orchestra.run.smithi039.stderr:fs rm <fs_name> {--yes-i-really-mean-it}
2021-03-08T17:13:27.759 INFO:teuthology.orchestra.run.smithi039.stderr:fs reset <fs_name> {--yes-i-really-mean-it}
2021-03-08T17:13:27.759 INFO:teuthology.orchestra.run.smithi039.stderr:Error EINVAL: invalid command
2021-03-08T17:13:27.760 INFO:teuthology.orchestra.run.smithi039.stderr:2021-03-08 17:13:27.758963 7fb17e20e700  1 -- 172.21.15.39:0/633374342 >> 172.21.15.39:6800/12811 conn(0x7fb15800dc90 :-1 s=STATE_OPEN pgs=24 cs=1 l=1).mark_down
2021-03-08T17:13:27.761 INFO:teuthology.orchestra.run.smithi039.stderr:2021-03-08 17:13:27.759004 7fb17e20e700  1 -- 172.21.15.39:0/633374342 >> 172.21.15.39:6789/0 conn(0x7fb17817de90 :-1 s=STATE_OPEN pgs=14 cs=1 l=1).mark_down
2021-03-08T17:13:27.761 INFO:teuthology.orchestra.run.smithi039.stderr:2021-03-08 17:13:27.759198 7fb17e20e700  1 -- 172.21.15.39:0/633374342 shutdown_connections
2021-03-08T17:13:27.761 INFO:teuthology.orchestra.run.smithi039.stderr:2021-03-08 17:13:27.759379 7fb17e20e700  1 -- 172.21.15.39:0/633374342 shutdown_connections
2021-03-08T17:13:27.761 INFO:teuthology.orchestra.run.smithi039.stderr:2021-03-08 17:13:27.759438 7fb17e20e700  1 -- 172.21.15.39:0/633374342 wait complete.
2021-03-08T17:13:27.762 INFO:teuthology.orchestra.run.smithi039.stderr:2021-03-08 17:13:27.759449 7fb17e20e700  1 -- 172.21.15.39:0/633374342 >> 172.21.15.39:0/633374342 conn(0x7fb1781715c0 :-1 s=STATE_NONE pgs=0 cs=0 l=0).mark_down
2021-03-08T17:13:27.771 DEBUG:teuthology.orchestra.run:got remote process result: 22
2021-03-08T17:13:27.772 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3082387bba74fcd24c9700593d10418152d53c97/teuthology/contextutil.py", line 31, in nested
    vars.append(enter())
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/ceph.py", line 405, in cephfs_setup
    create=True, ec_profile=config.get('cephfs_ec_profile', None))
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/cephfs/filesystem.py", line 408, in __init__
    super(Filesystem, self).__init__(ctx)
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/cephfs/filesystem.py", line 223, in __init__
    super(MDSCluster, self).__init__(ctx)
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/cephfs/filesystem.py", line 178, in __init__
    self.mon_manager = ceph_manager.CephManager(self.admin_remote, ctx=ctx, logger=log.getChild('ceph_manager'))
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/ceph_manager.py", line 1133, in __init__
    pools = self.list_pools()
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/ceph_manager.py", line 1456, in list_pools
    osd_dump = self.get_osd_dump_json()
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/ceph_manager.py", line 2024, in get_osd_dump_json
    out = self.raw_cluster_cmd('osd', 'dump', '--format=json')
  File "/home/teuthworker/src/github.com_ceph_ceph_9b829698f3d6269668c9402131bbfba81df6df0f/qa/tasks/ceph_manager.py", line 1162, in raw_cluster_cmd
    stdout=StringIO(),
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3082387bba74fcd24c9700593d10418152d53c97/teuthology/orchestra/remote.py", line 215, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3082387bba74fcd24c9700593d10418152d53c97/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3082387bba74fcd24c9700593d10418152d53c97/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3082387bba74fcd24c9700593d10418152d53c97/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed on smithi039 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early osd dump --format=json'
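
For context on the failing step: the traceback ends in CephManager.get_osd_dump_json(), which shells out to the ceph CLI ('osd dump --format=json') and parses the JSON, and list_pools() then reads pool names from that dump. A minimal sketch of that pattern (illustrative only, not the actual teuthology code; the function names mirror the traceback but the implementation below is assumed):

    import json
    import subprocess

    def get_osd_dump_json(cluster="ceph"):
        # Roughly what qa/tasks/ceph_manager.py does via raw_cluster_cmd():
        # run the ceph CLI and parse its JSON output. A non-zero exit status
        # (22 == EINVAL in this failure) raises CalledProcessError, which
        # teuthology reports as CommandFailedError.
        out = subprocess.check_output(
            ["ceph", "--cluster", cluster, "osd", "dump", "--format=json"],
            universal_newlines=True,
        )
        return json.loads(out)

    def list_pools(cluster="ceph"):
        # CephManager.list_pools() reads pool names out of the OSD dump.
        return [p["pool_name"] for p in get_osd_dump_json(cluster)["pools"]]

The exit status 22 reported by teuthology corresponds to EINVAL, matching the "Error EINVAL: invalid command" returned by the monitor in the log above.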
Update #2

Updated by Neha Ojha about 3 years ago

  • Subject changed from "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi to nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
  • Status changed from New to Fix Under Review
  • Assignee set to Neha Ojha
  • Pull request ID set to 39960
Update #4

Updated by Sage Weil almost 3 years ago

  • Project changed from Ceph to CephFS
Update #5

Updated by Sage Weil almost 3 years ago

  • Status changed from Fix Under Review to Resolved