Bug #15339

"Error EINVAL: invalid command (ceph fs get cephfs)" in upgrade:infernalis-x-jewel-distro-basic-openstack

Added by Yuri Weinstein almost 8 years ago. Updated almost 8 years ago.

Status:
Duplicate
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
upgrade/hammer-x, upgrade/infernalis-x
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Seen in several runs.
Logs: http://teuthology.ovh.sepia.ceph.com/teuthology/teuthology-2016-03-31_02:10:02-upgrade:infernalis-x-jewel-distro-basic-openstack/26205/teuthology.log

2016-03-31T05:29:20.351 INFO:teuthology.orchestra.run.target067138:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph fs get cephfs --format=json-pretty'
2016-03-31T05:29:21.091 INFO:teuthology.orchestra.run.target067138.stderr:no valid command found; 10 closest matches:
2016-03-31T05:29:21.091 INFO:teuthology.orchestra.run.target067138.stderr:fs reset <fs_name> {--yes-i-really-mean-it}
2016-03-31T05:29:21.091 INFO:teuthology.orchestra.run.target067138.stderr:fs rm <fs_name> {--yes-i-really-mean-it}
2016-03-31T05:29:21.091 INFO:teuthology.orchestra.run.target067138.stderr:fs new <fs_name> <metadata> <data>
2016-03-31T05:29:21.092 INFO:teuthology.orchestra.run.target067138.stderr:fs ls
2016-03-31T05:29:21.092 INFO:teuthology.orchestra.run.target067138.stderr:Error EINVAL: invalid command
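For reference, this failure mode can be probed outside teuthology. The sketch below is hypothetical (the helper name mon_supports_fs_get is not part of the qa suite, and it uses plain subprocess instead of the remote runner); it runs the same command and treats a non-zero exit with "invalid command" / "no valid command found" on stderr as "the mons do not support fs get":

    import subprocess

    def mon_supports_fs_get(fs_name="cephfs"):
        """Return True if the cluster's mons accept 'ceph fs get'.

        Older mons reject the command with EINVAL / 'no valid command
        found', which is the failure shown in the log above.
        """
        proc = subprocess.run(
            ["ceph", "fs", "get", fs_name, "--format=json-pretty"],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            return True
        if ("invalid command" in proc.stderr
                or "no valid command found" in proc.stderr):
            return False
        # Some other failure (e.g. no such filesystem); surface it.
        raise RuntimeError(proc.stderr.strip())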

Related issues

Duplicates: Ceph - Bug #15049: ceph.py wait_for_daemons doesn't work with firefly (Resolved, 03/10/2016)

History

#2 Updated by Yuri Weinstein almost 8 years ago

  • Priority changed from Normal to Urgent

#3 Updated by Kefu Chai almost 8 years ago

  • Status changed from New to Duplicate

#4 Updated by Kefu Chai almost 8 years ago

  • Duplicates Bug #15049: ceph.py wait_for_daemons doesn't work with firefly added

#5 Updated by Kefu Chai almost 8 years ago

The error does not fail the test; as a fallback for the failed command, "ceph mds dump" is used instead:

2016-04-05T02:48:16.101 INFO:teuthology.orchestra.run.vpm119:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph fs get cephfs --format=json-pretty'
2016-04-05T02:48:16.392 INFO:teuthology.orchestra.run.vpm119.stderr:no valid command found; 10 closest matches:
2016-04-05T02:48:16.392 INFO:teuthology.orchestra.run.vpm119.stderr:fs new <fs_name> <metadata> <data>
2016-04-05T02:48:16.393 INFO:teuthology.orchestra.run.vpm119.stderr:fs reset <fs_name> {--yes-i-really-mean-it}
2016-04-05T02:48:16.393 INFO:teuthology.orchestra.run.vpm119.stderr:fs rm <fs_name> {--yes-i-really-mean-it}
2016-04-05T02:48:16.393 INFO:teuthology.orchestra.run.vpm119.stderr:fs ls
2016-04-05T02:48:16.393 INFO:teuthology.orchestra.run.vpm119.stderr:Error EINVAL: invalid command
2016-04-05T02:48:16.402 INFO:teuthology.orchestra.run.vpm119:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph mds dump --format=json'
2016-04-05T02:48:16.709 INFO:teuthology.orchestra.run.vpm119.stderr:dumped mdsmap epoch 6
2016-04-05T02:48:16.719 INFO:tasks.cephfs.filesystem:are_daemons_healthy: mds map: {u'session_autoclose': 300, u'up': {u'mds_0': 4120}, u'last_failure_osd_epoch': 0, u'in': [0], u'last_failure': 0, u'max_file_size': 1099511627776, u'tableserver': 0, u'metadata_pool': 1, u'failed': [], u'epoch': 6, u'flags': 0, u'max_mds': 1, u'compat': {u'compat': {}, u'ro_compat': {}, u'incompat': {u'feature_8': u'no anchor table', u'feature_2': u'client writeable ranges', u'feature_3': u'default file layouts on dirs', u'feature_1': u'base v0.20', u'feature_6': u'dirfrag is stored in omap', u'feature_4': u'dir inode in separate object', u'feature_5': u'mds uses versioned encoding'}}, u'data_pools': [2], u'info': {u'gid_4120': {u'standby_for_rank': -1, u'export_targets': [], u'name': u'a', u'incarnation': 1, u'state_seq': 2, u'state': u'up:active', u'gid': 4120, u'rank': 0, u'standby_for_name': u'', u'addr': u'172.21.2.119:6812/4195'}}, u'fs_name': u'cephfs', u'created': u'2016-04-05 09:46:03.084092', u'enabled': True, u'modified': u'2016-04-05 09:46:06.171000', u'session_timeout': 60, u'stopped': [], u'root': 0}
2016-04-05T02:48:16.719 INFO:tasks.cephfs.filesystem:are_daemons_healthy: 1/1
2016-04-05T02:48:16.719 INFO:teuthology.orchestra.run.vpm119:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok status'
2016-04-05T02:48:16.872 INFO:tasks.cephfs.filesystem:_json_asok output: {
    "cluster_fsid": "1982fa81-dd49-4430-86dd-83f148da4a96",
    "whoami": 0,
    "state": "up:active",
    "mdsmap_epoch": 6,
    "osdmap_epoch": 25,
    "osdmap_epoch_barrier": 7
}

2016-04-05T02:48:16.873 INFO:teuthology.run_tasks:Running task print...
2016-04-05T02:48:16.929 INFO:teuthology.task.print:**** done ceph.restart 1st half
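A minimal sketch of the fallback pattern described above, assuming a plain local `ceph` CLI rather than teuthology's wrapped remote runner (the helper names get_mds_map and are_daemons_healthy here are illustrative, and the assumption that `fs get` nests its result under an "mdsmap" key reflects the jewel-style JSON output): try `fs get` first, and if the mon rejects it, fall back to `mds dump --format=json`, whose output (as in the log) still carries "info" and "max_mds" for the health check.

    import json
    import subprocess

    def get_mds_map(fs_name="cephfs"):
        """Fetch the MDS map, preferring 'fs get' and falling back to 'mds dump'.

        Pre-jewel mons do not know 'fs get' and answer EINVAL, so the
        health check has to fall back to the legacy 'mds dump' output.
        """
        proc = subprocess.run(
            ["ceph", "fs", "get", fs_name, "--format=json-pretty"],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            # Assumption: jewel-style 'fs get' wraps the map under 'mdsmap'.
            return json.loads(proc.stdout)["mdsmap"]

        proc = subprocess.run(
            ["ceph", "mds", "dump", "--format=json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(proc.stdout)

    def are_daemons_healthy(fs_name="cephfs"):
        """Rough equivalent of the 'are_daemons_healthy: 1/1' check in the log."""
        mds_map = get_mds_map(fs_name)
        active = sum(1 for info in mds_map["info"].values()
                     if info["state"] == "up:active")
        return active >= mds_map["max_mds"]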