Bug #8711
Status: Closed
Error "ceph --format=json-pretty osd lspools" is "unrecognized command" in cuttlefish
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
cuttlefish backward compatibility problem
ceph --format=json-pretty osd lspools
does not work on cuttlefish. It is required by https://github.com/ceph/teuthology/commit/8be756a0, which is used in upgrade tests and fails as described in the original report.
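The incompatibility is that cuttlefish's CLI rejects `--format=json-pretty` for this command, while the teuthology commit above expects JSON output. A minimal sketch of a version-tolerant parser (a hypothetical helper, not the actual teuthology code; the `poolnum`/`poolname` JSON keys and the plain-text `"0 data,1 metadata,2 rbd,"` form are assumptions about the respective releases' output):

```python
import json

def parse_lspools(output):
    """Parse `ceph osd lspools` output into a list of pool names.

    Accepts either the JSON form assumed for newer releases
    (`--format=json`) or the plain-text form older releases such as
    cuttlefish are assumed to emit, e.g. "0 data,1 metadata,2 rbd,".
    """
    try:
        pools = json.loads(output)
        return [p["poolname"] for p in pools]
    except ValueError:
        names = []
        for entry in output.strip().rstrip(",").split(","):
            entry = entry.strip()
            if entry:
                # each plain-text entry looks like "<id> <name>"
                names.append(entry.split(" ", 1)[1])
        return names
```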
Original description
2014-07-01T09:18:54.038 INFO:teuthology.task.ceph.mon.a.vpm077.stdout:starting mon.a rank 0 at 10.214.138.132:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid f9bbf9e3-0521-4a89-88e7-68af5981790c
2014-07-01T09:19:03.043 INFO:teuthology.orchestra.run.vpm077.stderr:unrecognized command
2014-07-01T09:19:03.044 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-master/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-master/teuthology/task/ceph.py", line 450, in cephfs_setup
    stdout=StringIO())
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 114, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 401, in run
    r.wait()
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 102, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on vpm077 with status 22: 'sudo ceph --format=json-pretty osd lspools'
2014-07-01T09:19:03.046 INFO:teuthology.misc:Shutting down osd daemons..
archive_path: /var/lib/teuthworker/archive/teuthology-2014-06-30_19:33:04-upgrade:dumpling-x:parallel-firefly---basic-vps/335479
branch: firefly
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/cuttlefish-dumpling.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-upgrade/client.yaml 5-final-workload/rbd_cls.yaml distros/ubuntu_12.04.yaml}
email: null
job_id: '335479'
last_in_suite: false
machine_type: vps
name: teuthology-2014-06-30_19:33:04-upgrade:dumpling-x:parallel-firefly---basic-vps
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - scrub mismatch
    - ScrubResult
    sha1: d43e7113dd501aea1db33fdae30d56e96e9c3897
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: d43e7113dd501aea1db33fdae30d56e96e9c3897
  s3tests:
    branch: firefly
  workunit:
    sha1: d43e7113dd501aea1db33fdae30d56e96e9c3897
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
suite: upgrade:dumpling-x:parallel
targets:
  ubuntu@vpm077.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDUFym2Nl+ih/gtHh0a52mbQjt1U2bnfAfcPZoHEscPCP3Ch8b6jKfNdknOR5/DiNnw6RVQKbvPHE/lRrwGyjoKrulKAarkn7lRFgv2qnjZhJjI+rmCrqKKYKeCo2Cd4RnsY89WpnZOyQRa/4pJcfXyroT0sGYh74F4MBrTL6c4NOFyC150kW7DRR6M/Rf2nQkxnVE90CmszN943mNWOsRBz8XR/jLRzwOUy0eHRQ4VwONQc/NmP/UCZZBmbzHsJAf3j/25fJkXZkjmEQRPHsSVfG/GrLWB6qg0EOaDjb9tr/btMR6QYlYnpSYedEmqi+Y/jI4Ob0GoIO++bFUrAiIX
  ubuntu@vpm078.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDIdxoFPzYkBtV7jrCXFquoRPwvmjZWUkuRbIPfEW0zrw9k3U+9HtUyvXfEtJyFH1a2yA2XZOxb5QwXYLrSycjCdUp+QtO2XbPrwcNApBabtdb0qcIziIhcEUydM1NKdR9baavas3bCcIYVO0GmHz3ylNK8vYF7akhCPxkgCy/niX/AQ1TEO1+NwCzgttUAf7JvKAnMSaZ4E4d8Fcet4HjWkakpksOFLI/90YuS+uCzANCFuECN9fW4mJTbFMviNCZANgrBFcXDo8DyVTlXDn/b49iP0HKOSFPoYlOdCse1cnw7GxQcEeGgIxNhhAaSC76jLwzJT9Y0qrg25vG4BSm7
  ubuntu@vpm093.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDYOowmFCI+kKfy5rMaLiOmV11h0SZCjKFsgvpC6KpVF4FY8MwE9cEWVR5Fuw6WaFhcklJsDDJNvj+trb6FXUy18KvZGo1xDLJ+PBA4FT4SeYGS+3jSYMYjduKTx35w2T0NkDydIuXwvspL6U2L0Qd1786HuHY3Ad4YiieMzQFZIQV2FQDJDpXQBP23cADiCWL3e4pyOb8miV+u53xasr4dzRT7PMHbKzKbwbN8n3FdjDlsdPD5AHiNsWkK2I+Ie6bCo7bAHg7SwTWu3Bp99knfmfxbUe0nWb7SA8W+YitU7MdO4WzxeQZn3KwDf3G2snC/BMoOg0l4x2eUFeDT86Ad
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- print: '**** done cuttlefish install'
- ceph:
    fs: xfs
- print: '**** done ceph'
- install.upgrade:
    all:
      branch: dumpling
- ceph.restart: null
- parallel:
  - workload
  - upgrade-sequence
- print: '**** done parallel'
- install.upgrade:
    client.0: null
- print: '**** done install.upgrade'
- workunit:
    clients:
      client.1:
      - cls/test_cls_rbd.sh
teuthology_branch: master
tube: vps
upgrade-sequence:
  sequential:
  - install.upgrade:
      mon.a: null
      mon.b: null
  - ceph.restart:
    - mon.a
    - mon.b
    - mon.c
    - mds.a
    - osd.0
    - osd.1
    - osd.2
    - osd.3
verbose: false
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.12742
workload:
  sequential:
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rados/load-gen-big.sh
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/cuttlefish-dumpling.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-upgrade/client.yaml 5-final-workload/rbd_cls.yaml distros/ubuntu_12.04.yaml}
duration: 227.58310794830322
failure_reason: 'Command failed on vpm077 with status 22: ''sudo ceph --format=json-pretty osd lspools'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
Updated by Loïc Dachary almost 10 years ago
This happened while working on the suite. It is entirely possible that it was buggy to begin with.
Updated by Yuri Weinstein almost 10 years ago
- Severity changed from 3 - minor to 2 - major
Updated by Ian Colle almost 10 years ago
- Assignee set to Loïc Dachary
Loic - could you please take another look at this?
Updated by Loïc Dachary almost 10 years ago
- Status changed from New to 12
This is a cuttlefish error:
loic@fold:~/software/ceph/ceph/src$ ceph --format=json-pretty osd lspools
unrecognized command
loic@fold:~/software/ceph/ceph/src$ ceph --version
ceph version 0.61.9-11-ge146934 (e146934ea488219075209816ee96dd16b6d89da2)
loic@fold:~/software/ceph/ceph/src$
and the original report suggests cuttlefish is installed
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/cuttlefish-dumpling.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-upgrade/client.yaml 5-final-workload/rbd_cls.yaml distros/ubuntu_12.04.yaml}
Updated by Loïc Dachary almost 10 years ago
- Subject changed from Error "ceph --format=json-pretty osd lspools" in upgrade:dumpling-x:parallel-firefly---basic-vps suite to Error "ceph --format=json-pretty osd lspools" is "unrecognized command" in cuttlefish
- Description updated (diff)
- Assignee changed from Loïc Dachary to John Spray
Updated by Samuel Just over 9 years ago
Probably best to change the test to cope?
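One way for the test to cope, sketched here as a generic fallback runner (a hypothetical illustration; the actual fix landed later in teuthology commit cbc73f7 and may differ): try the new-style command first and fall back to an older invocation when the remote release rejects it.

```python
import subprocess

def run_with_fallback(primary, fallback):
    """Run `primary`; if it exits non-zero (as cuttlefish does with
    'unrecognized command', status 22), run `fallback` instead."""
    result = subprocess.run(primary, capture_output=True, text=True)
    if result.returncode == 0:
        return result.stdout
    result = subprocess.run(fallback, capture_output=True, text=True)
    result.check_returncode()  # surface a real failure of the fallback too
    return result.stdout

# e.g. run_with_fallback(
#     ["ceph", "--format=json-pretty", "osd", "lspools"],
#     ["ceph", "osd", "lspools"])
```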
Updated by John Spray over 9 years ago
- Status changed from 12 to Resolved
Oops, this should have been closed already
commit cbc73f710f821938e49bfbfc9274aed9af831f8c
Author: John Spray <jspray@redhat.com>
Date:   Thu Jul 10 16:28:29 2014 +0100

    task/ceph: Make cephfs_setup cuttlefish-compatible

    Signed-off-by: John Spray <john.spray@redhat.com>
    Fixes: #8711