Bug #9552
Closed: "EINVAL: invalid command" in upgrade:dumpling-x-giant-distro-basic-vps run
Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):
Description
2014-09-20T14:14:35.887 INFO:teuthology.orchestra.run.vpm037:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd erasure-code-profile set teuthologyprofile k=2 m=1 ruleset-failure-domain=osd'
2014-09-20T14:14:37.447 INFO:teuthology.orchestra.run.vpm037.stderr:no valid command found; 10 closest matches:
2014-09-20T14:14:37.447 INFO:teuthology.orchestra.run.vpm037.stderr:osd dump {<int[0-]>}
2014-09-20T14:14:37.447 INFO:teuthology.orchestra.run.vpm037.stderr:osd thrash <int[0-]>
2014-09-20T14:14:37.447 INFO:teuthology.orchestra.run.vpm037.stderr:osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool <int>
2014-09-20T14:14:37.447 INFO:teuthology.orchestra.run.vpm037.stderr:osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset
2014-09-20T14:14:37.448 INFO:teuthology.orchestra.run.vpm037.stderr:osd reweight-by-utilization {<int[100-]>}
2014-09-20T14:14:37.449 INFO:teuthology.orchestra.run.vpm037.stderr:osd pool set-quota <poolname> max_objects|max_bytes <val>
2014-09-20T14:14:37.449 INFO:teuthology.orchestra.run.vpm037.stderr:osd pool delete <poolname> <poolname> --yes-i-really-really-mean-it
2014-09-20T14:14:37.449 INFO:teuthology.orchestra.run.vpm037.stderr:osd pool rename <poolname> <poolname>
2014-09-20T14:14:37.449 INFO:teuthology.orchestra.run.vpm037.stderr:osd rm <ids> [<ids>...]
2014-09-20T14:14:37.449 INFO:teuthology.orchestra.run.vpm037.stderr:osd reweight <int[0-]> <float[0.0-1.0]>
2014-09-20T14:14:37.449 INFO:teuthology.orchestra.run.vpm037.stderr:Error EINVAL: invalid command
2014-09-20T14:14:37.457 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/parallel.py", line 50, in _run_spawned
    mgr = run_tasks.run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 39, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/sequential.py", line 55, in task
    mgr.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_master/tasks/rados.py", line 190, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 331, in get
    raise self._exception
CommandFailedError: Command failed on vpm037 with status 22: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd erasure-code-profile set teuthologyprofile k=2 m=1 ruleset-failure-domain=osd'
2014-09-20T14:14:37.536 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 51, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 39, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/parallel.py", line 43, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 89, in __exit__
    raise
CommandFailedError: Command failed on vpm037 with status 22: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd erasure-code-profile set teuthologyprofile k=2 m=1 ruleset-failure-domain=osd'
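The "no valid command found; 10 closest matches" output followed by Error EINVAL is what a monitor emits when the command prefix it receives is not in its command table: a mon that predates "osd erasure-code-profile set" cannot dispatch it, suggests its nearest known signatures, and returns EINVAL (errno 22), which teuthology then reports as exit status 22. A minimal sketch of that dispatch behavior (not Ceph's actual code; the command table below is a hypothetical subset of a dumpling mon's, and the real matching logic is far more elaborate):

```python
# Illustrative sketch of mon command dispatch, NOT Ceph's implementation.
import difflib
import errno

# Hypothetical subset of the commands a dumpling-era mon knows; the real
# table lives inside the monitor binary and is much larger.
KNOWN_COMMANDS = [
    "osd dump",
    "osd thrash",
    "osd pool set",
    "osd pool get",
    "osd reweight-by-utilization",
    "osd pool set-quota",
    "osd pool delete",
    "osd pool rename",
    "osd rm",
    "osd reweight",
]

def dispatch(prefix):
    """Return (retcode, message) for an incoming command prefix.

    An unknown prefix yields the closest known signatures and -EINVAL,
    mirroring the "no valid command found" / "Error EINVAL" pair in the log.
    """
    if prefix in KNOWN_COMMANDS:
        return 0, "ok"
    closest = difflib.get_close_matches(prefix, KNOWN_COMMANDS, n=10, cutoff=0.0)
    msg = "no valid command found; %d closest matches: %s" % (
        len(closest), ", ".join(closest))
    return -errno.EINVAL, msg

ret, msg = dispatch("osd erasure-code-profile set")
print(ret)   # -22, i.e. -EINVAL
```

This is why the failure surfaces only in the upgrade suite: the final ec-rados workload issues a post-dumpling command while the cluster (or part of it) still runs dumpling mons.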
archive_path: /var/lib/teuthworker/archive/teuthology-2014-09-20_13:42:56-upgrade:dumpling-x-giant-distro-basic-vps/500834
branch: giant
description: upgrade:dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-upgrade/client.yaml
  5-final-workload/ec-rados-default.yaml distros/ubuntu_12.04.yaml}
email: ceph-qa@ceph.com
job_id: '500834'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-09-20_13:42:56-upgrade:dumpling-x-giant-distro-basic-vps
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: giant
  ceph:
    conf:
      global:
        osd heartbeat grace: 100
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - scrub mismatch
    - ScrubResult
    sha1: 2a2711daf86534ece11cad4527d69d43ec91d661
  ceph-deploy:
    branch:
      dev: giant
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 2a2711daf86534ece11cad4527d69d43ec91d661
  rgw:
    default_idle_timeout: 1200
  s3tests:
    branch: giant
    idle_timeout: 1200
  workunit:
    sha1: 2a2711daf86534ece11cad4527d69d43ec91d661
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
suite: upgrade:dumpling-x
suite_branch: master
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_master
targets:
  ubuntu@vpm037.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCumYtDH5xhh+blQwJWvFWgtsKEzwJTiSyX6CPS+H1GmEwC4lNCn4s41G2UnkEoK0rfBbmjg6X0KZmEyOik05jO+xhMtHtwHQZWlbb5zcYSaRl/spiq4dEMiFyngSfwOPWcocqPugmnOb+8mqvdM6WxWD7gMVVyyasi2GtrRw1ifA++Gq58Wj/cPDTq+6eowR5fMoqxqNG1zkanCntcexB3Fs6LZp1+QRn15SJ4j5A6BGfCFbvj3LvdDn2bSzv/8kGhZh1zn6J8AkZsNpukWpp9BsLMKFvc72OLsr5dhz74DpCbEeiRMKuvrTO0Ed2aci1fjGLsX24TdvucotRDuKGR
  ubuntu@vpm101.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfS8MLzRt0kfQGbsExzl8gi5OhwGj0ct4nHyWdZps2Tbb6GEzjZJNG3ayKtDplnwrHSGh59kJGAjAIAQXAyw4VUpw0gLz4TQYn/72DQxpyAqGGxrjlJYQ8YvRmts1cDmNvDYW3xwgJIR+z44ejfWKQ0yyzcuOh8VRA+Be7Q5lTWcRPxHzQIxOzNkEhnpJRK+iEwJaW9h1L9+7KidGxmydS2kLCEAtqh7DG14kSD21uTxEFLIlUG7QlwKtBUV0B1WYPkxJWILCIgh07m6quKqi5gGyU8HcO0TgJMHcKQVrl7vnTU/OVQsQ7pfH2QPMATSJmXrjHVNp1IB0kDoNaiROh
  ubuntu@vpm137.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAIdKGZ51y60vK88m43Nim1rHX0/xPyl0kwHBD0VN42qnF1LZkD/YyE3T1ohjGfdLLkznWbNW6pNvp/IbhIqFDlHUkR02YhOmuYdiO32XjyPX16sUoqOaDW/XJEHFtO3zPekZekx2HrlORD8gPePYv5m6SKKKOpAN1WLJZC++sdDc/z5rfsJco+/yiNfXmCXD0h9S4EurV+MTPlI8aExh44uWqZLrcShy9+4HcieccNWEXVKIpbjU2vUpvJiuEZLyTCZK4sStgLE6sAvoQuGlDnQd6jhrOVjxFm0tLSt08P10whvoFlmt0McuvkaKyO3WwJDzQ7NASJOAYUsJK9Ddl
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.push_inventory: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- print: '**** done install'
- ceph:
    fs: xfs
- print: '**** done ceph'
- parallel:
  - workload
  - upgrade-sequence
- print: '**** done parallel'
- install.upgrade:
    client.0: null
- print: '**** done install.upgrade'
teuthology_branch: master
tube: vps
upgrade-sequence:
  sequential:
  - install.upgrade:
      mon.a: null
      mon.b: null
  - ceph.restart:
      daemons:
      - mon.a
      wait-for-healthy: false
      wait-for-osds-up: true
  - sleep:
      duration: 60
  - ceph.restart:
      daemons:
      - mon.b
      wait-for-healthy: false
      wait-for-osds-up: true
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.3020
workload:
  sequential:
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rados/test-upgrade-firefly.sh
        - cls
  - rados:
      clients:
      - client.0
      ec_pool: true
      objects: 50
      op_weights:
        append: 100
        copy_from: 50
        delete: 50
        read: 100
        rmattr: 25
        rollback: 50
        setattr: 25
        snap_create: 50
        snap_remove: 50
        write: 0
      ops: 4000
description: upgrade:dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-upgrade/client.yaml
  5-final-workload/ec-rados-default.yaml distros/ubuntu_12.04.yaml}
duration: 997.1536929607391
failure_reason: 'Command failed on vpm037 with status 22: ''adjust-ulimits ceph-coverage
  /home/ubuntu/cephtest/archive/coverage ceph osd erasure-code-profile set teuthologyprofile
  k=2 m=1 ruleset-failure-domain=osd'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
Updated by Loïc Dachary over 9 years ago
- Project changed from Ceph to teuthology
- Status changed from New to Duplicate
Duplicate of #9549