Bug #8294
closed "Error EINVAL" in upgrade:dumpling-dumpling-testing-basic-vps
Status:
Rejected
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
other
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Logs are in /a/yuriw/231774
2014-05-05T16:01:24.643 DEBUG:teuthology.orchestra.run:Running [10.214.132.20]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd primary-affinity 0 0.493108749111'
2014-05-05T16:01:25.002 INFO:teuthology.orchestra.run.err:[10.214.132.20]: no valid command found; 10 closest matches:
2014-05-05T16:01:25.002 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd dump {<int[0-]>}
2014-05-05T16:01:25.002 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd thrash <int[0-]>
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool <int>
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd reweight-by-utilization {<int[100-]>}
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd pool set-quota <poolname> max_objects|max_bytes <val>
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd pool delete <poolname> <poolname> --yes-i-really-really-mean-it
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd pool rename <poolname> <poolname>
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd rm <ids> [<ids>...]
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: osd reweight <int[0-]> <float[0.0-1.0]>
2014-05-05T16:01:25.003 INFO:teuthology.orchestra.run.err:[10.214.132.20]: Error EINVAL: invalid command
2014-05-05T16:10:42.654 INFO:teuthology.task.workunit.client.0.out:[10.214.132.19]: stopping iogen
2014-05-05T16:10:42.952 INFO:teuthology.task.workunit.client.0.out:[10.214.132.19]: OK
2014-05-05T16:10:45.666 DEBUG:teuthology.orchestra.run:Running [10.214.132.19]: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'
2014-05-05T16:10:47.369 INFO:teuthology.task.workunit:Stopping suites/iogen.sh on client.0...
2014-05-05T16:10:47.369 DEBUG:teuthology.orchestra.run:Running [10.214.132.19]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.0'
2014-05-05T16:10:47.411 DEBUG:teuthology.parallel:result is None
2014-05-05T16:10:47.412 DEBUG:teuthology.orchestra.run:Running [10.214.132.19]: 'rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0'
2014-05-05T16:10:47.419 INFO:teuthology.task.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0
2014-05-05T16:10:47.419 DEBUG:teuthology.orchestra.run:Running [10.214.132.19]: 'rmdir -- /home/ubuntu/cephtest/mnt.0'
2014-05-05T16:10:47.505 INFO:teuthology.orchestra.run.err:[10.214.132.19]: rmdir: failed to remove `/home/ubuntu/cephtest/mnt.0': Device or resource busy
2014-05-05T16:10:47.505 ERROR:teuthology.task.workunit:Caught an exception deleting dir /home/ubuntu/cephtest/mnt.0
Traceback (most recent call last):
  File "/home/ubuntu/yuriw/code/teuthology/teuthology/task/workunit.py", line 152, in _delete_dir
    mnt,
  File "/home/ubuntu/yuriw/code/teuthology/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/ubuntu/yuriw/code/teuthology/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/ubuntu/yuriw/code/teuthology/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.132.19 with status 1: 'rmdir -- /home/ubuntu/cephtest/mnt.0'
2014-05-05T16:10:47.517 DEBUG:teuthology.run_tasks:Unwinding manager ceph-fuse
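The primary failure is the EINVAL above: the mon's command table (the "10 closest matches" list) contains no `osd primary-affinity`, so a dumpling mon rejects the command that the thrasher issues; `osd primary-affinity` was only added in a later release. A minimal sketch of how a harness could tolerate this, assuming a hypothetical helper (not part of teuthology) that treats exit status 22 (EINVAL) as "unsupported on this release" rather than a test failure:

```python
import subprocess

EINVAL = 22  # 'Error EINVAL: invalid command' surfaces as exit status 22


def run_tolerating_einval(cmd):
    """Run a CLI command, but treat EINVAL as 'command not supported by
    this release' and skip it instead of failing the run.

    Hypothetical helper for illustration only; it sketches how a
    thrasher could skip 'ceph osd primary-affinity' on pre-firefly mons.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode == EINVAL:
        return None          # older release: silently skip this command
    proc.check_returncode()  # any other nonzero status still raises
    return proc.stdout
```

Any other exit status still raises, so genuine failures are not masked.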
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 3f1d7f5e0a67ad646de465335fb7ee00eb07e220
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 3f1d7f5e0a67ad646de465335fb7ee00eb07e220
  s3tests:
    branch: dumpling
  workunit:
    sha1: 3f1d7f5e0a67ad646de465335fb7ee00eb07e220
owner: scheduled_ubuntu@yw
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
suite: upgrade:dumpling
targets:
  ubuntu@plana58.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Dgv5XfnXOHEuwwOoK+xQtHEZusRHTm9I4oBJ41RYkPq2y5GA5OrXiWVlOrwBoXXdtCeW4ynErDhqiFuL3tYmwNEYzRWqnyqZte4qfsTz93Lhv7UEkageJ2iHNaUNt+H071A8JULR2CRtIxXu6zSSKC8vwmEirxqYj3pPRVm9TCa1iPaj8R3wPmeBjwVD9IU+zAuvIi6oWcqKrxZEdEOciMa72nGO58V7Wo0yICMST6day1jxIBnNaOqGnKafMQSiLAIUSChY+Q544o0LRZO3HW6k9eZlO5yqRUJN1p2H+QxOSG/PicKR2Trode3A/tZmtYqelF2FgOfLjEgBkkKF
  ubuntu@plana59.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDaHG83zRXo6ydv6IGWDFTf6YNjWG9M5LRbbYIpPXKOqCg9zfI/4ZjymLpznESFIACVrqe06jqD7uvsQPOlbcm3W/H44su70C21KrzMs77IpskMT7tYgCzY75uxbwg949qYIRf1SEY2RW0Bf2zldbOeKAY/TcnGIkLtc4NCIDPfCxMG0rAJJgUAwbvbKVUqLKe/jcyu3RiiAxV3TGjTAzTz+XHwT46gDXB5Fxt49Sfx+AgpILHk7DvN/HILtU3gRT9ac0D2WlQi1sJLDgjeTAZxyfpRR5iZH4tWYBFIS7C4ugHYye95zUYTc/3Jt364Jl/giUherGjE5od7p65VjxRJ
tasks:
- internal.lock_machines:
  - 2
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    tag: v0.67.8
- ceph: null
- parallel:
  - workload
  - upgrade-sequence
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    timeout: 1200
- ceph-fuse: null
- workunit:
    clients:
      all:
      - suites/iogen.sh
teuthology_branch: dumpling
upgrade-sequence:
  sequential:
  - install.upgrade:
      all:
        branch: dumpling
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.30587
workload:
  workunit:
    clients:
      all:
      - suites/blogbench.sh
duration: 1775.9894881248474
failure_reason: 'Command failed on 10.214.132.20 with status 22: ''adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd primary-affinity 0 0.493108749111'''
flavor: basic
owner: ubuntu@teuthology
success: false
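The later rmdir failure in the log ("Device or resource busy" on /home/ubuntu/cephtest/mnt.0) is secondary cleanup fallout: the ceph-fuse mount had not fully detached when the workunit task tried to remove its mountpoint. A minimal sketch of a retry-on-EBUSY cleanup, using a hypothetical helper for illustration (not teuthology code):

```python
import errno
import os
import time


def rmdir_with_retry(path, attempts=5, delay=1.0):
    """Remove a directory, retrying while the kernel still reports it
    busy (EBUSY), as happens when a fuse client is mid-detach.

    Illustrative only; the real fix is to unmount before cleanup.
    """
    for i in range(attempts):
        try:
            os.rmdir(path)
            return True
        except OSError as e:
            # Re-raise anything that is not EBUSY, or EBUSY on the
            # final attempt, so real failures still surface.
            if e.errno != errno.EBUSY or i == attempts - 1:
                raise
            time.sleep(delay)
```

Any other error (e.g. a non-empty or missing directory) is raised immediately, so the retry only papers over the transient busy window.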