Bug #8989
Failed running iogen.sh in upgrade:firefly-firefly-testing-basic-vps suite (closed)
Status:
Rejected
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
The majority of failures in this run were related to this issue: http://pulpito.front.sepia.ceph.com/teuthology-2014-07-30_12:36:02-upgrade:firefly-firefly-testing-basic-vps/
It could not be reproduced in a manual run.
Logs for one run are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-07-30_12:36:02-upgrade:firefly-firefly-testing-basic-vps/387308/
2014-07-30T13:47:51.791 INFO:teuthology.orchestra.run.err:[10.214.138.64]: marked out osd.3.
2014-07-30T13:47:56.820 INFO:teuthology.task.thrashosds.thrasher:in_osds: [0, 1, 2] out_osds: [4, 5, 3] dead_osds: [4] live_osds: [1, 0, 3, 2, 5]
2014-07-30T13:47:56.820 INFO:teuthology.task.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2014-07-30T13:47:56.821 INFO:teuthology.task.thrashosds.thrasher:inject_pause on 3
2014-07-30T13:47:56.821 INFO:teuthology.task.thrashosds.thrasher:Testing filestore_inject_stall pause injection for duration 3
2014-07-30T13:47:56.821 INFO:teuthology.task.thrashosds.thrasher:Checking after 0, should_be_down=False
2014-07-30T13:47:56.821 DEBUG:teuthology.orchestra.run:Running [10.214.138.73]: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config set filestore_inject_stall 3'
2014-07-30T13:48:03.548 INFO:teuthology.task.thrashosds.thrasher:in_osds: [0, 1, 2] out_osds: [4, 5, 3] dead_osds: [4] live_osds: [1, 0, 3, 2, 5]
2014-07-30T13:48:03.549 INFO:teuthology.task.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2014-07-30T13:48:03.549 INFO:teuthology.task.thrashosds.thrasher:Adding osd 3
2014-07-30T13:48:03.549 DEBUG:teuthology.orchestra.run:Running [10.214.138.64]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd in 3'
2014-07-30T13:48:04.967 INFO:teuthology.orchestra.run.err:[10.214.138.64]: marked in osd.3.
2014-07-30T13:48:04.977 INFO:teuthology.task.thrashosds.thrasher:Added osd 3
2014-07-30T13:48:07.472 INFO:teuthology.task.workunit:Stopping suites/iogen.sh on client.0...
2014-07-30T13:48:07.473 DEBUG:teuthology.orchestra.run:Running [10.214.138.73]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.0'
2014-07-30T13:48:07.700 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_firefly/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_firefly/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_firefly/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/teuthology_firefly/teuthology/task/workunit.py", line 359, in _run_tests
    args=args,
  File "/home/teuthworker/src/teuthology_firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/src/teuthology_firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/src/teuthology_firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.138.73 with status 143: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b73c67d375a2552d8ed67843c8a65c2c0feba6 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/suites/iogen.sh'
2014-07-30T13:48:07.700 INFO:teuthology.task.workunit:Stopping suites/iogen.sh on client.1...
2014-07-30T13:48:07.701 DEBUG:teuthology.orchestra.run:Running [10.214.138.73]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.1'
archive_path: /var/lib/teuthworker/archive/teuthology-2014-07-30_12:36:02-upgrade:firefly-firefly-testing-basic-vps/387308
branch: firefly
description: upgrade:firefly/newer/{0-cluster/start.yaml 1-install/v0.80.4.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_6.5.yaml}
email: ceph-qa@ceph.com
job_id: '387308'
kernel: &id001
  kdb: true
  sha1: 967166011221589288348b893720d358150176b9
last_in_suite: false
machine_type: vps
name: teuthology-2014-07-30_12:36:02-upgrade:firefly-firefly-testing-basic-vps
nuke-on-error: true
os_type: centos
os_version: '6.5'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      global:
        osd heartbeat grace: 40
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 38b73c67d375a2552d8ed67843c8a65c2c0feba6
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 38b73c67d375a2552d8ed67843c8a65c2c0feba6
  s3tests:
    branch: firefly
  workunit:
    sha1: 38b73c67d375a2552d8ed67843c8a65c2c0feba6
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
  - client.1
suite: upgrade:firefly
suite_branch: firefly
targets:
  ubuntu@vpm028.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxLmEY3t5DkWjoONxQ3fSWj2ZmNr8TdfnQ4YGyVtTsTi+kfHy7K7+ue90D9Vf+zujBvtP0ucE3S22arHgphyllwh9EVp6Sw4TUKUVYfxM57EucCzNGKqnzUw5ivQH/lRb6s5Be6nULR2+tRdfbBQaRFRjfFSFwPTPaGRGvRGmWAbGiXxaCsA++Iw00+wenc42bQvTPXEw1mSM4MeaSIFWmMZeoHe77kTrwiPTDx3Zox8ZETylWcG/Og6v2LUztdzltO+TN1+FG9wyPC2iIKrawueqEvv4gqJkABPykuR6JUylkA8QcEl9izRCSJckULY+NEMI86XWmNLrS4lihEtu3w==
  ubuntu@vpm031.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA2VQsD81cUeXheTn1xKD880q5dJb0eO27znGI+ZG+PYzHt5Hx98H5uZEbQiuUIDPDJYK240gwuBhh7GpN+cfeYmDiNdSfgHwPUT8+kh9GQ3lNnNAnlbTuEbXBH+GQgZv6qAzW04KXgsu/zOrj7X+9KlVIi54i71eBGN9xXBj54D0wH8Yy29c4UXidJ17C2Xx2fYj/6yOvvaSVISwbtjWK886ub1NipbITc5lJ1l27T5VgtDlgvS2KkXgaggoYMQ1DOhSVailPGaockJKbc7aUencTjO078qqjEBArQpTa82WyJxB/rK7XJd911CM4xWVa8T6KHhwZte10IIgokWHHHQ==
tasks:
- internal.lock_machines:
  - 2
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    tag: v0.80.4
- ceph:
    log-whitelist:
    - scrub mismatch
    - ScrubResult
- parallel:
  - workload
  - upgrade-sequence
- sequential:
  - mon_thrash:
      revive_delay: 20
      thrash_delay: 1
  - ceph-fuse: null
  - workunit:
      clients:
        client.0:
        - suites/dbench.sh
- sequential:
  - thrashosds:
      chance_pgnum_grow: 1
      chance_pgpnum_fix: 1
      timeout: 1200
  - ceph-fuse: null
  - workunit:
      clients:
        all:
        - suites/iogen.sh
- sequential:
  - rgw:
    - client.1
  - s3tests:
      client.1:
        rgw_server: client.1
teuthology_branch: firefly
tube: vps
upgrade-sequence:
  sequential:
  - install.upgrade:
      all:
        branch: firefly
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.11606
workload:
  workunit:
    clients:
      all:
      - suites/blogbench.sh
description: upgrade:firefly/newer/{0-cluster/start.yaml 1-install/v0.80.4.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_6.5.yaml}
duration: 3813.5555601119995
failure_reason: 'Command failed on 10.214.138.73 with status 143: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b73c67d375a2552d8ed67843c8a65c2c0feba6 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/suites/iogen.sh'''
flavor: basic
mon.a-kernel-sha1: 967166011221589288348b893720d358150176b9
mon.b-kernel-sha1: 967166011221589288348b893720d358150176b9
owner: scheduled_teuthology@teuthology
success: false
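A note on the exit code in the failure_reason above: status 143 is 128 + 15, i.e. the iogen.sh workunit was killed by SIGTERM (consistent with the "Stopping suites/iogen.sh on client.0..." log line just before the traceback) rather than failing on its own. A minimal shell illustration of where 143 comes from:

```shell
# A command that dies from signal N is reported with exit status 128 + N.
# SIGTERM is signal 15, so a terminated workunit yields 128 + 15 = 143.
sh -c 'kill -TERM $$'   # the child shell sends SIGTERM to itself
echo "exit status: $?"  # prints "exit status: 143"
```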
Updated by Yuri Weinstein over 9 years ago
- Status changed from New to Rejected
This was a test misconfiguration. When we added a new client to run a workload on, we had to be more specific about which client each workload should run on.
https://github.com/ceph/ceph-qa-suite/pull/71
https://github.com/ceph/ceph-qa-suite/pull/72
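For reference, the misconfiguration is the workunit stanza in the job YAML above, which ran iogen.sh on all clients (including client.1, which the final phase also uses as the rgw/s3tests client). The sketch below shows the kind of change involved; the exact diff is in the pull requests, and pinning to client.0 here is an illustrative assumption:

```yaml
# Before: the thrash workload ran iogen.sh on every client
- workunit:
    clients:
      all:
      - suites/iogen.sh

# After (illustrative): name the client the workload should run on
- workunit:
    clients:
      client.0:
      - suites/iogen.sh
```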