Bug #15477
closed
"failed (workunit test osdc/stress_objectcacher.sh)" in rados-hammer-distro-basic-vps/
Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rados
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Run: http://pulpito.ceph.com/teuthology-2016-04-12_09:00:02-rados-hammer-distro-basic-vps/
Jobs: ['124207', '124242']
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2016-04-12_09:00:02-rados-hammer-distro-basic-vps/124207/teuthology.log
131072 --client-oc-max-dirty 0
2016-04-12T11:02:42.717 INFO:tasks.workunit.client.0.vpm053.stderr:+ ceph_test_objectcacher_stress --ops 10000 --percent-read 0.90 --delay-ns 0 --objects 100 --max-op-size 131072 --client-oc-max-dirty 25165824
2016-04-12T11:02:44.082 INFO:tasks.workunit.client.0.vpm053.stderr:+ ceph_test_objectcacher_stress --ops 10000 --percent-read 0.90 --delay-ns 0 --objects 100 --max-op-size 1048576 --client-oc-max-dirty 0
2016-04-12T11:02:46.425 INFO:tasks.workunit:Stopping ['osdc/stress_objectcacher.sh'] on client.0...
2016-04-12T11:02:46.425 INFO:teuthology.orchestra.run.vpm053:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0 /home/ubuntu/cephtest/clone'
2016-04-12T11:02:47.784 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 83, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/var/lib/teuthworker/src/ceph-qa-suite_hammer/tasks/workunit.py", line 385, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 196, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
    label=self.label)
CommandFailedError: Command failed (workunit test osdc/stress_objectcacher.sh) on vpm053 with status 137: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e219e85be00088eecde7b1f29d7699493a79bc4d TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/osdc/stress_objectcacher.sh'
2016-04-12T11:02:48.737 ERROR:teuthology.run_tasks:Saw exception from tasks. ...
Updated by David Zafman about 8 years ago
Does this make a difference early in the run of 124207?
libvirt.libvirtError: cannot unlink file '/srv/libvirtpool/vpm171/vpm171.img': Input/output error
Updated by David Zafman about 8 years ago
- Related to Bug #15368: "api_misc: [ FAILED ] LibRadosMiscConnectFailure.ConnectFailure" added
Updated by David Zafman about 8 years ago
Job 124242 looks like http://tracker.ceph.com/issues/15368
Updated by David Zafman about 8 years ago
For 124207 the test terminated with signal 9, SIGKILL (shell status 137, i.e. 0x89 = 128 + 9). So could this have been the result of trying to terminate the job in the middle of the run? It had only been running for about 37 minutes.
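As a side note on the status decoding above: POSIX shells report a signal death as an exit status of 128 plus the signal number, which is how 137 maps to SIGKILL. A minimal sketch of that arithmetic (not part of teuthology; just an illustration):

```python
import signal
import subprocess

# A shell exit status above 128 means "killed by signal (status - 128)".
SHELL_STATUS = 137
sig = signal.Signals(SHELL_STATUS - 128)
print(sig.name)  # SIGKILL

# Reproducing the same status with a process that SIGKILLs itself.
# Python's subprocess reports signal deaths as a negative return code.
proc = subprocess.run(["sh", "-c", "kill -KILL $$"])
print(proc.returncode)  # -9
```

Note that `timeout 3h` in the command line would normally send SIGTERM (status 124), so a SIGKILL after only ~37 minutes points at an external kill rather than the timeout wrapper.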
Updated by Greg Farnum about 7 years ago
- Status changed from New to Can't reproduce
I don't see this in nightlies since 6/2/16.