Bug #7799
closed
Errors in upgrade:dumpling-x:stress-split-firefly---basic-plana suite
Status:
Can't reproduce
Priority:
High
Assignee:
Ian Colle
Target version:
-
% Done:
0%
Source:
other
Tags:
Backport:
Regression:
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2014-03-20T06:20:06.756 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ======================================================================
2014-03-20T06:20:06.756 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ERROR: testContainerSerializedInfo (test.functional.tests.TestAccountUTF8)
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ----------------------------------------------------------------------
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]: Traceback (most recent call last):
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]:   File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 219, in testContainerSerializedInfo
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]:     file.write_random(bytes)
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]:   File "/home/ubuntu/cephtest/swift/test/functional/swift.py", line 739, in write_random
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]:     if not self.write(data, hdrs=hdrs, parms=parms, cfg=cfg):
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]:   File "/home/ubuntu/cephtest/swift/test/functional/swift.py", line 731, in write
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]:     raise ResponseError(self.conn.response)
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ResponseError: 500: Internal Server Error
2014-03-20T06:20:06.757 INFO:teuthology.orchestra.run.err:[10.214.131.38]:
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ======================================================================
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ERROR: testContainersOrderedByName (test.functional.tests.TestAccountUTF8)
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ----------------------------------------------------------------------
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]: Traceback (most recent call last):
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]:   File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 300, in testContainersOrderedByName
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]:     parms={'format':format})
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]:   File "/home/ubuntu/cephtest/swift/test/functional/swift.py", line 355, in containers
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]:     raise ResponseError(self.conn.response)
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ResponseError: 500: Internal Server Error
2014-03-20T06:20:06.758 INFO:teuthology.orchestra.run.err:[10.214.131.38]:
2014-03-20T06:20:06.759 INFO:teuthology.orchestra.run.err:[10.214.131.38]: ----------------------------------------------------------------------
2014-03-20T06:20:06.759 INFO:teuthology.orchestra.run.err:[10.214.131.38]: Ran 137 tests in 374.608s
2014-03-20T06:20:06.759 INFO:teuthology.orchestra.run.err:[10.214.131.38]:
2014-03-20T06:20:06.759 INFO:teuthology.orchestra.run.err:[10.214.131.38]: FAILED (errors=2)
2014-03-20T06:20:06.774 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/swift.py", line 175, in run_tests
    args=args,
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/cluster.py", line 61, in run
    return [remote.run(**kwargs) for remote in remotes]
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.38 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"
2014-03-20T06:20:06.839 DEBUG:teuthology.orchestra.run:Running [10.214.131.38]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data'
2014-03-20T06:20:07.089 INFO:teuthology.task.thrashosds.thrasher:in_osds: [5, 0, 3, 1, 4, 2] out_osds: [] dead_osds: [] live_osds: [5, 1, 2, 4, 0, 3]
2014-03-20T06:20:07.089 INFO:teuthology.task.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2014-03-20T06:20:07.089 INFO:teuthology.task.thrashosds.thrasher:fixing pg num pool unique_pool_0
2014-03-20T06:20:07.089 DEBUG:teuthology.orchestra.run:Running [10.214.133.37]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format=json'
2014-03-20T06:20:07.499 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: 3855: oids not in use 50
2014-03-20T06:20:07.499 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: Reading 40
2014-03-20T06:20:07.499 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: 3856: oids not in use 49
2014-03-20T06:20:07.500 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: Deleting 26 current snap is 505
2014-03-20T06:20:07.690 INFO:teuthology.orchestra.run.err:[10.214.133.37]: dumped all in format json
2014-03-20T06:20:08.278 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: 3857: oids not in use 50
2014-03-20T06:20:08.278 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: RemovingSnap 504
2014-03-20T06:20:08.834 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: 3858: oids not in use 50
2014-03-20T06:20:08.834 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: Deleting 32 current snap is 505
2014-03-20T06:20:09.052 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: 3859: oids not in use 50
2014-03-20T06:20:09.053 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: Snapping
2014-03-20T06:20:10.196 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: 3860: oids not in use 50
2014-03-20T06:20:10.196 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: RemovingSnap 461
2014-03-20T06:20:10.277 DEBUG:teuthology.orchestra.run:Running [10.214.131.38]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid bar.client.0 --purge-data'
2014-03-20T06:20:10.561 INFO:teuthology.task.swift:Removing swift...
2014-03-20T06:20:10.561 DEBUG:teuthology.orchestra.run:Running [10.214.131.38]: 'rm -rf /home/ubuntu/cephtest/swift'
2014-03-20T06:20:10.631 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 33, in run_tasks
    manager.__enter__()
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/swift.py", line 255, in task
    lambda: run_tests(ctx=ctx, config=config),
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/swift.py", line 175, in run_tests
    args=args,
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/cluster.py", line 61, in run
    return [remote.run(**kwargs) for remote in remotes]
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.38 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"
2014-03-20T06:20:10.687 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/inktank/teuthology/search?q=5c548aa37ae74a2fa4e083f601d7d79c
CommandFailedError: Command failed on 10.214.131.38 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"
Also pay attention to this:
2014-03-20T06:13:13.501 INFO:teuthology.task.rados.rados.0.out:[10.214.131.38]: RollingBack 5 to 381
2014-03-20T06:13:13.636 INFO:teuthology.orchestra.run.err:[10.214.131.38]: 2014-03-20 06:13:13.635756 7f2cd970d7c0 0 couldn't find old data placement pools config, setting up new ones for the zone
2014-03-20T06:13:13.947 INFO:teuthology.task.rgw.client.0.out:[10.214.131.38]: 2014-03-20 06:13:13.946636 7fac5895a7c0 -1 error storing region info: (17) File exists
Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/check_libyaml.c -o build/temp.linux-x86_64-2.7/check_libyaml.o
2014-03-20T06:13:38.038 INFO:teuthology.orchestra.run.out:[10.214.131.38]: build/temp.linux-x86_64-2.7/check_libyaml.c:2:18: fatal error: yaml.h: No such file or directory
2014-03-20T06:13:38.038 INFO:teuthology.orchestra.run.out:[10.214.131.38]: #include <yaml.h>
2014-03-20T06:13:38.038 INFO:teuthology.orchestra.run.out:[10.214.131.38]: ^
2014-03-20T06:13:38.038 INFO:teuthology.orchestra.run.out:[10.214.131.38]: compilation terminated.
2014-03-20T06:13:38.067 INFO:teuthology.orchestra.run.out:[10.214.131.38]:
2014-03-20T06:13:38.067 INFO:teuthology.orchestra.run.out:[10.214.131.38]: libyaml is not found or a compiler error: forcing --without-libyaml
archive_path: /var/lib/teuthworker/archive/teuthology-2014-03-20_01:35:01-upgrade:dumpling-x:stress-split-firefly---basic-plana/139311
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/snaps-few-objects.yaml 6-next-mon/monb.yaml 7-workload/rbd_api.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml}
email: null
job_id: '139311'
last_in_suite: false
machine_type: plana
name: teuthology-2014-03-20_01:35:01-upgrade:dumpling-x:stress-split-firefly---basic-plana
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: cb744ca3825c42ddf8eb708abe5bc92f0f240287
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: cb744ca3825c42ddf8eb708abe5bc92f0f240287
  s3tests:
    branch: master
  workunit:
    sha1: cb744ca3825c42ddf8eb708abe5bc92f0f240287
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - osd.3
  - osd.4
  - osd.5
  - mon.c
- - client.0
targets:
  ubuntu@plana02.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCxQlTnGvBkgmzUvtJu1+nb3bT0QzoBmdGTBZfi6l/NcVViyQpfVQxUImsFtegXZsgqlGdbDKmJfyhNAP/Y2yV2nNOjOv05b6N4cQg824zU3tBovfGk/UWPtGgDcdx1X64rKi44WwAxeBFqvAUU9OWCyWpgW7G09lhK6kF+vL3ySCzjDOVNy3bHW9gBfieMxkmDdYHhQYbSQPIw1z9IBsycv81efHznOpe9VE8LQM4MxulPyrOMtiBwzsM36VPPnqc36cKXf7vBOkVomYQ2LoxLEqdlW2Phh602idQnOE9WRJzBhSXeUVaTT7Ba19lMYB21sQXN+E4wd1l60H6ud1tD
  ubuntu@plana75.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDd+2bdpxp8O9cLQ+wpb3wxDMG2+muqLa/mYfxwBZ4AiPKSVO7aVAfSLo22Po5b9AbYp7D0i9/qBfHuAnhqaDlqXRldd8UnFiqUa0Am+brWaxezcAnfv9CJGnw8gQcWVbQP3BprM0Z1FOs/zX3oiRBhKLfRIpn58odVjwFPuek+EUiBxQc4xb1ZZ8NF4KebhOeFTQdFD79lLMCjwS7fLvJOZ7xzV/97MY53bp9xBWO5ZTsAtTZNBWKvZCsP4bDtph8CQ078KF6ZKhVkjzmgf2VRf7n4qiKKsrghCEZ+Mpjpzxy2CWHtRGr9tkWZDU9JbSk/P4WhJnlp9+ePEr4s6c5z
  ubuntu@plana79.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGJX9/cZFlm+ll32X716yKrmR/RE94iH9TusIFLY5vLoE8CBCupjAKdkn4mqOB5eNvwqapMG63Vww5Cl9zo0wKPEHi3jsZCwbAxByc9sBFivZeBnmTUrYNesQvAs1Izr49/4h71oCt98hfX3hl81iEhIEGjjj7XoD3blyYHlyR9LBNoYMllLfi9Pw4KD1snEikGKlR8zMyAgndlZ5ODU4usiCMXLypAa4wFKfR6w17nZI2/Q2xkUy7l59oap50bOR9mSNMOPqpN1KgsK717JCKMU4jHHw+zN0oWvQPmDZK2ckzbyftBfpOqKyidGHpLjpXfuQ1Xk4R9fgUkpE6gcD5
tasks:
- internal.lock_machines:
  - 3
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- rados:
    clients:
    - client.0
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rbd/test_librbd.sh
- install.upgrade:
    mon.c: null
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rbd/test_librbd_python.sh
- rgw:
  - client.0
- swift:
    client.0:
      rgw_server: client.0
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.11488
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/snaps-few-objects.yaml 6-next-mon/monb.yaml 7-workload/rbd_api.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml}
duration: 3272.5434489250183
failure_reason: 'Command failed on 10.214.131.38 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a ''!fails_on_rgw''"'
flavor: basic
owner: scheduled_teuthology@teuthology
sentry_event: http://sentry.ceph.com/inktank/teuthology/search?q=5c548aa37ae74a2fa4e083f601d7d79c
success: false
Updated by Ian Colle about 10 years ago
- Status changed from New to 4
- Assignee changed from Yehuda Sadeh to Yuri Weinstein
Yuri - have we seen this in a while? We believe it's fixed, but want to confirm.
Updated by Yuri Weinstein about 10 years ago
- Assignee changed from Yuri Weinstein to Ian Colle
I don't see those errors in the latest runs.
Updated by Ian Colle about 10 years ago
- Status changed from 4 to Can't reproduce
Updated by Loïc Dachary almost 10 years ago
- Status changed from Can't reproduce to 12
2014-07-03T00:03:05.568 INFO:teuthology.orchestra.run.vpm058.stderr:testStackedOverwrite (test.functional.tests.TestFileUTF8) ... ok
2014-07-03T00:03:05.571 INFO:teuthology.orchestra.run.vpm058.stderr:testTooLongName (test.functional.tests.TestFileUTF8) ... ok
2014-07-03T00:03:05.615 INFO:teuthology.orchestra.run.vpm058.stderr:testZeroByteFile (test.functional.tests.TestFileUTF8) ... ok
2014-07-03T00:03:05.616 INFO:teuthology.orchestra.run.vpm058.stderr:
2014-07-03T00:03:05.617 INFO:teuthology.orchestra.run.vpm058.stderr:======================================================================
2014-07-03T00:03:05.617 INFO:teuthology.orchestra.run.vpm058.stderr:FAIL: testGetRequest (test.functional.tests.TestAccountNoContainers)
2014-07-03T00:03:05.617 INFO:teuthology.orchestra.run.vpm058.stderr:----------------------------------------------------------------------
2014-07-03T00:03:05.617 INFO:teuthology.orchestra.run.vpm058.stderr:Traceback (most recent call last):
2014-07-03T00:03:05.617 INFO:teuthology.orchestra.run.vpm058.stderr:  File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 325, in testGetRequest
2014-07-03T00:03:05.617 INFO:teuthology.orchestra.run.vpm058.stderr:    parms={'format':format}))
2014-07-03T00:03:05.617 INFO:teuthology.orchestra.run.vpm058.stderr:AssertionError: False is not true
2014-07-03T00:03:05.618 INFO:teuthology.orchestra.run.vpm058.stderr:
2014-07-03T00:03:05.618 INFO:teuthology.orchestra.run.vpm058.stderr:======================================================================
2014-07-03T00:03:05.618 INFO:teuthology.orchestra.run.vpm058.stderr:FAIL: testSerialization (test.functional.tests.TestFileUTF8)
2014-07-03T00:03:05.619 INFO:teuthology.orchestra.run.vpm058.stderr:----------------------------------------------------------------------
2014-07-03T00:03:05.619 INFO:teuthology.orchestra.run.vpm058.stderr:Traceback (most recent call last):
2014-07-03T00:03:05.619 INFO:teuthology.orchestra.run.vpm058.stderr:  File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 1315, in testSerialization
2014-07-03T00:03:05.620 INFO:teuthology.orchestra.run.vpm058.stderr:    self.assert_(container.create())
2014-07-03T00:03:05.620 INFO:teuthology.orchestra.run.vpm058.stderr:AssertionError: False is not true
2014-07-03T00:03:05.620 INFO:teuthology.orchestra.run.vpm058.stderr:
2014-07-03T00:03:05.620 INFO:teuthology.orchestra.run.vpm058.stderr:----------------------------------------------------------------------
2014-07-03T00:03:05.620 INFO:teuthology.orchestra.run.vpm058.stderr:Ran 137 tests in 912.357s
2014-07-03T00:03:05.620 INFO:teuthology.orchestra.run.vpm058.stderr:
2014-07-03T00:03:05.620 INFO:teuthology.orchestra.run.vpm058.stderr:FAILED (failures=2)
Updated by Loïc Dachary almost 10 years ago
- Status changed from 12 to Can't reproduce
It's not the same error, my bad
Updated by Loïc Dachary almost 10 years ago
- Status changed from Can't reproduce to 12
This one looks exactly the same: http://pulpito.ceph.com/loic-2014-07-02_22:04:23-upgrade:firefly-x:stress-split-master-testing-basic-vps/338855/
2014-07-02T16:40:04.611 INFO:teuthology.task.thrashosds.thrasher:in_osds: [5, 4, 3, 2, 0] out_osds: [1] dead_osds: [0] live_osds: [2, 5, 1, 3, 4]
2014-07-02T16:40:04.611 INFO:teuthology.task.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2014-07-02T16:40:04.611 INFO:teuthology.task.thrashosds.thrasher:Removing osd 0, in_osds are: [5, 4, 3, 2, 0]
2014-07-02T16:40:04.611 INFO:teuthology.orchestra.run.vpm016:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd out 0'
2014-07-02T16:40:05.715 INFO:teuthology.orchestra.run.vpm016.stderr:marked out osd.0.
2014-07-02T16:40:08.874 INFO:teuthology.orchestra.run.vpm017.stderr:testAccountHead (test.functional.tests.TestAccountUTF8) ... ERROR
2014-07-02T16:40:10.738 INFO:teuthology.task.thrashosds.thrasher:in_osds: [5, 4, 3, 2] out_osds: [1, 0] dead_osds: [0] live_osds: [2, 5, 1, 3, 4]
2014-07-02T16:40:10.738 INFO:teuthology.task.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2014-07-02T16:40:10.738 INFO:teuthology.task.thrashosds.thrasher:Reviving osd 0
and later
2014-07-02T16:44:50.487 INFO:teuthology.orchestra.run.vpm017.stderr:======================================================================
2014-07-02T16:44:50.487 INFO:teuthology.orchestra.run.vpm017.stderr:ERROR: testAccountHead (test.functional.tests.TestAccountUTF8)
2014-07-02T16:44:50.487 INFO:teuthology.orchestra.run.vpm017.stderr:----------------------------------------------------------------------
2014-07-02T16:44:50.487 INFO:teuthology.orchestra.run.vpm017.stderr:Traceback (most recent call last):
2014-07-02T16:44:50.487 INFO:teuthology.orchestra.run.vpm017.stderr:  File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 122, in setUp
2014-07-02T16:44:50.487 INFO:teuthology.orchestra.run.vpm017.stderr:    super(Base2, self).setUp()
2014-07-02T16:44:50.488 INFO:teuthology.orchestra.run.vpm017.stderr:  File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 104, in setUp
2014-07-02T16:44:50.488 INFO:teuthology.orchestra.run.vpm017.stderr:    cls.env.setUp()
2014-07-02T16:44:50.488 INFO:teuthology.orchestra.run.vpm017.stderr:  File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 140, in setUp
2014-07-02T16:44:50.488 INFO:teuthology.orchestra.run.vpm017.stderr:    raise ResponseError(cls.conn.response)
2014-07-02T16:44:50.488 INFO:teuthology.orchestra.run.vpm017.stderr:ResponseError: 500: Internal Server Error
2014-07-02T16:44:50.488 INFO:teuthology.orchestra.run.vpm017.stderr:
2014-07-02T16:44:50.488 INFO:teuthology.orchestra.run.vpm017.stderr:----------------------------------------------------------------------
2014-07-02T16:44:50.489 INFO:teuthology.orchestra.run.vpm017.stderr:Ran 137 tests in 788.461s
2014-07-02T16:44:50.489 INFO:teuthology.orchestra.run.vpm017.stderr:
2014-07-02T16:44:50.489 INFO:teuthology.orchestra.run.vpm017.stderr:FAILED (errors=1)
2014-07-02T16:44:50.509 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-master/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-master/teuthology/task/swift.py", line 175, in run_tests
    args=args,
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/cluster.py", line 64, in run
    return [remote.run(**kwargs) for remote in remotes]
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 114, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 401, in run
    r.wait()
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 102, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on vpm017 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"