Bug #8016
"testPrefixAndLimit (test.functional.tests.TestContainerUTF8) ... ERROR" in upgrade:dumpling-x:stress-split-firefly-distro-basic-vps suite
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2014-04-07T06:46:10.975 INFO:teuthology.orchestra.run.err:[10.214.138.94]: ERROR: testPrefixAndLimit (test.functional.tests.TestContainerUTF8)
2014-04-07T06:46:10.976 INFO:teuthology.orchestra.run.err:[10.214.138.94]: ----------------------------------------------------------------------
2014-04-07T06:46:10.976 INFO:teuthology.orchestra.run.err:[10.214.138.94]: Traceback (most recent call last):
2014-04-07T06:46:10.976 INFO:teuthology.orchestra.run.err:[10.214.138.94]:   File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 428, in testPrefixAndLimit
2014-04-07T06:46:10.976 INFO:teuthology.orchestra.run.err:[10.214.138.94]:     file.write()
2014-04-07T06:46:10.976 INFO:teuthology.orchestra.run.err:[10.214.138.94]:   File "/home/ubuntu/cephtest/swift/test/functional/swift.py", line 731, in write
2014-04-07T06:46:10.976 INFO:teuthology.orchestra.run.err:[10.214.138.94]:     raise ResponseError(self.conn.response)
2014-04-07T06:46:10.976 INFO:teuthology.orchestra.run.err:[10.214.138.94]: ResponseError: 500: Internal Server Error
2014-04-07T06:46:10.977 INFO:teuthology.orchestra.run.err:[10.214.138.94]:
2014-04-07T06:46:10.978 INFO:teuthology.orchestra.run.err:[10.214.138.94]: ----------------------------------------------------------------------
2014-04-07T06:46:10.978 INFO:teuthology.orchestra.run.err:[10.214.138.94]: Ran 137 tests in 718.384s
2014-04-07T06:46:10.978 INFO:teuthology.orchestra.run.err:[10.214.138.94]:
2014-04-07T06:46:10.978 INFO:teuthology.orchestra.run.err:[10.214.138.94]: FAILED (errors=1)
2014-04-07T06:46:11.031 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/swift.py", line 175, in run_tests
    args=args,
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/cluster.py", line 61, in run
    return [remote.run(**kwargs) for remote in remotes]
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.138.94 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"
archive_path: /var/lib/teuthworker/archive/teuthology-2014-04-06_22:35:23-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps/175597
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/snaps-few-objects.yaml
  6-next-mon/monb.yaml 7-workload/rados_api_tests.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml
  rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml}
email: null
job_id: '175597'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-04-06_22:35:23-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps
nuke-on-error: true
os_type: debian
os_version: '7.0'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 4aef403dbc2ba3dd572d13c43b5192f04941dc07
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 4aef403dbc2ba3dd572d13c43b5192f04941dc07
  s3tests:
    branch: master
  workunit:
    sha1: 4aef403dbc2ba3dd572d13c43b5192f04941dc07
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - osd.3
  - osd.4
  - osd.5
  - mon.c
- - client.0
targets:
  ubuntu@vpm011.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA6EW/lEkHzyUQqr0ocZg653hFeqSY7pRo9ZJIrKCFRnaYmSuUc6R6L2grhWXJRCihTEM3hzDeDT1wfnNW8qkO8ADkF0clQy6ey5eKhTLksUy2rAmP/nZ7Uk1JrBa1R4K/EmIm8t2Y29hKdzSWuUD24MxHE8T18m86sLEN/OVXc/gtHt1HvBK5awSLTssBKYAAvrtaJaA0NZIDF+mhM6b23NJHzYXYJTOqqSBfXTGChGqTOSJXrF3BBGvRlluRBG+dxc7lIxOkeBmNTh4duU/X53ux7XuAEP7qwczgiehPFs565E2tAwhFj/+3fe6t+DFPcWxjZWhOzFuY2wMXTiw5
  ubuntu@vpm031.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDU2JxVvxv+NTlyC2zH6DgE0h0xel9QRnQR4372+8H95JJRzQFYf5qjEkuBmUK01Mmg4M1M1LaxHownFW17pmj2F0E9KNi2HAirnR5mVyEbitbaswX4r+rJyjNrocUSoDr6cMoOubzp2bhZSVjAA1rS9l9IjEbpWhEj0AoG3P/hiK3ApBdF4g35F32a+PnNORMifCRYI94RXE5wYGrHWS6wSGjlA6Tv6NXpQ1imocxDiKRAga0UskhgWTIeBg3843AhGHlUzmuujIPWiLqoamEe3clBVAKvlsp4QwtO3xUD0j0XXdW5n/OSFFyPX5OJNX0ZyRMBjLQuQgxjznPmnJTl
  ubuntu@vpm040.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7BacN3Jhug2eedc9aJqMYiF10BylkRArNBSEVe0ws3zWE697FliYyqQocJR4wdeOn87ub6pKbBnf9rfDN3CrVX/QMfwaDaQ+AzX3KwMg1WNAb+NRfw4kgxBunrB66aV08S5I5Amd4Em9NDwwhbW8FaNaVMXpTZJCmBfR4Gm8yLbhJblQPwwfp/9+BqxmuFc5+XW2DbEh8swiDTwPGZcfX7oUrLWHYf36VEoPNP84mZ0PtNoWPbAJNCgVKPFN9MU3UjA5X/nr3fXGC8MJo5UZn2pbBEODmEs7r85oTLdLvtP2uhx46tKRHYAtQpwpAogC3GeaNNL9lWRgMtfpahv/h
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- rados:
    clients:
    - client.0
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- install.upgrade:
    mon.c: null
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rbd/test_librbd_python.sh
- rgw:
    client.0:
      idle_timeout: 120
- swift:
    client.0:
      rgw_server: client.0
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.17023
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/snaps-few-objects.yaml
  6-next-mon/monb.yaml 7-workload/rados_api_tests.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml
  rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml}
duration: 4851.660161972046
failure_reason: 'Command failed on 10.214.138.94 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a ''!fails_on_rgw''"'
flavor: basic
owner: scheduled_teuthology@teuthology
sentry_event: http://sentry.ceph.com/inktank/teuthology/search?q=1d0162012b2444cbb093acecd6406cff
success: false
Related issues
History
#1 Updated by Sage Weil almost 10 years ago
- Assignee set to Yuri Weinstein
- Source changed from other to Q/A
I think the 120s idle timeout just isn't long enough. Let's make it 300s (here and in the other thrashing/upgrade tests). git grep for idle_timeout: and change them all.
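The suggested fix above amounts to a search-and-replace across the qa yaml fragments: find every idle_timeout setting with git grep and bump the 120s values to 300s. A minimal sketch of that edit, run against a throwaway sample file (the /tmp path and file contents are illustrative, not taken from the suite; sed -i as used here assumes GNU sed):

```shell
# Create a sample yaml fragment shaped like the rgw task config in this job
printf 'rgw:\n  client.0:\n    idle_timeout: 120\n' > /tmp/rgw_sample.yaml

# Replace the 120s timeout with 300s in place
sed -i 's/idle_timeout: 120/idle_timeout: 300/' /tmp/rgw_sample.yaml

# Show the updated setting
grep 'idle_timeout' /tmp/rgw_sample.yaml
```

In a checkout of the suite itself, `git grep -l 'idle_timeout:'` would list the yaml files to feed into the same sed replacement.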
#2 Updated by Sage Weil almost 10 years ago
- Status changed from New to Resolved
#3 Updated by Yuri Weinstein almost 10 years ago
I see what appears to be the same issue in today's run.
Logs are here - http://qa-proxy.ceph.com/teuthology/teuthology-2014-04-09_22:35:02-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps/182623/teuthology.log
Maybe the timeout is still not long enough?
#4 Updated by Loïc Dachary over 9 years ago
- Project changed from Ceph to rgw