Bug #6902
upgrade-parallel test failed in the nightlies
Status:
Can't reproduce
Priority:
Urgent
Assignee:
David Zafman
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
The rados test failed when upgrading from dumpling to the next branch.
logs: ubuntu@teuthology:/a/teuthology-2013-11-25_19:40:02-upgrade-parallel-next-testing-basic-plana/118239
ubuntu@teuthology:/a/teuthology-2013-11-25_19:40:02-upgrade-parallel-next-testing-basic-plana/118239$ cat config.yaml
archive_path: /var/lib/teuthworker/archive/teuthology-2013-11-25_19:40:02-upgrade-parallel-next-testing-basic-plana/118239
description: upgrade-parallel/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/more.yaml 5-workload/snaps-many-objects.yaml
  6-next-mon/monb.yaml 7-workload/rados_api_tests.yaml distro/ubuntu_12.04.yaml}
email: null
job_id: '118239'
kernel: &id001
  kdb: true
  sha1: 68174f0c97e7c0561aa844059569e3cbf0a43de1
last_in_suite: false
machine_type: plana
name: teuthology-2013-11-25_19:40:02-upgrade-parallel-next-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: next
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 5
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 1804e136da199a61c2fd17183bd18b0df6172a32
  ceph-deploy:
    branch:
      dev: next
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
  install:
    ceph:
      sha1: 1804e136da199a61c2fd17183bd18b0df6172a32
  s3tests:
    branch: next
  workunit:
    sha1: 1804e136da199a61c2fd17183bd18b0df6172a32
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - osd.3
  - osd.4
  - osd.5
  - client.0
  - mon.c
targets:
  ubuntu@plana37.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8FPsPKV1KVlb89QL2k0kNMTM3mIenC2wHxnVb9EgA7MGjC/gJFv4FoYFtTn0SadJl2hZNJ8kk7HjBsgCQG3f+LL3l7DPlqSJG8zFFXW6LCzjk0YQX/JX7X6nK33HdxzzOZVecglaQnTSWKbPDp8ofd9EQX4gN7mPb/C0/FUtT0Hjrb97QBYqDDVWEMBo7BCT4YdsisPBkCFpQ1Khl2K89e9uhfw4wvVvqveLnU3NEAULbEhMeLg0LMsSlmK2gfiyJbyxweApXo4VqfuNd6DnUqUzilAM0VJL3KgJqJGW46IYC76VPMSHPKD66kgrYiyBm12iLEy70kODNVaNe3wnX
  ubuntu@plana41.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDO1APNwMPhJnDeyE8W4NBl4Fdsx8dBV8IzROahOlY56SZsVVthCKUZm1cHPE6nN9L4iEZw7ibM9JpqAI/cfFoSQh4HVAwe3lfIQTO3dh7EF7vPjMNowiiPEmQcby0RNi85x33Q6m+44E+5A72ZmVdmmuOLsi7ERd+m3eAnzI4GdTLL4bJuxLMfpj8X4aGdMnopICuCOmzJGCw8+ye5pC/NdX9PBmMsg2G/Yjb54SQXELTMTgSCPOt+LoemSD1CxsPgaXVM3KMSPGQ3cRaOr3n+UpnC8bpzfvqGEIYtU40zTqLeElgkrn7lTZl35+yG3y20mcvCDiL0VITU/zToTLaD
tasks:
- internal.lock_machines:
  - 2
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph: null
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      copy_from: 50
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test.sh
teuthology_branch: next

2013-11-26T10:54:38.235 INFO:teuthology.task.rados:joining rados
2013-11-26T10:54:38.235 ERROR:teuthology.run_tasks:Manager failed: <contextlib.GeneratorContextManager object at 0x366a8d0>
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-next/teuthology/run_tasks.py", line 82, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-next/teuthology/task/rados.py", line 132, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
CommandFailedError: Command failed on 10.214.131.3 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 0 --op rmattr 0 --op watch 0 --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op copy_from 50 --pool unique_pool_0'
2013-11-26T10:54:38.236 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x367b310>
2013-11-26T10:54:38.236 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x3776d90>
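The shape of the traceback above is worth noting: the workload runs in a background gevent greenlet, and the failure only surfaces when the rados task's context manager exits and calls `running.get()`, which re-raises whatever exception the worker hit. A minimal stdlib sketch of this pattern (using `concurrent.futures` in place of gevent; all names here are illustrative, not teuthology's actual code):

```python
from concurrent.futures import ThreadPoolExecutor
from contextlib import contextmanager


class CommandFailedError(Exception):
    """Stand-in for teuthology's CommandFailedError."""


def run_workload():
    # Pretend the remote ceph_test_rados process exited with status 1.
    raise CommandFailedError("Command failed with status 1")


@contextmanager
def rados_task():
    with ThreadPoolExecutor(max_workers=1) as pool:
        running = pool.submit(run_workload)
        yield                     # the rest of the teuthology run executes here
        print("joining rados")    # mirrors the last INFO line before the error
        running.result()          # like greenlet.get(): re-raises the worker's exception


try:
    with rados_task():
        pass                      # other tasks would run here
except CommandFailedError as e:
    print("caught:", e)
```

This is why the error appears at "joining rados" during task teardown rather than at the moment the remote command actually failed.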
Updated by David Zafman over 10 years ago
- Status changed from New to Need More Info
All the logs indicate is that ceph_test_rados exited with status 1. There are no other apparent messages indicating which test failed or what went wrong. Let's see if this happens again.
Updated by Sage Weil over 10 years ago
- Status changed from Need More Info to Can't reproduce