Bug #6769: rados upgrade tests failing in the nightlies
Status: Closed
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
logs: /a/teuthology-2013-11-07_05:30:02-upgrade-next-testing-basic-plana/87919
ubuntu@teuthology:/a/teuthology-2013-11-07_05:30:02-upgrade-next-testing-basic-plana/87919$ cat config.yaml
archive_path: /var/lib/teuthworker/archive/teuthology-2013-11-07_05:30:02-upgrade-next-testing-basic-plana/87919
description: upgrade/rados/{0-cluster/start.yaml 1-cuttlefish-install/cuttlefish.yaml
  2-cuttlefish-workload/api.yaml 3-upgrade/dumpling.yaml 4-restart/upgrade_osd_mds_mon.yaml
  5-dumpling-workload/snaps-few-objects.yaml 6-upgrade-emp/emperor.yaml
  7-restart/upgrade_mon_mds_osd.yaml 8-emperor-workload/snaps-few-objects.yaml}
email: null
job_id: '87919'
kernel: &id001
  kdb: true
  sha1: b3efdaef9e1ac82e95ade43861f625b732dab06d
last_in_suite: false
machine_type: plana
name: teuthology-2013-11-07_05:30:02-upgrade-next-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: next
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 5
    log-whitelist:
    - slow request
    sha1: 1ee112fa2efcf743c3f0451d73386d3364b59f1a
  ceph-deploy:
    branch:
      dev: next
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
  install:
    ceph:
      sha1: 1ee112fa2efcf743c3f0451d73386d3364b59f1a
  s3tests:
    branch: next
  workunit:
    sha1: 1ee112fa2efcf743c3f0451d73386d3364b59f1a
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
targets:
  ubuntu@plana62.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNblOa9z8QAG1Uc2XGDM+3p2UtZhXIK4mBhguSJQLJ2pxgl8CB9cw3IbPTMIXgkful0QnhPMVTYpceXQ+jIuSi8qrFjRO4wewxVmDN2+AKlrSy5T9ynmrU0D+4RlA8aKvXb0CcOUz0EjNcfA440GwN0MAzmkDTpViuKW+qTXRoltiIWTdJfadzikhcbgJB7Q2cSeCYK6svT14wkcgJoYKXp4NmrB1WH251c42GpUt1KtUQfDkvzDCK/O7HrHsxiu5joxuK3+LgjdAMnKaUgqUSEDoiLHs/rONYxvtAIsbleST8NruIK0bxRQrsVX8KrMBIAl++ZeY8t7UrWj6zxakH
  ubuntu@plana63.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBp8JmpiVIxiByMuxR33JGuGsc7pJGWcIDmzL08nnBRZfKhVqEC1D3GY6EyyVCtpfQsUf2Xl02OA8tCYpLWTv85hKS2sGWvXULFkpgKsazdJ78Ifdf3fXmEj1Unj3ng9V5JmV4vUHQbptsXOnON/sQDDqNqS7bN5JFfjfSo9hsbBw0bibRPHddbUDdY6RIEp1h8kZMnv8akkRlZ5p9RzMokvpfjkvoHyqRFrOemM+4qGjk5V9YFcQSejmSyhitpuA8VskVSkvfYi30oQh8QYgdXlXLyLAoyG7mqOxfHEfOEu1KUlU+OrNJCX/PJnFrXhwaVyjLesZNl3fZLr/zdydR
  ubuntu@plana68.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCk8EYGKdzsq6sKetc5IXfxLMcta+UDLEDk+a9OAih+iRON3upQ6JJqu2WHq4wNk7W+YjXyAN9u8FkJ/WzDX4tmhT8MeK//Ejy+A5tYIxePYrcJ7ujVZHj3wusN+DwpyfXD6UWc2jT0f8ejJ+Cb7bR1or+/TpxT7YmLmd6vQKZUJzHpwsM2gyB3+63dRA7D6lyiqB3OBYfzOdVf4kpKRepLCIk2r3Ai+JpxnXV+LSbWoqbzfRBBmQlI3YbtOd/sEAbgf8iwgG2eiK1PN45yTSxcZAJ9cNUzJW81JVw02cQmtWTt9kKRTnvyfcriB8RrhsXKOQsJLgmeeWJJ0oYLidDx
tasks:
- internal.lock_machines:
  - 3
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- ceph:
    fs: xfs
- workunit:
    branch: cuttlefish
    clients:
      client.0:
      - rados/test.sh
      - cls
- install.upgrade:
    all:
      branch: dumpling
- ceph.restart:
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - mds.a
  - mon.a
  - mon.b
  - mon.c
- rados:
    clients:
    - client.0
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- install.upgrade:
    all:
      branch: emperor
- ceph.restart:
  - mon.a
  - mon.b
  - mon.c
  - mds.a
  - osd.0
  - osd.1
  - osd.2
  - osd.3
- rados:
    clients:
    - client.0
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: next

2013-11-07T13:43:49.230 DEBUG:teuthology.orchestra.run:Running [10.214.132.10]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2013-11-07T13:43:49.488 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN 81 pgs stale
2013-11-07T13:43:49.831 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: 767: oids not in use 50
2013-11-07T13:43:49.832 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: RollingBack 32 to 86
2013-11-07T13:43:50.068 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: 768: oids not in use 50
2013-11-07T13:43:50.068 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: Snapping
2013-11-07T13:43:50.489 DEBUG:teuthology.orchestra.run:Running [10.214.132.10]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2013-11-07T13:43:50.726 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN 81 pgs stale
2013-11-07T13:43:51.041 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: 769: oids not in use 50
2013-11-07T13:43:51.042 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: RollingBack 16 to 86
2013-11-07T13:43:51.096 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: 770: oids not in use 50
2013-11-07T13:43:51.096 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: Reading 38
2013-11-07T13:43:51.096 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: 771: oids not in use 49
2013-11-07T13:43:51.097 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: RollingBack 30 to 89
2013-11-07T13:43:51.726 DEBUG:teuthology.orchestra.run:Running [10.214.132.10]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2013-11-07T13:43:51.965 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN 36 pgs stale
2013-11-07T13:43:52.354 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: 772: oids not in use 50
2013-11-07T13:43:52.355 INFO:teuthology.task.rados.rados.0.out:[10.214.132.16]: Snapping
2013-11-07T13:43:52.966 DEBUG:teuthology.orchestra.run:Running [10.214.132.10]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2013-11-07T13:43:53.202 DEBUG:teuthology.misc:Ceph health: HEALTH_OK
2013-11-07T13:43:53.202 INFO:teuthology.run_tasks:Running task rados...
2013-11-07T13:43:53.202 INFO:teuthology.task.rados:Beginning rados...
2013-11-07T13:43:53.202 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x1972410>
2013-11-07T13:43:53.203 INFO:teuthology.task.rados:joining rados
2013-11-07T13:43:53.203 INFO:teuthology.task.rados:clients are ['client.0']
2013-11-07T13:43:53.203 INFO:teuthology.task.rados:starting run 0 out of 1
2013-11-07T13:43:53.205 ERROR:teuthology.run_tasks:Manager failed: <contextlib.GeneratorContextManager object at 0x1972410>
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-next/teuthology/run_tasks.py", line 84, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-next/teuthology/task/rados.py", line 133, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 331, in get
    raise self._exception
AssertionError
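Note on the traceback: the AssertionError is not raised at the point where the manager unwinds; it was raised earlier inside the rados workload greenlet, stored, and only re-raised when the task's exit path calls running.get(). A minimal stdlib sketch of this deferred-exception pattern, using concurrent.futures as a stand-in for gevent greenlets (all names here are illustrative, not from teuthology):

```python
from concurrent.futures import ThreadPoolExecutor

def workload():
    # Stand-in for the rados workload; the real task died with a
    # bare AssertionError somewhere inside the workload greenlet.
    assert False, "workload invariant violated"

def run_and_join():
    with ThreadPoolExecutor(max_workers=1) as pool:
        running = pool.submit(workload)   # failure is invisible here
        try:
            # .result() re-raises the stored exception, just as
            # greenlet.get() does `raise self._exception` above.
            running.result()
        except AssertionError as e:
            return str(e)

print(run_and_join())
```

This is why the error surfaces under run_tasks while it is "joining rados": the join is the first point where the stored exception can propagate to the caller.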
Not sure what causes this issue, but most of the upgrade tests seem to have failed with it.