Bug #7868
"failed to recover before timeout expired" in powercycle-firefly---basic-plana suite
Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-03-23_23:55:02-powercycle-firefly---basic-plana/144193/
2014-03-25T23:04:27.655 DEBUG:teuthology.parallel:result is None
2014-03-25T23:04:27.655 DEBUG:teuthology.orchestra.run:Running [10.214.132.12]: 'rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0'
2014-03-25T23:04:27.766 INFO:teuthology.task.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0
2014-03-25T23:04:27.766 DEBUG:teuthology.orchestra.run:Running [10.214.132.12]: 'rmdir -- /home/ubuntu/cephtest/mnt.0'
2014-03-25T23:04:27.772 INFO:teuthology.orchestra.run.err:[10.214.132.12]: rmdir: failed to remove `/home/ubuntu/cephtest/mnt.0': Device or resource busy
2014-03-25T23:04:27.772 ERROR:teuthology.task.workunit:Caught an exception deleting dir /home/ubuntu/cephtest/mnt.0
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/task/workunit.py", line 152, in _delete_dir
    mnt,
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.132.12 with status 1: 'rmdir -- /home/ubuntu/cephtest/mnt.0'
2014-03-25T23:04:27.773 DEBUG:teuthology.run_tasks:Unwinding manager ceph-fuse
2014-03-25T23:04:27.773 INFO:teuthology.task.ceph-fuse:Unmounting ceph-fuse clients...
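The rmdir failure above is incidental: the workunit task tries to remove the mount point while the ceph-fuse client is still mounted, so `rmdir` fails with "Device or resource busy"; the ceph-fuse unwind that follows unmounts it and the retried `rmdir` succeeds. A minimal sketch of a more defensive cleanup, assuming a hypothetical helper (this is illustrative, not teuthology code; `fusermount -u` is the standard FUSE detach command):

```python
import errno
import os
import subprocess


def remove_mountpoint(mnt):
    """Remove a mount-point directory, detaching a live FUSE mount first.

    Hypothetical helper illustrating the ordering problem in the log:
    rmdir on a still-mounted directory fails with EBUSY, so unmount
    before retrying the removal.
    """
    try:
        os.rmdir(mnt)
    except OSError as e:
        if e.errno not in (errno.EBUSY, errno.ENOTEMPTY):
            raise
        # Directory is busy: assume a FUSE mount is still live, detach it,
        # then retry removing the (now empty) mount point.
        subprocess.check_call(['fusermount', '-u', mnt])
        os.rmdir(mnt)
```

On a directory that is not actually a mount point the first `os.rmdir` simply succeeds and the unmount branch is never taken.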
2014-03-25T23:04:27.773 DEBUG:teuthology.orchestra.run:Running [10.214.132.12]: 'sudo fusermount -u /home/ubuntu/cephtest/mnt.0'
2014-03-25T23:04:27.880 INFO:teuthology.task.ceph-fuse.ceph-fuse.0.err:[10.214.132.12]: ceph-fuse[4818]: fuse finished with error 0
2014-03-25T23:04:29.027 DEBUG:teuthology.orchestra.run:Running [10.214.132.12]: 'rmdir -- /home/ubuntu/cephtest/mnt.0'
2014-03-25T23:04:29.034 DEBUG:teuthology.run_tasks:Unwinding manager thrashosds
2014-03-25T23:04:29.034 INFO:teuthology.task.thrashosds:joining thrashosds
2014-03-25T23:04:29.034 ERROR:teuthology.run_tasks:Manager failed: thrashosds
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 84, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/thrashosds.py", line 172, in task
    thrash_proc.do_join()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph_manager.py", line 153, in do_join
    self.thread.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
AssertionError: failed to recover before timeout expired
2014-03-25T23:04:29.101 DEBUG:teuthology.run_tasks:Unwinding manager ceph
2014-03-25T23:04:29.101 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:04:30.842 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:04:31.840 INFO:teuthology.task.ceph:Scrubbing osd osd.2
2014-03-25T23:04:31.840 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd scrub osd.2'
2014-03-25T23:04:32.073 INFO:teuthology.orchestra.run.err:[10.214.133.34]: osd.2 instructed to scrub
2014-03-25T23:04:32.084 INFO:teuthology.task.ceph:Scrubbing osd osd.1
2014-03-25T23:04:32.085 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd scrub osd.1'
2014-03-25T23:04:32.312 INFO:teuthology.orchestra.run.err:[10.214.133.34]: osd.1 instructed to scrub
2014-03-25T23:04:32.324 INFO:teuthology.task.ceph:Scrubbing osd osd.0
2014-03-25T23:04:32.324 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd scrub osd.0'
2014-03-25T23:04:32.551 INFO:teuthology.orchestra.run.err:[10.214.133.34]: osd.0 instructed to scrub
2014-03-25T23:04:32.563 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:04:32.795 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:04:32.811 INFO:teuthology.task.ceph:Still waiting for all pgs to be scrubbed.
2014-03-25T23:04:42.802 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:04:43.142 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:04:43.156 INFO:teuthology.task.ceph:Still waiting for all pgs to be scrubbed.
2014-03-25T23:04:53.150 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:04:53.405 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:04:53.419 INFO:teuthology.task.ceph:Still waiting for all pgs to be scrubbed.
2014-03-25T23:05:03.412 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:05:03.640 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:05:03.653 INFO:teuthology.task.ceph:Still waiting for all pgs to be scrubbed.
2014-03-25T23:05:13.649 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:05:13.877 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:05:13.892 INFO:teuthology.task.ceph:Still waiting for all pgs to be scrubbed.
2014-03-25T23:05:23.889 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:05:24.123 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:05:24.134 INFO:teuthology.task.ceph:Still waiting for all pgs to be scrubbed.
2014-03-25T23:05:34.132 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:05:34.370 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:05:34.383 INFO:teuthology.task.ceph:Still waiting for all pgs to be scrubbed.
2014-03-25T23:05:44.377 DEBUG:teuthology.orchestra.run:Running [10.214.133.34]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-03-25T23:05:44.615 INFO:teuthology.orchestra.run.err:[10.214.133.34]: dumped all in format json
2014-03-25T23:05:44.629 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 29, in nested
    yield vars
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 1456, in task
    yield
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 84, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/thrashosds.py", line 172, in task
    thrash_proc.do_join()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph_manager.py", line 153, in do_join
    self.thread.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
AssertionError: failed to recover before timeout expired
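The actual failure is the thrashosds unwind: the thrasher's join waits for the cluster to recover after the power-cycle thrashing, polls PG state, and raises the AssertionError above once the task's configured `timeout: 600` is exceeded. The polling pattern amounts to the following sketch (assumed names; `is_recovered` stands in for the real check that all PGs are active+clean):

```python
import time


def wait_for_recovery(is_recovered, timeout=600, interval=10,
                      clock=time.time, sleep=time.sleep):
    """Poll until the cluster reports recovery, or fail after `timeout` seconds.

    Illustrative sketch of the wait loop behind the log above, not the
    actual ceph_manager code; `clock` and `sleep` are injectable so the
    loop can be exercised without real waiting.
    """
    start = clock()
    while not is_recovered():
        # This assertion is what surfaces as
        # "AssertionError: failed to recover before timeout expired".
        assert clock() - start < timeout, \
            'failed to recover before timeout expired'
        sleep(interval)
```

With `timeout: 600` and power-cycled OSDs on btrfs, ten minutes can plausibly be too short for backfill to finish, which is consistent with the test otherwise looking healthy (the final pg dumps succeed).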
archive_path: /var/lib/teuthworker/archive/teuthology-2014-03-23_23:55:02-powercycle-firefly---basic-plana/144193
description: powercycle/osd/{clusters/3osd-1per-target.yaml fs/btrfs.yaml powercycle/default.yaml tasks/cfuse_workunit_misc.yaml}
email: null
job_id: '144193'
last_in_suite: false
machine_type: plana
name: teuthology-2014-03-23_23:55:02-powercycle-firefly---basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
        osd op thread timeout: 60
        osd sloppy crc: true
    fs: btrfs
    log-whitelist:
    - slow request
    sha1: 361b251e15e6f96219107d08d77094307679bf8e
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 361b251e15e6f96219107d08d77094307679bf8e
  s3tests:
    branch: master
  workunit:
    sha1: 361b251e15e6f96219107d08d77094307679bf8e
owner: scheduled_teuthology@teuthology
roles:
- - mon.0
  - mon.1
  - mon.2
  - mds.0
  - client.0
- - osd.0
- - osd.1
- - osd.2
targets:
  ubuntu@plana26.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7ggZ7aqae7QJjZ2+KyYSYys2vhqpMkjksuRNBQ+pjdzcHSSbxSRtDxGyZcucG0e3K0O2KJUkco/38peR9dU4iyL0oEdAEs6+391mBDBd5PmLhGeINaIpE6Q9TUGh7jMVdNqsKX03K8cmyd0ryR+QGJZC8aKlRCo0ZsbY9Pb5/tML1zDy/9V/n9bHxvqxSdh8LGJAOzt6+zxMKbLyTULT89lPni/uxB+dmErEDCzG3LG3BghU+t1LjIKSL5N6bhJqBztdGjctX6as4mpmatbwkYowCrJH3PV7DVy3MuXDxl93yLyrm7TiVjb3KjhT9RcfHVq23M4otok9OVV0AXvNV
  ubuntu@plana41.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDO1APNwMPhJnDeyE8W4NBl4Fdsx8dBV8IzROahOlY56SZsVVthCKUZm1cHPE6nN9L4iEZw7ibM9JpqAI/cfFoSQh4HVAwe3lfIQTO3dh7EF7vPjMNowiiPEmQcby0RNi85x33Q6m+44E+5A72ZmVdmmuOLsi7ERd+m3eAnzI4GdTLL4bJuxLMfpj8X4aGdMnopICuCOmzJGCw8+ye5pC/NdX9PBmMsg2G/Yjb54SQXELTMTgSCPOt+LoemSD1CxsPgaXVM3KMSPGQ3cRaOr3n+UpnC8bpzfvqGEIYtU40zTqLeElgkrn7lTZl35+yG3y20mcvCDiL0VITU/zToTLaD
  ubuntu@plana66.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA0vDozYONKDSOwceUM2xseqpoYVAl88wZfuBYHCshx3vL+J9n383t//jQH4wEXYJPb41jlQGPL3jFbd28OsBfjjGtQP53SH/LPvsirYUMFjM+1WtNr/roJP4wF/Z5skE70yXSnO3uYawm8/nK0xf8o3d/B4xuS55UWtH0bNzTloXsP6L4H+4PqSb5GQ5mukFhfKSQBN2CdqAJSSi0mMdG3uOglqWQmuJjEEnL6+pS//Q6yzYlrgCZrAmmsp99U654RZGT9CngqUyp7i7M2CM6wYR3klMI4DDBRcILhczHKN6DdFTw4+rejCMDTRk3xLZHhkzMW5W2PdtuZmF0od4f
  ubuntu@plana82.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGnXo8Gpxlr9AWSlOhrSp5/Tqs2crPuTwKDJ9+PMJwf28byF41A0RNf9zQhHErAe6LeT6ypEeEFTnesd5s2lXySJCUoqoVI3q5/XIHPClUSY7O16JSq6y4g/SW/To92puNuacEYIU1IVv48cqtzeIrgo72EYMeyprPAaBqErcKkVteFHO5fhX/NkiSe83UvBDLj9DQydYfl9yBmZItnsMQkPAsKIKglqeUu97oaq8m2eCbecsVtQdDtckTRzxsWY89ZtWbwbNYloAkPwotYvOFJU0syPvn5nmchbFU/c90UeePW9gqJk3QhsZs94wxX/ZQeo5UbIgjvZtEc2zxJkmx
tasks:
- internal.lock_machines:
  - 4
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install: null
- ceph: null
- thrashosds:
    chance_down: 1.0
    powercycle: true
    timeout: 600
- ceph-fuse: null
- workunit:
    clients:
      all:
      - fs/misc
teuthology_branch: firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.16916
description: powercycle/osd/{clusters/3osd-1per-target.yaml fs/btrfs.yaml powercycle/default.yaml tasks/cfuse_workunit_misc.yaml}
duration: 9031.649965047836
failure_reason: failed to recover before timeout expired
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
History
#1 Updated by Sage Weil almost 10 years ago
- Status changed from New to Can't reproduce