Bug #9788
closed"Assertion: common/HeartbeatMap.cc: 79" placeholder for "hit suicide timeout" issues
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Error from 'scrape':
Assertion: common/HeartbeatMap.cc: 79: FAILED assert(0 == "hit suicide timeout")
ceph version 0.67.11-22-gddc8a82 (ddc8a827d1baabc0bcb1df9ded37edc9820d8cac)
 1: (ceph::HeartbeatMap::_check(ceph::heartbeat_handle_d*, char const*, long)+0x107) [0x816bb7]
 2: (ceph::HeartbeatMap::reset_timeout(ceph::heartbeat_handle_d*, long, long)+0x8e) [0x81705e]
 3: (ThreadPool::worker(ThreadPool::WorkThread*)+0x471) [0x8b6ae1]
 4: (ThreadPool::WorkThread::entry()+0x10) [0x8b8b70]
 5: (()+0x7e9a) [0x7f8b876b5e9a]
 6: (clone()+0x6d) [0x7f8b859a53fd]

['546345']
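The assertion itself is generic: a ThreadPool worker thread failed to check in with the internal HeartbeatMap within its suicide grace period, so the daemon deliberately aborts rather than keep running in a wedged state. Below is a minimal sketch of that logic, written in Python for illustration only; the real implementation is C++ in common/HeartbeatMap.cc, and the names and structure here are stand-ins, not the actual Ceph source.

    import time

    class HeartbeatHandle(object):
        """Illustrative stand-in for ceph::heartbeat_handle_d."""
        def __init__(self, name, grace, suicide_grace):
            self.name = name
            self.grace = grace                  # warn after this many seconds
            self.suicide_grace = suicide_grace  # abort after this many seconds
            self.timeout = 0                    # absolute deadlines, 0 = unset
            self.suicide_timeout = 0

    def check(h, who, now):
        """Report an unhealthy thread; abort if it is long overdue."""
        healthy = True
        if h.timeout and now > h.timeout:
            print("%s '%s' had timed out after %s" % (who, h.name, h.grace))
            healthy = False
        if h.suicide_timeout and now > h.suicide_timeout:
            # Corresponds to: assert(0 == "hit suicide timeout")
            raise AssertionError("hit suicide timeout")
        return healthy

    def reset_timeout(h, grace, suicide_grace):
        """Called by a worker thread whenever it makes progress."""
        now = time.time()
        check(h, "reset_timeout", now)  # frame 2 calling frame 1 in the backtrace
        h.timeout = now + grace
        h.suicide_timeout = now + suicide_grace if suicide_grace else 0

In other words, the crash is normally a symptom of a worker thread being stuck for too long (for example on slow storage under radosbench load) rather than a defect in the heartbeat code itself, which is why this ticket serves as a placeholder for otherwise unrelated "hit suicide timeout" failures.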
2014-10-15T00:16:27.453 ERROR:teuthology.run_tasks:Manager failed: radosbench
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 117, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
    self.gen.throw(type, value, traceback)
  File "/var/lib/teuthworker/src/ceph-qa-suite_giant/tasks/radosbench.py", line 92, in task
    run.wait(radosbench.itervalues(), timeout=timeout)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 381, in wait
    check_time()
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 127, in __call__
    raise MaxWhileTries(error_msg)
MaxWhileTries: reached maximum tries (1500) after waiting for 9000 seconds
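The radosbench failure above is a downstream effect: the task waits for the remote rados bench processes with a fixed budget (here 1500 polls over 9000 seconds) and raises MaxWhileTries once that budget is exhausted, which is inevitable after an OSD has aborted on the suicide timeout. A rough sketch of that bounded-wait pattern, using illustrative names and a 6-second poll interval rather than teuthology's actual API:

    import time

    class MaxWhileTries(Exception):
        """Raised when a polled condition never becomes true within the budget."""

    def wait_for(condition, tries=1500, interval=6):
        """Poll `condition` up to `tries` times, sleeping `interval` seconds
        between attempts (1500 * 6 s = 9000 s, the limit in the traceback)."""
        for attempt in range(1, tries + 1):
            if condition():
                return attempt
            time.sleep(interval)
        raise MaxWhileTries("reached maximum tries (%d) after waiting for %d seconds"
                            % (tries, tries * interval))

    # Hypothetical usage: block until every remote benchmark process has exited.
    # wait_for(lambda: all(proc.finished() for proc in procs))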
archive_path: /var/lib/teuthworker/archive/teuthology-2014-10-13_19:30:01-upgrade:dumpling-firefly-x:stress-split-giant-distro-basic-multi/546345
branch: giant
description: upgrade:dumpling-firefly-x:stress-split/{00-cluster/start.yaml 01-dumpling-install/dumpling.yaml 02-partial-upgrade-firefly/firsthalf.yaml 03-thrash/default.yaml 04-mona-upgrade-firefly/mona.yaml 05-workload/rbd-cls.yaml 06-monb-upgrade-firefly/monb.yaml 07-workload/radosbench.yaml 08-monc-upgrade-firefly/monc.yaml 09-workload/{rbd-python.yaml rgw-s3tests.yaml} 10-osds-upgrade-firefly/secondhalf.yaml 11-workload/snaps-few-objects.yaml 12-partial-upgrade-x/first.yaml 13-thrash/default.yaml 14-mona-upgrade-x/mona.yaml 15-workload/rbd-import-export.yaml 16-monb-upgrade-x/monb.yaml 17-workload/readwrite.yaml 18-monc-upgrade-x/monc.yaml 19-workload/radosbench.yaml 20-osds-upgrade-x/osds_secondhalf.yaml 21-final-workload/rados_stress_watch.yaml distros/ubuntu_12.04.yaml}
email: ceph-qa@ceph.com
job_id: '546345'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: plana,burnupi,mira
name: teuthology-2014-10-13_19:30:01-upgrade:dumpling-firefly-x:stress-split-giant-distro-basic-multi
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: giant
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 674781960b8856ae684520c3b0e9a6b8c2bc7bec
  ceph-deploy:
    branch:
      dev: giant
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 674781960b8856ae684520c3b0e9a6b8c2bc7bec
  s3tests:
    branch: giant
  workunit:
    sha1: 674781960b8856ae684520c3b0e9a6b8c2bc7bec
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
  - mon.c
- - osd.3
  - osd.4
  - osd.5
- - client.0
suite: upgrade:dumpling-firefly-x:stress-split
suite_branch: giant
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_giant
targets:
  ubuntu@mira076.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCnEuRqBgd2DrRVNhCSPfldQUUJ4HeQKbPVLUtbC8wNRrv2Nk9ZuVUn5cb1LBJQreJM/p17q4fIO8bZyApZ6RZu+Q9pW70WIE3U+Z6xtINgi9xq6/mqnMauuqkDYiePhR9CDCbVVfBp/zVDOJVeCdV9TG5AZ0Xt2YciQkaVmmvxdRr4v5zhdw6vDumnfZsI5K+J0p2hII8e2HUrUkMTVKO0mu1rXzIqGQFOSArPTfCLAOgQfUG5s/e6QMC4NI+BOy2cVp/8yCzKv6FPDDvdEknmLh9tQ9HbS8SyOGPtdj9wfoIKo7UbOnJiDSu2KOliyljEB3YUTrzNClM7W/pWpobV
  ubuntu@plana15.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA0GHi/HAXxKVnAdyh6NHBoqEq2Qk7z6Hb3SFt5+mljUWThTAkVPTf4QdpSshH/D+5v4VJHXp7lHYhZZJCS50z3w+af8cmREqwUgnA0zEjKKaXaVIdkAfDkh7LH3vllIGah3PlMPKF6njfvuocJ1pr1QneCLTmbHVCYsdWTGgRW7te1fn7vhXDJbGZMumHL5k/HO7iRDaw9cNuozWuqI5/d8UwdvQ/rhhbSKNef3w2hh2C4CU/nCkOGXFVyJZdo2pSJ2k/jBcPWSh+V3qNtIpthDqzTDmmpD8BFdW9MXxO5pfFRDsInWdgTsxZOrWtPuQy9+an20KbU2N5F4JoQX6N
  ubuntu@plana78.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8m+86JHGSyRkSWj9p/K6JUbRcPjB7TtLZ9OBudXAGZNgReiOJoCU5kkpwejl0uXXCOHe/DB/bH81JCQbqY3XCJjU5JZ1wBsL/owaErPSfbbaouNV2k1FQjiSXYtPzx+qwEOeOZtEBPQ4p04npai6NzPLX43OGx/UiAwpyEGfVxZedmci0VBtC7QdCQkP3sNJqSxFYdoVGjU5jv6BarPqV8LM4v00f8TmD1GdP51bfLGSKii6UU1IKXXR78ifb+9QUX4p/Clkl6Qgz8CJ70Iu+mcBZclJaGoAyuoKBhXE2oi2W1cQVquPqloxbN+VbbjoOL5OHbGg2euxyohZhgJaF
tasks:
- internal.lock_machines:
  - 3
  - plana,burnupi,mira
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.push_inventory: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- install.upgrade:
    osd.0:
      branch: firefly
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    branch: dumpling
    clients:
      client.0:
      - cls/test_cls_rbd.sh
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- radosbench:
    clients:
    - client.0
    time: 1800
- install.upgrade:
    mon.c: null
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- workunit:
    clients:
      client.0:
      - rbd/test_librbd_python.sh
- rgw:
    client.0: null
    default_idle_timeout: 300
- s3tests:
    client.0:
      rgw_server: client.0
- install.upgrade:
    osd.3:
      branch: firefly
- ceph.restart:
    daemons:
    - osd.3
    - osd.4
    - osd.5
- rados:
    clients:
    - client.0
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    clients:
      client.0:
      - rbd/import_export.sh
    env:
      RBD_CREATE_ARGS: --new-format
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      delete: 10
      read: 45
      write: 45
    ops: 4000
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- radosbench:
    clients:
    - client.0
    time: 1800
- install.upgrade:
    osd.3: null
- ceph.restart:
    daemons:
    - osd.3
    - osd.4
    - osd.5
- workunit:
    clients:
      client.0:
      - rados/stress_watch.sh
teuthology_branch: master
tube: multi
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.multi.3124
description: upgrade:dumpling-firefly-x:stress-split/{00-cluster/start.yaml 01-dumpling-install/dumpling.yaml 02-partial-upgrade-firefly/firsthalf.yaml 03-thrash/default.yaml 04-mona-upgrade-firefly/mona.yaml 05-workload/rbd-cls.yaml 06-monb-upgrade-firefly/monb.yaml 07-workload/radosbench.yaml 08-monc-upgrade-firefly/monc.yaml 09-workload/{rbd-python.yaml rgw-s3tests.yaml} 10-osds-upgrade-firefly/secondhalf.yaml 11-workload/snaps-few-objects.yaml 12-partial-upgrade-x/first.yaml 13-thrash/default.yaml 14-mona-upgrade-x/mona.yaml 15-workload/rbd-import-export.yaml 16-monb-upgrade-x/monb.yaml 17-workload/readwrite.yaml 18-monc-upgrade-x/monc.yaml 19-workload/radosbench.yaml 20-osds-upgrade-x/osds_secondhalf.yaml 21-final-workload/rados_stress_watch.yaml distros/ubuntu_12.04.yaml}
duration: 20570.046264886856
failure_reason: 'Command failed on plana78 with status 124: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=674781960b8856ae684520c3b0e9a6b8c2bc7bec TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false