Bug #7713
"Error: finished tid 3 when last_acked_tid was 4" in upgrade:dumpling-x:parallel-firefly---basic-plana
Status:
Closed
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2014-03-14T00:51:55.587 INFO:teuthology.task.rados.rados.1.out:[10.214.133.35]: 669: finishing write tid 4 to plana818892-15
2014-03-14T00:51:55.587 INFO:teuthology.task.rados.rados.1.out:[10.214.133.35]: 669: finishing write tid 3 to plana818892-15
2014-03-14T00:51:55.587 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: Error: finished tid 3 when last_acked_tid was 4
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: ./test/osd/RadosModel.h: In function 'virtual void WriteOp::_finish(TestOp::CallbackInfo*)' thread 7ff146ffd700 time 2014-03-14 00:51:55.587005
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: ./test/osd/RadosModel.h: 811: FAILED assert(0)
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: ceph version 0.77-868-g4f43e53 (4f43e53ced66a5a24f5cbd5ef56b2b5937b73b97)
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: 1: (WriteOp::_finish(TestOp::CallbackInfo*)+0x318) [0x419dc8]
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: 2: (write_callback(void*, void*)+0x21) [0x4274b1]
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: 3: (librados::C_AioSafe::finish(int)+0x1d) [0x7ff150e1553d]
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: 4: (Context::complete(int)+0x9) [0x7ff150df2f89]
2014-03-14T00:51:55.588 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: 5: (Finisher::finisher_thread_entry()+0x1c0) [0x7ff150ea5b70]
2014-03-14T00:51:55.589 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: 6: (()+0x7e9a) [0x7ff150a4ce9a]
2014-03-14T00:51:55.589 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: 7: (clone()+0x6d) [0x7ff150263ccd]
2014-03-14T00:51:55.589 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2014-03-14T00:51:55.589 INFO:teuthology.task.rados.rados.1.err:[10.214.133.35]: terminate called after throwing an instance of 'ceph::FailedAssertion'
2014-03-14T00:51:55.920 ERROR:teuthology.run_tasks:Manager failed: rados
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 84, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/rados.py", line 170, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 331, in get
    raise self._exception
CommandCrashedError: Command crashed: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_create 50 --op snap_remove 50 --op rollback 50 --pool unique_pool_0'
archive_path: /var/lib/teuthworker/archive/teuthology-2014-03-13_19:33:10-upgrade:dumpling-x:parallel-firefly---basic-plana/129632
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-upgrade/client.yaml
  5-final-workload/rados-snaps-few-objects.yaml distros/ubuntu_12.04.yaml}
email: null
job_id: '129632'
last_in_suite: false
machine_type: plana
name: teuthology-2014-03-13_19:33:10-upgrade:dumpling-x:parallel-firefly---basic-plana
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - scrub mismatch
    sha1: 4f43e53ced66a5a24f5cbd5ef56b2b5937b73b97
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 4f43e53ced66a5a24f5cbd5ef56b2b5937b73b97
  s3tests:
    branch: master
  workunit:
    sha1: 4f43e53ced66a5a24f5cbd5ef56b2b5937b73b97
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
targets:
  ubuntu@plana23.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqVudeWAUTT3INu2wBfJdPJ+V1/bVDDG026e1Fn+JnJiqcyc+mSUCo3koxP7pSzeb94DKcEimkyZmaL1JAxS4OMsDrjlkjebBZEzYrVWmMflD1vpu4bhigvKDVmIKMvBJzGx08ngj3IwSfZ0R31eUZfCUYBf2Y9HoIeox8ueCfhBftWXt13zQaQ7Z7Nt0Gb1atxE2cZsTpyVDjgKZG2fTjZmwD8x/hapuQk96rLJe3oeQgVgDWai0ddd9CI/Y3HnQ9Lc2kYFqNDavek27G3oEb3sHNWdp29HVb9lyGbzJFsHmlgimhDGBg226nsEt51Rw25nq+o+a6WKStRYPrAK4T
  ubuntu@plana50.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtM9yFUXCpvsuyO/hzgW2MWGPjInZ9iZ24dRax9LfbWAdXLPbzEU/Y/gJY+gDlTwXLpIZ+7Lx/UjvnMCJXzgJm9Bz8x8+40czsXadF3hYekjQjkhX5rvb9Ah5ABhy6EEFUJUIzzzSEtOtLKm3xcQ4UQ5H5viH0pkeXnzxGgYSLud+08WBsopaboGHmEMR9+KOGzIYRvjdXK5mUZzh9cMQK+YSv1oUse5j1BotzaHyWFsqWRoBr4Rk6jREZsRl5Jv1gWN+QOOFmsEHOxfdJuOyKzeEZbrVhxp4iX9nEjJjFVdZlbsp7+KYvaWt6t9CxOkF/Aj7Ql0Lnj/MsIB3n7SAh
  ubuntu@plana81.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLTZA4SJyS92fQdL5falBnYGzteCeH5jJREjKHsIjPu7zLB5glIkGkhoJqvDVCu1otZxJ/cVfszYpW2esYtZ5ZTHHg0SToPbRDFB2IVKOmQSa66CZDOb5vxxvgTsGMObdVFxm47kFJzM9h2JGPU3UQ5i9Td+FJVFF6WrioGp2c6izRI3mTjHXURwe1DecsF7/zx+BygYkPD1DrAF4AGavG++bDMvRdY2xyubO7tRLa4RyYNeAB4dgjuSmLlJo22GqnQ7I8rfC0Q2505CZ6YxbBzBstp321k8/Qca4rEHihBul/3W8nwL75ot473Y5CbLtcNLRVJ5tkpeeVALh5/XPt
tasks:
- internal.lock_machines:
  - 3
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- parallel:
  - workload
  - upgrade-sequence
- install.upgrade:
    client.0: null
- rados:
    clients:
    - client.1
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: firefly
upgrade-sequence:
  sequential:
  - install.upgrade:
      mon.a: null
      mon.b: null
  - ceph.restart:
    - mon.a
    - mon.b
    - mon.c
    - mds.a
    - osd.0
    - osd.1
    - osd.2
    - osd.3
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.11129
workload:
  sequential:
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rbd/test_librbd.sh
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-upgrade/client.yaml
  5-final-workload/rados-snaps-few-objects.yaml distros/ubuntu_12.04.yaml}
duration: 765.7725410461426
failure_reason: 'Command crashed: ''CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_create 50 --op snap_remove 50 --op rollback 50 --pool unique_pool_0'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
Updated by Sage Weil about 10 years ago
- Status changed from New to Duplicate
This was #7709, now fixed.
Updated by Sage Weil about 10 years ago
- Category set to OSD
- Status changed from Duplicate to 12
- Priority changed from High to Urgent
- Source changed from other to Q/A
Actually, the problem is probably that dumpling still has the bug.