Bug #3383
rbd copy fails in the nightlies
Description
Logs: ubuntu@teuthology:/a/teuthology-2012-10-21_19:00:07-regression-master-testing-gcov/5323
2012-10-21T22:55:29.123 INFO:teuthology.task.workunit.client.0.err:+ rbd copy testimg1 --snap=snap1 testimg2
2012-10-21T22:55:29.277 INFO:teuthology.task.workunit.client.0.err:2012-10-21 22:55:29.286945 7f3cc0d43780 -1 librbd: src size 268435456 != dest size 0
2012-10-21T22:55:29.281 INFO:teuthology.task.workunit.client.0.out:^MImage copy: 100% complete...done.
2012-10-21T22:55:29.281 INFO:teuthology.task.workunit.client.0.err:rbd: copy failed: (22) Invalid argument
2012-10-21T22:55:29.314 DEBUG:teuthology.orchestra.run:Running: 'rm -rf -- /tmp/cephtest/workunits.list /tmp/cephtest/workunit.client.0'
2012-10-21T22:55:29.326 ERROR:teuthology.run_tasks:Saw exception from tasks
Traceback (most recent call last):
  File "/var/lib/teuthworker/teuthology-master/teuthology/run_tasks.py", line 25, in run_tasks
    manager = _run_one_task(taskname, ctx=ctx, config=config)
  File "/var/lib/teuthworker/teuthology-master/teuthology/run_tasks.py", line 14, in _run_one_task
    return fn(**kwargs)
  File "/var/lib/teuthworker/teuthology-master/teuthology/task/workunit.py", line 86, in task
    all_spec = True
  File "/var/lib/teuthworker/teuthology-master/teuthology/parallel.py", line 83, in __exit__
    for result in self:
  File "/var/lib/teuthworker/teuthology-master/teuthology/parallel.py", line 100, in next
    resurrect_traceback(result)
  File "/var/lib/teuthworker/teuthology-master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/var/lib/teuthworker/teuthology-master/teuthology/task/workunit.py", line 219, in _run_tests
    args=args,
  File "/var/lib/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 40, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/var/lib/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 257, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/var/lib/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 253, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status)
CommandFailedError: Command failed with status 1: 'mkdir -- /tmp/cephtest/mnt.0/client.0/tmp && cd -- /tmp/cephtest/mnt.0/client.0/tmp && CEPH_REF=7d9ee17e82efbf7e09e1227df5fb25bc032a9ba2 PATH="$PATH:/tmp/cephtest/binary/usr/local/bin" LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/tmp/cephtest/binary/usr/local/lib" CEPH_CONF="/tmp/cephtest/ceph.conf" CEPH_SECRET_FILE="/tmp/cephtest/data/client.0.secret" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/tmp/cephtest/binary/usr/local/lib/python2.7/dist-packages:/tmp/cephtest/binary/usr/local/lib/python2.6/dist-packages" RBD_CREATE_ARGS=--new-format /tmp/cephtest/enable-coredump /tmp/cephtest/binary/usr/local/bin/ceph-coverage /tmp/cephtest/archive/coverage /tmp/cephtest/workunit.client.0/rbd/copy.sh && rm -rf -- /tmp/cephtest/mnt.0/client.0/tmp'
Associated revisions
librbd: validate copy size against proper snapshot id
Fixes: #3383
Signed-off-by: Dan Mick <dan.mick@inktank.com>
History
#1 Updated by Tamilarasi muthamizhan almost 11 years ago
More logs:
ubuntu@teuthology:/a/teuthology-2012-10-21_19:00:07-regression-master-testing-gcov/5324
ubuntu@teuthology:/a/teuthology-2012-10-21_19:00:07-regression-master-testing-gcov/5331
ubuntu@teuthology:/a/teuthology-2012-10-21_19:00:07-regression-master-testing-gcov/5332
#2 Updated by Dan Mick almost 11 years ago
- Status changed from New to 7
- Assignee set to Dan Mick
So I understand the problem: the "size is the same" check compared the size of the
source image at the requested snapshot against the size of the destination image at
that same snapshot. What perplexes me is that this could never have worked, yet the
test is old and I know we've been running the workunit tests. No idea yet why it
just started failing.
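
To make the mistake concrete, here is a minimal, hypothetical C++ sketch of the kind of pre-copy size check described above. ImageCtx, get_image_size, and check_copy_sizes_buggy are illustrative names chosen for this sketch, not the actual librbd code. Because both lookups use the source's snapshot id, a freshly created destination image, which has no such snapshot, reports size 0 and the copy is rejected with EINVAL, matching the "src size 268435456 != dest size 0" error in the log.

// Hypothetical stand-in for an open image: head size plus per-snapshot sizes.
#include <cerrno>
#include <cstdint>
#include <map>

struct ImageCtx {
  uint64_t size = 0;                            // size of the image head
  std::map<uint64_t, uint64_t> snap_sizes;      // snap id -> size at that snapshot
  uint64_t snap_id = 0;                         // snapshot the image was opened at (0 = head)

  uint64_t get_image_size(uint64_t at_snap) const {
    if (at_snap == 0)
      return size;                              // no snapshot: current size
    auto it = snap_sizes.find(at_snap);
    return it == snap_sizes.end() ? 0 : it->second;  // unknown snap reads as size 0
  }
};

// The buggy comparison: both sizes are taken at the *source's* snap id, so a
// brand-new destination (which has no snapshot with that id) reports size 0.
int check_copy_sizes_buggy(const ImageCtx &src, const ImageCtx &dest) {
  uint64_t src_size  = src.get_image_size(src.snap_id);
  uint64_t dest_size = dest.get_image_size(src.snap_id);  // wrong image's snap id
  if (src_size != dest_size)
    return -EINVAL;                             // "rbd: copy failed: (22) Invalid argument"
  return 0;
}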
#3 Updated by Dan Mick almost 11 years ago
Ah, I was just assuming the copy code was old; it was actually changed in
62420599006691d70a1634223bd0d1a3dc10e9ee, which added the new size check,
so this is just a recently introduced typo. Mystery solved.
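
For comparison, a sketch of the corrected check implied by the associated commit message ("validate copy size against proper snapshot id"), reusing the hypothetical ImageCtx type from the sketch above: the destination's size is looked up at the destination's own snapshot id, so a plain destination image of the right size passes the check.

// Corrected check: validate the destination size at its *own* snapshot id
// (typically 0 here, i.e. the image head), not at the source's snapshot id.
int check_copy_sizes_fixed(const ImageCtx &src, const ImageCtx &dest) {
  uint64_t src_size  = src.get_image_size(src.snap_id);
  uint64_t dest_size = dest.get_image_size(dest.snap_id);
  if (src_size != dest_size)
    return -EINVAL;
  return 0;
}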
#4 Updated by Dan Mick almost 11 years ago
- Status changed from 7 to Resolved