Ceph CephFS - Bug #6609: teuthology rsync workunit failure
https://tracker.ceph.com/issues/6609?journal_id=28785
2013-10-21T22:22:09Z Greg Farnum <gfarnum@redhat.com>
<ul><li><strong>Category</strong> deleted (<del><i>53</i></del>)</li></ul><p>/a/teuthology-2013-10-20_02:13:31-kcephfs-next-testing-basic-plana/60994<br />/a/teuthology-2013-10-20_02:13:10-fs-next-testing-basic-plana/60899/<br />This second one is all userspace and so has plenty of logs available.</p>

https://tracker.ceph.com/issues/6609?journal_id=28791
2013-10-22T04:17:32Z Zheng Yan <ukernel@gmail.com>
<ul></ul><p>Both tests only sent the directory share/doc (but didn't send the files in share/doc) when rsync was executed the second time. Sounds like a timestamp issue; no idea how this can happen.</p>

https://tracker.ceph.com/issues/6609?journal_id=28835
2013-10-23T15:18:24Z Greg Farnum <gfarnum@redhat.com>
<ul></ul><p>I didn't look at the details much (even to figure out what the file transfer issues were). What kind of timestamp issue could have caused it to not sync the files appropriately?</p>

https://tracker.ceph.com/issues/6609?journal_id=28843
2013-10-23T18:49:40Z Zheng Yan <ukernel@gmail.com>
<ul></ul><p>The files were synced appropriately; rsync only synced the timestamp or mode of directory share/doc/ when it was executed the second time. Maybe someone else modified share/doc/ while rsync was running. I think we should re-run the test and check how reliably the issue can be reproduced.</p>

https://tracker.ceph.com/issues/6609?journal_id=29315
2013-11-08T16:14:29Z Greg Farnum <gfarnum@redhat.com>
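<p>Zheng's explanation above matches rsync's default "quick check": an entry is skipped when its size and modification time both match the destination, so a directory whose own mtime drifted gets a metadata-only update even though no file contents are re-sent. A minimal sketch of that comparison (a hypothetical helper, not rsync's actual code; the <code>modify_window</code> parameter mirrors rsync's <code>--modify-window</code> tolerance, which matters when the two sides' clocks disagree):</p>

```python
def needs_transfer(src_size, src_mtime, dst_size, dst_mtime, modify_window=0):
    """Sketch of rsync's default quick check: re-send an entry when the
    sizes differ or the mtimes differ by more than modify_window seconds.
    (Illustrative only; real rsync compares much more state.)"""
    if src_size != dst_size:
        return True
    return abs(src_mtime - dst_mtime) > modify_window


# Identical size and mtime: the entry is skipped on a second run.
assert needs_transfer(4096, 1382392929, 4096, 1382392929) is False

# A 2-second mtime skew (e.g. the server refused the client's timestamp)
# forces an update; a modify_window of 2 would tolerate it.
assert needs_transfer(4096, 1382392931, 4096, 1382392929) is True
assert needs_transfer(4096, 1382392931, 4096, 1382392929, modify_window=2) is False
```

<p>Running with <code>rsync -i</code> would show such a metadata-only directory change as an itemized line along the lines of <code>.d..t......</code> (directory, timestamp differs), which could confirm or rule out this hypothesis on a re-run.</p>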
<ul></ul><p>/a/teuthology-2013-10-31_23:01:45-kcephfs-next-testing-basic-plana/78406</p>
<p>I haven't dug into what this one is doing, but if it's because the timestamps are different, could this just be a new incarnation of the issue where the client sets a timestamp and the server doesn't take it because their clocks aren't synchronized?</p>

https://tracker.ceph.com/issues/6609?journal_id=29324
2013-11-10T16:49:12Z Zheng Yan <ukernel@gmail.com>
<ul></ul><blockquote>
<p>2013-11-01T13:37:12.841 DEBUG:teuthology.orchestra.run:Running [10.214.133.35]: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'<br />2013-11-01T13:37:12.994 INFO:teuthology.task.workunit:Stopping misc on client.0...<br />2013-11-01T13:37:12.994 DEBUG:teuthology.orchestra.run:Running [10.214.133.35]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.0'<br />2013-11-01T13:37:13.009 DEBUG:teuthology.parallel:result is None<br />2013-11-01T13:37:13.010 DEBUG:teuthology.orchestra.run:Running [10.214.133.35]: 'rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0'<br />2013-11-01T13:37:13.196 INFO:teuthology.orchestra.run.err:[10.214.133.35]: rm: cannot remove `/home/ubuntu/cephtest/mnt.0/client.0': Permission denied<br />2013-11-01T13:37:13.196 ERROR:teuthology.task.workunit:Caught an execption deleting dir /home/ubuntu/cephtest/mnt.0/client.0<br />Traceback (most recent call last):<br />File "/home/teuthworker/teuthology-next/teuthology/task/workunit.py", line 132, in _delete_dir<br />client,<br />File "/home/teuthworker/teuthology-next/teuthology/orchestra/remote.py", line 47, in run<br />r = self._runner(client=self.ssh, **kwargs)<br />File "/home/teuthworker/teuthology-next/teuthology/orchestra/run.py", line 271, in run<br />r.exitstatus = _check_status(r.exitstatus)<br />File "/home/teuthworker/teuthology-next/teuthology/orchestra/run.py", line 267, in _check_status<br />raise CommandFailedError(command=r.command, exitstatus=status, node=host)<br />CommandFailedError: Command failed on 10.214.133.35 with status 1: 'rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0'<br />2013-11-01T13:37:13.197 DEBUG:teuthology.orchestra.run:Running [10.214.133.35]: 'rmdir -- /home/ubuntu/cephtest/mnt.0'<br />2013-11-01T13:37:13.204 INFO:teuthology.orchestra.run.err:[10.214.133.35]: rmdir: failed to remove `/home/ubuntu/cephtest/mnt.0': Device or resource busy<br />2013-11-01T13:37:13.204 
ERROR:teuthology.task.workunit:Caught an execption deleting dir /home/ubuntu/cephtest/mnt.0<br />Traceback (most recent call last):<br />File "/home/teuthworker/teuthology-next/teuthology/task/workunit.py", line 144, in _delete_dir<br />mnt,<br />File "/home/teuthworker/teuthology-next/teuthology/orchestra/remote.py", line 47, in run<br />r = self._runner(client=self.ssh, **kwargs)<br />File "/home/teuthworker/teuthology-next/teuthology/orchestra/run.py", line 271, in run<br />r.exitstatus = _check_status(r.exitstatus)<br />File "/home/teuthworker/teuthology-next/teuthology/orchestra/run.py", line 267, in _check_status<br />raise CommandFailedError(command=r.command, exitstatus=status, node=host)<br />CommandFailedError: Command failed on 10.214.133.35 with status 1: 'rmdir -- /home/ubuntu/cephtest/mnt.0'</p>
</blockquote>
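<p>The teardown in the quoted log aborts on the first failure: <code>rm -rf</code> hits Permission denied on client.0, and then <code>rmdir</code> fails because mnt.0 is still mounted ("Device or resource busy"). A hedged sketch of a more forgiving delete step, which collects per-entry errors instead of raising on the first one (a hypothetical helper, not teuthology's actual <code>_delete_dir</code>):</p>

```python
import shutil


def tolerant_rmtree(path):
    """Remove `path` recursively, collecting (entry, exception) pairs for
    anything that could not be deleted instead of aborting the cleanup.
    Returns the list of failures (empty on full success)."""
    failures = []

    def record(func, entry, exc_info):
        # shutil.rmtree calls this instead of raising; keep going.
        failures.append((entry, exc_info[1]))

    shutil.rmtree(path, onerror=record)
    return failures
```

<p>The <code>rmdir</code> "Device or resource busy" error is a separate problem: mnt.0 is a live mount point, so it has to be unmounted first (with a lazy <code>umount -l</code> as a last resort) before the directory can be removed.</p>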
<p>The issue in 78406 is not the same as the previous ones; it looks more like a test script/environment issue.</p>

https://tracker.ceph.com/issues/6609?journal_id=32618
2014-02-27T13:23:58Z Greg Farnum <gfarnum@redhat.com>
<ul><li><strong>Priority</strong> changed from <i>Normal</i> to <i>High</i></li></ul><p>I haven't noticed this in a while, but I'm upgrading the priority since it was a failure across both clients.</p>

https://tracker.ceph.com/issues/6609?journal_id=38124
2014-07-15T06:55:43Z Sage Weil <sage@newdream.net>
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Can't reproduce</i></li></ul>