Ceph CephFS - Bug #44384: qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)
https://tracker.ceph.com/issues/44384?journal_id=163643 (2020-04-21T11:58:49Z, Jeff Layton <jlayton@redhat.com>)
<p>Sorry for the long delay on this. I haven't heard of this happening since the original occurrence, but I dug out the log and had a closer look. This test has one client do a write and expects that write to hang because the other client, which is down, does not give up its caps:</p>
<pre>
2020-02-29T11:30:43.006 DEBUG:tasks.cephfs.mount:File background_file became visible from 1 after 0s
2020-02-29T11:30:43.035 INFO:teuthology.orchestra.console:Performing hard reset of smithi196
2020-02-29T11:30:43.035 DEBUG:teuthology.orchestra.console:pexpect command: ipmitool -H smithi196.ipmi.sepia.ceph.com -I lanplus -U inktank -P ApGNXcA7 power reset
2020-02-29T11:30:43.096 INFO:teuthology.orchestra.console:Hard reset for smithi196 completed
2020-02-29T11:30:48.198 INFO:teuthology.orchestra.run.smithi160:> sudo adjust-ulimits daemon-helper kill python3 -c '
2020-02-29T11:30:48.199 INFO:teuthology.orchestra.run.smithi160:> import os
2020-02-29T11:30:48.199 INFO:teuthology.orchestra.run.smithi160:> import time
2020-02-29T11:30:48.199 INFO:teuthology.orchestra.run.smithi160:>
2020-02-29T11:30:48.199 INFO:teuthology.orchestra.run.smithi160:> fd = os.open("/home/ubuntu/cephtest/mnt.1/background_file", os.O_RDWR | os.O_CREAT, 0o644)
2020-02-29T11:30:48.199 INFO:teuthology.orchestra.run.smithi160:> try:
2020-02-29T11:30:48.200 INFO:teuthology.orchestra.run.smithi160:> while True:
2020-02-29T11:30:48.200 INFO:teuthology.orchestra.run.smithi160:> os.write(fd, b'"'"'content'"'"')
2020-02-29T11:30:48.200 INFO:teuthology.orchestra.run.smithi160:> time.sleep(1)
2020-02-29T11:30:48.200 INFO:teuthology.orchestra.run.smithi160:> if not False:
2020-02-29T11:30:48.200 INFO:teuthology.orchestra.run.smithi160:> break
2020-02-29T11:30:48.200 INFO:teuthology.orchestra.run.smithi160:> except IOError as e:
2020-02-29T11:30:48.201 INFO:teuthology.orchestra.run.smithi160:> pass
2020-02-29T11:30:48.201 INFO:teuthology.orchestra.run.smithi160:> os.close(fd)
2020-02-29T11:30:48.201 INFO:teuthology.orchestra.run.smithi160:> '
2020-02-29T11:31:06.852 INFO:teuthology.orchestra.run.smithi130:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-02-29T11:31:06.856 INFO:teuthology.orchestra.run.smithi132:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-02-29T11:31:06.861 INFO:teuthology.orchestra.run.smithi160:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-02-29T11:31:06.866 INFO:teuthology.orchestra.run.smithi162:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-02-29T11:31:06.874 INFO:teuthology.orchestra.run.smithi183:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-02-29T11:31:06.877 INFO:teuthology.orchestra.run.smithi196:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-02-29T11:31:23.208 INFO:teuthology.misc:Re-opening connections...
2020-02-29T11:31:23.208 INFO:teuthology.misc:trying to connect to ubuntu@smithi196.front.sepia.ceph.com
2020-02-29T11:31:23.210 INFO:teuthology.orchestra.remote:Trying to reconnect to host
2020-02-29T11:31:23.210 DEBUG:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'smithi196.front.sepia.ceph.com', 'timeout': 60}
2020-02-29T11:31:23.240 DEBUG:tasks.ceph:Missed logrotate, EOFError
2020-02-29T11:31:53.241 INFO:teuthology.orchestra.run.smithi130:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-02-29T11:31:53.244 INFO:teuthology.orchestra.run.smithi132:> sudo logrotate /etc/logrotate.d/ceph-test.conf
</pre>
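<p>The background-write loop visible in the log above can be reproduced standalone. The following is a minimal sketch, not the actual teuthology helper: the function name, the single-iteration flag, and the result format are illustrative. It extends the loop to record exactly what happened to each write (byte count or errno) instead of silently swallowing the error, which is the kind of instrumentation the test currently lacks.</p>

```python
import errno
import os
import tempfile


def background_write(path, iterations=1):
    """Hypothetical sketch: write repeatedly to path, recording the
    outcome of each write as ("ok", bytes_written) or ("err", errno name)
    so a test log can show whether a failure was -EIO or something else."""
    results = []
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        for _ in range(iterations):
            try:
                written = os.write(fd, b"content")
                results.append(("ok", written))
            except OSError as e:
                # Record the precise errno rather than discarding it, so
                # the log answers "did it fail, and with what error?"
                results.append(("err", errno.errorcode.get(e.errno, str(e.errno))))
    finally:
        os.close(fd)
    return results


path = os.path.join(tempfile.mkdtemp(), "background_file")
print(background_write(path, iterations=2))
```

<p>On a healthy local filesystem each write succeeds and returns 7 (the length of <code>b"content"</code>); in the eviction scenario the interesting output would be the recorded errno values.</p>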
<p>Unfortunately, the log doesn't say what happened during the write attempt. Did it succeed? Did it fail with a hard error other than -EIO? We may need to doctor this test to log more information about what happened with the write.</p>

https://tracker.ceph.com/issues/44384?journal_id=164996 (2020-05-05T17:16:46Z, Patrick Donnelly <pdonnell@redhat.com>)
<ul><li><strong>Target version</strong> changed from <i>v15.0.0</i> to <i>v16.0.0</i></li></ul>

https://tracker.ceph.com/issues/44384?journal_id=182720 (2021-01-15T22:44:17Z, Patrick Donnelly <pdonnell@redhat.com>)
<ul><li><strong>Target version</strong> changed from <i>v16.0.0</i> to <i>v17.0.0</i></li><li><strong>Backport</strong> set to <i>pacific,octopus,nautilus</i></li></ul>

https://tracker.ceph.com/issues/44384?journal_id=192098 (2021-04-20T12:59:34Z, Jeff Layton <jlayton@redhat.com>)
<ul><li><strong>Priority</strong> changed from <i>Urgent</i> to <i>Normal</i></li></ul><p>Lowering priority to Normal. Patrick, have there been any more occurrences of this?</p>

https://tracker.ceph.com/issues/44384?journal_id=192105 (2021-04-20T17:56:37Z, Patrick Donnelly <pdonnell@redhat.com>)
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Can't reproduce</i></li></ul><p>Jeff Layton wrote:</p>
<blockquote>
<p>Lowering priority to Normal. Patrick, have there been any more occurrences of this?</p>
</blockquote>
<p>I have not seen it recently. We'll close it for now and I'll reopen if I see it again.</p>