Bug #45100
Closed
qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
Description
Hit this in the kcephfs suite of Yuri's nautilus run: http://qa-proxy.ceph.com/teuthology/yuriw-2020-04-15_00:09:40-kcephfs-wip-yuri-testing-2020-04-14-1606-nautilus-distro-basic-smithi/4954403/teuthology.log
2020-04-15T07:47:45.852 INFO:tasks.cephfs_test_runner:======================================================================
2020-04-15T07:47:45.852 INFO:tasks.cephfs_test_runner:FAIL: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
2020-04-15T07:47:45.852 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-04-15T07:47:45.853 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-04-15T07:47:45.853 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-yuri-testing-2020-04-14-1606-nautilus/qa/tasks/cephfs/test_damage.py", line 446, in test_damaged_dentry
2020-04-15T07:47:45.853 INFO:tasks.cephfs_test_runner:    self.assertEqual(e.exitstatus, errno.ENOENT)
2020-04-15T07:47:45.853 INFO:tasks.cephfs_test_runner:AssertionError: 5 != 2
The test hits EIO but expects ENOENT. The kernel client version is 5.7.0-rc1-ceph-gd78a6438525e.
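For reference, the raw values in the assertion failure above ("5 != 2") decode to the two errnos in question; a minimal check using Python's standard errno module (the same module the test imports):

```python
import errno

# The test asserts the command's exit status equals ENOENT (2),
# but the kernel client actually returned EIO (5).
print(errno.errorcode[2])  # name of the expected errno
print(errno.errorcode[5])  # name of the observed errno
```

So `AssertionError: 5 != 2` means the stat on the damaged dentry failed with EIO where the test expected ENOENT.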
Updated by Greg Farnum about 4 years ago
Did you look to see whether the test differs from master, or whether we backported something that changed behavior? Not sure which one this is, but ENOENT is a very definitive thing, whereas if we've detected damage, EIO is often the right response...
Updated by Patrick Donnelly almost 4 years ago
Octopus too: /ceph/teuthology-archive/yuriw-2020-05-09_21:30:44-kcephfs-wip-yuri-octopus_15.2.2_RC0-distro-basic-smithi/5040718/teuthology.log
It's a testing branch issue apparently.
Updated by Patrick Donnelly almost 4 years ago
and master: /ceph/teuthology-archive/pdonnell-2020-06-12_09:37:27-kcephfs-wip-pdonnell-testing-20200612.063208-distro-basic-smithi/5141859/teuthology.log
Updated by Patrick Donnelly over 3 years ago
/ceph/teuthology-archive/pdonnell-2020-10-13_22:14:10-kcephfs-wip-pdonnell-testing-20201013.174240-distro-basic-smithi/5523850/teuthology.log
Updated by Patrick Donnelly over 3 years ago
- Subject changed from nautilus: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage) to qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- Assignee set to Xiubo Li
- Priority changed from Normal to Urgent
- Target version set to v16.0.0
- Component(FS) kceph added
Several more occurrences; only on the testing branch of the kernel.
Updated by Xiubo Li over 3 years ago
For the kclient:
When we first run:
touch /mnt/cephfs/subdir/file_to_be_damaged
it drops the Fs cap, and the MDS then grants it back, which clears the complete flag for "subdir/". But this does not affect the return value of the subsequent `stat /mnt/cephfs/subdir/file_to_be_damaged` commands, because the VFS layer calls `dput(dentry)` on the dentry once atomic_open() fails, and atomic_open() is what runs during the touch above. So the next time we stat, the path misses in the dcache and triggers a fresh lookup from the MDS, which also fails with -EIO.
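The sequence described above can be sketched as a toy model. This is purely illustrative Python (the real logic lives in the kernel's fs/ceph code); the class and method names are hypothetical, and the dcache is reduced to a dict:

```python
import errno

class ToyKClient:
    """Toy model of the kclient behavior described above, not real kernel code."""

    def __init__(self):
        # Simplified dentry cache: path -> cached dentry object.
        self.dcache = {"subdir/file_to_be_damaged": object()}

    def atomic_open(self, path):
        # touch on the damaged dentry: the open fails with -EIO, and the
        # VFS layer dput()s the dentry, evicting it from the dcache.
        self.dcache.pop(path, None)
        return -errno.EIO

    def stat(self, path):
        if path in self.dcache:
            # A dcache hit could answer from the cached (stale) dentry.
            return 0
        # dcache miss: fresh lookup from the MDS, which also fails with
        # -EIO because the dentry is damaged on disk.
        return -errno.EIO

client = ToyKClient()
rc_touch = client.atomic_open("subdir/file_to_be_damaged")
rc_stat = client.stat("subdir/file_to_be_damaged")
print(rc_touch, rc_stat)  # both -EIO, not the -ENOENT the test expects
```

This is why the test sees exit status 5 (EIO) rather than 2 (ENOENT): after the failed touch evicts the dentry, every later stat goes back to the MDS and hits the damage again.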
Updated by Xiubo Li over 3 years ago
- Status changed from In Progress to Fix Under Review
Updated by Patrick Donnelly over 3 years ago
- Status changed from Fix Under Review to Resolved
Updated by Ramana Raja about 3 years ago
See this in Octopus testing,
https://pulpito.ceph.com/yuriw-2021-02-09_00:31:50-kcephfs-wip-yuri2-testing-2021-02-08-1048-octopus-testing-basic-gibba/5868938/
Xiubo, should we backport this to octopus as well?
Updated by Patrick Donnelly about 3 years ago
- Status changed from Resolved to Pending Backport
- Backport set to octopus
Updated by Backport Bot about 3 years ago
- Copied to Backport #49347: octopus: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage) added
Updated by Loïc Dachary almost 3 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".