Bug #43601 (closed)
qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
% Done: 0%
Source: Q/A
Regression: No
Severity: 3 - minor
Component(FS): MDS, qa-suite
Description
2020-01-13T03:41:24.682 INFO:teuthology.orchestra.run:Running command with timeout 900
2020-01-13T03:41:24.682 INFO:teuthology.orchestra.run.smithi047:> sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections
2020-01-13T03:41:24.708 DEBUG:teuthology.orchestra.run:got remote process result: 32
2020-01-13T03:41:24.726 INFO:teuthology.orchestra.run.smithi047.stderr:mount: /sys/fs/fuse/connections: /sys/fs/fuse/connections already mounted or mount point busy.
2020-01-13T03:41:24.726 INFO:teuthology.orchestra.run:Running command with timeout 900
2020-01-13T03:41:24.726 INFO:teuthology.orchestra.run.smithi047:> ls /sys/fs/fuse/connections
2020-01-13T03:41:24.753 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi047.stderr:2020-01-13T03:41:24.757+0000 7f0e1907c0c0 -1 init, newargv = 0x55ddccd3e500 newargc=7ceph-fuse[25950
2020-01-13T03:41:24.754 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi047.stderr:]: starting ceph client
2020-01-13T03:41:24.763 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi047.stderr:ceph-fuse[25950]: ceph mount failed with (30) Read-only file system
2020-01-13T03:41:24.932 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi047.stderr:daemon-helper: command failed with exit status 1
2020-01-13T03:41:25.864 INFO:teuthology.orchestra.run:Running command with timeout 900
2020-01-13T03:41:25.864 INFO:teuthology.orchestra.run.smithi047:> sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections
2020-01-13T03:41:25.882 DEBUG:teuthology.orchestra.run:got remote process result: 32
2020-01-13T03:41:25.883 INFO:teuthology.orchestra.run.smithi047.stderr:mount: /sys/fs/fuse/connections: /sys/fs/fuse/connections already mounted or mount point busy.
2020-01-13T03:41:25.883 INFO:teuthology.orchestra.run:Running command with timeout 900
2020-01-13T03:41:25.884 INFO:teuthology.orchestra.run.smithi047:> ls /sys/fs/fuse/connections
2020-01-13T03:41:25.935 DEBUG:teuthology.orchestra.run:got remote process result: 1
2020-01-13T03:41:25.936 INFO:tasks.cephfs_test_runner:test_object_deletion (tasks.cephfs.test_damage.TestDamage) ... ERROR
From: /ceph/teuthology-archive/pdonnell-2020-01-13_01:49:14-fs-wip-pdonnell-testing-20200112.224135-distro-basic-smithi/4661010/teuthology.log
Updated by Patrick Donnelly over 4 years ago
- Status changed from New to Triaged
- Assignee set to Zheng Yan
Looks like it's just that the MDS is responding to a getattr request on the root inode with EROFS:
2020-01-13T03:41:24.763+0000 7f1951812700  4 mds.0.server handle_client_request client_request(client.6179:1 getattr pAsLsXsFs #0x1 2020-01-13T03:41:24.763000+0000 caller_uid=0, caller_gid=0{}) v4
2020-01-13T03:41:24.763+0000 7f1951812700 20 mds.0.137 get_session have 0x559e0901a880 client.6179 172.21.15.47:0/3172485399 state open
2020-01-13T03:41:24.763+0000 7f1951812700 15 mds.0.server oldest_client_tid=1
2020-01-13T03:41:24.763+0000 7f1951812700  7 mds.0.cache request_start request(client.6179:1 nref=2 cr=0x559e08342100)
2020-01-13T03:41:24.763+0000 7f1951812700  7 mds.0.server dispatch_client_request client_request(client.6179:1 getattr pAsLsXsFs #0x1 2020-01-13T03:41:24.763000+0000 caller_uid=0, caller_gid=0{}) v4
2020-01-13T03:41:24.763+0000 7f1951812700 10 mds.0.server read-only FS
2020-01-13T03:41:24.763+0000 7f1951812700  7 mds.0.server reply_client_request -30 ((30) Read-only file system) client_request(client.6179:1 getattr pAsLsXsFs #0x1 2020-01-13T03:41:24.763000+0000 caller_uid=0, caller_gid=0{}) v4
The change in behavior was caused by:
commit 60f03a489e5dfa00835ebf46cea812d4b13ef0f7 (HEAD)
Author: Yan, Zheng <zyan@redhat.com>
Date:   Wed Aug 14 11:22:35 2019 +0800

    mds: let Locker::acquire_locks()'s caller choose locking order

    This patch makes Locker::acquire_locks() lock objects in the order
    specified by its caller. Locker::acquire_locks() only rearranges locks
    in the same object (relieve of remembering the order). This patch is
    preparation for 'lock object in top-down order'.

    Besides, this patch allows MDRequest to lock objects step by step. For
    example: call Locker::acquire_locks() to lock a dentry. After the
    dentry is locked, call Locker::acquire_locks() to lock inode that is
    linked by the dentry.

    Locking object step by step introduces a problem. MDRequest may needs
    to auth pin extra objects after taking same locks. If any object can
    not be auth pinned, MDRequest needs to drop all locks before going to
    wait. For slave auth pin request, this patch make slave mds send a
    notification back to master mds if the auth pin request is blocked.
    The master mds drops locks when receiving the notification.

    Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
---
 src/mds/Server.cc | 158 +++++++++++++++++++++++++++---------------------------
 1 file changed, 83 insertions(+), 75 deletions(-)

diff --git a/src/mds/Server.cc b/src/mds/Server.cc
index f81f4288a49..e05344ccf45 100644
--- a/src/mds/Server.cc
+++ b/src/mds/Server.cc
@@ -2410,17 +2410,11 @@ void Server::dispatch_client_request(MDRequestRef& mdr)
   dout(7) << "dispatch_client_request " << *req << dendl;
 
-  if (req->may_write()) {
-    if (mdcache->is_readonly()) {
-      dout(10) << " read-only FS" << dendl;
-      respond_to_request(mdr, -EROFS);
-      return;
-    }
-    if (mdr->has_more() && mdr->more()->slave_error) {
-      dout(10) << " got error from slaves" << dendl;
-      respond_to_request(mdr, mdr->more()->slave_error);
-      return;
-    }
+  if (mdcache->is_readonly() ||
+      (mdr->has_more() && mdr->more()->slave_error == -EROFS)) {
+    dout(10) << " read-only FS" << dendl;
+    respond_to_request(mdr, -EROFS);
+    return;
   }
Updated by Zheng Yan over 4 years ago
- Status changed from Triaged to Fix Under Review
- Pull request ID set to 32679
Updated by Zheng Yan over 4 years ago
- Pull request ID changed from 32679 to 32676
Updated by Patrick Donnelly over 4 years ago
- Status changed from Fix Under Review to Resolved
- Component(FS) MDS added