Feature #118

kclient: clean pages when throwing out dirty metadata on session teardown

Added by Sage Weil almost 14 years ago. Updated over 4 years ago.

Status:
Rejected
Priority:
Low
Assignee:
Category:
Code Hygiene
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Reviewed:
Affected Versions:
Component(FS):
kceph
Labels (FS):
Pull request ID:

Description

see 'ceph: throw out dirty caps metadata, data on session teardown'

Actions #1

Updated by Sage Weil over 13 years ago

  • Target version changed from v2.6.35 to v2.6.36
Actions #2

Updated by Sage Weil over 13 years ago

  • Target version changed from v2.6.36 to v2.6.37
Actions #3

Updated by Sage Weil over 13 years ago

  • Target version deleted (v2.6.37)
Actions #4

Updated by Sage Weil over 12 years ago

  • field_position deleted (416)
  • field_position set to 14
Actions #5

Updated by Sage Weil over 12 years ago

  • field_position deleted (36)
  • field_position set to 35
Actions #6

Updated by Sage Weil about 12 years ago

  • Priority changed from Normal to Low
Actions #7

Updated by Ian Colle about 11 years ago

  • Tracker changed from Bug to Feature
  • Project changed from Linux kernel client to CephFS
Actions #8

Updated by Greg Farnum about 10 years ago

I can't find the referenced ticket anywhere. Anybody know what this is supposed to be and if it still applies? (I think it's actually about the kclient, but not sure.)

Actions #9

Updated by Sage Weil almost 10 years ago

  • Subject changed from clean pages when throwing out dirty metadata on session teardown to kclient: clean pages when throwing out dirty metadata on session teardown
Actions #10

Updated by Greg Farnum almost 8 years ago

  • Category set to 53
  • Assignee set to Zheng Yan

Zheng, do you have any idea what this is about?

Actions #11

Updated by Greg Farnum almost 8 years ago

  • Component(FS) kceph added
Actions #12

Updated by Greg Farnum almost 8 years ago

  • Category changed from 53 to Code Hygiene
Actions #13

Updated by Zheng Yan over 6 years ago

I think Sage means cleaning the dirty pages at the same time as cleaning the dirty metadata. This hasn't been implemented yet.

Actions #14

Updated by Patrick Donnelly about 5 years ago

  • Assignee deleted (Zheng Yan)
Actions #15

Updated by Patrick Donnelly over 4 years ago

  • Assignee set to Xiubo Li
Actions #16

Updated by Xiubo Li over 4 years ago

  • Status changed from New to In Progress
Actions #17

Updated by Xiubo Li over 4 years ago

If I have understood this correctly, then based on the current code and my testing we have already implemented it.

There are 2 cases where the MDS sessions are closed:

Case 1) unmount:

In this case, the VFS will help us clean the dirty pages and then sync all the OSD requests to the OSDs:

ceph_kill_sb() -->
ceph_mdsc_pre_umount() -->
ceph_flush_dirty_caps(): will do the dirty caps flushing
generic_shutdown_super() -->
sync_filesystem() -->
sync_inodes_sb()/writeback_inodes_sb(): will help submit the dirty pages as OSD requests
sb->s_op->sync_fs() -->
ceph_sync_fs() -->
ceph_osdc_sync(): will sync the OSD requests to the OSDs
sop->put_super() -->
ceph_put_super() -->
ceph_mdsc_close_sessions() : will close the sessions

Case 2) in handle_session(), when (op is CEPH_SESSION_OPEN && mdsc->stopping):
This case is the same as Case 1: it is also the unmount or failed-mount path. And since the corresponding session was never even opened, there should not be any dirty pages, so there is no need to do the cleaning either.
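
As a rough mental model of the Case 1 ordering, here is a simplified sketch in kernel-style C (illustrative only; it paraphrases the call chain above rather than quoting the actual fs/ceph code, and the function name ceph_kill_sb_sketch() is made up for the example):

/* Simplified sketch of the unmount teardown ordering described above.
 * The callee names are taken from the call chain; the body is a paraphrase.
 */
static void ceph_kill_sb_sketch(struct super_block *sb)
{
        struct ceph_fs_client *fsc = ceph_sb_to_client(sb);

        /* 1. flush dirty caps (metadata) to the MDS */
        ceph_mdsc_pre_umount(fsc->mdsc);

        /* 2. generic_shutdown_super() writes back the dirty pages
         *    (sync_filesystem() -> sync_inodes_sb()), turning them into
         *    OSD requests, then calls ->sync_fs(), where ceph_osdc_sync()
         *    waits for those requests to commit on the OSDs.
         *
         * 3. Only after data and metadata are safe does ->put_super()
         *    run, and ceph_mdsc_close_sessions() closes the MDS sessions.
         */
        generic_shutdown_super(sb);
}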

Actions #18

Updated by Jeff Layton over 4 years ago

I'm not sure what this tracker is really asking for, tbqh.

Hmm...now that I look, I do see this:

commit 6c99f2545dbb9e53afe0d1d037c51ab04ef1ff4e
Author: Sage Weil <>
Date: Mon May 10 16:12:25 2010 -0700

ceph: throw out dirty caps metadata, data on session teardown

The remove_session_caps() helper is called when an MDS closes out our
session (either normally, or as a result of a failed reconnect), and when
we tear down state for umount. If we remove the last cap, and there are
no cap migrations in progress, then there is little hope of us flushing
out that data to the mds (without heroic efforts to reconnect and flush).

So, to avoid leaving inodes pinned (due to dirty state) and crashing after
umount, throw out dirty caps state and unpin the inodes. Print a warning
to the console so we know something was lost.

NOTE: Although we drop wrbuffer refs, we don't actually mark pages clean;
maybe a truncate should be queued?

Signed-off-by: Sage Weil <>

Maybe implementing what's in that last NOTE is what was meant? I'm not sure we're really gaining anything by doing this though. Is anything actually broken here? What needs to be fixed?
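
For reference, a rough sketch of what implementing that NOTE might look like (illustrative only: the helper name and hook point below are assumptions on my part, not an actual patch against fs/ceph): after throwing out the dirty caps state, the client could also drop the dirty pages themselves, e.g. by queueing an async page-cache invalidation.

/* Illustrative sketch only -- not real fs/ceph code. Assume we are in
 * the path that throws out dirty caps when a session is torn down
 * (the remove_session_caps() helper from the commit above), and that
 * the caller handles the usual locking around the cap state.
 */
static void drop_dirty_state_and_pages(struct inode *inode)
{
        /* existing behaviour: warn and throw out the dirty caps metadata */
        pr_warn("dropping dirty caps state for inode %llx\n",
                ceph_ino(inode));

        /* possible addition suggested by the NOTE: also invalidate the
         * dirty pages so they are not left pinned with no session to
         * ever write them back through.
         */
        ceph_queue_invalidate(inode);   /* async page cache invalidation */
        /* or, synchronously: truncate_pagecache(inode, 0); */
}

Queueing the invalidation asynchronously would avoid doing page-cache work under the locks the cap-removal path holds, but whether that buys anything in practice is exactly the question above.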

Actions #19

Updated by Xiubo Li over 4 years ago

Case 1:
In the unmount case, the VFS will do this for us.

Case 2:
In the case where the session reconnect is rejected because the client is blacklisted, we have already implemented the dirty pages' invalidation.

Case 3:
In another case, when max_mds is decreased on the cluster side and the kclient releases the extra sessions, as discussed with Yan Zheng there is no need to invalidate the dirty pages.

Case 4:
In other cases, such as when the session is being _OPENed and the _OPEN reply arrives but mdsc->stopping is set, the session will be torn down too; in this case there is no need to care about the dirty pages.

So currently there is no further work to do.
Thanks.

Actions #20

Updated by Jeff Layton over 4 years ago

  • Status changed from In Progress to Rejected

Excellent. In that case, let's go ahead and close this out.
