Feature #44044

qa: add network namespaces to kernel/ceph-fuse mounts for partition testing

Added by Patrick Donnelly about 2 months ago. Updated 21 days ago.

Status: Fix Under Review
Target version: v16.0.0
Component (FS): ceph-fuse, kceph, qa-suite
Labels (FS): qa
Pull request ID: 33576


In teuthology, we want to shut down the kernel mount without any kind of cleanup (analogous to sending SIGKILL to ceph-fuse). We currently do this by putting the kernel client on a separate node and using IPMI to hard-reset the machine. This is not optimal because it requires a separate node for each kernel client.

It'd be better if we had a way to shut down the CephFS mount without any cleanup. That would let us put all the kernel clients on the same node and selectively "kill" them.

Obviously, this shouldn't actually cause an unmount: applications may still be using the mount (open fds, or a cwd on the mount). All operations should return ESHUTDOWN (or similar), and `umount [-f]` should work normally.


#1 Updated by Xiubo Li about 2 months ago

  • Status changed from New to In Progress

#2 Updated by Xiubo Li about 2 months ago

Will add a mount option, "suspend=<on|off>", to suspend the specified mount point.

Currently remount is not working; that needs to be fixed first.

#3 Updated by Xiubo Li about 1 month ago

Following Jeff's idea and his comments on the first version, I implemented a "halt" mount option, which tries to close all the monc/osdc/mdsc connections without doing any cleanup beforehand. However, the socket close path still sends a FIN to the peer, so this cannot 100% simulate pulling the cable or hard-resetting the node.

Digging into the iptables/netfilter code, we could implement the iptables DROP rules in kceph directly, if there are no potential problems with that, but it would bypass the userspace iptables tooling.
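For comparison, the userspace version of this idea would be ordinary iptables DROP rules covering the cluster's ports. A minimal sketch (the ports shown are the stock Ceph defaults — 3300/6789 for monitors, 6800-7300 for OSDs/MDS — and the `RUN=echo` wrapper is purely illustrative, turning the script into a dry run):

```shell
#!/bin/sh
# Sketch: silently drop all traffic to and from the Ceph cluster, so the
# client sees a dead network rather than connection resets.
# With RUN=echo (the default here) the commands are printed, not executed.
RUN="${RUN:-echo}"

drop_ceph_traffic() {
    # Default Ceph ports: 3300/6789 (monitors), 6800-7300 (OSDs/MDS).
    $RUN iptables -A OUTPUT -p tcp -m multiport --dports 3300,6789,6800:7300 -j DROP
    $RUN iptables -A INPUT  -p tcp -m multiport --sports 3300,6789,6800:7300 -j DROP
}

drop_ceph_traffic
```

Unlike closing the sockets, DROP sends nothing back to the peer, which is exactly the "pulled cable" behavior the halt option could not reproduce.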

#4 Updated by Xiubo Li about 1 month ago

This is for ceph-fuse:

It uses a separate network namespace to isolate the fuse client from the rest of the OS. We can then simply bring down the veth interface of the network namespace, which silently DROPs all socket packets from the cluster without sending any response.

This covers only the userspace fuse client; next I will try the same approach for the kernel client.
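The namespace setup described here can be sketched with plain `ip` commands. The `ceph-brx` bridge name comes from the helper script below; the netns and veth names are illustrative, and `RUN=echo` keeps this a dry run:

```shell
#!/bin/sh
# Sketch of the netns isolation for the fuse client (names are illustrative).
# With RUN=echo (the default here) the commands are printed, not executed.
RUN="${RUN:-echo}"
NS=ceph-fuse-ns
VETH_HOST=brx-veth0      # host-side end, attached to the ceph-brx bridge
VETH_NS=ns-veth0         # namespace-side end, used by ceph-fuse

setup_netns() {
    $RUN ip netns add "$NS"
    $RUN ip link add "$VETH_HOST" type veth peer name "$VETH_NS"
    $RUN ip link set "$VETH_NS" netns "$NS"
    $RUN ip link set "$VETH_HOST" master ceph-brx   # bridge the host end
    $RUN ip link set "$VETH_HOST" up
    $RUN ip netns exec "$NS" ip link set "$VETH_NS" up
}

# "Killing" the client: bring down the netns-side veth so packets from the
# cluster are silently dropped -- no FIN/RST is sent to the peers.
suspend_client() {
    $RUN ip netns exec "$NS" ip link set "$VETH_NS" down
}

setup_netns
suspend_client
```

Because the drop happens at the link layer inside the namespace, ceph-fuse itself needs no changes and never gets a chance to clean up.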

#5 Updated by Xiubo Li about 1 month ago

For now both the kernel and fuse clients are working with the helper script:

# ./ 

This will help to isolate the mount client from the OS in a separate network namespace!

usage: [OPTIONS [parameters]] [--brxip <ip_address/mask>]
  --fuse    <ceph-fuse options>
    The ceph-fuse command options
    $ --fuse -m /mnt/cephfs -o nonempty

  --kernel  <mount options>
    The mount command options
    $ --kernel -t ceph /mnt/cephfs -o fs=a

  --suspend <mountpoint>
    Down the veth interface in the network namespace
    $ --suspend /mnt/cephfs

  --resume  <mountpoint>
    Up the veth interface in the network namespace
    $ --resume /mnt/cephfs

  --umount  <mountpoint>
    Umount and delete the network namespace
    $ --umount /mnt/cephfs

  --brxip   <ip_address/mask>
    Specify the ip/mask for ceph-brx; only meaningful with the --fuse/--kernel options
    (default: …, netns ip: … ~ …)
    $ --fuse -m /mnt/cephfs --brxip
    $ --kernel /mnt/cephfs --brxip

  -h, --help
    Print help

By default it will use 192.168.X.Y/16 private network IPs for ceph-brx and the netnses, as above. You can also specify your own ip/mask for ceph-brx, like:

  $ --fuse /mnt/cephfs --brxip

Then each netns will get a new IP from the ranges:

 [… ~ …]/12 and [… ~ …]/12
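Under the hood, --suspend/--resume/--umount presumably reduce to `ip` commands along these lines. This is only a sketch: the mountpoint-to-netns naming in `ns_for_mount` and the veth name are assumptions, and `RUN=echo` makes it a dry run:

```shell
#!/bin/sh
# With RUN=echo (the default here) the commands are printed, not executed.
RUN="${RUN:-echo}"

# Hypothetical mapping from a mountpoint to its netns name.
ns_for_mount() {
    echo "ceph-ns-$(echo "$1" | tr / -)"
}

# --resume <mountpoint>: bring the netns-side veth back up.
resume() {
    ns=$(ns_for_mount "$1")
    $RUN ip netns exec "$ns" ip link set ns-veth0 up
}

# --umount <mountpoint>: unmount normally, then delete the netns.
cleanup() {
    ns=$(ns_for_mount "$1")
    $RUN umount "$1"
    $RUN ip netns del "$ns"
}

resume /mnt/cephfs
cleanup /mnt/cephfs
```

Keeping one netns per mountpoint is what lets several clients share a node while each can still be "killed" independently.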

#6 Updated by Patrick Donnelly about 1 month ago

  • Tracker changed from Bug to Feature
  • Project changed from Linux kernel client to fs
  • Subject changed from fs/ceph: add sysfs control file to hard shutdown mount to qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
  • Target version set to v16.0.0
  • Source set to Development
  • Pull request ID set to 33576
  • Component(FS) ceph-fuse, kceph, qa-suite added
  • Labels (FS) qa added

#7 Updated by Xiubo Li 24 days ago

Have added the qa/ test case by transferring the bash code to Python.

In some cases we can just s/mount_X.kill()/mount_X.suspend_netns()/ and s/mount_X.kill_cleanup() ... mount() again/mount_X.resume_netns()/.

#8 Updated by Xiubo Li 21 days ago

  • Status changed from In Progress to Fix Under Review
