Bug #725
mds: set_layout on root inode isn't persistent
Status:
Resolved
Priority:
Normal
Description
From: Jim Schutt <jaschut@sandia.gov>
To: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>
Subject: cephfs set_layout on filesystem root?

Hi,

I've been experimenting with using cephfs to set the object/stripe size on a Ceph filesystem root, and it seems to not persist across a filesystem restart. Is that expected behavior?

To reproduce on current testing branch (6d0dc4bf6):

---
# create a new filesystem, mount it; then from a client:
[root@an1024 ~]# df -h /mnt/ceph
Filesystem            Size  Used Avail Use% Mounted on
172.17.40.34:/         13T  510M   13T   1% /ram/mnt/ceph
[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout.data_pool:     0
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1
layout.preferred_osd: -1
[root@an1024 ~]# cephfs /mnt/ceph set_layout -s 262144 -c 1 -u 262144
[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1
[root@an1024 ~]# touch /mnt/ceph/test1
[root@an1024 ~]# cephfs /mnt/ceph/test1 show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1
[root@an1024 ~]# umount /mnt/ceph
[root@an1024 ~]# mount.ceph an14-ib0:/ /mnt/ceph
[root@an1024 ~]# df -h /mnt/ceph
Filesystem            Size  Used Avail Use% Mounted on
172.17.40.34:/         13T  178M   13T   1% /ram/mnt/ceph
[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1
---

OK, so far. After filesystem unmount/shutdown/restart/mount:

---
[root@an1024 ~]# df -h /mnt/ceph
Filesystem            Size  Used Avail Use% Mounted on
172.17.40.34:/         13T  450M   13T   1% /ram/mnt/ceph
[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout not specified
---

Hmmm, not what I was expecting.
Also:

---
[root@an1024 ~]# cephfs /mnt/ceph/test1 show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1
[root@an1024 ~]# touch /mnt/ceph/test2
[root@an1024 ~]# cephfs /mnt/ceph/test2 show_layout
layout.data_pool:     0
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1
layout.preferred_osd: -1
---

Also not what I was expecting. I thought my 256 KiB setting from before should still be in effect. Am I missing something?

Thanks -- Jim
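The behavior Jim expected can be summarized as: a new file inherits its layout from the nearest ancestor directory that carries an explicit layout, falling back to the filesystem default otherwise. The sketch below is a minimal Python model of that inheritance rule, not Ceph code; the function name, the layout dictionary, and the default values (taken from the 4 MiB defaults shown in the transcript) are illustrative assumptions:

```python
# Illustrative model of directory-layout inheritance (not actual Ceph code).
# A new file takes the layout of the nearest ancestor directory that has an
# explicit layout; otherwise it falls back to the filesystem default.

DEFAULT_LAYOUT = {"object_size": 4194304, "stripe_unit": 4194304, "stripe_count": 1}

def effective_layout(path, dir_layouts):
    """dir_layouts maps directory paths (e.g. "/", "/a/b") to explicit layouts."""
    parts = path.strip("/").split("/")
    # Walk from the file's parent directory up toward the root.
    for i in range(len(parts) - 1, -1, -1):
        ancestor = "/" + "/".join(parts[:i]) if i else "/"
        if ancestor in dir_layouts:
            return dir_layouts[ancestor]
    return DEFAULT_LAYOUT

# With a 256 KiB layout set on "/", a new file should inherit it:
root_layout = {"object_size": 262144, "stripe_unit": 262144, "stripe_count": 1}
print(effective_layout("/test2", {"/": root_layout})["object_size"])  # 262144

# Once the root layout is lost (the bug reported here), new files fall
# back to the default, which matches the test2 output in the transcript:
print(effective_layout("/test2", {})["object_size"])  # 4194304
```

In this model, losing the root inode's layout on restart is exactly what turns `test2` into a 4 MiB-object file while the already-created `test1` keeps the layout stamped on it at creation time.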
Updated by Sage Weil over 13 years ago
- Status changed from New to Resolved
commit:457e3e09bc78c297f83f0e85757a4d238a1da968
Updated by John Spray over 7 years ago
- Project changed from Ceph to CephFS
- Category deleted (1)
- Target version deleted (v0.24.2)
Bulk updating project=ceph category=mds bugs so that I can remove the MDS category from the Ceph project to avoid confusion.