Support #17079
IO runs only on one pool even though 2 pools are attached to cephfs FS
Description
Steps:
1) Create a data pool and a metadata pool, and create a new CephFS filesystem using them.
2) Create another data pool and attach it to the existing filesystem with "ceph fs add_data_pool cephfs cephfs_data1".
3) Mount the filesystem on a client and start running IO.
4) Run "rados df" on the pools (see the command sketch below).
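For reference, a minimal sketch of the commands behind these steps, using the pool names from this report (the PG count of 64, the monitor host, the mount point, and the keyring path are placeholders, not taken from the original):

# ceph osd pool create cephfs_data 64
# ceph osd pool create cephfs_metadata 64
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph osd pool create cephfs_data1 64
# ceph fs add_data_pool cephfs cephfs_data1
# mount -t ceph <mon-host>:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# rados df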
Result: objects are being written only to the first data pool; there are no objects in the secondary data pool. Logs below:
# ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,3 cephfs_data1

# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data1 ]
# ceph fs dump
dumped fsmap epoch 187
e187
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 187
flags 0
created 2016-08-19 16:25:06.653481
modified 2016-08-19 16:25:06.653481
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 110
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=14580}
failed
damaged
stopped
data_pools 1,3
metadata_pool 2
inline_data disabled
14580: 10.242.42.216:6800/2113986 'rack2-client-3' mds.0.8 up:active seq 304
# rados df
pool name            KB   objects  clones  degraded  unfound     rd    rd KB       wr       wr KB
cephfs_data  1536000001   1500001       0         0        0  32367     2048  4632499  1569910789
cephfs_data1          0         0       0         0        0      0        0        0           0
cephfs_metadata   37768        30       0         0        0    334  5225545   304221    12098790
rbd                   0         0       0         0        0      0        0        0           0
total used   3088403440   1500031
total avail  236556147856
total space  239644551296

# rados --pool=cephfs_data1 ls
# rados --pool=cephfs_data ls
10000043c26.00000000
1000012098d.00000000
10000099f97.00000000
Updated by Zheng Yan over 7 years ago
The first pool is the default data pool. See http://docs.ceph.com/docs/master/cephfs/file-layouts/ for how to store files in a non-default pool.
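For illustration, a minimal sketch of the file-layout approach the linked doc describes, using the pool names from this report (the mount point and directory name are assumptions): set the pool attribute of a directory's layout, and files created under it afterwards store their objects in that pool (files created before the change keep their old layout):

# mkdir /mnt/cephfs/on_pool1
# setfattr -n ceph.dir.layout.pool -v cephfs_data1 /mnt/cephfs/on_pool1
# getfattr -n ceph.dir.layout /mnt/cephfs/on_pool1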
Updated by Rohith Radhakrishnan over 7 years ago
Tried setting a non-default pool using setfattr, but I am not able to set more than one pool on a directory at a time; there is no option for that. This means I cannot have CephFS write from the root to more than one pool, even though both pools have been added to the filesystem. Is there any way this can be achieved?
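A directory layout names exactly one data pool, but different subtrees can each point at a different pool, so a split below the root is possible; a sketch under that assumption (the directory names are hypothetical):

# setfattr -n ceph.dir.layout.pool -v cephfs_data /mnt/cephfs/dir_a
# setfattr -n ceph.dir.layout.pool -v cephfs_data1 /mnt/cephfs/dir_b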
Updated by Rohith Radhakrishnan over 7 years ago
@Zheng: What I would like to achieve is this: after adding 2 pools to a CephFS filesystem, I should be able to redirect objects from the root to both pools, one after the other. This would be helpful when the first pool fills up, so the remaining objects can be moved to the secondary pool.
Updated by Zheng Yan over 7 years ago
There is no option to do that. Your requirement is strange; why not enlarge the quota of the first pool?
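One reading of "enlarge the quota" is the per-pool byte quota, which can be raised with "ceph osd pool set-quota"; a sketch assuming a 2 TiB limit (the value is illustrative, not from this thread):

# ceph osd pool set-quota cephfs_data max_bytes 2199023255552
# ceph osd pool get-quota cephfs_data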
Updated by Rohith Radhakrishnan over 7 years ago
You are right. I could do that.
Updated by Greg Farnum over 7 years ago
- Tracker changed from Bug to Support
- Priority changed from Urgent to Normal