Ceph client causes heavy load on metadata server
3 - minor
Pull request ID:
Hi Ceph Developers,
I use the latest version of Ceph, v0.25.1. For experimental purposes, we tried to produce one million files, each between 1k and 10k in size.
1st Issue: The DFS architecture we built contains 4 storage nodes, one metadata server, and one Ceph client, each with 4G of memory on board. We ran scripts to produce one million files in our test environment, which should take around one hour. However, certain error and/or warning messages, as shown in the enclosed, were displayed on our screen.
We are therefore writing to report this issue and to ask for solutions or support. It would also be appreciated if you could run our Python script to check whether similar error/warning messages occur in your environment.
2nd Issue: We found that the cache size of the metadata server grows continuously, which causes a problem where data is not flushed to disk (please also see the enclosed). To solve this, we are wondering whether there is a ceph.conf parameter we can set; that is, can we set a cache size limit so that data is flushed to disk?
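If the MDS cache is the cause, a ceph.conf fragment along these lines might cap it. This is only a sketch: the option name and default (`mds cache size`, an inode count) should be checked against the documentation for your Ceph version before use.

```
[mds]
    ; cap the number of inodes the MDS keeps in cache
    ; (name and semantics to be confirmed for v0.25.1)
    mds cache size = 100000
```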
The Python script usage:
ex: ./filegen.py 1000000 10000 1 10
1st parameter : produce one million files
2nd parameter : create one new folder for every 10000 files created
3rd parameter : minimum file size
4th parameter : maximum file size
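Since the original filegen.py is not included here, the following is a hypothetical reconstruction based on the usage described above (argument order, folder-per-N-files behavior, and sizes interpreted as kilobytes are all assumptions), in case it helps reproduce the workload:

```python
#!/usr/bin/env python
# Hypothetical sketch of filegen.py -- not the original script.
# Usage: ./filegen.py <total_files> <files_per_dir> <min_kb> <max_kb>
import os
import random
import sys

def generate_files(total_files, files_per_dir, min_kb, max_kb, root="."):
    """Create total_files files of min_kb..max_kb kilobytes each,
    starting a new subdirectory every files_per_dir files."""
    dirname = root
    for i in range(total_files):
        if i % files_per_dir == 0:
            dirname = os.path.join(root, "dir_%06d" % (i // files_per_dir))
            os.makedirs(dirname, exist_ok=True)
        size = random.randint(min_kb, max_kb) * 1024
        path = os.path.join(dirname, "file_%08d" % i)
        with open(path, "wb") as f:
            f.write(os.urandom(size))

if __name__ == "__main__":
    # ex: ./filegen.py 1000000 10000 1 10
    total, per_dir, lo, hi = (int(a) for a in sys.argv[1:5])
    generate_files(total, per_dir, lo, hi)
```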