Bug #935

closed

Ceph client causes heavy load on metadata server

Added by zac chiang about 13 years ago. Updated about 13 years ago.

Status: Can't reproduce
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression:
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi Ceph Developers,

We are using the latest version of Ceph, v0.25.1. For experimental purposes, we tried to produce one million files, each between 1 KB and 10 KB in size.
1st Issue: The DFS architecture we built contains four storage nodes, one metadata server, and one Ceph client, each with 4 GB of memory. We ran scripts to produce one million files in our test environment, which should take around one hour. However, the error and warning messages shown in the enclosed attachment appeared on our screen.
We are writing to report this issue and are looking for a solution or support. We would also appreciate it if you could run our Python script to check whether similar error/warning messages occur in your environment.
2nd Issue: We found that the cache of the metadata server grows continuously, and data is not flushed to disk (please also see the enclosed attachment). To solve this, is there any ceph.conf parameter we can set, i.e. can we set a cache size at which data is flushed to disk?
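For reference, the kind of ceph.conf setting we had in mind is sketched below. We are assuming "mds cache size" is the relevant knob (it appears to bound the number of inodes the MDS keeps in memory); please correct us if a different parameter controls when metadata is flushed.

    [mds]
        ; assumed option: limit how many inodes/dentries the MDS caches
        mds cache size = 100000

With a smaller cache size we would expect the MDS to trim and flush metadata to the OSDs sooner, which is the behaviour we are after.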

Python script usage:

ex: ./filegen.py 1000000 10000 1 10

1st parameter: total number of files to produce (one million here)
2nd parameter: create a new folder for every 10,000 files
3rd parameter: minimum file size (1 KB here)
4th parameter: maximum file size (10 KB here)
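For reference, a minimal sketch of what filegen.py does (this is not the exact attached script; directory and file names here are illustrative):

    #!/usr/bin/env python
    import os
    import random
    import sys

    def main():
        total_files = int(sys.argv[1])    # e.g. 1000000
        files_per_dir = int(sys.argv[2])  # e.g. 10000: new folder every 10000 files
        min_kb = int(sys.argv[3])         # e.g. 1
        max_kb = int(sys.argv[4])         # e.g. 10

        dirname = None
        for i in range(total_files):
            # start a new directory every files_per_dir files
            if i % files_per_dir == 0:
                dirname = "dir_%06d" % (i // files_per_dir)
                os.mkdir(dirname)
            # write a file of random size between min_kb and max_kb kilobytes
            size = random.randint(min_kb, max_kb) * 1024
            path = os.path.join(dirname, "file_%09d" % i)
            with open(path, "wb") as f:
                f.write(os.urandom(size))

    if __name__ == "__main__":
        main()

Run it as in the example above; it creates a new directory every 10,000 files and fills each file with random bytes of the requested size.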

Files

ceph_pic.rar (750 KB) - zac chiang, 03/24/2011 11:46 PM