Bug #57523 (open)

CephFS performance degradation in mountpoint

Added by Robert Sander over 1 year ago. Updated over 1 year ago.

Status: New
Priority: Normal
Assignee: -
Category: Performance/Resource Usage
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

We have a cluster of 7 nodes, each with 10 SSD OSDs, providing CephFS to
a CloudStack system as primary storage.

When copying a large file into the mountpoint of the CephFS, the
bandwidth drops from 500 MB/s to 50 MB/s after around 30 seconds. We see
some MDS activity in the output of "ceph fs status" at the same time.

This only affects the first file being written. Additional files started
a little later can be written at full speed at the same time.
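
For reference, a minimal way to make the drop visible is to write steadily into the mountpoint and log per-second throughput. The sketch below is only an illustration, not the exact procedure used above; the path /mnt/cephfs/testfile.bin, the 4 MiB chunk size, and the 120-second duration are assumed placeholders.

```python
#!/usr/bin/env python3
# Hypothetical reproduction sketch (not part of the original report):
# write a large file into a CephFS path and print per-second throughput,
# so a drop after ~30 seconds becomes visible in the output.

import os
import time

TARGET = "/mnt/cephfs/testfile.bin"  # assumed path inside the CephFS mountpoint
CHUNK = b"\0" * (4 * 1024 * 1024)    # 4 MiB per write, like a streaming copy
DURATION = 120                       # seconds to keep writing

with open(TARGET, "wb") as f:
    start = last = time.monotonic()
    interval_bytes = 0
    while time.monotonic() - start < DURATION:
        f.write(CHUNK)               # buffered writes, as a plain cp would do
        interval_bytes += len(CHUNK)
        now = time.monotonic()
        if now - last >= 1.0:
            mb_s = interval_bytes / (now - last) / 1e6
            print(f"{now - start:6.1f}s  {mb_s:8.1f} MB/s", flush=True)
            last = now
            interval_bytes = 0

os.remove(TARGET)                    # clean up the test file afterwards
```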

When copying the same file into a subdirectory of the mountpoint, the
performance stays at 500 MB/s the whole time. MDS activity does not
seem to influence the performance here.
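
Running the same write loop with the target path pointed at a subdirectory of the mountpoint (instead of the mountpoint root) should show the two behaviours side by side.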

There are approximately 270 other files in the mountpoint directory; CloudStack stores
VM images in qcow2 format there. Approximately a dozen clients have mounted the filesystem.

There is no quota set in the filesystem.
