Cleanup #24745


Spurious empty files in CephFS root pool when multiple pools associated

Added by Jesús Cea almost 6 years ago. Updated about 5 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Tags:
Backport:
Reviewed:
Affected Versions:
Component(FS):
Labels (FS):
Pull request ID:

Description

Hi there.

I have an issue with CephFS and multiple data pools. My filesystem has six data pools, and I control where files are stored using xattrs on the directories.
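The directory-level pool selection described above can be sketched like this (the filesystem name, mount point, and directory name below are illustrative, not taken from my actual setup):

```shell
# Associate an extra data pool with the filesystem.
ceph fs add_data_pool cephfsROOT black_1

# Direct new files under this directory into the black_1 pool
# by setting the layout xattr on the directory.
mkdir /mnt/cephfs/black1
setfattr -n ceph.dir.layout.pool -v black_1 /mnt/cephfs/black1

# Verify the effective layout.
getfattr -n ceph.dir.layout /mnt/cephfs/black1
```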

The "root" directory contains only subdirectories whose xattrs direct new objects to different pools. So far so good. The problem is that the "root" data pool holds a ghost object for every file in the filesystem, even when the file's data is actually stored in a different pool. These objects take no space at all, but they count against the object total for the filesystem.

The files are empty (0 bytes), but they carry xattrs recording which pool actually stores the object.
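These ghost objects can be inspected directly with the rados tool (the object name below is illustrative; CephFS names data objects `<inode>.<block>` in hex):

```shell
# List a few objects in the root data pool.
rados -p cephfsROOT_data ls | head

# Inspect one of them: it has size 0 ...
rados -p cephfsROOT_data stat 10000000000.00000000

# ... but carries xattrs; the "parent" xattr holds the encoded
# backtrace, and (for non-default layouts) a "layout" xattr records
# where the file data actually lives.
rados -p cephfsROOT_data listxattr 10000000000.00000000
```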

Shouldn't this data be stored in the metadata pool? For comparison, my metadata pool is 244 MB, roughly the same size it was before I stored any objects; now, with 1.3 million objects, it is still only ~250 MB.

POOL NAME            ID  USED  %USED  MAX AVAIL  OBJECTS
cephfsROOT_data      70     0      0       354G  1332929
cephfsROOT_metadata  71  244M   0.07       354G     1625
black_1              72  944G  52.58       851G   241736
black_2              73  944G  52.59       851G   241744
black_3              74  953G  52.82       851G   243990
black_4              75  934G  52.33       851G   239243
black_5              76  944G  52.59       851G   241814
black_6              77  531G  38.44       851G   136081

The black_* pools are associated with the filesystem via "ceph fs add_data_pool XXX". For each object created in a black_* pool, a ghost empty object is created in "cephfsROOT_data".

That huge number of objects is causing other issues, such as the warning "1 pools have many more objects per pg than average".
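That warning can be inspected, and if necessary silenced, along these lines (the pool name and pg_num value are illustrative, and the right choice depends on the cluster):

```shell
# Show which pool triggers the objects-per-pg warning.
ceph health detail

# Either raise pg_num on the outlier pool ...
ceph osd pool set cephfsROOT_data pg_num 64

# ... or raise the skew threshold that triggers the warning.
ceph config set mon mon_pg_warn_max_object_skew 20
```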

Actions #1

Updated by John Spray almost 6 years ago

  • Project changed from Ceph to CephFS

(see discussion on thread "[ceph-users] Spurious empty files in CephFS root pool when multiple pools associated")

Assigning fs subproject. Patrick: although this isn't a bug, perhaps a cue to revisit storing backtraces in the metadata pool?

Actions #2

Updated by Patrick Donnelly almost 6 years ago

  • Tracker changed from Bug to Cleanup
Actions #3

Updated by Patrick Donnelly about 5 years ago

  • Status changed from New to Won't Fix
  • Start date deleted (07/02/2018)

As John noted, this isn't a bug. It may be we eventually try to relax the requirement for backtraces to be stored on the default data pool. There are no plans to do so now.
