Support #44703

"ceph osd df" reports an OSD using 147GB of disk for 47GB data, 3MB omap, 1021MB meta

Added by Shikhar Goel about 4 years ago. Updated about 4 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Tags: -
Reviewed: -
Affected Versions: -
Component(RADOS): -
Pull request ID: -

Description

We are seeing a mismatch in Rook 1.2 when using replication instead of erasure coding for the Ceph object store: our total data size is 6 GB and the Rook cluster OSDs have 300 GB of storage in total, but the cluster is still using up all of that space for 6 GB of data.

When we run the `rados ls` command we see a lot of object keys, and most of them appear to be duplicates; my understanding is that they should be unique.
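
A minimal sketch for checking whether those keys are really duplicated inside a single pool, or are just the same names appearing in several pools (`rados ls` lists one pool at a time, so `<pool>` below is a placeholder for an actual pool name):

```sh
# List the pools in the cluster.
ceph osd pool ls

# List objects in one pool and print any names that occur more than once
# within that pool (replace <pool> with a real pool name from above).
rados -p <pool> ls | sort | uniq -d | head

# Per-pool object counts and stored/used sizes for comparison.
ceph df detail
```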

There is a health error reporting that the OSDs and pools are full. Initially we were using 3 OSDs, so we added 3 more OSDs to get the cluster back into a running state; the command output after that is shown in the attached screenshot. We are running the ceph v14.2.7-20200206 image.
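
A minimal sketch of standard Ceph CLI commands for checking where the raw space is going after the new OSDs were added (nothing here is specific to this cluster):

```sh
# Overall raw usage plus per-pool STORED vs USED (replication multiplies USED).
ceph df detail

# Per-OSD breakdown of data, omap and metadata usage, grouped by host.
ceph osd df tree

# Details of the current health warnings/errors (full OSDs, full pools, etc.).
ceph health detail
```

With replication size 3, 6 GB of user data is expected to consume roughly 18 GB of raw space plus per-OSD BlueStore metadata overhead, so the per-pool USED column from `ceph df detail` is the first number worth comparing against expectations.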

Screenshot 2020-03-21 at 10.27.43 PM.png (72.7 KB) Shikhar Goel, 03/21/2020 05:46 PM

History

#1 Updated by Greg Farnum about 4 years ago

  • Tracker changed from Bug to Support
  • Project changed from Ceph to RADOS
  • Subject changed from Ceph Cluster is getting full when used with replication 3 with less size of data in rook. to "ceph osd df" reports an OSD using 147GB of disk for 47GB data, 3MB omap, 1021MB meta
