Bug #37875 (closed)

osdmaps aren't being cleaned up automatically on healthy cluster

Added by Bryan Stillwell over 5 years ago. Updated almost 4 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

After expanding a Luminous 12.2.8 FileStore cluster from ~1,500 OSDs to ~1,900 OSDs, I've noticed that osdmaps aren't being trimmed automatically:

    # find /var/lib/ceph/osd/ceph-1719/current/meta -name 'osdmap*' | wc -l
    42515

With an average size of ~700KB, this adds up to around 30GB/OSD:

    # du -sh /var/lib/ceph/osd/ceph-1719/current/meta
    30G     /var/lib/ceph/osd/ceph-1719/current/meta
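
A quick way to see whether the monitors are holding on to the full map history (rather than the OSDs failing to delete files) is to compare the first and last committed osdmap epochs in the cluster report. This is a minimal sketch; the exact field names (osdmap_first_committed / osdmap_last_committed) are an assumption about the report format:

    # Print the oldest and newest osdmap epochs the monitors still retain;
    # a very wide span suggests old maps are not being trimmed.
    ceph report 2>/dev/null | grep -E '"osdmap_(first|last)_committed"'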

I remember something like this happening back in the Hammer days (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013060.html), and the same workaround works for trimming them slowly:

# Nudging each OSD's crush weight up and back down generates new osdmap epochs, which lets the old cached maps be trimmed
for i in {1653..1719}; do ceph osd crush reweight osd.$i 4.00001; sleep 4; ceph osd crush reweight osd.$i 4; sleep 4; done
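
Once the loop has walked through the affected OSDs, re-running the count from the description should show the cached maps shrinking. A minimal sketch, reusing the FileStore meta path from above (osd.1719 is just the example OSD, not special):

    # Re-count the cached osdmap files on one OSD; the total should drop
    # steadily as trimming catches up.
    find /var/lib/ceph/osd/ceph-1719/current/meta -name 'osdmap*' | wc -l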


Related issues (2): 0 open, 2 closed

Is duplicate of Ceph - Bug #45400: mon/OSDMonitor: maps not trimmed if osds are down (Resolved, Joao Eduardo Luis)

Copied to RADOS - Bug #47290: osdmaps aren't being cleaned up automatically on healthy cluster (Resolved)
