Bug #2779


mon: [near]full status doesn't get purged when osds are removed

Added by Sage Weil almost 12 years ago. Updated almost 12 years ago.

Status: Resolved
Priority: Normal
Category: Monitor
Source: Community (user)
% Done: 0%

Description

Date: Fri, 13 Jul 2012 12:17:47 +0400
From: Andrey Korolyov <>
To:
Subject: ceph status reporting non-existing osd


Hi,

Recently I've reduced my test suite from 6 to 4 OSDs (at ~60% usage) on a
six-node cluster, and I removed a bunch of rbd objects during recovery to
avoid overfilling.
Right now I'm constantly receiving a warning about a nearfull state on a
non-existing osd:

   health HEALTH_WARN 1 near full osd(s)
   monmap e3: 3 mons at {0=192.168.10.129:6789/0,1=192.168.10.128:6789/0,2=192.168.10.127:6789/0}, election epoch 240, quorum 0,1,2 0,1,2
   osdmap e2098: 4 osds: 4 up, 4 in
   pgmap v518696: 464 pgs: 464 active+clean; 61070 MB data, 181 GB used, 143 GB / 324 GB avail
   mdsmap e181: 1/1/1 up {0=a=up:active}

HEALTH_WARN 1 near full osd(s)
osd.4 is near full at 89%

Needless to say, osd.4 now remains only in ceph.conf, not in the crushmap.
The reduction was done online, i.e. without restarting the entire cluster.
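
For reference, this state can be reached with an ordinary online removal. A minimal sketch of the removal sequence, assuming the standard Ceph CLI and osd.4 as the OSD being removed (id taken from the report above):

   ceph osd out 4                # stop mapping PGs to the OSD; wait for rebalancing
   ceph osd crush remove osd.4   # remove the OSD from the crushmap
   ceph auth del osd.4           # delete its cephx key
   ceph osd rm 4                 # remove it from the osdmap
   ceph health detail            # still shows: osd.4 is near full at 89%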

Actions #1

Updated by Sage Weil almost 12 years ago

  • Status changed from New to Fix Under Review
  • Assignee changed from Sage Weil to Greg Farnum

tag!

Actions #2

Updated by Sage Weil almost 12 years ago

  • Status changed from Fix Under Review to Resolved
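
With the fix in place, the resolution implies that removing an OSD also purges its [near]full entry; a quick check, assuming the same cluster as in the report:

   ceph health detail   # expected after the fix: no stale "osd.4 is near full" line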
