Bug #2779
mon: [near]full status doesn't get purged when osds are removed
Description
Date: Fri, 13 Jul 2012 12:17:47 +0400
From: Andrey Korolyov <andrey@xdel.ru>
To: ceph-devel@vger.kernel.org
Subject: ceph status reporting non-existing osd
Hi,
Recently I've reduced my test cluster from 6 to 4 osds (at ~60% usage) on
six nodes, and I removed a bunch of rbd objects during recovery to avoid
overfilling. Right now I'm constantly receiving a warning about a nearfull
state on a non-existing osd:
   health HEALTH_WARN 1 near full osd(s)
   monmap e3: 3 mons at
{0=192.168.10.129:6789/0,1=192.168.10.128:6789/0,2=192.168.10.127:6789/0},
election epoch 240, quorum 0,1,2 0,1,2
   osdmap e2098: 4 osds: 4 up, 4 in
   pgmap v518696: 464 pgs: 464 active+clean; 61070 MB data, 181 GB
used, 143 GB / 324 GB avail
   mdsmap e181: 1/1/1 up {0=a=up:active}

HEALTH_WARN 1 near full osd(s)
osd.4 is near full at 89%
Needless to say, osd.4 remains only in ceph.conf, not in the crushmap.
The reduction was done online, i.e. without restarting the entire cluster.
Associated revisions
mon: purge removed osds from [near]full sets
The [near]full sets are volatile state. Remove removed (or created)
osds from the set when we process a map.
Fixes: #2779
Signed-off-by: Sage Weil <sage@inktank.com>
History
#1 Updated by Sage Weil about 11 years ago
- Status changed from New to Fix Under Review
- Assignee changed from Sage Weil to Greg Farnum
tag!
#2 Updated by Sage Weil about 11 years ago
- Status changed from Fix Under Review to Resolved