Bug #22350

nearfull OSD count in 'ceph -w'

Added by Richard Arends over 1 year ago. Updated over 1 year ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
Administration/Usability
Target version:
-
Start date:
12/08/2017
Due date:
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Monitor
Pull request ID:

Description

Hello,

While looking at the 'ceph -w' output, I noticed that sometimes the 'nearfull' information is wrong:

"2017-12-08 07:39:20.497508 mon.0 <ip address>:6789/0 66079 : cluster [INF] osdmap e8139: 153 osds: 153 up, 153 in nearfull"

The information from 'sudo ceph health detail|grep full' is:

HEALTH_WARN 4 pgs backfilling; 4 pgs stuck unclean; recovery 199946/77329902 objects misplaced (0.259%); 1 near full osd(s)
osd.62 is near full at 86%

Grepping through the logfile, I sometimes see the '153 in nearfull' message.

Regards,
Richard.

ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
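
One plausible reading (not confirmed in this ticket) is that the trailing "nearfull" is a cluster-wide osdmap flag appended after the up/in counts, rather than a count of 153 nearfull OSDs. A quick way to compare the osdmap flag against the per-OSD state, sketched for a Jewel cluster:

    # Does the osdmap itself carry a cluster-wide 'nearfull'/'full' flag?
    # If so, the summary line prints the bare word right after "153 in".
    ceph osd dump | grep ^flags

    # Per-OSD fullness, for comparison with the flag above:
    ceph health detail | grep -i full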

History

#1 Updated by Greg Farnum over 1 year ago

  • Project changed from Ceph to RADOS
  • Category changed from common to Administration/Usability
  • Component(RADOS) Monitor added

Can you produce logs of the monitor doing this? With "debug mon = 20" set?
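
For reference, one way to turn that on (a sketch; assumes standard Jewel tooling and that all monitors should log at this level):

    # Raise monitor debug logging at runtime, without a restart:
    ceph tell mon.* injectargs '--debug-mon 20/20'

    # Or persist it in ceph.conf and restart the monitors:
    # [mon]
    #     debug mon = 20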

#2 Updated by Greg Farnum over 1 year ago

  • Status changed from New to Need More Info

#3 Updated by Richard Arends over 1 year ago

Greg Farnum wrote:

    Can you produce logs of the monitor doing this? With "debug mon = 20" set?

Hi Greg,

Not at the moment, I would have to wait for another 'nearly full' disk.

With regards,
Richard.

#4 Updated by Sage Weil over 1 year ago

  • Status changed from Need More Info to Resolved
