Bug #22350
nearfull OSD count in 'ceph -w'
Status: Closed
Description
Hello,
While looking at the 'ceph -w' output I noticed that sometimes the 'nearfull' information is wrong:
"2017-12-08 07:39:20.497508 mon.0 <ip address>:6789/0 66079 : cluster [INF] osdmap e8139: 153 osds: 153 up, 153 in nearfull"
The information from 'sudo ceph health detail|grep full' is:
HEALTH_WARN 4 pgs backfilling; 4 pgs stuck unclean; recovery 199946/77329902 objects misplaced (0.259%); 1 near full osd(s)
osd.62 is near full at 86%
Grepping through the logfile I see the '153 in nearfull' message from time to time.
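A minimal sketch of that grep, using the osdmap line quoted above as sample input (the log path is illustrative; a real monitor log would live under /var/log/ceph/):

```shell
# Write the reported osdmap line to a sample file, then count how often
# the spurious "in nearfull" suffix appears.
cat > /tmp/ceph-mon.sample.log <<'EOF'
2017-12-08 07:39:20.497508 mon.0 <ip address>:6789/0 66079 : cluster [INF] osdmap e8139: 153 osds: 153 up, 153 in nearfull
EOF
grep -c 'in nearfull' /tmp/ceph-mon.sample.log
```

Against a real log, the same pattern shows how often the full OSD count was tagged nearfull rather than the single nearfull OSD reported by 'ceph health detail'.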
Regards,
Richard.
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
Updated by Greg Farnum over 6 years ago
- Project changed from Ceph to RADOS
- Category changed from common to Administration/Usability
- Component(RADOS) Monitor added
Can you produce logs of the monitor doing this? With "debug mon = 20" set?
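For reference, a sketch of raising monitor debug verbosity as requested, either persistently in ceph.conf or at runtime via injectargs (section placement and the mon.* target are the usual conventions, not taken from this report):

```
# ceph.conf fragment: verbose monitor logging
[mon]
    debug mon = 20

# or inject at runtime without restarting the monitors:
#   ceph tell mon.* injectargs '--debug-mon 20'
```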
Updated by Greg Farnum over 6 years ago
- Status changed from New to Need More Info
Updated by Richard Arends over 6 years ago
Greg Farnum wrote:
> Can you produce logs of the monitor doing this? With "debug mon = 20" set?
Hi Greg,
Not at the moment, I would have to wait for another 'nearly full' disk.
With regards,
Richard.
Updated by Sage Weil about 6 years ago
- Status changed from Need More Info to Resolved
Updated by Nathan Cutler over 4 years ago
Note: backported to luminous via https://github.com/ceph/ceph/pull/30902