Bug #43224

closed

ceph osd status error

Added by Alexander Kazansky over 4 years ago. Updated over 4 years ago.

Status:
Duplicate
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
% Done:

0%

Source:
Tags:
low-hanging-fruit
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hello.

I received an error when trying to execute 'ceph osd status'. It happens when the cluster has some OSDs down.

# ceph osd status
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 914, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/status/module.py", line 253, in handle_command
    return self.handle_osd_status(cmd)
  File "/usr/share/ceph/mgr/status/module.py", line 237, in handle_osd_status
    mgr_util.format_dimless(self.get_rate("osd", osd_id.__str__(), "osd.op_w") +
  File "/usr/share/ceph/mgr/status/module.py", line 47, in get_rate
    return (data[-1][1] - data[-2][1]) / float(data[-1][0] - data[-2][0])
ZeroDivisionError: float division by zero

If all OSDs are up, the command executes normally.
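Based on the traceback, the failure is in the status module's `get_rate`, which divides by the time delta between the last two perf-counter samples; for a down OSD that delta can be zero. A minimal sketch of a guarded version follows, assuming `data` is a list of `(timestamp, value)` pairs as the traceback's indexing implies (the function and parameter names here are illustrative, not the exact upstream fix):

```python
def get_rate(data):
    """Return the rate of change between the last two samples.

    Falls back to 0.0 when fewer than two samples exist or their
    timestamps coincide (e.g. an OSD that is down and no longer
    reporting), which avoids the ZeroDivisionError in the traceback.
    """
    if not data or len(data) < 2:
        return 0.0
    dt = float(data[-1][0] - data[-2][0])
    if dt == 0.0:
        return 0.0
    return (data[-1][1] - data[-2][1]) / dt
```

For example, `get_rate([(10, 100), (20, 150)])` returns `5.0`, while `get_rate([(10, 100), (10, 120)])` returns `0.0` instead of raising.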

Related issues: 1 (0 open, 1 closed)

Is duplicate of mgr - Feature #40365: mgr: Add get_rates_from_data from the dashboard to mgr_util.py (Resolved, Stephan Müller)
