Bug #44572 (open): ceph osd status crash

Added by Fyodor Ustinov about 4 years ago. Updated about 3 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
% Done: 0%
Source:
Tags:
low-hanging-fruit
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

[root@S-26-6-1-2 ~]# ceph -s
  cluster:
    id:     cef4c4fd-82d3-4b79-8b48-3d94e9496c9b
    health: HEALTH_WARN
            6 osds down
            1 host (6 osds) down
            1 rack (6 osds) down
            Degraded data redundancy: 792892/33170337 objects degraded (2.390%), 138 pgs degraded, 138 pgs undersized

  services:
    mon:         3 daemons, quorum S-26-4-1-2,S-26-4-2-2,S-26-6-1-2 (age 18h)
    mgr:         S-26-4-1-2(active, since 18h), standbys: S-26-4-2-2, S-26-6-1-2
    osd:         176 osds: 170 up (since 6m), 176 in (since 2w); 236 remapped pgs
    tcmu-runner: 26 daemons active (ceph-iscsi1.dcv:iscsi-hdd/esxi1, ceph-iscsi1.dcv:iscsi-hdd/esxi10, ceph-iscsi1.dcv:iscsi-hdd/esxi2, ceph-iscsi1.dcv:iscsi-hdd/esxi3, ceph-iscsi1.dcv:iscsi-hdd/esxi4, ceph-iscsi1.dcv:iscsi-hdd/esxi5, ceph-iscsi1.dcv:iscsi-hdd/esxi6, ceph-iscsi1.dcv:iscsi-hdd/esxi7, ceph-iscsi1.dcv:iscsi-hdd/esxi8, ceph-iscsi1.dcv:iscsi-hdd/esxi9, ceph-iscsi1.dcv:iscsi-ssd/esxi-ssd1, ceph-iscsi1.dcv:iscsi-ssd/esxi-ssd2, ceph-iscsi1.dcv:iscsi-ssd/esxi-ssd3, ceph-iscsi2.dcv:iscsi-hdd/esxi1, ceph-iscsi2.dcv:iscsi-hdd/esxi10, ceph-iscsi2.dcv:iscsi-hdd/esxi2, ceph-iscsi2.dcv:iscsi-hdd/esxi3, ceph-iscsi2.dcv:iscsi-hdd/esxi4, ceph-iscsi2.dcv:iscsi-hdd/esxi5, ceph-iscsi2.dcv:iscsi-hdd/esxi6, ceph-iscsi2.dcv:iscsi-hdd/esxi7, ceph-iscsi2.dcv:iscsi-hdd/esxi8, ceph-iscsi2.dcv:iscsi-hdd/esxi9, ceph-iscsi2.dcv:iscsi-ssd/esxi-ssd1, ceph-iscsi2.dcv:iscsi-ssd/esxi-ssd2, ceph-iscsi2.dcv:iscsi-ssd/esxi-ssd3)

  data:
    pools:   6 pools, 3332 pgs
    objects: 11.06M objects, 42 TiB
    usage:   122 TiB used, 386 TiB / 507 TiB avail
    pgs:     792892/33170337 objects degraded (2.390%)
             2031841/33170337 objects misplaced (6.125%)
             2958 active+clean
             222  active+remapped+backfill_wait
             131  active+undersized+degraded
             6    active+undersized+degraded+remapped+backfilling
             5    active+clean+scrubbing
             4    active+remapped+backfilling
             3    active+clean+remapped
             2    active+clean+scrubbing+deep
             1    active+undersized+degraded+remapped+backfill_wait

  io:
    client:   5.7 MiB/s rd, 14 MiB/s wr, 148 op/s rd, 402 op/s wr
    recovery: 136 MiB/s, 34 objects/s

[root@S-26-6-1-2 ~]# ceph osd status
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 974, in handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/status/module.py", line 253, in handle_command
    return self.handle_osd_status(cmd)
  File "/usr/share/ceph/mgr/status/module.py", line 237, in handle_osd_status
    mgr_util.format_dimless(self.get_rate("osd", osd_id.__str__(), "osd.op_w") +
  File "/usr/share/ceph/mgr/status/module.py", line 47, in get_rate
    return (data[-1][1] - data[-2][1]) / float(data[-1][0] - data[-2][0])
ZeroDivisionError: float division by zero
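
The failure is in the status module's get_rate() helper: it computes a rate from the last two perf-counter samples and divides by the timestamp delta, which is zero here, plausibly because the 6 down OSDs are no longer reporting fresh samples and the last two samples share a timestamp. A minimal sketch of a guarded variant, assuming samples are (timestamp, value) tuples as the traceback's indexing suggests (get_rate_safe is a hypothetical name for illustration, not the upstream fix):

# Hypothetical guarded rate helper; assumes `data` is a list of
# (timestamp, value) samples like the ones indexed in the traceback above.
def get_rate_safe(data):
    # Need at least two samples to form a rate.
    if not data or len(data) < 2:
        return 0.0
    delta_t = float(data[-1][0] - data[-2][0])
    if delta_t == 0.0:
        # Identical timestamps (e.g. a down OSD with stale counters):
        # report no throughput instead of raising ZeroDivisionError.
        return 0.0
    return (data[-1][1] - data[-2][1]) / delta_t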

#1 - Updated by Greg Farnum about 4 years ago

  • Project changed from Ceph to mgr
  • Category deleted (ceph cli)

#2 - Updated by Neha Ojha about 4 years ago

  • Tags set to low-hanging-fruit

#3 - Updated by Ernesto Puerta about 3 years ago

  • Tags set to low-hanging-fruit