
Bug #50928

OSD size count mismatched with ceph orch commands

Added by Juan Miguel Olmo Martínez almost 3 years ago. Updated over 2 years ago.

Status: Resolved
Priority: Normal
Category: cephadm
Target version: -
% Done: 0%
Source:
Tags:
Backport: pacific
Regression: No
Severity: 4 - irritation
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Listing the OSD service shows 0 of 4 OSDs running, although 12 OSD daemons are actually up
------------------------------------------------------------------------------------------

[ceph: root@ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e /]# ceph orch ls osd osd.all-available-devices
NAME RUNNING REFRESHED AGE PLACEMENT
osd.all-available-devices 0/4 - 4h *

[ceph: root@ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e /]# ceph orch ls osd osd.all-available-devices -f yaml
service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
  filter_logic: AND
  objectstore: bluestore
status:
  created: '2021-05-11T11:57:24.961415Z'
  running: 0
  size: 4
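The mismatch is easiest to spot by comparing the service's reported `running`/`size` against the actual daemon count. A minimal sketch, assuming the JSON forms of the two commands used in this report (`ceph orch ls osd --format json` and `ceph orch ps --daemon_type osd --format json`); the helper name and the fixtures below are ours, not part of cephadm:

```python
import json

def compare_osd_counts(ls_json: str, ps_json: str):
    """Return (running, size, actual_daemons) from the JSON output of
    `ceph orch ls osd --format json` and
    `ceph orch ps --daemon_type osd --format json` (assumed flags)."""
    status = json.loads(ls_json)[0]["status"]   # service status, as in the YAML above
    daemons = json.loads(ps_json)               # one entry per OSD daemon
    return status["running"], status["size"], len(daemons)

# Hypothetical fixtures mirroring this report: the service claims 0/4,
# but 12 OSD daemons are actually running.
ls_fixture = json.dumps([{"service_name": "osd.all-available-devices",
                          "status": {"running": 0, "size": 4}}])
ps_fixture = json.dumps([{"daemon_name": f"osd.{i}"} for i in range(12)])
print(compare_osd_counts(ls_fixture, ps_fixture))  # (0, 4, 12)
```

On a live cluster the two JSON strings would come from running the commands themselves; here the fixtures reproduce the reported discrepancy.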

Listing the OSD daemons with the "ceph orch ps" command shows all 12 running
----------------------------------------------------------------------------

[ceph: root@ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e /]# ceph orch ps '' osd.all-available-devices
NAME HOST STATUS REFRESHED AGE PORTS VERSION IMAGE ID CONTAINER ID
osd.0 ceph-sunil2adm-1620733273073-node2-osd-mon-mgr-mds-node-exporte running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 6983f2af41bf
osd.1 ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e running (4h) 2m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 c1cd39a0ac65
osd.10 ceph-sunil2adm-1620733273073-node3-mon-osd-node-exporter-crash running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 55f673a1abd7
osd.11 ceph-sunil2adm-1620733273073-node3-mon-osd-node-exporter-crash running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 d1e29f785d1d
osd.2 ceph-sunil2adm-1620733273073-node2-osd-mon-mgr-mds-node-exporte running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 01c822d7eee6
osd.3 ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e running (4h) 2m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 c0a5015a4ffa
osd.4 ceph-sunil2adm-1620733273073-node2-osd-mon-mgr-mds-node-exporte running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 1e5b5c541046
osd.5 ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e running (4h) 2m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 1ceebf0500ef
osd.6 ceph-sunil2adm-1620733273073-node2-osd-mon-mgr-mds-node-exporte running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 1591bf19799a
osd.7 ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e running (4h) 2m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 c1f3900556f0
osd.8 ceph-sunil2adm-1620733273073-node3-mon-osd-node-exporter-crash running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 9fbaa959f0fe
osd.9 ceph-sunil2adm-1620733273073-node3-mon-osd-node-exporter-crash running (4h) 3m ago 4h - 16.2.0-33.el8cp bf81cae90cb7 7cc4d520990a

The OSD status reported by the "ceph status" command looks fine: all 12 OSDs are up and in.

[ceph: root@ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e /]# ceph -s
  cluster:
    id:     f64f341c-655d-11eb-8778-fa163e914bcc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e,ceph-sunil2adm-1620733273073-node2-osd-mon-mgr-mds-node-exporte,ceph-sunil2adm-1620733273073-node3-mon-osd-node-exporter-crash (age 4h)
    mgr: ceph-sunil2adm-1620733273073-node2-osd-mon-mgr-mds-node-exporte.eodzdr(active, since 4h), standbys: ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e.hzffik
    osd: 12 osds: 12 up (since 4h), 12 in (since 4h)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   71 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean

[ceph: root@ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e /]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17505 root default
-3 0.05835 host ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e
1 hdd 0.01459 osd.1 up 1.00000 1.00000
3 hdd 0.01459 osd.3 up 1.00000 1.00000
5 hdd 0.01459 osd.5 up 1.00000 1.00000
7 hdd 0.01459 osd.7 up 1.00000 1.00000
-5 0.05835 host ceph-sunil2adm-1620733273073-node2-osd-mon-mgr-mds-node-exporte
0 hdd 0.01459 osd.0 up 1.00000 1.00000
2 hdd 0.01459 osd.2 up 1.00000 1.00000
4 hdd 0.01459 osd.4 up 1.00000 1.00000
6 hdd 0.01459 osd.6 up 1.00000 1.00000
-7 0.05835 host ceph-sunil2adm-1620733273073-node3-mon-osd-node-exporter-crash
8 hdd 0.01459 osd.8 up 1.00000 1.00000
9 hdd 0.01459 osd.9 up 1.00000 1.00000
10 hdd 0.01459 osd.10 up 1.00000 1.00000
11 hdd 0.01459 osd.11 up 1.00000 1.00000

[ceph: root@ceph-sunil2adm-1620733273073-node1-installer-mon-mgr-osd-node-e /]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
1 hdd 0.01459 1.00000 15 GiB 6 MiB 568 KiB 0 B 5.4 MiB 15 GiB 0.04 1.01 0 up
3 hdd 0.01459 1.00000 15 GiB 6 MiB 568 KiB 0 B 5.4 MiB 15 GiB 0.04 1.01 1 up
5 hdd 0.01459 1.00000 15 GiB 5.9 MiB 568 KiB 0 B 5.4 MiB 15 GiB 0.04 1.00 0 up
7 hdd 0.01459 1.00000 15 GiB 5.9 MiB 572 KiB 0 B 5.4 MiB 15 GiB 0.04 1.00 0 up
0 hdd 0.01459 1.00000 15 GiB 6 MiB 568 KiB 0 B 5.4 MiB 15 GiB 0.04 1.01 0 up
2 hdd 0.01459 1.00000 15 GiB 6.1 MiB 572 KiB 0 B 5.5 MiB 15 GiB 0.04 1.02 0 up
4 hdd 0.01459 1.00000 15 GiB 6 MiB 568 KiB 0 B 5.4 MiB 15 GiB 0.04 1.01 0 up
6 hdd 0.01459 1.00000 15 GiB 6 MiB 568 KiB 0 B 5.4 MiB 15 GiB 0.04 1.01 1 up
8 hdd 0.01459 1.00000 15 GiB 5.9 MiB 572 KiB 0 B 5.3 MiB 15 GiB 0.04 0.99 0 up
9 hdd 0.01459 1.00000 15 GiB 5.8 MiB 572 KiB 0 B 5.2 MiB 15 GiB 0.04 0.98 0 up
10 hdd 0.01459 1.00000 15 GiB 5.9 MiB 572 KiB 0 B 5.4 MiB 15 GiB 0.04 1.00 0 up
11 hdd 0.01459 1.00000 15 GiB 5.9 MiB 568 KiB 0 B 5.3 MiB 15 GiB 0.04 0.99 1 up
TOTAL 180 GiB 71 MiB 6.7 MiB 0 B 65 MiB 180 GiB 0.04
MIN/MAX VAR: 0.98/1.02 STDDEV: 0

History

#1 Updated by Juan Miguel Olmo Martínez almost 3 years ago

  • Pull request ID set to 41477

#2 Updated by Sebastian Wagner over 2 years ago

  • Status changed from New to Fix Under Review

#3 Updated by Ken Dreyer over 2 years ago

  • Status changed from Fix Under Review to Pending Backport
  • Backport set to pacific
  • Pull request ID changed from 41477 to 43253

#4 Updated by Sebastian Wagner over 2 years ago

  • Status changed from Pending Backport to Resolved
