Bug #48884 (closed)

ceph osd df tree reporting incorrect SIZE value for rack having an empty host node

Added by Brad Hubbard over 3 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: -
Target version: -
% Done: 0%
Source: Support
Tags:
Backport: pacific, octopus, nautilus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This was discovered on luminous, but master behaves similarly: when a rack contains an empty host bucket (a host with no OSDs), the SIZE and related aggregate columns reported for that rack by ceph osd df tree are incorrect. In the output below, for example, rack BLOCK-AZ2 contains the empty host my-ceph-cluster2-osd041 and reports a SIZE of only 34.8TiB.

$ ceph osd df tree |  egrep root\|rack\|host
ID   CLASS WEIGHT     REWEIGHT SIZE    USE     DATA    OMAP    META    AVAIL   %USE  VAR  PGS TYPE NAME
-201       1142.68286        -      0B      0B      0B      0B      0B      0B     0    0   - root BLOCK
-101        418.48694        -  174TiB 95.4TiB 78.7TiB 3.80GiB      0B 78.7TiB     0    0   -     rack BLOCK-AZ1
  -9         34.93088        - 34.9TiB 19.1TiB 19.1TiB  643MiB 35.9GiB 15.8TiB 54.69 0.98   -         host my-ceph-cluster1-osd001
  -5         34.93088        - 34.9TiB 19.7TiB 19.6TiB  568MiB 37.0GiB 15.3TiB 56.26 1.01   -         host my-ceph-cluster1-osd004
 -15         34.93088        - 34.9TiB 20.7TiB 20.6TiB  688MiB 39.0GiB 14.2TiB 59.21 1.06   -         host my-ceph-cluster1-osd007
  -1         34.93088        - 34.9TiB 19.7TiB 19.7TiB  635MiB 36.9GiB 15.2TiB 56.36 1.01   -         host my-ceph-cluster1-osd019
 -21         34.93088        - 34.9TiB 20.4TiB 20.4TiB  661MiB 35.9GiB 14.5TiB 58.38 1.05   -         host my-ceph-cluster1-osd022
 -23         34.93088        - 34.9TiB 19.3TiB 19.2TiB  645MiB 34.3GiB 15.7TiB 55.13 0.99   -         host my-ceph-cluster1-osd025
 -27         34.81705        -      0B      0B      0B      0B      0B      0B     0    0   -         host my-ceph-cluster1-osd028
 -29         34.81705        - 34.8TiB 18.2TiB 16.6TiB  804MiB      0B 16.6TiB 52.32 0.94   -         host my-ceph-cluster1-osd031
 -59         34.81689        - 34.8TiB 19.0TiB 15.8TiB  738MiB      0B 15.8TiB 54.65 0.98   -         host my-ceph-cluster1-osd034
 -63         34.81689        - 34.8TiB 18.6TiB 16.2TiB  763MiB      0B 16.2TiB 53.49 0.96   -         host my-ceph-cluster1-osd037
 -57         34.81689        - 34.8TiB 18.8TiB 16.1TiB  795MiB      0B 16.1TiB 53.88 0.97   -         host my-ceph-cluster1-osd040
 -61         34.81689        - 34.8TiB 20.8TiB 14.0TiB  788MiB      0B 14.0TiB 59.76 1.07   -         host my-ceph-cluster1-osd043
-103        376.02588        - 34.8TiB 20.5TiB 14.3TiB  710MiB      0B 14.3TiB     0    0   -     rack BLOCK-AZ2
  -3         27.85596        - 34.8TiB 15.8TiB 19.1TiB  704MiB      0B 19.1TiB 45.25 0.81   -         host my-ceph-cluster2-osd002
 -11         34.81700        - 34.8TiB 18.6TiB 16.2TiB  834MiB      0B 16.2TiB 53.49 0.96   -         host my-ceph-cluster2-osd005
 -13         34.81700        - 34.8TiB 18.8TiB 16.0TiB  744MiB      0B 16.0TiB 54.07 0.97   -         host my-ceph-cluster2-osd008
 -25         34.81705        - 34.8TiB 19.2TiB 15.6TiB  863MiB      0B 15.6TiB 55.08 0.99   -         host my-ceph-cluster2-osd020
 -31         34.81705        - 34.8TiB 18.5TiB 16.3TiB  708MiB      0B 16.3TiB 53.06 0.95   -         host my-ceph-cluster2-osd023
 -35         34.81705        - 34.8TiB 19.9TiB 14.9TiB  837MiB      0B 14.9TiB 57.08 1.02   -         host my-ceph-cluster2-osd026
 -33         34.81705        - 34.8TiB 20.3TiB 14.5TiB  973MiB      0B 14.5TiB 58.34 1.05   -         host my-ceph-cluster2-osd029
 -37         34.81705        - 34.8TiB 19.6TiB 15.2TiB  823MiB      0B 15.2TiB 56.39 1.01   -         host my-ceph-cluster2-osd032
 -65         34.81689        - 34.8TiB 19.4TiB 15.4TiB  770MiB      0B 15.4TiB 55.65 1.00   -         host my-ceph-cluster2-osd035
 -71         34.81689        - 34.8TiB 17.7TiB 17.1TiB  871MiB      0B 17.1TiB 50.81 0.91   -         host my-ceph-cluster2-osd038
 -67                0        -      0B      0B      0B      0B      0B      0B     0    0   -         host my-ceph-cluster2-osd041
 -69         34.81689        - 34.8TiB 20.5TiB 14.3TiB  710MiB      0B 14.3TiB 58.94 1.06   -         host my-ceph-cluster2-osd044
-105        348.17010        -  348TiB  200TiB  149TiB 8.10GiB      0B  149TiB     0    0   -     rack BLOCK-AZ3
  -7                0        -      0B      0B      0B      0B      0B      0B     0    0   -         host my-ceph-cluster3-osd003
 -17         34.81700        - 34.8TiB 19.5TiB 15.3TiB  855MiB      0B 15.3TiB 56.03 1.00   -         host my-ceph-cluster3-osd006
 -39         34.81705        - 34.8TiB 21.0TiB 13.8TiB  848MiB      0B 13.8TiB 60.45 1.08   -         host my-ceph-cluster3-osd021
 -41         34.81705        - 34.8TiB 19.5TiB 15.3TiB  817MiB      0B 15.3TiB 56.02 1.00   -         host my-ceph-cluster3-osd024
 -43         34.81705        - 34.8TiB 21.2TiB 13.6TiB  873MiB      0B 13.6TiB 60.97 1.09   -         host my-ceph-cluster3-osd027
 -45         34.81734        - 34.8TiB 20.0TiB 14.8TiB  930MiB      0B 14.8TiB 57.38 1.03   -         host my-ceph-cluster3-osd030
 -47         34.81705        - 34.8TiB 19.0TiB 15.8TiB  759MiB      0B 15.8TiB 54.56 0.98   -         host my-ceph-cluster3-osd033
 -49         34.81689        - 34.8TiB 19.2TiB 15.7TiB  694MiB      0B 15.7TiB 55.00 0.99   -         host my-ceph-cluster3-osd036
 -51         34.81689        - 34.8TiB 19.5TiB 15.3TiB  812MiB      0B 15.3TiB 56.12 1.01   -         host my-ceph-cluster3-osd039
 -55         34.81689        - 34.8TiB 21.0TiB 13.8TiB  734MiB      0B 13.8TiB 60.26 1.08   -         host my-ceph-cluster3-osd042
 -53         34.81689        - 34.8TiB 19.6TiB 15.2TiB  969MiB      0B 15.2TiB 56.34 1.01   -         host my-ceph-cluster3-osd045
  -2                0        -      0B      0B      0B      0B      0B      0B     0    0   - root default
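
The report does not include reproduction steps. A minimal sketch of one way to trigger the same symptom on a test cluster would be to add a host bucket with no OSDs under an existing rack and compare that rack's row before and after (the bucket name empty-host and rack name rack1 below are illustrative):

$ ceph osd df tree | egrep 'root|rack|host'    # baseline: rack SIZE reflects its populated hosts
$ ceph osd crush add-bucket empty-host host    # create a host bucket that holds no OSDs
$ ceph osd crush move empty-host rack=rack1    # place the empty host under an existing rack
$ ceph osd df tree | egrep 'root|rack|host'    # the rack row now reports an incorrect SIZE

The empty bucket can be removed again afterwards with "ceph osd crush remove empty-host".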

Related issues 3 (0 open, 3 closed)

Copied to RADOS - Backport #48985: octopus: ceph osd df tree reporting incorrect SIZE value for rack having an empty host node (Resolved; singuliere _)
Copied to RADOS - Backport #48986: pacific: ceph osd df tree reporting incorrect SIZE value for rack having an empty host node (Resolved)
Copied to RADOS - Backport #48987: nautilus: ceph osd df tree reporting incorrect SIZE value for rack having an empty host node (Resolved; Nathan Cutler)
#1 - Updated by Brad Hubbard over 3 years ago

  • Description updated (diff)

#2 - Updated by Brad Hubbard about 3 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 38958

#3 - Updated by Brad Hubbard about 3 years ago

  • Backport changed from octopus, nautilus to pacific, octopus, nautilus

#4 - Updated by Neha Ojha about 3 years ago

  • Status changed from Fix Under Review to Pending Backport

#5 - Updated by Backport Bot about 3 years ago

  • Copied to Backport #48985: octopus: ceph osd df tree reporting incorrect SIZE value for rack having an empty host node added

#6 - Updated by Backport Bot about 3 years ago

  • Copied to Backport #48986: pacific: ceph osd df tree reporting incorrect SIZE value for rack having an empty host node added

#7 - Updated by Backport Bot about 3 years ago

  • Copied to Backport #48987: nautilus: ceph osd df tree reporting incorrect SIZE value for rack having an empty host node added

#8 - Updated by Loïc Dachary about 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
