Feature #11833

Method or utility to report OSDs in a particular bucket

Added by Vikhyat Umrao over 3 years ago. Updated about 3 years ago.

Status:
Resolved
Priority:
High
Assignee:
Category:
Monitor
Target version:
-
Start date:
06/01/2015
Due date:
% Done:

50%

Source:
Q/A
Tags:
Backport:
hammer
Reviewed:
Affected Versions:
Pull request ID:

Description

Description of New Feature:

Method or utility to report OSDs in a particular bucket

A CLI to report the OSD list in a bucket (e.g., in a host).

  1. ceph osd list --bucket host_name_A

I have verified in my test cluster that we do not have a command which gives the results the customer is asking for. The closest existing commands are:

1. osd ls {<int[0-]>} show all OSD ids
2. osd tree {<int[0-]>} print OSD tree

Neither of the above commands lists the OSDs in a given bucket, so this ticket is filed as a new feature.

As a thought: instead of adding a new CLI, could we add an option to the "ceph osd tree" command, e.g. "ceph osd tree --bucket host_name_A"?


Related issues

Copied to Ceph - Backport #12335: ceph: Method or utility to report OSDs in a particular bucket Resolved 06/01/2015

Associated revisions

Revision 5436c290 (diff)
Added by Kefu Chai over 3 years ago

mon: add an "osd crush tree" command

  • to print crush buckets/items in a tree

Fixes: #11833
Signed-off-by: Kefu Chai <>

Revision 89aa8ff9 (diff)
Added by Kefu Chai over 3 years ago

mon: add an "osd crush tree" command

  • to print crush buckets/items in a tree

Fixes: #11833
Signed-off-by: Kefu Chai <>
(cherry picked from commit 5436c290f3622feb8d4b279ed6552b2510e0cee9)

Conflicts:
src/test/mon/osd-crush.sh:
do not start mon as run() takes care of it already

History

#1 Updated by Kefu Chai over 3 years ago

Method or utility to report OSDs in a particular bucket

Vikhyat, could you be more specific about what "report" is supposed to do?

ceph node ls osd is able to print all OSDs grouped by host:

$ ceph node ls osd
{
    "rex001.front.sepia.ceph.com": [
        0,
        1,
        2
    ]
}

With some tooling, one is able to print all OSD ids:

$ ceph node ls osd --format=json-pretty|jq '.["rex001.front.sepia.ceph.com"]'
[
  0,
  1,
  2
]
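The same extraction can also be done without jq, e.g. with a few lines of Python (a sketch; the JSON payload is copied from the sample output above):

```python
import json

# JSON as printed by "ceph node ls osd --format=json-pretty" (sample from above)
output = """
{
    "rex001.front.sepia.ceph.com": [
        0,
        1,
        2
    ]
}
"""

osds_by_host = json.loads(output)
print(osds_by_host["rex001.front.sepia.ceph.com"])  # -> [0, 1, 2]
```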

Please note that the host here is the hostname, which is very likely an FQDN, not the name configured in CRUSH.

Yes, I understand that "bucket" is a concept in the CRUSH map: we can set a bucket's type name to "host" and a non-bucket's (leaf's) type to "osd" accordingly. If you are looking for a "bucket" in the crushmap, then a more general way to do this is probably to print the crush map as a tree; it is pretty much a hierarchical version of "osd tree" or "osd crush dump":


{
  "nodes": [
    {
      "id": -1,
      "name": "default",
      "type": "root",
      "type_id": 10,
      "children": [
        {
          "id": -2,
          "name": "rex001",
          "type": "host",
          "type_id": 1,
          "children": [
            {
              "id": 0,
              "name": "osd.0",
              "type": "osd",
              "type_id": 0,
              "crush_weight": 1,
              "depth": 2,
              "exists": 1,
              "status": "up",
              "reweight": 1,
              "primary_affinity": 1
            },
            {
              "id": 1,
              "name": "osd.1",
              "type": "osd",
              "type_id": 0,
              "crush_weight": 1,
              "depth": 2,
              "exists": 1,
              "status": "up",
              "reweight": 1,
              "primary_affinity": 1
            },
            {
              "id": 2,
              "name": "osd.2",
              "type": "osd",
              "type_id": 0,
              "crush_weight": 1,
              "depth": 2,
              "exists": 1,
              "status": "up",
              "reweight": 1,
              "primary_affinity": 1
            }
          ]
        }
      ]
    }
  ],
}

So one is able to query this JSON response with a given filter.
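As a sketch of such a filter: the nested tree above can be walked with a short script to list the OSD ids under a named bucket (the bucket names and JSON shape are taken from the example output above):

```python
def osds_in_bucket(tree, bucket_name):
    """Collect the ids of all "osd" leaves under the bucket with the given name.

    `tree` is the parsed JSON tree as printed above: a dict with a "nodes"
    list, where each node may carry a nested "children" list.
    """
    def collect_osds(node):
        # An "osd" node is a leaf; otherwise gather from all children.
        if node.get("type") == "osd":
            return [node["id"]]
        ids = []
        for child in node.get("children", []):
            ids.extend(collect_osds(child))
        return ids

    def find_bucket(node):
        # Depth-first search for the bucket by name.
        if node.get("name") == bucket_name:
            return node
        for child in node.get("children", []):
            found = find_bucket(child)
            if found is not None:
                return found
        return None

    for root in tree["nodes"]:
        bucket = find_bucket(root)
        if bucket is not None:
            return collect_osds(bucket)
    return []
```

For the tree above, both `osds_in_bucket(tree, "rex001")` and `osds_in_bucket(tree, "default")` return `[0, 1, 2]`.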

#2 Updated by Vikhyat Umrao over 3 years ago

Hey Kefu,

Thanks for your reply. Is the ceph node ls command available from Hammer onwards? It is not in Firefly, am I right?

Regards,
Vikhyat

#3 Updated by Kefu Chai over 3 years ago

  • Category changed from ceph cli to Monitor

#4 Updated by Kefu Chai over 3 years ago

  • Status changed from New to Verified
  • Assignee changed from Vikhyat Umrao to Kefu Chai

Thanks for your reply. Is the ceph node ls command available from Hammer onwards? It is not in Firefly, am I right?

"ceph node ls" is not in hammer or firefly.

Per the discussion with Vikhyat and Joao, the command we will go with is "osd crush", and it will print out the tree structure shown in comment #1.

#5 Updated by Vikhyat Umrao over 3 years ago

<vikhyat> kefu: Hey
<vikhyat> kefu: How are you doing ?
<kefu> vikhyat: hi
<kefu> i am replying on http://tracker.ceph.com/issues/11833
<kefu> vikhyat: done
<kefu> vikhyat: good =)
<vikhyat> kefu: great :)
<vikhyat> kefu: here we wanted to have a command which can list the crush bucket
<kefu> my q is 1) what the host is, is it a FQDN, or the "host" in crush map. 2) if it is the former, probably we can use "node ls osd" 3) otherwise we need to print the crush map in a tree.
<kefu> maybe our user can leverage some tools like "jq" to query the returned JSON output.
<vikhyat> kefu: okay
<vikhyat> kefu: use case is very simple
<kefu> it's more flexible to have a tree IMO.
<kefu> so what we need is simply all the OSD id in a bucket.
<vikhyat> kefu: we need to list the buckets till leaf
<kefu> "ceph node ls" is not in firefly.
<vikhyat> kefu: ohh okay
<kefu> i see.
<vikhyat> so let us say if I have dc in root and if I will provide "ceph osd tree --bucket dc"
<vikhyat> it should print till leaf means last osd
<kefu> so i will make sure there is a way to query all leaf nodes whose type is "osd" in a given node whose name is "dc" and type is "host".
<vikhyat> or if I will give "ceph osd tree --bucket <crush_hostname>" it should list all osd in that host
<vikhyat> nope buckt is type dc
<vikhyat> so it would be like
<kefu> okay.
<vikhyat> dc->rack->host->osd
<kefu> i don't what to hard code the "host" in this command .
<vikhyat> right we should not
<vikhyat> so if we print currently ceph osd tree it gives all the data of crush
<kefu> so, the simplest way i have in my mind, would be print out the osd tree in a "tree" in the way i put in http://tracker.ceph.com/issues/11833#note-1, are you good with this approach?
<kefu> yes.
<kefu> not exactly though,
<vikhyat> it can have 4 dcs 12 racks (3 racks per dc) and 30 hosts (10 hosts per rack) and 300 (10 osd per host) so this output becomes so big to see in one screen
<kefu> "ceph osd tree" prints all OSD nodes in crush.
<loicd> morning Ceph !
<kefu> and the ones alive but not in the crush map, that what "stray" for.
<vikhyat> so what we want to do let us say if I want to print only one dc
<kefu> s/that/that's/
<kefu> morning loicd.
<vikhyat> morning locid !
<loicd> \o
<vikhyat> kefu: okay
<kefu> vikhyat: but we can filter it using jq or any other script which is able to handle json.
<vikhyat> kefu: with ceph osd tree or with ceph node ls
<kefu> vikhyat: with "osd tree" i suppose.
<kefu> "ceph node ls" does not understand crush.
<vikhyat> kefu: okay
<kefu> ( i will update the ticket with our discussion once we are all good )
<vikhyat> kefu: that would be great
<vikhyat> kefu: but I was thinking is it adding a command line option would be a good idea
<vikhyat> kefu: means then we do not have to use any extra script nothing
<vikhyat> kefu: it just simple pass that option to ceph osd tree
<vikhyat> ceph osd tree --bucket <dc> or rack or host
<kefu> vikyat: i agree it won't be difficult to add such a command.
<kefu> vikhyat
<kefu> ^
<vikhyat> kefu: right as we a customer who is looking for this
<vikhyat> have*
<jluis> i think that if you want the crush bucket then this must be performed on the monitor; if you want actual hosts, this could be wrapped on the ceph tool
<jluis> although traditionally we haven't delegated that sort of thing to the ceph tool
<kefu> jluis: seems vkhyat is looking for a host in the sense of type="host" in crush map.
<jluis> yeah, monitor it should be then
<jluis> I recommend changing the category to Monitor on the ticket then
<kefu> yup.
<kefu> will do.
<vikhyat_> joao: thanks yup kefu has changed it

<kefu> jluis: thanks. will add an option to let "osd tree" to print out the tree structure of a crush map
<kefu> joao
<diurchenko> Hi, guys I have problems with building Calamari Clients packages :-(
<diurchenko> Well for now when I try to build packages from Romana, I get Command "git clone /git/calamari-clients '/home/vagrant/clients' " failed. Stderr: "fatal: repository '/git/calamari-clients'
<diurchenko> But also had problems with calamari-clients repo
<loicd> diurchenko: morning !
<joao> kefu, outputting the tree structure of the crushmap would probably be better done under 'osd crush' instead of 'osd tree'
<diurchenko> loicd, Hi ^

<joao> I guess the way we accomplish this depends on what we want to get out of it
<loicd> diurchenko: thanks :-D
<joao> if we want to show the crush location for the osds, then 'osd tree' may be the place; but if we want to output all osds in the crushmap in a formatter manner, then 'osd crush' would be the place
<joao> kefu, ^
<joao> s/formatter/formatted/
<kefu> joao: we need to enumerate the osds under a certain bucket.
<kefu> joao: makes sense. "osd crush" then.
<joao> yes, I think in this case 'osd crush' is the right place to have this
<joao> brb
<kefu> \o
<kefu> vikhyat: seems we will use "osd crush" for this.
<kefu> vikhyat ^

#6 Updated by Kefu Chai over 3 years ago

  • Status changed from Verified to In Progress

#7 Updated by Kefu Chai over 3 years ago

To print all OSD ids under a bucket whose name is "default" and type is "root":

$ ceph osd crush dump tree --format=xml-pretty | xmlstarlet sel -t -m "//item[type='root' and name='default']//item[type='osd']/id" -v . -n
0
1
2

#8 Updated by Kefu Chai over 3 years ago

  • Status changed from In Progress to Need Review
  • % Done changed from 0 to 50

#9 Updated by Kefu Chai over 3 years ago

  • Status changed from Need Review to Resolved
  • Source changed from other to Q/A

#10 Updated by Kefu Chai over 3 years ago

  • Backport set to hammer

#11 Updated by Kefu Chai over 3 years ago

  • Priority changed from Normal to High

#12 Updated by Kefu Chai over 3 years ago

  • Status changed from Resolved to Pending Backport

#13 Updated by Loic Dachary about 3 years ago

  • Status changed from Pending Backport to Resolved
