Feature #45203


OSD Spec: allow filtering via explicit hosts and labels

Added by Nathan Cutler about 4 years ago. Updated almost 4 years ago.

Status: Resolved
Priority: High
Assignee: Joshua Schmid
Category: cephadm
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID: 34860

Description

How to reproduce:

1. Bootstrap a single-node cluster on a machine with 4 free disks, and then run "ceph versions" to verify that you're using the most recent code from master (or octopus):

master:~ # ceph versions
{
    "mon": {
        "ceph version 16.0.0-907-gb2a545aa43 (b2a545aa4308e3d2c1d680b410e452198ca14b71) pacific (dev)": 1
    },
    "mgr": {
        "ceph version 16.0.0-907-gb2a545aa43 (b2a545aa4308e3d2c1d680b410e452198ca14b71) pacific (dev)": 1
    },
    "osd": {},
    "mds": {},
    "overall": {
        "ceph version 16.0.0-907-gb2a545aa43 (b2a545aa4308e3d2c1d680b410e452198ca14b71) pacific (dev)": 2
    }
}

2. Create service_spec_osd.yml like so:

    master: ++ cat service_spec_osd.yml
    master: service_type: osd
    master: placement:
    master:     hosts:
    master:         - 'master'
    master: service_id: generic_osd_deployment
    master: data_devices:
    master:     all: True

3. Run the following commands and get the output shown:

    master: ++ ceph orch device ls --refresh
    master: HOST    PATH      TYPE   SIZE  DEVICE  AVAIL  REJECT REASONS  
    master: master  /dev/vdb  hdd   8192M  690903  True                   
    master: master  /dev/vdc  hdd   8192M  641806  True                   
    master: master  /dev/vdd  hdd   8192M  723194  True                   
    master: master  /dev/vde  hdd   8192M  944878  True                   
    master: master  /dev/vda  hdd   42.0G          False  locked          
    master: ++ ceph orch apply osd -i service_spec_osd.yml
    master: Error EINVAL: Traceback (most recent call last):
    master:   File "/usr/share/ceph/mgr/mgr_module.py", line 1157, in _handle_command
    master:     return self.handle_command(inbuf, cmd)
    master:   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
    master:     return dispatch[cmd['prefix']].call(self, cmd, inbuf)
    master:   File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
    master:     return self.func(mgr, **kwargs)
    master:   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
    master:     wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
    master:   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 63, in wrapper
    master:     return func(*args, **kwargs)
    master:   File "/usr/share/ceph/mgr/orchestrator/module.py", line 574, in _apply_osd
    master:     completion = self.apply_drivegroups(dg_specs)
    master:   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1537, in inner
    master:     completion = self._oremote(method_name, args, kwargs)
    master:   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1608, in _oremote
    master:     return mgr.remote(o, meth, *args, **kwargs)
    master:   File "/usr/share/ceph/mgr/mgr_module.py", line 1519, in remote
    master:     args, kwargs)
    master: RuntimeError: Remote method threw exception: Traceback (most recent call last):
    master:   File "/usr/share/ceph/mgr/cephadm/module.py", line 559, in wrapper
    master:     return AsyncCompletion(value=f(*args, **kwargs), name=f.__name__)
    master:   File "/usr/share/ceph/mgr/cephadm/module.py", line 2103, in apply_drivegroups
    master:     return [self._apply(spec) for spec in specs]
    master:   File "/usr/share/ceph/mgr/cephadm/module.py", line 2103, in <listcomp>
    master:     return [self._apply(spec) for spec in specs]
    master:   File "/usr/share/ceph/mgr/cephadm/module.py", line 2788, in _apply
    master:     get_daemons_func=self.cache.get_daemons_by_service,
    master:   File "/usr/share/ceph/mgr/cephadm/module.py", line 3568, in validate
    master:     self.spec.validate()
    master:   File "/usr/lib/python3.6/site-packages/ceph/deployment/drive_group.py", line 245, in validate
    master:     raise DriveGroupValidationError('host_pattern must be of type string')
    master: ceph.deployment.drive_group.DriveGroupValidationError: Failed to validate Drive Group: host_pattern must be of type string
    master: 
#1

Updated by Nathan Cutler about 4 years ago

  • Subject changed from OSD deployment from service spec still broken to OSD deployment from service spec using "hosts" placement still broken
#2

Updated by Sebastian Wagner about 4 years ago

  • Priority changed from Normal to Urgent
#3

Updated by Sebastian Wagner almost 4 years ago

  • Tracker changed from Bug to Feature
  • Subject changed from OSD deployment from service spec using "hosts" placement still broken to OSD Spec: allow filtering via explicit hosts and labels
  • Category set to cephadm
  • Priority changed from Urgent to High

The error is technically correct, because of

https://github.com/ceph/ceph/blob/e2c8d49906e11650945fafef92296a9dfcb6592d/src/pybind/mgr/cephadm/module.py#L2102
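The validation only knows how to check a string host_pattern, so a placement expressed as an explicit host list (or a label) trips that check. A minimal sketch of the kind of check behind the error message above, for illustration only and not the actual ceph/deployment/drive_group.py code:

class DriveGroupValidationError(Exception):
    pass


def validate_host_pattern(host_pattern) -> None:
    # Only a plain string pattern is accepted; anything else is rejected,
    # which is what happens when the placement carries a list of hosts.
    if host_pattern is not None and not isinstance(host_pattern, str):
        raise DriveGroupValidationError(
            'Failed to validate Drive Group: host_pattern must be of type string')


validate_host_pattern('master*')     # fine: a string pattern
validate_host_pattern(['master'])    # raises DriveGroupValidationError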

Changing this to a feature: we also have to allow other placement types, i.e. refactor something like

https://github.com/ceph/ceph/blob/e2c8d49906e11650945fafef92296a9dfcb6592d/src/pybind/mgr/cephadm/module.py#L1800-L1805

into a

class Placement:
    def filter_matching_hosts(self, all_hosts: List[HostSpec]) -> List[HostSpec]:
        if self.hosts:
            return [h for h in all_hosts if h.hostname in self.hosts]
        elif self.label:
            return ...
        elif self.host_pattern:
            return ...
#4

Updated by Nathan Cutler almost 4 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Joshua Schmid
  • Pull request ID set to 34860
#5

Updated by Joshua Schmid almost 4 years ago

  • Status changed from Fix Under Review to Resolved