Feature #44205
Feature #43962 (closed): cephadm: Make mgr/cephadm declarative
cephadm: push/apply config.yml
Description
Having a push/apply-config option would enable us to define multiple services/daemons before the actual deployment.
This may be helpful for one-button deployments such as POCs or reproducer environments.
For the technical side:
We have to define a structure that the orchestrator understands.
Since we have ServiceSpec[0] and PlacementSpec[1], which can parse from_json/yaml(),
this shouldn't be too hard.
After parsing the ServiceSpecs we have to call the corresponding add_$component()
functions in order (a rough sketch follows the list below). We have to take care that:
- We execute in the right order (no services that create pools before OSDs exist)
- and respect dependencies (MDS before NFS/RGW if specified)
- aggregate the completion objects and wait/track progress if async
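For illustration only, a minimal sketch of what such an apply path could look like. The helper name apply_all, the DEPLOY_ORDER list, and the add_<service_type>() dispatch are assumptions for this ticket, not the actual cephadm code; the real parsing would go through the existing ServiceSpec/PlacementSpec classes:

# Hypothetical helper: parse a multi-document yaml and call the matching
# add_<service_type>() handlers in dependency order, collecting completions.
import yaml

# Assumed ordering so that e.g. OSDs exist before services that create pools.
DEPLOY_ORDER = ['mon', 'mgr', 'osd', 'mds', 'rgw', 'nfs']

def apply_all(config_text, orch):
    # Every "---"-separated document becomes one service spec.
    specs = [doc for doc in yaml.safe_load_all(config_text) if doc]
    specs.sort(key=lambda s: DEPLOY_ORDER.index(s['service_type']))
    completions = []
    for spec in specs:
        handler = getattr(orch, 'add_' + spec['service_type'])  # e.g. add_rgw
        completions.append(handler(spec))
    # The aggregated completion objects let the caller wait or track progress.
    return completions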
The config will probably be saved in the persistent mon store and should be inspectable
example config.yaml:
service_type: mon
placement:
  count: 1   # optional (mutex with hosts)
  label: foo # optional (mutex with hosts)
  hosts:     # optional (mutex with label and count)
    - 'hostname1:ip/CIDR/addr_vec=optional_name1'  # an example for a HostSpec
    - 'hostname2:ip/CIDR/addr_vec=optional_name2'
    - 'hostname3:ip/CIDR/addr_vec=optional_name3'
---
service_type: osd
spec:
  drivegroups:
    default_drivegroup:
      host_pattern: foo*
      data_devices:
        all: True
---
service_type: rgw   # more rgw_custom entries if more than one realm/zone
placement:
  count: 1   # optional (mutex with hosts)
  label: foo # optional (mutex with hosts)
  hosts:     # optional (mutex with label and count)
    - x
    - y
    - z
spec:
  rgw_realm: realm1
  rgw_zone: zone1
# Similar options for mds/nfs/mgr etc.
CLI:
ceph <orch> <config> apply -i config.yaml
ceph <orch> <config> show
The actual syntax is totally up for discussion. Please leave your suggestions in the comments.
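As a rough illustration of how such a command could be wired into an mgr module (the command prefixes, the config-key name, and the surrounding class are placeholders, since the syntax is still open):

# Hypothetical handle_command() body inside an MgrModule subclass; the file
# passed with "-i config.yaml" arrives as the module's input buffer.
import errno
import yaml
from mgr_module import HandleCommandResult

def handle_command(self, inbuf, cmd):
    if cmd['prefix'] == 'orch config apply':
        specs = [doc for doc in yaml.safe_load_all(inbuf) if doc]
        # Keep the raw yaml in the mon/mgr store so it stays inspectable.
        self.set_store('config/specs', inbuf)
        return HandleCommandResult(stdout='Scheduled %d spec(s)' % len(specs))
    if cmd['prefix'] == 'orch config show':
        return HandleCommandResult(stdout=self.get_store('config/specs') or '')
    return HandleCommandResult(-errno.EINVAL, stderr='unknown command')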
Updated by Joshua Schmid about 4 years ago
- Category set to cephadm
- Target version set to v15.0.0
- Source set to Community (dev)
Updated by Sebastian Wagner about 4 years ago
hm, what about not inventing a new schema here, and instead simply concatenating the service specs for all types?
Like adding a service_type to ServiceSpec in
https://github.com/ceph/ceph/blob/b24230a74bf92eeb0dfabb3ed9efae0d7e814b0f/src/pybind/mgr/orchestrator/_interface.py#L1338
and then decoding each individual spec independently?
service_type: mon
placement:
  count: 1   # optional (mutex with hosts)
  label: foo # optional (mutex with hosts)
  hosts:     # optional (mutex with label and count)
    - x
    - y
    - z
---
service_type: osd
spec:
  drivegroups:
    default_drivegroup:
      host_pattern: foo*
      data_devices:
        all: True
---
service_type: rgw   # more rgw_custom entries if more than one realm/zone
placement:
  count: 1   # optional (mutex with hosts)
  label: foo # optional (mutex with hosts)
  hosts:     # optional (mutex with label and count)
    - x
    - y
    - z
spec:
  rgw_realm: realm1
  rgw_zone: zone1
# Similar options for mds/nfs/mgr etc.
This would create a direct relationship between the yaml definitions and the ServiceSpec class!
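A minimal sketch of that idea; the stripped-down class below is only a stand-in for the real ServiceSpec in _interface.py, and parse_specs() is a hypothetical helper:

# Each yaml document carries its own service_type, so every document can be
# decoded into a spec independently of the others.
import yaml

class ServiceSpec(object):
    def __init__(self, service_type, placement=None, spec=None):
        self.service_type = service_type
        self.placement = placement
        self.spec = spec

    @classmethod
    def from_json(cls, data):
        return cls(data['service_type'],
                   placement=data.get('placement'),
                   spec=data.get('spec'))

def parse_specs(config_text):
    # "---"-separated yaml maps directly onto a list of ServiceSpec objects.
    return [ServiceSpec.from_json(doc)
            for doc in yaml.safe_load_all(config_text) if doc]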
Updated by Joshua Schmid about 4 years ago
Sebastian Wagner wrote:
hm, what about not inventing a new schema here? and instead simply concatenate the service specs for all types?
[..snip..]
This would create a direct relationship between the yaml definitions and the ServiceSpec class!
Even better. I'll migrate to your format in the description.
Updated by Sebastian Wagner about 4 years ago
- Status changed from New to In Progress
Updated by Sebastian Wagner about 4 years ago
To sum up our discussion from Friday:
- What about doing all calls synchronously and only returning async completions from the orch interface?
- Have apply_specs(), called from serve(), be the only way to make changes to the cluster.
- The CLI returns as soon as the change is successfully and persistently scheduled, not necessarily completed (a rough sketch of this flow follows below).
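A rough sketch of that flow, assuming MgrModule-style set_store()/get_store() persistence; the class, the _apply_specs() helper, and the loop details are illustrative only:

# Persist the desired specs, return immediately, and let the background
# serve() loop converge the cluster towards them.
import yaml

class DeclarativeOrchestrator(object):  # stand-in for the cephadm mgr module
    def apply_service_specs(self, yaml_text):
        # Persist the desired state and return right away: the CLI only waits
        # for the change to be scheduled, not for the daemons to be deployed.
        self.set_store('config/specs', yaml_text)
        return 'Scheduled service specs'

    def serve(self):
        # serve() is the only place that actually changes the cluster: it
        # repeatedly re-reads the persisted specs and converges towards them.
        while self.run:
            stored = self.get_store('config/specs') or ''
            specs = [d for d in yaml.safe_load_all(stored) if d]
            self._apply_specs(specs)  # dispatch to the add_*() handlers
            self.event.wait(30)       # then re-check periodically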
Updated by Joshua Schmid about 4 years ago
- Status changed from In Progress to Fix Under Review
- Pull request ID set to 33553
Updated by Sage Weil about 4 years ago
- Status changed from Fix Under Review to Resolved