Feature #47145
cephadm: Multiple daemons of the same service on single host
Status: Closed
Description
ceph orch apply rgw myorg us-east-1 --placement="2 myhost1 myhost1"
We use multiple replicas to make a service HA; having them on the same host provides little benefit. An exception is a ceph-nano-style deployment, where you need multiple MGRs on a single host in order to upgrade them properly.
Adding this would be great, but it is low priority.
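For reference, the CLI placement above expressed as a service spec might look like the sketch below (the duplicated myhost1 entry mirrors the CLI argument; the service_id formatting is an assumption):

service_type: rgw
service_id: myorg.us-east-1
placement:
  count: 2
  hosts:
  - myhost1
  - myhost1   # listed twice: two daemons on the same host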
Updated by Sebastian Wagner over 3 years ago
- Related to Bug #44910: cephadm: PlacementSpec host1:192.168.0.2,host1:192.168.0.2 added
Updated by Sebastian Wagner over 3 years ago
- Has duplicate Feature #48114: Cephadm to support Adding multiple instances of RGW in same node for 5.0 release added
Updated by Sebastian Wagner about 3 years ago
- Blocked by Feature #48822: Add proper port management to mgr/cephadm added
Updated by Sebastian Wagner about 3 years ago
In order to co-locate daemons, we have to use different ports for those new daemons.
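As a sketch of why (using the real rgw_frontend_port spec field; the actual port-management scheme is what #48822 covers): two co-located rgw daemons cannot share one frontend port, so cephadm has to hand the second daemon a different one.

service_type: rgw
service_id: realm.zone
placement:
  hosts:
  - host1
  - host1        # co-located second daemon
spec:
  rgw_frontend_port: 8000   # first daemon binds 8000; the second needs e.g. 8001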
Updated by Sebastian Wagner about 3 years ago
service_type: rgw
service_id: realm.zone
placement:
  label: rgw
  count: 3
  allow-co-located: true
alternatively:
service_type: rgw
service_id: realm.zone
placement:
  hosts:
  - host1:1.2.3.0/24=name
  - host1:1.2.3.0/24=name
  - host2
  - host2
or
Edit: Doesn't work, as it breaks the possibility to name individual daemons
service_type: rgw
service_id: realm.zone
placement:
  hosts:
  - host1,count=8  # add "count" to host placement spec
or
Edit: IMO daemons-per-host is in conflict with count. And I don't see a clear use case except for rgw.py
service_type: rgw
service_id: realm.zone
placement:
  label: rgw
  count: 3
  daemons-per-host: 8
or
Edit: IMO too complicated
placement:
- label: rgw-big
  count-per-host: 8
  count: 24
- label: rgw-small
  count: 2
- hosts: host1
  names: a,b,c
  count-per-host: 3
- label: foo
  count: 3
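Whichever syntax wins, the spec would be applied like any other service spec, e.g.:

ceph orch apply -i rgw-spec.yaml   # rgw-spec.yaml is a placeholder filename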
Updated by Sebastian Wagner about 3 years ago
- Blocks Tasks #49490: cephadm additions/changes to support everything rgw.py needs added
Updated by Sebastian Wagner about 3 years ago
Kubernetes:
The YAML snippet below for the web-server Deployment has podAntiAffinity and podAffinity configured. This informs the scheduler that all of its replicas are to be co-located with pods that have the selector label app=store, and it also ensures that no two web-server replicas are co-located on a single node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.16-alpine
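For comparison, the cephadm analogue of the podAntiAffinity rule above would be a per-host cap in the placement spec; a sketch, reusing the count-per-host style key from the proposals above (the final syntax is whatever lands via the pull request below):

service_type: rgw
service_id: realm.zone
placement:
  label: rgw
  count: 3
  count-per-host: 1   # at most one daemon per host, like the anti-affinity rule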
Updated by Sebastian Wagner about 3 years ago
- Status changed from New to Fix Under Review
- Assignee set to Sage Weil
- Pull request ID set to 39979
Updated by Sebastian Wagner about 3 years ago
- Status changed from Fix Under Review to Closed