Bug #43802
Updated by Sebastian Wagner about 4 years ago
After the deployment of Prometheus using
<pre>
CEPHADM_IMAGE='prom/prometheus:latest' cephadm deploy --name prometheus.myhost.com --fsid 93c29e18-309f-11ea-83a2-52540028a9f3 --config-json prometheus.json
</pre>
the resulting systemd service refused to start. There was no indication of a failure on the command line.
Executing the unit file directly made the following problem visible:
<pre>
admin:/var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3 # sh /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/unit.run
Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap.
Error: error creating container storage: the container name "ceph-93c29e18-309f-11ea-83a2-52540028a9f3-prometheus.admin.com" is already in use by "d6f4ab08d9886a9b753c5d2462c15b093df8c88023bb752ad7720d0094c8af0b". You have to remove that container to be able to reuse that name.: that name is already in use
admin:/var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3 #
</pre>
Removing the RAM restriction (--memory 4GB) from the unit.run file made the container start successfully. I changed
<pre>
admin:~ # cat /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/unit.run
/usr/bin/podman run --rm --net=host --user 65534 --cpus 2 --memory 4GB --name ceph-93c29e18-309f-11ea-83a2-52540028a9f3-prometheus.admin.com -e CONTAINER_IMAGE=prom/prometheus:latest -e NODE_NAME=admin -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/etc/prometheus:/etc/prometheus:Z -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/data:/prometheus:Z prom/prometheus:latest --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.listen-address=:9095
admin:~ #
</pre>
to
<pre>
admin:~ # cat /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/unit.run
/usr/bin/podman run --rm --net=host --user 65534 --cpus 2 --name ceph-93c29e18-309f-11ea-83a2-52540028a9f3-prometheus.admin.com -e CONTAINER_IMAGE=prom/prometheus:latest -e NODE_NAME=admin -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/etc/prometheus:/etc/prometheus:Z -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/data:/prometheus:Z prom/prometheus:latest --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.listen-address=:9095
admin:~ #
</pre>
Is there a way to detect this restriction before executing podman?
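One possible pre-flight check, sketched below (this is not part of cephadm; the function name is made up for illustration): on cgroup v1 hosts, podman's "does not support swap limit capabilities" warning corresponds to the file memory.memsw.limit_in_bytes being absent from the memory cgroup, typically because the kernel was booted without swapaccount=1. A script could test for that file before invoking podman with --memory. Note this covers only the cgroup v1 layout; unified cgroup v2 hosts expose swap limits differently (memory.swap.max).

```shell
#!/bin/sh
# Hypothetical pre-flight check, not part of cephadm: detect whether the
# kernel exposes cgroup v1 swap accounting before passing --memory to podman.
swap_limit_supported() {
    # The cgroup root is a parameter so the check can be exercised against
    # a test directory; the default is the usual cgroup v1 mount point.
    cgroup_root="${1:-/sys/fs/cgroup/memory}"
    [ -f "$cgroup_root/memory.memsw.limit_in_bytes" ]
}

if swap_limit_supported; then
    echo "kernel supports swap limit capabilities"
else
    echo "no swap accounting; podman --memory will warn (swapaccount=1 missing?)"
fi
```

With such a check, the deploy path could either drop the --memory flag or surface a warning to the user instead of failing silently inside the systemd unit.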
Relates to https://github.com/SUSE/sesdev/issues/59