
Bug #43802

cephadm: error creating container: "Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."

Added by Sebastian Wagner 10 months ago. Updated 10 months ago.

Status:
Resolved
Priority:
High
Assignee:
-
Category:
cephadm
Target version:
-
% Done:

0%

Source:
Development
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

After the deployment of Prometheus using

CEPHADM_IMAGE='prom/prometheus:latest' cephadm deploy --name prometheus.myhost.com --fsid 93c29e18-309f-11ea-83a2-52540028a9f3  --config-json prometheus.json

the resulting systemd service refused to start. There was no indication of a failure on the command line.

By running the unit.run file directly, the following problem became visible:

admin:/var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3 # sh /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/unit.run
Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap.
Error: error creating container storage: the container name "ceph-93c29e18-309f-11ea-83a2-52540028a9f3-prometheus.admin.com" is already in use by "d6f4ab08d9886a9b753c5d2462c15b093df8c88023bb752ad7720d0094c8af0b". You have to remove that container to be able to reuse that name.: that name is already in use
admin:/var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3 #
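
The second error is a follow-on symptom: the earlier failed start left a container registered under the service's name, so later attempts cannot reuse that name. A possible manual cleanup (a sketch; the container name is copied from the error above, and the check for podman is only there to make the script safe to run anywhere):

```shell
#!/bin/sh
# Remove the stale container left behind by the failed run so the
# unit can recreate it under the same name. NAME is taken from the
# "already in use" error message above.
NAME=ceph-93c29e18-309f-11ea-83a2-52540028a9f3-prometheus.admin.com
if command -v podman >/dev/null 2>&1; then
    podman rm "$NAME"
fi
```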

Removing the RAM restriction (--memory 4GB) from the unit.run file resolved the issue. I changed

admin:~ # cat /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/unit.run
/usr/bin/podman run --rm --net=host --user 65534 --cpus 2 --memory 4GB --name ceph-93c29e18-309f-11ea-83a2-52540028a9f3-prometheus.admin.com -e CONTAINER_IMAGE=prom/prometheus:latest -e NODE_NAME=admin -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/etc/prometheus:/etc/prometheus:Z -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/data:/prometheus:Z prom/prometheus:latest --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.listen-address=:9095
admin:~ # 


to
admin:~ # cat /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/unit.run
/usr/bin/podman run --rm --net=host --user 65534 --cpus 2 --name ceph-93c29e18-309f-11ea-83a2-52540028a9f3-prometheus.admin.com -e CONTAINER_IMAGE=prom/prometheus:latest -e NODE_NAME=admin -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/etc/prometheus:/etc/prometheus:Z -v /var/lib/ceph/93c29e18-309f-11ea-83a2-52540028a9f3/prometheus.admin.com/data:/prometheus:Z prom/prometheus:latest --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.listen-address=:9095
admin:~ # 

Is there a way to detect this restriction before executing podman?
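
One way to detect this up front (a sketch, assuming the cgroup v1 layout of the kernels above): swap accounting is only available when the kernel exposes the memory.memsw.* control files, so checking for that file before passing --memory would predict the warning.

```shell
#!/bin/sh
# Hypothetical pre-flight check: on cgroup v1, memory.memsw.* files
# exist only when swap accounting is enabled in the kernel
# (CONFIG_MEMCG_SWAP, typically plus swapaccount=1 on the kernel
# command line). If the file is missing, podman's --memory flag
# triggers the "Memory limited without swap" warning.
if [ -e /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes ]; then
    echo "swap limit capabilities supported"
else
    echo "swap accounting disabled: --memory applies without swap limits"
fi
```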

Relates to https://github.com/SUSE/sesdev/issues/59

History

#1 Updated by Sebastian Wagner 10 months ago

  • Description updated (diff)

#2 Updated by Sebastian Wagner 10 months ago

  • Status changed from New to Resolved
  • Pull request ID set to 33133
