Feature #47274

cephadm: make the container_image setting available to the cephadm binary independent of any deployed daemons (maybe ceph.conf?)

Added by Sebastian Wagner over 3 years ago. Updated almost 2 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
cephadm
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

The problem is:

If someone calls

cephadm shell -- ceph -s

and no daemon was ever deployed on this host, cephadm will pull the default image that is hard-coded
in cephadm, even if this was overridden in the config store.

In order to not depend on downstream tools to pre-pull the image on every host,
mgr/cephadm should add

[client.cephadm]
container_image = registry/image:tag

to the host's /etc/ceph/ceph.conf and also let cephadm use the container_image as written to the
ceph.conf, thus making sure we pull the correct image.

wdyt?
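The proposal above could be sketched roughly as follows. This is a minimal illustration, not cephadm's actual implementation: the helper name and the fallback image value are hypothetical, and it assumes ceph.conf is parseable as INI (which its format closely resembles).

```python
import configparser

# Illustrative stand-in for the image hard-coded in the cephadm binary.
DEFAULT_IMAGE = 'quay.io/ceph/ceph:v16'


def resolve_container_image(conf_path='/etc/ceph/ceph.conf'):
    """Return the container image from [client.cephadm] in ceph.conf,
    falling back to the built-in default when no override is present."""
    parser = configparser.ConfigParser()
    read_ok = parser.read(conf_path)  # returns [] if the file is missing
    if read_ok and parser.has_option('client.cephadm', 'container_image'):
        return parser.get('client.cephadm', 'container_image')
    return DEFAULT_IMAGE
```

With a ceph.conf containing the [client.cephadm] section shown above, `cephadm shell` would resolve `registry/image:tag` instead of the hard-coded default, even on a host with no deployed daemons.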


Related issues (3): 1 open, 2 closed

Related to Orchestrator - Bug #46561: cephadm: monitoring services adoption doesn't honor the container image (New)

Related to Orchestrator - Feature #45111: cephadm: choose distribution specific images based on etc/os-releaes (Rejected)

Related to Orchestrator - Bug #50502: cephadm pull doesn't get latest image (Closed)
Actions #1

Updated by Sebastian Wagner over 3 years ago

  • Related to Bug #46561: cephadm: monitoring services adoption doesn't honor the container image added
Actions #2

Updated by Sebastian Wagner over 3 years ago

  • Related to Feature #45111: cephadm: choose distribution specific images based on etc/os-releaes added
Actions #3

Updated by Sebastian Wagner almost 3 years ago

  • Related to Bug #50502: cephadm pull doesn't get latest image added
Actions #4

Updated by Jeff Layton almost 3 years ago

Seems reasonable. So what happens during a "cephadm pull"? I imagine:

  1. determine the new version
  2. set it in the config
  3. then do the pull and upgrade

...and then unwind the mess if something fails, of course.

Actions #5

Updated by Sebastian Wagner almost 3 years ago

Jeff Layton wrote:

Seems reasonable. So what happens during a "cephadm pull"? I imagine:

  1. determine the new version
  2. set it in the config
  3. then do the pull and upgrade

...and then unwind the mess if something fails, of course.

I wouldn't make cephadm pull that complicated. I'd simply pull the image that is configured in the config, because that is the image that is going to be used by future deployments on that host. Pulling a different image by default would be a bit strange.

If we want to fetch a new image, we could make ceph orch upgrade ... more elaborate.
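The simple behaviour described here could be sketched as below. The helper names are hypothetical and podman is assumed as the container engine; the point is only that the pull uses exactly the configured image, with no version discovery step.

```python
import subprocess

def build_pull_command(configured_image, engine='podman'):
    """Build the pull command for the image already configured on the host.
    The configured image is the one future deployments will use, so it is
    also the only one worth pre-pulling."""
    return [engine, 'pull', configured_image]


def cephadm_pull(configured_image):
    # Run the container engine; raises CalledProcessError on failure.
    subprocess.run(build_pull_command(configured_image), check=True)
```

Fetching a *newer* image would then remain the job of `ceph orch upgrade`, as suggested above.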

Actions #6

Updated by Sage Weil over 2 years ago

I'm not sure this is a good idea since container_image now references a specific digest. That would mean that 'cephadm pull' would be a no-op after the initial pull. Also, we don't currently update ceph.conf if the container image changes (e.g. on upgrade), so that would be another round of updates (though we could do this).

My sense is that it is pretty harmless now because cephadm in each stable series will pull the latest for that series, and having 'cephadm shell' use the latest pacific (or whatever) seems ok?

Actions #7

Updated by Sebastian Wagner about 2 years ago

  • Subject changed from cephadm: put the container_image setting into the deployed ceph.conf to cephadm: make the container_image setting available to the cephadm binary independent of any deployed daemons (maybe ceph.conf?)
Actions #8

Updated by Redouane Kachach Elhichou almost 2 years ago

The default image is still hard-coded, but it now points to 'quay.ceph.io/ceph-ci/ceph:master'.
