Activity
From 03/29/2021 to 04/27/2021
04/27/2021
- 06:52 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
- If you want to set monitoring stack container images during bootstrap by setting a config option like "mgr/cephadm/co...
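For illustration, such a config file passed to bootstrap might look like this (a sketch; the exact option names and image tags are assumptions following the mgr/cephadm/container_image_* pattern the report refers to):

```ini
# initial-ceph.conf, passed as: cephadm bootstrap --config initial-ceph.conf ...
[mgr]
mgr/cephadm/container_image_prometheus = quay.io/prometheus/prometheus:v2.18.1
mgr/cephadm/container_image_grafana = quay.io/ceph/ceph-grafana:6.7.4
```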
- 03:22 PM Feature #46827: cephadm: Pin OSDs to pmem modules connected to specific CPUs
- workaround: manually set the config option
- 02:57 PM Feature #44874 (Rejected): cephadm: add Filestore support
- Sort of too late by now. I'd still accept PRs for this
- 02:55 PM Feature #46044 (Fix Under Review): cephadm: Distribute admin keyring.
- 02:54 PM Feature #50236 (Rejected): cephadm: NFSv3
- 01:39 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
- Jeff Layton wrote:
> Seems reasonable. So what happens during a "cephadm pull"? I imagine:
>
> # determine the ne...
- 09:05 AM Bug #50535 (Resolved): add local cephadm bootstrap dev env.
- ...
- 09:04 AM Documentation #50534 (Resolved): docs: add full cluster purge
- 06:14 AM Bug #49506 (Resolved): cephadm: `cephadm ls` broken for SUSE's downstream alertmanager container
04/26/2021
- 04:37 PM Feature #50529 (Resolved): cephadm rm-cluster is also not resetting any disks that were used as osds
- see title.
should probably be an optional argument or something.
- 03:43 PM Bug #50364 (Pending Backport): cephadm: removing daemons from hosts in maintenance mode
- 03:24 PM Bug #50526 (Resolved): OSD massive creation: OSDs not created
- OSDs are not created when the drive group used to launch the osd creation affects a big number of OSDs (75 in my case)...
- 02:06 PM Bug #50524 (Resolved): placement spec: irritating error message if passed a string for count_per_...
- ...
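A minimal sketch (not cephadm's actual validator; the function name is hypothetical) of the kind of check that would replace the irritating message with a clear one:

```python
def parse_count_per_host(value):
    """Reject a quoted string like "2" where an integer is expected."""
    # bool is a subclass of int in Python, so exclude it explicitly
    if isinstance(value, bool) or not isinstance(value, int):
        raise ValueError(
            "count_per_host must be an integer, got %r (%s)"
            % (value, type(value).__name__)
        )
    return value
```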
- 08:25 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- Paul Cuzner wrote:
> Sebastian Wagner wrote:
> > A few problems:
> >
> > * *cephadm rm-cluster* only removes the...
04/24/2021
04/23/2021
- 09:33 PM Bug #50502: cephadm pull doesn't get latest image
- https://github.com/ceph/ceph/pull/39058 caused a subtle behavior change.
Previously, if we used a non-stable tag,...
- 01:52 PM Bug #50502: cephadm pull doesn't get latest image
- This is a tricky one!
Imagine you set...
- 01:49 PM Bug #50502 (Closed): cephadm pull doesn't get latest image
- I tried to do a "cephadm pull" this morning on my mini-cluster and it got v16.2.0. Dockerhub has v16.2.1 currently th...
- 02:46 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
- Seems reasonable. So what happens during a "cephadm pull"? I imagine:
# determine the new version
# set it in the...
- 01:56 PM Feature #45111 (Rejected): cephadm: choose distribution specific images based on /etc/os-release
- don't know. I'd like to avoid that complexity. Please reopen if you think this is a good idea.
- 12:22 PM Bug #50114 (Resolved): cephadm: upgrade loop on target_digests list mismatch
- 05:50 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- Sebastian Wagner wrote:
> A few problems:
>
> * *cephadm rm-cluster* only removes the cluster on the local host
...
04/22/2021
- 01:19 PM Bug #50444 (Pending Backport): host labels order is random
- 11:47 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- A few problems:
* *cephadm rm-cluster* only removes the cluster on the local host
* *mgr/cephadm* cannot remove t...
- 07:24 AM Support #48630: non-LVM OSD do not start after upgrade from 15.2.4 -> 15.2.7
- Sebastian Wagner wrote:
> I think you probably want to migrate to ceph-volume for now.
Hi Sebastian,
Thanks fo...
04/21/2021
- 09:18 PM Bug #50472 (Resolved): orchestrator doesn't provide a way to remove an entire cluster
- In prior toolchains like ceph-ansible, purging a cluster and returning a set of hosts to their original state was pos...
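For comparison, the per-host removal that does exist is roughly the following (a sketch; the fsid is a placeholder, and note it has to be repeated on every host by hand):

```
cephadm rm-cluster --fsid <fsid> --force
```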
- 02:58 PM Bug #47513 (Pending Backport): rook: 'ceph orch ps' does not show image and container id correctly
- 09:53 AM Support #49497: Cephadm fails to upgrade from 15.2.8 to 15.2.9
- Illya S. wrote:
> The error is still here with 15.2.10
>
> Stuck on 15.2.8
15.2.11 -- nothing changed
- 02:30 AM Bug #50443 (Fix Under Review): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
04/20/2021
- 07:44 PM Bug #50444 (Resolved): host labels order is random
- host labels are not stored in the order entered or a logical order like alphabetically. they stored in a randomized o...
- 07:30 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- If you have < 2 running mgr daemons then the upgrade won't work because there will be no mgr to fail over to.
If you...
- 09:45 AM Bug #49954 (Resolved): cephadm is not persisting the grafana.db file, so any local customizations...
04/19/2021
- 09:30 PM Bug #50306 (Fix Under Review): /etc/hosts is not passed to ceph containers. clusters that were re...
- 12:30 PM Bug #50401 (Pending Backport): cephadm: Daemons that don't use ceph image always marked as needin...
04/16/2021
- 04:48 PM Bug #50401 (Resolved): cephadm: Daemons that don't use ceph image always marked as needing upgrad...
- The upgrade check command checks the image id of each daemon against the image id for the image the user would like t...
- 03:47 PM Bug #50399: cephadm ignores registry settings
- I also can't seem to edit this, but this is on the latest octopus, 15.2.10
- 02:05 PM Bug #50399: cephadm ignores registry settings
- just to be clear this happened after I wanted to add an mds and did...
- 01:52 PM Bug #50399 (Can't reproduce): cephadm ignores registry settings
- even after setting mgr/cephadm/registry_user, mgr/cephadm/registry_password and mgr/cephadm/registry_url to a docker ...
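For reference, the options mentioned would be set along these lines (a sketch; values are placeholders):

```
ceph config set mgr mgr/cephadm/registry_url <registry-url>
ceph config set mgr mgr/cephadm/registry_user <username>
ceph config set mgr mgr/cephadm/registry_password <password>
```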
- 02:48 PM Bug #50369 (Pending Backport): mgr/volumes/nfs: drop type param during cluster create
- 02:48 PM Feature #49960 (Pending Backport): cephadm: put max on number of daemons in placement count based...
- 04:46 AM Bug #49737 (Resolved): cephadm bootstrap --skip-ssh skips too much
- 04:45 AM Feature #50361 (Resolved): cephadm: report on unexpected exception in upgrade loop
- 04:30 AM Bug #50102 (Pending Backport): spec jsons that expect a list in a field dont verify that a list w...
04/15/2021
- 04:27 PM Documentation #50362 (Duplicate): pacific curl-based-installation docs link to octopus binary
- 04:25 PM Documentation #49806 (Pending Backport): minor problems in cephadm docs
- 04:03 PM Feature #48624: ceph orch drain <host>
- todo. this could include:
Temporarily disable scrubbing
Limit backfill and recovery
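The listed items map roughly onto existing commands (a sketch of what a drain might run, not an implemented workflow; the tuning values are illustrative):

```
# temporarily disable scrubbing
ceph osd set noscrub
ceph osd set nodeep-scrub
# limit backfill and recovery
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
```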
- 09:45 AM Cleanup #50375 (Rejected): cephadm firewall: move to unit.run?
- Right now, firewall ports are opened when deploying a unit.
We should investigate whether the firewall could be config...
- 08:20 AM Bug #46097 (Won't Fix): package mode has a hardcoded ssh user
- let's remove the package mode
- 08:20 AM Tasks #46352 (Won't Fix): add leap support for cephadm
- feel free to reopen this!
- 08:19 AM Feature #44429 (Rejected): cephadm: make upgrade work with 'packaged' mode
- let's remove the package mode
- 08:18 AM Bug #48779 (Won't Fix): orchestrator provides no ceph-[mon,mgr,osd,mds,...].target equivalent
- let's encourage users to use ...
- 08:17 AM Bug #45973 (Rejected): Adopted MDS daemons are removed by the orchestrator because they're orphans
- fixed by both downstreams
- 08:17 AM Bug #48656 (Can't reproduce): cephadm botched install of ceph-fuse (symbol lookup error)
- 08:16 AM Feature #46651 (Rejected): cephadm: allow daemon/service restarts on a host basis
- that's probably the maintenance mode
- 12:29 AM Bug #50369: mgr/volumes/nfs: drop type param during cluster create
- Michael Fritch wrote:
> PR #37600 introduced support for both cephfs and rgw exports
> to be configured using a sin...
- 12:19 AM Bug #50369 (Resolved): mgr/volumes/nfs: drop type param during cluster create
- PR #37600 introduced support for both cephfs and rgw exports
to be configured using a single nfs-ganesha cluster.
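In CLI terms, the change would be roughly the following (a sketch; argument names are placeholders, and the pre-change syntax is my reading of the report):

```
# before: an export type had to be given
ceph nfs cluster create <type> <clusterid> [<placement>]
# after: the type param is dropped
ceph nfs cluster create <clusterid> [<placement>]
```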
04/14/2021
- 09:12 PM Bug #50364 (Fix Under Review): cephadm: removing daemons from hosts in maintenance mode
- 07:29 PM Bug #50364 (Resolved): cephadm: removing daemons from hosts in maintenance mode
- Right now, when applying services in the serve loop, we will try to remove all daemons that are on hosts in maintenan...
- 09:12 PM Feature #50361 (Fix Under Review): cephadm: report on unexpected exception in upgrade loop
- 03:39 PM Feature #50361 (Resolved): cephadm: report on unexpected exception in upgrade loop
- Right now, if an unexpected exception such as https://tracker.ceph.com/issues/50043 is to happen during the upgrade, ...
- 07:33 PM Bug #49910: cephadm | Creating initial admin user... | Please specify the file containing the pas...
- As per /u/lynxeur suggestion here https://www.reddit.com/r/ceph/comments/mi3asa/cephadm_on_ubuntu_2004/gufey25?utm_so...
- 06:13 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- @john I'm keeping the bug open and just changing the subject and providing more details on the real problem here
...
- 03:00 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- FWIW I see nothing wrong with closing this bug as invalid.
Unless you want to follow up on https://github.com/ceph...
- 05:52 PM Documentation #50362 (In Progress): pacific curl-based-installation docs link to octopus binary
- 05:39 PM Documentation #50362 (Duplicate): pacific curl-based-installation docs link to octopus binary
- the link in the curl command here https://docs.ceph.com/en/pacific/cephadm/install/#curl-based-installation currently...
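Presumably the pacific docs should point the curl at the pacific branch of the script rather than the octopus one, e.g. (the exact URL is an assumption):

```
curl --silent --remote-name --location \
    https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
```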
- 03:16 PM Feature #50360 (Resolved): Configure the IP address for Ganesha
- The nfs ganesha service is deployed using cephadm via the spec:
---...
- 03:05 PM Bug #50359: Configure the IP address for the monitoring stack components
- Not sure if this should be considered an RFE rather than a bug, but it is a must-have
for any deployment.
- 03:02 PM Bug #50359 (Resolved): Configure the IP address for the monitoring stack components
- When the dashboard is deployed using cephadm, a monitoring stack (node_exporter, prometheus, alertmanager, grafana) ...
- 02:34 PM Feature #45115 (New): cephadm: Deploy Ceph Dashboard behind a HAProxy instance
- 02:34 PM Feature #45115 (Resolved): cephadm: Deploy Ceph Dashboard behind a HAProxy instance
- 02:30 PM Bug #48939 (Can't reproduce): Orchestrator removes mon daemon from wrong host when removing host ...
- 02:29 PM Feature #43687 (Resolved): cephadm: haproxy (or lb)
- 02:27 PM Bug #48325 (Pending Backport): PlacementSpec: 'NoneType' object has no attribute 'copy'
- 02:27 PM Bug #49273 (Pending Backport): cephadm fails deployment of node-exporter when ipv6 is disabled
- 02:19 PM Feature #47711 (Resolved): mgr/cephadm: add a feature to examine the host facts to look for confi...
- 02:19 PM Feature #49407 (Resolved): Enable the ability of cephadm to trigger libstoragemgmt info from ceph...
- 02:19 PM Bug #49339 (Resolved): cephadm/osd: OSD draining: don't mark OSDs out
- 02:18 PM Bug #47916 (Resolved): podman containers running in a detached state do not output logs to journald
- 02:15 PM Bug #49436 (New): cephadm bootstrap fails to create /etc/ceph directory
- no time to look into this
- 02:12 PM Feature #49159 (Resolved): "cephadm ceph-volume activate" does not support cephadm
- 02:09 PM Bug #49889: mgr/orchestrator/_interface.py: ZeroDivisionError
- https://github.com/ceph/ceph/blob/ff97629375a4a4e82b79f0fdcdb25f411b74d48d/src/pybind/mgr/test_orchestrator/module.py...
- 02:08 PM Bug #49755 (Can't reproduce): OSD service is not found
- https://github.com/ceph/ceph/pull/40736
- 02:07 PM Bug #49724 (Resolved): fsid is not validated during accessing the shell through cli
- 02:04 PM Bug #49223 (Resolved): unrecognized arguments: --container-init
- 01:55 PM Bug #50267 (Pending Backport): rgw service can be deployed with realm and no zone or vice versa
- 01:49 PM Bug #46606 (New): cephadm: post-bootstrap monitoring deployment only works if the command "ceph m...
- 01:49 PM Bug #48597 (Resolved): pybind/mgr/cephadm: mds_join_fs not cleaned up
- 01:48 PM Bug #49675 (Resolved): ceph daemon 'reconfig' populates daemon cache with 'starting' state
- 01:46 PM Bug #49890 (Resolved): podman makes socket.getfqdn() return container name instead of hostname
- 01:45 PM Bug #50114 (Pending Backport): cephadm: upgrade loop on target_digests list mismatch
04/13/2021
- 04:56 AM Feature #48292: cephadm: allow more than 60 OSDs per host
- Sebastian Wagner wrote:
> If the cluster is set to have very dense nodes (>60 OSDs per host) please make sure to ass...
- 12:41 AM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- I confirm I could apply a spec on bootstrap. Thanks!
Conclusions:
- Ensure you have the fix for bug #50041
- Do ...
- 12:03 AM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- Wait, I think I can't apply it at bootstrap because I am currently missing the fix for bug #50041 (I had rolled it ba...
04/12/2021
- 11:59 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- If I use a spec with IPs then I can add my hosts after bootstrap [1] but not at bootstrap [2].
[1]... - 10:20 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- I was able to determine this was caused by the host name failing to resolve when trying to add hosts....
- 08:22 PM Bug #50306 (Resolved): /etc/hosts is not passed to ceph containers. clusters that were relying on...
- While using `cephadm bootstrap --apply-spec` to bootstrap a spec containing other hosts, cephadm attempts to set up S...
- 08:23 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- Because I'm having the same issue in Pacific let's use a new bug https://tracker.ceph.com/issues/50306
- 08:11 PM Bug #49277 (Duplicate): cephadm bootstrap --apply-spec <cluster.yaml> hangs
- 08:02 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- When using pacific with the fixing patch https://github.com/ceph/ceph/pull/40477 from Bug #50041, the deployment fail...
- 04:08 PM Bug #49910: cephadm | Creating initial admin user... | Please specify the file containing the pas...
- Conversation from another few users who were impacted by this issue: https://www.reddit.com/r/ceph/comments/mi3asa/ce...
- 10:05 AM Bug #50296 (Can't reproduce): Failed to remove OSD service
- I have some services created when adding OSDs from dashboard (I think):...
- 08:04 AM Bug #50295 (Closed): cephadm bootstrap mon container fails to start with podman 3.1 in CentOS 8 S...
- When attempting to bootstrap a container on CentOS stream after Appstream changed from podman version 3.0.0-0.33rc2.m...
04/09/2021
- 09:07 PM Documentation #50273 (Resolved): remove keepalived_user from haproxy docs
- keepalived_user is not used and not required
putting it in the spec results in an error
- 08:08 PM Bug #50272: cephadm: after downsizing mon service from 5 to 3 daemons, cephadm reports "stray" da...
- downsizing* both services
- 08:07 PM Bug #50272 (New): cephadm: after downsizing mon service from 5 to 3 daemons, cephadm reports "str...
- After having 5 mon/mgr daemons and then downsizing both services to 3 daemons, list_servers, which is used to detect str...
- 07:31 PM Bug #50267 (Fix Under Review): rgw service can be deployed with realm and no zone or vice versa
- 02:32 PM Bug #50267 (Resolved): rgw service can be deployed with realm and no zone or vice versa
- --realm and --zone both need to be supplied when doing 'orch apply rgw'
if just --realm is supplied the rgw servic...
- 07:16 PM Bug #50041 (Pending Backport): cephadm bootstrap with apply-spec and ssh-user option failed whil...
- 07:01 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- We think this is a duplicate of https://tracker.ceph.com/issues/50041 and fixed by https://github.com/ceph/ceph/pull/...
- 12:36 PM Bug #49551 (Fix Under Review): cephadm journald logs are mangled
- 12:36 PM Bug #49551 (Pending Backport): cephadm journald logs are mangled
04/08/2021
- 08:22 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- I reproduced this with 15.2.10. I'll follow up on IRC to offer the assignee live access to the reproducer....
- 08:04 PM Bug #50248 (Fix Under Review): rgw-nfs daemons marked as stray
- 06:15 PM Bug #50248 (Resolved): rgw-nfs daemons marked as stray
- ...
- 07:56 PM Documentation #50257 (In Progress): cephadm docs: wrong command for getting events for single daemon
- 07:42 PM Documentation #50257 (Resolved): cephadm docs: wrong command for getting events for single daemon
- The current documented command at https://docs.ceph.com/en/latest/cephadm/troubleshooting/#per-service-and-per-daemon...
- 02:21 PM Documentation #50239 (Resolved): cephadm docs: add RGW SSL certificates
- We need to document how to set the SSL certificate in RGWSpec:...
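A spec along the following lines is presumably what needs documenting (a sketch: the service_id and certificate body are placeholders, and the exact field name is an assumption on my part):

```yaml
service_type: rgw
service_id: myrealm.myzone
placement:
  count: 1
spec:
  ssl: true
  # assumed field name; certificate body is a placeholder
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```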
- 12:22 PM Feature #50236 (Rejected): cephadm: NFSv3
- Some users might be interested in NFSv3 mainly from Windows clients.
Is there a need to support NFSv3 in cephadm? ...
04/06/2021
- 02:24 PM Feature #49171 (In Progress): cephadm: set osd-memory-target
- 01:38 PM Bug #45327 (Closed): cephadm: Orch daemon add is not idempotent
- 01:20 PM Cleanup #50168 (Resolved): cephadm: move bin/cephadm from the git tree to download.ceph.com
- Right now, people are downloading cephadm directly from the source tree.
https://docs.ceph.com/en/latest/cephadm/i...
- 10:00 AM Feature #50160 (Resolved): Self document attributes for each kind of service
- Apart from the documentation, we have no other way to know which properties/attributes we can use in a...
04/05/2021
- 06:37 PM Bug #50041 (Resolved): cephadm bootstrap with apply-spec and ssh-user option failed while adding...
- 06:37 PM Bug #50041: cephadm bootstrap with apply-spec and ssh-user option failed while adding the hosts
- Backported to Pacific in https://github.com/ceph/ceph/pull/40544 , will be released in v16.2.1.
- 06:37 PM Bug #50043: cephadm: account for possible "." in patch portion of ceph version
- Backported to Pacific in https://github.com/ceph/ceph/pull/40544 , will be released in v16.2.1.
- 06:14 PM Bug #49872 (Pending Backport): cephadm: Don't remove the daemon keyring if redeploy fails
- 06:14 PM Bug #50062 (Pending Backport): orch host add with multiple labels and no addr
- 04:14 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- ...
- 11:49 AM Documentation #49806 (Resolved): minor problems in cephadm docs
- Merged.
04/03/2021
- 02:27 PM Bug #49757 (Pending Backport): orch: --format flag name not included in help for 'orch ps' and 'o...
- 01:14 PM Bug #50114 (Fix Under Review): cephadm: upgrade loop on target_digests list mismatch
- 01:02 PM Bug #50114 (In Progress): cephadm: upgrade loop on target_digests list mismatch
- 01:02 PM Bug #50114: cephadm: upgrade loop on target_digests list mismatch
- it is alternating between two digest variants, one with docker.io/ prefix, and one without:...
04/02/2021
- 10:40 PM Feature #47145 (Closed): cephadm: Multiple daemons of the same service on single host
- 10:40 PM Feature #48822 (Closed): Add proper port management to mgr/cephadm
- 10:32 PM Cleanup #50117 (Duplicate): orch apply kind: introduce another layer on top of service_type. I.e....
- *Current situation*
Right now, we already have three different types of things that *ceph orch apply* supports:
... - 10:19 PM Bug #50116: remove cephadm --dashboard-password-noupdate
- https://github.com/ceph/ceph/pull/32990
- 10:02 PM Bug #50116 (Rejected): remove cephadm --dashboard-password-noupdate
- This option already appears everywhere as a recommended option for bootstrap.
This is a usability issue and we shou...
- 03:08 PM Bug #50114 (Resolved): cephadm: upgrade loop on target_digests list mismatch
- I tried to update Ceph 15.2.10 to 16.2.0 via ceph orch. In the
beginning everything seems to work fine and the new M...
- 02:59 PM Bug #50043 (Resolved): cephadm: account for possible "." in patch portion of ceph version
- 02:03 PM Bug #50113 (Resolved): Upgrading to v16 breaks rgw_frontends setting
- We are upgrading our cluster to v16 today with cephadm.
We have rgw daemons set up and the "rgw_frontends" config ...
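For context, this is the kind of setting involved (a sketch; the daemon section and frontend options are illustrative):

```
ceph config set client.rgw rgw_frontends 'beast port=8080'
```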
04/01/2021
- 06:13 PM Bug #50102 (Fix Under Review): spec jsons that expect a list in a field dont verify that a list w...
- 06:08 PM Bug #50102 (Resolved): spec jsons that expect a list in a field dont verify that a list was actua...
- examples: labels in a hostspec, networks in a servicespec
>>>[ceph: root@vm-00 /]# cat spec.yaml
>>> service_t...
- 04:43 AM Bug #49954 (Fix Under Review): cephadm is not persisting the grafana.db file, so any local custom...
- PR submitted for review
03/31/2021
- 10:11 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /ceph/teuthology-archive/yuriw-2021-03-24_23:05:33-rados-wip-yuri2-testing-2021-03-24-1212-octopus-distro-basic-smith...
- 06:39 PM Bug #50085: `ceph orch upgrade` assumes the target version is always 3 digits
- duplicate of https://tracker.ceph.com/issues/50043
- 06:39 PM Bug #50085 (Closed): `ceph orch upgrade` assumes the target version is always 3 digits
- 02:55 PM Bug #50085 (Closed): `ceph orch upgrade` assumes the target version is always 3 digits
- Typical error:
```
Mar 31 08:44:40 ceph-vasi-node2-mon-mgr conmon[346502]: debug 2021-03-31T12:44:40.996+0000 7fe...
```
- 12:20 PM Feature #50061 (Fix Under Review): cephadm: automatically redeploy daemons if user changes which ...
03/30/2021
- 08:28 PM Bug #50062 (Fix Under Review): orch host add with multiple labels and no addr
- 08:17 PM Bug #50062 (Resolved): orch host add with multiple labels and no addr
- Host add operation throws error saying connection issue as below, if add operation is executed with labels and skippe...
- 07:09 PM Feature #50061 (Closed): cephadm: automatically redeploy daemons if user changes which container ...
- If a user changes the image to use for a monitoring stack daemon using 'ceph config set mgr mgr/cephadm/container_ima...
- 12:07 PM Bug #50043 (Fix Under Review): cephadm: account for possible "." in patch portion of ceph version
03/29/2021
- 07:01 PM Bug #50043 (Resolved): cephadm: account for possible "." in patch portion of ceph version
- It's possible when making custom ceph images (for downstream for example) to get 'ceph --version' output like
ceph...
- 05:58 PM Bug #50041 (Fix Under Review): cephadm bootstrap with apply-spec and ssh-user option failed whil...
- 05:57 PM Bug #50041 (Resolved): cephadm bootstrap with apply-spec anmd ssh-user option failed while adding...
- ssh-copy-id is being run as the root user because cephadm requires sudo
so it is trying to use the root user's ssh ke...
- 04:50 AM Bug #49872 (Fix Under Review): cephadm: Don't remove the daemon keyring if redeploy fails