Activity
From 04/08/2021 to 05/07/2021
05/07/2021
- 10:13 PM Bug #50671: cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client.admin.ke...
- I think this might be a permissions issue - it looks like cephadm is writing the keyring without changing its permiss...
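A quick way to test that hypothesis on an affected node (only a sketch; the path is the one from the error message above):
<pre>
# Inspect the keyring cephadm wrote and the mode/ownership it ended up with
ls -l /etc/ceph/ceph.client.admin.keyring
stat -c '%a %U:%G' /etc/ceph/ceph.client.admin.keyring
</pre>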
- 08:47 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- Deepika Upadhyay wrote:
> /ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-...
- 08:01 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- #50693 explains the root cause, which is that @/sys/kernel/security/apparmor/profiles@ is an empty file and cephadm i...
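To check for the condition described here on a host (a hedged sketch, not taken from the ticket): if the profiles file exists but is empty, that is what trips the unpack error.
<pre>
# An empty /sys/kernel/security/apparmor/profiles is the condition behind #50693
sudo test -s /sys/kernel/security/apparmor/profiles && echo "profiles listed" || echo "profiles file is empty"
</pre>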
- 07:57 PM Bug #50693 (Resolved): cephadm: commands fail with "ValueError: not enough values to unpack (expe...
- Occurs with ceph/cephadm 16.2.1 running on a clean Debian 10.9 install.
The following error is from a failed OSD D...
- 07:31 PM Bug #50691 (Resolved): cephadm: bootstrap fails with "IndexError: list index out of range" during...
- Running on a cleanly installed Debian 10.9 host with ceph/cephadm 16.2.3.
The same command in 16.2.1, running on t...
- 06:29 PM Bug #50690 (Can't reproduce): ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command not...
- Description of problem:
ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command is not generating the e...
- 03:27 PM Documentation #50687 (In Progress): cephadm: must redeploy monitoring stack daemon after changing...
- 02:03 PM Documentation #50687 (Resolved): cephadm: must redeploy monitoring stack daemon after changing im...
- We document that, to use a different image from the default for a monitoring stack daemon, you must change the image ...
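For illustration, the flow being documented would look roughly like this (the option name is assumed to be the cephadm grafana image option; the image is a placeholder):
<pre>
ceph config set mgr mgr/cephadm/container_image_grafana my-registry.example.com/grafana/grafana:7.4.0
ceph orch redeploy grafana   # the changed image only takes effect after redeploying the daemon
</pre>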
- 10:09 AM Bug #50685 (Resolved): wrong exception type: Exception("No filters applied")
- ...
- 09:20 AM Bug #48930: when removing the iscsi service, the gateway config object remains
- follow-up PR: https://github.com/ceph/ceph/pull/41181
- 09:19 AM Bug #48930 (Fix Under Review): when removing the iscsi service, the gateway config object remains
- 04:22 AM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- ...
05/06/2021
- 06:29 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- 06:24 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
- 06:23 PM Bug #50364 (Resolved): cephadm: removing daemons from hosts in maintenance mode
- 05:43 AM Bug #50671 (Closed): cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client...
- OSD status check fails with no keyring found.
CLI:
2021-05-01T12:08:20.050 INFO:tasks.cephadm:Waiting for OSDs t...
05/05/2021
- 02:52 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- still seeing in octopus: http://qa-proxy.ceph.com/teuthology/yuriw-2021-05-04_19:53:28-rados-wip-yuri-testing-2021-05...
05/04/2021
- 04:44 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- /ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-octopus-distro-basic-smithi...
- 11:44 AM Feature #50639 (New): Request to provide an option to specify erasure coded pool as datapool whil...
- ...
- 04:01 AM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- Just to confirm this is how the section looks after my edit...
05/03/2021
- 04:51 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- I did as suggested but the upgrade still fails with the following new error...
- 03:40 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- workaround is to replace
/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adf...
- 03:36 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- ...
- 01:57 PM Bug #50616 (Duplicate): ValueError: not enough values to unpack (expected 2, got 1) during upgrad...
- Started an upgrade from 15.2.8 to 16.2.1 via cephadm running on Ubuntu 20.04 & Docker.
MON/MGR/MDS upgraded fine a...
- 03:49 PM Bug #50399: cephadm ignores registry settings
- You also have to update the image to point to your registry; otherwise cephadm doesn't actually use the registry.
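As a hedged example (registry and tag are placeholders, not from this report), that would mean something like:
<pre>
# Point the cluster at the mirrored image; otherwise the default upstream image is still used
ceph config set global container_image my-registry.example.com/ceph/ceph:v15.2.10
</pre>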
- 03:45 PM Bug #44587 (New): failed to write <pid> to cgroup.procs:
05/02/2021
- 08:55 AM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- > - make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe include a --force in ca...
04/30/2021
- 07:42 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- I'd *definitely* go for making 'orch apply prometheus' silently enable the prometheus module.
- 04:46 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- A couple options:
- make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe incl...
- 06:49 PM Support #50594 (Resolved): ceph orch / cephadm does not allow deploying multiple MDS daemons per ...
- I have 3 hosts, with lots of cores. I have a filesystem with ~150M files that requires several active MDS daemons to ...
- 10:15 AM Feature #50593 (Resolved): cephadm: cephfs-mirror service should enable "mgr/mirror"
- cephadm: cephfs-mirror service should enable "mgr/mirror"
- 07:00 AM Bug #50592 (Closed): "ceph orch apply <svc_type>" applies placement by default without providing ...
- ...
04/29/2021
- 09:13 AM Bug #50526: OSD massive creation: OSDs not created
- Andreas Håkansson wrote:
> We have the same or a very similar problem,
> In our test case adding more than 8 disks w...
04/28/2021
- 08:07 PM Bug #50102 (Resolved): spec jsons that expect a list in a field dont verify that a list was actua...
- 06:27 PM Bug #50306 (Pending Backport): /etc/hosts is not passed to ceph containers. clusters that were re...
- 06:26 PM Feature #46044 (Pending Backport): cephadm: Distribute admin keyring.
- 06:26 PM Bug #50443 (Pending Backport): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- 06:25 PM Bug #50544 (Pending Backport): cephadm: monitoring stack containers in conf file passed to bootst...
- 12:47 PM Bug #50544 (Fix Under Review): cephadm: monitoring stack containers in conf file passed to bootst...
- 06:24 PM Bug #50548 (Pending Backport): cephadm doesn't deploy monitors when multiple public networks
- 07:21 AM Bug #50548: cephadm doesn't deploy monitors when multiple public networks
- PR created: https://github.com/ceph/ceph/pull/41055
- 06:58 AM Bug #50548 (Resolved): cephadm doesn't deploy monitors when multiple public networks
- The issue spotted on Ceph 16.2.1 deployed with cephadm+docker, although the master branch seems to also be affected.
... - 05:44 PM Bug #50062 (Resolved): orch host add with multiple labels and no addr
- 05:32 PM Bug #50248 (Resolved): rgw-nfs daemons marked as stray
- 04:07 PM Feature #49960 (Resolved): cephadm: put max on number of daemons in placement count based on numb...
- 04:06 PM Documentation #50257 (Resolved): cephadm docs: wrong command for getting events for single daemon
- 04:06 PM Bug #49757 (Resolved): orch: --format flag name not included in help for 'orch ps' and 'orch ls'
- 09:48 AM Bug #50526: OSD massive creation: OSDs not created
- We have the same or a very similar problem,
In our test case, adding more than 8 disks with db on a separate nvme devi...
- 09:26 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
- duplicates #47873
- 09:21 AM Bug #50551: Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by default
- We've been setting fs.aio-max-nr to 1048576 since early bluestore days with no apparent downside. That would be a sim...
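A sketch of that simple change, using the value quoted above (the drop-in file name is arbitrary):
<pre>
echo 'fs.aio-max-nr = 1048576' | sudo tee /etc/sysctl.d/90-ceph-aio.conf
sudo sysctl --system
</pre>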
- 09:14 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
- fs.aio-max-nr: The Asynchronous non-blocking I/O (AIO) feature that allows a process to initiate multiple I/O operati...
04/27/2021
- 06:52 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
- If you want to set monitoring stack container images during bootstrap by setting a config option like "mgr/cephadm/co...
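For context, the scenario described would look roughly like the following conf file passed to bootstrap (option name and image are assumptions for illustration, not taken from this report):
<pre>
# hypothetical initial-ceph.conf, passed as: cephadm bootstrap --config initial-ceph.conf ...
[mgr]
mgr/cephadm/container_image_prometheus = my-registry.example.com/prometheus/prometheus:v2.18.1
</pre>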
- 03:22 PM Feature #46827: cephadm: Pin OSDs to pmem modules connected to specific CPUs
- workaround: manually set the config option
- 02:57 PM Feature #44874 (Rejected): cephadm: add Filestore support
- Sort of too late by now. I'd still accept PRs for this
- 02:55 PM Feature #46044 (Fix Under Review): cephadm: Distribute admin keyring.
- 02:54 PM Feature #50236 (Rejected): cephadm: NFSv3
- 01:39 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
- Jeff Layton wrote:
> Seems reasonable. So what happens during a "cephadm pull"? I imagine:
>
> # determine the ne...
- 09:05 AM Bug #50535 (Resolved): add local cephadm bootstrap dev env.
- ...
- 09:04 AM Documentation #50534 (Resolved): docs: add full cluster purge
- 06:14 AM Bug #49506 (Resolved): cephadm: `cephadm ls` broken for SUSE's downstream alertmanager container
04/26/2021
- 04:37 PM Feature #50529 (Resolved): cephadm rm-cluster is also not resetting any disks that were used as osds
- see title.
should probably be an optional argument or something.
- 03:43 PM Bug #50364 (Pending Backport): cephadm: removing daemons from hosts in maintenance mode
- 03:24 PM Bug #50526 (Resolved): OSD massive creation: OSDs not created
- OSDs are not created when the drive group used to launch the OSD creation affects a large number of OSDs (75 in my case)...
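For context, a drive-group style OSD spec of the kind applied here looks roughly like this (filters and names are placeholders; a sketch only), applied with @ceph orch apply osd -i <spec.yml>@:
<pre>
service_type: osd
service_id: many_osds          # placeholder
placement:
  host_pattern: '*'
data_devices:
  rotational: 1                # spinning data devices
db_devices:
  rotational: 0                # separate flash devices for db
</pre>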
- 02:06 PM Bug #50524 (Resolved): placement spec: irritating error message if passed a string for count_per_...
- ...
- 08:25 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- Paul Cuzner wrote:
> Sebastian Wagner wrote:
> > A few problems:
> >
> > * *cephadm rm-cluster* only removes the...
04/24/2021
04/23/2021
- 09:33 PM Bug #50502: cephadm pull doesn't get latest image
- https://github.com/ceph/ceph/pull/39058 caused a subtle behavior change.
Previously, if we used a non-stable tag,...
- 01:52 PM Bug #50502: cephadm pull doesn't get latest image
- This is a tricky one!
Imagine you set...
- 01:49 PM Bug #50502 (Closed): cephadm pull doesn't get latest image
- I tried to do a "cephadm pull" this morning on my mini-cluster and it got v16.2.0. Dockerhub has v16.2.1 currently th...
- 02:46 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
- Seems reasonable. So what happens during a "cephadm pull"? I imagine:
# determine the new version
# set it in the...
- 01:56 PM Feature #45111 (Rejected): cephadm: choose distribution specific images based on etc/os-release
- Don't know. I'd like to avoid that complexity. Please reopen if you think this is a good idea.
- 12:22 PM Bug #50114 (Resolved): cephadm: upgrade loop on target_digests list mismatch
- 05:50 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- Sebastian Wagner wrote:
> A few problems:
>
> * *cephadm rm-cluster* only removes the cluster on the local host
...
04/22/2021
- 01:19 PM Bug #50444 (Pending Backport): host labels order is random
- 11:47 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- A few problems:
* *cephadm rm-cluster* only removes the cluster on the local host
* *mgr/cephadm* cannot remove t...
- 07:24 AM Support #48630: non-LVM OSD do not start after upgrade from 15.2.4 -> 15.2.7
- Sebastian Wagner wrote:
> I think you probably want to migrate to ceph-volume for now.
Hi Sebastian,
Thanks fo...
04/21/2021
- 09:18 PM Bug #50472 (Resolved): orchestrator doesn't provide a way to remove an entire cluster
- In prior toolchains like ceph-ansible, purging a cluster and returning a set of hosts to their original state was pos...
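As noted in the comments, the closest existing tool is per-host only; a hedged example of what has to be run on each host today:
<pre>
# Only removes the cluster on the local host; has to be repeated on every host
cephadm rm-cluster --fsid <fsid> --force
</pre>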
- 02:58 PM Bug #47513 (Pending Backport): rook: 'ceph orch ps' does not show image and container id correctly
- 09:53 AM Support #49497: Cephadm fails to upgrade from 15.2.8 to 15.2.9
- Illya S. wrote:
> The error is still here with 15.2.10
>
> Stuck on 15.2.8
15.2.11 -- nothing changed
- 02:30 AM Bug #50443 (Fix Under Review): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
04/20/2021
- 07:44 PM Bug #50444 (Resolved): host labels order is random
- Host labels are not stored in the order entered or in a logical order such as alphabetical; they are stored in a randomized o...
- 07:30 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- If you have < 2 running mgr daemons then the upgrade won't work because there will be no mgr to fail over to.
If you...
- 09:45 AM Bug #49954 (Resolved): cephadm is not persisting the grafana.db file, so any local customizations...
04/19/2021
- 09:30 PM Bug #50306 (Fix Under Review): /etc/hosts is not passed to ceph containers. clusters that were re...
- 12:30 PM Bug #50401 (Pending Backport): cephadm: Daemons that don't use ceph image always marked as needin...
04/16/2021
- 04:48 PM Bug #50401 (Resolved): cephadm: Daemons that don't use ceph image always marked as needing upgrad...
- The upgrade check command checks the image id of each daemon against the image id for the image the user would like t...
- 03:47 PM Bug #50399: cephadm ignores registry settings
- I also can't seem to edit this but this is on the latest octopus - 15.2.10
- 02:05 PM Bug #50399: cephadm ignores registry settings
- just to be clear this happened after I wanted to add an mds and did...
- 01:52 PM Bug #50399 (Can't reproduce): cephadm ignores registry settings
- even after setting mgr/cephadm/registry_user, mgr/cephadm/registry_password and mgr/cephadm/registry_url to a docker ...
- 02:48 PM Bug #50369 (Pending Backport): mgr/volumes/nfs: drop type param during cluster create
- 02:48 PM Feature #49960 (Pending Backport): cephadm: put max on number of daemons in placement count based...
- 04:46 AM Bug #49737 (Resolved): cephadm bootstrap --skip-ssh skips too much
- 04:45 AM Feature #50361 (Resolved): cephadm: report on unexpected exception in upgrade loop
- 04:30 AM Bug #50102 (Pending Backport): spec jsons that expect a list in a field dont verify that a list w...
04/15/2021
- 04:27 PM Documentation #50362 (Duplicate): pacific curl-based-installation docs link to octopus binary
- 04:25 PM Documentation #49806 (Pending Backport): minor problems in cephadm docs
- 04:03 PM Feature #48624: ceph orch drain <host>
- TODO. This could include (see the sketch below):
Temporarily disable scrubbing
Limit backfill and recovery
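A sketch of what those two steps could map to (standard Ceph flags and options, not taken from this ticket):
<pre>
# Temporarily disable scrubbing
ceph osd set noscrub
ceph osd set nodeep-scrub
# Limit backfill and recovery (example values)
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
</pre>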
- 09:45 AM Cleanup #50375 (Rejected): cephadm firewall: move to unit.run?
- Right now, firewall ports are opened when deploying a unit.
We should investigate whether the firewall could be config...
- 08:20 AM Bug #46097 (Won't Fix): package mode has a hardcoded ssh user
- let's remove the package mode
- 08:20 AM Tasks #46352 (Won't Fix): add leap support for cephadm
- feel free to reopen this!
- 08:19 AM Feature #44429 (Rejected): cephadm: make upgrade work with 'packaged' mode
- let's remove the package mode
- 08:18 AM Bug #48779 (Won't Fix): orchestrator provides no ceph-[mon,mgr,osd,mds,...].target equivalent
- let's encourage users to use ...
- 08:17 AM Bug #45973 (Rejected): Adopted MDS daemons are removed by the orchestrator because they're orphans
- fixed by both downstreams
- 08:17 AM Bug #48656 (Can't reproduce): cephadm botched install of ceph-fuse (symbol lookup error)
- 08:16 AM Feature #46651 (Rejected): cephadm: allow daemon/service restarts on a host basis
- that's probably the maintenance mode
- 12:29 AM Bug #50369: mgr/volumes/nfs: drop type param during cluster create
- Michael Fritch wrote:
> PR #37600 introduced support for both cephfs and rgw exports
> to be configured using a sin...
- 12:19 AM Bug #50369 (Resolved): mgr/volumes/nfs: drop type param during cluster create
- PR #37600 introduced support for both cephfs and rgw exports
to be configured using a single nfs-ganesha cluster.
04/14/2021
- 09:12 PM Bug #50364 (Fix Under Review): cephadm: removing daemons from hosts in maintenance mode
- 07:29 PM Bug #50364 (Resolved): cephadm: removing daemons from hosts in maintenance mode
- Right now, when applying services in the serve loop, we will try to remove all daemons that are on hosts in maintenan...
- 09:12 PM Feature #50361 (Fix Under Review): cephadm: report on unexpected exception in upgrade loop
- 03:39 PM Feature #50361 (Resolved): cephadm: report on unexpected exception in upgrade loop
- Right now, if an unexpected exception such as https://tracker.ceph.com/issues/50043 is to happen during the upgrade, ...
- 07:33 PM Bug #49910: cephadm | Creating initial admin user... | Please specify the file containing the pas...
- As per /u/lynxeur suggestion here https://www.reddit.com/r/ceph/comments/mi3asa/cephadm_on_ubuntu_2004/gufey25?utm_so...
- 06:13 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- @john I'm keeping the bug open and just changing the subject and providing more details on the real problem here.
...
- 03:00 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- FWIW I see nothing wrong with closing this bug as invalid.
Unless you want to follow up on https://github.com/ceph...
- 05:52 PM Documentation #50362 (In Progress): pacific curl-based-installation docs link to octopus binary
- 05:39 PM Documentation #50362 (Duplicate): pacific curl-based-installation docs link to octopus binary
- the link in the curl command here https://docs.ceph.com/en/pacific/cephadm/install/#curl-based-installation currently...
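Presumably the corrected command should fetch from the pacific branch, e.g. (URL shape assumed from the docs' existing pattern):
<pre>
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
</pre>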
- 03:16 PM Feature #50360 (Resolved): Configure the IP address for Ganesha
- The nfs ganesha service is deployed using cephadm via the spec:
---...
- 03:05 PM Bug #50359: Configure the IP address for the monitoring stack components
- Not sure if this can be considered an RFE rather than a bug, but this should be a must have
for any deployment.
- 03:02 PM Bug #50359 (Resolved): Configure the IP address for the monitoring stack components
- When the dashboard is deployed using cephadm, a monitoring stack (node_exporter, prometheus, alertmanager, grafana) ...
- 02:34 PM Feature #45115 (New): cephadm: Deploy Ceph Dashboard behind a HAProxy instance
- 02:34 PM Feature #45115 (Resolved): cephadm: Deploy Ceph Dashboard behind a HAProxy instance
- 02:30 PM Bug #48939 (Can't reproduce): Orchestrator removes mon daemon from wrong host when removing host ...
- 02:29 PM Feature #43687 (Resolved): cephadm: haproxy (or lb)
- 02:27 PM Bug #48325 (Pending Backport): PlacementSpec: 'NoneType' object has no attribute 'copy'
- 02:27 PM Bug #49273 (Pending Backport): cephadm fails deployment of node-exporter when ipv6 is disabled
- 02:19 PM Feature #47711 (Resolved): mgr/cephadm: add a feature to examine the host facts to look for confi...
- 02:19 PM Feature #49407 (Resolved): Enable the ability of cephadm to trigger libstoragemgmt info from ceph...
- 02:19 PM Bug #49339 (Resolved): cephadm/osd: OSD draining: don't mark OSDs out
- 02:18 PM Bug #47916 (Resolved): podman containers running in a detached state do not output logs to journald
- 02:15 PM Bug #49436 (New): cephadm bootstrap fails to create /etc/ceph directory
- no time to look into this
- 02:12 PM Feature #49159 (Resolved): "cephadm ceph-volume activate" does not support cephadm
- 02:09 PM Bug #49889: mgr/orchestrator/_interface.py: ZeroDivisionError
- https://github.com/ceph/ceph/blob/ff97629375a4a4e82b79f0fdcdb25f411b74d48d/src/pybind/mgr/test_orchestrator/module.py...
- 02:08 PM Bug #49755 (Can't reproduce): OSD service is not found
- https://github.com/ceph/ceph/pull/40736
- 02:07 PM Bug #49724 (Resolved): fsid is not validated during accessing the shell through cli
- 02:04 PM Bug #49223 (Resolved): unrecognized arguments: --container-init
- 01:55 PM Bug #50267 (Pending Backport): rgw service can be deploy with realm and no zone or vise versa
- 01:49 PM Bug #46606 (New): cephadm: post-bootstrap monitoring deployment only works if the command "ceph m...
- 01:49 PM Bug #48597 (Resolved): pybind/mgr/cephadm: mds_join_fs not cleaned up
- 01:48 PM Bug #49675 (Resolved): ceph daemon 'reconfig' populates daemon cache with 'starting' state
- 01:46 PM Bug #49890 (Resolved): podman makes socket.getfqdn() return container name instead of hostname
- 01:45 PM Bug #50114 (Pending Backport): cephadm: upgrade loop on target_digests list mismatch
04/13/2021
- 04:56 AM Feature #48292: cephadm: allow more than 60 OSDs per host
- Sebastian Wagner wrote:
> If the cluster is set to have very dense nodes (>60 OSDs per host) please make sure to ass...
- 12:41 AM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- I confirm I could apply a spec on bootstrap. Thanks!
Conclusions:
- Ensure you have the fix for bug #50041
- Do ...
- 12:03 AM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- Wait, I think I can't apply it at bootstrap because I am currently missing the fix for bug #50041 (I had rolled it ba...
04/12/2021
- 11:59 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- If I use a spec with IPs then I can add my hosts after bootstrap [1] but not at bootstrap [2].
[1]...
- 10:20 PM Bug #50306: /etc/hosts is not passed to ceph containers. clusters that were relying on /etc/hosts...
- I was able to determine this was caused by the host name failing to resolve when trying to add hosts....
- 08:22 PM Bug #50306 (Resolved): /etc/hosts is not passed to ceph containers. clusters that were relying on...
- While using `cephadm bootstrap --apply-spec` to bootstrap a spec containing other hosts, cephadm attempts to set up S...
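Later comments on this ticket note that a spec with explicit IPs works; a hedged example of such host entries (names and addresses are placeholders):
<pre>
---
service_type: host
hostname: host2
addr: 192.168.122.12
---
service_type: host
hostname: host3
addr: 192.168.122.13
</pre>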
- 08:23 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- Because I'm having the same issue in Pacific let's use a new bug https://tracker.ceph.com/issues/50306
- 08:11 PM Bug #49277 (Duplicate): cephadm bootstrap --apply-spec <cluster.yaml> hangs
- 08:02 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- When using pacific with the fixing patch https://github.com/ceph/ceph/pull/40477 from Bug #50041, the deployment fail...
- 04:08 PM Bug #49910: cephadm | Creating initial admin user... | Please specify the file containing the pas...
- Conversation from another few users who were impacted by this issue: https://www.reddit.com/r/ceph/comments/mi3asa/ce...
- 10:05 AM Bug #50296 (Can't reproduce): Failed to remove OSD service
- I have some services created when adding OSDs from the dashboard (I think):...
- 08:04 AM Bug #50295 (Closed): cephadm bootstrap mon container fails to start with podman 3.1 in CentOS 8 S...
- When attempting to bootstrap a container on CentOS stream after Appstream changed from podman version 3.0.0-0.33rc2.m...
04/09/2021
- 09:07 PM Documentation #50273 (Resolved): remove keepalived_user from haproxy docs
- keepalived_user is not used and not required
putting it in the spec results in an error
- 08:08 PM Bug #50272: cephadm: after downsizing mon service from 5 to 3 daemons, cephadm reports "stray" da...
- downsizing* both services
- 08:07 PM Bug #50272 (New): cephadm: after downsizing mon service from 5 to 3 daemons, cephadm reports "str...
- After having 5 mon/mgr daemons and then downsizing both services to 3 daemons, list_servers, which is used to detect str...
- 07:31 PM Bug #50267 (Fix Under Review): rgw service can be deploy with realm and no zone or vise versa
- 02:32 PM Bug #50267 (Resolved): rgw service can be deploy with realm and no zone or vise versa
- --realm and --zone both need to be supplied when doing 'orch apply rgw'
If just --realm is supplied, the rgw servic...
- 07:16 PM Bug #50041 (Pending Backport): cephadm bootstrap with apply-spec and ssh-user option failed whil...
- 07:01 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- We think this is a duplicate of https://tracker.ceph.com/issues/50041 and fixed by https://github.com/ceph/ceph/pull/...
- 12:36 PM Bug #49551 (Fix Under Review): cephadm journald logs are mangled
- 12:36 PM Bug #49551 (Pending Backport): cephadm journald logs are mangled
04/08/2021
- 08:22 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- I reproduced this with 15.2.10. I'll follow up on IRC to offer the assignee live access to the reproducer....
- 08:04 PM Bug #50248 (Fix Under Review): rgw-nfs daemons marked as stray
- 06:15 PM Bug #50248 (Resolved): rgw-nfs daemons marked as stray
- ...
- 07:56 PM Documentation #50257 (In Progress): cephadm docs: wrong command for getting events for single daemon
- 07:42 PM Documentation #50257 (Resolved): cephadm docs: wrong command for getting events for single daemon
- The current documented command at https://docs.ceph.com/en/latest/cephadm/troubleshooting/#per-service-and-per-daemon...
- 02:21 PM Documentation #50239 (Resolved): cephadm docs: add RGW SSL certificates
- We need to document how to set the SSL certificate in RGWSpec:...
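A hedged sketch of what the documented spec could look like (field names are what I'd expect for RGWSpec; the certificate body is a placeholder):
<pre>
service_type: rgw
service_id: myrealm.myzone       # placeholder
spec:
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    (placeholder)
    -----END CERTIFICATE-----
</pre>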
- 12:22 PM Feature #50236 (Rejected): cephadm: NFSv3
- Some users might be interested in NFSv3 mainly from Windows clients.
Is there a need to support NFSv3 in cephadm? ...