Activity

From 04/15/2021 to 05/14/2021

05/14/2021

07:26 PM Bug #50526: OSD massive creation: OSDs not created
@Juan, allow me to provide more detail on the scenario that we encountered. As far as I can tell, the root cause of o... Cory Snyder
05:04 PM Bug #50526: OSD massive creation: OSDs not created
David Orman wrote:
> We've created a PR to fix the root cause of this issue: https://github.com/alfredodeza/remoto/p...
Juan Miguel Olmo Martínez
04:00 PM Bug #50526: OSD massive creation: OSDs not created
We've created a PR to fix the root cause of this issue: https://github.com/alfredodeza/remoto/pull/63 David Orman
06:01 PM Bug #50817 (Closed): cephadm: upgrade loops forever if not enough mds daemons
If you don't have enough mds daemons for ok-to-stop to ever pass, the upgrade just loops forever without providing ... Adam King
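The endless-loop behavior described above can be sketched as a bounded wait that gives up with an actionable message instead of retrying silently; the function and message below are illustrative, not cephadm's actual code.

```python
# Hypothetical sketch: instead of retrying ok-to-stop forever, give up after a
# bounded number of attempts and surface a message the operator can act on.
def wait_ok_to_stop(ok_to_stop, attempts=5):
    """Return (True, "") if the daemon may be stopped, else (False, reason)."""
    for _ in range(attempts):
        if ok_to_stop():
            return True, ""
        # a real orchestrator would sleep/yield between attempts
    return False, "ok-to-stop never passed (not enough standby mds daemons); upgrade paused"
```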
04:18 PM Bug #50717 (Fix Under Review): cephadm: prometheus.yml.j2 contains "tab" character
Dimitri Savineau
01:59 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
... Deepika Upadhyay
09:51 AM Feature #50815 (Resolved): cephadm: Removing an offline host
But doesn't that only address part of the problem? For example, any daemons that Ceph (not cephadm) knew about are st... Sebastian Wagner

05/13/2021

04:55 PM Bug #50805 (Resolved): Replacement of OSDs not working in hosts with FQDN host name
In a host with an FQDN host name:
# hostname
test1.lab.com
# ceph orch osd rm 4 --replace
# ceph osd tree
ID CLASS ...
Juan Miguel Olmo Martínez
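A common source of FQDN breakage like the one above is code comparing a fully qualified name ("test1.lab.com") against the short form ("test1"). The sketch below shows one defensive normalization; it is only an illustration of the failure mode, not the actual cephadm fix.

```python
# Illustrative only: compare the short (first-label) form of both host names,
# so "test1.lab.com" and "test1" are treated as the same host.
def same_host(a: str, b: str) -> bool:
    def short(name: str) -> str:
        return name.split(".", 1)[0].lower()
    return short(a) == short(b)
```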
02:45 PM Bug #50041 (Resolved): cephadm bootstrap with apply-spec and ssh-user option failed while adding...
Daniel Pivonka
02:15 PM Tasks #50804 (Resolved): cephadm bootstrap. add a warning that users should not use --fsid
cephadm bootstrap. add a warning that users should not use --fsid
Reason: this doesn't really give the user any adva...
Sebastian Wagner
01:55 PM Bug #50359 (In Progress): Configure the IP address for the monitoring stack components
Daniel Pivonka

05/12/2021

05:49 PM Feature #50784 (Fix Under Review): cephadm: orch upgrade check should check if the target image p...
Adam King
05:43 PM Feature #50784 (Resolved): cephadm: orch upgrade check should check if the target image provided ...
If the user provides an image to orch upgrade check that they could not actually upgrade to because of the ceph versi... Adam King
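The version gate described above can be sketched as a simple comparison of parsed version tuples; the exact rule (no downgrades, at most a two-major-release jump) is an assumption for illustration, and cephadm's real check also considers release names and image digests.

```python
# Sketch of an upgrade-target version gate (illustrative, not cephadm's code).
def upgrade_allowed(current: str, target: str) -> bool:
    cur = tuple(int(x) for x in current.split("."))
    tgt = tuple(int(x) for x in target.split("."))
    if tgt < cur:
        return False  # downgrades are not supported
    return tgt[0] - cur[0] <= 2  # assumed widest supported jump, e.g. 15.x -> 17.x
```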
03:01 PM Bug #50776 (New): cephadm: CRUSH uses bare host names
https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/module.py#L1411
https://github.com/ceph/ceph/blob/...
Sebastian Wagner
12:08 PM Bug #48930 (Resolved): when removing the iscsi service, the gateway config object remains
Sage Weil
09:59 AM Bug #50359: Configure the IP address for the monitoring stack components
I also think that we should be able to customize the port (exposing a spec parameter is also required in this context) Francesco Pantano

05/11/2021

06:28 PM Feature #50733 (In Progress): cephadm: provide message in orch upgrade status saying upgrade is c...
Adam King
04:06 PM Bug #50526: OSD massive creation: OSDs not created
To be clear, we have not applied this patch. I was merely adding information to point out the impact is not restricte... David Orman
03:57 PM Bug #50526: OSD massive creation: OSDs not created
David Orman wrote:
> Juan Miguel Olmo Martínez wrote:
> > I think that the fix will also work for your issue, it wo...
Juan Miguel Olmo Martínez
03:55 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
Before you set the port (if it's not too late), can you attach the rgw portion of the 'ceph orch ls --export' output? Sage Weil
02:41 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
workaround is to manually set the port:... Sebastian Wagner
02:04 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s... Sebastian Wagner
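The workaround mentioned above amounts to pinning the port explicitly in the rgw service spec and re-applying it. A minimal sketch of such a spec, with a placeholder service id and port:

```yaml
service_type: rgw
service_id: myrealm.myzone   # placeholder; get yours from `ceph orch ls --export`
placement:
  count: 1
spec:
  rgw_frontend_port: 8000    # pin the port instead of relying on defaults
```

Apply with `ceph orch apply -i rgw.yaml`.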
02:21 PM Bug #50759 (Rejected): Redeploying daemon prometheus.a on host smithi159 failed: 'latin-1' codec ...
was caused by https://github.com/ceph/ceph/pull/40172 Sebastian Wagner
02:17 PM Bug #50759 (Rejected): Redeploying daemon prometheus.a on host smithi159 failed: 'latin-1' codec ...
https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s... Sebastian Wagner
02:10 PM Bug #47480: cephadm: tcmu-runner container is logging inside the container
https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s... Sebastian Wagner
11:42 AM Bug #50691: cephadm: bootstrap fails with "IndexError: list index out of range" during cephadm se...
workaround: Do not specify the ssh user when bootstrapping, but later on. Sebastian Wagner
11:09 AM Bug #50691 (Fix Under Review): cephadm: bootstrap fails with "IndexError: list index out of range...
Sebastian Wagner
10:23 AM Bug #50691: cephadm: bootstrap fails with "IndexError: list index out of range" during cephadm se...
https://github.com/ceph/ceph/commit/777f236ad885b03b551dd820f41a00b9c89761eb#diff-d0f7acffbce59b9e36a1479d1b1f32955cd... Sebastian Wagner

05/10/2021

10:32 PM Bug #50526: OSD massive creation: OSDs not created
Juan Miguel Olmo Martínez wrote:
> I think that the fix will also work for your issue, it would be nice if you can c...
David Orman
06:58 PM Feature #50733 (Closed): cephadm: provide message in orch upgrade status saying upgrade is complete
Right now, the upgrade status just says the upgrade is no longer in progress and no explicit message is given to say ... Adam King
01:57 PM Feature #45864 (Resolved): cephadm: include monitoring components in usual upgrade process
Adam King
09:21 AM Bug #49860: cephadm adopt - Report conf file missing - now it says could not detect legacy fsid
Do you remember, were there any other ceph daemons deployed on that host? cephadm needs to know the fsid of the clus... Sebastian Wagner
08:57 AM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
prio=normal, as this is not trivial to implement Sebastian Wagner
08:53 AM Feature #48102: cephadm: configure HA (cluster flags) for Alertmanager
Isn't the alertmanager already HA by itself? I thought that alertmanager already creates a fault-tolerant cluster on... Sebastian Wagner
08:48 AM Feature #48980 (Closed): orch: add image properties to monitoring spec files
Sebastian Wagner
08:42 AM Feature #48560 (Closed): Spec files for each daemon in the monitoring stack
Sebastian Wagner

05/09/2021

11:13 AM Bug #50717 (Resolved): cephadm: prometheus.yml.j2 contains "tab" character
Hello.
in file /usr/share/ceph/mgr/cephadm/templates/services/prometheus/prometheus.yml.j2 provided by package
c...
marek czardybon
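The bug above matters because YAML forbids tabs in indentation, so a stray tab in the rendered prometheus.yml makes the whole file unparseable. A trivial pre-flight lint for templates (illustrative helper, not part of cephadm):

```python
# YAML disallows tab characters in indentation; flag any line whose leading
# whitespace contains one.
def find_tab_lines(text: str):
    """Return 1-based line numbers whose indentation contains a tab."""
    bad = []
    for i, line in enumerate(text.splitlines(), start=1):
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(i)
    return bad
```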

05/08/2021

09:33 AM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
Sam Overton wrote:
> #50693 explains the root-cause, which is that @/sys/kernel/security/apparmor/profiles@ is an em...
Ashley Merrick

05/07/2021

10:13 PM Bug #50671: cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client.admin.ke...
I think this might be a permissions issue - it looks like cephadm is writing the keyring without changing its permiss... Josh Durgin
08:47 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
Deepika Upadhyay wrote:
> /ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-...
Deepika Upadhyay
08:01 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
#50693 explains the root-cause, which is that @/sys/kernel/security/apparmor/profiles@ is an empty file and cephadm i... Sam Overton
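The "not enough values to unpack (expected 2, got 1)" crash pattern described above typically comes from unconditionally unpacking a split on a line that lacks the expected separator. A tolerant parse of apparmor profile lines ("name (mode)") might look like this; the field layout is inferred from the error report, not taken from cephadm source.

```python
# Sketch of a parse that tolerates an empty or malformed
# /sys/kernel/security/apparmor/profiles file instead of crashing.
def parse_profiles(text: str) -> dict:
    profiles = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or " (" not in line:
            continue  # skip blank or malformed lines rather than unpack-fail
        name, _, mode = line.partition(" (")
        profiles[name] = mode.rstrip(")")
    return profiles
```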
07:57 PM Bug #50693 (Resolved): cephadm: commands fail with "ValueError: not enough values to unpack (expe...
Occurs with ceph/cephadm 16.2.1 running on a clean Debian 10.9 install.
The following error is from a failed OSD D...
Sam Overton
07:31 PM Bug #50691 (Resolved): cephadm: bootstrap fails with "IndexError: list index out of range" during...
Running on a cleanly installed Debian 10.9 host with ceph/cephadm 16.2.3.
The same command in 16.2.1, running on t...
Sam Overton
06:29 PM Bug #50690 (Can't reproduce): ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command not...
Description of problem:
ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command is not generating the e...
Juan Miguel Olmo Martínez
03:27 PM Documentation #50687 (In Progress): cephadm: must redeploy monitoring stack daemon after changing...
Adam King
02:03 PM Documentation #50687 (Resolved): cephadm: must redeploy monitoring stack daemon after changing im...
We document that, to use a different image from the default for a monitoring stack daemon, you must change the image ... Adam King
10:09 AM Bug #50685 (Resolved): wrong exception type: Exception("No filters applied")
... Sebastian Wagner
09:20 AM Bug #48930: when removing the iscsi service, the gateway config object remains
follow-up PR: https://github.com/ceph/ceph/pull/41181 Sebastian Wagner
09:19 AM Bug #48930 (Fix Under Review): when removing the iscsi service, the gateway config object remains
Sebastian Wagner
04:22 AM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
... Deepika Upadhyay

05/06/2021

06:29 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
Adam King
06:24 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
Adam King
06:23 PM Bug #50364 (Resolved): cephadm: removing daemons from hosts in maintenance mode
Adam King
05:43 AM Bug #50671 (Closed): cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client...
OSD status check fails with no keyring found.
CLI:
2021-05-01T12:08:20.050 INFO:tasks.cephadm:Waiting for OSDs t...
Harish Munjulur

05/05/2021

02:52 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
still seeing in octopus: http://qa-proxy.ceph.com/teuthology/yuriw-2021-05-04_19:53:28-rados-wip-yuri-testing-2021-05... Deepika Upadhyay

05/04/2021

04:44 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
/ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-octopus-distro-basic-smithi... Deepika Upadhyay
11:44 AM Feature #50639 (New): Request to provide an option to specify erasure coded pool as datapool whil...
... Sebastian Wagner
04:01 AM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
Just to confirm this is how the section looks after my edit... Ashley Merrick

05/03/2021

04:51 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
I did as suggested but the upgrade still fails with the following new error... Ashley Merrick
03:40 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
workaround is to replace
/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adf...
Sebastian Wagner
03:36 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
... Sebastian Wagner
01:57 PM Bug #50616 (Duplicate): ValueError: not enough values to unpack (expected 2, got 1) during upgrad...
Started an upgrade from 15.2.8 to 16.2.1 via cephadm running on Ubuntu 20.04 & Docker.
MON/MGR/MDS upgraded fine a...
Ashley Merrick
03:49 PM Bug #50399: cephadm ignores registry settings
You also have to update the image to point to your registry; otherwise cephadm doesn't actually use the registry Sebastian Wagner
03:45 PM Bug #44587 (New): failed to write <pid> to cgroup.procs:
Sebastian Wagner

05/02/2021

08:55 AM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
> - make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe include a --force in ca... Nathan Cutler

04/30/2021

07:42 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
I'd *definitely* go for making 'orch apply prometheus' silently enable the prometheus module. Sebastian Wagner
04:46 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
A couple options:
- make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe incl...
Sage Weil
06:49 PM Support #50594 (Resolved): ceph orch / cephadm does not allow deploying multiple MDS daemons per ...
I have 3 hosts, with lots of cores. I have a filesystem with ~150M files that requires several active MDS daemons to ... Nathan Fish
10:15 AM Feature #50593 (Resolved): cephadm: cephfs-mirror service should enable "mgr/mirror"
cephadm: cephfs-mirror service should enable "mgr/mirror" Sebastian Wagner
07:00 AM Bug #50592 (Closed): "ceph orch apply <svc_type>" applies placement by default without providing ...
... Juan Miguel Olmo Martínez

04/29/2021

09:13 AM Bug #50526: OSD massive creation: OSDs not created
Andreas Håkansson wrote:
> We have the same or a very similar problem,
> In our test case, adding more than 8 disks w...
Juan Miguel Olmo Martínez

04/28/2021

08:07 PM Bug #50102 (Resolved): spec jsons that expect a list in a field dont verify that a list was actua...
Daniel Pivonka
06:27 PM Bug #50306 (Pending Backport): /etc/hosts is not passed to ceph containers. clusters that were re...
Sage Weil
06:26 PM Feature #46044 (Pending Backport): cephadm: Distribute admin keyring.
Sage Weil
06:26 PM Bug #50443 (Pending Backport): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
Sage Weil
06:25 PM Bug #50544 (Pending Backport): cephadm: monitoring stack containers in conf file passed to bootst...
Sage Weil
12:47 PM Bug #50544 (Fix Under Review): cephadm: monitoring stack containers in conf file passed to bootst...
Adam King
06:24 PM Bug #50548 (Pending Backport): cephadm doesn't deploy monitors when multiple public networks
Sage Weil
07:21 AM Bug #50548: cephadm doesn't deploy monitors when multiple public networks
PR created: https://github.com/ceph/ceph/pull/41055 Stanislav Datskevych
06:58 AM Bug #50548 (Resolved): cephadm doesn't deploy monitors when multiple public networks
The issue was spotted on Ceph 16.2.1 deployed with cephadm+docker, although the master branch also seems to be affected.
...
Stanislav Datskevych
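For the multi-network case above, a mon candidate address should match if it falls in *any* of the comma-separated public networks, not just the first. A sketch of that check (illustrative function, not the actual cephadm code):

```python
import ipaddress

# Accept an IP if it belongs to any network in a comma-separated
# public_network setting such as "10.0.0.0/24,192.168.1.0/24".
def ip_in_public_networks(ip: str, public_network: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(
        addr in ipaddress.ip_network(net.strip())
        for net in public_network.split(",")
    )
```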
05:44 PM Bug #50062 (Resolved): orch host add with multiple labels and no addr
Daniel Pivonka
05:32 PM Bug #50248 (Resolved): rgw-nfs daemons marked as stray
Daniel Pivonka
04:07 PM Feature #49960 (Resolved): cephadm: put max on number of daemons in placement count based on numb...
Adam King
04:06 PM Documentation #50257 (Resolved): cephadm docs: wrong command for getting events for single daemon
Adam King
04:06 PM Bug #49757 (Resolved): orch: --format flag name not included in help for 'orch ps' and 'orch ls'
Adam King
09:48 AM Bug #50526: OSD massive creation: OSDs not created
We have the same or a very similar problem,
In our test case, adding more than 8 disks with db on a separate nvme devi...
Andreas Håkansson
09:26 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
duplicates #47873 Sebastian Wagner
09:21 AM Bug #50551: Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by default
We've been setting fs.aio-max-nr to 1048576 since early bluestore days with no apparent downside. That would be a sim... Dan van der Ster
09:14 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
fs.aio-max-nr: The Asynchronous non-blocking I/O (AIO) feature that allows a process to initiate multiple I/O operati... Juan Miguel Olmo Martínez
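The mitigation Dan describes can be made persistent with a sysctl drop-in; the value below is the one he reports, and the file path is just a conventional choice.

```
# /etc/sysctl.d/90-ceph-aio.conf  (path is a conventional choice)
fs.aio-max-nr = 1048576
```

Apply immediately with `sysctl -w fs.aio-max-nr=1048576`.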

04/27/2021

06:52 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
If you want to set monitoring stack container images during bootstrap by setting a config option like "mgr/cephadm/co... Adam King
03:22 PM Feature #46827: cephadm: Pin OSDs to pmem modules connected to specific CPUs
workaround: manually set the config option Sebastian Wagner
02:57 PM Feature #44874 (Rejected): cephadm: add Filestore support
Sort of too late by now. I'd still accept PRs for this Sebastian Wagner
02:55 PM Feature #46044 (Fix Under Review): cephadm: Distribute admin keyring.
Sebastian Wagner
02:54 PM Feature #50236 (Rejected): cephadm: NFSv3
Sebastian Wagner
01:39 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
Jeff Layton wrote:
> Seems reasonable. So what happens during a "cephadm pull"? I imagine:
>
> # determine the ne...
Sebastian Wagner
09:05 AM Bug #50535 (Resolved): add local cephadm bootstrap dev env.
... Sebastian Wagner
09:04 AM Documentation #50534 (Resolved): docs: add full cluster purge
Sebastian Wagner
06:14 AM Bug #49506 (Resolved): cephadm: `cephadm ls` broken for SUSE's downstream alertmanager container
Kefu Chai

04/26/2021

04:37 PM Feature #50529 (Resolved): cephadm rm-cluster is also not resetting any disks that were used as osds
See title.
Should probably be an optional argument or something.
Sebastian Wagner
03:43 PM Bug #50364 (Pending Backport): cephadm: removing daemons from hosts in maintenance mode
Adam King
03:24 PM Bug #50526 (Resolved): OSD massive creation: OSDs not created
OSDs are not created when the drive group used to launch the osd creation affects a large number of OSDs (75 in my case)... Juan Miguel Olmo Martínez
02:06 PM Bug #50524 (Resolved): placement spec: irritating error message if passed a string for count_per_...
... Sebastian Wagner
08:25 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
Paul Cuzner wrote:
> Sebastian Wagner wrote:
> > A few problems:
> >
> > * *cephadm rm-cluster* only removes the...
Sebastian Wagner

04/24/2021

05:54 AM Bug #50364 (Resolved): cephadm: removing daemons from hosts in maintenance mode
Kefu Chai

04/23/2021

09:33 PM Bug #50502: cephadm pull doesn't get latest image
https://github.com/ceph/ceph/pull/39058 caused a subtle behavior change.
Previously, if we used a non-stable tag,...
Sebastian Wagner
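The tag-vs-digest distinction behind the behavior change above can be sketched as follows: a digest-pinned reference is immutable, so a matching local image suffices, while a tag may be republished and should be re-pulled. Names here are illustrative, not cephadm's.

```python
# Decide whether an image reference warrants a fresh pull.
def needs_pull(image_ref: str, local_digests: set) -> bool:
    if "@sha256:" in image_ref:
        # digest-pinned: immutable, pull only if we don't already have it
        return image_ref.split("@", 1)[1] not in local_digests
    return True  # tag reference: always re-pull to pick up a moved tag
```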
01:52 PM Bug #50502: cephadm pull doesn't get latest image
This is a tricky one!
Imagine you set...
Sebastian Wagner
01:49 PM Bug #50502 (Closed): cephadm pull doesn't get latest image
I tried to do a "cephadm pull" this morning on my mini-cluster and it got v16.2.0. Dockerhub has v16.2.1 currently th... Jeff Layton
02:46 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
Seems reasonable. So what happens during a "cephadm pull"? I imagine:
# determine the new version
# set it in the...
Jeff Layton
01:56 PM Feature #45111 (Rejected): cephadm: choose distribution-specific images based on /etc/os-release
don't know. I'd like to avoid that complexity. Please reopen, if you think this is a good idea. Sebastian Wagner
12:22 PM Bug #50114 (Resolved): cephadm: upgrade loop on target_digests list mismatch
Sage Weil
05:50 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
Sebastian Wagner wrote:
> A few problems:
>
> * *cephadm rm-cluster* only removes the cluster on the local host
...
Paul Cuzner

04/22/2021

01:19 PM Bug #50444 (Pending Backport): host labels order is random
Kefu Chai
11:47 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
A few problems:
* *cephadm rm-cluster* only removes the cluster on the local host
* *mgr/cephadm* cannot remove t...
Sebastian Wagner
07:24 AM Support #48630: non-LVM OSD do not start after upgrade from 15.2.4 -> 15.2.7
Sebastian Wagner wrote:
> I think you probably want to migrate to ceph-volume for now.
Hi Sebastian,
Thanks fo...
ronnie laptop

04/21/2021

09:18 PM Bug #50472 (Resolved): orchestrator doesn't provide a way to remove an entire cluster
In prior toolchains like ceph-ansible, purging a cluster and returning a set of hosts to their original state was pos... Paul Cuzner
02:58 PM Bug #47513 (Pending Backport): rook: 'ceph orch ps' does not show image and container id correctly
Sage Weil
09:53 AM Support #49497: Cephadm fails to upgrade from 15.2.8 to 15.2.9
Illya S. wrote:
> The error is still here with 15.2.10
>
> Stuck on 15.2.8
15.2.11 -- nothing changed
Illya S.
02:30 AM Bug #50443 (Fix Under Review): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
Adam King

04/20/2021

07:44 PM Bug #50444 (Resolved): host labels order is random
Host labels are not stored in the order entered or in a logical order such as alphabetical; they are stored in a randomized o... Daniel Pivonka
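The randomized ordering described above is what you get when labels live in an unordered set; an order-preserving dedupe keeps them in insertion order instead. A minimal sketch of the idea:

```python
# Append a label only if it is not already present, preserving insertion order
# (unlike a set, which randomizes iteration order across processes).
def add_label(labels: list, new: str) -> list:
    return labels if new in labels else labels + [new]
```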
07:30 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
If you have < 2 running mgr daemons then the upgrade won't work because there will be no mgr to fail over to.
If you...
Adam King
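The precondition above can be sketched as a pre-flight check that collects human-readable errors instead of letting the upgrade start and stall; function and threshold names are illustrative, not cephadm's.

```python
# Refuse to start an upgrade unless a standby mgr exists to fail over to,
# and enough mons are running to keep quorum while one restarts.
def upgrade_precheck(running_mgrs: int, running_mons: int, min_mons: int = 2):
    errors = []
    if running_mgrs < 2:
        errors.append("need at least 2 mgr daemons (no standby to fail over to)")
    if running_mons < min_mons:
        errors.append("not enough mon daemons to maintain quorum during upgrade")
    return errors
```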
09:45 AM Bug #49954 (Resolved): cephadm is not persisting the grafana.db file, so any local customizations...
Juan Miguel Olmo Martínez

04/19/2021

09:30 PM Bug #50306 (Fix Under Review): /etc/hosts is not passed to ceph containers. clusters that were re...
Daniel Pivonka
12:30 PM Bug #50401 (Pending Backport): cephadm: Daemons that don't use ceph image always marked as needin...
Sage Weil

04/16/2021

04:48 PM Bug #50401 (Resolved): cephadm: Daemons that don't use ceph image always marked as needing upgrad...
The upgrade check command checks the image id of each daemon against the image id for the image the user would like t... Adam King
03:47 PM Bug #50399: cephadm ignores registry settings
I also can't seem to edit this but this is on the latest octopus - 15.2.10 Zoltan Arnold Nagy
02:05 PM Bug #50399: cephadm ignores registry settings
just to be clear this happened after I wanted to add an mds and did... Zoltan Arnold Nagy
01:52 PM Bug #50399 (Can't reproduce): cephadm ignores registry settings
even after setting mgr/cephadm/registry_user, mgr/cephadm/registry_password and mgr/cephadm/registry_url to a docker ... Zoltan Arnold Nagy
02:48 PM Bug #50369 (Pending Backport): mgr/volumes/nfs: drop type param during cluster create
Sage Weil
02:48 PM Feature #49960 (Pending Backport): cephadm: put max on number of daemons in placement count based...
Sage Weil
04:46 AM Bug #49737 (Resolved): cephadm bootstrap --skip-ssh skips too much
Kefu Chai
04:45 AM Feature #50361 (Resolved): cephadm: report on unexpected exception in upgrade loop
Kefu Chai
04:30 AM Bug #50102 (Pending Backport): spec jsons that expect a list in a field dont verify that a list w...
Kefu Chai

04/15/2021

04:27 PM Documentation #50362 (Duplicate): pacific curl-based-installation docs link to octopus binary
Daniel Pivonka
04:25 PM Documentation #49806 (Pending Backport): minor problems in cephadm docs
Daniel Pivonka
04:03 PM Feature #48624: ceph orch drain <host>
TODO. This could include:
* Temporarily disable scrubbing
* Limit backfill and recovery
Sebastian Wagner
09:45 AM Cleanup #50375 (Rejected): cephadm firewall: move to unit.run?
Right now, firewall ports are opened when deploying a unit.
We should investigate whether the firewall could be config...
Sebastian Wagner
08:20 AM Bug #46097 (Won't Fix): package mode has a hardcoded ssh user
let's remove the package mode Sebastian Wagner
08:20 AM Tasks #46352 (Won't Fix): add leap support for cephadm
feel free to reopen this! Sebastian Wagner
08:19 AM Feature #44429 (Rejected): cephadm: make upgrade work with 'packaged' mode
let's remove the package mode Sebastian Wagner
08:18 AM Bug #48779 (Won't Fix): orchestrator provides no ceph-[mon,mgr,osd,mds,...].target equivalent
let's encourage users to use ... Sebastian Wagner
08:17 AM Bug #45973 (Rejected): Adopted MDS daemons are removed by the orchestrator because they're orphans
fixed by both downstreams Sebastian Wagner
08:17 AM Bug #48656 (Can't reproduce): cephadm botched install of ceph-fuse (symbol lookup error)
Sebastian Wagner
08:16 AM Feature #46651 (Rejected): cephadm: allow daemon/service restarts on a host basis
that's probably the maintenance mode Sebastian Wagner
12:29 AM Bug #50369: mgr/volumes/nfs: drop type param during cluster create
Michael Fritch wrote:
> PR #37600 introduced support for both cephfs and rgw exports
> to be configured using a sin...
Michael Fritch
12:19 AM Bug #50369 (Resolved): mgr/volumes/nfs: drop type param during cluster create
PR #37600 introduced support for both cephfs and rgw exports
to be configured using a single nfs-ganesha cluster.
Michael Fritch
 
