Activity
From 07/07/2021 to 08/05/2021
08/05/2021
- 08:21 PM Bug #51027 (In Progress): monmap drops rebooted mon if deployed via label
- 12:08 PM Feature #51901 (Closed): cephadm: support pulling images from insecure registries
- closing this one in favor of #52065
- 11:18 AM Bug #49287 (Resolved): podman: setting cgroup config for procHooks process caused: Unit libpod-$h...
- Fixed by https://github.com/opencontainers/runc/pull/2614
- 09:56 AM Feature #52065 (Fix Under Review): make cephadm support passing any additional parameter to eithe...
- This is a feature request to make cephadm support passing any existing parameters from docker or podman CLI.
- 08:52 AM Bug #52064 (In Progress): octopus: cephadm bootstrap --container-init broken in Octopus
- 08:46 AM Bug #52064 (Resolved): octopus: cephadm bootstrap --container-init broken in Octopus
- In Octopus, when the user provides the "--container-init" option to "cephadm bootstrap", all containerized daemons in...
08/04/2021
- 09:10 PM Bug #51978: podman version check broken on cent7
- podman 1.x isn't supported with Pacific
https://docs.ceph.com/en/latest/cephadm/compatibility/#cephadm-compatibili...
- 05:04 PM Bug #51978: podman version check broken on cent7
- Please try to avoid running cephadm on CentOS 7: we had to disable automated QE for it, because the old kernel wasn't a...
- 12:56 AM Bug #52042 (Resolved): After deployment the example of cephadm shell invocation is overly complex
- If there is only one ceph instance on the host (which is the most likely user scenario), cephadm shell can infer the ...
- 12:52 AM Bug #52041 (New): `orch ps` shows wrong ports for MGR
- Only one mgr is active, so any mgr module that has an associated listening port should only be listed against the act...
- 12:46 AM Bug #52040 (Resolved): during an apply the host must be online otherwise the apply fails with a t...
- If a host is offline during an apply, the process stops with a traceback instead of continuing to the next host.
...
- 12:40 AM Bug #52039 (New): cephadm rm-cluster should check whether the given fsid exists
- If the fsid provided is not present in /var/lib/ceph, a warning should be printed. The return status should be *succes...
08/03/2021
- 04:48 PM Bug #51794 (Pending Backport): mgr/test_orchestrator: remove pool and namespace from nfs service
08/02/2021
- 06:48 PM Bug #51806: cephadm: stopped containers end up in error state
- I'm thinking this might not be a cephadm specific issue. For one thing, no matter what version I tested this with (I ...
07/31/2021
07/30/2021
- 10:55 PM Bug #51978 (Closed): podman version check broken on cent7
- I tried upgrading a test cluster (built on CentOS 7) from v15.2.13 to v16.2.5 today and ran into this problem:
---...
- 07:29 PM Bug #51973 (Fix Under Review): cephadm: global default ingress container images value
- 07:22 PM Bug #51973 (Resolved): cephadm: global default ingress container images value
- All services (ceph/iscsi/ganesha, prometheus, alertmanager, node-exporter and grafana) have a global default containe...
- 07:02 PM Feature #51972 (Resolved): cephadm/ingress: support TLS RGW backend
- As per the documentation (and the code), the ingress service via haproxy doesn't support RGW backend with TLS. [1][2]...
- 06:38 PM Feature #51971 (Resolved): cephadm/ingress: update keepalived container image
- The default keepalived container image is: arcts/keepalived [1]
There are multiple issues here:
- We don't use a...
- 02:44 PM Feature #44414 (Fix Under Review): bubble up errors during 'apply' phase to 'cluster warnings'
- 10:59 AM Bug #51902 (Resolved): cephadm adopt fails on clean_cgroup
- 05:04 AM Bug #51616: Updating node-exporter deployment progress stuck
- Thanks, Harry, this workaround worked for me, though I watched some different daemons which got stuck during the update pr...
- 01:51 AM Feature #51947: cephadm: Redeploy services, on property update (was: Ingress for RGW does not app...
- Ok looks like you didn't redeploy the service after updating the spec file with the intermediate ca certificate right...
07/29/2021
- 09:51 PM Feature #51947: cephadm: Redeploy services, on property update (was: Ingress for RGW does not app...
- I finished testing with v16.2.5 and I couldn't reproduce the issue....
- 08:30 PM Feature #51947: cephadm: Redeploy services, on property update (was: Ingress for RGW does not app...
- That's weird, because the code doesn't do anything special with the ssl_cert value in the spec
https://github.com/c...
- 07:06 PM Feature #51901 (Fix Under Review): cephadm: support pulling images from insecure registries
- 06:06 PM Bug #51961 (Resolved): Stuck progress indicators in ceph status output
- If an exception is thrown while cephadm is attempting to apply a service spec in the serve loop, the progress indicat...
- 01:25 PM Bug #51601: mgr/dashboard: server does not bind to all addresses anymore
- I think this is related to the change in the URI setting and hostname/IP address handling.
07/28/2021
- 09:06 PM Bug #51902 (Fix Under Review): cephadm adopt fails on clean_cgroup
- 02:06 PM Bug #51902 (Resolved): cephadm adopt fails on clean_cgroup
- Until recently, the cephadm adopt command was working perfectly.
Now this ends up with a stack trace...
- 07:32 PM Feature #51947 (New): cephadm: Redeploy services, on property update (was: Ingress for RGW does n...
- Using v16.2.4, Ubuntu 20.04 hosts for cluster and ingress (haproxy) for RGW instances. Multisite setup with one zone ...
- 04:29 PM Bug #51829 (Resolved): cephadm: deploying cephadm-exporter fails with shutil SameFileError
- 03:01 PM Bug #49633: podman: ERROR (catatonit:2): failed to exec pid1: No such file or directory
- ...
- 01:50 PM Feature #51901 (Closed): cephadm: support pulling images from insecure registries
- For convenience, it would be nice if cephadm could support pulling images from insecure registries with a native opti...
- 04:48 AM Fix #51721 (Resolved): ingress: Fix for virtual_interface_networks not working
07/26/2021
- 06:56 AM Bug #51736: mgr hung forever when execute multiprocessing.pool.ThreadPool accidentally
- Sebastian Wagner wrote:
> you're sure you did not hit #51733 ?
I think that the bug is different from #51733. The...
07/25/2021
- 08:20 PM Bug #51616: Updating node-exporter deployment progress stuck
- Workaround (caution: temporarily disruptive). Assuming this is the only reported problem remaining after upgrade o...
- 01:43 PM Bug #51298 (Resolved): ceph orch stop mgr should not stop all the mgrs and should give a warning ...
07/24/2021
07/23/2021
- 03:24 PM Bug #51796 (Fix Under Review): cephadm: unable to deploy grafana without mgr/dashboard
- 03:03 PM Bug #51796 (In Progress): cephadm: unable to deploy grafana without mgr/dashboard
- 02:08 PM Bug #51829 (Resolved): cephadm: deploying cephadm-exporter fails with shutil SameFileError
- ...
- 02:06 PM Bug #51818: "ceph orch host add" presents unhelpful error message if target host is missing cephadm
- In any case this is broken. The list (*[]*) should contain the error message. Somehow it got lost, leading to a non-h...
- 01:24 AM Bug #51818: "ceph orch host add" presents unhelpful error message if target host is missing cephadm
- Argh, sorry for the typos there, s/cephadm/ceph orch/ in several places.
- 01:23 AM Bug #51818 (Resolved): "ceph orch host add" presents unhelpful error message if target host is mi...
- When using "ceph orch host add", if the remote host is missing cephadm
and its dependencies then "ceph orch host add...
- 11:10 AM Documentation #47637: mgr/cephadm: document how to configure custom TLS certificate for Grafana
- Reconfigure can be done more easily with...
- 09:56 AM Bug #51794 (Fix Under Review): mgr/test_orchestrator: remove pool and namespace from nfs service
- 12:10 AM Bug #51817 (Resolved): FAILED tests/test_cephadm.py::TestShell::test_fsid - AttributeError: 'Fake...
- ...
07/22/2021
- 04:27 PM Bug #51806: cephadm: stopped containers end up in error state
- Adding ...
- 03:26 PM Bug #51806 (Need More Info): cephadm: stopped containers end up in error state
- ...
- 03:53 PM Bug #51111: Pacific: CEPHADM_STRAY_DAEMON after deploying iSCSI gateway with cephadm due to tcmu-...
- ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
- 03:53 PM Bug #51111: Pacific: CEPHADM_STRAY_DAEMON after deploying iSCSI gateway with cephadm due to tcmu-...
- same here as well:...
- 12:04 PM Bug #51796 (Resolved): cephadm: unable to deploy grafana without mgr/dashboard
- ...
- 11:08 AM Bug #51794 (Resolved): mgr/test_orchestrator: remove pool and namespace from nfs service
- ...
- 09:20 AM Feature #51793 (Closed): cephadm: Grafana: Add switch to enable the Gafana admin account
- Right now, users need to replace the Jinja2 template in order to enable the admin account. This has a few downsides. ...
- 12:56 AM Bug #51355 (Resolved): ingress service /var/lib/haproxy/haproxy.cfg
07/21/2021
- 06:57 PM Feature #50815 (Fix Under Review): cephadm: Removing an offline host
- 02:22 PM Bug #51298 (In Progress): ceph orch stop mgr should not stop all the mgrs and should give a warni...
- 12:36 PM Bug #51733: offline host hangs serve loop for 15 mins
- This only happens if the host is not gracefully shut down
- 12:05 PM Bug #51761 (Closed): journald logs are broken up again
- ...
- 11:46 AM Bug #51311: Failed to apply ingress.rgw: IndexError: list index out of range
- # ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
alertmanager ...
- 08:50 AM Bug #51311 (Fix Under Review): Failed to apply ingress.rgw: IndexError: list index out of range
- 08:15 AM Bug #51713 (Duplicate): Cephadm: Timeout waiting for ingress.nfs.foo to start
- 02:46 AM Support #51737: How to restore data after I reinstall host operating system
- Sebastian Wagner wrote:
> I hope you still have a few MONs and MGRs left. Because then, you can follow https://docs.ce...
07/20/2021
- 03:23 PM Bug #51713: Cephadm: Timeout waiting for ingress.nfs.foo to start
- Seems to be failing consistently
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingres...
- 02:13 PM Bug #51713: Cephadm: Timeout waiting for ingress.nfs.foo to start
- ...
- 02:11 PM Bug #51355 (Fix Under Review): ingress service /var/lib/haproxy/haproxy.cfg
- 01:38 PM Bug #51736: mgr hung forever when execute multiprocessing.pool.ThreadPool accidentally
- you're sure you did not hit #51733 ?
- 02:50 AM Bug #51736 (Resolved): mgr hung forever when execute multiprocessing.pool.ThreadPool accidentally
- Environment:
We have a 30+ host Ceph cluster: 3 mons, 3 mgrs, 330 osds.
Description:
After running one day approx...
- 12:52 PM Support #51737: How to restore data after I reinstall host operating system
- I hope you still have a few MONs and MGRs left. Because then, you can follow https://docs.ceph.com/en/latest/cephadm/os...
- 02:51 AM Support #51737 (Resolved): How to restore data after I reinstall host operating system
- I used cephadm to bootstrap a Ceph cluster (2 hosts, 1 mon, 2 mgrs, 6 osds, 2 mds, 1 cephfs).
For some reason, I need to reinstal...
07/19/2021
- 08:43 PM Bug #51733 (Resolved): offline host hangs serve loop for 15 mins
- When a host in your cluster goes offline, the next time the serve loop starts, _refresh_hosts_and_daemons() will be cal...
- 06:04 PM Bug #51546 (Pending Backport): cephadm: remove iscsi service fails when the dashboard isn't deployed
- 07:29 AM Documentation #47637: mgr/cephadm: document how to configure custom TLS certificate for Grafana
- This is how I did it
Since @cephadm shell ceph config-key set mgr/cephadm/grafana_crt -i <cert-file>@ can't read f...
07/18/2021
- 03:57 PM Bug #51355: ingress service /var/lib/haproxy/haproxy.cfg
- The haproxy 2.3 version is pinned in 16.2.5; it works for me, please close this.
- 03:51 PM Fix #51721 (Fix Under Review): ingress: Fix for virtual_interface_networks not working
- 03:43 PM Fix #51721: ingress: Fix for virtual_interface_networks not working
- PR: https://github.com/ceph/ceph/pull/42389
- 03:39 PM Fix #51721 (Resolved): ingress: Fix for virtual_interface_networks not working
- If you follow the documentation and try to specify the interface by specifying subnets in the spec file, it does not...
- 09:21 AM Bug #51311: Failed to apply ingress.rgw: IndexError: list index out of range
- What does ...
07/17/2021
- 10:53 AM Bug #51616: Updating node-exporter deployment progress stuck
- I have exactly the same issue. In addition, the dashboard is not working anymore. The mgr is listening on port 8443 but ...
07/16/2021
- 04:17 PM Bug #51713 (Duplicate): Cephadm: Timeout waiting for ingress.nfs.foo to start
- Test Description:
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress2 3-final}
L...
- 01:12 PM Bug #51665: document unfortunate interactions between cephadm and restrictive sshd_config?
- Let's set aside my earlier mention of ceph-salt for a moment, and focus only on the use of @cephadm bootstrap --ssh-u...
07/15/2021
- 01:48 PM Bug #51027: monmap drops rebooted mon if deployed via label
- Still a problem in Pacific 16.2.5. Pretty much makes the 'assignment of mons by label' useless since the mon is lost...
- 01:45 PM Bug #51027: monmap drops rebooted mon if deployed via label
- I think it's a mistake to put this in the 'orchestrator' problem list, because I think the logic that decides whether...
07/14/2021
- 03:59 PM Feature #47774: orch,cephadm: host search with filters
- Paul Cuzner wrote:
> Do we really need this? In most enterprise environments I've worked in this type of thing gets ...
- 02:19 PM Bug #51642 (Resolved): cephadm/rgw : RGW server is not coming up: Initialization timeout, failed ...
- 02:19 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- Dimitri Savineau wrote:
> Answering to myself...
>
> [...]
>
> So I think we can close this issue.
I was ab...
- 11:53 AM Bug #51668: cephadm shell repeats WARNING: The same type, major and minor should not be used for ...
- also appears in https://tracker.ceph.com/issues/48261
- 09:47 AM Bug #51668 (Closed): cephadm shell repeats WARNING: The same type, major and minor should not be ...
- Launching cephadm shell is quite noisy on our nodes running OSDs (48 OSDs, 96 copies of the warning).
It's probably ...
- 10:22 AM Bug #51671 (Resolved): cephadm rm-cluster stuck: No indication that /run/cephadm/<fsid>.lock was ...
- rm-cluster, it seemed to hang forever until I removed /run/cephadm/<fsid>.lock (and the cephadm.log was quickly flood...
- 09:39 AM Bug #51667 (Resolved): cephadm: host add existing host should be noop
- I ran `host add` from an _admin/mon host and it reset the addr to 127.0.1.1 and removed the labels.
I think this sho...
- 09:12 AM Bug #51632 (Fix Under Review): cephadm: selinux is not checked against running configuration
- 08:41 AM Bug #51665 (Resolved): document unfortunate interactions between cephadm and restrictive sshd_config?
- This one is a little obscure, so please bear with me.
If you deploy ceph using ceph-salt, it will invoke @cephadm ...
- 01:02 AM Bug #51620 (Resolved): Ceph orch upgrade to 16.2.5 fails
07/13/2021
- 06:41 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- https://github.com/ceph/ceph/pull/42000#issuecomment-879313562
- 11:34 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- https://bugzilla.redhat.com/show_bug.cgi?id=1972209
- 11:26 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- https://pulpito.ceph.com/pdonnell-2021-07-12_18:17:18-fs:workload-wip-pdonnell-testing-20210710.030422-distro-basic-s...
- 06:35 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- Answering to myself......
- 06:12 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- ...
- 12:48 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- Try running...
- 12:43 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- the rgw log looks like so:...
- 12:24 PM Bug #51642 (Resolved): cephadm/rgw : RGW server is not coming up: Initialization timeout, failed ...
- I created the ceph cluster via cephadm and it looks fine, but when I tried to deploy RGW it was failing and does ...
- 04:23 PM Bug #51629: cephadm reports nodes offline after rolling reboot
- Ian Merrick wrote:
> This could be because the IP addresses for the ceph hosts are not defined, __and__ if you are r...
- 12:40 PM Bug #51629: cephadm reports nodes offline after rolling reboot
- does failing *all* MGR daemons (calling `ceph mgr fail ...`) help?
- 11:06 AM Bug #51629: cephadm reports nodes offline after rolling reboot
- Hi,
This could be because the IP addresses for the ceph hosts are not defined, __and__ if you are relying on /etc/ho...
- 03:22 PM Bug #51291: Adoption fails for Ceph MDS servers
- I've tracked down the issue. More details with fix here: https://github.com/alfredodeza/remoto/issues/65
The pro...
- 12:25 PM Feature #48292 (Resolved): cephadm: allow more than 60 OSDs per host
- 11:20 AM Bug #50399 (Can't reproduce): cephadm ignores registry settings
- Please reopen this if you still have this issue
- 11:19 AM Bug #51102 (Duplicate): RuntimeError: uid/gid not found in "rados/thrash-old-clients" test
- 04:32 AM Feature #47774: orch,cephadm: host search with filters
- Do we really need this? In most enterprise environments I've worked in this type of thing gets jumped on by security ...
07/12/2021
- 09:25 PM Bug #51102: RuntimeError: uid/gid not found in "rados/thrash-old-clients" test
- spotted again at /a/ksirivad-2021-07-11_01:45:00-rados-wip-pg-autoscaler-overlap-distro-basic-smithi/6263029/
- 08:47 PM Bug #51291: Adoption fails for Ceph MDS servers
- I had put this task on the shelf for a while to work on other stuff and since the cluster was still in a functional s...
- 02:57 PM Bug #51620 (Fix Under Review): Ceph orch upgrade to 16.2.5 fails
- 12:53 PM Bug #51634 (Closed): Validate allowed characters for rgw realms
- ...
- 12:09 PM Bug #51632 (Resolved): cephadm: selinux is not checked against running configuration
- The _fetch_selinux function inside kernel_security in cephadm is not checking the actual selinux mode in which the ke...
- 11:52 AM Feature #47774: orch,cephadm: host search with filters
- That would require some magic tricks to generate the list of hosts, like https://docs.ansible.com/ansible/latest/coll...
- 10:24 AM Bug #51629 (New): cephadm reports nodes offline after rolling reboot
- Hello,
On my 8-node Octopus 15.2.13 cluster, which I installed using cephadm with podman for containers, I did a ro...
07/11/2021
- 01:56 PM Bug #51616: Updating node-exporter deployment progress stuck
- Confirmed.
cluster:
id: 4067126d-01cb-40af-824a-881c130140f8
health: HEALTH_OK
(muted: A...
07/09/2021
- 10:48 PM Bug #51621: Multiple "Updating node-exporter deployment"
- After some debugging I've managed to "fix" this by removing the node-exporter service, rebooting all managers, and th...
- 10:37 PM Bug #51621 (Duplicate): Multiple "Updating node-exporter deployment"
- After updating to 16.2.5, the orchestrator seems to be starting a new task every minute with the following message: "U...
- 09:37 PM Bug #51620 (Resolved): Ceph orch upgrade to 16.2.5 fails
- Hi there,
While upgrading my cluster from 16.2.4 to the 16.2.5 release, the upgrade seems to get stuck on upgrading...
- 05:06 PM Feature #51618 (Rejected): rgw_frontends configuration in config database not persistent
- following config settings in mon config DB...
- 03:30 PM Bug #51617 (New): cephadm: remove rgw block from minimal ganesha configuration
- Check the keyring section
https://pad.ceph.com/p/nfs
https://github.com/ceph/ceph/blob/master/src/pybind/mgr/ceph...
- 03:15 PM Bug #51616 (Resolved): Updating node-exporter deployment progress stuck
- after upgrading to 16.2.5 via "ceph orch upgrade start --ceph-version 16.2.5" I get more and more instances of...
- 01:02 PM Bug #51571: cephadm: remove iscsi service fails due to incorrect gateway name
- This seems to be a duplicate of the other issue. Python's socket.getfqdn() unfortunately picks up the container name ...
- 07:33 AM Bug #51601 (Closed): mgr/dashboard: server does not bind to all addresses anymore
- I've upgraded ceph-mgr from 16.2.4 to 16.2.5 and the dashboard stopped listening on all addresses. Instead, it uses the...
- 06:53 AM Bug #51446: Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168.122.121.',)
- The same applies to all other places where get_mgr_ip is now used. For example the prometheus module: docs say default is 0...
- 06:44 AM Bug #51446: Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168.122.121.',)
- Despite being closed, the change does affect certain setups using HA-IPs and/or IPVS because contrary to the docs say...
07/08/2021
- 02:30 PM Bug #48291 (Fix Under Review): Grafana should not have a predictable default password
- 02:07 PM Tasks #49490 (Need More Info): cephadm additions/changes to support everything rgw.py needs
- 01:21 PM Tasks #49490 (In Progress): cephadm additions/changes to support everything rgw.py needs
- 01:28 PM Bug #51277 (Can't reproduce): cephadm bootstrap: unable to set up admin label
- 12:32 PM Feature #51596: implement "gather-logs" feature in cephadm
- replaces https://github.com/ceph/cephadm-ansible/issues/18
- 12:20 PM Feature #51596 (New): implement "gather-logs" feature in cephadm
- ceph-ansible used to provide a playbook for fetching logs/config/keys from a running cluster.
Since Ansible isn't ...
- 11:41 AM Bug #45420 (Can't reproduce): cephadmunit.py: teuthology.exceptions.CommandFailedError: Command f...
- 11:36 AM Tasks #45914 (Won't Fix): cephadm: make src/cephadm/vstart-smoke.sh a proper teuthology test
- don't care
- 11:35 AM Bug #43415 (Won't Fix): python3-remoto not available in ubuntu
- out of scope
- 10:47 AM Bug #51592 (Resolved): cephadm should not use the lvm binary of the container
- See Gluster's way of doing this:
https://github.com/gluster/gluster-containers/blob/master/CentOS/exec-on-host.sh
...
- 10:24 AM Bug #51590 (In Progress): cephadm: iscsi: The first gateway defined must be the local machine
- 09:58 AM Bug #51590 (Resolved): cephadm: iscsi: The first gateway defined must be the local machine
- 1. Deploy cluster using cephadm
2. Deploy iscsi services using iscsi.yml file...
07/07/2021
- 08:16 PM Bug #49622 (Pending Backport): cephadm orchestrator allows deleting hosts with ceph daemons running
- 08:16 PM Feature #48624 (Pending Backport): ceph orch drain <host>
- 04:10 PM Bug #51571: cephadm: remove iscsi service fails due to incorrect gateway name
- > <cluster name>-<cluster fsid>-<daemon name>
That's in fact the container name...
- 02:42 PM Bug #51571 (Resolved): cephadm: remove iscsi service fails due to incorrect gateway name
- If the iscsi service is removed and the dashboard is deployed (dashboard mgr module enabled) then the cluster status ...
- 02:41 PM Feature #49171 (Resolved): cephadm: set osd-memory-target
- 02:38 PM Feature #51004 (In Progress): cephadm agent 2.0
- 02:34 PM Bug #51546 (Fix Under Review): cephadm: remove iscsi service fails when the dashboard isn't deployed
- 02:22 PM Bug #51567 (Closed): monitoring spec file doesn't support custom port option as per the doc
- Unable to reproduce. The issue originated from a downstream build that does not have the monitoring ports feature backpor...
- 01:25 PM Bug #51567 (Closed): monitoring spec file doesn't support custom port option as per the doc
- ...
- 01:07 PM Feature #51566 (Resolved): cephadm: cpu limit
- ceph-ansible allowed customization of CPU limits for containerized daemons per daemon type (so you could have different...
- 12:47 PM Bug #51111: Pacific: CEPHADM_STRAY_DAEMON after deploying iSCSI gateway with cephadm due to tcmu-...
- Hello,
I have the very same issue on a fresh install of Pacific 16.2.4 on Ubuntu with podman, but I have only used s...
- 12:17 PM Tasks #51562 (Pending Backport): Enable autotune for osd_memory_target
- Description of problem:
Enable autotune for osd_memory_target
cephadm brings support for osd_memory_target,...
- 12:05 PM Bug #51541: cephadm: gather_facts for the host in maintenance returns empty list
- Gather facts is never called due to https://github.com/ceph/ceph/blob/f0b79b3a48e78a4b47c5c23c3991728cee01887a/src/p...
- 10:16 AM Feature #48292 (Fix Under Review): cephadm: allow more than 60 OSDs per host
- 10:16 AM Bug #51061 (Fix Under Review): GPT partitioning table: OSD "all-available-devices" tries to use "...
- 10:15 AM Bug #50928 (Fix Under Review): OSD size count mismatched with ceph orch commands
- 10:14 AM Bug #50690 (Fix Under Review): ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command no...
- 10:14 AM Bug #50592 (Fix Under Review): "ceph orch apply <svc_type>" applies placement by default without ...
- 09:51 AM Bug #48107 (Can't reproduce): cephadm fails to deploy iscsi gateway when selinux is enabled
- 09:48 AM Documentation #47142 (Fix Under Review): docs: explain the difference between services and daemons
- 09:41 AM Bug #46687 (Can't reproduce): MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- Setting this to "can't reproduce" as this didn't pop up again.
- 09:35 AM Feature #51302 (Duplicate): mgr/cephadm: automatically configure dashboard <-> RGW connection
- 09:34 AM Bug #49449 (Won't Fix): cephadm: synchronize container timezone with host
- 09:33 AM Bug #49273 (Resolved): cephadm fails deployment of node-exporter when ipv6 is disabled
- 09:31 AM Bug #48142 (Resolved): rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privilege...
- 09:30 AM Bug #48071 (Resolved): rook: 'ceph orch ls' does not list nfs-ganesha daemons
- 09:30 AM Bug #47968 (Resolved): rook: 'ceph orch rm' throws type error
- 09:30 AM Bug #47923 (Resolved): rook: 'ceph orch apply nfs' throws error if no ganesha daemons are deployed
- 09:29 AM Bug #47511 (Resolved): rook: 'ceph orch status' returns 403 error
- 09:28 AM Bug #46558 (Resolved): cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- 09:27 AM Bug #47513 (Resolved): rook: 'ceph orch ps' does not show image and container id correctly
- 09:05 AM Bug #51355: ingress service /var/lib/haproxy/haproxy.cfg
- An alternative approach would be to change the owner of the haproxy config to the haproxy user (currently 99 in the d...
- 07:47 AM Bug #51176 (Pending Backport): Module 'cephadm' has failed: 'MegaSAS'