Activity
From 04/20/2021 to 05/19/2021
05/19/2021
- 02:22 PM Support #50887 (Closed): ERROR: Daemon not found: mgr.ceph-node1.ruuwlz. See cephadm ls
- The previous access address of the Ceph dashboard was https://ceph-node1:8443, but now it's https://ceph-node2:443; to ch...
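As an illustrative sketch (not from the ticket; the port value and module cycling are assumptions), the dashboard's SSL port can be pinned so the access address stays predictable:
# ceph config set mgr mgr/dashboard/ssl_server_port 8443
# ceph mgr module disable dashboard
# ceph mgr module enable dashboard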
- 02:20 PM Bug #50886 (Can't reproduce): TypeError: can't subtract offset-naive and offset-aware datetimes
- ...
- 02:12 PM Bug #50526: OSD massive creation: OSDs not created
- Juan Miguel Olmo Martínez wrote:
> @Cory Snyder wrote:
> > @Juan, allow me to provide more detail on the scenario t...
- 07:40 AM Bug #50526: OSD massive creation: OSDs not created
- @Cory Snyder wrote:
> @Juan, allow me to provide more detail on the scenario that we encountered. As far as I can te...
- 12:05 PM Bug #50113 (Fix Under Review): Upgrading to v16 breaks rgw_frontends setting
- 11:55 AM Feature #47507 (Resolved): qa: add testing for Rook
- 11:52 AM Bug #50830 (Pending Backport): rgw-ingress does not install
- 11:18 AM Documentation #50883 (Duplicate): cephadm: mds_cache_memory_limit
- Users can apply:...
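For illustration, a minimal sketch of applying this option cluster-wide (the 4 GiB value is a placeholder):
# ceph config set mds mds_cache_memory_limit 4294967296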
05/17/2021
- 08:07 PM Bug #50830 (Fix Under Review): rgw-ingress does not install
- 03:50 AM Bug #50830: rgw-ingress does not install
- Sage, could you kindly take a look at this failure?
- 03:50 AM Bug #50830: rgw-ingress does not install
- i tried to rebuild https://github.com/ceph/ceph/commit/1a55d822295f03309a696a301bbd1314953974a1, which is the merge c...
- 10:24 AM Bug #50693 (Resolved): cephadm: commands fail with "ValueError: not enough values to unpack (expe...
- backported now.
- 10:23 AM Bug #50616 (Duplicate): ValueError: not enough values to unpack (expected 2, got 1) during upgrad...
05/16/2021
- 03:55 PM Feature #50784 (Resolved): cephadm: orch upgrade check should check if the target image provided ...
- 03:54 PM Bug #50805 (Resolved): Replacement of OSDs not working in hosts with FQDN host name
- 03:51 PM Bug #50717 (Pending Backport): cephadm: prometheus.yml.j2 contains "tab" character
- 03:39 PM Bug #50830: rgw-ingress does not install
- This failure is reproducible.
- 03:39 PM Bug #50830 (Resolved): rgw-ingress does not install
- rados:cephadm:smoke-roleless/{0-distro/centos_8.2_kubic_stable 1-start 2-services/rgw-ingress 3-final}...
- 02:22 PM Bug #50693: cephadm: commands fail with "ValueError: not enough values to unpack (expected 2, got...
- Kefu Chai wrote:
> @Sam, does https://github.com/ceph/ceph/pull/40555 address this issue?
Yes, I can confirm. App...
- 10:09 AM Bug #50693 (Need More Info): cephadm: commands fail with "ValueError: not enough values to unpack...
- @Sam, does https://github.com/ceph/ceph/pull/40555 address this issue?
05/14/2021
- 07:26 PM Bug #50526: OSD massive creation: OSDs not created
- @Juan, allow me to provide more detail on the scenario that we encountered. As far as I can tell, the root cause of o...
- 05:04 PM Bug #50526: OSD massive creation: OSDs not created
- David Orman wrote:
> We've created a PR to fix the root cause of this issue: https://github.com/alfredodeza/remoto/p...
- 04:00 PM Bug #50526: OSD massive creation: OSDs not created
- We've created a PR to fix the root cause of this issue: https://github.com/alfredodeza/remoto/pull/63
- 06:01 PM Bug #50817 (Closed): cephadm: upgrade loops forever if not enough mds daemons
- If there are not enough mds daemons for ok-to-stop to ever pass, the upgrade just loops forever without providing ...
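A hedged sketch of the obvious escape hatch, assuming missing standbys are the cause (the filesystem name and count are placeholders):
# ceph orch apply mds cephfs 3    # enough MDS daemons that ok-to-stop can pass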
- 04:18 PM Bug #50717 (Fix Under Review): cephadm: prometheus.yml.j2 contains "tab" character
- 01:59 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- ...
- 09:51 AM Feature #50815 (Resolved): cephadm: Removing an offline host
- But doesn't that address only part of the problem? For example, any daemons that Ceph (not cephadm) knew about are st...
05/13/2021
- 04:55 PM Bug #50805 (Resolved): Replacement of OSDs not working in hosts with FQDN host name
- In a host with an FQDN hostname:
# hostname
test1.lab.com
# ceph orch osd rm 4 --replace
# ceph osd tree
ID CLASS ...
- 02:45 PM Bug #50041 (Resolved): cephadm bootstrap with apply-spec and ssh-user option failed while adding...
- 02:15 PM Tasks #50804 (Resolved): cephadm bootstrap. add a warning that users should not use --fsid
- cephadm bootstrap: add a warning that users should not use --fsid.
Reason: this doesn't really give the user any adva...
- 01:55 PM Bug #50359 (In Progress): Configure the IP address for the monitoring stack components
05/12/2021
- 05:49 PM Feature #50784 (Fix Under Review): cephadm: orch upgrade check should check if the target image p...
- 05:43 PM Feature #50784 (Resolved): cephadm: orch upgrade check should check if the target image provided ...
- If the user provides an image to orch upgrade check that they could not actually upgrade to because of the ceph versi...
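For context, a sketch of the command this feature extends (the image is a placeholder):
# ceph orch upgrade check --image quay.io/ceph/ceph:v16.2.4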
- 03:01 PM Bug #50776 (New): cephadm: CRUSH uses bare host names
- https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/module.py#L1411
https://github.com/ceph/ceph/blob/...
- 12:08 PM Bug #48930 (Resolved): when removing the iscsi service, the gateway config object remains
- 09:59 AM Bug #50359: Configure the IP address for the monitoring stack components
- I think that being able to customize the port is also needed (exposing a spec parameter is also required in this context).
05/11/2021
- 06:28 PM Feature #50733 (In Progress): cephadm: provide message in orch upgrade status saying upgrade is c...
- 04:06 PM Bug #50526: OSD massive creation: OSDs not created
- To be clear, we have not applied this patch. I was merely adding information to point out the impact is not restricte...
- 03:57 PM Bug #50526: OSD massive creation: OSDs not created
- David Orman wrote:
> Juan Miguel Olmo Martínez wrote:
> > I think that the fix will also work for your issue, it wo...
- 03:55 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- Before you set the port (if it's not too late), can you attach the rgw portion of the 'ceph orch ls --export' output?
- 02:41 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- workaround is to manually set the port:...
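The actual workaround text is truncated above; as an illustrative, hedged sketch (the client section name is a placeholder for your rgw daemon), manually setting the port might look like:
# ceph config set client.rgw.myrealm.myzone rgw_frontends "beast port=8000"
# ceph orch restart rgw.myrealm.myzone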
- 02:04 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s...
- 02:21 PM Bug #50759 (Rejected): Redeploying daemon prometheus.a on host smithi159 failed: 'latin-1' codec ...
- was caused by https://github.com/ceph/ceph/pull/40172
- 02:17 PM Bug #50759 (Rejected): Redeploying daemon prometheus.a on host smithi159 failed: 'latin-1' codec ...
- https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s...
- 02:10 PM Bug #47480: cephadm: tcmu-runner container is logging inside the container
- https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s...
- 11:42 AM Bug #50691: cephadm: bootstrap fails with "IndexError: list index out of range" during cephadm se...
- Workaround: do not specify the SSH user when bootstrapping; set it afterwards instead.
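A sketch of that workaround (the IP and user name are placeholders):
# cephadm bootstrap --mon-ip 10.0.0.1    # no --ssh-user here
# ceph cephadm set-user cephadmin        # set the user afterwards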
- 11:09 AM Bug #50691 (Fix Under Review): cephadm: bootstrap fails with "IndexError: list index out of range...
- 10:23 AM Bug #50691: cephadm: bootstrap fails with "IndexError: list index out of range" during cephadm se...
- https://github.com/ceph/ceph/commit/777f236ad885b03b551dd820f41a00b9c89761eb#diff-d0f7acffbce59b9e36a1479d1b1f32955cd...
05/10/2021
- 10:32 PM Bug #50526: OSD massive creation: OSDs not created
- Juan Miguel Olmo Martínez wrote:
> I think that the fix will also work for your issue, it would be nice if you can c...
- 06:58 PM Feature #50733 (Closed): cephadm: provide message in orch upgrade status saying upgrade is complete
- Right now, the upgrade status just says the upgrade is no longer in progress and no explicit message is given to say ...
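For reference, the commands involved (the version is a placeholder):
# ceph orch upgrade start --ceph-version 16.2.4
# ceph orch upgrade status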
- 01:57 PM Feature #45864 (Resolved): cephadm: include monitoring components in usual upgrade process
- 09:21 AM Bug #49860: cephadm adopt - Report conf file missing - now it says could not detect legacy fsid
- Do you remember, were there any other ceph daemons deployed on that host? cephadm needs to know the fsid of the clus...
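For context, a sketch of the adoption step that needs the fsid (the daemon name is a placeholder):
# cephadm adopt --style legacy --name mon.node1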
- 08:57 AM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- prio=normal, as this is not trivial to implement
- 08:53 AM Feature #48102: cephadm: configure HA (cluster flags) for Alertmanager
- Isn't Alertmanager already HA by itself? I thought that alertmanager already creates a fault-tolerant cluster on...
- 08:48 AM Feature #48980 (Closed): orch: add image properties to monitoring spec files
- 08:42 AM Feature #48560 (Closed): Spec files for each daemon in the monitoring stack
05/09/2021
- 11:13 AM Bug #50717 (Resolved): cephadm: prometheus.yml.j2 contains "tab" character
- Hello.
In file /usr/share/ceph/mgr/cephadm/templates/services/prometheus/prometheus.yml.j2 provided by package
c...
05/08/2021
- 09:33 AM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- Sam Overton wrote:
> #50693 explains the root-cause, which is that @/sys/kernel/security/apparmor/profiles@ is an em...
05/07/2021
- 10:13 PM Bug #50671: cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client.admin.ke...
- I think this might be a permissions issue - it looks like cephadm is writing the keyring without changing its permiss...
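A hedged way to check that hypothesis manually (adjusting the mode is an assumption, not a confirmed fix):
# ls -l /etc/ceph/ceph.client.admin.keyring    # inspect owner and mode
# chmod 600 /etc/ceph/ceph.client.admin.keyring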
- 08:47 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- Deepika Upadhyay wrote:
> /ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-...
- 08:01 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- #50693 explains the root-cause, which is that @/sys/kernel/security/apparmor/profiles@ is an empty file and cephadm i...
- 07:57 PM Bug #50693 (Resolved): cephadm: commands fail with "ValueError: not enough values to unpack (expe...
- Occurs with ceph/cephadm 16.2.1 running on a clean Debian 10.9 install.
The following error is from a failed OSD D...
- 07:31 PM Bug #50691 (Resolved): cephadm: bootstrap fails with "IndexError: list index out of range" during...
- Running on a cleanly installed Debian 10.9 host with ceph/cephadm 16.2.3.
The same command in 16.2.1, running on t...
- 06:29 PM Bug #50690 (Can't reproduce): ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command not...
- Description of problem:
ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command is not generating the e...
- 03:27 PM Documentation #50687 (In Progress): cephadm: must redeploy monitoring stack daemon after changing...
- 02:03 PM Documentation #50687 (Resolved): cephadm: must redeploy monitoring stack daemon after changing im...
- We document that, to use a different image from the default for a monitoring stack daemon, you must change the image ...
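A sketch of the flow being documented (the image is a placeholder):
# ceph config set mgr mgr/cephadm/container_image_prometheus quay.io/prometheus/prometheus:v2.24.0
# ceph orch redeploy prometheus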
- 10:09 AM Bug #50685 (Resolved): wrong exception type: Exception("No filters applied")
- ...
- 09:20 AM Bug #48930: when removing the iscsi service, the gateway config object remains
- follow-up PR: https://github.com/ceph/ceph/pull/41181
- 09:19 AM Bug #48930 (Fix Under Review): when removing the iscsi service, the gateway config object remains
- 04:22 AM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- ...
05/06/2021
- 06:29 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- 06:24 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
- 06:23 PM Bug #50364 (Resolved): cephadm: removing daemons from hosts in maintenance mode
- 05:43 AM Bug #50671 (Closed): cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client...
- OSD status check fails with no keyring found.
CLI:
2021-05-01T12:08:20.050 INFO:tasks.cephadm:Waiting for OSDs t...
05/05/2021
- 02:52 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- still seeing in octopus: http://qa-proxy.ceph.com/teuthology/yuriw-2021-05-04_19:53:28-rados-wip-yuri-testing-2021-05...
05/04/2021
- 04:44 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- /ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-octopus-distro-basic-smithi...
- 11:44 AM Feature #50639 (New): Request to provide an option to specify erasure coded pool as datapool whil...
- ...
- 04:01 AM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- Just to confirm this is how the section looks after my edit...
05/03/2021
- 04:51 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- I did as suggested but the upgrade still fails with the following new error...
- 03:40 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- Workaround is to replace
/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adf...
- 03:36 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- ...
- 01:57 PM Bug #50616 (Duplicate): ValueError: not enough values to unpack (expected 2, got 1) during upgrad...
- Started an upgrade from 15.2.8 to 16.2.1 via cephadm running on Ubuntu 20.04 & Docker.
MON/MGR/MDS upgraded fine a...
- 03:49 PM Bug #50399: cephadm ignores registry settings
- You also have to update the image to point to your registry; otherwise cephadm doesn't actually use the registry.
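A sketch of pointing cephadm at the mirrored image (registry and tag are placeholders):
# ceph config set global container_image registry.example.com/ceph/ceph:v16.2.1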
- 03:45 PM Bug #44587 (New): failed to write <pid> to cgroup.procs:
05/02/2021
- 08:55 AM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- > - make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe include a --force in ca...
04/30/2021
- 07:42 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- I'd *definitely* go for making 'orch apply prometheus' silently enable the prometheus module.
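For reference, the manual steps under discussion:
# ceph mgr module enable prometheus
# ceph orch apply prometheus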
- 04:46 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- A couple options:
- make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe incl...
- 06:49 PM Support #50594 (Resolved): ceph orch / cephadm does not allow deploying multiple MDS daemons per ...
- I have 3 hosts, with lots of cores. I have a filesystem with ~150M files that requires several active MDS daemons to ...
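A hedged sketch of deploying several MDS daemons and raising the active count (fs name, hosts, and counts are placeholders):
# ceph orch apply mds myfs --placement="3 host1 host2 host3"
# ceph fs set myfs max_mds 2    # keep one standby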
- 10:15 AM Feature #50593 (Resolved): cephadm: cephfs-mirror service should enable "mgr/mirror"
- cephadm: cephfs-mirror service should enable "mgr/mirror"
- 07:00 AM Bug #50592 (Closed): "ceph orch apply <svc_type>" applies placement by default without providing ...
- ...
04/29/2021
- 09:13 AM Bug #50526: OSD massive creation: OSDs not created
- Andreas Håkansson wrote:
> We have the same or a very similar problem,
> In our test case adding more than 8 disks w...
04/28/2021
- 08:07 PM Bug #50102 (Resolved): spec jsons that expect a list in a field dont verify that a list was actua...
- 06:27 PM Bug #50306 (Pending Backport): /etc/hosts is not passed to ceph containers. clusters that were re...
- 06:26 PM Feature #46044 (Pending Backport): cephadm: Distribute admin keyring.
- 06:26 PM Bug #50443 (Pending Backport): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- 06:25 PM Bug #50544 (Pending Backport): cephadm: monitoring stack containers in conf file passed to bootst...
- 12:47 PM Bug #50544 (Fix Under Review): cephadm: monitoring stack containers in conf file passed to bootst...
- 06:24 PM Bug #50548 (Pending Backport): cephadm doesn't deploy monitors when multiple public networks
- 07:21 AM Bug #50548: cephadm doesn't deploy monitors when multiple public networks
- PR created: https://github.com/ceph/ceph/pull/41055
- 06:58 AM Bug #50548 (Resolved): cephadm doesn't deploy monitors when multiple public networks
- The issue was spotted on Ceph 16.2.1 deployed with cephadm+docker, although the master branch seems to also be affected.
...
- 05:44 PM Bug #50062 (Resolved): orch host add with multiple labels and no addr
- 05:32 PM Bug #50248 (Resolved): rgw-nfs daemons marked as stray
- 04:07 PM Feature #49960 (Resolved): cephadm: put max on number of daemons in placement count based on numb...
- 04:06 PM Documentation #50257 (Resolved): cephadm docs: wrong command for getting events for single daemon
- 04:06 PM Bug #49757 (Resolved): orch: --format flag name not included in help for 'orch ps' and 'orch ls'
- 09:48 AM Bug #50526: OSD massive creation: OSDs not created
- We have the same or a very similar problem,
In our test case adding more than 8 disks with db on a separate nvme devi...
- 09:26 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
- duplicates #47873
- 09:21 AM Bug #50551: Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by default
- We've been setting fs.aio-max-nr to 1048576 since early bluestore days with no apparent downside. That would be a sim...
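A sketch of that setting (the value is from the comment; the sysctl.d file name is a placeholder):
# sysctl -w fs.aio-max-nr=1048576
# echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/90-ceph-aio.conf    # persist across reboots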
- 09:14 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
- fs.aio-max-nr: The Asynchronous non-blocking I/O (AIO) feature that allows a process to initiate multiple I/O operati...
04/27/2021
- 06:52 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
- If you want to set monitoring stack container images during bootstrap by setting a config option like "mgr/cephadm/co...
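A hedged sketch of the intended flow this bug is about (the image and IP are placeholders; the option name follows the mgr/cephadm/container_image_* pattern mentioned above):
initial-ceph.conf:
[mgr]
mgr/cephadm/container_image_prometheus = quay.io/prometheus/prometheus:v2.24.0

# cephadm bootstrap --mon-ip 10.0.0.1 --config initial-ceph.conf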
- 03:22 PM Feature #46827: cephadm: Pin OSDs to pmem modules connected to specific CPUs
- workaround: manually set the config option
- 02:57 PM Feature #44874 (Rejected): cephadm: add Filestore support
- Sort of too late by now. I'd still accept PRs for this.
- 02:55 PM Feature #46044 (Fix Under Review): cephadm: Distribute admin keyring.
- 02:54 PM Feature #50236 (Rejected): cephadm: NFSv3
- 01:39 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
- Jeff Layton wrote:
> Seems reasonable. So what happens during a "cephadm pull"? I imagine:
>
> # determine the ne...
- 09:05 AM Bug #50535 (Resolved): add local cephadm bootstrap dev env.
- ...
- 09:04 AM Documentation #50534 (Resolved): docs: add full cluster purge
- 06:14 AM Bug #49506 (Resolved): cephadm: `cephadm ls` broken for SUSE's downstream alertmanager container
04/26/2021
- 04:37 PM Feature #50529 (Resolved): cephadm rm-cluster is also not resetting any disks that were used as osds
- See title.
Should probably be an optional argument or something.
- 03:43 PM Bug #50364 (Pending Backport): cephadm: removing daemons from hosts in maintenance mode
- 03:24 PM Bug #50526 (Resolved): OSD massive creation: OSDs not created
- OSDs are not created when the drive group used to launch the osd creation affects a large number of OSDs (75 in my case)...
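For context, a hedged sketch of a drive group spec of the kind that can match many devices at once (service_id and filters are placeholders):
osd_spec.yml:
service_type: osd
service_id: all_available
placement:
  host_pattern: '*'
data_devices:
  all: true

# ceph orch apply osd -i osd_spec.yml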
- 02:06 PM Bug #50524 (Resolved): placement spec: irritating error message if passed a string for count_per_...
- ...
- 08:25 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- Paul Cuzner wrote:
> Sebastian Wagner wrote:
> > A few problems:
> >
> > * *cephadm rm-cluster* only removes the...
04/23/2021
- 09:33 PM Bug #50502: cephadm pull doesn't get latest image
- https://github.com/ceph/ceph/pull/39058 caused a subtle behavior change.
Previously, if we used a non-stable tag,...
- 01:52 PM Bug #50502: cephadm pull doesn't get latest image
- This is a tricky one!
Imagine you set...
- 01:49 PM Bug #50502 (Closed): cephadm pull doesn't get latest image
- I tried to do a "cephadm pull" this morning on my mini-cluster and it got v16.2.0. Dockerhub has v16.2.1 currently th...
- 02:46 PM Feature #47274: cephadm: make the container_image setting available to the cephadm binary indepen...
- Seems reasonable. So what happens during a "cephadm pull"? I imagine:
# determine the new version
# set it in the...
- 01:56 PM Feature #45111 (Rejected): cephadm: choose distribution specific images based on etc/os-release
- Don't know. I'd like to avoid that complexity. Please reopen if you think this is a good idea.
- 12:22 PM Bug #50114 (Resolved): cephadm: upgrade loop on target_digests list mismatch
- 05:50 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- Sebastian Wagner wrote:
> A few problems:
>
> * *cephadm rm-cluster* only removes the cluster on the local host
...
04/22/2021
- 01:19 PM Bug #50444 (Pending Backport): host labels order is random
- 11:47 AM Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
- A few problems:
* *cephadm rm-cluster* only removes the cluster on the local host
* *mgr/cephadm* cannot remove t...
- 07:24 AM Support #48630: non-LVM OSD do not start after upgrade from 15.2.4 -> 15.2.7
- Sebastian Wagner wrote:
> I think you probably want to migrate to ceph-volume for now.
Hi Sebastian,
Thanks fo...
04/21/2021
- 09:18 PM Bug #50472 (Resolved): orchestrator doesn't provide a way to remove an entire cluster
- In prior toolchains like ceph-ansible, purging a cluster and returning a set of hosts to their original state was pos...
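For reference, the per-host command that exists today and motivates this ticket (the fsid is a placeholder):
# cephadm rm-cluster --fsid <fsid> --force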
- 02:58 PM Bug #47513 (Pending Backport): rook: 'ceph orch ps' does not show image and container id correctly
- 09:53 AM Support #49497: Cephadm fails to upgrade from 15.2.8 to 15.2.9
- Illya S. wrote:
> The error is still here with 15.2.10
>
> Stuck on 15.2.8
15.2.11 -- nothing changed
- 02:30 AM Bug #50443 (Fix Under Review): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
04/20/2021
- 07:44 PM Bug #50444 (Resolved): host labels order is random
- Host labels are not stored in the order entered or in a logical order like alphabetical; they are stored in a randomized o...
- 07:30 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- If you have < 2 running mgr daemons then the upgrade won't work because there will be no mgr to fail over to.
If you...
- 09:45 AM Bug #49954 (Resolved): cephadm is not persisting the grafana.db file, so any local customizations...