Activity
From 09/29/2020 to 10/28/2020
10/28/2020
- 05:49 PM Bug #48031: Cephadm: Needs to pass cluster.listen-address to alertmanager
- I've submitted a pull request.
https://github.com/ceph/ceph/pull/37883
- 05:12 PM Bug #48031 (Resolved): Cephadm: Needs to pass cluster.listen-address to alertmanager
- When using a public IP with the ceph dashboard/monitoring, alertmanager will not start with the following error messag...
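For reference, a hedged sketch of the Alertmanager flag in question (the address is illustrative):
# alertmanager --cluster.listen-address=10.0.0.1:9094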
- 03:41 PM Bug #47923 (Fix Under Review): rook: 'ceph orch apply nfs' throws error if no ganesha daemons are...
10/27/2020
- 04:07 PM Bug #48019 (Can't reproduce): cephadm: `ceph daemon <daemon-name> ...` is broken
- ...
- 04:06 PM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- I just noticed this ticket is in the Documentation tracker. I opened a new ticket #48019 to track the actual bug (it ...
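The workaround in question, sketched for a hypothetical daemon osd.0 on a cephadm-managed host (cephadm enter drops into the daemon's container, where the admin socket is reachable):
# cephadm enter --name osd.0
# ceph daemon osd.0 config show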
10/26/2020
- 09:47 PM Bug #47107: device-health-metrics unavailable because image ceph/ceph:latest has smartmontools 6.6
- RHEL 8.3 is going to ship smartmontools 7 (https://bugzilla.redhat.com/show_bug.cgi?id=1671154).
In the meantime, ...
- 11:14 AM Bug #45628: cephadm qa: smoke should verify daemons are actually running
- The teuthology project is for tracking issues in teuthology itself, not for tracking missing test cases.
According...
10/24/2020
- 10:06 AM Bug #47387 (Resolved): rook: 'ceph orch ps' does not list daemons correctly
- The backport of this issue is done in https://github.com/ceph/ceph/pull/37436
10/23/2020
- 10:29 AM Feature #47970 (New): cephadm: enable user to retrieve configuration templates for monitoring com...
- Due to the necessity to migrate configuration files of monitoring components manually, the user should be able to con...
- 10:20 AM Bug #47969 (New): cephadm: output migration of alertmanager is confusing
- Though the migration process runs through successfully, the displayed output is probably confusing:...
- 10:05 AM Bug #47924 (Fix Under Review): rook: 'ceph orch daemon add nfs' fails due to invalid field value
- 09:56 AM Bug #47923: rook: 'ceph orch apply nfs' throws error if no ganesha daemons are deployed
- Currently, it works only if nfs-ganesha daemons are already deployed; it should work when none are deployed, too. Since 'orch appl...
- 09:48 AM Bug #47968 (Resolved): rook: 'ceph orch rm' throws type error
- ...
- 08:10 AM Bug #47501 (Resolved): cephadm: Error bootstrapping with '--container-init' option
10/22/2020
- 11:42 AM Bug #44990 (New): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or dir...
- Seems like this issue still exists, as I see the fix PR was backported to octopus. Seen this in:
/a/yuriw-2020-10-...
10/21/2020
- 03:42 PM Bug #47924 (In Progress): rook: 'ceph orch daemon add nfs' fails due to invalid field value
- 08:44 AM Bug #47924 (Closed): rook: 'ceph orch daemon add nfs' fails due to invalid field value
- ...
- 10:55 AM Bug #46031 (Resolved): Exception: Failed to validate Drive Group: block_wal_size must be of type int
- 08:39 AM Bug #47923 (Resolved): rook: 'ceph orch apply nfs' throws error if no ganesha daemons are deployed
- ...
- 08:03 AM Bug #47921: Bad auth caps for orchestrated mds daemon
- An mds created by "ceph orch apply mds 1" gets auth caps of
mds.test_fs.******.gpqtol
key: ***************==
caps: [...
- 08:01 AM Bug #47921 (Can't reproduce): Bad auth caps for orchestrated mds daemon
- An mds created by "ceph orch apply mds 1" gets auth caps of...
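A quick way to inspect the caps such a daemon received (entity name modeled on the masked example above; the host part is illustrative):
# ceph auth get mds.test_fs.<host>.gpqtol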
- 08:02 AM Bug #47922 (Closed): rook: Failed to load ceph-mgr modules: cephadm, dashboard
- Mgr log...
- 07:23 AM Bug #47862 (Pending Backport): cephadm no longer requires apparmor-abstractions on SUSE
10/20/2020
- 07:15 PM Bug #47916: podman containers running in a detached state do not output logs to journald
- Configuring journald as the log driver allows the conmon logs to show the actual output from the failed daemon co...
- 07:11 PM Bug #47916 (Resolved): podman containers running in a detached state do not output logs to journald
- When a service has failed, it can be difficult to determine the actual cause:...
10/19/2020
- 07:00 PM Bug #47905 (Resolved): cephadm: cephadm bootstrap is missing structured output. (was: logging to...
- cephadm CLI is currently using stderr for logging instead of stdout for info/debug statements....
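Illustrative only: splitting the two streams makes the behavior visible, with the info/debug lines ending up in the stderr capture:
# cephadm ls > stdout.txt 2> stderr.txt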
- 08:10 AM Feature #47885 (New): Add networking checks
- The networking checks could be added to the orchestrator, given its access to each of the nodes.
It could e.g.
1...
10/15/2020
- 06:40 PM Bug #47873 (Resolved): /usr/lib/sysctl.d/90-ceph-osd.conf getting installed in container, renderi...
- The file /usr/lib/sysctl.d/90-ceph-osd.conf has the following contents:...
- 04:21 PM Bug #47872 (Resolved): cephadm: ceph orch add osd does not show error
- Trying to use:
# ceph orch daemon osd add hots1:/dev/sdc
No error presented and "hots1" does not exist in the clu...
- 10:17 AM Bug #47525 (Resolved): cephadm prepare-host time sync service list missing ntpsec
10/14/2020
- 02:50 PM Bug #47862 (Fix Under Review): cephadm no longer requires apparmor-abstractions on SUSE
- 02:37 PM Bug #47862 (Resolved): cephadm no longer requires apparmor-abstractions on SUSE
- We are currently carrying a commit - 070e5c3e35ea476815ff9fa4f71aed147fe1ea79 - which adds the following lines to cep...
10/13/2020
- 04:36 PM Bug #47796 (Can't reproduce): octopus: not possible to schedule "--suite rados/cephadm" teutholog...
- Octopus centos 7 builds suddenly started to appear on Shaman, so the bug is presumably no longer reproducible.
- 10:59 AM Bug #45909 (New): already existing cluster deployed: cephadm bootstrap failure
- 10:58 AM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Hey Sebastian,
I see this failure still, in conditions when there is an already existing cluster deployed: verifie...
- 04:00 AM Bug #47841 (Fix Under Review): `ceph orch device ls` assumes lsm data is present
- 01:01 AM Bug #47841 (Resolved): `ceph orch device ls` assumes lsm data is present
- Found via the rook toolbox in a k8s environment:...
10/10/2020
- 11:38 AM Bug #47745 (Pending Backport): cephadm: adopt {prometheus,grafana,alertmanager} fails with "Runti...
10/09/2020
- 02:59 PM Feature #43686 (Fix Under Review): cephadm: support rgw nfs
- 02:48 AM Feature #47805: orchestrator: add the ability to place a host into and out of maintenance
- Design PR - https://github.com/ceph/ceph/pull/37607
- 12:17 AM Feature #47805 (Closed): orchestrator: add the ability to place a host into and out of maintenance
- Maintenance windows are a fact of life, so in order to provide a consistent way that ceph hosts are placed into maint...
10/08/2020
- 04:35 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- /a/yuriw-2020-10-05_22:17:06-rados-wip-yuri7-testing-2020-10-05-1338-octopus-distro-basic-smithi/5500303/teuthology.log
- 04:16 PM Feature #46182 (Pending Backport): cephadm should use the same image reference across the cluster
- 04:02 PM Bug #47796 (Fix Under Review): octopus: not possible to schedule "--suite rados/cephadm" teutholo...
- 03:39 PM Bug #47796: octopus: not possible to schedule "--suite rados/cephadm" teuthology run without spec...
- ...
- 03:31 PM Bug #47796 (Can't reproduce): octopus: not possible to schedule "--suite rados/cephadm" teutholog...
- The "rados/cephadm" suite in octopus includes jobs that run on the following operating systems for which Shaman does ...
- 12:48 PM Bug #47366 (Closed): Mgr keeps dispatching `osd crush reweight` after OSD removal process is done
- Daniël Vos wrote:
> I've just tested this on version 15.2.5 and can confirm this is fixed. You can close this :-)
...
- 08:32 AM Bug #47648 (Pending Backport): mgr/cephadm: Rendering custom template HTML escapes wrongly
10/07/2020
- 04:24 PM Feature #47782: ceph orch host rm <host> is not stopping the services deployed in the respective ...
- Severity needs to be changed to sev-2
- 04:01 PM Feature #47782 (Duplicate): ceph orch host rm <host> is not stopping the services deployed in the...
- ceph orch host rm <host> is successful but services deployed in the removed hosts are still active and running instea...
- 02:42 PM Bug #47107: device-health-metrics unavailable because image ceph/ceph:latest has smartmontools 6.6
- Hi Daniël,
Thanks for reporting this.
We are aware of this issue and working to fix it.
Regards,
Yaarit
- 09:49 AM Feature #47774 (Resolved): orch,cephadm: host search with filters
- For the bulk addition of nodes, a @search@ command (or @add --dry-run@) would be useful. Additionally, for dashboard ...
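A purely hypothetical sketch of the requested @add --dry-run@ form (the flag does not exist; hostname and address are invented for illustration):
# ceph orch host add host17 10.0.0.17 --dry-run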
10/06/2020
- 03:13 PM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- running "ceph config-key rm mgr/cephadm/osd_remove_queue" and restarting active mgr fixed the issue - "ceph orch" wor...
- 07:50 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- Running the latest version:
"overall": "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)"... - 07:45 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- Having the same problem here:
Added a new host & OSDs yesterday evening. While the cluster was still rebalancing, removed a...
10/05/2020
- 08:01 PM Bug #47726: disk selector should pass all devices to ceph-volume (available and unavailable)
- Octopus backport is https://github.com/ceph/ceph/pull/37520
- 01:27 PM Bug #46764 (New): cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy any OSDs
- This issue was "fixed" by inserting a one-minute grace period between
(a) the completion of "cephadm bootstrap"
...
- 09:21 AM Bug #47745 (Fix Under Review): cephadm: adopt {prometheus,grafana,alertmanager} fails with "Runti...
- 09:12 AM Bug #47745 (Resolved): cephadm: adopt {prometheus,grafana,alertmanager} fails with "RuntimeError:...
- This is essentially the same problem as https://tracker.ceph.com/issues/46398, but occurs when adopting prometheus/gr...
10/04/2020
- 05:44 AM Bug #47726 (Pending Backport): disk selector should pass all devices to ceph-volume (available an...
10/03/2020
- 01:46 AM Bug #47580 (Pending Backport): cephadm: "Error ENOENT: Module not found": TypeError: type object ...
10/02/2020
- 09:45 AM Bug #47726 (Fix Under Review): disk selector should pass all devices to ceph-volume (available an...
- 09:39 AM Bug #47726 (Resolved): disk selector should pass all devices to ceph-volume (available and unavai...
- With the lvm batch refactor merged (https://github.com/ceph/ceph/pull/34740) we need to pass available and unavailabl...
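For illustration, a report-only run that hands the whole device list to ceph-volume (device paths are placeholders):
# ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd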
- 06:42 AM Bug #47366: Mgr keeps dispatching `osd crush reweight` after OSD removal process is done
- I've just tested this on version 15.2.5 and can confirm this is fixed. You can close this :-)
- 06:41 AM Bug #47107: device-health-metrics unavailable because image ceph/ceph:latest has smartmontools 6.6
- Unfortunately ceph/ceph:15.2.5 still comes with smartmontools version 6.6. device-health-metrics rely on smartmontool...
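One way to confirm which version a given image ships, assuming smartctl is on the image's PATH:
# podman run --rm --entrypoint smartctl docker.io/ceph/ceph:v15.2.5 --version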
09/30/2020
- 11:00 PM Feature #47711 (Resolved): mgr/cephadm: add a feature to examine the host facts to look for confi...
- Once the host facts are part of the mgr/cephadm cache, the idea is to periodically check the host cache to look for an...
- 07:30 PM Bug #47694: downgrading via ceph orch upgrade start results in partial application and mixed state
- After a while the status and versions are as expected. We should probably still put a validation against downgrades i...
- 09:10 AM Bug #47694 (Won't Fix): downgrading via ceph orch upgrade start results in partial application an...
- Following https://docs.ceph.com/en/latest/cephadm/upgrade/#using-customized-container-images I attempted to _downgrad...
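The attempt presumably followed the documented image override, along these lines (the tag is illustrative):
# ceph orch upgrade start --image docker.io/ceph/ceph:v15.2.4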
- 07:29 PM Bug #47702: upgrading via ceph orch upgrade start results in partial application and mixed state
- Seems like it's a reporting issue. After a while, the status and versions are as expected.
- 03:12 PM Bug #47702 (Can't reproduce): upgrading via ceph orch upgrade start results in partial applicatio...
- Following https://docs.ceph.com/en/latest/cephadm/upgrade/#using-customized-container-images I attempted to upgrade m...
- 06:34 PM Bug #47709 (Duplicate): orchestrator._interface.OrchestratorValidationError: name mon.c already i...
- ...
- 03:58 PM Bug #47501 (Fix Under Review): cephadm: Error bootstrapping with '--container-init' option
- 03:29 PM Bug #47501 (In Progress): cephadm: Error bootstrapping with '--container-init' option
- 02:48 PM Bug #47700 (Fix Under Review): during OSD deletion: Module 'cephadm' has failed: Set changed size...
- 02:44 PM Bug #47700 (Resolved): during OSD deletion: Module 'cephadm' has failed: Set changed size during ...
- ...
- 11:48 AM Bug #47639 (Resolved): cephadm: rbd-mirror daemons automatically marked as stray
- 10:01 AM Bug #46764: cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy any OSDs
- Could you include the mgr log the next time you see this issue?
09/29/2020
- 05:44 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- Seeing this in a recent octopus branch with the related issue fix:
http://qa-proxy.ceph.com/teuthology/yuriw-2020-09-28_1...
- 02:04 PM Bug #47381: "ceph orch apply --dry-run" reports empty osdspec even though OSDs will be deployed
- > But AFAICT we are not requiring users to issue a "ceph orch device ls --refresh" before running "ceph orch apply --...
- 11:03 AM Bug #47580 (Fix Under Review): cephadm: "Error ENOENT: Module not found": TypeError: type object ...
- 11:02 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- moved to https://tracker.ceph.com/issues/47684
- 10:49 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- Attached.
Thanks
- 08:47 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- do you have a chance to upload new MGR logs?
- 04:13 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- The above two commands worked and allowed the upgrade to start; it progressed to 2% (upgraded the MON and a few OSDs...
- 11:01 AM Bug #47684 (Resolved): cephadm: auth get failed: failed to find osd.27 in keyring retval: -2
- ...