Activity
From 10/06/2020 to 11/04/2020
11/04/2020
- 06:52 PM Bug #48120 (Fix Under Review): JSONDecodeErrors unclear during cephadm upgrade
- 06:42 PM Bug #48120 (Resolved): JSONDecodeErrors unclear during cephadm upgrade
- a/teuthology/mgfritch-2020-11-03_14:22:58-rados:cephadm-wip-mgfritch-testing-2020-11-02-1045-distro-basic-smithi/5586...
- 06:40 PM Bug #48119 (Fix Under Review): JSONDecodeErrors break OSD deploy
- 06:24 PM Bug #48119: JSONDecodeErrors break OSD deploy
- a/teuthology/mgfritch-2020-11-03_14:22:58-rados:cephadm-wip-mgfritch-testing-2020-11-02-1045-distro-basic-smithi/5586...
- 06:23 PM Bug #48119 (Resolved): JSONDecodeErrors break OSD deploy
- a/teuthology/mgfritch-2020-11-03_14:22:58-rados:cephadm-wip-mgfritch-testing-2020-11-02-1045-distro-basic-smithi/5586...
- 06:23 PM Feature #48074 (Resolved): cephadm logs needs a -f (follow) option
- Ahh thanks! That works fine, and since it's documented in later versions of cephadm, we can probably close this.
- 06:18 PM Bug #48118 (Fix Under Review): JSONDecodeErrors break the server loop
- 06:16 PM Bug #48118 (Resolved): JSONDecodeErrors break the server loop
- a/mgfritch-2020-11-03_14:22:58-rados:cephadm-wip-mgfritch-testing-2020-11-02-1045-distro-basic-smithi/5586920/teuthol...
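The three JSONDecodeError tickets above (#48118, #48119, #48120) share a root cause: cephadm parses JSON emitted by subprocesses and surfaces a bare traceback when that output is malformed. A minimal sketch of the defensive-parsing pattern such fixes typically introduce (the helper name is hypothetical, not the actual cephadm code):

```python
import json

def parse_daemon_json(raw: str, source: str = "daemon output"):
    """Parse JSON from a subprocess, raising a readable error on failure.

    Hypothetical helper illustrating the pattern; not actual cephadm code.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        # Include the offending text so the failure is diagnosable,
        # instead of an opaque traceback breaking the caller's loop.
        snippet = raw[:200]
        raise ValueError(
            f"invalid JSON from {source}: {e} (got: {snippet!r})"
        ) from e
```

Wrapping the parse this way keeps the serve loop and upgrade/deploy paths alive with an actionable message rather than crashing on the raw `JSONDecodeError`.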
- 01:52 PM Feature #48114 (Duplicate): Cephadm to support Adding multiple instances of RGW in same node for ...
- Cephadm should support adding multiple instances of RGW on the same node for 5.0
- 01:41 PM Cleanup #48113 (Fix Under Review): doc/mgr/orchestrator: Add hints related to custom containers t...
- 01:38 PM Cleanup #48113 (Resolved): doc/mgr/orchestrator: Add hints related to custom containers to the docs
- The documentation on how to deploy custom containers needs some cleanup, plus additional information on how to handle paths.
- 10:22 AM Bug #47924 (Closed): rook: 'ceph orch daemon add nfs' fails due to invalid field value
- Fixed by https://tracker.ceph.com/issues/47923
- 10:17 AM Bug #48105: cephadm.py: failure on interactive on error for archive file handling
- To reproduce, on teuthology: ...
- 09:20 AM Bug #48107 (Resolved): cephadm fails to deploy iscsi gateway when selinux is enabled
- This looks like a mistake. https://github.com/ceph/ceph/pull/31321 contains a single commit 0444025aaf559a662882abc49...
- 07:38 AM Bug #47922 (Closed): rook: Failed to load ceph-mgr modules: cephadm, dashboard
- This issue no longer occurs with latest master branch image.
- 12:23 AM Bug #47923 (Pending Backport): rook: 'ceph orch apply nfs' throws error if no ganesha daemons are...
11/03/2020
- 10:46 PM Feature #48074: cephadm logs needs a -f (follow) option
- arbitrary args can be passed through to journalctl:...
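The comment above refers to passing extra arguments through to journalctl, which is what makes a follow option possible. As an illustration only (the helper and the systemd unit-name pattern are assumptions, not the actual cephadm implementation), forwarding pass-through args such as `-f` could look like:

```python
def build_journalctl_cmd(fsid: str, daemon_name: str, extra_args=None):
    """Build a journalctl command line for a cephadm-managed daemon.

    Hypothetical sketch: the ceph-<fsid>@<name> unit naming is an
    assumption here, as is the helper itself.
    """
    cmd = ['journalctl', '-u', f'ceph-{fsid}@{daemon_name}']
    # Anything the user passes through (e.g. -f to follow, or
    # -n 100 to limit output) is appended verbatim.
    cmd.extend(extra_args or [])
    return cmd
```

Forwarding arbitrary trailing args keeps the tool thin: journalctl's own options (including `-f`) work without cephadm re-implementing them.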
- 10:46 PM Bug #48107 (Can't reproduce): cephadm fails to deploy iscsi gateway when selinux is enabled
- With Ceph 15.2.5, iSCSI Gateway containers will still fail to start.
Applying the changes from the following pull requ...
- 06:17 PM Bug #48105 (Can't reproduce): cephadm.py: failure on interactive on error for archive file handling
- When running in interactive-on-error mode, IIRC, we observed that we might not be handling archive path generation:...
- 11:30 AM Feature #48102 (New): cephadm: configure HA (cluster flags) for Alertmanager
- While HA in metrics (Grafana) is less relevant (manual switchover is the basic workaround), alerting should always pe...
- 09:32 AM Bug #48068 (Fix Under Review): cephadm: Various properties like 'last_refresh' do not contain tim...
11/02/2020
- 09:56 PM Bug #47107: device-health-metrics unavailable because image ceph/ceph:latest has smartmontools 6.6
- Dimitri temporarily added the Copr repository to the container images in https://github.com/ceph/ceph-container/pull/...
- 06:36 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
- observed in octopus:
http://qa-proxy.ceph.com/teuthology/yuriw-2020-10-30_15:36:14-rados-wip-yuri-testing-2020-10-2...
- 05:35 PM Feature #48074 (Resolved): cephadm logs needs a -f (follow) option
- It would be nice to be able to follow logs as they are being printed from cephadm. Basically, be able to tell it to s...
- 03:57 PM Bug #48072 (Fix Under Review): ppa:projectatomic is no longer maintained
- 03:54 PM Bug #48072: ppa:projectatomic is no longer maintained
- ubuntu bionic (18.04) installs podman via ppa:projectatomic...
- 03:52 PM Bug #48072 (Resolved): ppa:projectatomic is no longer maintained
- a/mgfritch-2020-10-29_02:49:56-rados:cephadm-wip-mgfritch-testing-2020-10-28-1744-distro-basic-smithi/5569611/
- 03:51 PM Bug #48071 (Resolved): rook: 'ceph orch ls' does not list nfs-ganesha daemons
- ...
- 10:04 AM Bug #48068 (In Progress): cephadm: Various properties like 'last_refresh' do not contain timezone
- 08:19 AM Bug #48068 (Resolved): cephadm: Various properties like 'last_refresh' do not contain timezone
- The property 'last_refresh', 'created', 'started', 'last_configured' or 'last_deployed' in 'ServiceDescription' and '...
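The timezone bug above (#48068) comes down to serializing naive datetimes: without an offset, consumers cannot tell UTC from local time. A short sketch of the difference, assuming the fix is to emit timezone-aware UTC timestamps:

```python
from datetime import datetime, timezone

# Naive: no offset in the serialized string, so the zone is ambiguous.
naive = datetime.utcnow().isoformat()

# Aware: the UTC offset is carried explicitly in the string.
aware = datetime.now(timezone.utc).isoformat()
```

An aware timestamp serializes with a trailing `+00:00`, which is what properties like `last_refresh` need so that clients can interpret them unambiguously.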
10/31/2020
- 10:48 AM Bug #48031 (Fix Under Review): Cephadm: Needs to pass cluster.listen-address to alertmanager
10/30/2020
- 11:08 AM Bug #48041: mgr/cephadm: Allow customizing mgr/cephadm/lsmcli_blink_lights_cmd per host
- Volker Theile wrote:
...
> The problem is that the template to build the command line to enable/disable the LEDs is...
- 08:22 AM Bug #48041 (Fix Under Review): mgr/cephadm: Allow customizing mgr/cephadm/lsmcli_blink_lights_cmd...
- 07:11 AM Bug #48041: mgr/cephadm: Allow customizing mgr/cephadm/lsmcli_blink_lights_cmd per host
- Juan Miguel Olmo Martínez wrote:
> Could you explain the problem in more detail?
>
> The blinking light feature i...
10/29/2020
- 04:35 PM Bug #46764 (Can't reproduce): cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy ...
- no longer reproducible
- 04:13 PM Bug #48041: mgr/cephadm: Allow customizing mgr/cephadm/lsmcli_blink_lights_cmd per host
Could you explain the problem in more detail?
The blinking light feature is meant to be used with individual storage devic...
- 03:59 PM Bug #48041 (Resolved): mgr/cephadm: Allow customizing mgr/cephadm/lsmcli_blink_lights_cmd per host
- PR https://github.com/ceph/ceph/pull/36911 introduced a feature to customize the command that is internally used to s...
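To illustrate the per-host customization requested above, here is a hypothetical sketch: a host-specific template overrides a module-level default. The default string mirrors the option name in the ticket, but none of this is the actual mgr/cephadm code.

```python
# Illustrative default command template (placeholder names assumed).
DEFAULT_CMD = 'lsmcli local-disk-{ident_fault}-led-{on_off} --path {path}'

def blink_cmd_for(host: str, overrides: dict) -> str:
    """Return the blink-lights command template for a host.

    A per-host override wins over the module-level default.
    """
    return overrides.get(host, DEFAULT_CMD)
```

With this lookup in place, hosts whose enclosures need a vendor-specific tool can carry their own template while all other hosts fall back to the default.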
- 03:06 PM Bug #47360 (Resolved): cephadm: osd unit.run creates /var/run/ceph/$FSID too late, so OSD may not...
- backport to octopus via https://github.com/ceph/ceph/pull/37436
- 01:57 PM Feature #46182 (Resolved): cephadm should use the same image reference across the cluster
- backported to octopus via https://github.com/ceph/ceph/pull/37436
- 01:56 PM Bug #46814 (Resolved): cephadm: Deploying alertmanager image is broken
- backported to octopus via https://github.com/ceph/ceph/pull/36450
- 01:06 PM Bug #46665 (Pending Backport): cephadm plugin: Failure to start service stops service loop; no ot...
10/28/2020
- 05:49 PM Bug #48031: Cephadm: Needs to pass cluster.listen-address to alertmanager
- I've submitted a pull request.
https://github.com/ceph/ceph/pull/37883
- 05:12 PM Bug #48031 (Resolved): Cephadm: Needs to pass cluster.listen-address to alertmanager
- When using public IP with the ceph dashboard/monitoring, alert manager will not start with the following error messag...
- 03:41 PM Bug #47923 (Fix Under Review): rook: 'ceph orch apply nfs' throws error if no ganesha daemons are...
10/27/2020
- 04:07 PM Bug #48019 (Can't reproduce): cephadm: `ceph daemon <daemon-name> ...` is broken
- ...
- 04:06 PM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- I just noticed this ticket is in the Documentation tracker. I opened a new ticket #48019 to track the actual bug (it ...
10/26/2020
- 09:47 PM Bug #47107: device-health-metrics unavailable because image ceph/ceph:latest has smartmontools 6.6
- RHEL 8.3 is going to ship smartmontools 7 (https://bugzilla.redhat.com/show_bug.cgi?id=1671154).
In the meantime, ...
- 11:14 AM Bug #45628: cephadm qa: smoke should verify daemons are actually running
- The teuthology project is for tracking issues in teuthology itself, not for tracking missing test cases.
According...
10/24/2020
- 10:06 AM Bug #47387 (Resolved): rook: 'ceph orch ps' does not list daemons correctly
- The backport of this issue is done in https://github.com/ceph/ceph/pull/37436
10/23/2020
- 10:29 AM Feature #47970 (New): cephadm: enable user to retrieve configuration templates for monitoring com...
- Due to the necessity to migrate configuration files of monitoring components manually, the user should be able to con...
- 10:20 AM Bug #47969 (New): cephadm: output migration of alertmanager is confusing
- Though the migration process runs through successfully, the displayed output is probably confusing:...
- 10:05 AM Bug #47924 (Fix Under Review): rook: 'ceph orch daemon add nfs' fails due to invalid field value
- 09:56 AM Bug #47923: rook: 'ceph orch apply nfs' throws error if no ganesha daemons are deployed
- Currently, it works only if nfs-ganesha daemons are already deployed, but it should also work when none are. Since 'orch appl...
- 09:48 AM Bug #47968 (Resolved): rook: 'ceph orch rm' throws type error
- ...
- 08:10 AM Bug #47501 (Resolved): cephadm: Error bootstraping with '--container-init' option
10/22/2020
- 11:42 AM Bug #44990 (New): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or dir...
- It seems this issue still exists even though the fix PR was backported to octopus; seen in:
/a/yuriw-2020-10-...
10/21/2020
- 03:42 PM Bug #47924 (In Progress): rook: 'ceph orch daemon add nfs' fails due to invalid field value
- 08:44 AM Bug #47924 (Closed): rook: 'ceph orch daemon add nfs' fails due to invalid field value
- ...
- 10:55 AM Bug #46031 (Resolved): Exception: Failed to validate Drive Group: block_wal_size must be of type int
- 08:39 AM Bug #47923 (Resolved): rook: 'ceph orch apply nfs' throws error if no ganesha daemons are deployed
- ...
- 08:03 AM Bug #47921: Bad auth caps for orchestrated mds daemon
- mds created by "ceph orch apply mds 1" get auth caps of
mds.test_fs.******.gpqtol
key: ***************==
caps: [... - 08:01 AM Bug #47921 (Can't reproduce): Bad auth caps for orchestrated mds daemon
- mds created by "ceph orch apply mds 1" get auth caps of...
- 08:02 AM Bug #47922 (Closed): rook: Failed to load ceph-mgr modules: cephadm, dashboard
- Mgr log...
- 07:23 AM Bug #47862 (Pending Backport): cephadm no longer requires apparmor-abstractions on SUSE
10/20/2020
- 07:15 PM Bug #47916: podman containers running in a detached state do not output logs to journald
- Configuring journald as the log driver allows for the conmon logs to show the actual output from the failed daemon co...
- 07:11 PM Bug #47916 (Resolved): podman containers running in a detached state do not output logs to journald
- When a service has failed, it can be difficult to determine the actual cause:...
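As the comment above notes, the fix for #47916 is to run detached containers with podman's journald log driver so conmon forwards daemon output to the journal. A hypothetical helper sketching the relevant flags (`--log-driver journald` is a real podman option; the function itself is illustrative, not cephadm code):

```python
def podman_run_args(image: str, name: str):
    """Build args for launching a detached container with journald logging.

    Illustrative sketch; flag names are podman's, the helper is assumed.
    """
    return [
        'podman', 'run', '--rm', '-d',
        '--log-driver', 'journald',   # without this, detached output is lost
        '--name', name,
        image,
    ]
```

With journald as the log driver, `journalctl` on the host shows the actual output from a failed daemon container instead of nothing.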
10/19/2020
- 07:00 PM Bug #47905 (Resolved): cephadm: cephadm bootstrap is missing structured output. (was: logging to...
- cephadm CLI is currently using stderr for logging instead of stdout for info/debug statements....
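The separation requested in #47905 can be sketched as follows: human-readable logging goes to stderr, while structured, machine-parseable output goes to stdout. This is illustrative code, not the actual cephadm bootstrap.

```python
import json
import logging
import sys

# Route all info/debug logging to stderr, keeping stdout clean.
logging.basicConfig(stream=sys.stderr, level=logging.INFO)
logger = logging.getLogger('cephadm')

def emit_result(result: dict):
    """Log progress to stderr; print structured output to stdout."""
    logger.info('bootstrap complete')   # human-readable -> stderr
    print(json.dumps(result))           # machine-readable -> stdout
```

A script wrapping bootstrap can then parse stdout as JSON without filtering out interleaved log lines.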
- 08:10 AM Feature #47885 (New): Add networking checks
- The networking checks could be added to the orchestrator, given its access to each of the nodes.
It could e.g.
1...
10/15/2020
- 06:40 PM Bug #47873 (Resolved): /usr/lib/sysctl.d/90-ceph-osd.conf getting installed in container, renderi...
- The file /usr/lib/sysctl.d/90-ceph-osd.conf has the following contents:...
- 04:21 PM Bug #47872 (Resolved): cephadm: ceph orch add osd does not show error
- Trying to use:
# ceph orch daemon osd add hots1:/dev/sdc
No error is presented, and "hots1" does not exist in the clu...
- 10:17 AM Bug #47525 (Resolved): cephadm prepare-host time sync service list missing ntpsec
10/14/2020
- 02:50 PM Bug #47862 (Fix Under Review): cephadm no longer requires apparmor-abstractions on SUSE
- 02:37 PM Bug #47862 (Resolved): cephadm no longer requires apparmor-abstractions on SUSE
- We are currently carrying a commit - 070e5c3e35ea476815ff9fa4f71aed147fe1ea79 - which adds the following lines to cep...
10/13/2020
- 04:36 PM Bug #47796 (Can't reproduce): octopus: not possible to schedule "--suite rados/cephadm" teutholog...
- Octopus centos 7 builds suddenly started to appear on Shaman, so the bug is presumably no longer reproducible.
- 10:59 AM Bug #45909 (New): already existing cluster deployed: cephadm bootstrap failure
- 10:58 AM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Hey Sebastian,
I still see this failure when there is an already-existing cluster deployed: verifie...
- 04:00 AM Bug #47841 (Fix Under Review): `ceph orch device ls` assumes lsm data is present
- 01:01 AM Bug #47841 (Resolved): `ceph orch device ls` assumes lsm data is present
- Found via the rook toolbox in a k8s environment:...
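The crash in #47841 is a classic missing-key problem: the code assumed every device record carries libstoragemgmt (lsm) data, which is not the case in a rook/k8s environment. A hypothetical sketch of the defensive-access pattern the fix implies (field names here are assumptions, not the actual orchestrator schema):

```python
def device_health(device: dict) -> str:
    """Read optional lsm health data without assuming it is present.

    Illustrative only; 'lsm_data' and 'health' are assumed field names.
    """
    # Treat lsm data as optional: missing or None both fall back cleanly.
    lsm = device.get('lsm_data') or {}
    return lsm.get('health', 'Unknown')
```

Listing then degrades gracefully, showing "Unknown" for devices without lsm data instead of raising.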
10/10/2020
- 11:38 AM Bug #47745 (Pending Backport): cephadm: adopt {prometheus,grafana,alertmanager} fails with "Runti...
10/09/2020
- 02:59 PM Feature #43686 (Fix Under Review): cephadm: support rgw nfs
- 02:48 AM Feature #47805: orchestrator: add the ability to place a host into and out of maintenance
- Design PR - https://github.com/ceph/ceph/pull/37607
- 12:17 AM Feature #47805 (Closed): orchestrator: add the ability to place a host into and out of maintenance
- Maintenance windows are a fact of life, so in order to provide a consistent way that ceph hosts are placed into maint...
10/08/2020
- 04:35 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- /a/yuriw-2020-10-05_22:17:06-rados-wip-yuri7-testing-2020-10-05-1338-octopus-distro-basic-smithi/5500303/teuthology.log
- 04:16 PM Feature #46182 (Pending Backport): cephadm should use the same image reference across the cluster
- 04:02 PM Bug #47796 (Fix Under Review): octopus: not possible to schedule "--suite rados/cephadm" teutholo...
- 03:39 PM Bug #47796: octopus: not possible to schedule "--suite rados/cephadm" teuthology run without spec...
- ...
- 03:31 PM Bug #47796 (Can't reproduce): octopus: not possible to schedule "--suite rados/cephadm" teutholog...
- The "rados/cephadm" suite in octopus includes jobs that run on the following operating systems for which Shaman does ...
- 12:48 PM Bug #47366 (Closed): Mgr keeps dispatching `osd crush reweight` after OSD removal process is done
- Daniël Vos wrote:
> I've just tested this on version 15.2.5 and can confirm this is fixed. You can close this :-)
...
- 08:32 AM Bug #47648 (Pending Backport): mgr/cephadm: Rendering custom template HTML escapes wrongly
10/07/2020
- 04:24 PM Feature #47782: ceph orch host rm <host> is not stopping the services deployed in the respective ...
- Severity needs to be changed to sev-2
- 04:01 PM Feature #47782 (Duplicate): ceph orch host rm <host> is not stopping the services deployed in the...
- ceph orch host rm <host> is successful but services deployed in the removed hosts are still active and running instea...
- 02:42 PM Bug #47107: device-health-metrics unavailable because image ceph/ceph:latest has smartmontools 6.6
- Hi Daniël,
Thanks for reporting this.
We are aware of this issue and working to fix it.
Regards,
Yaarit
- 09:49 AM Feature #47774 (Resolved): orch,cephadm: host search with filters
- For the bulk addition of nodes, a @search@ command (or @add --dry-run@) would be useful. Additionally, for dashboard ...
10/06/2020
- 03:13 PM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- Running "ceph config-key rm mgr/cephadm/osd_remove_queue" and restarting the active mgr fixed the issue - "ceph orch" wor...
- 07:50 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- running latest Version:
"overall": "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)"...
- 07:45 AM Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
- Having the same problem here:
Added a new host & OSDs yesterday evening. While the cluster was still rebalancing, removed a...