Activity
From 03/24/2020 to 04/22/2020
04/22/2020
- 11:27 PM Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
- Isn't that normal? Others do that too; look at _add_rgw above it. If you want to set the non-standard options you are...
- 11:19 AM Bug #45174 (Triaged): cephadm: missing parameters on 'orch daemon add iscsi'
- 09:23 AM Bug #45174 (Resolved): cephadm: missing parameters on 'orch daemon add iscsi'
- `orch daemon add iscsi` is missing some parameters when compared to iSCSI service spec:
add command parameter: htt...
- 07:41 PM Bug #45162 (Fix Under Review): cephadm: iscsi should use the correct container image
- 02:59 PM Bug #45087 (Triaged): cephadm: add-repo: cephadm uses the container image ID as Debian repo base
- 02:59 PM Bug #45087: cephadm: add-repo: cephadm uses the container image ID as Debian repo base
- Right, that doesn't make sense.
- 02:55 PM Feature #44869 (Need More Info): cephadm: automatic auth key rotation
- need info
- 02:55 PM Bug #45120 (Fix Under Review): cephadm: adopt prometheus doesn't work
- 02:17 PM Feature #43690: cephadm: service resource limits
- [16:13:08] <jlayton> need to lower the daemon memory limits to try and reproduce a problem
[16:13:27] <jlayton> (plu...
- 02:11 PM Bug #44832 (Fix Under Review): cephadm: `ceph cephadm generate-key` fails with No such file or di...
- 02:00 PM Bug #44826 (Fix Under Review): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring pr...
- 01:49 PM Documentation #44828 (Fix Under Review): cephadm: clarify "Failed to infer CIDR network for mon ip"
- 01:49 PM Documentation #44905 (Pending Backport): cephadm troubleshooting SSH errors
- 01:20 PM Bug #45095 (Pending Backport): cephadm adopt can't handle offline OSDs
- 01:20 PM Bug #45108 (Pending Backport): test_orchestrator: service ls doesn't work
- 01:19 PM Bug #44609 (Resolved): cephadm: grafana: cert problem prevents dashboard integration
- 01:18 PM Bug #45081 (Pending Backport): cephadm: `upgrade check 15.2.1` : OrchestratorError: Failed to pul...
- 12:53 PM Bug #45129: simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
- Sebastian Wagner wrote:
> Hm, we're already injecting this lvm activate into the unit file:
>
> * https://github....
- 09:16 AM Documentation #44971 (Fix Under Review): cephadm: document the cephadm binary
- 09:01 AM Bug #45172 (Resolved): bin/cephadm: logs: Traceback: not enough values to unpack (expected 2, got 1)
- Querying the logs for a daemon that doesn't exist results in an uncaught traceback....
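A minimal reproduction sketch, assuming the usual @--fsid@/@--name@ arguments of the @cephadm logs@ subcommand (the fsid and daemon name below are made-up placeholders, not the reporter's exact values):
<pre>
# Ask cephadm for the journal of a daemon that was never deployed.
# Expected: a clean "daemon not found" error; observed per this ticket:
# an uncaught "not enough values to unpack (expected 2, got 1)" traceback.
cephadm logs --fsid $FSID --name mon.doesnotexist
</pre>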
04/21/2020
- 09:07 PM Bug #45167 (Can't reproduce): cephadm: mons are not properly deployed
- ...
- 06:47 PM Backport #44893: octopus: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34423
m...
- 06:05 PM Documentation #45165 (Can't reproduce): cephadm troubleshooting: recover from broken daemons
- ...
- 05:41 PM Bug #45162: cephadm: iscsi should use the correct container image
- docker.io/ceph/ceph:v15 is the default cephadm container image so if you're using a different container image via the...
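For context, a hedged sketch of how a non-default image can be pinned so that later service deployments (such as the iSCSI gateway in this ticket) don't fall back to docker.io/ceph/ceph:v15; the registry name is a made-up example:
<pre>
# Bootstrap against a custom registry instead of the default image
cephadm bootstrap --image registry.example.com/ceph/ceph:latest --mon-ip <ip>
# And/or tell the cluster which image cephadm should use for new daemons
ceph config set global container_image registry.example.com/ceph/ceph:latest
</pre>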
- 02:24 PM Bug #45162 (Resolved): cephadm: iscsi should use the correct container image
- I've used "registry/ceph/ceph:latest" to bootstrap my cluster, but when I add an iSCSI gateway I see that "docker.io/...
- 03:59 PM Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
- sorry for the noise, mike
- 10:46 AM Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
- the traceback is:...
- 10:18 AM Bug #45155 (Closed): mgr/dashboard: Error listing orchestrator NFS daemons
- I've used orchestrator to add an NFS gateway:...
- 03:06 PM Documentation #45128 (Resolved): cephadm: document: `orch device zap`
- https://github.com/ceph/ceph/pull/34668
This PR addresses this issue.
- 02:54 PM Feature #45163 (Resolved): cephadm: iscsi: read and write config-key for the dashboard
- Make use of https://github.com/ceph/ceph/blob/master/src/pybind/mgr/dashboard/services/iscsi_config.py#L46 to set isc...
- 02:04 PM Bug #45161 (Resolved): cephadm: iscsi should validate the existence of the given pool
- Let's create an iscsi gw, without having the "rbd" pool:...
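As an illustration of the precondition the requested validation should enforce, a hedged sketch (pool and spec file names are made up, not the ticket's exact reproduction steps):
<pre>
# The pool referenced by the iSCSI spec has to exist before the service is applied
ceph osd pool create iscsi-pool
ceph osd pool application enable iscsi-pool rbd
# Only then apply the iSCSI service spec that points at that pool
ceph orch apply -i iscsi-spec.yaml   # spec whose "pool" field is iscsi-pool
</pre>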
04/20/2020
- 04:46 PM Bug #45152 (Rejected): cephadm: data structure doesn't work for multiple CephFS
- It is possible to move an MDS from one FS to another:...
- 12:37 PM Bug #45120: cephadm: adopt prometheus doesn't work
- So we should have something to address issue 2/ since the prometheus data directory depends on the OS.
I'll update...
04/19/2020
- 09:33 PM Feature #44919 (New): cephadm MONSpec: special dir for the mon store?
- 09:27 PM Tasks #45143 (Closed): cephadm: scheduler improvements
- h3. Prio = high
* Removal: make sure enough daemons joined the maps
* Removal: prioritize standby daemons
* docum...
04/18/2020
- 08:52 AM Feature #45091: cephadm: CephX disabled: bad_method + failed to fetch mon config
- It seems this isn't just a bug related to new MDS; I also hit the exact same error trying to add a new OSD/MON via ce...
04/17/2020
- 09:46 PM Bug #45120: cephadm: adopt prometheus doesn't work
- This path appears to be platform and/or config dependent based on the '--storage.tsdb.path' argument.
Debian uses a d...
- 09:44 PM Feature #45138 (Closed): cephadm: remove legacy daemons
- rm-daemon doesn't support removing those daemons:...
- 09:26 PM Bug #45129: simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
- Hm, we're already injecting this lvm activate into the unit file:
* https://github.com/ceph/ceph/blob/138018eddc8a...
- 12:44 PM Bug #45129: simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
- OK, here's what's going on: outside the container world, simple OSDs have a unit enabled named something like ceph-vo...
- 11:33 AM Bug #45129 (Resolved): simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
- When running @cephadm adopt@ against a simple (ceph-disk) style OSD, the adopt runs fine, and the OSD starts, but lat...
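A hedged diagnostic sketch for this situation (unit names vary per host; $FSID and $ID below are placeholders), showing how to check which unit is actually expected to bring the OSD up after a reboot:
<pre>
# Which OSD-related units exist, and which are enabled to run at boot?
systemctl list-units 'ceph*osd*' 'ceph-volume@*' --all
systemctl list-unit-files | grep -E 'ceph.*(osd|volume)'
# After adoption, the containerized unit should be the one that is enabled:
systemctl status ceph-$FSID@osd.$ID
journalctl -u ceph-$FSID@osd.$ID -b
</pre>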
- 09:09 PM Backport #44845 (Resolved): octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 09:09 PM Backport #44893 (Resolved): octopus: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 09:08 PM Backport #45061 (Resolved): octopus: cephadm: iscsi
- 10:31 AM Documentation #45128 (Resolved): cephadm: document: `orch device zap`
- There is no mention of `orch device zap` in the docs: https://docs.ceph.com/docs/master/search/?q=orch+device+zap
...
- 03:46 AM Bug #44769 (Pending Backport): cephadm doesn't reuse osd_id of 'destroyed' osds
04/16/2020
- 08:28 PM Bug #45120 (Resolved): cephadm: adopt prometheus doesn't work
- The cephadm adopt prometheus command has two major problems:
1/ the etc/prometheus directory in the destination di...
- 06:43 PM Cleanup #45118 (Closed): orch (pacific): cleanup CLI
- use ...
- 06:30 PM Feature #44556 (Pending Backport): cephadm: preview drivegroups
- 07:46 AM Feature #44556 (Resolved): cephadm: preview drivegroups
- 06:29 PM Feature #43689 (Resolved): cephadm: iscsi
- 06:28 PM Bug #44602 (Resolved): cephadm: `orch ls` shows daemons as online, despite host is down
- 06:27 PM Bug #44934 (Resolved): cephadm RGW: scary remove-deploy loop
- 06:25 PM Feature #43839 (Resolved): enhance `host ls`
- 03:51 PM Subtask #45116 (Resolved): cephadm: RGW Load balancer using HAproxy
- The Ceph Object Gateway allows you to assign many instances of the object gateway to a single zone so that you can sc...
- 03:27 PM Feature #45115 (New): cephadm: Deploy Ceph Dashboard behind a HAProxy instance
- A very common scenario is putting Ceph Dashboard behind a proxy server, for load balancing and security purposes. Thi...
- 01:22 PM Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
- See
https://github.com/ceph/ceph/pull/34589
This PR addresses this issue.
- 10:51 AM Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
- https://docs.ceph.com/docs/master/cephadm/install/?highlight=public_network#deploy-additional-monitors-optional
Li...
- 10:50 AM Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
- https://ceph.readthedocs.io/en/latest/cephadm/install/#deploy-additional-monitors-optional
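A hedged example of the workaround these docs describe, assuming the documented @public_network@ option and an example subnet:
<pre>
# Tell the cluster which subnet mon daemons should bind to,
# so cephadm can infer the network when placing additional mons
ceph config set mon public_network 10.1.2.0/24
# Placing mons then no longer fails with "Failed to infer CIDR network for mon ip"
ceph orch apply mon 3
# Or pin a mon to an explicit IP on a host
ceph orch daemon add mon newhost:10.1.2.123
</pre>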
- 12:52 PM Bug #44577: cephadm: reconfigure Prometheus on MGR failover
- How can we make sure we're not missing any MgrMap updates, as we have to deal with a mgr/cephadm failover simultaneo...
- 10:21 AM Feature #45111 (Rejected): cephadm: choose distribution specific images based on etc/os-release
- It would be great to automatically download the Leap images if cephadm was started on Leap.
- 09:44 AM Bug #45108 (Fix Under Review): test_orchestrator: service ls doesn't work
- 07:51 AM Bug #45108 (Resolved): test_orchestrator: service ls doesn't work
- *How to reproduce*
- vstart a cluster.
- Enable test_orchestrator and set it as the backend.
- listing services ra...
- 09:04 AM Bug #45086 (Resolved): cephadm: upgrade from v15 to v15.2.1 not working
- 08:22 AM Bug #45093 (Can't reproduce): cephadm: mgrs transiently getting co-located (one node gets two whe...
- It is not 100% reproducible.
- 07:46 AM Bug #44769 (Fix Under Review): cephadm doesn't reuse osd_id of 'destroyed' osds
04/15/2020
- 01:58 PM Bug #43838 (In Progress): cephadm: Forcefully Remove Services (unresponsive hosts)
- 01:48 PM Bug #45086: cephadm: upgrade from v15 to v15.2.1 not working
- yes - thank you!
ceph orch upgrade start --ceph-version 15.2.1
Initiating upgrade to docker.io/ceph/ceph:v15.2.1
- 01:44 PM Bug #45086 (Need More Info): cephadm: upgrade from v15 to v15.2.1 not working
- 01:43 PM Bug #45086: cephadm: upgrade from v15 to v15.2.1 not working
- Is this fixed by https://github.com/ceph/ceph/pull/34556 ?
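For reference, a hedged sketch of the two ways to start the same upgrade that are being compared in this ticket (the version and image values are the ones quoted above):
<pre>
# By version: cephadm resolves this to docker.io/ceph/ceph:v15.2.1
ceph orch upgrade start --ceph-version 15.2.1
# Or by naming the target image explicitly
ceph orch upgrade start --image docker.io/ceph/ceph:v15.2.1
# Progress and errors show up in:
ceph orch upgrade status
</pre>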
- 01:36 PM Bug #45097 (Resolved): cephadm: UX: Traceback, if `orch host add mon1` fails.
- This should not show a Traceback: ...
- 01:31 PM Bug #45093 (Need More Info): cephadm: mgrs transiently getting co-located (one node gets two when...
- 09:22 AM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- could you attach ...
- 07:54 AM Bug #45093 (Resolved): cephadm: mgrs transiently getting co-located (one node gets two when only ...
- h3. After "ceph orch apply mgr node1,node2,node3", cluster has four MGRs
This started happening in master very rec...
- 01:26 PM Bug #45081 (Fix Under Review): cephadm: `upgrade check 15.2.1` : OrchestratorError: Failed to pul...
- 01:14 PM Documentation #44905: cephadm troubleshooting SSH errors
- <SebastianW> 1. ceph cephadm get-ssh-config > config
<SebastianW> 2. ceph config-key get mgr/cephadm/ssh_identity_ke...
- 12:43 PM Bug #43816 (New): cephadm: Unable to use IPv6 on "cephadm bootstrap"
- This covers PR 34180.
- 12:40 PM Bug #44820 (Resolved): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 12:40 PM Bug #44609 (Pending Backport): cephadm: grafana: cert problem prevents dashboard integration
- 12:39 PM Bug #44602 (Pending Backport): cephadm: `orch ls` shows daemons as online, despite host is down
- 11:37 AM Bug #44950 (Duplicate): OSDSpec: Reserving storage on db_devices
- 11:28 AM Bug #45029 (Pending Backport): cephadm: add-repo fails (silently) when no arguments are given
- 11:27 AM Backport #45061 (In Progress): octopus: cephadm: iscsi
- 11:25 AM Bug #45065 (Pending Backport): cephadm: Config option warn_on_stray_daemons does not work as expe...
- 11:24 AM Bug #44934 (Pending Backport): cephadm RGW: scary remove-deploy loop
- 09:44 AM Bug #45095 (Fix Under Review): cephadm adopt can't handle offline OSDs
- 09:17 AM Bug #45095 (Resolved): cephadm adopt can't handle offline OSDs
- @cephadm adopt@ for OSDs relies on the OSD actually being up and running (it checks /var/lib/ceph/osd/ceph-$ID/{fsid,...
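To make the dependency concrete, a hedged sketch of reading the same metadata straight from the OSD data directory, which stays available even while the daemon is down (OSD id 0 is a placeholder):
<pre>
# What `cephadm adopt` needs is already on disk; no running daemon required:
cat /var/lib/ceph/osd/ceph-0/fsid
cat /var/lib/ceph/osd/ceph-0/type       # e.g. "bluestore"
cat /var/lib/ceph/osd/ceph-0/ceph_fsid
</pre>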
- 09:12 AM Bug #45092 (Duplicate): grafana api url - cephadm
- Thanks for reporting this! (duplicates #44877)
- 05:41 AM Bug #45092 (Duplicate): grafana api url - cephadm
- Upon creating a cephadm grafana using 'ceph orch apply grafana 1'
The api url is automatically set to the internal...
- 05:36 AM Feature #45091 (Closed): cephadm: CephX disabled: bad_method + failed to fetch mon config
- Upon trying to migrate an existing cluster to cephadm, the new MDS daemons deployed via cephadm fail to start.
Cluster is running wi...
04/14/2020
- 05:21 PM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
- Quick check in vstart suggests the mgr works fine (current octopus).
Guess a next step could be to run this with d...
- 04:17 PM Bug #45065 (Fix Under Review): cephadm: Config option warn_on_stray_daemons does not work as expe...
- 01:52 PM Bug #45065: cephadm: Config option warn_on_stray_daemons does not work as expected
- As far as I can see, the problem is line 1050 in mgr/cephadm/module.py. The line states
if self.warn_on_stray_ho...
- 02:53 PM Bug #45086: cephadm: upgrade from v15 to v15.2.1 not working
- I tried both - v15.2.1 and 15.2.1 (as stated in documentation) - same result
- 02:47 PM Bug #45086: cephadm: upgrade from v15 to v15.2.1 not working
- have you tried without the v prefix?...
- 01:17 PM Bug #45086 (Resolved): cephadm: upgrade from v15 to v15.2.1 not working
- start updgrade:...
- 02:43 PM Feature #45089 (New): cephadm: mgr thrasher
- Add a mgr thrasher
And make sure *all things* are idempotent everywhere.
- 01:27 PM Bug #45087 (Closed): cephadm: add-repo: cephadm uses the container image ID as Debian repo base
- installed cephadm last week according to docu and added repo via cephadm:...
- 10:33 AM Bug #45081 (Resolved): cephadm: `upgrade check 15.2.1` : OrchestratorError: Failed to pull 15.2.1...
- ...
04/13/2020
- 10:47 AM Bug #45065: cephadm: Config option warn_on_stray_daemons does not work as expected
- Debian 10, CEPH v15.2.1. The same problem.
- 10:44 AM Bug #45065: cephadm: Config option warn_on_stray_daemons does not work as expected
- On Octopus cluster we've installed tcmu-runner, ceph-iscsi as stated in Manual Installation for iSCSI Gateways. We've...
04/11/2020
- 06:47 PM Bug #45065 (Resolved): cephadm: Config option warn_on_stray_daemons does not work as expected
- On an Octopus cluster I configured tcmu-runner to export storage via iscsi. As the tcmu-runner isn't configured by cep...
- 09:41 AM Backport #45061 (Resolved): octopus: cephadm: iscsi
- https://github.com/ceph/ceph/pull/34554
- 09:36 AM Bug #45037 (Resolved): octopus: cephadm: non-ceph units put wrong uid:gid in systemd unit file
- https://github.com/ceph/ceph/pull/34438
04/10/2020
- 09:00 PM Bug #45029 (In Progress): cephadm: add-repo fails (silently) when no arguments are given
- > Would you be open for a PR that either prints out an error when no arguments are given or that assumes some default...
- 12:58 PM Bug #45029 (Resolved): cephadm: add-repo fails (silently) when no arguments are given
- I've been looking around to see if there are any low-level things that were broken for me during the installation of ...
- 08:54 PM Bug #45032 (Resolved): cephadm: Not recovering from `OSError: cannot send (already closed?)`
- Workaround for this was:...
04/09/2020
- 01:15 PM Bug #43803 (Resolved): ceph orchestrator rgw rm: no valid command found
- 01:11 PM Bug #45016 (Resolved): mgr: `ceph tell mgr mgr_status` hangs
- cephadm bootstrap hangs:...
- 12:32 PM Bug #44825: cephadm: bootstrap is not idempotent
- Making cephadm bootstrap truly idempotent will not be trivial, as there are going to be some special cases, like: wha...
- 12:09 PM Feature #45015 (Closed): cephadm: bind mount `ceph` from a container into the host system?
- "snaps" can actually bind mount the binaries from their containers into the host OS environment, rather than needing ...
- 11:43 AM Bug #45010 (Fix Under Review): cephadm: /etc/ceph/ceph.conf directory /etc/ceph does not exist
- 08:01 AM Bug #45010 (Can't reproduce): cephadm: /etc/ceph/ceph.conf directory /etc/ceph does not exist
- ...
- 11:42 AM Bug #44909 (Can't reproduce): cephadm on Debian Buster: journald logs are empty
- ...
- 09:42 AM Backport #43995 (Resolved): nautilus: ceph orchestrator rgw rm: no valid command found
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33160
m...
- 07:48 AM Bug #44972 (New): cephadm: add-repo on ubuntu broken
- 07:36 AM Bug #44972: cephadm: add-repo on ubuntu broken
- Turns out apt-key is unusable by default:...
- 06:29 AM Bug #44950: OSDSpec: Reserving storage on db_devices
- Jan Fajerski wrote:
> Which version are you running? There was a bug recently where the slots arguments were ignored...
04/08/2020
- 04:33 PM Bug #44934: cephadm RGW: scary remove-deploy loop
- The GitHub PR states "turns out users put dot into their RGW service names" - all the dots you can see in the log ste...
- 02:43 PM Bug #44972 (In Progress): cephadm: add-repo on ubuntu broken
- 01:29 PM Bug #44313 (Resolved): ceph-volume prepare is not idempotent and may get called twice
- this particular issue was resolved.
- 01:29 PM Bug #44824: cephadm: adding osd device is not idempotent
- See https://github.com/ceph/ceph/pull/33755/files#r405524847
- 01:15 PM Bug #44965 (Resolved): cephadm: git archive from ci fails
- Fixed by Kefu. Big Thanks!
- 11:45 AM Feature #44993 (New): cephadm: Resource-aware daemons placement
- Use resource limits (cpu, memory) to allow properly scheduling the deployment of the different daemons on the best suite...
- 09:34 AM Bug #44990 (Can't reproduce): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such...
- http://pulpito.ceph.com/yuriw-2020-04-07_17:39:28-rados-wip-octopus-rgw-msg-fixes-distro-basic-smithi/4931485/
<pr...
- 08:11 AM Bug #44950: OSDSpec: Reserving storage on db_devices
- Maran H wrote:
> I've read this as, 'it's not implement in batch' therefor I assumed it would be implemented in crea...
- 08:03 AM Bug #44950: OSDSpec: Reserving storage on db_devices
- Maran H wrote:
> Joshua Schmid wrote:
> > This can be achieved using the `slots` option of ceph-volume.
>
> I've...
- 01:57 AM Bug #44968: cephadm: another "RuntimeError: Set changed size during iteration"
- The issue was reported by an IRC user.
Basically he tried to select 6 OSDs for deletion from the Dashboard; requests are...
04/07/2020
- 10:46 PM Bug #44965 (In Progress): cephadm: git archive from ci fails
- 10:46 PM Bug #44965: cephadm: git archive from ci failes
- Turns out the tag is missing in ceph/ceph-ci.
- 05:49 PM Backport #43995: nautilus: ceph orchestrator rgw rm: no valid command found
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/33160
merged
- 03:50 PM Bug #44950: OSDSpec: Reserving storage on db_devices
- Joshua Schmid wrote:
> This can be achieved using the `slots` option of ceph-volume.
I've read this as, 'it's not...
- 01:07 PM Bug #44972 (Closed): cephadm: add-repo on ubuntu broken
- Right now this fails:...
- 12:46 PM Bug #44720 (Need More Info): rook: rgw: allow realm != zone
- 12:46 PM Bug #44720: rook: rgw: allow realm != zone
- Relates to https://github.com/ceph/ceph/pull/34042, but I cannot really tell how exactly.
- 12:39 PM Bug #44758 (Resolved): Drive Groups: limit:1 does not imply all:true
- 12:37 PM Documentation #44716 (Resolved): orchestrator/cephadm: document ceph orch apply -i -
- 12:35 PM Documentation #44971 (Resolved): cephadm: document the cephadm binary
- Right now (without any docs), this is totally confusing.
Topics to cover:
h2. Man page
h2. overview
...
- 11:36 AM Feature #43687: cephadm: haproxy (or lb)
- Retrieving detailed requirements.
If only monitoring we will need at least the parameters needed to launch the conta... - 11:27 AM Feature #43687: cephadm: haproxy (or lb)
- Maybe something like this???...
- 11:25 AM Feature #43687: cephadm: haproxy (or lb)
- also: haproxy prometheus exporter!
- 10:05 AM Bug #44968 (Can't reproduce): cephadm: another "RuntimeError: Set changed size during iteration"
- ...
04/06/2020
- 11:18 PM Bug #44965 (Resolved): cephadm: git archive from ci fails
- http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-06_21:21:01-rados-wip-mgfritch-testing-2020-04-06-1246-distro-ba...
- 03:06 PM Backport #44893 (In Progress): octopus: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 11:18 AM Bug #44950: OSDSpec: Reserving storage on db_devices
- This can be achieved using the `slots` option of ceph-volume.
Unfortunately the slots option for wal/db (taken fro...
- 10:37 AM Bug #44950 (Duplicate): OSDSpec: Reserving storage on db_devices
- I'm trying to setup a new Ceph cluster using cephadm.
To save costs I've gotten four OSD servers with only a hand...
- 10:38 AM Backport #44710: octopus: doc/cephadm: replace `osd create` with `apply osd`
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34355
m...
- 10:28 AM Bug #44934 (Fix Under Review): cephadm RGW: scary remove-deploy loop
04/04/2020
- 06:22 AM Bug #44823 (Won't Fix): cephadm: tries to parse arguments command passed to shell
- Sebastian Wagner wrote:
> right. should be
>
> [...]
Ah, that is right. Thanks!
04/03/2020
- 07:16 PM Bug #44894 (Pending Backport): cephadm: non-ceph units put wrong uid:gid in systemd unit file
- 03:59 PM Bug #44934: cephadm RGW: scary remove-deploy loop
- Caught the intermittent container (at the very bottom):...
- 03:19 PM Bug #44934 (Resolved): cephadm RGW: scary remove-deploy loop
- ...
- 11:40 AM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Commands used to create the object gateway:...
- 11:26 AM Bug #44926 (Resolved): dashboard: creating a new bucket causes InvalidLocationConstraint
- In Octopus I created an object gateway as mentioned in https://ceph.io/ceph-management/introducing-cephadm/ chapter "...
- 10:51 AM Bug #44692 (Resolved): doc/cephadm: replace `osd create` with `apply osd`
- 10:51 AM Backport #44710 (Resolved): octopus: doc/cephadm: replace `osd create` with `apply osd`
- 10:47 AM Feature #44925 (In Progress): Please give cephadm a --version option
- Figuring out which version of cephadm a given environment is running seems needlessly difficult:...
- 09:17 AM Feature #43690: cephadm: service resource limits
- Rook actually recommends to *not* set resource limits for Ceph daemons, as all daemons are critical for the cluster....
- 09:10 AM Feature #44919 (Resolved): cephadm MONSpec: special dir for the mon store?
- Rook supports a dataDirHostPath to set the data dir for the mons. Do we need this as well? Is it worth it?
e.g for p...
- 04:02 AM Backport #44845: octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- https://github.com/ceph/ceph/pull/34350
- 04:02 AM Backport #44845 (Need More Info): octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
04/02/2020
- 11:37 PM Bug #44894 (Fix Under Review): cephadm: non-ceph units put wrong uid:gid in systemd unit file
- 06:01 PM Bug #44894: cephadm: non-ceph units put wrong uid:gid in systemd unit file
- When a node-export deploy runs, the generated systemd unit file puts 65534:65534 for /var/run/ceph/$fsid
- 02:31 PM Bug #44598 (Resolved): cephadm: Traceback, if Python 3 is not installed on remote host
- 02:18 PM Bug #44609 (Fix Under Review): cephadm: grafana: cert problem prevents dashboard integration
- 11:04 AM Bug #44910 (Rejected): cephadm: PlacementSpec host1:192.168.0.2,host1:192.168.0.2
- Make sure...
- 10:58 AM Bug #44909 (Can't reproduce): cephadm on Debian Buster: journald logs are empty
- * Debian Buster
* docker... - 10:07 AM Documentation #44905 (Resolved): cephadm troubleshooting SSH errors
- ...
- 10:02 AM Documentation #43834 (Can't reproduce): cephadm: some command only support `hosts`. make sure use...
- close for now.
- 09:58 AM Feature #43709: mgr/rook: remove OSDs
- Design discussion:
https://github.com/rook/rook/pull/3954
- 09:39 AM Bug #44720: rook: rgw: allow realm != zone
- It seems sensible to wait for changes explained in the RGW multisite design:
https://github.com/rook/rook/pull/4520
04/01/2020
- 08:30 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- Sebastian Wagner wrote:
> prerequisite: https://github.com/ceph/ceph/pull/34091
This one has been merged.
- 08:27 PM Backport #44710 (In Progress): octopus: doc/cephadm: replace `osd create` with `apply osd`
- 06:50 PM Bug #44894 (Resolved): cephadm: non-ceph units put wrong uid:gid in systemd unit file
- ...
- 06:42 PM Bug #44832: cephadm: `ceph cephadm generate-key` fails with No such file or directory: '/tmp/...
- Here is the Debug Log:...
- 05:04 PM Backport #44893 (Resolved): octopus: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- https://github.com/ceph/ceph/pull/34423
- 03:59 PM Bug #44820 (Pending Backport): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 02:36 PM Bug #44887 (Fix Under Review): cephadm: Simplify mounting of host dirs into prom/node-exporter
- 02:10 PM Bug #44887 (Rejected): cephadm: Simplify mounting of host dirs into prom/node-exporter
- With https://github.com/ceph/ceph/pull/32340 the following host directories are mounted into the prom/node-exporter c...
- 02:11 PM Bug #44888 (Resolved): Drivegroup's :limit: isn't working correctly
- Each iteration of an osd deployment deploys OSDs up to a set :limit:
Since we're deploying every $sleep_interval sec...
- 02:10 PM Feature #44886: cephadm: allow use of authenticated registry
- and then the next request will be to support untrusted registries... and so on
- 01:49 PM Feature #44886 (Resolved): cephadm: allow use of authenticated registry
- Users may need to use an authenticated registry, e.g. in air-gapped deployments.
We could punt and require that th...
- 02:06 PM Feature #44556 (Fix Under Review): cephadm: preview drivegroups
- 10:15 AM Bug #44777 (Resolved): podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to ...
- 10:04 AM Bug #44876 (New): mgr/rook: track minor upgrades
- ...
- 09:59 AM Feature #44875 (Resolved): mgr/rook: PlacementSpec to K8s POD scheduling conversion
- https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#affinity-v1-core
orch -> k8s...
- 09:53 AM Feature #41239: mgr/rook: support creating OSDs on Persistent Volumes
- Some thoughts:
* OSDSpec specifies an existing StorageClass, num OSDs per node, ...?
* mgr/rook triggers osd crea...
- 09:50 AM Feature #44874 (Rejected): cephadm: add Filestore support
- If someone wants to work on it:
* Has to mount the filesystem. ceph-ansible uses a complex bind-mount for it.
* c...
- 09:46 AM Feature #44873 (Resolved): cephadm bootstrap: add --apply-spec <cluster.yaml>
- to have truly a single command when setting up a cluster for Day 1
Requirements:
h3. Adding hosts
* YAML spe...
- 08:28 AM Bug #44769 (In Progress): cephadm doesn't reuse osd_id of 'destroyed' osds
03/31/2020
- 07:58 PM Feature #43687: cephadm: haproxy (or lb)
- Deploy and configure haproxy with cephadm; configure a service/lb with kubernetes/rook. Can we generalize these into...
- 07:54 PM Feature #44869 (Resolved): cephadm: automatic auth key rotation
- This is about periodically deploying new keys for daemons and clients. This is a bit more involved, as we need to make...
- 04:21 PM Documentation #44867 (Rejected): cephadm: document "package" mode
- ...
- 04:07 PM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
- Let's say, someone does:...
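A hedged sketch of what supporting this could look like on the CLI, assuming an @--ssh-user@ style option and a passwordless-sudo user prepared on each host (the user name and commands are illustrative assumptions, not the feature's final design):
<pre>
# On every host: a non-root user cephadm can SSH in as, with passwordless sudo
useradd -m cephadmin
echo 'cephadmin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/cephadmin
# Bootstrap / manage the cluster through that user instead of root
cephadm bootstrap --mon-ip <ip> --ssh-user cephadmin
</pre>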
- 03:45 PM Feature #44864 (New): cephadm: garbage collect old container images
- cephadm: garbage collect old container images
- 01:39 PM Bug #44820: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- Copying to fix in c-v as well.
- 01:18 PM Bug #44820 (Fix Under Review): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 10:00 AM Backport #44845 (Resolved): octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 09:54 AM Bug #44823 (Triaged): cephadm: tries to parse arguments command passed to shell
- 09:54 AM Bug #44823: cephadm: tries to parse arguments command passed to shell
- right. should be ...
- 12:32 AM Bug #44823 (Won't Fix): cephadm: tries to parse arguments command passed to shell
- ...
- 09:49 AM Bug #44832 (Resolved): cephadm: `ceph cephadm generate-key` fails with No such file or directory:...
- ...
- 09:43 AM Bug #44810 (Won't Fix): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- According to Kai, this was a false alarm.
- 09:42 AM Bug #44830 (Duplicate): cephadm bootstrap: improve error message if `host add` fails
- ...
- 09:07 AM Documentation #44828 (Resolved): cephadm: clarify "Failed to infer CIDR network for mon ip"
- ...
- 03:41 AM Bug #44826 (Resolved): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring provided"
- ...
- 12:36 AM Bug #44825 (Rejected): cephadm: bootstrap is not idempotent
- It would be helpful if this command did nothing if the cluster is already bootstrapped. This would simplify ansible r...
- 12:35 AM Bug #44824 (Resolved): cephadm: adding osd device is not idempotent
- ...
03/30/2020
- 11:28 PM Bug #44820: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- the problem seems to be a racing invocation of inventory and prepare:...
- 08:29 PM Bug #44820 (Resolved): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- ...
- 09:01 PM Bug #44810 (Fix Under Review): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- 05:28 PM Bug #44810 (Need More Info): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- 11:09 AM Bug #44810 (Won't Fix): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- cephadm bootstrap creates @/etc/ceph/ceph.pub@ with wrong permissions
- 08:23 PM Bug #44669 (Resolved): cephadm: rm-cluster should clean up /etc/ceph
- 06:08 PM Feature #44305 (Resolved): mgr/cephadm: Add support for removing MONs
- 06:08 PM Bug #44039 (Rejected): bin/cephadm: Remove --allow-fqdn-hostname
- seems that this might be a valid config!
- 06:07 PM Feature #44287: cephadm: Graceful Shutdown of the Whole Ceph Cluster
- open Q: how do we shut down the mons after we have already shut down all the mgrs?
- 05:53 PM Bug #44758 (Fix Under Review): Drive Groups: limit:1 does not imply all:true
- 05:35 PM Feature #43708 (Resolved): mgr/rook: Blink enclosure LED
- 05:35 PM Feature #43696: cephadm: check that units start
- low, until someone complains.
- 05:33 PM Bug #44739 (Need More Info): ceph.conf parameters set via "cephadm bootstrap -c" are not persiste...
- Interesting. The code there is rather old: https://github.com/ceph/ceph/blame/master/src/cephadm/cephadm#L2115-L2124
... - 05:29 PM Documentation #44716 (Fix Under Review): orchestrator/cephadm: document ceph orch apply -i -
- 01:01 PM Feature #44556: cephadm: preview drivegroups
- Shortened this discussion with a f2t talk with Sebastian. Here are the results
Keep the old syntax but expand with...
- 10:45 AM Feature #44556: cephadm: preview drivegroups
- > I guess we want to have the following features:
>
> 1) preview a service if the spec is already applied (useful ... - 08:31 AM Feature #44556: cephadm: preview drivegroups
- I guess we want to have the following features:
1) preview a service if the spec is already applied (useful when h...
- 10:37 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- Not yet understood why https://github.com/ceph/ceph/pull/34260 fixes this issue.
- 10:21 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- ...
- 09:16 AM Bug #44777 (Fix Under Review): podman: stat /usr/bin/ceph-mon: no such file or directory, then un...
- 10:33 AM Bug #44642: cephadm: mgr dump might be too huge
- Backported to octopus by https://github.com/ceph/ceph/pull/34258
03/29/2020
- 10:38 PM Bug #44642 (Resolved): cephadm: mgr dump might be too huge
- 12:17 PM Bug #44642 (Pending Backport): cephadm: mgr dump might be too huge
03/28/2020
- 04:24 PM Feature #44599 (Resolved): cephadm: check-host: Returns only a single problem
- 09:59 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- Fascinating!
- 12:53 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- not sure why, but I'm pretty sure that https://github.com/ceph/ceph/pull/34091 is responsible for the regression.
...
03/27/2020
- 10:10 PM Bug #44792 (Resolved): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- /etc/ceph/ceph.conf is often used by tools in the ceph ecosystem. We should provide a mechanism to keep this up to dat...
- 08:58 PM Bug #44598 (Fix Under Review): cephadm: Traceback, if Python 3 is not installed on remote host
- 06:47 PM Bug #44598 (In Progress): cephadm: Traceback, if Python 3 is not installed on remote host
- 02:53 PM Feature #44556: cephadm: preview drivegroups
- When we want to have preview functionality for other components as well, we should generalize the CLI a bit more.
... - 12:37 PM Bug #44781 (Fix Under Review): cephadm: monitoring: root volume alert doesn't work in container
- 10:19 AM Bug #44781 (Resolved): cephadm: monitoring: root volume alert doesn't work in container
- This is due to the root filesystem being mapped inside the container as `/rootfs`, but the Prometheus alert checking ...
- 03:58 AM Bug #44642: cephadm: mgr dump might be too huge
- Same, @cephadm shell -- ceph mgr dump@ seems perfectly happy for me too. So it's only a problem during bootstrap some...
03/26/2020
- 10:56 PM Bug #44777 (Resolved): podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to ...
- the mon.b unit:...
- 08:43 PM Feature #43677 (Resolved): monitoring: create rpm for alerts rules also for centos
- 08:42 PM Documentation #43672 (Resolved): doc: point release upgrades
- 08:37 PM Bug #44559 (Need More Info): cephadm logs an invalid stat command
- 08:36 PM Feature #43708 (Pending Backport): mgr/rook: Blink enclosure LED
- backport https://github.com/ceph/ceph/pull/34199
- 08:35 PM Bug #44603: cephadm: `ls --refresh` shows Tracebacks in the log
- prio low, till someone complains
- 08:32 PM Bug #44699: cephadm: removing services leaves configs behind
- which config? things in the mon store?
- 08:27 PM Bug #44609 (In Progress): cephadm: grafana: cert problem prevents dashboard integration
- 08:27 PM Bug #44513 (Resolved): mgr/cephadm: `orch ps --refresh` returns no results
- 02:17 AM Bug #44513 (Pending Backport): mgr/cephadm: `orch ps --refresh` returns no results
- https://github.com/ceph/ceph/pull/34190
- 08:26 PM Bug #44608 (Resolved): cephadm: grafana: bound to 127.0.0.1
- 02:18 AM Bug #44608 (Pending Backport): cephadm: grafana: bound to 127.0.0.1
- backport https://github.com/ceph/ceph/pull/34191
- 08:26 PM Bug #43890 (Resolved): cephadm: default hardcoded to non-ceph dockerhub
- 08:25 PM Feature #44775 (Resolved): cephadm: NFS stage 2
- * Teuthology integration
* cephadm adopt
* make container upgrades work
- 08:22 PM Feature #44718 (Resolved): NFS ganesha (mgr/cephadm)
- 03:24 PM Bug #44642: cephadm: mgr dump might be too huge
- ...
- 02:47 PM Bug #44642: cephadm: mgr dump might be too huge
- ...
- 11:33 AM Bug #44642: cephadm: mgr dump might be too huge
- Just ran another loop of ten with https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm and docker.io/ceph/cep...
- 11:24 AM Bug #44642: cephadm: mgr dump might be too huge
- Ten runs using cephadm from https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm (so the container image is d...
- 11:01 AM Bug #44642: cephadm: mgr dump might be too huge
- To try to obtain some further clarity, I ran this:
@for n in $(seq 1 10); do sleep 5; systemctl stop ceph.target ;...
- 08:15 AM Bug #44642: cephadm: mgr dump might be too huge
- OK, I've now seen it work at least once without that patch applied, and I've also seen it fail at least once without ...
- 03:35 AM Bug #44642: cephadm: mgr dump might be too huge
- Sebastian Wagner wrote:
> interesting. does it work when using https://github.com/ceph/ceph/pull/34031 ?
Yeah, it...
- 01:32 PM Bug #44602 (Fix Under Review): cephadm: `orch ls` shows daemons as online, despite host is down
- 12:57 PM Feature #44556: cephadm: preview drivegroups
- This is the output of:...
- 10:16 AM Feature #44556 (In Progress): cephadm: preview drivegroups
- 10:53 AM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- https://github.com/ceph/ceph/compare/master...sebastian-philipp:cephadm-add-ipv6-routes?expand=1
- 10:15 AM Bug #44769 (Resolved): cephadm doesn't reuse osd_id of 'destroyed' osds
- The replacement operation is supposed to work like this:...
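A plausible sketch of a destroy-and-replace cycle, assuming the @--replace@ flag of @orch osd rm@; this is illustrative and not necessarily the exact sequence elided above:
<pre>
# Mark the OSD for replacement: it is drained and then flagged "destroyed"
# instead of being purged, so its id stays reserved in the CRUSH map
ceph orch osd rm 0 --replace
ceph osd tree | grep destroyed
# When a new disk is deployed into that slot, ceph-volume should reuse osd.0
ceph orch apply osd -i drivegroup.yaml
</pre>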
- 09:27 AM Documentation #44768 (Rejected): cephadm: document allow_ptrace true
- ...
- 09:18 AM Bug #44729: cephadm enter using docker is broken
- hm. Jan, close as "Can't reproduce"?
03/25/2020
- 11:57 PM Backport #43994 (Rejected): luminous: ceph orchestrator rgw rm: no valid command found
- 11:57 PM Backport #43993 (Rejected): mimic: ceph orchestrator rgw rm: no valid command found
- 09:42 PM Bug #43816 (Pending Backport): cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 03:16 PM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- > This command produced the following error:
> ... - 05:34 PM Bug #44758 (Resolved): Drive Groups: limit:1 does not imply all:true
- This drive group:...
- 05:29 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- prerequisite: https://github.com/ceph/ceph/pull/34091
- 05:28 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- I'm already getting bug reports, like...
- 04:23 PM Bug #44642: cephadm: mgr dump might be too huge
- interesting. does it work when using https://github.com/ceph/ceph/pull/34031 ?
- 08:26 AM Bug #44642: cephadm: mgr dump might be too huge
- I can't help but think this is somehow related to the hangs we're getting after ~100k output when podman is run via s...
- 04:12 PM Bug #44513: mgr/cephadm: `orch ps --refresh` returns no results
- Confirmed PR 34182 fixes this.
Saw a similar thing with 3 hosts and only the last host was shown during refresh:
...
- 03:41 PM Bug #44513: mgr/cephadm: `orch ps --refresh` returns no results
- (pretty sure i'm fixing the same bug... it would happen if you had 2 hosts in your test cluster above and the last on...
- 03:40 PM Bug #44513 (Fix Under Review): mgr/cephadm: `orch ps --refresh` returns no results
- 03:44 PM Bug #44729 (Need More Info): cephadm enter using docker is broken
- It works for me......
- 03:35 PM Bug #44608 (Fix Under Review): cephadm: grafana: bound to 127.0.0.1
- 03:21 PM Bug #44756 (Resolved): drivegroups: replacement op will ignore existing wal/dbs
- Since the db/wal is considered "locked/non-available" by ceph-volume after the first deployment, the DriveGroup algor...
- 12:11 PM Bug #44747: orch: `ceph orch ls --service_type` is broken
- This looks actually random. In the same session:...
- 12:07 PM Bug #44747 (Can't reproduce): orch: `ceph orch ls --service_type` is broken
- ...
- 11:52 AM Bug #44746 (Closed): cephadm: vstart.sh --cephadm: don't deploy crash by default
- as it's not part of the normal cluster, it's getting left behind and stays running. ...
- 09:34 AM Bug #44739 (Can't reproduce): ceph.conf parameters set via "cephadm bootstrap -c" are not persist...
- Now, when I set e.g. "osd crush chooseleaf type = 0" via "cephadm bootstrap -c", the initial CRUSH map has failure do...
- 09:32 AM Bug #44738: drivegroups/cephadm: db_devices don't get applied correctly when using "paths"
- -It seems that db_devices are ignored whenever "paths" is used in the "data_devices" section.-
Ignore that. - 09:15 AM Bug #44738 (Won't Fix): drivegroups/cephadm: db_devices don't get applied correctly when using "p...
- ...
03/24/2020
- 10:05 PM Bug #44642: cephadm: mgr dump might be too huge
- > Do you know why the line "j = json.loads(out)" is choking on the integer value sent by "ceph mgr dump"?
Now I se...
- 07:38 PM Bug #44669 (Fix Under Review): cephadm: rm-cluster should clean up /etc/ceph
- 02:35 PM Bug #44669 (In Progress): cephadm: rm-cluster should clean up /etc/ceph
- 02:57 PM Bug #44729: cephadm enter using docker is broken
- ls works though...
- 02:56 PM Bug #44729 (Can't reproduce): cephadm enter using docker is broken
- ...