Activity
From 05/21/2020 to 06/19/2020
06/19/2020
- 10:02 PM Bug #46098: Exception adding host using cephadm
- lol, just discovered this myself. Confirm that the suggested fix is appropriate.
- 08:17 AM Bug #46098 (Triaged): Exception adding host using cephadm
- 04:43 AM Bug #46098: Exception adding host using cephadm
- Typo in 'Environment' section. 15.2.3 not 15.2.2
- 03:21 AM Bug #46098 (Resolved): Exception adding host using cephadm
- After bootstrapping 1st host using cephadm, attempting to add another host fails with an exception (variable referenc...
- 01:14 PM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- I'm starting to suspect that this comes from a race between host refresh and the scheduler, which starts to create new daemo...
- 10:31 AM Bug #45973 (Fix Under Review): Adopted MDS daemons are removed by the orchestrator because they'r...
- 08:41 AM Bug #46103 (Duplicate): Restart service command restarts all the services and accepts service typ...
- ...
06/18/2020
- 10:03 PM Bug #46097 (Won't Fix): package mode has a hardcoded ssh user
- package mode user is hardcoded to 'cephadm'
- 08:25 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
- Michael Fritch wrote:
> I think there is some confusion on the `orch` cli commands.
>
> `orch ps` will list the c...
- 07:29 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
- I think there is some confusion on the `orch` cli commands.
`orch ps` will list the cephadm daemons, whereas `orch...
- 05:14 PM Documentation #46082 (Can't reproduce): cephadm: deleting (mds) service doesn't work?
- ...
- 02:14 PM Bug #46036 (Fix Under Review): cephadm: killmode=none: systemd units failed, but containers still...
- 01:15 PM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
- Machine is amd64 in virtualbox on Windows
- 12:53 PM Documentation #46073 (Can't reproduce): cephadm install fails: apt:stderr E: Unable to locate pac...
- When following the installation guide on https://ceph.readthedocs.io/en/latest/cephadm/install/ I ran cephadm install...
- 12:36 PM Feature #44875 (Fix Under Review): mgr/rook: PlacementSpec to K8s POD scheduling conversion
- 11:41 AM Bug #45155 (Pending Backport): mgr/dashboard: Error listing orchestrator NFS daemons
- 08:59 AM Documentation #46052 (Fix Under Review): Module 'cephadm' has failed: DaemonDescription: Cannot c...
- 07:55 AM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
- Sebastian Wagner wrote:
> the correct call is
>
> [...]
>
> which documentation / example did you use for this...
- 06:42 AM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- Seems to work once I've applied https://github.com/ceph/ceph/pull/35633 and added the --ipv6 to bootstrap.
- 01:09 AM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
- Aha! Solved it. We bind mon to ipv6 (::1); in reality its messenger is bound to ::1, however the mgr is still bindin...
06/17/2020
- 08:19 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
- I wound up getting around this by using an Ansible role in which this worked successfully. You can feel free to close...
- 01:46 PM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
- the correct call is...
- 01:40 PM Documentation #46052 (Resolved): Module 'cephadm' has failed: DaemonDescription: Cannot calculate...
- ceph version 15.2.3
using Cephadm...
- 11:45 AM Bug #45097 (Resolved): cephadm: UX: Traceback, if `orch host add mon1` fails.
- 11:10 AM Bug #46045 (Resolved): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- http://qa-proxy.ceph.com/teuthology/kchai-2020-06-17_08:41:50-rados-wip-kefu-testing-2020-06-17-1349-distro-basic-smi...
- 09:46 AM Feature #46044 (Resolved): cephadm: Distribute admin keyring.
- This is similar to the ceph.conf, but more complicated.
Maybe use a placement spec? ...
- 09:40 AM Bug #46037: ceph orch command hangs forever when trying to add osd
- `daemon add` violates https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers ....
- 08:42 AM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user
06/16/2020
- 09:58 PM Feature #44866 (In Progress): cephadm root mode: support non-root users + sudo
- 09:58 PM Feature #45653 (In Progress): cephadm: Improve safety by using a specific user
- 03:33 PM Bug #46038 (Closed): cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcf...
- command used:...
- 03:10 PM Bug #46037 (Can't reproduce): ceph orch command hangs forever when trying to add osd
- after bootstrapping the cephadm cluster when we login to ceph shell, ...
- 02:26 PM Bug #46036: cephadm: killmode=none: systemd units failed, but containers still running
- https://github.com/ceph/ceph/pull/35524 is part of the solution. the other part is adding a @set -e@
- 02:23 PM Bug #46036 (Resolved): cephadm: killmode=none: systemd units failed, but containers still running
- ...
- 11:08 AM Feature #45859 (Fix Under Review): cephadm: use fixed versions
- 10:59 AM Feature #45859 (In Progress): cephadm: use fixed versions
- 10:49 AM Bug #45594: cephadm: weight of a replaced OSD is 0
- The initial weight is never restored after `draining` the OSDs.
We can save the initial weight/reweight and reset ...
- 09:53 AM Bug #46031 (Resolved): Exception: Failed to validate Drive Group: block_wal_size must be of type int
- ...
- 04:42 AM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
- I'll have a poke around and see if I can get this unblocked so we can continue your IPv6 adventure :)
06/15/2020
- 09:24 PM Bug #45999 (Fix Under Review): cephadm shell: picking up legacy_dir
- 09:16 PM Bug #45999 (In Progress): cephadm shell: picking up legacy_dir
- 03:27 PM Bug #45999 (Resolved): cephadm shell: picking up legacy_dir
- ...
- 02:02 PM Feature #45378 (In Progress): cephadm: manage /etc/ceph/ceph.conf
- 11:13 AM Feature #45996 (New): adopted prometheus instance uses port 9095, regardless of original port number
- When adopting prometheus (@cephadm adopt --style legacy --name prometheus.HOSTNAME@), the new prometheus daemon start...
- 11:01 AM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- We have the same problem with adopted prometheus instances (I adopted one, it was working fine for a few minutes, the...
- 08:54 AM Documentation #45977: cephadm: Improve Service removal docs
- Yes, it worked when I entered the command above for every service.
So I deleted every nfs service and daemon and sta...
06/12/2020
- 12:30 PM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- It's not an accident that this is working. OTOH, this behavior needs improvement. Let me think about the chicke...
- 02:32 AM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- Sebastian Wagner wrote:
> Hm. Isn't this a big flaw in adopt, not just for MDS?
Not in practice so far. The docs...
- 06:51 AM Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGr...
- DriveGroups allow specifying filestore, they are not that tightly coupled to cephadm.
I'd argue cephadm should det...
- 04:36 AM Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGr...
- https://docs.ceph.com/docs/master/cephadm/adoption/#limitations says "Cephadm only works with BlueStore OSDs. If ther...
- 06:18 AM Bug #45155 (Fix Under Review): mgr/dashboard: Error listing orchestrator NFS daemons
- 06:14 AM Feature #45982 (Resolved): mgr/cephadm: remove or update Dashboard settings after daemons are des...
- When these services are deployed, cephadm calls Dashboard's command to set settings to make features available in the...
06/11/2020
- 06:19 PM Bug #45097 (In Progress): cephadm: UX: Traceback, if `orch host add mon1` fails.
- 04:57 PM Bug #45980 (Resolved): cephadm: implement missing "FileStore not supported" error message and upd...
- This one is easy to reproduce.
Ask cephadm to create FileStore OSDs:...
- 03:38 PM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- Hm. Isn't this a big flaw in adopt, not just for MDS?
- We might need to apply something like this before adopting...
- 10:13 AM Bug #45973 (Rejected): Adopted MDS daemons are removed by the orchestrator because they're orphans
- The "docs":https://docs.ceph.com/docs/master/cephadm/adoption/ say that when converting to cephadm, one needs to rede...
- 03:22 PM Bug #45976: cephadm: prevent rm-daemon from removing legacy daemons
- we should probably prevent *legacy* daemons from being removed altogether.
workarounds:
* Either adopt to a ce...
- 01:10 PM Bug #45976 (Duplicate): cephadm: prevent rm-daemon from removing legacy daemons
- cephadm displays daemon when none could be found at /var/lib/ceph/unknown/osd.0...
- 03:19 PM Documentation #45977: cephadm: Improve Service removal docs
- ...
- 01:20 PM Documentation #45977 (Resolved): cephadm: Improve Service removal docs
- First of all, yes I know, nfs under ceph orch is still under development but I couldn't find any information about th...
- 12:03 PM Feature #44055 (New): cephadm: make 'ls' faster
- I think a requirement for future refactoring here is:
* having a very good pytest coverage with example outpu...
- 10:38 AM Cleanup #45321 (Fix Under Review): Service spec: unify `spec:` vs omitting `spec:`
06/10/2020
- 02:40 PM Bug #44746: cephadm: vstart.sh --cephadm: don't deploy crash by default
- might be fixed by https://github.com/ceph/ceph/pull/35472
- 12:29 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- https://github.com/ceph/ceph/pull/35524
06/09/2020
- 10:05 PM Bug #45961 (Fix Under Review): cephadm: high load and slow disk make "cephadm bootstrap" fail
- 08:41 PM Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
- apparently `ceph -s` can take longer than 30sec to return as seen by the partial output in between retries.
- 08:10 PM Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
- the code: https://github.com/ceph/ceph/blob/5a7d75290f4480764b24c241ba11f93fe8917c4b/src/cephadm/cephadm#L2497-L2517
- 08:09 PM Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
- You can see in the log excerpt that the "ceph -s" command eventually produces output, but by the time the earlier att...
- 07:50 PM Bug #45961 (Resolved): cephadm: high load and slow disk make "cephadm bootstrap" fail
- When running "cephadm bootstrap" in a libvirt-based virtual environment (four VMs) running on a machine that has a si...
- 08:30 PM Bug #45962 (Closed): "ceph orch apply nfs" seems to deploy an nfs daemon, but that doesn't show u...
- ...
- 08:17 PM Bug #45962 (Closed): "ceph orch apply nfs" seems to deploy an nfs daemon, but that doesn't show u...
- After running the following commands:...
- 05:28 PM Feature #44055 (In Progress): cephadm: make 'ls' faster
- 09:35 AM Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
- The exception in the description was already fixed.
The rest of the work is to set pool and namespace within cephadm...
- 09:33 AM Bug #45155 (In Progress): mgr/dashboard: Error listing orchestrator NFS daemons
- 08:30 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- fascinating:...
- 05:34 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/kchai-2020-06-08_10:56:36-rados-wip-kefu-testing-2020-06-08-1713-distro-basic-smithi/5128793/
06/08/2020
- 04:21 PM Support #45940 (Closed): Orchestrator to be able to deploy multiple OSDs per single drive
- One might want to have multiple OSDs for a single fast (e.g. NVMe) drive. E.g. single BlueStore instance is known for...
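- In cephadm's drive-group specs this can be expressed with the `osds_per_device` field; a minimal sketch (the service id, placement, and device filter are illustrative assumptions, and depending on the Ceph version the OSD fields may need to sit under a `spec:` key):

```yaml
service_type: osd
service_id: split_nvme_osds      # assumed name, for illustration only
placement:
  host_pattern: '*'              # assumed placement
data_devices:
  rotational: 0                  # match only non-rotational (NVMe/SSD) devices
osds_per_device: 2               # carve two OSDs out of each matching drive
```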
- 04:15 PM Feature #45939 (New): Unable to use device that already has existing LVs.
- Currently the Orchestrator is unable to use a device on which LVM has already been initialized. As a result:
1) User to...
- 03:27 PM Feature #45938 (Closed): "ceph orch daemon add osd" lacks an ability to specify DB/WAL devices
- Looks like this command's functionality is pretty limited - it's able to add new OSDs backed by single main device on...
- 03:11 PM Bug #45343 (In Progress): Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS hands...
- Sebastian Wagner wrote:
> quay.io closed the ticket with "it's your job to do retries"
lol. Well, quay.ceph.io i...
- 01:11 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- quay.io closed the ticket with "it's your job to do retries"
- 02:35 PM Bug #45909 (Resolved): already existing cluster deployed: cephadm bootstrap failure
- 02:35 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Deepika Upadhyay wrote:
> Sebastian Wagner wrote:
> > Deepika Upadhyay wrote:
> > > Sebastian Wagner wrote:
> > >... - 02:33 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Sebastian Wagner wrote:
> Deepika Upadhyay wrote:
> > Sebastian Wagner wrote:
> > > hm. you already have plenty of... - 12:12 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Deepika Upadhyay wrote:
> Sebastian Wagner wrote:
> > hm. you already have plenty of clusters already running on yo... - 08:51 AM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Sebastian Wagner wrote:
> hm. you already have plenty of clusters already running on your machine. is this on purpos... - 02:34 PM Documentation #45937 (New): cephadm: setting the various certificates
- *Grafana*
how to set the grafana certificate and key:...
- 01:46 PM Documentation #45936 (New): cephadm: document restart the whole cluster
- ...
- 01:35 PM Feature #44414: bubble up errors during 'apply' phase to 'cluster warnings'
- https://github.com/ceph/ceph/pull/35456 will go into this direction.
- 01:34 PM Feature #45905 (Duplicate): cephadm: errors in serve() should create a HEALTH warning
- 12:35 PM Feature #45905: cephadm: errors in serve() should create a HEALTH warning
- https://github.com/ceph/ceph/pull/35456 will go into this direction.
- 01:32 PM Bug #44603 (Rejected): cephadm: `ls --refresh` shows Tracebacks in the log
- I don't plan to fix this. Instead, I'm about to remove support for --refresh
- 01:26 PM Bug #45172 (Pending Backport): bin/cephadm: logs: Traceback: not enough values to unpack (expecte...
- 01:24 PM Cleanup #45321: Service spec: unify `spec:` vs omitting `spec:`
- decision was to use @spec@
- 01:19 PM Feature #43911 (Resolved): test cephadm rgw deployment
- 01:17 PM Feature #45654: orchestrator: support OSDs backed by LVM LV/VG
- Is this something we need to improve in ceph-volume?
- 01:13 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
- might want to run ...
- 01:13 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
- execnet is again very helpful with their exceptions this time.
- 12:41 PM Documentation #45862: orch mds rm is documented but does not exist
- If you want to remove the service, you can use:...
- 12:38 PM Bug #45867: orchestrator: Errors while deployment are hidden behind the log wall
- relates to https://github.com/ceph/ceph/pull/35456
- 12:32 PM Bug #45174 (Resolved): cephadm: missing parameters on 'orch daemon add iscsi'
- 12:30 PM Feature #43836 (Resolved): cephadm adopt: also adopt Prometheus and Grafana daemons from DeepSea
- 12:27 PM Bug #45032 (Resolved): cephadm: Not recovering from `OSError: cannot send (already closed?)`
- 12:27 PM Bug #45627 (Resolved): cephadm: frequently getting `1 hosts fail cephadm check`
- 12:26 PM Documentation #45411 (Resolved): cephadm: add section about container images
- 12:26 PM Bug #45625 (Resolved): cephadm: when configuring monitoring with ceph orch, ceph dashboard is onl...
- 12:25 PM Feature #44886 (New): cephadm: allow use of authenticated registry
- 12:24 PM Bug #45696 (Resolved): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:24 PM Feature #45463 (Resolved): cephadm: allow custom images for grafana, prometheus, alertmanager and...
- 12:24 PM Bug #45632 (Resolved): nfs: auth credentials for recovery database include mds
- 12:24 PM Bug #45629 (Resolved): cephadm: Allow users to provide ssh keys during bootstrap
- 12:23 PM Bug #45617 (Resolved): mgr/orch: mds with explicit naming
- 09:07 AM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- ...
06/06/2020
- 09:46 PM Tasks #45914 (Won't Fix): cephadm: make src/cephadm/vstart-smoke.sh a proper teuthology test
- src/cephadm/vstart-smoke.sh is a simple bash script. This is a perfect template to extend qa/suites/rados/cephadm/wor...
- 09:29 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- hm. you already have plenty of clusters running on your machine. is this on purpose? If yes, I think ceph-a23...
- 05:18 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Sebastian Wagner wrote:
> looks as if cephadm wasn't able to start the mons. can you attach
>
> [...]
sure!
...
06/05/2020
- 04:08 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- looks as if cephadm wasn't able to start the mons. can you attach...
- 02:44 PM Bug #45909 (Duplicate): already existing cluster deployed: cephadm bootstrap failure
- ...
- 01:31 PM Bug #45907 (Resolved): cephadm: daemon rm for managed services is completely broken
- ...
- 01:14 PM Feature #45463 (Closed): cephadm: allow custom images for grafana, prometheus, alertmanager and n...
- Backport PR: https://github.com/ceph/ceph/pull/35347
- 12:25 PM Feature #45378: cephadm: manage /etc/ceph/ceph.conf
- downstream https://github.com/fmount/tripleo-ceph/issues/21
- 09:50 AM Feature #45905 (Duplicate): cephadm: errors in serve() should create a HEALTH warning
- otherwise users need to search the mgr log for hints manually.
06/04/2020
- 01:18 PM Documentation #45896 (New): cephadm: Need a manual howto: "upgrade the cluster manually"
- symptom:...
- 10:10 AM Feature #45876 (New): cephadm: handle port conflicts gracefully
- ...
- 09:30 AM Bug #45872: ceph orch device ls exposes the `device_id` under the DEVICES column which isn't too ...
- Sebastian Wagner wrote:
> what about pointing users to --format yaml? that would be an easy fix.
Yep, that's proba...
- 08:03 AM Bug #45872: ceph orch device ls exposes the `device_id` under the DEVICES column which isn't too ...
- what about pointing users to --format yaml? that would be an easy fix.
- 07:43 AM Bug #45872 (Resolved): ceph orch device ls exposes the `device_id` under the DEVICES column which...
- Instead of just listing the device_id, we should consider adding columns for disk properties that can be filtered for...
- 08:11 AM Documentation #45820 (Pending Backport): create OSDs doc refer to --use-all-devices
- 08:05 AM Documentation #45865 (Fix Under Review): cephadm: The service spec documentation is lacking impor...
- 07:59 AM Bug #45867: orchestrator: Errors while deployment are hidden behind the log wall
- relates to https://github.com/ceph/ceph/pull/35375
- 07:13 AM Bug #45604: mgr/cephadm: Failed to create an OSD
- I haven't seen this issue in a while now. Would be interesting if this still exists in the latest master.
- 07:10 AM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- There is an `unmanaged` flag that can be set for any ServiceSpec...
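- For reference, `unmanaged` is a regular top-level ServiceSpec field; a minimal sketch of an OSD spec with the scheduler disabled (the service id and device filter are illustrative assumptions):

```yaml
service_type: osd
service_id: all_available_devices   # assumed id, for illustration
unmanaged: true                     # cephadm stops (re)creating daemons for this spec
placement:
  host_pattern: '*'
data_devices:
  all: true
```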
- 02:14 AM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- As an additional comment on this...
I also think similar logic should apply to any drivespec that is used to apply...
06/03/2020
- 09:38 PM Bug #45808 (Fix Under Review): cephadm/test_adoption.sh: Error parsing image configuration: Inval...
- 05:47 PM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- https://github.com/ceph/teuthology/pull/1501
- 05:48 PM Bug #45807: cephadm/test_cephadm.sh: unable to pull image: Error parsing image configuration: too...
- https://github.com/ceph/teuthology/pull/1501
- 05:48 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- https://github.com/ceph/teuthology/pull/1501
- 04:32 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- We ran into that too; IMO quite non-intuitive behavior. One removes an OSD and it reappears shortly aft...
- 04:19 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- That is exactly the behavior we see.
I do not think that is intuitive or should be the expected behavior.
Maybe...
- 04:09 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- -Hmm there is something racey going on. I too have a cluster deployed with --all-available-devices. I remove an osd ...
- 02:43 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- I'd say that this is the intended behavior. There is a tracking issue for adding a `blacklisting` command that allows...
- 04:08 PM Bug #45867 (Resolved): orchestrator: Errors while deployment are hidden behind the log wall
- As an end user i expect that when running a 'ceph orch apply xxxx' command i will see the errors on the CLI when some...
- 03:41 PM Documentation #45865 (Resolved): cephadm: The service spec documentation is lacking important inf...
- The service spec documentation does not explain when to use the 'unmanaged' field or what happens when this is set to...
- 02:55 PM Feature #45864 (Resolved): cephadm: include monitoring components in usual upgrade process
- Monitoring components are tied to the Ceph version released.
For instance, on Ceph Nautilus, Grafana 5 is supporte...
- 02:46 PM Documentation #45820 (In Progress): create OSDs doc refer to --use-all-devices
- 02:41 PM Documentation #45862 (Resolved): orch mds rm is documented but does not exist
- ...
- 02:25 PM Bug #45861 (Resolved): data_devices: limit 3 deployed 6 osds per node
- We have 5 OSD nodes, all look similar (except node 4 has one fewer ssd):...
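- For context, a drive group can cap how many matching devices are consumed per host via the `limit` filter; a rough sketch (hosts and filters are assumptions, not the reporter's actual spec):

```yaml
service_type: osd
service_id: limit_example        # assumed id
placement:
  host_pattern: 'node*'          # assumed host pattern
data_devices:
  rotational: 1                  # spinning disks only (assumption)
  limit: 3                       # consume at most 3 matching data devices per host
```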
- 02:20 PM Documentation #45860 (Rejected): cephadm: document upgrades of monitoring components
- Since the container images used for monitoring components can now be specified manually, it is possible to upgrade th...
- 02:19 PM Feature #45859 (Resolved): cephadm: use fixed versions
- Use fixed versions for container image in cephadm.
Missing: Grafana.
It also needs to be checked if the same ne...
- 01:51 PM Documentation #45858 (Resolved): `ceph orch status` doesn't show in progress actions
- https://docs.ceph.com/docs/master/mgr/orchestrator/#status documents that @ceph orch status@ show in-progress operati...
- 09:20 AM Bug #45737: Module 'cephadm' has failed: cannot send (already closed?)
- Hi,
So besides the fact that this is a duplicate of an issue that is waiting to have a fix reviewed, what should I do ...
- 08:59 AM Feature #45463 (Pending Backport): cephadm: allow custom images for grafana, prometheus, alertman...
- 08:45 AM Documentation #45833 (Resolved): cephadm: properly document labels
- # How to add / remove labels to hosts
# How to show labels
# how to use labels for placing daemons
Those bits ar...
- 08:40 AM Bug #45832: cephadm: "ceph orch apply mon" moves daemons
- you used ceph-salt to deploy the initial mon?
- 08:27 AM Bug #45832 (Resolved): cephadm: "ceph orch apply mon" moves daemons
- Somewhat related to the issue I copied this from.
I started with a bootstrap cluster (1 mon, 1 mgr), wanted to add...
- 07:02 AM Bug #45407: cephadm: Speed up OSD deployment preview
- PR https://github.com/ceph/ceph/pull/34665 won't benefit from the preview cache implemented in this issue, beca...
06/02/2020
- 08:29 PM Bug #45819: cephadm: Possible error in deploying-nfs-ganesha docs
- This is likely due to NFSv3 config which requires a running rpcbind service....
- 03:21 PM Bug #45819 (Can't reproduce): cephadm: Possible error in deploying-nfs-ganesha docs
- Simon Sutter sent an email to ceph-users that included this:
https://docs.ceph.com/docs/master/cephadm/install/#de...
- 03:37 PM Documentation #45820 (Resolved): create OSDs doc refer to --use-all-devices
- https://docs.ceph.com/docs/octopus/mgr/orchestrator/#create-osds (also master) refer to...
- 02:32 PM Feature #44873 (Resolved): cephadm bootstrap: add --apply-spec <cluster.yaml>
- 01:56 PM Bug #45462 (Resolved): 'https://download.ceph.com/debian-octopus focal Release' does not have a R...
- 01:54 PM Bug #45407 (Resolved): cephadm: Speed up OSD deployment preview
- 01:54 PM Bug #45245 (Resolved): cephadm: print iscsi container's log to stdout/stderr
- 01:13 PM Bug #45249 (Resolved): cephadm: fail to apply a iSCSI ServiceSpec
- 01:11 PM Bug #45293 (Resolved): cephadm: service_id can contain a '.' char (mds, nfs, iscsi)
- 01:11 PM Bug #45294 (Resolved): cephdam: rgw realm/zone could contain 'hostname'
- 01:10 PM Bug #45284 (Resolved): cephadm: Access host files on "cephadm shell"
- 01:09 PM Feature #44625 (Resolved): cephadm: test dmcrypt
- 01:09 PM Bug #45587 (Resolved): mgr/cephadm: Failed to create encrypted OSD
- 01:09 PM Bug #45700 (Resolved): cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Oc...
- 01:09 PM Documentation #44284 (Resolved): cephadm: provide a way to modify the initial crushmap
- 01:08 PM Bug #45427 (Resolved): cephadm: auth get failed: invalid entity_auth mon
- 01:08 PM Bug #45129 (Resolved): simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
- 01:08 PM Bug #45393 (Resolved): Containerized osd config must be updated when adding/removing mons
- 01:07 PM Bug #45458 (Resolved): non-ascii chars in /etc/prometheus/ceph/ceph_default_alerts.yml
- 01:06 PM Bug #45417 (Resolved): cephadm: nfs grace remove killed before completion
- 01:06 PM Bug #45394 (Resolved): cephadm: fail to create/preview OSDs via drive group
- 01:03 PM Bug #45252 (Resolved): cephadm: fail to insert modules when creating iSCSI targets
- 01:03 PM Feature #45163 (Resolved): cephadm: iscsi: read and write config-key for the dashboard
- 12:59 PM Bug #45632 (Pending Backport): nfs: auth credentials for recovery database include mds
- 12:53 PM Bug #45632 (Resolved): nfs: auth credentials for recovery database include mds
- 12:59 PM Bug #45696 (Pending Backport): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:53 PM Bug #45696 (Resolved): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:54 PM Bug #45629 (Pending Backport): cephadm: Allow users to provide ssh keys during bootstrap
- 12:53 PM Bug #45629 (Resolved): cephadm: Allow users to provide ssh keys during bootstrap
- 12:52 PM Bug #45560 (Resolved): cephadm: fail to create OSDs
- 12:51 PM Documentation #44828 (Resolved): cephadm: clarify "Failed to infer CIDR network for mon ip"
- 11:11 AM Tasks #45814 (Resolved): tasks/cephadm.py: Add iSCSI smoke test
- We need a block similar to
https://github.com/ceph/ceph/blob/cca6533da2dbb756769bf3640b19705a1d0ea1fa/qa/tasks/cep...
- 10:50 AM Bug #45791: cephadm: Upgrade is failing octopus on centos 8 %d format a number is required: not ...
- If more log is needed I can add, but I believe below is the relevant snip you need that pertains to the upgrade....
- 08:33 AM Bug #45791 (Need More Info): cephadm: Upgrade is failing octopus on centos 8 %d format a number...
- Can you please attach the full MGR log file?
- 08:24 AM Bug #45807 (Duplicate): cephadm/test_cephadm.sh: unable to pull image: Error parsing image config...
- Different traceback, but same origin. Close as duplicate. Thank you for reporting this!
- 04:42 AM Bug #45807 (Duplicate): cephadm/test_cephadm.sh: unable to pull image: Error parsing image config...
- /a/yuriw-2020-05-30_02:18:17-rados-wip-yuri-master_5.29.20-distro-basic-smithi/5104549...
- 08:23 AM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- right. test_adoption doesn't use tasks/cephadm.py
- 08:21 AM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- should work! I'm seeing the registry getting accessed:...
- 05:52 AM Bug #45808 (Resolved): cephadm/test_adoption.sh: Error parsing image configuration: Invalid statu...
- /a/yuriw-2020-05-30_02:18:17-rados-wip-yuri-master_5.29.20-distro-basic-smithi/5104472...
- 04:43 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- Looks like this change needs to be applied more widely. See https://tracker.ceph.com/issues/45807
- 03:20 AM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- When using command `ceph orch apply osd --all-available-devices` to create OSDs, an OSDSpec is created to apply all u...
06/01/2020
- 09:45 PM Bug #45399: NFS Ganesha : Error searching service specs for all nodes after nfs orch apply nfs......
- This also affects other services such as MDS etc, by causing the orchestrator to deploy/remove them in a loop:
<pr...
- 09:44 PM Bug #45399 (Fix Under Review): NFS Ganesha : Error searching service specs for all nodes after nf...
- A short hostname should not contain a dot char ('.'), but it looks like the validation is missing during `host add` ...
... - 03:53 PM Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
- Zac Dover wrote:
> https://pad.ceph.com/p/cidr_error_cephadm_docs
>
> The list of commands in this Etherpad repre...
05/30/2020
- 06:54 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- One thing I didn't mention is that cephadm attempting to apply OSDs to available devices SURVIVES disabling and ena...
- 06:35 PM Bug #45792 (Resolved): cephadm: zapped OSD gets re-added to the cluster.
- Using version 15.2.1 with octopus cluster running centos 8.
When the cluster was initially deployed, OSDs were cre... - 06:28 PM Bug #45791 (Can't reproduce): cephadm: Upgrade is failing octopus on centos 8 %d format a numbe ...
- New dev cluster running centos 8 with octopus release deployed using cephadm. Deployed on version 15.2.1.
Attempt...
05/29/2020
- 06:55 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- ...
- 05:40 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Yeah, that's what I meant when I said I can't figure out the appropriate ...
- 09:49 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- ...
- 01:48 PM Tasks #45143 (Closed): cephadm: scheduler improvements
- split into different issues
- 01:47 PM Feature #45770 (Rejected): cephadm: allow count=0 to have services without daemons
- 01:47 PM Feature #45769 (Resolved): cephadm: Don't deploy on offline hosts
- 01:46 PM Feature #45768 (Rejected): cephadm: remove daemons should check HEALTH
- 01:45 PM Documentation #45767 (Resolved): documentation: disable the scheduler: unmanaged=True + ceph orch...
- 01:45 PM Feature #45766 (New): cephadm: Removal: make sure, enough daemons joined the maps
- 01:43 PM Bug #45726 (Fix Under Review): Module 'cephadm' has failed: auth get failed: failed to find clien...
- 01:42 PM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user
- 01:20 PM Bug #45560 (Pending Backport): cephadm: fail to create OSDs
- 01:20 PM Bug #45632 (Pending Backport): nfs: auth credentials for recovery database include mds
- 09:09 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- maybe we can actually fix this by moving to our internal registry
- 04:29 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-05-28_02:23:45-rados-wip-yuri-master_5.27.20-distro-basic-smithi/5098059
/a/yuriw-2020-05-28_02:23:45-...
- 12:21 AM Bug #45627 (Fix Under Review): cephadm: frequently getting `1 hosts fail cephadm check`
- 12:16 AM Bug #45700: cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Octopus
- I expect that the backport of https://github.com/ceph/ceph/pull/34745 will fix this.
- 12:13 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- Right, we're now using containers seriously. Thanks, David, for setting up the registries.
05/28/2020
- 09:45 PM Bug #45631 (In Progress): Error parsing image configuration: Invalid status code returned when fe...
- 09:45 PM Bug #45631 (New): Error parsing image configuration: Invalid status code returned when fetching b...
- /a/yuriw-2020-05-28_02:23:45-rados-wip-yuri-master_5.27.20-distro-basic-smithi/5098001...
- 08:57 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Okay, I set up a docker mirror and a quay registry today.
h3. docker-mirror.front.sepia.ceph.com...
- 03:44 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Not getting any better:...
- 08:21 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- David Galloway wrote:
> Sebastian Wagner wrote:
> > David, do you have an idea how to make quay.io reliable?
>
>...
- 06:26 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- ...
- 03:46 PM Bug #45737 (Duplicate): Module 'cephadm' has failed: cannot send (already closed?)
- 09:39 AM Bug #45737 (Duplicate): Module 'cephadm' has failed: cannot send (already closed?)
- Hi,
I have a development cluster running on 4 VMs. They're all running CentOS 8 Stream and were bootstrapped using ce...
- 02:31 PM Bug #45656 (Duplicate): orchestrator: Host affinity/antiaffinity
- 12:37 PM Bug #45696 (Pending Backport): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:37 PM Bug #45629 (Pending Backport): cephadm: Allow users to provide ssh keys during bootstrap
- 07:12 AM Bug #45625 (Pending Backport): cephadm: when configuring monitoring with ceph orch, ceph dashboar...
05/27/2020
- 09:35 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- I have no idea about the timing of this run other than to say as a general rule I wait until the shaman build page is...
- 12:04 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- I've been trying for ages to reproduce this one, but have failed so far. See my attempts at https://github.com/ceph/ceph/pull/...
- 12:01 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- Removed /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014074 since it's a diff...
- 11:52 AM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014074...
- 04:59 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Sebastian Wagner wrote:
> David, do you have an idea how to make quay.io reliable?
I'm still a container noob. S...
- 09:42 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- David, do you have an idea how to make quay.io reliable?
- 01:25 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083366...
- 01:49 PM Bug #45596 (Resolved): qa/tasks/cephadm: No cephadm module detected
- 01:21 PM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Error lines related to this issue can be found at line 660 and the lines that follow.
- 11:00 AM Bug #45726 (Resolved): Module 'cephadm' has failed: auth get failed: failed to find client.crash....
- [ceph: root@ceph-node-00 /]# ceph -s
cluster:
id: 8dc6f04a-9fee-11ea-a46a-525400622549
health: HEALT...
- 12:34 PM Documentation #45728 (Resolved): Add an example for custom images to the "bootstrap a new cluster...
- ...
- 12:29 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- for this, we'll need control and information about the ports all the daemons use. Especially if they're configurable,...
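As a sketch of the port information this feature would need, a daemon-type-to-ports map could drive firewalld rules. This is a hedged illustration only: the port numbers are common defaults and `DEFAULT_PORTS`/`firewall_cmds` are hypothetical names, not cephadm's actual code.

```python
# Sketch: map each daemon type to the ports it listens on, then
# generate firewalld commands to open them. Port numbers below are
# illustrative defaults, not authoritative cephadm values.

DEFAULT_PORTS = {
    'mon': [3300, 6789],
    'mgr': [8443],             # dashboard
    'prometheus': [9095],
    'grafana': [3000],
    'node-exporter': [9100],
    'alertmanager': [9093],
}

def firewall_cmds(daemon_type):
    """Generate firewalld commands to open a daemon's ports."""
    return ['firewall-cmd --permanent --add-port=%d/tcp' % p
            for p in DEFAULT_PORTS.get(daemon_type, [])]

print(firewall_cmds('mon'))
# ['firewall-cmd --permanent --add-port=3300/tcp',
#  'firewall-cmd --permanent --add-port=6789/tcp']
```

If ports become configurable per service spec, the map would have to be populated from the spec rather than from static defaults, which is exactly the "control and information" problem noted above.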
- 12:26 PM Bug #45624: cephadm: "ceph orch apply mgr" is deploying in wrong nodes
- probably getting fixed by https://github.com/ceph/ceph/pull/34633
- 12:26 PM Documentation #45623: cephadm: "ceph orch apply mon" is deploying in wrong nodes
- probably getting fixed by https://github.com/ceph/ceph/pull/34633
- 12:24 PM Bug #45621 (Duplicate): check-host returns a terribly unhelpful error message
- 12:22 PM Bug #45617 (Pending Backport): mgr/orch: mds with explicit naming
- 12:09 PM Bug #45174 (Fix Under Review): cephadm: missing parameters on 'orch daemon add iscsi'
- 12:08 PM Bug #45587 (Need More Info): mgr/cephadm: Failed to create encrypted OSD
- Is this reproducible with the latest master?
- 10:17 AM Feature #44886: cephadm: allow use of authenticated registry
- see also https://github.com/ceph/ceph/pull/35217
- 09:50 AM Bug #45725 (Can't reproduce): cephadm: Further improve "Failed to infer CIDR network for mon ip"
- ...
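As an illustration of what "inferring the CIDR network for the mon ip" means, a minimal sketch using Python's `ipaddress` module: find the candidate network that contains the address, and fail clearly when none does. `infer_cidr` and the candidate-network list are hypothetical, not cephadm's actual implementation.

```python
# Sketch: infer which configured network a mon IP belongs to.
# In practice the candidate networks would come from the host's
# interface configuration.
import ipaddress

def infer_cidr(mon_ip, networks):
    ip = ipaddress.ip_address(mon_ip)
    for net in networks:
        if ip in ipaddress.ip_network(net):
            return net
    # This is the failure case the tracker issue wants improved:
    # the message should tell the user which networks were checked.
    raise ValueError("Failed to infer CIDR network for mon ip %s "
                     "(checked: %s)" % (mon_ip, ', '.join(networks)))

print(infer_cidr('10.20.92.201', ['192.168.0.0/24', '10.20.92.0/24']))
# 10.20.92.0/24
```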
- 09:33 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- Good idea! Will push up a PR for ceph (and on for remoto) in the morning :)
- 07:33 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- I'd go with solution 1.
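A minimal sketch of the connection-aware pattern under discussion: retry once through a fresh connection when the channel turns out to be closed. `FakeConnection` and `execute_with_reconnect` are hypothetical stand-ins, not remoto's real API.

```python
# Sketch of reconnect-on-closed-channel. A real fix would wrap or
# monkey-patch remoto's connection class; FakeConnection just stands
# in for an SSH channel that can silently go away.

class ConnectionClosed(Exception):
    pass

class FakeConnection:
    def __init__(self):
        self.closed = False

    def execute(self, cmd):
        if self.closed:
            raise ConnectionClosed("cannot send (already closed?)")
        return "ok: " + cmd

def execute_with_reconnect(conn_factory, conn, cmd):
    """Run cmd, transparently reconnecting once if the channel is closed."""
    try:
        return conn.execute(cmd), conn
    except ConnectionClosed:
        conn = conn_factory()          # re-establish the connection
        return conn.execute(cmd), conn

conn = FakeConnection()
conn.closed = True                     # simulate a dropped channel
out, conn = execute_with_reconnect(FakeConnection, conn, "check-host")
print(out)  # ok: check-host
```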
Plus a monkey patch. Something like... - 05:25 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- We just need to be connection aware.
Solution 1
===================
Remoto annoyingly doesn't have a method in...
- 03:01 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- I've updated the duplicate bug with:
I've managed to recreate, I have 2 nodes, node1(10.20.92.201) and node2(10.2...
- 09:26 AM Bug #45724 (Resolved): check-host should not fail using fqdn or not that hard
- I would suggest either identifying that it's an FQDN or answering "Host not found. Use 'ceph orch host ls' to see all manag...
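A minimal sketch of that suggestion, assuming a hypothetical `check_host` helper that compares the short name against the registered hosts before giving up:

```python
# Sketch: detect when the user passed an FQDN whose short hostname is
# a known host, and say so instead of failing opaquely.
def check_host(name, known_hosts):
    if name in known_hosts:
        return "ok"
    short = name.split('.')[0]
    if '.' in name and short in known_hosts:
        return ("%s looks like an FQDN; host is registered as %r"
                % (name, short))
    return "Host not found. Use 'ceph orch host ls' to see all managed hosts."

print(check_host('node2.example.com', {'node1', 'node2'}))
```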
- 07:52 AM Bug #45701 (Duplicate): rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health c...
- 07:43 AM Bug #45701: rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health check
- /a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085552
/a/yuriw-2020-05-23_15:15:01-...
- 07:30 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- solution:
1. set up a mirror at vossi04
2. added toml as a dependency for test, see https://github.com/ceph/teuth...
- 07:27 AM Bug #45631 (Resolved): Error parsing image configuration: Invalid status code returned when fetch...
- 01:12 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083351...
- 02:59 AM Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
- I've managed to recreate, I have 2 nodes, node1(10.20.92.201) and node2(10.20.92.202).
Node2 happens to be the cu... - 02:58 AM Bug #45719 (Can't reproduce): CommandFailedError: Command failed on smithi073 with status 1: 'tes...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083241
/a/yuriw-2020-05-22_19:55:53-...
- 02:45 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083157
/a/yuriw-2020-05-22_19:55:53-...
05/26/2020
- 03:00 PM Feature #45712 (Duplicate): Add 'state' attribute to ServiceSpec
- We should track the following states:
* Pending
* Failure
* Success
* Creating
when applying/creating servic...
- 08:54 AM Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
- https://pad.ceph.com/p/cidr_error_cephadm_docs
The list of commands in this Etherpad represents Zac Dover's 26 May...
- 08:11 AM Bug #45631 (In Progress): Error parsing image configuration: Invalid status code returned when fe...
- 05:34 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083319
INFO:cephadm:ceph:stderr Er...
- 04:33 AM Bug #45701 (Duplicate): rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health c...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083267...
05/25/2020
- 08:48 PM Bug #44832: cephadm: `ceph cephadm generate-key` fails with No such file or directory: '/tmp/...
- @Vladimir: with Octopus, to find the version you are using it is no longer sufficient to examine RPMs. Please post th...
- 03:56 PM Bug #45700: cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Octopus
- Further testing: it appears none of the cryptsetup commands exit properly when called from Ceph. When running them on...
- 03:40 PM Bug #45700 (Resolved): cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Oc...
- When creating or starting an encrypted OSD a cryptsetup process hangs indefinitely and prevents the creation or start...
- 12:02 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- - https://planet.jboss.org/post/deploy_and_configure_a_local_docker_caching_proxy
- https://docs.docker.com/registry...
- 10:59 AM Bug #45696 (Fix Under Review): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 10:45 AM Bug #45696 (In Progress): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 10:44 AM Bug #45696 (Resolved): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- While bootstrapping a cluster, I've provided "--dashboard-key" and "--dashboard-crt" options pointing to files that d...
- 09:39 AM Bug #45629 (Fix Under Review): cephadm: Allow users to provide ssh keys during bootstrap
05/24/2020
- 12:19 PM Bug #45672 (Can't reproduce): Unable to add additional hosts to cluster using cephadm
- After configuring nodes 2 and 3 with permission for the node 1 root user to SSH with Ceph's configuration and key, co...
- 11:52 AM Bug #44832: cephadm: `ceph cephadm generate-key` fails with No such file or directory: '/tmp/...
- Seems the target version does not include fix for the issue:
...
INFO:cephadm:Generating ssh key...
INFO:cephadm:N...
- 05:26 AM Feature #44886 (In Progress): cephadm: allow use of authenticated registry
05/23/2020
- 03:52 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- ...
- 03:18 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- Sebastian Wagner wrote:
> I think this is caused by our ci downloading too many monitoring images.
>
> I think we...
- 03:05 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- ...
05/22/2020
- 11:58 AM Bug #45596 (Fix Under Review): qa/tasks/cephadm: No cephadm module detected
- 10:43 AM Bug #45656 (Duplicate): orchestrator: Host affinity/antiaffinity
- Maybe we should follow the design/"spirit" embedded in:
"k8s's Assigning Pods to Nodes":https://kubernetes.io/docs/...
- 10:41 AM Feature #45655 (New): orchestrator: Host affinity/antiaffinity
- Maybe we should follow the design/"spirit" embedded in:
[[k8s's Assigning Pods to Nodes:https://kubernetes.io/docs/...
- 09:18 AM Feature #45654 (Rejected): orchestrator: support OSDs backed by LVM LV/VG
- Available devices should show unused LVs/VGs.
OSDs should use other logical volumes/volume groups as "storage dev...
- 09:17 AM Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly
- Michael Fritch wrote:
> Also, any ideas why the fsname is `all` ??
It is from here in ceph_mdss()...
- 08:41 AM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user
- cephadm must support a <username> option so you can specify any user that has password-less sudo.
Needed also to ...
- 08:35 AM Feature #45652 (Duplicate): cephadm: Allow user to select monitoring stack ports
- The user must be able to change the port settings for the monitoring stack.
We can continue using as default:...
- 07:42 AM Feature #44628: cephadm: Add initial firewall management to cephadm
- The user must be able to decide which ports to use (both http and https).
- 06:53 AM Bug #45625 (Fix Under Review): cephadm: when configuring monitoring with ceph orch, ceph dashboar...
- 04:44 AM Bug #45584 (Resolved): qa/tasks/cephadm: With roleless feature no mons are deployed
- 03:11 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- ...
- 02:54 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 01:03 AM Bug #45037 (Resolved): octopus: cephadm: non-ceph units put wrong uid:gid in systemd unit file
- 12:56 AM Bug #44894 (Resolved): cephadm: non-ceph units put wrong uid:gid in systemd unit file
05/21/2020
- 08:23 AM Bug #45625 (In Progress): cephadm: when configuring monitoring with ceph orch, ceph dashboard is ...
- 03:25 AM Feature #45163 (Pending Backport): cephadm: iscsi: read and write config-key for the dashboard
- 12:43 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Looks similar...
- 12:05 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- I think this is caused by our CI downloading too many monitoring images.
I think we have two options now:
1. co...