Activity
From 06/16/2022 to 07/15/2022
07/15/2022
- 07:11 PM Bug #56573 (New): test_cephadm.sh: KeyError: 'TYPE'
- /a/nojha-2022-07-14_20:32:09-rados-snapshot_key_conversion-distro-default-smithi/6930915...
- 12:06 PM Documentation #50883 (Duplicate): cephadm: mds_cache_memory_limit
07/14/2022
- 06:08 PM Backport #56473 (Resolved): pacific: cephadm: removes ceph.conf during qa run causing command fai...
- 06:08 PM Backport #56455 (Resolved): pacific: cephadm uses static placement when creating daemons causing ...
- 06:08 PM Documentation #54399 (Resolved): Enhance cephadm daemon [config|redeploy|restart] options documen...
- 06:08 PM Backport #56436 (Resolved): pacific: Enhance cephadm daemon [config|redeploy|restart] options doc...
- 06:07 PM Documentation #54474 (Resolved): Improve the documentation of ceph upgrade process
- 06:07 PM Backport #56158 (Resolved): pacific: Improve the documentation of ceph upgrade process
- 06:06 PM Backport #56505 (Resolved): quincy: cephadm spec: document "config" key
- 06:06 PM Backport #56437 (Resolved): quincy: Enhance cephadm daemon [config|redeploy|restart] options docu...
- 06:05 PM Documentation #55357 (Resolved): Update cephadm doc to reflect the new per fsid ceph configuratio...
- 06:05 PM Backport #56171 (Resolved): quincy: Update cephadm doc to reflect the new per fsid ceph configura...
- 06:05 PM Backport #56159 (Resolved): quincy: Improve the documentation of ceph upgrade process
- 11:41 AM Bug #56508: haproxy check fails for ceph-grafana service
- I changed the cephadm code in the following PR:
https://github.com/ceph/ceph/pull/47098
to store the grafana ce...
07/13/2022
- 10:47 PM Feature #56433 (Resolved): Add ceph-mib subpackage to install SNMP MIB file
- 10:14 PM Bug #56552 (Resolved): cephadm: reduce spam to cephadm.log
- cephadm.log is getting filled with a bunch of messages that won't be useful in 99.9% of situations and make it hard t...
- 12:53 AM Backport #56159 (In Progress): quincy: Improve the documentation of ceph upgrade process
- 12:51 AM Backport #56171 (In Progress): quincy: Update cephadm doc to reflect the new per fsid ceph config...
- 12:50 AM Backport #56177 (In Progress): quincy: cephadm: osd memory autotuning doesn't work with FQDN hosts
- 12:48 AM Backport #56437 (In Progress): quincy: Enhance cephadm daemon [config|redeploy|restart] options d...
- 12:47 AM Backport #56454 (In Progress): quincy: cephadm uses static placement when creating daemons causin...
- 12:45 AM Backport #56474 (In Progress): quincy: cephadm: removes ceph.conf during qa run causing command f...
- 12:44 AM Backport #56475 (In Progress): quincy: add better message when removing osd
- 12:43 AM Backport #56494 (In Progress): quincy: cephadm: password generated for ingress keepalived will be...
- 12:41 AM Backport #56501 (In Progress): quincy: vstart fails with --cephadm option
- 12:40 AM Backport #56505 (In Progress): quincy: cephadm spec: document "config" key
07/12/2022
- 07:48 AM Bug #56485: ceph orch upgrade stuck, ceph orch not updating
- Yes, I have tried a mgr failover, but to no effect; the next mgr continued on with the same task.
I assume there is s...
07/11/2022
- 07:56 PM Bug #56523 (New): Cephadm fails to automatically create OSD with shared DB/WAL device
- I followed the procedure https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd to replace a failed d...
- 01:33 PM Bug #56485: ceph orch upgrade stuck, ceph orch not updating
- If cephadm is well and truly stuck, the best thing to do might be a mgr failover ("ceph mgr fail"). That will at least ...
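The suggested recovery can be sketched as follows. This is a minimal illustration against a live cephadm-managed cluster, not a command sequence taken from the tracker entry; the follow-up status checks are assumptions about what one would typically verify afterwards.

```shell
# Fail over to a standby mgr; the new active mgr restarts the cephadm module
# and picks up the queued orchestrator work.
ceph mgr fail

# Verify which mgr took over, and whether the stuck operation resumed.
ceph mgr stat
ceph orch upgrade status
```

These commands require a running cluster with the orchestrator module enabled, so there is no standalone way to exercise them outside one.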
07/09/2022
07/08/2022
- 04:56 PM Bug #56508: haproxy check fails for ceph-grafana service
- Couldn't this be solved by generating wildcard certificates (@*.grafana.<domain>@) and some kind of hostname resoluti...
- 01:43 PM Bug #56508 (Resolved): haproxy check fails for ceph-grafana service
- If OSP is deployed with ceph-dashboard there are multiple ceph-dashboard services deployed and placed behind haproxy, ...
- 10:26 AM Backport #56505 (Resolved): quincy: cephadm spec: document "config" key
- https://github.com/ceph/ceph/pull/47068
- 10:25 AM Backport #56504 (Resolved): pacific: cephadm spec: document "config" key
- https://github.com/ceph/ceph/pull/47321
- 08:41 AM Backport #56502 (Rejected): pacific: vstart fails with --cephadm option
- https://github.com/ceph/ceph/pull/47322
- 08:41 AM Backport #56501 (Resolved): quincy: vstart fails with --cephadm option
- https://github.com/ceph/ceph/pull/47069
- 08:38 AM Bug #52514 (Pending Backport): vstart fails with --cephadm option
07/07/2022
- 10:00 PM Backport #56494 (Resolved): quincy: cephadm: password generated for ingress keepalived will be tr...
- https://github.com/ceph/ceph/pull/47070
- 09:55 PM Feature #55491 (Pending Backport): cephadm: password generated for ingress keepalived will be tru...
- 02:06 PM Feature #43692: repave osds
- For future reference: https://github.com/ceph/ceph/pull/43260
- 08:59 AM Documentation #53997 (Pending Backport): cephadm spec: document "config" key
07/06/2022
- 04:51 PM Bug #56485: ceph orch upgrade stuck, ceph orch not updating
- debug logs:...
- 03:59 PM Bug #56485 (New): ceph orch upgrade stuck, ceph orch not updating
- Ceph upgrade started with:
$ ceph orch upgrade start --ceph-version 16.2.9
caused the following error message o...
07/05/2022
- 06:50 PM Backport #56475 (Resolved): quincy: add better message when removing osd
- https://github.com/ceph/ceph/pull/47071
- 06:45 PM Bug #56092 (Pending Backport): add better message when removing osd
- 05:40 PM Backport #56158 (In Progress): pacific: Improve the documentation of ceph upgrade process
- 05:38 PM Backport #55949 (Resolved): quincy: cephadm crashes when trying to restart an invalid service nam...
- 05:37 PM Feature #55813 (Resolved): Allow setting crush_device_class in OSD service specs
- 05:37 PM Backport #55992 (Resolved): quincy: Allow setting crush_device_class in OSD service specs
- 05:35 PM Backport #56436 (In Progress): pacific: Enhance cephadm daemon [config|redeploy|restart] options ...
- 05:31 PM Bug #55674 (Resolved): mgr/cephadm: alertmanager generate_config also doesn't consider FQDN
- 05:31 PM Backport #56042 (Resolved): pacific: mgr/cephadm: alertmanager generate_config also doesn't consi...
- 05:27 PM Backport #56455 (In Progress): pacific: cephadm uses static placement when creating daemons causi...
- 05:24 PM Backport #56473 (In Progress): pacific: cephadm: removes ceph.conf during qa run causing command ...
- 02:45 PM Backport #56473 (Resolved): pacific: cephadm: removes ceph.conf during qa run causing command fai...
- https://github.com/ceph/ceph/pull/46974
- 02:45 PM Backport #56474 (Resolved): quincy: cephadm: removes ceph.conf during qa run causing command failure
- https://github.com/ceph/ceph/pull/47072
- 02:40 PM Bug #56024 (Pending Backport): cephadm: removes ceph.conf during qa run causing command failure
07/04/2022
- 09:50 AM Backport #56455 (Resolved): pacific: cephadm uses static placement when creating daemons causing ...
- https://github.com/ceph/ceph/pull/46975
- 09:50 AM Backport #56454 (Resolved): quincy: cephadm uses static placement when creating daemons causing a...
- https://github.com/ceph/ceph/pull/47073
- 09:45 AM Bug #56415 (Pending Backport): cephadm uses static placement when creating daemons causing a hots...
07/01/2022
- 08:10 PM Feature #50061 (Closed): cephadm: automatically redeploy daemons if user changes which container ...
- Not sure we really want this. Closing.
- 12:13 PM Documentation #53997 (In Progress): cephadm spec: document "config" key
- 12:07 PM Feature #55544: Make it possible to use custom haproxy config when using ingress
- Redouane Kachach Elhichou wrote:
> cephadm already allows using your own haproxy.cfg:
>
> [...]
>
> Please hav...
- 11:54 AM Feature #55544 (Closed): Make it possible to use custom haproxy config when using ingress
- 11:52 AM Feature #55544 (Need More Info): Make it possible to use custom haproxy config when using ingress
- cephadm already allows using your own haproxy.cfg:...
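One likely mechanism behind this comment is cephadm's template customization via config-keys; a sketch follows. The exact config-key path and service name here are assumptions modeled on cephadm's `mgr/cephadm/services/...` template convention and should be checked against the documentation for your release.

```shell
# Hypothetical sketch: override the haproxy config template that cephadm
# renders for the ingress service, then reconfigure so it takes effect.
# "haproxy.cfg.j2" is an illustrative local file; "ingress.rgw.foo" is an
# illustrative service name.
ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg -i haproxy.cfg.j2
ceph orch reconfig ingress.rgw.foo
```

Both commands act on a live cluster, so this is only a sketch of the workflow, not a tested recipe.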
- 11:59 AM Cleanup #54000: cephadm: upgrade commands should return yaml
- Could be straightforward once https://github.com/ceph/ceph/pull/45467 is merged.
- 11:45 AM Backport #56437 (Resolved): quincy: Enhance cephadm daemon [config|redeploy|restart] options docu...
- https://github.com/ceph/ceph/pull/47074
- 11:45 AM Backport #56436 (Resolved): pacific: Enhance cephadm daemon [config|redeploy|restart] options doc...
- https://github.com/ceph/ceph/pull/46976
- 11:44 AM Documentation #54399 (Pending Backport): Enhance cephadm daemon [config|redeploy|restart] options...
- 11:40 AM Feature #44461 (Fix Under Review): cephadm: watch Grafana certificates
06/30/2022
- 08:18 PM Feature #56433 (Resolved): Add ceph-mib subpackage to install SNMP MIB file
- packages the SNMP MIB file introduced in https://tracker.ceph.com/issues/52708
- 08:07 PM Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- /a/yuriw-2022-06-29_13:30:16-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/[6905513, 6905691]
- 08:04 PM Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- yuriw-2022-06-30_14:20:05-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/[6907398, 6907403, 6907406, 6...
- 12:23 PM Feature #55491 (Fix Under Review): cephadm: password generated for ingress keepalived will be tru...
06/29/2022
- 02:13 PM Bug #56419 (New): test_cephadm.sh: Failed to start Ceph nfs.a for 00000000-0000-0000-0000-0000dea...
- /a/yuriw-2022-06-27_15:15:17-rados-wip-yuri2-testing-2022-06-24-1331-distro-default-smithi/6901011
Failure Reason:...
- 11:06 AM Feature #55879: mgr/cephadm: balanced static placement with rendezvous or consistent hashing
- Cephadm used to distribute the load randomly across the cluster but this was broken by the change: https://github.com...
- 11:01 AM Bug #56415 (Fix Under Review): cephadm uses static placement when creating daemons causing a hots...
- 09:47 AM Bug #56415 (Resolved): cephadm uses static placement when creating daemons causing a hotspot on t...
- cephadm uses static placement when creating daemons. As a consequence, most of the daemons end up on the first node (...
06/28/2022
- 12:19 PM Bug #56402 (In Progress): cephadm can't make nfs-ganesha start because of the pidfile not writable.
- ...
06/27/2022
- 07:53 PM Feature #56394 (Resolved): cephadm: support custom config files for daemons
- There are a number of potential use cases, often in relation to the monitoring stack, where some config file must be moun...
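A sketch of what such a spec looks like, assuming the `custom_configs` spec field this feature introduced (the service, mount path, and file content below are illustrative, not taken from the ticket):

```shell
# Hypothetical example: ask cephadm to mount an extra config file into a
# daemon's container via the service spec.
ceph orch apply -i - <<'EOF'
service_type: grafana
service_name: grafana
custom_configs:
  - mount_path: /etc/grafana/extra.ini
    content: |
      ; arbitrary file content cephadm should place in the container
      [server]
      http_port = 3000
EOF
```

The spec is applied against a live cluster, so treat this as an outline of the intended shape rather than a verified invocation.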
06/24/2022
- 03:14 PM Backport #56043 (Resolved): octopus: mgr/cephadm: alertmanager generate_config also doesn't consi...
- 03:10 PM Backport #56043: octopus: mgr/cephadm: alertmanager generate_config also doesn't consider FQDN
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46787
merged
- 01:53 PM Backport #56044 (Resolved): quincy: mgr/cephadm: alertmanager generate_config also doesn't consid...
- 01:21 PM Bug #54581 (Resolved): Virtual IP is not validated during the creation (service fails later)
- 01:20 PM Backport #56069 (Resolved): quincy: Virtual IP is not validated during the creation (service fail...
- 12:29 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- https://pulpito.ceph.com/adking-2022-06-23_22:33:27-orch:cephadm-wip-adk3-testing-2022-06-23-1416-quincy-distro-defau...
- 04:04 AM Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894632
- 04:01 AM Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894630
- 03:50 AM Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894623
/a/yuriw-2022-06-...
- 03:14 AM Bug #56381 (Duplicate): crash: File "mgr/cephadm/module.py", in serve: serve.serve()
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=298dfc576212a8359390921e...
- 03:11 AM Bug #56323 (New): crash: File "mgr/cephadm/module.py", in __init__: self.upgrade = CephadmUpgrade...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4a466370b53b55b425360bbd...
06/23/2022
- 02:30 PM Feature #56179: [RFE] Our prometheus instance should scrape itself
- Thanks for writing down the RFE proposal. Please, can you provide some more information about what config needs to be add...
- 07:20 AM Feature #56179: [RFE] Our prometheus instance should scrape itself
- I don't think it covers everything you want but Prometheus has a mixin: https://github.com/prometheus/prometheus/tree...
- 01:01 AM Feature #56179 (New): [RFE] Our prometheus instance should scrape itself
- At the moment we don't scrape metrics from the prometheus server itself.
If we did, we could
* track and monitor...
- 07:51 AM Bug #55800 (Resolved): cephadm crashes when trying to restart an invalid service name which start...
- 07:50 AM Bug #52906 (Resolved): cephadm rm-daemon is not closing any tcp ports that were opened for the da...
- 12:29 AM Feature #56178 (New): [RFE] add a --force or --yes-i-really-mean-it to ceph orch upgrade
- At the moment we don't permit a downgrade to a dev build, but sometimes this is necessary when issues are detected an...
06/22/2022
- 09:00 PM Bug #54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
- /a/yuriw-2022-06-21_16:28:27-rados-wip-yuri4-testing-2022-06-21-0704-pacific-distro-default-smithi/6889715
Descrip...
- 07:45 PM Backport #56176 (Resolved): pacific: cephadm: osd memory autotuning doesn't work with FQDN hosts
- 06:50 PM Backport #56176 (Resolved): pacific: cephadm: osd memory autotuning doesn't work with FQDN hosts
- https://github.com/ceph/ceph/pull/46556
- 06:50 PM Backport #56177 (Resolved): quincy: cephadm: osd memory autotuning doesn't work with FQDN hosts
- https://github.com/ceph/ceph/pull/47075
- 06:46 PM Bug #55841 (Pending Backport): cephadm: osd memory autotuning doesn't work with FQDN hosts
- 06:40 PM Bug #54251 (Resolved): ceph orch upgrade get stuck - mgr set_store mon returned -27: error: entry...
- 06:40 PM Bug #53624 (Resolved): cephadm agent: set_store mon returned -27: error: entry size limited to 65...
- 06:39 PM Bug #55654 (Resolved): cephadm: adoption of osds from cluster with custom names is broken
- 06:39 PM Bug #53424 (Resolved): CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- 06:39 PM Bug #55155 (Resolved): grafana/Makefile: don't push image to docker
- 06:33 PM Bug #56000: task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls`
- /a/yuriw-2022-06-11_02:24:12-rados-quincy-release-distro-default-smithi/6873817
- 06:26 PM Backport #55953 (Resolved): quincy: cephadm: adoption of osds from cluster with custom names is b...
- 06:22 PM Backport #55943 (Resolved): quincy: ceph orch upgrade status should report if upgrade in actually...
- 06:18 PM Backport #56041 (Resolved): quincy: Add and maintain keepalived container image for Ceph in main ...
- 06:15 PM Backport #55947 (Resolved): quincy: cephadm throws an exception when not able to list tags for up...
- 06:13 PM Backport #55987 (Resolved): quincy: ceph orch upgrade get stuck - mgr set_store mon returned -27:...
- 06:13 PM Backport #55987: quincy: ceph orch upgrade get stuck - mgr set_store mon returned -27: error: ent...
- linked incorrect PR in previous comment. Covered by https://github.com/ceph/ceph/pull/46791
- 12:08 AM Backport #55987 (In Progress): quincy: ceph orch upgrade get stuck - mgr set_store mon returned -...
- will be covered by https://github.com/ceph/ceph/pull/46790
- 06:12 PM Backport #55988 (Resolved): quincy: cephadm agent: set_store mon returned -27: error: entry size ...
- 12:07 AM Backport #55988 (In Progress): quincy: cephadm agent: set_store mon returned -27: error: entry si...
- 06:09 PM Backport #55945 (Resolved): pacific: ceph orch upgrade status should report if upgrade in actuall...
- 06:07 PM Backport #55948 (Resolved): pacific: cephadm throws an exception when not able to list tags for u...
- 06:04 PM Backport #55950 (Resolved): pacific: cephadm crashes when trying to restart an invalid service na...
- 06:01 PM Backport #55963 (Resolved): pacific: cephadm rm-daemon is not closing any tcp ports that were ope...
- 03:30 PM Documentation #55357 (Pending Backport): Update cephadm doc to reflect the new per fsid ceph conf...
- 03:12 PM Documentation #55357 (Closed): Update cephadm doc to reflect the new per fsid ceph configuration ...
- 02:21 PM Documentation #55357 (Pending Backport): Update cephadm doc to reflect the new per fsid ceph conf...
- 03:13 PM Documentation #54474 (Closed): Improve the documentation of ceph upgrade process
- 10:41 AM Documentation #54474 (Pending Backport): Improve the documentation of ceph upgrade process
- 02:49 PM Bug #53939: ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading d...
- Saw this same issue presenting itself slightly differently.
https://pulpito.ceph.com/adking-2022-06-22_00:33:18-ra...
- 02:25 PM Backport #56171 (Resolved): quincy: Update cephadm doc to reflect the new per fsid ceph configura...
- https://github.com/ceph/ceph/pull/47076
- 10:46 AM Backport #56159 (Resolved): quincy: Improve the documentation of ceph upgrade process
- https://github.com/ceph/ceph/pull/47077
- 10:46 AM Backport #56158 (Resolved): pacific: Improve the documentation of ceph upgrade process
- https://github.com/ceph/ceph/pull/46977
- 10:45 AM Bug #52906 (Closed): cephadm rm-daemon is not closing any tcp ports that were opened for the daem...
- 10:43 AM Bug #55800 (Closed): cephadm crashes when trying to restart an invalid service name which starts ...
- 10:43 AM Bug #54581 (Closed): Virtual IP is not validated during the creation (service fails later)
- 10:42 AM Bug #55801 (Closed): cephadm throws an exception when not able to list tags for upgrade
- 12:15 AM Backport #56069 (In Progress): quincy: Virtual IP is not validated during the creation (service f...
- 12:13 AM Backport #56044 (In Progress): quincy: mgr/cephadm: alertmanager generate_config also doesn't con...
- 12:11 AM Backport #55992 (In Progress): quincy: Allow setting crush_device_class in OSD service specs
- 12:05 AM Backport #55956 (Resolved): quincy: grafana/Makefile: don't push image to docker
- covered by https://github.com/ceph/ceph/pull/45799
- 12:03 AM Backport #55951 (In Progress): quincy: cephadm: cephadm user/home removed during RPM upgrade
- 12:00 AM Backport #55949 (In Progress): quincy: cephadm crashes when trying to restart an invalid service ...
06/21/2022
- 09:44 PM Backport #56043 (In Progress): octopus: mgr/cephadm: alertmanager generate_config also doesn't co...
- 08:02 PM Backport #55947 (In Progress): quincy: cephadm throws an exception when not able to list tags for...
- 05:21 PM Backport #55963 (In Progress): pacific: cephadm rm-daemon is not closing any tcp ports that were ...
- 05:18 PM Backport #55962 (Resolved): quincy: cephadm rm-daemon is not closing any tcp ports that were open...
- covered in https://github.com/ceph/ceph/pull/46360
- 05:16 PM Bug #56148 (New): rook: cephclient fails to set mgr module mode "upmap"
- /a/yuriw-2022-06-17_13:58:31-rados-wip-yuri7-testing-2022-06-16-1051-pacific-distro-default-smithi/6884406...
- 05:15 PM Backport #55957 (Resolved): pacific: grafana/Makefile: don't push image to docker
- covered in https://github.com/ceph/ceph/pull/45940
- 05:13 PM Backport #55964 (Resolved): pacific: Add SNMP MIB to the monitoring components within the core ce...
- seems to be covered by https://github.com/ceph/ceph/pull/44480
- 05:10 PM Backport #55952 (In Progress): pacific: cephadm: cephadm user/home removed during RPM upgrade
- https://github.com/ceph/ceph/pull/46553
- 05:03 PM Backport #55950 (In Progress): pacific: cephadm crashes when trying to restart an invalid service...
- 04:41 PM Backport #55948 (In Progress): pacific: cephadm throws an exception when not able to list tags fo...
- 04:39 PM Backport #55961 (Resolved): pacific: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- covered in https://github.com/ceph/ceph/pull/44631
- 04:30 PM Backport #56042 (In Progress): pacific: mgr/cephadm: alertmanager generate_config also doesn't co...
- 03:43 PM Feature #55663: cephadm/nfs: enable cephadm to provide one virtual per ganesha instance of the NF...
- HA/NFS POC -> https://docs.google.com/document/d/19G_SBuVXu2IVTz3hWkbHs__vRF1bbAhnfkL3YSNkVkQ/edit
and a code exam...
- 02:23 PM Backport #55991 (Resolved): pacific: Allow setting crush_device_class in OSD service specs
- handled in https://github.com/ceph/ceph/pull/46555
- 08:26 AM Bug #56078: [cephadm/quincy] agent service getting deployed with unknown info
- It's a blocker for per-node user agent/exporter feature testing.
06/19/2022
- 11:29 AM Bug #56024: cephadm: removes ceph.conf during qa run causing command failure
- Similar failure - https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-test...
06/17/2022
- 08:32 PM Bug #56100 (New): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade - Cephadm MGR crash - in KeyError
- ...
- 07:53 PM Bug #55808: task/test_nfs: KeyError: 'events'
- /a/yuriw-2022-06-16_16:41:04-rados-wip-yuri6-testing-2022-06-16-0651-quincy-distro-default-smithi/6882321$
- 06:32 PM Bug #55986: cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.Test...
- /a/yuriw-2022-06-16_18:33:18-rados-wip-yuri5-testing-2022-06-16-0649-distro-default-smithi/6882595
- 02:53 PM Bug #56092 (Resolved): add better message when removing osd
- People might be confused when removing an OSD with the command `ceph orch osd rm X`.
Sometimes, they might think tha...
- 09:01 AM Bug #53762: Problem with cephfs-mirror and cephadm / ceph orch
- Sebastian Wagner wrote:
> you really should not try to deploy physical keys within the containers. The only sane way...
- 05:57 AM Bug #56078 (New): [cephadm/quincy] agent service getting deployed with unknown info
- In upstream quincy with config option set to deploy ceph_agent service
[ceph: root@magna086 /]# ceph config set mgr ...
06/16/2022
- 05:26 PM Bug #56000 (Duplicate): task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `ceph...
- The teuthology run, /a/yuriw-2022-06-09_22:06:32-rados-wip-yuri3-testing-2022-06-09-1314-distro-default-smithi/687137...
- 03:25 PM Bug #56000: task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls`
- /a/yuriw-2022-06-11_02:24:12-rados-quincy-release-distro-default-smithi/6873817
- 05:19 PM Bug #55808: task/test_nfs: KeyError: 'events'
- This is most likely a cephadm bug, where `ceph orch ps ceph --service_name=nfs.<nfs cluster name> --format=json` does...
- 02:06 PM Documentation #54474 (In Progress): Improve the documentation of ceph upgrade process
- 01:57 PM Feature #52602 (In Progress): cephadm: Prometheus: 2.28: generic http based service discovery
- 01:49 PM Documentation #52825 (Closed): haproxy causes high number of connection resets
- 01:49 PM Documentation #52825: haproxy causes high number of connection resets
- Thanks for the detailed doc. It seems like this is normal haproxy behavior and no change is needed at the cephadm level. ...
- 11:00 AM Bug #56024 (Fix Under Review): cephadm: removes ceph.conf during qa run causing command failure
- 09:17 AM Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- /a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/6881248
- 08:51 AM Bug #54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
- /a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/6881304
Descrip... - 07:15 AM Backport #56041 (In Progress): quincy: Add and maintain keepalived container image for Ceph in ma...