Activity
From 06/14/2021 to 07/13/2021
07/13/2021
- 06:41 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- https://github.com/ceph/ceph/pull/42000#issuecomment-879313562
- 11:34 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- https://bugzilla.redhat.com/show_bug.cgi?id=1972209
- 11:26 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- https://pulpito.ceph.com/pdonnell-2021-07-12_18:17:18-fs:workload-wip-pdonnell-testing-20210710.030422-distro-basic-s...
- 06:35 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- Answering myself...
- 06:12 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- ...
- 12:48 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- Try running...
- 12:43 PM Bug #51642: cephadm/rgw : RGW server is not coming up: Initialization timeout, failed to initialize
- the rgw log looks like so:...
- 12:24 PM Bug #51642 (Resolved): cephadm/rgw : RGW server is not coming up: Initialization timeout, failed ...
- I have created the ceph cluster via cephadm and it looks fine but when I tried to deploy RGW it was failing and does ...
- 04:23 PM Bug #51629: cephadm reports nodes offline after rolling reboot
- Ian Merrick wrote:
> This could be because the IP address for the ceph hosts are not defined, __and__ if you are r...
- 12:40 PM Bug #51629: cephadm reports nodes offline after rolling reboot
- does failing *all* MGR daemons (calling `ceph mgr fail ...`) help?
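The suggestion above can be sketched as a short shell sequence; this is a hedged troubleshooting sketch, not a verified fix, and the mgr daemon name is a placeholder:

```shell
# Sketch: fail the active mgr so a standby takes over and the
# orchestrator re-evaluates host reachability.
ceph mgr stat              # identify the currently active mgr
ceph mgr fail <mgr-name>   # fail it; a standby mgr becomes active
ceph orch host ls          # check whether hosts still show as offline
```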
- 11:06 AM Bug #51629: cephadm reports nodes offline after rolling reboot
- Hi,
This could be because the IP address for the ceph hosts are not defined, __and__ if you are relying on /etc/ho...
- 03:22 PM Bug #51291: Adoption fails for Ceph MDS servers
- I've tracked down the issue. More details with fix here: https://github.com/alfredodeza/remoto/issues/65
The pro...
- 12:25 PM Feature #48292 (Resolved): cephadm: allow more than 60 OSDs per host
- 11:20 AM Bug #50399 (Can't reproduce): cephadm ignores registry settings
- Please reopen this, if you still have this issue
- 11:19 AM Bug #51102 (Duplicate): RuntimeError: uid/gid not found in "rados/thrash-old-clients" test
- 04:32 AM Feature #47774: orch,cephadm: host search with filters
- Do we really need this? In most enterprise environments I've worked in, this type of thing gets jumped on by security ...
07/12/2021
- 09:25 PM Bug #51102: RuntimeError: uid/gid not found in "rados/thrash-old-clients" test
- spotted again at /a/ksirivad-2021-07-11_01:45:00-rados-wip-pg-autoscaler-overlap-distro-basic-smithi/6263029/
- 08:47 PM Bug #51291: Adoption fails for Ceph MDS servers
- I had put this task on the shelf for a while to work on other stuff and since the cluster was still in a functional s...
- 02:57 PM Bug #51620 (Fix Under Review): Ceph orch upgrade to 16.2.5 fails
- 12:53 PM Bug #51634 (Closed): Validate allowed characters for rgw realms
- ...
- 12:09 PM Bug #51632 (Resolved): cephadm: selinux is not checked against running configuration
- The _fetch_selinux function inside kernel_security in cephadm is not checking the actual selinux mode in which the ke...
- 11:52 AM Feature #47774: orch,cephadm: host search with filters
- That would require some magic tricks to generate the list of hosts, like https://docs.ansible.com/ansible/latest/coll...
- 10:24 AM Bug #51629 (New): cephadm reports nodes offline after rolling reboot
- Hello,
On my 8-node Octopus 15.2.13 cluster, which I installed using cephadm and podman for containers, I did a ro...
07/11/2021
- 01:56 PM Bug #51616: Updating node-exporter deployment progress stuck
- Confirmed.
cluster:
id: 4067126d-01cb-40af-824a-881c130140f8
health: HEALTH_OK
(muted: A...
07/09/2021
- 10:48 PM Bug #51621: Multiple "Updating node-exporter deployment"
- After some debugging I've managed to "fix" this by removing the node-exporter service, rebooting all managers, and th...
- 10:37 PM Bug #51621 (Duplicate): Multiple "Updating node-exporter deployment"
- After updating to 16.2.5 the orchestrator seems to be starting a new task every minute with the following message: "U...
- 09:37 PM Bug #51620 (Resolved): Ceph orch upgrade to 16.2.5 fails
- Hi there,
While upgrading my cluster from 16.2.4 to the 16.2.5 release the upgrade seems to get stuck on upgrading...
- 05:06 PM Feature #51618 (Rejected): rgw_frontends configuration in config database not persistent
- following config settings in mon config DB...
- 03:30 PM Bug #51617 (New): cephadm: remove rgw block from minimal ganesha configuration
- Check the keyring section
https://pad.ceph.com/p/nfs
https://github.com/ceph/ceph/blob/master/src/pybind/mgr/ceph...
- 03:15 PM Bug #51616 (Resolved): Updating node-exporter deployment progress stuck
- after upgrading to 16.2.5 via "ceph orch upgrade start --ceph-version 16.2.5" I get more and more instances of...
- 01:02 PM Bug #51571: cephadm: remove iscsi service fails due to incorrect gateway name
- This seems to be a duplicate of the other issue. Python's socket.getfqdn() unfortunately picks up the container name ...
- 07:33 AM Bug #51601 (Closed): mgr/dashboard: server does not bind to all addresses anymore
- I've upgraded ceph-mgr from 16.2.4 to 16.2.5 and the dashboard stopped listening on all addresses. Instead, it uses the...
- 06:53 AM Bug #51446: Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168.122.121.',)
- The same for all other places, where now get_mgr_ip is used. For example the prometheus module: docs say default is 0...
- 06:44 AM Bug #51446: Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168.122.121.',)
- Despite being closed, the change does affect certain setups using HA-IPs and/or IPVS because contrary to the docs say...
07/08/2021
- 02:30 PM Bug #48291 (Fix Under Review): Grafana should not have a predictable default password
- 02:07 PM Tasks #49490 (Need More Info): cephadm additions/changes to support everything rgw.py needs
- 01:21 PM Tasks #49490 (In Progress): cephadm additions/changes to support everything rgw.py needs
- 01:28 PM Bug #51277 (Can't reproduce): cephadm bootstrap: unable to set up admin label
- 12:32 PM Feature #51596: implement "gather-logs" feature in cephadm
- replaces https://github.com/ceph/cephadm-ansible/issues/18
- 12:20 PM Feature #51596 (New): implement "gather-logs" feature in cephadm
- ceph-ansible used to provide a playbook for fetching logs/config/keys from a running cluster.
Since Ansible isn't ...
- 11:41 AM Bug #45420 (Can't reproduce): cephadmunit.py: teuthology.exceptions.CommandFailedError: Command f...
- 11:36 AM Tasks #45914 (Won't Fix): cephadm: make src/cephadm/vstart-smoke.sh a proper teuthology test
- don't care
- 11:35 AM Bug #43415 (Won't Fix): python3-remoto not available in ubuntu
- out of scope
- 10:47 AM Bug #51592 (Resolved): cephadm should not use the lvm binary of the container
- See Gluster's way of doing this:
https://github.com/gluster/gluster-containers/blob/master/CentOS/exec-on-host.sh
...
- 10:24 AM Bug #51590 (In Progress): cephadm: iscsi: The first gateway defined must be the local machine
- 09:58 AM Bug #51590 (Resolved): cephadm: iscsi: The first gateway defined must be the local machine
- 1. Deploy cluster using cephadm
2. Deploy iscsi services using iscsi.yml file...
07/07/2021
- 08:16 PM Bug #49622 (Pending Backport): cephadm orchestrator allows to delete hosts with ceph daemons running
- 08:16 PM Feature #48624 (Pending Backport): ceph orch drain <host>
- 04:10 PM Bug #51571: cephadm: remove iscsi service fails due to incorrect gateway name
- > <cluster name>-<cluster fsid>-<daemon name>
That's in fact the container name...
- 02:42 PM Bug #51571 (Resolved): cephadm: remove iscsi service fails due to incorrect gateway name
- If the iscsi service is removed and the dashboard is deployed (dashboard mgr module enabled) then the cluster status ...
- 02:41 PM Feature #49171 (Resolved): cephadm: set osd-memory-target
- 02:38 PM Feature #51004 (In Progress): cephadm agent 2.0
- 02:34 PM Bug #51546 (Fix Under Review): cephadm: remove iscsi service fails when the dashboard isn't deployed
- 02:22 PM Bug #51567 (Closed): monitoring spec file doesn't support custom port option as per the doc
- Unable to reproduce. Issue originated from a downstream build that does not have the monitoring ports feature backpor...
- 01:25 PM Bug #51567 (Closed): monitoring spec file doesn't support custom port option as per the doc
- ...
- 01:07 PM Feature #51566 (Resolved): cephadm: cpu limit
- ceph-ansible allowed customization of CPU limits for containerized daemons per each type (so you could have different...
- 12:47 PM Bug #51111: Pacific: CEPHADM_STRAY_DAEMON after deploying iSCSI gateway with cephadm due to tcmu-...
- Hello,
I have the very same issue on a fresh install of Pacific 16.2.4 on Ubuntu with podman, but I only have used s...
- 12:17 PM Tasks #51562 (Pending Backport): Enable autotune for osd_memory_target
- Description of problem:
Enable autotune for osd_memory_target
cephadm brings the support for osd_memory_target,...
- 12:05 PM Bug #51541: cephadm: gather_facts for the host in maintainence returns empty list
- Gather facts is never called due to https://github.com/ceph/ceph/blob/f0b79b3a48e78a4b47c5c23c3991728cee01887a/src/p...
- 10:16 AM Feature #48292 (Fix Under Review): cephadm: allow more than 60 OSDs per host
- 10:16 AM Bug #51061 (Fix Under Review): GPT partitioning table: OSD "all-available-devices" tries to use "...
- 10:15 AM Bug #50928 (Fix Under Review): OSD size count mismatched with ceph orch commands
- 10:14 AM Bug #50690 (Fix Under Review): ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command no...
- 10:14 AM Bug #50592 (Fix Under Review): "ceph orch apply <svc_type>" applies placement by default without ...
- 09:51 AM Bug #48107 (Can't reproduce): cephadm fails to deploy iscsi gateway when selinux is enabled
- 09:48 AM Documentation #47142 (Fix Under Review): docs: explain the difference between services and daemons
- 09:41 AM Bug #46687 (Can't reproduce): MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- Setting this to can't reproduce as this didn't pop up again.
- 09:35 AM Feature #51302 (Duplicate): mgr/cephadm: automatically configure dashboard <-> RGW connection
- 09:34 AM Bug #49449 (Won't Fix): cephadm: synchronize container timezone with host
- 09:33 AM Bug #49273 (Resolved): cephadm fails deployment of node-exporter when ipv6 is disabled
- 09:31 AM Bug #48142 (Resolved): rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privilege...
- 09:30 AM Bug #48071 (Resolved): rook: 'ceph orch ls' does not list nfs-ganesha daemons
- 09:30 AM Bug #47968 (Resolved): rook: 'ceph orch rm' throws type error
- 09:30 AM Bug #47923 (Resolved): rook: 'ceph orch apply nfs' throws error if no ganesha daemons are deployed
- 09:29 AM Bug #47511 (Resolved): rook: 'ceph orch status' returns 403 error
- 09:28 AM Bug #46558 (Resolved): cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- 09:27 AM Bug #47513 (Resolved): rook: 'ceph orch ps' does not show image and container id correctly
- 09:05 AM Bug #51355: ingress service /var/lib/haproxy/haproxy.cfg
- An alternative approach would be to change the owner of the haproxy config to the haproxy user (currently 99 in the d...
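A minimal sketch of that alternative, assuming the default cephadm data layout; the fsid and daemon id below are placeholders, and UID 99 is taken from the comment above:

```shell
# Sketch only: give the unprivileged haproxy user inside the container
# (UID 99 in the docker image) ownership of the generated config.
chown 99:99 /var/lib/ceph/<fsid>/haproxy.<daemon-id>/haproxy/haproxy.cfg
```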
- 07:47 AM Bug #51176 (Pending Backport): Module 'cephadm' has failed: 'MegaSAS'
07/06/2021
- 09:02 PM Bug #51546: cephadm: remove iscsi service fails when the dashboard isn't deployed
- ...
- 07:54 PM Bug #51546 (Resolved): cephadm: remove iscsi service fails when the dashboard isn't deployed
- If the iscsi service is removed and the dashboard isn't deployed (dashboard mgr module not enabled) then the cluster ...
- 04:01 PM Bug #51043 (Resolved): Incorrect information about usage of ceph orch osd rm
- 03:59 PM Bug #44972 (Closed): cephadm: add-repo on ubuntu broken
- 03:57 PM Bug #50981 (Closed): cephadm: --service-type arg in 'orch ls' not handled properly for services w...
- 03:56 PM Bug #50817 (Closed): cephadm: upgrade loops forever if not enough mds daemons
- 03:56 PM Feature #50815 (In Progress): cephadm: Removing an offline host
- 03:54 PM Documentation #51214 (Closed): config manage_etc_ceph_ceph_conf_hosts is typod
- 03:53 PM Bug #51328 (Fix Under Review): cephadm: `infer_fsid` should use fsid from ceph conf
- 03:51 PM Bug #51541 (Rejected): cephadm: gather_facts for the host in maintainence returns empty list
- https://github.com/ceph/ceph/pull/41816/files#diff-4f2fb7d330e74b64ac41457b7c7a723cd78db86433e0b0c398874531e5a7e39eR1...
- 03:51 PM Cleanup #43700 (Fix Under Review): cephadm: make it a proper python package
- 03:49 PM Cleanup #43700 (In Progress): cephadm: make it a proper python package
- 03:48 PM Cleanup #44676 (Fix Under Review): cephadm: Replace execnet (and remoto)
- 03:31 PM Bug #51304 (Won't Fix): Can't install cephadm on CentOS 7
- pacific isn't built for el7, so --add-repo won't work.
cephadm does not support el7 (containers aren't reliably st...
- 12:55 PM Feature #48624 (Resolved): ceph orch drain <host>
- 12:55 PM Bug #49622 (Resolved): cephadm orchestrator allows to delete hosts with ceph daemons running
- 12:46 PM Bug #46412: cephadm trying to pull mimic based image
- > Error happened during read: Digest did not match
strange.
- 11:03 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- This issue can be marked as solved!
- 11:02 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- This item can be marked as closed
- 11:01 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- Yes, it works after deleting this file.
Thank you very much.
- 10:54 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- Please make sure there are NO stray files next to the list of daemons /var/lib/ceph/<cluster-fsid>
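To check for such stray files, something like the following can be used; the cluster fsid is a placeholder, and this only lists, it does not delete:

```shell
# Sketch: everything in this directory should be a daemon directory
# (mon.*, mgr.*, osd.*, ...) or a known cephadm file; anything else
# (e.g. a stray *.log.bak) can confuse cephadm's daemon inventory.
ls -la /var/lib/ceph/<cluster-fsid>/
```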
- 03:18 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- Sebastian Wagner wrote:
> you probably want to do:
>
> 1. Remove the MegaSys.log.bak file on node-04
>
> 2. ru...
- 10:36 AM Bug #51426 (Closed): ceph orch stop should not remove systemctl entries
- not a bug
07/05/2021
- 11:31 AM Bug #51272 (Resolved): upgrade job: mgr.x getting removed by cephadm task: UPGRADE_NO_STANDBY_MGR
- 08:48 AM Bug #51176 (Fix Under Review): Module 'cephadm' has failed: 'MegaSAS'
- you probably want to do:
1. Remove the MegaSys.log.bak file on node-04
2. run...
- 02:11 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- Sebastian Wagner wrote:
> I think something erroneous ended up in the config-key store. Could you give us the expor...
07/02/2021
07/01/2021
- 03:50 PM Bug #51446 (Resolved): Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168.122...
- 01:49 PM Feature #51468 (New): cephadm: make it possible to set a custom name for the ceph.conf name and a...
- ...
06/30/2021
- 04:03 PM Bug #51446 (In Progress): Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168....
- 04:00 PM Bug #51446: Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168.122.121.',)
- This might not be a pure orchestrator issue. I've investigated the `run-backend-api-tests.sh` issue I have locally an...
- 01:53 PM Bug #51446 (Resolved): Module 'dashboard' has failed: Timeout('Port 8443 not bound on 192.168.122...
- standby dashboard is binding to 127.0.1.1 because get_mgr_ip() always returns hostname for standby mgr and with podma...
06/29/2021
- 12:39 PM Bug #51426 (Closed): ceph orch stop should not remove systemctl entries
- 1. For example if we run `ceph orch stop mgr` the command stops all the mgr services.
2. Once `ceph orch` stops the ...
06/28/2021
- 12:22 PM Bug #51291: Adoption fails for Ceph MDS servers
- Posting an update with additional details. I was able to get some more verbose output from running `ceph log last cep...
06/26/2021
06/25/2021
- 10:48 AM Bug #51366 (New): cephadm: Super hard to use loopback devices for OSDs
- h3. Bootstrap the cluster
h3. losetup...
- 09:15 AM Bug #51361: KillMode=none is deprecated
- Answer by Valentin:
> Hi Sebastian, feel free to ignore this warning. Systemd still supports KillMode=none but the...
- 09:05 AM Bug #51361 (New): KillMode=none is deprecated
- We changed the systemd unit file KillMode to none in https://github.com/ceph/ceph/pull/33162#issuecomment-584183316
No...
06/24/2021
- 09:21 PM Bug #49622 (In Progress): cephadm orchestrator allows to delete hosts with ceph daemons running
- 09:21 PM Feature #48624 (In Progress): ceph orch drain <host>
- 06:29 PM Bug #51355: ingress service /var/lib/haproxy/haproxy.cfg
- Someone else also noticing the same:
https://www.reddit.com/r/ceph/comments/nxl5v3/ingress_service_on_pacific_v1624_...
- 05:55 PM Bug #51355 (Resolved): ingress service /var/lib/haproxy/haproxy.cfg
- It seems like cephadm expects haproxy to run as root, while the docker image haproxy runs it as the user haproxy.
...
06/23/2021
- 02:28 PM Bug #51328 (Resolved): cephadm: `infer_fsid` should use fsid from ceph conf
- when a ceph.conf is present, but no daemons deployed by cephadm exist:...
- 11:11 AM Bug #51209 (Pending Backport): cephadm: expose gather-facts api method
- 09:38 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- Sebastian Wagner wrote:
> I think something erroneous ended up in the config-key store. Could you give us the expor...
- 09:15 AM Bug #51272: upgrade job: mgr.x getting removed by cephadm task: UPGRADE_NO_STANDBY_MGR
- Adding to analysis:
successful pick from pacific branch:...
06/22/2021
- 05:19 PM Bug #49273: cephadm fails deployment of node-exporter when ipv6 is disabled
- https://github.com/ceph/ceph/pull/41602 merged
- 01:44 PM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- I think something erroneous ended up in the config-key store. Could you give us the export of the config-key dump?
...
- 09:14 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- I have viewed the code, and found that `ceph orch ls` works with a service name, for example `ceph orch ls crash`
- 10:09 AM Bug #51311 (Resolved): Failed to apply ingress.rgw: IndexError: list index out of range
- Following the docs at: https://docs.ceph.com/en/latest/cephadm/rgw/ I've set up rgw with:...
06/21/2021
- 04:42 PM Bug #51277: cephadm bootstrap: unable to set up admin label
- I feel like this is the correct behavior. The current v16 image (https://hub.docker.com/layers/ceph/ceph/v16/images/s...
- 02:41 PM Bug #51304 (Won't Fix): Can't install cephadm on CentOS 7
- System info:
CentOS Linux release 7.9.2009 (Core) x86_64
I am trying to follow the cephadm install guide https://... - 02:00 PM Feature #51302 (Duplicate): mgr/cephadm: automatically configure dashboard <-> RGW connection
- Automatically configure the credential(s) for dashboard to talk to RGW.
- 01:24 PM Bug #51258: cephadm bootstrap: applying host specs suddenly removes the admin keyring from bootst...
- Agree on that, if the UX can be improved it's a good chance to add that to the backlog.
Just wanted to make sure the...
- 01:19 PM Bug #51258: cephadm bootstrap: applying host specs suddenly removes the admin keyring from bootst...
- Francesco Pantano wrote:
> Hello,
> I confirm we can close this tracker.
This is a real UX bug that needs to be ...
- 01:17 PM Bug #51258: cephadm bootstrap: applying host specs suddenly removes the admin keyring from bootst...
- Hello,
as per [1] and [2], where the fix [3] is tested, I confirm we can close this tracker.
Thanks for the suppo...
- 01:11 PM Documentation #51299 (Duplicate): cephadm/ceph orch - document shutdown and start complete ceph c...
- cephadm/ceph orch - document shutdown and start complete ceph cluster
- this procedure helps when an admin/user deci...
- 01:09 PM Bug #51298 (Resolved): ceph orch stop mgr should not stop all the mgrs and should give a warning ...
- ceph orch stop mgr should not stop all the mgrs and should give a warning and come out
Maybe a solution would be y...
- 07:06 AM Bug #51176: Module 'cephadm' has failed: 'MegaSAS'
- Sebastian Wagner wrote:
> could you please attach the MGR log file?
since June 3rd, there were no more new logs....
06/19/2021
- 11:52 PM Bug #51291 (Resolved): Adoption fails for Ceph MDS servers
- I'm migrating my Ceph cluster from `ceph-ansible` to `cephadm` by following the guide here: https://docs.ceph.com/en/...
06/18/2021
- 10:15 PM Bug #51277 (Can't reproduce): cephadm bootstrap: unable to set up admin label
- I try to run bootstrap like so: ...
- 02:17 PM Bug #51257: mgr/cephadm: Cannot add managed (ceph apply) mon daemons on different subnets
- Thanks for the quick reply.
Sebastian Wagner wrote:
> I see this as a valid bug that needs a fix. Aggelos, by far t...
- 10:07 AM Bug #51257: mgr/cephadm: Cannot add managed (ceph apply) mon daemons on different subnets
- I see this as a valid bug that needs a fix. Aggelos, by far the fastest way to fix this would be for you to create a ...
- 10:10 AM Bug #51192: cephadm failed to remove running OSD
- can you please check, if https://github.com/ceph/ceph/pull/41876 fixes your issue?
- 10:08 AM Bug #51176 (Need More Info): Module 'cephadm' has failed: 'MegaSAS'
- could you please attach the MGR log file?
- 09:43 AM Bug #51258: cephadm bootstrap: applying host specs suddenly removes the admin keyring from bootst...
- right, for now you need to add the _admin label when applying a spec.
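A hedged sketch of adding the _admin label via a host spec; the hostname and address are placeholders, and the spec file name is arbitrary:

```shell
# Sketch: host spec carrying the _admin label, so cephadm maintains
# ceph.conf and the admin keyring on this host.
cat > host1.yaml <<EOF
service_type: host
hostname: host1
addr: 192.168.1.11
labels:
  - _admin
EOF
ceph orch apply -i host1.yaml
```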
- 08:04 AM Bug #51258: cephadm bootstrap: applying host specs suddenly removes the admin keyring from bootst...
- Hello,
Thanks Sebastian for the quick reply.
As per our previous conversation, cephadm adds the _admin label on the...
- 08:47 AM Bug #51272 (Resolved): upgrade job: mgr.x getting removed by cephadm task: UPGRADE_NO_STANDBY_MGR
- I think this bug is not yet merged.
* https://github.com/ceph/ceph/pull/41478/
* https://github.com/ceph/ceph/pu...
06/17/2021
- 09:01 PM Bug #51245: octopus: cephadm/focal: E: The repository 'https://download.ceph.com/debian-15.1.1 fo...
- Neha Ojha wrote:
> Deepika: I think the problem is that cephadm/test_repos.sh is still using "sudo $CEPHADM -v add-r...
- 08:55 PM Bug #51245: octopus: cephadm/focal: E: The repository 'https://download.ceph.com/debian-15.1.1 fo...
- Deepika: I think the problem is that cephadm/test_repos.sh is still using "sudo $CEPHADM -v add-repo --release 15.1.1...
- 02:57 PM Bug #51258: cephadm bootstrap: applying host specs suddenly removes the admin keyring from bootst...
- might be a nasty trap: if you add hosts via yaml files during bootstrap, cephadm now suddenly removes the admin keyri...
- 10:54 AM Bug #51258: cephadm bootstrap: applying host specs suddenly removes the admin keyring from bootst...
- When that spec is applied I see (cephadm in debug mode):
2021-06-17T10:46:46.346580+0000 mgr.standalone.localdomai...
- 10:19 AM Bug #51258 (Resolved): cephadm bootstrap: applying host specs suddenly removes the admin keyring ...
- There's a job in OpenStack which is able to test the latest pacific bits for both ceph containers and cephadm.
Using... - 07:43 AM Bug #51257 (Resolved): mgr/cephadm: Cannot add managed (ceph apply) mon daemons on different subnets
- In our network setup we have an IP (layer3) Fabric to the server using @/128@ IPv6 addresses[3] and BGP to the server...
06/16/2021
- 02:31 PM Bug #51245: octopus: cephadm/focal: E: The repository 'https://download.ceph.com/debian-15.1.1 fo...
- http://qa-proxy.ceph.com/teuthology/ideepika-2021-06-16_13:16:21-rados:cephadm-wip-yuri6-testing-2021-06-14-1106-octo...
- 02:31 PM Bug #51245 (Closed): octopus: cephadm/focal: E: The repository 'https://download.ceph.com/debian-...
- ...
06/15/2021
- 02:59 PM Documentation #51214 (Pending Backport): config manage_etc_ceph_ceph_conf_hosts is typod
- 08:22 AM Feature #51004: cephadm agent 2.0
- Interesting move. Just beware of the "Second-System Effect":https://wiki.c2.com/?SecondSystemEffect (Dashboard v2 spe...
06/14/2021