Activity
From 11/17/2020 to 12/16/2020
12/16/2020
- 01:45 PM Support #48630 (Resolved): non-LVM OSD do not start after upgrade from 15.2.4 -> 15.2.7
- During upgrade from 15.2.4 to 15.2.7 (docker_hub image), some of our OSDs do not start up after their systemd unit.ru...
- 10:38 AM Feature #48624 (Resolved): ceph orch drain <host>
- Implement the command to remove all the daemons running on a host.
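A rough sketch of what such a drain command might do, shown with a stubbed `ceph` CLI (daemon names invented) so it runs without a cluster; the real logic would live in the cephadm mgr module, not a shell script:

```shell
# Hypothetical sketch of "ceph orch drain <host>": list every daemon on the
# host, then remove each one. This only echoes the removal commands.
drain_host() {
  host="$1"
  ceph orch ps --host "$host" --format plain |
    awk 'NR > 1 { print $1 }' |        # skip the header line, keep daemon names
    while read -r name; do
      echo "would run: ceph orch daemon rm $name --force"
    done
}

# Stand-in for the real ceph CLI so the sketch runs anywhere (canned output).
ceph() {
  printf 'NAME           HOST   STATUS\n'
  printf 'osd.3          node1  running\n'
  printf 'mgr.node1.abc  node1  running\n'
}

drain_host node1
```

With the stub above, this prints one "would run: ceph orch daemon rm ... --force" line per daemon on the host.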
- 10:31 AM Feature #47782: ceph orch host rm <host> is not stopping the services deployed in the respective ...
- Things to do
- Let's produce an error message if the host to remove has daemons running.
- Implement a "drain" comma...
- 08:21 AM Bug #48071 (Pending Backport): rook: 'ceph orch ls' does not list nfs-ganesha daemons
12/14/2020
- 07:42 PM Bug #48598 (Can't reproduce): "ceph orch daemon redeploy" fails with [errno 13] RADOS permission ...
- ...
- 06:44 PM Bug #48469: upgrade:octopus-x-master: "Failed to start Ceph mon.smithi116": Failed with result 's...
- Sebastian Wagner wrote:
> maybe try calling
>
> [...]
>
> here?
>
> https://github.com/ceph/teuthology/blob...
- 06:33 PM Bug #48597 (Resolved): pybind/mgr/cephadm: mds_join_fs not cleaned up
- After creating a new file system and then deleting it:...
- 05:29 PM Bug #48594 (Fix Under Review): cephadm: too many osd privileges for osd caps
- 05:21 PM Bug #48594 (Resolved): cephadm: too many osd privileges for osd caps
- We can use "tag rgw" to limit RGW caps.
- 12:14 PM Feature #47368 (Resolved): Provide a daemon mode for cephadm to handle host/daemon state requests
12/13/2020
- 09:24 PM Feature #48582 (Closed): During bootstrap, if the output_dir is not found - allocate it
- At the moment, the bootstrap workflow requires the user to create the /etc/ceph directory - and if this is not presen...
- 09:19 PM Feature #47368 (Closed): Provide a daemon mode for cephadm to handle host/daemon state requests
- merged
12/12/2020
- 12:14 AM Bug #48442: cephadm: upgrade loops on mixed x86_64/arm64 cluster
- How would I manually upgrade the remaining daemons? I'm not finding anything in the documentation about how to do th...
12/11/2020
- 06:49 PM Bug #48158 (Resolved): cephadm bootstrap fails with custom ssh port
- 06:49 PM Bug #48041 (Resolved): mgr/cephadm: Allow customizing mgr/cephadm/lsmcli_blink_lights_cmd per host
- 06:49 PM Bug #48031 (Resolved): Cephadm: Needs to pass cluster.listen-address to alertmanager
- 06:49 PM Bug #48166 (Resolved): cephadm should be run as root
- 06:49 PM Bug #48072 (Resolved): ppa:projectatomic is no longer maintained
- 06:48 PM Feature #43686 (Resolved): cephadm: support rgw nfs
- 06:48 PM Bug #47841 (Resolved): `ceph orch device ls` assumes lsm data is present
- 06:47 PM Bug #47745 (Resolved): cephadm: adopt {prometheus,grafana,alertmanager} fails with "RuntimeError:...
- 06:47 PM Bug #47648 (Resolved): mgr/cephadm: Rendering custom template HTML escapes wrongly
- 06:47 PM Bug #47580 (Resolved): cephadm: "Error ENOENT: Module not found": TypeError: type object argument...
- 06:43 PM Feature #48560: Spec files for each daemon in the monitoring stack
- do you have some example yamls?
- 09:18 AM Feature #48560 (Closed): Spec files for each daemon in the monitoring stack
- It would be nice to have a spec file for grafana, alert manager and prometheus.
In this way we can provide a more fl...
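A hypothetical illustration of what per-daemon spec files for the monitoring stack could look like (the field names are assumptions; this feature request was closed, so no such schema is confirmed here). Applying it is shown only as a comment:

```shell
# Hypothetical per-daemon specs for grafana, alertmanager and prometheus.
# Field names are assumptions for illustration, not a confirmed schema.
cat > /tmp/monitoring.yaml <<'EOF'
service_type: grafana
placement:
  count: 1
---
service_type: alertmanager
placement:
  count: 1
---
service_type: prometheus
placement:
  count: 1
EOF
# If implemented, one would presumably apply it with something like:
#   ceph orch apply -i /tmp/monitoring.yaml
grep -c 'service_type' /tmp/monitoring.yaml
```

The `grep` at the end just confirms the file contains three service sections.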
12/10/2020
- 10:52 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- ok:
*We must not call daemon add* in Teuthology. It's just too low level and not meant to be idempotent.
Would...
- 04:08 PM Bug #48469: upgrade:octopus-x-master: "Failed to start Ceph mon.smithi116": Failed with result 's...
- maybe try calling...
- 04:03 PM Bug #48469: upgrade:octopus-x-master: "Failed to start Ceph mon.smithi116": Failed with result 's...
- mon thrasher:...
- 02:04 PM Bug #48535 (Fix Under Review): QA smoke test: cephadm is removing mgr.y
- 11:40 AM Bug #48535: QA smoke test: cephadm is removing mgr.y
- ...
- 11:33 AM Bug #48535 (In Progress): QA smoke test: cephadm is removing mgr.y
- 11:33 AM Bug #48535: QA smoke test: cephadm is removing mgr.y
- and this breaks the upgrade: https://pulpito.ceph.com/swagner-2020-12-09_11:23:12-rados:cephadm-wip-swagner3-testing-...
- 11:32 AM Bug #48535 (Resolved): QA smoke test: cephadm is removing mgr.y
- https://pulpito.ceph.com/yuriw-2020-12-08_16:18:10-rados-octopus-distro-basic-smithi/5693969
cephadm is properly d...
- 12:03 PM Bug #48071 (Fix Under Review): rook: 'ceph orch ls' does not list nfs-ganesha daemons
- 11:54 AM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- Fascinatingly, this bug revealed #48535
- 11:20 AM Bug #48534 (New): rook: Fix nfs daemon names in `orch ps`
- The NFS service id/cluster name should also be included in the daemon name....
- 10:59 AM Bug #48510: CEPHADM_REFRESH_FAILED: detail item 0 not a [unicode] string
- https://pulpito.ceph.com/yuriw-2020-12-08_16:18:10-rados-octopus-distro-basic-smithi/5693969
- 10:29 AM Bug #48463 (Duplicate): mon.c: Error: invalid config provided: CapAdd and privileged are mutually...
12/09/2020
- 02:59 PM Documentation #45862 (Resolved): orch mds rm is documented but does not exist
- @Sebastian, looks like you fixed this in https://github.com/ceph/ceph/pull/35587 which was backported to octopus by h...
- 01:58 PM Documentation #45862 (Can't reproduce): orch mds rm is documented but does not exist
- I can't find it!...
- 09:55 AM Bug #48261: cephadm ceph-volume inventory -- --format json-pretty: INFO:cephadm:/usr/bin/podman:...
- Right, the problem seems to be that we're generating the wrong command. Otoh, I haven't seen this in a while. I'm not...
- 09:46 AM Bug #48510 (Can't reproduce): CEPHADM_REFRESH_FAILED: detail item 0 not a [unicode] string
- ...
12/08/2020
- 01:40 PM Bug #48157: test_cephadm.sh failure You have reached your pull rate limit. You may increase the l...
- Stephen Longofono wrote:
> Is there an option for pulling and saving an image, and providing as input to cephadm? I...
12/07/2020
- 09:26 PM Bug #48157: test_cephadm.sh failure You have reached your pull rate limit. You may increase the l...
- Is there an option for pulling and saving an image, and providing as input to cephadm? It seems problematic that you...
- 06:35 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- http://qa-proxy.ceph.com/teuthology/yuriw-2020-12-05_15:45:19-rados-wip-yuri2-testing-2020-12-04-0751-octopus-distro-...
- 09:12 AM Feature #47261: cephadm integration for cephfs-mirror daemon
- Sebastian Wagner wrote:
> what exactly do you need here? which config options do we need?
(sorry for the laten...
12/06/2020
- 09:54 PM Bug #48471 (Can't reproduce): Cephadm fails to update repo if gnupg is present
- Probably related to https://tracker.ceph.com/issues/45009.
If gnupg is installed on the bootstrap node, running ce...
12/04/2020
- 10:18 PM Bug #48469: upgrade:octopus-x-master: "Failed to start Ceph mon.smithi116": Failed with result 's...
- tests are here https://github.com/ceph/ceph/pull/36592
- 09:22 PM Bug #48469 (Can't reproduce): upgrade:octopus-x-master: "Failed to start Ceph mon.smithi116": Fai...
- Run: https://pulpito.ceph.com/teuthology-2020-12-04_19:57:53-upgrade:octopus-x-master-distro-basic-smithi/
Job:56810...
- 03:38 PM Bug #48463: mon.c: Error: invalid config provided: CapAdd and privileged are mutually exclusive o...
- https://github.com/ceph/ceph/pull/38066
- 11:17 AM Bug #48463 (Duplicate): mon.c: Error: invalid config provided: CapAdd and privileged are mutually...
- https://pulpito.ceph.com/swagner-2020-12-04_10:02:29-rados:cephadm-wip-jmolmo-testing-2020-12-02-1452-distro-basic-sm...
- 12:09 PM Bug #48157 (In Progress): test_cephadm.sh failure You have reached your pull rate limit. You may ...
- https://github.com/ceph/ceph-cm-ansible/pull/595
12/03/2020
- 10:34 AM Bug #48442: cephadm: upgrade loops on mixed x86_64/arm64 cluster
- Bryan Stillwell wrote:
> How often do the images for the same release change though? Couldn't checking if all the i...
12/02/2020
- 10:37 PM Bug #48442: cephadm: upgrade loops on mixed x86_64/arm64 cluster
- How often do the images for the same release change though? Couldn't checking if all the images are v15.2.7 be good ...
- 10:31 PM Bug #48442: cephadm: upgrade loops on mixed x86_64/arm64 cluster
- Hm. Not sure about this. Even if we fix this, how are we supposed to make sure we're not introducing any regressions...
- 08:56 PM Bug #48442 (Closed): cephadm: upgrade loops on mixed x86_64/arm64 cluster
- When I tried to use 'ceph orch upgrade start --ceph-version 15.2.7' to upgrade my home cluster from 15.2.5 to 15.2.7,...
- 10:28 PM Bug #48157: test_cephadm.sh failure You have reached your pull rate limit. You may increase the l...
- https://github.com/ceph/ceph-cm-ansible/pull/591
11/27/2020
11/26/2020
- 11:15 AM Bug #48373 (Resolved): cephadm iscsi: required file missing from config-json: iscsi-gateway.cfg
- ...
- 10:38 AM Feature #48368 (Need More Info): cephadm check-host should verify fsid of ceph.conf
- if cephadm check-host detects a ceph cluster(s) running, it can find out the fsid(s) of that cluster (those clusters)...
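The proposed check boils down to comparing the fsid in ceph.conf with the fsid of the running cluster; a minimal sketch, where the conf path and fsid are made-up demo values:

```shell
# Minimal sketch of the proposed check-host verification: parse the fsid out
# of a ceph.conf and compare it to a running cluster's fsid. Demo values only.
conf=/tmp/demo-ceph.conf
cat > "$conf" <<'EOF'
[global]
fsid = 00000000-1111-2222-3333-444444444444
mon_host = 10.0.0.1
EOF

conf_fsid=$(awk -F' *= *' '/^fsid/ { print $2 }' "$conf")
# In the real check this would come from e.g. /var/lib/ceph/<fsid> on the host.
running_fsid='00000000-1111-2222-3333-444444444444'

if [ "$conf_fsid" = "$running_fsid" ]; then
  echo "fsid matches"
else
  echo "fsid mismatch: conf=$conf_fsid running=$running_fsid"
fi
```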
11/24/2020
- 08:03 PM Bug #48041 (Pending Backport): mgr/cephadm: Allow customizing mgr/cephadm/lsmcli_blink_lights_cmd...
- 12:42 PM Feature #48340 (Resolved): cephadm/rgw: Add rgw_zonegroup to RGWSpec
- adding an rgw_zonegroup to the yaml and args and allowing creation of it, since we're hardcoding the zonegroup def...
- 10:29 AM Feature #48339 (Rejected): Use file references in NFS ganesha service configuration
- Using "cephadm deploy", it is possible to deploy a NFS Ganesha daemon in a host not included in the cluster. We can c...
11/23/2020
- 10:55 PM Documentation #48333 (Rejected): cephadm: document the image used by cephadm to call ceph-volume ...
- ...
- 01:29 PM Bug #48325 (Resolved): PlacementSpec: 'NoneType' object has no attribute 'copy'
- ...
- 07:15 AM Bug #48312 (Fix Under Review): compilation fails if earlier version of lua is present
11/20/2020
- 06:29 PM Bug #46247: cephadm mon failure: Error: no container with name or ID ... no such container
- yes, looks like so, removing to avoid confusion
- 06:25 PM Bug #48072: ppa:projectatomic is no longer maintained
- Hey Sebastian,
saw exit code on failure: ...
- 02:04 PM Bug #48312 (Resolved): compilation fails if earlier version of lua is present
- ...
- 10:37 AM Bug #48263 (Won't Fix): cephadm: ambiguous container name `ceph-00000000-0000-0000-0000-0000deadb...
- please reopen, if you think we should improve this eventually
- 09:46 AM Bug #48263: cephadm: ambiguous container name `ceph-00000000-0000-0000-0000-0000deadbeef` being c...
- on second look, I suppose that's not the cause of failure here. I think the daemons crashed and none was found for lo...
11/19/2020
- 07:31 PM Bug #48261: cephadm ceph-volume inventory -- --format json-pretty: INFO:cephadm:/usr/bin/podman:...
- It's kind of expected.
Why would one pass "--" to the cephadm ceph-volume command?
I mean, using cephadm ceph-...
- 03:59 PM Bug #44698 (Duplicate): cephadm: removing daemons leaves auth keys behind
- 03:44 PM Feature #47139 (In Progress): Require a minimum version for podman/docker
- 11:53 AM Feature #47139: Require a minimum version for podman/docker
- requiring podman >= 2.0 also for centos would make things a lot easier! Like #47139
- 03:09 PM Bug #48072 (Pending Backport): ppa:projectatomic is no longer maintained
- 02:39 PM Bug #48263: cephadm: ambiguous container name `ceph-00000000-0000-0000-0000-0000deadbeef` being c...
- indeed hardcoded:
https://github.com/ceph/ceph/blob/dc42564807473726030f5b6baebbfe2d6f14bfc6/qa/workunits/cephadm...
- 07:26 AM Bug #48263: cephadm: ambiguous container name `ceph-00000000-0000-0000-0000-0000deadbeef` being c...
- sorry, I missed adding pulpito link, here: http://qa-proxy.ceph.com/teuthology/ideepika-2020-11-17_14:02:43-rados-wip...
- 02:15 PM Feature #48292 (Resolved): cephadm: allow more than 60 OSDs per host
- If the cluster is set to have very dense nodes (>60 OSDs per host) please make sure to assign sufficient ports for Ce...
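The port math behind this can be sketched as follows; the ports-per-OSD figure is an approximation, and the config command is shown only as a comment:

```shell
# Rough port arithmetic: each OSD binds several ports out of the
# [ms_bind_port_min, ms_bind_port_max] range (6800-7300 by default).
# The 4-ports-per-OSD figure is an approximation for illustration.
min=6800; max=7300
ports_per_osd=4
osds=100
needed=$((osds * ports_per_osd))
available=$((max - min + 1))
echo "available=$available needed=$needed"
# If needed approaches available, widen the range, e.g.:
#   ceph config set osd ms_bind_port_max 7568
```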
- 02:04 PM Bug #48291 (Resolved): Grafana should not have a predictable default password
- https://github.com/ceph/ceph/blob/dc42564807473726030f5b6baebbfe2d6f14bfc6/src/pybind/mgr/cephadm/templates/services/...
- 01:48 PM Feature #47507 (In Progress): qa: add testing for Rook
- 01:43 PM Feature #43673 (Resolved): ceph-ansible playbook: pivot to cephadm
- https://github.com/ceph/ceph-ansible/pull/5269
- 12:18 PM Subtask #45116 (Fix Under Review): cephadm: RGW Load balancer using HAproxy
- 12:17 PM Documentation #45165 (Can't reproduce): cephadm troubleshooting: recover from broken daemons
- 12:16 PM Documentation #45623 (Can't reproduce): cephadm: "ceph orch apply mon" is deploying in wrong nodes
- 12:16 PM Tasks #45814 (Resolved): tasks/cephadm.py: Add iSCSI smoke test
- 12:14 PM Documentation #46168 (Resolved): Add information about 'unmanaged' parameter
- 12:13 PM Documentation #46377 (Resolved): cephadm: Missing 'service_id' in last example in orchestrator#se...
- 12:12 PM Bug #46991: cephadm ls does not list legacy rgws
- Yes, we should list legacy RGWs at this point. low prio, though.
- 12:11 PM Cleanup #47131 (Resolved): mgr/cephadm: Move serve() into a dedicated serve.py
- 12:10 PM Cleanup #48140 (Duplicate): cephadm: provide dashboard URL + credentials in an friendly way
- 12:09 PM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- I'd close this as duplicate now
- 12:04 PM Feature #43686 (Pending Backport): cephadm: support rgw nfs
- 11:57 AM Feature #45712: Add 'state' attribute to ServiceSpec
- * +1 for adding a state to cephadm's specs in the spec-store
* -1 for adding a state to the ServiceSpec class
- 11:56 AM Feature #46499: Requesting a "ceph orch redeploy monitoring" command, as an option, so user does ...
- can we close this in favor of #45864 ?
- 11:54 AM Feature #46666 (Resolved): cephadm: Introduce 'container' specification to deploy custom containers
- 11:53 AM Feature #47079 (Resolved): cephadm should create a log file
- 11:51 AM Feature #47261 (Need More Info): cephadm integration for cephfs-mirror daemon
- 11:51 AM Feature #47261: cephadm integration for cephfs-mirror daemon
- what exactly do you need here? which config options do we need?
- 11:49 AM Feature #47370 (Resolved): cephadm to support configuration of rbd mirroring daemon
- rbd-mirror already works.
- 11:41 AM Feature #48114 (Duplicate): Cephadm to support Adding multiple instances of RGW in same node for ...
- 11:35 AM Bug #45167 (Can't reproduce): cephadm: mons are not properly deployed
- 11:35 AM Bug #45454 (Can't reproduce): cephadm: teardown: hang at sudo systemctl stop ceph-453d3962-9141-1...
- 11:34 AM Bug #46237 (Won't Fix): cephadm: Inconsistent exit code
- 11:34 AM Bug #46655: cephadm rm-cluster: Systemd ceph.target not deleted
- the problem is: there might be other clusters. We have to make sure no cluster is left. Only then can we remove the ...
- 11:31 AM Bug #44604 (Can't reproduce): cephadm: RGW: missing spec / mon store validation
- 11:31 AM Bug #44629 (Can't reproduce): cephadm: prometheus: graph queries are not working correctly
- please reopen, if reproducible
- 11:29 AM Bug #44747 (Can't reproduce): orch: `ceph orch ls --service_type` is broken
- 11:29 AM Bug #44756 (Resolved): drivegroups: replacement op will ignore existing wal/dbs
- 11:29 AM Bug #44781 (New): cephadm: monitoring: root volume alert doesn't work in container
- 11:28 AM Bug #44888 (Resolved): Drivegroup's :limit: isn't working correctly
- 11:28 AM Bug #45010 (Can't reproduce): cephadm: /etc/ceph/ceph.conf directory /etc/ceph does not exist
- 11:27 AM Bug #45451 (Can't reproduce): cephadm: `ceph orch redeploy mgr` never returns
- 11:27 AM Bug #45595 (Need More Info): qa/tasks/cephadm: No filesystem is configured and MDS daemon gets de...
- 11:26 AM Bug #45624 (Can't reproduce): cephadm: "ceph orch apply mgr" is deploying in wrong nodes
- 11:26 AM Bug #45719 (Can't reproduce): CommandFailedError: Command failed on smithi073 with status 1: 'tes...
- 11:25 AM Bug #45861 (Resolved): data_devices: limit 3 deployed 6 osds per node
- 11:23 AM Bug #46037 (Can't reproduce): ceph orch command hangs forever when trying to add osd
- please reopen, if still reproducible
- 11:22 AM Bug #46247: cephadm mon failure: Error: no container with name or ID ... no such container
- the error itself is harmless and unrelated to the actual error.
- 11:19 AM Bug #46327 (Won't Fix): cephadm: nfs daemons share the same config object
- decided to migrate users instead.
- 11:18 AM Bug #46561 (New): cephadm: monitoring services adoption doesn't honor the container image
- I think we need to remove the hardcoded default images from cephadm and make them somehow configurable.
- 11:15 AM Bug #46726 (Resolved): cephadm: deploying of monitoring images partially broken
- 11:13 AM Bug #46773: [RFE] validate argument of 'cephadm add-repo --release'
- is this still reproducible?
- 11:12 AM Bug #46910 (Can't reproduce): cephadm: Unable to create an iSCSI target
- 11:11 AM Bug #47035 (Resolved): TypeError: _daemon_action_redeploy() missing 1 required positional argumen...
- 11:07 AM Bug #47169 (Won't Fix): cephadm: wrong command parsing for `cephadm logs`
- 11:07 AM Bug #47298 (Resolved): mgr/cephadm: keyrings are left behind after daemon removal
- 11:06 AM Bug #47336 (Can't reproduce): `orch device ls`: Unexpected argument '--wide'
- 11:06 AM Bug #47340 (Duplicate): _list_devices: 'NoneType' object has no attribute 'get'
- 11:05 AM Bug #47358: "ceph orch apply osd" chokes on valid service_spec.yml
- Maybe we should just completely drop ...
- 10:58 AM Bug #47709 (Duplicate): orchestrator._interface.OrchestratorValidationError: name mon.c already i...
- 10:57 AM Feature #47782: ceph orch host rm <host> is not stopping the services deployed in the respective ...
- https://github.com/ceph/ceph/pull/34617 was an attempt to implement this, but the scheduler wasn't ready at that poin...
- 10:49 AM Bug #47841 (Pending Backport): `ceph orch device ls` assumes lsm data is present
- 10:46 AM Bug #47905 (Need More Info): cephadm: cephadm bootstrap is missing structured output. (was: logg...
- 10:44 AM Bug #47905: cephadm: cephadm bootstrap is missing structured output. (was: logging to stderr)
- Hm. I see the need and use case for this. How do we proceed here?
would printing a json document at the end of bo...
- 10:28 AM Bug #48019: cephadm: `ceph daemon <daemon-name> ...` is broken
- Hm. Seems that when calling...
- 10:18 AM Bug #48031 (Pending Backport): Cephadm: Needs to pass cluster.listen-address to alertmanager
11/18/2020
- 09:49 PM Bug #48019: cephadm: `ceph daemon <daemon-name> ...` is broken
- Could it be that the admin_socket code has not been updated to reflect the fact that the asok files have moved?
Th...
- 09:46 PM Bug #48019: cephadm: `ceph daemon <daemon-name> ...` is broken
- Sebastian Wagner wrote:
> I think this actually works already. the mgr daemons have some random identifier. like mgr...
- 02:58 PM Bug #48019: cephadm: `ceph daemon <daemon-name> ...` is broken
- I think this actually works already. the mgr daemons have some random identifier. like mgr.<hostname>.xyzuizxy
- 08:58 PM Bug #48275 (Duplicate): cephadm: get_last_local_ceph_image returns "<none>:<none>"
- 08:58 PM Bug #48275 (New): cephadm: get_last_local_ceph_image returns "<none>:<none>"
- This was closed as a duplicate of Orchestrator - Bug #47134: cephadm: Unable to infer container image when tag is mis...
- 01:19 PM Bug #48275 (Duplicate): cephadm: get_last_local_ceph_image returns "<none>:<none>"
- 01:17 PM Bug #48275 (Duplicate): cephadm: get_last_local_ceph_image returns "<none>:<none>"
- ...
- 06:50 PM Bug #48205: cephadm: last local ceph image tag can be null
- PR has been updated to use a repository digest, because the image names are not available when using docker.
diges...
- 04:07 PM Bug #48157: test_cephadm.sh failure You have reached your pull rate limit. You may increase the l...
- @Sebastian I also observed it by direct ssh to senta04/vossi04 to pull any other image ...
- 03:28 PM Bug #48157: test_cephadm.sh failure You have reached your pull rate limit. You may increase the l...
- we already have a caching registry for docker.io in sepia which is in active use.
It's just that it is only used b...
- 03:31 PM Feature #48114: Cephadm to support Adding multiple instances of RGW in same node for 5.0 release
- Right. co-locating services is on the agenda, but not super high priority.
- 03:29 PM Bug #48142 (Fix Under Review): rados:cephadm/upgrade/mon_election tests are failing: CapAdd and p...
- 03:26 PM Bug #48158 (Pending Backport): cephadm bootstrap fails with custom ssh port
- 03:26 PM Bug #48166 (Pending Backport): cephadm should be run as root
- 03:22 PM Bug #48263: cephadm: ambiguous container name `ceph-00000000-0000-0000-0000-0000deadbeef` being c...
- any idea where this comes from? https://github.com/ceph/ceph/search?q=deadbeef
- 03:21 PM Bug #48277 (Duplicate): cephadm infer image: <none>:<none>, despite --filter dangling=false
- 03:18 PM Bug #48277 (Fix Under Review): cephadm infer image: <none>:<none>, despite --filter dangling=false
- 01:49 PM Bug #48277 (Duplicate): cephadm infer image: <none>:<none>, despite --filter dangling=false
- ...
- 03:17 PM Bug #44587 (Can't reproduce): failed to write <pid> to cgroup.procs:
- 03:07 PM Feature #47507: qa: add testing for Rook
- I did a similar thing about a year ago: https://github.com/sebastian-philipp/test-rook-orchestrator/blob/master/test_...
- 02:55 PM Bug #45973 (New): Adopted MDS daemons are removed by the orchestrator because they're orphans
- 02:53 PM Bug #46453 (Can't reproduce): cephadm: iSCSI container fails to start
- I think we solved this in the meantime
- 02:51 PM Feature #47335 (Resolved): cephadm: Make the Monitoring templates super flexible
- 02:48 PM Bug #45093 (Resolved): cephadm: mgrs transiently getting co-located (one node gets two when only ...
- 02:47 PM Bug #46529 (Resolved): cephadm: error removing storage for container "...-mon": remove /var/lib/c...
- 02:47 PM Bug #47078 (Need More Info): cephadm: ForwardToSyslog=yes (aka really huge syslog)
- 02:44 PM Bug #47170 (Resolved): cephadm "ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d-osd.3-activate" is alre...
- 02:42 PM Bug #44231 (Resolved): cephadm: cannot capture core files
- 02:41 PM Bug #47384 (Resolved): cephadm: Remove assignment to member variable in ServiceSpecs
- 02:40 PM Bug #47305 (Resolved): cephadm: podman not optional on SUSE
- 01:14 PM Bug #47134 (Resolved): cephadm: Unable to infer container image when tag is missing
11/17/2020
- 11:27 PM Documentation #48267 (Resolved): orchestrator docs should at least mention keyrings
- I had to use SUSE's docs to add a host to the LRC....
- 03:46 PM Bug #48263 (Won't Fix): cephadm: ambiguous container name `ceph-00000000-0000-0000-0000-0000deadb...
- ...
- 01:35 PM Bug #48261 (Won't Fix): cephadm ceph-volume inventory -- --format json-pretty: INFO:cephadm:/usr...
- ...
- 01:28 PM Feature #48260 (New): cephadm OSD removal: Prune empty hosts in the crush map.
- after I finished with steps described in documentation I found that node osd-node6 is still listed under `ceph osd tr...
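A manual cleanup once the host's OSDs are gone could look like the sketch below; the host name comes from the report, and the `echo` makes this a dry run (drop it to actually execute against a live cluster):

```shell
# Hypothetical manual cleanup: remove the now-empty host bucket from the
# CRUSH map. The echo turns this into a dry run so it works anywhere.
cleanup_crush_host() {
  echo ceph osd crush remove "$1"
}
cleanup_crush_host osd-node6
```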
- 12:11 PM Bug #47684: cephadm: auth get failed: failed to find osd.27 in keyring retval: -2
- workaround seems to be to manually remove the osd from cephadm's config-key store.
- 12:09 PM Bug #47684 (Fix Under Review): cephadm: auth get failed: failed to find osd.27 in keyring retval: -2
- 11:51 AM Bug #47684 (In Progress): cephadm: auth get failed: failed to find osd.27 in keyring retval: -2