Activity
From 02/24/2020 to 03/24/2020
03/24/2020
- 10:05 PM Bug #44642: cephadm: mgr dump might be too huge
- > Do you know why the line "j = json.loads(out)" is choking on the integer value sent by "ceph mgr dump"?
Now I se...
- 07:38 PM Bug #44669 (Fix Under Review): cephadm: rm-cluster should clean up /etc/ceph
- 02:35 PM Bug #44669 (In Progress): cephadm: rm-cluster should clean up /etc/ceph
- 02:57 PM Bug #44729: cephadm enter using docker is broken
- ls works though...
- 02:56 PM Bug #44729 (Can't reproduce): cephadm enter using docker is broken
- ...
03/23/2020
- 04:16 PM Bug #44720 (Need More Info): rook: rgw: allow realm != zone
- 04:16 PM Bug #44719 (New): rook: align rgw client names with orch and cephadm
- client.rgw.$realm.$zone[.$id]
- 04:11 PM Feature #44718 (Fix Under Review): NFS ganesha (mgr/cephadm)
- 04:10 PM Feature #44718 (Resolved): NFS ganesha (mgr/cephadm)
- mgr/cephadm
- 04:10 PM Feature #43688 (Resolved): NFS ganesha
- 02:36 PM Bug #44701 (Resolved): ganesha selinux denial
- 01:56 PM Documentation #44716 (Resolved): orchestrator/cephadm: document ceph orch apply -i -
- ...
- 12:16 PM Backport #44710 (Resolved): octopus: doc/cephadm: replace `osd create` with `apply osd`
- https://github.com/ceph/ceph/pull/34355
- 08:33 AM Bug #44642 (Fix Under Review): cephadm: mgr dump might be too huge
- 08:31 AM Bug #44642: cephadm: mgr dump might be too huge
- I don't know what caused this. Might actually be an artifact of our podman hang. prio=low for now.
edit: oh, you c...
03/20/2020
- 10:45 PM Bug #44642: cephadm: mgr dump might be too huge
- Now, with both cephadm and container at 15.1.1-168-g06ecd31e39 I am seeing "cephadm bootstrap" fail on "ceph mgr dump...
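An earlier comment on this bug quotes the failing line `j = json.loads(out)`. A minimal sketch of that parsing step (hypothetical helper names, not the actual cephadm code; only the `ceph mgr dump` command itself is from the report):

```python
import json
import subprocess

def parse_mgr_dump(out: str) -> dict:
    """Parse the JSON emitted by `ceph mgr dump`.

    Hypothetical helper: on bad input it surfaces the offending payload
    instead of a bare json traceback, which makes failures like the one
    reported here easier to diagnose.
    """
    try:
        return json.loads(out)
    except json.JSONDecodeError as e:
        raise RuntimeError('unparseable mgr dump output: %r' % out[:200]) from e

def mgr_dump(timeout: int = 10) -> dict:
    # `ceph mgr dump` prints the MgrMap as JSON.
    out = subprocess.run(['ceph', 'mgr', 'dump'],
                         capture_output=True, text=True,
                         timeout=timeout, check=True).stdout
    return parse_mgr_dump(out)
```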
- 06:48 PM Bug #44701 (Resolved): ganesha selinux denial
- ...
- 06:28 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- yeah, I also don't like to create a new dependency from the dashboard to cephadm
- 05:08 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- I'm inclined to just open both, because the dashboard might move between ssl and not ssl. otherwise we need to make t...
- 05:10 PM Feature #44576 (Resolved): cephadm: Restart Prometheus, if a new node_exporter or alertmanager is...
- This already works.
- 05:05 PM Bug #44669: cephadm: rm-cluster should clean up /etc/ceph
- What should the behavior here be? Check if the /etc/ceph config has the same fsid, and if so, remove it + the keyrin...
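The fsid check proposed above could look like this minimal sketch (hypothetical helper, not cephadm code; it assumes an INI-style ceph.conf with an `fsid` key under `[global]`):

```python
import configparser
import os

def cleanup_etc_ceph(fsid: str, conf_path: str = '/etc/ceph/ceph.conf') -> bool:
    """Remove the /etc/ceph config only if it belongs to the cluster
    being deleted, i.e. its fsid matches. Sketch only."""
    cp = configparser.ConfigParser()
    if not cp.read(conf_path):
        return False              # no config file: nothing to clean up
    if cp.get('global', 'fsid', fallback=None) != fsid:
        return False              # config belongs to another cluster; keep it
    os.unlink(conf_path)          # the keyring would be removed here too
    return True
```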
- 05:04 PM Bug #44699 (Closed): cephadm: removing services leaves configs behind
- Some of the configs are created by cephadm itself. The user might have created some too, but the config history will...
- 05:02 PM Bug #44698 (Duplicate): cephadm: removing daemons leaves auth keys behind
- 02:19 PM Feature #43839 (Fix Under Review): enhance `host ls`
- 01:39 PM Feature #43839 (In Progress): enhance `host ls`
- 12:13 PM Bug #44692 (Pending Backport): doc/cephadm: replace `osd create` with `apply osd`
- 11:38 AM Bug #44692 (Fix Under Review): doc/cephadm: replace `osd create` with `apply osd`
- 11:33 AM Bug #44692 (Resolved): doc/cephadm: replace `osd create` with `apply osd`
- 12:06 PM Feature #43689 (Fix Under Review): cephadm: iscsi
- 11:43 AM Bug #43890 (Fix Under Review): cephadm: default hardcoded to non-ceph dockerhub
03/19/2020
- 07:19 PM Bug #44615 (Resolved): cephadm: reconfig of removed daemon
- 02:47 PM Feature #44599 (Fix Under Review): cephadm: check-host: Returns only a single problem
- 09:28 AM Cleanup #44676 (Resolved): cephadm: Replace execnet (and remoto)
- [[https://github.com/pytest-dev/execnet]] is in maintenance mode. ...
03/18/2020
- 11:55 PM Bug #44673 (Rejected): cephadm: `orch apply` and `orch daemon add` use completely different code ...
- ... which is not obvious to users and they will use this interchangeably. Which is not really a good idea.
We sho...
- 10:43 PM Feature #44622 (Resolved): orch daemon add -i spec.yaml
- 03:16 PM Bug #44642 (In Progress): cephadm: mgr dump might be too huge
- 02:05 PM Bug #44642 (New): cephadm: mgr dump might be too huge
- 01:48 PM Bug #44669 (Resolved): cephadm: rm-cluster should clean up /etc/ceph
- ...
- 01:14 PM Bug #44401 (Resolved): cephadm: check host performed every time through serve loop
- 01:14 PM Bug #44607 (Resolved): cephadm: apply(): Traceback, if host doesn't exist
03/17/2020
- 04:12 PM Bug #44642 (Rejected): cephadm: mgr dump might be too huge
- seems to be a downstream issue.
- 02:51 PM Bug #44642 (Resolved): cephadm: mgr dump might be too huge
- ...
- 04:11 PM Feature #44599 (In Progress): cephadm: check-host: Returns only a single problem
- 04:08 PM Feature #44599 (Rejected): cephadm: check-host: Returns only a single problem
- 03:30 PM Bug #44644 (Closed): cephadm: RGW: updating the spec doesn't update the mon store
- when creating RGW running...
- 03:20 PM Backport #43993: mimic: ceph orchestrator rgw rm: no valid command found
- As `ceph orchestrator rgw rm` doesn't exist for mimic, what about just closing this?
- 02:25 PM Bug #44607 (Fix Under Review): cephadm: apply(): Traceback, if host doesn't exist
- 12:46 PM Feature #44622 (Fix Under Review): orch daemon add -i spec.yaml
- 10:21 AM Feature #44622 (In Progress): orch daemon add -i spec.yaml
03/16/2020
- 05:43 PM Bug #44629 (Can't reproduce): cephadm: prometheus: graph queries are not working correctly
- graph queries are not working correctly. The use of instance and
exported_instance needs some investigation. On the ...
- 05:41 PM Feature #44628 (Resolved): cephadm: Add initial firewall management to cephadm
- we open both 8080 and 8443 for dashboard even when the default is
https. We should probably do one or the other, not...
- 05:30 PM Documentation #44600 (Resolved): cephadm: use ssh-copy-id
- 04:08 PM Bug #44597 (Resolved): cephadm: Traceback, if ssh key is not on the remote host
- 02:26 PM Feature #44625 (Resolved): cephadm: test dmcrypt
- we need to verify it.
- 01:18 PM Feature #44622 (Resolved): orch daemon add -i spec.yaml
03/14/2020
03/13/2020
- 05:00 PM Bug #44609 (Resolved): cephadm: grafana: cert problem prevents dashboard integration
- SSL cert problem prevents embedding out of the box.
Is the problem that ssl_verify is true by default? or that we...
- 04:59 PM Bug #44608 (Resolved): cephadm: grafana: bound to 127.0.0.1
- after deploying I noticed that it was bound to 127.0.0.1, which blocks
client access from other machines. Should thi...
- 04:56 PM Bug #44607 (Resolved): cephadm: apply(): Traceback, if host doesn't exist
- when deploying a daemon, with a host for placement - if the host doesn't
exist you get a traceback. This scenario sh...
- 04:55 PM Feature #44606 (Resolved): cephadm: RGW firewall + static port
- how is the firewall being handled? AFAIK, the port is a parameter on
the rgw_frontend setting, so it could be un...
- 04:29 PM Bug #44604 (Can't reproduce): cephadm: RGW: missing spec / mon store validation
- should the deployment of rgw first check the presence of a minimum set
of params defined in the config store - if no...
- 04:25 PM Bug #44603 (Rejected): cephadm: `ls --refresh` shows Tracebacks in the log
- With a host down that had daemons deployed, a --refresh shows tracebacks in the mgr log from the failed connect attem...
- 04:23 PM Bug #44602 (Resolved): cephadm: `orch ls` shows daemons as online, despite host is down
- With a host down that had daemons deployed:
ceph orch ls didn't show services as affected even after a --refresh i...
- 04:20 PM Feature #44601 (New): cephadm: Mix of hosts: with and without firewall
- We allow a mix of hosts that either have firewall or not. I think this
should be part of the checks - either all hos...
- 04:14 PM Documentation #44600 (Resolved): cephadm: use ssh-copy-id
- adding a new host:
Passing the ceph.pub key to new hosts could use the...
- 04:13 PM Feature #44599 (Resolved): cephadm: check-host: Returns only a single problem
- Adding a host:
If checks fail, they show one at a time, forcing the admin to repeat
the command to get past eac...
- 04:11 PM Bug #44598 (Resolved): cephadm: Traceback, if Python 3 is not installed on remote host
- Adding a host:
if python3 isn't on the target, you get a traceback with OSError:
cannot send(already closed?) err...
- 04:10 PM Bug #44597 (Resolved): cephadm: Traceback, if ssh key is not on the remote host
- Adding a host:
if the ssh key isn't on the new target you hit a traceback - which doesn't inspire confidence.
- 03:58 PM Feature #44402 (Resolved): cephadm: more complete smoke test that can be run with vstart
- 03:56 PM Feature #44581 (Resolved): cephadm pause and cephadm resume
- 03:55 PM Bug #44569 (Resolved): NotImplementedError not caught
- 03:23 PM Cleanup #44379 (Won't Fix): orchestrator: {to,from}_json inconsistent
- not worth the effort right now.
- 02:53 PM Documentation #44284: cephadm: provide a way to modify the initial crushmap
- Per our discussion today, using `cephadm bootstrap -c /root/ceph.conf` is the correct way to set initial crushmap or ...
- 02:55 AM Bug #44587 (New): failed to write <pid> to cgroup.procs:
- ...
03/12/2020
- 05:28 PM Bug #44440 (Resolved): cephadm should be able to infer running container
- 02:44 PM Feature #44581 (Resolved): cephadm pause and cephadm resume
- if the serve() thread is in a loop breaking all your daemons, people will want to pause it.
- 12:52 PM Feature #44578 (Rejected): cephadm: verify Grafana works with Prometheus HA
- Is Grafana correctly configured when a Prometheus instance is added, for example:
* Is HA working in the Grafana d...
- 12:51 PM Bug #44577 (Closed): cephadm: reconfigure Prometheus on MGR failover
- we have to make sure, Prometheus knows the new prometheus exporter endpoint:
* Generate a new prometheus config po...
- 12:44 PM Feature #44576 (Resolved): cephadm: Restart Prometheus, if a new node_exporter or alertmanager is...
- P needs to know the new targets / configuration
- 12:37 PM Bug #37514 (Can't reproduce): mgr CLI commands block one another (indefinitely if the orchestrato...
- CLI commands should now respond swiftly. (cephadm and rook)
- 12:36 PM Feature #39093 (Rejected): mgr/orchestrator: add `ceph orchestrator wait`
- out of scope for now.
- 12:33 PM Feature #43705: cephadm: on config change, restart appropriate daemons
- partially: https://github.com/ceph/ceph/pull/33855
- 12:28 PM Feature #43839 (New): enhance `host ls`
- 12:19 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- Which means, we have to track which nodes are scanned and bail out, if we don't have the inventory yet?
- 12:15 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- new workaround: https://github.com/ceph/ceph-salt/pull/109
- 12:10 PM Bug #44559: cephadm logs an invalid stat command
- just to clarify, ...
- 12:07 PM Bug #44569 (Fix Under Review): NotImplementedError not caught
03/11/2020
- 08:21 PM Bug #44569 (Resolved): NotImplementedError not caught
- with cephadm for example,...
- 08:21 PM Feature #43694 (Resolved): cephadm: flag dashboard user to change password
- 02:57 PM Bug #44559 (New): cephadm logs an invalid stat command
- 02:30 PM Bug #44559: cephadm logs an invalid stat command
- Thanks Kris - updated the bug description.
- 12:06 PM Bug #44559: cephadm logs an invalid stat command
- Shouldn't that be...
- 11:50 AM Bug #44559 (Fix Under Review): cephadm logs an invalid stat command
- 11:46 AM Bug #44559 (Can't reproduce): cephadm logs an invalid stat command
- When I run "cephadm bootstrap", I see the following in the log:...
- 02:52 PM Bug #44272 (Resolved): on SUSE, crash daemon starts but then always stops a couple minutes later
- 11:17 AM Bug #44557 (Resolved): cephadm: error on run-tox-cephadm test
- 09:14 AM Bug #44557 (Fix Under Review): cephadm: error on run-tox-cephadm test
- 08:19 AM Bug #44557 (Resolved): cephadm: error on run-tox-cephadm test
- run-tox-cephadm test fails with:...
- 09:56 AM Backport #43994 (Need More Info): luminous: ceph orchestrator rgw rm: no valid command found
- mimic backport attempt was closed. presuming non-trivial
- 08:05 AM Feature #44556 (Resolved): cephadm: preview drivegroups
- The osd deployment in cephadm happens async in the background.
When using drivegroups, it may not always be clear...
03/10/2020
- 10:19 PM Bug #44397 (Resolved): cephadm: make rgw daemons avoid the same host
- 12:59 PM Bug #44397 (Fix Under Review): cephadm: make rgw daemons avoid the same host
- 12:43 PM Bug #44397: cephadm: make rgw daemons avoid the same host
- https://github.com/ceph/ceph/commit/8330d2f2bd2bb9325ac48accedfecd6dfaab8697
- 09:27 PM Bug #44512 (Resolved): mgr/cephadm: `orch ls` doesn't obey filters
- 07:59 AM Bug #44512 (Fix Under Review): mgr/cephadm: `orch ls` doesn't obey filters
- 08:11 PM Bug #44401 (Fix Under Review): cephadm: check host performed every time through serve loop
- 04:14 PM Backport #43993 (Need More Info): mimic: ceph orchestrator rgw rm: no valid command found
- first attempted backport - https://github.com/ceph/ceph/pull/33159 - was closed
- 03:29 PM Feature #44548 (Resolved): cephadm: persist osd removal queue
- cephadm and the corresponding osd_support module currently don't save state of osds that are queued to be removed, he...
- 12:01 PM Feature #43699 (Resolved): mgr/cephadm: osd rm must validate before deletion
- 12:00 PM Feature #43693 (Resolved): cephadm: replace OSDs
- 11:54 AM Bug #44272 (Fix Under Review): on SUSE, crash daemon starts but then always stops a couple minute...
- 11:45 AM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- from dmesg:...
- 11:41 AM Feature #44402: cephadm: more complete smoke test that can be run with vstart
- fixed via https://github.com/ceph/ceph/pull/33730 or is there something else missing?
- 10:42 AM Cleanup #44379: orchestrator: {to,from}_json inconsistent
{to,from}_json should not accept strings and instead always accept/return dicts or lists.
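The convention argued for here might look like the following sketch (a simplified, hypothetical spec class, not the real orchestrator one):

```python
import json

class SpecSketch:
    """to_json always returns a dict; string (de)serialization is the
    caller's job via json.dumps/json.loads. Hypothetical sketch."""
    def __init__(self, service_type: str, placement=None):
        self.service_type = service_type
        self.placement = placement

    def to_json(self) -> dict:
        return {'service_type': self.service_type, 'placement': self.placement}

    @classmethod
    def from_json(cls, data: dict) -> 'SpecSketch':
        # accepts a dict only -- never a JSON string
        return cls(data['service_type'], data.get('placement'))
```

`json.dumps(spec.to_json())` then yields the string form only where one is actually needed.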
- 03:42 AM Bug #44526 (Resolved): sporadic cephadm bootstrap failures: 'timed out'
03/09/2020
- 09:52 PM Feature #43962 (Resolved): cephadm: Make mgr/cephadm declarative
- 06:14 PM Bug #44440 (In Progress): cephadm should be able to infer running container
- 05:27 PM Bug #44526 (Fix Under Review): sporadic cephadm bootstrap failures: 'timed out'
- 05:27 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- I think the fundamental problem here is how ceph.in is using librados. One thread is trying to do some work, which i...
- 04:12 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- ceph.in sets a short 5s timeout for -h, and that's triggering shutdown, but then ceph isn't cleanly stopping...
<p...
- 03:55 PM Bug #44526 (Resolved): sporadic cephadm bootstrap failures: 'timed out'
- ...
- 04:12 AM Bug #44513 (Resolved): mgr/cephadm: `orch ps --refresh` returns no results
- h3. Steps to reproduce
* Create services and list daemons...
- 04:05 AM Bug #44512 (Resolved): mgr/cephadm: `orch ls` doesn't obey filters
- h3. Steps to reproduce
* Create a service, e.g. mgr
* List the service with service_type filter, say `osd`. The r...
03/08/2020
- 10:30 PM Bug #44253 (Resolved): _apply_service should move services, not just expand/contract
- 10:30 PM Bug #44254 (Resolved): scheduler should prefer existing daemon locations
- 10:30 PM Bug #44392 (Resolved): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Servi...
- 10:30 PM Bug #44491 (Resolved): mgr/cephadm: fail to load service specs after restarting
- 10:29 PM Bug #44167 (Resolved): cephadm/ def _update_service: Remove should make use of spec.placement.hosts
- 10:29 PM Bug #44302 (Resolved): cephadm: apply_mon: NotImplementedError
03/07/2020
- 05:48 PM Bug #43713 (Resolved): drive group filters: use `and` instead of `or`
- 12:21 AM Bug #44440: cephadm should be able to infer running container
- ceph-container proposal for adding a new LABEL - https://github.com/ceph/ceph-container/pull/1604
03/06/2020
- 09:25 PM Feature #43937 (Rejected): cephadm: make default image configurable
- Closing in favor of https://tracker.ceph.com/issues/44440, see https://github.com/ceph/ceph/pull/33781 for more infor...
- 12:29 PM Feature #43937 (Fix Under Review): cephadm: make default image configurable
- 09:02 PM Bug #44302 (Fix Under Review): cephadm: apply_mon: NotImplementedError
- 07:21 PM Bug #44302 (In Progress): cephadm: apply_mon: NotImplementedError
- From https://github.com/ceph/ceph/pull/33548#issuecomment-591443581
I removed apply_mon because simply reusing _ap...
- 07:25 PM Bug #44440: cephadm should be able to infer running container
- As described in https://github.com/ceph/ceph/pull/33781#issuecomment-595760420, if 'image' isn't specified then cepha...
- 07:13 PM Bug #44440 (New): cephadm should be able to infer running container
- 12:40 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- 33755 fixed this for c-v prepare on a single device, which is what teuthology does.
- 11:37 AM Bug #44491 (Resolved): mgr/cephadm: fail to load service specs after restarting
- h3. Steps to reproduce:
* Enable cephadm backend and add a host mgr0
* Create a mgr daemon...
- 08:30 AM Feature #44461 (Pending Backport): cephadm: watch Grafana certificates
- Add a periodic check for the validity of the provided Grafana certificates and raise a health alert if they aren't hea...
03/05/2020
- 10:50 PM Feature #43937: cephadm: make default image configurable
- I've learned that `cephadm bootstrap` is already storing the pulled image path in a config-key:...
- 07:47 PM Feature #43937 (In Progress): cephadm: make default image configurable
- 06:18 PM Feature #43937: cephadm: make default image configurable
- I would prefer to be able to set a MON config-key with the default value.
If the env variable is defined, it should ov...
- 06:06 PM Feature #43937: cephadm: make default image configurable
- Sebastian Wagner wrote:
> prio low, as this can be done by setting the env variable system-wide
How does one set ...
- 06:57 PM Backport #43993 (New): mimic: ceph orchestrator rgw rm: no valid command found
- 06:29 PM Bug #44440 (Duplicate): cephadm should be able to infer running container
- 02:07 PM Bug #44440 (Resolved): cephadm should be able to infer running container
- If I use "cephadm" to deploy a Ceph cluster using "Container A" (not the default one) and I have that cluster running...
- 03:44 PM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- OK, some more information:...
- 11:18 AM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- OK, I will reproduce, obtain dmesg output, and post here.
One thing I did notice is that, with the upstream contai...
- 07:00 AM Bug #44392 (In Progress): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Se...
03/04/2020
- 11:25 PM Feature #44429 (Rejected): cephadm: make upgrade work with 'packaged' mode
- In order to upgrade when the cephadm binary is installed via a package, mgr/cephadm needs to update the cephadm packa...
- 07:34 PM Bug #44273 (Can't reproduce): Getting "stray daemon osd.3 on host admin not managed by cephadm" o...
- not getting this anymore
- 11:27 AM Feature #44414 (Resolved): bubble up errors during 'apply' phase to 'cluster warnings'
- Since we moved to a fully declarative approach which handles most of the deployment in the background (k8-like) it be...
- 01:54 AM Bug #44390 (Resolved): cephadm: fail to create daemons
03/03/2020
- 08:47 PM Bug #44401 (In Progress): cephadm: check host performed every time through serve loop
- 08:41 PM Bug #44401 (Resolved): cephadm: check host performed every time through serve loop
- This should only check every N seconds (say, 10 minutes)
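The throttling suggested here ("only check every N seconds") can be sketched as follows (hypothetical helper, not the mgr/cephadm code; 600 s is the 10 minutes suggested above):

```python
import time

class CheckThrottle:
    """Allow a per-host check only if the previous one is older than
    the interval. Sketch only."""
    def __init__(self, interval: float = 600.0):
        self.interval = interval
        self._last = {}               # hostname -> monotonic timestamp

    def should_check(self, host: str) -> bool:
        now = time.monotonic()
        last = self._last.get(host)
        if last is not None and now - last < self.interval:
            return False              # checked recently: skip this pass
        self._last[host] = now
        return True
```

The serve loop would then call `should_check(host)` each pass and skip the expensive host check when it returns False.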
- 08:42 PM Feature #44402 (Resolved): cephadm: more complete smoke test that can be run with vstart
- Frequently we are making (mgr/cephadm and cephadm) code changes and are developing against vstart. It would be nice ...
- 08:33 PM Feature #44205 (Resolved): cephadm: push/apply config.yml
- 06:20 PM Bug #44397 (Resolved): cephadm: make rgw daemons avoid the same host
- A test verifying this behavior was removed, check history for test_rgw_update_fail
- 11:10 AM Bug #44392 (Fix Under Review): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output ...
- 10:07 AM Bug #44392 (Resolved): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Servi...
- A new column `SPEC` was added in PR https://github.com/ceph/ceph/pull/33553.
And PLACEMENT field was added in PR htt...
- 03:44 AM Bug #44390 (Fix Under Review): cephadm: fail to create daemons
- 03:38 AM Bug #44390 (In Progress): cephadm: fail to create daemons
- Might be a regression from https://github.com/ceph/ceph/pull/33658/files#diff-8b586ec9c3ad3e8421a8858888f7ddf0R2067.
- 03:36 AM Bug #44390 (Resolved): cephadm: fail to create daemons
- I hit this error when creating OSDs:...
03/02/2020
- 05:35 PM Cleanup #44379 (Won't Fix): orchestrator: {to,from}_json inconsistent
- sometimes to_json returns a dict (that can be fed to json.dumps), sometimes it returns the JSON string. We should be...
02/28/2020
- 05:20 PM Documentation #44354: cephadm: Log messages are missing
- https://github.com/ceph/ceph/pull/33627 is an attempt to clarify the cephadm log gathering docs, and it has some rela...
- 04:43 PM Documentation #44354 (Duplicate): cephadm: Log messages are missing
- There is a bug in master/octopus which causes OSDs to crash in msgr V2 without any load on the cluster. But that's no...
- 04:52 PM Bug #43713 (Fix Under Review): drive group filters: use `and` instead of `or`
- 02:33 PM Bug #44028 (Resolved): cephadm: usability: failing to add an osd, useless message
- I don't think we can make the error message any more useful. We IMO have to fix bugs in c-v and cephadm if they appear.
- 02:30 PM Bug #44272 (Triaged): on SUSE, crash daemon starts but then always stops a couple minutes later
- 02:30 PM Bug #44312 (Duplicate): ceph-volume prepare is not idempotent and may get called twice
- 02:28 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- Sage Weil wrote:
> An alternative workaround would be to make cephadm look for 'RuntimeError: skipping vg_nvme/lv_4,...
- 02:04 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- -Your QA run really had an excessive amount of duplicates-:
Ah, each mon logs the command....
- 02:27 PM Bug #44167 (In Progress): cephadm/ def _update_service: Remove should make use of spec.placement....
- 02:26 PM Bug #44167: cephadm/ def _update_service: Remove should make use of spec.placement.hosts
- should be fixed by https://github.com/ceph/ceph/pull/33523
- 02:24 PM Feature #43696: cephadm: check that units start
- I'm inclined to close this as won't fix. What shall we do?
h3. Wait, till the process is responsive?
Like http...
- 01:49 AM Feature #44308 (Resolved): mgr/cephadm: Enable alertmanager configuration in mgr/cephadm (the orc...
02/27/2020
- 07:03 PM Bug #44121 (Resolved): calling cephadm shell again loses bash history
- 12:38 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- For ceph-salt, I have a workaround here:
https://github.com/ceph/ceph-salt/pull/99
02/26/2020
- 08:50 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- similar failure, this time a 'daemon mon add' dup:
/a/sage-2020-02-26_08:10:43-rados-wip-sage2-testing-2020-02-25-...
- 08:27 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- One possible fix would be to make ceph-volume itself idempotent, so that calling prepare on an already-prepared devic...
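The idempotency fix floated here can be sketched as follows (hypothetical names; the real ceph-volume prepare logic is far more involved):

```python
def prepare_device(device: str, prepared: set, do_prepare) -> bool:
    """Make a second `prepare` call on the same device a harmless no-op
    instead of an error. Sketch only: `prepared` stands in for however
    ceph-volume would detect an already-prepared device."""
    if device in prepared:
        return False                  # already prepared: succeed quietly
    do_prepare(device)
    prepared.add(device)
    return True
```

With this, the duplicated call described in the symptom above would simply return without re-preparing the device.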
- 08:26 PM Bug #44313 (Resolved): ceph-volume prepare is not idempotent and may get called twice
- symptom is a failure like so:...
- 08:24 PM Bug #44312 (Duplicate): ceph-volume prepare is not idempotent and may get called twice
- 04:59 PM Bug #43680 (Resolved): parallelize osd provisioning
- 01:46 PM Feature #44205 (Fix Under Review): cephadm: push/apply config.yml
- 10:41 AM Feature #44308 (Resolved): mgr/cephadm: Enable alertmanager configuration in mgr/cephadm (the orc...
- 09:44 AM Bug #44302 (Fix Under Review): cephadm: apply_mon: NotImplementedError
- 09:08 AM Bug #44302 (Resolved): cephadm: apply_mon: NotImplementedError
- ...
- 09:32 AM Feature #44305 (Resolved): mgr/cephadm: Add support for removing MONs
- As a future follow up of https://tracker.ceph.com/issues/44302
02/25/2020
- 03:07 PM Feature #44287 (Rejected): cephadm: Graceful Shutdown of the Whole Ceph Cluster
- even though the use case is limited, it would make it possible for users to stop their production clusters for schedu...
- 02:59 PM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- Rethinking. I think this is an apparmor problem. Adding the output of dmesg would be helpful.
- 02:41 PM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- related: https://github.com/opencontainers/runc/issues/2236
After reading the code at container_linux.go:389, podm...
- 11:57 AM Documentation #44284: cephadm: provide a way to modify the initial crushmap
- Actually, when this workaround is used, the option ends up being set in the MON store, and is reflected in the initia...
- 11:51 AM Documentation #44284 (Resolved): cephadm: provide a way to modify the initial crushmap
- If we don't edit the map when bootstrapping, the CRUSH map has to be edited at runtime, which is non-trivial.
Howe...
02/24/2020
- 07:29 PM Bug #41746 (Resolved): mgr/rook: `ceph orchestrator device ls` doesn't set `available`
- this is working now AFAICS
- 07:29 PM Bug #43838: cephadm: Forcefully Remove Services (unresponsive hosts)
- One option is to have them 'ceph orch host rm $hostname'...
- 07:27 PM Bug #44121 (Fix Under Review): calling cephadm shell again loses bash history
- 07:19 PM Bug #44270 (Triaged): Under certain circumstances, "ceph orch apply" returns success even when no...
- i bet the problem is that the drive inventory isn't populated yet immediately after bootstrap.
- 03:12 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- we might need some more in-depth validation of drive groups here.
- 03:10 PM Bug #44270 (Can't reproduce): Under certain circumstances, "ceph orch apply" returns success even...
- On a single-node cluster, the "cephadm bootstrap" command deploys 1 MGR and 1 MON.
On very recent versions of mast...
- 07:17 PM Bug #44273 (Need More Info): Getting "stray daemon osd.3 on host admin not managed by cephadm" on...
- This should have been fixed by 607263224c26... can you reproduce this with debug_mgr = 20 and attach a log?
- 03:46 PM Bug #44273 (Can't reproduce): Getting "stray daemon osd.3 on host admin not managed by cephadm" o...
- I typically test the most simple deployment imaginable: single node, 1 MGR, 1 MON, and 4 OSDs. The deployment is done...
- 07:13 PM Feature #43867 (Resolved): cephadm: progress item for upgrade
- 04:16 PM Feature #43673 (Need More Info): ceph-ansible playbook: pivot to cephadm
- 03:38 PM Bug #44272 (Resolved): on SUSE, crash daemon starts but then always stops a couple minutes later
- Recently cephadm/orchestrator started deploying crash daemon on all cluster nodes.
On SUSE (at least), the crash d...
- 01:33 PM Feature #44205: cephadm: push/apply config.yml
- to sum up our discussion from Friday:
* What about doing all calls synchronously and only return async completions...
- 01:30 PM Feature #44205 (In Progress): cephadm: push/apply config.yml