Activity
From 03/02/2020 to 03/31/2020
03/31/2020
- 07:58 PM Feature #43687: cephadm: haproxy (or lb)
- Deploy and configure haproxy with cephadm; configure a service/lb with kubernetes/rook. Can we generalize these into...
- 07:54 PM Feature #44869 (Resolved): cephadm: automatic auth key rotation
- This is about periodically deploying new keys for daemons and clients. This is a bit more involved, as we need to make...
- 04:21 PM Documentation #44867 (Rejected): cephadm: document "package" mode
- ...
- 04:07 PM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
- Let's say, someone does:...
- 03:45 PM Feature #44864 (New): cephadm: garbage collect old container images
- cephadm: garbage collect old container images
- 01:39 PM Bug #44820: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- Copying to fix in c-v as well.
- 01:18 PM Bug #44820 (Fix Under Review): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 10:00 AM Backport #44845 (Resolved): octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 09:54 AM Bug #44823 (Triaged): cephadm: tries to parse arguments command passed to shell
- 09:54 AM Bug #44823: cephadm: tries to parse arguments command passed to shell
- right. should be ...
- 12:32 AM Bug #44823 (Won't Fix): cephadm: tries to parse arguments command passed to shell
- ...
- 09:49 AM Bug #44832 (Resolved): cephadm: `ceph cephadm generate-key` fails with No such file or directory:...
- ...
- 09:43 AM Bug #44810 (Won't Fix): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- According to Kai, this was a false alarm.
- 09:42 AM Bug #44830 (Duplicate): cephadm bootstrap: improve error message, if `host add` fails
- ...
- 09:07 AM Documentation #44828 (Resolved): cephadm: clarify "Failed to infer CIDR network for mon ip"
- ...
- 03:41 AM Bug #44826 (Resolved): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring provided"
- ...
- 12:36 AM Bug #44825 (Rejected): cephadm: bootstrap is not idempotent
- It would be helpful if this command did nothing if the cluster is already bootstrapped. This would simplify ansible r...
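A wrapper of the kind those ansible roles end up writing today can be sketched as follows. The marker path and the exact `cephadm bootstrap` invocation are illustrative assumptions, not the proposed fix:

```shell
# Hypothetical idempotency guard: skip bootstrap when a config for the
# cluster already exists on disk. Paths and flags are examples only.
bootstrap_once() {
    conf="$1"     # e.g. /etc/ceph/ceph.conf
    mon_ip="$2"
    if [ -f "$conf" ]; then
        # A config is already present: assume the cluster is bootstrapped.
        echo "already bootstrapped, nothing to do"
        return 0
    fi
    cephadm bootstrap --mon-ip "$mon_ip"
}
```

Having `cephadm bootstrap` itself behave like this would make the wrapper unnecessary.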
- 12:35 AM Bug #44824 (Resolved): cephadm: adding osd device is not idempotent
- ...
03/30/2020
- 11:28 PM Bug #44820: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- the problem seems to be a racing invocation of inventory and prepare:...
- 08:29 PM Bug #44820 (Resolved): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- ...
- 09:01 PM Bug #44810 (Fix Under Review): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- 05:28 PM Bug #44810 (Need More Info): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- 11:09 AM Bug #44810 (Won't Fix): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- cephadm bootstrap creates @/etc/ceph/ceph.pub@ with wrong permissions
- 08:23 PM Bug #44669 (Resolved): cephadm: rm-cluster should clean up /etc/ceph
- 06:08 PM Feature #44305 (Resolved): mgr/cephadm: Add support for removing MONs
- 06:08 PM Bug #44039 (Rejected): bin/cephadm: Remove --allow-fqdn-hostname
- seems that this might be a valid config!
- 06:07 PM Feature #44287: cephadm: Graceful Shutdown of the Whole Ceph Cluster
- Open question: how do we shut down the mons after we have already shut down all mgrs?
- 05:53 PM Bug #44758 (Fix Under Review): Drive Groups: limit:1 does not imply all:true
- 05:35 PM Feature #43708 (Resolved): mgr/rook: Blink enclosure LED
- 05:35 PM Feature #43696: cephadm: check that units start
- low, until someone complains.
- 05:33 PM Bug #44739 (Need More Info): ceph.conf parameters set via "cephadm bootstrap -c" are not persiste...
- Interesting. The code there is rather old: https://github.com/ceph/ceph/blame/master/src/cephadm/cephadm#L2115-L2124
...
- 05:29 PM Documentation #44716 (Fix Under Review): orchestrator/cephadm: document ceph orch apply -i -
- 01:01 PM Feature #44556: cephadm: preview drivegroups
- Shortened this discussion with a f2f talk with Sebastian. Here are the results:
Keep the old syntax but expand with...
- 10:45 AM Feature #44556: cephadm: preview drivegroups
- > I guess we want to have the following features:
>
> 1) preview a service if the spec is already applied (useful ...
- 08:31 AM Feature #44556: cephadm: preview drivegroups
- I guess we want to have the following features:
1) preview a service if the spec is already applied (useful when h...
- 10:37 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- Not yet understood why https://github.com/ceph/ceph/pull/34260 fixes this issue.
- 10:21 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- ...
- 09:16 AM Bug #44777 (Fix Under Review): podman: stat /usr/bin/ceph-mon: no such file or directory, then un...
- 10:33 AM Bug #44642: cephadm: mgr dump might be too huge
- Backported to octopus by https://github.com/ceph/ceph/pull/34258
03/29/2020
- 10:38 PM Bug #44642 (Resolved): cephadm: mgr dump might be too huge
- 12:17 PM Bug #44642 (Pending Backport): cephadm: mgr dump might be too huge
03/28/2020
- 04:24 PM Feature #44599 (Resolved): cephadm: check-host: Returns only a single problem
- 09:59 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- Fascinating!
- 12:53 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- not sure why, but I'm pretty sure that https://github.com/ceph/ceph/pull/34091 is responsible for the regression.
...
03/27/2020
- 10:10 PM Bug #44792 (Resolved): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- /etc/ceph/ceph.conf is often used by tools in the ceph ecosystem. We should provide a mechanism to keep this up to dat...
- 08:58 PM Bug #44598 (Fix Under Review): cephadm: Traceback, if Python 3 is not installed on remote host
- 06:47 PM Bug #44598 (In Progress): cephadm: Traceback, if Python 3 is not installed on remote host
- 02:53 PM Feature #44556: cephadm: preview drivegroups
- When we want preview functionality for other components as well, we should generalize the CLI a bit more.
...
- 12:37 PM Bug #44781 (Fix Under Review): cephadm: monitoring: root volume alert doesn't work in container
- 10:19 AM Bug #44781 (Resolved): cephadm: monitoring: root volume alert doesn't work in container
- This is due to the root filesystem being mapped inside the container as `/rootfs` but the Prometheus alert checking ...
- 03:58 AM Bug #44642: cephadm: mgr dump might be too huge
- Same, @cephadm shell -- ceph mgr dump@ seems perfectly happy for me too. So it's only a problem during bootstrap some...
03/26/2020
- 10:56 PM Bug #44777 (Resolved): podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to ...
- the mon.b unit:...
- 08:43 PM Feature #43677 (Resolved): monitoring: create rpm for alerts rules also for centos
- 08:42 PM Documentation #43672 (Resolved): doc: point release upgrades
- 08:37 PM Bug #44559 (Need More Info): cephadm logs an invalid stat command
- 08:36 PM Feature #43708 (Pending Backport): mgr/rook: Blink enclosure LED
- backport https://github.com/ceph/ceph/pull/34199
- 08:35 PM Bug #44603: cephadm: `ls --refresh` shows Tracebacks in the log
- prio low, till someone complains
- 08:32 PM Bug #44699: cephadm: removing services leaves configs behind
- which config? things in the mon store?
- 08:27 PM Bug #44609 (In Progress): cephadm: grafana: cert problem prevents dashboard integration
- 08:27 PM Bug #44513 (Resolved): mgr/cephadm: `orch ps --refresh` returns no results
- 02:17 AM Bug #44513 (Pending Backport): mgr/cephadm: `orch ps --refresh` returns no results
- https://github.com/ceph/ceph/pull/34190
- 08:26 PM Bug #44608 (Resolved): cephadm: grafana: bound to 127.0.0.1
- 02:18 AM Bug #44608 (Pending Backport): cephadm: grafana: bound to 127.0.0.1
- backport https://github.com/ceph/ceph/pull/34191
- 08:26 PM Bug #43890 (Resolved): cephadm: default hardcoded to non-ceph dockerhub
- 08:25 PM Feature #44775 (Resolved): cephadm: NFS stage 2
- * Teuthology integration
* cephadm adopt
* make container upgrades work
- 08:22 PM Feature #44718 (Resolved): NFS ganesha (mgr/cephadm)
- 03:24 PM Bug #44642: cephadm: mgr dump might be too huge
- ...
- 02:47 PM Bug #44642: cephadm: mgr dump might be too huge
- ...
- 11:33 AM Bug #44642: cephadm: mgr dump might be too huge
- Just ran another loop of ten with https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm and docker.io/ceph/cep...
- 11:24 AM Bug #44642: cephadm: mgr dump might be too huge
- Ten runs using cephadm from https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm (so the container image is d...
- 11:01 AM Bug #44642: cephadm: mgr dump might be too huge
- To try to obtain some further clarity, I ran this:
@for n in $(seq 1 10); do sleep 5; systemctl stop ceph.target ;...
- 08:15 AM Bug #44642: cephadm: mgr dump might be too huge
- OK, I've now seen it work at least once without that patch applied, and I've also seen it fail at least once without ...
- 03:35 AM Bug #44642: cephadm: mgr dump might be too huge
- Sebastian Wagner wrote:
> interesting. does it work when using https://github.com/ceph/ceph/pull/34031 ?
Yeah, it...
- 01:32 PM Bug #44602 (Fix Under Review): cephadm: `orch ls` shows daemons as online, despite host is down
- 12:57 PM Feature #44556: cephadm: preview drivegroups
- This is the output of:...
- 10:16 AM Feature #44556 (In Progress): cephadm: preview drivegroups
- 10:53 AM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- https://github.com/ceph/ceph/compare/master...sebastian-philipp:cephadm-add-ipv6-routes?expand=1
- 10:15 AM Bug #44769 (Resolved): cephadm doesn't reuse osd_id of 'destroyed' osds
- The replacement operation is supposed to work like this:...
- 09:27 AM Documentation #44768 (Rejected): cephadm: document allow_ptrace true
- ...
- 09:18 AM Bug #44729: cephadm enter using docker is broken
- hm. Jan, close as "Can't reproduce"?
03/25/2020
- 11:57 PM Backport #43994 (Rejected): luminous: ceph orchestrator rgw rm: no valid command found
- 11:57 PM Backport #43993 (Rejected): mimic: ceph orchestrator rgw rm: no valid command found
- 09:42 PM Bug #43816 (Pending Backport): cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 03:16 PM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- > This command produced the following error:
> ...
- 05:34 PM Bug #44758 (Resolved): Drive Groups: limit:1 does not imply all:true
- This drive group:...
- 05:29 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- prerequisite: https://github.com/ceph/ceph/pull/34091
- 05:28 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- I'm already getting bug reports, like...
- 04:23 PM Bug #44642: cephadm: mgr dump might be too huge
- interesting. does it work when using https://github.com/ceph/ceph/pull/34031 ?
- 08:26 AM Bug #44642: cephadm: mgr dump might be too huge
- I can't help but think this is somehow related to the hangs we're getting after ~100k output when podman is run via s...
- 04:12 PM Bug #44513: mgr/cephadm: `orch ps --refresh` returns no results
- Confirmed PR 34182 fixes this.
Saw a similar thing with 3 hosts and only the last host was shown during refresh:
...
- 03:41 PM Bug #44513: mgr/cephadm: `orch ps --refresh` returns no results
- (pretty sure i'm fixing the same bug... it would happen if you had 2 hosts in your test cluster above and the last on...
- 03:40 PM Bug #44513 (Fix Under Review): mgr/cephadm: `orch ps --refresh` returns no results
- 03:44 PM Bug #44729 (Need More Info): cephadm enter using docker is broken
- It works for me...
- 03:35 PM Bug #44608 (Fix Under Review): cephadm: grafana: bound to 127.0.0.1
- 03:21 PM Bug #44756 (Resolved): drivegroups: replacement op will ignore existing wal/dbs
- Since the db/wal is considered "locked/non-available" by ceph-volume after the first deployment, the DriveGroup algor...
- 12:11 PM Bug #44747: orch: `ceph orch ls --service_type` is broken
- This actually looks random. In the same session:...
- 12:07 PM Bug #44747 (Can't reproduce): orch: `ceph orch ls --service_type` is broken
- ...
- 11:52 AM Bug #44746 (Closed): cephadm: vstart.sh --cephadm: don't deploy crash by default
- as it's not part of the normal cluster, it's getting left behind and stays running. ...
- 09:34 AM Bug #44739 (Can't reproduce): ceph.conf parameters set via "cephadm bootstrap -c" are not persist...
- Now, when I set e.g. "osd crush chooseleaf type = 0" via "cephadm bootstrap -c", the initial CRUSH map has failure do...
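For context, the reproduction boils down to passing a minimal config file at bootstrap time; the option is the one quoted in the report, the file path matches the one mentioned later in this log (#44284), and the rest is a sketch:

```ini
# /root/ceph.conf, passed via: cephadm bootstrap --mon-ip <ip> -c /root/ceph.conf
[global]
osd crush chooseleaf type = 0
```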
- 09:32 AM Bug #44738: drivegroups/cephadm: db_devices don't get applied correctly when using "paths"
- -It seems that db_devices are ignored whenever "paths" is used in the "data_devices" section.-
Ignore that.
- 09:15 AM Bug #44738 (Won't Fix): drivegroups/cephadm: db_devices don't get applied correctly when using "p...
- ...
03/24/2020
- 10:05 PM Bug #44642: cephadm: mgr dump might be too huge
- > Do you know why the line "j = json.loads(out)" is choking on the integer value sent by "ceph mgr dump"?
Now I se...
- 07:38 PM Bug #44669 (Fix Under Review): cephadm: rm-cluster should clean up /etc/ceph
- 02:35 PM Bug #44669 (In Progress): cephadm: rm-cluster should clean up /etc/ceph
- 02:57 PM Bug #44729: cephadm enter using docker is broken
- ls works though...
- 02:56 PM Bug #44729 (Can't reproduce): cephadm enter using docker is broken
- ...
03/23/2020
- 04:16 PM Bug #44720 (Need More Info): rook: rgw: allow realm != zone
- 04:16 PM Bug #44719 (New): rook: align rgw client names with orch and cephadm
- client.rgw.$realm.$zone[.$id]
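The proposed naming scheme can be illustrated with a tiny helper (hypothetical, just to show how the optional `$id` suffix composes; the realm/zone values in the usage note are made up):

```shell
# Build a client name of the form client.rgw.$realm.$zone[.$id];
# the id component is only appended when one is given.
rgw_client_name() {
    realm="$1"; zone="$2"; id="$3"
    name="client.rgw.${realm}.${zone}"
    if [ -n "$id" ]; then
        name="${name}.${id}"
    fi
    echo "$name"
}
```

For example, `rgw_client_name default us-east-1 0` yields `client.rgw.default.us-east-1.0`, while omitting the id yields `client.rgw.default.us-east-1`.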
- 04:11 PM Feature #44718 (Fix Under Review): NFS ganesha (mgr/cephadm)
- 04:10 PM Feature #44718 (Resolved): NFS ganesha (mgr/cephadm)
- mgr/cephadm
- 04:10 PM Feature #43688 (Resolved): NFS ganesha
- 02:36 PM Bug #44701 (Resolved): ganesha selinux denial
- 01:56 PM Documentation #44716 (Resolved): orchestrator/cephadm: document ceph orch apply -i -
- ...
- 12:16 PM Backport #44710 (Resolved): octopus: doc/cephadm: replace `osd create` with `apply osd`
- https://github.com/ceph/ceph/pull/34355
- 08:33 AM Bug #44642 (Fix Under Review): cephadm: mgr dump might be too huge
- 08:31 AM Bug #44642: cephadm: mgr dump might be too huge
- I don't know what caused this. Might actually be an artifact of our podman hang. prio=low for now.
edit: oh, you c...
03/20/2020
- 10:45 PM Bug #44642: cephadm: mgr dump might be too huge
- Now, with both cephadm and container at 15.1.1-168-g06ecd31e39 I am seeing "cephadm bootstrap" fail on "ceph mgr dump...
- 06:48 PM Bug #44701 (Resolved): ganesha selinux denial
- ...
- 06:28 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- yeah, I also don't like to create a new dependency from the dashboard to cephadm
- 05:08 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- I'm inclined to just open both, because the dashboard might move between ssl and not ssl. otherwise we need to make t...
- 05:10 PM Feature #44576 (Resolved): cephadm: Restart Prometheus, if a new node_exporter or alertmanager is...
- This already works.
- 05:05 PM Bug #44669: cephadm: rm-cluster should clean up /etc/ceph
- What should the behavior here be? Check if the /etc/ceph config has the same fsid, and if so, remove it + the keyrin...
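The suggested check can be sketched like this, assuming a plain `fsid = ...` line in the config; the file paths and matching logic are illustrative, not the actual rm-cluster code:

```shell
# Remove the /etc/ceph config and keyring only when the config's fsid
# matches the cluster being torn down; otherwise leave the files alone.
cleanup_etc_ceph() {
    fsid="$1"
    conf="$2"      # normally /etc/ceph/ceph.conf
    keyring="$3"   # normally /etc/ceph/ceph.client.admin.keyring
    if [ -f "$conf" ] && grep -q "fsid = ${fsid}" "$conf"; then
        rm -f "$conf" "$keyring"
        echo "removed config for ${fsid}"
    else
        echo "config belongs to another cluster (or is absent), keeping it"
    fi
}
```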
- 05:04 PM Bug #44699 (Closed): cephadm: removing services leaves configs behind
- Some of the configs are created by cephadm itself. The user might have created some too, but the config history will...
- 05:02 PM Bug #44698 (Duplicate): cephadm: removing daemons leaves auth keys behind
- 02:19 PM Feature #43839 (Fix Under Review): enhance `host ls`
- 01:39 PM Feature #43839 (In Progress): enhance `host ls`
- 12:13 PM Bug #44692 (Pending Backport): doc/cephadm: replace `osd create` with `apply osd`
- 11:38 AM Bug #44692 (Fix Under Review): doc/cephadm: replace `osd create` with `apply osd`
- 11:33 AM Bug #44692 (Resolved): doc/cephadm: replace `osd create` with `apply osd`
- 12:06 PM Feature #43689 (Fix Under Review): cephadm: iscsi
- 11:43 AM Bug #43890 (Fix Under Review): cephadm: default hardcoded to non-ceph dockerhub
03/19/2020
- 07:19 PM Bug #44615 (Resolved): cephadm: reconfig of removed daemon
- 02:47 PM Feature #44599 (Fix Under Review): cephadm: check-host: Returns only a single problem
- 09:28 AM Cleanup #44676 (Resolved): cephadm: Replace execnet (and remoto)
- [[https://github.com/pytest-dev/execnet]] is in maintenance mode. ...
03/18/2020
- 11:55 PM Bug #44673 (Rejected): cephadm: `orch apply` and `orch daemon add` use completely different code ...
- ... which is not obvious to users, who will use the two interchangeably. That is not a good idea.
We sho...
- 10:43 PM Feature #44622 (Resolved): orch daemon add -i spec.yaml
- 03:16 PM Bug #44642 (In Progress): cephadm: mgr dump might be too huge
- 02:05 PM Bug #44642 (New): cephadm: mgr dump might be too huge
- 01:48 PM Bug #44669 (Resolved): cephadm: rm-cluster should clean up /etc/ceph
- ...
- 01:14 PM Bug #44401 (Resolved): cephadm: check host performed every time through serve loop
- 01:14 PM Bug #44607 (Resolved): cephadm: apply(): Traceback, if host doesn't exist
03/17/2020
- 04:12 PM Bug #44642 (Rejected): cephadm: mgr dump might be too huge
- seems to be a downstream issue.
- 02:51 PM Bug #44642 (Resolved): cephadm: mgr dump might be too huge
- ...
- 04:11 PM Feature #44599 (In Progress): cephadm: check-host: Returns only a single problem
- 04:08 PM Feature #44599 (Rejected): cephadm: check-host: Returns only a single problem
- 03:30 PM Bug #44644 (Closed): cephadm: RGW: updating the spec doesn't update the mon store
- when creating RGW running...
- 03:20 PM Backport #43993: mimic: ceph orchestrator rgw rm: no valid command found
- As `ceph orchestrator rgw rm` doesn't exist for mimic, what about just closing this?
- 02:25 PM Bug #44607 (Fix Under Review): cephadm: apply(): Traceback, if host doesn't exist
- 12:46 PM Feature #44622 (Fix Under Review): orch daemon add -i spec.yaml
- 10:21 AM Feature #44622 (In Progress): orch daemon add -i spec.yaml
03/16/2020
- 05:43 PM Bug #44629 (Can't reproduce): cephadm: prometheus: graph queries are not working correctly
- graph queries are not working correctly. The use of instance and
exported_instance needs some investigation. On the ...
- 05:41 PM Feature #44628 (Resolved): cephadm: Add initial firewall management to cephadm
- we open both 8080 and 8443 for dashboard even when the default is
https. We should probably do one or the other, not...
- 05:30 PM Documentation #44600 (Resolved): cephadm: use ssh-copy-id
- 04:08 PM Bug #44597 (Resolved): cephadm: Traceback, if ssh key is not on the remote host
- 02:26 PM Feature #44625 (Resolved): cephadm: test dmcrypt
- we need to verify it.
- 01:18 PM Feature #44622 (Resolved): orch daemon add -i spec.yaml
03/14/2020
03/13/2020
- 05:00 PM Bug #44609 (Resolved): cephadm: grafana: cert problem prevents dashboard integration
- SSL cert problem prevents embedding out of the box.
Is the problem that ssl_verify is true by default? or that we...
- 04:59 PM Bug #44608 (Resolved): cephadm: grafana: bound to 127.0.0.1
- after deploying I noticed that it was bound to 127.0.0.1, which blocks
client access from other machines. Should thi...
- 04:56 PM Bug #44607 (Resolved): cephadm: apply(): Traceback, if host doesn't exist
- when deploying a daemon, with a host for placement - if the host doesn't
exist you get a traceback. This scenario sh...
- 04:55 PM Feature #44606 (Resolved): cephadm: RGW firewall + static port
- how is the firewall being handled? AFAIK, the port is a parameter on
the rgw_frontend setting, so it could be un...
- 04:29 PM Bug #44604 (Can't reproduce): cephadm: RGW: missing spec / mon store validation
- should the deployment of rgw first check the presence of a minimum set
of params defined in the config store - if no...
- 04:25 PM Bug #44603 (Rejected): cephadm: `ls --refresh` shows Tracebacks in the log
- With a host down that had daemons deployed, a --refresh shows tracebacks in the mgr log from the failed connect attem...
- 04:23 PM Bug #44602 (Resolved): cephadm: `orch ls` shows daemons as online, despite host is down
- With a host down that had daemons deployed:
ceph orch ls didn't show services as affected even after a --refresh i...
- 04:20 PM Feature #44601 (New): cephadm: Mix of hosts: with and without firewall
- We allow a mix of hosts that either have firewall or not. I think this
should be part of the checks - either all hos...
- 04:14 PM Documentation #44600 (Resolved): cephadm: use ssh-copy-id
- adding a new host:
Passing the ceph.pub key to new hosts could use the...
- 04:13 PM Feature #44599 (Resolved): cephadm: check-host: Returns only a single problem
- Adding a host:
If checks fail, they show one at a time, forcing the admin to repeat
the command to get past eac...
- 04:11 PM Bug #44598 (Resolved): cephadm: Traceback, if Python 3 is not installed on remote host
- Adding a host:
if python3 isn't on the target, you get a traceback with OSError:
cannot send(already closed?) err...
- 04:10 PM Bug #44597 (Resolved): cephadm: Traceback, if ssh key is not on the remote host
- Adding a host:
if the ssh key isn't on the new target you hit a traceback - which doesn't inspire confidence.
- 03:58 PM Feature #44402 (Resolved): cephadm: more complete smoke test that can be run with vstart
- 03:56 PM Feature #44581 (Resolved): cephadm pause and cephadm resume
- 03:55 PM Bug #44569 (Resolved): NotImplementedError not caught
- 03:23 PM Cleanup #44379 (Won't Fix): orchestrator: {to,from}_json inconsistent
- not worth the effort right now.
- 02:53 PM Documentation #44284: cephadm: provide a way to modify the initial crushmap
- Per our discussion today, using `cephadm bootstrap -c /root/ceph.conf` is the correct way to set initial crushmap or ...
- 02:55 AM Bug #44587 (New): failed to write <pid> to cgroup.procs:
- ...
03/12/2020
- 05:28 PM Bug #44440 (Resolved): cephadm should be able to infer running container
- 02:44 PM Feature #44581 (Resolved): cephadm pause and cephadm resume
- if the serve() thread is in a loop breaking all your daemons, people will want to pause it.
- 12:52 PM Feature #44578 (Rejected): cephadm: verify Grafana works with Prometheus HA
- Is Grafana correctly configured when a Prometheus instance is added, for example:
* Is HA working in the Grafana d...
- 12:51 PM Bug #44577 (Closed): cephadm: reconfigure Prometheus on MGR failover
- we have to make sure, Prometheus knows the new prometheus exporter endpoint:
* Generate a new prometheus config po...
- 12:44 PM Feature #44576 (Resolved): cephadm: Restart Prometheus, if a new node_exporter or alertmanager is...
- Prometheus needs to know the new targets / configuration
- 12:37 PM Bug #37514 (Can't reproduce): mgr CLI commands block one another (indefinitely if the orchestrato...
- CLI commands should now respond swiftly. (cephadm and rook)
- 12:36 PM Feature #39093 (Rejected): mgr/orchestrator: add `ceph orchestrator wait`
- out of scope for now.
- 12:33 PM Feature #43705: cephadm: on config change, restart appropriate daemons
- partially: https://github.com/ceph/ceph/pull/33855
- 12:28 PM Feature #43839 (New): enhance `host ls`
- 12:19 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- Which means, we have to track which nodes are scanned and bail out, if we don't have the inventory yet?
- 12:15 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- new workaround: https://github.com/ceph/ceph-salt/pull/109
- 12:10 PM Bug #44559: cephadm logs an invalid stat command
- just to clarify, ...
- 12:07 PM Bug #44569 (Fix Under Review): NotImplementedError not caught
03/11/2020
- 08:21 PM Bug #44569 (Resolved): NotImplementedError not caught
- with cephadm for example,...
- 08:21 PM Feature #43694 (Resolved): cephadm: flag dashboard user to change password
- 02:57 PM Bug #44559 (New): cephadm logs an invalid stat command
- 02:30 PM Bug #44559: cephadm logs an invalid stat command
- Thanks Kris - updated the bug description.
- 12:06 PM Bug #44559: cephadm logs an invalid stat command
- Shouldn't that be...
- 11:50 AM Bug #44559 (Fix Under Review): cephadm logs an invalid stat command
- 11:46 AM Bug #44559 (Can't reproduce): cephadm logs an invalid stat command
- When I run "cephadm bootstrap", I see the following in the log:...
- 02:52 PM Bug #44272 (Resolved): on SUSE, crash daemon starts but then always stops a couple minutes later
- 11:17 AM Bug #44557 (Resolved): cephadm: error on run-tox-cephadm test
- 09:14 AM Bug #44557 (Fix Under Review): cephadm: error on run-tox-cephadm test
- 08:19 AM Bug #44557 (Resolved): cephadm: error on run-tox-cephadm test
- run-tox-cephadm test fails with:...
- 09:56 AM Backport #43994 (Need More Info): luminous: ceph orchestrator rgw rm: no valid command found
- mimic backport attempt was closed. presuming non-trivial
- 08:05 AM Feature #44556 (Resolved): cephadm: preview drivegroups
- The osd deployment in cephadm happens async in the background.
When using drivegroups, it may be not always clear...
03/10/2020
- 10:19 PM Bug #44397 (Resolved): cephadm: make rgw daemons avoid the same host
- 12:59 PM Bug #44397 (Fix Under Review): cephadm: make rgw daemons avoid the same host
- 12:43 PM Bug #44397: cephadm: make rgw daemons avoid the same host
- https://github.com/ceph/ceph/commit/8330d2f2bd2bb9325ac48accedfecd6dfaab8697
- 09:27 PM Bug #44512 (Resolved): mgr/cephadm: `orch ls` doesn't obey filters
- 07:59 AM Bug #44512 (Fix Under Review): mgr/cephadm: `orch ls` doesn't obey filters
- 08:11 PM Bug #44401 (Fix Under Review): cephadm: check host performed every time through serve loop
- 04:14 PM Backport #43993 (Need More Info): mimic: ceph orchestrator rgw rm: no valid command found
- first attempted backport - https://github.com/ceph/ceph/pull/33159 - was closed
- 03:29 PM Feature #44548 (Resolved): cephadm: persist osd removal queue
- cephadm and the corresponding osd_support module currently don't save state of osds that are queued to be removed, he...
- 12:01 PM Feature #43699 (Resolved): mgr/cephadm: osd rm must validate before deletion
- 12:00 PM Feature #43693 (Resolved): cephadm: replace OSDs
- 11:54 AM Bug #44272 (Fix Under Review): on SUSE, crash daemon starts but then always stops a couple minute...
- 11:45 AM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- from dmesg:...
- 11:41 AM Feature #44402: cephadm: more complete smoke test that can be run with vstart
- fixed via https://github.com/ceph/ceph/pull/33730 or is there something else missing?
- 10:42 AM Cleanup #44379: orchestrator: {to,from}_json inconsistent
{to,from}_json should not accept strings and instead always accept/return dicts or lists.
- 03:42 AM Bug #44526 (Resolved): sporadic cephadm bootstrap failures: 'timed out'
03/09/2020
- 09:52 PM Feature #43962 (Resolved): cephadm: Make mgr/cephadm declarative
- 06:14 PM Bug #44440 (In Progress): cephadm should be able to infer running container
- 05:27 PM Bug #44526 (Fix Under Review): sporadic cephadm bootstrap failures: 'timed out'
- 05:27 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- I think the fundamental problem here is how ceph.in is using librados. One thread is trying to do some work, which i...
- 04:12 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- ceph.in sets a short 5s timeout for -h, and that's triggering shutdown, but then ceph isn't cleanly stopping...
<p...
- 03:55 PM Bug #44526 (Resolved): sporadic cephadm bootstrap failures: 'timed out'
- ...
- 04:12 AM Bug #44513 (Resolved): mgr/cephadm: `orch ps --refresh` returns no results
- h3. Steps to reproduce
* Create services and list daemons... - 04:05 AM Bug #44512 (Resolved): mgr/cephadm: `orch ls` doesn't obey filters
- h3. Steps to reproduce
* Create a service, e.g. mgr
* List the service with service_type filter, say `osd`. The r...
03/08/2020
- 10:30 PM Bug #44253 (Resolved): _apply_service should move services, not just expand/contract
- 10:30 PM Bug #44254 (Resolved): scheduler should prefer existing daemon locations
- 10:30 PM Bug #44392 (Resolved): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Servi...
- 10:30 PM Bug #44491 (Resolved): mgr/cephadm: fail to load service specs after restarting
- 10:29 PM Bug #44167 (Resolved): cephadm/ def _update_service: Remove should make use of spec.placement.hosts
- 10:29 PM Bug #44302 (Resolved): cephadm: apply_mon: NotImplementedError
03/07/2020
- 05:48 PM Bug #43713 (Resolved): drive group filters: use `and` instead of `or`
- 12:21 AM Bug #44440: cephadm should be able to infer running container
- ceph-container proposal for adding a new LABEL - https://github.com/ceph/ceph-container/pull/1604
03/06/2020
- 09:25 PM Feature #43937 (Rejected): cephadm: make default image configurable
- Closing in favor of https://tracker.ceph.com/issues/44440, see https://github.com/ceph/ceph/pull/33781 for more infor...
- 12:29 PM Feature #43937 (Fix Under Review): cephadm: make default image configurable
- 09:02 PM Bug #44302 (Fix Under Review): cephadm: apply_mon: NotImplementedError
- 07:21 PM Bug #44302 (In Progress): cephadm: apply_mon: NotImplementedError
- From https://github.com/ceph/ceph/pull/33548#issuecomment-591443581
I removed apply_mon because simply reusing _ap...
- 07:25 PM Bug #44440: cephadm should be able to infer running container
- As described in https://github.com/ceph/ceph/pull/33781#issuecomment-595760420, if 'image' isn't specified then cepha...
- 07:13 PM Bug #44440 (New): cephadm should be able to infer running container
- 12:40 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- 33755 fixed this for c-v prepare on a single device, which is what teuthology does.
- 11:37 AM Bug #44491 (Resolved): mgr/cephadm: fail to load service specs after restarting
- h3. Steps to reproduce:
* Enable cephadm backend and add a host mgr0
* Create a mgr daemon*...
- 08:30 AM Feature #44461 (Pending Backport): cephadm: watch Grafana certificates
- Add a periodic check for the validity of the provided Grafana certificates and raise a health alert if they aren't hea...
03/05/2020
- 10:50 PM Feature #43937: cephadm: make default image configurable
- I've learned that `cephadm bootstrap` is already storing the pulled image path in a config-key:...
- 07:47 PM Feature #43937 (In Progress): cephadm: make default image configurable
- 06:18 PM Feature #43937: cephadm: make default image configurable
- I would prefer to be able to set a MON config-key with the default value.
If the env variable is defined, it should ov...
- 06:06 PM Feature #43937: cephadm: make default image configurable
- Sebastian Wagner wrote:
> prio low, as this can be done by setting the env variable system-wide
How does one set ...
- 06:57 PM Backport #43993 (New): mimic: ceph orchestrator rgw rm: no valid command found
- 06:29 PM Bug #44440 (Duplicate): cephadm should be able to infer running container
- 02:07 PM Bug #44440 (Resolved): cephadm should be able to infer running container
- If I use "cephadm" to deploy a Ceph cluster using "Container A" (not the default one) and I have that cluster running...
- 03:44 PM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- OK, some more information:...
- 11:18 AM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- OK, I will reproduce, obtain dmesg output, and post here.
One thing I did notice is that, with the upstream contai...
- 07:00 AM Bug #44392 (In Progress): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Se...
03/04/2020
- 11:25 PM Feature #44429 (Rejected): cephadm: make upgrade work with 'packaged' mode
- In order to upgrade when the cephadm binary is installed via a package, mgr/cephadm needs to update the cephadm packa...
- 07:34 PM Bug #44273 (Can't reproduce): Getting "stray daemon osd.3 on host admin not managed by cephadm" o...
- not getting this anymore
- 11:27 AM Feature #44414 (Resolved): bubble up errors during 'apply' phase to 'cluster warnings'
- Since we moved to a fully declarative approach which handles most of the deployment in the background (k8-like) it be...
- 01:54 AM Bug #44390 (Resolved): cephadm: fail to create daemons
03/03/2020
- 08:47 PM Bug #44401 (In Progress): cephadm: check host performed every time through serve loop
- 08:41 PM Bug #44401 (Resolved): cephadm: check host performed every time through serve loop
- This should only check every N seconds (say, 10 minutes)
- 08:42 PM Feature #44402 (Resolved): cephadm: more complete smoke test that can be run with vstart
- Frequently we are making (mgr/cephadm and cephadm) code changes and are developing against vstart. It would be nice ...
- 08:33 PM Feature #44205 (Resolved): cephadm: push/apply config.yml
- 06:20 PM Bug #44397 (Resolved): cephadm: make rgw daemons avoid the same host
- A test verifying this behavior was removed; check the history for test_rgw_update_fail
- 11:10 AM Bug #44392 (Fix Under Review): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output ...
- 10:07 AM Bug #44392 (Resolved): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Servi...
- A new column `SPEC` was added in PR https://github.com/ceph/ceph/pull/33553.
And PLACEMENT field was added in PR htt...
- 03:44 AM Bug #44390 (Fix Under Review): cephadm: fail to create daemons
- 03:38 AM Bug #44390 (In Progress): cephadm: fail to create daemons
- Might be a regression from https://github.com/ceph/ceph/pull/33658/files#diff-8b586ec9c3ad3e8421a8858888f7ddf0R2067.
- 03:36 AM Bug #44390 (Resolved): cephadm: fail to create daemons
- I hit this error when creating OSDs:...
03/02/2020
- 05:35 PM Cleanup #44379 (Won't Fix): orchestrator: {to,from}_json inconsistent
- sometimes to_json returns a dict (that can be fed to json.dumps), sometimes it returns the JSON string. We should be...