Activity
From 03/08/2020 to 04/06/2020
04/06/2020
- 11:18 PM Bug #44965 (Resolved): cephadm: git archive from ci fails
- http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-06_21:21:01-rados-wip-mgfritch-testing-2020-04-06-1246-distro-ba...
- 03:06 PM Backport #44893 (In Progress): octopus: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 11:18 AM Bug #44950: OSDSpec: Reserving storage on db_devices
- This can be achieved using the `slots` option of ceph-volume.
Unfortunately the slots option for wal/db (taken fro...
- 10:37 AM Bug #44950 (Duplicate): OSDSpec: Reserving storage on db_devices
- I'm trying to setup a new Ceph cluster using cephadm.
To save costs I've gotten four OSD servers with only a hand...
- 10:38 AM Backport #44710: octopus: doc/cephadm: replace `osd create` with `apply osd`
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34355
m...
- 10:28 AM Bug #44934 (Fix Under Review): cephadm RGW: scary remove-deploy loop
04/04/2020
- 06:22 AM Bug #44823 (Won't Fix): cephadm: tries to parse arguments command passed to shell
- Sebastian Wagner wrote:
> right. should be
>
> [...]
Ah, that is right. Thanks!
04/03/2020
- 07:16 PM Bug #44894 (Pending Backport): cephadm: non-ceph units put wrong uid:gid in systemd unit file
- 03:59 PM Bug #44934: cephadm RGW: scary remove-deploy loop
- Caught the intermittent container (at the very bottom):...
- 03:19 PM Bug #44934 (Resolved): cephadm RGW: scary remove-deploy loop
- ...
- 11:40 AM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Commands used to create the object gateway:...
- 11:26 AM Bug #44926 (Resolved): dashboard: creating a new bucket causes InvalidLocationConstraint
- In Octopus I created an object gateway as mentioned in https://ceph.io/ceph-management/introducing-cephadm/ chapter "...
- 10:51 AM Bug #44692 (Resolved): doc/cephadm: replace `osd create` with `apply osd`
- 10:51 AM Backport #44710 (Resolved): octopus: doc/cephadm: replace `osd create` with `apply osd`
- 10:47 AM Feature #44925 (In Progress): Please give cephadm a --version option
- Figuring out which version of cephadm a given environment is running seems needlessly difficult:...
- 09:17 AM Feature #43690: cephadm: service resource limits
- Rook actually recommends *not* setting resource limits for Ceph daemons, as all daemons are critical for the cluster....
- 09:10 AM Feature #44919 (Resolved): cephadm MONSpec: special dir for the mon store?
- Rook supports a dataDirHostPath to set the data dir for the mons. Do we need this as well? Is it worth it?
e.g. for p...
- 04:02 AM Backport #44845: octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- https://github.com/ceph/ceph/pull/34350
- 04:02 AM Backport #44845 (Need More Info): octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
04/02/2020
- 11:37 PM Bug #44894 (Fix Under Review): cephadm: non-ceph units put wrong uid:gid in systemd unit file
- 06:01 PM Bug #44894: cephadm: non-ceph units put wrong uid:gid in systemd unit file
- When a node-exporter deploy runs, the generated systemd unit file puts 65534:65534 for /var/run/ceph/$fsid
- 02:31 PM Bug #44598 (Resolved): cephadm: Traceback, if Python 3 is not installed on remote host
- 02:18 PM Bug #44609 (Fix Under Review): cephadm: grafana: cert problem prevents dashboard integration
- 11:04 AM Bug #44910 (Rejected): cephadm: PlacementSpec host1:192.168.0.2,host1:192.168.0.2
- Make sure...
- 10:58 AM Bug #44909 (Can't reproduce): cephadm on Debian Buster: journald logs are empty
- * Debian Buster
* docker...
- 10:07 AM Documentation #44905 (Resolved): cephadm troubleshooting SSH errors
- ...
- 10:02 AM Documentation #43834 (Can't reproduce): cephadm: some command only support `hosts`. make sure use...
- close for now.
- 09:58 AM Feature #43709: mgr/rook: remove OSDs
- Design discussion:
https://github.com/rook/rook/pull/3954
- 09:39 AM Bug #44720: rook: rgw: allow realm != zone
- It seems sensible to wait for changes explained in the RGW multisite design:
https://github.com/rook/rook/pull/4520
04/01/2020
- 08:30 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- Sebastian Wagner wrote:
> prerequisite: https://github.com/ceph/ceph/pull/34091
This one has been merged.
- 08:27 PM Backport #44710 (In Progress): octopus: doc/cephadm: replace `osd create` with `apply osd`
- 06:50 PM Bug #44894 (Resolved): cephadm: non-ceph units put wrong uid:gid in systemd unit file
- ...
- 06:42 PM Bug #44832: cephadm: `ceph cephadm generate-key` fails with No such file or directory: '/tmp/...
- Here is the Debug Log:...
- 05:04 PM Backport #44893 (Resolved): octopus: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- https://github.com/ceph/ceph/pull/34423
- 03:59 PM Bug #44820 (Pending Backport): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 02:36 PM Bug #44887 (Fix Under Review): cephadm: Simplify mounting of host dirs into prom/node-exporter
- 02:10 PM Bug #44887 (Rejected): cephadm: Simplify mounting of host dirs into prom/node-exporter
- With https://github.com/ceph/ceph/pull/32340 the following host directories are mounted into the prom/node-exporter c...
- 02:11 PM Bug #44888 (Resolved): Drivegroup's :limit: isn't working correctly
- Each iteration of an osd deployment deploys OSDs up to a set :limit:
Since we're deploying every $sleep_interval sec...
- 02:10 PM Feature #44886: cephadm: allow use of authenticated registry
- and then the next request will be to support untrusted registries... and so on
- 01:49 PM Feature #44886 (Resolved): cephadm: allow use of authenticated registry
- Users may need to use an authenticated registry, e.g. in air-gapped deployments.
We could punt and require that th...
- 02:06 PM Feature #44556 (Fix Under Review): cephadm: preview drivegroups
- 10:15 AM Bug #44777 (Resolved): podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to ...
- 10:04 AM Bug #44876 (New): mgr/rook: track minor upgrades
- ...
- 09:59 AM Feature #44875 (Resolved): mgr/rook: PlacementSpec to K8s POD scheduling conversion
- https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#affinity-v1-core
orch -> k8s...
- 09:53 AM Feature #41239: mgr/rook: support creating OSDs on Persistent Volumes
- Some thoughts:
* OSDSpec specifies an existing StorageClass, num OSDs per node, ...?
* mgr/rook triggers osd crea...
- 09:50 AM Feature #44874 (Rejected): cephadm: add Filestore support
- If someone wants to work on it:
* Has to mount the filesystem. ceph-ansible uses a complex bind-mount for it.
* c...
- 09:46 AM Feature #44873 (Resolved): cephadm bootstrap: add --apply-spec <cluster.yaml>
- to truly have a single command when setting up a cluster for Day 1
Requirements:
h3. Adding hosts
* YAML spe...
- 08:28 AM Bug #44769 (In Progress): cephadm doesn't reuse osd_id of 'destroyed' osds
03/31/2020
- 07:58 PM Feature #43687: cephadm: haproxy (or lb)
- Deploy and configure haproxy with cephadm; configure a service/lb with kubernetes/rook. Can we generalize these into...
- 07:54 PM Feature #44869 (Resolved): cephadm: automatic auth key rotation
- This is about periodically deploying new keys for daemons and clients. This is a bit more involved as we need to make...
- 04:21 PM Documentation #44867 (Rejected): cephadm: document "package" mode
- ...
- 04:07 PM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
- Let's say, someone does:...
- 03:45 PM Feature #44864 (New): cephadm: garbage collect old container images
- cephadm: garbage collect old container images
- 01:39 PM Bug #44820: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- Copying to fix in c-v as well.
- 01:18 PM Bug #44820 (Fix Under Review): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- 10:00 AM Backport #44845 (Resolved): octopus: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 09:54 AM Bug #44823 (Triaged): cephadm: tries to parse arguments command passed to shell
- 09:54 AM Bug #44823: cephadm: tries to parse arguments command passed to shell
- right. should be ...
- 12:32 AM Bug #44823 (Won't Fix): cephadm: tries to parse arguments command passed to shell
- ...
- 09:49 AM Bug #44832 (Resolved): cephadm: `ceph cephadm generate-key` fails with No such file or directory:...
- ...
- 09:43 AM Bug #44810 (Won't Fix): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- According to Kai, this was a false alarm.
- 09:42 AM Bug #44830 (Duplicate): cephadm bootstrap: improve error message, if `host add` fails
- ...
- 09:07 AM Documentation #44828 (Resolved): cephadm: clarify "Failed to infer CIDR network for mon ip"
- ...
- 03:41 AM Bug #44826 (Resolved): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring provided"
- ...
- 12:36 AM Bug #44825 (Rejected): cephadm: bootstrap is not idempotent
- It would be helpful if this command did nothing if the cluster is already bootstrapped. This would simplify ansible r...
- 12:35 AM Bug #44824 (Resolved): cephadm: adding osd device is not idempotent
- ...
03/30/2020
- 11:28 PM Bug #44820: racey concurrent ceph-volume call: KeyError: 'ceph.type'
- the problem seems to be a racing invocation of inventory and prepare:...
- 08:29 PM Bug #44820 (Resolved): racey concurrent ceph-volume call: KeyError: 'ceph.type'
- ...
- 09:01 PM Bug #44810 (Fix Under Review): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- 05:28 PM Bug #44810 (Need More Info): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- 11:09 AM Bug #44810 (Won't Fix): cephadm: chmod /etc/ceph/ceph.pub should be set to 0600
- cephadm bootstrap creates @/etc/ceph/ceph.pub@ with wrong permissions
- 08:23 PM Bug #44669 (Resolved): cephadm: rm-cluster should clean up /etc/ceph
- 06:08 PM Feature #44305 (Resolved): mgr/cephadm: Add support for removing MONs
- 06:08 PM Bug #44039 (Rejected): bin/cephadm: Remove --allow-fqdn-hostname
- seems that this might be a valid config!
- 06:07 PM Feature #44287: cephadm: Graceful Shutdown of the Whole Ceph Cluster
- open Q: how do we shut down the mons after we have already shut down all mgrs?
- 05:53 PM Bug #44758 (Fix Under Review): Drive Groups: limit:1 does not imply all:true
- 05:35 PM Feature #43708 (Resolved): mgr/rook: Blink enclosure LED
- 05:35 PM Feature #43696: cephadm: check that units start
- low, until someone complains.
- 05:33 PM Bug #44739 (Need More Info): ceph.conf parameters set via "cephadm bootstrap -c" are not persiste...
- Interesting. The code there is rather old: https://github.com/ceph/ceph/blame/master/src/cephadm/cephadm#L2115-L2124
...
- 05:29 PM Documentation #44716 (Fix Under Review): orchestrator/cephadm: document ceph orch apply -i -
- 01:01 PM Feature #44556: cephadm: preview drivegroups
- Shortened this discussion with a f2f talk with Sebastian. Here are the results:
Keep the old syntax but expand with...
- 10:45 AM Feature #44556: cephadm: preview drivegroups
- > I guess we want to have the following features:
>
> 1) preview a service if the spec is already applied (useful ...
- 08:31 AM Feature #44556: cephadm: preview drivegroups
- I guess we want to have the following features:
1) preview a service if the spec is already applied (useful when h...
- 10:37 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- I do not yet understand why https://github.com/ceph/ceph/pull/34260 fixes this issue.
- 10:21 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- ...
- 09:16 AM Bug #44777 (Fix Under Review): podman: stat /usr/bin/ceph-mon: no such file or directory, then un...
- 10:33 AM Bug #44642: cephadm: mgr dump might be too huge
- Backported to octopus by https://github.com/ceph/ceph/pull/34258
03/29/2020
- 10:38 PM Bug #44642 (Resolved): cephadm: mgr dump might be too huge
- 12:17 PM Bug #44642 (Pending Backport): cephadm: mgr dump might be too huge
03/28/2020
- 04:24 PM Feature #44599 (Resolved): cephadm: check-host: Returns only a single problem
- 09:59 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- Fascinating!
- 12:53 AM Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove cont...
- not sure why, but I'm pretty sure that https://github.com/ceph/ceph/pull/34091 is responsible for the regression.
...
03/27/2020
- 10:10 PM Bug #44792 (Resolved): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- /etc/ceph/ceph.conf is often used by tools in the ceph ecosystem. We should provide a mechanism to keep this up to dat...
- 08:58 PM Bug #44598 (Fix Under Review): cephadm: Traceback, if Python 3 is not installed on remote host
- 06:47 PM Bug #44598 (In Progress): cephadm: Traceback, if Python 3 is not installed on remote host
- 02:53 PM Feature #44556: cephadm: preview drivegroups
- When we want to have preview functionality for other components as well, we should generalize the CLI a bit more.
...
- 12:37 PM Bug #44781 (Fix Under Review): cephadm: monitoring: root volume alert doesn't work in container
- 10:19 AM Bug #44781 (Resolved): cephadm: monitoring: root volume alert doesn't work in container
- This is due to the root filesystem being mapped inside the container as `/rootfs` but the Prometheus alert checking ...
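Such an alert has to match the in-container path rather than "/". A minimal sketch of what a corrected rule could look like (hypothetical rule and alert names, not the actual shipped alert):

```yaml
# Hypothetical Prometheus alerting rule sketch: inside the node-exporter
# container the host root filesystem appears at /rootfs, so the mountpoint
# label must match that path, not "/".
groups:
  - name: node
    rules:
      - alert: RootVolumeFull
        expr: >-
          node_filesystem_avail_bytes{mountpoint="/rootfs"}
          / node_filesystem_size_bytes{mountpoint="/rootfs"} < 0.05
        for: 5m
```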
- 03:58 AM Bug #44642: cephadm: mgr dump might be too huge
- Same, @cephadm shell -- ceph mgr dump@ seems perfectly happy for me too. So it's only a problem during bootstrap some...
03/26/2020
- 10:56 PM Bug #44777 (Resolved): podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to ...
- the mon.b unit:...
- 08:43 PM Feature #43677 (Resolved): monitoring: create rpm for alerts rules also for centos
- 08:42 PM Documentation #43672 (Resolved): doc: point release upgrades
- 08:37 PM Bug #44559 (Need More Info): cephadm logs an invalid stat command
- 08:36 PM Feature #43708 (Pending Backport): mgr/rook: Blink enclosure LED
- backport https://github.com/ceph/ceph/pull/34199
- 08:35 PM Bug #44603: cephadm: `ls --refresh` shows Tracebacks in the log
- prio low, till someone complains
- 08:32 PM Bug #44699: cephadm: removing services leaves configs behind
- which config? things in the mon store?
- 08:27 PM Bug #44609 (In Progress): cephadm: grafana: cert problem prevents dashboard integration
- 08:27 PM Bug #44513 (Resolved): mgr/cephadm: `orch ps --refresh` returns no results
- 02:17 AM Bug #44513 (Pending Backport): mgr/cephadm: `orch ps --refresh` returns no results
- https://github.com/ceph/ceph/pull/34190
- 08:26 PM Bug #44608 (Resolved): cephadm: grafana: bound to 127.0.0.1
- 02:18 AM Bug #44608 (Pending Backport): cephadm: grafana: bound to 127.0.0.1
- backport https://github.com/ceph/ceph/pull/34191
- 08:26 PM Bug #43890 (Resolved): cephadm: default hardcoded to non-ceph dockerhub
- 08:25 PM Feature #44775 (Resolved): cephadm: NFS stage 2
- * Teuthology integration
* cephadm adopt
* make container upgrades work
- 08:22 PM Feature #44718 (Resolved): NFS ganesha (mgr/cephadm)
- 03:24 PM Bug #44642: cephadm: mgr dump might be too huge
- ...
- 02:47 PM Bug #44642: cephadm: mgr dump might be too huge
- ...
- 11:33 AM Bug #44642: cephadm: mgr dump might be too huge
- Just ran another loop of ten with https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm and docker.io/ceph/cep...
- 11:24 AM Bug #44642: cephadm: mgr dump might be too huge
- Ten runs using cephadm from https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm (so the container image is d...
- 11:01 AM Bug #44642: cephadm: mgr dump might be too huge
- To try to obtain some further clarity, I ran this:
@for n in $(seq 1 10); do sleep 5; systemctl stop ceph.target ;...
- 08:15 AM Bug #44642: cephadm: mgr dump might be too huge
- OK, I've now seen it work at least once without that patch applied, and I've also seen it fail at least once without ...
- 03:35 AM Bug #44642: cephadm: mgr dump might be too huge
- Sebastian Wagner wrote:
> interesting. does it work when using https://github.com/ceph/ceph/pull/34031 ?
Yeah, it...
- 01:32 PM Bug #44602 (Fix Under Review): cephadm: `orch ls` shows daemons as online, despite host is down
- 12:57 PM Feature #44556: cephadm: preview drivegroups
- This is the output of:...
- 10:16 AM Feature #44556 (In Progress): cephadm: preview drivegroups
- 10:53 AM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- https://github.com/ceph/ceph/compare/master...sebastian-philipp:cephadm-add-ipv6-routes?expand=1
- 10:15 AM Bug #44769 (Resolved): cephadm doesn't reuse osd_id of 'destroyed' osds
- The replacement operation is supposed to work like this:...
- 09:27 AM Documentation #44768 (Rejected): cephadm: document allow_ptrace true
- ...
- 09:18 AM Bug #44729: cephadm enter using docker is broken
- hm. Jan, close as "Can't reproduce"?
03/25/2020
- 11:57 PM Backport #43994 (Rejected): luminous: ceph orchestrator rgw rm: no valid command found
- 11:57 PM Backport #43993 (Rejected): mimic: ceph orchestrator rgw rm: no valid command found
- 09:42 PM Bug #43816 (Pending Backport): cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 03:16 PM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- > This command produced the following error:
> ...
- 05:34 PM Bug #44758 (Resolved): Drive Groups: limit:1 does not imply all:true
- This drive group:...
- 05:29 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- prerequisite: https://github.com/ceph/ceph/pull/34091
- 05:28 PM Bug #44673: cephadm: `orch apply` and `orch daemon add` use completely different code path
- I'm already getting bug reports, like...
- 04:23 PM Bug #44642: cephadm: mgr dump might be too huge
- interesting. does it work when using https://github.com/ceph/ceph/pull/34031 ?
- 08:26 AM Bug #44642: cephadm: mgr dump might be too huge
- I can't help but think this is somehow related to the hangs we're getting after ~100k output when podman is run via s...
- 04:12 PM Bug #44513: mgr/cephadm: `orch ps --refresh` returns no results
- Confirmed PR 34182 fixes this.
Saw a similar thing with 3 hosts and only the last host was shown during refresh:
...
- 03:41 PM Bug #44513: mgr/cephadm: `orch ps --refresh` returns no results
- (pretty sure i'm fixing the same bug... it would happen if you had 2 hosts in your test cluster above and the last on...
- 03:40 PM Bug #44513 (Fix Under Review): mgr/cephadm: `orch ps --refresh` returns no results
- 03:44 PM Bug #44729 (Need More Info): cephadm enter using docker is broken
- It works for me......
- 03:35 PM Bug #44608 (Fix Under Review): cephadm: grafana: bound to 127.0.0.1
- 03:21 PM Bug #44756 (Resolved): drivegroups: replacement op will ignore existing wal/dbs
- Since the db/wal is considered "locked/non-available" by ceph-volume after the first deployment, the DriveGroup algor...
- 12:11 PM Bug #44747: orch: `ceph orch ls --service_type` is broken
- This actually looks random. In the same session:...
- 12:07 PM Bug #44747 (Can't reproduce): orch: `ceph orch ls --service_type` is broken
- ...
- 11:52 AM Bug #44746 (Closed): cephadm: vstart.sh --cephadm: don't deploy crash by default
- as it's not part of the normal cluster, it's getting left behind and stays running. ...
- 09:34 AM Bug #44739 (Can't reproduce): ceph.conf parameters set via "cephadm bootstrap -c" are not persist...
- Now, when I set e.g. "osd crush chooseleaf type = 0" via "cephadm bootstrap -c", the initial CRUSH map has failure do...
- 09:32 AM Bug #44738: drivegroups/cephadm: db_devices don't get applied correctly when using "paths"
- -It seems that db_devices are ignored whenever "paths" is used in the "data_devices" section.-
Ignore that.
- 09:15 AM Bug #44738 (Won't Fix): drivegroups/cephadm: db_devices don't get applied correctly when using "p...
- ...
03/24/2020
- 10:05 PM Bug #44642: cephadm: mgr dump might be too huge
- > Do you know why the line "j = json.loads(out)" is choking on the integer value sent by "ceph mgr dump"?
Now I se...
- 07:38 PM Bug #44669 (Fix Under Review): cephadm: rm-cluster should clean up /etc/ceph
- 02:35 PM Bug #44669 (In Progress): cephadm: rm-cluster should clean up /etc/ceph
- 02:57 PM Bug #44729: cephadm enter using docker is broken
- ls works though...
- 02:56 PM Bug #44729 (Can't reproduce): cephadm enter using docker is broken
- ...
03/23/2020
- 04:16 PM Bug #44720 (Need More Info): rook: rgw: allow realm != zone
- 04:16 PM Bug #44719 (New): rook: align rgw client names with orch and cephadm
- client.rgw.$realm.$zone[.$id]
- 04:11 PM Feature #44718 (Fix Under Review): NFS ganesha (mgr/cephadm)
- 04:10 PM Feature #44718 (Resolved): NFS ganesha (mgr/cephadm)
- mgr/cephadm
- 04:10 PM Feature #43688 (Resolved): NFS ganesha
- 02:36 PM Bug #44701 (Resolved): ganesha selinux denial
- 01:56 PM Documentation #44716 (Resolved): orchestrator/cephadm: document ceph orch apply -i -
- ...
- 12:16 PM Backport #44710 (Resolved): octopus: doc/cephadm: replace `osd create` with `apply osd`
- https://github.com/ceph/ceph/pull/34355
- 08:33 AM Bug #44642 (Fix Under Review): cephadm: mgr dump might be too huge
- 08:31 AM Bug #44642: cephadm: mgr dump might be too huge
- I don't know what caused this. Might actually be an artifact of our podman hang. prio=low for now.
edit: oh, you c...
03/20/2020
- 10:45 PM Bug #44642: cephadm: mgr dump might be too huge
- Now, with both cephadm and container at 15.1.1-168-g06ecd31e39 I am seeing "cephadm bootstrap" fail on "ceph mgr dump...
- 06:48 PM Bug #44701 (Resolved): ganesha selinux denial
- ...
- 06:28 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- yeah, I also don't like to create a new dependency from the dashboard to cephadm
- 05:08 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- I'm inclined to just open both, because the dashboard might move between ssl and not ssl. otherwise we need to make t...
- 05:10 PM Feature #44576 (Resolved): cephadm: Restart Prometheus, if a new node_exporter or alertmanager is...
- This already works.
- 05:05 PM Bug #44669: cephadm: rm-cluster should clean up /etc/ceph
- What should the behavior here be? Check if the /etc/ceph config has the same fsid, and if so, remove it + the keyrin...
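The proposed check could be sketched roughly like this (a hypothetical helper for illustration, not the actual cephadm code): delete the config and admin keyring only when the fsid in ceph.conf matches the cluster being removed.

```shell
# Hypothetical sketch of the fsid guard discussed above (not actual cephadm
# code): only remove /etc/ceph files that belong to the cluster being deleted.
cleanup_etc_ceph() {
    local fsid="$1" dir="${2:-/etc/ceph}"
    # grep -qs: quiet match, and no error if ceph.conf does not exist
    if grep -qs "fsid = ${fsid}" "${dir}/ceph.conf"; then
        rm -f "${dir}/ceph.conf" "${dir}/ceph.client.admin.keyring"
    fi
}
```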
- 05:04 PM Bug #44699 (Closed): cephadm: removing services leaves configs behind
- Some of the configs are created by cephadm itself. The user might have created some too, but the config history will...
- 05:02 PM Bug #44698 (Duplicate): cephadm: removing daemons leaves auth keys behind
- 02:19 PM Feature #43839 (Fix Under Review): enhance `host ls`
- 01:39 PM Feature #43839 (In Progress): enhance `host ls`
- 12:13 PM Bug #44692 (Pending Backport): doc/cephadm: replace `osd create` with `apply osd`
- 11:38 AM Bug #44692 (Fix Under Review): doc/cephadm: replace `osd create` with `apply osd`
- 11:33 AM Bug #44692 (Resolved): doc/cephadm: replace `osd create` with `apply osd`
- 12:06 PM Feature #43689 (Fix Under Review): cephadm: iscsi
- 11:43 AM Bug #43890 (Fix Under Review): cephadm: default hardcoded to non-ceph dockerhub
03/19/2020
- 07:19 PM Bug #44615 (Resolved): cephadm: reconfig of removed daemon
- 02:47 PM Feature #44599 (Fix Under Review): cephadm: check-host: Returns only a single problem
- 09:28 AM Cleanup #44676 (Resolved): cephadm: Replace execnet (and remoto)
- [[https://github.com/pytest-dev/execnet]] is in maintenance mode. ...
03/18/2020
- 11:55 PM Bug #44673 (Rejected): cephadm: `orch apply` and `orch daemon add` use completely different code ...
- ... which is not obvious to users, and they will use these interchangeably, which is not really a good idea.
We sho...
- 10:43 PM Feature #44622 (Resolved): orch daemon add -i spec.yaml
- 03:16 PM Bug #44642 (In Progress): cephadm: mgr dump might be too huge
- 02:05 PM Bug #44642 (New): cephadm: mgr dump might be too huge
- 01:48 PM Bug #44669 (Resolved): cephadm: rm-cluster should clean up /etc/ceph
- ...
- 01:14 PM Bug #44401 (Resolved): cephadm: check host performed every time through serve loop
- 01:14 PM Bug #44607 (Resolved): cephadm: apply(): Traceback, if host doesn't exist
03/17/2020
- 04:12 PM Bug #44642 (Rejected): cephadm: mgr dump might be too huge
- seems to be a downstream issue.
- 02:51 PM Bug #44642 (Resolved): cephadm: mgr dump might be too huge
- ...
- 04:11 PM Feature #44599 (In Progress): cephadm: check-host: Returns only a single problem
- 04:08 PM Feature #44599 (Rejected): cephadm: check-host: Returns only a single problem
- 03:30 PM Bug #44644 (Closed): cephadm: RGW: updating the spec doesn't update the mon store
- when creating RGW running...
- 03:20 PM Backport #43993: mimic: ceph orchestrator rgw rm: no valid command found
- As `ceph orchestrator rgw rm` doesn't exist for mimic, what about just closing this?
- 02:25 PM Bug #44607 (Fix Under Review): cephadm: apply(): Traceback, if host doesn't exist
- 12:46 PM Feature #44622 (Fix Under Review): orch daemon add -i spec.yaml
- 10:21 AM Feature #44622 (In Progress): orch daemon add -i spec.yaml
03/16/2020
- 05:43 PM Bug #44629 (Can't reproduce): cephadm: prometheus: graph queries are not working correctly
- graph queries are not working correctly. The use of instance and
exported_instance needs some investigation. On the ...
- 05:41 PM Feature #44628 (Resolved): cephadm: Add initial firewall management to cephadm
- we open both 8080 and 8443 for dashboard even when the default is
https. We should probably do one or the other, not...
- 05:30 PM Documentation #44600 (Resolved): cephadm: use ssh-copy-id
- 04:08 PM Bug #44597 (Resolved): cephadm: Traceback, if ssh key is not on the remote host
- 02:26 PM Feature #44625 (Resolved): cephadm: test dmcrypt
- we need to verify it.
- 01:18 PM Feature #44622 (Resolved): orch daemon add -i spec.yaml
03/14/2020
03/13/2020
- 05:00 PM Bug #44609 (Resolved): cephadm: grafana: cert problem prevents dashboard integration
- SSL cert problem prevents embedding out of the box.
Is the problem that ssl_verify is true by default? or that we...
- 04:59 PM Bug #44608 (Resolved): cephadm: grafana: bound to 127.0.0.1
- after deploying I noticed that it was bound to 127.0.0.1, which blocks
client access from other machines. Should thi...
- 04:56 PM Bug #44607 (Resolved): cephadm: apply(): Traceback, if host doesn't exist
- when deploying a daemon, with a host for placement - if the host doesn't
exist you get a traceback. This scenario sh...
- 04:55 PM Feature #44606 (Resolved): cephadm: RGW firewall + static port
- how is the firewall being handled? AFAIK, the port is a parameter on
the rgw_frontend setting, so it could be un...
- 04:29 PM Bug #44604 (Can't reproduce): cephadm: RGW: missing spec / mon store validation
- should the deployment of rgw first check the presence of a minimum set
of params defined in the config store - if no...
- 04:25 PM Bug #44603 (Rejected): cephadm: `ls --refresh` shows Tracebacks in the log
- With a host down that had daemons deployed, a --refresh shows tracebacks in the mgr log from the failed connect attem...
- 04:23 PM Bug #44602 (Resolved): cephadm: `orch ls` shows daemons as online, despite host is down
- With a host down that had daemons deployed:
ceph orch ls didn't show services as affected even after a --refresh i...
- 04:20 PM Feature #44601 (New): cephadm: Mix of hosts: with and without firewall
- We allow a mix of hosts that either have a firewall or not. I think this
should be part of the checks - either all hos...
- 04:14 PM Documentation #44600 (Resolved): cephadm: use ssh-copy-id
- adding a new host:
Passing the ceph.pub key to new hosts could use the...
- 04:13 PM Feature #44599 (Resolved): cephadm: check-host: Returns only a single problem
- Adding a host:
If checks fail, they show one at a time, forcing the admin to repeat
the command to get past eac...
- 04:11 PM Bug #44598 (Resolved): cephadm: Traceback, if Python 3 is not installed on remote host
- Adding a host:
if python3 isn't on the target, you get a traceback with OSError:
cannot send(already closed?) err...
- 04:10 PM Bug #44597 (Resolved): cephadm: Traceback, if ssh key is not on the remote host
- Adding a host:
if the ssh key isn't on the new target you hit a traceback - which doesn't inspire confidence.
- 03:58 PM Feature #44402 (Resolved): cephadm: more complete smoke test that can be run with vstart
- 03:56 PM Feature #44581 (Resolved): cephadm pause and cephadm resume
- 03:55 PM Bug #44569 (Resolved): NotImplementedError not caught
- 03:23 PM Cleanup #44379 (Won't Fix): orchestrator: {to,from}_json inconsistent
- not worth the effort right now.
- 02:53 PM Documentation #44284: cephadm: provide a way to modify the initial crushmap
- Per our discussion today, using `cephadm bootstrap -c /root/ceph.conf` is the correct way to set initial crushmap or ...
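A minimal sketch of such a file (assuming the chooseleaf setting from Bug #44739 elsewhere in this log as the example option; not a complete config):

```ini
# Hypothetical minimal ceph.conf to pass via `cephadm bootstrap -c /root/ceph.conf`,
# so the option is present in the initial cluster configuration.
[global]
osd crush chooseleaf type = 0
```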
- 02:55 AM Bug #44587 (New): failed to write <pid> to cgroup.procs:
- ...
03/12/2020
- 05:28 PM Bug #44440 (Resolved): cephadm should be able to infer running container
- 02:44 PM Feature #44581 (Resolved): cephadm pause and cephadm resume
- if the serve() thread is in a loop breaking all your daemons, people will want to pause it.
- 12:52 PM Feature #44578 (Rejected): cephadm: verify Grafana works with Prometheus HA
- Is Grafana correctly configured when a Prometheus instance is added, for example:
* Is HA working in the Grafana d...
- 12:51 PM Bug #44577 (Closed): cephadm: reconfigure Prometheus on MGR failover
- we have to make sure Prometheus knows the new prometheus exporter endpoint:
* Generate a new prometheus config po...
- 12:44 PM Feature #44576 (Resolved): cephadm: Restart Prometheus, if a new node_exporter or alertmanager is...
- Prometheus needs to know the new targets / configuration
- 12:37 PM Bug #37514 (Can't reproduce): mgr CLI commands block one another (indefinitely if the orchestrato...
- CLI commands should now respond swiftly. (cephadm and rook)
- 12:36 PM Feature #39093 (Rejected): mgr/orchestrator: add `ceph orchestrator wait`
- out of scope for now.
- 12:33 PM Feature #43705: cephadm: on config change, restart appropriate daemons
- partially: https://github.com/ceph/ceph/pull/33855
- 12:28 PM Feature #43839 (New): enhance `host ls`
- 12:19 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- Which means we have to track which nodes are scanned and bail out if we don't have the inventory yet?
- 12:15 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- new workaround: https://github.com/ceph/ceph-salt/pull/109
- 12:10 PM Bug #44559: cephadm logs an invalid stat command
- just to clarify, ...
- 12:07 PM Bug #44569 (Fix Under Review): NotImplementedError not caught
03/11/2020
- 08:21 PM Bug #44569 (Resolved): NotImplementedError not caught
- with cephadm for example,...
- 08:21 PM Feature #43694 (Resolved): cephadm: flag dashboard user to change password
- 02:57 PM Bug #44559 (New): cephadm logs an invalid stat command
- 02:30 PM Bug #44559: cephadm logs an invalid stat command
- Thanks Kris - updated the bug description.
- 12:06 PM Bug #44559: cephadm logs an invalid stat command
- Shouldn't that be...
- 11:50 AM Bug #44559 (Fix Under Review): cephadm logs an invalid stat command
- 11:46 AM Bug #44559 (Can't reproduce): cephadm logs an invalid stat command
- When I run "cephadm bootstrap", I see the following in the log:...
- 02:52 PM Bug #44272 (Resolved): on SUSE, crash daemon starts but then always stops a couple minutes later
- 11:17 AM Bug #44557 (Resolved): cephadm: error on run-tox-cephadm test
- 09:14 AM Bug #44557 (Fix Under Review): cephadm: error on run-tox-cephadm test
- 08:19 AM Bug #44557 (Resolved): cephadm: error on run-tox-cephadm test
- run-tox-cephadm test fails with:...
- 09:56 AM Backport #43994 (Need More Info): luminous: ceph orchestrator rgw rm: no valid command found
- The mimic backport attempt was closed; presuming it is non-trivial.
- 08:05 AM Feature #44556 (Resolved): cephadm: preview drivegroups
- The osd deployment in cephadm happens async in the background.
When using drivegroups, it may not always be clear...
03/10/2020
- 10:19 PM Bug #44397 (Resolved): cephadm: make rgw daemons avoid the same host
- 12:59 PM Bug #44397 (Fix Under Review): cephadm: make rgw daemons avoid the same host
- 12:43 PM Bug #44397: cephadm: make rgw daemons avoid the same host
- https://github.com/ceph/ceph/commit/8330d2f2bd2bb9325ac48accedfecd6dfaab8697
- 09:27 PM Bug #44512 (Resolved): mgr/cephadm: `orch ls` doesn't obey filters
- 07:59 AM Bug #44512 (Fix Under Review): mgr/cephadm: `orch ls` doesn't obey filters
- 08:11 PM Bug #44401 (Fix Under Review): cephadm: check host performed every time through serve loop
- 04:14 PM Backport #43993 (Need More Info): mimic: ceph orchestrator rgw rm: no valid command found
- first attempted backport - https://github.com/ceph/ceph/pull/33159 - was closed
- 03:29 PM Feature #44548 (Resolved): cephadm: persist osd removal queue
- cephadm and the corresponding osd_support module currently don't save state of osds that are queued to be removed, he...
- 12:01 PM Feature #43699 (Resolved): mgr/cephadm: osd rm must validate before deletion
- 12:00 PM Feature #43693 (Resolved): cephadm: replace OSDs
- 11:54 AM Bug #44272 (Fix Under Review): on SUSE, crash daemon starts but then always stops a couple minute...
- 11:45 AM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- from dmesg:...
- 11:41 AM Feature #44402: cephadm: more complete smoke test that can be run with vstart
- fixed via https://github.com/ceph/ceph/pull/33730 or is there something else missing?
- 10:42 AM Cleanup #44379: orchestrator: {to,from}_json inconsistent
- {to,from}_json should not accept strings and should instead always accept/return dicts or lists.
- 03:42 AM Bug #44526 (Resolved): sporadic cephadm bootstrap failures: 'timed out'
03/09/2020
- 09:52 PM Feature #43962 (Resolved): cephadm: Make mgr/cephadm declarative
- 06:14 PM Bug #44440 (In Progress): cephadm should be able to infer running container
- 05:27 PM Bug #44526 (Fix Under Review): sporadic cephadm bootstrap failures: 'timed out'
- 05:27 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- I think the fundamental problem here is how ceph.in is using librados. One thread is trying to do some work, which i...
- 04:12 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- ceph.in sets a short 5s timeout for -h, and that's triggering shutdown, but then ceph isn't cleanly stopping...
<p...
- 03:55 PM Bug #44526 (Resolved): sporadic cephadm bootstrap failures: 'timed out'
- ...
- 04:12 AM Bug #44513 (Resolved): mgr/cephadm: `orch ps --refresh` returns no results
- h3. Steps to reproduce
* Create services and list daemons...
- 04:05 AM Bug #44512 (Resolved): mgr/cephadm: `orch ls` doesn't obey filters
- h3. Steps to reproduce
* Create a service, e.g. mgr
* List the service with service_type filter, say `osd`. The r...
03/08/2020
- 10:30 PM Bug #44253 (Resolved): _apply_service should move services, not just expand/contract
- 10:30 PM Bug #44254 (Resolved): scheduler should prefer existing daemon locations
- 10:30 PM Bug #44392 (Resolved): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Servi...
- 10:30 PM Bug #44491 (Resolved): mgr/cephadm: fail to load service specs after restarting
- 10:29 PM Bug #44167 (Resolved): cephadm/ def _update_service: Remove should make use of spec.placement.hosts
- 10:29 PM Bug #44302 (Resolved): cehpadm: apply_mon: NotImplementedError
Also available in: Atom