Activity
From 11/07/2021 to 12/06/2021
12/06/2021
- 10:58 PM Bug #53365 (Resolved): pacific: broken groups or modules: container-tools:3.0
- 08:44 PM Bug #53365: pacific: broken groups or modules: container-tools:3.0
- https://github.com/ceph/ceph/pull/44201 merged
- 03:39 PM Bug #53496: cephadm: list-networks swallows /128 networks, breaking the orchestrator ("Filtered o...
- Sebastian Wagner wrote:
> Want to make a PR? If yes, please add your command outputs to https://github.com/ceph/ceph...
- 01:57 PM Bug #53496: cephadm: list-networks swallows /128 networks, breaking the orchestrator ("Filtered o...
- Want to make a PR? If yes, please add your command outputs to https://github.com/ceph/ceph/blob/8c54a705e293682a8bbbd...
- 11:14 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2021-12-03_15:27:18-rados-wip-yuri11-testing-2021-12-02-1451-distro-default-smithi/6542708
- 08:32 AM Bug #53501 (Pending Backport): Exception when running 'rook' task.
- /a/yuriw-2021-12-03_15:27:18-rados-wip-yuri11-testing-2021-12-02-1451-distro-default-smithi/6542695...
12/04/2021
- 08:25 PM Bug #53496: cephadm: list-networks swallows /128 networks, breaking the orchestrator ("Filtered o...
- Applying the following tiny modification to cephadm and telling the mgr to use such patched binary (by setting the ce...
- 08:11 PM Bug #53496 (Resolved): cephadm: list-networks swallows /128 networks, breaking the orchestrator (...
- Commit 1897d1cd15af ("mgr/cephadm: update list-networks to report interface names too", backported to Pacific as 3237...
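The /128 issue in #53496 above can be illustrated with a small sketch. This is not cephadm's actual code; the `skip_host_prefixes` flag and the parsing shape are hypothetical, used only to show how normalizing addresses into networks can silently drop single-address (/128 or /32) prefixes:

```python
import ipaddress

def list_networks(addrs, skip_host_prefixes=True):
    """Group interface addresses by network. The buggy behavior
    (modeled by skip_host_prefixes=True) drops /128 and /32 host
    prefixes, so a daemon bound to such an address is filtered out."""
    nets = {}
    for a in addrs:
        iface = ipaddress.ip_interface(a)
        if skip_host_prefixes and iface.network.num_addresses == 1:
            continue  # the bug: /128 networks are swallowed here
        nets.setdefault(str(iface.network), []).append(str(iface.ip))
    return nets

# With the hypothetical fix (skip_host_prefixes=False),
# the /128 network is reported alongside the /24:
print(list_networks(["10.0.0.5/24", "fd00::5/128"], skip_host_prefixes=False))
```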
12/03/2021
- 11:58 PM Bug #53491: cephadm: 'ceph cephadm osd activate' does not activate existing, previously started OSDs
- introduced by ea987a0e56db106f7c76d11f86b3e602257f365e
- 11:29 PM Bug #53491 (Resolved): cephadm: 'ceph cephadm osd activate' does not activate existing, previousl...
- This command works great for created-but-never-started OSDs. However, if up_from is non-zero, the OSD is ignored. Se...
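The up_from filter described in #53491 can be sketched as follows. The field names (`osd_id`, `up_from`) and the helper are illustrative assumptions, not cephadm's actual data model:

```python
# Hypothetical sketch of the activation filter: select OSDs that exist
# on disk but have no running daemon. The buggy version additionally
# required up_from == 0, so any OSD that had ever been started was
# silently skipped.

def osds_to_activate(osd_metadata, running_ids):
    return [
        o["osd_id"]
        for o in osd_metadata
        if o["osd_id"] not in running_ids  # the only condition needed
        # buggy extra condition was: and o["up_from"] == 0
    ]

osds = [{"osd_id": 0, "up_from": 0}, {"osd_id": 1, "up_from": 77}]
print(osds_to_activate(osds, running_ids={0}))  # osd.1 is no longer ignored
```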
- 10:49 PM Bug #53453 (In Progress): cephadm: current agent lock setup allows extraneous agent daemon actions
- 11:44 AM Bug #51736: mgr hung forever when execute multiprocessing.pool.ThreadPool accidentally
- should be fixed in quincy
- 11:43 AM Bug #52515 (Can't reproduce): asyncssh: prepare() got an unexpected keyword argument 'config'
- 11:42 AM Bug #53130 (Pending Backport): cephadm SYSCTL_DIR path not FHS compliant
- 11:42 AM Bug #53358 (Resolved): mgr/cephadm: ssh errors too verbose and timeout too long when can't connec...
- https://github.com/ceph/ceph/pull/43880
- 11:40 AM Bug #53335 (Fix Under Review): "cephadm bootstrap --ssh-user" doesn't support non root user
- 11:37 AM Bug #52828 (Resolved): _admin label not copying automatically the keyring and ceph.conf files to ...
- Fixed by https://github.com/ceph/ceph/pull/43149
Probably fixed in 16.2.8
- 11:07 AM Bug #53452 (Can't reproduce): cli: ceph orch host ls adds extraneous strings to json output
- *thankfully* this is not yet merged: https://github.com/ceph/ceph/pull/44020#discussion_r759991712
- 10:58 AM Bug #53365 (Fix Under Review): pacific: broken groups or modules: container-tools:3.0
- 09:25 AM Bug #50524: placement spec: irritating error message if passed a string for count_per_host
- John Mulligan wrote:
> I'm interested in helping out on this issue. What's the preferred behavior when faced with an...
12/02/2021
- 02:05 PM Bug #53365: pacific: broken groups or modules: container-tools:3.0
- http://pulpito.front.sepia.ceph.com/yuriw-2021-11-28_15:46:54-rados-pacific-16.2.7_RC1-distro-default-smithi/6532056
- 11:18 AM Feature #50593: cephadm: cephfs-mirror service should enable "mgr/mirror"
- https://github.com/ceph/ceph/pull/42682 should be a good start!
12/01/2021
- 10:21 PM Bug #53397: make cephadm pass CEPH_VOLUME_SKIP_RESTORECON when running ceph-volume
- I cannot reproduce this in a minimal container....
- 10:15 PM Bug #53397: make cephadm pass CEPH_VOLUME_SKIP_RESTORECON when running ceph-volume
- (Tangent: the restorecon ENOENT error message is a confusing message, and I opened https://bugzilla.redhat.com/show_b...
- 10:07 PM Bug #53397: make cephadm pass CEPH_VOLUME_SKIP_RESTORECON when running ceph-volume
- It's good to skip restorecon for performance.
However, there's a bigger problem if SELinux is broken in the contai...
- 09:04 AM Bug #53397: make cephadm pass CEPH_VOLUME_SKIP_RESTORECON when running ceph-volume
- copied from downstream:
It turns out ceph-volume fails here (although this behavior hasn't been reported upstream)...
- 06:56 PM Feature #50593: cephadm: cephfs-mirror service should enable "mgr/mirror"
- I'm interested in helping out on this feature. Any general pointers/thoughts before I start poking at code?
- 06:55 PM Bug #50524: placement spec: irritating error message if passed a string for count_per_host
- I'm interested in helping out on this issue. What's the preferred behavior when faced with an invalid type: outright ...
- 04:45 PM Bug #53453 (Resolved): cephadm: current agent lock setup allows extraneous agent daemon actions
- Specifically, since multiple threads can be trying to check the agents at the same time, even though the current setu...
- 04:18 PM Bug #53452 (Can't reproduce): cli: ceph orch host ls adds extraneous strings to json output
- ...
- 02:51 PM Bug #50116 (Rejected): remove cephadm --dashboard-password-noupdate
- 02:22 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- https://pulpito.ceph.com/swagner-2021-12-01_08:46:48-orch:cephadm-wip-swagner-testing-2021-11-30-1105-distro-default-...
- 09:13 AM Bug #53448 (Resolved): cephadm: agent failures double reported by two health checks
- When agents are down they are reported in both the agent-down and failed-daemon health checks.
It's only really neces...
11/30/2021
- 04:53 PM Documentation #44284: cephadm: provide a way to modify the initial crushmap
- Note, given we have also:
* https://docs.ceph.com/en/latest/cephadm/host-management/#setting-the-initial-crush-loc...
- 02:18 PM Bug #53438 (Resolved): cephadm: fail to re-add host with active mgr running on it
- When re-adding a host with the active mgr present without an explicit ip you will get...
11/29/2021
- 12:47 PM Bug #53424 (Resolved): CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- ...
- 08:47 AM Bug #53422 (New): tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: Asse...
- https://pulpito.ceph.com/swagner-2021-11-26_13:52:15-orch:cephadm-wip-swagner2-testing-2021-11-26-1129-distro-default...
11/26/2021
- 01:27 PM Bug #52828: _admin label not copying automatically the keyring and ceph.conf files to the host
- Sebastian Wagner wrote:
> which Ceph version do you have?
ceph version 16.2.0-117.el8cp
- 12:09 PM Bug #52828: _admin label not copying automatically the keyring and ceph.conf files to the host
- which Ceph version do you have?
- 12:35 PM Feature #51618: rgw_frontends configuration in config database not persistent
- Probably yes, but nevertheless I think a feature to block any automatic changes in the conf db would be nice to have. Some...
- 12:21 PM Feature #51618: rgw_frontends configuration in config database not persistent
- Tobias Fischer wrote:
> I tried your proposal and it worked. Thanks. But nevertheless it would be nice to be able to...
- 12:31 PM Bug #53033: cephadm removes MONs during upgrade 15.2.14 > 16.2.6 which leads to failed quorum and...
- I don't think it's the same bug. In the other bug the mon was removed from the monmap (by orchestrator?) after a rebo...
- 12:18 PM Bug #53033: cephadm removes MONs during upgrade 15.2.14 > 16.2.6 which leads to failed quorum and...
- Tobias, is this a duplicate of #51027 ?
- 12:23 PM Documentation #52490 (Resolved): document single-host-defaults
- 12:13 PM Bug #53321 (Duplicate): cephadm tries to use the system disk for osd specs
- 12:13 PM Bug #51061 (Duplicate): GPT partitioning table: OSD "all-available-devices" tries to use "non ava...
- 12:09 PM Bug #52855 (Resolved): cephadm: bootstrap --apply-spec shouldn't enforce :z
- 12:09 PM Documentation #52797: [cephadm]use mirrors for service images (grafana, prometheus ...)
- yes it is
- 12:07 PM Bug #52650 (Resolved): mgr/cephadm: improving refresh time for host-facts
- done by the agent
- 11:56 AM Support #50887: ERROR: Daemon not found: mgr.ceph-node1.ruuwlz. See cephadm ls
- Right, *cephadm logs* is just a thin wrapper around *journalctl -u <unit name>*. It only works on the local host. You'...
- 11:30 AM Feature #52371 (Resolved): Ceph Behave Integration tests
- 11:28 AM Bug #51794 (Resolved): mgr/test_orchestrator: remove pool and namespace from nfs service
- 11:27 AM Documentation #47637 (Resolved): mgr/cephadm: document how to configure custom TLS certificate fo...
- https://docs.ceph.com/en/latest/cephadm/services/monitoring/#configuring-ssl-tls-for-grafana
- 11:25 AM Feature #47774 (Fix Under Review): orch,cephadm: host search with filters
- 11:21 AM Bug #50690 (New): ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command not generating ...
- 11:20 AM Bug #50592 (New): "ceph orch apply <svc_type>" applies placement by default without providing any...
- 11:20 AM Bug #46606 (Resolved): cephadm: post-bootstrap monitoring deployment only works if the command "c...
- PR 42682
- 11:19 AM Bug #45595 (Can't reproduce): qa/tasks/cephadm: No filesystem is configured and MDS daemon gets d...
- 11:15 AM Bug #47358 (Resolved): "ceph orch apply osd" chokes on valid service_spec.yml
- 11:12 AM Feature #44869 (Resolved): cephadm: automatic auth key rotation
- 11:11 AM Bug #50296 (Can't reproduce): Failed to remove OSD service
- 11:07 AM Tasks #47369 (Resolved): Ceph scales to 100's of hosts, 1000's of OSDs....can orchestrator?
- yes, we can!
- 11:06 AM Feature #49165 (Need More Info): ceph crush class in osd service spec
- We have https://docs.ceph.com/en/latest/cephadm/host-management/#setting-the-initial-crush-location-of-host by now. I...
- 11:04 AM Feature #45091 (Closed): cephadm: CephX disabled: bad_method + failed to fetch mon config
- no activity
- 11:03 AM Bug #45909 (Duplicate): already existing cluster deployed: cephadm bootstrap failure
- 11:03 AM Feature #45982 (Resolved): mgr/cephadm: remove or update Dashboard settings after daemons are des...
- 11:02 AM Bug #46685 (Won't Fix): mgr/rook: OSD devices are marked as available
- won't fix
- 11:01 AM Documentation #45977 (Resolved): cephadm: Improve Service removal docs
- https://docs.ceph.com/en/latest/cephadm/services/#removing-a-service
- 10:50 AM Feature #52869 (Resolved): add setting for cephadm log level
- 10:49 AM Documentation #50534 (Resolved): docs: add full cluster purge
- 10:49 AM Bug #52905 (Resolved): cephadm gather-facts is returning zram* devices as a valid block device
- 10:49 AM Bug #52866 (Resolved): removal of iscsi causes mgr module to fail
- 10:49 AM Bug #52040 (Resolved): during an apply the host must be online otherwise the apply fails with a t...
- 10:49 AM Feature #44414 (Resolved): bubble up errors during 'apply' phase to 'cluster warnings'
- 10:47 AM Bug #53097 (Resolved): "Failed to apply 4 service" in upgrade:octopus-x-master
- 10:46 AM Feature #47286 (Resolved): cephadm: Local registry setup
11/25/2021
- 10:58 AM Feature #53399 (In Progress): Provide container resources usage
- Provide an interface to expose container resource usage to be consumed, e.g. by the dashboard.
- 10:51 AM Bug #53397 (Fix Under Review): make cephadm pass CEPH_VOLUME_SKIP_RESTORECON when running ceph-vo...
- 10:36 AM Bug #53397 (Resolved): make cephadm pass CEPH_VOLUME_SKIP_RESTORECON when running ceph-volume
- In containerized deployments, ceph-volume shouldn't try to make any call to restorecon binary.
Given ceph-volume c...
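The fix described in #53397 amounts to passing an environment variable through to ceph-volume inside the container. A minimal sketch, assuming podman-style `--env` flags; `build_run_cmd` is a hypothetical helper, not cephadm's actual code:

```python
# Hedged sketch: construct a container run command that exports
# CEPH_VOLUME_SKIP_RESTORECON so ceph-volume skips calling restorecon
# inside the container.

def build_run_cmd(image, args, env=None):
    cmd = ["podman", "run", "--rm"]
    for k, v in (env or {}).items():
        cmd += ["--env", f"{k}={v}"]  # podman/docker env-passing syntax
    return cmd + [image] + args

cmd = build_run_cmd(
    "quay.io/ceph/ceph:v16",
    ["ceph-volume", "lvm", "list"],
    env={"CEPH_VOLUME_SKIP_RESTORECON": "1"},
)
print(" ".join(cmd))
```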
11/24/2021
- 10:27 PM Bug #53394 (Resolved): cephadm: can infer config from mon from different cluster causing file not...
- If you are trying to infer config for a cluster with fsid id x and there is a directory for a mon in /var/lib/ceph/y ...
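The fsid mismatch in #53394 can be sketched like this: when inferring a config from a local mon data dir, only directories under the requested cluster's fsid should be considered. The layout and helper name here are illustrative assumptions:

```python
import os, tempfile

def find_mon_config(base, fsid):
    """Return candidate config paths only from mon dirs under `fsid`.
    The reported bug: scanning for any mon.* dir could match one under
    a *different* cluster's fsid, causing a file-not-found later."""
    cluster_dir = os.path.join(base, fsid)
    if not os.path.isdir(cluster_dir):
        return []
    return [
        os.path.join(cluster_dir, d, "config")
        for d in sorted(os.listdir(cluster_dir))
        if d.startswith("mon.")
    ]

# Demo with a throwaway layout: two fsids, one mon each.
base = tempfile.mkdtemp()
for f, mon in [("fsid-x", "mon.a"), ("fsid-y", "mon.b")]:
    os.makedirs(os.path.join(base, f, mon))
print(find_mon_config(base, "fsid-x"))  # fsid-y's mon is never considered
```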
- 03:24 PM Bug #53385 (Pending Backport): Allow mgr/cephadm to run radosgw-admin.
- 03:24 PM Bug #53385 (Resolved): Allow mgr/cephadm to run radosgw-admin.
- 03:22 PM Bug #53385 (Resolved): Allow mgr/cephadm to run radosgw-admin.
- 02:08 PM Feature #47286 (Fix Under Review): cephadm: Local registry setup
- 01:28 PM Feature #53378 (Duplicate): cephadm: redeploy nfs-ganesha service that was running in a host that...
- 01:15 AM Feature #53378 (Duplicate): cephadm: redeploy nfs-ganesha service that was running in a host that...
- OpenStack manila wants to use cephadm managed nfs-ganesha service to export CephFS subvolumes to OpenStack clients. C...
11/23/2021
- 08:06 AM Bug #53351 (Fix Under Review): mgr/cephadm: when the cephadm agent refreshes the mgr host case, t...
- 06:00 AM Bug #52279: cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requ...
- /a/yuriw-2021-11-20_18:01:41-rados-wip-yuri8-testing-2021-11-20-0807-distro-basic-smithi/6516634
- 05:24 AM Bug #53345: Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
- /a/yuriw-2021-11-20_18:01:41-rados-wip-yuri8-testing-2021-11-20-0807-distro-basic-smithi/6516902
- 03:55 AM Bug #53365: pacific: broken groups or modules: container-tools:3.0
- This is, of course, happening post https://github.com/ceph/ceph/pull/43934 but it's not clear where the actual issue ...
11/22/2021
- 07:13 PM Bug #53365 (Resolved): pacific: broken groups or modules: container-tools:3.0
- In the rados/cephadm/mgr-nfs-upgrade tests, we encounter an error regarding container-tools:3.0.
...
- 03:27 PM Bug #53358 (Resolved): mgr/cephadm: ssh errors too verbose and timeout too long when can't connec...
- When hosts are found to be offline a health warning will be raised with a corresponding error message. However, the c...
- 12:53 AM Bug #53351 (New): mgr/cephadm: when the cephadm agent refreshes the mgr host case, the config che...
- When cephadm agent is in play, the mgr cache can be updated at any point, causing a runtime error in the config check...
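The race in #53351 is the classic "dictionary changed size during iteration" problem: the agent thread mutates the mgr cache while the config checker iterates it. A minimal sketch of one mitigation, iterating over a locked snapshot; class and method names are illustrative, not cephadm's actual code:

```python
import threading

class HostCache:
    """Toy cache: the agent path writes, the config checker reads."""
    def __init__(self):
        self._lock = threading.Lock()
        self._facts = {}

    def update(self, host, facts):       # called from the agent path
        with self._lock:
            self._facts[host] = facts

    def snapshot(self):                  # called by the config checker
        with self._lock:
            return dict(self._facts)     # copy: safe to iterate freely

cache = HostCache()
cache.update("host1", {"os": "centos"})
for host, facts in cache.snapshot().items():
    # mutation mid-loop no longer raises, because we iterate the copy
    cache.update("host2", {"os": "ubuntu"})
print(sorted(cache.snapshot()))
```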
11/19/2021
- 10:39 PM Bug #53345: Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
- possible duplicate of https://tracker.ceph.com/issues/53305
- 10:30 PM Bug #53345: Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
- https://pulpito.ceph.com/ksirivad-2021-11-19_19:14:07-rados-wip-autoscale-profile-scale-up-default-distro-basic-smith...
- 10:30 PM Bug #53345 (New): Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
- 2021-11-19T19:49:36.480 INFO:tasks.cephfs_test_runner:test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCL...
- 10:13 AM Bug #53335: "cephadm bootstrap --ssh-user" doesn't support non root user
- turns out
https://github.com/ceph/ceph/blob/93054a3fa9465d2fad038924489df10ff4bf89d2/src/pybind/mgr/cephadm/ssh.p...
- 10:11 AM Bug #53335 (Resolved): "cephadm bootstrap --ssh-user" doesn't support non root user
- Typical error thrown:...
11/18/2021
- 03:49 PM Bug #53323 (Rejected): the timezone in containers managed by cephadm are not in sync with the host
- The timezone in containers managed by cephadm isn't the same as on the host.
- 03:31 PM Bug #53321 (Duplicate): cephadm tries to use the system disk for osd specs
- Having this spec:...
- 03:18 PM Feature #53320: cephadm: warn users before upgrading to a release with major bugs in it
- The concern about lack of internet access is fair. This is how we used to run our storage clusters at MSI and it sou...
- 03:09 PM Feature #53320 (New): cephadm: warn users before upgrading to a release with major bugs in it
- e.g....
- 01:15 PM Bug #52279 (New): cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush...
- This is again seen in this pacific run:
http://qa-proxy.ceph.com/teuthology/yuriw-2021-11-17_19:02:43-fs-wip-yuri1...
- 06:05 AM Feature #53312 (New): mgr/cephadm: host drain and host removal command improvements
- * After executing a drain command there is no way to know whether the drain action is in progress or not. At least a ...
11/17/2021
- 02:30 PM Bug #53305 (New): test_daemon_restart fails
- Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
The problem is in cephadm:...
11/16/2021
- 01:34 PM Cleanup #53276 (New): cephadm: don't download any images in `cephadm deploy`
- I think it might make sense to make `deploy` avoid downloading any images and instead let systemd pull them. Let's av...
11/15/2021
- 09:37 PM Bug #52116: kubeadm task fails with error execution phase wait-control-plane: couldn't initialize...
- Pacific backport: https://github.com/ceph/ceph/pull/43937
- 06:18 PM Bug #53235 (Pending Backport): cephadm: 'orch ls' shows individual osds (e.g., osd.123)
- 03:28 PM Bug #53257: mgr logs do not reopen after respawn
- Ownership is changed any time a non-ceph container (e.g., grafana, alertmanager) is deployed.
- 03:04 PM Bug #53257: mgr logs do not reopen after respawn
- ...
- 02:30 PM Bug #53269 (Resolved): store container registry credentials in config-key
- provides a more restricted level of access
11/12/2021
- 10:25 PM Bug #53257 (Resolved): mgr logs do not reopen after respawn
- ...
- 12:47 PM Bug #53175: podman: failed to exec pid1: Exec format error: wrongly using the amd64-only digest
- Because I have come a lot further I just leave the preliminary results of my investigation here:
The Cluster used ...
- 03:12 AM Feature #52920 (In Progress): Add snmp-gateway as a supported service for deployment via orchestrator
11/11/2021
- 06:50 PM Bug #53235 (Fix Under Review): cephadm: 'orch ls' shows individual osds (e.g., osd.123)
- 05:04 PM Bug #53235 (Resolved): cephadm: 'orch ls' shows individual osds (e.g., osd.123)
- example, from an old cluster running 16.2.6:...
- 11:25 AM Bug #53175: podman: failed to exec pid1: Exec format error: wrongly using the amd64-only digest
- The error related to *stat: cannot stat '%g': No such file or directory* seems to be a clue here. Would be great if y...
- 12:20 AM Bug #53223 (New): Fun with subinterpreters: Exceptions showing traceback when SpecValidationError...
- The service specs are using SpecValidationErrors to indicate issues, but the normal try/catch provided by the cli_wri...
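The handling #53223 asks for can be sketched with a small wrapper that converts validation failures into a clean `(retval, out, err)` tuple instead of a traceback. `SpecValidationError` here is a stand-in class and `cli_write_command` a hypothetical decorator, not the real mgr module's implementation:

```python
class SpecValidationError(Exception):
    """Stand-in for the mgr's spec validation error type."""
    pass

def cli_write_command(func):
    """Hypothetical wrapper: catch spec validation failures and turn
    them into (retval, out, err) rather than letting them propagate
    as a traceback across the subinterpreter boundary."""
    def wrapper(*args, **kwargs):
        try:
            return 0, func(*args, **kwargs), ""
        except SpecValidationError as e:
            return -22, "", f"Invalid spec: {e}"  # -EINVAL, no traceback
    return wrapper

@cli_write_command
def apply_spec(spec):
    if "service_type" not in spec:
        raise SpecValidationError("service_type is required")
    return "Scheduled"

print(apply_spec({}))                      # clean error tuple
print(apply_spec({"service_type": "mds"}))
```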
11/10/2021
- 08:55 PM Bug #52116: kubeadm task fails with error execution phase wait-control-plane: couldn't initialize...
- teuthology/yuriw-2021-11-08_15:10:38-rados-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi/6491072/teut...
11/08/2021
- 04:54 PM Support #50594: ceph orch / cephadm does not allow deploying multiple MDS daemons per FS per host?
- In what way was this resolved? Thanks.
- 10:48 AM Bug #46921 (Resolved): Fedora: Download URL not found for cephadm installation
- cannot reproduce
- 10:38 AM Tasks #51562 (Resolved): Enable autotune for osd_memory_target