Activity
From 12/26/2021 to 01/24/2022
01/24/2022
- 04:36 PM Cleanup #54002 (New): orchestrator interface: service type should be a Python Enum
- ...
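The Enum proposed in #54002 could look like the following sketch; the member set and class name are assumptions for illustration, not the actual orchestrator interface, which currently passes service types around as bare strings.

```python
from enum import Enum

class ServiceType(str, Enum):
    """Hypothetical enum for orchestrator service types (sketch only)."""
    MON = "mon"
    MGR = "mgr"
    OSD = "osd"
    RGW = "rgw"
    NFS = "nfs"

# Value lookup validates at the boundary instead of letting a typo
# propagate through the module as an unchecked string:
assert ServiceType("mgr") is ServiceType.MGR
```

Subclassing `str` keeps the members drop-in compatible with existing code that compares against string literals.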
- 04:13 PM Cleanup #54001 (New): type safe Python mon_command API clients
- Like for the last 6 years, I always wondered why we don't have a type-safe way to use the mon command API. Turns o...
- 03:49 PM Bug #53572 (Pending Backport): cephadm should not require running as root for --help
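The type-safe mon_command client idea in #54001 above might look roughly like this sketch; the dataclass fields and the JSON shape are assumptions for illustration, not the actual Ceph mon_command protocol.

```python
import json
from dataclasses import dataclass

@dataclass
class OrchUpgradeStatus:
    """Typed view of an upgrade-status reply (fields are illustrative)."""
    in_progress: bool
    target_image: str

def parse_upgrade_status(raw: str) -> OrchUpgradeStatus:
    # Turn the raw JSON string a mon command returns into a typed object,
    # failing loudly on missing keys instead of passing bare dicts around.
    data = json.loads(raw)
    return OrchUpgradeStatus(
        in_progress=bool(data["in_progress"]),
        target_image=str(data["target_image"]),
    )

raw = '{"in_progress": true, "target_image": "quay.io/ceph/ceph:v16.2.7"}'
status = parse_upgrade_status(raw)
```

The point is that callers then get attribute access and static type checking instead of stringly-typed dict lookups.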
- 03:39 PM Cleanup #54000 (New): cephadm: upgrade commands should return yaml
- Right now, commands like
* ceph orch upgrade ls
* ceph orch upgrade status
are returning JSON. YAML is much m...
- 03:28 PM Cleanup #53999 (New): orch interface: cephadm contains a lot of special apply methods
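The JSON-vs-YAML difference behind #54000 above can be seen in miniature with the same payload rendered both ways; this assumes PyYAML is available, and the status fields are illustrative, not the actual `ceph orch upgrade status` schema.

```python
import json

import yaml  # PyYAML, assumed installed

status = {
    "target_image": "quay.io/ceph/ceph:v16.2.7",
    "in_progress": True,
    "services_complete": ["mgr", "mon"],
}

# Compact and machine-oriented:
print(json.dumps(status))
# Line-oriented and easier to scan in a terminal:
print(yaml.safe_dump(status, default_flow_style=False))
```

Both encodings round-trip the same data; the argument in the ticket is purely about human readability of the CLI output.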
- ...
- 03:25 PM Cleanup #53998 (New): Drop support for old-style service spec
- ...
- 03:19 PM Documentation #53997 (Resolved): cephadm spec: document "config" key
- Users can specify config options for a particular service:...
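The `config` key documented by #53997 lets a service spec set config options scoped to that service; a minimal sketch, where the service type and the particular option/value are chosen only for illustration:

```yaml
service_type: mon
placement:
  count: 3
config:
  debug_mon: "10"   # any config option valid for the service; value illustrative
```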
- 01:05 PM Bug #53652 (Closed): cephadm "Verifying IP <ip> port 3300" ... -> "OSError: [Errno 99] Cannot ass...
- Closing as the issue doesn't occur in the latest ceph version (v16.2)...
- 12:38 PM Bug #53652: cephadm "Verifying IP <ip> port 3300" ... -> "OSError: [Errno 99] Cannot assign reque...
- This issue could be closed. The code is fixed in the latest ceph development version and the error is reported correc...
- 12:44 PM Support #50887 (Closed): ERROR: Daemon not found: mgr.ceph-node1.ruuwlz. See cephadm ls
- 12:43 PM Bug #52898 (In Progress): cephadm: Unable to create max luns per iSCSI target: thread limit reached
- 12:40 PM Feature #49269: cephadm: upgrade stuck in repeating sleep when a host is offline
- increasing prio
- 12:38 PM Feature #43690 (Resolved): cephadm: service resource limits
- 12:34 PM Feature #51596: implement "gather-logs" feature in cephadm
- prio=low, because I still don't have a good idea of how to implement this with good UX.
- 12:29 PM Feature #48247: cephadm: RGW rgw_ldap_secret
- https://github.com/ceph/ceph/pull/44459 might provide the ability to have a workaround. Then this is just a documenta...
- 12:25 PM Bug #48933 (Can't reproduce): cephadm: EOFError: couldnt load message header, expected 9 bytes, g...
- 12:25 PM Bug #50502 (Closed): cephadm pull doesn't get latest image
- 12:15 PM Bug #46412 (Can't reproduce): cephadm trying to pull mimic based image
- 12:12 PM Bug #49860 (Can't reproduce): cephadm adopt - Report conf file missing - now it says could not de...
- 11:33 AM Bug #52516 (Can't reproduce): vstart cluster refuses to start with KeyImportError in asyncssh 2.7.0
- 11:32 AM Bug #52601 (Resolved): cephadm RPM should depend on sshd
- 11:29 AM Bug #53034 (Can't reproduce): podman-3.0.1-6 crashed
- 11:29 AM Bug #53174 (Resolved): `ceph orch daemon rm mgr......` should warn if a user wants to remove the ...
- 11:22 AM Bug #53762: Problem with cephfs-mirror and cephadm / ceph orch
- You really should not try to deploy physical keys within the containers. The only sane way is to have those keys acce...
- 11:18 AM Bug #53904 (Duplicate): cephadm: ingress jobs stuck
- 10:08 AM Bug #53904: cephadm: ingress jobs stuck
- https://pulpito.ceph.com/swagner-2022-01-20_15:11:07-orch:cephadm-wip-swagner-testing-2022-01-20-1235-distro-default-...
- 11:12 AM Bug #53847: `ceph cephadm osd activate` don't work
- Well, it should exist in pacific: https://docs.ceph.com/en/pacific/api/mon_command_api/#cephadm-osd-activate
- 11:06 AM Documentation #53871: Can't pull the Ingress daemon due to docker.io rate limit
- Don't know. For upstream I'd try to stick to the official images. I'd try to document how to set up a proxy registry i...
01/21/2022
- 04:47 PM Feature #53967 (New): support minor downgrades
- There is a general demand to support minor downgrades (this needs to be properly supported by all components!). For that w...
- 03:53 PM Bug #53965 (New): cephadm: RGW container is crashing at 'rados_nobjects_list_next2: Operation not...
- when upgrading from ceph-ansible, we're getting:...
01/20/2022
- 11:23 AM Bug #51736 (Resolved): mgr hung forever when execute multiprocessing.pool.ThreadPool accidentally
- 11:22 AM Feature #45138 (Closed): cephadm: remove legacy daemons
- 11:21 AM Bug #49838 (Closed): RFE: Support for cephadm daemon logs redirection to non default files and al...
- 11:21 AM Bug #53610 (Can't reproduce): 'Inventory' object has no attribute 'get_daemon'
- 11:20 AM Bug #53723 (Pending Backport): Cephadm agent fails to report and causes a health timeout
- 11:19 AM Bug #53541 (Pending Backport): permissions too open on the cephadm agent files (644) - includes c...
- 11:19 AM Bug #53624 (Fix Under Review): cephadm agent: set_store mon returned -27: error: entry size limit...
- 11:19 AM Bug #53706 (Fix Under Review): cephadm: Module 'cephadm' has failed: dashboard iscsi-gateway-rm f...
- 11:18 AM Feature #50593 (Pending Backport): cephadm: cephfs-mirror service should enable "mgr/mirror"
- 11:18 AM Bug #53106 (Resolved): octopus: failed to fetch deb https://download.opensuse.org/repositories/de...
- 11:17 AM Bug #53424 (Pending Backport): CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- 11:17 AM Bug #53594 (Pending Backport): mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded co...
- 11:16 AM Tasks #51562 (Pending Backport): Enable autotune for osd_memory_target
- 11:16 AM Bug #53501 (Resolved): Exception when running 'rook' task.
- 11:15 AM Bug #50524 (Resolved): placement spec: irritating error message if passed a string for count_per_...
- 11:15 AM Bug #50685 (Resolved): wrong exception type: Exception("No filters applied")
- 11:14 AM Bug #53130 (Fix Under Review): cephadm SYSCTL_DIR path not FHS compliant
- 11:13 AM Bug #53385 (Resolved): Allow mgr/cephadm to run radosgw-admin.
- 11:13 AM Bug #46253 (Resolved): OSD specs without service_id
- 11:12 AM Bug #52116 (Resolved): kubeadm task fails with error execution phase wait-control-plane: couldn't...
- 11:10 AM Bug #51111 (Pending Backport): Pacific: CEPHADM_STRAY_DAEMON after deploying iSCSI gateway with c...
- 11:10 AM Feature #52920 (Resolved): Add snmp-gateway as a supported service for deloyment via orchestrator
- 11:10 AM Bug #53235 (Resolved): cephadm: 'orch ls' shows individual osds (e.g., osd.123)
- 11:09 AM Bug #53394 (Resolved): cephadm: can infer config from mon from different cluster causing file not...
- 11:07 AM Bug #48291 (Resolved): Grafana should not have a predictable default password
- 11:07 AM Bug #53257 (Resolved): mgr logs do not reopen after respawn
- 11:07 AM Bug #49571 (Resolved): cephadm: same OSD on two hosts + daemon_id not unique
- 11:06 AM Bug #47401 (Resolved): improve drive group validation
- 11:02 AM Bug #52654 (Resolved): pybind/mgr/cephadm: mds upgrade does not disable standby-replay
- 11:01 AM Bug #53842 (Fix Under Review): cephadm/mds_upgrade_sequence: KeyError: 'en***'
01/19/2022
- 04:07 PM Bug #53939 (Resolved): ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: ...
- ...
- 01:45 PM Feature #51566 (Fix Under Review): cephadm: cpu limit
- 01:16 PM Feature #53931 (Resolved): [cephadm] support using --shared_ceph_folder for `ceph-volume` subcommand
- Add the --shared_ceph_folder option to the `ceph-volume` subcommand, as `shell` or `bootstrap` already have.
- 10:37 AM Feature #52371: Ceph Behave Integration tests
- https://github.com/ceph/ceph/tree/master/src/test/behave_tests
01/18/2022
- 05:50 PM Bug #53842: cephadm/mds_upgrade_sequence: KeyError: 'en***'
- /a/yuriw-2022-01-15_05:47:18-rados-wip-yuri8-testing-2022-01-14-1551-distro-default-smithi/6619523
- 02:52 PM Bug #48742: cephadm bootstrap --docker doesn't actually work. It's just that the setting is not p...
- AI is to add something similar to https://github.com/ceph/ceph/blob/51a347456dead2c327a08926a7042bfa685b397c/src/pybi...
- 02:49 PM Bug #52514 (Can't reproduce): vstart fails with --cephadm option
- 02:37 PM Feature #50593 (Fix Under Review): cephadm: cephfs-mirror service should enable "mgr/mirror"
- 08:30 AM Feature #47774 (Resolved): orch,cephadm: host search with filters
01/17/2022
- 04:57 PM Bug #53904: cephadm: ingress jobs stuck
- https://github.com/ceph/ceph/blob/master/qa/suites/orch/cephadm/smoke-roleless/2-services/nfs-ingress.yaml#L63
- 04:07 PM Bug #53904 (Duplicate): cephadm: ingress jobs stuck
- https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-...
- 09:04 AM Bug #53762: Problem with cephfs-mirror and cephadm / ceph orch
- Hello. Yes, I can confirm that I used this bootstrapping.
- 08:47 AM Bug #53762: Problem with cephfs-mirror and cephadm / ceph orch
- Hey Manuel,
Did you bootstrap the remote peer using the "bootstrap create" and "bootstrap import" commands? The re...
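The peer bootstrap flow referred to above uses the cephfs-mirror peer_bootstrap commands; a rough sketch, where the filesystem name, client entity, and site name are placeholders:

```shell
# On the secondary (target) cluster: create a bootstrap token.
ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b

# On the primary cluster: import that token to register the peer.
ceph fs snapshot mirror peer_bootstrap import cephfs <token-from-above>
```

This way the peer's keys never have to be copied into the containers by hand.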
01/14/2022
- 05:23 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- Happened again here: /a/yuriw-2022-01-13_14:57:55-rados-wip-yuri5-testing-2022-01-12-1534-distro-default-smithi/6612758
- 03:23 PM Bug #50830: rgw-ingress does not install
- /a/yuriw-2022-01-13_18:06:52-rados-wip-yuri3-testing-2022-01-13-0809-distro-default-smithi/6614359
- 03:05 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2022-01-13_18:06:52-rados-wip-yuri3-testing-2022-01-13-0809-distro-default-smithi/6614482
- 02:30 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- /a/yuriw-2022-01-13_18:06:52-rados-wip-yuri3-testing-2022-01-13-0809-distro-default-smithi/6614725
/a/yuriw-2022-01-...
- 07:43 AM Bug #44587: failed to write <pid> to cgroup.procs:
- /a/yuriw-2022-01-13_18:06:52-rados-wip-yuri3-testing-2022-01-13-0809-distro-default-smithi/6614461
01/13/2022
- 06:29 PM Documentation #53871 (New): Can't pull the Ingress daemon due to docker.io rate limit
- In the OpenStack context, the undercloud may be used as a container registry for all the ceph-related containers a...
01/12/2022
- 09:03 PM Bug #48925: cephadm: iscsi missing mgr permissions
- Mykola Golub wrote:
> octopus backport PR: https://github.com/ceph/ceph/pull/43822
merged
- 08:46 PM Bug #53624 (In Progress): cephadm agent: set_store mon returned -27: error: entry size limited to...
- 07:32 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2022-01-11_19:17:55-rados-wip-yuri5-testing-2022-01-11-0843-distro-default-smithi/6608761
- 05:58 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Moved this Tracker out of CephFS, as offline filesystems on this particular test appear even in successful runs.
E...
- 02:33 PM Bug #53851 (Resolved): [cephadm-ansible] the minimal ceph.conf file generated by cephadm-clients ...
- 01:36 PM Bug #53851 (Fix Under Review): [cephadm-ansible] the minimal ceph.conf file generated by cephadm-...
- 01:36 PM Bug #53851: [cephadm-ansible] the minimal ceph.conf file generated by cephadm-clients playbook is...
- upstream PR https://github.com/ceph/cephadm-ansible/pull/47
- 01:34 PM Bug #53851 (Resolved): [cephadm-ansible] the minimal ceph.conf file generated by cephadm-clients ...
- this causes the following error:...
- 08:29 AM Bug #53847: `ceph cephadm osd activate` don't work
- The ceph-common version is 16.2.7...
- 08:20 AM Bug #53847 (Rejected): `ceph cephadm osd activate` don't work
- I'm following this document [[https://docs.ceph.com/en/latest/cephadm/services/osd/#activate-existing-osds]] to activate e...
01/11/2022
- 11:22 PM Bug #53706 (In Progress): cephadm: Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed...
- 10:53 PM Bug #53842 (Resolved): cephadm/mds_upgrade_sequence: KeyError: 'en***'
- This may not be reproducible since I haven't seen it anywhere else. Will update this Tracker if I see any more instan...
- 09:12 PM Bug #53693 (Closed): ceph orch upgrade start is getting stuck in gibba cluster
- Discussion from #ceph-gibba channel!...
- 09:38 AM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Is this related to CephFS? Comment https://tracker.ceph.com/issues/53807#note-1 indicates this is being hit with rado...
- 06:23 AM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Jeff Layton wrote:
> Looking at /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-...
01/10/2022
- 07:18 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Looking at /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi/6599082/remote/...
- 11:22 AM Feature #53815 (Resolved): cephadm rm-cluster should delete log files
- * /var/log/ceph/cephadm.log*
* /var/log/ceph/<cluster-fsid>
- 09:53 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2022-01-08_17:57:43-rados-wip-yuri8-testing-2022-01-07-1541-distro-default-smithi/6603250
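The rm-cluster log cleanup proposed in #53815 above could be sketched like this; the helper name is an assumption, and `log_root` is parameterized purely so the sketch can be exercised outside of /var/log.

```python
import glob
import shutil
from pathlib import Path

def remove_cluster_logs(fsid: str, log_root: str = "/var/log/ceph") -> None:
    """Sketch of the proposed cleanup, not the actual cephadm code."""
    # Remove cephadm.log and its rotated siblings (cephadm.log.1, ...).
    for path in glob.glob(f"{log_root}/cephadm.log*"):
        Path(path).unlink()
    # Remove the per-cluster log directory <log_root>/<cluster-fsid>.
    cluster_dir = Path(log_root) / fsid
    if cluster_dir.is_dir():
        shutil.rmtree(cluster_dir)
```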
01/07/2022
- 11:05 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- And a third similar scenario where an offline filesystem leads to failed CEPHADM daemons:
Description: rados/cepha...
- 10:49 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Another similar scenario, which does not involve offline filesystems:
Description: rados/cephadm/smoke-roleless/{0...
- 10:33 PM Bug #53807 (Resolved): Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Description: rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs...
- 10:40 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi/6598788
- 09:09 PM Bug #53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionE...
- /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi/6599062
- 12:35 PM Feature #53794 (Resolved): make cephadm-ansible support setting up custom repository
- It would be helpful to have the ability to set up a custom repository.
01/06/2022
- 11:01 PM Bug #53541 (In Progress): permissions too open on the cephadm agent files (644) - includes certs ...
- 11:00 PM Bug #53723 (In Progress): Cephadm agent fails to report and causes a health timeout
- 10:21 PM Bug #53448: cephadm: agent failures double reported by two health checks
- Accidentally deleted the related issue; ignore.
- 08:32 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/lflores-2022-01-05_19:04:35-rados-wip-lflores-mgr-rocksdb-distro-default-smithi/6596855
- 04:18 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- Thanks Joseph. I frequently review teuthology runs, so I'll update this tracker if the problem persists. Hopefully if...
- 03:45 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- The first two logs are due to this ListBuckets call failing in the RGW pod: https://github.com/rook/rook/blob/0d8fd9d...
- 11:36 AM Bug #53424 (Fix Under Review): CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- 10:40 AM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- And indeed we're not stopping or undeploying the old ganesha:...
01/05/2022
- 10:05 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- /a/yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi/6595518
- 10:00 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi/6595248
- 09:26 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- /a/yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi/6595253
- 02:02 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- Going by the sentry event for these failures, it looks like this started being a common failure right as https://gith...
- 02:30 PM Feature #43709 (Resolved): mgr/rook: remove OSDs
- 11:57 AM Bug #53610 (Need More Info): 'Inventory' object has no attribute 'get_daemon'
- 11:41 AM Bug #50524 (Pending Backport): placement spec: irritating error message if passed a string for co...
- 11:39 AM Bug #53766 (Duplicate): ceph orch ls: setting cgroup config for procHooks process caused: Unit li...
- 11:39 AM Bug #53681 (Duplicate): Failed to extract uid/gid for path /var/lib/ceph
- 11:30 AM Documentation #46073 (Can't reproduce): cephadm install fails: apt:stderr E: Unable to locate pac...
- 11:30 AM Bug #51592 (Resolved): cephadm should not use the lvm binary of the container
- 11:29 AM Bug #53598 (Resolved): cephadm: upgrade when using agent is too conservative
- 11:29 AM Feature #53570 (Resolved): cephadm: reconfigure agents over http
- 11:29 AM Bug #53453 (Resolved): cephadm: current agent lock setup allows extraneous agent daemon actions
- 11:28 AM Bug #53448 (Resolved): cephadm: agent failures double reported by two health checks
- 11:27 AM Bug #53323 (Rejected): the timezone in containers managed by cephadm are not in sync with the host
- 11:27 AM Bug #53010 (New): cephadm rm-cluster does not clean up /var/run/ceph
- 11:26 AM Feature #52409 (Resolved): mgr/rook: OSD Management
- 11:26 AM Bug #51111 (Fix Under Review): Pacific: CEPHADM_STRAY_DAEMON after deploying iSCSI gateway with c...
- 11:25 AM Tasks #51562 (Fix Under Review): Enable autotune for osd_memory_target
- 11:25 AM Bug #53335 (Resolved): "cephadm bootstrap --ssh-user" doesn't support non root user
- 11:24 AM Bug #53269 (Pending Backport): store container registry credentials in config-key
- 11:24 AM Bug #47401 (Pending Backport): improve drive group validation
- 11:24 AM Bug #50685 (Pending Backport): wrong exception type: Exception("No filters applied")
- 11:23 AM Bug #49571 (Pending Backport): cephadm: same OSD on two hosts + daemon_id not unique
- 11:23 AM Feature #47774 (Pending Backport): orch,cephadm: host search with filters
- 11:22 AM Bug #46253 (Pending Backport): OSD specs without service_id
- 11:22 AM Bug #48291 (Pending Backport): Grafana should not have a predictable default password
01/04/2022
- 08:33 PM Bug #53766 (Duplicate): ceph orch ls: setting cgroup config for procHooks process caused: Unit li...
- Description: rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw...
- 08:08 PM Bug #53394: cephadm: can infer config from mon from different cluster causing file not found error
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582558
- 07:50 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582492
- 09:57 AM Bug #53762 (New): Problem with cephfs-mirror and cephadm / ceph orch
- After setting up cephfs mirroring and <tt>ceph orch apply cephfs-mirror</tt>, the mirroring daemon complains...