Activity
From 12/13/2021 to 01/11/2022
01/11/2022
- 11:22 PM Bug #53706 (In Progress): cephadm: Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed...
- 10:53 PM Bug #53842 (Resolved): cephadm/mds_upgrade_sequence: KeyError: 'en***'
- This may not be reproducible since I haven't seen it anywhere else. Will update this Tracker if I see any more instan...
- 09:12 PM Bug #53693 (Closed): ceph orch upgrade start is getting stuck in gibba cluster
- Discussion from #ceph-gibba channel!...
- 09:38 AM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Is this related to CephFS? Comment https://tracker.ceph.com/issues/53807#note-1 indicates this is being hit with rado...
- 06:23 AM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Jeff Layton wrote:
> Looking at /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-...
01/10/2022
- 07:18 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Looking at /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi/6599082/remote/...
- 11:22 AM Feature #53815 (Resolved): cephadm rm-cluster should delete log files
- * /var/log/ceph/cephadm.log
* /var/log/ceph/<cluster-fsid>
- 09:53 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2022-01-08_17:57:43-rados-wip-yuri8-testing-2022-01-07-1541-distro-default-smithi/6603250
01/07/2022
- 11:05 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- And a third similar scenario where an offline filesystem leads to failed CEPHADM daemons:
Description: rados/cepha...
- 10:49 PM Bug #53807: Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Another similar scenario, which does not involve offline filesystems:
Description: rados/cephadm/smoke-roleless/{0...
- 10:33 PM Bug #53807 (Resolved): Dead jobs in rados/cephadm/smoke-roleless{...}: ingress jobs stuck
- Description: rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs...
- 10:40 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi/6598788
- 09:09 PM Bug #53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionE...
- /a/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi/6599062
- 12:35 PM Feature #53794 (Resolved): make cephadm-ansible support setting up custom repository
- It would be helpful to have the option to set up a custom repository.
01/06/2022
- 11:01 PM Bug #53541 (In Progress): permissions too open on the cephadm agent files (644) - includes certs ...
- 11:00 PM Bug #53723 (In Progress): Cephadm agent fails to report and causes a health timeout
- 10:21 PM Bug #53448: cephadm: agent failures double reported by two health checks
- Accidentally deleted the related issue; ignore.
- 08:32 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/lflores-2022-01-05_19:04:35-rados-wip-lflores-mgr-rocksdb-distro-default-smithi/6596855
- 04:18 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- Thanks Joseph. I frequently review teuthology runs, so I'll update this tracker if the problem persists. Hopefully if...
- 03:45 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- The first two logs are due to this ListBuckets call failing in the RGW pod: https://github.com/rook/rook/blob/0d8fd9d...
- 11:36 AM Bug #53424 (Fix Under Review): CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- 10:40 AM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- And indeed we're not stopping or undeploying the old ganesha:...
01/05/2022
- 10:05 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- /a/yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi/6595518
- 10:00 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi/6595248
- 09:26 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- /a/yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi/6595253
- 02:02 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- Going by the sentry event for these failures, it looks like this started being a common failure right as https://gith...
- 02:30 PM Feature #43709 (Resolved): mgr/rook: remove OSDs
- 11:57 AM Bug #53610 (Need More Info): 'Inventory' object has no attribute 'get_daemon'
- 11:41 AM Bug #50524 (Pending Backport): placement spec: irritating error message if passed a string for co...
- 11:39 AM Bug #53766 (Duplicate): ceph orch ls: setting cgroup config for procHooks process caused: Unit li...
- 11:39 AM Bug #53681 (Duplicate): Failed to extract uid/gid for path /var/lib/ceph
- 11:30 AM Documentation #46073 (Can't reproduce): cephadm install fails: apt:stderr E: Unable to locate pac...
- 11:30 AM Bug #51592 (Resolved): cephadm should not use the lvm binary of the container
- 11:29 AM Bug #53598 (Resolved): cephadm: upgrade when using agent is too conservative
- 11:29 AM Feature #53570 (Resolved): cephadm: reconfigure agents over http
- 11:29 AM Bug #53453 (Resolved): cephadm: current agent lock setup allows extraneous agent daemon actions
- 11:28 AM Bug #53448 (Resolved): cephadm: agent failures double reported by two health checks
- 11:27 AM Bug #53323 (Rejected): the timezone in containers managed by cephadm are not in sync with the host
- 11:27 AM Bug #53010 (New): cephadm rm-cluster does not clean up /var/run/ceph
- 11:26 AM Feature #52409 (Resolved): mgr/rook: OSD Management
- 11:26 AM Bug #51111 (Fix Under Review): Pacific: CEPHADM_STRAY_DAEMON after deploying iSCSI gateway with c...
- 11:25 AM Tasks #51562 (Fix Under Review): Enable autotune for osd_memory_target
- 11:25 AM Bug #53335 (Resolved): "cephadm bootstrap --ssh-user" doesn't support non root user
- 11:24 AM Bug #53269 (Pending Backport): store container registry credentials in config-key
- 11:24 AM Bug #47401 (Pending Backport): improve drive group validation
- 11:24 AM Bug #50685 (Pending Backport): wrong exception type: Exception("No filters applied")
- 11:23 AM Bug #49571 (Pending Backport): cephadm: same OSD one two host + daemon_id not unique
- 11:23 AM Feature #47774 (Pending Backport): orch,cephadm: host search with filters
- 11:22 AM Bug #46253 (Pending Backport): OSD specs without service_id
- 11:22 AM Bug #48291 (Pending Backport): Grafana should not have a predictable default password
01/04/2022
- 08:33 PM Bug #53766 (Duplicate): ceph orch ls: setting cgroup config for procHooks process caused: Unit li...
- Description: rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw...
- 08:08 PM Bug #53394: cephadm: can infer config from mon from different cluster causing file not found error
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582558
- 07:50 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582492
- 09:57 AM Bug #53762 (New): Problem with cephfs-mirror and cephadm / ceph orch
- After setting up cephfs mirroring and running *ceph orch apply cephfs-mirror*, the mirroring daemon complains...
12/23/2021
- 09:46 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- Possibly related to https://tracker.ceph.com/issues/53448
- 09:42 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- /a/yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/6569647
- 07:36 PM Bug #53723 (Resolved): Cephadm agent fails to report and causes a health timeout
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580439
Description: ra...
- 09:44 PM Bug #53448: cephadm: agent failures double reported by two health checks
- @Adam King would you say that https://tracker.ceph.com/issues/53723 is related to this Tracker?
- 08:33 PM Bug #53681: Failed to extract uid/gid for path /var/lib/ceph
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582372
- 06:57 PM Bug #53681: Failed to extract uid/gid for path /var/lib/ceph
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580330
- 08:20 PM Bug #53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionE...
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582346
- 08:00 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580296
- 07:42 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580078
12/22/2021
- 08:20 PM Bug #53706 (Resolved): cephadm: Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: i...
- ...
- 05:48 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- /a/yuriw-2021-12-21_18:01:07-rados-wip-yuri3-testing-2021-12-21-0749-distro-default-smithi/6576218/
12/21/2021
- 09:52 PM Bug #53693: ceph orch upgrade start is getting stuck in gibba cluster
- We have another tiny (3-node) cluster where the same command was tried, and it worked.
The main differenc...
- 09:42 PM Bug #53693 (Closed): ceph orch upgrade start is getting stuck in gibba cluster
- - The current ceph version ...
- 05:55 PM Bug #50524: placement spec: irritating error message if passed a string for count_per_host
- Following up - This was fixed by https://github.com/ceph/ceph/pull/44267
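Bug #50524 above is about the confusing error raised when count_per_host in a placement spec arrives as a string (the actual fix landed via the PR linked in the comment). An illustrative sketch of the kind of early, explicit validation involved; the function name and exact messages are assumptions, not the merged code:

```python
def parse_count_per_host(value):
    """Coerce a count_per_host value to int, with a clear error message.

    Accepts ints or numeric strings (e.g. YAML "count_per_host: '2'");
    anything else raises ValueError up front instead of an opaque
    failure deeper in the scheduler.
    """
    if isinstance(value, bool):
        raise ValueError(f"count_per_host must be an integer, got {value!r}")
    if isinstance(value, int):
        return value
    if isinstance(value, str) and value.strip().isdigit():
        return int(value.strip())
    raise ValueError(f"count_per_host must be an integer, got {value!r}")
```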
12/20/2021
- 11:58 PM Bug #53681 (Duplicate): Failed to extract uid/gid for path /var/lib/ceph
- /a/yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/6569399/...
- 11:45 PM Bug #53680 (Resolved): ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) ...
- /a/yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/6569344/...
12/17/2021
- 11:21 AM Bug #53652 (Closed): cephadm "Verifying IP <ip> port 3300" ... -> "OSError: [Errno 99] Cannot ass...
- We have to do better at returning useful error messages:...
- 11:11 AM Bug #53594: mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded constant to docker.io
- See also https://github.com/ceph/ceph/pull/44346
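Bug #53594 is about normalize_image_digest hard-coding docker.io: unqualified image names are always prefixed with that registry. A sketch of the idea with the constant turned into a parameter; the qualification heuristic here is an assumption for illustration, not a copy of the upstream function:

```python
def normalize_image_digest(digest: str, default_registry: str = "docker.io") -> str:
    """Prefix unqualified image names with a registry.

    An image is treated as already qualified if the part before the
    first '/' looks like a registry host (contains '.' or ':', or is
    'localhost'). Sketch only: mirrors the idea in
    mgr/cephadm/upgrade.py with the hard-coded docker.io made
    configurable.
    """
    first, sep, _ = digest.partition("/")
    if sep and ("." in first or ":" in first or first == "localhost"):
        return digest  # already qualified, e.g. quay.io/ceph/ceph:v16
    return f"{default_registry}/{digest}"
```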
- 10:41 AM Bug #53624: cephadm agent: set_store mon returned -27: error: entry size limited to 65536 bytes
- But in any case we have to fix this before porting things to pacific.
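Bug #53624 above hits the mon's 64 KiB config-key entry cap (-27 is EFBIG) when the agent reports a large payload. One way around such a cap is to shard the value across numbered sub-keys; the key scheme below is an assumption for illustration, not how the fix actually landed:

```python
def split_for_store(key: str, value: str, limit: int = 65536) -> dict:
    """Split an oversized value into numbered sub-keys, each under `limit`.

    Sketch of sharding a payload that exceeds the mon's 65536-byte
    config-key entry limit: store <key>/0, <key>/1, ... instead of
    one entry, so each set_store call stays under the cap.
    """
    chunks = [value[i:i + limit] for i in range(0, len(value), limit)] or [""]
    return {f"{key}/{n}": chunk for n, chunk in enumerate(chunks)}
```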
- 04:19 AM Documentation #53607: No coredump getting created during ceph daemons crashes
- Coredumps are managed through systemd-coredump.socket on RHEL 8/CentOS 8.
I do see coredumps generated on...
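For Documentation #53607: on RHEL/CentOS 8, /proc/sys/kernel/core_pattern usually pipes cores to systemd-coredump, so crashes land in coredumpctl rather than the daemon's working directory (and RLIMIT_CORE still has to be non-zero, per the ulimit note above). A small sketch that classifies the pattern string; the function and its categories are assumptions for illustration:

```python
def coredump_handler(core_pattern: str) -> str:
    """Classify a /proc/sys/kernel/core_pattern value.

    Returns 'systemd' when the pattern pipes to systemd-coredump
    (cores go to coredumpctl), 'pipe' for any other handler program,
    and 'file' when the kernel writes the core directly to disk.
    """
    pattern = core_pattern.strip()
    if pattern.startswith("|"):
        return "systemd" if "systemd-coredump" in pattern else "pipe"
    return "file"
```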
12/16/2021
- 10:30 PM Bug #53624: cephadm agent: set_store mon returned -27: error: entry size limited to 65536 bytes
- Some more info.
The issue was encountered on a storage-dense node. In this case the server was reporting 100+ devi...
- 09:21 AM Bug #53624 (Resolved): cephadm agent: set_store mon returned -27: error: entry size limited to 65...
- ...
- 08:19 PM Feature #52920 (Pending Backport): Add snmp-gateway as a supported service for deployment via orch...
- 03:59 PM Bug #53610: 'Inventory' object has no attribute 'get_daemon'
- I have a hard time believing this issue is real. ...
- 02:52 PM Bug #53491 (Resolved): cephadm: 'ceph cephadm osd activate' does not activate existing, previousl...
- 01:11 PM Feature #51566: cephadm: cpu limit
- To be done here:
* Extend cephadm's data structure to contain a CPU limit per service
* Needs to be ...
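The to-do list for Feature #51566 (cephadm: cpu limit) could be wired up roughly as below: a per-service limit field that gets translated into container runtime flags. The field name, class, and flag plumbing are assumptions about how the feature could look, not the actual cephadm change:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceLimits:
    """Hypothetical per-service resource limits for Feature #51566."""
    cpu_limit: Optional[float] = None  # e.g. 1.5 CPUs per daemon

def container_run_args(limits: ServiceLimits) -> List[str]:
    """Translate a spec's CPU limit into container run flags.

    Both podman and docker accept `run --cpus N` to cap CPU usage,
    so the spec value maps directly onto that flag when set.
    """
    args: List[str] = []
    if limits.cpu_limit is not None:
        args += ["--cpus", str(limits.cpu_limit)]
    return args
```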
12/15/2021
- 07:58 PM Bug #53501: Exception when running 'rook' task.
- choffman-2021-12-15_14:40:21-rados-wip-chris-warn-pg-distro-default-smithi/6564050/
choffman-2021-12-15_14:40:21-rad...
- 03:42 PM Documentation #53607: No coredump getting created during ceph daemons crashes
- Last time, the only thing I needed was *ulimit -S -c unlimited*
- 12:56 AM Bug #53610 (Can't reproduce): 'Inventory' object has no attribute 'get_daemon'
- ...
12/14/2021
- 07:44 PM Documentation #53607 (New): No coredump getting created during ceph daemons crashes
- No coredump getting created during ceph daemons crashes...
- 01:10 PM Bug #53594 (Fix Under Review): mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded co...
12/13/2021
- 08:51 PM Bug #53598 (Resolved): cephadm: upgrade when using agent is too conservative
- The upgrade procedure when using the agent backs out of the upgrade function entirely if we are missing up-to-date me...
- 08:46 PM Bug #52940 (Resolved): cephadm: cephadm can log sensitive information by logging all command line...
- 08:45 PM Bug #53394 (Pending Backport): cephadm: can infer config from mon from different cluster causing ...
- 04:20 PM Bug #53594: mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded constant to docker.io
- This is also an issue in the binary https://github.com/ceph/ceph/blob/master/src/cephadm/cephadm#L4011.
That one a...
- 10:14 AM Bug #53594 (Resolved): mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded constant t...
- https://github.com/ceph/ceph/blob/84f88eaec44103edd377817e264d5d376df8c554/src/pybind/mgr/cephadm/upgrade.py#L34
I...
- 11:15 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553774/
- 11:12 AM Bug #53583 (Resolved): mgr: Failed to validate Drive Group: OSD spec needs a `placement` key
- 11:12 AM Bug #53583: mgr: Failed to validate Drive Group: OSD spec needs a `placement` key
- fixed by https://github.com/ceph/ceph/pull/42905 in the meantime