Activity
From 12/06/2021 to 01/04/2022
01/04/2022
- 08:33 PM Bug #53766 (Duplicate): ceph orch ls: setting cgroup config for procHooks process caused: Unit li...
- Description: rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw...
- 08:08 PM Bug #53394: cephadm: can infer config from mon from different cluster causing file not found error
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582558
- 07:50 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582492
- 09:57 AM Bug #53762 (New): Problem with cephfs-mirror and cephadm / ceph orch
- After setting up cephfs mirroring and running 'ceph orch apply cephfs-mirror', the mirroring daemon complains...
12/23/2021
- 09:46 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- Possibly related to https://tracker.ceph.com/issues/53448
- 09:42 PM Bug #53723: Cephadm agent fails to report and causes a health timeout
- /a/yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/6569647
- 07:36 PM Bug #53723 (Resolved): Cephadm agent fails to report and causes a health timeout
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580439
Description: ra...
- 09:44 PM Bug #53448: cephadm: agent failures double reported by two health checks
- @Adam King would you say that https://tracker.ceph.com/issues/53723 is related to this Tracker?
- 08:33 PM Bug #53681: Failed to extract uid/gid for path /var/lib/ceph
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582372
- 06:57 PM Bug #53681: Failed to extract uid/gid for path /var/lib/ceph
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580330
- 08:20 PM Bug #53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionE...
- /a/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/6582346
- 08:00 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580296
- 07:42 PM Bug #53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
- /a/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/6580078
12/22/2021
- 08:20 PM Bug #53706 (Resolved): cephadm: Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: i...
- ...
- 05:48 PM Bug #53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiti...
- /a/yuriw-2021-12-21_18:01:07-rados-wip-yuri3-testing-2021-12-21-0749-distro-default-smithi/6576218/
12/21/2021
- 09:52 PM Bug #53693: ceph orch upgrade start is getting stuck in gibba cluster
- We have another tiny (3-node) cluster; the same command was tried there and it worked.
The main differenc...
- 09:42 PM Bug #53693 (Closed): ceph orch upgrade start is getting stuck in gibba cluster
- - The current ceph version ...
- 05:55 PM Bug #50524: placement spec: irritating error message if passed a string for count_per_host
- Following up - This was fixed by https://github.com/ceph/ceph/pull/44267
12/20/2021
- 11:58 PM Bug #53681 (Duplicate): Failed to extract uid/gid for path /var/lib/ceph
- /a/yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/6569399/...
- 11:45 PM Bug #53680 (Resolved): ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) ...
- /a/yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/6569344/...
12/17/2021
- 11:21 AM Bug #53652 (Closed): cephadm "Verifying IP <ip> port 3300" ... -> "OSError: [Errno 99] Cannot ass...
- We have to return better error messages here:...
- 11:11 AM Bug #53594: mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded constant to docker.io
- See also https://github.com/ceph/ceph/pull/44346
- 10:41 AM Bug #53624: cephadm agent: set_store mon returned -27: error: entry size limited to 65536 bytes
- But in any case we have to fix this before porting things to pacific.
- 04:19 AM Documentation #53607: No coredump getting created during ceph daemons crashes
- The coredumps are managed through systemd-coredump.socket on RHEL 8 / CentOS 8.
I do see coredumps are generated on...
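For reference, the usual first check when no coredump appears is the soft core-file size limit; a minimal sketch (on RHEL 8 / CentOS 8, as noted above, systemd-coredump captures the cores instead of writing a `core` file in the working directory):

```shell
# Check the soft core-file size limit; a value of 0 disables core dumps,
# which is a common reason none are created when a daemon crashes.
ulimit -S -c            # often prints 0
ulimit -S -c unlimited  # raise the soft limit for this shell session
ulimit -S -c            # now prints "unlimited"
# On systemd-coredump systems, `coredumpctl list` shows collected cores.
```

This only affects the current shell and its children; making it permanent requires limits.conf or a systemd unit override.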
12/16/2021
- 10:30 PM Bug #53624: cephadm agent: set_store mon returned -27: error: entry size limited to 65536 bytes
- Some more info.
The issue was encountered on a storage-dense node. In this case the server was reporting 100+ devi...
- 09:21 AM Bug #53624 (Resolved): cephadm agent: set_store mon returned -27: error: entry size limited to 65...
- ...
- 08:19 PM Feature #52920 (Pending Backport): Add snmp-gateway as a supported service for deployment via orch...
- 03:59 PM Bug #53610: 'Inventory' object has no attribute 'get_daemon'
- I have a hard time believing this issue is real. ...
- 02:52 PM Bug #53491 (Resolved): cephadm: 'ceph cephadm osd activate' does not activate existing, previousl...
- 01:11 PM Feature #51566: cephadm: cpu limit
- To be done here:
* Extend cephadm's data structure to contain a CPU limit per service
* Needs be ...
12/15/2021
- 07:58 PM Bug #53501: Exception when running 'rook' task.
- choffman-2021-12-15_14:40:21-rados-wip-chris-warn-pg-distro-default-smithi/6564050/
choffman-2021-12-15_14:40:21-rad...
- 03:42 PM Documentation #53607: No coredump getting created during ceph daemons crashes
- last time the only thing I needed was *ulimit -S -c unlimited*
- 12:56 AM Bug #53610 (Can't reproduce): 'Inventory' object has no attribute 'get_daemon'
- ...
12/14/2021
- 07:44 PM Documentation #53607 (New): No coredump getting created during ceph daemons crashes
- No coredump getting created during ceph daemons crashes...
- 01:10 PM Bug #53594 (Fix Under Review): mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded co...
12/13/2021
- 08:51 PM Bug #53598 (Resolved): cephadm: upgrade when using agent is too conservative
- The upgrade procedure when using the agent backs out of the upgrade function entirely if we are missing up-to-date me...
- 08:46 PM Bug #52940 (Resolved): cephadm: cephadm can log sensitive information by logging all command line...
- 08:45 PM Bug #53394 (Pending Backport): cephadm: can infer config from mon from different cluster causing ...
- 04:20 PM Bug #53594: mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded constant to docker.io
- This is also an issue in the binary https://github.com/ceph/ceph/blob/master/src/cephadm/cephadm#L4011.
That one a...
- 10:14 AM Bug #53594 (Resolved): mgr/cephadm/upgrade.py: normalize_image_digest has a hard coded constant t...
- https://github.com/ceph/ceph/blob/84f88eaec44103edd377817e264d5d376df8c554/src/pybind/mgr/cephadm/upgrade.py#L34
I...
- 11:15 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553774/
- 11:12 AM Bug #53583 (Resolved): mgr: Failed to validate Drive Group: OSD spec needs a `placement` key
- 11:12 AM Bug #53583: mgr: Failed to validate Drive Group: OSD spec needs a `placement` key
- fixed by https://github.com/ceph/ceph/pull/42905 in the meantime
12/10/2021
- 08:17 PM Bug #53583 (Resolved): mgr: Failed to validate Drive Group: OSD spec needs a `placement` key
- Description:...
- 03:04 PM Bug #53545 (Duplicate): rados/cephadm/mgr-nfs-upgrade failures due to CEPHADM_DAEMON_PLACE_FAIL
12/09/2021
- 09:17 PM Bug #52279: cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requ...
- Noticed this failure in upgrade tests in octopus...
- 05:34 PM Bug #52279: cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requ...
- See this issue in master test runs,
http://qa-proxy.ceph.com/teuthology/vshankar-2021-11-30_06:23:32-fs-wip-vshankar...
- 08:51 PM Bug #53501: Exception when running 'rook' task.
- /a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553994/
- 11:32 AM Bug #53501 (Fix Under Review): Exception when running 'rook' task.
- 08:04 PM Bug #53572: cephadm should not require running as root for --help
- If the team is interested in taking this report, I'd be happy to look into the required changes myself.
- 08:03 PM Bug #53572 (Resolved): cephadm should not require running as root for --help
- [ceph@ceph0 ~]$ cephadm --help
ERROR: cephadm should be run as root
[ceph@ceph0 ~]$ cephadm bootstrap --help
ERROR...
- 06:58 PM Feature #53570 (Resolved): cephadm: reconfigure agents over http
- Since we can contact agents through http, it should be possible (and likely faster) to send them new config informati...
- 11:59 AM Feature #53562 (New): cephadm doesn't support osd crush_location_hook
- crush_location_hook is a path to an executable that is executed in order to update the current OSD's crush location. ...
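As context for the entry above: the hook is any executable that prints the OSD's CRUSH location on stdout as key=value pairs. A minimal sketch, with a hypothetical script and example bucket names (not taken from this tracker entry):

```shell
# Hedged sketch of a crush_location_hook script. The bucket names
# (rack1, default) are illustrative examples only.
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/sh
# Print this node's CRUSH location as space-separated key=value pairs.
echo "host=$(hostname -s) rack=rack1 root=default"
EOF
chmod +x "$hook"
"$hook"

# On a live cluster you would then register the script, e.g.:
#   ceph config set osd crush_location_hook /path/to/hook
```

The point of the feature request is that cephadm-managed OSDs have no supported way to ship and register such a script on each host.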
- 10:38 AM Bug #53491 (Fix Under Review): cephadm: 'ceph cephadm osd activate' does not activate existing, p...
12/08/2021
- 08:00 PM Documentation #53532: cephadm document Prometheus Sizing observations
- :) Just the date dd/mm - nothing magical. I was hoping to use the exercise to track tsdb growth as we expanded the cl...
- 11:50 AM Documentation #53532 (New): cephadm document Prometheus Sizing observations
- Prometheus Sizing observations
* 22/11 48 osds growing to 600+ - 124GB Disk space, peak of 3000 IOPS, 150MB/s, 2 cor...
- 07:05 PM Bug #53545 (Duplicate): rados/cephadm/mgr-nfs-upgrade failures due to CEPHADM_DAEMON_PLACE_FAIL
- These upgrade tests die after 12 hours and this is what I found in the log (which may or may not be the cause of the ...
- 06:44 PM Bug #44587: failed to write <pid> to cgroup.procs:
- ...
- 06:02 PM Bug #53501: Exception when running 'rook' task.
- /a/yuriw-2021-12-04_15:17:04-rados-wip-yuri-testing-2021-12-03-1257-pacific-distro-default-smithi/6544618/
- 02:24 PM Bug #53541 (Resolved): permissions too open on the cephadm agent files (644) - includes certs and...
- permissions too open on the cephadm agent files (644) - includes certs and config
- 01:51 PM Feature #53540 (Resolved): ceph orch device ls doesn’t show the age of the data it’s returning
- ceph orch device ls doesn’t show the age of the data it’s returning
- 01:50 PM Feature #53539 (New): ceph orch exposes no way to see what’s queued or remove work from the queue
- ceph orch exposes no way to see what’s queued or remove work from the queue
We should probably expose the Queue of...
- 12:11 PM Tasks #53533 (New): cephadm: performance: Make ceph.conf change reconfiguration Parallelized
- Reconfiguring daemons (e.g., after a ceph.conf change such as a new mon) is serialized and slow, and happens before anything else...
- 11:36 AM Feature #43684: Make use of progress items for OSD deployment
- OSD deployment is a little fire-and-forget. There is no indication of progress, nothing in the service events to indi...
- 11:32 AM Bug #53531 (New): cephadm: how agent finds the active mgr after mgr failover
- how agent finds the active mgr after mgr failover
1) new mgr pokes old agents to provide new mgr endpoint
could b...
- 11:30 AM Bug #53530 (Closed): centos’s cephadm shebang does not work on ubuntu
- ...
- 11:28 AM Bug #53529 (New): ceph orch apply ... --dry-run: Table not properly formatted
- ...
- 11:19 AM Bug #53528 (Resolved): loopback devices are showing in gather-facts
- loopback devices are showing in gather-facts and therefore the GUI. This is a feature of using snapd on ubuntu with s...
- 11:17 AM Bug #53527 (Resolved): cephadm: orch upgrade ls: shows outdated major versions
- orch upgrade ls … why does this show older versions 16.x down to 14.2? Isn’t that asking for trouble?
- 11:04 AM Bug #53525 (New): In case a configcheck alert is firing, you cannot suppress it
- In case the alert “CephNodeInconsistentMTU” is firing all the time - this could be a nuisance alert that customers se...
- 10:58 AM Feature #53523 (New): cephadm: All MONs deployed in the same rack
- default deployment of the mons is just count - so in this case, you get 5 mons within the same rack. How can we make ...
- 10:49 AM Bug #53522 (Closed): cephadm: Changing grafana port: `orch ps` is still shows old port
- Changed the grafana port to 1493 with orch apply. ‘orch ls’ shows the new port, but ps doesn’t. Ran a reconfig of the gra...
- 09:39 AM Bug #48925 (Resolved): cephadm: iscsi missing mgr permissions
- 09:28 AM Bug #53397 (Pending Backport): make cephadm pass CEPH_VOLUME_SKIP_RESTORECON when running ceph-vo...
12/06/2021
- 10:58 PM Bug #53365 (Resolved): pacific: broken groups or modules: container-tools:3.0
- 08:44 PM Bug #53365: pacific: broken groups or modules: container-tools:3.0
- https://github.com/ceph/ceph/pull/44201 merged
- 03:39 PM Bug #53496: cephadm: list-networks swallows /128 networks, breaking the orchestrator ("Filtered o...
- Sebastian Wagner wrote:
> Want to make a PR? If yes, please add your command outputs to https://github.com/ceph/ceph...
- 01:57 PM Bug #53496: cephadm: list-networks swallows /128 networks, breaking the orchestrator ("Filtered o...
- Want to make a PR? If yes, please add your command outputs to https://github.com/ceph/ceph/blob/8c54a705e293682a8bbbd...
- 11:14 AM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2021-12-03_15:27:18-rados-wip-yuri11-testing-2021-12-02-1451-distro-default-smithi/6542708
- 08:32 AM Bug #53501 (Pending Backport): Exception when running 'rook' task.
- /a/yuriw-2021-12-03_15:27:18-rados-wip-yuri11-testing-2021-12-02-1451-distro-default-smithi/6542695...