- Registered on: 02/05/2020
- Last connection: 03/07/2023
- Assigned issues: 0
- Reported issues: 15
- 08:14 AM RADOS Bug #6297: ceph osd tell * will break when FD limit reached, messenger should close pipes as nece...
- Hi Brad, yes I can. I tried with 1300 and it works fine. I added "ulimit -n 2048" to the script as a work-around (a sketch of this appears after this list).
- 12:25 PM RADOS Bug #6297: ceph osd tell * will break when FD limit reached, messenger should close pipes as nece...
- Just ran into this problem as well. I'm scraping OSD perf dumps to a file in a script and I get...
- 09:44 AM bluestore Bug #44010: changing osd_memory_target currently requires restart, should update at runtime
- Just to add to this from my side: after upgrading to Octopus 15.2.17 the memory target can be adjusted at runtime (see the sketch after this list).
- 01:14 PM Ceph Bug #58002 (New): mon_max_pg_per_osd is not checked per OSD
- The warning for exceeding mon_max_pg_per_osd seems to be triggered only when the average PG count over all OSDs excee... (see the per-OSD check sketch after this list)
- 07:07 AM CephFS Bug #56529: ceph-fs crashes on getfattr
- Hi, just a confirmation. The problem is solved in ML 5.19.10-1.el7 and probably all other stable kernel lines, includ...
- 12:47 PM RADOS Bug #46847: Loss of placement information on OSD reboot
- The PR https://github.com/ceph/ceph/pull/40849 for adding the test was marked stale. I left a comment and it would be...
- 08:57 AM Ceph Bug #56995: PGs go inactive after failed OSD comes up and is marked as in
- The problem is still present in octopus 15.2.17. Almost certainly all newer versions are affected.
- 10:36 AM RADOS Bug #49231: MONs unresponsive over extended periods of time
- OK, I did some more work and it looks like I can trigger the issue with some certainty by failing an MDS that was up ...
- 09:35 AM Ceph Bug #56995: PGs go inactive after failed OSD comes up and is marked as in
- I cannot reproduce this with mimic-13.2.10.
- 12:34 PM Ceph Bug #57348 (New): crush map fails: (1) choose and chooseleaf for type OSD not identical and (2) ret...
- We observe two issues with CRUSH. Related ceph-users thread (split up into two for some reason): https://lists.ceph.io...
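
For Bug #6297, a minimal sketch of the work-around quoted above, assuming a hypothetical shell script that scrapes perf dumps from local OSD admin sockets; the socket glob and output file are placeholders, not taken from the report:

```
#!/bin/bash
# Work-around sketch for the FD-limit problem: raise the open-file limit for
# this shell before repeatedly querying OSDs.
ulimit -n 2048

# Hypothetical scraping loop: append each local OSD's perf counters to a file.
# The admin-socket path is the stock default and may differ on other setups.
for sock in /var/run/ceph/ceph-osd.*.asok; do
    ceph daemon "$sock" perf dump >> /var/log/osd-perf-dumps.json
done
```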
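For Bug #44010, a minimal sketch of adjusting osd_memory_target at runtime as described in the comment; the 6 GiB value and osd.0 are example placeholders:

```
# Set the memory target for all OSDs at runtime via the monitor config store
# (the 6 GiB value is only an example).
ceph config set osd osd_memory_target 6442450944

# Confirm the value a specific OSD is running with.
ceph config show osd.0 osd_memory_target
```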
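For Bug #58002, a minimal sketch of checking per-OSD PG counts against mon_max_pg_per_osd by hand, since the built-in warning reportedly only looks at the cluster-wide average:

```
# The configured threshold.
ceph config get mon mon_max_pg_per_osd

# Per-OSD placement-group counts (PGS column); compare them against the
# threshold to spot individual OSDs over the limit while the average is not.
ceph osd df tree
```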