Ben England
- Email: bengland@redhat.com
- Registered on: 01/20/2016
- Last connection: 02/07/2022
Issues
- Assigned issues: 0
- Reported issues: 17
Projects
- Ceph (Developer, 08/09/2018)
- Linux kernel client (Developer, 11/25/2020)
- phprados (Developer, 11/25/2020)
- devops (Developer, 11/25/2020)
- rbd (Developer, 11/25/2020)
- rgw (Developer, 11/25/2020)
- sepia (Developer, 08/09/2018)
- CephFS (Developer, 11/25/2020)
- teuthology (Developer, 12/04/2020)
- rados-java (Developer, 11/25/2020)
- Calamari (Developer, 08/09/2018)
- Ceph-deploy (Developer, 08/09/2018)
- ceph-dokan (Developer, 11/25/2020)
- Stable releases (Developer, 11/25/2020)
- Tools (Developer, 08/09/2018)
- Infrastructure (Developer, 08/09/2018)
- paddles (Developer, 08/16/2021)
- downburst (Developer, 08/09/2018)
- ovh (Developer, 08/09/2018)
- www.ceph.com (Developer, 08/09/2018)
- CI (Reporter, 08/09/2018)
- mgr (Developer, 08/09/2018)
- rgw-testing (Developer, 11/25/2020)
- RADOS (Developer, 08/09/2018)
- bluestore (Developer, 08/09/2018)
- ceph-volume (Developer, 11/25/2020)
- Messengers (Developer, 03/12/2019)
- Orchestrator (Developer, 01/16/2020)
- crimson (Developer, 11/25/2020)
- dmclock (Developer, 08/13/2020)
- cephsqlite (Developer, 03/19/2021)
- Dashboard (Developer, 04/22/2021)
- cleanup (Developer, 08/11/2022)
- nvme-of (Developer, 07/26/2023)
- Ceph QA (Developer, 03/14/2024)
Activity
11/06/2019
- 11:36 PM RADOS Bug #42668 (Won't Fix): ceph daemon osd.* fails in osd container but ceph daemon mds.* does not f...
- with K8S (RHOCS 4.2) OSD pods, I get this error from Ceph daemon command:
[bengland@bene-laptop ocs-operator]$ oco...
10/28/2019
- 10:24 PM mgr Bug #42517 (Resolved): ceph osd status - units invisible using black background
- When I do "ceph osd status" command, the units are not visible when using a black background, but when I paste them i...
08/26/2019
- 05:57 PM bluestore Backport #41273: nautilus: Containerized cluster failure due to osd_memory_target not being set t...
- Raising priority, I think this is a blocker for rook-ceph on any platform, and Octopus won't be in downstream release...
08/05/2019
- 02:09 PM bluestore Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- My guess would be if CGroup limit is X, then 0.95 X - 1/2 GB should be fine for osd_memory_target, that would give th...
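The heuristic in the comment above (0.95 of the cgroup limit minus half a gigabyte for osd_memory_target) can be sketched as shell arithmetic; the 8 GiB limit below is an arbitrary illustrative value, not a figure from the ticket:

```shell
# Sketch of the suggested heuristic: osd_memory_target = 0.95 * cgroup limit - 512 MiB.
# The 8 GiB cgroup limit here is an example value only.
cgroup_limit=$((8 * 1024 * 1024 * 1024))
osd_memory_target=$((cgroup_limit * 95 / 100 - 512 * 1024 * 1024))
echo "$osd_memory_target"
```

The 512 MiB headroom is what the comment proposes leaving for non-tcmalloc memory use by the OSD process.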
08/01/2019
- 05:44 PM bluestore Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- from Joe T on his system:
# ceph version
ceph version 14.2.2-218-g734b519 (734b5199dc45d3d36c8d8d066d6249cc304d0e0e...
- 05:38 PM bluestore Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- version you asked for:
ceph-base-14.2.2-0.el7.x86_64
from the Ceph container image ceph/ceph:v14.2.2-20190722
05/06/2019
- 11:39 AM Ceph Revision c2336997 (ceph): doc: remove recommendation for kernel.pid_max
- now that Ceph uses async messenger, setting kernel.pid_max = 4194303 should not be necessary anymore. Even if kernel...
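On a Linux host the current value can be inspected directly; the 4194303 figure is the old recommendation that the revision above removes from the docs:

```shell
# Print the kernel's current pid_max. With Ceph's async messenger, raising
# this to 4194303 should no longer be necessary; the default is usually fine.
cat /proc/sys/kernel/pid_max
```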
04/17/2019
- 03:54 PM RADOS Feature #39339: prioritize backfill of metadata pools, automatically
- Also, this ceph command requires the operator to do it, the point of the tracker is that this should be default behav...
- 03:38 PM RADOS Feature #39339: prioritize backfill of metadata pools, automatically
- is backfill any different than recovery priority? If not, should it be? By "backfill" I mean the emergency situatio...
- 01:59 AM RADOS Feature #39339 (In Progress): prioritize backfill of metadata pools, automatically
- Neha Ojha suggested filing this feature request.
One relatively easy way to minimize damage in a double-failure sc...
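Until such behavior is automatic, the manual operator step the comments above refer to would look roughly like this; the pool name cephfs_metadata is a placeholder, and recovery_priority is the standard pool option for biasing recovery order:

```shell
# Hypothetical manual workaround: bias recovery toward a metadata pool by
# raising its recovery_priority (pool name is an example, not from the ticket).
ceph osd pool set cephfs_metadata recovery_priority 5
```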