Shinobu Kinjo
- Email: shinobu@redhat.com
- Registered on: 07/23/2015
- Last connection: 01/30/2018
Issues
- Assigned issues: 1
- Reported issues: 26
Projects
- Ceph (Developer, Reporter, 01/12/2017)
- Linux kernel client (Developer, Reporter, 11/25/2020)
- phprados (Developer, Reporter, 11/25/2020)
- devops (Developer, Reporter, 11/25/2020)
- rbd (Developer, Reporter, 11/25/2020)
- rgw (Developer, Reporter, 11/25/2020)
- sepia (Developer, 02/02/2018)
- CephFS (Developer, Reporter, 11/25/2020)
- teuthology (Developer, 12/04/2020)
- rados-java (Developer, Reporter, 11/25/2020)
- Calamari (Developer, 02/02/2018)
- Ceph-deploy (Developer, 02/02/2018)
- ceph-dokan (Developer, Reporter, 11/25/2020)
- Stable releases (Developer, Reporter, 11/25/2020)
- Tools (Developer, 02/02/2018)
- Infrastructure (Developer, 02/02/2018)
- downburst (Developer, 02/02/2018)
- ovh (Developer, 02/02/2018)
- www.ceph.com (Developer, 02/02/2018)
- mgr (Developer, Reporter, 06/28/2017)
- rgw-testing (Developer, Reporter, 11/25/2020)
- RADOS (Developer, Reporter, 06/07/2017)
- bluestore (Developer, Reporter, 11/29/2017)
- ceph-volume (Developer, Reporter, 11/25/2020)
- Messengers (Developer, Reporter, 03/12/2019)
- Orchestrator (Developer, Reporter, 01/16/2020)
- crimson (Developer, Reporter, 11/25/2020)
- dmclock (Developer, Reporter, 08/13/2020)
Activity
01/30/2018
- 06:53 AM rgw Backport #22830: jewel: expose --sync-stats via admin api
- https://github.com/ceph/ceph/pull/20179
- 06:53 AM rgw Backport #22830 (Resolved): jewel: expose --sync-stats via admin api
- https://github.com/ceph/ceph/pull/20179
01/18/2018
- 07:07 AM bluestore Bug #22115 (Need More Info): OSD SIGABRT on bluestore_prefer_deferred_size = 104857600: assert(_b...
01/13/2018
- 01:02 AM Ceph Revision 97731e3a (ceph): common: Add min/max of ms_async_op_threads
- Signed-off-by: Shinobu Kinjo <shinobu@redhat.com>
01/08/2018
- 04:12 AM CephFS Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
- https://github.com/ceph/ceph/pull/19830
- 04:12 AM CephFS Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
- Shinobu Kinjo wrote:
- > fix already in luminous
- 03:58 AM CephFS Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
- fix already in luminous
01/04/2018
- 01:38 AM Ceph Revision 213bc895 (ceph): common: Do not use unique_lock, if manual lock/unlock are not necessary
- Signed-off-by: Shinobu Kinjo <shinobu@redhat.com>
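- Note: the revision above concerns a common C++ locking idiom; when a mutex is simply held for the whole scope and never unlocked manually, std::lock_guard expresses the intent with less machinery than std::unique_lock. A minimal sketch of that general pattern (illustrative names, not the actual Ceph code):

    #include <mutex>

    std::mutex mtx;      // hypothetical names for illustration only
    int counter = 0;

    // unique_lock tracks ownership so it can unlock/relock or defer
    // locking; that flexibility is unused when the lock simply spans
    // the whole scope.
    void increment_with_unique_lock() {
      std::unique_lock<std::mutex> l(mtx);
      ++counter;
    }  // released by the destructor either way

    // lock_guard: same scope-wide locking, no manual lock/unlock,
    // no ownership bookkeeping.
    void increment_with_lock_guard() {
      std::lock_guard<std::mutex> l(mtx);
      ++counter;
    }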
01/02/2018
- 08:03 PM Ceph Revision 0875bc7d (ceph): osd: Sanity check, if too full or not
- Signed-off-by: Shinobu Kinjo <shinobu@redhat.com>
12/21/2017
- 04:13 PM CephFS Bug #22357: mds: read hang in multiple mds setup
- I don't see any merged PR.