Profile

Muthusamy Muthiah

  • Login: muthusamym
  • Registered on: 01/24/2017
  • Last sign in: 03/29/2017

Issues

                 Open  Closed  Total
Assigned issues     0       0      0
Reported issues     0       3      3

Activity

03/29/2017

06:57 AM mgr Bug #18994: High CPU usage for ceph-mgr daemon v11.2.0
Due to high CPU utilisation we stopped the ceph-mgr on all our clusters.
On one of our clusters we saw high memory usa...
Muthusamy Muthiah
06:42 AM RADOS Bug #18924: kraken-bluestore 11.2.0 memory leak issue
Hi Jaime,
The issue is not fixed with this workaround, and we will address this workaround in another issue related t...
Muthusamy Muthiah

02/14/2017

08:58 AM RADOS Bug #18924 (Resolved): kraken-bluestore 11.2.0 memory leak issue
Hi All,
On all our 5-node clusters with ceph 11.2.0 we encounter memory leak issues.
Cluster details: 5 node wi...
Muthusamy Muthiah

01/31/2017

05:07 PM Ceph Bug #18693: min_size = k + 1
Hi All,
the problem is in kraken: when a pool is created with an EC profile, min_size equals the erasure size.
For 3...
Muthusamy Muthiah
12:48 PM Ceph Bug #18693: min_size = k + 1
Hi All,
Following are the test outcomes on EC profiles (n = k + m):
1. Kraken filestore and bluestore with m=1, ...
Muthusamy Muthiah

01/29/2017

02:42 PM Ceph Bug #18693: min_size = k + 1
Hi All,
Also tried EC profile 3+1 on a 5-node cluster with bluestore enabled. When an OSD is down the cluster goes...
Muthusamy Muthiah

01/27/2017

06:48 AM Ceph Bug #18693 (Won't Fix): min_size = k + 1
Installed version: v11.2.0
Ceph cluster: 5-node cluster with 12 disks (6TB) on each node, enabled bluestore with EC...
Muthusamy Muthiah
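The min_size behaviour reported in Bug #18693 can be illustrated with a small numeric sketch (this is a hypothetical model written for illustration, not Ceph code): for an EC pool with n = k + m shards and min_size = k + 1, a configuration with m = 1 blocks client IO as soon as a single OSD is down, even though k shards remain and the data is still readable.

```python
def io_allowed(k: int, m: int, osds_down: int) -> bool:
    """Model whether client IO continues on an EC pool.

    Assumes min_size = k + 1, as described in the issue; an
    illustrative sketch only, not the Ceph implementation.
    """
    min_size = k + 1
    available = (k + m) - osds_down  # shards still online
    return available >= min_size

# EC 3+1 (as on the 5-node cluster in the report): one OSD down
# leaves only k = 3 shards < min_size = 4, so IO blocks.
print(io_allowed(3, 1, 1))  # False

# EC 3+2: one OSD down still leaves k + 1 = 4 shards; IO continues.
print(io_allowed(3, 2, 1))  # True
```

This is why the report argues that min_size = k + 1 makes m = 1 profiles unable to tolerate even a single OSD failure without stalling IO.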

01/24/2017

07:07 AM Ceph Bug #18648 (Resolved): Bluestore: OSD crash during soak test
Installed version: v11.2.0 with bluestore
Platform: 3 * SGI nodes with 40 disks (6TB) each and EC: 2+1
Issue:
...
Muthusamy Muthiah
