
Profile

Prayank Saxena

  • Login: Prayank
  • Registered on: 04/12/2020
  • Last sign in: 01/04/2023

Issues

                  Open   Closed   Total
  Assigned issues    0        0       0
  Reported issues    3        0       3

Activity

02/22/2023

06:48 AM Ceph Bug #58821: pg_autoscaler module is not working since Pacific version upgrade from v16.2.4 to v16.2.9
What will happen if I change the crush rule of pool 1 from the replicated rule (default) to a customised crush rule?
Will t...
Prayank Saxena
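For context on the question above, a minimal sketch of a CRUSH rule switch, assuming the ceph CLI and an admin keyring; the pool and rule names ("pool1", "my_custom_rule") are placeholders, not taken from the ticket. Changing a pool's crush_rule remaps its PGs and triggers data movement.

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; assumes an admin keyring on this host.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Show the rule the pool currently uses, then switch it.
    # The switch remaps PGs and causes backfill/recovery traffic.
    print(ceph("osd", "pool", "get", "pool1", "crush_rule"))
    ceph("osd", "pool", "set", "pool1", "crush_rule", "my_custom_rule")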
06:24 AM Ceph Bug #58821: pg_autoscaler module is not working since Pacific version upgrade from v16.2.4 to v16.2.9
I see this ticket already opened for the same issue: https://tracker.ceph.com/issues/55611
But can I get a solution on h...
Prayank Saxena
06:16 AM Ceph Bug #58821 (New): pg_autoscaler module is not working since Pacific version upgrade from v16.2.4 to v16.2.9
Hello Team,
We upgraded our clusters from Pacific v16.2.4 to v16.2.9 a few months back. Before the upgrade I was ab...
Prayank Saxena
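For context on the report above, a minimal sketch of how the autoscaler state can be inspected, assuming only the ceph CLI and an admin keyring; the pool name "pool1" is a placeholder.

    import subprocess

    def ceph(*args):
        # Assumes the ceph CLI and an admin keyring on this host.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    print(ceph("osd", "pool", "autoscale-status"))   # the autoscaler's view of each pool
    print(ceph("mgr", "module", "ls"))               # list mgr modules; pg_autoscaler is always-on in Pacific
    ceph("osd", "pool", "set", "pool1", "pg_autoscale_mode", "on")   # per-pool mode (placeholder pool name)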

01/05/2023

07:36 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Okay, I see, thanks Xiubo Li.
I was going through the link and found that resetting the journal and session resolved the issue...
Prayank Saxena
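The comment above only hints at the recovery steps; as a hedged sketch, the journal and session reset from the CephFS disaster-recovery procedure looks roughly like this, assuming filesystem name "cephfs" and rank 0 (both placeholders). It discards unflushed journal entries and should only be run as a last resort with the filesystem taken down.

    import subprocess

    def run(*cmd):
        # Assumes cephfs-journal-tool / cephfs-table-tool and admin access on this host.
        subprocess.run(list(cmd), check=True)

    # Export a backup of the journal first, then reset the journal and session table.
    run("cephfs-journal-tool", "--rank=cephfs:0", "journal", "export", "backup.bin")
    run("cephfs-journal-tool", "--rank=cephfs:0", "journal", "reset")
    run("cephfs-table-tool", "all", "reset", "session")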
05:03 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Thanks Xiubo Li for the update.
We are currently facing a similar issue where client I/O is not visible in ceph statu...
Prayank Saxena

01/04/2023

05:04 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Prayank Saxena wrote:
> Hello Team,
>
> We hit an issue similar to 'mds read-only' in Pacific 16.2.9 where one w...
Prayank Saxena
04:56 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Hello Team,
We hit an issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...
Prayank Saxena
04:53 AM CephFS Bug #52260: 1 MDSs are read only | pacific 16.2.5
Hello Team,
We hit an issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...
Prayank Saxena
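A hedged sketch of the status checks that go with read-only MDS reports like the two above, assuming only the ceph CLI and an admin keyring; every command is a read-only query.

    import subprocess

    def ceph(*args):
        # Read-only status queries; assumes the ceph CLI and an admin keyring.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    print(ceph("health", "detail"))   # reports MDS_READ_ONLY when an MDS has gone read-only
    print(ceph("fs", "status"))       # per-rank MDS state for each filesystem
    print(ceph("fs", "dump"))         # full FSMap, including flags and standby daemons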

06/22/2020

09:12 AM RADOS Bug #46137: Monitor leader is marking multiple osd's down
Every few minutes multiple OSDs are going down and coming back up, which is causing recovery of data. This is occurring ...
Prayank Saxena
09:07 AM RADOS Bug #46137 (New): Monitor leader is marking multiple osd's down
My Ceph cluster consists of 5 mons and 58 DNs with 1302 OSDs (HDDs) in total, running the 12.2.8 Luminous (stable) version, and Fi...
Prayank Saxena
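A hedged sketch of first-pass checks for flapping OSDs like those described above, assuming the ceph CLI and an admin keyring on a monitor/admin host; the nodown flag pauses further mark-downs while heartbeats and the network are investigated and should be unset afterwards.

    import subprocess

    def ceph(*args):
        # Assumes the ceph CLI and an admin keyring on a monitor/admin host.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    print(ceph("health", "detail"))      # OSD_DOWN and related warnings
    print(ceph("osd", "tree", "down"))   # which OSDs are currently marked down
    ceph("osd", "set", "nodown")         # optional: stop further mark-downs while debugging
    # ... investigate heartbeat / network issues, then:
    ceph("osd", "unset", "nodown")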
