Profile

Prayank Saxena

  • Registered on: 04/12/2020
  • Last connection: 01/04/2023

Activity

02/22/2023

06:48 AM Ceph Bug #58821: pg_autoscaler module is not working since Pacific version upgrade from v16.2.4 to v16...
What will happen if I change the CRUSH rule of pool 1 from the default replicated rule to a customised CRUSH rule?
Will t...
06:24 AM Ceph Bug #58821: pg_autoscaler module is not working since Pacific version upgrade from v16.2.4 to v16...
I see this ticket already opened for the same issue: https://tracker.ceph.com/issues/55611
But can I get a solution on h...
06:16 AM Ceph Bug #58821 (New): pg_autoscaler module is not working since Pacific version upgrade from v16.2.4 ...
Hello Team,
We upgraded our clusters from Pacific v16.2.4 to v16.2.9 a few months back. Before the upgrade I was ab...
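
For context on #58821 above: a minimal shell sketch of checking the autoscaler state and switching a pool's CRUSH rule. The names mypool and my_custom_rule are placeholders, not taken from the ticket. Changing a pool's CRUSH rule remaps its PGs and triggers backfill, but the pool stays online throughout.

    # Confirm the pg_autoscaler mgr module is on and see what it reports.
    ceph mgr module ls | grep -i autoscaler
    ceph osd pool autoscale-status

    # Inspect the CRUSH rule currently assigned to a pool.
    ceph osd pool get mypool crush_rule

    # Switch the pool to an existing custom CRUSH rule; this starts backfill.
    ceph osd pool set mypool crush_rule my_custom_rule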

01/05/2023

07:36 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Okay, I see. Thanks, Xiubo Li.
I was going through the link and found that a reset of the journal and session resolved the issue...
05:03 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Thanks, Xiubo Li, for the update.
We are currently facing a similar issue where client I/O is not visible in ceph statu...
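
The journal and session reset mentioned above is part of the CephFS disaster-recovery procedure. A hedged sketch for rank 0 of one filesystem, with <fs_name> as a placeholder: these commands can discard metadata, so they should only be run against a stopped MDS, after exporting a journal backup, with the disaster-recovery docs at hand.

    # Back up the journal before touching it.
    cephfs-journal-tool --rank=<fs_name>:0 journal export backup.bin

    # Recover what can be salvaged from the journal into the metadata pool.
    cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary

    # Reset the journal (destructive; last resort).
    cephfs-journal-tool --rank=<fs_name>:0 journal reset

    # Reset the session table so clients must reconnect.
    cephfs-table-tool all reset session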

01/04/2023

05:04 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Prayank Saxena wrote:
> Hello Team,
>
> We hit an issue similar to 'mds read-only' in Pacific 16.2.9 where one w...
04:56 AM CephFS Bug #58082: cephfs:filesystem became read only after Quincy upgrade
Hello Team,
We hit an issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...
04:53 AM CephFS Bug #52260: 1 MDSs are read only | pacific 16.2.5
Hello Team,
We hit an issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...
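
A first diagnostic pass for a read-only MDS as described in these reports might look like the sketch below; none of it is taken from the tickets, and the log path is illustrative.

    # A read-only MDS surfaces as the MDS_READ_ONLY health check.
    ceph health detail

    # Per-filesystem view of ranks, states, and standbys.
    ceph fs status

    # Look in the MDS log for the failed write that forced read-only mode.
    grep -i "read.only" /var/log/ceph/ceph-mds.*.log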

06/22/2020

09:12 AM RADOS Bug #46137: Monitor leader is marking multiple osd's down
Every few minutes multiple OSDs go down and come back up, which is causing data recovery. This is occurring ...
09:07 AM RADOS Bug #46137 (New): Monitor leader is marking multiple osd's down
My Ceph cluster consists of 5 MONs and 58 DNs with 1302 total OSDs (HDDs) on version 12.2.8 Luminous (stable) and Fi...
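
The usual "flapping OSDs" troubleshooting from the Ceph docs applies to a report like #46137: stop the monitors from marking OSDs down while the underlying cause (often the cluster network or heartbeats) is investigated. A sketch, not taken from the ticket:

    # Watch the flapping in real time and list currently down OSDs.
    ceph -w
    ceph osd tree down

    # Temporarily prevent monitors from marking OSDs down while investigating.
    ceph osd set nodown

    # ...check the cluster/private network, MTU, and OSD heartbeats...

    # Restore normal behaviour once the cause is fixed.
    ceph osd unset nodown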
