Profile

Janek Bevendorff

  • Registered on: 12/17/2018
  • Last connection: 04/02/2020

Activity

04/02/2020

08:58 AM mgr Bug #39264: Ceph-mgr Hangup and _check_auth_rotating possible clock skew, rotating keys expired w...
There is nothing else in the logs and I disabled the Prometheus module in our production cluster, so I don't have the...
08:23 AM mgr Bug #39264: Ceph-mgr Hangup and _check_auth_rotating possible clock skew, rotating keys expired w...
No, but on the mailing list people were reporting that the problem went away once the free space dropped below a few ...

03/23/2020

05:03 PM mgr Bug #39264: Ceph-mgr Hangup and _check_auth_rotating possible clock skew, rotating keys expired w...
I want to dig this issue up again. With the new 14.2.8 update, the problem has become so bad I had to disable the pro...
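For reference, the module in question (identified as the Prometheus module in the later comment above) is toggled with the standard mgr module commands; a minimal sketch:
<pre>
# Disable the mgr prometheus module cluster-wide, and re-enable it later.
ceph mgr module disable prometheus
ceph mgr module enable prometheus
</pre>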

01/23/2020

01:06 PM fs Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
I am seeing similar issues on our cluster. I had the Ganesha node running on the same node as the MONs just for conve...

10/28/2019

10:39 AM mgr Bug #42506 (New): Prometheus module response times are consistently slow
We have a cluster with 5 MONs/MGRs and 1248 OSDs.
Our Prometheus nodes are polling the MGRs for metrics every few ...
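As a rough sketch of measuring one scrape of the module's endpoint (the hostname is a placeholder; 9283 is the module's default listen port):
<pre>
# Time a single scrape of the mgr prometheus endpoint.
# "mgr-host" is a placeholder; 9283 is the module's default port.
curl -s -o /dev/null -w 'total: %{time_total}s\n' http://mgr-host:9283/metrics
</pre>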
10:30 AM mgr Bug #39264: Ceph-mgr Hangup and _check_auth_rotating possible clock skew, rotating keys expired w...
I am seeing this quite regularly in Nautilus 14.2.4. We have five MGRs and every now and then a few of them hang the...
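A minimal sketch of recovering from a hung daemon by forcing a failover (the daemon name is a placeholder):
<pre>
# Mark the hung daemon as failed so a standby takes over, then check the active mgr.
ceph mgr fail mgr-host
ceph -s
</pre>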

09/19/2019

02:30 PM Ceph Bug #41929: Inconsistent reporting of STORED/USED in ceph df
Perhaps I should also mention that the metadata and index pools have a failure domain 'rack' whereas the data pools ...
10:54 AM Ceph Bug #41929 (Closed): Inconsistent reporting of STORED/USED in ceph df
@ceph df@ reports inconsistent values in the @STORED@ or @USED@ column (not @%USED@). Notice how the @cephfs.storage.data@ p...
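A short sketch of the commands used to compare these figures with the CRUSH rules behind them (@cephfs.storage.data@ is the pool named in the report):
<pre>
# Per-pool STORED/USED figures, then the CRUSH rule backing the data pool.
ceph df detail
ceph osd pool get cephfs.storage.data crush_rule
ceph osd crush rule dump
</pre>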

09/11/2019

01:05 PM fs Feature #41763 (New): Support decommissioning of additional data pools
Adding additional data pools via @ceph fs add_data_pool@ is very easy, but once a pool is in use, it is very hard to ...
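As a hedged sketch of the manual workaround this implies (paths, pool and filesystem names are placeholders): a file's data pool is recorded in its layout xattr, so every file still referencing the old pool has to be rewritten before the pool can be detached.
<pre>
# Show which pool a file's data lives in (virtual layout xattr).
getfattr -n ceph.file.layout.pool /mnt/cephfs/some/file
# Rewriting the file stores its data with the directory's current layout,
# i.e. in the new pool (copy, then rename over the original).
cp /mnt/cephfs/some/file /mnt/cephfs/some/file.new
mv /mnt/cephfs/some/file.new /mnt/cephfs/some/file
# Once no layouts reference the old pool any more, detach it from the filesystem.
ceph fs rm_data_pool <fs_name> old_data_pool
</pre>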

09/04/2019

08:34 AM mgr Bug #23967: ceph fs status and Dashboard fail with Python stack trace
Update: after rotating through the other standby MDSs by repeatedly failing the currently active MDS, I got it workin...
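For reference, a minimal sketch of the rotation described here (rank 0 stands in for whichever MDS is currently active):
<pre>
# Check MDS state, then fail the active MDS so a standby takes over.
ceph fs status
ceph mds fail 0
</pre>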
