
Profile

Janek Bevendorff

  • Registered on: 12/17/2018
  • Last connection: 01/23/2020


Activity

01/23/2020

01:06 PM fs Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
I am seeing similar issues on our cluster. I had the Ganesha node running on the same node as the MONs just for conve...

10/28/2019

10:39 AM mgr Bug #42506 (New): Prometheus module response times are consistently slow
We have a cluster with 5 MONs/MGRs and 1248 OSDs.
Our Prometheus nodes are polling the MGRs for metrics every few ...
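
For reference, a quick way to gauge the prometheus module's response time is to time a scrape of the module's /metrics endpoint directly; the hostname below is a placeholder and 9283 is the module's default port.

  # Time one scrape of the active MGR's prometheus endpoint (mgr-host is a placeholder)
  time curl -s -o /dev/null http://mgr-host:9283/metrics
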
10:30 AM mgr Bug #39264: Ceph-mgr Hangup and _check_auth_rotating possible clock skew, rotating keys expired w...
I am seeing this quite regularly in Nautilus 14.2.4. We have five MGRs and every now and then a few of them hang the...
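
As a hedged sketch (not part of the report itself): monitor clock skew can be checked from the CLI, and a hung MGR can be bounced by failing it over to a standby; the daemon id below is a placeholder.

  # Report clock skew as seen by the monitors
  ceph time-sync-status
  # Fail the hung/active MGR so a standby takes over (mgr id is a placeholder)
  ceph mgr fail mgr-id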

09/19/2019

02:30 PM Ceph Bug #41929: Inconsistent reporting of STORED/USED in ceph df
Perhaps I should also mention that the metadata and index pools have a failure domain 'rack' whereas the data pools ...
10:54 AM Ceph Bug #41929 (Closed): Inconsistent reporting of STORED/USED in ceph df
@ceph df@ reports inconsistent values in the @STORED@ or @USED@ columns (not @%USED@). Notice how the @cephfs.storage.data@ p...
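
A hedged way to confirm the failure-domain difference mentioned above is to look up which CRUSH rule each pool uses and inspect the rules themselves; the pool name is taken from the excerpt, the rest applies generally.

  # Which CRUSH rule does the pool use? (pool name taken from the comment above)
  ceph osd pool get cephfs.storage.data crush_rule
  # Dump all rules and check the failure-domain type in their chooseleaf step
  ceph osd crush rule dump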

09/11/2019

01:05 PM fs Feature #41763 (New): Support decommissioning of additional data pools
Adding additional data pools via @ceph fs add_data_pool@ is very easy, but once a pool is in use, it is very hard to ...
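
For context, a rough sketch of the manual path that exists today (pool, filesystem, and directory names are hypothetical): new files can be directed at another pool via a directory layout, but data already written stays in the old pool and has to be rewritten before that pool can be dropped.

  # Add a replacement data pool and point new files in a directory at it
  ceph fs add_data_pool cephfs cephfs_data_new
  setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/somedir
  # Existing objects are not migrated; files must be copied/rewritten first.
  # Only once nothing references the old pool should it be removed:
  ceph fs rm_data_pool cephfs cephfs_data_old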

09/04/2019

08:34 AM mgr Bug #23967: ceph fs status and Dashboard fail with Python stack trace
Update: after rotating through the other standby MDSs by repeatedly failing the currently active MDS, I got it workin...
08:11 AM mgr Bug #23967: ceph fs status and Dashboard fail with Python stack trace
This just happened to me in Nautilus 14.2.2.
I failed an MDS, so the standby took over. Then I started deleting a ...
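
The rotation described above amounts to repeatedly failing the active MDS so that a standby takes over; a minimal sketch, with the rank given only as an example:

  # The command that produced the stack trace
  ceph fs status
  # Fail the active MDS (rank 0 as an example) so a standby takes over
  ceph mds fail 0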

08/30/2019

07:47 AM fs Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
It's Bluestore on spinning disks. I don't really have an overview of the data distribution; it's very uneven. Perhaps...

08/29/2019

09:30 AM fs Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
Little status update: our data pool now uses up 186 TiB while only storing 53 TiB of actual data with a replication fac...
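
Taking the quoted figures at face value, the raw-to-stored ratio comes out at roughly 186 / 53 ≈ 3.5. A hedged sketch for breaking the numbers down per pool (the pool name is a placeholder):

  # Per-pool STORED vs USED, plus the pool's replication size
  ceph df detail
  ceph osd pool get cephfs_data size
  rados df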
