Profile

Janek Bevendorff

  • Registered on: 12/17/2018
  • Last connection: 08/13/2019

Activity

09/11/2019

01:05 PM fs Feature #41763 (New): Support decommissioning of additional data pools
Adding additional data pools via @ceph fs add_data_pool@ is very easy, but once a pool is in use, it is very hard to ...
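
For reference, a rough sketch of the pool-attachment commands involved (the fs and pool names below are placeholders, not taken from the report); the missing piece this feature asks for is a supported way to migrate existing file data off a pool before detaching it:

    # attach an extra data pool to an existing file system
    ceph fs add_data_pool cephfs cephfs_data_extra
    # direct new files under a directory to that pool via the file layout
    setfattr -n ceph.dir.layout.pool -v cephfs_data_extra /mnt/cephfs/somedir
    # rm_data_pool only detaches the pool from the fs; data already written to it is not migrated automatically
    ceph fs rm_data_pool cephfs cephfs_data_extra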

09/04/2019

08:34 AM mgr Bug #23967: ceph fs status and Dashboard fail with Python stack trace
Update: after rotating through the other standby MDSs by repeatedly failing the currently active MDS, I got it workin...
08:11 AM mgr Bug #23967: ceph fs status and Dashboard fail with Python stack trace
This just happened to me in Nautilus 14.2.2.
I failed an MDS, so the standby took over. Then I started deleting a ...
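
For context, the failover rotation described above boils down to commands along these lines:

    ceph fs status     # the mgr command that throws the Python stack trace in this bug
    ceph mds fail 0    # fail the active rank so a standby takes over
    # repeat until every standby has cycled through the active role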

08/30/2019

07:47 AM fs Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
It's BlueStore on spinning disks. I don't really have an overview of the data distribution; it's very uneven. Perhaps...

08/29/2019

09:30 AM fs Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
Little status update: our data pool now uses up 186 TiB while only storing 53 TiB of actual data with a replication fac...
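
The figures quoted above correspond to what the standard usage commands report, e.g.:

    ceph df detail   # per-pool STORED vs. USED; USED should be roughly STORED times the replication factor
    rados df         # per-pool object counts and space usage as seen by RADOS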

08/15/2019

07:31 AM fs Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
I think the first time I used the standard Mimic workflow of @mds fail@ and, once all MDSs are stopped, @fs remove@. T...
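
For reference, the teardown sequence referred to above looks roughly like this (fs and pool names are placeholders; deleting pools additionally requires mon_allow_pool_delete=true):

    ceph mds fail 0                            # repeat for every active rank
    ceph fs rm cephfs --yes-i-really-mean-it
    ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
    ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it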

08/13/2019

08:01 PM fs Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
I tried again, this time with a replicated pool and just one MDS. I think it's too early to draw definitive conclusio...
02:54 PM fs Bug #41140: mds: trim cache more regularly
I believe this problem may be particularly severe when the main data pool is an EC pool. I am trying the same thing w...
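
For anyone trying to reproduce the EC scenario mentioned above, a minimal sketch of a file system whose main data pool is erasure-coded (pool names, PG counts and the default EC profile are assumptions):

    ceph osd pool create cephfs_metadata 64 replicated
    ceph osd pool create cephfs_data 256 erasure
    ceph osd pool set cephfs_data allow_ec_overwrites true   # needed for CephFS on EC pools, BlueStore only
    ceph fs new cephfs cephfs_metadata cephfs_data --force   # --force because an EC default data pool is discouraged
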
01:17 PM fs Bug #41228 (New): mon: deleting a CephFS and its pools causes MONs to crash
Disclaimer: I am not entirely sure whether this is strictly related to CephFS or a general problem when deleting pools wit...

08/12/2019

01:28 PM fs Bug #41204 (New): CephFS pool usage 3x above expected value and sparse journal dumps
I am in the process of copying about 230 million small and medium-sized files to a CephFS and I have three active MDS...
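
For context, a multi-active MDS setup like the one described is configured along these lines (the fs name is a placeholder):

    ceph fs set cephfs max_mds 3   # standbys are promoted until three ranks are active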
