Profile

Konstantin Shalygin

  • Email: k0ste@k0ste.ru
  • Registered on: 03/29/2017
  • Last connection: 07/25/2019

Activity

08/09/2019

04:01 AM bluestore Bug #38559: 50-100% iops lost due to bluefs_preextend_wal_files = false
luminous: https://github.com/ceph/ceph/pull/29564
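
A minimal sketch of re-enabling the option on Luminous OSDs, assuming the backport above has landed; the `[osd]` section entry and the runtime injection are illustrative only, and a daemon restart may still be required for BlueFS to pick it up:

<pre>
# ceph.conf on each OSD host
[osd]
bluefs_preextend_wal_files = true

# or inject at runtime (illustrative; a restart may still be needed)
# ceph tell osd.* injectargs '--bluefs_preextend_wal_files=true'
</pre>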

08/01/2019

04:01 AM rbd Bug #40950: rbd: removing snaps failed: (2) No such file or directory
Wrote a how-to for dealing with this; it may help someone coming from Google - "link":https://k0ste.ru/how-to-delete-rbd-snapshot-...
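
For context, a sketch of the failing operation with hypothetical pool/image/snapshot names; only the error code is taken from the issue title:

<pre>
rbd snap ls rbd/myimage
rbd snap rm rbd/myimage@mysnap   # fails with (2) No such file or directory
</pre>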

07/30/2019

12:09 PM fs Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Bingo - `mds_log_max_segments`.
In Luminous the description for this option is empty:...
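
A minimal sketch of inspecting and tuning the option, with a hypothetical MDS daemon name `mds.a` and an illustrative value; lower values make the MDS trim its journal (and thus the metadata pool) more aggressively:

<pre>
ceph daemon mds.a config get mds_log_max_segments
ceph tell mds.* injectargs '--mds_log_max_segments=128'
</pre>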
04:10 AM rbd Bug #40950: rbd: removing snaps failed: (2) No such file or directory
Thanks Jason.
I updated the omap value like this, and the snapshot was then successfully deleted....
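
The exact key/value edited is elided above; as a rough sketch with a hypothetical image header object, the omap can be dumped and rewritten with plain rados commands:

<pre>
# dump the omap of the image header object (hypothetical image id)
rados -p rbd listomapvals rbd_header.<image_id>
# rewrite the stale entry (key and value depend on the how-to above)
rados -p rbd setomapval rbd_header.<image_id> <key> <value>
</pre>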

07/29/2019

04:25 AM fs Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
??The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata...

07/26/2019

09:37 AM fs Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
After another batch of simulations I called `cache drop` on the MDS via the admin socket, as sketched below. Pool usage dropped from *198.8MB* -> *2.8MB*.
...
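
For reference, a sketch of the admin-socket call with a hypothetical daemon name; `ceph df` is just one way to watch the pool usage afterwards:

<pre>
ceph daemon mds.a cache drop
ceph df
</pre>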
06:45 AM fs Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
I set the kernel client aside and ran some tests with the Samba VFS - this is a Luminous client.
First, I just copied...
03:25 AM fs Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
+10 MBytes over the last ~24h.
Current pool usage:
fs_data:
!fs_data_26.07.19.png!
fs_meta:
!fs_meta_26.07.19.png!
03:23 AM fs Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Patrick, this is definitely Luminous 12.2.12. My actual question is why metadata usage (bytes) is growing and new obj...

07/25/2019

09:45 AM fs Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Found another cluster with this Ceph version. Data usage is 10x higher, but metadata usage is not much larger.
Metadata pool in this clust...
