Activity
From 05/20/2019 to 06/18/2019
06/18/2019
- 03:32 PM Bug #40080 (Pending Backport): Bitmap allocator return duplicate entries which cause interval_set...
- 03:03 PM Bug #38272 (Pending Backport): "no available blob id" assertion might occur
- Not sure if this can/should be backported beyond nautilus...?
- 11:58 AM Bug #40412 (New): os/bluestore: osd_memory_target_cgroup_limit_ratio won't work with SELinux
- When running in SELinux-enabled environment ceph-osd violates access policy because of reading the memory limits via ...
06/17/2019
- 07:49 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
- This was enough to bring up the 3 OSDs, get back the stale PG, complete the resharding, i.e. get rid of the large oma...
- 07:15 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
- Thanks to the awesome help of several people, we managed to work around this problem.
With
bluestore rocksdb opti...
- 12:35 AM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
All SSDs report these logs after running ceph-bluestore-tool repair...
06/15/2019
- 09:37 AM Backport #40280 (In Progress): mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
- 09:32 AM Backport #40281 (In Progress): nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
06/14/2019
- 03:16 PM Bug #38745: spillover that doesn't make sense
- This is also showing up in 14.2.1 in instances where the db is overly provisioned.
HEALTH_WARN BlueFS spillover de...
- 10:08 AM Bug #18375 (Resolved): bluestore: bluefs_preextend_wal_files=true is not crash consistent
- This is also duplicated by https://tracker.ceph.com/issues/38559
Marking this as resolved, backporting to be track...
06/13/2019
- 03:23 PM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
We have rebooted the SSD/monitor nodes; nothing changed, same pool size.
06/12/2019
- 10:31 PM Bug #40306 (Resolved): Pool dont show their true size after add more osd - Max Available 1TB
- Hi
Last night we added 4 disk class ssd to our cluster in a host.
We added normally without problem but at dashb...
- 03:36 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
- To add some info.
The logs of all 6 of our radosgw instances, behind an nginx load balancer, are full of such messages:
...
2...
- 03:28 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
- *Possibly* helpful pointers:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-November/031595.html
Bug ...
- 02:24 PM Bug #40300 (New): ceph-osd segfault: "rocksdb: Corruption: file is too short"
- Cluster is Nautilus 14.2.1, 350 OSDs with BlueStore.
Steps that led to the problem:
1. There is a bucket with ...
06/11/2019
- 08:19 PM Backport #40281 (Resolved): nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
- https://github.com/ceph/ceph/pull/28573
- 08:19 PM Backport #40280 (Resolved): mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
- https://github.com/ceph/ceph/pull/28574
- 05:35 PM Bug #40080 (Fix Under Review): Bitmap allocator return duplicate entries which cause interval_set...
- 01:18 PM Bug #40080 (In Progress): Bitmap allocator return duplicate entries which cause interval_set assert
- Managed to reproduce the same by replaying the log. WIP on the fix.
- 05:52 AM Bug #38559 (Pending Backport): 50-100% iops lost due to bluefs_preextend_wal_files = false
06/05/2019
- 06:42 AM Backport #39256 (Resolved): nautilus: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWrites...
- 06:41 AM Feature #39146 (Resolved): raise an alert when per pool stats aren't used
- 06:41 AM Backport #39348 (Resolved): nautilus: raise an alert when per pool stats aren't used
06/04/2019
- 08:47 PM Backport #39256: nautilus: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27525
merged
- 08:47 PM Backport #39348: nautilus: raise an alert when per pool stats aren't used
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27645
merged
05/30/2019
- 06:36 PM Bug #24901: Client reads fail due to bad CRC under high memory pressure on OSDs
- The work-around for http://tracker.ceph.com/issues/22464 also fixed this
- 02:28 PM Bug #24901: Client reads fail due to bad CRC under high memory pressure on OSDs
Has anyone else seen this issue? Is it still occurring in your environment, Paul?
- 02:42 PM Bug #40080: Bitmap allocator return duplicate entries which cause interval_set assert
I do see some very large directories in our fs… however, the rebuilt OSD crashed again, which brought down 8 PGs, and as well...
- 09:31 AM Bug #40080: Bitmap allocator return duplicate entries which cause interval_set assert
- @Igor,
I noticed that during debugging, however, 14.2.1 crashed the same way.
-5> 2019-05-30 02:23:05.375 ...
- 09:24 AM Bug #40080: Bitmap allocator return duplicate entries which cause interval_set assert
- Xiaoxi, this is v14.2.0, right?
14.2.1 & .2 have some bmap allocator's fixes that you might want to apply:
https:... - 08:42 AM Bug #40080: Bitmap allocator return duplicate entries which cause interval_set assert
Full logs with debug_bluestore=30 and debug_bluefs=10 are available, but are too big to upload.
- 08:41 AM Bug #40080 (Resolved): Bitmap allocator return duplicate entries which cause interval_set assert
- 0x941300000~200000 were returned several times by allocator , later when allocate_bluefs_freespace give duplicate ent...
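The failure mode described above can be illustrated with a minimal sketch (in Python, not Ceph's actual C++ `interval_set`; the class and method names here are hypothetical): when the allocator hands out the same extent, such as 0x941300000~200000, more than once, the second insert violates the disjointness invariant that an interval set asserts on.

```python
# Minimal sketch of an interval set that keeps disjoint (offset, length)
# ranges and asserts when a new range overlaps an existing one -- the same
# kind of invariant violation a duplicate allocator extent would trigger.
class IntervalSet:
    def __init__(self):
        self.ranges = []  # list of (offset, length), kept pairwise disjoint

    def insert(self, off, length):
        for o, l in self.ranges:
            # The new range must lie entirely before or after each existing one.
            assert off + length <= o or off >= o + l, \
                f"overlap: ({off:#x},{length:#x}) vs ({o:#x},{l:#x})"
        self.ranges.append((off, length))

s = IntervalSet()
s.insert(0x941300000, 0x200000)      # first allocation: accepted
try:
    s.insert(0x941300000, 0x200000)  # duplicate extent from the allocator
except AssertionError as e:
    print("assert fired:", e)
```

In Ceph the equivalent assert aborts the OSD process rather than raising a catchable exception, which matches the crashes reported in this ticket.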
- 02:29 PM Bug #20847: low performance for bluestore rbd block creation vs filestore
- Mark - is this fixed by deferred writes now?
- 02:24 PM Bug #36331 (Can't reproduce): FAILED ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixNoCsum/2...
- Looks like this hasn't happened in 6 months
- 02:23 PM Bug #38272 (Fix Under Review): "no available blob id" assertion might occur
- 02:22 PM Bug #38637 (Closed): BlueStore::ExtentMap::fault_range() assert
- Please re-open if you can gather more information
- 02:15 PM Bug #38745: spillover that doesn't make sense
- Adam's looking at similar spillover issues
- 10:07 AM Bug #38745: spillover that doesn't make sense
- BLUEFS_SPILLOVER BlueFS spillover detected on 7 OSD(s)
osd.248 spilled over 257 MiB metadata from 'db' device (... - 02:10 PM Bug #39569 (Pending Backport): os/bluestore: fix for FreeBSD iocb structure #27458
- 02:10 PM Bug #39618: Runaway memory usage on Bluestore OSD
- Mark can you take a look?
- 09:10 AM Bug #39621 (Resolved): os/bluestore: fix missing discard in BlueStore::_kv_sync_thread
- 09:09 AM Backport #39672 (Resolved): nautilus: os/bluestore: fix missing discard in BlueStore::_kv_sync_th...
05/29/2019
- 10:06 PM Backport #39672: nautilus: os/bluestore: fix missing discard in BlueStore::_kv_sync_thread
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28258
merged
- 11:30 AM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- Just to record similar case and discovered root cause:
Our customer running Ceph version v12.2.11 complained about t...
05/28/2019
- 02:38 AM Backport #39672 (In Progress): nautilus: os/bluestore: fix missing discard in BlueStore::_kv_sync...
- https://github.com/ceph/ceph/pull/28258
05/24/2019
- 12:31 PM Feature #40021 (New): Zero out pending sectors during PG repair
- In our production clusters we often see inconsistent objects which are the result of a SCSI Medium Error during a rea...
05/23/2019
- 08:32 PM Bug #38272: "no available blob id" assertion might occur
- Alternative PR: https://github.com/ceph/ceph/pull/28229
(Original PR https://github.com/ceph/ceph/pull/26882 went ...
05/20/2019
- 04:10 PM Feature #38816: Deferred writes do not work for random writes
- Sage Weil wrote:
> Vitaliy Filippov wrote:
> > 1) Bluestore doesn't honor the max_deferred_txc parameter. It starts...
- 03:20 PM Feature #38816 (In Progress): Deferred writes do not work for random writes