Activity

From 06/03/2019 to 07/02/2019

07/02/2019

08:33 PM Backport #40632 (Resolved): nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
https://github.com/ceph/ceph/pull/28963 Nathan Cutler
04:11 PM Bug #40623 (Fix Under Review): massive allocator dumps when unable to allocate space for bluefs
Igor Fedotov
04:11 PM Bug #40623 (In Progress): massive allocator dumps when unable to allocate space for bluefs
Igor Fedotov
03:59 PM Bug #40623 (Resolved): massive allocator dumps when unable to allocate space for bluefs
Igor Fedotov
11:17 AM Bug #40557 (Duplicate): Rocksdb lookups slow until manual compaction
http://tracker.ceph.com/issues/36482 Igor Fedotov
11:16 AM Bug #36482 (Pending Backport): High amount of Read I/O on BlueFS/DB when listing omap keys
Igor Fedotov
09:42 AM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
This is a known behavior in Nautilus. The mixture of new (created in Nautilus) and legacy OSDs results in such an inc... Igor Fedotov

07/01/2019

07:56 PM Backport #40280: mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28574
merged
Yuri Weinstein

06/26/2019

03:01 PM Bug #40557 (Duplicate): Rocksdb lookups slow until manual compaction
https://www.spinics.net/lists/ceph-users/msg53623.html
"[ceph-users] OSDs taking a long time to boot due to 'clear_t...
Greg Farnum

06/25/2019

10:18 AM Backport #40536 (Resolved): nautilus: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28892 Nathan Cutler
10:18 AM Backport #40535 (Resolved): mimic: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28894 Nathan Cutler
10:18 AM Backport #40534 (Resolved): luminous: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28895 Nathan Cutler
07:53 AM Bug #40492: man page for ceph-kvstore-tool missing command
https://github.com/ceph/ceph/pull/27162 might be backported, so doc changes might need backport as well Torben Hørup
05:17 AM Bug #40480 (Pending Backport): pool compression options not consistently applied
Kefu Chai

06/24/2019

02:37 PM Bug #40520 (Can't reproduce): snap_mapper record resurrected: trim_object: Can not trim 3:205afc9...
... Sage Weil
11:10 AM Bug #40492: man page for ceph-kvstore-tool missing command
`stats` added in https://github.com/ceph/ceph/commit/2ab28aa3295dbe29ab12bf0e7b81c675e92bd337#diff-7aba0ed0e56c23e81f... Torben Hørup
09:51 AM Bug #40492 (Resolved): man page for ceph-kvstore-tool missing command
https://github.com/ceph/ceph/blob/master/doc/man/8/ceph-kvstore-tool.rst doesn't describe the `stats` command
Torben Hørup
09:36 AM Documentation #40491 (New): add section on how to view rocksdb sizes/levels
The bluestore docs don't say anything about how to view the rocksdb sizes, or how much is used from the db volume, ... Torben Hørup

06/21/2019

11:27 AM Bug #40480: pool compression options not consistently applied
Added #40483 to track the second issue. Igor Fedotov
11:15 AM Bug #40480: pool compression options not consistently applied
Actually there are two issues here - the first one (fixed by #28688) is unloaded OSD compression settings when OSD co... Igor Fedotov
11:12 AM Bug #40480 (Fix Under Review): pool compression options not consistently applied
Igor Fedotov
10:07 AM Bug #40480 (In Progress): pool compression options not consistently applied
Igor Fedotov
09:44 AM Bug #40480 (Resolved): pool compression options not consistently applied
With v13.2.6:
We boot an osd with bluestore_compression_mode=none and bluestore_compression_algorithm=snappy, but ...
Dan van der Ster
07:30 AM Documentation #40473 (Resolved): enhance db sizing
The sizing section doesn't mention rocksdb extents, which are essential to understanding how much of a db partition will act... Torben Hørup

06/20/2019

09:45 AM Backport #40447 (Need More Info): mimic: "no available blob id" assertion might occur
Sage wrote: "Not sure if this can/should be backported beyond nautilus...?" Nathan Cutler
09:45 AM Backport #40448 (Need More Info): luminous: "no available blob id" assertion might occur
Sage wrote: "Not sure if this can/should be backported beyond nautilus...?" Nathan Cutler
05:34 AM Bug #39097: _verify_csum bad crc32c/0x10000 checksum at blob offset 0x0, got 0x478682d5, expected...
Hi Sage, I got the same error, http://tracker.ceph.com/issues/40459, in Ceph 12.2.5 on CentOS 7.4; any idea how to solve this? yang wang
03:16 AM Bug #40459 (Can't reproduce): os/bluestore: _verify_csum bad crc32 but no error in message, ceph-...
In my cluster (Ceph 12.2.5, CentOS 7.4) there are always some _verify_csum bad errors on some OSDs, but no error in messages and s... yang wang

06/19/2019

06:06 PM Backport #40449 (Resolved): nautilus: "no available blob id" assertion might occur
https://github.com/ceph/ceph/pull/30144 Nathan Cutler
06:06 PM Backport #40448 (Rejected): luminous: "no available blob id" assertion might occur
Nathan Cutler
06:06 PM Backport #40447 (Rejected): mimic: "no available blob id" assertion might occur
Nathan Cutler
04:28 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
So, I think the question is what we can/should do to avoid this next time. A few options:
- This is a rocksdb readah...
Sage Weil
02:41 PM Bug #40434 (Resolved): ceph-bluestore-tool:bluefs-bdev-migrate might result in broken OSD
This happens when a migration from the DB to the main device is initiated and some bluefs data is already on the main device.
Afte...
Igor Fedotov
09:14 AM Backport #40422 (In Progress): luminous: Bitmap allocator return duplicate entries which cause in...
https://github.com/ceph/ceph/pull/28644 Igor Fedotov
08:41 AM Backport #40422 (Resolved): luminous: Bitmap allocator return duplicate entries which cause inter...
https://github.com/ceph/ceph/pull/28644 Igor Fedotov
09:13 AM Backport #40423 (In Progress): mimic: Bitmap allocator return duplicate entries which cause inter...
https://github.com/ceph/ceph/pull/28645 Igor Fedotov
08:41 AM Backport #40423 (Resolved): mimic: Bitmap allocator return duplicate entries which cause interval...
https://github.com/ceph/ceph/pull/28645 Igor Fedotov
09:13 AM Backport #40424 (In Progress): nautilus: Bitmap allocator return duplicate entries which cause in...
https://github.com/ceph/ceph/pull/28646 Igor Fedotov
08:41 AM Backport #40424 (Resolved): nautilus: Bitmap allocator return duplicate entries which cause inter...
Igor Fedotov

06/18/2019

03:32 PM Bug #40080 (Pending Backport): Bitmap allocator return duplicate entries which cause interval_set...
Kefu Chai
03:03 PM Bug #38272 (Pending Backport): "no available blob id" assertion might occur
Not sure if this can/should be backported beyond nautilus...? Sage Weil
11:58 AM Bug #40412 (New): os/bluestore: osd_memory_target_cgroup_limit_ratio won't work with SELinux
When running in SELinux-enabled environment ceph-osd violates access policy because of reading the memory limits via ... Radoslaw Zarzynski

06/17/2019

07:49 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
This was enough to bring up the 3 OSDs, get back the stale PG, complete the resharding, i.e. get rid of the large oma... Harald Staub
07:15 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
Thanks to the awesome help of several people, we managed to work around this problem.
With
bluestore rocksdb opti...
Harald Staub
12:35 AM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB

All SSDs report these logs after running ceph-bluestore-tool repair...
Manuel Rios

06/15/2019

09:37 AM Backport #40280 (In Progress): mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
Nathan Cutler
09:32 AM Backport #40281 (In Progress): nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
Nathan Cutler

06/14/2019

03:16 PM Bug #38745: spillover that doesn't make sense
This is also showing up in 14.2.1 in instances where the db is overly provisioned.
HEALTH_WARN BlueFS spillover de...
Brett Chancellor
10:08 AM Bug #18375 (Resolved): bluestore: bluefs_preextend_wal_files=true is not crash consistent
This is also duplicated by https://tracker.ceph.com/issues/38559
Marking this as resolved, backporting to be track...
Igor Fedotov

06/13/2019

03:23 PM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
We have rebooted the SSD/monitor nodes; nothing changed, same pool size. Manuel Rios

06/12/2019

10:31 PM Bug #40306 (Resolved): Pool dont show their true size after add more osd - Max Available 1TB
Hi
Last night we added 4 SSD-class disks to our cluster in one host.
We added them normally without problems, but at dashb...
Manuel Rios
03:36 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
To add some info.
The logs of all of our 6 radosgw instances, behind an nginx load balancer, are full of such messages:
...
2...
Valery Tschopp
03:28 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
*Possibly* helpful pointers:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-November/031595.html
Bug ...
Florian Haas
02:24 PM Bug #40300 (New): ceph-osd segfault: "rocksdb: Corruption: file is too short"
Cluster is Nautilus 14.2.1, 350 OSDs with BlueStore.
Steps that led to the problem:
1. There is a bucket with ...
Harald Staub

06/11/2019

08:19 PM Backport #40281 (Resolved): nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
https://github.com/ceph/ceph/pull/28573 Nathan Cutler
08:19 PM Backport #40280 (Resolved): mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
https://github.com/ceph/ceph/pull/28574 Nathan Cutler
05:35 PM Bug #40080 (Fix Under Review): Bitmap allocator return duplicate entries which cause interval_set...
Igor Fedotov
01:18 PM Bug #40080 (In Progress): Bitmap allocator return duplicate entries which cause interval_set assert
Managed to reproduce the same by replaying the log. WIP on the fix. Igor Fedotov
05:52 AM Bug #38559 (Pending Backport): 50-100% iops lost due to bluefs_preextend_wal_files = false
Kefu Chai

06/05/2019

06:42 AM Backport #39256 (Resolved): nautilus: occasional ObjectStore/StoreTestSpecificAUSize.Many4KWrites...
Nathan Cutler
06:41 AM Feature #39146 (Resolved): raise an alert when per pool stats aren't used
Nathan Cutler
06:41 AM Backport #39348 (Resolved): nautilus: raise an alert when per pool stats aren't used
Nathan Cutler

06/04/2019

08:47 PM Backport #39256: nautilus: occasional ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27525
merged
Yuri Weinstein
08:47 PM Backport #39348: nautilus: raise an alert when per pool stats aren't used
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27645
merged
Yuri Weinstein