Activity

From 06/12/2019 to 07/11/2019

07/11/2019

07:26 PM Bug #40741 (Triaged): Mass OSD failure, unable to restart
Cluster: 14.2.1
OSDs: 250 spinners in default root, 63 SSDs in ssd root
History: 5 days ago, this cluster began l...
Brett Chancellor
11:38 AM Backport #40423 (Resolved): mimic: Bitmap allocator returns duplicate entries which cause interval...
Igor Fedotov
08:53 AM Backport #40280 (Resolved): mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
Nathan Cutler

07/10/2019

07:37 PM Backport #40423: mimic: Bitmap allocator returns duplicate entries which cause interval_set assert
Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/28645
merged
Yuri Weinstein
03:34 PM Backport #40632: nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
https://github.com/ceph/ceph/pull/28963 Igor Fedotov
03:22 PM Backport #40632 (In Progress): nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
Igor Fedotov
03:20 PM Bug #40703 (Fix Under Review): stupid allocator might return extents with length = 0
Igor Fedotov

07/09/2019

06:54 PM Feature #40704 (Resolved): BlueStore tool to check fragmentation
RHBZ - https://bugzilla.redhat.com/show_bug.cgi?id=1728357 Vikhyat Umrao
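For reference, the fragmentation check this feature added is exposed both offline via ceph-bluestore-tool and online via the admin socket; a minimal sketch, assuming a Nautilus-or-later build where these subcommands exist (worth verifying against your release):

   # offline, with the OSD stopped
   $ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 free-score
   # online, through the admin socket
   $ ceph daemon osd.0 bluestore allocator score block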
06:05 PM Bug #40703 (Pending Backport): stupid allocator might return extents with length = 0
Igor Fedotov
05:19 PM Bug #40703 (Resolved): stupid allocator might return extents with length = 0
The returned allocated amount is non-zero, though. Igor Fedotov
05:23 PM Bug #39245 (Resolved): os/bluestore: fix length overflow
Igor Fedotov
05:22 PM Backport #39247 (Resolved): luminous: os/bluestore: fix length overflow
Igor Fedotov
04:33 PM Backport #39247: luminous: os/bluestore: fix length overflow
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27365
merged
Yuri Weinstein
04:13 PM Bug #21312 (Resolved): occasional ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
Igor Fedotov
04:12 PM Backport #39254 (Resolved): luminous: occasional ObjectStore/StoreTestSpecificAUSize.Many4KWrites...
Igor Fedotov
04:07 PM Backport #39254: luminous: occasional ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27529
merged
Yuri Weinstein
04:08 PM Bug #39334 (Resolved): OSD crashed in BitmapAllocator::init_add_free()
Igor Fedotov
04:08 PM Backport #39444 (Resolved): luminous: OSD crashed in BitmapAllocator::init_add_free()
Igor Fedotov
04:06 PM Backport #39444: luminous: OSD crashed in BitmapAllocator::init_add_free()
Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/27739
merged
Reviewed-by: Sage Weil <sage@redhat.com>
Yuri Weinstein

07/07/2019

06:07 PM Bug #40684 (Resolved): bluestore objectstore_blackhole=true violates read-after-write
- osd has blackhole set
- osd receives osdmap 17, queues for write
- bluestore drops it (blackhole=true)
- osd rec...
Sage Weil
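For context, objectstore_blackhole is a test-only option that makes the object store silently discard transactions, which is what produces the sequence above; a minimal sketch for reproducing it on a throwaway test cluster only (never on production data):

   $ ceph config set osd objectstore_blackhole true
   # subsequent writes are dropped; a later read can observe pre-write state,
   # violating read-after-write as described in this ticket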

07/05/2019

09:19 AM Backport #40535 (In Progress): mimic: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28894 Igor Fedotov
09:19 AM Backport #40534 (In Progress): luminous: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28895 Igor Fedotov
09:01 AM Backport #40536 (In Progress): nautilus: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28892 Igor Fedotov
08:58 AM Backport #40675 (In Progress): nautilus: massive allocator dumps when unable to allocate space fo...
https://github.com/ceph/ceph/pull/28891 Igor Fedotov
08:54 AM Backport #40675 (Resolved): nautilus: massive allocator dumps when unable to allocate space for b...
https://github.com/ceph/ceph/pull/28891 Igor Fedotov
03:28 AM Bug #40623 (Pending Backport): massive allocator dumps when unable to allocate space for bluefs
Kefu Chai

07/04/2019

02:51 PM Bug #40306: Pool doesn't show its true size after adding more OSDs - Max Available 1TB
Igor Fedotov wrote:
> I'm not aware of any relevant documentation on the topic, maybe it makes sense to file a corr...
Manuel Rios
01:28 PM Bug #40306: Pool doesn't show its true size after adding more OSDs - Max Available 1TB
I'm not aware of any relevant documentation on the topic; maybe it makes sense to file a corresponding ticket... Igor Fedotov

07/03/2019

04:48 PM Bug #40306: Pool doesn't show its true size after adding more OSDs - Max Available 1TB
Is there some kind of documentation, in the docs or release notes, that would allow this to be called "known behaviour... Thomas Kriechbaumer

07/02/2019

08:33 PM Backport #40632 (Resolved): nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
https://github.com/ceph/ceph/pull/28963 Nathan Cutler
04:11 PM Bug #40623 (Fix Under Review): massive allocator dumps when unable to allocate space for bluefs
Igor Fedotov
04:11 PM Bug #40623 (In Progress): massive allocator dumps when unable to allocate space for bluefs
Igor Fedotov
03:59 PM Bug #40623 (Resolved): massive allocator dumps when unable to allocate space for bluefs
Igor Fedotov
11:17 AM Bug #40557 (Duplicate): Rocksdb lookups slow until manual compaction
http://tracker.ceph.com/issues/36482 Igor Fedotov
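For anyone hitting the slow-lookup symptom, a manual compaction can be triggered either online through the admin socket or offline with ceph-kvstore-tool; a sketch, assuming Nautilus-era tooling (OSD id and path are examples):

   # online
   $ ceph daemon osd.0 compact
   # offline, with the OSD stopped
   $ ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact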
11:16 AM Bug #36482 (Pending Backport): High amount of Read I/O on BlueFS/DB when listing omap keys
Igor Fedotov
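The workload behind this ticket is omap key enumeration; a minimal way to exercise it against a single object, assuming the standard rados CLI (pool and object names are placeholders):

   $ rados -p mypool listomapkeys myobject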
09:42 AM Bug #40306: Pool doesn't show its true size after adding more OSDs - Max Available 1TB
This is a known behavior in Nautilus. The mixture of new (created in Nautilus) and legacy OSDs results in such an inc... Igor Fedotov
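To see where the MAX AVAIL figure comes from, it helps to compare the pool view with per-OSD utilization, since MAX AVAIL is derived from the fullest OSD the pool's CRUSH rule can place data on; a quick sketch with standard commands:

   $ ceph df
   $ ceph osd df tree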

07/01/2019

07:56 PM Backport #40280: mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28574
merged
Yuri Weinstein

06/26/2019

03:01 PM Bug #40557 (Duplicate): Rocksdb lookups slow until manual compaction
https://www.spinics.net/lists/ceph-users/msg53623.html
"[ceph-users] OSDs taking a long time to boot due to 'clear_t...
Greg Farnum

06/25/2019

10:18 AM Backport #40536 (Resolved): nautilus: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28892 Nathan Cutler
10:18 AM Backport #40535 (Resolved): mimic: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28894 Nathan Cutler
10:18 AM Backport #40534 (Resolved): luminous: pool compression options not consistently applied
https://github.com/ceph/ceph/pull/28895 Nathan Cutler
07:53 AM Bug #40492: man page for ceph-kvstore-tool missing command
https://github.com/ceph/ceph/pull/27162 might be backported, so the doc changes might need a backport as well Torben Hørup
05:17 AM Bug #40480 (Pending Backport): pool compression options not consistently applied
Kefu Chai

06/24/2019

02:37 PM Bug #40520 (Can't reproduce): snap_mapper record resurrected: trim_object: Can not trim 3:205afc9...
... Sage Weil
11:10 AM Bug #40492: man page for ceph-kvstore-tool missing command
`stats` added in https://github.com/ceph/ceph/commit/2ab28aa3295dbe29ab12bf0e7b81c675e92bd337#diff-7aba0ed0e56c23e81f... Torben Hørup
09:51 AM Bug #40492 (Resolved): man page for ceph-kvstore-tool missing command
https://github.com/ceph/ceph/blob/master/doc/man/8/ceph-kvstore-tool.rst doesn't describe the `stats` command
Torben Hørup
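For reference, the undocumented command follows the same invocation pattern as the documented ones; a sketch, assuming the bluestore-kv backend and a stopped OSD (path is an example):

   $ ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 stats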
09:36 AM Documentation #40491 (New): add section on how to view rocksdb sizes/levels
The bluestore docs don't say anything about how to view the rocksdb sizes, or how much of the db volume is used, ... Torben Hørup
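Pending such a section, one way to inspect BlueFS/DB usage online is the OSD perf counters; a sketch, assuming Nautilus-era counter names (jq is optional):

   $ ceph daemon osd.0 perf dump | jq .bluefs
   # db_used_bytes vs db_total_bytes; slow_used_bytes indicates spillover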

06/21/2019

11:27 AM Bug #40480: pool compression options not consistently applied
Added #40483 to track the second issue. Igor Fedotov
11:15 AM Bug #40480: pool compression options not consistently applied
Actually there are two issues here - the first one (fixed by #28688) is unloaded OSD compression settings when OSD co... Igor Fedotov
11:12 AM Bug #40480 (Fix Under Review): pool compression options not consistently applied
Igor Fedotov
10:07 AM Bug #40480 (In Progress): pool compression options not consistently applied
Igor Fedotov
09:44 AM Bug #40480 (Resolved): pool compression options not consistently applied
With v13.2.6:
We boot an osd with bluestore_compression_mode=none and bluestore_compression_algorithm=snappy, but ...
Dan van der Ster
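A sketch of the two layers of knobs involved, per-OSD defaults versus per-pool overrides, which this bug found to be applied inconsistently (pool name is a placeholder; commands assume a Mimic-or-later CLI):

   # OSD-wide defaults
   $ ceph config set osd bluestore_compression_mode none
   $ ceph config set osd bluestore_compression_algorithm snappy
   # per-pool override
   $ ceph osd pool set mypool compression_mode aggressive
   $ ceph osd pool set mypool compression_algorithm snappy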
07:30 AM Documentation #40473 (Resolved): enhance db sizing
The sizing section doesn't mention rocksdb extents, which are essential to understanding how much of a db partition will act... Torben Hørup
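To illustrate why this matters: with RocksDB's default 10x level multiplier and a base level on the order of 250-300 MB (illustrative figures, not taken from the Ceph docs), level capacities stack at roughly 0.3, 3, 30 and 300 GB. By the often-cited rule of thumb, a level that does not fit on the db partition ends up on the slow device, so a 20 GB partition is effectively used only up to a few GB, and the next partition sizes that buy anything are roughly 30 GB and 300 GB.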

06/20/2019

09:45 AM Backport #40447 (Need More Info): mimic: "no available blob id" assertion might occur
Sage wrote: "Not sure if this can/should be backported beyond nautilus...?" Nathan Cutler
09:45 AM Backport #40448 (Need More Info): luminous: "no available blob id" assertion might occur
Sage wrote: "Not sure if this can/should be backported beyond nautilus...?" Nathan Cutler
05:34 AM Bug #39097: _verify_csum bad crc32c/0x10000 checksum at blob offset 0x0, got 0x478682d5, expected...
Hi Sage, I got the same error (http://tracker.ceph.com/issues/40459) in Ceph 12.2.5 on CentOS 7.4. Any idea how to solve this? yang wang
03:16 AM Bug #40459 (Can't reproduce): os/bluestore: _verify_csum bad crc32 but no error in message, ceph-...
In my cluster (Ceph 12.2.5, CentOS 7.4), there are always some _verify_csum bad errors on some OSDs, but no error in messages and s... yang wang

06/19/2019

06:06 PM Backport #40449 (Resolved): nautilus: "no available blob id" assertion might occur
https://github.com/ceph/ceph/pull/30144 Nathan Cutler
06:06 PM Backport #40448 (Rejected): luminous: "no available blob id" assertion might occur
Nathan Cutler
06:06 PM Backport #40447 (Rejected): mimic: "no available blob id" assertion might occur
Nathan Cutler
04:28 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
So, I think the question is what we can/should do to avoid this next time. A few options:
- This is a rocksdb readah...
Sage Weil
02:41 PM Bug #40434 (Resolved): ceph-bluestore-tool:bluefs-bdev-migrate might result in broken OSD
This happens when a migration from the DB to the main device is initiated and some bluefs data is already on the main device.
Afte...
Igor Fedotov
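For reference, the operation in question looks roughly like the following (paths are examples; given this bug, treat DB-to-main migration as unsafe on affected releases):

   $ systemctl stop ceph-osd@0
   $ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
         --devs-source /var/lib/ceph/osd/ceph-0/block.db \
         --dev-target /var/lib/ceph/osd/ceph-0/block \
         bluefs-bdev-migrate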
09:14 AM Backport #40422 (In Progress): luminous: Bitmap allocator returns duplicate entries which cause in...
https://github.com/ceph/ceph/pull/28644 Igor Fedotov
08:41 AM Backport #40422 (Resolved): luminous: Bitmap allocator returns duplicate entries which cause inter...
https://github.com/ceph/ceph/pull/28644 Igor Fedotov
09:13 AM Backport #40423 (In Progress): mimic: Bitmap allocator returns duplicate entries which cause inter...
https://github.com/ceph/ceph/pull/28645 Igor Fedotov
08:41 AM Backport #40423 (Resolved): mimic: Bitmap allocator returns duplicate entries which cause interval...
https://github.com/ceph/ceph/pull/28645 Igor Fedotov
09:13 AM Backport #40424 (In Progress): nautilus: Bitmap allocator returns duplicate entries which cause in...
https://github.com/ceph/ceph/pull/28646 Igor Fedotov
08:41 AM Backport #40424 (Resolved): nautilus: Bitmap allocator returns duplicate entries which cause inter...
Igor Fedotov

06/18/2019

03:32 PM Bug #40080 (Pending Backport): Bitmap allocator returns duplicate entries which cause interval_set...
Kefu Chai
03:03 PM Bug #38272 (Pending Backport): "no available blob id" assertion might occur
Not sure if this can/should be backported beyond nautilus...? Sage Weil
11:58 AM Bug #40412 (New): os/bluestore: osd_memory_target_cgroup_limit_ratio won't work with SELinux
When running in an SELinux-enabled environment, ceph-osd violates the access policy because it reads the memory limits via ... Radoslaw Zarzynski

06/17/2019

07:49 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
This was enough to bring up the 3 OSDs, get back the stale PG, complete the resharding, i.e. get rid of the large oma... Harald Staub
07:15 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
Thanks to the awesome help of several people, we managed to work around this problem.
With
bluestore rocksdb opti...
Harald Staub
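The truncated line above refers to the bluestore_rocksdb_options override; the general mechanism looks like this (the actual option string used in this incident is cut off above and is not reproduced here):

   # ceph.conf, [osd] section; takes effect on OSD restart
   bluestore rocksdb options = <comma-separated RocksDB options>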
12:35 AM Bug #40306: Pool doesn't show its true size after adding more OSDs - Max Available 1TB

All SSDs report these logs after running ceph-bluestore-tool repair...
Manuel Rios

06/15/2019

09:37 AM Backport #40280 (In Progress): mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
Nathan Cutler
09:32 AM Backport #40281 (In Progress): nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
Nathan Cutler

06/14/2019

03:16 PM Bug #38745: spillover that doesn't make sense
This is also showing up in 14.2.1 in instances where the db is over-provisioned.
HEALTH_WARN BlueFS spillover de...
Brett Chancellor
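A quick way to confirm spillover on a given OSD, assuming Nautilus-era health checks and counter names:

   $ ceph health detail | grep -i spillover
   $ ceph daemon osd.0 perf dump | jq .bluefs.slow_used_bytes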
10:08 AM Bug #18375 (Resolved): bluestore: bluefs_preextend_wal_files=true is not crash consistent
This is also duplicated by https://tracker.ceph.com/issues/38559
Marking this as resolved, backporting to be track...
Igor Fedotov

06/13/2019

03:23 PM Bug #40306: Pool doesn't show its true size after adding more OSDs - Max Available 1TB
We have rebooted the SSD/monitor nodes; nothing changed, same pool size. Manuel Rios

06/12/2019

10:31 PM Bug #40306 (Resolved): Pool doesn't show its true size after adding more OSDs - Max Available 1TB
Hi,
Last night we added 4 SSD-class disks to one host in our cluster.
We added them normally without problems, but at dashb...
Manuel Rios
03:36 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
To add some info.
The logs of all 6 of our radosgw instances, behind an nginx load balancer, are full of messages like:
...
2...
Valery Tschopp
03:28 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
*Possibly* helpful pointers:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-November/031595.html
Bug ...
Florian Haas
02:24 PM Bug #40300 (New): ceph-osd segfault: "rocksdb: Corruption: file is too short"
Cluster is Nautilus 14.2.1, 350 OSDs with BlueStore.
Steps that led to the problem:
1. There is a bucket with ...
Harald Staub
 
