Activity
From 06/19/2019 to 07/18/2019
07/18/2019
- 11:52 PM Bug #39618: Runaway memory usage on Bluestore OSD
- Hi Richard,
Sorry for the long latency on this reply! Setting the osd_memory_target won't do anything if you disa...
- 08:33 PM Backport #40535 (Resolved): mimic: pool compression options not consistently applied
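For reference on the osd_memory_target reply above, a minimal sketch of how the option is usually set (the 4 GiB value and osd id are illustrative; per the caveat in that reply, the target may be ignored depending on other settings):

    # Cluster-wide, via the config database (Mimic and later):
    ceph config set osd osd_memory_target 4294967296
    # Or persistently in ceph.conf:
    #   [osd]
    #   osd_memory_target = 4294967296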
- 07:50 PM Backport #40535: mimic: pool compression options not consistently applied
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/28894
merged
07/17/2019
- 05:13 PM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- Would you share a file listing for the root folder of osd-0 and any other working OSD?
- 05:05 PM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- Christian - please tell us more about the drive layout of these OSDs. Is this just a single-drive config?
- 03:56 PM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- Full log file with osd debug = 20 is attached.
The metadata ceph gathered:...
- 03:14 PM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- Nevermind - I missed the note that you can't reproduce it...
- 03:12 PM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- Christian - could you please set debug bluestore to 20, restart the OSD, and collect the log?
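A sketch of one way to do that (osd id and log path are illustrative; on releases without the config database, the same setting can go in ceph.conf instead):

    # Persist the debug level, then restart the daemon and grab its log
    ceph config set osd.0 debug_bluestore 20
    systemctl restart ceph-osd@0
    # log to collect (default location):
    #   /var/log/ceph/ceph-osd.0.log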
- 02:56 PM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- I encountered this bug in ceph version 13.2.6 mimic (stable), and it pulled down 6 out of 8 deployed OSDs; however, I w...
- 01:21 PM Bug #40741: Mass OSD failure, unable to restart
- 01:15 AM Backport #40424 (Resolved): nautilus: Bitmap allocator return duplicate entries which cause inter...
07/16/2019
- 03:50 PM Bug #40769 (Pending Backport): Set concurrent max_background_compactions in rocksdb to 2
- https://github.com/ceph/ceph/pull/29027#issuecomment-511863023
07/13/2019
- 03:45 AM Bug #40741: Mass OSD failure, unable to restart
- Brett Chancellor wrote:
> 1. Info below
> 2. Attached last 50k lines of logs with debug_bluefs set to 20/20
> 3. C...
07/12/2019
- 10:35 PM Bug #40741: Mass OSD failure, unable to restart
- 1. Info below
2. Attached last 50k lines of logs with debug_bluefs set to 20/20
3. Can you share the syntax for cep...
- 07:59 PM Bug #40741: Mass OSD failure, unable to restart
- Let's keep osd.44 aside for now. For 35 & 110, please answer/do the following.
1) Check corresponding disk activity f...
- 06:25 PM Bug #40741: Mass OSD failure, unable to restart
- LVM.
The bigger issue right now isn't the failing SSDs; it's the HDDs that are constantly rebooting ...
- 04:30 PM Bug #40741: Mass OSD failure, unable to restart
- What's behind your DB volumes - LVM or a plain partition/device?
- 04:28 PM Bug #40741: Mass OSD failure, unable to restart
- This one doesn't have enough space either: 0xc00000 bytes (12 MiB) at the SSD, 0x28c8000 bytes (~40.8 MiB) at the main device. See:
2019-07-11...
- 04:05 PM Bug #40741: Mass OSD failure, unable to restart
- Thanks for looking into it, Igor. That was one of the many failed SSD volumes, chosen at random. Here is some info from ...
- 12:42 PM Bug #40741: Mass OSD failure, unable to restart
- Here is my analysis from what I've seen in your logs so far:
1) After initial issue(s) that trigger OSDs to restart ...
- 05:42 PM Bug #40769 (Resolved): Set concurrent max_background_compactions in rocksdb to 2
- https://github.com/ceph/ceph/pull/29027#issue-297158998 explains why this change makes sense.
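For illustration, RocksDB tunables like this are carried in BlueStore's bluestore_rocksdb_options string; a hedged sketch (the value shown is only a fragment, since overriding replaces the entire default string):

    # Show the current option string, then set one that includes
    # max_background_compactions=2 (the override replaces the whole string,
    # so in practice keep the rest of the defaults too)
    ceph config get osd.0 bluestore_rocksdb_options
    ceph config set osd bluestore_rocksdb_options \
        "compression=kNoCompression,max_background_compactions=2"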
- 03:55 PM Backport #40756: luminous: stupid allocator might return extents with length = 0
- https://github.com/ceph/ceph/pull/29025
- 03:51 PM Backport #40756 (In Progress): luminous: stupid allocator might return extents with length = 0
- 03:03 PM Backport #40756 (Resolved): luminous: stupid allocator might return extents with length = 0
- https://github.com/ceph/ceph/pull/29025
- 03:52 PM Backport #40758: mimic: stupid allocator might return extents with length = 0
- https://github.com/ceph/ceph/pull/29024
- 03:36 PM Backport #40758 (In Progress): mimic: stupid allocator might return extents with length = 0
- 03:04 PM Backport #40758 (Resolved): mimic: stupid allocator might return extents with length = 0
- https://github.com/ceph/ceph/pull/29024
- 03:20 PM Backport #40757: nautilus: stupid allocator might return extents with length = 0
- https://github.com/ceph/ceph/pull/29023
- 03:11 PM Backport #40757 (In Progress): nautilus: stupid allocator might return extents with length = 0
- 03:04 PM Backport #40757 (Resolved): nautilus: stupid allocator might return extents with length = 0
- https://github.com/ceph/ceph/pull/29023
- 02:00 PM Bug #40703 (Pending Backport): stupid allocator might return extents with length = 0
07/11/2019
- 07:26 PM Bug #40741 (Triaged): Mass OSD failure, unable to restart
- Cluster: 14.2.1
OSDs: 250 spinners in default root, 63 SSDs in ssd root
History: 5 days ago, this cluster began l...
- 11:38 AM Backport #40423 (Resolved): mimic: Bitmap allocator return duplicate entries which cause interval...
- 08:53 AM Backport #40280 (Resolved): mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
07/10/2019
- 07:37 PM Backport #40423: mimic: Bitmap allocator return duplicate entries which cause interval_set assert
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/28645
merged
- 03:34 PM Backport #40632: nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
- https://github.com/ceph/ceph/pull/28963
- 03:22 PM Backport #40632 (In Progress): nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
- 03:20 PM Bug #40703 (Fix Under Review): stupid allocator might return extents with length = 0
07/09/2019
- 06:54 PM Feature #40704 (Resolved): BlueStore tool to check fragmentation
- RHBZ - https://bugzilla.redhat.com/show_bug.cgi?id=1728357
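A hedged sketch of how the fragmentation check can be invoked, both offline and via the admin socket (osd id and path are illustrative; verify the command names against your release):

    # Offline, against a stopped OSD:
    ceph-bluestore-tool free-score --path /var/lib/ceph/osd/ceph-0
    # Online, via the admin socket:
    ceph daemon osd.0 bluestore allocator score block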
- 06:05 PM Bug #40703 (Pending Backport): stupid allocator might return extents with length = 0
- 05:19 PM Bug #40703 (Resolved): stupid allocator might return extents with length = 0
- The returned allocated amount is non-zero, though.
- 05:23 PM Bug #39245 (Resolved): os/bluestore: fix length overflow
- 05:22 PM Backport #39247 (Resolved): luminous: os/bluestore: fix length overflow
- 04:33 PM Backport #39247: luminous: os/bluestore: fix length overflow
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27365
merged
- 04:13 PM Bug #21312 (Resolved): occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
- 04:12 PM Backport #39254 (Resolved): luminous: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWrites...
- 04:07 PM Backport #39254: luminous: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27529
merged
- 04:08 PM Bug #39334 (Resolved): OSD crashed in BitmapAllocator::init_add_free()
- 04:08 PM Backport #39444 (Resolved): luminous: OSD crashed in BitmapAllocator::init_add_free()
- 04:06 PM Backport #39444: luminous: OSD crashed in BitmapAllocator::init_add_free()
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/27739
merged
Reviewed-by: Sage Weil <sage@redhat.com>
07/07/2019
- 06:07 PM Bug #40684 (Resolved): bluestore objectstore_blackhole=true violates read-after-write
- - osd has blackhole set
- osd receives osdmap 17, queues for write
- bluestore drops it (blackhole=true)
- osd rec...
07/05/2019
- 09:19 AM Backport #40535 (In Progress): mimic: pool compression options not consistently applied
- https://github.com/ceph/ceph/pull/28894
- 09:19 AM Backport #40534 (In Progress): luminous: pool compression options not consistently applied
- https://github.com/ceph/ceph/pull/28895
- 09:01 AM Backport #40536 (In Progress): nautilus: pool compression options not consistently applied
- https://github.com/ceph/ceph/pull/28892
- 08:58 AM Backport #40675 (In Progress): nautilus: massive allocator dumps when unable to allocate space fo...
- https://github.com/ceph/ceph/pull/28891
- 08:54 AM Backport #40675 (Resolved): nautilus: massive allocator dumps when unable to allocate space for b...
- https://github.com/ceph/ceph/pull/28891
- 03:28 AM Bug #40623 (Pending Backport): massive allocator dumps when unable to allocate space for bluefs
07/04/2019
- 02:51 PM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
- Igor Fedotov wrote:
> I'm not aware of any relevant documentation on the topic; maybe it makes sense to file a corr...
- 01:28 PM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
- I'm not aware of any relevant documentation on the topic; maybe it makes sense to file a corresponding ticket...
07/03/2019
- 04:48 PM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
- Is there any documentation, in the docs or release notes, that would allow this to be called "known behaviour...
07/02/2019
- 08:33 PM Backport #40632 (Resolved): nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
- https://github.com/ceph/ceph/pull/28963
- 04:11 PM Bug #40623 (Fix Under Review): massive allocator dumps when unable to allocate space for bluefs
- 04:11 PM Bug #40623 (In Progress): massive allocator dumps when unable to allocate space for bluefs
- 03:59 PM Bug #40623 (Resolved): massive allocator dumps when unable to allocate space for bluefs
- 11:17 AM Bug #40557 (Duplicate): Rocksdb lookups slow until manual compaction
- http://tracker.ceph.com/issues/36482
- 11:16 AM Bug #36482 (Pending Backport): High amount of Read I/O on BlueFS/DB when listing omap keys
- 09:42 AM Bug #40306: Pool dont show their true size after add more osd - Max Available 1TB
- This is a known behavior in Nautilus. The mixture of new (created in Nautilus) and legacy OSDs results in such an inc...
07/01/2019
- 07:56 PM Backport #40280: mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28574
merged
06/26/2019
- 03:01 PM Bug #40557 (Duplicate): Rocksdb lookups slow until manual compaction
- https://www.spinics.net/lists/ceph-users/msg53623.html
"[ceph-users] OSDs taking a long time to boot due to 'clear_t...
06/25/2019
- 10:18 AM Backport #40536 (Resolved): nautilus: pool compression options not consistently applied
- https://github.com/ceph/ceph/pull/28892
- 10:18 AM Backport #40535 (Resolved): mimic: pool compression options not consistently applied
- https://github.com/ceph/ceph/pull/28894
- 10:18 AM Backport #40534 (Resolved): luminous: pool compression options not consistently applied
- https://github.com/ceph/ceph/pull/28895
- 07:53 AM Bug #40492: man page for ceph-kvstore-tool missing command
- https://github.com/ceph/ceph/pull/27162 might be backported, so doc changes might need backport as well
- 05:17 AM Bug #40480 (Pending Backport): pool compression options not consistently applied
06/24/2019
- 02:37 PM Bug #40520 (Can't reproduce): snap_mapper record resurrected: trim_object: Can not trim 3:205afc9...
- ...
- 11:10 AM Bug #40492: man page for ceph-kvstore-tool missing command
- `stats` added in https://github.com/ceph/ceph/commit/2ab28aa3295dbe29ab12bf0e7b81c675e92bd337#diff-7aba0ed0e56c23e81f...
- 09:51 AM Bug #40492 (Resolved): man page for ceph-kvstore-tool missing command
- https://github.com/ceph/ceph/blob/master/doc/man/8/ceph-kvstore-tool.rst doesn't describe the `stats` command
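For reference, the undocumented command has roughly this shape (store path is illustrative; the OSD must be stopped for bluestore-kv access):

    # Print statistics for the OSD's embedded RocksDB
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 stats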
- 09:36 AM Documentation #40491 (New): add section on how to view rocksdb sizes/levels
- The bluestore docs don't say anything about how to view the rocksdb sizes, or how much of the db volume is used, ...
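Until such a section exists, one hedged way to see DB usage is the OSD's BlueFS perf counters (osd id is illustrative; counter names assumed from the bluefs perf section):

    # Look at the "bluefs" section of the output:
    # db_total_bytes / db_used_bytes report DB volume capacity and usage
    ceph daemon osd.0 perf dump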
06/21/2019
- 11:27 AM Bug #40480: pool compression options not consistently applied
- Added #40483 to track the second issue.
- 11:15 AM Bug #40480: pool compression options not consistently applied
- Actually there are two issues here - the first one (fixed by #28688) is unloaded OSD compression settings when OSD co...
- 11:12 AM Bug #40480 (Fix Under Review): pool compression options not consistently applied
- 10:07 AM Bug #40480 (In Progress): pool compression options not consistently applied
- 09:44 AM Bug #40480 (Resolved): pool compression options not consistently applied
- With v13.2.6:
We boot an osd with bluestore_compression_mode=none and bluestore_compression_algorithm=snappy, but ...
- 07:30 AM Documentation #40473 (Resolved): enhance db sizing
- The sizing section doesn't mention rocksdb extents, which are essential to understanding how much of a db partition will act...
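A back-of-the-envelope sketch of why usable DB capacity is step-wise rather than linear, assuming stock RocksDB level sizing (the base size and multiplier here are assumptions, not verified BlueStore defaults): a level helps only if it fits on the fast device whole, so capacity between two cumulative level sizes goes unused.

    # Cumulative space needed to hold RocksDB levels L1..L3 whole
    # (illustrative: 256 MiB base, 10x multiplier)
    level=$((256 * 1024 * 1024)); total=0
    for n in 1 2 3; do
        total=$((total + level))
        echo "levels through L$n need ~$((total / 1024 / 1024)) MiB"
        level=$((level * 10))
    done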
06/20/2019
- 09:45 AM Backport #40447 (Need More Info): mimic: "no available blob id" assertion might occur
- Sage wrote: "Not sure if this can/should be backported beyond nautilus...?"
- 09:45 AM Backport #40448 (Need More Info): luminous: "no available blob id" assertion might occur
- Sage wrote: "Not sure if this can/should be backported beyond nautilus...?"
- 05:34 AM Bug #39097: _verify_csum bad crc32c/0x10000 checksum at blob offset 0x0, got 0x478682d5, expected...
- Hi Sage, I got the same error (http://tracker.ceph.com/issues/40459) in Ceph 12.2.5 on CentOS 7.4. Any idea how to solve this?
- 03:16 AM Bug #40459 (Can't reproduce): os/bluestore: _verify_csum bad crc32 but no error in message, ceph-...
- In my cluster, Ceph 12.2.5 (CentOS 7.4), there are always some _verify_csum bad errors on some OSDs, but no error in messages and s...
06/19/2019
- 06:06 PM Backport #40449 (Resolved): nautilus: "no available blob id" assertion might occur
- https://github.com/ceph/ceph/pull/30144
- 06:06 PM Backport #40448 (Rejected): luminous: "no available blob id" assertion might occur
- 06:06 PM Backport #40447 (Rejected): mimic: "no available blob id" assertion might occur
- 04:28 PM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
- So, I think the question is what we can/should do to avoid this next time. A few options:
- This is a rocksdb readah...
- 02:41 PM Bug #40434 (Resolved): ceph-bluestore-tool:bluefs-bdev-migrate might result in broken OSD
- This happens when migration from the DB to the main device is initiated and some bluefs data is already on the main device.
Afte...
- 09:14 AM Backport #40422 (In Progress): luminous: Bitmap allocator return duplicate entries which cause in...
- https://github.com/ceph/ceph/pull/28644
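For context on #40434 above, the migration at issue is invoked roughly like this (device paths are illustrative; given the bug, verify the fix is in your release before running it):

    # Move BlueFS data off the dedicated DB device onto the main device
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-0 \
        --devs-source /var/lib/ceph/osd/ceph-0/block.db \
        --dev-target /var/lib/ceph/osd/ceph-0/block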
- 08:41 AM Backport #40422 (Resolved): luminous: Bitmap allocator return duplicate entries which cause inter...
- https://github.com/ceph/ceph/pull/28644
- 09:13 AM Backport #40423 (In Progress): mimic: Bitmap allocator return duplicate entries which cause inter...
- https://github.com/ceph/ceph/pull/28645
- 08:41 AM Backport #40423 (Resolved): mimic: Bitmap allocator return duplicate entries which cause interval...
- https://github.com/ceph/ceph/pull/28645
- 09:13 AM Backport #40424 (In Progress): nautilus: Bitmap allocator return duplicate entries which cause in...
- https://github.com/ceph/ceph/pull/28646
- 08:41 AM Backport #40424 (Resolved): nautilus: Bitmap allocator return duplicate entries which cause inter...