Activity
From 07/28/2019 to 08/26/2019
08/26/2019
- 08:38 PM Backport #40535: mimic: pool compression options not consistently applied
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28894
m...
- 07:34 PM Backport #40423: mimic: Bitmap allocator return duplicate entries which cause interval_set assert
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28645
m...
- 06:58 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- - This may help:
https://github.com/ceph/ceph/pull/27782
- 05:57 PM Backport #41273: nautilus: Containerized cluster failure due to osd_memory_target not being set t...
- Raising priority; I think this is a blocker for rook-ceph on any platform, and Octopus won't be in downstream release...
- 03:10 PM Backport #41510 (In Progress): luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
- 03:09 PM Backport #41510 (Resolved): luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
- https://github.com/ceph/ceph/pull/29564
- 03:09 PM Bug #38559 (Pending Backport): 50-100% iops lost due to bluefs_preextend_wal_files = false
- 03:06 PM Bug #38559 (Resolved): 50-100% iops lost due to bluefs_preextend_wal_files = false
- Having been run with --resolve-parent, the script "backport-create-issue" set the status of this issue to "Resolved" ...
- 02:55 PM Bug #40769 (Resolved): Set concurrent max_background_compactions in rocksdb to 2
- 02:43 PM Backport #41462 (Rejected): luminous: incorrect RW_IO_MAX
- https://github.com/ceph/ceph/pull/34513
- 02:42 PM Backport #41461 (Rejected): mimic: incorrect RW_IO_MAX
- 02:42 PM Backport #41460 (Resolved): nautilus: incorrect RW_IO_MAX
- https://github.com/ceph/ceph/pull/31397
- 10:56 AM Backport #40280: mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28574
m...
08/21/2019
- 02:39 PM Backport #40281 (Resolved): nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28573
m...
- 02:38 PM Backport #40837 (Resolved): nautilus: Set concurrent max_background_compactions in rocksdb to 2
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29162
m...
- 02:29 PM Backport #40675: nautilus: massive allocator dumps when unable to allocate space for bluefs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28891
m...
- 02:29 PM Backport #40536: nautilus: pool compression options not consistently applied
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28892
m...
- 02:28 PM Backport #40632: nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28963
m...
- 02:28 PM Backport #40757: nautilus: stupid allocator might return extents with length = 0
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29023
m...
- 07:20 AM Bug #41367 (Duplicate): rocksdb: submit_transaction error: Corruption: block checksum mismatch co...
- For several weeks now, since Ceph 12.2.12, we have had 1-8 OSD crashes daily.
The OSD log is attached.
```
Trace:
201...
```
08/19/2019
- 08:12 PM Bug #41215: os/bluestore: do not set osd_memory_target default from cgroup limit
- nautilus backport: https://github.com/ceph/ceph/pull/29745
- 03:02 PM Backport #41340 (Resolved): nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- https://github.com/ceph/ceph/pull/30229
- 03:02 PM Backport #41339 (Resolved): mimic: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- https://github.com/ceph/ceph/pull/30219
- 03:02 PM Backport #41338 (Resolved): luminous: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- https://github.com/ceph/ceph/pull/29910
- 02:30 PM Backport #36640: luminous: Unable to recover from ENOSPC in BlueFS
- It appeared to me that increasing the "bluefs_min_log_runway" config option to a really high value is one way to prevent ...
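For anyone trying that workaround, a minimal ceph.conf sketch follows; the option name comes from the comment above, but the value shown is an arbitrary placeholder, not a recommendation made in this ticket:
```
[osd]
# Workaround sketch only: raise bluefs_min_log_runway well above its default.
# The value below is an arbitrary example, not a tested recommendation.
bluefs_min_log_runway = 16777216
```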
- 02:54 AM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- @Zoltan Arnold Nagy, did you change bluefs_buffered_io to true on 12.2.12?
We suspect this is due to the use of two f...
08/15/2019
- 09:27 PM Bug #41188 (Pending Backport): incorrect RW_IO_MAX
- https://github.com/ceph/ceph/pull/29577
- 07:08 PM Bug #41215 (Pending Backport): os/bluestore: do not set osd_memory_target default from cgroup limit
- 05:31 PM Bug #41301 (Resolved): os/bluestore/BlueFS: use 64K alloc_size on the shared device
- 11:32 AM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- we've also hit this on 12.2.12:
@ -4> 2019-08-15 13:29:40.208068 7ff0a6eabe00 1 bdev(0x55bae6cc7440 /var/lib...
- 09:11 AM Backport #41290 (Resolved): nautilus: fix and improve doc regarding manual bluestore cache settings.
- https://github.com/ceph/ceph/pull/31259
- 09:11 AM Backport #41289 (Resolved): luminous: fix and improve doc regarding manual bluestore cache settings.
- https://github.com/ceph/ceph/pull/31257
- 09:11 AM Backport #41288 (Resolved): mimic: fix and improve doc regarding manual bluestore cache settings.
- https://github.com/ceph/ceph/pull/31258
- 09:09 AM Backport #41282 (Resolved): nautilus: BlueStore tool to check fragmentation
- https://github.com/ceph/ceph/pull/29949
- 09:09 AM Backport #41281 (Resolved): luminous: BlueStore tool to check fragmentation
- https://github.com/ceph/ceph/pull/29539
- 09:09 AM Backport #41280 (Rejected): mimic: BlueStore tool to check fragmentation
- 09:08 AM Backport #41273 (Resolved): nautilus: Containerized cluster failure due to osd_memory_target not ...
08/13/2019
- 04:46 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- Looks like a duplicate of this one - https://tracker.ceph.com/issues/36482.
- 04:41 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- - Today I restarted the OSD with `debug_bluestore = 20` and `debug_bluefs = 20` and I see the following logs....
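For reference, debug levels like those mentioned above are typically set in ceph.conf before restarting the OSD; a minimal sketch (the section shown applies to all OSDs, which may be broader than needed):
```
[osd]
# Sketch only: verbose BlueStore/BlueFS logging for the next OSD start.
# Level 20 is very chatty; remove these lines once the boot log is captured.
debug_bluestore = 20
debug_bluefs = 20
```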
- 04:15 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- - One more finding: as I mentioned in comment #2, this only happens on metadata OSDs (RGW index and multi-site sync logs...
- 12:36 AM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- - This happens on every OSD restart. Today I restarted again and the same thing....
08/12/2019
- 10:44 PM Bug #41215 (Duplicate): os/bluestore: do not set osd_memory_target default from cgroup limit
- See https://github.com/ceph/ceph/pull/29581#issuecomment-520000272
- 09:52 PM Bug #41208 (Fix Under Review): os/bluestore/NVMEDevice:when write\read\flush func error in aio_h...
- 02:02 PM Bug #41208: os/bluestore/NVMEDevice:when write\read\flush func error in aio_handle, without erro...
- Hello, I think your ticket is incomplete?
- 01:50 PM Bug #41208 (Resolved): os/bluestore/NVMEDevice:when write\read\flush func error in aio_handle, w...
- 08:54 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- This is an SSD OSD that was used for a heavy omap workload....
- 08:42 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- ...
- 08:38 PM Bug #41213 (Duplicate): BlueStore OSD taking more than 60 minutes to boot
- ...
- 01:51 PM Bug #41009: osd_memory_target isn't applied in runtime.
- PR https://github.com/ceph/ceph/pull/29606 has been raised to address this issue.
Tested this using vstart, and here ...
08/09/2019
- 03:03 PM Bug #41167: arm64: unexpected aio return value: does not match length
- https://github.com/ceph/ceph/pull/29370
- 02:23 PM Bug #41188 (Resolved): incorrect RW_IO_MAX
- 0x7fff0000, not 0x7ffff000
- 04:01 AM Bug #38559: 50-100% iops lost due to bluefs_preextend_wal_files = false
- luminous: https://github.com/ceph/ceph/pull/29564
08/08/2019
- 08:52 PM Bug #41037 (Pending Backport): Containerized cluster failure due to osd_memory_target not being s...
- nautilus backport: https://github.com/ceph/ceph/pull/29562
- 02:34 PM Bug #41167 (Duplicate): arm64: unexpected aio return value: does not match length
- ...
08/07/2019
- 06:38 PM Feature #40704: BlueStore tool to check fragmentation
- Luminous backport: https://github.com/ceph/ceph/pull/29539
- 06:38 PM Feature #40704 (Pending Backport): BlueStore tool to check fragmentation
08/06/2019
- 04:16 PM Bug #41037 (Fix Under Review): Containerized cluster failure due to osd_memory_target not being s...
- https://github.com/ceph/ceph/pull/29511
- 11:33 AM Documentation #39522 (Pending Backport): fix and improve doc regarding manual bluestore cache set...
- 12:14 AM Feature #40704 (Fix Under Review): BlueStore tool to check fragmentation
08/05/2019
- 04:05 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- @ben that's probably a semi-reasonable assumption in a lot of cases, though I've noticed that the kernel doesn't alwa...
- 02:09 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- My guess would be: if the cgroup limit is X, then 0.95 X - 1/2 GB should be fine for osd_memory_target; that would give th...
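As a rough illustration of that rule of thumb (this is only the commenter's guess, not an official sizing formula, and the 8 GiB limit below is a made-up example):
```
# Sketch of the suggested heuristic: osd_memory_target = 0.95 * cgroup_limit - 512 MiB.
# The 8 GiB cgroup limit is an arbitrary example, not a value from this ticket.
GiB = 1024 ** 3
MiB = 1024 ** 2

def suggested_osd_memory_target(cgroup_limit_bytes):
    return int(0.95 * cgroup_limit_bytes - 512 * MiB)

print(suggested_osd_memory_target(8 * GiB))  # ~7.1 GiB for an 8 GiB cgroup limit
```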
- 12:11 PM Bug #36482: High amount of Read I/O on BlueFS/DB when listing omap keys
- Robin,
not sure about the next Mimic/Luminous releases, but maybe later. Let this patch bake for a bit in Nautilus ...
08/02/2019
- 08:17 PM Backport #40281: nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28573
merged
- 08:14 PM Backport #40837: nautilus: Set concurrent max_background_compactions in rocksdb to 2
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29162
merged
- 03:49 AM Feature #41053 (New): bluestore/rocksdb: aarch64 optimized crc32c instructions support
- Currently, the rocksdb engine in the newest Nautilus (14.2.2) and master branches doesn't support aarch64-optimized crc32c i...
08/01/2019
- 08:36 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- Joe Talerico reproduced this and found the POD_LIMIT was getting set, but not the system-wide limit, so the current O...
- 07:17 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- Neha Ojha wrote:
> Can you enable debug_osd=10 and see what this line (https://github.com/ceph/ceph/commit/fc3bdad87...
- 05:44 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- from Joe T on his system:
# ceph version
ceph version 14.2.2-218-g734b519 (734b5199dc45d3d36c8d8d066d6249cc304d0e0e...
- 05:38 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- version you asked for:
ceph-base-14.2.2-0.el7.x86_64
from the Ceph container image ceph/ceph:v14.2.2-20190722
- 05:33 PM Bug #36482: High amount of Read I/O on BlueFS/DB when listing omap keys
- Nathan/Igor: any chance of a Luminous backport for v12.2.13, and Mimic as well?
07/31/2019
- 09:25 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- Can you enable debug_osd=10 and see what this line (https://github.com/ceph/ceph/commit/fc3bdad87597066a813a3734b2a79...
- 09:07 PM Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgro...
- Which version of Ceph are you running?
- 07:21 PM Bug #41037 (Resolved): Containerized cluster failure due to osd_memory_target not being set to ra...
- Under a heavy I/O workload (generated against multiple Postgres databases, backed by Ceph RBD, via the pgbench utility), w...
- 01:54 PM Backport #40757 (Resolved): nautilus: stupid allocator might return extents with length = 0
- 01:53 PM Bug #36482 (Resolved): High amount of Read I/O on BlueFS/DB when listing omap keys
- 01:53 PM Backport #40632 (Resolved): nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
- 01:52 PM Bug #40480 (Resolved): pool compression options not consistently applied
- 01:52 PM Backport #40536 (Resolved): nautilus: pool compression options not consistently applied
- 01:52 PM Bug #40623 (Resolved): massive allocator dumps when unable to allocate space for bluefs
- 01:51 PM Backport #40675 (Resolved): nautilus: massive allocator dumps when unable to allocate space for b...
07/30/2019
- 10:28 PM Backport #40675: nautilus: massive allocator dumps when unable to allocate space for bluefs
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/28891
merged
- 10:28 PM Backport #40536: nautilus: pool compression options not consistently applied
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28892
merged
Reviewed-by: Sage Weil <sage@redhat.com>
- 10:26 PM Backport #40632: nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28963
merged
- 10:25 PM Backport #40757: nautilus: stupid allocator might return extents with length = 0
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29023
merged
- 05:37 PM Bug #41014 (Fix Under Review): make bluefs_alloc_size default to bluestore_min_alloc_size
- 05:13 PM Bug #41014 (Duplicate): make bluefs_alloc_size default to bluestore_min_alloc_size
- Originally 1M was chosen as the bluefs_alloc_size since metadata is stored in rocksdb, which persists it in large chu...
- 02:42 PM Bug #41009: osd_memory_target isn't applied in runtime.
- Sridhar was planning on working on this after the mon_memory_target (https://github.com/ceph/ceph/pull/28227) - I had...
- 01:46 PM Bug #41009 (Resolved): osd_memory_target isn't applied in runtime.
- Looks like this PR (https://github.com/ceph/ceph/pull/27381) completely removed (intentionally?) the ability to adjus...
- 02:37 AM Bug #40938: Some osd processes restart automatically after adding osd
- Detailed log information