Activity
From 08/13/2019 to 09/11/2019
09/11/2019
- 03:06 PM Bug #41744 (In Progress): os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verif...
- 03:06 PM Bug #41744: os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_layout(bluef...
- Early, untested fix candidate: https://github.com/ceph/ceph/commit/d5b56c665a7f0ed7725e485ed05393a8b821ce7b.
Igor,...
- 07:56 AM Bug #38745: spillover that doesn't make sense
- @Igor, @Rafal: please ignore. I was confused about the configuration of this cluster. It indeed has 26GB rocksdb's, s...
- 05:05 AM Bug #38745: spillover that doesn't make sense
- @Adam, is there any news on this problem? We have ~1500 OSDs with spillover.
If you need some more data, f...
09/10/2019
- 04:03 PM Bug #38745: spillover that doesn't make sense
- @Dan - this sounds weird - spillover without a dedicated DB... Could you please share 'ceph osd metadata' output?
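A minimal sketch of how the requested data could be gathered, assuming a standard Ceph CLI setup (the OSD id `0` is a placeholder):

```shell
# List which OSDs the spillover health warning names.
ceph health detail | grep -i spillover

# Check whether a given OSD actually has a dedicated db/wal device;
# the bluefs_* and devices fields show the layout.
ceph osd metadata 0 | grep -E 'bluefs|devices'
```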
- 03:38 PM Bug #38745: spillover that doesn't make sense
- Us too, on HDD-only OSDs (no dedicated block.db or WAL):
BLUEFS_SPILLOVER BlueFS spillover detected on 94 OSD(s)
...
- 01:28 PM Bug #41744: os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_layout(bluef...
- Looks like one needs to update the BlueFS layout when doing volume add/removal.
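For context, a sketch of the kind of volume addition that exercises this path, assuming `ceph-bluestore-tool` and a stopped OSD (the OSD id and device path are placeholders):

```shell
# Stop the OSD before touching its BlueFS volumes.
systemctl stop ceph-osd@0

# Attach a new dedicated DB device to the OSD's BlueFS layout.
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/sdb1

# Restarting is where a stale layout would trip the maybe_verify_layout assert.
systemctl start ceph-osd@0
```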
- 12:13 PM Bug #41744 (Resolved): os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_l...
- ...
- 06:33 AM Backport #41338 (Resolved): luminous: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29910
m...
09/09/2019
- 11:54 AM Bug #25098 (Resolved): Bluestore OSD failed to start with `bluefs_types.h: 54: FAILED assert(pos ...
09/08/2019
- 01:49 PM Backport #41339 (In Progress): mimic: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- 09:19 AM Backport #41709 (Resolved): luminous: Set concurrent max_background_compactions in rocksdb to 2
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30149
m...
- 08:21 AM Backport #41711 (In Progress): nautilus: man page for ceph-kvstore-tool missing command
09/07/2019
- 10:41 AM Backport #41709 (In Progress): luminous: Set concurrent max_background_compactions in rocksdb to 2
- 09:25 AM Backport #41709 (Resolved): luminous: Set concurrent max_background_compactions in rocksdb to 2
- https://github.com/ceph/ceph/pull/30149
- 10:39 AM Backport #41710 (In Progress): mimic: Set concurrent max_background_compactions in rocksdb to 2
- 09:25 AM Backport #41710 (Resolved): mimic: Set concurrent max_background_compactions in rocksdb to 2
- https://github.com/ceph/ceph/pull/30150
- 09:26 AM Backport #41711 (Resolved): nautilus: man page for ceph-kvstore-tool missing command
- https://github.com/ceph/ceph/pull/30245
09/06/2019
- 11:53 PM Backport #41340 (In Progress): nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared de...
- 07:08 PM Feature #41691 (Resolved): os/BlueStore: avoid double caching bluestore onodes in rocksdb block_c...
- 07:01 PM Feature #41690 (Fix Under Review): Sharding of rocksdb database using column families
- 07:01 PM Feature #41690 (Resolved): Sharding of rocksdb database using column families
- 1) Sharded BlueStore - 1 - KeyValueDB
https://github.com/ceph/ceph/pull/29084
2) Sharded BlueStore - 2 - main p...
09/05/2019
- 10:58 AM Bug #39618 (Can't reproduce): Runaway memory usage on Bluestore OSD
- Since we can't reproduce this in-house, I'm going to close this bug for now. With a 1GB cache I'd typically expect a...
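For anyone still hitting this, a sketch of how per-pool memory usage could be inspected on the OSD host via the admin socket (the OSD id `0` is a placeholder):

```shell
# Break down the OSD's allocations by mempool (bluestore cache,
# onodes, buffers, etc.) to see where the memory is going.
ceph daemon osd.0 dump_mempools

# Confirm the effective memory target for this OSD.
ceph daemon osd.0 config get osd_memory_target
```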
09/04/2019
- 04:14 PM Bug #40769: Set concurrent max_background_compactions in rocksdb to 2
- Luminous backport - https://github.com/ceph/ceph/pull/30149
Mimic backport - https://github.com/ceph/ceph/pull/30150
- 04:02 PM Bug #40769 (Pending Backport): Set concurrent max_background_compactions in rocksdb to 2
- We want to backport this change to luminous and mimic as well.
- 02:01 PM Backport #40449 (In Progress): nautilus: "no available blob id" assertion might occur
- https://github.com/ceph/ceph/pull/30144
- 01:30 PM Bug #40492 (Pending Backport): man page for ceph-kvstore-tool missing command
- 01:29 PM Bug #40492 (Resolved): man page for ceph-kvstore-tool missing command
09/03/2019
- 08:37 PM Backport #40449 (Need More Info): nautilus: "no available blob id" assertion might occur
- The master PR has been merged, but Sage wants it to bake for awhile before backporting.
Setting "Needs More Info" ...
- 08:35 PM Backport #40448 (Rejected): luminous: "no available blob id" assertion might occur
- https://github.com/ceph/ceph/pull/28229#issuecomment-503179250
- 08:35 PM Backport #40447 (Rejected): mimic: "no available blob id" assertion might occur
- https://github.com/ceph/ceph/pull/28229#issuecomment-503179250
08/29/2019
- 04:45 PM Bug #40492 (Fix Under Review): man page for ceph-kvstore-tool missing command
- 02:15 PM Bug #40492: man page for ceph-kvstore-tool missing command
- This is fixed by: https://github.com/ceph/ceph/pull/29990
- 02:14 PM Bug #40684 (Need More Info): bluestore objectstore_blackhole=true violates read-after-write
- Don't see the test run for this.
- 10:53 AM Bug #25098: Bluestore OSD failed to start with `bluefs_types.h: 54: FAILED assert(pos <= end)`
- https://github.com/ceph/ceph/compare/master...rzarzynski:wip-bug-25098-bluefs_layout_t
08/28/2019
- 11:09 AM Backport #41282 (In Progress): nautilus: BlueStore tool to check fragmentation
- 11:06 AM Backport #41281 (Resolved): luminous: BlueStore tool to check fragmentation
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29539
m...
- 11:03 AM Backport #41281 (In Progress): luminous: BlueStore tool to check fragmentation
- 10:51 AM Backport #39638: luminous: fsck on mkfs breaks ObjectStore/StoreTestSpecificAUSize.BlobReuseOnOve...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27056
m...
- 10:41 AM Backport #40422: luminous: Bitmap allocator return duplicate entries which cause interval_set assert
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28644
m...
08/27/2019
- 09:19 PM Backport #40534: luminous: pool compression options not consistently applied
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28895
m...
- 08:19 PM Backport #41338 (In Progress): luminous: os/bluestore/BlueFS: use 64K alloc_size on the shared de...
- 02:54 PM Bug #41037 (Resolved): Containerized cluster failure due to osd_memory_target not being set to ra...
- 02:53 PM Backport #41273 (Resolved): nautilus: Containerized cluster failure due to osd_memory_target not ...
- https://github.com/ceph/ceph/pull/29562
- 01:19 PM Backport #39247: luminous: os/bluestore: fix length overflow
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27365
m...
- 01:17 PM Backport #39254: luminous: occasional ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27529
m...
- 01:16 PM Backport #39444: luminous: OSD crashed in BitmapAllocator::init_add_free()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27739
m...
- 12:07 PM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- One more occurrence:
https://tracker.ceph.com/issues/41367
- 12:00 PM Bug #41367 (Duplicate): rocksdb: submit_transaction error: Corruption: block checksum mismatch co...
- https://tracker.ceph.com/issues/37282
- 07:19 AM Backport #40758: mimic: stupid allocator might return extents with length = 0
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29024
m...
08/26/2019
- 08:38 PM Backport #40535: mimic: pool compression options not consistently applied
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28894
m...
- 07:34 PM Backport #40423: mimic: Bitmap allocator return duplicate entries which cause interval_set assert
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28645
m...
- 06:58 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- This may help:
https://github.com/ceph/ceph/pull/27782
- 05:57 PM Backport #41273: nautilus: Containerized cluster failure due to osd_memory_target not being set t...
- Raising priority, I think this is a blocker for rook-ceph on any platform, and Octopus won't be in downstream release...
- 03:10 PM Backport #41510 (In Progress): luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
- 03:09 PM Backport #41510 (Resolved): luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
- https://github.com/ceph/ceph/pull/29564
- 03:09 PM Bug #38559 (Pending Backport): 50-100% iops lost due to bluefs_preextend_wal_files = false
- 03:06 PM Bug #38559 (Resolved): 50-100% iops lost due to bluefs_preextend_wal_files = false
- Having been run with --resolve-parent, the script "backport-create-issue" set the status of this issue to "Resolved" ...
- 02:55 PM Bug #40769 (Resolved): Set concurrent max_background_compactions in rocksdb to 2
- 02:43 PM Backport #41462 (Rejected): luminous: incorrect RW_IO_MAX
- https://github.com/ceph/ceph/pull/34513
- 02:42 PM Backport #41461 (Rejected): mimic: incorrect RW_IO_MAX
- 02:42 PM Backport #41460 (Resolved): nautilus: incorrect RW_IO_MAX
- https://github.com/ceph/ceph/pull/31397
- 10:56 AM Backport #40280: mimic: 50-100% iops lost due to bluefs_preextend_wal_files = false
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28574
m...
08/21/2019
- 02:39 PM Backport #40281 (Resolved): nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28573
m...
- 02:38 PM Backport #40837 (Resolved): nautilus: Set concurrent max_background_compactions in rocksdb to 2
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29162
m...
- 02:29 PM Backport #40675: nautilus: massive allocator dumps when unable to allocate space for bluefs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28891
m...
- 02:29 PM Backport #40536: nautilus: pool compression options not consistently applied
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28892
m...
- 02:28 PM Backport #40632: nautilus: High amount of Read I/O on BlueFS/DB when listing omap keys
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28963
m...
- 02:28 PM Backport #40757: nautilus: stupid allocator might return extents with length = 0
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29023
m... - 07:20 AM Bug #41367 (Duplicate): rocksdb: submit_transaction error: Corruption: block checksum mismatch co...
- For several weeks now, since Ceph 12.2.12, we have had 1-8 OSD crashes daily.
The OSD log is attached.
```
Trace:
201...
```
08/19/2019
- 08:12 PM Bug #41215: os/bluestore: do not set osd_memory_target default from cgroup limit
- nautilus backport: https://github.com/ceph/ceph/pull/29745
- 03:02 PM Backport #41340 (Resolved): nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- https://github.com/ceph/ceph/pull/30229
- 03:02 PM Backport #41339 (Resolved): mimic: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- https://github.com/ceph/ceph/pull/30219
- 03:02 PM Backport #41338 (Resolved): luminous: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- https://github.com/ceph/ceph/pull/29910
- 02:30 PM Backport #36640: luminous: Unable to recover from ENOSPC in BlueFS
- It appeared to me that increasing "bluefs_min_log_runway" config option to a really high value is one way to prevent ...
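A sketch of the workaround described in the comment above; the value shown is a placeholder, not a recommendation, and the option takes bytes:

```shell
# Raise bluefs_min_log_runway at runtime (non-persistent; an OSD
# restart may be needed for the new value to take full effect).
ceph tell osd.0 injectargs '--bluefs_min_log_runway 16777216'

# To persist it instead, add under [osd] in ceph.conf:
#   bluefs_min_log_runway = 16777216
```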
- 02:54 AM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- @Zoltan Arnold Nagy, did you change bluefs_buffered_io to true on 12.2.12?
We suspect this is due to the use of two f...
08/16/2019
08/15/2019
- 09:27 PM Bug #41188 (Pending Backport): incorrect RW_IO_MAX
- https://github.com/ceph/ceph/pull/29577
- 07:08 PM Bug #41215 (Pending Backport): os/bluestore: do not set osd_memory_target default from cgroup limit
- 05:31 PM Bug #41301 (Resolved): os/bluestore/BlueFS: use 64K alloc_size on the shared device
- 11:32 AM Bug #23463: src/os/bluestore/StupidAllocator.cc: 336: FAILED assert(rm.empty())
- We've also hit this on 12.2.12:
@ -4> 2019-08-15 13:29:40.208068 7ff0a6eabe00 1 bdev(0x55bae6cc7440 /var/lib...
- 09:11 AM Backport #41290 (Resolved): nautilus: fix and improve doc regarding manual bluestore cache settings.
- https://github.com/ceph/ceph/pull/31259
- 09:11 AM Backport #41289 (Resolved): luminous: fix and improve doc regarding manual bluestore cache settings.
- https://github.com/ceph/ceph/pull/31257
- 09:11 AM Backport #41288 (Resolved): mimic: fix and improve doc regarding manual bluestore cache settings.
- https://github.com/ceph/ceph/pull/31258
- 09:09 AM Backport #41282 (Resolved): nautilus: BlueStore tool to check fragmentation
- https://github.com/ceph/ceph/pull/29949
- 09:09 AM Backport #41281 (Resolved): luminous: BlueStore tool to check fragmentation
- https://github.com/ceph/ceph/pull/29539
- 09:09 AM Backport #41280 (Rejected): mimic: BlueStore tool to check fragmentation
- 09:08 AM Backport #41273 (Resolved): nautilus: Containerized cluster failure due to osd_memory_target not ...
08/13/2019
- 04:46 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- Looks like a duplicate of this one - https://tracker.ceph.com/issues/36482.
- 04:41 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- Today I restarted the OSD with `debug_bluestore = 20` and `debug_bluefs = 20` and I see the following logs....
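A sketch of reproducing that debug run, assuming the levels need to apply from boot (the OSD id is a placeholder):

```shell
# Set the debug levels in ceph.conf so they are active during startup,
# then restart the OSD to capture the slow-boot logs.
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
debug_bluestore = 20
debug_bluefs = 20
EOF
systemctl restart ceph-osd@0
```

Remember to remove the overrides afterwards; level 20 logging is very verbose.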
- 04:15 PM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- One more finding: as I mentioned in comment #2, this only happens to metadata OSDs (RGW index and multi-site sync logs...
- 12:36 AM Bug #41213: BlueStore OSD taking more than 60 minutes to boot
- This happens on every OSD restart. Today again I restarted and the same thing....