Activity
From 09/02/2019 to 10/01/2019
10/01/2019
- 10:59 PM Backport #39565: luminous: ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main device ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27912
merged
- 10:58 PM Backport #40756: luminous: stupid allocator might return extents with length = 0
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29025
merged
- 06:42 PM Bug #38637: BlueStore::ExtentMap::fault_range() assert
Something similar happened in 12.2.12 with ceph-objectstore-tool when trying to remove an object, and with fsck and deep fsck....
09/29/2019
- 04:54 AM Bug #42091: bluefs: sync_metadata leaks dirty files if log_t is empty
- https://github.com/ceph/ceph/pull/30631
- 04:19 AM Bug #42091 (Resolved): bluefs: sync_metadata leaks dirty files if log_t is empty
- When reading the source code, we found that, in the following code, if BlueFS::log_t is empty while there are BlueFS:...
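The quoted code is truncated in this export. As a rough illustration of the pattern the report describes (hypothetical names, not the actual BlueFS source), the sync path returns early when the pending log transaction is empty, so dirty files queued beforehand are never flushed:

#include <iostream>
#include <list>
#include <string>

// Rough, hypothetical sketch of the reported pattern -- these names and types
// are illustrative only, not the actual BlueFS source. If the pending log
// transaction (log_t) is empty, sync_metadata() returns early, so files
// already queued as dirty are never flushed and keep accumulating.
struct MiniFS {
  std::list<std::string> log_t;        // pending metadata log operations
  std::list<std::string> dirty_files;  // files waiting for a log flush

  void sync_metadata() {
    if (log_t.empty()) {
      return;  // bug pattern: dirty_files is not drained on this path
    }
    // normal path: write out the log, then clear both lists
    log_t.clear();
    dirty_files.clear();
  }
};

int main() {
  MiniFS fs;
  fs.dirty_files.push_back("file-A");  // dirtied without any pending log op
  fs.sync_metadata();
  std::cout << "dirty files left after sync: " << fs.dirty_files.size() << "\n";  // prints 1
  return 0;
}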
- 04:20 AM Bug #42090: bluefs: sync_metadata leaks dirty files if log_t is empty
- Sorry, this issue was edited badly. New issue: https://tracker.ceph.com/issues/42091
- 04:17 AM Bug #42090 (Closed): bluefs: sync_metadata leaks dirty files if log_t is empty
- When reading the source code, we found that, in the following code, if BlueFS::log_t is empty while there are BlueFS:...
09/26/2019
- 02:30 PM Bug #24901 (Resolved): Client reads fail due to bad CRC under high memory pressure on OSDs
- Marking this "Resolved" since the workaround fixes this issue.
- 02:27 PM Bug #25077 (Can't reproduce): Occasional assertion in ObjectStore/StoreTest.HashCollisionTest/2
- 02:17 PM Bug #40434: ceph-bluestore-tool:bluefs-bdev-migrate might result in broken OSD
- 02:16 PM Bug #40459 (Can't reproduce): os/bluestore: _verify_csum bad crc32 but no error in message, ceph-...
- There isn't enough information on this bug to diagnose the issue; please feel free to reopen if this appears again...
- 02:10 PM Bug #41901 (Fix Under Review): bluestore: unused calculation is broken
- 01:58 PM Bug #41744 (Fix Under Review): os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_...
- https://github.com/ceph/ceph/pull/30593
09/24/2019
- 07:55 PM Backport #42041 (Resolved): nautilus: bluestore objectstore_blackhole=true violates read-after-write
- https://github.com/ceph/ceph/pull/31019
09/23/2019
- 02:13 PM Bug #40684 (Pending Backport): bluestore objectstore_blackhole=true violates read-after-write
- Note that for the backport, we only want one commit: 6c2a8e472dc71b962d7de008e30631f125b148c3
- 02:02 PM Bug #42010 (Can't reproduce): segv in BlueStore::OnodeSpace::lookup during deletions
- Lots of threads deleting objects; one of them:...
09/22/2019
- 01:53 AM Bug #41744: os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_layout(bluef...
- /a/kchai-2019-09-21_17:17:30-rados-wip-kefu-testing-2019-09-20-1944-distro-basic-mira/4324219/
09/18/2019
- 05:57 AM Documentation #40473: enhance db sizing
- I had considered opening a new ticket, but this seems to be the same topic.
Here is what I wanted to request:
h...
- 01:22 AM Bug #41901 (Resolved): bluestore: unused calculation is broken
- > 2019-09-16T10:48:48.852+0800 7fa1a4ce9700 15 bluestore(/clove/xxG/ceph/build/dev/osd0) _write 1.1_head #1:ae0e8960:...
09/13/2019
- 09:22 PM Bug #41014: make bluefs_alloc_size default to bluestore_min_alloc_size
- luminous,mimic,nautilus backports are being handled via #41301
- 09:20 PM Bug #41014 (Duplicate): make bluefs_alloc_size default to bluestore_min_alloc_size
- 09:21 PM Bug #41301 (Pending Backport): os/bluestore/BlueFS: use 64K alloc_size on the shared device
- backports are still in progress
- 08:43 PM Backport #41810: nautilus: osd_memory_target isn't applied in runtime.
- @Sridar - thanks for taking on this backport! We have a script, in the master branch: src/script/ceph-backport.sh - y...
- 01:20 PM Backport #41810 (In Progress): nautilus: osd_memory_target isn't applied in runtime.
- 09:11 AM Backport #41810 (Resolved): nautilus: osd_memory_target isn't applied in runtime.
- https://github.com/ceph/ceph/pull/31852
- 01:02 PM Bug #41744: os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_layout(bluef...
- /a/http://pulpito.ceph.com/kchai-2019-09-13_04:19:52-rados-wip-kefu-testing-2019-09-11-2224-distro-basic-mira/4301801/
09/12/2019
- 09:45 PM Bug #41213 (Duplicate): BlueStore OSD taking more than 60 minutes to boot
- basically a dup of #36482
- 09:44 PM Bug #41014 (Resolved): make bluefs_alloc_size default to bluestore_min_alloc_size
- 09:44 PM Bug #41301 (Resolved): os/bluestore/BlueFS: use 64K alloc_size on the shared device
- 01:15 PM Bug #40741: Mass OSD failure, unable to restart
- Please see my previous comment; the on-site issue has been worked around as described there.
But please be cautious ...
- 01:52 AM Bug #40741: Mass OSD failure, unable to restart
- I have the same issue as you. Did you solve it? Could you share an update on this one?
- 11:53 AM Bug #41009 (Pending Backport): osd_memory_target isn't applied in runtime.
09/11/2019
- 03:06 PM Bug #41744 (In Progress): os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verif...
- 03:06 PM Bug #41744: os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_layout(bluef...
- Early, untested fix candidate: https://github.com/ceph/ceph/commit/d5b56c665a7f0ed7725e485ed05393a8b821ce7b.
Igor,...
- 07:56 AM Bug #38745: spillover that doesn't make sense
- @Igor, @Rafal: please ignore. I was confused about the configuration of this cluster. It indeed has 26GB rocksdb's, s...
- 05:05 AM Bug #38745: spillover that doesn't make sense
- @Adam, is there any news about this problem? We have about 1500 OSDs with spillover.
If you need some more data, f...
09/10/2019
- 04:03 PM Bug #38745: spillover that doesn't make sense
- @Dan - this sounds weird - spillover without a dedicated db... Could you please share the 'ceph osd metadata' output?
- 03:38 PM Bug #38745: spillover that doesn't make sense
- Us too, on HDD-only OSDs (no dedicated block.db or WAL):
BLUEFS_SPILLOVER BlueFS spillover detected on 94 OSD(s)
...
- 01:28 PM Bug #41744: os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_layout(bluef...
- Looks like one needs to update the bluefs layout when doing volume add/removal.
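A hedged sketch of that idea (hypothetical types and names, not the actual Ceph code): the layout BlueFS records has to be kept in sync with the devices actually present, otherwise the verification step fails and the assert trips:

#include <iostream>

// Hypothetical sketch of why the assert can fire -- names are illustrative,
// not the actual Ceph code. BlueFS records which devices it spans (its
// "layout"); if a volume is added or removed (e.g. with bluefs-bdev-migrate)
// without updating that record, a later check comparing the expected layout
// against the stored one fails.
struct Layout {
  bool dedicated_db = false;
  bool dedicated_wal = false;
  bool operator==(const Layout& other) const {
    return dedicated_db == other.dedicated_db &&
           dedicated_wal == other.dedicated_wal;
  }
};

bool verify_layout(const Layout& stored, const Layout& expected) {
  return stored == expected;  // false would correspond to the failed assert
}

int main() {
  Layout stored;                 // layout recorded before the volume change
  Layout expected;
  expected.dedicated_db = true;  // a DB volume was added, record not updated
  std::cout << std::boolalpha << verify_layout(stored, expected) << "\n";  // false
  return 0;
}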
- 12:13 PM Bug #41744 (Resolved): os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_l...
- ...
- 06:33 AM Backport #41338 (Resolved): luminous: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29910
m...
09/09/2019
- 11:54 AM Bug #25098 (Resolved): Bluestore OSD failed to start with `bluefs_types.h: 54: FAILED assert(pos ...
09/08/2019
- 01:49 PM Backport #41339 (In Progress): mimic: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- 09:19 AM Backport #41709 (Resolved): luminous: Set concurrent max_background_compactions in rocksdb to 2
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30149
m...
- 08:21 AM Backport #41711 (In Progress): nautilus: man page for ceph-kvstore-tool missing command
09/07/2019
- 10:41 AM Backport #41709 (In Progress): luminous: Set concurrent max_background_compactions in rocksdb to 2
- 09:25 AM Backport #41709 (Resolved): luminous: Set concurrent max_background_compactions in rocksdb to 2
- https://github.com/ceph/ceph/pull/30149
- 10:39 AM Backport #41710 (In Progress): mimic: Set concurrent max_background_compactions in rocksdb to 2
- 09:25 AM Backport #41710 (Resolved): mimic: Set concurrent max_background_compactions in rocksdb to 2
- https://github.com/ceph/ceph/pull/30150
- 09:26 AM Backport #41711 (Resolved): nautilus: man page for ceph-kvstore-tool missing command
- https://github.com/ceph/ceph/pull/30245
09/06/2019
- 11:53 PM Backport #41340 (In Progress): nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared de...
- 07:08 PM Feature #41691 (Resolved): os/BlueStore: avoid double caching bluestore onodes in rocksdb block_c...
- 07:01 PM Feature #41690 (Fix Under Review): Sharding of rocksdb database using column families
- 07:01 PM Feature #41690 (Resolved): Sharding of rocksdb database using column families
- 1) Sharded BlueStore - 1 - KeyValueDB
https://github.com/ceph/ceph/pull/29084
2) Sharded BlueStore - 2 - main p...
09/05/2019
- 10:58 AM Bug #39618 (Can't reproduce): Runaway memory usage on Bluestore OSD
- Since we can't reproduce this in-house, I'm going to close this bug for now. With a 1GB cache I'd typically expect a...
09/04/2019
- 04:14 PM Bug #40769: Set concurrent max_background_compactions in rocksdb to 2
- Luminous backport - https://github.com/ceph/ceph/pull/30149
Mimic backport - https://github.com/ceph/ceph/pull/30150
- 04:02 PM Bug #40769 (Pending Backport): Set concurrent max_background_compactions in rocksdb to 2
- We want to backport this change to luminous and mimic as well.
- 02:01 PM Backport #40449 (In Progress): nautilus: "no available blob id" assertion might occur
- https://github.com/ceph/ceph/pull/30144
- 01:30 PM Bug #40492 (Pending Backport): man page for ceph-kvstore-tool missing command
- 01:29 PM Bug #40492 (Resolved): man page for ceph-kvstore-tool missing command
09/03/2019
- 08:37 PM Backport #40449 (Need More Info): nautilus: "no available blob id" assertion might occur
- The master PR has been merged, but Sage wants it to bake for a while before backporting.
Setting "Needs More Info" ... - 08:35 PM Backport #40448 (Rejected): luminous: "no available blob id" assertion might occur
- https://github.com/ceph/ceph/pull/28229#issuecomment-503179250
- 08:35 PM Backport #40447 (Rejected): mimic: "no available blob id" assertion might occur
- https://github.com/ceph/ceph/pull/28229#issuecomment-503179250