Activity
From 09/23/2019 to 10/22/2019
10/22/2019
- 03:00 PM Backport #42428 (Rejected): mimic: High amount of Read I/O on BlueFS/DB when listing omap keys
- no mimic backport at this time
- 10:36 AM Backport #42428 (Rejected): mimic: High amount of Read I/O on BlueFS/DB when listing omap keys
- 03:00 PM Bug #36482 (Resolved): High amount of Read I/O on BlueFS/DB when listing omap keys
- After discussing in the rados team standup, we've decided not to backport to mimic/luminous at this time. Nautilus f...
- 10:08 AM Bug #36482: High amount of Read I/O on BlueFS/DB when listing omap keys
- Upon discussion with @Igor, the backports of this issue will require
* https://github.com/ceph/ceph/pull/27627
* ...
- 12:37 PM Bug #42345: OSD: When object compression ratio is high (but less than “bluestore_compression_requi...
- also please note that compression makes sense for writes that are at least 2x as large as bluestore_min_alloc_s...
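For context, one way to inspect the two values this note relates (a sketch only: the OSD id and admin-socket access are assumptions, and defaults vary by release and media type):
    # compressed blobs are still rounded up to whole min_alloc_size units,
    # which is why small writes gain nothing from compression
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
    ceph daemon osd.0 config get bluestore_compression_required_ratio   # 0.875 by default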
- 12:26 PM Bug #42345 (Need More Info): OSD: When object compression ratio is high (but less than “bluestore_...
- would you please set debug bluestore to 20, repeat the test and share the log? Thanks!
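For example, a minimal sketch of that request (osd.3 is an illustrative id; 1/5 is the usual default level):
    ceph tell osd.3 injectargs '--debug_bluestore 20/20'   # raise verbosity
    # ... repeat the test, then collect /var/log/ceph/ceph-osd.3.log ...
    ceph tell osd.3 injectargs '--debug_bluestore 1/5'     # restore the default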
10/21/2019
- 07:07 PM Bug #36482: High amount of Read I/O on BlueFS/DB when listing omap keys
- There's several clusters on luminous that can't be upgraded just yet, but will upgrade what we can. I'm just trying t...
- 06:27 PM Bug #36482 (Pending Backport): High amount of Read I/O on BlueFS/DB when listing omap keys
- 06:26 PM Bug #36482: High amount of Read I/O on BlueFS/DB when listing omap keys
- Gerben Meijer wrote:
> How long would this need to "bake"? Running into this frequently (several times per day). Is ...
- 12:18 PM Bug #36482: High amount of Read I/O on BlueFS/DB when listing omap keys
- How long would this need to "bake"? Running into this frequently (several times per day). Is this going to make it in...
- 01:08 PM Backport #42041 (In Progress): nautilus: bluestore objectstore_blackhole=true violates read-after...
- 09:54 AM Backport #42041: nautilus: bluestore objectstore_blackhole=true violates read-after-write
- Sage writes in parent issue:
note that for backport, we only want one commit, 6c2a8e472dc71b962d7de008e30631f125b1...
10/17/2019
- 08:14 AM Bug #38559 (Resolved): 50-100% iops lost due to bluefs_preextend_wal_files = false
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
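For reference, a quick way to check what a running OSD uses for this option (osd.0 is illustrative; requires admin-socket access on the OSD host):
    ceph daemon osd.0 config get bluefs_preextend_wal_files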
- 08:12 AM Bug #40769 (Resolved): Set concurrent max_background_compactions in rocksdb to 2
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
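The knob lives inside the comma-separated bluestore_rocksdb_options string; a hedged sketch of inspecting it (osd.0 is illustrative):
    ceph daemon osd.0 config get bluestore_rocksdb_options
    # the change adds max_background_compactions=2 to this list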
- 08:08 AM Backport #41710 (Resolved): mimic: Set concurrent max_background_compactions in rocksdb to 2
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30150
m...
- 07:05 AM Bug #42345: OSD: When object compression ratio is high (but less than “bluestore_compression_requi...
- Fengzhe Han wrote:
> Version: nautilus
>
> I set “bluestore_compression_required_ratio” to “.98”. Then put ...
- 07:03 AM Bug #42345 (Closed): OSD: When object compression ratio is high (but less than “bluestore_compress...
- Version: nautilus
I set “bluestore_compression_required_ratio” to “.98”. Then put some objects into the clus...
- 06:21 AM Backport #41510 (Resolved): luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29564
m...
10/16/2019
- 11:22 PM Backport #41710: mimic: Set concurrent max_background_compactions in rocksdb to 2
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30150
merged
- 11:07 PM Backport #41510: luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29564
merged
10/15/2019
- 07:55 PM Backport #41339 (Resolved): mimic: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30219
m...
- 07:46 PM Backport #41339: mimic: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30219
merged
- 08:38 AM Bug #42297 (Rejected): ceph-bluestore-tool repair osd error
- 02:57 AM Bug #42297: ceph-bluestore-tool repair osd error
- Igor Fedotov wrote:
> I think that's not a bug.
> The first time, the tool showed some errors in the DB due to the legacy stats...
10/14/2019
- 06:43 PM Backport #37564 (Need More Info): mimic: OSD compression: incorrect display of the used disk space
- This is a massive feature which will require a release notes entry.
- 06:36 PM Backport #37565 (Rejected): luminous: OSD compression: incorrect display of the used disk space
- luminous is on the verge of being declared EOL - too late for this feature to be backported
- 03:26 PM Bug #42297: ceph-bluestore-tool repair osd error
- I think that's not a bug.
The first time, the tool showed some errors in the DB due to the legacy stats layout left over from th...
- 02:47 PM Bug #42297: ceph-bluestore-tool repair osd error
- Igor Fedotov wrote:
> Hi Yu!
> are you getting these errors during repair only, or do they appear afterward as well?
> ...
- 11:44 AM Bug #42297: ceph-bluestore-tool repair osd error
- Hi Yu!
are you getting these errors during repair only, or do they appear afterward as well?
They are expected during re...
- 09:55 AM Bug #42297: ceph-bluestore-tool repair osd error
- Jiang Yu wrote:
> Jiang Yu wrote:
> > This is because my cluster has a "1 filesystem is degraded" alarm. When I restore...
- 09:54 AM Bug #42297: ceph-bluestore-tool repair osd error
- Jiang Yu wrote:
> This is because my cluster has a "1 filesystem is degraded" alarm. When I restore the filesystem, it c...
- 09:44 AM Bug #42297: ceph-bluestore-tool repair osd error
- This is because my cluster has a "1 filesystem is degraded" alarm. When I restore the filesystem, it can succeed.
[r...
- 09:13 AM Bug #42297 (Rejected): ceph-bluestore-tool repair osd error
- Hello everyone,
I am coming from ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
Upgra...
- 03:50 AM Bug #42293 (New): bluestore/rocksdb: wrong Fast CRC32 supported log printing on AArch64 platform
- root@node3:~# lscpu |grep Architecture
Architecture: aarch64
root@node3:~# ceph -v
ceph version 14.2.2 (...
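One illustrative way to look for the line in question (the log path is an assumption and the exact string may differ by release):
    grep -i 'Fast CRC32' /var/log/ceph/ceph-osd.0.log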
10/11/2019
- 03:23 PM Bug #42284: fastbmap_allocator_impl.h: FAILED ceph_assert(available >= allocated) in upgrade:lumi...
- Bluestore crash -- moving to RADOS
- 03:11 PM Bug #42284 (Duplicate): fastbmap_allocator_impl.h: FAILED ceph_assert(available >= allocated) in ...
- Run: http://qa-proxy.ceph.com/teuthology/teuthology-2019-10-11_02:25:02-upgrade:luminous-x-nautilus-distro-basic-smit...
- 02:37 PM Bug #42189 (Resolved): SyntheticWorkloadState::scan abort
- 11:54 AM Bug #41744 (Resolved): os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_verify_l...
- 11:46 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Dmitry,
could you please share the whole startup log?
Preferably with "debug bluestore" set to 20.
And are you...
10/10/2019
- 12:40 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- I feel like I may have hit a similar problem while running @ceph-bluestore-tool repair@ in 14.2.3:
{{collapse(trace)...
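For reference, a minimal sketch of such a repair run (the OSD id and path are assumptions; the OSD must be stopped first):
    systemctl stop ceph-osd@0
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
    systemctl start ceph-osd@0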
10/09/2019
- 08:31 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Hi Igor,
the next OSD broke. Let me know what you need.
10/08/2019
- 02:48 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Tobias, thanks for the info.
And yeah, please ping me once you have another broken OSD.
- 10:18 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Hi Igor,
the issue mentioned in the mailing list looks like exactly the same issue we have. The only thing I did wit...
- 09:53 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Hi Tobias,
may I have some clarifications, please?
- I can see a similar issue report on the ceph-users mailing list:...
- 09:03 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- log of another OSD on a different cluster
- 08:41 AM Bug #42223 (Resolved): ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_a...
- After updating to Nautilus 14.2.4, random OSDs on different clusters keep failing after a random time. The only known fix is t...
- 07:16 AM Bug #42209: STATE_KV_SUBMITTED is set too early.
- "Issue is Nautilus and earlier releases specific as master already has some changes making the case even worse and th...
10/07/2019
- 01:45 PM Bug #42189: SyntheticWorkloadState::scan abort
- Relevant ticket for Nautilus: https://tracker.ceph.com/issues/42209
- 01:11 PM Bug #42189: SyntheticWorkloadState::scan abort
- This is a regression (caused by https://github.com/ceph/ceph/commit/a2fa546d02cfe2a910413acdec5ef11dbfacb359) which ...
- 01:05 PM Bug #42189 (Fix Under Review): SyntheticWorkloadState::scan abort
- 08:26 AM Bug #42189: SyntheticWorkloadState::scan abort
- I observed similar and some related issues, e.g. in the SimpleListTest test case. The reason is somehow bound to collection_...
- 01:44 PM Bug #42209 (Resolved): STATE_KV_SUBMITTED is set too early.
- The _txc_state_proc function might set a TXC's state before committing data to the DB.
The issue is specific to Nautilus and earlier release...
10/06/2019
10/02/2019
- 07:05 PM Bug #42166: crash when LRU trimming
- Build was done on Fedora 30.
- 07:03 PM Bug #42166 (Closed): crash when LRU trimming
- Testing xfstests on kcephfs vs. a vstart cluster, the OSD crashed with this:...
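For anyone reproducing this, a minimal vstart sketch from a built source tree (daemon counts are illustrative):
    cd build
    MON=1 MGR=1 OSD=3 MDS=1 ../src/vstart.sh -n -d   # -n: new cluster, -d: debug logging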
- 01:17 PM Bug #39144 (Resolved): ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main device (slo...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:15 PM Bug #40492 (Resolved): man page for ceph-kvstore-tool missing command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:15 PM Bug #40703 (Resolved): stupid allocator might return extents with length = 0
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:10 PM Bug #42090 (Closed): bluefs: sync_metadata leaks dirty files if log_t is empty
- closing in favor of #42091
- 01:09 PM Bug #42091 (Fix Under Review): bluefs: sync_metadata leaks dirty files if log_t is empty
- 12:13 PM Backport #39565 (Resolved): luminous: ceph-bluestore-tool: bluefs-bdev-expand silently bypasses m...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27912
m...
- 12:12 PM Backport #40756 (Resolved): luminous: stupid allocator might return extents with length = 0
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29025
m...
- 12:10 PM Backport #41711 (Resolved): nautilus: man page for ceph-kvstore-tool missing command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30245
m...
10/01/2019
- 10:59 PM Backport #39565: luminous: ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main device ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27912
merged
- 10:58 PM Backport #40756: luminous: stupid allocator might return extents with length = 0
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29025
merged
- 06:42 PM Bug #38637: BlueStore::ExtentMap::fault_range() assert
- Something similar happened in 12.2.12 with ceph-objectstore-tool trying to remove an object, and with fsck and deep fsck....
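For context, removing an object with ceph-objectstore-tool looks roughly like this (a sketch only: the OSD id, pgid, and object name are placeholders, and the OSD must be offline):
    systemctl stop ceph-osd@2
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --pgid 2.0 '<object>' remove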
09/29/2019
- 04:54 AM Bug #42091: bluefs: sync_metadata leaks dirty files if log_t is empty
- https://github.com/ceph/ceph/pull/30631
- 04:19 AM Bug #42091 (Resolved): bluefs: sync_metadata leaks dirty files if log_t is empty
- When reading the source code, we found that, in the following code, if BlueFS::log_t is empty while there are BlueFS:...
- 04:20 AM Bug #42090: bluefs: sync_metadata leaks dirty files if log_t is empty
- Sorry, this issue was edited badly. New issue: https://tracker.ceph.com/issues/42091
- 04:17 AM Bug #42090 (Closed): bluefs: sync_metadata leaks dirty files if log_t is empty
- When reading the source code, we found that, in the following code, if BlueFS::log_t is empty while there are BlueFS:...
09/26/2019
- 02:30 PM Bug #24901 (Resolved): Client reads fail due to bad CRC under high memory pressure on OSDs
- Marking this "Resolved" since the workaround fixes this issue.
- 02:27 PM Bug #25077 (Can't reproduce): Occasional assertion in ObjectStore/StoreTest.HashCollisionTest/2
- 02:17 PM Bug #40434: ceph-bluestore-tool: bluefs-bdev-migrate might result in broken OSD
- 02:16 PM Bug #40459 (Can't reproduce): os/bluestore: _verify_csum bad crc32 but no error in message, ceph-...
- There isn't enough information on this bug to diagnose the issue; please feel free to reopen if this appears again...
- 02:10 PM Bug #41901 (Fix Under Review): bluestore: unused calculation is broken
- 01:58 PM Bug #41744 (Fix Under Review): os/bluestore/BlueStore.cc: 5313: FAILED ceph_assert(bluefs->maybe_...
- https://github.com/ceph/ceph/pull/30593
09/24/2019
- 07:55 PM Backport #42041 (Resolved): nautilus: bluestore objectstore_blackhole=true violates read-after-write
- https://github.com/ceph/ceph/pull/31019
09/23/2019
- 02:13 PM Bug #40684 (Pending Backport): bluestore objectstore_blackhole=true violates read-after-write
- note that for backport, we only want one commit, 6c2a8e472dc71b962d7de008e30631f125b148c3
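A hedged sketch of picking just that commit onto a backport branch (the branch name is illustrative):
    git checkout -b wip-40684-nautilus origin/nautilus
    git cherry-pick -x 6c2a8e472dc71b962d7de008e30631f125b148c3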
- 02:02 PM Bug #42010 (Can't reproduce): segv in BlueStore::OnodeSpace::lookup during deletions
- lots of threads deleting objects. One of them:...