Activity
From 12/10/2018 to 01/08/2019
01/08/2019
- 04:27 PM Backport #37825 (Resolved): luminous: BlueStore: ENODATA not fully handled
- https://github.com/ceph/ceph/pull/25855
- 04:27 PM Backport #37824 (Resolved): mimic: BlueStore: ENODATA not fully handled
- https://github.com/ceph/ceph/pull/25854
01/07/2019
- 10:02 AM Bug #22464: Bluestore: many checksum errors, always 0x6706be76 (which matches a zero block)
- The upgrade to 12.2.10 fixed the issue for us. See #65 for our setup. The most likely change in 12.2.10 which fixed t...
12/29/2018
- 08:13 AM Bug #24906: fio with bluestore crushed
- I have encountered a similar problem when fio was linked with libc's malloc.
I solved it by forcibly adding tcmalloc t...
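That workaround (making fio allocate through tcmalloc rather than libc's malloc) can be sketched with an LD_PRELOAD, avoiding a relink entirely. The library path and job file name below are assumptions; adjust them for your system.

```shell
# Sketch: force fio to use tcmalloc instead of libc's malloc via LD_PRELOAD.
# The .so path is an assumed example; locate yours with `ldconfig -p | grep tcmalloc`.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4
fio bluestore.fio   # bluestore.fio is a hypothetical job file
```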
12/21/2018
- 11:08 PM Bug #37652 (Resolved): bluestore: "fsck warning: legacy statfs record found, suggest to run store...
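The fsck warning in Bug #37652 suggests running a store repair; a minimal sketch with ceph-bluestore-tool, assuming OSD id 0 and the default data path (the OSD must be stopped while the tool runs):

```shell
# Sketch: rewrite legacy statfs records on a stopped BlueStore OSD.
# The OSD id and path are assumed examples.
systemctl stop ceph-osd@0
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0
```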
- 07:59 PM Bug #37733 (Fix Under Review): os/bluestore: fixup access a destroy cond cause deadlock or undefi...
- https://github.com/ceph/ceph/pull/25659
- 06:55 AM Bug #37733 (Resolved): os/bluestore: fixup access a destroy cond cause deadlock or undefine behav...
- 1. OSD has been marked down because of no heartbeat
2. gdb attach, found thread hung by __lock_lock_wait...
- 03:13 PM Bug #36455 (Fix Under Review): BlueStore: ENODATA not fully handled
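The gdb step described in that diagnosis can be sketched generically; this is a common pattern for inspecting a hung daemon, not the exact commands from the report:

```shell
# Sketch: attach gdb to a running ceph-osd and dump every thread's backtrace.
OSD_PID=$(pidof ceph-osd | awk '{print $1}')   # first matching pid
gdb -p "$OSD_PID" -batch -ex "thread apply all bt" > osd-backtraces.txt
```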
- 03:12 PM Bug #36455: BlueStore: ENODATA not fully handled
- https://github.com/ceph/ceph/pull/25670
12/20/2018
- 03:50 PM Bug #24639: [segfault] segfault in BlueFS::read
- I don't have the osd and disk anymore and can confirm that the disk itself was in a terrible state (many hardware rel...
- 03:34 PM Bug #24639 (Need More Info): [segfault] segfault in BlueFS::read
- Can you run ceph-bluestore-tool fsck --path ... --log-file log --log-level 20 and attach the output?
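Spelled out, that request looks like the following; the OSD path is an assumption, and the OSD must be offline while fsck runs:

```shell
# Sketch: verbose fsck of a stopped BlueStore OSD, logging at debug level 20.
# /var/lib/ceph/osd/ceph-0 is an assumed example path.
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 \
    --log-file fsck.log --log-level 20
```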
- 03:40 PM Bug #22534 (Resolved): Debian's bluestore *rocksdb* does not support neither fast CRC nor compres...
- 03:38 PM Bug #23165 (Resolved): OSD used for Metadata / MDS storage constantly entering heartbeat timeout
- This sounds like the cephfs directories weren't well fragmented, leading to very large omap objects.
- 03:37 PM Bug #23206 (Need More Info): ceph-osd daemon crashes - *** Caught signal (Aborted) **
- 03:37 PM Bug #23372 (Can't reproduce): osd: segfault
- 03:36 PM Bug #23819 (Won't Fix): how to make compactions smooth
- Compaction is a function of RocksDB, and there isn't a lot to be done about it at the moment...
- 03:35 PM Bug #24561 (Fix Under Review): if disableWAL is set, submit_transacton_sync will met error.
- 03:34 PM Bug #23390 (Resolved): Identifying NVMe via PCI serial isn't sufficient (Bluestore/SPDK)
- Fixed in master. The SPDK backend now uses the PCI device's selector instead. See https://github.com/ceph/ceph/pull/2...
- 03:32 PM Bug #24901 (Need More Info): Client reads fail due to bad CRC under high memory pressure on OSDs
- 03:30 PM Bug #24906 (Need More Info): fio with bluestore crushed
- 03:30 PM Bug #24906: fio with bluestore crushed
- Is this still broken?
- 03:26 PM Bug #36108 (Duplicate): Assertion due to ENOENT result on clonerange2
12/18/2018
- 11:15 PM Bug #37652 (Fix Under Review): bluestore: "fsck warning: legacy statfs record found, suggest to r...
- 10:33 PM Bug #37652: bluestore: "fsck warning: legacy statfs record found, suggest to run store repair to ...
- Reassigning to Igor, sorry.
12/17/2018
- 04:42 PM Bug #22534: Debian's bluestore *rocksdb* does not support neither fast CRC nor compression
- Bumping up RocksDB in Luminous: https://github.com/ceph/ceph/pull/25592.