Activity
From 12/16/2018 to 01/14/2019
01/14/2019
- 06:46 PM Bug #37914 (Can't reproduce): bluestore: segmentation fault
- ...
- 12:41 PM Bug #36625 (Resolved): _aio_log_start inflight overlap of 0x10000~1000 with [65536~4096]
- 12:41 PM Backport #36755 (Rejected): luminous: _aio_log_start inflight overlap of 0x10000~1000 with [65536...
- Closing as too high-risk for luminous at this stage of its life cycle.
01/10/2019
- 12:24 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- I suggest altering OSD-specific compression settings only, not pool ones, i.e. bluestore_compression_mode + bluestore...
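As a minimal sketch of what such per-OSD settings could look like (the OSD id, mode, and algorithm below are placeholders, not values taken from this report; the runtime form needs Mimic or later):
    ceph config set osd.6 bluestore_compression_mode aggressive
    ceph config set osd.6 bluestore_compression_algorithm snappy
    # or persistently in ceph.conf on the OSD host, [osd] section:
    #   bluestore compression mode = aggressive
    #   bluestore compression algorithm = snappy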
- 11:15 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- Hi Igor,
A question: what is the best implementation of the compression, or what is your recommendation when the algor...
- 09:07 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- Hi Igor,
I meant that we had the same issue with snappy.
We'll try to reproduce it and send logs.
Thank you.
- 08:52 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- OSD.6 asserts on non-zero result returned from the compress method:
2019-01-09 06:42:33.599 7fb20e625700 -1 /build...
- 06:03 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- In this specific case it's lz4, but it also happened in snappy.
- 12:16 PM Bug #25050 (Duplicate): osd: OSD Failed to Start In function 'int BlueStore::_do_alloc_write
- Duplicates: https://tracker.ceph.com/issues/37839
01/09/2019
- 09:00 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- 03:51 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- I bet it's lz4, not snappy.
Could you please switch everything to snappy and check if the issue is still present?
- 03:36 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- Sorry, I meant what compression algorithm?
- 03:34 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- aggressive on the pool alone: ceph osd pool set cephfs_data compression_mode aggressive
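For reference, a minimal sketch of the related pool-level commands (the pool name matches the example above; the algorithm is illustrative only):
    ceph osd pool set cephfs_data compression_algorithm snappy
    ceph osd pool get cephfs_data compression_mode
    ceph osd pool get cephfs_data compression_algorithm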
- 03:32 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- Great, thanks! Do you remember which compression method was configured in this specific case?
- 02:41 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- It would be great if you could collect a "broken state" OSD log with debug bluestore set to 20.
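A minimal sketch of how such a log could be collected (the OSD id is a placeholder; the ceph.conf route takes effect after an OSD restart):
    ceph tell osd.6 injectargs '--debug_bluestore 20/20'
    # or persistently in ceph.conf on the OSD host, [osd] section:
    #   debug bluestore = 20/20
The resulting log is typically found under /var/log/ceph/ on the OSD host.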
- 02:38 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- From the mon log I can see an osd.6 connection termination report at 06:42:34:
2019-01-09 06:42:34.122505 mon.mon01 mon.0 1...
- 01:01 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
- The attached OSD log seems to be broken; could you please upload it (or any other valid one) again?
- 12:31 PM Bug #37839 (Resolved): Compression not working, and when applied OSD disks are failing randomly
- We have a few Ceph mimic production environments, versions 13.2.2 to 13.2.4.
When we enable compression, either on t...
- 04:48 AM Backport #37825 (In Progress): luminous: BlueStore: ENODATA not fully handled
- 04:45 AM Backport #37824 (In Progress): mimic: BlueStore: ENODATA not fully handled
01/08/2019
- 04:27 PM Backport #37825 (Resolved): luminous: BlueStore: ENODATA not fully handled
- https://github.com/ceph/ceph/pull/25855
- 04:27 PM Backport #37824 (Resolved): mimic: BlueStore: ENODATA not fully handled
- https://github.com/ceph/ceph/pull/25854
01/07/2019
- 10:02 AM Bug #22464: Bluestore: many checksum errors, always 0x6706be76 (which matches a zero block)
- The upgrade to 12.2.10 fixed the issue for us. See #65 for our setup. The most likely change in 12.2.10 which fixed t...
12/29/2018
- 08:13 AM Bug #24906: fio with bluestore crushed
- I have encountered a similar problem when fio was linked with libc's malloc.
I solved it by forcibly adding tcmalloc t...
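A minimal sketch of one way to force tcmalloc in at runtime (the library path is distro-specific and an assumption here; the job file name is a placeholder):
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 fio bluestore.fio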
12/21/2018
- 11:08 PM Bug #37652 (Resolved): bluestore: "fsck warning: legacy statfs record found, suggest to run store...
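For context, a minimal sketch of the fsck/repair invocations this warning points at (the OSD must be stopped first; the path assumes the default layout and a placeholder id):
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0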
- 07:59 PM Bug #37733 (Fix Under Review): os/bluestore: fixup access a destroy cond cause deadlock or undefi...
- https://github.com/ceph/ceph/pull/25659
- 06:55 AM Bug #37733 (Resolved): os/bluestore: fixup access a destroy cond cause deadlock or undefine behav...
- 1. The osd was marked down because of no heartbeat.
2. gdb attach found the thread hung in __lock_lock_wait...
- 03:13 PM Bug #36455 (Fix Under Review): BlueStore: ENODATA not fully handled
- 03:12 PM Bug #36455: BlueStore: ENODATA not fully handled
- https://github.com/ceph/ceph/pull/25670
12/20/2018
- 03:50 PM Bug #24639: [segfault] segfault in BlueFS::read
- I don't have the osd and disk anymore and can confirm that the disk itself was in a terrible state (many hardware rel...
- 03:34 PM Bug #24639 (Need More Info): [segfault] segfault in BlueFS::read
- Can you run ceph-bluestore-tool fsck --path ... --log-file log --log-level 20 and attach the output?
- 03:40 PM Bug #22534 (Resolved): Debian's bluestore *rocksdb* does not support neither fast CRC nor compres...
- 03:38 PM Bug #23165 (Resolved): OSD used for Metadata / MDS storage constantly entering heartbeat timeout
- This sounds like the cephfs directories weren't well fragmented, leading to very large omap objects.
- 03:37 PM Bug #23206 (Need More Info): ceph-osd daemon crashes - *** Caught signal (Aborted) **
- 03:37 PM Bug #23372 (Can't reproduce): osd: segfault
- 03:36 PM Bug #23819 (Won't Fix): how to make compactions smooth
- Compaction is a function of RocksDB and there isn't a lot to be done about it at the moment...
- 03:35 PM Bug #24561 (Fix Under Review): if disableWAL is set, submit_transacton_sync will met error.
- 03:34 PM Bug #23390 (Resolved): Identifying NVMe via PCI serial isn't sufficient (Bluestore/SPDK)
- Fixed in master: the SPDK backend now uses the PCI device's selector instead. See https://github.com/ceph/ceph/pull/2...
- 03:32 PM Bug #24901 (Need More Info): Client reads fail due to bad CRC under high memory pressure on OSDs
- 03:30 PM Bug #24906 (Need More Info): fio with bluestore crushed
- 03:30 PM Bug #24906: fio with bluestore crushed
- Is this still broken?
- 03:26 PM Bug #36108 (Duplicate): Assertion due to ENOENT result on clonerange2
12/18/2018
- 11:15 PM Bug #37652 (Fix Under Review): bluestore: "fsck warning: legacy statfs record found, suggest to r...
- 10:33 PM Bug #37652: bluestore: "fsck warning: legacy statfs record found, suggest to run store repair to ...
- Reassigning to Igor, sorry.
12/17/2018
- 04:42 PM Bug #22534: Debian's bluestore *rocksdb* does not support neither fast CRC nor compression
- Bumping up RocksDB in Luminous: https://github.com/ceph/ceph/pull/25592.