Activity

From 12/12/2018 to 01/10/2019

01/10/2019

12:24 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
I suggest altering only specific OSD compression settings, not pool ones, i.e. bluestore_compression_mode + bluestore... Igor Fedotov
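A sketch of what those OSD-level settings could look like, assuming Mimic's "ceph config set" and using osd.6 purely as an example target; the values are illustrative, not taken from this thread:
  # set compression on individual OSDs rather than on the pool
  ceph config set osd.6 bluestore_compression_mode aggressive
  ceph config set osd.6 bluestore_compression_algorithm snappy
  # or inject into a running daemon without persisting the change
  ceph tell osd.6 injectargs '--bluestore_compression_mode=aggressive'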
11:15 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
Hi Igor,
A question: what is the best implementation of the compression, or what is your recommendation when the algor...
Greg Smith
09:07 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
Hi Igor,
I meant that we had the same issue with snappy.
We'll try to reproduce it and send logs.
Thank you.
Greg Smith
08:52 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
OSD.6 asserts on a non-zero result returned from the compress method:
2019-01-09 06:42:33.599 7fb20e625700 -1 /build...
Igor Fedotov
06:03 AM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
In this specific case it's lz4, but it also happened with snappy. Greg Smith
12:16 PM Bug #25050 (Duplicate): osd: OSD Failed to Start In function 'int BlueStore::_do_alloc_write
Duplicates: https://tracker.ceph.com/issues/37839 Igor Fedotov

01/09/2019

09:00 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
Igor Fedotov
03:51 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
I bet it's lz4, not snappy.
Could you please switch everything to snappy and check if the issue is still present?
Igor Fedotov
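Switching a pool to snappy, as requested above, could look like the following; cephfs_data is assumed from the earlier comment, so adjust the pool name as needed:
  ceph osd pool set cephfs_data compression_algorithm snappy
  # and, if compression is also configured at the OSD level (illustrative):
  ceph config set osd bluestore_compression_algorithm snappy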
03:36 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
Sorry, I meant: what compression algorithm? Igor Fedotov
03:34 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
Aggressive, on the pool alone: ceph osd pool set cephfs_data compression_mode aggressive Greg Smith
03:32 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
Great, thanks! Do you remember which compression method was configured in this specific case? Igor Fedotov
02:41 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
It would be great if you manage to collect a "broken state" OSD log with debug bluestore set to 20. Igor Fedotov
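Raising the debug level to capture such a log might look like this; osd.6 is assumed only because it is the daemon discussed above:
  # at runtime, on the affected daemon
  ceph tell osd.6 injectargs '--debug_bluestore 20/20'
  # or persistently in ceph.conf, under [osd]: debug bluestore = 20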
02:38 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
From the mon log I can see an osd.6 connection termination report at 06:42:34:
2019-01-09 06:42:34.122505 mon.mon01 mon.0 1...
Igor Fedotov
01:01 PM Bug #37839: Compression not working, and when applied OSD disks are failing randomly
The attached OSD log seems to be broken; could you please upload it, or any other valid one, again? Igor Fedotov
12:31 PM Bug #37839 (Resolved): Compression not working, and when applied OSD disks are failing randomly
We have a few Ceph Mimic production environments, versions 13.2.2 to 13.2.4.
When we enable compression, either on t...
Greg Smith
04:48 AM Backport #37825 (In Progress): luminous: BlueStore: ENODATA not fully handled
Ashish Singh
04:45 AM Backport #37824 (In Progress): mimic: BlueStore: ENODATA not fully handled
Ashish Singh

01/08/2019

04:27 PM Backport #37825 (Resolved): luminous: BlueStore: ENODATA not fully handled
https://github.com/ceph/ceph/pull/25855 Nathan Cutler
04:27 PM Backport #37824 (Resolved): mimic: BlueStore: ENODATA not fully handled
https://github.com/ceph/ceph/pull/25854 Nathan Cutler

01/07/2019

10:02 AM Bug #22464: Bluestore: many checksum errors, always 0x6706be76 (which matches a zero block)
The upgrade to 12.2.10 fixed the issue for us. See #65 for our setup. The most likely change in 12.2.10 which fixed t... Gaudenz Steinlin

01/01/2019

04:28 AM Bug #36455 (Pending Backport): BlueStore: ENODATA not fully handled
Sage Weil

12/29/2018

08:13 AM Bug #24906: fio with bluestore crushed
I have encountered a similar problem when fio was linked with libc's malloc.
I solved it by forcibly adding tcmalloc t...
Adam Kupczyk
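An alternative to relinking fio is preloading tcmalloc at runtime; a minimal sketch, assuming a Debian-style library path and a hypothetical job file named bluestore.fio:
  # force fio's allocations through tcmalloc instead of glibc malloc
  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 fio bluestore.fio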

12/21/2018

11:08 PM Bug #37652 (Resolved): bluestore: "fsck warning: legacy statfs record found, suggest to run store...
Igor Fedotov
07:59 PM Bug #37733 (Fix Under Review): os/bluestore: fixup access a destroy cond cause deadlock or undefi...
https://github.com/ceph/ceph/pull/25659 Sage Weil
06:55 AM Bug #37733 (Resolved): os/bluestore: fixup access a destroy cond cause deadlock or undefine behav...
1. The OSD has been marked down because of no heartbeat.
2. Attached gdb and found a thread hung by __lock_lock_wait...
bing lin
03:13 PM Bug #36455 (Fix Under Review): BlueStore: ENODATA not fully handled
Radoslaw Zarzynski
03:12 PM Bug #36455: BlueStore: ENODATA not fully handled
https://github.com/ceph/ceph/pull/25670 Radoslaw Zarzynski

12/20/2018

03:50 PM Bug #24639: [segfault] segfault in BlueFS::read
I don't have the osd and disk anymore and can confirm that the disk itself was in a terrible state (many hardware rel... Compile Nix
03:34 PM Bug #24639 (Need More Info): [segfault] segfault in BlueFS::read
Can you do ceph-bluestore-tool fsck --path ... --log-file log --log-level 20 and attach the output? Sage Weil
03:40 PM Bug #22534 (Resolved): Debian's bluestore *rocksdb* does not support neither fast CRC nor compres...
Sage Weil
03:38 PM Bug #23165 (Resolved): OSD used for Metadata / MDS storage constantly entering heartbeat timeout
This sounds like the cephfs directories weren't well fragmented, leading to very large omap objects. Sage Weil
03:37 PM Bug #23206 (Need More Info): ceph-osd daemon crashes - *** Caught signal (Aborted) **
Sage Weil
03:37 PM Bug #23372 (Can't reproduce): osd: segfault
Sage Weil
03:36 PM Bug #23819 (Won't Fix): how to make compactions smooth
Compaction is a function of RocksDB and there isn't a lot to be done about it at the moment... Sage Weil
03:35 PM Bug #24561 (Fix Under Review): if disableWAL is set, submit_transacton_sync will met error.
Sage Weil
03:34 PM Bug #23390 (Resolved): Identifying NVMe via PCI serial isn't sufficient (Bluestore/SPDK)
Fixed in master. Now the SPDK backend uses the PCI device's selector instead. See https://github.com/ceph/ceph/pull/2... Kefu Chai
03:32 PM Bug #24901 (Need More Info): Client reads fail due to bad CRC under high memory pressure on OSDs
Sage Weil
03:30 PM Bug #24906 (Need More Info): fio with bluestore crushed
Sage Weil
03:30 PM Bug #24906: fio with bluestore crushed
Is this still broken? Sage Weil
03:26 PM Bug #36108 (Duplicate): Assertion due to ENOENT result on clonerange2
Sage Weil

12/18/2018

11:15 PM Bug #37652 (Fix Under Review): bluestore: "fsck warning: legacy statfs record found, suggest to r...
Patrick Donnelly
10:33 PM Bug #37652: bluestore: "fsck warning: legacy statfs record found, suggest to run store repair to ...
Reassigning to Igor, sorry. Patrick Donnelly

12/17/2018

04:42 PM Bug #22534: Debian's bluestore *rocksdb* does not support neither fast CRC nor compression
Bumping up RocksDB in Luminous: https://github.com/ceph/ceph/pull/25592. Radoslaw Zarzynski

12/13/2018

07:41 PM Bug #37652 (Resolved): bluestore: "fsck warning: legacy statfs record found, suggest to run store...
... Patrick Donnelly
 
