Activity
From 02/08/2021 to 03/09/2021
03/09/2021
- 05:33 PM Backport #49039 (In Progress): octopus: Cannot allocate memory appears when using io_uring osd
- 05:24 PM Backport #49386 (In Progress): octopus: BlueFS reads might improperly rebuild internal buffer und...
- 04:47 PM Backport #49478 (Need More Info): luminous: Bluefs improperly handles huge (>4GB) writes which ca...
- non-trivial to backport and luminous is EOL
- 04:46 PM Backport #49477 (Need More Info): mimic: Bluefs improperly handles huge (>4GB) writes which cause...
- non-trivial commit, mimic is EOL
- 02:00 PM Backport #49038 (In Progress): pacific: Cannot allocate memory appears when using io_uring osd
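The "Cannot allocate memory" failure tracked in the io_uring backports above (#49038/#49039) is io_uring's ENOMEM surfacing at ring setup; on kernels that charge ring allocations against RLIMIT_MEMLOCK, a low locked-memory limit is one common cause. A minimal diagnostic sketch (the helper name and usage are illustrative, not taken from the Ceph fix):

```python
import resource

def memlock_allows(ring_bytes, soft_limit=None):
    """Return True if the RLIMIT_MEMLOCK soft limit leaves room for
    ring_bytes of locked io_uring memory. On kernels that charge ring
    allocations against this limit, too small a value surfaces as
    ENOMEM ("Cannot allocate memory") when the ring is created."""
    if soft_limit is None:
        soft_limit, _ = resource.getrlimit(resource.RLIMIT_MEMLOCK)
    if soft_limit == resource.RLIM_INFINITY:
        return True
    return soft_limit >= ring_bytes
```

Raising the limit (e.g. `LimitMEMLOCK=infinity` in the OSD's systemd unit) is a typical workaround on affected kernels.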
03/08/2021
- 08:44 PM Bug #44878 (Resolved): mimic: incorrect SSD bluestore compression/allocation defaults
- 05:14 PM Backport #49479: pacific: Bluefs improperly handles huge (>4GB) writes which causes data corruption
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39688
m...
03/07/2021
- 06:35 PM Backport #49385 (In Progress): nautilus: BlueFS reads might improperly rebuild internal buffer un...
- 09:46 AM Backport #49384 (In Progress): pacific: BlueFS reads might improperly rebuild internal buffer und...
- 09:24 AM Backport #49039: octopus: Cannot allocate memory appears when using io_uring osd
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/39899
ceph-backport.sh versi...
- 09:23 AM Backport #49038: pacific: Cannot allocate memory appears when using io_uring osd
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/39898
ceph-backport.sh versi...
03/06/2021
- 10:33 AM Backport #49386: octopus: BlueFS reads might improperly rebuild internal buffer under a shared lock
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/39884
ceph-backport.sh versi...
- 10:29 AM Backport #49385: nautilus: BlueFS reads might improperly rebuild internal buffer under a shared ...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/39883
ceph-backport.sh versi...
- 10:02 AM Backport #49384: pacific: BlueFS reads might improperly rebuild internal buffer under a shared lock
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/39881
ceph-backport.sh versi...
- 07:23 AM Backport #49479 (Resolved): pacific: Bluefs improperly handles huge (>4GB) writes which causes da...
03/05/2021
- 05:13 PM Backport #49479: pacific: Bluefs improperly handles huge (>4GB) writes which causes data corruption
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/39688
merged
03/04/2021
- 09:42 PM Bug #49285: OSD Crash: Compaction error: Corruption: block checksum mismatch
- Hello Igor, sorry for the long wait.
We were able to get a duplicate crash on the same OSD. The OSD log is attach...
- 07:48 PM Bug #48729 (Resolved): Bluestore memory leak on scrub operations
- 05:55 PM Bug #48729: Bluestore memory leak on scrub operations
- https://github.com/ceph/ceph/pull/39720 merged
03/03/2021
- 12:11 PM Bug #46027 (Resolved): bufferlist c_str() sometimes clears assignment to mempool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:10 PM Bug #48214 (Resolved): osd: fix bluestore bitmap allocator
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:32 AM Backport #48193 (Resolved): nautilus: bufferlist c_str() sometimes clears assignment to mempool
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39651
m...
- 11:31 AM Backport #49480: nautilus: Bluefs improperly handles huge (>4GB) writes which causes data corruption
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39698
m...
- 11:31 AM Backport #48282 (Resolved): nautilus: osd: fix bluestore bitmap allocator
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39708
m...
03/01/2021
- 05:54 PM Bug #48827: Ceph Bluestore OSDs fail to start on WAL corruption
- Can we link the tickets with a "caused by" to https://tracker.ceph.com/issues/45613 ?
- 05:16 PM Backport #49480 (Resolved): nautilus: Bluefs improperly handles huge (>4GB) writes which causes d...
- 05:11 PM Backport #49480: nautilus: Bluefs improperly handles huge (>4GB) writes which causes data corruption
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/39698
merged
- 05:15 PM Backport #48193: nautilus: bufferlist c_str() sometimes clears assignment to mempool
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/39651
merged
- 04:30 PM Backport #48282: nautilus: osd: fix bluestore bitmap allocator
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/39708
merged
- 09:15 AM Bug #42928: ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- Related patch in ceph-volume: https://github.com/ceph/ceph/pull/39580
02/26/2021
- 03:41 PM Bug #48827: Ceph Bluestore OSDs fail to start on WAL corruption
- Thanks for the reply @Igor.
- 03:06 PM Bug #48827: Ceph Bluestore OSDs fail to start on WAL corruption
- Hi Wout,
I think https://tracker.ceph.com/issues/45613 is relevant indeed.
I'd suggest setting bluefs_preextend_wal_fi...
- 10:18 AM Bug #48827: Ceph Bluestore OSDs fail to start on WAL corruption
- Hi @Igor,
It looks like we're experiencing this at the moment. We've moved some pools from hdd only osds to nvme o...
- 02:42 PM Bug #48729 (Pending Backport): Bluestore memory leak on scrub operations
02/25/2021
- 10:15 PM Bug #49394: another terminate called after throwing an instance of 'std::bad_alloc'
- Per #49387 (and an email from Casey), this could be an issue with the tcmalloc version.
- 08:52 PM Backport #48282 (In Progress): nautilus: osd: fix bluestore bitmap allocator
- 05:16 PM Backport #49100: pacific: crash in BlueStore::Onode::put()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39228
m...
- 05:16 PM Backport #49097: pacific: FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache(Blu...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39228
m...
- 04:55 PM Backport #48193 (In Progress): nautilus: bufferlist c_str() sometimes clears assignment to mempool
- 04:24 PM Bug #49285: OSD Crash: Compaction error: Corruption: block checksum mismatch
- Chris K wrote:
> Yes, both nodes are using swap. Only an 8g file on the / filesystem though. It's backed by a raid ...
- 04:19 PM Backport #49481 (In Progress): octopus: Bluefs improperly handles huge (>4GB) writes which causes...
- https://github.com/ceph/ceph/pull/39701
- 06:25 AM Backport #49481 (Resolved): octopus: Bluefs improperly handles huge (>4GB) writes which causes da...
- https://github.com/ceph/ceph/pull/39701
- 02:58 PM Backport #49479 (In Progress): pacific: Bluefs improperly handles huge (>4GB) writes which causes...
- https://github.com/ceph/ceph/pull/39688
- 06:25 AM Backport #49479 (Resolved): pacific: Bluefs improperly handles huge (>4GB) writes which causes da...
- https://github.com/ceph/ceph/pull/39688
- 02:57 PM Backport #49480 (In Progress): nautilus: Bluefs improperly handles huge (>4GB) writes which cause...
- https://github.com/ceph/ceph/pull/39698
- 06:25 AM Backport #49480 (Resolved): nautilus: Bluefs improperly handles huge (>4GB) writes which causes d...
- https://github.com/ceph/ceph/pull/39698
- 01:24 PM Backport #48950: pacific: ObjectStore/StoreTest hangs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38989
m...
- 06:25 AM Backport #49478 (Rejected): luminous: Bluefs improperly handles huge (>4GB) writes which causes d...
- 06:25 AM Backport #49477 (Rejected): mimic: Bluefs improperly handles huge (>4GB) writes which causes data...
- 06:22 AM Bug #49168 (Pending Backport): Bluefs improperly handles huge (>4GB) writes which causes data cor...
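A plausible way the ">4GB" corruption tracked above (#49168 and its backports) arises, assuming the root cause is a 64-bit byte count passing through a 32-bit field, which is the usual shape of such bugs (the sketch is illustrative, not lifted from the actual BlueFS patch):

```python
def truncate_u32(length):
    """Model a 64-bit byte count being stored into an unsigned 32-bit
    field: everything above 4 GiB silently wraps away."""
    return length & 0xFFFFFFFF

five_gib = 5 * 1024 ** 3            # 5368709120 bytes requested
written = truncate_u32(five_gib)    # 1073741824 bytes actually counted
shortfall = five_gib - written      # 4 GiB of the write unaccounted for
```

The write "succeeds" from the caller's perspective while most of the payload is never flushed, which matches the data-corruption symptom rather than an outright error.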
02/24/2021
- 08:59 PM Bug #49285: OSD Crash: Compaction error: Corruption: block checksum mismatch
- Yes, both nodes are using swap. Only an 8g file on the / filesystem though. It's backed by a raid 10 array provided ...
- 05:48 PM Bug #49285: OSD Crash: Compaction error: Corruption: block checksum mismatch
- Wondering if you have swap enabled for your OSD nodes? And may be excessive RSS memory (much above configured osd-mem...
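The "block checksum mismatch" in the #49285 entries above means RocksDB recomputed a block's checksum on read and it did not match the value stored in the SST file, pointing at corruption somewhere in the I/O path (media, RAM, or swap pressure as queried above). A toy version of that check, using plain crc32 rather than RocksDB's masked crc32c:

```python
import zlib

def verify_block(payload, stored_crc):
    """Recompute the checksum over a block and compare it with the
    value recorded when the block was written."""
    return zlib.crc32(payload) == stored_crc

block = b"sst-block-payload"
stored = zlib.crc32(block)           # checksum recorded at write time
verify_block(block, stored)          # clean read passes
verify_block(block[:-1] + b"\x00", stored)  # a single flipped byte fails
```

Any single-byte difference in a same-length block is guaranteed to change the CRC, which is why compaction reliably trips over such blocks.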
02/19/2021
- 09:14 PM Bug #49394 (Resolved): another terminate called after throwing an instance of 'std::bad_alloc'
- ...
- 03:27 PM Bug #49383: BlueFS reads might improperly rebuild internal buffer under a shared lock
- Presumably caused by: https://github.com/ceph/ceph/commit/054355934a59bf4c08aa994fbab97a0f96cab31c
- 03:24 PM Bug #49383 (Resolved): BlueFS reads might improperly rebuild internal buffer under a shared lock
- Both read and read_random methods in BlueFS call bufferlist::c_str() method against shared buffer under a read lock.
...
- 03:25 PM Backport #49386 (Resolved): octopus: BlueFS reads might improperly rebuild internal buffer under ...
- https://github.com/ceph/ceph/pull/39884
- 03:25 PM Backport #49385 (Resolved): nautilus: BlueFS reads might improperly rebuild internal buffer under...
- https://github.com/ceph/ceph/pull/39883
- 03:25 PM Backport #49384 (Resolved): pacific: BlueFS reads might improperly rebuild internal buffer under ...
- https://github.com/ceph/ceph/pull/39881
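The #49383 description above boils down to: BlueFS's read and read_random paths could rebuild a shared internal buffer (via bufferlist::c_str()) while holding only a shared (reader) lock, so concurrent readers raced on the mutation. A toy model of the safe discipline, with hypothetical names (take an exclusive lock for any cache rebuild):

```python
import threading

class PrefetchReader:
    """Toy file reader with a shared prefetch cache. Rebuilding the
    cache mutates shared state, so it must happen under an exclusive
    lock -- the bug did the equivalent under a shared lock."""

    def __init__(self, data, chunk=4):
        self._data = data
        self._chunk = chunk
        self._cache_off = -1           # offset of the cached chunk (-1: empty)
        self._cache = b""
        self._lock = threading.Lock()  # exclusive lock guarding the cache

    def read(self, off, length):
        out = b""
        while length > 0:
            base = (off // self._chunk) * self._chunk
            with self._lock:           # exclusive: we may rebuild the cache
                if self._cache_off != base:
                    self._cache_off = base
                    self._cache = self._data[base:base + self._chunk]
                got = self._cache[off - base:off - base + length]
            if not got:                # past end of data
                break
            out += got
            off += len(got)
            length -= len(got)
        return out
```

The real fix is in the linked backport PRs; this only sketches why a mutating "read" cannot safely share the reader side of a read-write lock.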
02/17/2021
- 10:29 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- And prior log I dissected showed 4916555776 bytes allocated shortly before the crash.
- 10:23 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- Neha Ojha wrote:
> https://pulpito.ceph.com/ksirivad-2021-02-16_08:28:04-rados:mgr-wip-fix-test-turn-off-module-dist...
- 08:57 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- https://pulpito.ceph.com/ksirivad-2021-02-16_08:28:04-rados:mgr-wip-fix-test-turn-off-module-distro-basic-smithi/
02/16/2021
- 07:14 PM Backport #49099: octopus: crash in BlueStore::Onode::put()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39230
m...
- 07:14 PM Backport #49098: octopus: FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache(Blu...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39230
m...
02/15/2021
- 06:19 PM Bug #49285: OSD Crash: Compaction error: Corruption: block checksum mismatch
- Hello Igor,
These OSDs can be restarted and it seems they resync and step back in line without trouble.
I'll ...
- 06:15 PM Bug #49285: OSD Crash: Compaction error: Corruption: block checksum mismatch
- Igor Fedotov wrote:
> Are these failures volatile? I mean is affected OSD able to startup successfully after a while...
- 06:08 PM Bug #49285: OSD Crash: Compaction error: Corruption: block checksum mismatch
- Are these failures volatile? I mean, is the affected OSD able to start up successfully after a while?
- 06:05 PM Bug #48781 (Resolved): crash in BlueStore::Onode::put()
- 06:05 PM Backport #49099 (Resolved): octopus: crash in BlueStore::Onode::put()
- 04:47 PM Backport #49099: octopus: crash in BlueStore::Onode::put()
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/39230
merged
- 06:04 PM Bug #48966 (Resolved): FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache(BlueSt...
- 06:04 PM Backport #49098 (Resolved): octopus: FAILED ceph_assert(o->pinned) in BlueStore::Collection::spli...
- 04:48 PM Backport #49098: octopus: FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache(Blu...
- https://github.com/ceph/ceph/pull/39230 merged
02/14/2021
- 04:11 PM Bug #45519: OSD asserts during block allocation for BlueFS
- I faced the same issue in 14.2.14
My cluster was recovering the degraded pgs and one of my OSDs' db got full! After ...
02/12/2021
- 07:59 PM Bug #49285 (Closed): OSD Crash: Compaction error: Corruption: block checksum mismatch
- We're encountering periodic OSD crashes during test runs on some new hardware.
For this report I'm using osd 101 a...
- 04:58 PM Backport #49100 (Resolved): pacific: crash in BlueStore::Onode::put()
- 04:17 PM Backport #49100: pacific: crash in BlueStore::Onode::put()
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/39228
merged
- 04:57 PM Backport #49097 (Resolved): pacific: FAILED ceph_assert(o->pinned) in BlueStore::Collection::spli...
- https://github.com/ceph/ceph/pull/39228
- 04:17 PM Backport #49097: pacific: FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache(Blu...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/39228
merged
- 01:32 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- Yesterday I observed similar issue with the same error code for my vstart cluster. The root cause (in my case) was th...
02/11/2021
- 11:32 PM Bug #48849: BlueStore.cc: 11380: FAILED ceph_assert(r == 0)
- Chris K wrote:
> I think I have encountered this same issue on 15.2.5 running ubuntu 18.04.5. I reproduced the prob...
- 09:32 PM Bug #48849: BlueStore.cc: 11380: FAILED ceph_assert(r == 0)
- I think I have encountered this same issue on 15.2.5 running ubuntu 18.04.5. I reproduced the problem with debug_roc...
- 07:02 PM Bug #49256 (Can't reproduce): src/os/bluestore/BlueStore.cc: FAILED ceph_assert(!c)
- ...
- 10:04 AM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- Neha Ojha wrote:
> could this be related to the recent out of space issues in the lab?
>
> [...]
It's impossible...
02/10/2021
- 07:24 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- could this be related to the recent out of space issues in the lab?...
02/09/2021
02/08/2021
- 10:18 PM Backport #48478: octopus: bluefs _allocate failed to allocate bdev 1 and 2, cause ceph_assert(r == 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38474
m...
- 04:19 PM Backport #48478 (Resolved): octopus: bluefs _allocate failed to allocate bdev 1 and 2, cause ceph_...
- 04:17 PM Backport #48478: octopus: bluefs _allocate failed to allocate bdev 1 and 2, cause ceph_assert(r == 0)
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/38474
merged
- 04:19 PM Bug #47883 (Resolved): bluefs _allocate failed to allocate bdev 1 and 2, cause ceph_assert(r == 0)
- 04:07 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- So using teuthology-2021-01-07_07:01:02-rados-master-distro-basic-smithi/5762380.
The crashes are apparently at
... - 01:25 PM Bug #49110 (Triaged): BlueFS.cc: 1542: FAILED assert(r == 0)
- Once a fresher Luminous build (most likely built on your own) is obtained, you might try the recovery procedure provi...
- 01:20 PM Bug #49110: BlueFS.cc: 1542: FAILED assert(r == 0)
- Given the Ceph version provided and the huge BlueFS log file size:
-3> 2021-02-06 09:39:29.927409 7ff72040dec0 10 bluefs _re...