Activity
From 08/04/2021 to 09/02/2021
09/02/2021
- 10:02 AM Backport #52493 (In Progress): octopus: bluefs mount failed to replay log: (5) Input/output error
- https://github.com/ceph/ceph/pull/43024
- 10:00 AM Backport #52493 (Resolved): octopus: bluefs mount failed to replay log: (5) Input/output error
- https://github.com/ceph/ceph/pull/43024
- 10:01 AM Backport #52492 (In Progress): pacific: bluefs mount failed to replay log: (5) Input/output error
- https://github.com/ceph/ceph/pull/43023
- 10:00 AM Backport #52492 (Resolved): pacific: bluefs mount failed to replay log: (5) Input/output error
- https://github.com/ceph/ceph/pull/43023
- 09:55 AM Bug #52079 (Pending Backport): bluefs mount failed to replay log: (5) Input/output error
- 09:40 AM Bug #52079 (Resolved): bluefs mount failed to replay log: (5) Input/output error
- 05:21 AM Bug #52399: src/os/bluestore/HybridAllocator.cc: FAILED ceph_assert(false)
- https://pulpito.ceph.com/yuriw-2021-08-27_21:20:08-rados-wip-yuri2-testing-2021-08-27-1207-distro-basic-smithi/6363594/
09/01/2021
- 08:03 PM Backport #52433 (Resolved): pacific: BlueFS might end-up with huge WAL files when upgrading OMAPs
- 06:59 PM Backport #52433: pacific: BlueFS might end-up with huge WAL files when upgrading OMAPs
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42956
merged
- 10:09 AM Backport #52476 (In Progress): octopus: BlueFS superblock might contain incomplete list of physic...
- https://github.com/ceph/ceph/pull/43008
- 10:09 AM Backport #52477 (In Progress): pacific: BlueFS superblock might contain incomplete list of physic...
- https://github.com/ceph/ceph/pull/43007
08/31/2021
- 11:50 PM Backport #52477 (Resolved): pacific: BlueFS superblock might contain incomplete list of physical ...
- https://github.com/ceph/ceph/pull/43007
- 11:50 PM Backport #52476 (Resolved): octopus: BlueFS superblock might contain incomplete list of physical ...
- https://github.com/ceph/ceph/pull/43008
- 11:47 PM Bug #52311 (Pending Backport): BlueFS superblock might contain incomplete list of physical extents
- 11:46 PM Bug #52311: BlueFS superblock might contain incomplete list of physical extents
- https://github.com/ceph/ceph/pull/42831 merged
- 07:33 PM Bug #51540 (Resolved): bluestore doesn't respect "bluestore_warn_on_spurious_read_errors" config ...
- 07:31 PM Bug #51540: bluestore doesn't respect "bluestore_warn_on_spurious_read_errors" config parameter
- https://github.com/ceph/ceph/pull/42897 merged
- 07:32 PM Backport #51664 (Resolved): pacific: bluestore doesn't respect "bluestore_warn_on_spurious_read_e...
- 06:44 PM Bug #52138: os/bluestore/BlueStore.cc: FAILED ceph_assert(lcl_extnt_map[offset] == length)
- Kefu Chai wrote:
> /a/kchai-2021-08-17_04:49:07-rados-wip-kefu-testing-2021-08-17-0902-distro-basic-smithi/6343511
...
- 02:32 PM Bug #52464: FAILED ceph_assert(current_shard->second->valid())
- Gabi, I am assigning it to you for now, since this looks related to NCB.
- 01:57 PM Bug #52464 (New): FAILED ceph_assert(current_shard->second->valid())
- I've got a cephadm cluster I use for testing, and this morning one of the OSDs crashed down in bluestore code:...
- 01:31 PM Bug #40434 (Fix Under Review): ceph-bluestore-tool:bluefs-bdev-migrate might result in broken OSD
- 08:35 AM Bug #51217: BlueFS::_flush_range assert(h->file->fnode.ino != 1)
- Fix proposal:
https://github.com/ceph/ceph/pull/42988
08/27/2021
- 11:15 PM Backport #52432 (In Progress): octopus: BlueFS might end-up with huge WAL files when upgrading OMAPs
- https://github.com/ceph/ceph/pull/42958
- 06:36 PM Backport #52432 (Resolved): octopus: BlueFS might end-up with huge WAL files when upgrading OMAPs
- https://github.com/ceph/ceph/pull/42958
- 08:47 PM Backport #52433 (In Progress): pacific: BlueFS might end-up with huge WAL files when upgrading OMAPs
- https://github.com/ceph/ceph/pull/42956
- 06:36 PM Backport #52433 (Resolved): pacific: BlueFS might end-up with huge WAL files when upgrading OMAPs
- https://github.com/ceph/ceph/pull/42956
- 06:30 PM Bug #49170 (Pending Backport): BlueFS might end-up with huge WAL files when upgrading OMAPs
- 06:29 PM Bug #52398 (Resolved): ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/...
- 03:55 PM Bug #52399: src/os/bluestore/HybridAllocator.cc: FAILED ceph_assert(false)
- /a/yuriw-2021-08-26_18:40:53-rados-wip-yuri7-testing-2021-08-26-0841-distro-basic-smithi/6360320/
/a/yuriw-2021-08-2...
08/26/2021
- 04:02 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Tomasz Kloczko wrote:
> Ha found that https://github.com/google/leveldb/blob/master/CMakeLists.txt#L76
> Looks like...
- 02:39 PM Bug #51217 (In Progress): BlueFS::_flush_range assert(h->file->fnode.ino != 1)
- 02:36 PM Bug #51960 (Duplicate): octopus: Assertion `new_prio == -1 || (new_prio >= fifo_min_prio && new_p...
- 07:48 AM Bug #51034: osd: failed to initialize OSD in Rook
- @Igor, Thank you! I'll do it later.
08/25/2021
- 05:25 PM Bug #51034: osd: failed to initialize OSD in Rook
- @satoru, wondering if you would be able to reproduce this for the upcoming 16.2.6.
Perhaps this has been fixed by ht...
- 05:03 PM Bug #51445 (Resolved): cannot enable osd resharding on pacific
- 05:03 PM Backport #52246 (Resolved): pacific: cannot enable osd resharding on pacific
- 12:40 PM Bug #52398 (Fix Under Review): ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRep...
- 11:46 AM Bug #52138 (Triaged): os/bluestore/BlueStore.cc: FAILED ceph_assert(lcl_extnt_map[offset] == length)
- Looks like a bug in that new NCB stuff. I managed to repro the issue and here is the relevant onode dump (see offset ...
08/24/2021
- 11:27 PM Bug #52398 (Triaged): ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/2...
- 10:56 PM Bug #52398 (Resolved): ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/...
- ...
- 11:12 PM Bug #52399 (Resolved): src/os/bluestore/HybridAllocator.cc: FAILED ceph_assert(false)
- ...
- 11:06 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- Please note the above is again an "OUT-OF-SPACE" error coming from the underlying disk/partition/volume. See:
2021-08-2...
- 06:38 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- /a/yuriw-2021-08-23_19:24:05-rados-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/6353851 - no logs
- 09:54 PM Backport #52246: pacific: cannot enable osd resharding on pacific
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42844
merged
- 06:29 PM Bug #50788: crash in BlueStore::Onode::put()
- ...
08/23/2021
- 08:04 PM Backport #52024: pacific: kv/RocksDBStore: enrich debug message
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42544
m...
- 03:00 PM Backport #52024 (Resolved): pacific: kv/RocksDBStore: enrich debug message
- 08:03 PM Backport #52244: pacific: Deferred writes are unexpectedly applied to large writes on spinners
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42773
m...
- 01:24 PM Backport #52244 (Resolved): pacific: Deferred writes are unexpectedly applied to large writes on ...
- 03:24 PM Bug #48827 (Duplicate): Ceph Bluestore OSDs fail to start on WAL corruption
- 03:10 PM Backport #51664 (In Progress): pacific: bluestore doesn't respect "bluestore_warn_on_spurious_rea...
- https://github.com/ceph/ceph/pull/42897
- 03:04 PM Bug #51762 (Pending Backport): Missed shared block repair doesn't fix related issues
- 03:01 PM Bug #52023 (Resolved): kv/RocksDBStore: enrich debug message
- 01:25 PM Bug #52089 (Resolved): Deferred writes are unexpectedly applied to large writes on spinners
08/20/2021
- 06:27 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- After fixing leveldb, it looks like everything compiles and links.
Feel free to close this ticket.
However IMO it would...
- 05:41 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Ha found that https://github.com/google/leveldb/blob/master/CMakeLists.txt#L76
Looks like leveldb cmake has hardcode...
- 05:38 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Just in case I'm using leveldb 1.23
- 05:37 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- OK I've updated rocksdb to latest version and generally speaking all compiles but there are some linking issues.
m...
- 01:32 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Kefu Chai wrote:
> the build failure related to rocksdb should be fixed by https://github.com/ceph/ceph/pull/42815
...
08/19/2021
- 03:57 PM Backport #52246 (In Progress): pacific: cannot enable osd resharding on pacific
- 09:54 AM Backport #52246: pacific: cannot enable osd resharding on pacific
- https://github.com/ceph/ceph/pull/42844
08/18/2021
- 11:25 AM Bug #52311 (Fix Under Review): BlueFS superblock might contain incomplete list of physical extents
- 11:17 AM Bug #52311 (Resolved): BlueFS superblock might contain incomplete list of physical extents
- BlueFS superblock might contain incomplete list of physical extents for
bluefs log. Hence we should always replay op...
- 11:11 AM Bug #52079: bluefs mount failed to replay log: (5) Input/output error
- Viktor Svecov wrote:
> You are right. After zeroing appropriate areas of OSD block devices OSD daemons started. Now ...
- 11:09 AM Bug #52079 (Fix Under Review): bluefs mount failed to replay log: (5) Input/output error
- 10:38 AM Bug #52079 (In Progress): bluefs mount failed to replay log: (5) Input/output error
- 03:42 AM Bug #52138: os/bluestore/BlueStore.cc: FAILED ceph_assert(lcl_extnt_map[offset] == length)
- /a/kchai-2021-08-17_04:49:07-rados-wip-kefu-testing-2021-08-17-0902-distro-basic-smithi/6343511
08/17/2021
- 10:57 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- the build failure related to rocksdb should be fixed by https://github.com/ceph/ceph/pull/42815
- 06:36 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Tomasz Kloczko wrote:
> And yet another one
>
> [...]
@Tomasz, the compiling failure related to snappy should ...
- 05:37 AM Bug #52079: bluefs mount failed to replay log: (5) Input/output error
- You are right. After zeroing appropriate areas of OSD block devices OSD daemons started. Now all PGs of Ceph Storage ...
08/16/2021
- 11:30 PM Backport #52024: pacific: kv/RocksDBStore: enrich debug message
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42544
merged
- 09:09 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Deepika Upadhyay wrote:
> Hey Tomasz,
> I am looking into the build failures you reported: opened a PR for this one...
- 11:53 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Hey Tomasz,
I am looking into the build failures you reported: opened a PR for this one: 42791
- 10:02 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Deepika Upadhyay wrote:
> Tomasz Kloczko wrote:
> > And after run "make -k" yet another this time not linking but c...
- 05:30 PM Backport #52244: pacific: Deferred writes are unexpectedly applied to large writes on spinners
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42773
merged
- 12:02 PM Bug #52079: bluefs mount failed to replay log: (5) Input/output error
- Igor Fedotov wrote:
> Viktor Svecov wrote:
> > Thank you for help. I have attached log files with 'debug_bluefs = 2...
- 05:00 AM Bug #52079: bluefs mount failed to replay log: (5) Input/output error
- Sorry, I didn't notice that the standard error output stopped before the end of the actual log on node h3. Now the log is comple...
- 12:04 AM Bug #51684 (Duplicate): OSD crashes after update to 16.2.4
08/15/2021
- 11:50 PM Bug #52079: bluefs mount failed to replay log: (5) Input/output error
- Viktor Svecov wrote:
> Thank you for help. I have attached log files with 'debug_bluefs = 20' from two nodes.
One...
- 11:31 PM Bug #46804 (Duplicate): nautilus: upgrade:nautilus-p2p/nautilus-p2p-stress-split: BlueFS::_flush_...
- Highly likely this is a duplicate of https://tracker.ceph.com/issues/50656 which was fixed in Nautilus v14.2.22.
C...
- 11:24 PM Bug #47243 (Duplicate): bluefs _allocate failed then assert
08/14/2021
- 05:47 AM Bug #52079: bluefs mount failed to replay log: (5) Input/output error
- Thank you for help. I have attached log files with 'debug_bluefs = 20' from two nodes.
08/13/2021
- 01:12 PM Bug #52079: bluefs mount failed to replay log: (5) Input/output error
- Could you please set debug-bluefs to 20, retry startup attempt and share the log?
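For reference, the setting requested above is normally raised in ceph.conf before retrying the OSD start; a minimal sketch (the [osd] section and the "memory level / log-file level" pair follow standard Ceph option syntax; level 20 is very verbose, so revert it after capturing the log):

```ini
[osd]
# maximum BlueFS verbosity: in-memory level / log-file level
debug bluefs = 20/20
```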
08/12/2021
- 07:55 PM Backport #52244 (In Progress): pacific: Deferred writes are unexpectedly applied to large writes ...
- 04:20 PM Backport #52244 (Resolved): pacific: Deferred writes are unexpectedly applied to large writes on ...
- https://github.com/ceph/ceph/pull/42773
- 06:35 PM Backport #52246 (Resolved): pacific: cannot enable osd resharding on pacific
- https://github.com/ceph/ceph/pull/42844
- 06:33 PM Bug #51445 (Pending Backport): cannot enable osd resharding on pacific
- 06:21 PM Bug #51445: cannot enable osd resharding on pacific
- https://github.com/ceph/ceph/pull/42345 merged
- 04:23 PM Bug #52182 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
- IO error
- 04:23 PM Bug #52184 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
- IO error
- 04:23 PM Bug #52216 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
- IO error
- 04:23 PM Bug #52219 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
- IO error
- 04:21 PM Bug #52206 (Won't Fix): crash: int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffe...
- IO error
- 04:20 PM Bug #52181 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
- IO error
- 04:20 PM Bug #52209 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
- IO error
- 04:20 PM Bug #52215 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
- IO error
- 04:16 PM Bug #52228 (Duplicate): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
- 04:15 PM Bug #52230 (Triaged): crash in pread64 during collection_list()
- 04:15 PM Bug #52089 (Pending Backport): Deferred writes are unexpectedly applied to large writes on spinners
08/11/2021
- 06:47 PM Bug #52234 (New): crash in Throttle::get()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ee9f576579ca7da4020a6407...
- 06:47 PM Bug #52230 (Triaged): crash in pread64 during collection_list()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=62de4b1a9a0b46a543879925...
- 06:47 PM Bug #52229 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87d52aeb63e897a72d70405a...
- 06:47 PM Bug #52228 (Duplicate): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=247261499ff94d0dda7a61ce...
- 06:47 PM Bug #52227 (Need More Info): crash: int BlueStore::_open_super_meta(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71c53404cd146064d1a3c436...
- 06:47 PM Bug #52224 (New): crash: virtual void KernelDevice::aio_submit(IOContext*): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a6929f77d614e0d59875026e...
- 06:47 PM Bug #52223 (New): crash: virtual void KernelDevice::aio_submit(IOContext*): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e52f4d1f0e282fe0930c2a9...
- 06:47 PM Bug #52222 (New): crash: virtual int RocksDBStore::get(const string&, const string&, ceph::buffer...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=10a0140f4842e42ff26dd280...
- 06:47 PM Bug #52219 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9532bbd5bf99075a28fa8532...
- 06:47 PM Bug #52216 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e5a4f7173a1fecbb308d89a...
- 06:47 PM Bug #52215 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d2eaa6bcd03c00321b61e5ba...
- 06:46 PM Bug #52209 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=053d9b6b4376b77024df7cf5...
- 06:46 PM Bug #52208 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=631597e39468c016bebd48f6...
- 06:46 PM Bug #52206 (Won't Fix): crash: int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=83a2949b9e1911950c76a60f...
- 06:46 PM Bug #52205 (Duplicate): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=801d5b960db942824dfbe281...
- 06:46 PM Bug #52204 (New): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6da965283a0452fca0578463...
- 06:46 PM Bug #52203 (Duplicate): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=56629a5f97eb83418851e3c9...
- 06:46 PM Bug #52202 (New): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0003fba7b807aa2066f4f75f...
- 06:46 PM Bug #52201 (New): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=213b0c143e583232235dc5ec...
- 06:46 PM Bug #52196 (New): crash: void BlueStore::_txc_apply_kv(BlueStore::TransContext*, bool): assert(r ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c0393b03500f68d1023272b6...
- 06:45 PM Bug #52188 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a1f0616fce8b0a346fea56f6...
- 06:45 PM Bug #52187 (New): crash: bool WholeMergeIteratorImpl::is_main_smaller(): assert(current_shard->se...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=55b5827b701b50ff89863c06...
- 06:45 PM Bug #52185 (New): crash: void BlueStore::_kv_sync_thread(): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0ef35ef76edbb32fb8a499c4...
- 06:45 PM Bug #51133: OSDs failing to start: rocksdb: submit_common error: Corruption: block checksum mismatch
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=109ab3ee85a3bc3337746eb0e...
- 06:45 PM Bug #52184 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7c2c3f345d367ab9252bf679...
- 06:45 PM Bug #52182 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8188bd963a77114f4b891cdb...
- 06:45 PM Bug #52181 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5d19e8cb5ed53742d981b8c4...
- 06:45 PM Bug #52175 (New): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::Col...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=feca149a2576edf3cf2b7bd2...
- 06:44 PM Bug #52157 (New): crash: pthread_cond_wait()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b97e710ffcb9d5b74919f5d8...
- 06:43 PM Bug #52146 (New): crash: int BlueStore::_do_gc(BlueStore::TransContext*, BlueStore::CollectionRef...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c75e6b9783df33a2941491cd...
- 06:43 PM Bug #52144 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f3a6edd025d032f02d7aeb55...
- 06:43 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=37a4a134a1881693ca8828021...
- 06:43 PM Bug #52139 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fff55d7102994e1ed007bcdb...
- 06:22 PM Bug #52138 (Resolved): os/bluestore/BlueStore.cc: FAILED ceph_assert(lcl_extnt_map[offset] == len...
- ...
- 03:45 PM Bug #51755 (Triaged): crash: rocksdb::IteratorWrapperBase<rocksdb::Slice>::Update()
- only 1 instance so far.
- 02:30 PM Bug #50739 (Resolved): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, Obje...
- Only occurring on old Nautilus releases
- 02:28 PM Bug #51897 (Triaged): crash: pthread_cond_wait() (from BlueStore::_do_read)
- 02:25 PM Bug #51753 (Won't Fix): crash: void BlueStore::_kv_sync_thread(): assert(r == 0)
- rocksdb returns error
- 02:24 PM Bug #51871 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
- EIO
- 02:23 PM Bug #51875 (Won't Fix): crash: void BlueStore::_txc_apply_kv(BlueStore::TransContext*, bool): ass...
- rocksdb returns error code
- 02:20 PM Bug #51883 (Won't Fix): crash: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)
- OOM
- 02:21 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Tomasz Kloczko wrote:
> And after run "make -k" yet another this time not linking but compile error.
>
> [...]
...
08/09/2021
- 08:09 PM Backport #51130: pacific: In poweroff conditions BlueFS can create corrupted files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42424
m...
- 08:05 PM Backport #51128: octopus: In poweroff conditions BlueFS can create corrupted files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42374
m...
- 08:04 PM Backport #51711: octopus: compact db after bulk omap naming upgrade
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42375
m...
- 08:04 PM Backport #50937: octopus: osd-bluefs-volume-ops.sh fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42377
m...
- 06:05 PM Bug #50788: crash in BlueStore::Onode::put()
- ...
- 05:15 PM Bug #52089 (Fix Under Review): Deferred writes are unexpectedly applied to large writes on spinners
- 05:05 PM Bug #52089 (In Progress): Deferred writes are unexpectedly applied to large writes on spinners
08/08/2021
- 05:47 AM Bug #52095 (Need More Info): OSD container can't start: _read_bdev_label unable to decode label a...
- ceph version
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
OSD container can't...
08/06/2021
- 06:31 PM Bug #52089 (Resolved): Deferred writes are unexpectedly applied to large writes on spinners
- Appending 4MB to an object causes deferred write for every 64K blob.
- 05:19 AM Bug #52079 (Resolved): bluefs mount failed to replay log: (5) Input/output error
- In the test lab, after a simultaneous power-off of all OSD nodes (3), two of them cannot start.
h2 node:...
08/05/2021
- 02:41 PM Bug #51899 (Need More Info): crash: virtual int RocksDBStore::get(const string&, const string&, c...
- 02:41 PM Bug #51871: crash: void KernelDevice::_aio_thread(): abort
- error code reported: -121 EREMOTEIO Remote I/O error
- 02:35 PM Bug #51891 (Duplicate): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ...
- 02:34 PM Bug #51876 (Duplicate): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ...
- 02:31 PM Bug #51874 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:30 PM Bug #51893 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:29 PM Bug #51894 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:29 PM Bug #51898 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:25 PM Bug #51895 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:17 PM Bug #51900 (Need More Info): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, Blu...
- We need more logs. There is no contact information in the telemetry report.