Activity
From 07/13/2021 to 08/11/2021
08/11/2021
- 06:47 PM Bug #52234 (New): crash in Throttle::get()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ee9f576579ca7da4020a6407...
- 06:47 PM Bug #52230 (Triaged): crash in pread64 during collection_list()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=62de4b1a9a0b46a543879925...
- 06:47 PM Bug #52229 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87d52aeb63e897a72d70405a...
- 06:47 PM Bug #52228 (Duplicate): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=247261499ff94d0dda7a61ce...
- 06:47 PM Bug #52227 (Need More Info): crash: int BlueStore::_open_super_meta(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71c53404cd146064d1a3c436...
- 06:47 PM Bug #52224 (New): crash: virtual void KernelDevice::aio_submit(IOContext*): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a6929f77d614e0d59875026e...
- 06:47 PM Bug #52223 (New): crash: virtual void KernelDevice::aio_submit(IOContext*): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e52f4d1f0e282fe0930c2a9...
- 06:47 PM Bug #52222 (New): crash: virtual int RocksDBStore::get(const string&, const string&, ceph::buffer...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=10a0140f4842e42ff26dd280...
- 06:47 PM Bug #52219 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9532bbd5bf99075a28fa8532...
- 06:47 PM Bug #52216 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e5a4f7173a1fecbb308d89a...
- 06:47 PM Bug #52215 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d2eaa6bcd03c00321b61e5ba...
- 06:46 PM Bug #52209 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=053d9b6b4376b77024df7cf5...
- 06:46 PM Bug #52208 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=631597e39468c016bebd48f6...
- 06:46 PM Bug #52206 (Won't Fix): crash: int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=83a2949b9e1911950c76a60f...
- 06:46 PM Bug #52205 (Duplicate): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=801d5b960db942824dfbe281...
- 06:46 PM Bug #52204 (New): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6da965283a0452fca0578463...
- 06:46 PM Bug #52203 (Duplicate): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=56629a5f97eb83418851e3c9...
- 06:46 PM Bug #52202 (New): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0003fba7b807aa2066f4f75f...
- 06:46 PM Bug #52201 (New): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=213b0c143e583232235dc5ec...
- 06:46 PM Bug #52196 (New): crash: void BlueStore::_txc_apply_kv(BlueStore::TransContext*, bool): assert(r ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c0393b03500f68d1023272b6...
- 06:45 PM Bug #52188 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a1f0616fce8b0a346fea56f6...
- 06:45 PM Bug #52187 (New): crash: bool WholeMergeIteratorImpl::is_main_smaller(): assert(current_shard->se...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=55b5827b701b50ff89863c06...
- 06:45 PM Bug #52185 (New): crash: void BlueStore::_kv_sync_thread(): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0ef35ef76edbb32fb8a499c4...
- 06:45 PM Bug #51133: OSDs failing to start: rocksdb: submit_common error: Corruption: block checksum mismatch
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=109ab3ee85a3bc3337746eb0e...
- 06:45 PM Bug #52184 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7c2c3f345d367ab9252bf679...
- 06:45 PM Bug #52182 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8188bd963a77114f4b891cdb...
- 06:45 PM Bug #52181 (Won't Fix): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5d19e8cb5ed53742d981b8c4...
- 06:45 PM Bug #52175 (New): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::Col...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=feca149a2576edf3cf2b7bd2...
- 06:44 PM Bug #52157 (New): crash: pthread_cond_wait()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b97e710ffcb9d5b74919f5d8...
- 06:43 PM Bug #52146 (New): crash: int BlueStore::_do_gc(BlueStore::TransContext*, BlueStore::CollectionRef...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c75e6b9783df33a2941491cd...
- 06:43 PM Bug #52144 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f3a6edd025d032f02d7aeb55...
- 06:43 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=37a4a134a1881693ca8828021...
- 06:43 PM Bug #52139 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fff55d7102994e1ed007bcdb...
- 06:22 PM Bug #52138 (Resolved): os/bluestore/BlueStore.cc: FAILED ceph_assert(lcl_extnt_map[offset] == len...
- ...
- 03:45 PM Bug #51755 (Triaged): crash: rocksdb::IteratorWrapperBase<rocksdb::Slice>::Update()
- only 1 instance so far.
- 02:30 PM Bug #50739 (Resolved): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, Obje...
- Only occurring on old Nautilus releases
- 02:28 PM Bug #51897 (Triaged): crash: pthread_cond_wait() (from BlueStore::_do_read)
- 02:25 PM Bug #51753 (Won't Fix): crash: void BlueStore::_kv_sync_thread(): assert(r == 0)
- rocksdb returns error
- 02:24 PM Bug #51871 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
- EIO
- 02:23 PM Bug #51875 (Won't Fix): crash: void BlueStore::_txc_apply_kv(BlueStore::TransContext*, bool): ass...
- rocksdb returns error code
- 02:20 PM Bug #51883 (Won't Fix): crash: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)
- OOM
- 02:21 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Tomasz Kloczko wrote:
> And after run "make -k" yet another this time not linking but compile error.
>
> [...]
...
08/09/2021
- 08:09 PM Backport #51130: pacific: In poweroff conditions BlueFS can create corrupted files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42424
m...
- 08:05 PM Backport #51128: octopus: In poweroff conditions BlueFS can create corrupted files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42374
m...
- 08:04 PM Backport #51711: octopus: compact db after bulk omap naming upgrade
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42375
m...
- 08:04 PM Backport #50937: octopus: osd-bluefs-volume-ops.sh fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42377
m...
- 06:05 PM Bug #50788: crash in BlueStore::Onode::put()
- ...
- 05:15 PM Bug #52089 (Fix Under Review): Deferred writes are unexpectedly applied to large writes on spinners
- 05:05 PM Bug #52089 (In Progress): Deferred writes are unexpectedly applied to large writes on spinners
08/08/2021
- 05:47 AM Bug #52095 (Need More Info): OSD container can't start: _read_bdev_label unable to decode label a...
- ceph version
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
OSD container can't...
08/06/2021
- 06:31 PM Bug #52089 (Resolved): Deferred writes are unexpectedly applied to large writes on spinners
- Appending 4MB to an object causes deferred write for every 64K blob.
- 05:19 AM Bug #52079 (Resolved): bluefs mount failed to replay log: (5) Input/output error
- In the testlab after simultaneous power off of all OSD nodes (3) two of them can not start.
h2 node:...
08/05/2021
- 02:41 PM Bug #51899 (Need More Info): crash: virtual int RocksDBStore::get(const string&, const string&, c...
- 02:41 PM Bug #51871: crash: void KernelDevice::_aio_thread(): abort
- error code reported: -121 EREMOTEIO Remote I/O error
- 02:35 PM Bug #51891 (Duplicate): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ...
- 02:34 PM Bug #51876 (Duplicate): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ...
- 02:31 PM Bug #51874 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:30 PM Bug #51893 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:29 PM Bug #51894 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:29 PM Bug #51898 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:25 PM Bug #51895 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
- 02:17 PM Bug #51900 (Need More Info): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, Blu...
- We need more logs. There is no contact information in the telemetry report.
08/03/2021
- 06:14 PM Backport #52024 (In Progress): pacific: kv/RocksDBStore: enrich debug message
- 12:15 PM Backport #52024: pacific: kv/RocksDBStore: enrich debug message
- It's the backport PR.
https://github.com/ceph/ceph/pull/42544
- 12:05 PM Backport #52024 (Resolved): pacific: kv/RocksDBStore: enrich debug message
- https://github.com/ceph/ceph/pull/42544
- 12:05 PM Bug #52023: kv/RocksDBStore: enrich debug message
- Satoru Takeuchi wrote:
> It's a tracker issue for PR 42508 which has already been merged. I'd like to backport this ...
- 12:05 PM Bug #52023 (Pending Backport): kv/RocksDBStore: enrich debug message
- 12:02 PM Bug #52023: kv/RocksDBStore: enrich debug message
- It's a tracker issue for PR 42508 which has already been merged. I'd like to backport this PR to pacific. Could someo...
- 11:59 AM Bug #52023 (Resolved): kv/RocksDBStore: enrich debug message
- It's better to print why ListColumnFamilies() failed.
07/30/2021
- 06:02 PM Bug #47446: No snap trim progress after removing large snapshots
- Moving the crash data into a note as 'Crash signature (v1)' should hold the value of 'stack_sig' key:...
07/29/2021
- 05:08 PM Bug #51960 (Duplicate): octopus: Assertion `new_prio == -1 || (new_prio >= fifo_min_prio && new_p...
- ...
07/28/2021
- 10:47 PM Backport #51128 (Resolved): octopus: In poweroff conditions BlueFS can create corrupted files
- 06:40 PM Backport #51128: octopus: In poweroff conditions BlueFS can create corrupted files
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42374
merged
- 10:07 PM Feature #51709 (Resolved): compact db after bulk omap naming upgrade
- 10:06 PM Backport #51710 (Resolved): pacific: compact db after bulk omap naming upgrade
- 06:34 PM Backport #51710: pacific: compact db after bulk omap naming upgrade
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42426
merged
- 10:06 PM Backport #51711 (Resolved): octopus: compact db after bulk omap naming upgrade
- 06:38 PM Backport #51711: octopus: compact db after bulk omap naming upgrade
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42375
merged
- 06:29 PM Backport #51130 (Resolved): pacific: In poweroff conditions BlueFS can create corrupted files
- 06:28 PM Backport #51130: pacific: In poweroff conditions BlueFS can create corrupted files
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42424
merged
- 01:48 PM Bug #51900 (Need More Info): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, Blu...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b2d663137e9d1f8cd1c272d3...
- 01:48 PM Bug #51899 (Need More Info): crash: virtual int RocksDBStore::get(const string&, const string&, c...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b4ab1671ea57bafee4d91c49...
- 01:48 PM Bug #51898 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4794488f190a327442aabe7c...
- 01:48 PM Bug #51897 (Triaged): crash: pthread_cond_wait() (from BlueStore::_do_read)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87de91de2203b7fb57f67780...
- 01:48 PM Bug #51895 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=edd5f3d0f8881365140ed777...
- 01:48 PM Bug #51894 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=44ce2b3dd5cd70f483941d6f...
- 01:48 PM Bug #51893 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3482d4226deeaa1f78a92a43...
- 01:48 PM Bug #51891 (Duplicate): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fa41e17aadeee4242ee056cc...
- 01:47 PM Bug #51883 (Won't Fix): crash: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b944fc8b39981011deeb304d...
- 01:47 PM Bug #51876 (Duplicate): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=445c7d2ce1dce869a9c08525...
- 01:47 PM Bug #51875 (Won't Fix): crash: void BlueStore::_txc_apply_kv(BlueStore::TransContext*, bool): ass...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f2878a1ba6d77c721abd54da...
- 01:47 PM Bug #51874 (Duplicate): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStor...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9dc2bf03622a961ad8f3dd5b...
- 03:51 AM Bug #51871 (Won't Fix): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ab8d41f3cb5ce3878723baca...
07/27/2021
- 03:45 PM Bug #50891 (Resolved): osd-bluefs-volume-ops.sh fails
- 03:44 PM Backport #50937 (Resolved): octopus: osd-bluefs-volume-ops.sh fails
- 03:19 PM Backport #50937: octopus: osd-bluefs-volume-ops.sh fails
- https://github.com/ceph/ceph/pull/42377
merged
- 04:47 AM Backport #51650: octopus: Bluestore repair might erroneously remove SharedBlob entries.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42373
m...
- 04:40 AM Backport #51649: pacific: Bluestore repair might erroneously remove SharedBlob entries.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42423
m...
- 04:39 AM Backport #50935: pacific: osd-bluefs-volume-ops.sh fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42219
m...
07/23/2021
- 10:15 PM Backport #51649 (Resolved): pacific: Bluestore repair might erroneously remove SharedBlob entries.
- 10:01 PM Backport #51649: pacific: Bluestore repair might erroneously remove SharedBlob entries.
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42423
merged
- 10:15 PM Backport #51650 (Resolved): octopus: Bluestore repair might erroneously remove SharedBlob entries.
- 10:02 PM Backport #51650: octopus: Bluestore repair might erroneously remove SharedBlob entries.
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42373
merged
- 07:13 PM Backport #50935 (Resolved): pacific: osd-bluefs-volume-ops.sh fails
- 06:30 PM Backport #50935: pacific: osd-bluefs-volume-ops.sh fails
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/42219
merged
07/21/2021
- 11:08 PM Bug #51762 (Fix Under Review): Missed shared block repair doesn't fix related issues
- 12:41 PM Bug #51762 (Pending Backport): Missed shared block repair doesn't fix related issues
- 12:23 PM Bug #51762 (Resolved): Missed shared block repair doesn't fix related issues
- Missed shared block is usually coupled with improper statfs and extent leakage errors.
One should adjust relevant da...
- 12:45 PM Backport #51764 (Resolved): octopus: Missed shared block repair doesn't fix related issues
- https://github.com/ceph/ceph/pull/43887
- 12:45 PM Backport #51763 (Resolved): pacific: Missed shared block repair doesn't fix related issues
- https://github.com/ceph/ceph/pull/43731
- 03:56 AM Bug #51755 (Triaged): crash: rocksdb::IteratorWrapperBase<rocksdb::Slice>::Update()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a71a001d0de1045775522aad...
07/20/2021
- 09:11 PM Bug #51753 (Won't Fix): crash: void BlueStore::_kv_sync_thread(): assert(r == 0)
New crash events were reported via Telemetry with newer versions (['15.2.1', '15.2.5']) than encountered in Tracker...
- 05:07 PM Backport #51710 (In Progress): pacific: compact db after bulk omap naming upgrade
- https://github.com/ceph/ceph/pull/42426
- 04:59 PM Backport #51130 (In Progress): pacific: In poweroff conditions BlueFS can create corrupted files
- https://github.com/ceph/ceph/pull/42424
- 04:54 PM Backport #51649 (In Progress): pacific: Bluestore repair might erroneously remove SharedBlob entr...
- https://github.com/ceph/ceph/pull/42423
07/16/2021
- 01:06 PM Backport #51711 (In Progress): octopus: compact db after bulk omap naming upgrade
- https://github.com/ceph/ceph/pull/42375
- 12:20 PM Backport #51711 (Resolved): octopus: compact db after bulk omap naming upgrade
- https://github.com/ceph/ceph/pull/42375
- 12:42 PM Backport #51128 (In Progress): octopus: In poweroff conditions BlueFS can create corrupted files
- https://github.com/ceph/ceph/pull/42374
- 12:30 PM Backport #51650 (In Progress): octopus: Bluestore repair might erroneously remove SharedBlob entr...
- https://github.com/ceph/ceph/pull/42373
- 12:20 PM Backport #51710 (Resolved): pacific: compact db after bulk omap naming upgrade
- 12:18 PM Feature #51709 (Resolved): compact db after bulk omap naming upgrade
- Omap naming scheme upgrade introduced recently might perform bulk data
removal and hence leave DB in a "degraded" st...
- 10:20 AM Bug #51217: BlueFS::_flush_range assert(h->file->fnode.ino != 1)
- An evolution of Shu Yu's fix is available at https://github.com/ceph/ceph/pull/42370
07/15/2021
- 08:35 PM Bug #51684: OSD crashes after update to 16.2.4
- Then you can go ahead and close this issue; we'll proceed with the upgrade in the week to come. Thanks.
- 05:36 PM Bug #51684: OSD crashes after update to 16.2.4
- Jerome,
I think you've missed one more fix for unexpected ENOSPC in Hybrid allocator (https://tracker.ceph.com/iss...
- 05:12 PM Bug #51684: OSD crashes after update to 16.2.4
- I did remove the first block: line and ran it through jq -c to compact it prior compression but it's still too big fo...
- 04:31 PM Bug #51684: OSD crashes after update to 16.2.4
- Jérôme Poulin wrote:
> Yes, it is correct, switching to bitmap allocator allows restarting the OSD and recovery.
> ...
- 04:17 PM Bug #51684: OSD crashes after update to 16.2.4
- Yes, it is correct, switching to bitmap allocator allows restarting the OSD and recovery.
Here's the full log for ...
- 01:18 PM Bug #51684: OSD crashes after update to 16.2.4
- Would you please share the OSD log with the crash?
Am I getting that correct that you managed to work around the ...
- 01:06 PM Bug #51684: OSD crashes after update to 16.2.4
- Reference to the moment we've upgraded: https://tracker.ceph.com/issues/47883#note-5
- 12:59 PM Bug #51684 (Duplicate): OSD crashes after update to 16.2.4
- After we've updated to 16.2.4 from version 14.2.0, all OSD have crashed with this backtrace after about an hour and w...
- 12:23 PM Bug #51682 (Fix Under Review): bluestore repair might cause invalid write
- 11:02 AM Bug #51682 (Resolved): bluestore repair might cause invalid write
-26> 2021-07-14T13:29:44.463+0200 7f45d8ad8100 10 bluestore(/var/lib/ceph/osd/ceph-33) _fsck_on_open fix misrefe...
07/14/2021
- 10:03 PM Bug #51445 (Fix Under Review): cannot enable osd resharding on pacific
- 03:57 PM Bug #51676 (Duplicate): rocksdb: prepare_for_reshard failure parsing column options: block_cache=...
- 03:17 PM Bug #51676 (Duplicate): rocksdb: prepare_for_reshard failure parsing column options: block_cache=...
- Reported in ceph-users...
- 01:20 AM Backport #51664 (Resolved): pacific: bluestore doesn't respect "bluestore_warn_on_spurious_read_e...
- https://github.com/ceph/ceph/pull/42897
- 01:17 AM Bug #51540 (Pending Backport): bluestore doesn't respect "bluestore_warn_on_spurious_read_errors"...
07/13/2021
- 07:17 PM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- Sorry for asking.
Any progress on that issue?
- 02:20 PM Backport #51650 (Resolved): octopus: Bluestore repair might erroneously remove SharedBlob entries.
- https://github.com/ceph/ceph/pull/42373
- 02:20 PM Backport #51649 (Resolved): pacific: Bluestore repair might erroneously remove SharedBlob entries.
- https://github.com/ceph/ceph/pull/42423
- 02:20 PM Backport #51648 (Resolved): nautilus: Bluestore repair might erroneously remove SharedBlob entries.
- https://github.com/ceph/ceph/pull/43365
- 02:15 PM Bug #51619 (Pending Backport): Bluestore repair might erroneously remove SharedBlob entries.
- 01:28 PM Bug #51217 (Fix Under Review): BlueFS::_flush_range assert(h->file->fnode.ino != 1)