Activity
From 05/14/2021 to 06/12/2021
06/12/2021
- 12:52 PM Bug #51163: N ceph-volume lvm batch to create bluestore osds
- Thank you very much for your attention to the problem I met. Since the log file is too large (more than 1000KB), I ha...
06/10/2021
- 09:27 AM Bug #51163 (Need More Info): N ceph-volume lvm batch to create bluestore osds
- There should be relevant osd log under /var/log/ceph... Could you please share it?
- 06:31 AM Bug #51163 (Need More Info): N ceph-volume lvm batch to create bluestore osds
- ...
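For context on Bug #51163, a minimal sketch of the kind of ceph-volume invocation under discussion and of locating the OSD log requested above (device names and OSD id are hypothetical, not taken from the report):
    ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc   # batch-create bluestore OSDs on the given devices
    less /var/log/ceph/ceph-volume.log                    # ceph-volume's own log
    less /var/log/ceph/ceph-osd.0.log                     # per-OSD log under /var/log/ceph referred to above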
06/09/2021
- 09:12 PM Backport #50939 (Resolved): nautilus: OSDs broken after nautilus->octopus upgrade: rocksdb Corrup...
- 07:41 PM Backport #50939: nautilus: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unkno...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/41749
merged
- 09:11 PM Backport #50780 (Resolved): nautilus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- 07:42 PM Backport #50780: nautilus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/41750
merged
- 01:09 PM Bug #47446: No snap trim progress after removing large snapshots
- Independent of any bluestore crashes or bugs, you should expect cephfs snap trimming to take a long time when there...
06/08/2021
- 05:32 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- Igor, after a couple of weeks I ran fsck on the upgraded and repaired OSDs:...
- 02:53 PM Bug #51133 (New): OSDs failing to start: rocksdb: submit_common error: Corruption: block checksum...
- After a while of high usage on my stack, I'm getting this error:...
- 12:15 PM Bug #45994 (Duplicate): OSD crash - in thread tp_osd_tp
- 12:13 PM Bug #46525 (Need More Info): osd crush
- 12:13 PM Bug #46780 (Closed): BlueFS Spillover without db being full
- Fixed with the new bluefs space tracking framework (see #39185), starting with v14.2.12
- 12:07 PM Bug #46270 (Can't reproduce): mimic:osd can not start
- 12:06 PM Feature #47718 (Resolved): introduce means to detect/workaround spurious read errors in bluefs
- 11:59 AM Bug #48047 (Rejected): osd: fix bluestore stupid allocator
- 11:35 AM Backport #51130 (Resolved): pacific: In poweroff conditions BlueFS can create corrupted files
- https://github.com/ceph/ceph/pull/42424
- 11:35 AM Backport #51129 (Resolved): nautilus: In poweroff conditions BlueFS can create corrupted files
- https://github.com/ceph/ceph/pull/43135
- 11:35 AM Backport #51128 (Resolved): octopus: In poweroff conditions BlueFS can create corrupted files
- https://github.com/ceph/ceph/pull/42374
- 11:32 AM Bug #50965 (Pending Backport): In poweroff conditions BlueFS can create corrupted files
- 11:30 AM Bug #50965: In poweroff conditions BlueFS can create corrupted files
- Why doesn't this need to be backported? The commit applies cleanly to pacific, at least.
- 11:27 AM Backport #49981 (In Progress): octopus: BlueRocksEnv::GetChildren may pass trailing slashes to Bl...
- https://github.com/ceph/ceph/pull/41757
- 10:46 AM Backport #49982 (In Progress): pacific: BlueRocksEnv::GetChildren may pass trailing slashes to Bl...
- https://github.com/ceph/ceph/pull/41755
- 10:37 AM Backport #50782 (In Progress): pacific: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- https://github.com/ceph/ceph/pull/41753
- 10:22 AM Backport #50938 (In Progress): pacific: OSDs broken after nautilus->octopus upgrade: rocksdb Corr...
- https://github.com/ceph/ceph/pull/41752
merged
- 10:11 AM Feature #21741 (Closed): os/bluestore: multi-tier support in BlueStore
- Closing due to inactivity
- 10:03 AM Bug #41208 (Resolved): os/bluestore/NVMEDevice: when write\read\flush func error in aio_handle, w...
- 09:59 AM Bug #49168 (Resolved): Bluefs improperly handles huge (>4GB) writes which causes data corruption
- 09:58 AM Backport #49477 (Rejected): mimic: Bluefs improperly handles huge (>4GB) writes which causes data...
- 09:58 AM Backport #49478 (Rejected): luminous: Bluefs improperly handles huge (>4GB) writes which causes d...
- 09:57 AM Bug #45337 (Resolved): Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
- 09:57 AM Backport #45683 (Rejected): mimic: Large (>=2 GB) writes are incomplete when bluefs_buffered_io =...
- Won't fix, as mimic is at EOL
- 09:42 AM Backport #50780 (In Progress): nautilus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- https://github.com/ceph/ceph/pull/41750
- 09:33 AM Backport #50939 (In Progress): nautilus: OSDs broken after nautilus->octopus upgrade: rocksdb Cor...
- https://github.com/ceph/ceph/pull/41749
06/07/2021
- 02:17 PM Backport #50936 (Resolved): nautilus: osd-bluefs-volume-ops.sh fails
- 01:52 PM Backport #50936: nautilus: osd-bluefs-volume-ops.sh fails
- Igor Fedotov wrote:
> Partial backport (regression fix only, omits code refactoring) is at:
> https://github.com/ce...
06/05/2021
- 02:37 PM Backport #51040: pacific: bluefs _allocate unable to allocate, though enough free
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41655
merged
- 01:26 PM Bug #50965 (Resolved): In poweroff conditions BlueFS can create corrupted files
06/04/2021
- 10:40 PM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
- Mark, can you help take a look at this?
- 07:26 PM Bug #50788: crash in BlueStore::Onode::put()
- Similar?...
- 04:24 PM Backport #51042 (Resolved): nautilus: bluefs _allocate unable to allocate, though enough free
- 04:22 PM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/41673
merged
06/03/2021
- 11:38 PM Bug #51034: osd: failed to initialize OSD in Rook
- Any comments? I can get more information if needed.
- 12:37 PM Backport #50936 (In Progress): nautilus: osd-bluefs-volume-ops.sh fails
- Partial backport (regression fix only, omits code refactoring) is at:
https://github.com/ceph/ceph/pull/41676
- 10:25 AM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
- Neha Ojha wrote:
> Igor: Not sure how to deal with the missing test in nautilus, but let's get the fix in the last n...
- 10:21 AM Backport #51042 (In Progress): nautilus: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41673
06/02/2021
- 04:44 PM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
- Igor: Not sure how to deal with the missing test in nautilus, but let's get the fix in the last nautilus point release
- 04:13 PM Backport #51041 (In Progress): octopus: bluefs _allocate unable to allocate, though enough free
- 02:33 PM Backport #51040 (In Progress): pacific: bluefs _allocate unable to allocate, though enough free
06/01/2021
- 03:05 PM Backport #51042 (Resolved): nautilus: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41673
- 03:05 PM Backport #51041 (Resolved): octopus: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41658
- 03:05 PM Backport #51040 (Resolved): pacific: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41655
- 03:02 PM Bug #50656 (Pending Backport): bluefs _allocate unable to allocate, though enough free
- 11:10 AM Bug #51034 (Closed): osd: failed to initialize OSD in Rook
- I tried to create Rook/Ceph cluster that consists of some OSDs. However, some
OSDs failed to be initialized. Althoug...
- 10:38 AM Backport #50940 (In Progress): octopus: OSDs broken after nautilus->octopus upgrade: rocksdb Corr...
- 09:58 AM Backport #50781 (In Progress): octopus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
05/27/2021
- 02:45 PM Bug #48849 (Need More Info): BlueStore.cc: 11380: FAILED ceph_assert(r == 0)
- 02:25 PM Bug #50578 (Duplicate): BlueFS::FileWriter::lock aborts when trying to `--mkfs`
- 02:08 PM Bug #49256 (Can't reproduce): src/os/bluestore/BlueStore.cc: FAILED ceph_assert(!c)
- 09:46 AM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
- I have enough memory...
- 09:34 AM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
- osd log...
- 08:29 AM Bug #50992 (Pending Backport): src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
When I use ceph-volume to add a new OSD, the following error appears...
05/25/2021
- 07:12 AM Bug #50965 (Resolved): In poweroff conditions BlueFS can create corrupted files
- It is possible to create a condition in which BlueFS contains a corrupted file.
It can happen when BlueFS repl...
05/24/2021
- 09:45 PM Bug #50217 (Resolved): Increase default value of bluestore_cache_trim_max_skip_pinned
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
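A minimal sketch of how an operator might check the value in effect after this change (the OSD id is hypothetical):
    ceph config get osd bluestore_cache_trim_max_skip_pinned            # cluster-wide default for OSDs
    ceph daemon osd.0 config get bluestore_cache_trim_max_skip_pinned   # value currently in effect on a running OSD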
- 09:29 PM Backport #50405 (Resolved): octopus: Increase default value of bluestore_cache_trim_max_skip_pinned
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40919
m...
- 05:32 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- And yet another one...
- 05:31 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- And after running "make -k", yet another error, this time a compile error rather than a linking one...
- 05:28 AM Bug #50947 (Resolved): 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- rdma-core 35.0 is built with LTO.
I'm trying to build ceph without PMEM support (and without using the internal PMEM as well)
<p...
05/23/2021
- 12:46 AM Backport #50940 (Resolved): octopus: OSDs broken after nautilus->octopus upgrade: rocksdb Corrupt...
- https://github.com/ceph/ceph/pull/41613
- 12:46 AM Backport #50939 (Resolved): nautilus: OSDs broken after nautilus->octopus upgrade: rocksdb Corrup...
- https://github.com/ceph/ceph/pull/41749
- 12:45 AM Backport #50938 (Resolved): pacific: OSDs broken after nautilus->octopus upgrade: rocksdb Corrupt...
- https://github.com/ceph/ceph/pull/41752
- 12:45 AM Backport #50937 (Resolved): octopus: osd-bluefs-volume-ops.sh fails
- https://github.com/ceph/ceph/pull/42377
- 12:45 AM Backport #50936 (Resolved): nautilus: osd-bluefs-volume-ops.sh fails
- https://github.com/ceph/ceph/pull/41676
- 12:45 AM Backport #50935 (Resolved): pacific: osd-bluefs-volume-ops.sh fails
- https://github.com/ceph/ceph/pull/42219
- 12:44 AM Bug #50891 (Pending Backport): osd-bluefs-volume-ops.sh fails
- should be backported along with #49554
- 12:42 AM Bug #50017 (Pending Backport): OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: u...
05/21/2021
- 12:03 PM Bug #50926: Stray spanning blob repair reverts legacy omap repair results on the same onode
- A workaround would be to repeat the repair (sketched below).
- 11:59 AM Bug #50926 (New): Stray spanning blob repair reverts legacy omap repair results on the same onode
- If Onode has got both stray spanning blob and legacy omap errors then relevant repair would improperly leave legacy o...
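A minimal sketch of the workaround for #50926, assuming a stopped OSD and a hypothetical OSD id/path:
    systemctl stop ceph-osd@0
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0   # first pass may leave the legacy omap error unrepaired
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0   # repeating the repair addresses what the first pass reverted
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0     # verify the result
    systemctl start ceph-osd@0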
05/20/2021
- 10:24 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- /a/nojha-2021-05-20_21:34:33-rados:mgr-master-distro-basic-smithi/6125349
- 10:21 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Konstantin Shalygin wrote:
> Added nautilus to backports, because when upgrading from the luminous release the flow is luminous...
- 05:33 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Added nautilus to backports, because when upgrading from the luminous release the flow is luminous->nautilus->pacific.
- 06:03 AM Bug #50891 (Fix Under Review): osd-bluefs-volume-ops.sh fails
- 05:02 AM Bug #50891: osd-bluefs-volume-ops.sh fails
- it's a regression introduced by https://github.com/ceph/ceph/pull/39580/commits/94a91f54fe30a4dd113fbc1b02bc3f3d52c82a92
- 05:02 AM Bug #50891 (Resolved): osd-bluefs-volume-ops.sh fails
- ...
05/19/2021
- 11:25 PM Bug #50017 (Fix Under Review): OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: u...
- 10:19 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- based on https://github.com/ceph/ceph/pull/41369#issuecomment-844520075
05/18/2021
- 03:54 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- Konstantin Shalygin wrote:
> > Can't verify right now, but I presume you're getting zombies for pools 1 & 7:
> 1 is...
- 03:52 PM Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
- Given the following line, I presume we're just out of space on the WAL volume (which has only 512MB):
-1 bluefs _a...
- 12:58 PM Bug #42928 (Closed): ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- Complete bluefs volume migration is now implemented at the ceph-volume level (a sketch follows below). See https://github.com/ceph/ceph/pull/39580...
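A minimal sketch of the ceph-volume level migration referred to above, with a hypothetical OSD id, fsid and LV names (see the PR and the ceph-volume documentation for exact syntax):
    systemctl stop ceph-osd@0
    ceph-volume lvm new-db --osd-id 0 --osd-fsid <osd-fsid> --target vg_db/lv_db                 # attach a new DB volume and update lv tags
    ceph-volume lvm migrate --osd-id 0 --osd-fsid <osd-fsid> --from data --target vg_db/lv_db    # move DB data off the main device
    systemctl start ceph-osd@0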
- 06:51 AM Bug #50511: osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
- A small correction: in some cases this compaction didn't help either, so I had to reinstall the OSD.
- 06:50 AM Bug #50511: osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
- Had a similar issue with RBD pool removal; the only thing that helped me, as Igor suggested, was to stop the OSD and run ceph-k...
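The command above is truncated; assuming it refers to offline compaction with ceph-kvstore-tool, a minimal sketch (OSD id and path are hypothetical):
    systemctl stop ceph-osd@0
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact   # offline compaction of the OSD's RocksDB
    systemctl start ceph-osd@0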
05/17/2021
- 09:16 PM Bug #50656 (Fix Under Review): bluefs _allocate unable to allocate, though enough free
- 01:23 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Igor Fedotov wrote:
> Dan van der Ster wrote:
> > Igor, I was checking what is different between pacific's avl and ...
- 12:40 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Dan van der Ster wrote:
> Igor, I was checking what is different between pacific's avl and octopus, and found it is ...
- 11:55 AM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Igor, I was checking what is different between pacific's avl and octopus, and found it is *only* this: https://tracke...
- 08:32 PM Bug #50844: ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
- Hi Igor, this seems new in master. So far just one occurrence but I am assigning this to you for your thoughts.
- 04:36 PM Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
- ...
05/14/2021
- 11:28 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Jonas Jelten wrote:
> Yes, this de-zombie-blobs the OSDs. So now I have an upgrade path by (automatically) stopping a...
- 05:19 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Yes, this de-zombie-blobs the OSDs. So now I have an upgrade path by (automatically) stopping an OSD, running bluestor...
- 08:17 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Quote from a ceph-users thread: after upgrading to 16.2.3/16.2.4 and adding a few HDDs, OSDs started to fail one by one...
- 05:45 PM Backport #50405: octopus: Increase default value of bluestore_cache_trim_max_skip_pinned
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40919
merged
- 07:22 AM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- > Can't verify right now, but I presume you're getting zombies for pools 1 & 7:
1 is replicated RBD pool, and 7 is E...