Activity
From 05/06/2021 to 06/04/2021
06/04/2021
- 10:40 PM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
- Mark, can you help take a look at this?
- 07:26 PM Bug #50788: crash in BlueStore::Onode::put()
- Similar?...
- 04:24 PM Backport #51042 (Resolved): nautilus: bluefs _allocate unable to allocate, though enough free
- 04:22 PM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/41673
merged
06/03/2021
- 11:38 PM Bug #51034: osd: failed to initialize OSD in Rook
- Any comments? I can get more information if needed.
- 12:37 PM Backport #50936 (In Progress): nautilus: osd-bluefs-volume-ops.sh fails
- Partial backport (regression fix only, omits code refactoring) is at:
https://github.com/ceph/ceph/pull/41676
- 10:25 AM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
- Neha Ojha wrote:
> Igor: Not sure how to deal with the missing test in nautilus, but let's get the fix in the last n...
- 10:21 AM Backport #51042 (In Progress): nautilus: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41673
06/02/2021
- 04:44 PM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
- Igor: Not sure how to deal with the missing test in nautilus, but let's get the fix in the last nautilus point release
- 04:13 PM Backport #51041 (In Progress): octopus: bluefs _allocate unable to allocate, though enough free
- 02:33 PM Backport #51040 (In Progress): pacific: bluefs _allocate unable to allocate, though enough free
06/01/2021
- 03:05 PM Backport #51042 (Resolved): nautilus: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41673
- 03:05 PM Backport #51041 (Resolved): octopus: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41658
- 03:05 PM Backport #51040 (Resolved): pacific: bluefs _allocate unable to allocate, though enough free
- https://github.com/ceph/ceph/pull/41655
- 03:02 PM Bug #50656 (Pending Backport): bluefs _allocate unable to allocate, though enough free
- 11:10 AM Bug #51034 (Closed): osd: failed to initialize OSD in Rook
- I tried to create a Rook/Ceph cluster consisting of several OSDs. However, some
OSDs failed to initialize. Althoug...
- 10:38 AM Backport #50940 (In Progress): octopus: OSDs broken after nautilus->octopus upgrade: rocksdb Corr...
- 09:58 AM Backport #50781 (In Progress): octopus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
05/27/2021
- 02:45 PM Bug #48849 (Need More Info): BlueStore.cc: 11380: FAILED ceph_assert(r == 0)
- 02:25 PM Bug #50578 (Duplicate): BlueFS::FileWriter::lock aborts when trying to `--mkfs`
- 02:08 PM Bug #49256 (Can't reproduce): src/os/bluestore/BlueStore.cc: FAILED ceph_assert(!c)
- 09:46 AM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
- I have enough memory...
- 09:34 AM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
- osd log...
- 08:29 AM Bug #50992 (Pending Backport): src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
When I use ceph-volume to add a new OSD, the following error appears...
05/25/2021
- 07:12 AM Bug #50965 (Resolved): In poweroff conditions BlueFS can create corrupted files
- It is possible to create a condition in which BlueFS contains a file that is corrupted.
It can happen when BlueFS repl...
05/24/2021
- 09:45 PM Bug #50217 (Resolved): Increase default value of bluestore_cache_trim_max_skip_pinned
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:29 PM Backport #50405 (Resolved): octopus: Increase default value of bluestore_cache_trim_max_skip_pinned
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40919
m...
- 05:32 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- And yet another one...
- 05:31 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- And after run "make -k" yet another this time not linking but compile error....
- 05:28 AM Bug #50947 (Resolved): 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
- rdma-core 35.0 is built with LTO.
I'm trying to build Ceph without PMEM support (and not use the internal PMEM either)...
05/23/2021
- 12:46 AM Backport #50940 (Resolved): octopus: OSDs broken after nautilus->octopus upgrade: rocksdb Corrupt...
- https://github.com/ceph/ceph/pull/41613
- 12:46 AM Backport #50939 (Resolved): nautilus: OSDs broken after nautilus->octopus upgrade: rocksdb Corrup...
- https://github.com/ceph/ceph/pull/41749
- 12:45 AM Backport #50938 (Resolved): pacific: OSDs broken after nautilus->octopus upgrade: rocksdb Corrupt...
- https://github.com/ceph/ceph/pull/41752
- 12:45 AM Backport #50937 (Resolved): octopus: osd-bluefs-volume-ops.sh fails
- https://github.com/ceph/ceph/pull/42377
- 12:45 AM Backport #50936 (Resolved): nautilus: osd-bluefs-volume-ops.sh fails
- https://github.com/ceph/ceph/pull/41676
- 12:45 AM Backport #50935 (Resolved): pacific: osd-bluefs-volume-ops.sh fails
- https://github.com/ceph/ceph/pull/42219
- 12:44 AM Bug #50891 (Pending Backport): osd-bluefs-volume-ops.sh fails
- should be backported along with #49554
- 12:42 AM Bug #50017 (Pending Backport): OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: u...
05/21/2021
- 12:03 PM Bug #50926: Stray spanning blob repair reverts legacy omap repair results on the same onode
- A workaround would be to repeat the repair.
- 11:59 AM Bug #50926 (New): Stray spanning blob repair reverts legacy omap repair results on the same onode
- If an Onode has got both stray spanning blob and legacy omap errors, then the relevant repair would improperly leave the legacy o...
05/20/2021
- 10:24 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
- /a/nojha-2021-05-20_21:34:33-rados:mgr-master-distro-basic-smithi/6125349
- 10:21 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Konstantin Shalygin wrote:
> Added nautilus to backports, because if upgrade from luminous release, flow is luminous...
- 05:33 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Added nautilus to the backports, because when upgrading from a Luminous release the flow is luminous->nautilus->pacific.
- 06:03 AM Bug #50891 (Fix Under Review): osd-bluefs-volume-ops.sh fails
- 05:02 AM Bug #50891: osd-bluefs-volume-ops.sh fails
- it's a regression introduced by https://github.com/ceph/ceph/pull/39580/commits/94a91f54fe30a4dd113fbc1b02bc3f3d52c82a92
- 05:02 AM Bug #50891 (Resolved): osd-bluefs-volume-ops.sh fails
- ...
05/19/2021
- 11:25 PM Bug #50017 (Fix Under Review): OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: u...
- 10:19 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- based on https://github.com/ceph/ceph/pull/41369#issuecomment-844520075
05/18/2021
- 03:54 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- Konstantin Shalygin wrote:
> > Can't verify right now, but I presume you're getting zombies for pools 1 & 7:
> 1 is...
- 03:52 PM Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
- Given the following line, I presume we're just out of space on the WAL volume (which has got only 512 MB):
-1 bluefs _a...
- 12:58 PM Bug #42928 (Closed): ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- Complete bluefs volume migration is now implemented at the ceph-volume level. See https://github.com/ceph/ceph/pull/39580...
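A minimal sketch of the ceph-volume based flow referenced above, assuming the ceph-volume lvm new-db / ceph-volume lvm migrate subcommands from that work are available in your release; the OSD id, fsid and LV names are placeholders, so check "ceph-volume lvm migrate --help" before running anything:
  # stop the OSD before touching its devices (placeholder id 0)
  systemctl stop ceph-osd@0
  # attach a new DB volume; ceph-volume updates the lv tags itself
  ceph-volume lvm new-db --osd-id 0 --osd-fsid <osd-fsid> --target <vg>/<new-db-lv>
  # or move existing BlueFS data between volumes, e.g. from the main device to the new DB LV
  ceph-volume lvm migrate --osd-id 0 --osd-fsid <osd-fsid> --from data --target <vg>/<new-db-lv>
  systemctl start ceph-osd@0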
- 06:51 AM Bug #50511: osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
- A small correction: in some cases this compaction didn't help either, so I had to reinstall the OSD.
- 06:50 AM Bug #50511: osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
- Had a similar issue with RBD pool removal; the only thing that helped me, as Igor suggested, was to stop the OSD and run ceph-k...
05/17/2021
- 09:16 PM Bug #50656 (Fix Under Review): bluefs _allocate unable to allocate, though enough free
- 01:23 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Igor Fedotov wrote:
> Dan van der Ster wrote:
> > Igor, I was checking what is different between pacific's avl and ...
- 12:40 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Dan van der Ster wrote:
> Igor, I was checking what is different between pacific's avl and octopus, and found it is ...
- 11:55 AM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Igor, I was checking what is different between pacific's avl and octopus, and found it is *only* this: https://tracke...
- 08:32 PM Bug #50844: ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
- Hi Igor, this seems new in master. So far just one occurrence, but I am assigning this to you for your thoughts.
- 04:36 PM Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
- ...
05/14/2021
- 11:28 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Jonas Jelten wrote:
> Yes, this de-zombie-blobs the OSDs. So now I have an upgradepath by (automatically) stopping a...
- 05:19 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Yes, this de-zombie-blobs the OSDs. So now I have an upgrade path by (automatically) stopping an OSD, running bluestor...
- 08:17 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Quote from a ceph-users thread: after the upgrade to 16.2.3/16.2.4, and after adding a few HDDs, OSDs started to fail one by one...
- 05:45 PM Backport #50405: octopus: Increase default value of bluestore_cache_trim_max_skip_pinned
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40919
merged
- 07:22 AM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- > Can't verify right now, but I presume you're getting zombies for pools 1 & 7:
1 is a replicated RBD pool, and 7 is E...
05/13/2021
- 11:12 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- Konstantin Shalygin wrote:
> I have a case: Luminous 12.2.13 -> Nautilus 14.2.20 upgrade:
>
> host 0-5: redeploye...
- 11:02 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- Konstantin Shalygin wrote:
>
> In the logs I noticed a trend that almost all errors come from prefix "rbd_data.6", "s...
- 12:33 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Jan-Philipp Litza wrote:
> Thanks for looking into this so quickly!
>
> So in #47883 it says that hybrid is the d...
05/12/2021
- 10:34 PM Bug #50788 (Duplicate): crash in BlueStore::Onode::put()
- ...
- 03:05 PM Backport #50782 (Resolved): pacific: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- https://github.com/ceph/ceph/pull/41753
- 03:05 PM Backport #50781 (Resolved): octopus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- https://github.com/ceph/ceph/pull/41612
- 03:05 PM Backport #50780 (Resolved): nautilus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- https://github.com/ceph/ceph/pull/41750
- 03:02 PM Bug #50555 (Pending Backport): AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
05/11/2021
- 08:53 AM Backport #50403 (Resolved): nautilus: Increase default value of bluestore_cache_trim_max_skip_pinned
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40920
m...
- 08:28 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Checked on the next host by myself: "--command repair" fixes OSDs before the Nautilus auto fsck, and also CAN repair already ...
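A rough sketch of the offline repair referred to above, assuming the usual stopped-OSD workflow; the OSD id/path is a placeholder, and running a plain fsck first is optional but safer:
  systemctl stop ceph-osd@2
  # check first, then repair, with the OSD offline
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-2 --command fsck
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-2 --command repair
  systemctl start ceph-osd@2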
- 07:47 AM Backport #50402 (Resolved): pacific: Increase default value of bluestore_cache_trim_max_skip_pinned
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40918
m...
- 02:14 AM Bug #50739 (Resolved): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, Obje...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5c573d1b3e5166fa32aaa6f1...
05/09/2021
- 05:44 AM Support #46781 (Closed): how to keep data security in bluestore when server power down ?
- This sort of question is best handled by writing to the ceph-users@ceph.io mailing list. :)
05/08/2021
- 06:34 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Jonas, this looks like my https://tracker.ceph.com/issues/48216#note-3 ?
Does this cluster have an EC meta pool for RBD?
Also,...
- 06:21 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
- I have a case: a Luminous 12.2.13 -> Nautilus 14.2.20 upgrade:
hosts 0-5: redeployed in the last 60 days: ceph-disk -> ce...
05/07/2021
- 08:04 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- FTR, Igor replied on the ML:
> I think the root cause is related to the high amount of repairs made
> during the ...
- 07:55 PM Backport #50403: nautilus: Increase default value of bluestore_cache_trim_max_skip_pinned
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40920
merged
05/06/2021
- 07:29 AM Bug #50656: bluefs _allocate unable to allocate, though enough free
- Thanks for looking into this so quickly!
So in #47883 it says that hybrid is the default allocator since 14.2.11, ...