Activity

From 05/07/2021 to 06/05/2021

06/05/2021

02:37 PM Backport #51040: pacific: bluefs _allocate unable to allocate, though enough free
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41655
merged
Yuri Weinstein
01:26 PM Bug #50965 (Resolved): In poweroff conditions BlueFS can create corrupted files
Kefu Chai

06/04/2021

10:40 PM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
Mark, can you help take a look at this? Neha Ojha
07:26 PM Bug #50788: crash in BlueStore::Onode::put()
Similar?... Neha Ojha
04:24 PM Backport #51042 (Resolved): nautilus: bluefs _allocate unable to allocate, though enough free
Igor Fedotov
04:22 PM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/41673
merged
Yuri Weinstein

06/03/2021

11:38 PM Bug #51034: osd: failed to initialize OSD in Rook
Any comments? I can get more information if needed. Satoru Takeuchi
12:37 PM Backport #50936 (In Progress): nautilus: osd-bluefs-volume-ops.sh fails
Partial backport (regression fix only, omits code refactoring) is at:
https://github.com/ceph/ceph/pull/41676
Igor Fedotov
10:25 AM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
Neha Ojha wrote:
> Igor: Not sure how to deal with the missing test in nautilus, but let's get the fix in the last n...
Igor Fedotov
10:21 AM Backport #51042 (In Progress): nautilus: bluefs _allocate unable to allocate, though enough free
https://github.com/ceph/ceph/pull/41673 Igor Fedotov

06/02/2021

04:44 PM Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
Igor: Not sure how to deal with the missing test in nautilus, but let's get the fix in the last nautilus point release Neha Ojha
04:13 PM Backport #51041 (In Progress): octopus: bluefs _allocate unable to allocate, though enough free
Neha Ojha
02:33 PM Backport #51040 (In Progress): pacific: bluefs _allocate unable to allocate, though enough free
Neha Ojha

06/01/2021

03:05 PM Backport #51042 (Resolved): nautilus: bluefs _allocate unable to allocate, though enough free
https://github.com/ceph/ceph/pull/41673 Backport Bot
03:05 PM Backport #51041 (Resolved): octopus: bluefs _allocate unable to allocate, though enough free
https://github.com/ceph/ceph/pull/41658 Backport Bot
03:05 PM Backport #51040 (Resolved): pacific: bluefs _allocate unable to allocate, though enough free
https://github.com/ceph/ceph/pull/41655 Backport Bot
03:02 PM Bug #50656 (Pending Backport): bluefs _allocate unable to allocate, though enough free
Igor Fedotov
11:10 AM Bug #51034 (Closed): osd: failed to initialize OSD in Rook
I tried to create a Rook/Ceph cluster that consists of some OSDs. However, some
OSDs failed to be initialized. Althoug...
Satoru Takeuchi
10:38 AM Backport #50940 (In Progress): octopus: OSDs broken after nautilus->octopus upgrade: rocksdb Corr...
Cory Snyder
09:58 AM Backport #50781 (In Progress): octopus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
Cory Snyder

05/27/2021

02:45 PM Bug #48849 (Need More Info): BlueStore.cc: 11380: FAILED ceph_assert(r == 0)
Neha Ojha
02:25 PM Bug #50578 (Duplicate): BlueFS::FileWriter::lock aborts when trying to `--mkfs`
Neha Ojha
02:08 PM Bug #49256 (Can't reproduce): src/os/bluestore/BlueStore.cc: FAILED ceph_assert(!c)
Neha Ojha
09:46 AM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
I have enough memory... Jiang Yu
09:34 AM Bug #50992: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
osd log... Jiang Yu
08:29 AM Bug #50992 (Pending Backport): src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)

When I use ceph-volume to add a new OSD, the following error appears...
Jiang Yu

05/25/2021

07:12 AM Bug #50965 (Resolved): In poweroff conditions BlueFS can create corrupted files
It is possible to create a condition in which BlueFS contains a file that is corrupted.
It can happen when BlueFS repl...
Adam Kupczyk

05/24/2021

09:45 PM Bug #50217 (Resolved): Increase default value of bluestore_cache_trim_max_skip_pinned
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
09:29 PM Backport #50405 (Resolved): octopus: Increase default value of bluestore_cache_trim_max_skip_pinned
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40919
m...
Loïc Dachary
05:32 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
And yet another one... Tomasz Kloczko
05:31 AM Bug #50947: 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
And after running "make -k", yet another error, this time a compile error rather than a linking one... Tomasz Kloczko
05:28 AM Bug #50947 (Resolved): 16.2.4: build fails with WITH_BLUESTORE_PMEM=OFF
rdma-core 35.0 built with LTO.
I'm trying to build Ceph without PMEM support (and without using the internal PMEM either).
<p...
Tomasz Kloczko
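
For context, disabling PMEM support in such a build goes through the CMake option named in the report title; a minimal sketch, assuming a standard out-of-tree CMake build (paths are illustrative, only WITH_BLUESTORE_PMEM=OFF and "make -k" come from the report):

    # configure an out-of-tree Ceph build with BlueStore PMEM support disabled
    mkdir build && cd build
    cmake -DWITH_BLUESTORE_PMEM=OFF ..
    # keep building past the first failure to collect all errors, as done in this report
    make -k
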

05/23/2021

12:46 AM Backport #50940 (Resolved): octopus: OSDs broken after nautilus->octopus upgrade: rocksdb Corrupt...
https://github.com/ceph/ceph/pull/41613 Backport Bot
12:46 AM Backport #50939 (Resolved): nautilus: OSDs broken after nautilus->octopus upgrade: rocksdb Corrup...
https://github.com/ceph/ceph/pull/41749 Backport Bot
12:45 AM Backport #50938 (Resolved): pacific: OSDs broken after nautilus->octopus upgrade: rocksdb Corrupt...
https://github.com/ceph/ceph/pull/41752 Backport Bot
12:45 AM Backport #50937 (Resolved): octopus: osd-bluefs-volume-ops.sh fails
https://github.com/ceph/ceph/pull/42377 Backport Bot
12:45 AM Backport #50936 (Resolved): nautilus: osd-bluefs-volume-ops.sh fails
https://github.com/ceph/ceph/pull/41676 Backport Bot
12:45 AM Backport #50935 (Resolved): pacific: osd-bluefs-volume-ops.sh fails
https://github.com/ceph/ceph/pull/42219 Backport Bot
12:44 AM Bug #50891 (Pending Backport): osd-bluefs-volume-ops.sh fails
should be backported along with #49554 Kefu Chai
12:42 AM Bug #50017 (Pending Backport): OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: u...
Kefu Chai

05/21/2021

12:03 PM Bug #50926: Stray spanning blob repair reverts legacy omap repair results on the same onode
A workaround would be to repeat the repair. Igor Fedotov
11:59 AM Bug #50926 (New): Stray spanning blob repair reverts legacy omap repair results on the same onode
If an Onode has got both a stray spanning blob and legacy omap errors, the relevant repair would improperly leave legacy o... Igor Fedotov
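
The workaround noted above (repeating the repair) would, on a typical systemd deployment, look roughly like the following offline sequence; this is a sketch, with the OSD id and data path as placeholders:

    # ceph-bluestore-tool works on an offline OSD, so stop it first
    systemctl stop ceph-osd@<id>
    # repeat the repair, per the workaround above
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>
    systemctl start ceph-osd@<id>
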

05/20/2021

10:24 PM Bug #49138: blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error
/a/nojha-2021-05-20_21:34:33-rados:mgr-master-distro-basic-smithi/6125349 Neha Ojha
10:21 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
Konstantin Shalygin wrote:
> Added nautilus to backports, because if upgrade from luminous release, flow is luminous...
Igor Fedotov
05:33 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
Added nautilus to backports because, when upgrading from the luminous release, the flow is luminous->nautilus->pacific. Konstantin Shalygin
06:03 AM Bug #50891 (Fix Under Review): osd-bluefs-volume-ops.sh fails
Kefu Chai
05:02 AM Bug #50891: osd-bluefs-volume-ops.sh fails
it's a regression introduced by https://github.com/ceph/ceph/pull/39580/commits/94a91f54fe30a4dd113fbc1b02bc3f3d52c82a92 Kefu Chai
05:02 AM Bug #50891 (Resolved): osd-bluefs-volume-ops.sh fails
... Kefu Chai

05/19/2021

11:25 PM Bug #50017 (Fix Under Review): OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: u...
Igor Fedotov
10:19 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
based on https://github.com/ceph/ceph/pull/41369#issuecomment-844520075 Neha Ojha

05/18/2021

03:54 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
Konstantin Shalygin wrote:
> > Can't verify right now, but I presume you're getting zombies for pools 1 & 7:
> 1 is...
Igor Fedotov
03:52 PM Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
Given the following line, I presume we're just out of space on the WAL volume (which has got only 512MB):
-1 bluefs _a...
Igor Fedotov
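
If that guess is right, the WAL volume size and BlueFS usage can be confirmed offline; a sketch, with the OSD path as a placeholder:

    # with the OSD stopped, print the size and BlueFS usage of each bluefs device
    ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-<id>
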
12:58 PM Bug #42928 (Closed): ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
Complete bluefs volume migration is now implemented at the ceph-volume level. See https://github.com/ceph/ceph/pull/39580... Igor Fedotov
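
For reference, a rough sketch of the ceph-volume level migration mentioned here, assuming the "lvm migrate" subcommand added by that PR; the ids and LV names are placeholders and the OSD is stopped:

    # move the BlueFS DB volume to a new LV, with LV tags handled at the ceph-volume level
    ceph-volume lvm migrate --osd-id <id> --osd-fsid <fsid> --from db --target <vg>/<new-db-lv>
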
06:51 AM Bug #50511: osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
A small correction: in some cases this compaction didn't help either, so I had to reinstall the OSD. Ist Gab
06:50 AM Bug #50511: osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
Had a similar issue with RBD pool removal; the only thing that helped me, as Igor suggested, was to stop the OSD and run ceph-k... Ist Gab

05/17/2021

09:16 PM Bug #50656 (Fix Under Review): bluefs _allocate unable to allocate, though enough free
Neha Ojha
01:23 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
Igor Fedotov wrote:
> Dan van der Ster wrote:
> > Igor, I was checking what is different between pacific's avl and ...
Igor Fedotov
12:40 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
Dan van der Ster wrote:
> Igor, I was checking what is different between pacific's avl and octopus, and found it is ...
Igor Fedotov
11:55 AM Bug #50656: bluefs _allocate unable to allocate, though enough free
Igor, I was checking what is different between pacific's avl and octopus, and found it is *only* this: https://tracke... Dan van der Ster
08:32 PM Bug #50844: ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
Hi Igor, this seems new in master. So far just one occurrence but I am assigning this to you for your thoughts. Neha Ojha
04:36 PM Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
... Neha Ojha

05/14/2021

11:28 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
Jonas Jelten wrote:
> Yes, this de-zombie-blobs the OSDs. So now I have an upgradepath by (automatically) stopping a...
Igor Fedotov
05:19 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
Yes, this de-zombie-blobs the OSDs. So now I have an upgrade path by (automatically) stopping an OSD, running bluestor... Jonas Jelten
08:17 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
Quote from a ceph-users thread: after upgrading to 16.2.3 / 16.2.4 and adding a few HDDs, OSDs started to fail one by one... Neha Ojha
05:45 PM Backport #50405: octopus: Increase default value of bluestore_cache_trim_max_skip_pinned
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40919
merged
Yuri Weinstein
07:22 AM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
> Can't verify right now, but I presume you're getting zombies for pools 1 & 7:
1 is replicated RBD pool, and 7 is E...
Konstantin Shalygin

05/13/2021

11:12 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
Konstantin Shalygin wrote:
> I have a case: Luminous 12.2.13 -> Nautilus 14.2.20 upgrade:
>
> host 0-5: redeploye...
Igor Fedotov
11:02 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
Konstantin Shalygin wrote:
>
> In the logs I noticed a trend that almost all errors come from the prefix "rbd_data.6", "s...
Igor Fedotov
12:33 PM Bug #50656: bluefs _allocate unable to allocate, though enough free
Jan-Philipp Litza wrote:
> Thanks for looking into this so quickly!
>
> So in #47883 it says that hybrid is the d...
Igor Fedotov

05/12/2021

10:34 PM Bug #50788 (Duplicate): crash in BlueStore::Onode::put()
... Neha Ojha
03:05 PM Backport #50782 (Resolved): pacific: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
https://github.com/ceph/ceph/pull/41753 Backport Bot
03:05 PM Backport #50781 (Resolved): octopus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
https://github.com/ceph/ceph/pull/41612 Backport Bot
03:05 PM Backport #50780 (Resolved): nautilus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
https://github.com/ceph/ceph/pull/41750 Backport Bot
03:02 PM Bug #50555 (Pending Backport): AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
Kefu Chai

05/11/2021

08:53 AM Backport #50403 (Resolved): nautilus: Increase default value of bluestore_cache_trim_max_skip_pinned
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40920
m...
Loïc Dachary
08:28 AM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
Checked on the next host myself: "--command repair" fixes OSDs before the Nautilus auto fsck, and also CAN repair already ... Konstantin Shalygin
07:47 AM Backport #50402 (Resolved): pacific: Increase default value of bluestore_cache_trim_max_skip_pinned
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40918
m...
Loïc Dachary
02:14 AM Bug #50739 (Resolved): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, Obje...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5c573d1b3e5166fa32aaa6f1...
Yaarit Hatuka

05/09/2021

05:44 AM Support #46781 (Closed): how to keep data security in bluestore when server power down ?
This sort of question is best handled by writing to the ceph-users@ceph.io mailing list. :) Greg Farnum

05/08/2021

06:34 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
Jonas, does this look like my https://tracker.ceph.com/issues/48216#note-3 ?
Does this cluster have an EC meta pool for RBD?
Also,...
Konstantin Shalygin
06:21 PM Bug #48216: Spanning blobs list might have zombie blobs that aren't of use any more
I have a case: Luminous 12.2.13 -> Nautilus 14.2.20 upgrade:
hosts 0-5: redeployed in the last 60 days: ceph-disk -> ce...
Konstantin Shalygin

05/07/2021

08:04 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
FTR, Igor replied on the ML:
> I think the root cause is related to the high amount of repairs made
> during the ...
Dan van der Ster
07:55 PM Backport #50403: nautilus: Increase default value of bluestore_cache_trim_max_skip_pinned
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40920
merged
Yuri Weinstein
 
