Activity

From 04/14/2020 to 05/13/2020

05/13/2020

03:13 PM Backport #45354 (Need More Info): octopus: ceph_test_objectstore: src/os/bluestore/bluestore_type...
Nathan Cutler
02:41 PM Bug #45519: OSD asserts during block allocation for BlueFS
>We also see some new OSDs asserting:... Aleksei Zakharov
10:33 AM Bug #45519: OSD asserts during block allocation for BlueFS
>Igor, thanks for your answers!
>If I understand right: high bluestore fragmentation makes it impossible to alloca...
Aleksei Zakharov

05/12/2020

05:41 PM Bug #45519: OSD asserts during block allocation for BlueFS
Alexei, it looks like your OSDs have pretty fragmented free space (and presumably quite high utilization) which resul... Igor Fedotov
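As an illustration only (not part of the ticket): on recent releases the allocator's free-space fragmentation can usually be inspected through the OSD admin socket. Command names vary by version and the OSD id below is a placeholder, so treat this as a sketch:
```
# Fragmentation score of the main ("block") device allocator; osd.5 is a placeholder id.
ceph daemon osd.5 bluestore allocator score block
# Full dump of free extents (output can be very large).
ceph daemon osd.5 bluestore allocator dump block
```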
02:33 PM Bug #45519 (New): OSD asserts during block allocation for BlueFS
Hi all.
We use ceph as the rados object storage for small (<=4K) objects. We have about 2 billion objects in one...
Aleksei Zakharov

05/11/2020

02:23 PM Bug #44774 (Resolved): ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
02:20 PM Bug #45250 (Resolved): check-generated.sh finds error in ceph-dencoder
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler

05/10/2020

04:10 AM Bug #43068: on disk size (81292) does not match object info size (81237)
Igor Fedotov wrote:
> And is this correct that Rados/BlueStore is valid (and is in line with the content) but CephFS...
Patrick Donnelly

05/08/2020

10:17 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
https://github.com/ceph/ceph/pull/34970 Sebastian Wagner
10:05 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
This also blocks getting a valid test run for any upgrade PR in cephadm. Michael Fritch
03:32 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
Because of this issue, affected Pulpito runs now take about 12h before they're killed, instead of the usual 1h or so. Sebastian Wagner
02:00 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
http://pulpito.ceph.com/teuthology-2020-05-03_07:01:02-rados-master-distro-basic-smithi/5018156/ Sebastian Wagner
07:28 AM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/50... Sebastian Wagner

05/07/2020

05:46 PM Bug #44757 (Resolved): perf regression due to bluefs_buffered_io=true
Nathan Cutler
09:50 AM Backport #45426 (In Progress): octopus: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
https://github.com/ceph/ceph/pull/34943 Igor Fedotov
09:37 AM Backport #45426 (Resolved): octopus: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
https://github.com/ceph/ceph/pull/34943 Igor Fedotov
06:33 AM Bug #44880: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
Need that octopus backport.
/a/yuriw-2020-05-05_15:20:13-rados-wip-yuri8-testing-2020-05-04-2117-octopus-distro-ba...
Brad Hubbard

05/06/2020

02:05 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
Igor Fedotov
01:00 PM Backport #45127 (Resolved): octopus: Extent leak after main device expand
Nathan Cutler

05/05/2020

04:30 PM Backport #45330 (Resolved): nautilus: check-generated.sh finds error in ceph-dencoder
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34832
m...
Nathan Cutler
04:30 PM Backport #45045 (Resolved): nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damag...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34796
m...
Nathan Cutler
04:30 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34611
m...
Nathan Cutler
09:08 AM Backport #45123 (Resolved): nautilus: OSD might fail to recover after ENOSPC crash
Igor Fedotov
04:26 PM Backport #45348 (Resolved): octopus: BlueStore asserting on fs upgrade tests
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34610
m...
Nathan Cutler
04:26 PM Backport #45122: octopus: OSD might fail to recover after ENOSPC crash
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34610
m...
Nathan Cutler
09:06 AM Backport #45122 (Resolved): octopus: OSD might fail to recover after ENOSPC crash
Igor Fedotov
04:25 PM Backport #44819 (Resolved): octopus: perf regression due to bluefs_buffered_io=true
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34353
m...
Nathan Cutler
04:25 PM Backport #45044 (Resolved): octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34795
m...
Nathan Cutler
04:24 PM Backport #45063 (Resolved): octopus: bluestore: unused calculation is broken
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34793
m...
Nathan Cutler
09:08 AM Bug #45112 (Resolved): OSD might fail to recover after ENOSPC crash
Igor Fedotov

05/04/2020

08:43 PM Backport #45330: nautilus: check-generated.sh finds error in ceph-dencoder
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34832
merged
Yuri Weinstein
08:43 PM Backport #45045: nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34796
merged
Yuri Weinstein
08:37 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/34611
merged
Yuri Weinstein
08:33 PM Backport #45348: octopus: BlueStore asserting on fs upgrade tests
Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/34610
merged
Yuri Weinstein
08:33 PM Backport #45122: octopus: OSD might fail to recover after ENOSPC crash
Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/34610
merged
Yuri Weinstein
07:53 PM Bug #44880 (Pending Backport): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
/a/yuriw-2020-05-02_20:02:46-rados-wip-yuri6-testing-2020-04-30-2259-octopus-distro-basic-smithi/5016613 Neha Ojha
07:34 PM Backport #45044: octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34795
merged
Yuri Weinstein
09:07 AM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
http://pulpito.ceph.com/swagner-2020-04-30_09:15:46-rados-wip-swagner-testing-2020-04-29-1246-distro-basic-smithi/500... Sebastian Wagner

05/01/2020

12:04 PM Backport #45348: octopus: BlueStore asserting on fs upgrade tests
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
11:40 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
Partially fixed by https://github.com/ceph/ceph/pull/34503 Igor Fedotov
11:32 AM Backport #45354: octopus: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ...
This is to be backported once https://github.com/ceph/ceph/pull/33434 is backported to Octopus. Igor Fedotov
11:28 AM Backport #45354 (Rejected): octopus: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 7...
Igor Fedotov
05:05 AM Bug #45195 (Pending Backport): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FA...
Kefu Chai

04/30/2020

11:13 PM Backport #45063: octopus: bluestore: unused calculation is broken
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34793
merged
Yuri Weinstein
02:20 PM Bug #44509 (Won't Fix): using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k lefto...
Neha Ojha
02:00 PM Bug #45195 (Fix Under Review): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FA...
Igor Fedotov
08:49 AM Bug #45337 (Fix Under Review): Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
Igor Fedotov
08:33 AM Bug #45337 (Pending Backport): Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
Igor Fedotov
08:49 AM Backport #45348 (In Progress): octopus: BlueStore asserting on fs upgrade tests
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
08:35 AM Backport #45348 (Resolved): octopus: BlueStore asserting on fs upgrade tests
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
01:30 AM Bug #20236: bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
/a/yuriw-2020-04-28_21:58:13-rados-wip-yuri-testing-2020-04-24-1941-master-distro-basic-smithi/4995289 Brad Hubbard

04/29/2020

05:43 PM Bug #45133 (Pending Backport): BlueStore asserting on fs upgrade tests
Neha Ojha
04:27 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
Igor Fedotov
04:24 PM Bug #45133: BlueStore asserting on fs upgrade tests
https://github.com/ceph/ceph/pull/34616 merged Yuri Weinstein
04:38 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
related? https://github.com/ceph/ceph/pull/34006/files#diff-238dcf689f6d5fa51b9b9c814f356581R794-R796 Sebastian Wagner
03:58 PM Bug #45335 (Resolved): cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_...
... Sebastian Wagner
04:25 PM Bug #45337 (Resolved): Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
Large buffered writes trigger a _pwritev() call from the KernelDevice::_sync_write() func, which doesn't support more than ... Igor Fedotov
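For context, the affected option can be checked per OSD before deciding how to proceed; a minimal sketch, assuming a cluster with the ceph config interface and a placeholder daemon id osd.0:
```
# Effective value as stored in the monitor config database.
ceph config get osd.0 bluefs_buffered_io
# Or ask the running daemon directly over its admin socket.
ceph daemon osd.0 config get bluefs_buffered_io
```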
12:32 PM Backport #45330 (In Progress): nautilus: check-generated.sh finds error in ceph-dencoder
Nathan Cutler
12:31 PM Backport #45330 (Resolved): nautilus: check-generated.sh finds error in ceph-dencoder
https://github.com/ceph/ceph/pull/34832 Nathan Cutler
10:55 AM Bug #45250 (Pending Backport): check-generated.sh finds error in ceph-dencoder
Igor Fedotov

04/28/2020

05:18 PM Backport #45045 (In Progress): nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may da...
Nathan Cutler
05:17 PM Backport #45044 (In Progress): octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may dam...
Nathan Cutler
05:15 PM Backport #45064 (In Progress): nautilus: bluestore: unused calculation is broken
Nathan Cutler
05:14 PM Backport #45063 (In Progress): octopus: bluestore: unused calculation is broken
Nathan Cutler
05:11 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
Thanks a lot. That solves the problem. Would be good to have a note about it in docs. Stefan Priebe
05:10 PM Backport #43086 (In Progress): mimic: bluefs: sync_metadata leaks dirty files if log_t is empty
Nathan Cutler
01:05 PM Backport #45234 (Rejected): mimic: OSD might fail to recover after ENOSPC crash
Wouldn't do, as it will introduce a super meta version mess. Or one needs to backport per-pool omap stats as well. Igor Fedotov
09:35 AM Backport #45234: mimic: OSD might fail to recover after ENOSPC crash
Igor Fedotov wrote:
> Duplicates https://tracker.ceph.com/issues/45124
#45124 is an orphan. I deleted it.
Nathan Cutler
09:35 AM Backport #45234 (Need More Info): mimic: OSD might fail to recover after ENOSPC crash
Nathan Cutler
09:35 AM Backport #45234: mimic: OSD might fail to recover after ENOSPC crash
@Igor wrote:
Probably this is overkill, see my comment in https://tracker.ceph.com/issues/45123
Nathan Cutler

04/27/2020

12:59 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
Stefan,
I presume you invoked the bluefs-bdev-new-db command only, didn't you?
In fact this command allocates a new DB...
Igor Fedotov
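For readers following the thread, the commands under discussion look roughly like the sketch below; the OSD path and device names are placeholders, and the OSD must be stopped first:
```
# Attach a new dedicated DB device to an offline OSD.
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-5 --dev-target /dev/vg0/osd5-db
# Optionally move existing BlueFS data from the slow device onto the new DB device.
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-5 \
    --devs-source /var/lib/ceph/osd/ceph-5/block --dev-target /var/lib/ceph/osd/ceph-5/block.db
# Verify the device labels afterwards.
ceph-bluestore-tool show-label --dev /dev/vg0/osd5-db
```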
12:08 PM Backport #45234 (Duplicate): mimic: OSD might fail to recover after ENOSPC crash
Duplicates https://tracker.ceph.com/issues/45124 Igor Fedotov
08:51 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
Well, this fix is required if/when the 'deferring big writes' feature (https://github.com/ceph/ceph/pull/33434) is backp... Igor Fedotov
08:49 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
Hi Brad,
no, this is specific to master (Pacific) only
Igor Fedotov
12:39 AM Bug #45195 (In Progress): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ...
Brad Hubbard
12:38 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
Igor, does this need to be backported? If so could you set the appropriate releases please? I know you set 16 as the ... Brad Hubbard
03:34 AM Bug #44880 (Resolved): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
Kefu Chai

04/25/2020

08:27 PM Bug #45195 (Fix Under Review): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FA...
Igor Fedotov

04/24/2020

11:56 PM Bug #45265: Disable bluestore_fsck_quick_fix_on_mount by default
Oh, I'm afraid this is a perfect topic for a holy war ;)
I recall a few people who wanted this feature to be enabled...
Igor Fedotov
11:25 PM Bug #45265 (Resolved): Disable bluestore_fsck_quick_fix_on_mount by default
Since the format conversion may take from a few minutes to a few hours, depending on the amount of stored "omap" data... Neha Ojha
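A hedged sketch of how an operator might keep the conversion from running implicitly at mount time and trigger it explicitly instead (option and command names as in contemporary releases; the OSD id is a placeholder):
```
# Disable the automatic conversion on OSD start.
ceph config set osd bluestore_fsck_quick_fix_on_mount false
# Later, run the repair offline at a convenient time.
systemctl stop ceph-osd@5
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-5
systemctl start ceph-osd@5
```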
10:15 PM Bug #45195 (In Progress): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ...
Igor Fedotov
10:08 PM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
The following backtrace is likely a symptom of the same bug.
-1> 2020-04-24T16:25:20.971+0300 7efe4897e...
Igor Fedotov
06:44 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
Igor, did you have a look at those values? Any chance to fix this manually? Stefan Priebe
06:43 PM Bug #42928: ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
OK, I was able to solve this by using the lv tools and copying all lv flags from the block device to db and just changing ... Stefan Priebe
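Illustrative only, with VG/LV names as placeholders and the tag values taken from the quotes elsewhere in this thread: since ceph-volume reads its metadata from LVM tags, the missing entries can be added with stock LVM tooling:
```
# Add the missing ceph-volume tags on the OSD's block LV (vg0/osd-block is a placeholder).
lvchange --addtag "ceph.db_device=/dev/BLOCKDB" vg0/osd-block
lvchange --addtag "ceph.db_uuid=k7Oc64-aShi-Z4Y0-Opov-daQM-Vdue-kJpIAN" vg0/osd-block
# Check the resulting tags.
lvs -o lv_name,lv_tags vg0
```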
11:51 AM Bug #45110: Extent leak after main device expand
Re-adding "mimic" to Backports because otherwise "src/script/backport-create-issue" will complain that the issue has ... Nathan Cutler
07:30 AM Bug #45250 (Resolved): check-generated.sh finds error in ceph-dencoder
[~/master36] wjw@cephdev.digiware.nl> build/bin/ceph-dencoder type bluestore_bdev_label_t select_test 1 encode ex... Willem Jan Withagen
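The check boils down to encode/decode round trips over generated corpus objects; a minimal sketch of that kind of invocation, assuming a build tree with ceph-dencoder:
```
# Encode test object #1 of bluestore_bdev_label_t, decode it back, and dump it as JSON.
build/bin/ceph-dencoder type bluestore_bdev_label_t select_test 1 encode decode dump_json
```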

04/23/2020

10:54 PM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
... Brad Hubbard
05:40 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
Not sure whether this is an issue with the test? Brad Hubbard
05:30 AM Bug #45195 (Resolved): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED cep...
... Brad Hubbard
03:27 PM Backport #45125 (Rejected): mimic: Extent leak after main device expand
My bad, the main device expansion feature started in Nautilus, hence the code isn't present. Igor Fedotov
02:57 PM Backport #45125 (Need More Info): mimic: Extent leak after main device expand
@Igor - I looked, but in mimic I can't find the code touched by the master fix. Nathan Cutler
02:54 PM Backport #45234 (Rejected): mimic: OSD might fail to recover after ENOSPC crash
Nathan Cutler
02:51 PM Backport #45126 (In Progress): nautilus: Extent leak after main device expand
Nathan Cutler
02:51 PM Backport #45127 (In Progress): octopus: Extent leak after main device expand
Nathan Cutler

04/21/2020

05:51 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
sure:
```
# ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-5 stats
2020-04-21 19:51:16.635 7f439d4b7e00 1 ...
Stefan Priebe
02:09 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
Could you please provide the output of the following command (the OSD needs to be offline):
ceph-kvstore-tool bluestore-kv <pa...
Igor Fedotov
02:33 PM Bug #42928: ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
is ceph.db_uuid the LVM uuid? Stefan Priebe
02:14 PM Bug #42928: ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
Those two tags are missing:
ceph.db_device=/dev/BLOCKDB
ceph.db_uuid=k7Oc64-aShi-Z4Y0-Opov-daQM-Vdue-kJpIAN
Stefan Priebe
05:25 AM Bug #45133: BlueStore asserting on fs upgrade tests
http://pulpito.ceph.com/bhubbard-2020-04-16_09:57:54-rados-wip-badone-testing-distro-basic-smithi/4957960/ Brad Hubbard

04/20/2020

01:24 PM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
Thanks for your reply. Good to know that @ceph-bluestore-tool@ needs to run on an offline OSD, I did not check that b... Gerdriaan Mulder
12:29 PM Bug #43370 (Rejected): OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs eno...
Hi Gerdriaan.
Thanks for the update.
First of all, ceph-bluestore-tool is to be executed on an offline OSD, hence the ...
Igor Fedotov
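To make the offline requirement concrete, a hedged example with a placeholder OSD id and path; bluefs-bdev-sizes is just one read-only command relevant to a BlueFS free-space investigation:
```
# Stop the OSD first; ceph-bluestore-tool cannot operate on a running OSD.
systemctl stop ceph-osd@12
# Report BlueFS device sizes and free space.
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-12
```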
08:35 AM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
Hi Igor,
Output below for each OSD node:...
Gerdriaan Mulder

04/17/2020

04:37 PM Bug #45133 (Fix Under Review): BlueStore asserting on fs upgrade tests
Igor Fedotov
03:58 PM Bug #45133: BlueStore asserting on fs upgrade tests
Yes indeed, when you are starting with format 2, BlueStore::_upgrade_super() invokes _prepare_ondisk_format_super() as... Greg Farnum
03:10 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
See for instance http://pulpito.front.sepia.ceph.com/gregf-2020-04-15_20:54:50-fs-wip-greg-testing-415-1-distro-basic... Greg Farnum
01:48 PM Backport #45122 (In Progress): octopus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
10:26 AM Backport #45122 (Resolved): octopus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
01:46 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
Doing a partial backport for now to work around the issue from comment #2. Igor Fedotov
01:46 PM Backport #45123 (In Progress): nautilus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34611 Igor Fedotov
11:06 AM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
Looks like this backport depends on backporting per-pool omap stats collection support, see
https://github.com/ceph...
Igor Fedotov
10:27 AM Backport #45123 (Resolved): nautilus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34611 Igor Fedotov
12:54 PM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
Gerdriaan, sorry, I missed your inquiry.
What I wanted is the output for:
ceph-bluestore-tool --path <path-to-osd> --c...
Igor Fedotov
12:47 PM Bug #43068: on disk size (81292) does not match object info size (81237)
It looks to me that this bug has a rather out-of-bluestore scope. Am I right that it applies to CephFS files/objects only? A... Igor Fedotov
12:24 AM Bug #43068 (New): on disk size (81292) does not match object info size (81237)
We got one more hit:
2020-04-16 14:21:30.139493 osd.869 (osd.869) 10192 : cluster [ERR] 6.68b shard 869 soid 6:d16f...
Xiaoxi Chen
10:27 AM Backport #45127 (Resolved): octopus: Extent leak after main device expand
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
10:27 AM Backport #45126 (Resolved): nautilus: Extent leak after main device expand
https://github.com/ceph/ceph/pull/34711 Igor Fedotov
10:27 AM Backport #45125 (Rejected): mimic: Extent leak after main device expand
Igor Fedotov

04/16/2020

10:21 AM Bug #45112 (Resolved): OSD might fail to recover after ENOSPC crash
While opening after such a crash, KV might need to flush some data and hence needs additional disk space. But allocato... Igor Fedotov
10:05 AM Bug #45110 (Resolved): Extent leak after main device expand
To reproduce the issue one can expand a device of 3,147,480,064 bytes to
4,147,480,064 using bluefs-bdev-expand comman...
Igor Fedotov
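For reference, a rough sketch of that reproduction flow with placeholder LVM names; the underlying device is grown first, then BlueStore is told to pick up the new size (OSD stopped):
```
# Grow the LV backing the OSD's main device.
lvextend -L +1G vg0/osd-block
# Let BlueStore/BlueFS adopt the new device size.
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-5
```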

04/15/2020

03:25 PM Bug #44880 (Fix Under Review): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
Igor Fedotov
 
