Activity
From 04/11/2020 to 05/10/2020
05/10/2020
- 04:10 AM Bug #43068: on disk size (81292) does not match object info size (81237)
- Igor Fedotov wrote:
> And is this correct that Rados/BlueStore is valid (and is in line with the content) but CephFS...
05/08/2020
- 10:17 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
- https://github.com/ceph/ceph/pull/34970
- 10:05 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
- This also blocks getting a valid test run for any upgrade PR in cephadm.
- 03:32 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
- Because of this issue, affected Pulpito runs now take about 12h before they're killed, instead of the usual 1h or so.
- 02:00 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
- http://pulpito.ceph.com/teuthology-2020-05-03_07:01:02-rados-master-distro-basic-smithi/5018156/
- 07:28 AM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
- http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/50...
05/07/2020
- 05:46 PM Bug #44757 (Resolved): perf regression due to bluefs_buffered_io=true
- 09:50 AM Backport #45426 (In Progress): octopus: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
- https://github.com/ceph/ceph/pull/34943
- 09:37 AM Backport #45426 (Resolved): octopus: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
- https://github.com/ceph/ceph/pull/34943
- 06:33 AM Bug #44880: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
- Need that octopus backport.
/a/yuriw-2020-05-05_15:20:13-rados-wip-yuri8-testing-2020-05-04-2117-octopus-distro-ba...
05/06/2020
- 02:05 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
- 01:00 PM Backport #45127 (Resolved): octopus: Extent leak after main device expand
05/05/2020
- 04:30 PM Backport #45330 (Resolved): nautilus: check-generated.sh finds error in ceph-dencoder
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34832
m...
- 04:30 PM Backport #45045 (Resolved): nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damag...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34796
m...
- 04:30 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34611
m...
- 09:08 AM Backport #45123 (Resolved): nautilus: OSD might fail to recover after ENOSPC crash
- 04:26 PM Backport #45348 (Resolved): octopus: BlueStore asserting on fs upgrade tests
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34610
m...
- 04:26 PM Backport #45122: octopus: OSD might fail to recover after ENOSPC crash
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34610
m...
- 09:06 AM Backport #45122 (Resolved): octopus: OSD might fail to recover after ENOSPC crash
- 04:25 PM Backport #44819 (Resolved): octopus: perf regression due to bluefs_buffered_io=true
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34353
m...
- 04:25 PM Backport #45044 (Resolved): octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34795
m...
- 04:24 PM Backport #45063 (Resolved): octopus: bluestore: unused calculation is broken
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34793
m...
- 09:08 AM Bug #45112 (Resolved): OSD might fail to recover after ENOSPC crash
05/04/2020
- 08:43 PM Backport #45330: nautilus: check-generated.sh finds error in ceph-dencoder
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34832
merged
- 08:43 PM Backport #45045: nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34796
merged
- 08:37 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/34611
merged
- 08:33 PM Backport #45348: octopus: BlueStore asserting on fs upgrade tests
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/34610
merged
- 08:33 PM Backport #45122: octopus: OSD might fail to recover after ENOSPC crash
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/34610
merged
- 07:53 PM Bug #44880 (Pending Backport): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
- /a/yuriw-2020-05-02_20:02:46-rados-wip-yuri6-testing-2020-04-30-2259-octopus-distro-basic-smithi/5016613
- 07:34 PM Backport #45044: octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34795
merged
- 09:07 AM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
- http://pulpito.ceph.com/swagner-2020-04-30_09:15:46-rados-wip-swagner-testing-2020-04-29-1246-distro-basic-smithi/500...
05/01/2020
- 12:04 PM Backport #45348: octopus: BlueStore asserting on fs upgrade tests
- https://github.com/ceph/ceph/pull/34610
- 11:40 AM Bug #40300: ceph-osd segfault: "rocksdb: Corruption: file is too short"
- Partially fixed by https://github.com/ceph/ceph/pull/34503
- 11:32 AM Backport #45354: octopus: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ...
- This is to be backported once https://github.com/ceph/ceph/pull/33434 is backported to Octopus.
- 11:28 AM Backport #45354 (Rejected): octopus: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 7...
- 05:05 AM Bug #45195 (Pending Backport): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FA...
04/30/2020
- 11:13 PM Backport #45063: octopus: bluestore: unused calculation is broken
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34793
merged
- 02:20 PM Bug #44509 (Won't Fix): using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k lefto...
- 02:00 PM Bug #45195 (Fix Under Review): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FA...
- 08:49 AM Bug #45337 (Fix Under Review): Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
- 08:33 AM Bug #45337 (Pending Backport): Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
- 08:49 AM Backport #45348 (In Progress): octopus: BlueStore asserting on fs upgrade tests
- https://github.com/ceph/ceph/pull/34610
- 08:35 AM Backport #45348 (Resolved): octopus: BlueStore asserting on fs upgrade tests
- https://github.com/ceph/ceph/pull/34610
- 01:30 AM Bug #20236: bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- /a/yuriw-2020-04-28_21:58:13-rados-wip-yuri-testing-2020-04-24-1941-master-distro-basic-smithi/4995289
04/29/2020
- 05:43 PM Bug #45133 (Pending Backport): BlueStore asserting on fs upgrade tests
- 04:27 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
- 04:24 PM Bug #45133: BlueStore asserting on fs upgrade tests
- https://github.com/ceph/ceph/pull/34616 merged
- 04:38 PM Bug #45335: cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_sharding mi...
- related? https://github.com/ceph/ceph/pull/34006/files#diff-238dcf689f6d5fa51b9b9c814f356581R794-R796
- 03:58 PM Bug #45335 (Resolved): cephadm upgrade: OSD.0 is not coming back after restart: rocksdb: verify_...
- ...
- 04:25 PM Bug #45337 (Resolved): Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
- Large buffered writes trigger a _pwritev() call from the KernelDevice::_sync_write() function, which doesn't support more than ...
- 12:32 PM Backport #45330 (In Progress): nautilus: check-generated.sh finds error in ceph-dencoder
- 12:31 PM Backport #45330 (Resolved): nautilus: check-generated.sh finds error in ceph-dencoder
- https://github.com/ceph/ceph/pull/34832
- 10:55 AM Bug #45250 (Pending Backport): check-generated.sh finds error in ceph-dencoder
04/28/2020
- 05:18 PM Backport #45045 (In Progress): nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may da...
- 05:17 PM Backport #45044 (In Progress): octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may dam...
- 05:15 PM Backport #45064 (In Progress): nautilus: bluestore: unused calculation is broken
- 05:14 PM Backport #45063 (In Progress): octopus: bluestore: unused calculation is broken
- 05:11 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
- Thanks a lot. That solves the problem. It would be good to have a note about it in the docs.
- 05:10 PM Backport #43086 (In Progress): mimic: bluefs: sync_metadata leaks dirty files if log_t is empty
- 01:05 PM Backport #45234 (Rejected): mimic: OSD might fail to recover after ENOSPC crash
- Won't do, as it would introduce a super meta version mess. Or one needs to backport per-pool omap stats as well.
- 09:35 AM Backport #45234: mimic: OSD might fail to recover after ENOSPC crash
- Igor Fedotov wrote:
> Duplicates https://tracker.ceph.com/issues/45124
#45124 is an orphan. I deleted it.
- 09:35 AM Backport #45234 (Need More Info): mimic: OSD might fail to recover after ENOSPC crash
- 09:35 AM Backport #45234: mimic: OSD might fail to recover after ENOSPC crash
- @Igor wrote:
Probably this is an overkill, see my comment in https://tracker.ceph.com/issues/45123
04/27/2020
- 12:59 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
- Stefan,
I presume you invoked the bluefs-bdev-new-db command only, didn't you?
In fact this command allocates new DB...
- 12:08 PM Backport #45234 (Duplicate): mimic: OSD might fail to recover after ENOSPC crash
- Duplicates https://tracker.ceph.com/issues/45124
- 08:51 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
- Well, this fix is required if/when 'deferring big writes' feature (https://github.com/ceph/ceph/pull/33434) is backp...
- 08:49 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
- Hi Brad,
no, this is specific to master (Pacific) only
- 12:39 AM Bug #45195 (In Progress): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ...
- 12:38 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
- Igor, does this need to be backported? If so could you set the appropriate releases please? I know you set 16 as the ...
- 03:34 AM Bug #44880 (Resolved): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
04/25/2020
- 08:27 PM Bug #45195 (Fix Under Review): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FA...
04/24/2020
- 11:56 PM Bug #45265: Disable bluestore_fsck_quick_fix_on_mount by default
- Oh, I'm afraid this is a perfect topic for a holy war ;)
I recall a few people who wanted this feature to be enabled...
- 11:25 PM Bug #45265 (Resolved): Disable bluestore_fsck_quick_fix_on_mount by default
- Since the format conversion may take from a few minutes to a few hours, depending on the amount of stored "omap" data...
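As background for the ticket title: the flag in question is a regular OSD config option, so operators who want to avoid a long conversion at restart can pin it off explicitly. A minimal ceph.conf fragment (section placement is conventional and shown for illustration; it is not taken from the ticket):

```
[osd]
# Skip the automatic omap format conversion at OSD mount time; the
# conversion can instead be run later, at a convenient moment.
bluestore_fsck_quick_fix_on_mount = false
```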
- 10:15 PM Bug #45195 (In Progress): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ...
- 10:08 PM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
- The following back trace is likely to be the symptom for the same bug.
-1> 2020-04-24T16:25:20.971+0300 7efe4897e...
- 06:44 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
- Igor, did you have a look at those values? Any chance to fix this manually?
- 06:43 PM Bug #42928: ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- OK, I was able to solve this by using the LV tools, copying all LV flags from the block device to the DB and just changing ...
- 11:51 AM Bug #45110: Extent leak after main device expand
- Re-adding "mimic" to Backports because otherwise "src/script/backport-create-issue" will complain that the issue has ...
- 07:30 AM Bug #45250 (Resolved): check-generated.sh finds error in ceph-dencoder
- [~/master36] wjw@cephdev.digiware.nl> build/bin/ceph-dencoder type bluestore_bdev_label_t select_test 1 encode ex...
04/23/2020
- 10:54 PM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
- ...
- 05:40 AM Bug #45195: ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED ceph_assert(p ...
- Not sure whether this is an issue with the test?
- 05:30 AM Bug #45195 (Resolved): ceph_test_objectstore: src/os/bluestore/bluestore_types.h: 734: FAILED cep...
- ...
- 03:27 PM Backport #45125 (Rejected): mimic: Extent leak after main device expand
- My bad, the main device expansion feature started in Nautilus, hence the code isn't present.
- 02:57 PM Backport #45125 (Need More Info): mimic: Extent leak after main device expand
- @Igor - I looked, but in mimic I can't find the code touched by the master fix.
- 02:54 PM Backport #45234 (Rejected): mimic: OSD might fail to recover after ENOSPC crash
- 02:51 PM Backport #45126 (In Progress): nautilus: Extent leak after main device expand
- 02:51 PM Backport #45127 (In Progress): octopus: Extent leak after main device expand
04/21/2020
- 05:51 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
- sure:
```
# ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-5 stats
2020-04-21 19:51:16.635 7f439d4b7e00  1 ...
```
- 02:09 PM Bug #44509: using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k leftover on slow ...
- Could you please provide an output for the following command (OSD to be offline):
ceph-kvstore-tool bluestore-kv <pa...
- 02:33 PM Bug #42928: ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- is ceph.db_uuid the LVM uuid?
- 02:14 PM Bug #42928: ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- Those two tags are missing:
ceph.db_device=/dev/BLOCKDB
ceph.db_uuid=k7Oc64-aShi-Z4Y0-Opov-daQM-Vdue-kJpIAN
- 05:25 AM Bug #45133: BlueStore asserting on fs upgrade tests
- http://pulpito.ceph.com/bhubbard-2020-04-16_09:57:54-rados-wip-badone-testing-distro-basic-smithi/4957960/
04/20/2020
- 01:24 PM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
- Thanks for your reply. Good to know that @ceph-bluestore-tool@ needs to run on an offline OSD; I did not check that b...
- 12:29 PM Bug #43370 (Rejected): OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs eno...
- Hi Gerdriaan.
Thanks for the update.
First of all, ceph-bluestore-tool is to be executed on an offline OSD, hence the ...
- 08:35 AM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
- Hi Igor,
Output below for each OSD node:...
04/17/2020
- 04:37 PM Bug #45133 (Fix Under Review): BlueStore asserting on fs upgrade tests
- 03:58 PM Bug #45133: BlueStore asserting on fs upgrade tests
- Yes indeed; when starting with format 2, BlueStore::_upgrade_super() invokes _prepare_ondisk_format_super() as...
- 03:10 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
- See for instance http://pulpito.front.sepia.ceph.com/gregf-2020-04-15_20:54:50-fs-wip-greg-testing-415-1-distro-basic...
- 01:48 PM Backport #45122 (In Progress): octopus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34610
- 10:26 AM Backport #45122 (Resolved): octopus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34610
- 01:46 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
- Doing a partial backport for now to work around the issue from comment #2.
- 01:46 PM Backport #45123 (In Progress): nautilus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34611
- 11:06 AM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
- Looks like this backport depends on backporting per-pool omap stats collection support, see
https://github.com/ceph...
- 10:27 AM Backport #45123 (Resolved): nautilus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34611
- 12:54 PM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
- Gerdriaan, sorry, I missed your inquiry.
What I wanted is the output for:
ceph-bluestore-tool --path <path-to-osd> --c...
- 12:47 PM Bug #43068: on disk size (81292) does not match object info size (81237)
- It looks to me that this bug has rather out-of-bluestore scope. Am I right it applies to CephFS files/objects only? A...
- 12:24 AM Bug #43068 (New): on disk size (81292) does not match object info size (81237)
- We got one more hit:
2020-04-16 14:21:30.139493 osd.869 (osd.869) 10192 : cluster [ERR] 6.68b shard 869 soid 6:d16f... - 10:27 AM Backport #45127 (Resolved): octopus: Extent leak after main device expand
- https://github.com/ceph/ceph/pull/34610
- 10:27 AM Backport #45126 (Resolved): nautilus: Extent leak after main device expand
- https://github.com/ceph/ceph/pull/34711
- 10:27 AM Backport #45125 (Rejected): mimic: Extent leak after main device expand
04/16/2020
- 10:21 AM Bug #45112 (Resolved): OSD might fail to recover after ENOSPC crash
- While opening after such a crash, the KV store might need to flush some data and hence needs additional disk space. But the allocato...
- 10:05 AM Bug #45110 (Resolved): Extent leak after main device expand
- To reproduce the issue, one can expand a device of 3,147,480,064 bytes to 4,147,480,064 bytes using the bluefs-bdev-expand comman...
04/11/2020
- 09:42 AM Backport #45064 (Resolved): nautilus: bluestore: unused calculation is broken
- https://github.com/ceph/ceph/pull/34794
- 09:42 AM Backport #45063 (Resolved): octopus: bluestore: unused calculation is broken
- https://github.com/ceph/ceph/pull/34793
- 09:42 AM Backport #45062 (Rejected): mimic: bluestore: unused calculation is broken
- 09:38 AM Backport #45045 (Resolved): nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damag...
- https://github.com/ceph/ceph/pull/34796
- 09:37 AM Backport #45044 (Resolved): octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage...
- https://github.com/ceph/ceph/pull/34795