Activity
From 11/10/2019 to 12/09/2019
12/09/2019
- 05:59 PM Bug #43211 (Duplicate): Bluestore OSDs don't start after upgrade to 14.2.4
- 05:58 PM Bug #43211: Bluestore OSDs don't start after upgrade to 14.2.4
- This is a known bug; you should switch to 14.2.5 once it's out
- 04:24 PM Bug #43211 (Duplicate): Bluestore OSDs don't start after upgrade to 14.2.4
- After an upgrade from 13.2.7 to 14.2.4 the OSDs were no longer able to start. Restarting ceph resulted in losing all ...
12/07/2019
- 06:32 PM Bug #43131: segfault in BlueStore::Collection::split_cache()
- I haven't been able to reproduce it with that PR. See e.g. http://pulpito.ceph.com/sage-2019-12-05_22:59:50-rados-wi...
- 08:20 AM Bug #43183 (Can't reproduce): Segmentation fault in tcmalloc when create osd
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2....
12/06/2019
- 11:07 AM Backport #41810 (In Progress): nautilus: osd_memory_target isn't applied in runtime.
- 09:45 AM Backport #41810 (Need More Info): nautilus: osd_memory_target isn't applied in runtime.
12/05/2019
- 09:40 PM Bug #39318 (New): w_await high when rockdb compacting
- 09:40 PM Support #23433 (New): Ceph cluster doesn't start - ERROR: error creating empty object store in /d...
- 09:37 PM Bug #43147 (New): segv in LruOnodeCacheShard::_pin
- 01:17 PM Bug #43147 (Resolved): segv in LruOnodeCacheShard::_pin
- ...
- 09:37 PM Bug #42010 (New): segv in BlueStore::OnodeSpace::lookup during deletions
- 09:37 PM Bug #40741 (New): Mass OSD failure, unable to restart
- 09:37 PM Bug #40434 (New): ceph-bluestore-tool:bluefs-bdev-migrate might result in broken OSD
- 09:36 PM Bug #38637 (New): BlueStore::ExtentMap::fault_range() assert
- 09:35 PM Bug #20847 (New): low performance for bluestore rbd block creation vs filestore
- 09:35 PM Bug #20236 (New): bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- 07:49 AM Bug #43131: segfault in BlueStore::Collection::split_cache()
- Might be a regression introduced by https://github.com/ceph/ceph/pull/29687. Need to rerun the test without it.
- 04:57 AM Bug #43131 (Resolved): segfault in BlueStore::Collection::split_cache()
- ...
12/02/2019
- 12:42 PM Backport #43087 (Resolved): nautilus: bluefs: sync_metadata leaks dirty files if log_t is empty
- https://github.com/ceph/ceph/pull/34515
- 12:41 PM Backport #43086 (Rejected): mimic: bluefs: sync_metadata leaks dirty files if log_t is empty
- https://github.com/ceph/ceph/pull/34792
- 08:40 AM Bug #42091 (Pending Backport): bluefs: sync_metadata leaks dirty files if log_t is empty
12/01/2019
- 04:28 PM Bug #43068: on disk size (81292) does not match object info size (81237)
- I dumped the on-disk object via the objectstore tool; it seems the last 55 bytes are all zero...
This file is a bash_hist...
- 01:37 AM Bug #43068 (New): on disk size (81292) does not match object info size (81237)
- ...
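The 81292-vs-81237 mismatch above is a 55-byte difference, matching the all-zero tail reported in the comment. A minimal sketch (synthetic payload, not the actual dumped object) of counting trailing zero bytes in a dump:

```python
def trailing_zero_bytes(data: bytes) -> int:
    """Return the number of consecutive zero bytes at the end of data."""
    return len(data) - len(data.rstrip(b"\x00"))

# Synthetic stand-in for an objectstore-tool dump: 81237 bytes of payload
# padded with 55 zero bytes, mirroring the reported 81292-vs-81237 mismatch.
obj = b"\x01" * 81237 + b"\x00" * 55
assert len(obj) == 81292
print(trailing_zero_bytes(obj))  # -> 55
```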
11/23/2019
- 07:12 PM Bug #20236: bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- /a/nojha-2019-11-22_22:10:13-rados-wip-31657-nautilus-distro-basic-smithi/4534580/
11/22/2019
- 04:15 PM Bug #42823: crash in BlueStore::Onode destructor
- New crash today. Looks somewhat similar to the old one. This one is based on e4b3036422df70e3 with some cephfs patche...
- 08:32 AM Bug #41301 (Resolved): os/bluestore/BlueFS: use 64K alloc_size on the shared device
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
11/21/2019
- 01:42 PM Bug #42928 (Closed): ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- Adding a DB Device to an existing OSD with ceph-bluestore-tool bluefs-bdev-new-db does not update the lv tags of the ...
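A possible manual workaround for the entry above is to set the missing tags with LVM directly. This is only a sketch: the tag name follows ceph-volume's lvm tag scheme, and the VG/LV names and device path are placeholders.

```
# Inspect the current tags on the OSD's block LV (placeholder names).
lvs -o lv_tags ceph-block-vg/block-lv

# Point the block LV at the newly added DB device so ceph-volume
# can find it on activation (placeholder device path).
lvchange --addtag "ceph.db_device=/dev/ceph-db-vg/db-lv" ceph-block-vg/block-lv
```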
- 05:52 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- After 24h of testing, we can confirm that the fix is helping.
Good job Igor!
- 01:58 AM Backport #41340 (Resolved): nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30229
m...
11/20/2019
- 08:07 PM Backport #41340: nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30229
merged
- 06:24 PM Bug #42913 (Resolved): nautilus: cram test fails with _do_alloc_write failed with (28) No space l...
- ...
11/19/2019
- 07:26 PM Backport #42834 (Resolved): luminous: STATE_KV_SUBMITTED is set too early.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31674
m...
- 05:02 PM Backport #42834: luminous: STATE_KV_SUBMITTED is set too early.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31674
merged
11/18/2019
- 02:50 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Tobias, thanks for the info. Hopefully we won't need it any more.
Please let us know if the fix helps...
- 07:05 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Hi Igor,
Looks like you fixed the issue :) Nevertheless, I got logs and SSTs from 2 new OSDs if you need further confir...
11/17/2019
- 09:02 AM Backport #40449 (Resolved): nautilus: "no available blob id" assertion might occur
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
m...
- 09:01 AM Backport #40449 (In Progress): nautilus: "no available blob id" assertion might occur
- 09:00 AM Backport #41460 (Resolved): nautilus: incorrect RW_IO_MAX
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31397
m...
11/16/2019
- 09:38 PM Backport #41282 (Resolved): nautilus: BlueStore tool to check fragmentation
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29949
m...
- 03:52 PM Backport #41282: nautilus: BlueStore tool to check fragmentation
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29949
merged
- 04:13 PM Bug #38272 (Resolved): "no available blob id" assertion might occur
- 04:12 PM Backport #40449 (Resolved): nautilus: "no available blob id" assertion might occur
- 04:07 PM Backport #40449: nautilus: "no available blob id" assertion might occur
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/30144
merged
- 04:06 PM Backport #41460: nautilus: incorrect RW_IO_MAX
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31397
merged
- 06:34 AM Backport #42041: nautilus: bluestore objectstore_blackhole=true violates read-after-write
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31019
m...
11/15/2019
- 11:22 PM Bug #40684 (Resolved): bluestore objectstore_blackhole=true violates read-after-write
- 11:21 PM Backport #42041 (Resolved): nautilus: bluestore objectstore_blackhole=true violates read-after-write
- 10:39 PM Backport #42041: nautilus: bluestore objectstore_blackhole=true violates read-after-write
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31019
merged
- 02:33 PM Backport #42834 (In Progress): luminous: STATE_KV_SUBMITTED is set too early.
- 02:29 PM Backport #42834 (Resolved): luminous: STATE_KV_SUBMITTED is set too early.
- https://github.com/ceph/ceph/pull/31674
- 02:30 PM Backport #42833 (In Progress): mimic: STATE_KV_SUBMITTED is set too early.
- 02:29 PM Backport #42833 (Resolved): mimic: STATE_KV_SUBMITTED is set too early.
- https://github.com/ceph/ceph/pull/31673
- 01:12 PM Bug #42209 (Pending Backport): STATE_KV_SUBMITTED is set too early.
- nautilus fix merged in https://github.com/ceph/ceph/pull/30755
- 01:09 PM Bug #42223 (Resolved): ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_a...
11/14/2019
- 09:07 PM Bug #42823 (Duplicate): crash in BlueStore::Onode destructor
- Testing xfstests on cephfs in a vstart cluster, and the OSD crashed:...
- 04:32 PM Bug #42223 (Pending Backport): ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILE...
- backport pr: https://github.com/ceph/ceph/pull/31644
11/13/2019
- 11:47 PM Bug #42284 (Duplicate): fastbmap_allocator_impl.h: FAILED ceph_assert(available >= allocated) in ...
- haha - we've finally got it locally...
https://tracker.ceph.com/issues/42223
- 09:11 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- @Marcin,
003808.sst has the same bluefs log pattern inside at e.g. offset 5687664. So it seems it has been overwritt...
- 08:27 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- @Igor,
Well done!
Would it also explain the problem I had with 003808.sst (30MB) where all others were 16MB as they w...
- 07:48 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Hopefully here is the final analysis for the root cause.
a) analysis for 002375.sst dump from comment #94 reveals ...
- 05:39 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- @Rafal - great, thank you. Now I can tell that data extents in this file which are supposed to be RocksDB data belong...
- 11:25 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Yes, it is available; I didn't redeploy the failed OSDs...
- 07:12 PM Bug #20236: bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- /a/yuriw-2019-11-08_20:50:44-rados-wip-yuri3-testing-2019-11-08-1221-nautilus-distro-basic-smithi/4485359/
- 06:17 PM Bug #20236: bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- This appeared in nautilus
/a/yuriw-2019-11-12_15:30:19-rados-wip-yuri5-testing-2019-11-11-1520-nautilus-distro-bas...
- 05:25 PM Bug #42683: OSD Segmentation fault
- @Antonio - yes, please go ahead.
- 04:17 PM Bug #42683: OSD Segmentation fault
- @Igor, if you don't object, I would scratch the OSD to keep testing the system.
11/12/2019
- 05:00 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- @Rafal, is osd.35 still available? If so, could you please run bluestore-tool:
CEPH_ARGS="--debug-bluefs 10 --log-fil...
- 09:22 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Rafal Wadolowski wrote:
> I just upgraded the version, so they were created with 14.2.3.
> So do you suggest t... - 09:17 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- I just upgraded the version, so they were created with 14.2.3.
So do you suggest to redeploy the cluster?
- Thanks, Rafal! One clarification though - have you redeployed all the OSDs before running Ceph with the latest RocksD...
- 08:55 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- @Igor, the version with upgraded rocksdb broke two OSDs for now. Logs from one of them:...
- 03:15 PM Bug #42166: crash when LRU trimming
- Just to note, the OSD log contains multiple odd checksum verification failures from RocksDB, e.g.
2019-10-02T11:44:22.... - 12:15 AM Bug #42712 (Resolved): ObjectStore/StoreTest.ColSplitTest0/2 hangs
11/11/2019
- 08:48 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- @Krzysztof - what I'd like to do for broken OSDs is:
a) (simple case) export bluefs log using ceph-bluestore-tool's ...
- 08:23 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- I'm actually not sure - I see that the cluster has been upgraded to a new 14.2.4 build with upgraded rocksdb, but osd-...
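The BlueFS export mentioned in the 08:48 PM comment above might look roughly like this. A sketch only: the OSD id and paths are placeholders, and the exact sub-command the truncated comment refers to is not recoverable from the feed.

```
# Stop the OSD, then export its BlueFS contents for offline analysis
# (placeholder OSD id and output directory).
systemctl stop ceph-osd@35
ceph-bluestore-tool bluefs-export \
    --path /var/lib/ceph/osd/ceph-35 \
    --out-dir /tmp/osd35-bluefs
```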
- 06:59 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- @Rafal, @Krzysztof - am I right that the data for the OSDs from comment #87 isn't available any more, as you deployed a new c...
- 03:03 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- I deployed the version with the new rocksdb and started tests. I will come back with results :)
- 01:41 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Hi Marcin,
there is a bluefs_buffered_io parameter (true by default) that controls buffered vs. direct IO mode. And y...
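The bluefs_buffered_io toggle described above can be flipped in ceph.conf; a minimal sketch (OSDs need a restart to pick it up):

```
[osd]
# Switch BlueFS from buffered IO (the default here) to direct IO.
bluefs_buffered_io = false
```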
11/10/2019
- 12:02 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Dev guys,
I haven't checked the source code handling the KV store, but is it possible that some parts use buffered ...