Activity
From 11/17/2019 to 12/16/2019
12/16/2019
- 11:10 PM Bug #42913 (Fix Under Review): nautilus: cram test fails with _do_alloc_write failed with (28) No...
- We need to increase the block size.
[ubuntu@smithi076 ceph]$ ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE...
- 07:53 PM Bug #42913: nautilus: cram test fails with _do_alloc_write failed with (28) No space left on device
- ...
12/13/2019
- 07:15 PM Documentation #40473 (In Progress): enhance db sizing
- We have started making some improvements in https://github.com/ceph/ceph/pull/32226.
- 02:07 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Due to this bug, I had to mitigate it by setting the RocksDB param min_write_buffer_number_to_merge to 2. Since 14.2.5 ...
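As context for the mitigation above, BlueStore passes RocksDB tuning through the bluestore_rocksdb_options string. A hedged sketch of how such a setting would look in ceph.conf — only min_write_buffer_number_to_merge=2 comes from the comment; note that setting this option replaces Ceph's entire default option string, so in practice the other defaults must be carried along:

```ini
# Hedged sketch: only min_write_buffer_number_to_merge=2 is from the
# comment above; this overrides the full default RocksDB option string.
[osd]
bluestore rocksdb options = min_write_buffer_number_to_merge=2
```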
- 07:15 AM Bug #43297: StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals)
- Not a kernel client issue; it should be bluestore related.
- 03:42 AM Bug #43297 (Resolved): StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals)
- h3. system info...
12/12/2019
- 12:59 AM Bug #43183: Segmentation fault in tcmalloc when create osd
- Yes, DPDK is enabled; the error occurs about 10% of the time, and retrying succeeds.
12/11/2019
- 10:26 PM Bug #43183: Segmentation fault in tcmalloc when create osd
- Looks like SPDK/DPDK mode is enabled. Suggest turning this off as a workaround.
12/10/2019
- 04:04 AM Bug #43147: segv in LruOnodeCacheShard::_pin
- ...
- 03:43 AM Bug #43147: segv in LruOnodeCacheShard::_pin
- /a/sage-2019-12-09_20:35:48-rados:thrash-erasure-code-wip-sage3-testing-2019-12-09-1226-distro-basic-smithi/4585860
...
- 03:40 AM Bug #43217 (Duplicate): segv in BlueStore::OnodeSpace::map_any
- ...
12/09/2019
- 05:59 PM Bug #43211 (Duplicate): Bluestore OSDs don't start after upgrade to 14.2.4
- 05:58 PM Bug #43211: Bluestore OSDs don't start after upgrade to 14.2.4
- This is a known bug; you should switch to 14.2.5 once it's out.
- 04:24 PM Bug #43211 (Duplicate): Bluestore OSDs don't start after upgrade to 14.2.4
- After an upgrade from 13.2.7 to 14.2.4 the OSDs were no longer able to start. Restarting ceph resulted in losing all ...
12/07/2019
- 06:32 PM Bug #43131: segfault in BlueStore::Collection::split_cache()
- I haven't been able to reproduce it with that PR; see e.g. http://pulpito.ceph.com/sage-2019-12-05_22:59:50-rados-wi...
- 08:20 AM Bug #43183 (Can't reproduce): Segmentation fault in tcmalloc when create osd
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2....
12/06/2019
- 11:07 AM Backport #41810 (In Progress): nautilus: osd_memory_target isn't applied in runtime.
- 09:45 AM Backport #41810 (Need More Info): nautilus: osd_memory_target isn't applied in runtime.
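For background on the backport above: osd_memory_target is intended to be adjustable at runtime without restarting the OSDs. A hedged sketch of the usual invocation via the centralized config store (the 6 GiB value and osd.0 target are illustrative):

```shell
# Hedged sketch: set the per-OSD memory target to 6 GiB at runtime
# (value is illustrative).
ceph config set osd osd_memory_target 6442450944

# Verify the value a specific daemon is actually using:
ceph config show osd.0 osd_memory_target
```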
12/05/2019
- 09:40 PM Bug #39318 (New): w_await high when rockdb compacting
- 09:40 PM Support #23433 (New): Ceph cluster doesn't start - ERROR: error creating empty object store in /d...
- 09:37 PM Bug #43147 (New): segv in LruOnodeCacheShard::_pin
- 01:17 PM Bug #43147 (Resolved): segv in LruOnodeCacheShard::_pin
- ...
- 09:37 PM Bug #42010 (New): segv in BlueStore::OnodeSpace::lookup during deletions
- 09:37 PM Bug #40741 (New): Mass OSD failure, unable to restart
- 09:37 PM Bug #40434 (New): ceph-bluestore-tool:bluefs-bdev-migrate might result in broken OSD
- 09:36 PM Bug #38637 (New): BlueStore::ExtentMap::fault_range() assert
- 09:35 PM Bug #20847 (New): low performance for bluestore rbd block creation vs filestore
- 09:35 PM Bug #20236 (New): bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- 07:49 AM Bug #43131: segfault in BlueStore::Collection::split_cache()
- might be a regression introduced by https://github.com/ceph/ceph/pull/29687. need to rerun the test without it.
- 04:57 AM Bug #43131 (Resolved): segfault in BlueStore::Collection::split_cache()
- ...
12/02/2019
- 12:42 PM Backport #43087 (Resolved): nautilus: bluefs: sync_metadata leaks dirty files if log_t is empty
- https://github.com/ceph/ceph/pull/34515
- 12:41 PM Backport #43086 (Rejected): mimic: bluefs: sync_metadata leaks dirty files if log_t is empty
- https://github.com/ceph/ceph/pull/34792
- 08:40 AM Bug #42091 (Pending Backport): bluefs: sync_metadata leaks dirty files if log_t is empty
12/01/2019
- 04:28 PM Bug #43068: on disk size (81292) does not match object info size (81237)
- I dumped the on-disk object via the objectstore tool; it seems the last 55 bytes are all zero...
This file is a bash_hist...
- 01:37 AM Bug #43068 (New): on disk size (81292) does not match object info size (81237)
- ...
11/25/2019
11/23/2019
- 07:12 PM Bug #20236: bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- /a/nojha-2019-11-22_22:10:13-rados-wip-31657-nautilus-distro-basic-smithi/4534580/
11/22/2019
- 04:15 PM Bug #42823: crash in BlueStore::Onode destructor
- New crash today. Looks somewhat similar to the old one. This one is based on e4b3036422df70e3 with some cephfs patche...
- 08:32 AM Bug #41301 (Resolved): os/bluestore/BlueFS: use 64K alloc_size on the shared device
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
11/21/2019
- 01:42 PM Bug #42928 (Closed): ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags
- Adding a DB Device to an existing OSD with ceph-bluestore-tool bluefs-bdev-new-db does not update the lv tags of the ...
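A hedged sketch of the manual follow-up this bug implies: after ceph-bluestore-tool attaches the new DB device, the ceph-volume lv tags have to be brought up to date by hand. Device paths and LV names below are illustrative, and only a subset of the tags ceph-volume maintains is shown:

```shell
# Hedged sketch; paths and LV names are illustrative. After attaching
# a new DB device to an existing OSD:
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/vg-db/db-osd0

# ...the lv tags still describe the old layout, so the DB-related tags
# (ceph-volume convention: ceph.db_device etc.) must be added manually
# to the block LV; this shows only one of the affected tags:
lvchange --addtag ceph.db_device=/dev/vg-db/db-osd0 /dev/vg-block/block-osd0
```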
- 05:52 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- After 24h of testing, we can confirm that the fix is helping.
Good job Igor!
- 01:58 AM Backport #41340 (Resolved): nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30229
m...
11/20/2019
- 08:07 PM Backport #41340: nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared device
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30229
merged
- 06:24 PM Bug #42913 (Resolved): nautilus: cram test fails with _do_alloc_write failed with (28) No space l...
- ...
11/19/2019
- 07:26 PM Backport #42834 (Resolved): luminous: STATE_KV_SUBMITTED is set too early.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31674
m...
- 05:02 PM Backport #42834: luminous: STATE_KV_SUBMITTED is set too early.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31674
merged
11/18/2019
- 02:50 PM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Tobias, thanks for the info. Hopefully we won't need it any more.
Please let us know if the fix helps...
- 07:05 AM Bug #42223: ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(avail...
- Hi Igor,
Looks like you fixed the issue :) Nevertheless, I got logs and SSTs of 2 new OSDs if you need further confir...
11/17/2019
- 09:02 AM Backport #40449 (Resolved): nautilus: "no available blob id" assertion might occur
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
m...
- 09:01 AM Backport #40449 (In Progress): nautilus: "no available blob id" assertion might occur
- 09:00 AM Backport #41460 (Resolved): nautilus: incorrect RW_IO_MAX
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31397
m...