Activity
From 03/30/2021 to 04/28/2021
04/28/2021
- 11:38 AM Support #50309 (Resolved): bluestore_min_alloc_size_hdd = 4096
- This has now been backported for the next nautilus/octopus releases: #50549, #50550.
- 11:15 AM Support #50309: bluestore_min_alloc_size_hdd = 4096
- Thanks.
- 08:33 AM Support #50309: bluestore_min_alloc_size_hdd = 4096
- min_alloc_size is printed in hex at debug_bluestore level 10 when the superblock is opened:...
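For reference, a minimal sketch of how to surface it, assuming the default log location and osd.0 as an example id:
    # raise bluestore debug logging for one OSD
    ceph config set osd.0 debug_bluestore 10/10
    # restart so _open_super_meta runs and logs the superblock values
    systemctl restart ceph-osd@0
    # the value is printed in hex, e.g. 0x1000 == 4096
    grep min_alloc_size /var/log/ceph/ceph-osd.0.log
    # restore the default afterwards
    ceph config rm osd.0 debug_bluestore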
- 08:02 AM Support #50309: bluestore_min_alloc_size_hdd = 4096
- Can someone help?
- 11:36 AM Bug #50550 (Resolved): octopus: os/bluestore: be more verbose in _open_super_meta by default
- 09:02 AM Bug #50550 (Fix Under Review): octopus: os/bluestore: be more verbose in _open_super_meta by default
- 08:56 AM Bug #50550 (Resolved): octopus: os/bluestore: be more verbose in _open_super_meta by default
- backport https://github.com/ceph/ceph/pull/30838/commits/4087f82aea674df4c7b485bf804f3a9c98ae3741 only
- 11:35 AM Bug #50549 (Resolved): nautilus: os/bluestore: be more verbose in _open_super_meta by default
- 09:01 AM Bug #50549 (Fix Under Review): nautilus: os/bluestore: be more verbose in _open_super_meta by def...
- 08:56 AM Bug #50549 (Resolved): nautilus: os/bluestore: be more verbose in _open_super_meta by default
- backport https://github.com/ceph/ceph/pull/30838/commits/4087f82aea674df4c7b485bf804f3a9c98ae3741 only
- 10:08 AM Bug #50555 (Resolved): AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
- This started happening for an existing OSD after an upgrade from 15.2.2 octopus to 16.2.0 pacific, but it turns out t...
04/27/2021
- 07:59 AM Bug #50511: osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
- Hi,
I am able to reproduce this behaviour:
- Create lots of snapshots on CephFS on an active filesystem
- rmdir lo...
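A minimal repro sketch of those steps, assuming a CephFS mount at /mnt/cephfs and a test directory (both names hypothetical):
    # create lots of snapshots on an active filesystem
    for i in $(seq 1 100); do mkdir /mnt/cephfs/testdir/.snap/snap-$i; done
    # removing them queues snaptrim work on the OSDs
    for i in $(seq 1 100); do rmdir /mnt/cephfs/testdir/.snap/snap-$i; done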
04/25/2021
- 06:10 PM Bug #50511 (Need More Info): osd: rmdir .snap/snap triggers snaptrim and then crashes various OSDs
- Hi,
we use http://manpages.ubuntu.com/manpages/artful/man1/cephfs-snap.1.html to create hourly, daily, weekly and mo...
04/24/2021
- 01:12 PM Bug #46490: osds crashing during deep-scrub
- Maximilian Stinsky wrote:
> Igor Fedotov wrote:
> > Maximilian Stinsky wrote:
> > > Hi,
> > >
> > > we tried up...
04/23/2021
- 09:38 AM Bug #46490: osds crashing during deep-scrub
- Igor Fedotov wrote:
> Maximilian Stinsky wrote:
> > Hi,
> >
> > we tried upgrading our cluster to version 14.2.1...
04/21/2021
- 12:37 PM Bug #46490: osds crashing during deep-scrub
- Maximilian Stinsky wrote:
> Hi,
>
> we tried upgrading our cluster to version 14.2.18 but still have the random s...
- 12:31 PM Bug #46490: osds crashing during deep-scrub
- Hi,
we tried upgrading our cluster to version 14.2.18 but still have the random scrub errors on the EC pool every...
- 09:10 AM Bug #45765 (Resolved): BlueStore::_collection_list causes huge latency growth pg deletion
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:01 AM Backport #49966 (Resolved): nautilus: BlueStore::_collection_list causes huge latency growth pg d...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40393
m...
- 06:25 AM Support #50309: bluestore_min_alloc_size_hdd = 4096
- We encountered a problem: some of our disks are old and some are new, and we have no ability to identify ...
04/19/2021
- 05:41 PM Backport #50403 (In Progress): nautilus: Increase default value of bluestore_cache_trim_max_skip_...
- 05:38 PM Backport #50405 (In Progress): octopus: Increase default value of bluestore_cache_trim_max_skip_p...
- 05:34 PM Backport #50402 (In Progress): pacific: Increase default value of bluestore_cache_trim_max_skip_p...
04/16/2021
- 08:05 PM Backport #50405 (Resolved): octopus: Increase default value of bluestore_cache_trim_max_skip_pinned
- https://github.com/ceph/ceph/pull/40919
- 08:05 PM Backport #50403 (Resolved): nautilus: Increase default value of bluestore_cache_trim_max_skip_pinned
- https://github.com/ceph/ceph/pull/40920
- 08:05 PM Backport #50402 (Resolved): pacific: Increase default value of bluestore_cache_trim_max_skip_pinned
- https://github.com/ceph/ceph/pull/40918
- 08:01 PM Bug #50217 (Pending Backport): Increase default value of bluestore_cache_trim_max_skip_pinned
04/13/2021
- 07:54 AM Support #50309 (Resolved): bluestore_min_alloc_size_hdd = 4096
- Hi,
We’ve changed ‘bluestore_min_alloc_size_hdd’ to 4096 in ceph.conf and deployed disks with the new configuration...
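For context, the change amounts to a ceph.conf fragment along these lines; note that min_alloc_size is persisted at OSD creation (mkfs) time, so it only applies to OSDs deployed after the setting is in place:
    [osd]
    # only takes effect for OSDs created after this is set
    bluestore_min_alloc_size_hdd = 4096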
04/12/2021
- 03:21 PM Backport #49966: nautilus: BlueStore::_collection_list causes huge latency growth pg deletion
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40393
merged
- 10:28 AM Bug #50297 (New): long osd online compaction: mon wrongly marks osd down
- 1. ceph tell osd.43 compact...
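One possible mitigation sketch while a long compaction runs, assuming it is acceptable to suppress down markings cluster-wide for the duration:
    # keep the monitors from marking the busy OSD down
    ceph osd set nodown
    ceph tell osd.43 compact
    # restore normal down handling afterwards
    ceph osd unset nodown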
- 08:27 AM Bug #47330 (Resolved): ceph-osd can't start when CURRENT file does not end with newline or conten...
- 04:22 AM Bug #47740: OSD crash when increase pg_num
- We have also experienced similar issues in our v15.2.4 clusters recently. We are running tests: we keep writing data into poo...
04/10/2021
- 05:14 PM Backport #49980 (Need More Info): nautilus: BlueRocksEnv::GetChildren may pass trailing slashes t...
- It is not clear whether the master changeset constitutes the minimal fix.
04/09/2021
- 04:38 PM Bug #50217 (Fix Under Review): Increase default value of bluestore_cache_trim_max_skip_pinned
- 10:20 AM Bug #49383 (Resolved): BlueFS reads might improperly rebuild internal buffer under a shared lock
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 AM Bug #49900 (Resolved): _txc_add_transaction error (39) Directory not empty not handled on operati...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 AM Backport #49964 (Resolved): octopus: BlueStore::_collection_list causes huge latency growth pg de...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40392
m...
- 09:42 AM Backport #49990 (Resolved): octopus: _txc_add_transaction error (39) Directory not empty not hand...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40441
m...
- 09:32 AM Backport #49385 (Resolved): nautilus: BlueFS reads might improperly rebuild internal buffer under...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39883
m...
04/08/2021
- 05:53 PM Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
- Another attempt on a different host, this time upgrading just one 1T device...
I've set @ceph config set osd bluestore_fs...
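For reference, a sketch of toggling the quick-fix option tracked in #45265 below (not necessarily the exact option set here):
    # skip the automatic quick-fix fsck when OSDs mount
    ceph config set osd bluestore_fsck_quick_fix_on_mount false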
04/07/2021
- 11:05 PM Bug #50217 (Resolved): Increase default value of bluestore_cache_trim_max_skip_pinned
- The current default value of 64 has been shown to be very low for large clusters. In some cases, this has led to huge mem...
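Until a new default lands, a sketch of raising it by hand (the value 1000 is illustrative, not necessarily what the fix chose):
    # allow the cache trim loop to skip more pinned entries per pass
    ceph config set osd bluestore_cache_trim_max_skip_pinned 1000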
- 09:14 PM Bug #47453 (New): checksum failures lead to assert on OSD shutdown in lab tests
- ...
- 05:02 PM Backport #49964: octopus: BlueStore::_collection_list causes huge latency growth pg deletion
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40392
merged
- 03:35 PM Backport #49990: octopus: _txc_add_transaction error (39) Directory not empty not handled on oper...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40441
merged
- 08:39 AM Feature #41691 (Resolved): os/BlueStore: avoid double caching bluestore onodes in rocksdb block_c...
04/05/2021
- 03:10 PM Backport #49385: nautilus: BlueFS reads might improperly rebuild internal buffer under a shared ...
- singuliere _ wrote:
> please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/39883...
04/02/2021
- 07:11 AM Bug #45265 (Resolved): Disable bluestore_fsck_quick_fix_on_mount by default
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:11 AM Backport #49965 (Resolved): pacific: BlueStore::_collection_list causes huge latency growth pg de...
- 07:10 AM Backport #49920 (Resolved): pacific: Disable bluestore_fsck_quick_fix_on_mount by default
- 05:52 AM Backport #49386 (Resolved): octopus: BlueFS reads might improperly rebuild internal buffer under ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39884
m...