Bug #62836

Ceph: zero IOPS after upgrade to Reef and manual read balancer

Added by Mosharaf Hossain 8 months ago. Updated 5 months ago.

Status: Need More Info
Priority: Normal
Assignee: -
Category: Performance/Resource Usage
Target version: -
% Done: 0%
Source: -
Tags: Reef Update and slow IOPS
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

We recently upgraded our cephadm-managed cluster from Ceph Quincy to Reef. However, after manually applying the read balancer on the Reef cluster, client I/O has slowed down significantly, affecting both client bandwidth and overall cluster performance.

The slowdown has left all virtual machines served by the cluster unresponsive, even though the cluster runs exclusively on SSD storage.
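
For reference, the read balancer was applied by hand. The commands below are a minimal sketch, assuming the offline osdmaptool workflow from the Reef documentation was used; the pool name and PG IDs are placeholders:

# Extract the current osdmap and compute read-balancing
# (pg-upmap-primary) mappings for a single pool
ceph osd getmap -o om
osdmaptool om --read out.txt --read-pool <pool-name>

# out.txt contains "ceph osd pg-upmap-primary <pgid> <osd>" commands;
# applying them requires clients that understand Reef features
ceph osd set-require-min-compat-client reef
source out.txt

# To inspect the applied mappings, or to back them out one at a time:
ceph osd dump | grep pg_upmap_primary
ceph osd rm-pg-upmap-primary <pgid>

Removing the pg-upmap-primary mappings with rm-pg-upmap-primary is a quick way to test whether the read balancer itself is the cause of the stalled I/O.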

Kindly guide us on how to move forward.


Files

dashboard_CEPH-Reef.png (93.1 KB) Mosharaf Hossain, 09/14/2023 04:30 AM
osd_map_ceph2.txt (32.5 KB) Mosharaf Hossain, 09/14/2023 05:16 PM