Bug #40785

In an OSD full scenario, 100% of PGs went into the unknown state when more storage was added

Added by servesha dudhgaonkar 8 months ago. Updated 7 months ago.

Status: Need More Info
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature:

Description

After populating more data, the OSDs became nearfull and then full. When more storage was added in this situation, all PGs went into the unknown state.

How to reproduce it (minimal and precise):

1. Populate a large amount of data.
2. Wait for the OSDs to become nearfull or full.
3. Add more storage to the cluster.
4. Watch "ceph status" to see the PG states (see the command sketch below).
- Kubernetes version: Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+7bd2e5b", GitCommit:"7bd2e5b", GitTreeState:"clean", BuildDate:"2019-05-19T23:52:43Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster type: OpenShift
- Storage backend status: for Ceph, use "ceph health"
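
For reference, the steps above translate roughly into the command sketch below. The pool name "testpool", the bench duration, and the way storage is added are placeholders and assumptions, not details taken from this report; on a Rook/OpenShift cluster, step 3 in particular is normally done through the CephCluster resource rather than the ceph CLI.

    # 1. Populate lots of data into a (hypothetical) test pool.
    ceph osd pool create testpool 64
    rados bench -p testpool 600 write --no-cleanup

    # 2. Wait for OSDs to report nearfull/full; check utilization and health.
    ceph osd df
    ceph health detail    # shows OSD_NEARFULL / OSD_FULL warnings

    # 3. Add more storage to the cluster.
    #    With Rook on OpenShift this is usually done by adding devices or nodes
    #    in the CephCluster custom resource, not with the ceph CLI.

    # 4. Watch the PG states while the new capacity is brought in.
    watch ceph status
    ceph pg stat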

History

#1 Updated by Patrick Donnelly 7 months ago

  • Project changed from Ceph to RADOS

#2 Updated by Neha Ojha 7 months ago

  • Status changed from New to Need More Info

Which ceph version are you running? Can you provide the "ceph -s" output?
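
For anyone picking this up, the requested details can be gathered with standard commands. The namespace and toolbox pod label below are assumptions based on a default Rook deployment, not information from this report.

    # Ceph release per daemon, and full cluster status.
    ceph versions
    ceph -s

    # On a Rook/OpenShift cluster, the same commands can be run from the
    # toolbox pod (assuming the default "rook-ceph" namespace and the
    # "app=rook-ceph-tools" label; adjust for the actual deployment).
    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod \
        -l app=rook-ceph-tools -o name) -- ceph -s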
