Bug #40785
In case of an OSD full scenario, 100% of PGs went to unknown state when more storage was added
Status:
Need More Info
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
After populating more data, the OSDs became nearfull and then full. When more storage was added in this situation, all PGs went into the unknown state.
How to reproduce it (minimal and precise):
1. Populate lots of data.
2. Wait for OSDs to become nearfull or full.
3. Add more storage to the cluster.
4. Watch ceph status to observe the PG states.
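The steps above can be driven with the standard Ceph CLI. A minimal sketch, assuming a running cluster with a pool named `rbd` (the pool name and bench duration are assumptions, not from this report):

```shell
# Step 1: populate data until OSDs approach their full ratio.
# Pool name "rbd" and the 600 s duration are illustrative assumptions.
rados -p rbd bench 600 write --no-cleanup

# Step 2: watch OSD utilization and health while filling;
# nearfull/full OSDs raise OSD_NEARFULL / OSD_FULL health warnings.
ceph osd df
ceph health detail

# Step 3: add more storage (new OSDs) to the cluster, then
# Step 4: watch the PG states.
ceph status
ceph pg stat
```

If the bug reproduces, `ceph pg stat` reports PGs in the `unknown` state after the new OSDs are added.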
- Kubernetes version : Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+7bd2e5b", GitCommit:"7bd2e5b", GitTreeState:"clean", BuildDate:"2019-05-19T23:52:43Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster type : OpenShift
- Storage backend status : see `ceph health` output