Bug #40785

open

In an OSD-full scenario, 100% of PGs went to unknown state when more storage was added

Added by servesha dudhgaonkar almost 5 years ago. Updated almost 5 years ago.

Status:
Need More Info
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

After populating more data, the OSDs became nearfull and then full. When more storage was added in this situation, all PGs went into the unknown state.

How to reproduce it (minimal and precise):

1. Populate lots of data.
2. Wait for OSDs to become nearfull or full.
3. Add more storage to the cluster.
4. Watch ceph status to see the PG states (see the command sketch after the environment details below).
- Kubernetes version: Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+7bd2e5b", GitCommit:"7bd2e5b", GitTreeState:"clean", BuildDate:"2019-05-19T23:52:43Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster type: OpenShift
- Storage backend status: for Ceph use ceph health
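
A minimal command sketch of the scenario, assuming ceph CLI access (for example via the Rook toolbox pod); the pool name and bench duration are placeholders:

    # 1. Populate data until OSDs report nearfull/full
    rados bench -p <pool-name> 600 write --no-cleanup
    # 2. Watch per-OSD utilization and cluster health
    ceph osd df
    ceph health detail
    # 3. Add OSDs (e.g. by extending the cluster's storage spec), then watch the PG states
    watch ceph status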
#1

Updated by Patrick Donnelly almost 5 years ago

  • Project changed from Ceph to RADOS
#2

Updated by Neha Ojha almost 5 years ago

  • Status changed from New to Need More Info

Which ceph version are you running? Can you provide the "ceph -s" output?
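
For reference, the requested details can typically be collected with commands along these lines (the namespace and toolbox pod name are assumptions for a Rook/OpenShift deployment):

    ceph versions        # Ceph release of every daemon in the cluster
    ceph -s              # cluster status, including PG states and nearfull/full warnings
    # from outside the toolbox pod (names are illustrative):
    kubectl -n rook-ceph exec -it <rook-ceph-tools-pod> -- ceph -s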
