Bug #42207
ceph osd df showing 0/0/0
Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I have a Ceph Mimic cluster with 1 mon and 2 OSD nodes. After adding both OSDs to the cluster, my data store still shows 0 B used, 0 B available, 0 B total. All of the disks originally had a size of 10 GiB. I need help.
Updated by Josh Durgin over 4 years ago
Does this persist after you create a pool?
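To check this, the usual sequence is to confirm the OSDs are up and in, create a pool, and re-run the usage commands. A minimal sketch (the pool name `testpool` and the PG count are illustrative, not from this report):

```shell
# Confirm both OSDs are up and in the CRUSH map
ceph osd tree

# Usage stats may read 0/0/0 until placement groups exist;
# create a small test pool (name and pg_num are examples)
ceph osd pool create testpool 32

# Re-check per-OSD and cluster-wide utilization;
# SIZE/AVAIL should now reflect the ~10 GiB disks
ceph osd df
ceph df
```

If `ceph osd tree` shows the OSDs as down or out, the zeroed stats point to the OSDs never joining rather than a reporting issue.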