Bug #21362

cephfs ec data pool + windows fio: ceph cluster stays degraded for several hours, OSDs going up and down

Added by Yong Wang over 6 years ago. Updated over 6 years ago.

Status: Need More Info
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: fs
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

1. Configuration
Version: 12.2.0, installed from pre-built Ceph RPMs, newly installed environment.
CephFS: metadata pool (SSD, 1*3, replica 2), data pool (HDD, 20*3, EC 2+1).
Cluster OS: CentOS 7.3, 3 nodes.
Client OS: Windows 7 running fio (8 threads, iodepth 1, 30g files), 1 node.
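For reference, a minimal sketch of how a layout like this might be created on 12.2.x; the pool names, PG counts and erasure-code-profile name are assumptions, not taken from this report:

    # assumed pool names and PG counts; EC 2+1 profile as described above
    ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
    ceph osd pool create cephfs_data 128 128 erasure ec-2-1
    ceph osd pool set cephfs_data allow_ec_overwrites true    # needed to use an EC pool for CephFS data (BlueStore only)
    ceph osd pool create cephfs_metadata 64 64 replicated
    ceph osd pool set cephfs_metadata size 2                  # replica 2 metadata pool, as reported
    ceph fs new cephfs cephfs_metadata cephfs_data            # depending on release, an EC default data pool may require --force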

2. Operation
Once fio was started, the cluster began to degrade and the PGs went through several different states.
ceph-osd produced a large amount of log output.
From `ceph -s` it could be seen that the number of up OSDs kept increasing and decreasing.
Even after the client I/O was stopped, the cluster stayed in that state for about 2 hours
and could not return to all PGs being active+clean.


Files

12.jpg (12.8 KB) - windows fio arguments - Yong Wang, 09/12/2017 01:52 AM
#1

Updated by Patrick Donnelly over 6 years ago

  • Project changed from Ceph to CephFS
  • Category deleted (129)
  • Status changed from New to Need More Info
  • Assignee deleted (Jos Collin)

cephfs: meta pool (ssd 1*3 replica 2), data pool (hdd 20*3 ec 2+1).

Using replica 2 is strongly advised against. See also: https://www.spinics.net/lists/ceph-users/msg32895.html

We also need more information to advise you on what's wrong. `ceph status` and debug logs would be helpful.
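For reference, output along these lines would cover it (the debug levels below are typical values used when troubleshooting OSDs, not specific to this ticket):

    ceph status
    ceph health detail
    ceph osd tree
    # raise OSD logging at runtime, then attach /var/log/ceph/ceph-osd.*.log
    ceph tell osd.* injectargs '--debug_osd 20 --debug_ms 1'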

#2

Updated by Patrick Donnelly over 6 years ago

  • Target version deleted (v12.2.0)