Bug #21362

cephfs ec data pool + windows fio: ceph cluster stays degraded for several hours, OSDs flapping up and down

Added by wangyong wang 2 months ago. Updated about 2 months ago.

Status: Need More Info
Priority: Normal
Assignee: -
Category: -
Target version:
Start date: 09/12/2017
Due date:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: fs
Release:
Component(FS):
Needs Doc: No

Description

1. Configuration
version: 12.2.0, installed from Ceph RPM packages on a newly installed environment.
cephfs: metadata pool (ssd 1*3, replica 2), data pool (hdd 20*3, EC 2+1); a command-level sketch of this layout is shown below.
cluster OS: CentOS 7.3, 3 nodes.
client OS: Windows 7, fio (8 threads, iodepth 1, 30 GB files), 1 node.
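
Below is a minimal sketch of how the pool layout above could be created; pool names, PG counts, and the CRUSH rule name are assumptions and not taken from the report.

```
# Hypothetical reconstruction of the reported layout (names and PG counts assumed)
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
ceph osd pool create cephfs_data 256 256 erasure ec-2-1
ceph osd pool set cephfs_data allow_ec_overwrites true      # required for CephFS on an EC pool
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create cephfs_metadata 64 64 replicated ssd-rule
ceph osd pool set cephfs_metadata size 2                    # the reported (and discouraged) size=2
ceph fs new cephfs cephfs_metadata cephfs_data
# Some releases refuse an EC pool as the default data pool; in that case create a
# replicated default data pool and attach the EC pool with `ceph fs add_data_pool`.
```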

2. Operation
Once fio started, the cluster began to degrade and the PGs went through several states.
ceph-osd produced a large amount of log output.
From `ceph -s` it could be seen that the number of up OSDs repeatedly rose and fell.
Even after client I/O was stopped, the cluster stayed in this state for about 2 hours
and could not return to all PGs active+clean.
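
The exact fio arguments are in the attached 12.jpg; a command line approximating the described workload (8 threads, iodepth 1, 30 GB of files on a Windows client) might look like the sketch below, where the drive letter, block size, and read/write mix are assumptions.

```
rem Illustrative only - the real arguments are in the attached screenshot
fio --name=cephfs-test --ioengine=windowsaio --direct=1 ^
    --numjobs=8 --iodepth=1 --size=30g --rw=write --bs=1m ^
    --directory=Z\:\fio
```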

12.jpg - windows fio arguments (12.8 KB) wangyong wang, 09/12/2017 01:52 AM

History

#1 Updated by Patrick Donnelly about 2 months ago

  • Project changed from Ceph to fs
  • Category deleted (cephfs)
  • Status changed from New to Need More Info
  • Assignee deleted (Jos Collin)

cephfs: meta pool (ssd 1*3 replica 2), data pool (hdd 20*3 ec 2+1).

Using replica 2 is strongly advised against. See also: https://www.spinics.net/lists/ceph-users/msg32895.html

We also need more information to advise you on what's wrong. `ceph status` and debug logs would be helpful.
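
For reference, that information could be collected with something along these lines; the debug levels are illustrative and should be reverted after reproducing the problem.

```
ceph status
ceph health detail
ceph osd tree                                                # shows which OSDs are flapping
ceph tell osd.* injectargs '--debug_osd 20 --debug_ms 1'     # raise OSD log verbosity at runtime
# ... reproduce the fio run, then collect /var/log/ceph/ceph-osd.*.log
ceph tell osd.* injectargs '--debug_osd 1/5 --debug_ms 0/5'  # back to (roughly) default levels
```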
