Bug #11311 (closed)

failed to mount CephFS when an erasure-code pool was created

Added by tvm tvm about 9 years ago. Updated about 9 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Severity: 3 - minor

Description

I deployed a Ceph cluster, set up an MDS/CephFS, and mounted it on my frontend server.

At the beginning it worked fine, but when I created a new erasure-code pool, the mounted directory hung and could no longer read or write any data to Ceph. At the same time, mounting CephFS also failed; the error message was:

[root@fs-10101020 ~]# ceph osd pool create tdata 12 12 erasure default
pool 'tdata' created

[root@web-01 ~]# mount -t ceph 10.10.10.20:6789:/ /data/ceph -o name=cephfs,secretfile=/etc/ceph/client.cephfs.keyring
mount error 5 = Input/output error
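
A bare "mount error 5 = Input/output error" from the kernel client usually has a more specific cause logged by libceph. As a diagnostic sketch (not part of the original report; the exact log lines will differ), checking the client's kernel log right after the failed mount would typically reveal a feature set mismatch, along the lines of:

[root@web-01 ~]# dmesg | tail
libceph: mon0 10.10.10.20:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
libceph: mon0 10.10.10.20:6789 socket error on read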

Then I deleted the erasure-code pool and tried again, and this time CephFS mounted successfully:

[root@fs-10101020 ~]# ceph osd pool delete tdata2 tdata2 --yes-i-really-really-mean-it
pool 'tdata2' removed

[root@web-01 ~]# mount -t ceph 10.10.10.20:6789:/ /data/ceph -o name=cephfs,secretfile=/etc/ceph/client.cephfs.keyring
OK
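
As a side note (the command is real, but the sample output below is illustrative rather than taken from the reporter's cluster): the `default` argument in `ceph osd pool create tdata 12 12 erasure default` names an erasure-code profile, which can be inspected with:

[root@fs-10101020 ~]# ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van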

ceph -s reported HEALTH_OK the whole time. My environment is:

ceph version:
giant 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)
installed via RPM from http://ceph.com/rpm-giant/el6/SRPMS/

ceph cluster server:
CentOS 6.6, kernel 2.6.32-504.12.2.el6.x86_64

frontend web server (ceph client):
CentOS 6.6, kernel 3.10.71-1.el6.elrepo.x86_64

#1 Updated by Sage Weil about 9 years ago

  • Status changed from New to Resolved

The EC CRUSH rule had a feature that your kernel did not understand. Use a newer kernel.
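
For context (an inference from the resolution, not spelled out in the original): the CRUSH rule created for an erasure-coded pool uses newer rule steps such as set_chooseleaf_tries, which set feature bits (e.g. CRUSH_V2) that every client, including the kernel client, must support; kernel support for these bits only landed in kernels newer than the 3.10 build used here (around the 3.14 era). A sketch of how one might spot the rule step that introduced the requirement (output abbreviated and illustrative):

[root@fs-10101020 ~]# ceph osd crush rule dump
...
        { "op": "set_chooseleaf_tries",
          "num": 5},
...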
