Bug #20541

Closed

rgw: s3 put obj hangs with ec data pool on filestore

Added by Aleksei Gutikov almost 7 years ago. Updated almost 7 years ago.

Status:
Closed
Priority:
High
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
Yes
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rgw
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

luminous 12.1.0 and master(676e1c9b7cc81467c527e9eae118cbfb8b29ca34)

How to reproduce:

$ ../src/vstart.sh -d -n -x -l --osd_num 5 --rgw_num 1
$ ceph osd erasure-code-profile set ec-profile k=3 m=2 ruleset-failure-domain=host
$ ceph osd pool create default.rgw.buckets.data 12 12 erasure ec-profile
$ ceph osd pool ls detail
(note that the stripe width is not 4096 but 4096*3; not sure whether that is related)
$ s3cmd mb s3://1111
$ s3cmd put test.txt s3://1111/xxx
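The 4096*3 stripe width noted above is actually expected for this pool: a minimal sketch of the arithmetic, assuming Ceph's default stripe_unit of 4096 bytes and stripe_width = stripe_unit * k:

```shell
# Ceph computes an EC pool's stripe_width as stripe_unit * k.
# With the default stripe_unit of 4096 bytes and k=3 from ec-profile above:
k=3
stripe_unit=4096
echo $((stripe_unit * k))   # prints 12288, i.e. 4096*3
```

So the value shown by `ceph osd pool ls detail` is normal for a k=3 profile and unrelated to the hang.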

rgw hangs in
int RGWPutObjProcessor_Atomic::complete_writing_data()
r = drain_pending();

Actions #1

Updated by Aleksei Gutikov almost 7 years ago

Not only on filestore; the same happens on bluestore.

Actions #2

Updated by Chang Liu almost 7 years ago

You have to set ruleset-failure-domain to osd: there are not enough hosts for a 3+2 profile, so your cluster is not in an active+clean state. You can check with "./bin/ceph -s".
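Following this comment, a sketch of the corrected steps for a single-host vstart cluster (the commands are the ones from the report with only the failure domain changed; k=3 m=2 needs 5 distinct failure domains, which one host cannot provide at host granularity):

```shell
# Same profile as in the report, but with the failure domain set to osd so
# CRUSH can place all k+m=5 chunks across the 5 OSDs of a one-host cluster.
ceph osd erasure-code-profile set ec-profile k=3 m=2 ruleset-failure-domain=osd
ceph osd pool create default.rgw.buckets.data 12 12 erasure ec-profile
ceph -s   # wait for the PGs to report active+clean before retrying the s3cmd put
```

These commands assume a running vstart cluster as started in the reproduce steps.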

Actions #3

Updated by Aleksei Gutikov almost 7 years ago

Thanks, exactly, my mistake.
Not a bug; please close the ticket.

Actions #4

Updated by Abhishek Lekshmanan almost 7 years ago

  • Status changed from New to Closed
