Bug #13916 (closed): erasure code profile name

Added by Chris Holcombe over 8 years ago. Updated about 8 years ago.

Status: Can't reproduce
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I've been experimenting with Ceph erasure-coded pools. In my naivety I created an erasure-code profile with a dash in the name, and a pool on top of it. Example:

ceph osd erasure-code-profile set myprofile-wide k=3 m=2 ruleset-failure-domain=rack

The pool fails to create because the PGs never get created; they get stuck in creating/incomplete. I ran a query on a few of the PGs and it's not apparent what is wrong: they don't appear to be stuck on or waiting for anything. When I delete the pool and recreate the same thing without the dash in the name, it goes through just fine. I think the ceph tool should reject invalid names. Here's what the command returns when you query the erasure-code profile after creating it:


ceph osd erasure-code-profile get ceph-wide
directory=/usr/lib/x86_64-linux-gnu/ceph/erasure-code
k=10
m=3
plugin=jerasure
ceph-wide=
ruleset-failure-domain=osd
technique=reed_sol_van
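
For reference, a minimal reproduction sketch on a small test cluster; the pool name ecpool, the pg counts and the pg id 1.0 below are placeholders, not values from the report:

# create the dashed profile and an erasure-coded pool on top of it
ceph osd erasure-code-profile set myprofile-wide k=3 m=2 ruleset-failure-domain=rack
ceph osd pool create ecpool 12 12 erasure myprofile-wide

# list PGs that never went active, and get the cluster's own explanation
ceph pg dump_stuck inactive
ceph health detail

# inspect one of the stuck PGs reported above
ceph pg 1.0 query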

#1

Updated by Sage Weil over 8 years ago

  • Assignee set to Loïc Dachary
#2

Updated by Sage Weil over 8 years ago

  • Priority changed from Normal to High
#3

Updated by Loïc Dachary over 8 years ago

I tried the following and the problem does not show up. It would be great if you could provide me with a sequence of commands to reproduce the problem. Thanks :-)

loic@fold:~/software/ceph/ceph/src$ ceph osd erasure-code-profile set myprofile-wide k=3 m=2 ruleset-failure-domain=rack
loic@fold:~/software/ceph/ceph/src$ ceph osd erasure-code-profile get myprofile-wide
jerasure-per-chunk-alignment=false
k=3
m=2
plugin=jerasure
ruleset-failure-domain=rack
ruleset-root=default
technique=reed_sol_van
w=8
loic@fold:~/software/ceph/ceph/src$ ceph osd pool create mypool 1 1 erasure myprofile-wide
loic@fold:~/software/ceph/ceph/src$ ceph osd dump
epoch 14
fsid e509fb9d-2119-400b-b3ee-e595827698a5
created 2015-12-14 13:17:26.718295
modified 2015-12-14 13:21:28.586998
flags sortbitwise
pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 1 flags hashpspool stripe_width 0
pool 1 'mypool' erasure size 5 min_size 3 crush_ruleset 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 14 flags hashpspool stripe_width 4128
max_osd 3
osd.0 up   in  weight 1 up_from 4 up_thru 10 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/22215 127.0.0.1:6801/22215 127.0.0.1:6802/22215 127.0.0.1:6803/22215 exists,up 7a5471f8-514e-4b6c-8647-63a8654e61e6
osd.1 up   in  weight 1 up_from 6 up_thru 10 down_at 0 last_clean_interval [0,0) 127.0.0.1:6804/22415 127.0.0.1:6805/22415 127.0.0.1:6806/22415 127.0.0.1:6807/22415 exists,up 2755a563-fcd9-47a4-89bc-4033e74f855d
osd.2 up   in  weight 1 up_from 9 up_thru 10 down_at 0 last_clean_interval [0,0) 127.0.0.1:6808/22668 127.0.0.1:6809/22668 127.0.0.1:6810/22668 127.0.0.1:6811/22668 exists,up acce5a40-b018-42d2-96f5-ccc563df4b6b
loic@fold:~/software/ceph/ceph/src$ ceph osd pool get mypool erasure_code_profile
erasure_code_profile: myprofile-wide
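
One extra check would distinguish the two outcomes, since the osd dump only proves the pool exists, not that its PGs went active+clean; these are standard status commands, output omitted here:

ceph -s
ceph pg dump pgs_brief
ceph pg dump_stuck inactive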
#4

Updated by Loïc Dachary over 8 years ago

  • Status changed from New to Need More Info
#5

Updated by Loïc Dachary over 8 years ago

  • Assignee deleted (Loïc Dachary)
#6

Updated by Samuel Just about 8 years ago

  • Status changed from Need More Info to Can't reproduce

I'm closing this on the assumption that the original problem was an issue with the reporter's crush map. Reopen if I'm missing something!
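
Consistent with that theory: the profile in the original report uses ruleset-failure-domain=rack with k=3 m=2, so the generated CRUSH rule has to pick 5 distinct racks per PG; on a test map with fewer racks the PGs could never map fully and would stay in creating/incomplete, which would match the symptom. A sketch of how to check this (the rule id 1 and the /tmp path are illustrative):

# see which rule the EC pool uses and what failure domains exist
ceph osd crush rule dump
ceph osd tree

# test whether the rule can actually produce 5 OSDs per PG
ceph osd getcrushmap -o /tmp/crushmap
crushtool -i /tmp/crushmap --test --rule 1 --num-rep 5 --show-bad-mappings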
