Bug #9917

closed

RADOSGW: Not able to create Swift objects with erasure coded pool

Added by pushpesh sharma over 9 years ago. Updated over 9 years ago.

Status: Won't Fix
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: other
Severity: 3 - minor

Description

ceph@Ubuntu14:~/ceph-0.86/src$ MON=3 MDS=0 RGW=1 OSD=3 ./vstart.sh -d -n -x -r
** going verbose **
[./fetch_config /tmp/fetched.ceph.conf.5971] === osd.2 ===
Stopping Ceph osd.2 on Ubuntu14...done === osd.1 ===
Stopping Ceph osd.1 on Ubuntu14...done === osd.0 ===
Stopping Ceph osd.0 on Ubuntu14...done === mon.c ===
Stopping Ceph mon.c on Ubuntu14...done === mon.b ===
Stopping Ceph mon.b on Ubuntu14...done === mon.a ===
Stopping Ceph mon.a on Ubuntu14...done
hostname Ubuntu14
ip 10.66.26.231
ip 10.66.26.231
creating /home/ceph/ceph-0.86/src/keyring
./monmaptool --create --clobber --add a 10.66.26.231:6789 --add b 10.66.26.231:6790 --add c 10.66.26.231:6791 --print /tmp/ceph_monmap.5970
./monmaptool: monmap file /tmp/ceph_monmap.5970
./monmaptool: generated fsid 666f92fe-5982-42c5-b8d7-867849eeb381
epoch 0
fsid 666f92fe-5982-42c5-b8d7-867849eeb381
last_changed 2014-10-28 15:50:41.096164
created 2014-10-28 15:50:41.096164
0: 10.66.26.231:6789/0 mon.a
1: 10.66.26.231:6790/0 mon.b
2: 10.66.26.231:6791/0 mon.c
./monmaptool: writing epoch 0 to /tmp/ceph_monmap.5970 (3 monitors)
rm -rf /home/ceph/ceph-0.86/src/dev/mon.a
mkdir -p /home/ceph/ceph-0.86/src/dev/mon.a
./ceph-mon --mkfs -c /home/ceph/ceph-0.86/src/ceph.conf -i a --monmap=/tmp/ceph_monmap.5970 --keyring=/home/ceph/ceph-0.86/src/keyring
./ceph-mon: set fsid to e4913316-61d9-47cb-909f-c3e70d926751
./ceph-mon: created monfs at /home/ceph/ceph-0.86/src/dev/mon.a for mon.a
rm -rf /home/ceph/ceph-0.86/src/dev/mon.b
mkdir -p /home/ceph/ceph-0.86/src/dev/mon.b
./ceph-mon --mkfs -c /home/ceph/ceph-0.86/src/ceph.conf -i b --monmap=/tmp/ceph_monmap.5970 --keyring=/home/ceph/ceph-0.86/src/keyring
./ceph-mon: set fsid to e4913316-61d9-47cb-909f-c3e70d926751
./ceph-mon: created monfs at /home/ceph/ceph-0.86/src/dev/mon.b for mon.b
rm -rf /home/ceph/ceph-0.86/src/dev/mon.c
mkdir -p /home/ceph/ceph-0.86/src/dev/mon.c
./ceph-mon --mkfs -c /home/ceph/ceph-0.86/src/ceph.conf -i c --monmap=/tmp/ceph_monmap.5970 --keyring=/home/ceph/ceph-0.86/src/keyring
./ceph-mon: set fsid to e4913316-61d9-47cb-909f-c3e70d926751
./ceph-mon: created monfs at /home/ceph/ceph-0.86/src/dev/mon.c for mon.c
./ceph-mon -i a -c /home/ceph/ceph-0.86/src/ceph.conf
./ceph-mon -i b -c /home/ceph/ceph-0.86/src/ceph.conf
./ceph-mon -i c -c /home/ceph/ceph-0.86/src/ceph.conf
./vstart.sh: 482: ./vstart.sh: btrfs: not found
add osd0 0d955f1d-debe-43c2-921b-3464b70d7865
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
0
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
add item id 0 name 'osd.0' weight 1 at location {host=Ubuntu14,root=default} to crush map
2014-10-28 15:50:52.320731 7f1259895900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-28 15:50:52.321044 7f1259895900 -1 journal check: ondisk fsid d1cec18a-c0fd-4cc5-ade6-24a84b463c36 doesn't match expected 0d955f1d-debe-43c2-921b-3464b70d7865, invalid (someone else's?) journal
2014-10-28 15:50:52.464316 7f1259895900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-28 15:50:52.466073 7f1259895900 -1 filestore(/home/ceph/ceph-0.86/src/dev/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-28 15:50:52.646634 7f1259895900 -1 created object store /home/ceph/ceph-0.86/src/dev/osd0 journal /home/ceph/ceph-0.86/src/dev/osd0.journal for osd.0 fsid e4913316-61d9-47cb-909f-c3e70d926751
2014-10-28 15:50:52.646721 7f1259895900 -1 auth: error reading file: /home/ceph/ceph-0.86/src/dev/osd0/keyring: can't open /home/ceph/ceph-0.86/src/dev/osd0/keyring: (2) No such file or directory
2014-10-28 15:50:52.646993 7f1259895900 -1 created new key in keyring /home/ceph/ceph-0.86/src/dev/osd0/keyring
adding osd0 key to auth repository
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.0
start osd0
./ceph-osd -i 0 -c /home/ceph/ceph-0.86/src/ceph.conf
starting osd.0 at :/0 osd_data /home/ceph/ceph-0.86/src/dev/osd0 /home/ceph/ceph-0.86/src/dev/osd0.journal
./vstart.sh: 482: ./vstart.sh: btrfs: not found
add osd1 9bbca84a-dac3-41dd-8003-67da1d33d71e
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
1
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
add item id 1 name 'osd.1' weight 1 at location {host=Ubuntu14,root=default} to crush map
2014-10-28 15:50:55.168752 7f378303b900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-28 15:50:55.169446 7f378303b900 -1 journal check: ondisk fsid 180077ad-32df-44ec-91f0-016965fb3aec doesn't match expected 9bbca84a-dac3-41dd-8003-67da1d33d71e, invalid (someone else's?) journal
2014-10-28 15:50:55.384939 7f378303b900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-28 15:50:55.387164 7f378303b900 -1 filestore(/home/ceph/ceph-0.86/src/dev/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-28 15:50:55.517814 7f378303b900 -1 created object store /home/ceph/ceph-0.86/src/dev/osd1 journal /home/ceph/ceph-0.86/src/dev/osd1.journal for osd.1 fsid e4913316-61d9-47cb-909f-c3e70d926751
2014-10-28 15:50:55.517898 7f378303b900 -1 auth: error reading file: /home/ceph/ceph-0.86/src/dev/osd1/keyring: can't open /home/ceph/ceph-0.86/src/dev/osd1/keyring: (2) No such file or directory
2014-10-28 15:50:55.517968 7f378303b900 -1 created new key in keyring /home/ceph/ceph-0.86/src/dev/osd1/keyring
adding osd1 key to auth repository
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.1
start osd1
./ceph-osd -i 1 -c /home/ceph/ceph-0.86/src/ceph.conf
starting osd.1 at :/0 osd_data /home/ceph/ceph-0.86/src/dev/osd1 /home/ceph/ceph-0.86/src/dev/osd1.journal
./vstart.sh: 482: ./vstart.sh: btrfs: not found
add osd2 b1153298-059b-4723-b5dd-d5cd6c72e74c
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
add item id 2 name 'osd.2' weight 1 at location {host=Ubuntu14,root=default} to crush map
2014-10-28 15:50:58.110529 7fd9dbd60900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-28 15:50:58.115296 7fd9dbd60900 -1 journal check: ondisk fsid 01ef45f0-d87b-40e9-afe5-5a5974414957 doesn't match expected b1153298-059b-4723-b5dd-d5cd6c72e74c, invalid (someone else's?) journal
2014-10-28 15:50:58.496037 7fd9dbd60900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-28 15:50:58.501966 7fd9dbd60900 -1 filestore(/home/ceph/ceph-0.86/src/dev/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-28 15:50:58.733315 7fd9dbd60900 -1 created object store /home/ceph/ceph-0.86/src/dev/osd2 journal /home/ceph/ceph-0.86/src/dev/osd2.journal for osd.2 fsid e4913316-61d9-47cb-909f-c3e70d926751
2014-10-28 15:50:58.733399 7fd9dbd60900 -1 auth: error reading file: /home/ceph/ceph-0.86/src/dev/osd2/keyring: can't open /home/ceph/ceph-0.86/src/dev/osd2/keyring: (2) No such file or directory
2014-10-28 15:50:58.733467 7fd9dbd60900 -1 created new key in keyring /home/ceph/ceph-0.86/src/dev/osd2/keyring
adding osd2 key to auth repository
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.2
start osd2
./ceph-osd -i 2 -c /home/ceph/ceph-0.86/src/ceph.conf
starting osd.2 at :/0 osd_data /home/ceph/ceph-0.86/src/dev/osd2 /home/ceph/ceph-0.86/src/dev/osd2.journal
start rgw on http://localhost:8000
setting up user testid
2014-10-28 15:50:59.192996 7f0dbca31840 0 couldn't find old data placement pools config, setting up new ones for the zone
setting up user tester
started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.
export PYTHONPATH=./pybind
export LD_LIBRARY_PATH=.libs
ceph@Ubuntu14:~/ceph-0.86/src$ swift -A http://localhost:8000/auth -U tester:testing -K asdf list
./ceph osd pool delete .rgw .rgw --yes-i-really-really-mean-it
./ceph osd pool delete .log .log --yes-i-really-really-mean-it
./ceph osd pool delete .rgw.buckets.index .rgw.buckets.index --yes-i-really-really-mean-it
./ceph osd pool delete .rgw.control .rgw.control --yes-i-really-really-mean-it
./ceph osd pool delete .rgw.gc .rgw.gc --yes-i-really-really-mean-it
./ceph osd pool delete .rgw.buckets .rgw.buckets --yes-i-really-really-mean-it
./ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
./ceph osd pool delete .users .users --yes-i-really-really-mean-it
./ceph osd pool delete .users.email .users.email --yes-i-really-really-mean-it
./ceph osd pool delete .users.swift .users.swift --yes-i-really-really-mean-it
./ceph osd pool delete .users.uid .users.uid --yes-i-really-really-mean-it
./ceph osd pool create .rgw.buckets 20 20 erasure
./ceph osd pool create .rgw.buckets.index 20 20 erasure
./ceph osd pool create .rgw.root 20 20 erasure
./ceph osd pool create .log 20 20 erasure
./ceph osd pool create .rgw.control 20 20 erasure
./ceph osd pool create .rgw.gc 20 20 erasure
./ceph osd pool create .rgw 20 20 erasure
./ceph osd pool create .users 20 20 erasure
./ceph osd pool create .users.email 20 20 erasure
./ceph osd pool create .users.swift 20 20 erasure
./ceph osd pool create .users.uid 20 20 erasure
./ceph osd pool create .rgw.buckets.extra 20 20 erasure
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool '.rgw' removed
pool '.log' does not exist
pool '.rgw.buckets.index' does not exist
pool '.rgw.control' removed
pool '.rgw.gc' removed
pool '.rgw.buckets' does not exist
pool '.rgw.root' removed
pool '.users' removed
pool '.users.email' removed
pool '.users.swift' removed
pool '.users.uid' removed
pool '.rgw.buckets' created
pool '.rgw.buckets.index' created
pool '.rgw.root' created
pool '.log' created
pool '.rgw.control' created
pool '.rgw.gc' created
pool '.rgw' created
pool '.users' created
pool '.users.email' created
pool '.users.swift' created
pool '.users.uid' created
pool '.rgw.buckets.extra' created
ceph@Ubuntu14:~/ceph-0.86/src$ ./radosgw-admin zone set ./zone.txt
2014-10-28 15:55:28.245478 7fae34c11840 0 couldn't find old data placement pools config, setting up new ones for the zone
2014-10-28 15:55:28.326468 7fae34c11840 0 couldn't find old data placement pools config, setting up new ones for the zone

ceph@Ubuntu14:~/ceph-0.86/src$ swift -A http://localhost:8000/auth -U tester:testing -K asdf list
ceph@Ubuntu14:~/ceph-0.86/src$ swift -A http://localhost:8000/auth -U tester:testing -K asdf upload mycontainer ceph
Error trying to create container 'mycontainer': 500 Internal Server Error: UnknownError
Object PUT failed: http://localhost:8000:8000/swift/v1/mycontainer/ceph 404 Not Found NoSuchBucket

Actions #1

Updated by pushpesh sharma over 9 years ago

I am still able to create RADOS objects on an erasure-coded pool:

#./ceph osd pool create mypool 20 20 erasure
DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'mypool' created
./rados -p mypool bench 1 write -b 123 --no-cleanup
INFO: op_size has been rounded to 4096
Maintaining 16 concurrent writes of 4096 bytes for up to 1 seconds or 0 objects
Object prefix: benchmark_data_Ubuntu14_9063
sec  Cur ops  started  finished  avg MB/s  cur MB/s  last lat   avg lat
  0        0        0         0         0         0         -         0
  1       16       82        66  0.257649  0.257812  0.172959  0.234113
Total time run: 1.742133
Total writes made: 83
Write size: 4096
Bandwidth (MB/sec): 0.186
Stddev Bandwidth: 0.182301
Max bandwidth (MB/sec): 0.257812
Min bandwidth (MB/sec): 0
Average Latency: 0.331102
Stddev Latency: 0.208738
Max latency: 0.783039
Min latency: 0.13927
ceph@Ubuntu14:~/ceph-0.86/src$ ./rados -p mypool ls
benchmark_data_Ubuntu14_9063_object2
benchmark_data_Ubuntu14_9063_object36
benchmark_data_Ubuntu14_9063_object55
benchmark_data_Ubuntu14_9063_object67
benchmark_data_Ubuntu14_9063_object62
benchmark_data_Ubuntu14_9063_object50
benchmark_data_Ubuntu14_9063_object51
benchmark_data_Ubuntu14_9063_object60
benchmark_data_Ubuntu14_9063_object28
benchmark_data_Ubuntu14_9063_object73
benchmark_data_Ubuntu14_9063_object72
.................
.................

Actions #2

Updated by pushpesh sharma over 9 years ago

2014-10-28 15:59:41.468515 7f0863fef700 20 RGWEnv::set(): HTTP_HOST: localhost:8000
2014-10-28 15:59:41.468583 7f0863fef700 20 RGWEnv::set(): CONTENT_LENGTH: 0
2014-10-28 15:59:41.468590 7f0863fef700 20 RGWEnv::set(): HTTP_USER_AGENT: python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-36-generic
2014-10-28 15:59:41.468595 7f0863fef700 20 RGWEnv::set(): HTTP_ACCEPT_ENCODING: gzip, deflate, compress
2014-10-28 15:59:41.468600 7f0863fef700 20 RGWEnv::set(): HTTP_ACCEPT: /
2014-10-28 15:59:41.468604 7f0863fef700 20 RGWEnv::set(): HTTP_X_AUTH_TOKEN: AUTH_rgwtk0e0000007465737465723a74657374696e675853fe7787259fd786c1505478987718b1e6446c69afe9f74172a2e01c61afee1e789729
2014-10-28 15:59:41.468608 7f0863fef700 20 RGWEnv::set(): REQUEST_METHOD: PUT
2014-10-28 15:59:41.468611 7f0863fef700 20 RGWEnv::set(): REQUEST_URI: /swift/v1/mycontainer
2014-10-28 15:59:41.468615 7f0863fef700 20 RGWEnv::set(): QUERY_STRING:
2014-10-28 15:59:41.468618 7f0863fef700 20 RGWEnv::set(): REMOTE_USER:
2014-10-28 15:59:41.468621 7f0863fef700 20 RGWEnv::set(): SCRIPT_URI: /swift/v1/mycontainer
2014-10-28 15:59:41.468630 7f0863fef700 20 RGWEnv::set(): SERVER_PORT: 8000
2014-10-28 15:59:41.468633 7f0863fef700 20 CONTENT_LENGTH=0
2014-10-28 15:59:41.468636 7f0863fef700 20 HTTP_ACCEPT=/
2014-10-28 15:59:41.468639 7f0863fef700 20 HTTP_ACCEPT_ENCODING=gzip, deflate, compress
2014-10-28 15:59:41.468642 7f0863fef700 20 HTTP_HOST=localhost:8000
2014-10-28 15:59:41.468645 7f0863fef700 20 HTTP_USER_AGENT=python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-36-generic
2014-10-28 15:59:41.468648 7f0863fef700 20 HTTP_X_AUTH_TOKEN=AUTH_rgwtk0e0000007465737465723a74657374696e675853fe7787259fd786c1505478987718b1e6446c69afe9f74172a2e01c61afee1e789729
2014-10-28 15:59:41.468651 7f0863fef700 20 QUERY_STRING=
2014-10-28 15:59:41.468654 7f0863fef700 20 REMOTE_USER=
2014-10-28 15:59:41.468657 7f0863fef700 20 REQUEST_METHOD=PUT
2014-10-28 15:59:41.468659 7f0863fef700 20 REQUEST_URI=/swift/v1/mycontainer
2014-10-28 15:59:41.468662 7f0863fef700 20 SCRIPT_URI=/swift/v1/mycontainer
2014-10-28 15:59:41.468666 7f0863fef700 20 SERVER_PORT=8000
2014-10-28 15:59:41.468671 7f0863fef700 20 RGWEnv::set(): HTTP_HOST: localhost:8000
2014-10-28 15:59:41.468674 7f0863fef700 20 RGWEnv::set(): CONTENT_LENGTH: 0
2014-10-28 15:59:41.468677 7f0863fef700 20 RGWEnv::set(): HTTP_USER_AGENT: python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-36-generic
2014-10-28 15:59:41.468681 7f0863fef700 20 RGWEnv::set(): HTTP_ACCEPT_ENCODING: gzip, deflate, compress
2014-10-28 15:59:41.468684 7f0863fef700 20 RGWEnv::set(): HTTP_ACCEPT: /
2014-10-28 15:59:41.468687 7f0863fef700 20 RGWEnv::set(): HTTP_X_AUTH_TOKEN: AUTH_rgwtk0e0000007465737465723a74657374696e675853fe7787259fd786c1505478987718b1e6446c69afe9f74172a2e01c61afee1e789729
2014-10-28 15:59:41.468690 7f0863fef700 20 RGWEnv::set(): REQUEST_METHOD: PUT
2014-10-28 15:59:41.468693 7f0863fef700 20 RGWEnv::set(): REQUEST_URI: /swift/v1/mycontainer
2014-10-28 15:59:41.468696 7f0863fef700 20 RGWEnv::set(): QUERY_STRING:
2014-10-28 15:59:41.468699 7f0863fef700 20 RGWEnv::set(): REMOTE_USER:
2014-10-28 15:59:41.468702 7f0863fef700 20 RGWEnv::set(): SCRIPT_URI: /swift/v1/mycontainer
2014-10-28 15:59:41.468705 7f0863fef700 20 RGWEnv::set(): SERVER_PORT: 8000
2014-10-28 15:59:41.468708 7f0863fef700 20 CONTENT_LENGTH=0
2014-10-28 15:59:41.468712 7f0863fef700 20 HTTP_ACCEPT=/
2014-10-28 15:59:41.468714 7f0863fef700 20 HTTP_ACCEPT_ENCODING=gzip, deflate, compress
2014-10-28 15:59:41.468717 7f0863fef700 20 HTTP_HOST=localhost:8000
2014-10-28 15:59:41.468720 7f0863fef700 20 HTTP_USER_AGENT=python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-36-generic
2014-10-28 15:59:41.468723 7f0863fef700 20 HTTP_X_AUTH_TOKEN=AUTH_rgwtk0e0000007465737465723a74657374696e675853fe7787259fd786c1505478987718b1e6446c69afe9f74172a2e01c61afee1e789729
2014-10-28 15:59:41.468726 7f0863fef700 20 QUERY_STRING=
2014-10-28 15:59:41.468728 7f0863fef700 20 REMOTE_USER=
2014-10-28 15:59:41.468731 7f0863fef700 20 REQUEST_METHOD=PUT
2014-10-28 15:59:41.468734 7f0863fef700 20 REQUEST_URI=/swift/v1/mycontainer
2014-10-28 15:59:41.468736 7f0863fef700 20 SCRIPT_URI=/swift/v1/mycontainer
2014-10-28 15:59:41.468739 7f0863fef700 20 SERVER_PORT=8000
2014-10-28 15:59:41.468745 7f0863fef700 1 ====== starting new request req=0x7f088c002490 =====
2014-10-28 15:59:41.468762 7f0863fef700 2 req 0:0.000017::PUT /swift/v1/mycontainer::initializing
2014-10-28 15:59:41.473434 7f0863fef700 10 ver=v1 first=mycontainer req=
2014-10-28 15:59:41.473656 7f0863fef700 10 s->object=<NULL> s->bucket=mycontainer
2014-10-28 15:59:41.473668 7f0863fef700 2 req 0:0.004923:swift:PUT /swift/v1/mycontainer::getting op
2014-10-28 15:59:41.473678 7f0863fef700 2 req 0:0.004933:swift:PUT /swift/v1/mycontainer:create_bucket:authorizing
2014-10-28 15:59:41.473695 7f0863fef700 10 swift_user=tester:testing
2014-10-28 15:59:41.473828 7f0863fef700 20 build_token token=0e0000007465737465723a74657374696e675853fe7787259fd786c1505478987718
2014-10-28 15:59:41.473912 7f0863fef700 2 req 0:0.005168:swift:PUT /swift/v1/mycontainer:create_bucket:reading permissions
2014-10-28 15:59:41.473922 7f0863fef700 2 req 0:0.005178:swift:PUT /swift/v1/mycontainer:create_bucket:init op
2014-10-28 15:59:41.473928 7f0863fef700 2 req 0:0.005184:swift:PUT /swift/v1/mycontainer:create_bucket:verifying op mask
2014-10-28 15:59:41.473931 7f0863fef700 20 required_mask= 2 user.op_mask=7
2014-10-28 15:59:41.473937 7f0863fef700 2 req 0:0.005192:swift:PUT /swift/v1/mycontainer:create_bucket:verifying op permissions
2014-10-28 15:59:41.473984 7f0863fef700 1 -- 10.66.26.231:0/1007028 --> 10.66.26.231:6800/6523 -- osd_op(client.4112.0:177 tester.buckets [call user.list_buckets] 19.47c6a3bc ack+read+known_if_redirected e46) v4 -- ?+0 0x7f088c02e9c0 con 0x7f086c023300
2014-10-28 15:59:41.476183 7f08affff700 1 -- 10.66.26.231:0/1007028 <== osd.0 10.66.26.231:6800/6523 72 ==== osd_op_reply(177 tester.buckets [call] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 181+0+0 (1164910737 0 0) 0x7f08840266c0 con 0x7f086c023300
2014-10-28 15:59:41.476331 7f0863fef700 2 req 0:0.007585:swift:PUT /swift/v1/mycontainer:create_bucket:verifying op params
2014-10-28 15:59:41.476340 7f0863fef700 2 req 0:0.007596:swift:PUT /swift/v1/mycontainer:create_bucket:executing
2014-10-28 15:59:41.476493 7f0863fef700 20 get_obj_state: rctx=0x7f0863fea1f0 obj=.rgw:mycontainer state=0x7f088c02e978 s->prefetch_data=0
2014-10-28 15:59:41.476505 7f0863fef700 10 cache get: name=.rgw+mycontainer : type miss (requested=22, cached=0)
2014-10-28 15:59:41.476540 7f0863fef700 1 -- 10.66.26.231:0/1007028 --> 10.66.26.231:6805/6753 -- osd_op(client.4112.0:178 mycontainer [call version.read,getxattrs,stat] 15.dffe55c4 ack+read+known_if_redirected e46) v4 -- ?+0 0x7f088c033010 con 0x21d4a80
2014-10-28 15:59:41.479176 7f08affff700 1 -- 10.66.26.231:0/1007028 <== osd.1 10.66.26.231:6805/6753 93 ==== osd_op_reply(178 mycontainer [call,getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 262+0+0 (2766048895 0 0) 0x7f08840266c0 con 0x21d4a80
2014-10-28 15:59:41.479393 7f0863fef700 10 cache put: name=.rgw+mycontainer
2014-10-28 15:59:41.479402 7f0863fef700 10 moving .rgw+mycontainer to cache LRU end
2014-10-28 15:59:41.479582 7f0863fef700 1 -- 10.66.26.231:0/1007028 --> 10.66.26.231:6810/7003 -- osd_op(client.4112.0:179 .dir.default.4112.11 [create 0~0,call rgw.bucket_init_index] 10.9f37f465 ondisk+write+known_if_redirected e46) v4 -- ?+0 0x7f088c037500 con 0x21d8c00
2014-10-28 15:59:41.482841 7f08affff700 1 -- 10.66.26.231:0/1007028 <== osd.2 10.66.26.231:6810/7003 20 ==== osd_op_reply(179 .dir.default.4112.11 [create 0~0,call] v0'0 uv0 ondisk = -95 ((95) Operation not supported)) v6 ==== 229+0+0 (3446824157 0 0) 0x7f0880019010 con 0x21d8c00
2014-10-28 15:59:41.482935 7f0863fef700 20 rgw_create_bucket returned ret=-95 bucket=mycontainer(@{i=.rgw.buckets.index,e=.rgw.buckets.extra} .rgw.buckets[default.4112.11])
2014-10-28 15:59:41.482990 7f0863fef700 0 WARNING: set_req_state_err err_no=95 resorting to 500
2014-10-28 15:59:41.483094 7f0863fef700 2 req 0:0.014350:swift:PUT /swift/v1/mycontainer:create_bucket:http status=500
2014-10-28 15:59:41.483100 7f0863fef700 1 ====== req done req=0x7f088c002490 http_status=500 ======
2014-10-28 15:59:41.483106 7f0863fef700 20 process_request() returned -95
2014-10-28 15:59:41.483130 7f0863fef700 1 civetweb: 0x7f088c006a40: 127.0.0.1 - - [28/Oct/2014:15:59:41 +0530] "PUT /swift/v1/mycontainer HTTP/1.1" -1 0 - python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-36-generic
2014-10-28 15:59:47.170088 7f089ffff700 2 RGWDataChangesLog::ChangesRenewThread: start
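The failing step is visible above: the class call `rgw.bucket_init_index` on `.dir.default.4112.11` in the erasure-coded `.rgw.buckets.index` pool returns -95, which on Linux is errno `EOPNOTSUPP`; `set_req_state_err` then falls back to the HTTP 500 the swift client reports. A quick sanity check of that errno mapping (assumes Linux errno numbering; not part of the original report):

```shell
# -95 in the osd_op_reply is Linux errno EOPNOTSUPP ("Operation not supported"),
# the error EC pools return for operations they cannot serve.
python3 -c 'import errno, os; assert errno.EOPNOTSUPP == 95; print(os.strerror(95))'
```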

Actions #3

Updated by Yehuda Sadeh over 9 years ago

The bucket index cannot reside on EC pools.
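In other words, of the pools recreated in the description, only the object-data pool should have been erasure-coded. A possible corrected layout for the same vstart cluster (a sketch, not taken from this report) keeps the bucket index on a replicated pool:

```shell
# Hypothetical corrected layout: bucket data may live on an EC pool,
# but the bucket index must stay on a replicated pool.
./ceph osd pool create .rgw.buckets 20 20 erasure   # object data: EC is fine
./ceph osd pool create .rgw.buckets.index 20 20     # bucket index: replicated (default)
```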

Actions #4

Updated by pushpesh sharma over 9 years ago

OK, I was not aware of this; that seems like sane behaviour to me.

Actions #5

Updated by Sage Weil over 9 years ago

  • Status changed from New to Won't Fix