Bug #21497
boto.exception.S3ResponseError: S3ResponseError: 416 Requested Range Not Satisfiable
Status: Resolved
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I have updated a few things in this test: it now runs on OVH nodes instead of VPS, and this is the first time it has run on CentOS 7.4 (previously 7.3), so a few things have changed. I had not seen this issue on the earlier vps/7.3 runs.
Console logs: http://qa-proxy.ceph.com/teuthology/vasu-2017-09-21_23:50:43-rgw-master-distro-basic-ovh/1656592/teuthology.log
2017-09-22T00:24:27.486 INFO:teuthology.orchestra.run.ovh010.stdout:conn = boto.connect_s3(
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:        aws_access_key_id = access_key,
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:        aws_secret_access_key = secret_key,
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:        host = 's3.ceph.com',
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:        is_secure=False,
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:        )
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:bucket = conn.create_bucket('s3atest')
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:for bucket in conn.get_all_buckets():
2017-09-22T00:24:27.487 INFO:teuthology.orchestra.run.ovh010.stdout:        print bucket.name + " " + bucket.creation_date
2017-09-22T00:24:27.488 INFO:teuthology.orchestra.run.ovh010:Running: '/home/ubuntu/cephtest/venv/bin/python /home/ubuntu/cephtest/create_bucket.py'
2017-09-22T00:24:28.095 INFO:teuthology.orchestra.run.ovh010.stderr:Traceback (most recent call last):
2017-09-22T00:24:28.095 INFO:teuthology.orchestra.run.ovh010.stderr:  File "/home/ubuntu/cephtest/create_bucket.py", line 15, in <module>
2017-09-22T00:24:28.095 INFO:teuthology.orchestra.run.ovh010.stderr:    bucket = conn.create_bucket('s3atest')
2017-09-22T00:24:28.095 INFO:teuthology.orchestra.run.ovh010.stderr:  File "/home/ubuntu/cephtest/venv/lib/python2.7/site-packages/boto/s3/connection.py", line 628, in create_bucket
2017-09-22T00:24:28.095 INFO:teuthology.orchestra.run.ovh010.stderr:    response.status, response.reason, body)
2017-09-22T00:24:28.095 INFO:teuthology.orchestra.run.ovh010.stderr:boto.exception.S3ResponseError: S3ResponseError: 416 Requested Range Not Satisfiable
2017-09-22T00:24:28.095 INFO:teuthology.orchestra.run.ovh010.stderr:<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidRange</Code><BucketName>s3atest</BucketName><RequestId>tx000000000000000000001-0059c4583b-1032-default</RequestId><HostId>1032-default-default</HostId></Error>
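The body returned with the 416 is standard S3 error XML; the `Code` field (InvalidRange) is what boto surfaces in the exception. A minimal sketch of inspecting that body with the Python standard library, using the exact error document from the log above (independent of boto):

```python
import xml.etree.ElementTree as ET

# The error body RGW returned in this run, copied from the teuthology log.
body = ('<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidRange</Code>'
        '<BucketName>s3atest</BucketName>'
        '<RequestId>tx000000000000000000001-0059c4583b-1032-default</RequestId>'
        '<HostId>1032-default-default</HostId></Error>')

err = ET.fromstring(body)
# The root element is <Error>; its children carry the S3 error details.
print(err.findtext("Code"), err.findtext("BucketName"))  # InvalidRange s3atest
```

Note the mismatch worth flagging: the client sees an InvalidRange error on a create_bucket call that involves no Range header at all, which is the first hint that the status code is a mistranslation of an internal error rather than a genuine range problem.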
2017-09-22 00:24:27.751658 7eff44655700 15 server signature=NWcy6u5I4sefqHOuK3JNKLvvV60=
2017-09-22 00:24:27.751659 7eff44655700 15 client signature=NWcy6u5I4sefqHOuK3JNKLvvV60=
2017-09-22 00:24:27.751667 7eff44655700 15 compare=0
2017-09-22 00:24:27.751677 7eff44655700 20 rgw::auth::s3::LocalEngine granted access
2017-09-22 00:24:27.751678 7eff44655700 20 rgw::auth::s3::AWSAuthStrategy granted access
2017-09-22 00:24:27.751681 7eff44655700 2 req 1:0.000476:s3:PUT /s3atest/:create_bucket:normalizing buckets and tenants
2017-09-22 00:24:27.751688 7eff44655700 10 s->object=<NULL> s->bucket=s3atest
2017-09-22 00:24:27.751695 7eff44655700 2 req 1:0.000491:s3:PUT /s3atest/:create_bucket:init permissions
2017-09-22 00:24:27.751713 7eff44655700 2 req 1:0.000493:s3:PUT /s3atest/:create_bucket:recalculating target
2017-09-22 00:24:27.751714 7eff44655700 2 req 1:0.000510:s3:PUT /s3atest/:create_bucket:reading permissions
2017-09-22 00:24:27.751723 7eff44655700 2 req 1:0.000518:s3:PUT /s3atest/:create_bucket:init op
2017-09-22 00:24:27.751726 7eff44655700 2 req 1:0.000521:s3:PUT /s3atest/:create_bucket:verifying op mask
2017-09-22 00:24:27.751728 7eff44655700 20 required_mask= 2 user.op_mask=7
2017-09-22 00:24:27.751729 7eff44655700 2 req 1:0.000525:s3:PUT /s3atest/:create_bucket:verifying op permissions
2017-09-22 00:24:27.751804 7eff44655700 1 -- 158.69.80.63:0/2419816177 --> 158.69.80.75:6808/15019 -- osd_op(unknown.0.0:1136 3.31 3:8c573b05:users.uid::s3a.buckets:head [call user.list_buckets] snapc 0=[] ondisk+read+known_if_redirected e43) v8 -- 0x5587d5a53e00 con 0
2017-09-22 00:24:27.753044 7eff69542700 1 -- 158.69.80.63:0/2419816177 <== osd.7 158.69.80.75:6808/15019 156 ==== osd_op_reply(1136 s3a.buckets [call] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 155+0+0 (3714447664 0 0) 0x5587d5a53e00 con 0x5587d5580000
2017-09-22 00:24:27.753212 7eff44655700 2 req 1:0.002007:s3:PUT /s3atest/:create_bucket:verifying op params
2017-09-22 00:24:27.753219 7eff44655700 2 req 1:0.002014:s3:PUT /s3atest/:create_bucket:pre-executing
2017-09-22 00:24:27.753229 7eff44655700 2 req 1:0.002025:s3:PUT /s3atest/:create_bucket:executing
2017-09-22 00:24:27.753259 7eff44655700 5 NOTICE: call to do_aws4_auth_completion
2017-09-22 00:24:27.753277 7eff44655700 20 get_system_obj_state: rctx=0x7eff4464d410 obj=default.rgw.meta:root:s3atest state=0x5587d59a3080 s->prefetch_data=0
2017-09-22 00:24:27.753283 7eff44655700 10 cache get: name=default.rgw.meta+root+s3atest : miss
2017-09-22 00:24:27.753364 7eff44655700 1 -- 158.69.80.63:0/2419816177 --> 158.69.80.63:6800/30966 -- osd_op(unknown.0.0:1137 3.13 3:c8ac452a:root::s3atest:head [call version.read,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e43) v8 -- 0x5587d5a54100 con 0
2017-09-22 00:24:27.754030 7eff68d41700 1 -- 158.69.80.63:0/2419816177 <== osd.1 158.69.80.63:6800/30966 129 ==== osd_op_reply(1137 s3atest [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 235+0+0 (3834934427 0 0) 0x5587d5a52300 con 0x5587d5561000
2017-09-22 00:24:27.754120 7eff44655700 10 cache put: name=default.rgw.meta+root+s3atest info.flags=0x0
2017-09-22 00:24:27.754128 7eff44655700 10 adding default.rgw.meta+root+s3atest to cache LRU end
2017-09-22 00:24:27.754220 7eff44655700 1 -- 158.69.80.63:0/2419816177 --> 158.69.80.63:6789/0 -- mon_get_version(what=osdmap handle=2) v1 -- 0x5587d528de00 con 0
2017-09-22 00:24:27.754635 7eff67d3f700 1 -- 158.69.80.63:0/2419816177 <== mon.0 158.69.80.63:6789/0 9 ==== mon_get_version_reply(handle=2 version=43) v2 ==== 24+0+0 (2384367712 0 0) 0x5587d528c000 con 0x5587d552b000
2017-09-22 00:24:27.754715 7eff44655700 1 -- 158.69.80.63:0/2419816177 --> 158.69.80.63:6789/0 -- pool_op(create pool 0 auid 0 tid 1138 name default.rgw.buckets.index v0) v4 -- 0x5587d5a56d80 con 0
2017-09-22 00:24:28.058140 7eff67d3f700 1 -- 158.69.80.63:0/2419816177 <== mon.0 158.69.80.63:6789/0 10 ==== osd_map(44..44 src has 1..44) v3 ==== 244+0+0 (417964393 0 0) 0x5587d52d1480 con 0x5587d552b000
2017-09-22 00:24:28.059791 7eff67d3f700 1 -- 158.69.80.63:0/2419816177 <== mon.0 158.69.80.63:6789/0 11 ==== pool_op_reply(tid 1138 (34) Numerical result out of range v44) v1 ==== 43+0+0 (2113825093 0 0) 0x5587d52d1700 con 0x5587d552b000
2017-09-22 00:24:28.059863 7eff44655700 20 rgw_create_bucket returned ret=-34 bucket=s3atest[4ffe783c-ade5-4037-998e-be2d9fc70d40.4146.1])
2017-09-22 00:24:28.059876 7eff44655700 2 req 1:0.308670:s3:PUT /s3atest/:create_bucket:completing
2017-09-22 00:24:28.060087 7eff44655700 2 req 1:0.308882:s3:PUT /s3atest/:create_bucket:op status=-34
2017-09-22 00:24:28.060093 7eff44655700 2 req 1:0.308888:s3:PUT /s3atest/:create_bucket:http status=416
2017-09-22 00:24:28.060107 7eff44655700 1 ====== req done req=0x7eff4464ddc0 op status=-34 http_status=416 ======
2017-09-22 00:24:28.060134 7eff44655700 20 process_request() returned -34
2017-09-22 00:24:28.060208 7eff44655700 1 civetweb: 0x5587d56b0000: 158.69.80.63 - - [22/Sep/2017:00:24:27 +0000] "PUT /s3atest/ HTTP/1.1" 1 0 - Boto/2.48.0 Python/2.7.5 Linux/3.10.0-693.el7.x86_64
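The key lines in the RGW log are the monitor's pool_op_reply with "(34) Numerical result out of range" and the subsequent "rgw_create_bucket returned ret=-34": RGW return codes are negated POSIX errno values, so -34 is ERANGE from the failed creation of the default.rgw.buckets.index pool, which RGW then translated into the misleading HTTP 416. A quick sanity check of the errno mapping with the Python standard library (the errno-to-HTTP-status translation itself happens inside RGW and is not shown here):

```python
import errno

# RGW logs "op status=-34"; negating gives the POSIX errno number.
code = errno.errorcode[34]
print(code)  # ERANGE, matching "(34) Numerical result out of range" in pool_op_reply
```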
Related issues
History
#1 Updated by Russell Islam about 6 years ago
Is this issue fixed? I am having a similar issue. How do I resolve it?
#2 Updated by Vasu Kulkarni about 6 years ago
- Status changed from New to Resolved
For anyone who is hitting this issue:
Set the default pg_num and pgp_num to a lower value (8, for example), or set mon_max_pg_per_osd to a higher value in ceph.conf.
radosgw-admin doesn't surface a proper error when internal pool creation fails, hence the very confusing error at the upper level.
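A sketch of what that workaround could look like in ceph.conf; the values below are illustrative, not taken from this report, and only one of the two approaches is needed:

```ini
[global]
# Option 1: create new pools (including RGW's internal pools) with fewer PGs,
# so the per-OSD PG cap is not exceeded on small test clusters.
osd pool default pg num = 8
osd pool default pgp num = 8

[mon]
# Option 2: raise the per-OSD PG cap instead (mon_max_pg_per_osd was
# introduced in Luminous; 300 here is an illustrative value).
mon max pg per osd = 300
```

Either way, the point is to keep total PGs per OSD under the monitor's limit so the pool_op create does not fail with ERANGE.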
#3 Updated by Casey Bodley over 2 years ago
- Duplicated by Bug #48139: Ceph Dashboard Object Gateway InvalidRange bucket exception added