Bug #41225


rgw: GETing S3 website root with two slashes // crashes rgw

Added by Dan van der Ster over 4 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: Matt Benjamin
Target version: -
% Done: 0%
Source:
Tags:
Backport: nautilus, octopus
Regression: No
Severity: 1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID: 35792
Crash signature (v1):
Crash signature (v2):

Description

RGW crashes when you GET an S3 website root with two slashes.

First, we create a new bucket and website configuration:

~ $ s3cmd mb -P s3://dvanders-test4
Bucket 's3://dvanders-test4/' created
~ $ s3cmd ws-create s3://dvanders-test4
Bucket 's3://dvanders-test4/': website configuration created.

Curling the website root returns 404, as expected:

~ $ curl http://dvanders-test4.s3-website.cern.ch/
<html>
 <head><title>404 Not Found</title></head>
 <body>
  <h1>404 Not Found</h1>
  <ul>
   <li>Code: NoSuchKey</li>
   <li>BucketName: dvanders-test4</li>
   <li>RequestId: tx00000000000000040d7fa-005d52a626-53a780a-default</li>
   <li>HostId: 53a780a-default-default</li>

But curling the root with an extra slash (//) crashes rgw:

~ $ curl http://dvanders-test4.s3-website.cern.ch//
Bad Gateway

Here is the rgw log:

/builddir/build/BUILD/ceph-12.2.12/src/rgw/rgw_rados.h: In function 'void RGWObjectCtxImpl<T, S>::set_atomic(T&) [with T = rgw_obj; S = RGWObjState]' thread 7f7a1066e700 time 2019-08-13 13:51:12.867584
/builddir/build/BUILD/ceph-12.2.12/src/rgw/rgw_rados.h: 2150: FAILED assert(!obj.empty())
 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x7f7aba0f8ed0]
 2: (RGWHandler_REST_S3Website::web_dir() const+0x449) [0x55db02c2fdc9]
 3: (RGWHandler_REST_S3Website::retarget(RGWOp*, RGWOp**)+0x103) [0x55db02c2ff23]
 4: (rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, bool)+0x1d2) [0x55db02b3e6d2]
 5: (process_request(RGWRados*, RGWREST*, RGWRequest*, std::string const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSocket*, int*)+0xb98) [0x55db02b3f428]
 6: (()+0x24de6e) [0x55db029d5e6e]
 7: (()+0x24e9db) [0x55db029d69db]
 8: (make_fcontext()+0x2f) [0x55db02da81ef]
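
For context on the assert, here is a minimal standalone sketch (simplified stand-ins, not the actual Ceph types or the verbatim rgw_rest_s3.cc code) of how the trailing-slash handling in RGWHandler_REST_S3Website::web_dir() can hand an empty object to set_atomic(): with GET // the parsed object name is "/", web_dir() strips the trailing slash, leaving an empty key, and set_atomic() then hits the assert(!obj.empty()) seen above.

// Standalone sketch; rgw_obj, set_atomic() and web_dir() are simplified
// stand-ins named after the frames in the backtrace above, not Ceph source.
#include <cassert>
#include <string>

struct rgw_obj {
  std::string name;
  bool empty() const { return name.empty(); }
};

void set_atomic(const rgw_obj& obj) {
  assert(!obj.empty());  // corresponds to rgw_rados.h:2150 above
}

bool web_dir(std::string subdir_name) {
  if (subdir_name.empty()) {
    return false;            // "GET /" -> empty name -> early out, no crash
  } else if (subdir_name.back() == '/') {
    subdir_name.pop_back();  // "GET //" -> name "/" -> becomes "" here
  }
  rgw_obj obj{subdir_name};
  set_atomic(obj);           // aborts for "GET //": obj is now empty
  return true;
}

int main() {
  web_dir("/");  // object name as parsed from "GET //" on a website bucket
}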


Related issues 2 (0 open, 2 closed)

Copied to rgw - Backport #46966: octopus: rgw: GETing S3 website root with two slashes // crashes rgw (Resolved, Nathan Cutler)
Copied to rgw - Backport #46967: nautilus: rgw: GETing S3 website root with two slashes // crashes rgw (Resolved)
#1

Updated by Dan van der Ster over 4 years ago

Here is a gdb backtrace:

(gdb) up
#7  0x000055fc19cdcdc9 in set_atomic (obj=..., this=0x55fc2359fdf0)
    at /usr/src/debug/ceph-12.2.12/src/rgw/rgw_rados.h:2150
2150        assert (!obj.empty());
(gdb) up
#8  RGWHandler_REST_S3Website::web_dir (this=this@entry=0x55fc1da941b0)
    at /usr/src/debug/ceph-12.2.12/src/rgw/rgw_rest_s3.cc:3462
3462      obj_ctx.obj.set_atomic(obj);
(gdb) up
#9  0x000055fc19cdcf23 in RGWHandler_REST_S3Website::retarget (this=0x55fc1da941b0, op=<optimized out>, 
    new_op=<optimized out>) at /usr/src/debug/ceph-12.2.12/src/rgw/rgw_rest_s3.cc:3507
3507      s->bucket_info.website_conf.get_effective_key(s->object.name, &new_obj.name, web_dir());
(gdb) up
#10 0x000055fc19beb6d2 in rgw_process_authenticated (handler=handler@entry=0x55fc1da941b0, op=
    @0x55fc2359fcb8: 0x55fc22a19c00, req=req@entry=0x55fc235a0800, s=s@entry=0x55fc235a00a0, 
    skip_retarget=skip_retarget@entry=false) at /usr/src/debug/ceph-12.2.12/src/rgw/rgw_process.cc:54
54        ret = handler->retarget(op, &op);
(gdb) p op
$1 = (RGWOp *&) @0x55fc2359fcb8: 0x55fc22a19c00
(gdb) p *op
$2 = {_vptr.RGWOp = 0x55fc1a1abd50 <vtable for RGWGetObj_ObjStore_S3Website+16>, s = 0x55fc235a00a0, 
  dialect_handler = 0x55fc1da941b0, store = 0x55fc1b4dc000, bucket_cors = {rules = empty std::list}, 
  cors_exist = false, bucket_quota = {max_size_soft_threshold = -1, max_objs_soft_threshold = -1, max_size = -1, 
    max_objects = -1, enabled = false, check_on_raw = false}, user_quota = {max_size_soft_threshold = -1, 
    max_objs_soft_threshold = -1, max_size = -1, max_objects = -1, enabled = false, check_on_raw = false}, 
  op_ret = 0}
(gdb) up
#11 0x000055fc19bec428 in process_request (store=0x55fc1b4dc000, rest=0x7ffecd187400, req=req@entry=0x55fc235a0800, 
    frontend_prefix="", auth_registry=..., client_io=client_io@entry=0x55fc235a0830, olog=0x0, 
    http_ret=http_ret@entry=0x0) at /usr/src/debug/ceph-12.2.12/src/rgw/rgw_process.cc:207
207      ret = rgw_process_authenticated(handler, op, req, s);
(gdb) p req
$3 = (RGWRequest * const) 0x55fc235a0800
(gdb) p *req
$4 = {_vptr.RGWRequest = 0x55fc1a198390 <vtable for RGWRequest+16>, id = 4298274, s = 0x0, req_str = "GET //", 
  op = 0x55fc22a19c00, ts = {tv = {tv_sec = 1565699298, tv_nsec = 342105018}}}
(gdb) 
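
So the crash path is: GET // -> object name "/" -> web_dir() strips the trailing slash -> an empty rgw_obj reaches set_atomic(). A sketch of the kind of defensive guard this suggests (illustrative only, not the verbatim patch; the merged change is the PR linked in #4 below): re-check for emptiness after stripping the slash and fall back to the normal not-found handling.

// Illustrative guard, assuming the mechanism above: a key that is empty
// *after* the trailing slash is stripped is treated like the website root,
// instead of building an empty rgw_obj.
#include <string>

bool web_dir(std::string subdir_name) {
  if (subdir_name.empty()) {
    return false;
  }
  if (subdir_name.back() == '/') {
    subdir_name.pop_back();
    if (subdir_name.empty()) {
      return false;  // "GET //" now falls through to the usual 404 path
    }
  }
  // ... continue as in the original: stat the object and call
  // set_atomic() on a key that is now guaranteed non-empty.
  return true;
}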

#2

Updated by Patrick Donnelly over 4 years ago

  • Project changed from Ceph to rgw
  • Subject changed from GETing S3 website root with two slashes // crashes rgw to rgw: GETing S3 website root with two slashes // crashes rgw
  • Start date deleted (08/13/2019)
#3

Updated by Matt Benjamin over 4 years ago

  • Status changed from New to Triaged
  • Assignee set to Matt Benjamin
#4

Updated by Dan van der Ster almost 4 years ago

PR: https://github.com/ceph/ceph/pull/35792
Please add nautilus and octopus backports.

#5

Updated by Kefu Chai almost 4 years ago

  • Status changed from Triaged to Fix Under Review
  • Backport set to nautilus, octopus
  • Pull request ID set to 35792
#6

Updated by J. Eric Ivancich over 3 years ago

  • Status changed from Fix Under Review to Pending Backport
#7

Updated by Nathan Cutler over 3 years ago

  • Copied to Backport #46966: octopus: rgw: GETing S3 website root with two slashes // crashes rgw added
#8

Updated by Nathan Cutler over 3 years ago

  • Copied to Backport #46967: nautilus: rgw: GETing S3 website root with two slashes // crashes rgw added
#9

Updated by Nathan Cutler over 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".

