Bug #61769
Bulk upload feature not working (Closed)
Description
As part of https://github.com/ceph/ceph/pull/14775, Ceph RGW supports the Swift bulk upload feature. However, it is not working as expected in the latest Quincy release: uploading files via bulk upload fails with the message below.
curl -i -H "X-Auth-Token: $TOKEN" https://<url>/swift/v1/test_1234?extract-archive=tar -X PUT --data-binary @test.tar
HTTP/1.1 200 OK
transfer-encoding: chunked
x-trans-id: tx00000d353477433a6ae6e-006493d898-26a0393-default
x-openstack-request-id: tx00000d353477433a6ae6e-006493d898-26a0393-default
content-type: text/plain; charset=utf-8
date: Thu, 22 Jun 2023 05:14:03 GMT
Number Files Created: 3
Response Body:
Response Status: 400 Bad Request
Errors:
`��DV, 404 Not Found
� ��DV, 404 Not Found
This issue is encountered for a few file formats (e.g. .html and .bin). During upload of these files, RGW does not identify the bucket name properly and instead picks up a garbage value, which results in the 404 errors shown above.
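The symptom looks like the tar header's 100-byte, NUL-padded name field being consumed without proper termination handling. Below is a minimal, illustrative Python sketch of the difference between correct and incorrect handling of that field; the entry name is made up, and RGW's actual parser is C++ and differs in detail:

```python
import io
import tarfile

# Build a small tar archive in memory, like the one sent with
# "?extract-archive=tar". The entry name here is made up for illustration.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tar:
    data = b"<html></html>"
    info = tarfile.TarInfo(name="test_1234/files.html")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

header = buf.read(512)      # ustar headers are 512-byte blocks
raw_name = header[0:100]    # 'name' field: 100 bytes, NUL-padded

# Correct handling: the logical name ends at the first NUL byte.
good = raw_name.split(b"\0", 1)[0].decode()
print(good)                 # test_1234/files.html

# Incorrect handling: taking the field verbatim keeps the NUL padding
# (or, if the buffer is reused, stale bytes), which can later surface
# as a garbage bucket name like the <A0><C3>... bytes in the logs below.
bad = raw_name.decode("latin-1")
print(len(bad))             # 100
```

This only illustrates the failure class; the debug logs below ("got file=<A0><C3>...4/files.html") show the recovered name carrying extra leading bytes.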
Snippet from the RGW debug logs:
2023-06-22T05:38:52.191+0000 7f90fc66b700 2 req 3378194986423204620 0.000000000s s3:list_buckets executing
2023-06-22T05:38:52.191+0000 7f90fc66b700 20 req 3378194986423204620 0.000000000s s3:list_buckets RGWSI_User_RADOS::list_buckets(): anonymous user
2023-06-22T05:38:52.191+0000 7f90cce0c700 20 req 17170992831263490010 1.939950347s swift:bulk_upload bulk_upload: get_exactly ret=512
2023-06-22T05:38:52.191+0000 7f90cce0c700 2 req 17170992831263490010 1.939950347s swift:bulk_upload handling regular file
2023-06-22T05:38:52.191+0000 7f90cce0c700 20 req 17170992831263490010 1.939950347s swift:bulk_upload got file=<A0><C3>^Y^@V^
4/files.html, size=8118
^Y
2023-06-22T05:38:52.191+0000 7f90cce0c700 20 req 17170992831263490010 1.939950347s swift:bulk_upload get_system_obj_state: rctx=0x7f90cad84f60 obj=default.rgw.meta:root:<A0><C3>V^
4 state=0x560019200760 s->prefetch_data=0
^Y
2023-06-22T05:38:52.191+0000 7f90cce0c700 10 req 17170992831263490010 1.939950347s swift:bulk_upload cache get: name=default.rgw.meta+root+<A0><C3>V^
4 : miss
2023-06-22T05:38:52.191+0000 7f90cce0c700 1 -- -- osd_op(unknown.0.0:8362 3.17 3:eebde558:root::%a0%c3%19%00V%00%004:head [call version.read in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e58800) v8 -- 0x560019d45800 con 0x56001863c000
2023-06-22T05:38:52.191+0000 7f90e7641700 2 req 3378194986423204620 0.000000000s s3:list_buckets completing
2023-06-22T05:38:52.191+0000 7f90e7641700 20 req 3378194986423204620 0.000000000s get_system_obj_state: rctx=0x7f90cae06790 obj=default.rgw.log:script.postrequest. state=0x56001b5e7de0 s->prefetch_data=0
2023-06-22T05:38:52.191+0000 7f90e7641700 10 req 3378194986423204620 0.000000000s cache get: name=default.rgw.log++script.postrequest. : hit (negative entry)
2023-06-22T05:38:52.191+0000 7f90e7641700 2 req 3378194986423204620 0.000000000s s3:list_buckets op status=0
2023-06-22T05:38:52.191+0000 7f90e7641700 2 req 3378194986423204620 0.000000000s s3:list_buckets http status=200
2023-06-22T05:38:52.191+0000 7f90e7641700 1 ====== req done req=0x7f90cae07710 op status=0 http_status=200 latency=0.000000000s ======
2023-06-22T05:38:52.191+0000 7f90e7641700 1 beast: 0x7f90cae07710: - anonymous [22/Jun/2023:05:38:52.191 +0000] "GET / HTTP/1.0" 200 231 - - - latency=0.000000000s
2023-06-22T05:38:52.191+0000 7f91f7081700 1 -- <== osd.1179 1 ==== osd_op_reply(8362 <A0><C3>^Y^@V^
4 [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 237+0+0 (crc 0 0 0) 0x5600157598c0 con 0x56001863c000
Y
2023-06-22T05:38:52.191+0000 7f91c8002700 10 req 17170992831263490010 1.939950347s swift:bulk_upload cache put: name=default.rgw.meta+root+<A0><C3>V^
4 info.flags=0x0
^Y
2023-06-22T05:38:52.191+0000 7f91c8002700 10 req 17170992831263490010 1.939950347s swift:bulk_upload adding default.rgw.meta+root+<A0><C3>V^
4 to cache LRU end
^Y
2023-06-22T05:38:52.191+0000 7f91c8002700 20 req 17170992831263490010 1.939950347s swift:bulk_upload non existent directory=<A0><C3>V^
^@4
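For context, Swift bulk upload treats the first path component of each tar entry as the target container and the remainder as the object name, which is why a corrupted entry name turns into a lookup of a nonexistent bucket (the osd_op_reply above returns -2, ENOENT, for the garbled default.rgw.meta:root: key). A simplified sketch of that split, not RGW's actual C++ code:

```python
# Split a tar entry path into (container, object) per the Swift
# bulk-upload convention: the first path component names the container,
# the rest names the object. Simplified illustration only.
def split_entry(path: str) -> tuple[str, str]:
    bucket, _, obj = path.lstrip("/").partition("/")
    return bucket, obj

print(split_entry("test_1234/files.html"))
# ('test_1234', 'files.html')

# If the name field is read with leading garbage bytes, that garbage
# becomes the "bucket", and the subsequent metadata lookup fails:
print(split_entry("\xa0\xc3\x19\x00V\x004/files.html")[0] == "test_1234")
# False
```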
This issue was reported to Red Hat earlier and has already been fixed in RHCS releases, but it is still seen in upstream releases. Related Bugzilla cases:
https://bugzilla.redhat.com/show_bug.cgi?id=2132481
https://bugzilla.redhat.com/show_bug.cgi?id=2132482