Ceph : Issues
https://tracker.ceph.com/
2023-05-30T12:39:01Z
Ceph
Redmine
rgw - Backport #61510 (In Progress): quincy: S3 ListMultipartUploads action shows current timesta...
https://tracker.ceph.com/issues/61510
2023-05-30T12:39:01Z
Backport Bot
<p><a class="external" href="https://github.com/ceph/ceph/pull/51834">https://github.com/ceph/ceph/pull/51834</a></p>
rgw - Backport #61509 (In Progress): reef: S3 ListMultipartUploads action shows current timestamp...
https://tracker.ceph.com/issues/61509
2023-05-30T12:38:54Z
Backport Bot
<p><a class="external" href="https://github.com/ceph/ceph/pull/51833">https://github.com/ceph/ceph/pull/51833</a></p>
rgw - Bug #61251 (Pending Backport): S3 ListMultipartUploads action shows current timestamp inste...
https://tracker.ceph.com/issues/61251
2023-05-18T12:27:09Z
Victor Tsykanov
<p>S3 ListMultipartUploads action returns current timestamp in Upload/Initiated field instead of actual upload start time.</p>
<p>Example:<br /><pre><code class="xml"><?xml version="1.0" encoding="UTF-8"?>
<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Tenant>orsxg5c7obzg62tfmn2f6vrujritkq2qja3uerkwircfgm2xkm2dmssrijiucui</Tenant>
  <Bucket>bucket-8ec03ff4-0d47-40ba-b99b-ce9c1a05d317</Bucket>
  <NextKeyMarker>file-bc2c391d-f676-457b-b2c2-b68e1d2b7cec</NextKeyMarker>
  <NextUploadIdMarker>2~4WFUDGCm_93vGhTPD8ci2dQTFkAto2l</NextUploadIdMarker>
  <MaxUploads>1000</MaxUploads>
  <IsTruncated>false</IsTruncated>
  <Upload>
    <Key>file-8796097a-c589-45a4-a08d-ea609b71ab05</Key>
    <UploadId>2~2hw0xjD_JOifvklWq-ea7qTZBZbZyLk</UploadId>
    <Initiator>
      <ID>orsxg5c7obzg62tfmn2f6vrujritkq2qja3uerkwircfgm2xkm2dmssrijiucui$cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</ID>
      <DisplayName>User cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</DisplayName>
    </Initiator>
    <Owner>
      <ID>orsxg5c7obzg62tfmn2f6vrujritkq2qja3uerkwircfgm2xkm2dmssrijiucui$cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</ID>
      <DisplayName>User cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
    <Initiated>2023-05-17T09:20:03.271Z</Initiated>
  </Upload>
</ListMultipartUploadsResult>
</code></pre></p>
<p>Repeating the same request results in a response with a changed value in the Initiated tag:<br /><pre><code class="xml"><?xml version="1.0" encoding="UTF-8"?>
<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Tenant>orsxg5c7obzg62tfmn2f6vrujritkq2qja3uerkwircfgm2xkm2dmssrijiucui</Tenant>
  <Bucket>bucket-8ec03ff4-0d47-40ba-b99b-ce9c1a05d317</Bucket>
  <NextKeyMarker>file-bc2c391d-f676-457b-b2c2-b68e1d2b7cec</NextKeyMarker>
  <NextUploadIdMarker>2~4WFUDGCm_93vGhTPD8ci2dQTFkAto2l</NextUploadIdMarker>
  <MaxUploads>1000</MaxUploads>
  <IsTruncated>false</IsTruncated>
  <Upload>
    <Key>file-8796097a-c589-45a4-a08d-ea609b71ab05</Key>
    <UploadId>2~2hw0xjD_JOifvklWq-ea7qTZBZbZyLk</UploadId>
    <Initiator>
      <ID>orsxg5c7obzg62tfmn2f6vrujritkq2qja3uerkwircfgm2xkm2dmssrijiucui$cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</ID>
      <DisplayName>User cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</DisplayName>
    </Initiator>
    <Owner>
      <ID>orsxg5c7obzg62tfmn2f6vrujritkq2qja3uerkwircfgm2xkm2dmssrijiucui$cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</ID>
      <DisplayName>User cfdeb891-46a9-4f36-8d0a-ec374d8b21b7</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
    <Initiated>2023-05-17T09:20:05.738Z</Initiated>
  </Upload>
</ListMultipartUploadsResult>
</code></pre></p>
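<p>The drift above can be checked mechanically. A minimal, self-contained Python sketch (not part of the report; it parses two captured response bodies, trimmed to the relevant fields): the Initiated value for the same UploadId should be identical across requests, but in the buggy responses it tracks the request time.</p>

```python
import xml.etree.ElementTree as ET

NS = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}

def initiated_by_upload_id(xml_text):
    """Map UploadId -> Initiated from a ListMultipartUploads response body."""
    root = ET.fromstring(xml_text)
    return {
        upload.findtext("s3:UploadId", namespaces=NS):
            upload.findtext("s3:Initiated", namespaces=NS)
        for upload in root.findall("s3:Upload", NS)
    }

# Trimmed versions of the two responses shown above.
FIRST = """<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Upload>
    <UploadId>2~2hw0xjD_JOifvklWq-ea7qTZBZbZyLk</UploadId>
    <Initiated>2023-05-17T09:20:03.271Z</Initiated>
  </Upload>
</ListMultipartUploadsResult>"""
SECOND = FIRST.replace("09:20:03.271Z", "09:20:05.738Z")

uid = "2~2hw0xjD_JOifvklWq-ea7qTZBZbZyLk"
first = initiated_by_upload_id(FIRST)
second = initiated_by_upload_id(SECOND)
# Expected: equal timestamps for the same upload. Observed (the bug): they differ.
print(first[uid], "vs", second[uid])
```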
rgw - Backport #58509 (In Progress): quincy: 'radosgw-admin bucket chown' doesn't set bucket inst...
https://tracker.ceph.com/issues/58509
2023-01-19T15:28:46Z
Backport Bot
<p><a class="external" href="https://github.com/ceph/ceph/pull/49794">https://github.com/ceph/ceph/pull/49794</a></p>
rgw - Backport #58388 (In Progress): quincy: Segmentation fault when uploading file with bucket p...
https://tracker.ceph.com/issues/58388
2023-01-05T15:07:30Z
Backport Bot
<p><a class="external" href="https://github.com/ceph/ceph/pull/49641">https://github.com/ceph/ceph/pull/49641</a></p>
rgw - Bug #57936 (Pending Backport): 'radosgw-admin bucket chown' doesn't set bucket instance own...
https://tracker.ceph.com/issues/57936
2022-10-26T22:01:09Z
Casey Bodley
cbodley@redhat.com
<p>steps to reproduce:</p>
<p>1. start a vstart cluster and create a bucket as user 'testid'</p>
<pre>
~/ceph/build $ MON=1 OSD=1 RGW=1 MGR=0 MDS=0 ../src/vstart.sh -n -d
~/ceph/build $ bin/radosgw-admin user list
[
"56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234",
"testx$9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
"0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
"test",
"testid"
]
~/ceph/build $ s3cmd mb s3://testbucket
Bucket 's3://testbucket/' created
~/ceph/build $ bin/radosgw-admin metadata list bucket.instance
[
"testbucket:5c06055b-ce0d-4829-8d66-4fec8a3b69cf.4147.1"
]
~/ceph/build $ bin/radosgw-admin metadata get bucket.instance:testbucket:5c06055b-ce0d-4829-8d66-4fec8a3b69cf.4147.1 | grep owner
"owner": "testid",
~/ceph/build $ bin/radosgw-admin metadata get bucket:testbucket | grep owner
"owner": "testid",
</pre>
<p>2. change bucket owner to user 'test'. the bucket instance's owner is still 'testid', and the bucket is now linked to both users<br /><pre>
~/ceph/build $ bin/radosgw-admin bucket chown --bucket testbucket --uid test
0 objects processed in :testbucket[5c06055b-ce0d-4829-8d66-4fec8a3b69cf.4147.1]). Next marker
~/ceph/build $ bin/radosgw-admin metadata get bucket.instance:testbucket:5c06055b-ce0d-4829-8d66-4fec8a3b69cf.4147.1 | grep owner
"owner": "testid",
~/ceph/build $ bin/radosgw-admin metadata get bucket:testbucket | grep owner
"owner": "test",
~/ceph/build $ bin/radosgw-admin bucket list --uid testid
[
"testbucket"
]
~/ceph/build $ bin/radosgw-admin bucket list --uid test
[
"testbucket"
]
</pre></p>
<p>3. upload a single 128MB object to the bucket. the new object is counted against the 'user stats' of both users<br /><pre>
~/ceph/build $ s3cmd put 128m.iso s3://testbucket
...
upload: '128m.iso' -> 's3://testbucket/128m.iso' [part 9 of 9, 8MB] [1 of 1]
8388608 of 8388608 100% in 0s 127.21 MB/s done
~/ceph/build $ bin/radosgw-admin user stats --uid testid --sync-stats
{
"stats": {
"size": 134217728,
"size_actual": 134217728,
"size_kb": 131072,
"size_kb_actual": 131072,
"num_objects": 1
},
"last_stats_sync": "2022-10-26T21:37:56.577699Z",
"last_stats_update": "2022-10-26T21:37:56.576376Z"
}
~/ceph/build $ bin/radosgw-admin user stats --uid test --sync-stats
{
"stats": {
"size": 134217728,
"size_actual": 134217728,
"size_kb": 131072,
"size_kb_actual": 131072,
"num_objects": 1
},
"last_stats_sync": "2022-10-26T21:38:00.272976Z",
"last_stats_update": "2022-10-26T21:38:00.271206Z"
}
</pre></p>
<p>this behavior was last changed in <a class="external" href="https://github.com/ceph/ceph/pull/41108">https://github.com/ceph/ceph/pull/41108</a></p>
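<p>A tiny illustrative consistency check in Python (a hypothetical helper, not part of radosgw-admin, using simplified fragments of the 'owner' fields grepped from the 'metadata get' output above): after 'bucket chown', the owner in the bucket entrypoint should match the owner in the bucket instance; with this bug they diverge, which is why the bucket stays linked to both users.</p>

```python
import json

# Simplified fragments of the two metadata records shown in step 2.
bucket_entrypoint = json.loads('{"owner": "test"}')    # metadata get bucket:testbucket
bucket_instance = json.loads('{"owner": "testid"}')    # metadata get bucket.instance:...

def chown_applied_everywhere(entrypoint: dict, instance: dict) -> bool:
    """A correct chown updates both metadata records to the new owner."""
    return entrypoint["owner"] == instance["owner"]

# False with the buggy chown: the instance record still names the old owner.
print(chown_applied_everywhere(bucket_entrypoint, bucket_instance))
```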
rgw - Bug #57911 (Pending Backport): Segmentation fault when uploading file with bucket policy on...
https://tracker.ceph.com/issues/57911
2022-10-21T10:39:43Z
Jan Graichen
<p>RGW crashes when a file is uploaded and a bucket policy has been set up.</p>
<p>The crash has been <a href="https://github.com/jgraichen/ceph-rgw-51574/actions/runs/3263466201" class="external">reproduced for the latest Quincy release (17.2.4) and dev branches</a> using the official container images (<code>latest-quincy</code>, <code>latest-quincy-devel</code>, <code>latest</code>, and <code>latest-devel</code>).</p>
<p>A fully automatic test script, an example policy, and a reproducing CI pipeline are provided <a href="https://github.com/jgraichen/ceph-rgw-51574" class="external">on GitHub</a>.</p>
<pre>
-19> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 1 ====== starting new request req=0x7f4c03b9c730 =====
-18> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s initializing for trans_id = tx000000c91d834c3cbab37-00634d0f73-100e-default
-17> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s getting op 4
-16> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj verifying requester
-15> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj normalizing buckets and tenants
-14> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj init permissions
-13> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj recalculating target
-12> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj reading permissions
-11> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj init op
-10> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj verifying op mask
-9> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj verifying op permissions
-8> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj verifying op params
-7> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj pre-executing
-6> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj check rate limiting
-5> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 2 req 905742721212984119 0.000000000s s3:post_obj executing
-4> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 0 req 905742721212984119 0.000000000s s3:post_obj Signature verification algorithm AWS v4 (AWS4-HMAC-SHA256)
-3> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 0 req 905742721212984119 0.000000000s Signature verification algorithm AWS v4 (AWS4-HMAC-SHA256)
-2> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 1 policy condition check $key [uploads/test.txt] uploads/ [uploads/]
-1> 2022-10-17T08:16:51.964+0000 7f4ddcdf7700 1 policy condition check $Content-Type [] []
0> 2022-10-17T08:16:51.968+0000 7f4ddcdf7700 -1 *** Caught signal (Segmentation fault) **
in thread 7f4ddcdf7700 thread_name:radosgw
ceph version 17.2.4 (1353ed37dec8d74973edc3d5d5908c20ad5a7332) quincy (stable)
1: /lib64/libpthread.so.0(+0x12cf0) [0x7f4e670c2cf0]
2: (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x12) [0x7f4e69689e72]
3: (rgw::sal::Object::get_obj() const+0x28) [0x7f4e69980008]
4: (RGWPostObj::execute(optional_yield)+0xb2) [0x7f4e69b3a2c2]
5: (rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, optional_yield, rgw::sal::Store*, bool)+0xab8) [0x7f4e697457f8]
6: (process_request(rgw::sal::Store*, RGWREST*, RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSink*, optional_yield, rgw::dmclock::Scheduler*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*, std::shared_ptr<RateLimiter>, int*)+0xe54) [0x7f4e69746e94]
7: /lib64/libradosgw.so.2(+0x638635) [0x7f4e696a6635]
8: /lib64/libradosgw.so.2(+0x639816) [0x7f4e696a7816]
9: make_fcontext()
</pre>
<p>The same or a similar crash has been reported and fixed for Pacific here: <a class="issue tracker-1 status-13 priority-4 priority-default" title="Bug: Segfault when uploading file (Fix Under Review)" href="https://tracker.ceph.com/issues/51574">#51574</a></p>
rgw - Bug #57336 (Triaged): rgw:junk data exists when performing "AbortMultipartUpload" operation
https://tracker.ceph.com/issues/57336
2022-08-30T01:59:26Z
wang kevin
<p>The scenario is as follows. After calling "InitiateMultipartUpload", we call "UploadPart" to upload data. Before the part upload completes, we call "AbortMultipartUpload" to terminate the upload. When we then run the "radosgw-admin bucket list" command to view the objects in the bucket, it still shows the part left behind by "UploadPart"; that is, the part has not been deleted (as shown in 1.jpg).<br />Going further, we checked whether the data actually exists: there is no part data in the data pool, but the index entry is still present in the index pool (as shown in 2.jpg).</p>
<p>Investigating the cause, we found that because "AbortMultipartUpload" had already deleted the part information, an error is reported during the execution of "void RGWPutObj::execute()"; at the end of that function the data in the data pool is deleted, but the index entry in the index pool is not.</p>
<p>We would like to understand why this happens and how to fix it.</p>
rgw - Bug #56810 (Fix Under Review): crash: rgw_user::to_str(std::basic_string<char, std::char_tr...
https://tracker.ceph.com/issues/56810
2022-07-28T02:19:45Z
Telemetry Bot
<p><a class="external" href="http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4d6f6c2d95a83dd94aa9b2160a7ca2c956d48d9aabf0bf5a07d0ca85a8e13853">http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4d6f6c2d95a83dd94aa9b2160a7ca2c956d48d9aabf0bf5a07d0ca85a8e13853</a></p>
<p>Sanitized backtrace:<br /><pre> rgw_user::to_str(std::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const
RGWRados::read_usage(DoutPrefixProvider const*, rgw_user const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, unsigned long, unsigned int, bool*, RGWUsageIter&, std::map<rgw_user_bucket, rgw_usage_log_entry, std::less<rgw_user_bucket>, std::allocator<std::pair<rgw_user_bucket const, rgw_usage_log_entry> > >&)
rgw::sal::RadosBucket::read_usage(DoutPrefixProvider const*, unsigned long, unsigned long, unsigned int, bool*, RGWUsageIter&, std::map<rgw_user_bucket, rgw_usage_log_entry, std::less<rgw_user_bucket>, std::allocator<std::pair<rgw_user_bucket const, rgw_usage_log_entry> > >&)
RGWUsage::show(DoutPrefixProvider const*, rgw::sal::Store*, rgw::sal::User*, rgw::sal::Bucket*, unsigned long, unsigned long, bool, bool, std::map<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, bool> > >*, RGWFormatterFlusher&)
</pre><br />Crash dump sample:<br /><pre>{
"backtrace": [
"/lib64/libpthread.so.0(+0x12ce0) [0x7f62a5d1ace0]",
"(rgw_user::to_str(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const+0x19) [0x55f961af2c79]",
"(RGWRados::read_usage(DoutPrefixProvider const*, rgw_user const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, unsigned long, unsigned int, bool*, RGWUsageIter&, std::map<rgw_user_bucket, rgw_usage_log_entry, std::less<rgw_user_bucket>, std::allocator<std::pair<rgw_user_bucket const, rgw_usage_log_entry> > >&)+0xdc) [0x55f961ebe3fc]",
"(rgw::sal::RadosBucket::read_usage(DoutPrefixProvider const*, unsigned long, unsigned long, unsigned int, bool*, RGWUsageIter&, std::map<rgw_user_bucket, rgw_usage_log_entry, std::less<rgw_user_bucket>, std::allocator<std::pair<rgw_user_bucket const, rgw_usage_log_entry> > >&)+0x42) [0x55f961fdcce2]",
"(RGWUsage::show(DoutPrefixProvider const*, rgw::sal::Store*, rgw::sal::User*, rgw::sal::Bucket*, unsigned long, unsigned long, bool, bool, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, bool> > >*, RGWFormatterFlusher&)+0x237) [0x55f961b44ad7]",
"main()",
"__libc_start_main()",
"_start()"
],
"ceph_version": "17.2.0",
"crash_id": "2022-06-28T06:03:07.591094Z_13ffbbc6-2a2b-4b1f-811b-4143df19fcc8",
"entity_name": "client.4470c629830f24db158bd262656bb3edbc7c3c1f",
"os_id": "centos",
"os_name": "CentOS Stream",
"os_version": "8",
"os_version_id": "8",
"process_name": "radosgw-admin",
"stack_sig": "e8c732f737bd9c0f8b1b47671fe31fd96a47cf16bd0ca2e335e0b5bdecc42344",
"timestamp": "2022-06-28T06:03:07.591094Z",
"utsname_machine": "x86_64",
"utsname_release": "5.4.0-109-generic",
"utsname_sysname": "Linux",
"utsname_version": "#123-Ubuntu SMP Fri Apr 8 09:10:54 UTC 2022"
}</pre></p>
rgw - Bug #54488 (Pending Backport): can not get quota config via swift api
https://tracker.ceph.com/issues/54488
2022-03-08T03:10:14Z
yunqing wang
<p>According to <a class="external" href="https://docs.openstack.org/api-ref/object-store/?expanded=show-container-metadata-detail#show-container-metadata">https://docs.openstack.org/api-ref/object-store/?expanded=show-container-metadata-detail#show-container-metadata</a>, we should be able to get the quota config by issuing a HEAD request on the bucket.</p>
<p>However, rgw does not return it because `RGWOp::init_quota()` returns before `bucket_quota` is set.</p>
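<p>A sketch of the failure mode in Python, with hypothetical names (this is not RGW code; the Swift container quota header name is real): an early return in an init path leaves a field unset, so the response builder later has nothing to emit even though a quota is configured.</p>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequestState:
    bucket_quota_bytes: Optional[int] = None

def init_quota(state: RequestState, configured_quota: int, early_return: bool) -> None:
    if early_return:  # the buggy path: return before the field is populated
        return
    state.bucket_quota_bytes = configured_quota

def container_headers(state: RequestState) -> dict:
    """Build the HEAD response headers; quota is omitted when the field is unset."""
    headers = {}
    if state.bucket_quota_bytes is not None:
        headers["X-Container-Meta-Quota-Bytes"] = str(state.bucket_quota_bytes)
    return headers

buggy, fixed = RequestState(), RequestState()
init_quota(buggy, configured_quota=1024, early_return=True)   # quota lost
init_quota(fixed, configured_quota=1024, early_return=False)
print(container_headers(buggy))  # {} -- HEAD response missing the quota header
print(container_headers(fixed))
```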
rgw - Bug #54452 (Pending Backport): Inverted check in RGWRados::check_disk_state()
https://tracker.ceph.com/issues/54452
2022-03-02T18:02:02Z
Daniel Gryniewicz
dang@redhat.com
<p>This code is inverted:</p>
<pre><code>const bool head_pool_found =
  get_obj_head_ioctx(dpp, bucket_info, obj, &head_obj_ctx);
if (head_pool_found) {</code></pre>
<p>This is because get_obj_head_ioctx() returns the standard int, where 0 is success. Casting to bool inverts this.</p>
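<p>The pitfall can be reproduced in Python with a stub (the stub name is hypothetical): a C-style function that returns 0 on success and a negative errno on failure reads backwards when its return value is coerced to a boolean.</p>

```python
def get_obj_head_ioctx_stub(found: bool) -> int:
    """Return 0 on success, a negative error code on failure (C convention)."""
    return 0 if found else -2  # -ENOENT

# Buggy pattern: success (0) coerces to False, failure (-2) coerces to True.
head_pool_found_buggy = bool(get_obj_head_ioctx_stub(found=True))
# Fixed pattern: compare the return code against 0 explicitly.
head_pool_found_fixed = get_obj_head_ioctx_stub(found=True) == 0

print(head_pool_found_buggy)  # False -- the check is inverted
print(head_pool_found_fixed)  # True
```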
rgw - Bug #54183 (Pending Backport): Early return in cloud tiering
https://tracker.ceph.com/issues/54183
2022-02-07T17:05:10Z
Daniel Gryniewicz
dang@redhat.com
<p>In transition_obj_to_cloud(), there's an early return in an error case, where the target is not removed from the list.</p>
rgw - Bug #53614 (Fix Under Review): The "--marker" option does not work for "radosgw-admin bucke...
https://tracker.ceph.com/issues/53614
2021-12-15T07:14:30Z
Amir Malekzadeh
<p>According to <a class="external" href="https://docs.ceph.com/en/latest/man/8/radosgw-admin">https://docs.ceph.com/en/latest/man/8/radosgw-admin</a>, when using the "radosgw-admin bucket chown" command you can use --marker to resume if the command gets interrupted:</p>
<blockquote>
<p>bucket chown</p>
<blockquote>
<p>Link bucket to specified user and update object ACLs. Use –marker to resume if command gets interrupted.</p>
</blockquote></blockquote>
<p>But using --marker does nothing; the command just starts over.</p>
rgw - Bug #53431 (Fix Under Review): When using radosgw-admin to create a user, when the uid is e...
https://tracker.ceph.com/issues/53431
2021-11-30T02:02:57Z
wang kevin
<p>When creating a user with an empty --uid, the error message is "user.init failed: (13) Permission denied"; a more helpful message would be "ERROR: --uid required".</p>
rgw - Bug #51574 (Fix Under Review): Segfault when uploading file
https://tracker.ceph.com/issues/51574
2021-07-07T15:36:22Z
Jan Graichen
<p>We recently upgraded our cluster to 16.2.4 but got segmentation faults in radosgw when uploading files.</p>
<p>At first, I thought we were hit by <a class="external" href="https://tracker.ceph.com/issues/50556">https://tracker.ceph.com/issues/50556</a>, as very few uploads did work and we are using bucket policies, but I was able to reproduce the issue with the following devel versions too. As far as I know, they should include the backport from 50556.</p>
<pre>
16.2.4-568-g2e1902f3
16.2.4-670-g468a1be6
</pre>
<p>I did run a radosgw via docker to reproduce the issue:</p>
<pre>
docker run --rm -it --net=host --user 64045:64045 -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ --name rgw.compute3 ceph/daemon-base:latest-pacific-devel@sha256:ce85def02b46df732434a553f0f343edd51ddbf67c1e0dc0a5b1ed19f32923ae radosgw -d --id rgw.test --keyring /etc/ceph/ceph.client.rgw.test.keyring --debug 255
2021-07-07T15:20:26.618+0000 7ff5f64e3440 0 ceph version 16.2.4-568-g2e1902f3 (2e1902f3a43860da461e68ebea5ef8dd48418278) pacific (stable), process radosgw, pid 1
2021-07-07T15:20:26.618+0000 7ff5f64e3440 0 framework: civetweb
2021-07-07T15:20:26.618+0000 7ff5f64e3440 0 framework conf key: port, val: 127.0.0.1:6080
2021-07-07T15:20:26.618+0000 7ff5f64e3440 1 radosgw_Main not setting numa affinity
2021-07-07T15:20:26.618+0000 7ff5f64e3440 -1 asok(0x55ba6c6e4000) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.rgw.test.1.94259171456320.asok': (13) Permission denied
2021-07-07T15:20:26.910+0000 7ff5f64e3440 0 framework: beast
2021-07-07T15:20:26.910+0000 7ff5f64e3440 0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
2021-07-07T15:20:26.910+0000 7ff5f64e3440 0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
2021-07-07T15:20:26.910+0000 7ff5f64e3440 0 starting handler: civetweb
2021-07-07T15:20:27.002+0000 7ff5bd0e8700 0 lifecycle: RGWLC::process() failed to acquire lock on lc.30, sleep 5, try again
2021-07-07T15:20:27.018+0000 7ff5f64e3440 1 mgrc service_daemon_register rgw.52645456 metadata {arch=x86_64,ceph_release=pacific,ceph_version=ceph version 16.2.4-568-g2e1902f3 (2e1902f3a43860da461e68ebea5ef8dd48418278) pacific (stable),ceph_version_short=16.2.4-568-g2e1902f3,cpu=AMD EPYC 7302P 16-Core Processor,distro=centos,distro_description=CentOS Linux 8,distro_version=8,frontend_config#0=civetweb port=127.0.0.1:6080,frontend_type#0=civetweb,hostname=core-a,id=test,kernel_description=#52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020,kernel_version=5.4.0-48-generic,mem_swap_kb=16759804,mem_total_kb=131448768,num_handles=1,os=Linux,pid=1,zone_id=5d41157e-dd10-42a1-99c7-542bf1fc6645,zone_name=default,zonegroup_id=99c1add5-41f3-4b7a-b2bd-32a84919c2db,zonegroup_name=default}
2021-07-07T15:20:27.022+0000 7ff5b90e0700 0 lifecycle: RGWLC::process() failed to acquire lock on lc.5, sleep 5, try again
2021-07-07T15:21:02.215+0000 7ff5b48d7700 1 ====== starting new request req=0x7ff5b48ced10 =====
2021-07-07T15:21:02.223+0000 7ff5b48d7700 1 ====== req done req=0x7ff5b48ced10 op status=0 http_status=200 latency=0.008000219s ======
2021-07-07T15:21:02.223+0000 7ff5b48d7700 1 civetweb: 0x55ba6d864000: 127.0.0.1 - - [07/Jul/2021:15:21:02 +0000] "OPTIONS /uploads HTTP/1.0" 200 354 https://example.org/ Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0
2021-07-07T15:21:02.271+0000 7ff5b48d7700 1 ====== starting new request req=0x7ff5b48ced10 =====
2021-07-07T15:21:02.307+0000 7ff5b48d7700 0 req 2 0.036000986s s3:post_obj Signature verification algorithm AWS v4 (AWS4-HMAC-SHA256)
2021-07-07T15:21:02.307+0000 7ff5b48d7700 0 req 2 0.036000986s Signature verification algorithm AWS v4 (AWS4-HMAC-SHA256)
2021-07-07T15:21:02.311+0000 7ff5b48d7700 1 policy condition check $key [uploads/0f45545d-09ac-4040-a744-93aa3ddc4c47/13fec8c30338d94b6767ac4c6f54df14215b1d241e9719deaa5cc74608f43398_1.jpg] uploads/0f45545d-09ac-4040-a744-93aa3ddc4c47/ [uploads/0f45545d-09ac-4040-a744-93aa3ddc4c47/]
2021-07-07T15:21:02.311+0000 7ff5b48d7700 1 policy condition check $Content-Type [image/jpeg] []
*** Caught signal (Segmentation fault) **
in thread 7ff5b48d7700 thread_name:civetweb-worker
ceph version 16.2.4-568-g2e1902f3 (2e1902f3a43860da461e68ebea5ef8dd48418278) pacific (stable)
1: /lib64/libpthread.so.0(+0x12b20) [0x7ff5ea8beb20]
2: (rgw_bucket::rgw_bucket(rgw_bucket const&)+0x23) [0x7ff5f5717123]
3: (rgw::sal::RGWObject::get_obj() const+0x20) [0x7ff5f5747320]
4: (RGWPostObj::execute(optional_yield)+0xb0) [0x7ff5f5a76250]
5: (rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, optional_yield, bool)+0xb12) [0x7ff5f56f5a82]
6: (process_request(rgw::sal::RGWRadosStore*, RGWREST*, RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSocket*, optional_yield, rgw::dmclock::Scheduler*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*, int*)+0x2851) [0x7ff5f56f98d1]
7: (RGWCivetWebFrontend::process(mg_connection*)+0x29b) [0x7ff5f562fa8b]
8: /lib64/libradosgw.so.2(+0x62a8f6) [0x7ff5f57c88f6]
9: /lib64/libradosgw.so.2(+0x62c567) [0x7ff5f57ca567]
10: /lib64/libradosgw.so.2(+0x62ca28) [0x7ff5f57caa28]
11: /lib64/libpthread.so.0(+0x814a) [0x7ff5ea8b414a]
12: clone()
2021-07-07T15:21:02.315+0000 7ff5b48d7700 -1 *** Caught signal (Segmentation fault) **
in thread 7ff5b48d7700 thread_name:civetweb-worker
ceph version 16.2.4-568-g2e1902f3 (2e1902f3a43860da461e68ebea5ef8dd48418278) pacific (stable)
1: /lib64/libpthread.so.0(+0x12b20) [0x7ff5ea8beb20]
2: (rgw_bucket::rgw_bucket(rgw_bucket const&)+0x23) [0x7ff5f5717123]
3: (rgw::sal::RGWObject::get_obj() const+0x20) [0x7ff5f5747320]
4: (RGWPostObj::execute(optional_yield)+0xb0) [0x7ff5f5a76250]
5: (rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, optional_yield, bool)+0xb12) [0x7ff5f56f5a82]
6: (process_request(rgw::sal::RGWRadosStore*, RGWREST*, RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSocket*, optional_yield, rgw::dmclock::Scheduler*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*, int*)+0x2851) [0x7ff5f56f98d1]
7: (RGWCivetWebFrontend::process(mg_connection*)+0x29b) [0x7ff5f562fa8b]
8: /lib64/libradosgw.so.2(+0x62a8f6) [0x7ff5f57c88f6]
9: /lib64/libradosgw.so.2(+0x62c567) [0x7ff5f57ca567]
10: /lib64/libradosgw.so.2(+0x62ca28) [0x7ff5f57caa28]
11: /lib64/libpthread.so.0(+0x814a) [0x7ff5ea8b414a]
12: clone()
</pre>
<p>This completely blocks us from upgrading radosgw, as most buckets and uploads in our cloud are affected. We are currently running all components on 16.2.4 (via Debian packages), but only radosgw on v15.2 (via docker).</p>