Activity
From 05/04/2021 to 06/02/2021
06/02/2021
- 08:42 PM Backport #50430: nautilus: Added caching for S3 credentials retrieved from keystone
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41158
merged
- 02:44 PM Bug #51068 (Pending Backport): multisite: metadata sync does not sync STS metadata (e.g., roles, ...
- Support for synchronizing roles/role policy/oidc provider config (etc?) is required to easily use STS AA in replicate...
- 06:05 AM Feature #47776: Add support for a custom CA certificate from vault KMS for SSE encryption
- Jiffin Tony Thottan wrote:
> I have created Bp branch in my git repo since this PR have conflicts with pacific https...
06/01/2021
- 03:40 PM Backport #51045 (Resolved): pacific: S3 event message eTag misspelled
- https://github.com/ceph/ceph/pull/42945
- 03:38 PM Bug #50115 (Pending Backport): S3 event message eTag misspelled
05/28/2021
- 02:05 PM Bug #51019 (Pending Backport): correct session policies permission evaluation
- Session policies restrict permissions granted by identity-based policies and/or resource policies. This PR corrects ...
- 01:50 PM Bug #51018 (Fix Under Review): improve JWT token validation in accordance with JSON Web Key Set
- Improve and enhance the JWT validation based on the different parameters passed in as a JSON Web Key Set, such as x5u, n, e, etc.
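For context on the JWK parameters named in this ticket: per RFC 7518, n and e are the base64url-encoded big-endian RSA modulus and exponent, while x5u points at an X.509 certificate chain. A small illustrative decoder (not RGW's code, just a sketch of the encoding):

```python
import base64

def b64url_to_int(val: str) -> int:
    """Decode a base64url JWK integer field such as "n" or "e" (RFC 7518)."""
    pad = "=" * (-len(val) % 4)  # JWK values strip base64 padding
    return int.from_bytes(base64.urlsafe_b64decode(val + pad), "big")
```

For example, the common RSA public exponent "AQAB" decodes to 65537.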
- 11:41 AM Bug #50839: radosgw returns error after upgrade from luminous to nautilus
- Here are logs from a failed request to a pre-jewel bucket...
- 11:33 AM Bug #50839: radosgw returns error after upgrade from luminous to nautilus
- Please try to run the nautilus rgw with debug_rgw=20
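The debug_rgw=20 setting requested above can be applied in ceph.conf or at runtime; a minimal sketch (the daemon/section name client.rgw.myhost is illustrative):

```ini
# ceph.conf, under the rgw client section
[client.rgw.myhost]
debug rgw = 20

# or at runtime via the centralized config (nautilus and later):
#   ceph config set client.rgw debug_rgw 20
```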
- 11:07 AM Bug #50839: radosgw returns error after upgrade from luminous to nautilus
- I don't want it in a different pool. And how does this change the fact that luminous rgw works perfectly and nautilus...
- 11:02 AM Bug #50839: radosgw returns error after upgrade from luminous to nautilus
- I think this is a problem with the default placement...
- 10:11 AM Bug #50839: radosgw returns error after upgrade from luminous to nautilus
- $ radosgw-admin zone get --rgw-zone=default...
- 09:46 AM Bug #50839: radosgw returns error after upgrade from luminous to nautilus
- Marcin, please paste yours ...
- 09:52 AM Feature #51017 (New): rgw: beast: lack of 302 http -> https redirects
- With civetweb it was possible to define:...
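Until Beast grows a native redirect option, a common workaround (an assumption here, not something stated in the ticket) is to issue the 302 from a proxy in front of RGW, e.g. haproxy:

```
frontend http_in
    bind *:80
    # send plain-HTTP clients to the HTTPS endpoint with a 302
    redirect scheme https code 302 if !{ ssl_fc }
```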
- 09:40 AM Backport #50982: pacific: Pacific - RadosGW not starting after upgrade
- https://github.com/ceph/ceph/pull/41576
- 08:18 AM Bug #50765 (Duplicate): impossible to disable TLS 1.0 and 1.1
- Closed in favour of #50932.
05/27/2021
- 04:55 PM Backport #51014 (Resolved): pacific: rgw: remove quota soft threshold
- https://github.com/ceph/ceph/pull/42634
- 04:55 PM Backport #51013 (Rejected): nautilus: rgw: remove quota soft threshold
- 04:55 PM Backport #51012 (Resolved): octopus: rgw: remove quota soft threshold
- https://github.com/ceph/ceph/pull/43271
- 04:51 PM Bug #50975 (Pending Backport): rgw: remove quota soft threshold
- 11:30 AM Bug #50975 (Fix Under Review): rgw: remove quota soft threshold
- 04:24 PM Bug #50932 (Fix Under Review): rgw: beast: lack of TLS settings
- 02:39 PM Bug #50932 (In Progress): rgw: beast: lack of TLS settings
- 02:26 PM Bug #50932: rgw: beast: lack of TLS settings
- there's some discussion about configuring the protocols and ciphers in https://github.com/ceph/ceph/pull/41384
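For reference, a minimal sketch of what such settings could look like for the Beast frontend; the ssl_options/ssl_ciphers names follow the discussion in PR 41384 and are assumptions, not a released interface:

```ini
# ceph.conf sketch (hypothetical option names, per the PR discussion)
[client.rgw.myhost]
rgw frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem ssl_options=no_sslv3:no_tlsv1:no_tlsv1_1 ssl_ciphers=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
```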
- 03:21 PM Backport #50982 (In Progress): pacific: Pacific - RadosGW not starting after upgrade
- 02:58 PM Bug #50977: s3select: empty file select failed
- It is not clear what the test is doing.
Please do not use stdin; use s3object instead.
Can you attach test.py?
You can...
- 02:35 PM Backport #51001 (Resolved): pacific: Reproducible crash on multipart upload to bucket with policy
- https://github.com/ceph/ceph/pull/41893
- 02:32 PM Bug #50556 (Pending Backport): Reproducible crash on multipart upload to bucket with policy
- 02:23 PM Bug #50920 (Fix Under Review): rgw: awsv4 client: wrong signature when param value contains '+'
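For context on the '+' signature bug above: SigV4 requires the canonical query string to use strict RFC 3986 percent-encoding, so a literal '+' in a parameter value must be sent (and signed) as %2B rather than treated as a space. An illustrative sketch of that canonical-query step (not the actual client code from the fix):

```python
from urllib.parse import quote

def canonical_query(params: dict) -> str:
    """Build a SigV4-style canonical query string (strict RFC 3986 encoding)."""
    # safe="-_.~" leaves only unreserved characters unescaped,
    # so '+' becomes %2B and ' ' becomes %20
    pairs = sorted((quote(k, safe="-_.~"), quote(v, safe="-_.~"))
                   for k, v in params.items())
    return "&".join(f"{k}={v}" for k, v in pairs)
```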
- 02:22 PM Bug #50974: rgw: storage class: GLACIER lifecycle doesn't work when STANDARD pool and GLACIER poo...
- sharing pools between storage classes is a bit unexpected, but not unreasonable. lifecycle transitions use the same l...
05/26/2021
- 08:19 PM Bug #49747: tempest: test_create_object_with_transfer_encoding fails
- Problem appears to be with tempest. It sends "transfer-encoding: chunked" twice. The tempest test is also failing w...
- 06:05 PM Backport #50982 (Resolved): pacific: Pacific - RadosGW not starting after upgrade
- https://github.com/ceph/ceph/pull/41576
- 06:02 PM Bug #50169 (Pending Backport): Pacific - RadosGW not starting after upgrade
- 09:52 AM Bug #50977 (Fix Under Review): s3select: empty file select failed
- SQL: select * from stdin;
[root@node1 s3select]# python3 test.py
Traceback (most recent call last):
File "/usr...
- 08:41 AM Bug #50975: rgw: remove quota soft threshold
- Description:
According to this thread in https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/QHALPTYRZA2OKLHYN...
- 08:29 AM Bug #50975 (Resolved): rgw: remove quota soft threshold
- Link to the description in PR: https://github.com/ceph/ceph/pull/41495#issue-651023054
05/25/2021
- 07:02 PM Bug #50974 (Triaged): rgw: storage class: GLACIER lifecycle doesn't work when STANDARD pool and G...
- The case is simple:
* storage classes is:...
- 08:42 AM Bug #50967: In a master-branch rgw environment, the bucket list operation displays the wrong informat...
- https://github.com/ceph/ceph/pull/41522
- 08:02 AM Bug #50967 (Resolved): In a master-branch rgw environment, the bucket list operation displays the wro...
- The result:
s3cmd mb s3://testbucket --host 0.0.0.0:8001 --access_key user1 --secret_key user1
dd if=/dev/zero of=3...
- 07:28 AM Bug #42788: unittest_rgw_dmclock_scheduler Failed
- Hi,
I got the same error on my development machine.
AFAIU, io_context::poll() does not guarantee that context i...
- 07:16 AM Feature #50966 (New): rgw: bucket index resharding with non-blocking write I/Os
- I have a Ceph cluster (octopus) for RGW. The number of objects grows periodically.
For example, my logging system...
- 05:33 AM Bug #49747: tempest: test_create_object_with_transfer_encoding fails
- I've managed to recreate this in several ubuntu focal environments with teuthology. So far it's looking very ubuntu ...
- 04:26 AM Bug #50945 (Closed): rgw: I/Os are blocked during dynamic resharding
- Closed by creator request
- 01:08 AM Bug #50945: rgw: I/Os are blocked during dynamic resharding
- I got an answer. Please close this ticket.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/OS6KJL...
05/24/2021
- 09:47 PM Bug #48869 (Resolved): race condition in amqp manager initialization
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #49791 (Resolved): Fail to download large object when put it with swift api.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:42 PM Backport #48423 (Resolved): nautilus: Able to circumvent S3 Object Lock using deleteobjects command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41164
m...
- 07:36 PM Backport #48423: nautilus: Able to circumvent S3 Object Lock using deleteobjects command
- Matt Benjamin wrote:
> https://github.com/ceph/ceph/pull/41164
merged - 09:41 PM Backport #50836 (Resolved): nautilus: Fail to download large object when put it with swift api.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40106
m...
- 09:36 PM Backport #48934 (Resolved): octopus: race condition in amqp manager initialization
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40382
m...
- 09:32 PM Backport #50365 (Resolved): octopus: rgw: during reshard lock contention, adjust logging
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41157
m...
- 12:52 PM Bug #50953 (New): Expose RGW outstanding request perf counter in Octopus
- Fixed in master: https://github.com/ceph/ceph/pull/38283
Would be nice to have these in Octopus as well so we can tu...
- 11:20 AM Bug #50952 (Resolved): ceph-16.2.4 compilation failure: invalid use of incomplete type in rgw
- gcc --version
gcc (Mageia 11.1.1-0.20210522.1.mga9) 11.1.1 20210522
Could be some C++ standard fuzziness?
<pre...
- 04:21 AM Bug #50945 (Closed): rgw: I/Os are blocked during dynamic resharding
- I have a Ceph Octopus cluster for RGW and RBD. I found that all I/Os to
RGW seemed to be blocked while dynamic resha...
- 03:45 AM Bug #50944 (New): rgw: RGWListMultipart_ObjStore_S3 and RGWListBucketMultiparts_ObjStore_S3 show ...
- RGWListMultipart_ObjStore_S3 and RGWListBucketMultiparts_ObjStore_S3 should return the real StorageClass instead of a fixed "...
05/21/2021
- 05:12 PM Bug #50932 (Resolved): rgw: beast: lack of TLS settings
- Currently the Beast frontend lacks TLS options.
For example, our production civetweb runs with these options:
"civetweb p...
- 02:02 PM Backport #48934: octopus: race condition in amqp manager initialization
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40382
merged
- 05:26 AM Bug #50924 (Resolved): rgw/rgw_string.h: has missing includes when compiling with boost 1.75 on a...
- On Alpine, rgw/rgw_string.h needs:
#include <string>
#include <stdexcept>
When compiling with boost 1.75
https:/...
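The missing-includes issue above is the classic transitive-include trap: code compiles only because some other header happens to pull in <string> and <stdexcept>, and breaks when a newer boost stops doing so. A minimal illustration (not the actual rgw_string.h contents):

```cpp
// With the explicit includes, this compiles regardless of what other
// headers happen to drag in transitively.
#include <stdexcept>
#include <string>

inline std::string require_nonempty(const std::string& s) {
    if (s.empty())
        throw std::invalid_argument("empty string");  // needs <stdexcept>
    return s;                                         // needs <string>
}
```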
05/20/2021
- 11:36 PM Bug #50169 (Fix Under Review): Pacific - RadosGW not starting after upgrade
- It tries to autodetect the zeroth generation; once it does so successfully, it marks what it is so it doesn't have to...
- 08:53 PM Bug #50920 (Resolved): rgw: awsv4 client: wrong signature when param value contains '+'
- 03:38 PM Bug #50741 (Won't Fix): crash: librados::v14_2_0::IoCtx::operate(std::__cxx11::basic_string<char,...
- this was bionic, so i can only assume the crash was related to https://tracker.ceph.com/issues/50218. since this came...
- 03:17 PM Backport #50836: nautilus: Fail to download large object when put it with swift api.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40106
merged
- 02:27 PM Bug #48262 (Fix Under Review): rgw: RGWListMultipart show Owner ID and DisplayName
- pr needs rebase
- 02:25 PM Bug #48646 (Closed): Bucket operations an issue with C# AWSSDK.S3 client
- 02:23 PM Bug #49692: Decompression of object file which is larger than 15M
- it looks like these changes came in https://github.com/ceph/ceph/pull/34852. it doesn't look like anyone reviewed or ...
- 06:18 AM Bug #50892 (Need More Info): rgw: In a multi-site environment, deleting the key from the bucket l...
- In a multi-site synchronization environment, the bucket full sync process will send a bucket list request to the remo...
05/19/2021
- 08:41 PM Bug #49747: tempest: test_create_object_with_transfer_encoding fails
- So far I have failed to reproduce this problem - various branches on fedora, rhel8. Most of the teuthology reports wh...
- 06:55 PM Backport #50677 (In Progress): octopus: Reproducible crash in radosgw (nautilus and later)
- 06:48 PM Backport #50423 (In Progress): octopus: [feature] rgw send headers of quota settings
- 05:34 PM Backport #50380 (In Progress): octopus: ceph-16.2.0 builds have started failing in Fedora 35/rawh...
- 05:32 PM Backport #50643 (In Progress): octopus: rgw: allow rgw-orphan-list to process multiple data pools
- 05:28 PM Backport #50730 (In Progress): octopus: rgw_file: RGWLibFS::read succeeds, but no data is read
- 05:20 PM Backport #50640 (In Progress): octopus: assumed-role: s3api head-object returns 403 Forbidden, ev...
- 04:25 PM Backport #50709 (In Progress): octopus: rgw: fix bucket object listing when initial marker matche...
- 04:16 PM Backport #50464 (In Progress): octopus: per bucket notification object is never deleted
05/18/2021
- 01:10 PM Bug #50765 (Fix Under Review): impossible to disable TLS 1.0 and 1.1
- 09:22 AM Bug #50756 (Need More Info): Haproxy with keepalive crashes RGW when object name length > 1024 bytes
- I tried to reproduce it on my machine without success (it works fine in my machine), Can you please share logs from R...
05/17/2021
- 06:58 PM Backport #50845: pacific: deprecate civetweb in pacific
- https://github.com/ceph/ceph/pull/41367
- 06:52 PM Backport #50845 (In Progress): pacific: deprecate civetweb in pacific
- 06:50 PM Backport #50845 (Resolved): pacific: deprecate civetweb in pacific
- https://github.com/ceph/ceph/pull/41367
- 06:47 PM Bug #50758 (Pending Backport): deprecate civetweb in pacific
- 11:59 AM Bug #50839 (New): radosgw returns error after upgrade from luminous to nautilus
- I've tried to upgrade a very old (dating back to firefly at least) RGW cluster from luminous to nautilus. Everything ...
- 11:52 AM Bug #43172: Radosgw(rgw_build_bucket_policies) fails on old buckets
- I've hit the same issue, while upgrading (very old) cluster from luminous to nautilus. I'm currently forced to keep o...
- 11:05 AM Backport #50836 (In Progress): nautilus: Fail to download large object when put it with swift api.
- 07:55 AM Backport #50836 (Resolved): nautilus: Fail to download large object when put it with swift api.
- https://github.com/ceph/ceph/pull/40106
- 08:44 AM Bug #48382: Broken public Swift bucket access with Keystone integration
- Rafal Wadolowski wrote:
> Hi Pietari,
>
> Can you try this?
>
> https://github.com/ceph/ceph/pull/38319
>
>... - 07:53 AM Bug #49791 (Pending Backport): Fail to download large object when put it with swift api.
- 07:41 AM Bug #50785: multisite: full sync broken for tenanted buckets
- Hi Casey:
I think we are having this issue now. Currently we are using octopus 15.2.10 and tenant bucket. All the ...
05/16/2021
- 04:57 PM Bug #50169: Pacific - RadosGW not starting after upgrade
- My logs show r=-5 instead of -1.
Casey mentioned in the mailing list that the fixes already merged (https://lists....
- 04:28 PM Bug #37942: Integer underflow in bucket stats
- It is hard to find the root cause, I gave up. I have confirmed with gdb that this large value comes from rados. So th...
- 10:34 AM Bug #37942: Integer underflow in bucket stats
- I can reproduce this on our cluster, running 16.2.3. And also radosgw compiled from current master branch. It appears...
- 09:37 AM Bug #50115: S3 event message eTag misspelled
- This can be closed now as the corresponding PR has been merged in.
05/15/2021
- 04:50 PM Backport #50365: octopus: rgw: during reshard lock contention, adjust logging
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41157
merged
05/14/2021
- 05:44 PM Bug #49791: Fail to download large object when put it with swift api.
- https://github.com/ceph/ceph/pull/40296 merged
- 04:49 PM Bug #49955: upgrade: rgw multisite failures
- i verified that the ubuntu crashes went away in https://pulpito.ceph.com/cbodley-2021-05-13_16:10:36-rgw:multisite-up...
- 03:47 PM Backport #50366 (Resolved): nautilus: rgw: during reshard lock contention, adjust logging
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41156
m...
- 07:42 AM Bug #47586: Able to circumvent S3 Object Lock using deleteobjects command
- Matt Benjamin wrote:
> (for bug scrub 4/29: I will take ownership of this issue)
Thank you, Matt!
05/13/2021
- 02:58 PM Bug #50169: Pacific - RadosGW not starting after upgrade
- That error, -1, on a CLS call, usually means the OSD doesn't have the CLS module loaded. Are you specifying the list ...
- 02:30 PM Bug #49756 (Resolved): unused variable warnings in rgw_crypt.cc
- 02:24 PM Bug #50552 (Triaged): rgw: set_olh return -2 when resharding
- 02:14 PM Bug #50742 (Duplicate): crash: std::locale::operator=(std::locale const&)
- 02:05 PM Backport #50803 (Resolved): pacific: slow radosgw-admin startup when large value of rgw_gc_max_ob...
- https://github.com/ceph/ceph/pull/42655
- 02:05 PM Backport #50802 (Resolved): octopus: slow radosgw-admin startup when large value of rgw_gc_max_ob...
- https://github.com/ceph/ceph/pull/45423
- 02:03 PM Bug #50520 (Pending Backport): slow radosgw-admin startup when large value of rgw_gc_max_objs con...
05/12/2021
- 06:16 PM Bug #50785 (Fix Under Review): multisite: full sync broken for tenanted buckets
- 06:12 PM Bug #50785 (Resolved): multisite: full sync broken for tenanted buckets
- full sync's requests to list bucket shards use rgwx-bucket-instance to pass the tenant name. this broke at some point...
- 05:00 PM Backport #50366: nautilus: rgw: during reshard lock contention, adjust logging
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41156
merged
- 09:54 AM Cleanup #50770: unify all ISO8601 timestamp formatting functions
- also: parse_iso8601() in src/rgw/rgw_common.cc
- 09:51 AM Cleanup #50770 (New): unify all ISO8601 timestamp formatting functions
- currently, there are several implementations of ISO8601 timestamp formatting:
* gmtime() in src/include/utime.h
* s...
- 09:51 AM Bug #50556 (Fix Under Review): Reproducible crash on multipart upload to bucket with policy
- 08:11 AM Bug #50765 (Duplicate): impossible to disable TLS 1.0 and 1.1
- TLS 1.0 and 1.1 are known to have some cryptographic design flaws and are recommended (required?) to be disabled on t...
05/11/2021
- 12:49 PM Bug #50758 (Fix Under Review): deprecate civetweb in pacific
- 12:45 PM Bug #50758 (Resolved): deprecate civetweb in pacific
- 11:01 AM Bug #50756 (Need More Info): Haproxy with keepalive crashes RGW when object name length > 1024 bytes
- The AWS S3 specification requires object name lengths <= 1024 bytes, and RGW follows this spec.
But we have a problem if ...
- 08:59 AM Bug #49090 (Resolved): http client - fix "Expect: 100-continue" issue
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:55 AM Backport #49092 (Resolved): nautilus: http client - fix "Expect: 100-continue" issue
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40667
m...
- 08:43 AM Bug #48358: rgw: qlen and qactive perf counters leak
- We can see the same behavior on 14.2.15.
- 02:30 AM Bug #50742 (Duplicate): crash: std::locale::operator=(std::locale const&)
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8ecddb583c78489948848932...
- 02:30 AM Bug #50741 (Won't Fix): crash: librados::v14_2_0::IoCtx::operate(std::__cxx11::basic_string<char,...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=66705e83204ec57949275295...
05/10/2021
- 06:21 PM Backport #50732 (Rejected): nautilus: rgw_file: RGWLibFS::read succeeds, but no data is read
- 06:21 PM Backport #50731 (Resolved): pacific: rgw_file: RGWLibFS::read succeeds, but no data is read
- https://github.com/ceph/ceph/pull/42654
- 06:21 PM Backport #50730 (Resolved): octopus: rgw_file: RGWLibFS::read succeeds, but no data is read
- https://github.com/ceph/ceph/pull/41416
- 06:20 PM Backport #50729 (Rejected): nautilus: multisite: crash in RGWRESTStreamRWRequest::do_send_prepare...
- 06:20 PM Backport #50728 (Resolved): pacific: multisite: crash in RGWRESTStreamRWRequest::do_send_prepare(...
- https://github.com/ceph/ceph/pull/42653
- 06:20 PM Backport #50727 (Resolved): octopus: multisite: crash in RGWRESTStreamRWRequest::do_send_prepare(...
- https://github.com/ceph/ceph/pull/41766
- 02:46 PM Backport #49092: nautilus: http client - fix "Expect: 100-continue" issue
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40667
merged
- 02:44 PM Bug #49189 (Pending Backport): rgw_file: RGWLibFS::read succeeds, but no data is read
- 02:38 PM Bug #49189: rgw_file: RGWLibFS::read succeeds, but no data is read
- Which PR was merged? There is none associated.
- 02:37 PM Bug #49189: rgw_file: RGWLibFS::read succeeds, but no data is read
- The PR is merged. This looks like a bug and implies that backports would be appropriate, but none are listed.
- 02:33 PM Bug #50103 (Pending Backport): multisite: crash in RGWRESTStreamRWRequest::do_send_prepare() with...
- 08:49 AM Bug #50721 (In Progress): make fetching of certs while validating tokens, more generic.
- 08:48 AM Bug #50721 (Resolved): make fetching of certs while validating tokens, more generic.
- rgw/sts: Fetch certs using the .well-known/openid-configuration URL, which is OpenID Connect compliant.
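The discovery flow this ticket refers to is standardized by OpenID Connect Discovery: fetch {issuer}/.well-known/openid-configuration, then follow its jwks_uri to the token-signing keys. A stdlib-only sketch (function names are illustrative, not RGW's code):

```python
import json
import urllib.request

def discovery_url(issuer: str) -> str:
    """Per OIDC Discovery, the provider config lives at a well-known path."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def jwks_uri(discovery_doc: dict) -> str:
    """The JWKS endpoint (token-signing keys) is advertised as jwks_uri."""
    return discovery_doc["jwks_uri"]

def fetch_jwks(issuer: str) -> dict:
    """Fetch the JWKS by following the discovery document (needs network)."""
    with urllib.request.urlopen(discovery_url(issuer)) as r:
        doc = json.load(r)
    with urllib.request.urlopen(jwks_uri(doc)) as r:
        return json.load(r)
```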
05/08/2021
- 03:00 PM Backport #50711 (Resolved): pacific: radosgw-admin mfa resync crashes when supplied with only one...
- https://github.com/ceph/ceph/pull/42652
- 03:00 PM Backport #50710 (Resolved): pacific: rgw: radosgw-admin bucket rm --bucket <bucket> --purge-objects
- https://github.com/ceph/ceph/pull/41863
- 03:00 PM Backport #50709 (Resolved): octopus: rgw: fix bucket object listing when initial marker matches p...
- https://github.com/ceph/ceph/pull/41413
- 03:00 PM Backport #50708 (Resolved): pacific: rgw: fix bucket object listing when initial marker matches p...
- https://github.com/ceph/ceph/pull/42638
- 02:57 PM Bug #50394 (Pending Backport): radosgw-admin mfa resync crashes when supplied with only one totp-pin
- 02:56 PM Bug #50620 (Pending Backport): rgw: radosgw-admin bucket rm --bucket <bucket> --purge-objects
- 02:56 PM Bug #50621 (Pending Backport): rgw: fix bucket object listing when initial marker matches prefix
- 03:07 AM Support #49549 (Resolved): createRole failed
05/06/2021
- 11:49 PM Bug #50169: Pacific - RadosGW not starting after upgrade
- Created bug for OSD issue at https://tracker.ceph.com/issues/50682
- 11:48 PM Bug #49589 (Resolved): use DoutPrefixProvider for log output under requests
- This issue has been resolved via this PR on master: https://github.com/ceph/ceph/pull/40551.
No need to manually b...
- 02:27 PM Bug #48350: [rgw]some multipart object will become orphan objects in data pool
- Does this require 2 cycles, or only one? That is, if you were running normal lifecycle that runs once a day at midni...
- 02:27 PM Bug #48122 (Won't Fix): rgw cannot find keyring after config file is minimized
- 02:23 PM Bug #48983 (Duplicate): radosgw not working - upgraded from mimic to octopus
- 02:15 PM Backport #50679 (Resolved): pacific: Reproducible crash in radosgw (nautilus and later)
- https://github.com/ceph/ceph/pull/42633
- 02:15 PM Backport #50678 (Rejected): nautilus: Reproducible crash in radosgw (nautilus and later)
- 02:15 PM Backport #50677 (Resolved): octopus: Reproducible crash in radosgw (nautilus and later)
- https://github.com/ceph/ceph/pull/41420
- 02:14 PM Bug #50388 (Resolved): upgrade/pacific-x/rgw-multisite: RuntimeError: An attempt to upgrade from ...
- 02:13 PM Bug #50394 (Fix Under Review): radosgw-admin mfa resync crashes when supplied with only one totp-pin
- 02:10 PM Bug #50467 (Pending Backport): Reproducible crash in radosgw (nautilus and later)
- 02:05 PM Bug #37375 (Closed): radosgw crashes when 'nss db path' is set in config (mimic/13.2.2)
05/05/2021
- 12:19 AM Backport #48423 (In Progress): nautilus: Able to circumvent S3 Object Lock using deleteobjects co...
- https://github.com/ceph/ceph/pull/41164
05/04/2021
- 07:48 PM Backport #50641 (In Progress): nautilus: assumed-role: s3api head-object returns 403 Forbidden, e...
- 12:10 PM Backport #50641 (Rejected): nautilus: assumed-role: s3api head-object returns 403 Forbidden, even...
- https://github.com/ceph/ceph/pull/41159
- 07:46 PM Backport #50429 (Rejected): octopus: Added caching for S3 credentials retrieved from keystone
- This fix is already in octopus:...
- 07:43 PM Backport #50430 (In Progress): nautilus: Added caching for S3 credentials retrieved from keystone
- 07:41 PM Backport #50365 (In Progress): octopus: rgw: during reshard lock contention, adjust logging
- 07:40 PM Backport #50366 (In Progress): nautilus: rgw: during reshard lock contention, adjust logging
- 02:07 PM Backport #46997 (In Progress): nautilus: S3 API DELETE /{bucket}?encryption or DELETE /{bucket}?r...
- 02:07 PM Backport #46997 (New): nautilus: S3 API DELETE /{bucket}?encryption or DELETE /{bucket}?replicati...
- first attempted backport - https://github.com/ceph/ceph/pull/36781 - was closed
- 01:10 PM Backport #50645 (Resolved): pacific: rgw: allow rgw-orphan-list to process multiple data pools
- https://github.com/ceph/ceph/pull/42635
- 01:10 PM Backport #50644 (Rejected): nautilus: rgw: allow rgw-orphan-list to process multiple data pools
- 01:10 PM Backport #50643 (Resolved): octopus: rgw: allow rgw-orphan-list to process multiple data pools
- https://github.com/ceph/ceph/pull/41417
- 01:09 PM Bug #50432 (Pending Backport): rgw: allow rgw-orphan-list to process multiple data pools
- 12:49 PM Bug #50620: rgw: radosgw-admin bucket rm --bucket <bucket> --purge-objects
- Lei Cao, I deleted your comment #1 so there wouldn't be two PRs attached to this tracker.
- 12:47 PM Bug #50620 (Fix Under Review): rgw: radosgw-admin bucket rm --bucket <bucket> --purge-objects
- 12:31 PM Bug #50620: rgw: radosgw-admin bucket rm --bucket <bucket> --purge-objects
- lei cao wrote:
> https://github.com/ceph/ceph/pull/41135
I had done the testing to determine the 1000 limit maki...
- 12:47 PM Bug #50621 (Fix Under Review): rgw: fix bucket object listing when initial marker matches prefix
- 12:10 PM Backport #50642 (Resolved): pacific: assumed-role: s3api head-object returns 403 Forbidden, even ...
- https://github.com/ceph/ceph/pull/42650
- 12:10 PM Backport #50640 (Resolved): octopus: assumed-role: s3api head-object returns 403 Forbidden, even ...
- https://github.com/ceph/ceph/pull/41415
- 12:09 PM Bug #49780 (Pending Backport): assumed-role: s3api head-object returns 403 Forbidden, even if rol...