Activity
From 09/30/2021 to 10/29/2021
10/29/2021
- 07:20 PM Backport #53099 (Resolved): octopus: rgw/crypt s3tests with vault: Failed to retrieve the actual ...
- https://github.com/ceph/ceph/pull/43963
- 07:20 PM Backport #53098 (Resolved): pacific: rgw/crypt s3tests with vault: Failed to retrieve the actual ...
- https://github.com/ceph/ceph/pull/43951
- 07:15 PM Bug #51539 (Pending Backport): rgw/crypt s3tests with vault: Failed to retrieve the actual key, k...
- 05:20 PM Bug #53095 (Resolved): tempest dependency issue with 'PrettyTable'
- ...
- 05:13 PM Backport #53091 (In Progress): pacific: [rfe] generalize ops log output channel for gelf and/or s...
- 04:45 PM Backport #53091 (Resolved): pacific: [rfe] generalize ops log output channel for gelf and/or sysl...
- https://github.com/ceph/ceph/pull/43740
- 04:41 PM Bug #48752 (Pending Backport): [rfe] generalize ops log output channel for gelf and/or syslog tar...
- 04:30 PM Bug #48752: [rfe] generalize ops log output channel for gelf and/or syslog targets
- thanks Matt!
Cory, if you want to see this on Pacific, feel free to update the Backport field and set Status = Pen...
- 04:23 PM Bug #48752 (Resolved): [rfe] generalize ops log output channel for gelf and/or syslog targets
- 04:26 PM Bug #53090 (Resolved): rgw/sts: 403 response seen from radosgw on cleanup from (passed!) s3-tests...
- From teuthology.log:
2021-10-28T15:19:49.750 INFO:teuthology.orchestra.run.smithi105.stderr:botocore.parsers: DEBU...
- 12:56 PM Feature #45568: Swift Extract Archive Operation
- I've experimented a bit more and inspected the output of the @/info@ API endpoint of my local OpenStack instance. It...
- 08:40 AM Bug #52813 (Resolved): octopus:"radosgw-admin bucket limit check" has duplicate entries if bucket...
- 07:26 AM Bug #52697: segmentation faults in libradosgw
- I couldn't reproduce this problem, so please close this ticket for now.
10/28/2021
- 10:53 PM Bug #52716 (Fix Under Review): incorrect multipart upload owner, access denied when listing parts...
- 08:02 PM Bug #52716: incorrect multipart upload owner, access denied when listing parts of multipart uploa...
- > And I can't list its parts:
>
> $ aws s3api list-parts --bucket test-r1 --key ubuntu-20.04.1-desktop-amd64.iso ...
- 08:01 PM Bug #52716: incorrect multipart upload owner, access denied when listing parts of multipart uploa...
- OK, I see that list-multipart-uploads is just printing the requesting user as both 'Owner' and 'Initiator', instead o...
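As background for the Owner/Initiator distinction discussed above: in S3's ListMultipartUploads and ListParts responses, 'Owner' identifies the account that owns the uploaded object while 'Initiator' identifies the user who started the upload, and the two can legitimately differ. A minimal Python sketch of that distinction (the user IDs below are hypothetical; only the object key comes from the report):

```python
# Sketch of the Owner vs. Initiator distinction; the user IDs and helper
# name here are hypothetical, not taken from the bug report or RGW code.
def make_upload_entry(owner_id, initiator_id):
    """Build a ListMultipartUploads-style entry where Owner is the
    object owner and Initiator is the user who started the upload."""
    return {
        "Key": "ubuntu-20.04.1-desktop-amd64.iso",
        "Owner": {"ID": owner_id},
        "Initiator": {"ID": initiator_id},
    }

entry = make_upload_entry(owner_id="bucket-owner", initiator_id="uploading-user")
# The reported bug: RGW returned the requesting user in both fields,
# losing this distinction.
assert entry["Owner"]["ID"] != entry["Initiator"]["ID"]
```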
- 09:53 PM Bug #52813: octopus:"radosgw-admin bucket limit check" has duplicate entries if bucket count exce...
- https://github.com/ceph/ceph/pull/43381 merged
- 07:27 PM Bug #53003: Performance regression on rgw/s3 copy operation
- thanks for the help in tracking this down!
- 07:26 PM Bug #53003 (Fix Under Review): Performance regression on rgw/s3 copy operation
- 03:13 PM Backport #52783 (In Progress): pacific: Cannot perform server-side copy using STS credentials
- 02:30 PM Backport #53079 (Resolved): pacific: notifications: http endpoints with one trailing slash are co...
- https://github.com/ceph/ceph/pull/45459
- 02:30 PM Backport #53078 (Resolved): octopus: notifications: http endpoints with one trailing slash are co...
- https://github.com/ceph/ceph/pull/45460
- 02:27 PM Bug #52738 (Pending Backport): notifications: http endpoints with one trailing slash are consider...
- 02:24 PM Bug #52814: RGW services stop responding when datacenter down
- hi Or, this case looks kind of similar to the fast-fail work you did previously. can you tell whether that change wou...
- 02:16 PM Bug #52964 (Triaged): garbage collection doesn't remove gc list entries if the object's pool does...
- 02:16 PM Bug #52964 (New): garbage collection doesn't remove gc list entries if the object's pool doesn't ...
- 02:12 PM Bug #53029 (Triaged): radosgw-admin fails on "sync status" if a single RGW process is down
- 02:11 PM Bug #53075 (Need More Info): nautilus:rgw:The master zone received too many packets during synchr...
- > If a log shard doesn't perform incremental synchronization after full synchronization, it will give up waiting for 20...
- 03:45 AM Bug #53075 (Won't Fix - EOL): nautilus:rgw:The master zone received too many packets during synch...
- If a log shard doesn't perform incremental synchronization after full synchronization, it will give up waiting for 20s ...
- 02:06 PM Bug #51068 (In Progress): multisite: metadata sync does not sync STS metadata (e.g., roles, polic...
- 01:16 PM Bug #52530 (Resolved): segfault in rgw_log_op()
- Cory Snyder wrote:
> I don't believe that this fix needs to be backported to Pacific because rgw_log_entry does not ...
- 01:11 PM Bug #52530: segfault in rgw_log_op()
- I don't believe that this fix needs to be backported to Pacific because rgw_log_entry does not have the identity_type...
- 01:12 PM Backport #52787 (Resolved): pacific: segfault in rgw_log_op()
- Fix does not appear to be applicable in Pacific.
- 12:32 PM Bug #48382: Broken public Swift bucket access with Keystone integration
- so basically this...
- 12:29 PM Bug #48382: Broken public Swift bucket access with Keystone integration
- Mohammed Naser wrote:
> this has just hit us and it seems like a huge regression, i'm trying this patch now.
a Ha...
- 12:18 PM Backport #52959 (In Progress): octopus: Make RGW transaction IDs less deterministic
- 12:18 PM Backport #52960 (In Progress): pacific: Make RGW transaction IDs less deterministic
10/27/2021
- 03:35 PM Bug #53030: rgw nfs export at user-level crash on readdir
- same crash on pacific:...
- 02:27 PM Bug #52964: garbage collection doesn't remove gc list entries if the object's pool doesn't exist
- Pritha Srivastava wrote:
> Can you give me the exact steps that you tried for the second bug that you are reporting,...
- 04:59 AM Bug #52964 (Need More Info): garbage collection doesn't remove gc list entries if the object's po...
- Can you give me the exact steps that you tried for the second bug that you are reporting,
I understand that you ha...
- 01:35 PM Backport #53071 (Rejected): octopus: crash: RGWSI_MetaBackend_SObj
- 01:35 PM Backport #53070 (Resolved): pacific: crash: RGWSI_MetaBackend_SObj
- https://github.com/ceph/ceph/pull/44747
- 01:35 PM Backport #53069 (Rejected): octopus: rgw: add logging to bucket listing so calls are better under...
- 01:35 PM Backport #53068 (Rejected): pacific: rgw: add logging to bucket listing so calls are better under...
- 01:34 PM Bug #51927 (Pending Backport): crash: RGWSI_MetaBackend_SObj
- 01:30 PM Feature #52594 (Pending Backport): rgw: add logging to bucket listing so calls are better understood
10/26/2021
- 11:12 PM Bug #51327: Format of date/time header "x-amz-object-lock-retain-until-date" is incorrect (does n...
- I’ve submitted the backport for Octopus: https://github.com/ceph/ceph/pull/43656
- 10:41 PM Bug #48382: Broken public Swift bucket access with Keystone integration
- this has just hit us and it seems like a huge regression, i'm trying this patch now.
- 08:38 PM Backport #53005: pacific: librgw_op_tp.so and librgw_rados_tp.so are included in librgw2 package,...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43619
merged
- 07:28 PM Bug #50249: rgw doesn't respect rgw_frontends stored in cluster configuration
- i raised an email about this issue on the dev@ceph.io list, titled "global init, mon config and setuid" - feel free t...
- 08:35 AM Feature #53040 (Pending Backport): notifications: allow empty configuration to delete all notific...
- this would make our code compliant with the way notifications are deleted in AWS.
see: https://docs.aws.amazon.com/A...
10/25/2021
- 06:55 PM Backport #53037 (Resolved): octopus: rgw: cls_bucket_list_unordered() might return repeating or p...
- https://github.com/ceph/ceph/pull/45458
- 06:55 PM Backport #53036 (Resolved): pacific: rgw: cls_bucket_list_unordered() might return repeating or p...
- https://github.com/ceph/ceph/pull/45457
- 06:52 PM Bug #51466 (Pending Backport): rgw: cls_bucket_list_unordered() might return repeating or partial...
- 06:00 PM Tasks #53035 (In Progress): in dbstore add config to set sqlite pragma performance tuning options
- in benchmarks
changing the defaults to for example...
- 02:50 PM Bug #53030: rgw nfs export at user-level crash on readdir
- I'm reproducing this with this branch: https://github.com/liewegas/ceph/tree/nfs-rgw-by-user...
- 02:47 PM Bug #53030 (Resolved): rgw nfs export at user-level crash on readdir
- ...
- 02:08 PM Bug #53029 (Fix Under Review): radosgw-admin fails on "sync status" if a single RGW process is down
- We're using ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable) in a containerized depl...
- 01:10 PM Bug #51429: radosgw-admin bi list fails with Input/Output error
- Issue not seen on ceph version 17.0.0-8051-g15b54dc9
- 11:47 AM Bug #51461: Unable to read bucket stats post reshard from Multisite primary
- We are not seeing issue on ceph version 17.0.0-8051-g15b54dc9 (15b54dc9eaa835e95809e32e8ddf109d416320c9) quincy (dev)
10/22/2021
- 09:33 AM Bug #53016: init_multipart saves uncoded tags
- I specified tag params in the init_multipart interface, then uploaded the file; after this, I can't get the taginfo by ge...
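For reference, S3 carries object tags in the `x-amz-tagging` header as a URL-encoded query string, so a round-trip through encode/decode should preserve the tag set; the bug title suggests this step was skipped for multipart init. A minimal Python sketch of the expected round-trip (the tag names are made up, not from the report):

```python
from urllib.parse import urlencode, parse_qs

# Tags supplied at multipart-upload initiation travel in the x-amz-tagging
# header as a URL-encoded query string (standard S3 behavior).
tags = {"project": "ceph tracker", "tier": "cold"}
header_value = urlencode(tags)  # e.g. 'project=ceph+tracker&tier=cold'

# On GetObjectTagging, the stored string must decode back to the same tags;
# the bug title suggests RGW stored the tags without this encode/decode step.
decoded = {k: v[0] for k, v in parse_qs(header_value).items()}
assert decoded == tags
```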
- 09:23 AM Bug #53016 (Fix Under Review): init_multipart saves uncoded tags
- 06:34 AM Bug #53003: Performance regression on rgw/s3 copy operation
- I've tested it with the latest pacific version (16.2.6) and it still happens:
```
# time s3cmd modify s3://test/t...
```
- 02:46 AM Bug #51466 (Resolved): rgw: cls_bucket_list_unordered() might return repeating or partial results...
- 鹏 张, Would you determine if this needs a backport to pacific or octopus? I currently have this in the RESOLVED state,...
10/21/2021
- 07:44 PM Backport #53009 (In Progress): octopus: librgw_op_tp.so and librgw_rados_tp.so are included in li...
- 03:05 PM Backport #53009 (Rejected): octopus: librgw_op_tp.so and librgw_rados_tp.so are included in librg...
- https://github.com/ceph/ceph/pull/43624
- 02:35 PM Backport #53008 (Resolved): octopus: segfault on FIPS enabled server as result of EVP_md5 disable...
- https://github.com/ceph/ceph/pull/44806
- 02:35 PM Backport #53007 (Resolved): pacific: segfault on FIPS enabled server as result of EVP_md5 disable...
- 02:34 PM Bug #52900 (Pending Backport): segfault on FIPS enabled server as result of EVP_md5 disabled in o...
- 02:30 PM Backport #53005 (In Progress): pacific: librgw_op_tp.so and librgw_rados_tp.so are included in li...
- 02:25 PM Backport #53005 (Resolved): pacific: librgw_op_tp.so and librgw_rados_tp.so are included in librg...
- https://github.com/ceph/ceph/pull/43619
- 02:27 PM Bug #52964 (Triaged): garbage collection doesn't remove gc list entries if the object's pool does...
- 02:27 PM Bug #52964 (New): garbage collection doesn't remove gc list entries if the object's pool doesn't ...
- 02:21 PM Bug #52979 (Pending Backport): librgw_op_tp.so and librgw_rados_tp.so are included in librgw2 pac...
- 02:13 PM Bug #50249 (Triaged): rgw doesn't respect rgw_frontends stored in cluster configuration
- 02:09 PM Bug #51068: multisite: metadata sync does not sync STS metadata (e.g., roles, policy, ...)
- new pr opened in https://github.com/ceph/ceph/pull/43597
- 01:55 PM Bug #53003 (Resolved): Performance regression on rgw/s3 copy operation
- reported on dev@ceph.io by Tim Foerster:
> we noticed a massive latency increase on object copy since the pacific
... - 06:04 AM Bug #48746 (Resolved): SSE-KMS vault transit: use transit correctly.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:01 AM Bug #51536 (Resolved): rgw: fail as expected when set-bucket-website attempted on a non-existent ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:00 AM Bug #51681 (Resolved): notifications: zero size in case of delete marker creation
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:59 AM Bug #52290 (Resolved): rgw fix sts memory leak
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:53 AM Backport #51695 (Resolved): octopus: rgw: fail as expected when set-bucket-website attempted on a...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43424
m...
- 05:44 AM Backport #52350 (Resolved): pacific: rgw fix sts memory leak
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43348
m...
- 05:44 AM Backport #51803 (Resolved): pacific: notifications: zero size in case of delete marker creation
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42643
m...
10/20/2021
- 06:40 PM Backport #52991 (Resolved): octopus: rgw: have "bucket check --fix" fix pool ids correctly
- https://github.com/ceph/ceph/pull/45456
- 06:40 PM Backport #52990 (Resolved): pacific: rgw: have "bucket check --fix" fix pool ids correctly
- https://github.com/ceph/ceph/pull/45455
- 06:38 PM Bug #52941 (Pending Backport): rgw: have "bucket check --fix" fix pool ids correctly
- 05:50 PM Backport #52989 (Resolved): octopus: rgw: document rgw_lc_debug_interval configuration option
- https://github.com/ceph/ceph/pull/45454
- 05:50 PM Backport #52988 (Resolved): pacific: rgw: document rgw_lc_debug_interval configuration option
- https://github.com/ceph/ceph/pull/45453
- 05:49 PM Documentation #52830 (Pending Backport): rgw: document rgw_lc_debug_interval configuration option
- 05:06 PM Feature #48402 (Fix Under Review): multisite option to enable keepalive
- 12:11 PM Bug #52979 (Fix Under Review): librgw_op_tp.so and librgw_rados_tp.so are included in librgw2 pac...
- 10:50 AM Bug #52979: librgw_op_tp.so and librgw_rados_tp.so are included in librgw2 package, but have a di...
- It's possible that this RPMLINT error is specific to openSUSE. If upstream doesn't think it's useful for the SO versi...
- 10:47 AM Bug #52979 (Resolved): librgw_op_tp.so and librgw_rados_tp.so are included in librgw2 package, bu...
- librgw_op_tp.so and librgw_rados_tp.so are included in librgw2 package, but have a different major version number tha...
- 12:04 PM Feature #52980 (Pending Backport): notifications/admin: add admin interfaces to get and delete no...
- ...
10/19/2021
- 07:51 PM Bug #52684 (Need More Info): Observing sync inconsistencies on a bucket that has been resharded.
- can you please share the outputs of 'bucket sync status' on each zone? also check 'sync error list' for errors?
- 07:10 AM Bug #52684: Observing sync inconsistencies on a bucket that has been resharded.
- We are observing the issue on a larger scale for a tenanted and versioned bucket.
Steps done:
1. create a bucket ...
- 07:15 PM Bug #52748 (Won't Fix - EOL): specifying --tags, --placement-id and --storage-class do not work wh...
- Huber ming wrote:
> are you sure the radosgw-admin version is 14.2.15?
sorry no, this was testing on master. it ...
- 01:56 AM Bug #52748: specifying --tags, --placement-id and --storage-class do not work when creating (modifying) ...
- are you sure radosgw-admin version is 14.2.15 ?
the value of "placement_tags" is not changed after running "rado...
- 07:13 PM Feature #43164 (Resolved): rgw: support specifying user default placement and placement_tags when cr...
- 07:11 PM Feature #43164 (Pending Backport): rgw: support specifying user default placement and placement_tags...
- 05:24 PM Bug #52976 (Fix Under Review): 'radosgw-admin bi purge' unable to delete index if bucket entrypoi...
- 05:08 PM Bug #52976 (Resolved): 'radosgw-admin bi purge' unable to delete index if bucket entrypoint doesn...
- when given both --bucket and --bucket-id, the command should be able to delete the given bucket instance/index even i...
- 09:11 AM Bug #52964: garbage collection doesn't remove gc list entries if the object's pool doesn't exist
- Casey Bodley wrote:
> how big was this object?
>
> are you familiar with the garbage collection system that clean...
- 08:24 AM Bug #51327: Format of date/time header "x-amz-object-lock-retain-until-date" is incorrect (does n...
- Deepika Upadhyay wrote:
> Hey Danny!
>
> I see already existing tracker issues for backports, do you want to crea...
10/18/2021
- 08:58 PM Bug #52917: rgw-multisite: bucket sync checkpoint for a bucket lists out very high value/incorre...
- Vidushi Mishra wrote:
> radosgw-admin bucket sync checkpoint lists out a very high value for local gen.
>
>
> s...
- 08:01 PM Bug #52964 (Need More Info): garbage collection doesn't remove gc list entries if the object's po...
- how big was this object?
are you familiar with the garbage collection system that cleans up these deleted tail obj...
- 02:47 PM Bug #52964 (Triaged): garbage collection doesn't remove gc list entries if the object's pool does...
- How is reproduced
added file to bucket from S3 browser... - 06:19 PM Bug #51487 (Resolved): Sync stopped from primary to secondary post reshard
- thanks for verifying!
- 04:45 PM Bug #51487: Sync stopped from primary to secondary post reshard
- We are not seeing the issue on resharding the bucket twice from the secondary and writing from the primary and verify...
- 02:40 PM Bug #51487: Sync stopped from primary to secondary post reshard
- 1. We tested the above scenario by resharding the bucket from the primary and writing objects from the secondary. We ...
- 04:14 PM Bug #51927 (Fix Under Review): crash: RGWSI_MetaBackend_SObj
- 01:19 PM Bug #52963 (Won't Fix): pubsub: duplicate events are seen when there are more than 2 zones
- when there are N zones in a zonegroup and 1 pubsub zone, it is possible to see N-1 instances of each bucket notific...
10/15/2021
- 08:27 PM Bug #52748 (Need More Info): specifying --tags, --placement-id and --storage-class do not work whe...
- hi, can you please clarify which part doesn't work?
after adding a new storage class 'GLACIER' to the zonegroup an...
- 06:10 PM Backport #52960 (Resolved): pacific: Make RGW transaction IDs less deterministic
- https://github.com/ceph/ceph/pull/43695
- 06:10 PM Backport #52959 (Resolved): octopus: Make RGW transaction IDs less deterministic
- https://github.com/ceph/ceph/pull/43696
- 06:10 PM Backport #52958 (Resolved): pacific: ERROR: failed to list reshard log entries, oid=reshard.00000...
- https://github.com/ceph/ceph/pull/45451
- 06:10 PM Backport #52957 (Resolved): octopus: ERROR: failed to list reshard log entries, oid=reshard.00000...
- https://github.com/ceph/ceph/pull/45452
- 06:09 PM Bug #51559 (Resolved): radosgw-admin create subuser with a existing access-key has not error log
- 06:08 PM Bug #52457 (Resolved): D3N: when trying to write to cache object larger than the free disk space ...
- @Mark, i'm closing as resolved. feel free to tag for backports if you think it needs it
- 06:06 PM Fix #52818 (Pending Backport): Make RGW transaction IDs less deterministic
- 06:06 PM Bug #52873 (Pending Backport): ERROR: failed to list reshard log entries, oid=reshard.0000000000 ...
- 06:05 PM Bug #52873 (Resolved): ERROR: failed to list reshard log entries, oid=reshard.0000000000 marker= ...
- 03:14 PM Bug #52896: rgw-multisite: Dynamic resharding take too long to take effect
- Vidushi Mishra wrote:
> I have 2 more observations to add here.
>
> 1. when I restarted the gateways, the dynamic...
10/14/2021
- 08:13 PM Documentation #23027 (Resolved): document rgw_multipart_min_part_size
- 07:23 PM Bug #52941 (Resolved): rgw: have "bucket check --fix" fix pool ids correctly
- This is a long-standing bug where "radosgw-admin bucket check --fix" updates the bucket index records with the wrong ...
- 02:39 PM Bug #52721: Bucket index entry not always created
- Hi,
Could you please try to listomapkeys from the second shard of the bucket too and share the results?
- 02:32 PM Bug #52800 (Duplicate): OSD down makes all the RGWs crash and the cluster unavailable with production...
- Changing status to 'Duplicate' to match this issue's 'Duplicates' relation.
- 02:26 PM Bug #52799 (Duplicate): Segmentation Fault in radosgw-admin period update --commit
- 02:23 PM Bug #52813 (Fix Under Review): octopus:"radosgw-admin bucket limit check" has duplicate entries i...
- 02:04 PM Bug #52647 (Resolved): rgw/singleton failures: assert rgwlog['bucket'].find(bucket_name) == 0 or ...
- 02:04 PM Bug #52668 (Resolved): rgw: radosgw_admin.py is reporting errors in "log show --object" tests
- 08:29 AM Bug #52927: notifications: add per topic stats for persistent notifications
- see question from: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YQEW7DE4SOWERD7KH64MCRIVHSXWMUCZ/
- 08:15 AM Bug #52927 (Pending Backport): notifications: add per topic stats for persistent notifications
- stats should be accessible via radosgw-admin and should include:
* retries
* pending notifications in the queue
* ...
10/13/2021
- 03:49 PM Bug #52917: rgw-multisite: bucket sync checkpoint for a bucket lists out very high value/incorre...
- On the master site:
[ceph: root@clara001 /]# radosgw-admin bucket sync checkpoint --bucket tx/ss-bkt-v3
2021-10-13...
- 03:48 PM Bug #52917 (Closed): rgw-multisite: bucket sync checkpoint for a bucket lists out very high valu...
- radosgw-admin bucket sync checkpoint lists out a very high value for local gen.
snippet of the issue:
---------...
- 03:41 AM Bug #52896: rgw-multisite: Dynamic resharding takes too long to take effect
- I have 2 more observations to add here.
1. when I restarted the gateways, the dynamic resharding kicked in as expe...
10/12/2021
- 06:27 PM Bug #52877 (Resolved): rgw-multisite: Dynamic resharding fails to take effect, even when rgw_max_...
- merged into wip-rgw-multisite-reshard and kicked off new builds
- 04:49 PM Bug #52877 (Fix Under Review): rgw-multisite: Dynamic resharding fails to take effect, even when ...
- 04:42 PM Bug #52877: rgw-multisite: Dynamic resharding fails to take effect, even when rgw_max_objs_per_sh...
- oops! looks like this part of rgw_rados.cc is to blame:...
- 04:25 PM Bug #52877 (Triaged): rgw-multisite: Dynamic resharding fails to take effect, even when rgw_max_o...
- Vidushi Mishra wrote:
> On doing a radosgw-admin reshard process, the buckets that were listed in the reshard list a...
- 05:15 AM Bug #52877: rgw-multisite: Dynamic resharding fails to take effect, even when rgw_max_objs_per_sh...
- On doing a radosgw-admin reshard process, the buckets that were listed in the reshard list are regarded on the slave ...
- 05:02 PM Bug #52896 (Triaged): rgw-multisite: Dynamic resharding takes too long to take effect
- not sure what would cause this delay. the RGWReshard::ReshardWorker thread should be attempting reshards every rgw_re...
- 08:42 AM Bug #52896 (Closed): rgw-multisite: Dynamic resharding takes too long to take effect
- On a multi-site cluster, dynamic resharding takes very long to kick in, even when the rgw_max_objs_per_shard value is...
- 03:13 PM Bug #52900: segfault on FIPS enabled server as result of EVP_md5 disabled in openssl
- created a PR (https://github.com/ceph/ceph/pull/43503) that activates the FIPS overriding mechanism in 2 locations (r...
- 01:55 PM Bug #52900 (Pending Backport): segfault on FIPS enabled server as result of EVP_md5 disabled in o...
- reproduces in a vstart environment on ...
- 10:49 AM Bug #52799: Segmentation Fault in radosgw-admin period update --commit
- Yes, FIPS is enabled.
I just tested this right now with FIPS disabled - and... it works.
So, I have a work-a-ro...
- 10:06 AM Bug #51427: Multisite sync stuck if reshard is done while bucket sync is disabled
- Not seeing an issue as of now on ceph version 17.0.0-8051-g15b54dc9 (15b54dc9eaa835e95809e32e8ddf109d416320c9) quincy...
10/11/2021
- 06:32 PM Bug #52873 (Fix Under Review): ERROR: failed to list reshard log entries, oid=reshard.0000000000 ...
- this behavior isn't specific to the wip-rgw-multisite-reshard branch, so i pushed a fix to master instead. these ENOE...
- 05:21 PM Bug #52799: Segmentation Fault in radosgw-admin period update --commit
- we've seen a similar crash from `radosgw-admin period update --commit` inside of openssl due to FIPS enforcement. is ...
- 02:37 PM Bug #52814: RGW services stop responding when datacenter down
- We found a "solution" to reactivate the RGW service. The only workaround was to remove every host that was do...
- 11:18 AM Bug #52877: rgw-multisite: Dynamic resharding fails to take effect, even when rgw_max_objs_per_sh...
- Update on the issue after 2 days:
1- On the primary site, the buckets have been resharded dynamically.
2- On the ...
- 08:31 AM Bug #49666: RGW crash due to PerfCounters::inc assert_condition during multisite syncing
- Setting up multisite on a formerly single-sited RADOSGW setup/cluster, we observed multiple RADOSGW crashes as well.
...
10/09/2021
- 07:46 AM Bug #52877: rgw-multisite: Dynamic resharding fails to take effect, even when rgw_max_objs_per_sh...
- Logs:
http://magna002.ceph.redhat.com/ceph-qe-logs/vidushi/upstream-testing/52877/master_zone
http://magna002.cep...
- 07:36 AM Bug #52877 (Resolved): rgw-multisite: Dynamic resharding fails to take effect, even when rgw_max_...
- On a multi-site cluster, dynamic resharding fails to take effect, even when the rgw_max_objs_per_shard value is reach...
10/08/2021
- 01:38 PM Bug #52873: ERROR: failed to list reshard log entries, oid=reshard.0000000000 marker= (2) No such...
- ceph version 17.0.0-8051-g15b54dc9 (15b54dc9eaa835e95809e32e8ddf109d416320c9)
- 01:37 PM Bug #52873: ERROR: failed to list reshard log entries, oid=reshard.0000000000 marker= (2) No such...
- [ceph: root@magna017 /]# ceph versions
{
"mon": {
    "ceph version 17.0.0-8051-g15b54dc9 (15b54dc9eaa835e...
- 01:35 PM Bug #52873 (Resolved): ERROR: failed to list reshard log entries, oid=reshard.0000000000 marker= ...
- On doing a radosgw-admin reshard list on the slave site, the error is observed
ERROR: failed to list reshard log e...
10/06/2021
- 08:28 PM Bug #52837 (Fix Under Review): qa/rgw: restore ability to run radosgw_admin.py unit standalone--i...
- 08:22 PM Bug #52837 (Resolved): qa/rgw: restore ability to run radosgw_admin.py unit standalone--improved ...
- Some minor fixes, cleanups, and enhancements of radosgw_admin.py standalone mode. It uses the LocalRemote machinery of...
- 08:05 PM Bug #52647 (Fix Under Review): rgw/singleton failures: assert rgwlog['bucket'].find(bucket_name) ...
- 06:34 PM Feature #50611 (Fix Under Review): notification: add conf parameter to allow sending bucket notif...
- 04:59 PM Backport #51695: octopus: rgw: fail as expected when set-bucket-website attempted on a non-existe...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43424
merged
- 03:20 PM Bug #50141 (Resolved): consecutive complete-multipart-upload requests following the 1st (successf...
- 02:06 PM Fix #52818 (Fix Under Review): Make RGW transaction IDs less deterministic
- 01:11 PM Documentation #52830 (Resolved): rgw: document rgw_lc_debug_interval configuration option
- rgw_lc_debug_interval configuration option has no description ("desc") or long description ("long_desc") fields.
- 08:23 AM Backport #52827 (In Progress): nautilus: rgw: deleting a bucket results in infinite loop
- 08:05 AM Backport #52827 (Rejected): nautilus: rgw: deleting a bucket results in infinite loop
- https://github.com/ceph/ceph/pull/43432
- 08:00 AM Bug #49206 (Pending Backport): rgw: deleting a bucket results in infinite loop
- 07:43 AM Backport #49746 (Resolved): pacific: SSE-KMS vault transit: use transit correctly.
10/05/2021
- 04:28 PM Fix #52818 (Resolved): Make RGW transaction IDs less deterministic
- S3 API responses expose RGW transaction IDs in the `x-amz-request-id` header. The current format of these IDs is: 'tx...
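To illustrate the direction of this fix: an ID built only from a sequential counter is predictable, while appending random entropy makes consecutive IDs non-deterministic. A hypothetical Python sketch (only the 'tx' prefix comes from the report; the format and helper below are not RGW's actual implementation):

```python
import itertools
import secrets

# Hypothetical sketch: a purely sequential counter makes request IDs
# guessable, so append cryptographically random bytes to each ID.
_counter = itertools.count()

def make_transaction_id() -> str:
    seq = next(_counter)                 # still useful for log correlation
    entropy = secrets.token_hex(8)       # 64 bits of randomness
    return f"tx{seq:08x}-{entropy}"

a, b = make_transaction_id(), make_transaction_id()
assert a != b and a.startswith("tx") and b.startswith("tx")
```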
- 03:00 PM Backport #52350: pacific: rgw fix sts memory leak
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43348
merged
- 02:59 PM Backport #51803: pacific: notifications: zero size in case of delete marker creation
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42643
merged
- 01:35 PM Backport #51695 (In Progress): octopus: rgw: fail as expected when set-bucket-website attempted o...
- 01:35 PM Backport #51695 (New): octopus: rgw: fail as expected when set-bucket-website attempted on a non-...
- 01:32 PM Backport #51695 (In Progress): octopus: rgw: fail as expected when set-bucket-website attempted o...
- 08:25 AM Bug #52814 (New): RGW services stop responding when datacenter down
- We planned a stop of half of our cluster hosts to move them to a new datacenter, and right after, all the RGW services stop ...
- 08:08 AM Bug #52813 (In Progress): octopus:"radosgw-admin bucket limit check" has duplicate entries if buc...
- 05:25 AM Bug #52813 (Resolved): octopus:"radosgw-admin bucket limit check" has duplicate entries if bucket...
- The radosgw-admin bucket limit check command has a bug in octopus.
Since we do not clear the bucket list in RGWRa...
- 04:05 AM Bug #49206 (Resolved): rgw: deleting a bucket results in infinite loop
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:04 AM Bug #50115 (Resolved): S3 event message eTag misspelled
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:03 AM Bug #51206 (Resolved): Creating a role in another tenant seems to be possible
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:03 AM Bug #51249 (Resolved): rgw: when an already-deleted object is removed in a versioned bucket, an u...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:03 AM Bug #51347 (Resolved): valgrind use-after-free in RGWCopyObj::execute() after RGWObjectCtx::inval...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:59 AM Backport #51597 (Resolved): pacific: valgrind use-after-free in RGWCopyObj::execute() after RGWOb...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42949
m...
- 03:59 AM Backport #51511 (Resolved): pacific: notification: topic creation is failing when using v4 authen...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42947
m...
- 03:58 AM Backport #51350 (Resolved): pacific: notifications: notifications stop working after bucket reshard
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42946
m...
- 03:58 AM Backport #51045 (Resolved): pacific: S3 event message eTag misspelled
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42945
m...
- 03:57 AM Backport #52351 (Resolved): octopus: rgw fix sts memory leak
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43349
m...
- 03:56 AM Backport #51778 (Resolved): octopus: Creating a role in another tenant seems to be possible
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43270
m...
- 03:52 AM Backport #52052 (Resolved): octopus: rgw: when an already-deleted object is removed in a versione...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43273
m...
- 03:52 AM Backport #51330 (Resolved): octopus: rgw: deleting a bucket results in infinite loop
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43272
merged
- 03:52 AM Backport #51012 (Resolved): octopus: rgw: remove quota soft threshold
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43271
merged
10/04/2021
- 07:32 PM Backport #52351: octopus: rgw fix sts memory leak
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43349
merged
- 09:19 AM Bug #52333: beast frontend performance regressions
- adding details about the above flame graph generation methodology:...
- 08:10 AM Bug #52776: the bucket resharding time is too long, putting object is fail
- posting performance results of resharding a 50M-object bucket from 1 to 10224 shards on a vstart environment
summary of ...
10/03/2021
- 09:37 AM Bug #46127: amqp: allow for multiple exchanges on the same broker set
- one of the side effects of this issue is that if an exchange was misconfigured, it cannot be fixed unless the RGW res...
10/01/2021
- 11:38 AM Bug #52800 (Duplicate): OSD down make all the RGW crashed and cluster unavailable with production...
- This is the backtrace that I got. When an OSD goes down, it kills all the RGWs in the cluster.
What I can see is that the...
- 10:36 AM Bug #52799 (Duplicate): Segmentation Fault in radosgw-admin period update --commit
- On one of our Ceph clusters it is not possible to set up a basic radosgw/S3 configuration. The cluster was running fin...
09/30/2021
- 03:39 PM Bug #52333: beast frontend performance regressions
- as discussed in bugscrub, possible related boost git issue:
https://github.com/boostorg/beast/issues/2200 -- Boos...
- 02:39 PM Bug #52711 (Duplicate): Deleting a bucket with large MPU (1.4tb or more) object does not cleanup ...
- 02:35 PM Bug #52744 (Closed): rgw: crash during robust_notify
- This is not a bug; a stack trace is added to the logs.
- 02:33 PM Feature #52775: rgw:Bucket logging has an API, but it does not implement this function
- (feature: server access logging)
- 02:32 PM Feature #52775: rgw:Bucket logging has an API, but it does not implement this function
- correct, this is not implemented. changing from a bug report to a feature request
- 07:20 AM Feature #52775 (New): rgw:Bucket logging has an API, but it does not implement this function
- Bucket logging has an API, but the function is not implemented; the return value of calling the relevant API is null.
- 02:30 PM Bug #52776 (Need More Info): the bucket resharding time is too long, putting object is fail
- this sounds like a relatively small bucket to have such performance issues. are the index pools on ssd/nvme? how long...
- 07:24 AM Bug #52776 (Need More Info): the bucket resharding time is too long, putting object is fail
- there are 50 million objects in the bucket, and the bucket index needs to be resharded to 1024 shards, but the resharding time is too long...
- 02:26 PM Bug #52779 (Closed): Deleting an object without specifying the version ID is allowed on S3 Object...
- this is allowed by aws s3. please see https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html...
- 08:23 AM Bug #52779 (Closed): Deleting an object without specifying the version ID is allowed on S3 Object...
- Ceph Octopus : 15.2.14
In a bucket with locking activated, we are able to delete an object whose retention has not...
- 11:46 AM Bug #49075: Bucket synchronization works only after disable/enable bucket sync on the bucket. Onc...
- We are just syncing between two sites but ran into exactly the same issue:
Some buckets not fully synced (lots of ...
- 10:45 AM Backport #52789 (Resolved): pacific: rgw/sts: ABAC support in STS
- https://github.com/ceph/ceph/pull/47746
- 10:40 AM Bug #52788 (Resolved): rgw/sts: ABAC support in STS
- Add support for Attribute-Based Access Control (ABAC) in STS.
- 10:40 AM Backport #52787 (Resolved): pacific: segfault in rgw_log_op()
- 10:37 AM Bug #52530 (Pending Backport): segfault in rgw_log_op()
- This has to be backported after https://github.com/ceph/ceph/pull/41735
- 08:41 AM Backport #52785 (Resolved): pacific: rgw/sts: chunked upload fails using STS temp credentials gen...
- https://github.com/ceph/ceph/pull/44463
- 08:40 AM Backport #52784 (Resolved): pacific: Session policy evaluation incorrect for CreateBucket.
- https://github.com/ceph/ceph/pull/44476
- 08:38 AM Bug #49797 (Pending Backport): rgw/sts: chunked upload fails using STS temp credentials generated...
- 08:36 AM Backport #52783 (Resolved): pacific: Cannot perform server-side copy using STS credentials
- https://github.com/ceph/ceph/pull/43703
- 08:35 AM Backport #52782 (Resolved): pacific: add role information and auth type to ops log
- https://github.com/ceph/ceph/pull/43956
- 08:35 AM Bug #51598 (Pending Backport): Session policy evaluation incorrect for CreateBucket.
- 08:33 AM Bug #51152 (Pending Backport): add role information and auth type to ops log
- 08:20 AM Backport #52778 (Resolved): pacific: make fetching of certs while validating tokens, more generic.
- https://github.com/ceph/ceph/pull/44464
- 08:15 AM Bug #50721 (Pending Backport): make fetching of certs while validating tokens, more generic.