Activity
From 11/13/2021 to 12/12/2021
12/12/2021
- 04:29 PM Fix #53588 (Fix Under Review): allow specifying ssl certificate for radosgw-admin operations, for...
- add the ability to specify ssl certificate to be used by radosgw-admin when acting as https client to radosgw which f...
- 01:34 PM Bug #53585: RGW Garbage collector leads to slow ops and osd down when removing large object
- pouria dehghani wrote:
> After deleting one file(rgw object) more than 500GB size, we have some laggy pgs, slow ops ...
- 01:04 PM Bug #53585 (New): RGW Garbage collector leads to slow ops and osd down when removing large object
- After deleting one file(rgw object) more than 500GB size, we have some laggy pgs, slow ops and osd down on cluster wh...
12/10/2021
- 01:50 PM Backport #53581 (Rejected): pacific: [rfe] rgw: expose RADOS cluster_fsid in radosgw AdminOps API
- 01:50 PM Backport #53580 (Rejected): pacific: rgwlc: remove "9969" conditional blocks accidentally includ...
- 01:49 PM Bug #51432 (Pending Backport): [rfe] rgw: expose RADOS cluster_fsid in radosgw AdminOps API
- 01:46 PM Bug #53489 (Pending Backport): rgwlc: remove "9969" conditional blocks accidentally included wit...
12/09/2021
- 04:20 PM Backport #52074: octopus: rgw: user stats showing 0 value for "size_utilized" and "size_kb_utiliz...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44172
merged
- 04:19 PM Backport #53133: octopus: prefetch in rgw_file is done 3 times per read
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44170
merged
- 03:59 PM Bug #53325 (Fix Under Review): Test failures in ceph_test_cls_rgw_gc
- 03:31 PM Bug #53319 (Need More Info): RFE: rgw: Ceph command 'ceph node ls' doesn't report clients like RGW
- not sure where exactly this functionality belongs, but we do need a way to discover the rgw endpoints in the cluster,...
- 03:26 PM Bug #53354 (Need More Info): rgw_file: multi-thread concurrency requests lead to librgw.so crashed
- 03:22 PM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- i think it's worth discussing the removal of rgw_gc_processor_max_time altogether? i tend to think GC should drain it...
- 03:10 PM Backport #53565 (Duplicate): pacific: rgw pubsub list topics coredump
- Duplicate of https://tracker.ceph.com/issues/53518
- 03:10 PM Backport #53564 (Rejected): pacific: radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --pur...
- 03:10 PM Backport #53563 (Rejected): octopus: radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --pur...
- 03:07 PM Bug #53489 (Fix Under Review): rgwlc: remove "9969" conditional blocks accidentally included wit...
- 03:07 PM Bug #53497 (Pending Backport): rgw pubsub list topics coredump
- 03:05 PM Bug #53512 (Pending Backport): radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --purge-obj...
12/08/2021
- 07:34 PM Feature #53546 (Fix Under Review): rgw/beast: add max_header_size option with 16k default, up fro...
- 06:10 AM Backport #53518 (Resolved): pacific: pubsub: list topics, radosgw coredump
- https://github.com/ceph/ceph/pull/45476
- 06:05 AM Bug #53464 (Pending Backport): pubsub: list topics, radosgw coredump
12/07/2021
- 07:25 PM Backport #53515 (Rejected): pacific: notifications: allow empty configuration to delete all notif...
- 07:21 PM Feature #53040 (Pending Backport): notifications: allow empty configuration to delete all notific...
- 06:56 PM Backport #53213: octopus: rgw: cannot delete bucket
- J. Eric Ivancich wrote:
> pr: https://github.com/ceph/ceph/pull/43863
merged
- 05:00 PM Backport #53272: octopus: beast frontend performance regressions
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43961
merged
- 04:58 PM Backport #52959: octopus: Make RGW transaction IDs less deterministic
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43696
merged
- 03:26 PM Bug #53512 (Pending Backport): radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --purge-obj...
- radosgw-admin bucket rm --bucket=testbucket1 --bypass-gc --purge-objects crashes
Steps to Reproduce:
1. Create a ...
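The truncated reproduction steps above can be sketched roughly as follows; the bucket/user names, the S3 endpoint, and the use of the aws CLI (which would need the user's access keys configured first) are illustrative assumptions, not details from the report:

```shell
# 1. Create a user and a bucket, and upload at least one object
radosgw-admin user create --uid=testuser --display-name="Test User"
aws --endpoint-url http://localhost:8000 s3 mb s3://testbucket1
aws --endpoint-url http://localhost:8000 s3 cp ./somefile s3://testbucket1/obj1

# 2. Delete the bucket, bypassing garbage collection and purging its objects;
#    this is the command reported to crash
radosgw-admin bucket rm --bucket=testbucket1 --bypass-gc --purge-objects
```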
12/06/2021
- 06:37 PM Bug #53497 (Fix Under Review): rgw pubsub list topics coredump
- 01:15 AM Bug #53497: rgw pubsub list topics coredump
- same with https://tracker.ceph.com/issues/53464
https://github.com/ceph/ceph/pull/44186
- 01:13 AM Bug #53497 (Pending Backport): rgw pubsub list topics coredump
- [root@node106 ceph]# ceph crash info 2021-12-03T03:15:40.323370Z_0c0cf32a-1e95-4369-a2c9-4cd34bed0d7b
{
"backtr...
- 03:00 PM Bug #53423: Calling list_buckets after assuming a role lists all my buckets, not their buckets
- Sam Mesterton-Gibbons wrote:
> We're currently investigating whether we can test it with local rgw users.
I've no...
- 09:00 AM Bug #53464 (Fix Under Review): pubsub: list topics, radosgw coredump
- 12:49 AM Bug #53464: pubsub: list topics, radosgw coredump
- I fixed it by this PR https://github.com/ceph/ceph/pull/44186
12/04/2021
- 01:05 PM Bug #53457: Stale bucket index entry remains after object deletion
- I found some stale bucket indexes in my project. Otherwise, I can't find any other reason for this problem
12/03/2021
- 05:25 PM Backport #53490 (Rejected): pacific: rgwlc: optionally process lifecycle for a single bucket
- 05:23 PM Bug #53489 (Pending Backport): rgwlc: remove "9969" conditional blocks accidentally included wit...
- This is a cleanup only, applies to Pacific.
- 05:22 PM Bug #53430 (Pending Backport): rgwlc: optionally process lifecycle for a single bucket
- 11:41 AM Bug #53484 (Need More Info): swift: AccessDenied on private container
- Hello!
A problem has arisen: when connecting to the storage, we receive an "Operation forbidden" error from swift ...
- 08:29 AM Bug #52776: the bucket resharding time is too long, putting object is fail
- as @Mark Kogan said, elapsed time was ~6 minutes on an SSD system (resharding a 50M-object bucket from 1 to 10224 shards), al...
- 12:52 AM Bug #53354: rgw_file: multi-thread concurrency requests lead to librgw.so crashed
- Thanks for your reply. After applying the PR, the crash still occurs. I will provide a fix to solve this.
12/02/2021
- 10:36 PM rgw-testing Bug #53478 (New): s3-tests: AttributeError: module 'collections' has no attribute 'Callable'
- Trying to run s3-tests and this happens:...
- 03:45 PM Backport #53471 (Resolved): pacific: cohort::lru may unlock twice
- https://github.com/ceph/ceph/pull/45464
- 03:45 PM Backport #53470 (Resolved): octopus: cohort::lru may unlock twice
- https://github.com/ceph/ceph/pull/45465
- 03:41 PM Bug #53469 (Resolved): cohort::lru may unlock twice
- 03:22 PM Bug #53354: rgw_file: multi-thread concurrency requests lead to librgw.so crashed
- there was a fix to cohort lru in https://github.com/ceph/ceph/pull/43563 that may be related
- 03:17 PM Bug #53369 (Closed): pubsub:postman create a topic, NoSuchBucket
- 12:27 AM Bug #53369: pubsub:postman create a topic, NoSuchBucket
- Because I was sending requests to a non-pubsub zone, it's my mistake. Closing it.
- 03:16 PM Bug #53384 (Triaged): tail objects that have already been garbage collected remain in the gc queu...
- 03:16 PM Bug #53423 (Triaged): Calling list_buckets after assuming a role lists all my buckets, not their ...
- 03:11 PM Bug #53431 (Fix Under Review): When using radosgw-admin to create a user, when the uid is empty, ...
- 03:07 PM Bug #53457 (Need More Info): Stale bucket index entry remains after object deletion
- in step 5, the pending operations shouldn't expire for 120 seconds. did the delete really take that long to process?
- 03:13 AM Bug #53457 (Need More Info): Stale bucket index entry remains after object deletion
- I found a special sequence that may cause this new situation:
1. requests an object be deleted, prepares the op, adding pend...
- 03:07 PM Bug #53429 (Fix Under Review): rgw: "reshard cancel" errors with "invalid argument"
- 10:27 AM Backport #53465 (Rejected): pacific: notification: awsRegion not set to zonegroup
- 10:22 AM Bug #46169 (Pending Backport): notification: awsRegion not set to zonegroup
- 10:20 AM Bug #53464 (Resolved): pubsub: list topics, radosgw coredump
- When using postman to send the following request to list topics associated with a tenant, radosgw coredumps.
GET /topics
[ro...
12/01/2021
- 11:56 PM Feature #53455 (Pending Backport): [RFE] Ill-formatted JSON response from RGW
- Requesting returned JSON output be properly formatted.
Use case: accessing a world readable bucket using curl (no...
- 10:45 PM Bug #52590: "[ FAILED ] CmpOmap.cmp_vals_u64_invalid_default" in upgrade:pacific-p2p-pacific
- Neha Ojha wrote:
> I think the problem is that the fix for https://tracker.ceph.com/issues/52330 merged in 16.2.6 an...
- 10:28 PM Bug #46567: Access denied for multi-object-delete by non-bucket-owner
- Sorry, I may have replied too quickly to this ticket. I just checked https://github.com/ceph/ceph/pull/37933 and it ma...
- 10:14 PM Bug #46567: Access denied for multi-object-delete by non-bucket-owner
- Hi, it looks like I hit this bug(?) too.
In my case, I tested 3 clients: s3browser, cyberduck, and s3cmd, and the er...
- 08:11 PM Backport #52074 (In Progress): octopus: rgw: user stats showing 0 value for "size_utilized" and "...
- 08:08 PM Backport #52073 (In Progress): pacific: rgw: user stats showing 0 value for "size_utilized" and "...
- 07:59 PM Backport #53133 (In Progress): octopus: prefetch in rgw_file is done 3 times per read
- 07:44 PM Backport #53290 (In Progress): octopus: rgw: fix bi put not using right bucket index shard
- 07:40 PM Backport #53289 (In Progress): pacific: rgw: fix bi put not using right bucket index shard
- 05:53 PM Bug #53423: Calling list_buckets after assuming a role lists all my buckets, not their buckets
- Pritha Srivastava wrote:
> from the above statement I understood that assumerole works fine for local rgw users, or ...
- 04:31 PM Bug #53423: Calling list_buckets after assuming a role lists all my buckets, not their buckets
- "Otherwise assuming a role works as expected - for example calling `list_objects_v2` on a bucket lists objects in buc...
- 11:28 AM Bug #53423: Calling list_buckets after assuming a role lists all my buckets, not their buckets
- Pritha Srivastava wrote:
> The AssumeRole functionality has been tested for local RGW users and not for external Ope...
11/30/2021
- 04:03 PM Bug #53367: Log S3 access key ID in ops logs
- looks sane, Cory
- 02:56 PM Bug #53367: Log S3 access key ID in ops logs
- matt: I've taken a close look at each of the existing auth engines and here is my summary of what each does and what ...
- 05:54 AM Bug #53431: When using radosgw-admin to create a user, when the uid is empty, the error message i...
- https://github.com/ceph/ceph/pull/44142
- 02:02 AM Bug #53431 (Fix Under Review): When using radosgw-admin to create a user, when the uid is empty, ...
- When creating a user with an empty --uid, the error message is "user.init failed: (13) Permission denied", but a mor...
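A minimal way to trigger the reported message, assuming a running radosgw; the quoted output is taken from this report, not independently verified:

```shell
# Creating a user with an empty uid produces a misleading error:
radosgw-admin user create --uid="" --display-name="Test User"
# reported: "user.init failed: (13) Permission denied"
# a clearer message would point out that a non-empty --uid is required
```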
- 05:25 AM Bug #53423: Calling list_buckets after assuming a role lists all my buckets, not their buckets
- The AssumeRole functionality has been tested for local RGW users and not for external Openstack Keystone users. Also ...
- 02:46 AM Bug #53430 (Fix Under Review): rgwlc: optionally process lifecycle for a single bucket
- 01:31 AM Bug #53430 (Pending Backport): rgwlc: optionally process lifecycle for a single bucket
- Permit a --bucket option to be passed to radosgw-admin lc process, and propagate the bucket name to lifecycle process...
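The proposed usage can be sketched as follows (the bucket name is illustrative):

```shell
# run lifecycle processing for one bucket only, rather than all buckets
radosgw-admin lc process --bucket=mybucket
```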
11/29/2021
- 10:56 PM Bug #53429 (Resolved): rgw: "reshard cancel" errors with "invalid argument"
- When radosgw-admin reshard cancel ... was used on a bucket that is
not currently undergoing resharding, it would err...
- 02:45 PM Bug #53423: Calling list_buckets after assuming a role lists all my buckets, not their buckets
- Another, possibly related oddity: If I try and create a bucket using the assumed role I get a TooManyBuckets client e...
- 11:53 AM Bug #53423 (Triaged): Calling list_buckets after assuming a role lists all my buckets, not their ...
- Hi,
I've been testing out the AssumeRole example in https://docs.ceph.com/en/latest/radosgw/STS/#examples and it s...
- 11:15 AM Backport #53291: pacific: rgw: investigate conditional governing cleaning of incomplete multipart...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43975
m...
- 10:54 AM Bug #53369: pubsub:postman create a topic, NoSuchBucket
- [root@node106 ~]# ceph -v
ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
has the...
- 10:53 AM Bug #53370: pubsub:list user's topic, ERROR: could not get topics: (2) No such file or directory
- [root@node106 ~]# ceph -v
ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
has same ...
11/26/2021
- 09:15 AM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- fwiw, 12 hours after setting max_time and period on all rgw instances to 1hr, the gc queue is empty. thanks again.
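The tuning described above can be sketched with the standard RGW GC options; whether a daemon restart is needed for the change to take effect is an assumption worth verifying:

```shell
# set the GC processor max_time and period to 1 hour (3600 seconds)
# on all RGW instances, as the commenter did, so each GC cycle can
# run to completion and the queue can drain
ceph config set client.rgw rgw_gc_processor_max_time 3600
ceph config set client.rgw rgw_gc_processor_period 3600
# restart the radosgw daemons to pick up the new values
```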
11/25/2021
- 12:38 PM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- I will discuss this with the team and look into it.
- 12:28 PM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- I understand. however, as stated, the current behaviour is a bug as it will result in the gc queue never draining if ...
- 10:39 AM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- In Octopus we have moved to offloading gc data from omap to a queue. The mechanism of deleting data from queue is dif...
- 09:59 AM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- thanks for the insight, pritha.
I suspected that large deletes taking over 5 minutes could be related...
- 05:36 AM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- I checked the logs for shard 109 since you have given that as an example for entries that never get cleared from the ...
- 09:29 AM Bug #53369: pubsub:postman create a topic, NoSuchBucket
- If I create bucket topics, it will putobj, creating an obj in the topics bucket
11/24/2021
- 04:03 PM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- attaching complete log files - save the ones that contain HTTP_X_AUTH_TOKEN - of said rgw instance for some 40 minut...
- 03:39 PM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- Hi Jaka,
Can you please attach more logs, relevant to shard 109 (garbage collection: RGWGC::process entered with G...
- 02:21 PM Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- explicitly set rgw-related config options:
rgw_override_bucket_index_max_shards = 16
rgw_max_put_size = 109951162...
- 02:13 PM Bug #53384 (Triaged): tail objects that have already been garbage collected remain in the gc queu...
- running an octopus cluster (upgraded from nautilus a few months ago) of some 0.5PB capacity. it is used exclusively a...
- 08:50 AM Bug #53370: pubsub:list user's topic, ERROR: could not get topics: (2) No such file or directory
- Are the radosgw-admin APIs deprecated?
- 01:24 AM Bug #53226: `radosgw-admin reshard stale-instances rm` can't remove stale-indexs which have no co...
- how about merging this for backport to nautilus?
11/23/2021
- 04:06 PM Bug #53367: Log S3 access key ID in ops logs
- I agree, Matt. I'll take a closer look at the other auth strategies and try to determine what makes sense to log for ...
- 03:16 PM Bug #53367: Log S3 access key ID in ops logs
- (It would be nice to ensure that identity is logged appropriately for other auth strategies, too--we already have inf...
- 02:38 AM Bug #53370: pubsub:list user's topic, ERROR: could not get topics: (2) No such file or directory
- [root@node106 ceph]# ceph -v
ceph version 17.0.0-9019-g02348afc (02348afc735344c38ea2a3e6c1a291881b273696) quincy (d...
- 02:27 AM Bug #53370 (Resolved): pubsub:list user's topic, ERROR: could not get topics: (2) No such file or...
- [root@node106 ~]# radosgw-admin topic list --uid=test
ERROR: could not get topics: (2) No such file or directory
- 02:25 AM Bug #53369 (Closed): pubsub:postman create a topic, NoSuchBucket
- I use ceph rgw pubsub sync module create a topic, notice NoSuchBucket
postman post:
192.168.56.106:8080/top...
11/22/2021
- 08:47 PM Bug #53367 (Resolved): Log S3 access key ID in ops logs
- One use case for allowing multiple sets of S3 keys per RGW user account is to provide individual credentials for diff...
- 06:49 PM Bug #51574: Segfault when uploading file
- I was able to reproduce the bug on a blank new ceph using the `ceph/daemon` demo mode. A bucket policy is needed to t...
- 05:38 PM Bug #51574: Segfault when uploading file
- The issue is still happening on our cluster with 16.2.6.
I am still working on reproducing it on a clean demo syst...
- 05:40 PM Bug #53361 (Pending Backport): rgwlc: permit lifecycle rules to be marked as running only in Arch...
- Proposed approach is to introduce a flags extension to RGW's lifecycle configuration, that will be provided as a new ...
- 06:35 AM Bug #53354: rgw_file: multi-thread concurrency requests lead to librgw.so crashed
- An RGWFileHandle is being unref'd (rgw::RGWLibFS::unref) while it may be used by another operation at the same time.
when lru ...
- 06:18 AM Bug #53354 (Need More Info): rgw_file: multi-thread concurrency requests lead to librgw.so crashed
- unpredictable crash stack
1,
[librgw.so.2+0x21952c] cohort::lru::LRU<std::mutex>::unref(cohort::lru::Object*, uns...
11/19/2021
- 04:35 PM Bug #53341 (In Progress): qa/rgw: allow local runs of radosgw_admin_rest.py
- 04:12 PM Documentation #53337: document rgw_max_chunk_size
- https://tracker.ceph.com/issues/6999
https://github.com/ceph/ceph/commit/4aa102f911db7e6928f80c421a32ac8f9f283c6e
- 12:37 PM Documentation #53337 (Resolved): document rgw_max_chunk_size
- Please add rgw_max_chunk_size to the CEPH OBJECT GATEWAY CONFIG REFERENCE
https://docs.ceph.com/en/latest/radosgw/...
- 03:50 PM Bug #53222 (Resolved): rgw: investigate conditional governing cleaning of incomplete multipart up...
- 03:50 PM Backport #53291 (Resolved): pacific: rgw: investigate conditional governing cleaning of incomplet...
- 11:18 AM Bug #53325: Test failures in ceph_test_cls_rgw_gc
- The test fails because of code that was commented out from defer gc (related to a data loss bug last year). This test...
11/18/2021
- 08:09 PM Bug #53325 (Resolved): Test failures in ceph_test_cls_rgw_gc
- It looks like this test was left out of teuthology by mistake and something broke somewhere along the line:
On a v...
- 07:44 PM Bug #51574: Segfault when uploading file
- I apologize for my delayed response. I'll try the latest images again and check if they still fail on our existing cl...
- 03:21 PM Bug #51574 (Need More Info): Segfault when uploading file
- 05:15 PM Bug #51317 (Closed): Objects not synced if reshard is done while sync is happening in Multisite
- 05:15 PM Bug #51420 (Closed): radosgw-admin core dumps on "bucket sync status"
- 05:14 PM Bug #51427 (Closed): Multisite sync stuck if reshard is done while bucket sync is disabled
- 05:14 PM Bug #51461 (Closed): Unable to read bucket stats post reshard from Multisite primary
- 05:14 PM Bug #51486 (Closed): Incorrect stats on versioned bucket on multisite
- 05:14 PM Bug #51595 (Closed): Incremental sync fails to complete post reshard on a bucket ownership changed.
- 05:13 PM Bug #51827 (Closed): existing objects are deleted while I/o is running with bucket reshard execut...
- 05:13 PM Bug #52684 (Closed): Observing sync inconsistencies on a bucket that has been resharded.
- 05:13 PM Bug #52896 (Closed): rgw-multisite: Dynamic resharding take too long to take effect
- 05:13 PM Bug #52917 (Closed): rgw-multisite: bucket sync checkpoint for a bucket lists out very high valu...
- 03:30 PM Bug #52379 (Closed): cannot Delete objects on s3 ceph bucket
- 03:28 PM Bug #52379: cannot Delete objects on s3 ceph bucket
- The haproxy that was in use was blocking HTTP DELETE requests. Nothing was wrong with Ceph.
- 03:26 PM Bug #52379: cannot Delete objects on s3 ceph bucket
- This looks like a bucket policy attached to bucket 'elk'. The Principal ARN needs to be that of the user gitlab-s3-ap...
- 03:17 PM Bug #52379: cannot Delete objects on s3 ceph bucket
- Would you please close this issue? It was something with haproxy.
- 03:15 PM Bug #52379: cannot Delete objects on s3 ceph bucket
- @Pritha can you please review the bucket policy?
- 03:22 PM Bug #51538 (Closed): multisite sync not syncing any data via HTTPS endpoints
- 03:16 PM Bug #52363 (New): rgw: fix the Content-Length in response header is inconsistent with response bo...
- 03:10 PM Bug #52776: the bucket resharding time is too long, putting object is fail
- @Huber Ming -- Are you able to provide the "need more info"?
- 03:08 PM Bug #53172: after user created, stating exists user, noticing "User has not been initialized or ...
- Proper fix appears to be here: https://github.com/ceph/ceph/pull/43915
- 03:07 PM Bug #52837 (Resolved): qa/rgw: restore ability to run radosgw_admin.py unit standalone--improved ...
- 03:03 PM Bug #53319 (Need More Info): RFE: rgw: Ceph command 'ceph node ls' doesn't report clients like RGW
- We pull client versions with the 'ceph versions' command. Could we look at doing something similar with the 'ceph nod...
- 02:23 PM Bug #53220 (Fix Under Review): notification tests failing: 'find /home/ubuntu/cephtest -ls ; rmdi...
- 10:29 AM Feature #53040 (Fix Under Review): notifications: allow empty configuration to delete all notific...
11/17/2021
- 05:13 PM Backport #51701: pacific: S3 CopyObject: failed to parse copy location
- ping?
- 05:13 PM Backport #51700: octopus: S3 CopyObject: failed to parse copy location
- ping?
- 08:49 AM Bug #48001: Brocken SwiftAPI anonymous access
- I've created a 2nd PR to backport this to pacific:
https://github.com/ceph/ceph/pull/43984
11/16/2021
- 11:23 PM Backport #53291 (In Progress): pacific: rgw: investigate conditional governing cleaning of incomp...
- 04:55 PM Backport #53291 (Resolved): pacific: rgw: investigate conditional governing cleaning of incomplet...
- https://github.com/ceph/ceph/pull/43975
- 04:50 PM Backport #53290 (Resolved): octopus: rgw: fix bi put not using right bucket index shard
- https://github.com/ceph/ceph/pull/44167
- 04:50 PM Backport #53289 (Resolved): pacific: rgw: fix bi put not using right bucket index shard
- https://github.com/ceph/ceph/pull/44166
- 04:50 PM Bug #53222 (Pending Backport): rgw: investigate conditional governing cleaning of incomplete mult...
- 04:48 PM Bug #53248 (Pending Backport): rgw: fix bi put not using right bucket index shard
- 03:30 PM Backport #53286 (Rejected): octopus: bucket stats inconsistency from racing multipart completions
- 03:30 PM Backport #53285 (Resolved): pacific: bucket stats inconsistency from racing multipart completions
- 03:29 PM Bug #53199 (Pending Backport): bucket stats inconsistency from racing multipart completions
- 03:28 PM Backport #53145 (In Progress): pacific: Performance regression on rgw/s3 copy operation
- 03:26 PM Backport #53256 (In Progress): pacific: rgw nfs export at user-level crash on readdir
- 03:23 PM Backport #53224 (In Progress): octopus: tempest dependency issue with 'PrettyTable'
- 03:22 PM Backport #53225 (In Progress): pacific: tempest dependency issue with 'PrettyTable'
- 03:08 PM Backport #53099 (In Progress): octopus: rgw/crypt s3tests with vault: Failed to retrieve the actu...
- 03:00 PM Backport #53272 (In Progress): octopus: beast frontend performance regressions
- 02:51 PM Bug #53119: RadosGW sends X-Auth-Token instead of Authorization header to Open Policy Agent
- I'm not sure about that. The commit that introduced OPA integration is https://github.com/ceph/ceph/commit/631a036a6b...
- 02:38 PM Backport #52782 (In Progress): pacific: add role information and auth type to ops log
- 02:23 PM Backport #53098 (In Progress): pacific: rgw/crypt s3tests with vault: Failed to retrieve the actu...
- 02:20 PM Backport #53271 (In Progress): pacific: beast frontend performance regressions
- 07:29 AM Bug #48752 (Resolved): [rfe] generalize ops log output channel for gelf and/or syslog targets
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:27 AM Bug #51491 (Resolved): "ERROR: s3tests.functional.test_s3_website.test_website_xredirect_private_...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:22 AM Backport #52783 (Resolved): pacific: Cannot perform server-side copy using STS credentials
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43703
m...
- 07:22 AM Backport #52468 (Resolved): pacific: "ERROR: s3tests.functional.test_s3_website.test_website_xred...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43777
m... - 07:20 AM Backport #53091 (Resolved): pacific: [rfe] generalize ops log output channel for gelf and/or sysl...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43740
m...
- 07:20 AM Backport #52960 (Resolved): pacific: Make RGW transaction IDs less deterministic
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43695
m...
11/15/2021
- 06:36 PM Bug #53222 (Fix Under Review): rgw: investigate conditional governing cleaning of incomplete mult...
- 06:35 PM Backport #53272 (Resolved): octopus: beast frontend performance regressions
- https://github.com/ceph/ceph/pull/43961
- 06:35 PM Backport #53271 (Resolved): pacific: beast frontend performance regressions
- https://github.com/ceph/ceph/pull/43946
- 06:32 PM Bug #52333 (Pending Backport): beast frontend performance regressions
- 03:32 PM Bug #50977: s3select: empty file select failed
- the following PR
https://github.com/ceph/ceph/pull/42416
will handle the empty-size case (combined with more features)