Activity
From 10/23/2018 to 11/21/2018
11/21/2018
- 02:22 PM Bug #37352: get or set rgw realm zonegroup zone should check user's caps for security
- 12:16 AM Bug #37352: get or set rgw realm zonegroup zone should check user's caps for security
- https://github.com/ceph/ceph/pull/25178 fixes this
- 12:08 AM Bug #37352 (Resolved): get or set rgw realm zonegroup zone should check user's caps for security
- In the current rgw multisite version, we can get realm/period/zonegroup/zone info without a user caps check; we should chec...
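The report above asks for an admin-caps gate in front of the realm/zonegroup/zone read and write operations. A minimal sketch of such a gate, with hypothetical helper names (not the actual RGW code), could look like:

```python
# Hypothetical sketch of gating admin reads of zonegroup info on user caps.
# `user_caps` maps a cap type (e.g. "zone") to a grant string such as
# "read", "write", "read,write", or "*".
def check_caps(user_caps, cap_type, perm):
    """Return True if user_caps grants `perm` on `cap_type`."""
    granted = user_caps.get(cap_type, "")
    return granted == "*" or perm in granted.split(",")

def get_zonegroup_info(user_caps, zonegroup_name):
    # Reject callers lacking zone=read caps instead of serving the config.
    if not check_caps(user_caps, "zone", "read"):
        return (403, "AccessDenied")
    return (200, {"name": zonegroup_name})
```

The point of the fix is simply that the handler returns 403 before touching any zonegroup state when the requesting user carries no matching cap.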
- 06:45 AM Backport #37285 (In Progress): mimic: rgw: radosgw-admin: reshard status prints status codes as e...
- https://github.com/ceph/ceph/pull/25198
- 05:44 AM Backport #37284 (In Progress): luminous: rgw: radosgw-admin: reshard status prints status codes a...
- https://github.com/ceph/ceph/pull/25195
11/20/2018
- 10:01 PM Bug #19514: rgw: error debug message in rgw_build_bucket_policies() is showed even on success
- Radoslaw Zarzynski wrote:
> PR: https://github.com/ceph/ceph/pull/14369.
merged
- 10:01 PM Feature #20795: rgw: the TempURL implementation should support ISO8601 in temp_url_expires
- Radoslaw Zarzynski wrote:
> https://github.com/ceph/ceph/pull/16658
merged
- 09:56 PM Bug #37350 (Duplicate): segfault in rgw::putobj::RadosWriter::process()
- only seen this once in the rgw_bucket_quota.pl workunit test:...
- 08:19 PM Backport #36687 (Resolved): mimic: lock in resharding may expires before the dynamic resharding c...
- 05:12 PM Backport #36687: mimic: lock in resharding may expires before the dynamic resharding completes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24899
merged
- 08:18 PM Bug #24505 (Resolved): radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- 08:18 PM Bug #36496 (Resolved): cls_user_remove_bucket does not write the modified cls_user_stats
- 08:17 PM Backport #36223 (Resolved): mimic: rgw: default quota not set in radosgw for Openstack users
- 05:12 PM Backport #36223: mimic: rgw: default quota not set in radosgw for Openstack users
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24907
merged
- 08:17 PM Backport #36415 (Resolved): mimic: librgw: crashes in multisite configuration
- 05:11 PM Backport #36415: mimic: librgw: crashes in multisite configuration
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/24908
merged
- 08:17 PM Backport #36533 (Resolved): mimic: cls_user_remove_bucket does not write the modified cls_user_stats
- 05:11 PM Backport #36533: mimic: cls_user_remove_bucket does not write the modified cls_user_stats
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24910
merged
- 08:16 PM Backport #36535 (Resolved): mimic: radosgw-admin user stats are incorrect when dynamic re-shardin...
- 05:10 PM Backport #36535: mimic: radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24911
merged
- 08:16 PM Backport #36539 (Resolved): mimic: use-after-free from RGWRadosGetOmapKeysCR::~RGWRadosGetOmapKeysCR
- 05:10 PM Backport #36539: mimic: use-after-free from RGWRadosGetOmapKeysCR::~RGWRadosGetOmapKeysCR
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24912
merged
- 08:08 PM Backport #36756 (Resolved): mimic: rgw-admin: reshard add can add a non existant bucket
- 05:08 PM Backport #36756: mimic: rgw-admin: reshard add can add a non existant bucket
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25087
merged
- 07:55 PM Backport #37349 (Resolved): luminous: when using nfs-ganesha to upload file, rgw es sync module g...
- https://github.com/ceph/ceph/pull/25444
- 07:55 PM Backport #37348 (Resolved): mimic: when using nfs-ganesha to upload file, rgw es sync module get ...
- https://github.com/ceph/ceph/pull/27972
- 07:53 PM Bug #37339: rgw: resharding leaves old bucket info objs and index shards behind
- PR: https://github.com/ceph/ceph/pull/25142
- 07:52 PM Bug #37339 (Resolved): rgw: resharding leaves old bucket info objs and index shards behind
- The resharding process does not delete formerly used bucket info objects or index shards. Additionally, should a resh...
- 07:52 PM Bug #36233 (Pending Backport): when using nfs-ganesha to upload file, rgw es sync module get failed
- 06:12 PM Backport #36734 (Resolved): mimic: beast frontend fails to parse ipv6 endpoints
- 05:09 PM Backport #36734: mimic: beast frontend fails to parse ipv6 endpoints
- Casey Bodley wrote:
> https://github.com/ceph/ceph/pull/25079
merged
11/19/2018
- 04:45 PM Bug #36233: when using nfs-ganesha to upload file, rgw es sync module get failed
- Abhishek Lekshmanan wrote:
> updated pr: https://github.com/ceph/ceph/pull/24492
merged
- 04:43 PM Bug #36654: librgw not check user max objects
- Tao CHEN wrote:
> Here is my patch:
> https://github.com/ceph/ceph/pull/24846
merged
- 05:16 AM Bug #36582: `radosgw-admin bucket limit check` does not show all buckets
- Abhishek Lekshmanan wrote:
> does radosgw-admin bucket list after the limit check still show the 56 bucket, what doe...
11/16/2018
- 08:27 PM Bug #23817: Bucket policy and colons in filename
- (I deleted the backport issues for now since they weren't properly formed and the master PR hasn't merged yet. Thanks!)
- 08:26 PM Bug #23817: Bucket policy and colons in filename
- @Adam - the general idea is to wait until the master PR is merged, and then:
1. fill in the Backport field (e.g. "...
- 08:23 PM Bug #23817: Bucket policy and colons in filename
- @Adam: next time, please check out src/script/backport-create-issue for your backport issue creation needs - thanks!
- 07:47 PM Bug #23817 (Fix Under Review): Bucket policy and colons in filename
- Fix in: https://github.com/ceph/ceph/pull/25145
- 04:07 PM Bug #24562: Tail tag should be different when copy object data
- zhang sw wrote:
> https://github.com/ceph/ceph/pull/22613
merged
- 11:32 AM Backport #37285 (Resolved): mimic: rgw: radosgw-admin: reshard status prints status codes as enum...
- https://github.com/ceph/ceph/pull/25198
- 11:32 AM Backport #37284 (Resolved): luminous: rgw: radosgw-admin: reshard status prints status codes as e...
- https://github.com/ceph/ceph/pull/25195
- 06:25 AM Support #37157: how to use "RGW_ACCESS_KEY_ID" with S3/swift for AD user ?
> 3. /etc/bindpass contains a password for the binddn, which is the ldap credential RGW will use to search ldap
> ...
- 12:18 AM Bug #37281 (Resolved): rgw: multisite: bucket sync fails to read sync state
- happens on current master. Regression introduced in: fc860a367bc59d4c70e7e80f021235864c5da08b
Problem is that buck...
11/15/2018
- 08:30 PM Bug #36486 (Pending Backport): rgw: radosgw-admin: reshard status prints status codes as enum val...
- 07:14 PM Bug #36582: `radosgw-admin bucket limit check` does not show all buckets
- does radosgw-admin bucket list after the limit check still show the 56 bucket, what does the bucket stats --bucket {t...
- 07:02 PM Bug #36654: librgw not check user max objects
- 07:02 PM Bug #36665 (Resolved): rgw: remove rgw_aclparser.cc
- 06:59 PM Bug #36670: rgw: es: fix doc dump etag
- 06:58 PM Bug #36671 (Fix Under Review): rgw: RGWHTTP::send support send_data to fix es search
- 06:56 PM Bug #36763 (Fix Under Review): rgw: set null version object issues
- 06:56 PM Bug #36761 (Duplicate): rgw: set null version object issues
- http://tracker.ceph.com/issues/36763
- 06:56 PM Bug #36762 (Duplicate): rgw: set null version object issues
- tracked at http://tracker.ceph.com/issues/36763
- 06:54 PM Bug #36765 (Fix Under Review): OpenStack integration documentation is lacking critical informatio...
- https://github.com/ceph/ceph/pull/25056
- 06:53 PM Bug #37270 (Closed): radosgw response 500 error while putting object
- 06:53 PM Bug #37270: radosgw response 500 error while putting object
- ...
- 09:21 AM Bug #37270 (Closed): radosgw response 500 error while putting object
- My env is :
[tanweijie@gz-ceph-52-203 ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[tanweijie@...
- 06:46 PM Bug #26965: Compliance to aws s3's relaxed query handling behaviour
- 06:29 AM Support #37157: how to use "RGW_ACCESS_KEY_ID" with S3/swift for AD user ?
- Hi Benjamin,
1. setting the user to be the token transfers the bundle unmodified to RGW--when the S3 signing metho...
- 02:10 AM Support #37157: how to use "RGW_ACCESS_KEY_ID" with S3/swift for AD user ?
Add more details in my test, get error "403 Forbidden" :
[testobj@obj_gtway01 ~]$ cat /etc/ceph/ceph.conf
[glob...
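The LDAP thread above hinges on passing RGW an encoded token bundle as the S3 access key. A sketch of constructing such a token (the JSON shape follows the Ceph LDAP auth docs, but treat the exact fields as an assumption; the supported path is the `radosgw-token` tool):

```python
# Build a base64-encoded RGW LDAP token from an AD/LDAP uid and password.
import base64
import json

def make_ldap_token(uid: str, password: str) -> str:
    token = {"RGW_TOKEN": {"version": 1, "type": "ldap",
                           "id": uid, "key": password}}
    # The resulting string is what an S3 client would present as its
    # access key id (with an empty/ignored secret key).
    return base64.b64encode(json.dumps(token).encode()).decode()
```

A 403 Forbidden like the one reported typically means RGW could not bind or search with the decoded credentials, so verifying the decoded token contents is a useful first debugging step.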
11/14/2018
- 09:48 PM Support #37157 (New): how to use "RGW_ACCESS_KEY_ID" with S3/swift for AD user ?
- ceph --version
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
Cluster status is he...
- 12:04 PM Backport #36757 (In Progress): luminous: rgw-admin: reshard add can add a non existant bucket
- 11:51 AM Backport #36756 (In Progress): mimic: rgw-admin: reshard add can add a non existant bucket
11/13/2018
- 10:13 PM Bug #37091 (Fix Under Review): multisite: bilog trimming crashes when pgnls fails with EINVAL
- https://github.com/ceph/ceph/pull/25081
- 08:03 PM Bug #37091 (Resolved): multisite: bilog trimming crashes when pgnls fails with EINVAL
- ...
- 03:58 PM Backport #36734 (In Progress): mimic: beast frontend fails to parse ipv6 endpoints
- 10:12 AM Feature #37073: rgw: add list user admin OP API
- Reference the pull request here: https://github.com/ceph/ceph/pull/25073
- 08:41 AM Feature #37073 (Resolved): rgw: add list user admin OP API
- The radosgw-admin tool supports the `user list` subcommand to list radosgw users, but there is no user listing functio...
- 09:54 AM Backport #36645 (Resolved): mimic: SSE encryption does not detect ssl termination in proxy
- 09:51 AM Bug #36706 (Duplicate): Ceph ECBackend: assert fail at PGLog::trim
11/12/2018
- 05:13 PM Backport #36645: mimic: SSE encryption does not detect ssl termination in proxy
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24931
merged
- 12:42 PM Bug #36765 (Fix Under Review): OpenStack integration documentation is lacking critical informatio...
- There are a few things missing from the documentation, as far as OpenStack integration and Swift drop-in replacement ...
- 09:34 AM Bug #36706: Ceph ECBackend: assert fail at PGLog::trim
- duplicate with http://tracker.ceph.com/issues/36686
Close this please!
Fixed with luminous branch v12.2.10
...
11/11/2018
- 11:58 AM Bug #36763: rgw: set null version object issues
- fixes:
https://github.com/ceph/ceph/pull/25044
- 11:56 AM Bug #36763 (Resolved): rgw: set null version object issues
- 1.set null version object acl will create empty index
RGWRados::set_attrs did not clear instance, so index prepare, ...
- 11:56 AM Bug #36762 (Duplicate): rgw: set null version object issues
- 1.set null version object acl will create empty index
RGWRados::set_attrs did not clear instance, so index prepare, ...
- 11:54 AM Bug #36761 (Duplicate): rgw: set null version object issues
- 1.set null version object acl will create empty index
RGWRados::set_attrs did not clear instance, so index prepare, ...
11/10/2018
- 08:54 AM Backport #36757 (Resolved): luminous: rgw-admin: reshard add can add a non existant bucket
- https://github.com/ceph/ceph/pull/25088
- 08:54 AM Backport #36756 (Resolved): mimic: rgw-admin: reshard add can add a non existant bucket
- https://github.com/ceph/ceph/pull/25087
11/08/2018
- 03:22 PM Backport #36733: luminous: beast frontend fails to parse ipv6 endpoints
- depends on http://tracker.ceph.com/issues/24358
- 03:16 PM Backport #36733 (Resolved): luminous: beast frontend fails to parse ipv6 endpoints
- https://github.com/ceph/ceph/pull/25512
- 03:21 PM Backport #24358 (In Progress): luminous: SSL support for beast frontend
- https://github.com/ceph/ceph/pull/24621
- 03:16 PM Backport #36734 (Resolved): mimic: beast frontend fails to parse ipv6 endpoints
- https://github.com/ceph/ceph/pull/25079
- 03:09 PM Bug #36662 (Pending Backport): beast frontend fails to parse ipv6 endpoints
- 03:08 PM Bug #36621 (Resolved): rgw: move keystone secrets from ceph.conf to external files
- 03:04 PM Bug #36449 (Pending Backport): rgw-admin: reshard add can add a non existant bucket
11/07/2018
- 08:00 PM Bug #24551: RGW Dynamic bucket index resharding keeps resharding all buckets
- @Beom-Seok Park
From all the output you displayed, I wasn't clear what the issue you were reporting was. The main ...
- 08:26 AM Bug #24551: RGW Dynamic bucket index resharding keeps resharding all buckets
- luminous commit 8157642b94a60dbfc3c88529a543a094d45d2b5e + https://github.com/ceph/ceph/pull/24898
rgw dynamic resha...
- 04:20 PM Bug #23817: Bucket policy and colons in filename
- ...
- 04:18 PM Bug #23817: Bucket policy and colons in filename
- I was not able to reproduce this, so if you could give me reproducers I would greatly appreciate it!
- 01:17 PM Bug #23817: Bucket policy and colons in filename
- Hello,
We've just been hit by this issue on a Ceph Cluster running version 12.2.8.
We can easily provide reprod...
- 08:55 AM Bug #36619: radosgw-admin realm pull fails with an error "request failed: (13) Permission denied ...
- I followed link below with newly created clusters and ran into the same issue. My guess is that it's related to the P...
11/06/2018
- 09:00 AM Backport #36644: luminous: SSE encryption does not detect ssl termination in proxy
- https://github.com/ceph/ceph/pull/24944
- 03:16 AM Bug #36706 (Duplicate): Ceph ECBackend: assert fail at PGLog::trim
- ...
11/05/2018
- 10:03 PM Bug #36654 (Fix Under Review): librgw not check user max objects
- 09:10 PM Backport #36645: mimic: SSE encryption does not detect ssl termination in proxy
- Also, in the PR, the description doesn't need to contain anything but a link to the backport tracker issue.
If you...
- 09:09 PM Backport #36645: mimic: SSE encryption does not detect ssl termination in proxy
- Thanks for the backport @Jonathan! If possible, could you put the PR link in the _description_? (This only applies to...
- 12:32 PM Backport #36645 (In Progress): mimic: SSE encryption does not detect ssl termination in proxy
- https://github.com/ceph/ceph/pull/24931
- 03:46 PM Backport #36644 (In Progress): luminous: SSE encryption does not detect ssl termination in proxy
11/03/2018
- 02:50 PM Backport #36697 (Need More Info): luminous: multisite: segfault in 'radosgw-admin sync status'
- Should be backported together with #36538 - leaving it to an RGW developer.
- 02:31 PM Backport #36697 (Rejected): luminous: multisite: segfault in 'radosgw-admin sync status'
- Should be backported together with #36538 - leaving it to an RGW developer.
- 02:49 PM Backport #36538 (Need More Info): luminous: use-after-free from RGWRadosGetOmapKeysCR::~RGWRadosG...
- This is a non-trivial backport and needs to be done together with another backport, so earmarking it for an RGW devel...
- 02:40 PM Backport #36698 (In Progress): mimic: multisite: segfault in 'radosgw-admin sync status'
- 02:32 PM Backport #36698 (Resolved): mimic: multisite: segfault in 'radosgw-admin sync status'
- https://github.com/ceph/ceph/pull/24912
- 02:31 PM Bug #36537 (Pending Backport): multisite: segfault in 'radosgw-admin sync status'
- 02:31 PM Backport #36539 (In Progress): mimic: use-after-free from RGWRadosGetOmapKeysCR::~RGWRadosGetOmap...
- 02:22 PM Backport #36535 (In Progress): mimic: radosgw-admin user stats are incorrect when dynamic re-shar...
- 02:19 PM Backport #36533 (In Progress): mimic: cls_user_remove_bucket does not write the modified cls_user...
- 02:18 PM Backport #36414 (In Progress): luminous: librgw: crashes in multisite configuration
- 02:16 PM Backport #36415 (In Progress): mimic: librgw: crashes in multisite configuration
- 02:15 PM Bug #23216 (Resolved): multi-site: object name should be urlencoded when we put it into ES
- The fix has been in mimic since 13.0.2
- 12:27 PM Backport #36223 (In Progress): mimic: rgw: default quota not set in radosgw for Openstack users
- 04:45 AM Bug #25019 (Resolved): multisite: curl client does not time out on sync requests
- Jewel is EOL
- 04:44 AM Bug #24915: rgw_file: "deep stat"/stats of unenumerated paths not handled
- Jewel is EOL
- 04:44 AM Bug #24915 (Resolved): rgw_file: "deep stat"/stats of unenumerated paths not handled
- 04:43 AM Bug #24603 (Resolved): rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- Jewel is EOL
- 04:42 AM Bug #24280 (Resolved): rgw: fail to recover index from crash
- 04:41 AM Backport #24627 (Rejected): jewel: rgw: fail to recover index from crash
- Jewel is EOL
- 04:41 AM Bug #23301 (Resolved): rgw: 403 error when creating an object with metadata containing sequence o...
- Jewel is EOL
- 04:39 AM Bug #22844 (Resolved): rgw: do not reflect period if not current in RGWPeriodPuller::pull
- Jewel is EOL
- 04:38 AM Bug #22820 (Resolved): rgw_file: avoid fragging thread_local log buffer
- 04:38 AM Backport #22890 (Rejected): jewel: rgw_file: avoid fragging thread_local log buffer
- Jewel is EOL
- 04:37 AM Bug #22473 (Resolved): multisite: 'radosgw-admin sync error list' contains temporary EBUSY errors
- 04:36 AM Bug #21583 (Resolved): "radosgw-admin zonegroup set" requires realm
- Jewel is EOL
- 04:35 AM Bug #21377 (Resolved): rgw:lc: set LifecycleConfiguration without "Rule" tag return OK
- 04:34 AM Backport #22638 (Rejected): jewel: rgw:lc: set LifecycleConfiguration without "Rule" tag return OK
- Jewel is EOL
- 04:34 AM Bug #20757 (Resolved): rgw: RadosGW leaks memory during Swift Static Website's error handling
- 04:34 AM Backport #20817 (Rejected): jewel: rgw: RadosGW leaks memory during Swift Static Website's error ...
- Jewel is EOL
- 04:33 AM Bug #20418 (Resolved): rgw: URI with \0 in the middle is silently handled as its initial part
- 04:33 AM Backport #20830 (Rejected): jewel: rgw: URI with \0 in the middle is silently handled as its init...
- Jewel is EOL
- 04:32 AM Bug #20307 (Resolved): website request URL without slashes (for folder) returns error
- 04:32 AM Backport #20832 (Rejected): jewel: website request URL without slashes (for folder) returns error
- Jewel is EOL
- 04:31 AM Bug #20234 (Resolved): rgw: shadow objects are sometimes not removed
- 04:31 AM Backport #21271 (Rejected): jewel: rgw: shadow objects are sometimes not removed
- Jewel is EOL
- 04:30 AM Bug #20064 (Resolved): allow empty data_extra_pool in zone placement
- 04:30 AM Backport #20136 (Rejected): jewel: allow empty data_extra_pool in zone placement
- Jewel is EOL
- 04:27 AM Bug #24367 (Resolved): multisite: object metadata operations are skipped by sync
- 04:27 AM Bug #24367: multisite: object metadata operations are skipped by sync
- Jewel is EOL
- 03:55 AM Bug #25109 (Resolved): rgw: default value for objecter_inflight_ops is too low
- 03:55 AM Backport #36571 (Resolved): mimic: rgw: default value for objecter_inflight_ops is too low
- 03:46 AM Backport #36536 (Resolved): luminous: radosgw-admin user stats are incorrect when dynamic re-shar...
- 03:45 AM Backport #36534 (Resolved): luminous: cls_user_remove_bucket does not write the modified cls_user...
- 03:44 AM Backport #36570 (Resolved): luminous: rgw: default value for objecter_inflight_ops is too low
- 03:44 AM Bug #36041 (Resolved): Beast frontend tries to bind (privileged) ports after dropping privileges ...
- 03:44 AM Backport #36333 (Resolved): luminous: Beast frontend tries to bind (privileged) ports after dropp...
- 03:43 AM Bug #35812 (Resolved): multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- 03:42 AM Backport #36212 (Resolved): luminous: multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- 03:42 AM Bug #35715 (Resolved): multisite: memory leak from curl_multi_add_handle()
- 03:41 AM Backport #36214 (Resolved): luminous: multisite: memory leak from curl_multi_add_handle()
- 03:41 AM Bug #36265 (Resolved): rgw: list bucket can not show the object uploaded by RGWPostObj when enabl...
- 03:40 AM Backport #36423 (Resolved): luminous: rgw: list bucket can not show the object uploaded by RGWPos...
- 03:40 AM Bug #26897 (Resolved): multisite: data full sync does not limit concurrent bucket sync
- 03:39 AM Backport #36215 (Resolved): luminous: multisite: data full sync does not limit concurrent bucket ...
- 03:27 AM Bug #24937 (Resolved): [rgw] Very high cache misses with automatic bucket resharding
- Backports are going via #27219
- 03:25 AM Bug #24551 (Resolved): RGW Dynamic bucket index resharding keeps resharding all buckets
- Backports are going via #27219
- 03:21 AM Backport #36688 (In Progress): luminous: lock in resharding may expires before the dynamic reshar...
- 03:17 AM Backport #36688 (Resolved): luminous: lock in resharding may expires before the dynamic reshardin...
- https://github.com/ceph/ceph/pull/25326
- 03:18 AM Backport #36687 (In Progress): mimic: lock in resharding may expires before the dynamic reshardin...
- 03:17 AM Backport #36687 (Resolved): mimic: lock in resharding may expires before the dynamic resharding c...
- https://github.com/ceph/ceph/pull/24899
- 03:17 AM Bug #27219 (Pending Backport): lock in resharding may expires before the dynamic resharding compl...
11/02/2018
- 06:34 PM Bug #24937 (Pending Backport): [rgw] Very high cache misses with automatic bucket resharding
- This PR (https://github.com/ceph/ceph/pull/24898) is a luminous backport of a bug fix that resolved this in both mast...
- 06:32 PM Bug #24551 (Pending Backport): RGW Dynamic bucket index resharding keeps resharding all buckets
- The code in the above-listed PR (https://github.com/ceph/ceph/pull/24898) will allow resharding to complete if it's t...
- 06:28 PM Bug #24551: RGW Dynamic bucket index resharding keeps resharding all buckets
- This PR is a luminous backport of a downstream issue that seems to be related. I hope it will be in the next luminous...
- 02:51 PM Backport #36571: mimic: rgw: default value for objecter_inflight_ops is too low
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24860
merged
- 12:18 PM Feature #36681: rgw: Return tenant field in bucket_stats function
- https://github.com/ceph/ceph/pull/24895
- 12:04 PM Feature #36681 (Resolved): rgw: Return tenant field in bucket_stats function
- The tenant of a bucket should be returned by the bucket_stats() function.
11/01/2018
- 08:26 PM Bug #36662 (Fix Under Review): beast frontend fails to parse ipv6 endpoints
- https://github.com/ceph/ceph/pull/24887
- 04:10 PM Backport #36536: luminous: radosgw-admin user stats are incorrect when dynamic re-sharding is ena...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24854
merged
- 04:10 PM Backport #36534: luminous: cls_user_remove_bucket does not write the modified cls_user_stats
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24855
merged
- 04:09 PM Backport #36570: luminous: rgw: default value for objecter_inflight_ops is too low
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24862
merged
- 04:06 PM Backport #36333: luminous: Beast frontend tries to bind (privileged) ports after dropping privile...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24454
merged
- 04:05 PM Backport #36212: luminous: multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24507
merged
- 04:05 PM Backport #36214: luminous: multisite: memory leak from curl_multi_add_handle()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24519
merged
- 04:04 PM Backport #36423: luminous: rgw: list bucket can not show the object uploaded by RGWPostObj when e...
- joke lee wrote:
> https://github.com/ceph/ceph/pull/24570
merged
- 04:04 PM Backport #36215: luminous: multisite: data full sync does not limit concurrent bucket sync
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24857
merged
Reviewed-by: Casey Bodley <cbodley@redhat.com>
- 02:30 PM Bug #24937: [rgw] Very high cache misses with automatic bucket resharding
- This may be related to the problem addressed by http://tracker.ceph.com/issues/27219 . The problem there was that due...
- 02:27 PM Bug #24551: RGW Dynamic bucket index resharding keeps resharding all buckets
- PR https://github.com/ceph/ceph/pull/24406 addresses issues that were described similarly.
- 12:24 PM Bug #36670: rgw: es: fix doc dump etag
- https://github.com/ceph/ceph/pull/24877
- 12:09 PM Bug #36670 (Fix Under Review): rgw: es: fix doc dump etag
- Since the etag is stored without the trailing '\0', the ES doc dump should change as well; otherwise it will miss the last letter.
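The off-by-one described above comes from length-based truncation that assumes a trailing NUL. An illustrative sketch (an assumption, not the actual RGW fix) is to strip any NUL terminator explicitly rather than dropping the last byte unconditionally:

```python
# Convert a raw etag buffer to a string for the ES doc dump. Stripping the
# NUL (if present) handles both NUL-terminated and NUL-free buffers, whereas
# always dumping len-1 bytes drops the final character of a NUL-free etag.
def etag_for_es(raw: bytes) -> str:
    return raw.rstrip(b"\x00").decode()
```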
- 12:23 PM Bug #36671: rgw: RGWHTTP::send support send_data to fix es search
- https://github.com/ceph/ceph/pull/24879
- 12:22 PM Bug #36671 (Fix Under Review): rgw: RGWHTTP::send support send_data to fix es search
- ES search sends the query in the request body, so send_data_hint is needed.
- 12:05 AM Bug #36293: InvalidBucketName expected in more cases: uppercase, adjacent chars, underscores
- sorry, but I'm becoming busy for other things and don't have much time to look how period config should be changed.
...
10/31/2018
- 07:56 PM Bug #36665: rgw: remove rgw_aclparser.cc
- https://github.com/ceph/ceph/pull/24866
- 07:54 PM Bug #36665 (Resolved): rgw: remove rgw_aclparser.cc
- Per https://github.com/ceph/ceph/pull/18682, this file is not being
built and is not going to be maintained.
- 06:17 PM Bug #36662 (Resolved): beast frontend fails to parse ipv6 endpoints
- ...
- 05:09 PM Backport #36570 (In Progress): luminous: rgw: default value for objecter_inflight_ops is too low
- 05:08 PM Backport #36570: luminous: rgw: default value for objecter_inflight_ops is too low
- https://github.com/ceph/ceph/pull/24862
- 04:33 PM Backport #36571 (In Progress): mimic: rgw: default value for objecter_inflight_ops is too low
- https://github.com/ceph/ceph/pull/24860
- 02:23 PM Backport #36215 (In Progress): luminous: multisite: data full sync does not limit concurrent buck...
- https://github.com/ceph/ceph/pull/24857
- 01:36 PM Backport #36534 (In Progress): luminous: cls_user_remove_bucket does not write the modified cls_u...
- https://github.com/ceph/ceph/pull/24855
- 01:35 PM Backport #36536 (In Progress): luminous: radosgw-admin user stats are incorrect when dynamic re-s...
- https://github.com/ceph/ceph/pull/24854
- 06:13 AM Bug #36654: librgw not check user max objects
- Here is my patch:
https://github.com/ceph/ceph/pull/24846
- 05:52 AM Bug #36654 (Fix Under Review): librgw not check user max objects
- Hi,
librgw does not seem to check the max-objects limit of the user's quota. For example, if we set max_objects to 3, we can still ...
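The missing check described in this report is a simple admission test. A minimal sketch with a hypothetical helper name (librgw's actual quota code differs) could be:

```python
# Decide whether one more object may be written under a user quota.
# By common convention a negative max_objects means "no limit".
def quota_allows_put(num_objects: int, max_objects: int) -> bool:
    if max_objects < 0:  # quota disabled
        return True
    return num_objects + 1 <= max_objects
```

With max_objects set to 3 and 3 objects already present, a fourth put should be rejected, which is exactly the case the reporter says slips through.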
10/30/2018
- 05:15 PM Backport #36645 (Resolved): mimic: SSE encryption does not detect ssl termination in proxy
- https://github.com/ceph/ceph/pull/24931
- 05:15 PM Backport #36644 (Resolved): luminous: SSE encryption does not detect ssl termination in proxy
- https://github.com/ceph/ceph/pull/24944
- 04:08 PM Bug #36619: radosgw-admin realm pull fails with an error "request failed: (13) Permission denied ...
- Can we raise the severity of the issue to 1, as it is a blocker to our testing of multi-site functionality for Ceph?
- 02:52 PM Bug #36512: Lifecycle doesn't remove delete markers
- More specifically, it seems this behaviour happens only if you have autosharding enabled and a reshard has happened.
- 02:29 AM Bug #36623 (New): radosgw crush by FAILED assert
- 2018-10-29 23:59:03.280173 7f84cb523700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/...
- 01:57 AM Bug #36621: rgw: move keystone secrets from ceph.conf to external files
- https://github.com/ceph/ceph/pull/24816
- 01:53 AM Bug #36621 (Resolved): rgw: move keystone secrets from ceph.conf to external files
10/29/2018
- 09:41 PM Backport #35977 (Resolved): mimic: multisite: incremental data sync makes unnecessary call to RGW...
- 08:20 PM Backport #35977: mimic: multisite: incremental data sync makes unnecessary call to RGWReadRemoteD...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24710
merged
- 09:37 PM Bug #36619 (Resolved): radosgw-admin realm pull fails with an error "request failed: (13) Permiss...
- Hi All,
I am trying to setup a multi-site following the below link
http://docs.ceph.com/docs/master/radosgw/mul...
- 08:00 PM Bug #36293: InvalidBucketName expected in more cases: uppercase, adjacent chars, underscores
- We need 3 variations on validations:
1. validation per Swift rules
2. validation per S3-relaxed (old us-east-1 rule...
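For the strict variant implied by the issue title (rejecting uppercase letters, underscores, and adjacent special characters), a sketch of a validator could look like the following; the function name and exact rule set are assumptions for illustration, not RGW's implementation:

```python
# Strict S3-style bucket-name validation sketch: 3-63 chars, lowercase
# letters/digits/dots/hyphens only, must start and end alphanumeric,
# and no adjacent '.'/'-' pairs.
import re

def valid_strict_s3_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    return ".." not in name and ".-" not in name and "-." not in name
```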
10/26/2018
- 11:05 AM Bug #23663 (Resolved): libcurl ignores headers with empty value, leading to signature mismatches
- 11:05 AM Backport #23907 (Rejected): jewel: libcurl ignores headers with empty value, leading to signature...
- Jewel is EOL
- 11:04 AM Bug #23468 (Resolved): radosgw-admin should not use metadata cache when not needed
- 11:04 AM Backport #23684 (Rejected): jewel: radosgw-admin should not use metadata cache when not needed
- Jewel is EOL
- 11:04 AM Bug #22804 (Resolved): multisite Synchronization failed when read and write delete at the same time
- 11:04 AM Backport #23692 (Rejected): jewel: multisite Synchronization failed when read and write delete at...
- Jewel is EOL
- 11:03 AM Bug #22721 (Resolved): Resharding hangs with versioning-enabled buckets
- 11:03 AM Backport #23887 (Rejected): jewel: Resharding hangs with versioning-enabled buckets
- Jewel is EOL
10/25/2018
10/24/2018
- 10:40 AM Bug #36582 (Can't reproduce): `radosgw-admin bucket limit check` does not show all buckets
- `radosgw-admin bucket list` shows 56 buckets. However, `radosgw-admin bucket limit check` shows 55 buckets.
is thi...
- 08:50 AM Fix #36579: rgw: Correct permission evaluation to allow only admin users to work with Roles
- The PR for this issue is: https://github.com/ceph/ceph/pull/24510
- 08:49 AM Fix #36579 (Resolved): rgw: Correct permission evaluation to allow only admin users to work with ...
- The PR for this issue is: https://github.com/ceph/ceph/pull/24510
This is a follow-on fix for https://github.com/c...
10/23/2018
- 05:24 PM Backport #36571 (Resolved): mimic: rgw: default value for objecter_inflight_ops is too low
- https://github.com/ceph/ceph/pull/24860
- 05:24 PM Backport #36570 (Resolved): luminous: rgw: default value for objecter_inflight_ops is too low
- https://github.com/ceph/ceph/pull/24862
- 03:32 PM Bug #25109 (Pending Backport): rgw: default value for objecter_inflight_ops is too low
- 09:08 AM Bug #36562: [RGW][metadata] There is a huge failure rate when use put metadat API concurrently
- add env and ceph version info:
# ceph -v
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stab...
- 09:02 AM Bug #36562 (New): [RGW][metadata] There is a huge failure rate when use put metadat API concurren...
- There is a huge failure rate when using the put metadata API concurrently, even though the concurrency level is not very high.
...