Activity
From 11/02/2017 to 12/01/2017
12/01/2017
- 04:45 PM Bug #21984: RGW bug: rewriting a versioned object creates a new object
- 01:58 PM Bug #22296: Librgw shuts down incorrectly
- @Tao Thanks for your work debugging this. This removes the crash, but I'm not certain whether we should really be se...
- 09:30 AM Bug #22296: Librgw shuts down incorrectly
- Addressed by: https://github.com/ceph/ceph/pull/19279
- 08:02 AM Bug #22296 (Resolved): Librgw shuts down incorrectly
- Hi,
I've recently been using nfs-ganesha over RGW to do some tests. I find that each time nfs-ganesha shuts down, the ...
- 12:50 PM Bug #22279: re-creating bucket should get BucketAlreadyOwnedByYou/BucketAlreadyExists
- https://github.com/ceph/ceph/pull/19249
- 12:22 PM Bug #22295: rgw multisite can't sync obj that has a name with url special character like '/'
- Hey, what's your Ceph version? And would you mind checking the RGW logs?
- 07:56 AM Bug #22295 (Triaged): rgw multisite can't sync obj that has a name with url special character lik...
- obj can sync:
normalobj
obj can't sync:
test/test/test
test/test/test/s
Is this a bug?
- 08:35 AM Bug #22274: Luminous's radosgw can't decide if it's 32- or 64-bit
- Hi Matt :-)
We'll provide it with pleasure. Here we go:
[bartosz.bajorek@c49960]{09:28:15}[~]$ cat ceph-rados-32-...
- 03:18 AM Bug #21368: rados gateway failed to sync with openstack keystone
- Yes, it worked. I updated to the newest Ceph.
Thanks.
- 03:06 AM Bug #21368: rados gateway failed to sync with openstack keystone
- Yehuda Sadeh wrote:
> Is this still a problem for you?
It still errors. I had to change to Keystone v2; that works.
B...
- 03:06 AM Bug #21397: permission denied rados gateway multi-size meta search integration Elasticsearch
- I fixed it.
Thanks.
11/30/2017
- 09:36 PM Bug #21560: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
- Unfortunately not 12.2.2, but master has the change. It should reach 12.2.3.
Matt
- 09:30 PM Bug #21560: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
- This is awesome! Would it be a part of 12.2.2 by any chance?
- 07:01 PM Bug #21560: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
- (that has merged to master and is pending backport)
- 07:00 PM Bug #21560: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
- This may be addressed by cache consistency bugfix with similar result:
https://github.com/ceph/ceph/pull/18954
- 07:34 PM Bug #22225: rgw:socket leak in s3 multi part upload
- Sorry, related but not a duplicate. After triage, we're interested in:
1. behavior on L or master
2. Ceph configur...
- 07:32 PM Bug #22225 (In Progress): rgw:socket leak in s3 multi part upload
- 07:28 PM Bug #22225 (Duplicate): rgw:socket leak in s3 multi part upload
- 07:28 PM Bug #22225: rgw:socket leak in s3 multi part upload
- We think this is a duplicate of:
http://tracker.ceph.com/issues/21401
(will backport to Luminous)
- 07:32 PM Bug #21620 (Duplicate): radosgw-admin reshard process : unrecognized arg process
- 07:31 PM Bug #21723 (Pending Backport): rgw: radosgw-admin reshard command argument error.
- 07:29 PM Bug #22148 (Resolved): command `radosgw-admin bi put` does not correctly set the mtime
- 07:22 PM Bug #22015 (Triaged): Civetweb reports bad response code.
- 07:20 PM Bug #22274 (Need More Info): Luminous's radosgw can't decide if it's 32- or 64-bit
- Hi Folks,
Could you provide more information on the hardware and OS platform, and also binary packages you're usin...
- 07:03 PM Bug #22202: rgw_statfs should report the correct stats
- The general approach sounds good. Conversation is happening on the attached branch.
- 07:02 PM Bug #22202 (In Progress): rgw_statfs should report the correct stats
- 06:56 PM Bug #21015 (Pending Backport): rgw: make HTTP dechunking compatible with Amazon S3
- 06:49 PM Bug #21591 (Need More Info): RGW multisite does not sync all objects
- 06:41 PM Bug #22129 (Pending Backport): rgw: 501 is returned When init multipart is using V4 signature and...
- 06:40 PM Bug #22124: rgw: user stats increased after bucket reshard
- 10:21 AM Bug #22124 (Fix Under Review): rgw: user stats increased after bucket reshard
- 10:21 AM Bug #22124: rgw: user stats increased after bucket reshard
- https://github.com/ceph/ceph/pull/19253
- 10:22 AM Bug #22094: Lots of reads on default.rgw.usage pool
- Can you clarify this? I now restart my rgw daemons every hour, which is ... ehm ... Suboptimal. :)
- 07:49 AM Bug #22279 (New): re-creating bucket should get BucketAlreadyOwnedByYou/BucketAlreadyExists
11/29/2017
- 04:09 PM Bug #22272: S3 API: incorrect error code on GET website bucket
- 02:20 PM Bug #22272: S3 API: incorrect error code on GET website bucket
- *PR*: https://github.com/ceph/ceph/pull/19236
- 12:19 PM Bug #22272 (Resolved): S3 API: incorrect error code on GET website bucket
- RGW returns "NoSuchKey" error code on GET website, when bucket doesn't have website configuration:
Request:
@GET /s...
- 03:27 PM Bug #22015: Civetweb reports bad response code.
- Same here. Guys, can you take a look at this issue? StatusCode is a Major issue :-)
@SageWell ?
- 03:25 PM Bug #22094: Lots of reads on default.rgw.usage pool
- BTW just wanted to note here, that as a workaround we can bump the value of "osd_max_omap_entries_per_request" (a new...
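The workaround mentioned above can be sketched as a ceph.conf fragment. This is a hedged illustration: the option name is as quoted in the comment, and the value is a placeholder to verify against your release's default before raising it.

```ini
# Hedged sketch of the workaround described above: raise the per-request
# omap cap so each usage-log read returns more entries, reducing the number
# of round trips to the OSDs. The value below is a placeholder.
[osd]
osd_max_omap_entries_per_request = 131072
```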
- 02:47 PM Bug #22274 (Won't Fix): Luminous's radosgw can't decide if it's 32- or 64-bit
- Hi,
we have found something interesting:
strace: Process 86681 attached with 1104 threads
strace: [ Process PID=...
11/28/2017
- 08:05 PM Backport #22259 (Fix Under Review): jewel: rgw: swift anonymous access doesn't work in jewel
- This is a jewel-only bugfix.
- 04:38 AM Backport #22259: jewel: rgw: swift anonymous access doesn't work in jewel
- I have a pull request for the fix: https://github.com/ceph/ceph/pull/19194
- 04:22 AM Backport #22259 (Resolved): jewel: rgw: swift anonymous access doesn't work in jewel
- https://github.com/ceph/ceph/pull/19194
- 06:47 AM Bug #22261 (Duplicate): Object remaining in domain_root pool after delete bucket
- We use boto to create & delete a bucket. After the bucket is deleted, the .bucket.meta object is still in the domain_root pool. Is this a bug?...
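A reproduction along the lines described above could look like the following. This is a sketch, not the reporter's exact script: it uses the AWS CLI instead of boto, and the endpoint, credentials, and pool name (default.rgw.meta) are assumptions to adapt to your zone's configured domain_root pool.

```shell
# Sketch of the reported sequence (endpoint and pool name are placeholders).
aws --endpoint-url http://rgw.example.com s3api create-bucket --bucket testbucket
aws --endpoint-url http://rgw.example.com s3api delete-bucket --bucket testbucket

# After the delete, check whether a .bucket.meta object lingers in the
# domain_root pool (here assumed to be default.rgw.meta):
rados -p default.rgw.meta ls --all | grep '\.bucket\.meta'
```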
- 12:49 AM Backport #22211 (In Progress): jewel: radosgw-admin zonegroup get and zone get should return defa...
- https://github.com/ceph/ceph/pull/19189
11/27/2017
- 09:59 AM Bug #22223: Swift API - Keystone Token Expiry - 403 instead of 401.
- Hi @Nathan,
Thanks, do you have any idea if this is fixed in Luminous?
- 09:47 AM Bug #22234 (Fix Under Review): rgw usage trim only trims a few entries
- master pr: https://github.com/ceph/ceph/pull/19131
- 07:48 AM Bug #22248 (Resolved): system user can't delete bucket completely
- Create a bucket with normal user via s3 api, then remove the bucket with system user. The bucket instance is removed ...
- 04:49 AM Backport #22178: jewel: rgw: lifecycle process may block RGWRealmReloader::reload
- @Nathan, the src/rgw/rgw_lc.cc file is not available in the jewel release. What would be the possible way to backport this c...
- 03:02 AM Bug #22246: rgw: persist rgw_lc_max_objs
- https://github.com/ceph/ceph/pull/19119
- 02:58 AM Bug #22246 (New): rgw: persist rgw_lc_max_objs
- The argument rgw_lc_max_objs is not persisted, if it's changed to
a smaller value, some bucket's lifecycle config wi...
- 01:38 AM Bug #21583: "radosgw-admin zonegroup set" requires realm
- https://github.com/ceph/ceph/pull/19061
11/26/2017
- 02:02 PM Bug #22225: rgw:socket leak in s3 multi part upload
- Kraken is EOL. Is the issue reproducible on Luminous or master?
- 02:01 PM Bug #22223: Swift API - Keystone Token Expiry - 403 instead of 401.
- Hi @Ross: Sorry to be the "bringer of bad news" here. . . . Kraken has been declared End Of Life (EOL) so do not expe...
11/24/2017
- 03:23 PM Bug #22234 (In Progress): rgw usage trim only trims a few entries
- 03:22 PM Bug #22234 (Resolved): rgw usage trim only trims a few entries
- rgw usage trim only trims a few entries (128 omap keys as per the code) which results in the need to call this multip...
11/22/2017
- 02:58 PM Bug #22225 (In Progress): rgw:socket leak in s3 multi part upload
- Kraken : 11.2.1
radosgw: S3 API.
When using rclone and the Ceph S3 API, we saw XML errors not long after
first u...
- 02:10 PM Bug #22223: Swift API - Keystone Token Expiry - 403 instead of 401.
- To confirm: RGW 11.2.1.
- 01:55 PM Bug #22223 (New): Swift API - Keystone Token Expiry - 403 instead of 401.
- The Kraken implementation of the Swift API for RGW sends a 403 (Forbidden) response rather than a 401 (Unauthorized) ...
- 01:23 PM Fix #22222 (New): RGW whose "rgw_print_continue" is false should return 417 immediately when rece...
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Expect
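The expected behavior can be illustrated with a hypothetical request sketch (endpoint, bucket, and file are placeholders, not from the report):

```shell
# curl sends "Expect: 100-continue" for uploads like this one; with
# rgw_print_continue = false, the request above asks that RGW answer
# 417 Expectation Failed immediately instead of waiting for the body.
curl -v -X PUT "http://rgw.example.com/testbucket/testobject" \
     -H "Expect: 100-continue" \
     -T ./payload.bin
```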
- 03:51 AM Bug #22218 (Triaged): multisite: rgw crashed during meta sync
- This report superficially resembles an issue thought resolved in 12.2.1, http://tracker.ceph.com/issues/21097. Orit,...
- 03:10 AM Bug #22218 (Duplicate): multisite: rgw crashed during meta sync
- ceph version:12.1.3
I built a multi-zone multisite. It contains a realm, a zonegroup and two zones. After creating lo...
- 02:31 AM Backport #22179 (In Progress): luminous: Swift object expiry incorrectly trims entries, leaving b...
- https://github.com/ceph/ceph/pull/19090
- 12:38 AM Backport #22177 (In Progress): luminous: rgw: lifecycle process may block RGWRealmReloader::reload
- https://github.com/ceph/ceph/pull/19088
11/21/2017
- 10:58 PM Backport #22214: luminous: Bucket policy evaluation is not carried out for DeleteBucketWebsite
- -https://github.com/ceph/ceph/pull/19087-
- 07:31 PM Backport #22214 (Resolved): luminous: Bucket policy evaluation is not carried out for DeleteBucke...
- https://github.com/ceph/ceph/pull/19810
- 10:20 PM Backport #22210: luminous: radosgw-admin zonegroup get and zone get should return defaults when t...
- https://github.com/ceph/ceph/pull/19086
- 06:42 PM Backport #22210 (Resolved): luminous: radosgw-admin zonegroup get and zone get should return defa...
- https://github.com/ceph/ceph/pull/19086
- 10:09 PM Backport #22215: luminous: rgw: bucket index object not deleted after radosgw-admin bucket rm --p...
- https://github.com/ceph/ceph/pull/19085
- 08:17 PM Backport #22215 (Resolved): luminous: rgw: bucket index object not deleted after radosgw-admin bu...
- https://github.com/ceph/ceph/pull/19085
- 09:24 PM Bug #22094: Lots of reads on default.rgw.usage pool
- See two logs. One of the users (from broken.log) needs a run from dynamic bucket sharding.
- 07:26 PM Bug #22094: Lots of reads on default.rgw.usage pool
- Certainly. Please see the output of that command here: https://hastebin.com/raw/omoqetimun. Let me know if there's an...
- 05:04 PM Bug #22094: Lots of reads on default.rgw.usage pool
- Can you also give us the output of radosgw-admin usage show --debug-rgw=20 --debug-ms=1 (this is verbose)
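The request above amounts to something like the following; redirecting stderr to a file is just one convenient way to collect the output (where the debug lines land depends on your logging configuration):

```shell
# Capture a verbose trace of the usage query, with the debug levels
# named in the comment above.
radosgw-admin usage show --debug-rgw=20 --debug-ms=1 2> usage-show-debug.log
```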
- 08:16 PM Bug #22122 (Pending Backport): rgw: bucket index object not deleted after radosgw-admin bucket rm...
- 08:16 PM Bug #19959 (Resolved): luminous: rgw: s3 user stats not updated after radosgw-admin bucket rm --p...
- The master PR https://github.com/ceph/ceph/pull/18922 fixes two tracker issues - this one and #22122. We'll take care...
- 08:12 PM Bug #20201: radosgw refuses upload when Content-Type missing from POST policy
- This is waiting for a test case to be added to https://github.com/ceph/s3-tests
Once that is in place, we can do t...
- 08:09 PM Bug #20201 (Pending Backport): radosgw refuses upload when Content-Type missing from POST policy
- 06:42 PM Backport #22211 (Rejected): jewel: radosgw-admin zonegroup get and zone get should return default...
- https://github.com/ceph/ceph/pull/19189
- 03:48 PM Backport #22171 (In Progress): luminous: rgw: log keystone errors at a higher level
- 11:06 AM Backport #22171 (New): luminous: rgw: log keystone errors at a higher level
- 01:46 AM Backport #22171 (In Progress): luminous: rgw: log keystone errors at a higher level
- 09:03 AM Backport #22181 (In Progress): luminous: rgw segfaults after running radosgw-admin data sync init
11/20/2017
- 10:51 PM Bug #22094: Lots of reads on default.rgw.usage pool
- I am not sure if my problem is related but my "radosgw-admin usage show" calls do not return. They just hang on the f...
- 04:11 PM Bug #22094: Lots of reads on default.rgw.usage pool
- 2017-11-20 17:04:09.146690 7fd5d8968700 20 QUERY_STRING=format=json&show-summary=True&start=2017-11-01&show-entries=F...
- 03:48 PM Bug #22094: Lots of reads on default.rgw.usage pool
- For invoicing purposes, I use the following calls:
* /$rgw_admin_entry/user
* /$rgw_admin_entry/metadata/user
* ...
- 03:40 PM Bug #22094: Lots of reads on default.rgw.usage pool
- the usage log isn't read by RGWs themselves (as per my understanding so far, I could be wrong) are there tools using ...
- 10:11 PM Backport #22188: jewel: rgw: add cors header rule check in cors option request
- https://github.com/ceph/ceph/pull/19057
- 11:05 AM Backport #22188 (Resolved): jewel: rgw: add cors header rule check in cors option request
- https://github.com/ceph/ceph/pull/19057
- 08:12 PM Backport #22187: luminous: rgw: add cors header rule check in cors option request
- https://github.com/ceph/ceph/pull/19053
- 11:05 AM Backport #22187 (Resolved): luminous: rgw: add cors header rule check in cors option request
- https://github.com/ceph/ceph/pull/19053
- 08:04 PM Backport #22184: luminous: Dynamic bucket indexing, resharding and tenants seems to be broken
- https://github.com/ceph/ceph/pull/19050
- 11:05 AM Backport #22184 (Resolved): luminous: Dynamic bucket indexing, resharding and tenants seems to be...
- https://github.com/ceph/ceph/pull/19050
- 04:52 PM Bug #22202 (Resolved): rgw_statfs should report the correct stats
- On NFS-Ganesha/RGW client, "df" command returns 0 for all the fields. It's because the function (https://github.com/c...
- 02:15 PM Bug #21615 (Pending Backport): radosgw-admin zonegroup get and zone get should return defaults wh...
- 12:45 PM Backport #22183 (In Progress): luminous: rgw: multisite with jewel as master will not sync data
- https://github.com/ceph/ceph/pull/19038
- 11:09 AM Backport #22183: luminous: rgw: multisite with jewel as master will not sync data
- I am working on this.
- 11:05 AM Backport #22183 (Resolved): luminous: rgw: multisite with jewel as master will not sync data
- https://github.com/ceph/ceph/pull/19038
- 11:05 AM Backport #22182 (Resolved): jewel: rgw segfaults after running radosgw-admin data sync init
- https://github.com/ceph/ceph/pull/19783
- 11:05 AM Backport #22181 (Resolved): luminous: rgw segfaults after running radosgw-admin data sync init
- https://github.com/ceph/ceph/pull/19071
- 11:05 AM Backport #22180 (Resolved): jewel: Swift object expiry incorrectly trims entries, leaving behind ...
- https://github.com/ceph/ceph/pull/18925
- 11:05 AM Backport #22179 (Resolved): luminous: Swift object expiry incorrectly trims entries, leaving behi...
- https://github.com/ceph/ceph/pull/18972
https://github.com/ceph/ceph/pull/19090
- 11:05 AM Backport #22178 (Rejected): jewel: rgw: lifecycle process may block RGWRealmReloader::reload
- 11:05 AM Backport #22177 (Resolved): luminous: rgw: lifecycle process may block RGWRealmReloader::reload
- https://github.com/ceph/ceph/pull/19088
- 11:05 AM Backport #22171 (Resolved): luminous: rgw: log keystone errors at a higher level
- https://github.com/ceph/ceph/pull/19077
- 10:24 AM Bug #22151 (Pending Backport): rgw: log keystone errors at a higher level
- 09:17 AM Feature #22168 (New): The RGW Admin OPS is missing the ability to filter for e.g. buckets and users
- Currently it is not possible to use the RGW Admin OPS API in scenarios with a large number of users and buckets. The ...
11/19/2017
- 05:23 PM Bug #22162 (Fix Under Review): add roles_pool json decode and encode in RGWZoneParams
- RGWZoneParams's JSON format is missing roles_pool; fix that so we can set it via 'zone set'.
11/17/2017
- 04:30 PM Bug #20234: rgw: shadow objects are sometimes not removed
- bug reproducible for v12.2.1...
- 02:34 PM Bug #22151: rgw: log keystone errors at a higher level
- master pr: https://github.com/ceph/ceph/pull/18985
- 01:57 PM Bug #22151 (Resolved): rgw: log keystone errors at a higher level
- A misconfigured keystone is not immediately obvious as even at level 5 we just log failed auth strategy, -1 which isn...
- 07:30 AM Bug #22148: command `radosgw-admin bi put` does not correctly set the mtime
- https://github.com/ceph/ceph/pull/18981
- 07:30 AM Bug #22148 (Resolved): command `radosgw-admin bi put` does not correctly set the mtime
- radosgw-admin bi get --bucket=testbucket --object=hello.txt | radosgw-admin bi put --bucket=testbucket
after exec...
- 06:48 AM Bug #21591: RGW multisite does not sync all objects
- I certainly will. Hope it will be out soon.
- 01:03 AM Bug #22094: Lots of reads on default.rgw.usage pool
- maybe an issue with the is_truncated check when reading the usage?
11/16/2017
- 09:26 PM Bug #22094: Lots of reads on default.rgw.usage pool
- 2017-11-16 22:09:54.579970 7faa64f17700 1 -- [fdf5:c7f1:ca7b:0:10:10:9:101]:0/3705443576 <== osd.3 [fdf5:c7f1:ca7b:0...
- 07:27 PM Bug #22094 (Need More Info): Lots of reads on default.rgw.usage pool
- 07:27 PM Bug #22094: Lots of reads on default.rgw.usage pool
- Hi,
Can you provide rgw logs with debug_rgw=20 and debug_ms=1?
Can you provide more info about your setup and config? - 09:20 PM Bug #21154 (Fix Under Review): use-after-free from RGWRadosGetOmapKeysCR::~RGWRadosGetOmapKeysCR
- -https://github.com/ceph/ceph/pull/18975-
https://github.com/ceph/ceph/pull/23634
- 07:36 PM Bug #21154 (In Progress): use-after-free from RGWRadosGetOmapKeysCR::~RGWRadosGetOmapKeysCR
- 07:57 PM Bug #21984 (Fix Under Review): RGW bug: rewriting a versioned object creates a new object
- 07:38 PM Bug #22080: radosgw-admin data sync run crashes
- 07:35 PM Bug #19959: luminous: rgw: s3 user stats not updated after radosgw-admin bucket rm --purge-objects
- 07:31 PM Bug #21397: permission denied rados gateway multi-size meta search integration Elasticsearch
- Is this still an issue?
- 07:30 PM Bug #21368: rados gateway failed to sync with openstack keystone
- Is this still a problem for you?
- 07:27 PM Bug #21619 (In Progress): RGW Reshard error add failed to drop lock on <bucket>
- 07:27 PM Bug #21619: RGW Reshard error add failed to drop lock on <bucket>
- We're working on adding a radosgw-admin reshard abort command that would deal with such issues.
- 07:25 PM Bug #22046 (Pending Backport): Dynamic bucket indexing, resharding and tenants seems to be broken
- 07:18 PM Bug #22122: rgw: bucket index object not deleted after radosgw-admin bucket rm --purge-objects --...
- 08:35 AM Bug #22122: rgw: bucket index object not deleted after radosgw-admin bucket rm --purge-objects --...
- https://github.com/ceph/ceph/pull/18922
- 07:15 PM Bug #22121 (In Progress): rgw: S3 interface: X-Amz-Copy-Source must be URL-encoded
- 07:13 PM Bug #22121: rgw: S3 interface: X-Amz-Copy-Source must be URL-encoded
- (we won't fix kraken, does this reproduce on Luminous or above?)
- 07:12 PM Bug #22121 (Triaged): rgw: S3 interface: X-Amz-Copy-Source must be URL-encoded
- 07:01 PM Bug #21015: rgw: make HTTP dechunking compatible with Amazon S3
- 07:00 PM Bug #20914: cls_rgw.gc_set and defer fail during jewel->luminous upgrade
- We will update the jewel tests to ignore this test or this specific error.
- 06:58 PM Bug #22002 (Pending Backport): rgw: add cors header rule check in cors option request
- 06:58 PM Bug #21591: RGW multisite does not sync all objects
- Can you make sure that this one is fixed once 12.2.2 is out?
- 06:57 PM Bug #21896 (Pending Backport): Bucket policy evaluation is not carried out for DeleteBucketWebsite
- 06:47 PM Bug #22124: rgw: user stats increased after bucket reshard
- We also need to reset the user stats.
- 06:46 PM Bug #22124: rgw: user stats increased after bucket reshard
- We probably need to remove the stats entry for the old bucket instance from the user stats.
- 06:40 PM Bug #22124 (New): rgw: user stats increased after bucket reshard
- 08:37 AM Bug #22124: rgw: user stats increased after bucket reshard
- Aleksei Gutikov wrote:
> https://github.com/ceph/ceph/pull/18922
Sorry, this is not the fix, mixed up with http:/...
- 06:47 PM Bug #22133: RGWRados::gc_aio_operate() ops abandoned on shutdown
- 04:33 PM Bug #22062 (Pending Backport): rgw: multisite with jewel as master will not sync data
- 10:50 AM rgw-testing Bug #22141: rgw: run ../qa/workunits/rgw/run-s3tests.sh in build dir,test_load_graph test not fin...
- https://github.com/ceph/ceph/pull/18966
- 10:25 AM rgw-testing Bug #22141 (New): rgw: run ../qa/workunits/rgw/run-s3tests.sh in build dir,test_load_graph test n...
- When running ../qa/workunits/rgw/run-s3tests.sh in the ceph build dir,
s3tests.fuzz.test.test_fuzzer.test_load_grap need ope...
- 04:06 AM Backport #22022: jewel: rgw: modify s3 type subuser access permission fail
- Shinobu Kinjo wrote:
> -https://github.com/ceph/ceph/pull/18937-
- 02:05 AM Bug #22130: Custom root pools: default zonegroup is not found by radosgw-admin or radosgw
- My bad - false alarm. Casey on the mailing list noted that I had a wrong key:
rgw_region = gold.rgw.root
## this ...
- 01:15 AM Bug #22129: rgw: 501 is returned When init multipart is using V4 signature and chunk encoding
- https://github.com/ceph/ceph/pull/18956
11/15/2017
- 11:25 PM Bug #22139 (Resolved): clang compilation error in BoundedKeyCounter
- 09:34 PM Bug #22139 (Fix Under Review): clang compilation error in BoundedKeyCounter
- https://github.com/ceph/ceph/pull/18953
- 09:02 PM Bug #22139 (Resolved): clang compilation error in BoundedKeyCounter
- clang complains about the use of BoundedKeyCounter::const_pointer_iterator in vector::assign(), because it doesn't sa...
- 07:52 PM Bug #22083 (Pending Backport): rgw segfaults after running radosgw-admin data sync init
- https://github.com/ceph/ceph/pull/18883 also merged
- 03:58 PM Bug #22133 (New): RGWRados::gc_aio_operate() ops abandoned on shutdown
- This is a problem for `radosgw-admin bucket rm` without --bypass-gc, because it queues up a lot of these and then exi...
- 03:33 PM Bug #22099 (Pending Backport): rgw: lifecycle process may block RGWRealmReloader::reload
- https://github.com/ceph/ceph/pull/18846
- 02:50 PM Bug #22129: rgw: 501 is returned When init multipart is using V4 signature and chunk encoding
- Per http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html, this sounds logically correct.
- 02:36 PM Bug #22129: rgw: 501 is returned When init multipart is using V4 signature and chunk encoding
- 11:04 AM Bug #22129: rgw: 501 is returned When init multipart is using V4 signature and chunk encoding
- To fix this issue, shall we modify the code as follows?...
- 11:01 AM Bug #22129 (Resolved): rgw: 501 is returned When init multipart is using V4 signature and chunk e...
- This is what we saw in Wireshark when using a Go S3 SDK to access the S3 service from RGW in 11.2.0....
- 02:42 PM Bug #22124 (Fix Under Review): rgw: user stats increased after bucket reshard
- 02:42 PM Bug #22124 (Fix Under Review): rgw: user stats increased after bucket reshard
- 07:40 AM Bug #22124: rgw: user stats increased after bucket reshard
- https://github.com/ceph/ceph/pull/18922
- 02:36 PM Bug #22130 (Triaged): Custom root pools: default zonegroup is not found by radosgw-admin or radosgw
- 12:58 PM Bug #22130: Custom root pools: default zonegroup is not found by radosgw-admin or radosgw
- If I use the normal .rgw.root then it works as usual.
- 12:56 PM Bug #22130 (Resolved): Custom root pools: default zonegroup is not found by radosgw-admin or radosgw
- Using a custom root (not .rgw.root)
[global]
rgw_realm_root_pool = gold.rgw.root
rgw_zonegroup_root_pool = gold....
- 11:13 AM Bug #22094: Lots of reads on default.rgw.usage pool
- Is there anything I can do to debug this?
- 07:41 AM Bug #19959: luminous: rgw: s3 user stats not updated after radosgw-admin bucket rm --purge-objects
- https://github.com/ceph/ceph/pull/18922
- 06:46 AM Backport #22022: jewel: rgw: modify s3 type subuser access permission fail
- -https://github.com/ceph/ceph/pull/18937-
11/14/2017
- 08:46 PM Feature #22127 (New): rgw: a house cleaning tool
- Admin will be able to schedule removal of users' data, or specific buckets. Potentially other operations. A work thre...
- 07:20 PM Bug #22062: rgw: multisite with jewel as master will not sync data
- https://github.com/ceph/ceph/pull/18926
- 04:21 PM Bug #22083: rgw segfaults after running radosgw-admin data sync init
- 04:20 PM Bug #22083 (Pending Backport): rgw segfaults after running radosgw-admin data sync init
- 04:18 PM Bug #22083: rgw segfaults after running radosgw-admin data sync init
- Casey Bodley wrote:
> https://github.com/ceph/ceph/pull/18852
merged
- 04:20 PM Bug #22101 (Resolved): rgw_asio_client.cc compilation failure with boost version 1.64+
- 04:16 PM Bug #22101: rgw_asio_client.cc compilation failure with boost version 1.64+
- Casey Bodley wrote:
> https://github.com/ceph/ceph/pull/18866
merged
- 04:08 PM Bug #22124 (Resolved): rgw: user stats increased after bucket reshard
- after calling ...
- 03:39 PM Bug #22084: Swift object expiry incorrectly trims entries, leaving behind some of the objects to ...
- jewel backport pr: https://github.com/ceph/ceph/pull/18925
- 03:26 PM Bug #22122 (Fix Under Review): rgw: bucket index object not deleted after radosgw-admin bucket rm...
- http://tracker.ceph.com/issues/22122
- 12:50 PM Bug #22122 (Resolved): rgw: bucket index object not deleted after radosgw-admin bucket rm --purge...
- After execution of ...
- 03:25 PM Bug #19959 (Fix Under Review): luminous: rgw: s3 user stats not updated after radosgw-admin bucke...
- https://github.com/ceph/ceph/pull/18922
- 09:03 AM Bug #19959: luminous: rgw: s3 user stats not updated after radosgw-admin bucket rm --purge-objects
- reproducible with v12.2.1
user stats not updated with...
- 10:07 AM Bug #20201: radosgw refuses upload when Content-Type missing from POST policy
- (I am the original reporter) I confirm that the patch at https://github.com/ceph/ceph/pull/18658 fixes the bug for me...
- 09:50 AM Bug #22121 (Resolved): rgw: S3 interface: X-Amz-Copy-Source must be URL-encoded
- According to the AWS docs on COPY: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html it states about...
11/13/2017
- 05:36 PM Bug #22084 (Pending Backport): Swift object expiry incorrectly trims entries, leaving behind some...
- 04:08 PM Bug #21154: use-after-free from RGWRadosGetOmapKeysCR::~RGWRadosGetOmapKeysCR
- it looks like this is happening because the parent coroutine RGWDataSyncShardCR is passing pointers to its members (e...
- 09:34 AM Bug #22080 (Fix Under Review): radosgw-admin data sync run crashes
- https://github.com/ceph/ceph/pull/18898
- 03:58 AM Bug #13095 (In Progress): rgw: MDlog will not output correctly
11/10/2017
- 07:48 PM Bug #22083 (Fix Under Review): rgw segfaults after running radosgw-admin data sync init
- https://github.com/ceph/ceph/pull/18852
https://github.com/ceph/ceph/pull/18883
- 04:04 PM Bug #22084: Swift object expiry incorrectly trims entries, leaving behind some of the objects to ...
- Pavan Rallabhandi wrote:
> https://github.com/ceph/ceph/pull/18821
merged
- 03:02 PM Bug #21560: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
- That does make sense, Yehuda! Meanwhile please let me know if there's any other information I can pull to expedite this.
- 07:26 AM Backport #21919: luminous: rgw_file: full enumeration needs new readdir whence strategy, plus re...
- -https://github.com/ceph/ceph/pull/18872-
- 07:19 AM Backport #21949: luminous: rgw: null instance mtime incorrect when enable versioning
- https://github.com/ceph/ceph/pull/18870
- 07:11 AM Backport #22026: luminous: Policy parser may or may not dereference uninitialized boost::optional...
- https://github.com/ceph/ceph/pull/18868
- 07:09 AM Backport #22027: luminous: multisite: destination zone does not compress synced objects
- https://github.com/ceph/ceph/pull/18867
- 05:00 AM Bug #22101 (Fix Under Review): rgw_asio_client.cc compilation failure with boost version 1.64+
- https://github.com/ceph/ceph/pull/18866
- 04:45 AM Bug #22101 (Resolved): rgw_asio_client.cc compilation failure with boost version 1.64+
- The beast library changes its boost::beast::string_view typedef from boost::string_ref to boost::string_view if BOOST...
- 04:32 AM Backport #21631: luminous: remove region from "INSTALL CEPH OBJECT GATEWAY"
- https://github.com/ceph/ceph/pull/18865
11/09/2017
- 04:37 PM Bug #22084: Swift object expiry incorrectly trims entries, leaving behind some of the objects to ...
- 12:59 PM Backport #21972 (Resolved): luminous: rgw_file: objects have wrong accounted_size
- 06:58 AM Backport #21972: luminous: rgw_file: objects have wrong accounted_size
- https://github.com/ceph/ceph/pull/18839
- 12:39 PM Bug #22099: rgw: lifecycle process may block RGWRealmReloader::reload
- https://github.com/ceph/ceph/pull/18846
- 12:39 PM Bug #22099 (Resolved): rgw: lifecycle process may block RGWRealmReloader::reload
- lifecycle process may block RGWRealmReloader::reload, and during this time, rgw cannot process new requests.
- 09:49 AM Backport #21938 (Resolved): luminous: multisite: data sync status advances despite failure in RGW...
- 08:50 AM Bug #22094 (Closed): Lots of reads on default.rgw.usage pool
- Since I upgraded to Luminous a few weeks ago, I see a lot of read-activity on the default.rgw.usage pool. (See attach...
11/08/2017
- 04:59 PM Bug #22084: Swift object expiry incorrectly trims entries, leaving behind some of the objects to ...
- https://github.com/ceph/ceph/pull/18821
- 04:52 PM Bug #22084 (Fix Under Review): Swift object expiry incorrectly trims entries, leaving behind some...
- https://github.com/ceph/ceph/pull/18821
- 04:01 PM Bug #22084 (Resolved): Swift object expiry incorrectly trims entries, leaving behind some of the ...
- In cls_timeindex_list() though `to_index` has expired for a timespan, the marker is set for a subsequent index during...
- 04:00 PM Bug #22002: rgw: add cors header rule check in cors option request
- joke lee wrote:
> https://github.com/ceph/ceph/pull/18556 fix this
merged - 03:07 PM Bug #22083 (Resolved): rgw segfaults after running radosgw-admin data sync init
- ...
- 02:40 PM Feature #22081 (New): RGWRados should acquire/release its cache watchers in parallel
- By default, 8 rgw cache watchers get started/stopped every time you run radosgw/radosgw-admin. For simple admin comma...
- 01:00 PM Bug #22080: radosgw-admin data sync run crashes
- seeing in a Luminous-Luminous cluster as well
- 10:52 AM Bug #22080 (Resolved): radosgw-admin data sync run crashes
- Seen this on a jewel-master, luminous-secondary cluster. Maybe reproducible on an L-L cluster as well; haven't tried.
... - 10:05 AM Bug #22046 (Fix Under Review): Dynamic bucket indexing, resharding and tenants seems to be broken
- https://github.com/ceph/ceph/pull/18811
- 10:00 AM Bug #22060 (Closed): RGW: ERROR: failed to distribute cache
- As it turns out, an admin on this system created a Zabbix check which ran 'radosgw-admin usage show' every minute. Whil...
- 08:21 AM Bug #22072 (New): one object degraded cause all ceph rgw request hang
- I have a 20 node cluster (3 mons, 3 rgw, 14 osd) running on centos7.3 with XFS filesystem, ceph version 10.2.9 jewel
...
11/07/2017
- 09:44 PM Bug #22060: RGW: ERROR: failed to distribute cache
- Wido den Hollander wrote:
> All the watchers are from the 2 RGWs I have running.
Are you saying that your 'listwa... - 07:01 PM Bug #22060: RGW: ERROR: failed to distribute cache
- I saw 144 watchers on each object. All 8 notify objects have that many watchers. Is that normal?
I tried stopping ... - 06:01 PM Bug #22060: RGW: ERROR: failed to distribute cache
- Sounds like a leak in the watch/notify registration. How many watchers are there on the other notify objects?
- 01:18 PM Bug #22060: RGW: ERROR: failed to distribute cache
- Digging further I see that my *rgw1* sends out a notify:...
- 10:14 AM Bug #22060: RGW: ERROR: failed to distribute cache
- I lowered *client_notify_timeout* to 5 seconds and now I see:...
- 09:40 AM Bug #22060 (Closed): RGW: ERROR: failed to distribute cache
- As reported on the ceph-users list I see these messages in my RGW log running Luminous:...
- 04:10 PM Bug #22062 (Resolved): rgw: multisite with jewel as master will not sync data
- With a jewel rgw as the master zone, data doesn't sync as sync_from_all in period ends up as configured being false a...
11/06/2017
- 09:34 PM Bug #21607: rgw: s3 v4 auth fails teuthology s3-tests: test_object_header_acl_grants test_bucket_...
- jewel revert: http://tracker.ceph.com/issues/22028
- 05:39 PM Bug #21607: rgw: s3 v4 auth fails teuthology s3-tests: test_object_header_acl_grants test_bucket_...
- @Matt sorry for the typo, it's https://github.com/ceph/ceph/pull/18080.
And ack. Will backport the revert.
- 09:31 PM Backport #22028 (In Progress): jewel: boto3 v4 SignatureDoesNotMatch failure due to sorting of ss...
- 08:15 PM Bug #22002: rgw: add cors header rule check in cors option request
- test in https://github.com/ceph/s3-tests/pull/192 for backport as well
- 02:15 PM Backport #21851 (New): luminous: rgw: Missing error handling when gen_rand_alphanumeric is failing
- 01:50 PM Backport #21851 (In Progress): luminous: rgw: Missing error handling when gen_rand_alphanumeric i...
- 01:53 PM Backport #22016 (In Progress): luminous: RGWCrashError: RGW will crash when generating random buc...
- 12:35 PM Backport #22020 (In Progress): luminous: multisite: race between sync of bucket and bucket instan...
- 12:34 PM Backport #22021 (In Progress): luminous: rgw: modify s3 type subuser access permission fail
- 12:33 PM Backport #22024 (In Progress): luminous: RGWCrashError: RGW will crash if a putting lc config req...
- 12:23 PM Backport #22017 (In Progress): luminous: Segmentation fault when starting radosgw after reverting...
- 09:36 AM Bug #22046 (Resolved): Dynamic bucket indexing, resharding and tenants seems to be broken
- I see issues with resharding. rgw logging shows the following:
2017-10-20 15:17:30.018807 7fa1b219a700 -1 ERROR: fai...
11/03/2017
- 03:49 PM Backport #22028 (Resolved): jewel: boto3 v4 SignatureDoesNotMatch failure due to sorting of sse-k...
- https://github.com/ceph/ceph/pull/18772
- 03:49 PM Backport #22027 (Resolved): luminous: multisite: destination zone does not compress synced objects
- https://github.com/ceph/ceph/pull/18867
- 03:49 PM Backport #22026 (Resolved): luminous: Policy parser may or may not dereference uninitialized boos...
- https://github.com/ceph/ceph/pull/18868
- 03:49 PM Backport #22024 (Resolved): luminous: RGWCrashError: RGW will crash if a putting lc config reques...
- https://github.com/ceph/ceph/pull/18765
- 03:49 PM Backport #22022 (Rejected): jewel: rgw: modify s3 type subuser access permission fail
- 03:49 PM Backport #22021 (Resolved): luminous: rgw: modify s3 type subuser access permission fail
- https://github.com/ceph/ceph/pull/18766
- 03:49 PM Backport #22020 (Resolved): luminous: multisite: race between sync of bucket and bucket instance ...
- https://github.com/ceph/ceph/pull/18767
- 03:49 PM Backport #22018 (Resolved): jewel: Segmentation fault when starting radosgw after reverting .rgw....
- https://github.com/ceph/ceph/pull/20292
- 03:49 PM Backport #22017 (Resolved): luminous: Segmentation fault when starting radosgw after reverting .r...
- https://github.com/ceph/ceph/pull/18764
- 03:48 PM Backport #22016 (Resolved): luminous: RGWCrashError: RGW will crash when generating random bucket...
- https://github.com/ceph/ceph/pull/18765
- 03:48 PM Bug #21832 (Pending Backport): boto3 v4 SignatureDoesNotMatch failure due to sorting of sse-kms h...
- https://github.com/ceph/ceph/pull/18080 was merged by mistake, so we'll need to backport the revert after all.
- 03:30 PM Bug #22015 (Resolved): Civetweb reports bad response code.
- Civetweb does not log response codes like 200, 204, 403, 404, etc.; it always logs the value "1". An entry looks like:
2017-11...
- 02:19 PM Bug #21607: rgw: s3 v4 auth fails teuthology s3-tests: test_object_header_acl_grants test_bucket_...
- Kefu, if https://github.com/ceph/ceph/pull/18080 was merged into Jewel, I think it needs a revert.
Matt
- 02:08 PM Bug #21607: rgw: s3 v4 auth fails teuthology s3-tests: test_object_header_acl_grants test_bucket_...
- Kefu, https://github.com/ceph/ceph/pull/18165 is about xfs vs. btrfs in filestore?
Matt
- 01:39 PM Bug #21607: rgw: s3 v4 auth fails teuthology s3-tests: test_object_header_acl_grants test_bucket_...
- Ken, Nathan, I merged https://github.com/ceph/ceph/pull/18165 before I read this ticket. Shall we backport the revert...
- 02:11 PM Bug #22006 (Pending Backport): RGWCrashError: RGW will crash when generating random bucket name a...
- 02:09 PM Bug #22006: RGWCrashError: RGW will crash when generating random bucket name and object name duri...
- https://github.com/ceph/ceph/pull/18536
11/02/2017
- 10:25 PM Bug #22010 (New): Failures removing RGW buckets with --bypass-gc
- When trying to remove 6000 buckets that were used during a POC, I ran into the following couple of problems when attem...
- 05:50 PM Bug #21591: RGW multisite does not sync all objects
- ah, nevermind. Casey's last comment is probably correct.
- 05:48 PM Bug #21591: RGW multisite does not sync all objects
- Could be an issue with sync of resharded buckets. At the moment, resharded bucket sync on the secondary zone could lea...
- 05:44 PM Bug #21942 (Resolved): rgw_log (and rgw_file): don't use undefined/unset RGWEnv key/value pairs
- 05:43 PM Bug #21560: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
- @cbodley @vdb I'm not completely convinced that it's a cache coherency issue. Could still be, but could also be that ...
- 05:36 PM Bug #20201: radosgw refuses upload when Content-Type missing from POST policy
- 04:43 PM Bug #22006 (Resolved): RGWCrashError: RGW will crash when generating random bucket name and objec...
- The definition of gen_rand_alphanumeric is:
int gen_rand_alphanumeric(CephContext *cct, char *dest, int size);
cct-...
- 04:29 PM Bug #18939 (Fix Under Review): rgw: versioned object sync inconsistency
- 02:33 PM Bug #21996 (Pending Backport): Segmentation fault when starting radosgw after reverting .rgw.root
- 09:44 AM Bug #21615: radosgw-admin zonegroup get and zone get should return defaults when there is no realm
- https://github.com/ceph/ceph/pull/18667
- 09:17 AM Backport #21938 (In Progress): luminous: multisite: data sync status advances despite failure in ...
- 05:01 AM Bug #22002: rgw: add cors header rule check in cors option request
- https://github.com/ceph/ceph/pull/18556 fixes this
- 05:01 AM Bug #22002 (Resolved): rgw: add cors header rule check in cors option request
- hi,
I set CORS on bucket test1 as follows:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http...
- 04:30 AM Bug #22001: multisite: dead lock in RGWSyncTraceManager::finish_node
- I did not see the necessity to preserve the parent, so just removing it fixes the problem.
Fix in https://github.com...
- 04:22 AM Bug #22001 (New): multisite: dead lock in RGWSyncTraceManager::finish_node
- complete_nodes is a circular_buffer.
RGWSyncTraceManager::finish_node() will push back to complete_nodes, which may ...
- 02:07 AM Bug #21620: radosgw-admin reshard process : unrecognized arg process
- Duplicate of: http://tracker.ceph.com/issues/21723
and has been fixed by https://github.com/ceph/ceph/pull/18175
...