Activity
From 05/25/2018 to 06/23/2018
06/23/2018
06/22/2018
- 03:54 PM Backport #24633 (Resolved): mimic: rgw performance regression for luminous 12.2.4
- https://github.com/ceph/ceph/pull/22929
- 03:54 PM Backport #24632 (Resolved): luminous: rgw performance regression for luminous 12.2.4
- https://github.com/ceph/ceph/pull/22930
- 03:54 PM Backport #24631 (Resolved): mimic: cls_bucket_list fails causes cascading osd crashes
- https://github.com/ceph/ceph/pull/22927
- 03:54 PM Backport #24630 (Resolved): luminous: cls_bucket_list fails causes cascading osd crashes
- https://github.com/ceph/ceph/pull/24391
- 03:54 PM Backport #24629 (Resolved): mimic: rgw: fail to recover index from crash
- https://github.com/ceph/ceph/pull/23118
- 03:53 PM Backport #24628 (Resolved): luminous: rgw: fail to recover index from crash
- https://github.com/ceph/ceph/pull/23130
- 03:53 PM Backport #24627 (Rejected): jewel: rgw: fail to recover index from crash
- 01:50 PM Bug #24280 (Pending Backport): rgw: fail to recover index from crash
- 01:47 PM Bug #23379 (Pending Backport): rgw performance regression for luminous 12.2.4
- 01:45 PM Bug #24117 (Pending Backport): cls_bucket_list fails causes cascading osd crashes
- 08:41 AM Backport #24619 (Resolved): mimic: multisite: RGWSyncTraceNode released twice and crashed in reload
- https://github.com/ceph/ceph/pull/22926
06/21/2018
- 07:52 PM Bug #24432 (Pending Backport): multisite: RGWSyncTraceNode released twice and crashed in reload
- 06:53 PM Bug #24565: rgw: log usage to actual user
- Please see my comment on the bug; I don't think this one is correct.
- 06:49 PM Bug #24566: rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine::state_send_rest...
- 06:43 PM Bug #24572 (Fix Under Review): Lifecycle rules number on one bucket should be limited.
- https://github.com/ceph/ceph/pull/22623
- 06:42 PM Bug #24589: rgw: meta and data notify thread miss stop cr manager
- 06:41 PM Bug #24590: rgw: index complete miss zones_trace set
- 06:39 PM Bug #24592: "radosgw-admin objects expire" always returns ok even if the process fails.
- 06:38 PM Bug #24594: rgw: RGWPutObjProcessor_Aio::throttle_data not get actual write size
- 06:36 PM Bug #24595 (Triaged): rgw: default quota not set in radosgw for Openstack users
- 07:26 AM Bug #24595: rgw: default quota not set in radosgw for Openstack users
- This issue can be related to the same problem: https://tracker.ceph.com/issues/24532
- 05:38 PM Bug #24505 (Triaged): radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- 05:34 PM Bug #24551 (Triaged): RGW Dynamic bucket index resharding keeps resharding all buckets
- 03:42 PM Bug #24532: rgw user max buckets not working
- I also have the problem with default quota for keystone users (see https://tracker.ceph.com/issues/24595)
- 03:38 PM Bug #24532: rgw user max buckets not working
- I put my config in the global section and it works. Great.
But I need openstack user which sync from keystone is applie...
- 10:23 AM Bug #24532: rgw user max buckets not working
- @hoannv hoannv
As far as I remember I had the same issue having that attribute in the radosgw section
Moving tha...
- 10:18 AM Bug #24532: rgw user max buckets not working
- @Massimo Sgaravatto
I put "rgw user max buckets" setting in radosgw section...
- 07:30 AM Bug #24532: rgw user max buckets not working
- I also see problems with "rgw user default quota max size" for OpenStack users. See: https://tracker.ceph.com/issues/...
- 07:07 AM Bug #24532: rgw user max buckets not working
- We have seen the same problem with:...
- 03:38 PM Bug #22632: radosgw - s3 keystone integration doesn't work while using civetweb as frontend
- I am affected by the same problem (or at least a problem with the very same symptoms)
I am also running Ocata in t...
- 01:25 PM Feature #24335: Get the user metadata of the user used to sign the request
- Can be marked as resolved.
- 01:25 PM Feature #24335: Get the user metadata of the user used to sign the request
- Done by https://github.com/ceph/ceph/pull/22390. Used by https://github.com/ceph/ceph/pull/22416.
- 01:00 PM Bug #24603: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- fix pr: https://github.com/ceph/ceph/pull/22660
I'm not 100% sure about this fix. This problem occurs in our produ...
- 12:41 PM Bug #24603 (Resolved): rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- with rgw_debug=20, log shows as below( this is original log without any filter):
~~~~~~~~~rgw log begin~~~~~~~~~~~~~...
06/20/2018
- 09:57 PM Bug #23196 (Resolved): rgw: fix 'copy part' without 'x-amz-copy-source-range' when compression en...
- 09:57 PM Backport #24298 (Resolved): luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' when...
- 04:44 PM Backport #24298: luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' when compressio...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22438
merged
- 09:55 PM Backport #24302 (Resolved): luminous: rgw: (jewel) can't delete swift acls with swift command.
- 04:42 PM Backport #24302: luminous: rgw: (jewel) can't delete swift acls with swift command.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22465
merged
- 09:50 PM Backport #24314 (Resolved): luminous: multisite test failures in test_versioned_object_incrementa...
- 04:42 PM Backport #24314: luminous: multisite test failures in test_versioned_object_incremental_sync
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22541
merged
- 09:50 PM Bug #19632 (Resolved): Bucket lifecycles stick around after buckets are deleted
- 09:49 PM Backport #24477 (Resolved): luminous: Bucket lifecycles stick around after buckets are deleted
- 04:41 PM Backport #24477: luminous: Bucket lifecycles stick around after buckets are deleted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22551
merged
- 09:46 PM Bug #22897: rgw: (jewel) can't delete swift acls with swift command.
- Deleting mimic backport issue because, according to @PrashantD, the commits in question are already in mimic:
The ...
- 09:44 PM Backport #20587 (Need More Info): jewel: radosgw/swift emulate split read/write acls?
- 09:41 PM Backport #24303 (Need More Info): jewel: rgw: (jewel) can't delete swift acls with swift command.
- 01:39 PM Bug #23651: Dynamic bucket indexing, resharding and tenants still seems to be broken
- Can someone tell me how to clean up the index? I have way too many objects now..
- 12:30 PM Backport #24385 (In Progress): mimic: objects in cache never refresh after rgw_cache_expiry_inter...
- https://github.com/ceph/ceph/pull/22643
- 07:52 AM Bug #24595 (Resolved): rgw: default quota not set in radosgw for Openstack users
- I have a luminous ceph cluster where I have just deployed radosgw
I tried relying on the "rgw user default quota m...
- 07:43 AM Bug #24594: rgw: RGWPutObjProcessor_Aio::throttle_data not get actual write size
- https://github.com/ceph/ceph/pull/22638
- 07:36 AM Bug #24594 (New): rgw: RGWPutObjProcessor_Aio::throttle_data not get actual write size
- If handle_data does not get enough data for one write, we put it in pending_data_bl, so throttle_data will only throttle ...
- 04:36 AM Bug #24592 (Fix Under Review): "radosgw-admin objects expire" always returns ok even if the proce...
- 03:50 AM Bug #24592: "radosgw-admin objects expire" always returns ok even if the process fails.
- https://github.com/ceph/ceph/pull/22635
- 03:49 AM Bug #24592: "radosgw-admin objects expire" always returns ok even if the process fails.
- rgw: "radosgw-admin objects expire" always returns ok even if the process fails.
- 03:45 AM Bug #24592 (Resolved): "radosgw-admin objects expire" always returns ok even if the process fails.
- "radosgw-admin objects expire" always returns ok, even if the objects expire process fails. We should print error so ...
- 03:29 AM Bug #24590 (Fix Under Review): rgw: index complete miss zones_trace set
- 02:32 AM Bug #24590: rgw: index complete miss zones_trace set
- https://github.com/ceph/ceph/pull/22632
- 02:24 AM Bug #24590 (Resolved): rgw: index complete miss zones_trace set
- index complete defaults to a nullptr zones_trace; we should init one, otherwise it will cause redundant data sync....
- 03:28 AM Bug #24563 (Fix Under Review): rgw: copy_object and multipart_copy response header etag format no...
- 03:27 AM Bug #24589 (Fix Under Review): rgw: meta and data notify thread miss stop cr manager
- 01:46 AM Bug #24589: rgw: meta and data notify thread miss stop cr manager
- https://github.com/ceph/ceph/pull/22631
- 01:45 AM Bug #24589 (Resolved): rgw: meta and data notify thread miss stop cr manager
- rgw restart or reload will get stuck in RGWCompletionManager::get_next()
- 03:25 AM Bug #24565 (Fix Under Review): rgw: log usage to actual user
- 03:07 AM Bug #24562 (Fix Under Review): Tail tag should be different when copy object data
- 01:17 AM Bug #24568: Delete marker generated by lifecycle has no owner
- According to S3, the delete marker should have owner attribute.
- 01:16 AM Bug #24568 (Fix Under Review): Delete marker generated by lifecycle has no owner
- 01:16 AM Bug #24568 (In Progress): Delete marker generated by lifecycle has no owner
06/19/2018
- 11:11 PM Bug #23531: s3a/2.8.0 fs.contract failure Seek/Rename/ComplexDirActions/RecursiveRootListing/Empt...
- I think the related link was for different reason, this is still a genuine issue with s3a tests.
- 09:18 AM Bug #24572 (Resolved): Lifecycle rules number on one bucket should be limited.
- In S3, one bucket may have no more than 1000 lifecycle rules; an error is returned when more than 1000 rules are set.
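A minimal sketch of such a cap check (illustrative names only, not the actual RGW implementation):

```python
# Illustrative only: S3 rejects lifecycle configurations with more than
# 1000 rules; this bug asks RGW to enforce the same limit.
# MAX_LIFECYCLE_RULES and validate_lifecycle_rules are hypothetical names.
MAX_LIFECYCLE_RULES = 1000

def validate_lifecycle_rules(rules):
    """Raise if the rule list exceeds the S3-compatible cap."""
    if len(rules) > MAX_LIFECYCLE_RULES:
        raise ValueError(
            "too many lifecycle rules: %d (limit %d)"
            % (len(rules), MAX_LIFECYCLE_RULES))
    return rules

# 1000 rules is still accepted; 1001 would raise ValueError.
validate_lifecycle_rules([{"ID": "rule-%d" % i} for i in range(1000)])
```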
- 06:53 AM Bug #24532: rgw user max buckets not working
- I restarted my radosgw admin.
I tested with rados gateway admin api, all config is working....
- 06:23 AM Bug #24532: rgw user max buckets not working
- Maybe you should restart the radosgw. It works in my environment.
- 06:16 AM Bug #24568: Delete marker generated by lifecycle has no owner
- https://github.com/ceph/ceph/pull/22619
- 06:09 AM Bug #24568 (Resolved): Delete marker generated by lifecycle has no owner
- The delete marker generated by object expiration has no owner associated with it.
- 05:35 AM Bug #24566: rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine::state_send_rest...
- https://github.com/ceph/ceph/pull/22617
- 05:34 AM Bug #24566 (Resolved): rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine::stat...
- 05:24 AM Bug #24565: rgw: log usage to actual user
- https://github.com/ceph/ceph/pull/22616
- 05:22 AM Bug #24565 (Fix Under Review): rgw: log usage to actual user
- Operations on another user's bucket were all logged to the owner's usage; it's not fair.
- 04:44 AM Bug #24563: rgw: copy_object and multipart_copy response header etag format not correct
- https://github.com/ceph/ceph/pull/22614
- 03:57 AM Bug #24563 (Fix Under Review): rgw: copy_object and multipart_copy response header etag format no...
- The response header ETag should be wrapped in quotation marks
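A tiny illustrative helper showing the expected quoting (a hypothetical function, not RGW code):

```python
# Illustrative only: per the S3 API, the ETag response header is a quoted
# string, e.g. '"9b2cf535f27731c974343645a3985328"'. This bug reports that
# copy_object/multipart_copy responses returned it unquoted.
def quote_etag(etag):
    """Wrap an ETag value in double quotes unless it is already quoted."""
    if len(etag) >= 2 and etag.startswith('"') and etag.endswith('"'):
        return etag
    return '"%s"' % etag

print(quote_etag("9b2cf535f27731c974343645a3985328"))
```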
- 03:21 AM Bug #24562: Tail tag should be different when copy object data
- https://github.com/ceph/ceph/pull/22613
- 03:17 AM Bug #24562 (Fix Under Review): Tail tag should be different when copy object data
- Copying object data should generate new tail tag for the object.
06/18/2018
- 08:55 PM Bug #24551: RGW Dynamic bucket index resharding keeps resharding all buckets
- I also seem to have a case on 12.2.5 where buckets are in an endless attempt to reshard - If there is any useful data...
- 04:16 PM Bug #24551 (Resolved): RGW Dynamic bucket index resharding keeps resharding all buckets
- We're into some problems with dynamic bucket index resharding. After an upgrade from Ceph 12.2.2 to 12.2.5, which fix...
- 07:05 PM Bug #24364: rgw: s3cmd sync fails
- I can't reproduce using a different folder from my workstation but I do still get the same error when trying to sync ...
- 03:16 AM Backport #24352 (In Progress): mimic: rgw: request with range defined as "bytes=0--1" returns 41...
- https://github.com/ceph/ceph/pull/22590
06/16/2018
- 01:36 PM Backport #24547 (Resolved): mimic: rgw: change order of authentication back to local, remote
- https://github.com/ceph/ceph/pull/22842
- 01:36 PM Backport #24546 (Resolved): luminous: rgw: change order of authentication back to local, remote
- https://github.com/ceph/ceph/pull/23501/
06/15/2018
- 07:57 PM Bug #23089 (Pending Backport): rgw: change order of authentication back to local, remote
- https://github.com/ceph/ceph/pull/21494 merged
- 04:16 PM Bug #24544: change default rgw_thread_pool_size to 512
- https://github.com/ceph/ceph/pull/22581
- 04:08 PM Bug #24544 (Resolved): change default rgw_thread_pool_size to 512
- 03:29 PM Bug #24532 (New): rgw user max buckets not working
- Hi all.
Currently, I want to set a default max-buckets value per user.
I see this config in radosgw....
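Based on the workaround reported in later comments on this bug, a hedged ceph.conf sketch: commenters said the option only took effect once placed under [global] (rather than the radosgw section) and after restarting radosgw. The value below is just an example.

```ini
# Sketch of the reported workaround; 5 is an illustrative value.
[global]
rgw_user_max_buckets = 5
```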
06/14/2018
- 12:32 PM Bug #18260: When uploading a large number of objects constantly, the objects number of bucket is ...
- Not yet, was busy with other issues,
I will be resuming work on this bug now.
- 09:39 AM Bug #24523 (New): rgw-multisite: metadata is behind on # shards
- radosgw-admin sync status return this:
realm a4cf2cc1-ce25-49d2-b5de-ffd491e35f7f (Sensetime)
zoneg...
06/13/2018
- 04:31 PM Backport #23683 (Resolved): luminous: radosgw-admin should not use metadata cache when not needed
- 04:30 PM Bug #22202 (Resolved): rgw_statfs should report the correct stats
- 04:30 PM Backport #23231 (Resolved): luminous: rgw_statfs should report the correct stats
- 04:29 PM Backport #23906 (Resolved): luminous: libcurl ignores headers with empty value, leading to signat...
- 04:28 PM Backport #24120 (Resolved): luminous: rgw: 403 error when creating an object with metadata contai...
- 03:19 PM Backport #24252 (Resolved): luminous: Admin OPS Api overwrites email when user is modified
- 03:05 PM Backport #24393 (Resolved): luminous: rgw: making implicit_tenants backwards compatible
- 03:04 PM Backport #24477 (In Progress): luminous: Bucket lifecycles stick around after buckets are deleted
- 02:57 PM Backport #24514 (Resolved): luminous: "radosgw-admin zonegroup set" requires realm
- https://github.com/ceph/ceph/pull/22767
- 01:12 PM Backport #24384 (In Progress): luminous: objects in cache never refresh after rgw_cache_expiry_in...
- 12:57 PM Bug #23146 (Resolved): rgw: download object might fail for local invariable uninitialized
- 12:56 PM Backport #24299 (Resolved): luminous: rgw: download object might fail for local invariable uninit...
- 04:18 AM Backport #24314 (In Progress): luminous: multisite test failures in test_versioned_object_increme...
- https://github.com/ceph/ceph/pull/22541
- 02:24 AM Bug #21583 (Pending Backport): "radosgw-admin zonegroup set" requires realm
06/12/2018
- 08:48 PM Documentation #24508 (Resolved): How to configure user- or bucket-specific data placement
- There is some minimal documentation about the radosgw-admin 'zone/zonegroup placement' commands, and some documentati...
- 05:19 PM Feature #24507 (Fix Under Review): [rfe] rgw: relaxed region constraint enforcement
- https://github.com/ceph/ceph/pull/22533
- 03:32 PM Feature #24507 (Fix Under Review): [rfe] rgw: relaxed region constraint enforcement
- Add a new "rgw_relaxed_region_enforcement" Boolean option to enable relaxed (non-enforcement of region constraint) be...
- 04:05 PM rgw-testing Bug #24483: add unit test for cls bi list command
- 01:54 PM Bug #23199: radosgw coredump RGWGC::process
- -https://github.com/ceph/ceph/pull/22520-
- 01:40 PM Bug #24505 (Resolved): radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- On a Ceph cluster started with vstart:...
- 09:35 AM Bug #24364: rgw: s3cmd sync fails
- Is this still reproducible in the setup? I tried reproducing this locally, but I'm not successful with that
- 09:27 AM Support #24457: large omap object
- i now have "11 large omap objects" and no clue what to do about it
- 06:59 AM Feature #24493 (Fix Under Review): rgw does not implement list_object_v2 in S3
- List_object_v2 (get_bucket_v2) differs from list_object in some parameters and returned values.
In most case,it c...
06/11/2018
- 09:14 AM Support #24457: large omap object
- i tried a manual resharding
using this command: radosgw-admin bucket reshard --bucket bucket_name --num-shards 512...
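For context on choosing a shard count like 512, a back-of-the-envelope sketch assuming the usual guideline of roughly 100k objects per index shard (rgw_max_objs_per_shard defaults to 100000); this is not RGW code:

```python
# Illustrative sizing rule: keep each bucket index shard under ~100k
# objects, which is RGW's dynamic-resharding default threshold.
OBJS_PER_SHARD = 100_000

def recommended_shards(num_objects):
    """Smallest shard count keeping every shard under the cap (ceil division)."""
    return max(1, -(-num_objects // OBJS_PER_SHARD))

print(recommended_shards(51_200_000))  # 512 shards for ~51.2M objects
```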
06/10/2018
- 05:11 AM rgw-testing Bug #24483: add unit test for cls bi list command
- https://github.com/ceph/ceph/pull/21772
- 05:10 AM rgw-testing Bug #24483 (Resolved): add unit test for cls bi list command
- Focusing on the truncated flag
06/09/2018
- 11:19 AM Backport #24477 (Resolved): luminous: Bucket lifecycles stick around after buckets are deleted
- https://github.com/ceph/ceph/pull/22551
06/08/2018
- 02:19 PM Support #24457: large omap object
- Stephan Schultchen wrote:
> how can i get this information?
google helped (https://arvimal.blog/2016/06/30/shardi...
- 02:13 PM Support #24457: large omap object
- how can i get this information?
- 02:02 PM Support #24457: large omap object
- How many shards does the bucket index have currently?
Matt
- 01:37 PM Support #24457 (New): large omap object
- i have a ceph cluster that is exclusively serving radosgw/s3.
it only has one bucket with many objects in it
af...
- 05:51 AM Backport #24313 (In Progress): mimic: multisite test failures in test_versioned_object_incrementa...
- https://github.com/ceph/ceph/pull/22466
- 05:38 AM Backport #24302 (In Progress): luminous: rgw: (jewel) can't delete swift acls with swift command.
- https://github.com/ceph/ceph/pull/22465
- 02:05 AM Feature #24443: bucket tagging not working properly
- You should change this issue to feature and backport to mimic,nautilus.
Bucket policy had s3:GetBucketTagging and ...
06/07/2018
- 10:17 PM Backport #23683: luminous: radosgw-admin should not use metadata cache when not needed
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21437
merged
- 10:16 PM Backport #23231: luminous: rgw_statfs should report the correct stats
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21724
merged
Reviewed-by: Matt Benjamin <mbenjami@redhat....
- 10:16 PM Backport #23906: luminous: libcurl ignores headers with empty value, leading to signature mismatches
- Casey Bodley wrote:
> https://github.com/ceph/ceph/pull/21738
merged
- 10:15 PM Backport #24120: luminous: rgw: 403 error when creating an object with metadata containing sequen...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22177
merged
- 10:14 PM Backport #24252: luminous: Admin OPS Api overwrites email when user is modified
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22352
merged
- 10:11 PM Backport #24393: luminous: rgw: making implicit_tenants backwards compatible
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22363
merged
- 06:53 PM Bug #24401: RadosGW credential locks itself during operation
- RGW doesn't lock requests by the access key. Maybe it's your java client that does it? Try running separate clients, ...
- 06:47 PM Bug #24364 (Triaged): rgw: s3cmd sync fails
- 03:08 PM Bug #24364: rgw: s3cmd sync fails
- ...
- 03:05 PM Bug #24364: rgw: s3cmd sync fails
- can you share teh ceph.conf and s3cmd.cfg
- 06:45 PM Bug #23939: etag/hash and content_type fields were concatenated with \u0000
- It should be fixed in Mimic. However, I'm not completely sure that objects that were created prior will not have the ...
- 06:44 PM Feature #24443: bucket tagging not working properly
- We don't have support for bucket tagging yet, only object tagging support exists; though it wouldn't be that difficult...
- 07:56 AM Feature #24443 (Resolved): bucket tagging not working properly
- Hi all.
I am testing bucket tagging.
This is my script test... - 06:39 PM Bug #19632 (Pending Backport): Bucket lifecycles stick around after buckets are deleted
- 06:39 PM Bug #19632: Bucket lifecycles stick around after buckets are deleted
- https://github.com/ceph/ceph/pull/10460
- 06:37 PM Bug #23953: rgw: bucket index delete cleanup
- I had some thoughts around this one. It seems to me that we could have some entity (e.g., a special user) on which we...
- 06:24 PM Bug #18260: When uploading a large number of objects constantly, the objects number of bucket is ...
- Mark, is there any new information on this one?
- 06:19 PM Bug #23531: s3a/2.8.0 fs.contract failure Seek/Rename/ComplexDirActions/RecursiveRootListing/Empt...
- Casey, Vasu, can this one be closed?
- 06:18 PM Bug #23454: rgw/s3atests failure: fs contract testBlockReadZeroByteFile/testSeekZeroByteFile/test...
- Casey / Vasu, can this one be closed?
- 06:15 PM Bug #23099: REST admin metadata API paging failure bucket & bucket.instance: InvalidArgument
- Matt, any progress on this?
- 06:12 PM Bug #23661: RGWAsyncGetSystemObj failed assertion on shutdown/realm reload
- This looks like the teuthology run preceded the cloud sync merge. Unless it was testing the cloud sync work, I think ...
- 06:02 PM Bug #23089: rgw: change order of authentication back to local, remote
- 06:02 PM Bug #23824 (Duplicate): Quota calculation is substantially incorrect after user removes a non-emp...
- Marking as duplicate of http://tracker.ceph.com/issues/22124
Feel free to reopen if there's more info.
- 06:00 PM Bug #23659: tempest tests failing with pysaml2 version conflict
- @cbodley anything new?
- 05:58 PM Bug #23470: presigned URL for PUT with metadata fails: SignatureDoesNotMatch
- 05:57 PM Bug #22985 (Can't reproduce): Buckets missing from rgw_read_user_buckets output
- Feel free to reopen if it happens again and/or you find more info.
- 05:50 PM Bug #24117: cls_bucket_list fails causes cascading osd crashes
06/06/2018
- 06:24 PM Backport #24386: jewel: rgw: objects in cache never refresh after rgw_cache_expiry_interval
- this duplicates
https://github.com/ceph/ceph/pull/22377
- 05:57 PM Backport #24386 (In Progress): jewel: rgw: objects in cache never refresh after rgw_cache_expiry_...
- -https://github.com/ceph/ceph/pull/22442-
- 06:22 PM Bug #24346: objects in cache never refresh after rgw_cache_expiry_interval
- I have already created (last week):
https://github.com/ceph/ceph/pull/22377 (jewel)
https://github.com/ceph/ceph/pu...
- 05:58 PM Bug #24346: objects in cache never refresh after rgw_cache_expiry_interval
- Nathan Cutler wrote:
> Pavan Rallabhandi wrote:
> > It would be great if this can make it to Jewel 10.2.11, thanks!...
- 05:06 PM Bug #24432: multisite: RGWSyncTraceNode released twice and crashed in reload
- 07:04 AM Bug #24432: multisite: RGWSyncTraceNode released twice and crashed in reload
- https://github.com/ceph/ceph/pull/22432
- 06:57 AM Bug #24432 (Resolved): multisite: RGWSyncTraceNode released twice and crashed in reload
- backtrace
#0 0x00007fea39590fcb in raise (sig=sig@entry=11) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:37
#1 ...
- 05:04 PM Bug #24117 (Fix Under Review): cls_bucket_list fails causes cascading osd crashes
- 05:04 PM Bug #24117: cls_bucket_list fails causes cascading osd crashes
- This should fix the crash issue:
https://github.com/ceph/ceph/pull/22440
- 02:55 PM Backport #24299 (Need More Info): luminous: rgw: download object might fail for local invariable ...
- Luminous already have changes related to tracker #23146 :
afe9d995700 (Fang Yuxiang 2017-05-27 15:20:30 +0800 79...
- 02:50 PM Backport #24298 (In Progress): luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' w...
- https://github.com/ceph/ceph/pull/22438
06/05/2018
- 01:27 PM Bug #23993: RGWCopyObj::parse_copy_location does not take vhost styple url into to consideration
- specifying source as something like bucket.s3.amazonaws.com seems to fail in aws as well for me
- 12:54 PM Bug #23993: RGWCopyObj::parse_copy_location does not take vhost styple url into to consideration
- What is aws' behaviour in this case? Most clients seem to not permit using vhost style bucket names as a copy source?
- 09:10 AM Bug #23379 (In Progress): rgw performance regression for luminous 12.2.4
- Fix PR: https://github.com/ceph/ceph/pull/22410
06/04/2018
- 08:50 AM Bug #24401 (Need More Info): RadosGW credential locks itself during operation
- I have been seeing a behavior ever since I use Ceph, which at first I believed to be due to a misuse on my side, but ...
- 02:05 AM Backport #24253 (In Progress): mimic: Admin OPS Api overwrites email when user is modified
- http://tracker.ceph.com/issues/24253
06/03/2018
06/02/2018
- 09:07 AM Backport #24393 (In Progress): luminous: rgw: making implicit_tenants backwards compatible
- 09:07 AM Backport #24393 (Resolved): luminous: rgw: making implicit_tenants backwards compatible
- https://github.com/ceph/ceph/pull/22363
- 08:55 AM Bug #24348: rgw (luminous) making implicit_tenants backwards compatible.
- I've made a PR against master.
https://github.com/ceph/ceph/pull/22378
exactly the same logic as the original PR I ...
06/01/2018
- 07:04 PM Bug #24346: objects in cache never refresh after rgw_cache_expiry_interval
- Pavan Rallabhandi wrote:
> It would be great if this can make it to Jewel 10.2.11, thanks!
Do you want to take #2...
- 05:10 PM Bug #24346: objects in cache never refresh after rgw_cache_expiry_interval
- It would be great if this can make it to Jewel 10.2.11, thanks!
- 12:27 PM Bug #24346 (Pending Backport): objects in cache never refresh after rgw_cache_expiry_interval
- jewel added to backports list as the cache expiration change is present
- 05:45 PM Backport #24386 (Rejected): jewel: rgw: objects in cache never refresh after rgw_cache_expiry_int...
- https://github.com/ceph/ceph/pull/22377
- 05:44 PM Backport #24385 (Resolved): mimic: objects in cache never refresh after rgw_cache_expiry_interval
- https://github.com/ceph/ceph/pull/22643
- 05:44 PM Backport #24384 (Resolved): luminous: objects in cache never refresh after rgw_cache_expiry_inter...
- https://github.com/ceph/ceph/pull/22369
- 11:52 AM Bug #24348 (Fix Under Review): rgw (luminous) making implicit_tenants backwards compatible.
- 07:22 AM Bug #24348: rgw (luminous) making implicit_tenants backwards compatible.
- I've made a PR that I beleive will address this.
https://github.com/ceph/ceph/pull/22363
this is for luminous: I'll...
- 01:28 AM Backport #24252 (In Progress): luminous: Admin OPS Api overwrites email when user is modified
- https://github.com/ceph/ceph/pull/22352
05/31/2018
- 06:40 PM Bug #23939: etag/hash and content_type fields were concatenated with \u0000
- Matt Benjamin wrote:
> yehuda thinks may be fixed in mimic
Could you clarify what is meant here please? Should we...
- 06:24 PM Bug #23939 (Triaged): etag/hash and content_type fields were concatenated with \u0000
- yehuda thinks may be fixed in mimic
- 06:35 PM Bug #24367 (Fix Under Review): multisite: object metadata operations are skipped by sync
- https://github.com/ceph/ceph/pull/22347
- 06:22 PM Bug #24367 (Resolved): multisite: object metadata operations are skipped by sync
- operations like PutACL that only modify object metadata do not generate a LINK_OLH entry in the bucket index log. whe...
- 06:23 PM Bug #23884 (Triaged): rgw: orphans find should avoid objects in indexless buckets
- discussed in bug scrub; this ticket is a placeholder for actually verifying orphans correctly when bucket is indexle...
- 05:52 PM Bug #24117 (Triaged): cls_bucket_list fails causes cascading osd crashes
- 05:52 PM Bug #24117: cls_bucket_list fails causes cascading osd crashes
- Need to remove assertions from the objclass code.
- 03:07 PM Bug #24364 (Resolved): rgw: s3cmd sync fails
- I'm trying to sync download.ceph.com packages to the Sepia Long Running Cluster and it's failing after upgrading to m...
- 10:55 AM Bug #23092: RGW leaking/orphan data with Jewel
- Hi Orit/Matt,
I am able to reproduce a similar issue in 12.2.5. Here are the steps for the same.
*TL;DR* : This...
- 07:38 AM Backport #24358 (Resolved): luminous: SSL support for beast frontend
- https://github.com/ceph/ceph/pull/24621
- 07:37 AM Backport #24354 (Rejected): jewel: rgw: request with range defined as "bytes=0--1" returns 416 I...
- https://github.com/ceph/ceph/pull/22300
- 07:37 AM Backport #24353 (Resolved): luminous: rgw: request with range defined as "bytes=0--1" returns 41...
- https://github.com/ceph/ceph/pull/22302
- 07:37 AM Backport #24352 (Resolved): mimic: rgw: request with range defined as "bytes=0--1" returns 416 I...
- https://github.com/ceph/ceph/pull/22590
- 07:31 AM Bug #23489: [rgw] civetweb behind haproxy doesn't work with absolute URI
- Yes, you can see in issue description that rgw_dns_name corresponded to the host part of client url.
It's easy to ...
05/30/2018
- 08:32 PM Bug #24348 (Resolved): rgw (luminous) making implicit_tenants backwards compatible.
- In jewel, "rgw keystone implicit tenants" only applied to swift. In luminous(,+), it applies to s3 also. Sites that...
- 01:53 PM Bug #24346: objects in cache never refresh after rgw_cache_expiry_interval
- Matt Benjamin wrote:
> I'm unclear why there is any trace of already-expired items. I guess the intrusive cache cha...
- 01:47 PM Bug #24346 (Fix Under Review): objects in cache never refresh after rgw_cache_expiry_interval
- https://github.com/ceph/ceph/pull/22324
- 01:40 PM Bug #24346: objects in cache never refresh after rgw_cache_expiry_interval
- I'm unclear why there is any trace of already-expired items. I guess the intrusive cache change doesn't have this is...
- 01:28 PM Bug #24346 (Resolved): objects in cache never refresh after rgw_cache_expiry_interval
- ObjectCacheInfo::time_added is only initialized on first insert. so once an entry reaches its rgw_cache_expiry_interv...
- 01:00 PM Bug #24317: rgw: request with range defined as "bytes=0--1" returns 416 InvalidRange
- Here's a trivial S3 test for the invalid range behavior when ignore invalid range is set:
@@attr(resource='object'...
- 08:39 AM Bug #23379: rgw performance regression for luminous 12.2.4
- Mark Kogan wrote:
> Following internal discussion and verification on the test system
> While we are debugging the ...
- 08:34 AM Bug #23379: rgw performance regression for luminous 12.2.4
- Following internal discussion and verification on the test system
While we are debugging the issue its currently rec...
05/29/2018
- 04:47 PM Feature #22832 (Pending Backport): SSL support for beast frontend
- 04:47 PM Feature #22832: SSL support for beast frontend
- backport also needs https://github.com/ceph/ceph/pull/21380
- 02:58 PM Bug #23379: rgw performance regression for luminous 12.2.4
- I was able to reproduce this and traced it back to the "rgw_cache_expiry_interval" parameter, the default value is 90...
- 02:26 PM Bug #24336 (Fix Under Review): rgw-multisite: Segmentation fault when use different rgw_data_log_...
- 10:38 AM Bug #24336: rgw-multisite: Segmentation fault when use different rgw_data_log_max_shards among zones
- Fix pr: https://github.com/ceph/ceph/pull/22108
- 10:36 AM Bug #24336 (Fix Under Review): rgw-multisite: Segmentation fault when use different rgw_data_log_...
- RGW will core when using different rgw_data_log_max_shards config in different zones within the same zonegroup.
log ...
- 02:02 PM Bug #24317 (Pending Backport): rgw: request with range defined as "bytes=0--1" returns 416 Inval...
- 11:31 AM Bug #23322 (Resolved): radosgw-admin user stats --sync-stats without a user will create an empty ...
- 11:31 AM Backport #23721 (Resolved): jewel: radosgw-admin user stats --sync-stats without a user will crea...
- 11:30 AM Bug #23335 (Resolved): radosgw-admin: add an option to reset user stats
- 11:30 AM Backport #23338 (Resolved): jewel: radosgw-admin: add an option to reset user stats
- 11:30 AM Bug #22248 (Resolved): system user can't delete bucket completely
- 11:30 AM Backport #22939 (Resolved): jewel: system user can't delete bucket completely
- 11:29 AM Bug #22729 (Resolved): rgw: copying part without http header "x-amz-copy-source-range" will be mi...
- 11:29 AM Backport #22904 (Resolved): jewel: rgw: copying part without http header "x-amz-copy-source-range...
- 10:23 AM Bug #24194: rgw-multisite: Segmental fault when use different rgw_md_log_max_shards among zones
- Xinying Song wrote:
> Backport to luminous https://github.com/ceph/ceph/pull/22272
This pr is closed because it d...
- 10:20 AM Feature #24335 (New): Get the user metadata of the user used to sign the request
- Currently it is not possible to get the metadata of a user without specifying the user ID via /admin/metadata/user=xy...
05/28/2018
- 06:11 AM Bug #24194: rgw-multisite: Segmental fault when use different rgw_md_log_max_shards among zones
- Backport to luminous https://github.com/ceph/ceph/pull/22272
05/26/2018
05/25/2018
- 09:14 PM Bug #24317: rgw: request with range defined as "bytes=0--1" returns 416 InvalidRange
- Casey noted that bytes=0--1 could be intended as a Python range. I haven't tried other negative range values against...
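A hedged sketch of why a strict parser rejects "bytes=0--1" (the helper below is illustrative, not RGW's actual range parsing):

```python
# Illustrative only: a parser following the RFC 7233 byte-range grammar
# rejects the double dash in "bytes=0--1", while "bytes=-1" is a valid
# suffix range (the last byte).
import re

def parse_range(value):
    """Return (first, last) for a simple byte range, or None if invalid."""
    m = re.match(r"bytes=(\d*)-(\d*)$", value)
    if not m or (not m.group(1) and not m.group(2)):
        return None
    first = int(m.group(1)) if m.group(1) else None
    last = int(m.group(2)) if m.group(2) else None
    return (first, last)

print(parse_range("bytes=0--1"))  # None: the double dash fails strict parsing
```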
- 09:13 PM Bug #24317: rgw: request with range defined as "bytes=0--1" returns 416 InvalidRange
- https://github.com/ceph/ceph/pull/22231
- 09:13 PM Bug #24317 (Resolved): rgw: request with range defined as "bytes=0--1" returns 416 InvalidRange
Client application issues an S3 request with range defined as "bytes=0--1". The RadosGW will respond with 416 Inval...
- 08:39 PM Backport #24314 (Resolved): luminous: multisite test failures in test_versioned_object_incrementa...
- https://github.com/ceph/ceph/pull/22541
- 08:39 PM Backport #24313 (Resolved): mimic: multisite test failures in test_versioned_object_incremental_sync
- https://github.com/ceph/ceph/pull/22466
- 03:36 PM Bug #23092: RGW leaking/orphan data with Jewel
- I don't think it's a known issue on Jewel. It would be useful to confirm the workflow for affected objects.
Matt
- 03:12 PM Backport #23338: jewel: radosgw-admin: add an option to reset user stats
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/20877
merged
- 03:11 PM Backport #22939: jewel: system user can't delete bucket completely
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21212
merged
- 03:10 PM Backport #22904: jewel: rgw: copying part without http header "x-amz-copy-source-range" will be m...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21294
merged
- 02:08 PM Bug #24212 (Pending Backport): multisite test failures in test_versioned_object_incremental_sync
- 11:57 AM Bug #23489: [rgw] civetweb behind haproxy doesn't work with absolute URI
- When using absolute urls, following things need to be in order:
rgw_dns_name must correspond to the host part of u...
- 11:04 AM Backport #24303 (Rejected): jewel: rgw: (jewel) can't delete swift acls with swift command.
- https://github.com/ceph/ceph/pull/20257
- 11:04 AM Backport #24302 (Resolved): luminous: rgw: (jewel) can't delete swift acls with swift command.
- https://github.com/ceph/ceph/pull/22465
- 11:04 AM Backport #24299 (Resolved): luminous: rgw: download object might fail for local invariable uninit...
- https://github.com/ceph/ceph/pull/20741
- 11:04 AM Backport #24298 (Resolved): luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' when...
- https://github.com/ceph/ceph/pull/22438
- 03:16 AM Bug #24287 (New): rgw: when rgw_max_chunk_size = 0 is set, put object will get RequestTimeout
- When rgw_max_chunk_size = 0 is set, put object will get RequestTimeout
< PUT /test1/ssss HTTP/1.1
< Host: 127.0.0...