Activity
From 06/11/2018 to 07/10/2018
07/10/2018
- 03:31 PM Bug #20696: radosgw doesn't start with civetweb handler in Debian Stretch
- Experiencing the same problem on Ubuntu:
* Ubuntu 18.04
* Ceph 12.2.4 (luminous)
* openssl: 1.1.0g
- 03:24 PM Bug #18936 (Fix Under Review): rgw slo manifest: etag and size should be optional
- 03:24 PM Bug #18936: rgw slo manifest: etag and size should be optional
- part 2:
https://github.com/ceph/ceph/pull/22967
- 03:09 PM Bug #24857 (New): Object Gateway not stable on FreeBSD
- We are trying to use the Object Gateway on FreeBSD but the radosgw daemon is segfaulting almost every time.
We a...
- 09:14 AM Bug #24846: rgw: check zone_id or zone_name not empty for zone get sub-command
- https://github.com/ceph/ceph/pull/22957
- 09:09 AM Bug #24846 (Closed): rgw: check zone_id or zone_name not empty for zone get sub-command
- Currently the radosgw-admin zone get command does not check whether zone_id or zone_name is empty, so this command alway...
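The missing validation described above can be sketched in a few lines (function and message are illustrative, not RGW's actual code):

```python
def check_zone_args(zone_id="", zone_name=""):
    """Sketch of the missing check: 'zone get' should require a
    non-empty zone id or zone name before looking anything up."""
    if not zone_id and not zone_name:
        raise ValueError("either zone id or zone name must be provided")
    return zone_id or zone_name
```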
- 04:00 AM Backport #24844 (Resolved): luminous: rgw: require --yes-i-really-mean-it to run radosgw-admin o...
- https://github.com/ceph/ceph/pull/22985
- 04:00 AM Backport #24843 (Resolved): mimic: rgw: require --yes-i-really-mean-it to run radosgw-admin orph...
- https://github.com/ceph/ceph/pull/22986
- 03:33 AM Bug #24146 (Pending Backport): rgw: require --yes-i-really-mean-it to run radosgw-admin orphans ...
07/09/2018
- 04:35 PM Bug #24364: rgw: s3cmd sync fails
- I updated the RGWs to 13.2.0 and am still seeing the same issue.
- 01:22 PM Backport #24834 (Resolved): mimic: 'radogw-admin reshard status' command should print text for re...
- https://github.com/ceph/ceph/pull/23021
- 01:22 PM Backport #24833 (Resolved): luminous: 'radogw-admin reshard status' command should print text for...
- https://github.com/ceph/ceph/pull/23019
- 01:21 PM Backport #24831 (Resolved): mimic: "radosgw-admin objects expire" always returns ok even if the p...
- https://github.com/ceph/ceph/pull/23001
- 01:21 PM Backport #24830 (Resolved): luminous: "radosgw-admin objects expire" always returns ok even if th...
- https://github.com/ceph/ceph/pull/23000
- 08:43 AM Bug #24821: rgw doesnt support delimiter longer then one symbol
- Yes, I saw it, but look: Type: String, and Amazon S3 accepts a string of any length as the delimiter.
Some software us...
- 06:46 AM Bug #24821: rgw doesnt support delimiter longer then one symbol
- `A delimiter is a character you use to group keys.`
FYI: https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBu...
- 03:05 AM Bug #24658: bucketname Unable to recognize if it including "."
- Abhishek Lekshmanan wrote:
> Is the bucket name with dots working with path style format http://gz../audio.res.v1/fo...
- 02:51 AM Backport #24782 (In Progress): luminous: rgw: set cr state if aio_read err return in RGWCloneMeta...
- https://github.com/ceph/ceph/pull/22942
07/08/2018
- 11:33 PM Backport #24807 (In Progress): mimic: rgw gc may cause a large number of read traffic
- 09:11 AM Bug #24821 (Resolved): rgw doesnt support delimiter longer then one symbol
- If I list a bucket with a delimiter containing only one character, everything is OK:...
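For illustration, a minimal sketch of S3-style delimiter grouping, which works with delimiters of any length (the helper below is hypothetical, not RGW code):

```python
def group_keys(keys, prefix="", delimiter="/"):
    """Group object keys the way an S3 listing does with a delimiter.

    The delimiter may be a string of any length, not just one character:
    everything between the prefix and the first occurrence of the
    delimiter is rolled up into a common prefix.
    """
    contents, common_prefixes = [], []
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        idx = rest.find(delimiter)
        if idx == -1:
            contents.append(key)          # plain entry, no delimiter after prefix
        else:
            cp = prefix + rest[:idx + len(delimiter)]
            if cp not in common_prefixes:
                common_prefixes.append(cp)  # rolled-up "directory"
    return contents, common_prefixes
```

For example, with delimiter `"--"` the keys `a--b` and `a--c` collapse into the single common prefix `a--`.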
07/07/2018
- 04:37 PM Backport #24809 (In Progress): mimic: Invalid Access-Control-Request-Request may bypass validate_...
- 04:36 PM Backport #24810 (In Progress): luminous: Invalid Access-Control-Request-Request may bypass valida...
- 09:18 AM Backport #24813 (In Progress): mimic: REST admin metadata API paging failure bucket & bucket.inst...
- 09:17 AM Backport #24814 (In Progress): luminous: REST admin metadata API paging failure bucket & bucket.i...
- 08:22 AM Backport #24632 (In Progress): luminous: rgw performance regression for luminous 12.2.4
- 08:21 AM Backport #24633 (In Progress): mimic: rgw performance regression for luminous 12.2.4
- 08:19 AM Backport #24630 (In Progress): luminous: cls_bucket_list fails causes cascading osd crashes
- 08:15 AM Backport #24631 (In Progress): mimic: cls_bucket_list fails causes cascading osd crashes
- 08:12 AM Bug #24432: multisite: RGWSyncTraceNode released twice and crashed in reload
- @Yehuda: I could not find RGWSyncTraceNode in luminous, so I removed luminous from the backport field.
- 08:10 AM Bug #24432: multisite: RGWSyncTraceNode released twice and crashed in reload
- @Casey: I could not find RGWSyncTraceNode in luminous
- 08:06 AM Backport #24619 (In Progress): mimic: multisite: RGWSyncTraceNode released twice and crashed in r...
- 08:04 AM Bug #22927: rgw: Location element not returned correctly from Post Upload Object
- The master PR https://github.com/ceph/ceph/pull/20330 is still open, presumably pending integration testing?
07/06/2018
- 09:58 PM Bug #24815 (Resolved): cls_rgw test is only run in rados suite: add it to rgw suite as well
- Master PR: https://github.com/ceph/ceph/pull/22919
- 09:37 PM Bug #23257 (Pending Backport): 'radogw-admin reshard status' command should print text for reshar...
- 06:49 PM Bug #23257: 'radogw-admin reshard status' command should print text for reshard status
- Orit Wasserman wrote:
> https://github.com/ceph/ceph/pull/20779
merged
- 09:11 PM Bug #24592 (Pending Backport): "radosgw-admin objects expire" always returns ok even if the proce...
- 01:57 PM Bug #24592: "radosgw-admin objects expire" always returns ok even if the process fails.
- zhang sw wrote:
> https://github.com/ceph/ceph/pull/22635
merged
- 09:07 PM Backport #24814 (Resolved): luminous: REST admin metadata API paging failure bucket & bucket.inst...
- https://github.com/ceph/ceph/pull/22932
- 09:07 PM Backport #24813 (Resolved): mimic: REST admin metadata API paging failure bucket & bucket.instanc...
- https://github.com/ceph/ceph/pull/22933
- 09:06 PM Backport #24810 (Resolved): luminous: Invalid Access-Control-Request-Request may bypass validate_...
- https://github.com/ceph/ceph/pull/22934
- 09:06 PM Backport #24809 (Resolved): mimic: Invalid Access-Control-Request-Request may bypass validate_cor...
- https://github.com/ceph/ceph/pull/22935
- 09:06 PM Backport #24808 (Resolved): luminous: rgw gc may cause a large number of read traffic
- https://github.com/ceph/ceph/pull/22984
- 09:06 PM Backport #24807 (Resolved): mimic: rgw gc may cause a large number of read traffic
- https://github.com/ceph/ceph/pull/22941
- 08:39 PM Backport #24627 (In Progress): jewel: rgw: fail to recover index from crash
- Reverted by https://github.com/ceph/ceph/pull/22916 (until we sort out the regression http://tracker.ceph.com/issues/...
- 06:51 PM Bug #24767 (Pending Backport): rgw gc may cause a large number of read traffic
- 06:49 PM Bug #23099 (Pending Backport): REST admin metadata API paging failure bucket & bucket.instance: I...
- I believe this is needed on Luminous also - correct me if I'm wrong.
- 06:46 PM Bug #23099: REST admin metadata API paging failure bucket & bucket.instance: InvalidArgument
- Matt Benjamin wrote:
> https://github.com/ceph/ceph/pull/22721
merged
- 03:27 PM Bug #24658: bucketname Unable to recognize if it including "."
- Is the bucket name with dots working with path style format http://gz../audio.res.v1/foobar?
If so then it is a...
- 02:43 PM Bug #24568: Delete marker generated by lifecycle has no owner
- zhang sw wrote:
> https://github.com/ceph/ceph/pull/22619
merged
- 02:39 PM Bug #24568: Delete marker generated by lifecycle has no owner
- This change seems logically correct, since delete markers may be deleted by protocol.
https://docs.aws.amazon.com/...
- 02:14 PM Bug #24223 (Pending Backport): Invalid Access-Control-Request-Request may bypass validate_cors_ru...
- 02:02 PM Bug #24223: Invalid Access-Control-Request-Request may bypass validate_cors_rule_method
- Jeegn Chen wrote:
> PR: https://github.com/ceph/ceph/pull/22145
merged
Reviewed-by: Casey Bodley <cbodley@redhat.com>
- 02:00 PM Bug #24563: rgw: copy_object and multipart_copy response header etag format not correct
- Tianshan Qu wrote:
> https://github.com/ceph/ceph/pull/22614
merged
- 01:58 PM Bug #24572: Lifecycle rules number on one bucket should be limited.
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/22623
merged
- 11:14 AM Bug #24789 (New): [rgw] ERROR: unable to remove bucket(2) No such file or directory (if --bypass-gc)
- I tried to remove a bucket bypassing gc:...
- 10:46 AM Bug #24364: rgw: s3cmd sync fails
- Is this reproducible still with latest mimic (13.2.0)?
07/05/2018
- 06:22 PM Backport #24783 (In Progress): mimic: rgw: set cr state if aio_read err return in RGWCloneMetaLog...
- 06:15 PM Backport #24783 (Resolved): mimic: rgw: set cr state if aio_read err return in RGWCloneMetaLogCor...
- https://github.com/ceph/ceph/pull/22880
- 06:15 PM Backport #24782 (Resolved): luminous: rgw: set cr state if aio_read err return in RGWCloneMetaLog...
- https://github.com/ceph/ceph/pull/22942
- 06:03 PM Bug #23199 (Fix Under Review): radosgw coredump RGWGC::process
- -https://github.com/ceph/ceph/pull/22539-
https://github.com/ceph/ceph/pull/25430 - 06:01 PM Bug #23842: multisite: es sync null versioned object failed because of olh info
- 05:57 PM Bug #24265 (Need More Info): Ceph Luminous radosgw/rgw failed to start Couldn't init storage prov...
- The systemd service name must match the ceph.conf section name after client. In this case try starting the rgw service as rado...
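As a sketch of the naming rule above (host name `myhost` is hypothetical):

```ini
# ceph.conf: the section name after "client." must match the
# systemd unit instance name.
[client.rgw.myhost]
rgw frontends = civetweb port=7480

# Then start the daemon with the matching instance name:
#   systemctl start ceph-radosgw@rgw.myhost
```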
- 05:48 PM Bug #24401 (Need More Info): RadosGW credential locks itself during operation
- 05:42 PM Bug #24572: Lifecycle rules number on one bucket should be limited.
- 05:40 PM Bug #24744: rgw: index wrongly deleted when put raced with list
- 05:39 PM Bug #24744 (Fix Under Review): rgw: index wrongly deleted when put raced with list
- 03:45 PM Bug #24767: rgw gc may cause a large number of read traffic
- 06:33 AM Bug #24767: rgw gc may cause a large number of read traffic
- [[https://github.com/ceph/ceph/pull/22863]]
- 02:32 PM Bug #24566 (Pending Backport): rgw: set cr state if aio_read err return in RGWCloneMetaLogCorouti...
- 07:37 AM Bug #24775: rgw-multisite: radosgw-admin misreport sync status
- fix pr: https://github.com/ceph/ceph/pull/22866
- 06:50 AM Bug #24775 (Fix Under Review): rgw-multisite: radosgw-admin misreport sync status
- Recently we hit a situation where data in different zones is far from consistent, but `radosgw-admin syn...
07/04/2018
- 01:09 PM Bug #24767 (Resolved): rgw gc may cause a large number of read traffic
- Using rgw uploads only, ceph -s finds a large number of read traffic(about hundreds of MB/s). And the write traffic i...
- 07:19 AM rgw-testing Backport #24737 (In Progress): luminous: add unit test for cls bi list command
- 07:14 AM rgw-testing Backport #24736 (In Progress): mimic: add unit test for cls bi list command
- 04:15 AM Backport #24547 (In Progress): mimic: rgw: change order of authentication back to local, remote
- https://github.com/ceph/ceph/pull/22842
07/03/2018
- 03:46 PM Bug #24505: radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- Adding another scenario -
where the cluster was installed with resharding and then resharding was disabled and rados...
- 11:03 AM Backport #24693 (In Progress): luminous: rgw: meta and data notify thread miss stop cr manager
- 11:02 AM Backport #24702 (In Progress): mimic: rgw: meta and data notify thread miss stop cr manager
- 11:00 AM Backport #24692 (In Progress): luminous: rgw: index complete miss zones_trace set
- 11:00 AM Backport #24701 (In Progress): mimic: rgw: index complete miss zones_trace set
- 10:58 AM Backport #24690 (In Progress): luminous: rgw-multisite: endless loop in RGWBucketShardIncremental...
- 10:57 AM Backport #24700 (In Progress): mimic: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- 09:12 AM Documentation #24754 (New): NFS ganesha with RGW - missing documentation/bug?
- Hi, I have tried to configure NFS ganesha with my RGW to be able to see files in RGW as a FS (running Ubuntu 16.04, Lu...
- 08:42 AM Backport #24627 (Resolved): jewel: rgw: fail to recover index from crash
- 08:37 AM Backport #24628 (In Progress): luminous: rgw: fail to recover index from crash
- 08:36 AM Backport #24629 (In Progress): mimic: rgw: fail to recover index from crash
07/02/2018
- 08:38 PM Bug #24640: test_cls_rgw.sh cls_rgw.index_suggest fail with EINVAL
- I'm currently evaluating the fix.
- 07:48 PM Bug #24640: test_cls_rgw.sh cls_rgw.index_suggest fail with EINVAL
- fix in -https://github.com/ceph/ceph/pull/22798-
- 08:35 PM Bug #18936 (In Progress): rgw slo manifest: etag and size should be optional
- 08:34 PM Bug #18936: rgw slo manifest: etag and size should be optional
- As of Mimic, an incoming manifest lacking etag and/or size_bytes can be uploaded, but the resulting object cannot be ...
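A minimal sketch of the manifest shape in question, assuming the standard Swift SLO format (paths and values are hypothetical):

```python
import json

# A Swift SLO manifest is a JSON list of segment descriptors.
# Per this issue, "etag" and "size_bytes" should be optional;
# only "path" is strictly required.
segments = [
    {"path": "/segments/part-001",
     "etag": "d41d8cd98f00b204e9800998ecf8427e",
     "size_bytes": 1048576},
    # segment with etag/size_bytes omitted: the gateway should look
    # them up itself instead of rejecting the manifest
    {"path": "/segments/part-002"},
]
manifest = json.dumps(segments)
```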
- 07:44 PM Bug #24744: rgw: index wrongly deleted when put raced with list
- https://github.com/ceph/ceph/pull/22798
- 07:14 PM Bug #24744 (Fix Under Review): rgw: index wrongly deleted when put raced with list
- Like issue http://tracker.ceph.com/issues/22555, a special sequence can cause this new situation.
IO sequence... - 05:55 PM Bug #23257: 'radogw-admin reshard status' command should print text for reshard status
- 04:37 PM rgw-testing Backport #24737 (Resolved): luminous: add unit test for cls bi list command
- https://github.com/ceph/ceph/pull/22846
- 04:37 PM rgw-testing Backport #24736 (Resolved): mimic: add unit test for cls bi list command
- https://github.com/ceph/ceph/pull/22845
- 01:06 PM rgw-testing Bug #24483 (Pending Backport): add unit test for cls bi list command
- 10:37 AM Bug #24551: RGW Dynamic bucket index resharding keeps resharding all buckets
- This problem occurs on versioning-enabled buckets.
Test env: ceph 12.2.5 + applied this patch https://github.com/c...
06/29/2018
- 09:34 PM Bug #24640: test_cls_rgw.sh cls_rgw.index_suggest fail with EINVAL
- I have confirmed that commit https://github.com/ceph/ceph/pull/22217 is the culprit. The commit before the merge succ...
- 06:01 AM Backport #24514 (In Progress): luminous: "radosgw-admin zonegroup set" requires realm
- https://github.com/ceph/ceph/pull/22767
- 02:25 AM Bug #24658: bucketname Unable to recognize if it including "."
- but a request like http://gz.s3.imu.cn/test/mp3/abc.mp3 works; there the bucket name is test, which does not include "."
- 02:20 AM Bug #24658: bucketname Unable to recognize if it including "."
- Abhishek Lekshmanan wrote:
> What is the rgw dns name for rgw here? Are you trying to request via vhost style calli...
06/28/2018
- 08:09 PM Backport #24702 (Resolved): mimic: rgw: meta and data notify thread miss stop cr manager
- https://github.com/ceph/ceph/pull/22821
- 08:09 PM Backport #24701 (Resolved): mimic: rgw: index complete miss zones_trace set
- https://github.com/ceph/ceph/pull/22818
- 08:09 PM Backport #24700 (Resolved): mimic: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- https://github.com/ceph/ceph/pull/22815
- 08:04 PM Backport #24693 (Resolved): luminous: rgw: meta and data notify thread miss stop cr manager
- https://github.com/ceph/ceph/pull/22822
- 08:04 PM Backport #24692 (Resolved): luminous: rgw: index complete miss zones_trace set
- https://github.com/ceph/ceph/pull/22820
- 08:04 PM Backport #24690 (Resolved): luminous: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- https://github.com/ceph/ceph/pull/22817
- 05:57 PM Bug #24589 (Pending Backport): rgw: meta and data notify thread miss stop cr manager
- 05:57 PM Bug #24590 (Pending Backport): rgw: index complete miss zones_trace set
- 05:55 PM Bug #24603 (Pending Backport): rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- 05:48 PM Bug #24658: bucketname Unable to recognize if it including "."
- What is the rgw dns name for rgw here? Are you trying to request via vhost style calling format with gz as the bucke...
- 05:44 PM Bug #23099: REST admin metadata API paging failure bucket & bucket.instance: InvalidArgument
- 05:36 PM Bug #24640: test_cls_rgw.sh cls_rgw.index_suggest fail with EINVAL
- likely caused by recent changes to rgw_dir_suggest_changes() in https://github.com/ceph/ceph/pull/22217
- 02:52 PM Bug #24544 (Fix Under Review): change default rgw_thread_pool_size to 512
- 04:13 AM Feature #24681 (New): Radosgw admin api for logging (like radosgw-admin log * command)
- Hi all.
Currently, I use the radosgw-admin command to collect all radosgw actions for a user.
All commands are slow, i...
06/27/2018
- 09:03 AM Bug #23193: multsite use of metadata list pagination
- I don't think the admin api requires url encoding--though that works; the marker is opaque, and needs to work in doc...
06/26/2018
- 09:47 PM Bug #23099 (Fix Under Review): REST admin metadata API paging failure bucket & bucket.instance: I...
- 09:47 PM Bug #23099: REST admin metadata API paging failure bucket & bucket.instance: InvalidArgument
- https://github.com/ceph/ceph/pull/22721
- 05:21 PM Backport #24627: jewel: rgw: fail to recover index from crash
- pr: https://github.com/ceph/ceph/pull/22677
- 05:20 PM Backport #24628: luminous: rgw: fail to recover index from crash
- pr: -https://github.com/ceph/ceph/pull/22678-
- 05:19 PM Backport #24629: mimic: rgw: fail to recover index from crash
- pr: https://github.com/ceph/ceph/pull/22679
- 01:25 PM Bug #22072: one object degraded cause all ceph rgw request hang
- Having the same issue on Luminous, see: http://tracker.ceph.com/issues/24645
Can someone have a look at it, please?
- 04:08 AM Bug #24658 (Need More Info): bucketname Unable to recognize if it including "."
my request is:
http://gz.s3.imu.cn/audio.res.v1/mp3/abc.mp3
return:
<Error>
<Code>NoSuchBucket</Co...
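For illustration, a rough sketch of vhost- vs path-style bucket resolution as discussed in this issue (the helper and the assumed rgw dns name are hypothetical, not RGW code):

```python
def bucket_from_request(host, path, rgw_dns_name="s3.imu.cn"):
    """Resolve the bucket name from Host header and URL path.

    If the Host is a subdomain of the configured rgw dns name, the
    subdomain is taken as the bucket (vhost style) and the whole path
    is the object key; otherwise the first path segment is the bucket
    (path style).
    """
    if host.endswith("." + rgw_dns_name):
        bucket = host[: -len(rgw_dns_name) - 1]   # vhost style
        key = path.lstrip("/")
    else:
        bucket, _, key = path.lstrip("/").partition("/")  # path style
    return bucket, key
```

Under this resolution, http://gz.s3.imu.cn/audio.res.v1/mp3/abc.mp3 means bucket "gz" with key "audio.res.v1/mp3/abc.mp3", not a bucket named "audio.res.v1".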
06/23/2018
06/22/2018
- 03:54 PM Backport #24633 (Resolved): mimic: rgw performance regression for luminous 12.2.4
- https://github.com/ceph/ceph/pull/22929
- 03:54 PM Backport #24632 (Resolved): luminous: rgw performance regression for luminous 12.2.4
- https://github.com/ceph/ceph/pull/22930
- 03:54 PM Backport #24631 (Resolved): mimic: cls_bucket_list fails causes cascading osd crashes
- https://github.com/ceph/ceph/pull/22927
- 03:54 PM Backport #24630 (Resolved): luminous: cls_bucket_list fails causes cascading osd crashes
- https://github.com/ceph/ceph/pull/24391
- 03:54 PM Backport #24629 (Resolved): mimic: rgw: fail to recover index from crash
- https://github.com/ceph/ceph/pull/23118
- 03:53 PM Backport #24628 (Resolved): luminous: rgw: fail to recover index from crash
- https://github.com/ceph/ceph/pull/23130
- 03:53 PM Backport #24627 (Rejected): jewel: rgw: fail to recover index from crash
- 01:50 PM Bug #24280 (Pending Backport): rgw: fail to recover index from crash
- 01:47 PM Bug #23379 (Pending Backport): rgw performance regression for luminous 12.2.4
- 01:45 PM Bug #24117 (Pending Backport): cls_bucket_list fails causes cascading osd crashes
- 08:41 AM Backport #24619 (Resolved): mimic: multisite: RGWSyncTraceNode released twice and crashed in reload
- https://github.com/ceph/ceph/pull/22926
06/21/2018
- 07:52 PM Bug #24432 (Pending Backport): multisite: RGWSyncTraceNode released twice and crashed in reload
- 06:53 PM Bug #24565: rgw: log usage to actual user
- Please see my comment on the bug, I don't think this one is correct.
- 06:49 PM Bug #24566: rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine::state_send_rest...
- 06:43 PM Bug #24572 (Fix Under Review): Lifecycle rules number on one bucket should be limited.
- https://github.com/ceph/ceph/pull/22623
- 06:42 PM Bug #24589: rgw: meta and data notify thread miss stop cr manager
- 06:41 PM Bug #24590: rgw: index complete miss zones_trace set
- 06:39 PM Bug #24592: "radosgw-admin objects expire" always returns ok even if the process fails.
- 06:38 PM Bug #24594: rgw: RGWPutObjProcessor_Aio::throttle_data not get actual write size
- 06:36 PM Bug #24595 (Triaged): rgw: default quota not set in radosgw for Openstack users
- 07:26 AM Bug #24595: rgw: default quota not set in radosgw for Openstack users
- This issue can be related to the same problem: https://tracker.ceph.com/issues/24532
- 05:38 PM Bug #24505 (Triaged): radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- 05:34 PM Bug #24551 (Triaged): RGW Dynamic bucket index resharding keeps resharding all buckets
- 03:42 PM Bug #24532: rgw user max buckets not working
- I also have the problem with default quota for keystone users (see https://tracker.ceph.com/issues/24595)
- 03:38 PM Bug #24532: rgw user max buckets not working
- I put my config in the global section and it works. Great.
But I need openstack user which sync from keystone is applie...
- 10:23 AM Bug #24532: rgw user max buckets not working
- @hoannv hoannv
As far as I remember I had the same issue having that attribute in the radosgw section
Moving tha...
- 10:18 AM Bug #24532: rgw user max buckets not working
- @Massimo Sgaravatto
I put "rgw user max buckets" setting in radosgw section... - 07:30 AM Bug #24532: rgw user max buckets not working
- I also see problems with "rgw user default quota max size" for OpenStack users. See: https://tracker.ceph.com/issues/...
- 07:07 AM Bug #24532: rgw user max buckets not working
- We have seen the same problem with:...
- 03:38 PM Bug #22632: radosgw - s3 keystone integration doesn't work while using civetweb as frontend
- I am affected by the same problem (or at least a problem with the very same symptoms)
I am also running Ocata in t... - 01:25 PM Feature #24335: Get the user metadata of the user used to sign the request
- Can be marked as resolved.
- 01:25 PM Feature #24335: Get the user metadata of the user used to sign the request
- Done by https://github.com/ceph/ceph/pull/22390. Used by https://github.com/ceph/ceph/pull/22416.
- 01:00 PM Bug #24603: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- fix pr: https://github.com/ceph/ceph/pull/22660
I'm not 100% sure about this fix. This problem occurs in our produ...
- 12:41 PM Bug #24603 (Resolved): rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
- with debug_rgw=20, the log shows as below (this is the original log without any filter):
~~~~~~~~~rgw log begin~~~~~~~~~~~~~...
06/20/2018
- 09:57 PM Bug #23196 (Resolved): rgw: fix 'copy part' without 'x-amz-copy-source-range' when compression en...
- 09:57 PM Backport #24298 (Resolved): luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' when...
- 04:44 PM Backport #24298: luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' when compressio...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22438
merged
- 09:55 PM Backport #24302 (Resolved): luminous: rgw: (jewel) can't delete swift acls with swift command.
- 04:42 PM Backport #24302: luminous: rgw: (jewel) can't delete swift acls with swift command.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22465
merged
- 09:50 PM Backport #24314 (Resolved): luminous: multisite test failures in test_versioned_object_incrementa...
- 04:42 PM Backport #24314: luminous: multisite test failures in test_versioned_object_incremental_sync
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22541
merged
- 09:50 PM Bug #19632 (Resolved): Bucket lifecycles stick around after buckets are deleted
- 09:49 PM Backport #24477 (Resolved): luminous: Bucket lifecycles stick around after buckets are deleted
- 04:41 PM Backport #24477: luminous: Bucket lifecycles stick around after buckets are deleted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22551
merged
- 09:46 PM Bug #22897: rgw: (jewel) can't delete swift acls with swift command.
- Deleting mimic backport issue because, according to @PrashantD, the commits in question are already in mimic:
The ... - 09:44 PM Backport #20587 (Need More Info): jewel: radosgw/swift emulate split read/write acls?
- 09:41 PM Backport #24303 (Need More Info): jewel: rgw: (jewel) can't delete swift acls with swift command.
- 01:39 PM Bug #23651: Dynamic bucket indexing, resharding and tenants still seems to be broken
- Can someone tell me how to clean up the index? I have way too many objects now..
- 12:30 PM Backport #24385 (In Progress): mimic: objects in cache never refresh after rgw_cache_expiry_inter...
- https://github.com/ceph/ceph/pull/22643
- 07:52 AM Bug #24595 (Resolved): rgw: default quota not set in radosgw for Openstack users
- I have a luminous ceph cluster where I have just deployed radosgw
I tried relying on the "rgw user default quota m...
- 07:43 AM Bug #24594: rgw: RGWPutObjProcessor_Aio::throttle_data not get actual write size
- https://github.com/ceph/ceph/pull/22638
- 07:36 AM Bug #24594 (New): rgw: RGWPutObjProcessor_Aio::throttle_data not get actual write size
- If handle_data does not get enough data for one write, we put it in pending_data_bl, so throttle_data will only throttle ...
- 04:36 AM Bug #24592 (Fix Under Review): "radosgw-admin objects expire" always returns ok even if the proce...
- 03:50 AM Bug #24592: "radosgw-admin objects expire" always returns ok even if the process fails.
- https://github.com/ceph/ceph/pull/22635
- 03:49 AM Bug #24592: "radosgw-admin objects expire" always returns ok even if the process fails.
- rgw: "radosgw-admin objects expire" always returns ok even if the process fails.
- 03:45 AM Bug #24592 (Resolved): "radosgw-admin objects expire" always returns ok even if the process fails.
- "radosgw-admin objects expire" always returns ok, even if the objects expire process fails. We should print error so ...
- 03:29 AM Bug #24590 (Fix Under Review): rgw: index complete miss zones_trace set
- 02:32 AM Bug #24590: rgw: index complete miss zones_trace set
- https://github.com/ceph/ceph/pull/22632
- 02:24 AM Bug #24590 (Resolved): rgw: index complete miss zones_trace set
- index complete defaults to a nullptr zones_trace, and we should init one, otherwise it will cause redundant data sync....
- 03:28 AM Bug #24563 (Fix Under Review): rgw: copy_object and multipart_copy response header etag format no...
- 03:27 AM Bug #24589 (Fix Under Review): rgw: meta and data notify thread miss stop cr manager
- 01:46 AM Bug #24589: rgw: meta and data notify thread miss stop cr manager
- https://github.com/ceph/ceph/pull/22631
- 01:45 AM Bug #24589 (Resolved): rgw: meta and data notify thread miss stop cr manager
- rgw restart or reload will get stuck in RGWCompletionManager::get_next()
- 03:25 AM Bug #24565 (Fix Under Review): rgw: log usage to actual user
- 03:07 AM Bug #24562 (Fix Under Review): Tail tag should be different when copy object data
- 01:17 AM Bug #24568: Delete marker generated by lifecycle has no owner
- According to S3, the delete marker should have an owner attribute.
- 01:16 AM Bug #24568 (Fix Under Review): Delete marker generated by lifecycle has no owner
- 01:16 AM Bug #24568 (In Progress): Delete marker generated by lifecycle has no owner
06/19/2018
- 11:11 PM Bug #23531: s3a/2.8.0 fs.contract failure Seek/Rename/ComplexDirActions/RecursiveRootListing/Empt...
- I think the related link was for different reason, this is still a genuine issue with s3a tests.
- 09:18 AM Bug #24572 (Resolved): Lifecycle rules number on one bucket should be limited.
- In S3, one bucket can have no more than 1000 lifecycle rules. An error is returned when more than 1000 rules are set.
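A minimal sketch of such a limit check (the function name and the exact error text are assumptions, not RGW's implementation):

```python
MAX_LIFECYCLE_RULES = 1000  # S3's documented per-bucket limit

def validate_lifecycle(rules):
    """Reject a lifecycle configuration that exceeds the rule limit."""
    if len(rules) > MAX_LIFECYCLE_RULES:
        raise ValueError(
            "lifecycle configuration has more than %d rules" % MAX_LIFECYCLE_RULES
        )
    return rules
```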
- 06:53 AM Bug #24532: rgw user max buckets not working
- I restarted my radosgw admin.
I tested with rados gateway admin api, all config is working....
- 06:23 AM Bug #24532: rgw user max buckets not working
- Maybe you should restart the radosgw. It works in my environment.
- 06:16 AM Bug #24568: Delete marker generated by lifecycle has no owner
- https://github.com/ceph/ceph/pull/22619
- 06:09 AM Bug #24568 (Resolved): Delete marker generated by lifecycle has no owner
- The delete marker generated by object expiration has no owner associated with it.
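For reference, a sketch of the delete-marker entry shape as S3's ListObjectVersions reports it; per this issue, the Owner field was the missing piece on lifecycle-generated markers (all values are hypothetical):

```python
# Shape of a delete-marker entry in an S3 version listing.
delete_marker = {
    "Key": "expired-object",             # hypothetical key
    "IsLatest": True,
    "VersionId": "3sL4kqtJlcpXroDTDmJ",  # hypothetical version id
    "Owner": {                           # absent on RGW lifecycle markers before the fix
        "ID": "bucket-owner-id",
        "DisplayName": "bucket-owner",
    },
}
```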
- 05:35 AM Bug #24566: rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine::state_send_rest...
- https://github.com/ceph/ceph/pull/22617
- 05:34 AM Bug #24566 (Resolved): rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine::stat...
- 05:24 AM Bug #24565: rgw: log usage to actual user
- https://github.com/ceph/ceph/pull/22616
- 05:22 AM Bug #24565 (Fix Under Review): rgw: log usage to actual user
- other users' bucket operations were all logged to the owner's usage, which is not fair.
- 04:44 AM Bug #24563: rgw: copy_object and multipart_copy response header etag format not correct
- https://github.com/ceph/ceph/pull/22614
- 03:57 AM Bug #24563 (Fix Under Review): rgw: copy_object and multipart_copy response header etag format no...
- the response header etag should be wrapped in quotation marks
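A minimal sketch of the quoting fix described above (the helper name is hypothetical):

```python
def quote_etag(etag):
    """Ensure an ETag response header value is wrapped in double
    quotes, as S3 returns it for copy responses."""
    if not (etag.startswith('"') and etag.endswith('"')):
        etag = '"%s"' % etag.strip('"')
    return etag
```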
- 03:21 AM Bug #24562: Tail tag should be different when copy object data
- https://github.com/ceph/ceph/pull/22613
- 03:17 AM Bug #24562 (Fix Under Review): Tail tag should be different when copy object data
- Copying object data should generate new tail tag for the object.
06/18/2018
- 08:55 PM Bug #24551: RGW Dynamic bucket index resharding keeps resharding all buckets
- I also seem to have a case on 12.2.5 where buckets are in an endless attempt to reshard - If there is any useful data...
- 04:16 PM Bug #24551 (Resolved): RGW Dynamic bucket index resharding keeps resharding all buckets
- We're into some problems with dynamic bucket index resharding. After an upgrade from Ceph 12.2.2 to 12.2.5, which fix...
- 07:05 PM Bug #24364: rgw: s3cmd sync fails
- I can't reproduce using a different folder from my workstation but I do still get the same error when trying to sync ...
- 03:16 AM Backport #24352 (In Progress): mimic: rgw: request with range defined as "bytes=0--1" returns 41...
- https://github.com/ceph/ceph/pull/22590
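For illustration, a rough sketch of why "bytes=0--1" fails to parse as a single byte-range spec (a regex-based toy, not RGW's actual parser):

```python
import re

# A byte-range spec is "bytes=first-last" where first and last are
# optional digit strings; "bytes=0--1" has a stray "-" and matches
# nothing, so the header should be ignored rather than answered
# with 416.
RANGE_RE = re.compile(r"^bytes=(\d*)-(\d*)$")

def parse_range(header):
    m = RANGE_RE.match(header)
    if not m or (m.group(1) == "" and m.group(2) == ""):
        return None  # malformed: ignore the Range header
    return m.group(1), m.group(2)
```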
06/16/2018
- 01:36 PM Backport #24547 (Resolved): mimic: rgw: change order of authentication back to local, remote
- https://github.com/ceph/ceph/pull/22842
- 01:36 PM Backport #24546 (Resolved): luminous: rgw: change order of authentication back to local, remote
- https://github.com/ceph/ceph/pull/23501/
06/15/2018
- 07:57 PM Bug #23089 (Pending Backport): rgw: change order of authentication back to local, remote
- https://github.com/ceph/ceph/pull/21494 merged
- 04:16 PM Bug #24544: change default rgw_thread_pool_size to 512
- https://github.com/ceph/ceph/pull/22581
- 04:08 PM Bug #24544 (Resolved): change default rgw_thread_pool_size to 512
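As a sketch, the option can still be set explicitly in ceph.conf (the section name is hypothetical; with this change 512 is the default, so setting it is only needed to override):

```ini
[client.rgw.myhost]
rgw thread pool size = 512
```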
- 03:29 PM Bug #24532 (New): rgw user max buckets not working
- Hi all.
Currently, I want to set a default max-buckets value per user.
I see this config in radosgw....
06/14/2018
- 12:32 PM Bug #18260: When uploading a large number of objects constantly, the objects number of bucket is ...
- Not yet, I was busy with other issues.
I will resume work on this bug now.
- 09:39 AM Bug #24523 (New): rgw-multisite:metadata is behind on # shards
- radosgw-admin sync status return this:
realm a4cf2cc1-ce25-49d2-b5de-ffd491e35f7f (Sensetime)
zoneg...
06/13/2018
- 04:31 PM Backport #23683 (Resolved): luminous: radosgw-admin should not use metadata cache when not needed
- 04:30 PM Bug #22202 (Resolved): rgw_statfs should report the correct stats
- 04:30 PM Backport #23231 (Resolved): luminous: rgw_statfs should report the correct stats
- 04:29 PM Backport #23906 (Resolved): luminous: libcurl ignores headers with empty value, leading to signat...
- 04:28 PM Backport #24120 (Resolved): luminous: rgw: 403 error when creating an object with metadata contai...
- 03:19 PM Backport #24252 (Resolved): luminous: Admin OPS Api overwrites email when user is modified
- 03:05 PM Backport #24393 (Resolved): luminous: rgw: making implicit_tenants backwards compatible
- 03:04 PM Backport #24477 (In Progress): luminous: Bucket lifecycles stick around after buckets are deleted
- 02:57 PM Backport #24514 (Resolved): luminous: "radosgw-admin zonegroup set" requires realm
- https://github.com/ceph/ceph/pull/22767
- 01:12 PM Backport #24384 (In Progress): luminous: objects in cache never refresh after rgw_cache_expiry_in...
- 12:57 PM Bug #23146 (Resolved): rgw: download object might fail for local invariable uninitialized
- 12:56 PM Backport #24299 (Resolved): luminous: rgw: download object might fail for local invariable uninit...
- 04:18 AM Backport #24314 (In Progress): luminous: multisite test failures in test_versioned_object_increme...
- https://github.com/ceph/ceph/pull/22541
- 02:24 AM Bug #21583 (Pending Backport): "radosgw-admin zonegroup set" requires realm
06/12/2018
- 08:48 PM Documentation #24508 (Resolved): How to configure user- or bucket-specific data placement
- There is some minimal documentation about the radosgw-admin 'zone/zonegroup placement' commands, and some documentati...
- 05:19 PM Feature #24507 (Fix Under Review): [rfe] rgw: relaxed region constraint enforcement
- https://github.com/ceph/ceph/pull/22533
- 03:32 PM Feature #24507 (Fix Under Review): [rfe] rgw: relaxed region constraint enforcement
- Add a new "rgw_relaxed_region_enforcement" Boolean option to enable relaxed (non-enforcement of region constraint) be...
- 04:05 PM rgw-testing Bug #24483: add unit test for cls bi list command
- 01:54 PM Bug #23199: radosgw coredump RGWGC::process
- -https://github.com/ceph/ceph/pull/22520-
- 01:40 PM Bug #24505 (Resolved): radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- On a Ceph cluster started with vstart:...
- 09:35 AM Bug #24364: rgw: s3cmd sync fails
- Is this still reproducible in the setup? I tried reproducing this locally, but I'm not successful with that
- 09:27 AM Support #24457: large omap object
- I now have "11 large omap objects" and no clue what to do about it
- 06:59 AM Feature #24493 (Fix Under Review): rgw does not implement list_object_v2 in S3
- List_object_v2(get_bucket_v2) is different to list_object in some parameters and returned values.
In most case,it c...
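For reference, a sketch of the request-parameter differences between the two listings, based on the public S3 API (prefix and marker values are hypothetical):

```python
# ListObjects (v1) pages with "marker"; ListObjectsV2 is selected via
# "list-type=2" and pages with "continuation-token", adding
# "start-after" and "fetch-owner".
v1_params = {"prefix": "logs/", "delimiter": "/",
             "marker": "logs/2018-06-30"}

v2_params = {"prefix": "logs/", "delimiter": "/",
             "list-type": "2",            # selects the v2 listing
             "start-after": "logs/2018-06-30",
             "fetch-owner": "true"}
```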
06/11/2018
- 09:14 AM Support #24457: large omap object
- I tried a manual resharding
using this command: radosgw-admin bucket reshard --bucket bucket_name --num-shards 512...