Activity
From 04/16/2017 to 05/15/2017
05/15/2017
- 11:06 PM Bug #19932 (Fix Under Review): rgw_file: v3 write timer does not close open handles
- https://github.com/ceph/ceph/pull/15097
- 09:37 PM Bug #19932 (Resolved): rgw_file: v3 write timer does not close open handles
- When a v3 write timer expires, it completes the current write as required, but does not clear the FLAG_OPEN or FLAG_S...
- 08:58 AM Bug #19927 (New): rgw-multisite: slave zone can not redirect website configure request to metadat...
Hi,
when I make a website configure request to the slave zone rgw, and get the website configure from the master zone, but I ca...
- 06:39 AM Tasks #16784 (Fix Under Review): RADOSGW feature requires FCGI which no longer has an upstream so...
05/12/2017
- 09:08 PM Tasks #16784: RADOSGW feature requires FCGI which no longer has an upstream source
- Added PR: https://github.com/ceph/ceph/pull/15070
- 04:59 PM Bug #19922 (Resolved): remove unnecessary 'error in read_id for object name: default'
- This change was merged before kraken, but I'd like to see it backported to jewel.
https://github.com/ceph/ceph/pul...
- 04:14 PM Bug #19920 (New): test_cls_rgw.sh FAILED ClsSDK.TestSDKCoverageReplay (2035 ms)
http://qa-proxy.ceph.com/teuthology/teuthology-2017-05-12_05:00:32-smoke-master-testing-basic-vps/1156802/teutholog...
- 01:23 PM Feature #19917 (New): radosgw access log is lacking useful information
- Currently a typical log entry looks like this:...
05/11/2017
- 08:44 PM Backport #19759 (New): kraken: multisite: after CreateBucket is forwarded to master, local bucket...
- note that this is for backporting *two* PRs
- 08:44 PM Backport #19758 (New): jewel: multisite: after CreateBucket is forwarded to master, local bucket ...
- note that this is for backporting *two* PRs
- 03:57 PM Bug #19745 (Pending Backport): multisite: after CreateBucket is forwarded to master, local bucket...
- https://github.com/ceph/ceph/pull/15010 merged, please include in backports
- 12:43 PM Bug #19905 (Fix Under Review): rgw website configure 'RedirectAllRequestsTo' failed to sync to sl...
- 12:21 PM Bug #19905: rgw website configure 'RedirectAllRequestsTo' failed to sync to slave zone
- this can fix the problem
https://github.com/ceph/ceph/pull/15036
- 04:47 AM Bug #19905 (Resolved): rgw website configure 'RedirectAllRequestsTo' failed to sync to slave zone
- Hi, when I enable bucket website and put
<WebsiteConfiguration><RedirectAllRequestsTo><HostName>google.com</HostNa...
- 07:05 AM Bug #19906 (New): Random Segmentation fault thread_name:civetweb-worker
- I'm using ceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7) on Ubuntu 16.04.
Sometimes (once a week) ...
05/10/2017
- 09:56 PM Bug #19894: Write fails on larger files on nfs-ganesha-rgw
- Hi Stein,
I did a by-hand verify of current v10.2.7 stable and upstream nfs-ganesha v2.4-stable, and got correct r...
- 07:17 PM Bug #19904: rgw: Objects of bucket with '_' fail to sync in multisite env
- The problem is that __test__ is an invalid bucket name in S3 but was created using swift.
The sync uses S3 API and t...
- 07:16 PM Bug #19904 (In Progress): rgw: Objects of bucket with '_' fail to sync in multisite env
Steps to Reproduce:
1. Create bucket with '_' using swift
2. Let the bucket sync
3. Create objects in it. Check...
- 01:04 PM Bug #19831 (Fix Under Review): rgw: segfault during the shutdown procedure
- PR: https://github.com/ceph/ceph/pull/15033.
- 08:14 AM Bug #19898 (Fix Under Review): rgw: fix lc list failure when shards not be all created
- 07:20 AM Bug #19898: rgw: fix lc list failure when shards not be all created
- https://github.com/ceph/ceph/pull/15025
- 07:18 AM Bug #19898 (Resolved): rgw: fix lc list failure when shards not be all created
- The default number of lc shards is 32. If we have created only a few lc configs, all of which have been
sharded to lc.31, then radosgw-admin lc list f...
- 08:13 AM Backport #19759 (Need More Info): kraken: multisite: after CreateBucket is forwarded to master, l...
- master fix is not complete yet
- 08:13 AM Backport #19758 (Need More Info): jewel: multisite: after CreateBucket is forwarded to master, lo...
- master fix is not complete yet
- 02:26 AM Bug #19745 (Fix Under Review): multisite: after CreateBucket is forwarded to master, local bucket...
- https://github.com/ceph/ceph/pull/15010
per Shaha, https://github.com/ceph/ceph/pull/14388 is an incomplete fix. s...
05/09/2017
- 04:25 PM Bug #19894 (In Progress): Write fails on larger files on nfs-ganesha-rgw
- 02:09 PM Bug #19894 (In Progress): Write fails on larger files on nfs-ganesha-rgw
- Writing files on an NFS-mounted rgw fails for all files larger than 1MB(?)
2017-05-09 15:45:14.600162 7fe82c144700 1 =...
- 10:01 AM Bug #19627: Several related(?) failures calling the s3 interface, when using Amazon libraries
- For absolute urls there is also https://github.com/ceph/ceph/pull/12861 which bypasses the authentication domain chec...
- 09:37 AM Bug #19886: VersionIdMarker and NextVersionIdMarker are not returned when listing object versions
- https://github.com/ceph/ceph/pull/15014
- 08:45 AM Bug #19886 (Resolved): VersionIdMarker and NextVersionIdMarker are not returned when listing obje...
- When listing the object versions in bucket and version id marker is specified, the returned xml should include '<Vers...
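For reference, the two elements come from the S3 ListObjectVersions response. A minimal sketch of a listing body containing them, built with the stdlib; the element names follow the AWS API, while the marker values are hypothetical:

```python
# Sketch of the response elements in question; names follow the AWS S3
# ListObjectVersions API, the marker values here are made up.
import xml.etree.ElementTree as ET

result = ET.Element("ListVersionsResult")
ET.SubElement(result, "VersionIdMarker").text = "marker-sent-by-client"
ET.SubElement(result, "NextVersionIdMarker").text = "marker-for-next-page"
ET.SubElement(result, "IsTruncated").text = "true"

body = ET.tostring(result, encoding="unicode")
```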
- 08:06 AM Bug #19830 (Fix Under Review): radosgw-admin validate tier type while setting
- master pr: https://github.com/ceph/ceph/pull/14994
05/07/2017
- 02:03 AM Cleanup #19851 (In Progress): Move AES_256_CTR to auth/Crypto for others to reuse
- PR Created by aclamk: https://github.com/ceph/ceph/pull/14498
05/05/2017
- 09:05 PM Bug #19876 (Pending Backport): multisite: bi_list() decode failures
- 07:41 PM Bug #19876 (Resolved): multisite: bi_list() decode failures
- Steps to reproduce:
1) Set up a multisite configuration with two zones.
2) Create a bucket and upload some objects ...
- 02:38 PM Bug #19831: rgw: segfault during the shutdown procedure
- The bug is a regression. It has been introduced in: b22977dfd27967def1b6d4caf83694a2264fc825 ("PR 14801":https://gith...
- 02:24 PM Bug #19831: rgw: segfault during the shutdown procedure
- After pulling the debug symbols for libnspr4:...
- 01:38 PM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- Nathan Cutler wrote:
> If this ends up being fixed by https://github.com/ceph/ceph/pull/14714 that is the same fix a...
- 10:30 AM Bug #19870 (Resolved): rgw: bytes_send and bytes_recv in the msg of usage show returning is 0 in ...
- ➜ build git:(master) ✗ ./bin/radosgw-admin usage show
{
"entries": [
{
            "user": "tester",...
- 09:49 AM Bug #19830: radosgw-admin validate tier type while setting
- basically init_complete() fails since it will fail at RGWSyncModulesManager::create_instance() and since period comm...
- 08:55 AM Bug #19830: radosgw-admin validate tier type while setting
- Yehuda Sadeh wrote:
> The command here in itself is not invalid. You could and should be able to modify the tier typ...
- 02:06 AM Bug #19285: rgw: user quota did not work well on multipart upload
- Yehuda Sadeh wrote:
> It would be great if there was a test for it. I think this could be a candidate to add for the...
05/04/2017
- 09:26 PM Tasks #16784: RADOSGW feature requires FCGI which no longer has an upstream source
- After talking today with Matt Benjamin, it's not clear whether we want to completely take out fcgi for the Luminous r...
- 08:34 PM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- If this ends up being fixed by https://github.com/ceph/ceph/pull/14714 that is the same fix as for #19861 - in that c...
- 08:18 PM Backport #19704 (In Progress): civetweb-worker segmentation fault
- h3. description
We're encountering this crash when nessus scans our gateways. I'll attempt to track down the exact...
- 05:58 PM Backport #19704 (Pending Backport): civetweb-worker segmentation fault
- https://github.com/ceph/ceph/pull/14960
- 05:53 PM Backport #19704: civetweb-worker segmentation fault
- The problem seems to be in the civetweb version that is in kraken. We should move it to whatever we currently have in...
- 05:43 PM Backport #19704: civetweb-worker segmentation fault
fails on the following line:...
- 06:37 PM Bug #15882 (Closed): rgw: GC is not working for __shadow and __multipart objects
- Closing, feel free to reopen if this still happens.
- 06:36 PM Bug #15285 (Closed): Clang reports warning in civetweb.c
- closing for now, feel free to reopen if the issue still exists.
- 06:34 PM Bug #19830: radosgw-admin validate tier type while setting
- The command here in itself is not invalid. You could and should be able to modify the tier type of the zone. What hap...
- 06:30 PM Bug #19834 (Pending Backport): multisite: RGWPeriodPusher CRThread startup races with RGWHTTPMana...
- https://github.com/ceph/ceph/pull/14936
- 06:28 PM Bug #19520 (Pending Backport): rgw: Swift's at-root features (/crossdomain.xml, /info, /healthche...
- 06:24 PM Bug #19285: rgw: user quota did not work well on multipart upload
- It would be great if there was a test for it. I think this could be a candidate to add for the ragweed test.
- 06:17 PM Bug #19861: multisite: memory leak on failed lease in RGWDataSyncShardCR
- https://github.com/ceph/ceph/pull/9950 also
- 06:15 PM Bug #19861 (Resolved): multisite: memory leak on failed lease in RGWDataSyncShardCR
- https://github.com/ceph/ceph/pull/14714
- 06:14 PM Bug #19739: Content-MD5 header is not validated with POST uploads
- we'll also need to create a new test in s3-tests for this one.
- 06:10 PM Bug #19739: Content-MD5 header is not validated with POST uploads
- https://github.com/ceph/ceph/pull/14961
- 06:02 PM Bug #19645 (Resolved): bulkupload should be supported on every zone in multisite
- 01:59 PM Bug #19831: rgw: segfault during the shutdown procedure
- ...
- 12:32 PM Bug #19554 (Pending Backport): multisite: radosgw-admin period commands cant use --remote in anot...
- 02:55 AM Cleanup #19851 (In Progress): Move AES_256_CTR to auth/Crypto for others to reuse
- The following warning was introduced by Adam Kupczyk. So creating a tracker for implementing the changes suggested by...
05/03/2017
- 10:24 PM Backport #19704: civetweb-worker segmentation fault
- Managed to get a core dump. Is there somewhere I can upload it for you? Here's the backtrace:
(gdb) bt
#0 0x0000...
- 08:20 PM Backport #19848 (Rejected): kraken: multisite: incremental metadata sync does not properly advanc...
- 08:20 PM Backport #19847 (Resolved): jewel: multisite: incremental metadata sync does not properly advance...
- https://github.com/ceph/ceph/pull/15556
- 08:19 PM Backport #19844 (Resolved): jewel: Add custom user data support in bucket index
- https://github.com/ceph/ceph/pull/15966
- 08:19 PM Backport #19843 (Resolved): kraken: Add custom user data support in bucket index
- https://github.com/ceph/ceph/pull/15985
- 08:19 PM Backport #19840 (Resolved): kraken: civetweb frontend segfaults in Luminous
- https://github.com/ceph/ceph/pull/16166
- 08:19 PM Backport #19839 (Resolved): kraken: reduce log level of 'storing entry at' in cls_log
- https://github.com/ceph/ceph/pull/16165
- 08:19 PM Backport #19838 (Resolved): jewel: reduce log level of 'storing entry at' in cls_log
- https://github.com/ceph/ceph/pull/15455
- 07:11 PM Backport #19837 (In Progress): kraken: rgw: S3 object uploads using the AWSv4's multi-chunk mode ...
- 07:08 PM Backport #19837: kraken: rgw: S3 object uploads using the AWSv4's multi-chunk mode hang RadosGW
- h3. description
The bug affects PUTs employing the STREAMING-AWS4-HMAC-SHA256-PAYLOAD authentication mode. In such...
- 06:02 PM Backport #19837 (Fix Under Review): kraken: rgw: S3 object uploads using the AWSv4's multi-chunk ...
- https://github.com/ceph/ceph/pull/14939
- 06:00 PM Backport #19837 (Resolved): kraken: rgw: S3 object uploads using the AWSv4's multi-chunk mode han...
- https://github.com/ceph/ceph/pull/14939
- 05:19 PM Bug #19835 (Pending Backport): reduce log level of 'storing entry at' in cls_log
- 04:03 PM Bug #19835: reduce log level of 'storing entry at' in cls_log
This log line is used in 2 locations.
The other reference in the source has level already at 20.
./src/cls/time...
- 03:15 PM Bug #19835 (Resolved): reduce log level of 'storing entry at' in cls_log
- https://github.com/ceph/ceph/pull/14879
- 03:50 PM Bug #19554 (Fix Under Review): multisite: radosgw-admin period commands cant use --remote in anot...
- https://github.com/ceph/ceph/pull/14407
- 03:08 PM Bug #19834 (Resolved): multisite: RGWPeriodPusher CRThread startup races with RGWHTTPManager::set...
- ...
- 12:56 PM Bug #19831 (Resolved): rgw: segfault during the shutdown procedure
- On a very recent master:...
- 12:45 PM Bug #19754 (Pending Backport): rgw: S3 object uploads using the AWSv4's multi-chunk mode hang Rad...
- 12:27 PM Bug #19830 (Resolved): radosgw-admin validate tier type while setting
- One can get into difficult situations with multisite and incorrect tier-types
an example would be:
on a secondary zone d...
05/02/2017
- 04:55 PM Bug #19816 (Fix Under Review): multisite: set_latest_epoch not atomic
- https://github.com/ceph/ceph/pull/14915
added backport tags to related issue http://tracker.ceph.com/issues/19817,...
- 04:54 PM Bug #19817 (Fix Under Review): multisite: RGWPeriodPuller does not call RGWPeriod::reflect() on n...
- https://github.com/ceph/ceph/pull/14915
05/01/2017
- 06:31 PM Bug #19817 (Resolved): multisite: RGWPeriodPuller does not call RGWPeriod::reflect() on new period
- If radosgw first learns about a new period from RGWPeriodPuller, it doesn't call RGWPeriod::reflect() to update its z...
- 06:22 PM Bug #19816 (Resolved): multisite: set_latest_epoch not atomic
- see comment in rgw_period_puller.cc:...
- 03:26 PM Bug #19749 (Pending Backport): civetweb frontend segfaults in Luminous
- 06:27 AM Feature #19644 (Pending Backport): Add custom user data support in bucket index
04/28/2017
- 05:10 PM Bug #18639 (Pending Backport): multisite: incremental metadata sync does not properly advance to ...
- 04:06 PM Backport #19809 (In Progress): kraken: APIs to support Ragweed suite
- 04:05 PM Backport #19809 (Resolved): kraken: APIs to support Ragweed suite
- https://github.com/ceph/ceph/pull/14852
- 04:02 PM Backport #19806 (In Progress): jewel: APIs to support Ragweed suite
- 04:01 PM Backport #19806 (Resolved): jewel: APIs to support Ragweed suite
- https://github.com/ceph/ceph/pull/14851
- 03:46 PM Feature #19804 (Resolved): APIs to support Ragweed suite
- *master PR*: https://github.com/ceph/ceph/pull/13645
04/27/2017
- 08:36 AM Backport #19728 (In Progress): jewel: rgw: add radosgw-admin command to check progress toward buc...
04/26/2017
- 08:32 PM Backport #19786 (In Progress): jewel: ceph jewel failed to create s3 type subuser from admin rest api
- 06:22 PM Backport #19786 (Resolved): jewel: ceph jewel failed to create s3 type subuser from admin rest api
- https://github.com/ceph/ceph/pull/14815
- 05:54 PM Bug #15285: Clang reports warning in civetweb.c
- Is that still happening?
- 05:53 PM Bug #15377 (Closed): How to change the default pool of keeping data uding radosgw
- There are different ways to do it. Probably adding a new placement and setting it as the default placement is the way...
- 05:50 PM Bug #15882: rgw: GC is not working for __shadow and __multipart objects
- Is this still happening? We have fixed a lot of related issues since then.
- 05:43 PM Bug #19645: bulkupload should be supported on every zone in multisite
- 05:41 PM Bug #19749: civetweb frontend segfaults in Luminous
- 07:33 AM Bug #16682 (Pending Backport): ceph jewel failed to create s3 type subuser from admin rest api
- 12:43 AM Bug #16682: ceph jewel failed to create s3 type subuser from admin rest api
- Could we patch this fix into jewel? We can't create a subuser of s3 type via the REST API without this fix.
04/25/2017
- 08:54 PM Feature #9493 (Resolved): Ability to disable keystone revocation polling when using UUID keystone...
- The PR is already being backported at #19499 - we don't need to flag it twice.
- 07:39 PM Feature #9493: Ability to disable keystone revocation polling when using UUID keystone provider
- Jewel backport is in this PR
https://github.com/ceph/ceph/pull/14789
- 07:39 PM Feature #9493 (Pending Backport): Ability to disable keystone revocation polling when using UUID ...
- 08:48 PM Backport #19777 (Resolved): kraken: rgw: implement support for OS-REVOKE extension of OpenStack I...
- https://github.com/ceph/ceph/pull/16164
- 08:48 PM Backport #19776 (Resolved): kraken: multisite: realm rename does not propagate to other clusters
- https://github.com/ceph/ceph/pull/16161
- 08:48 PM Backport #19775 (Resolved): jewel: multisite: realm rename does not propagate to other clusters
- https://github.com/ceph/ceph/pull/15454
- 07:44 PM Backport #19772 (Fix Under Review): jewel: rgw: swift: disable revocation thread under certain ci...
- 07:44 PM Backport #19772 (Resolved): jewel: rgw: swift: disable revocation thread under certain circumstances
- https://github.com/ceph/ceph/pull/14789
- 07:43 PM Feature #19499 (Pending Backport): rgw: implement support for OS-REVOKE extension of OpenStack Id...
- 03:38 PM Backport #19476 (Resolved): jewel: RGW S3 v4 authentication issue with X-Amz-Expires
- 03:38 PM Backport #19523 (Resolved): jewel: `radosgw-admin zone create` command with specified zone-id cre...
- 03:37 PM Backport #19478 (Resolved): jewel: "zonegroupmap set" does not work
- 03:36 PM Backport #19607 (Resolved): jewel: multisite: fetch_remote_obj() gets wrong version when copying ...
- 02:52 PM Bug #19746 (Pending Backport): multisite: realm rename does not propagate to other clusters
- 10:31 AM Bug #19754: rgw: S3 object uploads using the AWSv4's multi-chunk mode hang RadosGW
- https://github.com/ceph/ceph/pull/14770
- 10:21 AM Bug #19754 (In Progress): rgw: S3 object uploads using the AWSv4's multi-chunk mode hang RadosGW
- 02:33 AM Bug #19754 (Resolved): rgw: S3 object uploads using the AWSv4's multi-chunk mode hang RadosGW
- The bug affects PUTs employing the STREAMING-AWS4-HMAC-SHA256-PAYLOAD authentication mode. In such scenario, when *RG...
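As background on that authentication mode: in a STREAMING-AWS4-HMAC-SHA256-PAYLOAD upload each chunk carries its own signature, chained to the previous one, per the AWS SigV4 streaming specification. A sketch of the per-chunk string-to-sign; the key and example values below are hypothetical:

```python
import hashlib
import hmac

def sign_chunk(signing_key: bytes, timestamp: str, scope: str,
               prev_signature: str, chunk: bytes) -> str:
    # Per-chunk string-to-sign for STREAMING-AWS4-HMAC-SHA256-PAYLOAD
    # (AWS SigV4 streaming spec). Each chunk chains on the previous
    # signature, so a gateway that mishandles one chunk cannot verify later ones.
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256-PAYLOAD",
        timestamp,                          # e.g. 20170425T000000Z
        scope,                              # e.g. 20170425/us-east-1/s3/aws4_request
        prev_signature,                     # seed signature for the first chunk
        hashlib.sha256(b"").hexdigest(),    # hash of an empty string, per the spec
        hashlib.sha256(chunk).hexdigest(),  # hash of this chunk's payload
    ])
    return hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```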
- 09:06 AM Backport #19764: jewel: set latest object's acl failed
- merge https://github.com/ceph/s3-tests/pull/160 along with this
- 08:22 AM Backport #19764 (Resolved): jewel: set latest object's acl failed
- https://github.com/ceph/ceph/pull/15451
https://github.com/ceph/s3-tests/pull/160
- 09:05 AM Feature #18923: add a s3-test case in set acl
- jewel backport: https://github.com/ceph/s3-tests/pull/160
- 08:41 AM Bug #19749 (Fix Under Review): civetweb frontend segfaults in Luminous
- Master PR: https://github.com/ceph/ceph/pull/14750
- 08:25 AM Backport #19768 (Resolved): jewel: [multisite] operating bucket's acl&cors is not restricted on s...
- https://github.com/ceph/ceph/pull/15453
- 08:24 AM Backport #19767 (Resolved): jewel: rgw: Delete non-empty bucket in slave zonegroup
- https://github.com/ceph/ceph/pull/15477
- 08:23 AM Bug #19313 (Pending Backport): Delete non-empty bucket in slave zonegroup
- 07:05 AM Bug #19313: Delete non-empty bucket in slave zonegroup
- Could we patch this for the jewel branch? We encounter this issue in a production environment.
- 08:22 AM Backport #19766 (Resolved): kraken: rgw: when uploading the objects continuously in the versioned ...
- https://github.com/ceph/ceph/pull/16163
- 08:22 AM Backport #19765 (Resolved): jewel: rgw: when uploading the objects continuously in the versioned b...
- https://github.com/ceph/ceph/pull/15452
- 08:22 AM Bug #16888 (Pending Backport): [multisite] operating bucket's acl&cors is not restricted on slave...
- 06:59 AM Bug #16888: [multisite] operating bucket's acl&cors is not restricted on slave zone
- Should we patch #14082 and #14376 into the jewel branch?
- 06:58 AM Bug #16888: [multisite] operating bucket's acl&cors is not restricted on slave zone
- https://github.com/ceph/ceph/pull/14376
- 06:47 AM Bug #16888: [multisite] operating bucket's acl&cors is not restricted on slave zone
- https://github.com/ceph/ceph/pull/14082
- 08:21 AM Bug #18208 (Pending Backport): rgw: when uploading the objects continuously in the versioned bucke...
- 06:45 AM Bug #18208: rgw: when uploading the objects continuously in the versioned bucket, some objects wil...
- Do we have a plan to patch this in 10.2.8?
- 08:20 AM Bug #18649 (Pending Backport): set latest object's acl failed
- 06:43 AM Bug #18649: set latest object's acl failed
- This issue should be resolved in the jewel version too. Nathan, could you please help to backport it?
- 07:21 AM Backport #19662 (Resolved): jewel: rgw_file: fix event expire check, don't expire directories bei...
- 07:21 AM Backport #19660 (Resolved): jewel: rgw_file: fix readdir after dir-change
- 07:20 AM Backport #19525 (Resolved): jewel: rgwfs hung due to missing unlock within unlink operation
- 07:20 AM Backport #19469 (Resolved): jewel: rgw_file: leaf objects (which store Unix attrs) can be delete...
- 06:59 AM Backport #19757 (In Progress): jewel: fix failed to create bucket if a non-master zonegroup has a...
- 06:50 AM Backport #19757 (Resolved): jewel: fix failed to create bucket if a non-master zonegroup has a si...
- https://github.com/ceph/ceph/pull/14766
- 06:50 AM Backport #19759 (Resolved): kraken: multisite: after CreateBucket is forwarded to master, local b...
- https://github.com/ceph/ceph/pull/16290
- 06:50 AM Backport #19758 (Resolved): jewel: multisite: after CreateBucket is forwarded to master, local bu...
- https://github.com/ceph/ceph/pull/15450
- 06:50 AM Bug #19756 (Pending Backport): fix failed to create bucket if a non-master zonegroup has a single...
- 06:39 AM Bug #19756: fix failed to create bucket if a non-master zonegroup has a single zone
- This fix is important for production environments; please help to evaluate and backport it to jewel if OK.
- 06:36 AM Bug #19756 (Resolved): fix failed to create bucket if a non-master zonegroup has a single zone
- If a non-master zonegroup has a single zone, the metadata sync thread is not running and
the non-master zonegroup can't...
04/24/2017
- 08:28 PM Bug #19236 (Resolved): multisite: some 'radosgw-admin data sync' commands hang
- 08:27 PM Backport #19353 (Resolved): jewel: multisite: some 'radosgw-admin data sync' commands hang
- 08:27 PM Backport #19321 (Resolved): jewel: multisite: possible infinite loop in RGWFetchAllMetaCR
- 08:26 PM Backport #19211 (Resolved): jewel: rgw: "cluster [WRN] bad locator @X on object @X...." in cluste...
- 08:25 PM Bug #19096 (Resolved): rgw: a few cases where rgw_obj is incorrectly initialized
- 08:25 PM Backport #19145 (Resolved): jewel: rgw: a few cases where rgw_obj is incorrectly initialized
- 08:24 PM Backport #19048 (Resolved): jewel: multisite: some yields in RGWMetaSyncShardCR::full_sync() resu...
- 08:23 PM Backport #18626 (Resolved): jewel: TempURL verification broken for URI encoded object names
- 07:10 PM Backport #19704: civetweb-worker segmentation fault
- If you got core from the crash you can now try to get a backtrace from gdb. If not, then next time hopefully you'll g...
- 05:15 PM Backport #19704: civetweb-worker segmentation fault
- Yehuda Sadeh wrote:
> It'd be great if you could provide more info here. Also, maybe try to install the debug packag...
- 03:56 PM Bug #19745 (Pending Backport): multisite: after CreateBucket is forwarded to master, local bucket...
- 03:26 PM Backport #19474 (In Progress): jewel: multisite: EPERM when trying to read SLO objects as system/...
- https://github.com/ceph/ceph/pull/14752
- 02:33 PM Bug #19749 (Resolved): civetweb frontend segfaults in Luminous
- possibly reproducible in K as well, reproducible in ARM builds of 12.0.1 in openSUSE
gdb bt below:
(gdb) bt full
...
04/21/2017
- 07:11 PM Bug #19746 (Fix Under Review): multisite: realm rename does not propagate to other clusters
- https://github.com/ceph/ceph/pull/14722
- 07:00 PM Bug #19746 (Resolved): multisite: realm rename does not propagate to other clusters
- the realm name is not part of the period, so other clusters won't be notified about realm renames. radosgw-admin shou...
- 06:54 PM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- testing https://github.com/ceph/ceph/pull/14714 as a potential fix
- 02:35 PM Bug #19745 (Resolved): multisite: after CreateBucket is forwarded to master, local bucket may use...
- https://github.com/ceph/ceph/pull/14388
- 12:22 PM Bug #19739 (Resolved): Content-MD5 header is not validated with POST uploads
- Radosgw does not check Content-MD5 header against uploaded file, if upload is done with POST method. It does work wit...
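For context, Content-MD5 is defined by RFC 1864 as the base64 encoding of the raw MD5 digest of the body. A minimal sketch of the check being skipped for POST uploads; the function names are hypothetical, not radosgw's actual code:

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    # RFC 1864: Content-MD5 is the base64 encoding of the raw (binary) MD5 digest.
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

def check_post_upload(body: bytes, header_value: str) -> bool:
    # The check that is skipped for POST uploads: compare the client-supplied
    # header against the digest of what was actually received.
    return content_md5(body) == header_value
```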
- 07:25 AM Backport #19736 (Fix Under Review): radosgw/s3 chunked transfer encodings and fast_forward_request
- 06:37 AM Backport #19736: radosgw/s3 chunked transfer encodings and fast_forward_request
- I have a fix for this problem.
https://github.com/ceph/civetweb/pull/18
I've tested this with swiftest, s3tests, & ...
- 06:29 AM Backport #19736 (Resolved): radosgw/s3 chunked transfer encodings and fast_forward_request
- https://github.com/ceph/ceph/pull/14776
- 05:58 AM Feature #19730 (Fix Under Review): Support delete marker expiration in s3 lifecycle
- 02:39 AM Feature #19730: Support delete marker expiration in s3 lifecycle
- https://github.com/ceph/ceph/pull/14703
- 02:36 AM Feature #19730 (Fix Under Review): Support delete marker expiration in s3 lifecycle
- S3 lifecycle supports a delete marker expiration config. If bucket versioning is enabled, and a delete marker is the only ...
- 03:09 AM Fix #19732 (Resolved): fix: rgw crashed caused by shard id out of range when listing data log
- s3curl --id x --key x -- -s 'http://rgw-address/admin/log?type=data&id=109&max-entries=2'
when shard id out of ran...
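A minimal sketch of the bounds check such a fix would add before indexing the data-log shards; the function name, error handling, and shard count below are assumptions for illustration, not the actual radosgw code:

```python
# Sketch of a shard-id bounds check before listing the data log; reject the
# request instead of indexing past the end of the shard list and crashing.
RGW_DATA_LOG_NUM_SHARDS = 128  # assumed default (rgw_data_log_num_shards)

def list_data_log(shards: list, shard_id: int, max_entries: int) -> list:
    if not (0 <= shard_id < len(shards)):
        raise ValueError(f"shard id {shard_id} out of range [0, {len(shards)})")
    return shards[shard_id][:max_entries]
```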
04/20/2017
- 10:34 PM Backport #19728 (Resolved): jewel: rgw: add radosgw-admin command to check progress toward bucket...
- https://github.com/ceph/ceph/pull/14787
- 09:27 PM Feature #17925 (Pending Backport): rgw: add radosgw-admin command to check progress toward bucket...
- 09:16 PM Backport #19725 (Resolved): kraken: RGW S3 v4 authentication issue with X-Amz-Expires
- https://github.com/ceph/ceph/pull/16162
- 09:16 PM Backport #19724 (Resolved): jewel: RGW S3 v4 authentication issue with X-Amz-Expires
- https://github.com/ceph/ceph/pull/14605
- 09:15 PM Backport #19723 (Resolved): kraken: rgw_file: introduce rgw_lookup type hints
- https://github.com/ceph/ceph/pull/13871
- 09:15 PM Backport #19722 (Resolved): jewel: rgw_file: introduce rgw_lookup type hints
- https://github.com/ceph/ceph/pull/14653
- 09:15 PM Backport #19721 (Rejected): kraken: rgw_file: fix size and (c|m)time unix attrs in write_finish
- 09:15 PM Backport #19720 (Resolved): jewel: rgw_file: fix size and (c|m)time unix attrs in write_finish
- https://github.com/ceph/ceph/pull/15449
- 06:28 PM Bug #19645 (Fix Under Review): bulkupload should be supported on every zone in multisite
- https://github.com/ceph/ceph/pull/14601
- 06:25 PM Bug #19623 (Pending Backport): rgw_file: introduce rgw_lookup type hints
- Fix merged to master https://github.com/ceph/ceph/pull/14458
- 06:17 PM Bug #18921 (Resolved): change log level in user quota sync
- 06:14 PM Bug #19219 (Fix Under Review): Data sync problem in multisite
- https://github.com/ceph/ceph/pull/13851
- 06:12 PM Bug #18828 (Pending Backport): RGW S3 v4 authentication issue with X-Amz-Expires
- 06:10 PM Backport #19704: civetweb-worker segmentation fault
- It'd be great if you could provide more info here. Also, maybe try to install the debug package to include symbols so...
- 04:05 AM Backport #19704 (Resolved): civetweb-worker segmentation fault
- https://github.com/ceph/ceph/pull/14960
- 06:04 PM Bug #15562 (Duplicate): rgw folder name with underscore (_) problem
- http://tracker.ceph.com/issues/19432
- 06:02 PM Bug #18936: rgw slo manifest: etag and size should be optional
- will split out fix for optional etag and push PR; fix for optional size is more complex, will continue it in a separ...
- 05:59 PM Bug #18260: When uploading a large number of objects constantly, the objects number of bucket is ...
- Have not had a chance to reproduce. I will investigate and propose a fix.
- 05:58 PM Bug #19653: rgw_file: fix size and (c|m)time unix attrs in write_finish
- PR against master was https://github.com/ceph/ceph/pull/14609
- 05:54 PM Bug #19653 (Pending Backport): rgw_file: fix size and (c|m)time unix attrs in write_finish
- 04:19 PM Documentation #19397 (Resolved): admin ops: fix the quota section
- 04:19 PM Backport #19462 (Resolved): kraken: rgw: admin ops: fix the quota section
- 04:18 PM Backport #19461 (Resolved): jewel: admin ops: fix the quota section
- 09:43 AM Backport #19461 (In Progress): jewel: admin ops: fix the quota section
- 10:08 AM Backport #19575 (In Progress): jewel: rgw: unsafe access in RGWListBucket_ObjStore_SWIFT::send_re...
- 10:04 AM Backport #19523 (In Progress): jewel: `radosgw-admin zone create` command with specified zone-id ...
- 10:01 AM Backport #19478 (In Progress): jewel: "zonegroupmap set" does not work
- 09:58 AM Backport #19473 (In Progress): jewel: cannot cover the object expiration
- 09:57 AM Backport #19473: jewel: cannot cover the object expiration
- Incompatibility with OpenStack swift looks like a bug, and it might be OK to backport a fix.
https://github.com/ce... - 09:47 AM Backport #19473 (Need More Info): jewel: cannot cover the object expiration
- This is a feature and we don't ordinarily backport features.
- 09:57 AM Backport #19474 (Need More Info): jewel: multisite: EPERM when trying to read SLO objects as syst...
- Backporting this is complicated; better left to an RGW developer.
- 09:41 AM Backport #19469 (In Progress): jewel: rgw_file: leaf objects (which store Unix attrs) can be del...
- 09:36 AM Backport #19660 (In Progress): jewel: rgw_file: fix readdir after dir-change
- 09:32 AM Backport #19525: jewel: rgwfs hung due to missing unlock within unlink operation
- The rgw_file backports tend to conflict with each other, so I am ganging them all up in a single PR.
- 09:30 AM Backport #19662 (In Progress): jewel: rgw_file: fix event expire check, don't expire directories ...
- 09:19 AM Bug #19365 (Resolved): `rgw_read_user_buckets` function forget to update `bool *is_truncated` option
- 02:11 AM Bug #19365: `rgw_read_user_buckets` function forget to update `bool *is_truncated` option
- Merged to https://github.com/ceph/ceph/commit/cde635965924125bb761f67716f081ba5bf88912
- 09:03 AM Bug #19195 (Resolved): when converting region_map we need to use rgw_zone_root_pool
- 09:02 AM Backport #19355 (Resolved): jewel: when converting region_map we need to use rgw_zone_root_pool
- 09:02 AM Bug #19231 (Resolved): upgrade to multisite v2 fails if there is a zone without zone info
- 09:02 AM Backport #19330 (Resolved): jewel: upgrade to multisite v2 fails if there is a zone without zone ...
- 09:01 AM Bug #19013 (Resolved): radosgw-admin: add the 'object stat' command to usage
- 09:00 AM Backport #19163 (Resolved): jewel: radosgw-admin: add the 'object stat' command to usage
- 08:59 AM Bug #19026 (Resolved): rgw: typo in rgw_admin.cc
- 08:59 AM Backport #19155 (Resolved): jewel: rgw: typo in rgw_admin.cc
- 08:56 AM Backport #18866 (Resolved): jewel: 'radosgw-admin sync status' on master zone of non-master zoneg...
- 03:13 AM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- Complete output from a ~2 hours massif run attached.
The top allocation groups are all the same, with similar rati...
- 12:51 AM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- Same with hopefully better formatting:...
- 12:50 AM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- I did a bit more testing with valgrind. While memcheck didn't yield anything consistent, a massif run over ~30 minute...
04/19/2017
- 06:38 PM Bug #19268: Ceilometer receives NoSuchBucket via Swift API
- I had the same things happening, with both user accounts and with service accounts. This is how I fixed it, not sure ...
- 03:14 PM Bug #15226: vstart.sh/mstart.sh spawned rgw instances do not create pid files
- 08:34 AM Feature #19689 (New): rgw: bucket logging
- Trying to implement the S3 bucket logging feature (http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html)
04/18/2017
- 07:37 PM Backport #19663 (Resolved): kraken: rgw_file: fix event expire check, don't expire directories be...
- https://github.com/ceph/ceph/pull/13871
- 07:37 PM Backport #19662 (Resolved): jewel: rgw_file: fix event expire check, don't expire directories bei...
- https://github.com/ceph/ceph/pull/14653
- 07:37 PM Backport #19661 (Resolved): kraken: rgw_file: fix readdir after dir-change
- https://github.com/ceph/ceph/pull/13871
- 07:37 PM Backport #19660 (Resolved): jewel: rgw_file: fix readdir after dir-change
- https://github.com/ceph/ceph/pull/14653
- 03:16 PM Bug #18343: Error responses do not fully conform to AWS spec
- I would like to see this included too, can I help?
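For reference, AWS S3 error responses are XML documents of the following shape, which is what conformance work like #18343 targets (the field values below are illustrative, not from a real response):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>example-object</Key>
  <RequestId>tx00000000000000000000-example</RequestId>
  <HostId>example-host-id</HostId>
</Error>
```

SDKs commonly key their retry and error-mapping logic off `Code`, so missing or misnamed elements break client behavior even when the HTTP status is correct.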
- 02:16 PM Feature #19644: Add custom user data support in bucket index
- This feature enables us to have a meta field (which is generic in nature to store any user defined data) in the bucke...
- 09:10 AM Feature #19644: Add custom user data support in bucket index
- Can you give an example to explain this feature in more detail? Or
is there any existing documentation about this feature? Thanks!
- 05:28 AM Feature #19644 (Fix Under Review): Add custom user data support in bucket index
- https://github.com/ceph/ceph/pull/14592
- 05:26 AM Feature #19644 (Resolved): Add custom user data support in bucket index
- One of our big data use cases was to fetch all segments' size of a swift DLO manifest without having to go through ea...
- 01:12 PM Bug #19653 (Resolved): rgw_file: fix size and (c|m)time unix attrs in write_finish
- This codepath serialized Unix attributes, but seemingly stored the pre-update values. This probably "worked" if the ...
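The fix pattern for this class of bug is generic: apply the new size and timestamps to the in-memory state first, then serialize, so the stored attributes reflect the post-write values. A minimal sketch with hypothetical names (not the actual rgw_file code):

```cpp
#include <cstdint>
#include <ctime>

// Hypothetical Unix attributes; rgw_file keeps similar cached state.
struct UnixAttrs {
    uint64_t size = 0;
    time_t ctime = 0;
    time_t mtime = 0;
};

struct FileState {
    UnixAttrs attrs{};       // live in-memory attributes
    UnixAttrs serialized{};  // stands in for the encoded xattr blob
};

// Correct ordering: update first, then serialize. The #19653 bug was the
// reverse: serializing while the attrs still held pre-update values.
static void write_finish(FileState& st, uint64_t new_size, time_t now) {
    st.attrs.size = new_size;  // update the live state...
    st.attrs.mtime = now;
    st.attrs.ctime = now;
    st.serialized = st.attrs;  // ...then store the updated values
}
```

With the buggy ordering, a subsequent stat served from the stored copy would report the pre-write size and times.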
- 11:45 AM Backport #19607 (In Progress): jewel: multisite: fetch_remote_obj() gets wrong version when copyi...
- https://github.com/ceph/ceph/pull/14607
- 11:42 AM Backport #19608 (In Progress): kraken: multisite: fetch_remote_obj() gets wrong version when copy...
- https://github.com/ceph/ceph/pull/14606
- 10:41 AM Backport #19476 (In Progress): jewel: RGW S3 v4 authentication issue with X-Amz-Expires
- https://github.com/ceph/ceph/pull/14605
- 09:55 AM Bug #18003: slave zonegroup cannot enable the bucket versioning
- Zhandong Guo wrote:
> Maybe PR https://github.com/ceph/ceph/pull/12175 needs to be backported for jewel and kraken.
> It ...
- 09:30 AM Bug #18003: slave zonegroup cannot enable the bucket versioning
- Maybe PR https://github.com/ceph/ceph/pull/12175 needs to be backported for jewel and kraken.
It really resolves this issue.
- 08:02 AM Bug #19645 (Resolved): bulkupload should be supported on every zone in multisite
- Currently, swift bulkupload only takes effect in the metadata master zone; this feature should be supported on every zone...
- 04:33 AM Bug #19627: Several related(?) failures calling the s3 interface, when using Amazon libraries
- I'm uploading a slight evolution of your "main.go" file, which, lacking any imagination, I named "d1.go". It uses envi...
- 04:28 AM Bug #19627: Several related(?) failures calling the s3 interface, when using Amazon libraries
- A workaround for this is to add this option to the radosgw civetweb frontend line:
authentication_domain=campodus.e...
- 03:19 AM Bug #19627: Several related(?) failures calling the s3 interface, when using Amazon libraries
- *But,* there's another problem. When the go program makes an http request, it puts in "absolute URLs" (and not absol...
- 03:10 AM Bug #19627: Several related(?) failures calling the s3 interface, when using Amazon libraries
- If you haven't done anything with DNS then some (all?) of your test programs should fail. If you don't
have somethin...
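The workaround mentioned above (adding `authentication_domain` to the civetweb frontend line) would typically go in ceph.conf. A sketch with placeholder values, since the reporter's actual domain is truncated above; per the comments, this lets civetweb match the host in the absolute URLs the Go SDK sends:

```ini
[client.rgw.gateway]
# Sketch only: section name, port, and domain are placeholders.
rgw frontends = civetweb port=7480 authentication_domain=s3.example.com
```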
04/17/2017
- 09:52 PM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- It looks like the dynamic log level change didn't work the previous time, so here are actual level 20 log snippets.
- 09:17 PM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- Some more detail on the test:
swift-bench.conf:
[bench]
auth = http://ceph2-1/auth/v1.0
user = benchmark:swift
k...
- 07:36 PM Bug #19446: rgw: heavy memory leak when multisite sync fail on 10.2.6
- To rule out libcurl issues I upgraded to 7.52.1-4 from Debian sid. This did not resolve the issue.
The vast majori...
04/16/2017
- 09:58 PM Bug #19627: Several related(?) failures calling the s3 interface, when using Amazon libraries
- I don't understand the question, but both the successful and unsuccessful cases use the same hostname and IP address, d...