Activity
From 09/11/2018 to 10/10/2018
10/10/2018
- 11:48 PM Backport #36216 (In Progress): mimic: multisite: data full sync does not limit concurrent bucket ...
- https://github.com/ceph/ceph/pull/24536
- 09:19 PM Bug #36372: OSD:Segmentation fault thread_name:tp_osd_tp--10.2.10
- Looks like a cls_rgw bug?
- 08:58 AM Bug #36372 (Duplicate): OSD:Segmentation fault thread_name:tp_osd_tp--10.2.10
- I have a radosgw cluster, using SSD as the index pool.
This is the second time that three SSD OSDs went down with error: Caught...
- 05:00 PM Backport #36382 (In Progress): luminous: resharding produces invalid values of bucket stats
- 04:40 PM Backport #36382 (Resolved): luminous: resharding produces invalid values of bucket stats
- https://github.com/ceph/ceph/pull/24527
- 04:47 PM Backport #36381 (In Progress): mimic: resharding produces invalid values of bucket stats
- 04:40 PM Backport #36381 (Resolved): mimic: resharding produces invalid values of bucket stats
- https://github.com/ceph/ceph/pull/24526
- 04:39 PM Bug #36290 (Pending Backport): resharding produces invalid values of bucket stats
- 03:55 PM Bug #36377 (Closed): rgw keystone implicit tenants - differences between ceph.conf and runtime co...
- I would imagine your radosgw-admin is running as a user such as "client.admin", and your config setting is defined in...
- 12:41 PM Bug #36377 (Closed): rgw keystone implicit tenants - differences between ceph.conf and runtime co...
- Hi All,
I've changed the 'rgw keystone implicit tenants' option to 'True' in ceph.conf and applied it to my env...
- 11:22 AM Backport #36214 (In Progress): luminous: multisite: memory leak from curl_multi_add_handle()
- https://github.com/ceph/ceph/pull/24519
- 11:20 AM Backport #36213 (In Progress): mimic: multisite: memory leak from curl_multi_add_handle()
- https://github.com/ceph/ceph/pull/24518
- 11:03 AM Backport #36211 (In Progress): mimic: multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- https://github.com/ceph/ceph/pull/24516
- 01:08 AM Backport #36212 (In Progress): luminous: multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- https://github.com/ceph/ceph/pull/24507
10/08/2018
- 01:37 PM Bug #36343 (Duplicate): radosgw index has been inconsistent with reality
- 12:21 PM Bug #36343: radosgw index has been inconsistent with reality
- Sorry for creating this twice; please delete this one.
- 12:16 PM Bug #36343 (Duplicate): radosgw index has been inconsistent with reality
- h3. Background:
Ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)
Index pool is ...
- 12:48 PM Backport #36332 (In Progress): mimic: Beast frontend tries to bind (privileged) ports after dropp...
- https://github.com/ceph/ceph/pull/24436
- 12:19 PM Bug #36344 (Need More Info): radosgw index has been inconsistent with reality
- h3. Background:
Ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)
Index pool is on...
- 10:06 AM Backport #36333 (In Progress): luminous: Beast frontend tries to bind (privileged) ports after dr...
10/06/2018
- 03:34 PM Backport #36333: luminous: Beast frontend tries to bind (privileged) ports after dropping privile...
- https://github.com/ceph/ceph/pull/24454
- 03:11 PM Backport #36333 (Resolved): luminous: Beast frontend tries to bind (privileged) ports after dropp...
- https://github.com/ceph/ceph/pull/24454
- 03:14 PM Backport #36311 (Resolved): luminous: multi-site: object name should be urlencoded when we put it...
- 03:11 PM Backport #36332 (Resolved): mimic: Beast frontend tries to bind (privileged) ports after dropping...
- https://github.com/ceph/ceph/pull/24436
10/05/2018
- 08:10 PM Backport #36311: luminous: multi-site: object name should be urlencoded when we put it into ES
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/24424
merged
- 03:23 PM Bug #36041: Beast frontend tries to bind (privileged) ports after dropping privileges to do so
- mimic backport started at https://github.com/ceph/ceph/pull/24436
- 03:22 PM Bug #36041 (Pending Backport): Beast frontend tries to bind (privileged) ports after dropping pri...
- 09:30 AM Bug #36290 (Fix Under Review): resharding produces invalid values of bucket stats
- master pr: https://github.com/ceph/ceph/pull/24444
- 08:35 AM Bug #36293: InvalidBucketName expected in more cases: uppercase, adjacent chars, underscores
- Thanks for the comment. So it's more than just updating valid_s3_bucket_name().
What you mean is:
1. add entry fo...
- 02:32 AM Backport #36137 (Resolved): luminous: multisite: update index segfault on shutdown/realm reload
- 02:32 AM Backport #36202 (Resolved): luminous: multisite: intermittent test_bucket_index_log_trim failures
- 02:30 AM Bug #24117 (Resolved): cls_bucket_list fails causes cascading osd crashes
- 02:29 AM Backport #24630 (Resolved): luminous: cls_bucket_list fails causes cascading osd crashes
- 02:28 AM Backport #36128 (Resolved): luminous: abort_bucket_multiparts() fails on missing multipart meta o...
- 02:27 AM Backport #26979 (Resolved): luminous: multisite: intermittent failures in test_bucket_sync_disabl...
- 02:26 AM Bug #35539 (Resolved): multisite: out of order updates to sync status markers
- 02:26 AM Backport #35703 (Resolved): luminous: multisite: out of order updates to sync status markers
- 02:25 AM Backport #35980 (Resolved): luminous: multisite: data sync error repo processing does not back of...
- 02:23 AM Backport #36124 (Resolved): luminous: Chunked encoding fails if chunk greater than 1MiB
10/04/2018
- 09:48 PM Backport #36137: luminous: multisite: update index segfault on shutdown/realm reload
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24397
merged
- 09:47 PM Backport #36202: luminous: multisite: intermittent test_bucket_index_log_trim failures
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24398
merged
- 09:08 PM Feature #36319 (New): rgw: expose useful x-amz headers beyond request-id
- rgw has some semi-internal data that would be very useful to expose to help debugging & metrics. Much of this is alre...
- 08:08 PM Backport #24630: luminous: cls_bucket_list fails causes cascading osd crashes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24391
merged
- 08:06 PM Backport #36128: luminous: abort_bucket_multiparts() fails on missing multipart meta objects
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24389
merged
- 08:02 PM Backport #26979: luminous: multisite: intermittent failures in test_bucket_sync_disable_enable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24316
merged
- 08:01 PM Backport #35703: luminous: multisite: out of order updates to sync status markers
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24317
merged
- 08:01 PM Backport #35980: luminous: multisite: data sync error repo processing does not back off on empty
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24318
merged
- 08:01 PM Backport #36124: luminous: Chunked encoding fails if chunk greater than 1MiB
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24361
merged
- 07:05 PM Bug #24568 (Resolved): Delete marker generated by lifecycle has no owner
- 07:05 PM Backport #26847 (Resolved): mimic: Delete marker generated by lifecycle has no owner
- 05:49 PM Bug #36293: InvalidBucketName expected in more cases: uppercase, adjacent chars, underscores
- @tmsn sure, you're welcome to take it. I do think that we need to overhaul a bit the way we control the bucket names ...
- 06:59 AM Bug #36293: InvalidBucketName expected in more cases: uppercase, adjacent chars, underscores
- Hello, I am new to ceph and want to start some contribution.
Can I take this issue, as it seems simple and a good start...
- 01:43 PM Backport #36207 (Closed): luminous: multisite: invalid read in RGWCloneMetaLogCoroutine
- Thanks. It looks like this bug is not present in luminous, so I'll close it.
- 10:02 AM Backport #36311 (In Progress): luminous: multi-site: object name should be urlencoded when we put...
- 09:16 AM Backport #36311 (Resolved): luminous: multi-site: object name should be urlencoded when we put it...
- https://github.com/ceph/ceph/pull/24424
- 09:12 AM Bug #23216 (Pending Backport): multi-site: object name should be urlencoded when we put it into ES
- 02:54 AM Bug #36106: rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mounting a b...
- @min-sheng Lin
Sorry, I didn't read your PR carefully.
- 01:16 AM Backport #36140 (In Progress): mimic: multisite: make redundant data sync errors less scary
- https://github.com/ceph/ceph/pull/24418
- 01:14 AM Backport #36138 (In Progress): mimic: multisite: update index segfault on shutdown/realm reload
- https://github.com/ceph/ceph/pull/24417
10/03/2018
- 10:05 PM Bug #21615 (Resolved): radosgw-admin zonegroup get and zone get should return defaults when there...
- 10:05 PM Backport #22211: jewel: radosgw-admin zonegroup get and zone get should return defaults when ther...
- Jewel is EOL
- 10:05 PM Backport #22211 (Rejected): jewel: radosgw-admin zonegroup get and zone get should return default...
- 09:50 PM Backport #36208 (In Progress): mimic: multisite: invalid read in RGWCloneMetaLogCoroutine
- 09:49 PM Backport #36207 (Need More Info): luminous: multisite: invalid read in RGWCloneMetaLogCoroutine
- This one seems to need b2143cded0e which I hesitate to bring into luminous.
- 08:59 PM Bug #27219: lock in resharding may expire before the dynamic resharding completes
- PR: https://github.com/ceph/ceph/pull/24406
- 08:50 PM Bug #18806 (Resolved): anonymous user's error code of getting object is not consistent with SWIFT
- 08:50 PM Backport #19177 (Rejected): jewel: anonymous user's error code of getting object is not consisten...
- Jewel is EOL
- 08:49 PM Bug #19554 (Resolved): multisite: radosgw-admin period commands can't use --remote in another zone...
- 08:49 PM Backport #20029 (Rejected): jewel: multisite: radosgw-admin period commands can't use --remote in ...
- Jewel is EOL
- 03:43 PM Bug #36302 (Fix Under Review): librgw: crashes in multisite configuration
- https://github.com/ceph/ceph/pull/24402
- 03:24 PM Bug #36302 (Resolved): librgw: crashes in multisite configuration
- librgw is missing the calls to rgw_http_client_init() and rgw::curl::setup_curl() that allocate the RGWHTTPManager an...
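The failure mode described in this entry (a library entry point skipping the process-wide setup that the daemon's main() performs) can be sketched generically. The names below are illustrative stand-ins, not the actual rgw API; only rgw_http_client_init() and rgw::curl::setup_curl() are taken from the report:

```python
import threading

_init_lock = threading.Lock()
_initialized = False
_http_manager = None  # stand-in for the RGWHTTPManager the daemon allocates


def _global_init():
    # One-time process-wide setup; in rgw this would correspond to
    # rgw_http_client_init() and rgw::curl::setup_curl().
    global _http_manager
    _http_manager = object()


def ensure_initialized():
    # Idempotent, thread-safe guard: every entry point can call it freely.
    global _initialized
    with _init_lock:
        if not _initialized:
            _global_init()
            _initialized = True


def library_entry_point():
    ensure_initialized()  # the reported bug: librgw skipped this setup entirely
    return _http_manager is not None
```

Guarding every entry point with an idempotent init keeps the library safe regardless of which call the consumer makes first.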
- 12:34 PM Backport #36201 (In Progress): mimic: multisite: intermittent test_bucket_index_log_trim failures
- 12:31 PM Backport #36202 (In Progress): luminous: multisite: intermittent test_bucket_index_log_trim failures
- 12:21 PM Backport #36137 (In Progress): luminous: multisite: update index segfault on shutdown/realm reload
- 09:15 AM Backport #24630: luminous: cls_bucket_list fails causes cascading osd crashes
- first attempted backport https://github.com/ceph/ceph/pull/22928 was closed due to github refusal to cooperate with f...
- 03:49 AM Bug #36106: rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mounting a b...
- hoan nv wrote:
> i found some comment in code
>
> /* because RGWReaddirRequest::default_max is 1000 (XXX make
>... - 03:39 AM Bug #36106: rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mounting a b...
- I found some comments in the code:
/* because RGWReaddirRequest::default_max is 1000 (XXX make
* configurable?) a...
- 03:40 AM Backport #36128 (In Progress): luminous: abort_bucket_multiparts() fails on missing multipart met...
- https://github.com/ceph/ceph/pull/24389
- 03:38 AM Backport #36129 (In Progress): mimic: abort_bucket_multiparts() fails on missing multipart meta o...
- https://github.com/ceph/ceph/pull/24388
10/02/2018
- 10:31 PM Bug #36293 (Resolved): InvalidBucketName expected in more cases: uppercase, adjacent chars, under...
- Reported for v12.2.8 Luminous here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030207.html
...
- 04:32 PM Backport #35978 (Resolved): luminous: multisite: incremental data sync makes unnecessary call to ...
- 03:40 PM Backport #35978: luminous: multisite: incremental data sync makes unnecessary call to RGWReadRemo...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24242
merged
- 04:14 PM Bug #36290: resharding produces invalid values of bucket stats
- reproducible with reshard add / dynamic resharding as well
- 04:01 PM Bug #36290 (Resolved): resharding produces invalid values of bucket stats
- Seen in 12.2.7 where a manual resharding of a bucket like ...
- 09:47 AM Feature #36287 (Closed): rgw: Add archive data sync module
- Add a new archive zone in a multizone configuration allowing to have the flexibility of a S3 object history in that z...
- 07:12 AM Backport #36125 (In Progress): mimic: Chunked encoding fails if chunk greater than 1MiB
- https://github.com/ceph/ceph/pull/24363
- 04:56 AM Bug #25192: radosgw: civetweb: redirect_to_https_port: no chunk, no close, no size.
Sorry for the late reply; perhaps I missed the notification.
I debugged deeper. The problem, actually, is why Civetweb is waiting 3...
- 04:38 AM Backport #36124 (In Progress): luminous: Chunked encoding fails if chunk greater than 1MiB
- https://github.com/ceph/ceph/pull/24361
10/01/2018
- 08:08 PM Backport #25025 (Resolved): luminous: cls_rgw test is only run in rados suite: add it to rgw suit...
- 02:43 PM Backport #25025: luminous: cls_rgw test is only run in rados suite: add it to rgw suite as well
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24070
merged
09/30/2018
- 09:45 AM Bug #36265: rgw: list bucket can not show the object uploaded by RGWPostObj when enable bucket ve...
- https://github.com/ceph/ceph/pull/24341 fixes this
- 09:43 AM Bug #36265 (Resolved): rgw: list bucket can not show the object uploaded by RGWPostObj when enabl...
- when the bucket has **versioning enabled** and we upload an object via **POST** object,
the POST object script is:
```
#!/usr/bin...
09/28/2018
- 07:57 AM Backport #35979 (In Progress): mimic: multisite: data sync error repo processing does not back of...
- 07:51 AM Backport #35980 (In Progress): luminous: multisite: data sync error repo processing does not back...
- 07:33 AM Backport #35703 (In Progress): luminous: multisite: out of order updates to sync status markers
- 07:30 AM Backport #26979 (In Progress): luminous: multisite: intermittent failures in test_bucket_sync_dis...
- 07:11 AM Backport #26945 (Need More Info): luminous: tempest tests failing with pysaml2 version conflict
- No qa/tasks/keystone.py or qa/tasks/tempest.py in luminous, so my guess is this shouldn't have been marked for backpo...
09/27/2018
- 05:47 PM Bug #36233: when using nfs-ganesha to upload file, rgw es sync module get failed
- updated pr: https://github.com/ceph/ceph/pull/24492
- 03:53 AM Bug #36233 (Resolved): when using nfs-ganesha to upload file, rgw es sync module get failed
- We use S3FS and nfs-ganesha to upload file to s3 and sync metadata to elasticsearch by es sync module. When uploading...
- 05:43 PM Bug #36234: swift: dump_account_metadata doesn't return quota info
- 06:32 AM Bug #36234: swift: dump_account_metadata doesn't return quota info
- Seems like the parser doesn't like the formatting, here's the RAW version...
- 05:57 AM Bug #36234 (New): swift: dump_account_metadata doesn't return quota info
- h3. Context
* Ceph cluster running 12.2.8
* RadosGW with keystone integration
* reproduced on 13.2.2 (with ce...
- 05:35 PM Bug #23884 (In Progress): rgw: orphans find should avoid objects in indexless buckets
- https://github.com/ceph/ceph/pull/24152
- 07:54 AM Bug #36235 (New): unable to create users in different tenants with same e-mail address
- I'm unable to create RGW users in different tenants with the same e-mail address. I think this should be possible.
...
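The behavior this report asks for would follow if the e-mail uniqueness index were keyed by (tenant, email) rather than by email alone. A toy sketch of that design choice (not the rgw implementation):

```python
class UserDirectory:
    """Toy user store whose e-mail uniqueness is scoped per tenant."""

    def __init__(self):
        self._email_index = {}  # (tenant, email) -> uid

    def create_user(self, tenant, uid, email):
        key = (tenant, email)  # keying by tenant makes reuse across tenants legal
        if key in self._email_index:
            raise ValueError(f"email {email!r} already in use in tenant {tenant!r}")
        self._email_index[key] = uid


d = UserDirectory()
d.create_user("tenant-a", "alice", "user@example.com")
d.create_user("tenant-b", "bob", "user@example.com")  # allowed: different tenant
```

A duplicate within one tenant would still be rejected, preserving per-tenant lookup by e-mail.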
09/26/2018
- 10:26 PM Backport #36139 (Resolved): luminous: multisite: make redundant data sync errors less scary
- 10:13 PM Backport #36141 (Resolved): luminous: rgw: return x-amz-version-id: null when delete obj in versi...
- 04:30 PM Backport #36141: luminous: rgw: return x-amz-version-id: null when delete obj in versioning suspe...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24190
merged
- 10:10 PM Backport #35856 (Resolved): luminous: multisite: segfault on shutdown/realm reload
- 04:29 PM Backport #35856: luminous: multisite: segfault on shutdown/realm reload
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24231
merged
- 10:05 PM Backport #36223 (Resolved): mimic: rgw: default quota not set in radosgw for Openstack users
- https://github.com/ceph/ceph/pull/24907
- 10:05 PM Backport #36222 (Resolved): luminous: rgw: default quota not set in radosgw for Openstack users
- https://github.com/ceph/ceph/pull/24547
- 04:19 PM Backport #36216 (Resolved): mimic: multisite: data full sync does not limit concurrent bucket sync
- https://github.com/ceph/ceph/pull/24536
- 04:19 PM Backport #36215 (Resolved): luminous: multisite: data full sync does not limit concurrent bucket ...
- https://github.com/ceph/ceph/pull/24857
- 04:18 PM Backport #36214 (Resolved): luminous: multisite: memory leak from curl_multi_add_handle()
- https://github.com/ceph/ceph/pull/24519
- 04:18 PM Backport #36213 (Resolved): mimic: multisite: memory leak from curl_multi_add_handle()
- https://github.com/ceph/ceph/pull/24518
- 04:18 PM Backport #36212 (Resolved): luminous: multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- https://github.com/ceph/ceph/pull/24507
- 04:18 PM Backport #36211 (Resolved): mimic: multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- https://github.com/ceph/ceph/pull/24516
- 04:17 PM Backport #36208 (Resolved): mimic: multisite: invalid read in RGWCloneMetaLogCoroutine
- https://github.com/ceph/ceph/pull/24414
- 04:17 PM Backport #36207 (Closed): luminous: multisite: invalid read in RGWCloneMetaLogCoroutine
- 04:16 PM Backport #36202 (Resolved): luminous: multisite: intermittent test_bucket_index_log_trim failures
- https://github.com/ceph/ceph/pull/24398
- 04:16 PM Backport #36201 (Resolved): mimic: multisite: intermittent test_bucket_index_log_trim failures
- https://github.com/ceph/ceph/pull/24400
- 04:16 PM Bug #24595 (Pending Backport): rgw: default quota not set in radosgw for Openstack users
- 04:00 PM Bug #24595: rgw: default quota not set in radosgw for Openstack users
- Casey Bodley wrote:
> https://github.com/ceph/ceph/pull/24177
merged
Reviewed-by: Abhishek Lekshmanan <abhishek.le...
- 01:28 PM Bug #26897 (Pending Backport): multisite: data full sync does not limit concurrent bucket sync
09/25/2018
- 10:09 PM Bug #36034 (Pending Backport): multisite: intermittent test_bucket_index_log_trim failures
- 08:48 PM Bug #35851 (Pending Backport): multisite: invalid read in RGWCloneMetaLogCoroutine
- 08:41 PM Bug #35715 (Pending Backport): multisite: memory leak from curl_multi_add_handle()
- 08:29 PM Backport #36139: luminous: multisite: make redundant data sync errors less scary
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24135
merged
- 08:28 PM Bug #35812 (Pending Backport): multisite: use-after-free in RGWAsyncGetBucketInstanceInfo
- 05:54 PM Bug #36041: Beast frontend tries to bind (privileged) ports after dropping privileges to do so
- 02:32 PM Bug #36041: Beast frontend tries to bind (privileged) ports after dropping privileges to do so
- https://github.com/ceph/ceph/pull/24271
- 01:40 PM Backport #36141 (In Progress): luminous: rgw: return x-amz-version-id: null when delete obj in ve...
- 01:40 PM Backport #36142 (In Progress): mimic: rgw: return x-amz-version-id: null when delete obj in versi...
- 11:16 AM Backport #36179 (Need More Info): mimic: Update radosgw-admin bucket link command for bucket rena...
- 11:11 AM Backport #36179 (Rejected): mimic: Update radosgw-admin bucket link command for bucket rename and...
- 11:16 AM Bug #35885 (Fix Under Review): Update radosgw-admin bucket link command for bucket rename and move
- 11:10 AM Bug #35885 (Pending Backport): Update radosgw-admin bucket link command for bucket rename and move
- 11:12 AM Backport #36178 (Need More Info): luminous: Update radosgw-admin bucket link command for bucket r...
- Backport PR opened early. All the commits have to be re-checked after the master PR gets merged.
- 11:11 AM Backport #36178 (Rejected): luminous: Update radosgw-admin bucket link command for bucket rename ...
- https://github.com/ceph/ceph/pull/24195
- 10:57 AM Bug #24873 (Resolved): 'radosgw-admin sync error trim' only trims partially
- 10:57 AM Backport #24983 (Resolved): luminous: 'radosgw-admin sync error trim' only trims partially
- 10:56 AM Backport #24985 (Resolved): luminous: multisite: object metadata operations are skipped by sync
- 10:56 AM Backport #35709 (Resolved): luminous: deadlock on shutdown in RGWIndexCompletionManager::stop()
- 07:47 AM Backport #35977 (In Progress): mimic: multisite: incremental data sync makes unnecessary call to ...
- -PR: https://github.com/ceph/ceph/pull/24261-
09/24/2018
- 12:41 PM Backport #35978 (In Progress): luminous: multisite: incremental data sync makes unnecessary call ...
- PR: https://github.com/ceph/ceph/pull/24242
- 11:06 AM Backport #36139 (In Progress): luminous: multisite: make redundant data sync errors less scary
- 11:00 AM Backport #36139 (Resolved): luminous: multisite: make redundant data sync errors less scary
- https://github.com/ceph/ceph/pull/24135
- 11:00 AM Backport #36142 (Resolved): mimic: rgw: return x-amz-version-id: null when delete obj in versioni...
- https://github.com/ceph/ceph/pull/24189
- 11:00 AM Backport #36141 (Resolved): luminous: rgw: return x-amz-version-id: null when delete obj in versi...
- https://github.com/ceph/ceph/pull/24190
- 11:00 AM Backport #36140 (Resolved): mimic: multisite: make redundant data sync errors less scary
- https://github.com/ceph/ceph/pull/24418
- 11:00 AM Backport #36138 (Resolved): mimic: multisite: update index segfault on shutdown/realm reload
- https://github.com/ceph/ceph/pull/24417
- 11:00 AM Backport #36137 (Resolved): luminous: multisite: update index segfault on shutdown/realm reload
- https://github.com/ceph/ceph/pull/24397
- 11:00 AM Backport #36129 (Resolved): mimic: abort_bucket_multiparts() fails on missing multipart meta objects
- https://github.com/ceph/ceph/pull/24388
- 11:00 AM Backport #36128 (Resolved): luminous: abort_bucket_multiparts() fails on missing multipart meta o...
- https://github.com/ceph/ceph/pull/24389
- 11:00 AM Backport #36125 (Resolved): mimic: Chunked encoding fails if chunk greater than 1MiB
- https://github.com/ceph/ceph/pull/24363
- 11:00 AM Backport #36124 (Resolved): luminous: Chunked encoding fails if chunk greater than 1MiB
- https://github.com/ceph/ceph/pull/24361
- 03:59 AM Backport #35857 (In Progress): mimic: multisite: segfault on shutdown/realm reload
- https://github.com/ceph/ceph/pull/24235
- 03:33 AM Backport #35856 (In Progress): luminous: multisite: segfault on shutdown/realm reload
- https://github.com/ceph/ceph/pull/24231
09/21/2018
- 04:19 PM Bug #36034 (Fix Under Review): multisite: intermittent test_bucket_index_log_trim failures
- https://github.com/ceph/ceph/pull/24221
- 12:33 PM Bug #36106: rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mounting a b...
- Updated title, specific to the new-ish ability to export a bucket (haven't reproduced yet)
- 12:32 PM Bug #36106 (Triaged): rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mo...
- 10:14 AM Bug #36106: rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mounting a b...
- Pull Request:
https://github.com/ceph/ceph/pull/24216
- 10:13 AM Bug #36106: rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mounting a b...
- Related issue:
https://github.com/nfs-ganesha/nfs-ganesha/issues/235
- 09:53 AM Bug #36106 (Triaged): rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW when mo...
- If using nfs-ganesha-rgw to export a bucket as NFS, only the first 1000 files are listed.
> [vagrant@admin ~]$ ls -l /mnt/...
- 02:53 AM Bug #36092: Radosgw elastic search sync module not working properly (all result same)
- My ceph cluster is running version 13.2.1.
09/20/2018
- 08:38 PM Bug #36077: rgw: master is on a different period
- I was able to fix this issue by using the "radosgw-admin metadata sync init" command
and the "metadata sync" sub-co...
- 06:48 PM Bug #36077: rgw: master is on a different period
- Any idea how to fix this error?
ERROR: sync status period=318f6d70-6a9e-474d-8073-41a9ee9d645e does not...
- 06:46 PM Bug #36077: rgw: master is on a different period
- Also, when I restart the rgw hosts in the secondary/slave zone I get this in their logs:
2018-09-20 18:44...
- 06:11 PM Bug #35988 (In Progress): RGW Ldap Authorization fails
- Hi Warren,
Ok, actually this looks like RGW isn't attempting to use the ExternalAuthStrategy. It does look like y...
- 04:22 PM Bug #35885: Update radosgw-admin bucket link command for bucket rename and move
- luminous PR - https://github.com/ceph/ceph/pull/24195
- 01:23 PM Support #24457: large omap object
- I managed to get rid of this message by following a process:
* Enable dynamic sharding - there are many tutorials av...
- 04:42 AM Bug #35814: rgw: return x-amz-version-id: null when delete obj in versioning suspended bucket
- luminous backport https://github.com/ceph/ceph/pull/24190
mimic backport https://github.com/ceph/ceph/pull/24...
- 02:33 AM Bug #36092 (Resolved): Radosgw elastic search sync module not working properly (all result same)
- Hi all.
Currently, I created a zone with an elasticsearch tier.
My radosgw log...
09/19/2018
- 04:07 PM Backport #24983: luminous: 'radosgw-admin sync error trim' only trims partially
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24054
merged
- 04:05 PM Backport #24985: luminous: multisite: object metadata operations are skipped by sync
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24056
merged
- 04:04 PM Backport #35709: luminous: deadlock on shutdown in RGWIndexCompletionManager::stop()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24069
merged
- 02:50 PM Bug #35905 (Pending Backport): multisite: update index segfault on shutdown/realm reload
- 02:45 PM Bug #36077: rgw: master is on a different period
- Attached is the output of this command:
radosgw-admin sync status -c /etc/ceph/ceph_ewr_prod.conf --debug-rgw=10
- 02:36 PM Bug #35990 (Pending Backport): Chunked encoding fails if chunk greater than 1MiB
- 02:35 PM Bug #35986 (Pending Backport): abort_bucket_multiparts() fails on missing multipart meta objects
- 02:28 PM Bug #35814 (Pending Backport): rgw: return x-amz-version-id: null when delete obj in versioning s...
- 02:02 PM Bug #24595 (Fix Under Review): rgw: default quota not set in radosgw for Openstack users
- https://github.com/ceph/ceph/pull/24177
09/18/2018
- 10:38 PM Bug #36077 (New): rgw: master is on a different period
- Hello,
I am trying to configure object store replication between two clusters "us-east-1" and "us-east-2" and I a...
- 03:08 PM Bug #35990: Chunked encoding fails if chunk greater than 1MiB
- 03:05 PM Bug #36041: Beast frontend tries to bind (privileged) ports after dropping privileges to do so
09/17/2018
- 09:33 PM Bug #36041 (Resolved): Beast frontend tries to bind (privileged) ports after dropping privileges ...
- To reproduce: configure beast to listen on port 80/443 and run with --user ceph --group ceph.
It will fail because...
- 05:45 PM Bug #36034 (Resolved): multisite: intermittent test_bucket_index_log_trim failures
- The 'bilog autotrim' command in this test sometimes fails to trim all entries, causing a failed assert(len(active_bil...
- 03:40 PM Cleanup #35830 (Pending Backport): multisite: make redundant data sync errors less scary
- 08:48 AM Bug #24505: radosgw-admin user stats are incorrect when dynamic re-sharding is enabled
- Thanks for the quick reproduction procedure in #27205
Those bugs seem related; I will debug.
- 08:41 AM Bug #18260: When uploading a large number of objects constantly, the objects number of bucket is ...
- I am still debugging this case.
Reminder: the stats discrepancy occurs when, during multisite sync, the replicating sit...
- 03:59 AM Bug #36027 (New): multisite: set removed zone master will affect original zonegroup
- The zonegroup has zones z1 and z2 (e.g. realm_epoch 2, period epoch 3).
Order of operations:
1. zonegroup remove z2 in clus...
09/16/2018
- 07:22 PM Bug #27219: lock in resharding may expire before the dynamic resharding completes
- Update: I just noticed Mimic still uses stupidalloc by default. Either way, I’m not experiencing the same slowdown a...
- 06:59 PM Bug #27219: lock in resharding may expire before the dynamic resharding completes
- Thanks Orit.
I disabled resharding and have no problems writing to my buckets. I'll wait for the next major develo...
09/15/2018
- 12:13 AM Bug #35994 (Resolved): Configurable ListBucket max-keys limit
- ListBucket max-keys has no defined limit in the RGW codebase.
Presently it will fall all the way down to the limit...
- 12:08 AM Bug #35993 (Resolved): AWSv4 presigned signature misses quoting on X-Amz-Credential
- External ticket https://gitlab.com/gitlab-org/gitlab-workhorse/issues/181
The X-Amz-Credential query parameter is ...
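The credential value contains '/' separators (access key/date/region/service/aws4_request), so when placed in the presigned query string it must be percent-encoded like any other query value. A minimal sketch with example values (not the rgw code):

```python
from urllib.parse import quote


def presign_query_params(credential, signature):
    """Serialize AWSv4 presigned-URL query parameters, percent-encoding
    each value; safe="" ensures the '/' in X-Amz-Credential becomes %2F."""
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": credential,
        "X-Amz-Signature": signature,
    }
    return "&".join(f"{k}={quote(v, safe='')}" for k, v in sorted(params.items()))


query = presign_query_params(
    "AKIAIOSFODNN7EXAMPLE/20181010/us-east-1/s3/aws4_request", "deadbeef")
# the credential's slashes are now %2F, so the parameter survives URL parsing intact
```

Without the encoding, a strict client or proxy can split the credential on the raw slashes and the signature check fails.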
09/14/2018
- 09:39 PM Bug #35990: Chunked encoding fails if chunk greater than 1MiB
- https://github.com/ceph/ceph/pull/24114
- 08:46 PM Bug #35990 (Resolved): Chunked encoding fails if chunk greater than 1MiB
- If a response uses chunked encoding, and the chunk is greater than 1MiB, the chunk length is not correctly printed.
...
- 07:57 PM Backport #26844 (Resolved): luminous: rgw_file: "deep stat"/stats of unenumerated paths not handled
- 04:40 PM Backport #26844: luminous: rgw_file: "deep stat"/stats of unenumerated paths not handled
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23499
merged
- 07:57 PM Bug #24572 (Resolved): Lifecycle rules number on one bucket should be limited.
- 07:57 PM Backport #26846 (Resolved): luminous: Lifecycle rules number on one bucket should be limited.
- 03:41 PM Backport #26846: luminous: Lifecycle rules number on one bucket should be limited.
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23522
merged
- 07:56 PM Backport #26848 (Resolved): luminous: Delete marker generated by lifecycle has no owner
- 03:41 PM Backport #26848: luminous: Delete marker generated by lifecycle has no owner
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23545
merged
- 07:56 PM Bug #23801 (Resolved): possibly wrong log level in gc_iterate_entries (src/cls/rgw/cls_rgw.cc:3291)
- 07:55 PM Backport #26922 (Resolved): luminous: possibly wrong log level in gc_iterate_entries (src/cls/rgw...
- 03:40 PM Backport #26922: luminous: possibly wrong log level in gc_iterate_entries (src/cls/rgw/cls_rgw.cc...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23665
merged
- 07:55 PM Fix #34537 (Resolved): cls/rgw: add rgw_usage_log_entry type to ceph-dencoder
- 07:55 PM Backport #35069 (Resolved): luminous: cls/rgw: add rgw_usage_log_entry type to ceph-dencoder
- 03:40 PM Backport #35069: luminous: cls/rgw: add rgw_usage_log_entry type to ceph-dencoder
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23974
merged
- 07:54 PM Bug #24544 (Resolved): change default rgw_thread_pool_size to 512
- 07:54 PM Backport #25087 (Resolved): luminous: change default rgw_thread_pool_size to 512
- 03:39 PM Backport #25087: luminous: change default rgw_thread_pool_size to 512
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24034
merged
- 07:54 PM Bug #25214 (Resolved): valgrind failures related to --max-threads prevent radosgw from starting
- 07:53 PM Backport #25217 (Resolved): luminous: valgrind failures related to --max-threads prevent radosgw ...
- 03:39 PM Backport #25217: luminous: valgrind failures related to --max-threads prevent radosgw from starting
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24034
merged
- 07:36 PM Backport #35707 (Resolved): luminous: A period pull occasionally raises "curl_easy_perform return...
- 03:38 PM Backport #35707: luminous: A period pull occasionally raises "curl_easy_perform returned status 2...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24046
merged
- 07:10 PM Bug #35986 (Fix Under Review): abort_bucket_multiparts() fails on missing multipart meta objects
- https://github.com/ceph/ceph/pull/24110
- 02:35 PM Bug #35986 (Resolved): abort_bucket_multiparts() fails on missing multipart meta objects
- If there are multipart meta objects listed in the bucket index that don't exist in the data_extra_pool, the ENOENT/ER...
- 06:35 PM Bug #23091 (Duplicate): rgw + OpenLDAP = Failed the auth strategy, reason=-13
- I could not reopen this, so I opened a new report and marked this one as a duplicate.
- 06:21 PM Bug #23091: rgw + OpenLDAP = Failed the auth strategy, reason=-13
- I believe that I have run into this issue in the sepia lab.
Running rgw on vpm019 (ceph cluster vpm019 vpm125 vpm1...
- 06:33 PM Bug #35988 (In Progress): RGW Ldap Authorization fails
- I believe that this is the same problem as #23091.
Trying to authenticate an LDAP RGW user fails....
- 05:43 PM Bug #35953: Aborted dynamic resharding should clean up created bucket index objs
- @Orit:
This needs something to help clean up the leftovers from older aborts/fails. Can you cover how you think thes...
- 07:21 AM Backport #35710 (In Progress): mimic: deadlock on shutdown in RGWIndexCompletionManager::stop()
- https://github.com/ceph/ceph/pull/24101
- 04:09 AM Bug #27654: Strange behavior of the S3 Get Bucket lifecycle when the Origin header is included in...
- Abhishek Lekshmanan wrote:
> Can you create a PR with this changeset?
PR: https://github.com/ceph/ceph/pull/24096
09/13/2018
- 06:57 PM Backport #35980 (Resolved): luminous: multisite: data sync error repo processing does not back of...
- https://github.com/ceph/ceph/pull/24318
- 06:57 PM Backport #35979 (Resolved): mimic: multisite: data sync error repo processing does not back off o...
- https://github.com/ceph/ceph/pull/24319
- 06:57 PM Backport #35978 (Resolved): luminous: multisite: incremental data sync makes unnecessary call to ...
- https://github.com/ceph/ceph/pull/24242
- 06:57 PM Backport #35977 (Resolved): mimic: multisite: incremental data sync makes unnecessary call to RGW...
- https://github.com/ceph/ceph/pull/24710
- 06:01 PM Bug #26938 (Pending Backport): multisite: data sync error repo processing does not back off on empty
- 06:00 PM Bug #26952 (Pending Backport): multisite: incremental data sync makes unnecessary call to RGWRead...
- 05:52 PM Bug #27215 (Need More Info): radosgw:Segmentation fault lead to rgw hangup
- 05:50 PM Bug #27364 (Fix Under Review): Swift example command got "Method Not Allowed (HTTP 405)"
- 05:48 PM Bug #27654: Strange behavior of the S3 Get Bucket lifecycle when the Origin header is included in...
- Can you create a PR with this changeset?
- 05:47 PM Bug #24364 (Resolved): rgw: s3cmd sync fails
- 05:47 PM Backport #35954 (Resolved): mimic: rgw: s3cmd sync fails
- 03:17 PM Backport #35954: mimic: rgw: s3cmd sync fails
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/24058
merged
- 05:45 PM Bug #34540 (Need More Info): Get object: Failed to establish a new connection: [Errno 111] Connec...
- 05:44 PM Bug #35814: rgw: return x-amz-version-id: null when delete obj in versioning suspended bucket
- 05:43 PM Bug #35905: multisite: update index segfault on shutdown/realm reload
- 05:42 PM Bug #35973: radosgw-admin bucket limit check stuck generating high read ops with > 999 buckets pe...
- 05:19 PM Bug #35973 (Resolved): radosgw-admin bucket limit check stuck generating high read ops with > 999...
- Created over 1k buckets for a user, and @radosgw-admin bucket limit check@ got stuck generating 4-5k read ops in our case. L...
- 05:40 PM Bug #22775: rgw: multisite: the huge object sync is slow in unstable network environment
- Lowering priority since the curl network rate-limit PR is merged
- 05:37 PM Bug #23661 (Resolved): RGWAsyncGetSystemObj failed assertion on shutdown/realm reload
- resolved in http://tracker.ceph.com/issues/35543, and backports are tracked there
- 06:48 AM Bug #27219: lock in resharding may expires before the dynamic resharding completes
- Hi Joerg,
You can try disabling dynamic resharding in the Ceph conf file as a temporary workaround.
You can use "re... - 06:45 AM Bug #27219: lock in resharding may expires before the dynamic resharding completes
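For illustration, the workaround described above might look like the following ceph.conf fragment. The section name is a placeholder, and `rgw_dynamic_resharding` is the option name as of Luminous; verify it against your release:

```ini
# Hypothetical ceph.conf fragment: disable dynamic resharding as a
# temporary workaround while the lock-expiry issue is investigated.
[client.rgw.gateway1]
rgw_dynamic_resharding = false
```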
- Hello,
I’m experiencing this myself at the moment. Is there a workaround? I’m running Ceph 12.2.7 on a 4-node cl... - 06:08 AM Backport #35708 (In Progress): mimic: A period pull occasionally raises "curl_easy_perform return...
- https://github.com/ceph/ceph/pull/24071
- 06:03 AM Backport #25025 (In Progress): luminous: cls_rgw test is only run in rados suite: add it to rgw s...
- 04:35 AM Backport #35709 (In Progress): luminous: deadlock on shutdown in RGWIndexCompletionManager::stop()
- https://github.com/ceph/ceph/pull/24069
- 02:32 AM Support #24457: large omap object
- Will Marley wrote:
> jack jack wrote:
> > I have the same issue: I removed a large file using s3cmd and got this issue ...
09/12/2018
- 03:55 PM Bug #24364: rgw: s3cmd sync fails
- Thank you very much for the quick fix. Using beast seems to work for now; I will change back to civetweb once this ...
- 01:33 PM Backport #35954 (In Progress): mimic: rgw: s3cmd sync fails
- 01:30 PM Backport #35954 (Resolved): mimic: rgw: s3cmd sync fails
- https://github.com/ceph/ceph/pull/24058
- 12:30 PM Bug #35953 (New): Aborted dynamic resharding should clean up created bucket index objs
- 12:29 PM Bug #27219: lock in resharding may expires before the dynamic resharding completes
- The lock-renewal logic runs in process_single_shard, but resharding a single bucket can take a long time.
The... - 11:35 AM Backport #24985 (In Progress): luminous: multisite: object metadata operations are skipped by sync
- 10:57 AM Bug #22870: rgw: bucket stats size_actual is incorrect
- We just round up to the nearest 4k, which might be OK as size on disk on FileStore, though it is probably incorrect on BlueStore?
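As a minimal sketch of the accounting being discussed (assuming size is rounded up to a whole 4 KiB block, as size-on-disk accounting implies; this is not the actual RGW code):

```python
def size_actual(size_bytes: int, block: int = 4096) -> int:
    """Round a logical object size up to the nearest 4 KiB block,
    mirroring the bucket-stats rounding described above."""
    if size_bytes <= 0:
        return 0
    # Integer ceiling division, then scale back up to bytes.
    return ((size_bytes + block - 1) // block) * block

print(size_actual(1))      # 4096
print(size_actual(4096))   # 4096
print(size_actual(4097))   # 8192
```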
- 10:28 AM Support #24457: large omap object
- We face the same issue in 13.2.1. We have a bucket of around 60TB in size, and it doesn't even sync anymore since lumin...
- 09:59 AM Bug #34540: Get object: Failed to establish a new connection: [Errno 111] Connection refused
- Can you paste a larger section of the RGW log (including the part just before the coredump)? Most likely this is https://github...
- 08:23 AM Backport #24983 (In Progress): luminous: 'radosgw-admin sync error trim' only trims partially
- 03:01 AM Backport #35707 (In Progress): luminous: A period pull occasionally raises "curl_easy_perform ret...
- https://github.com/ceph/ceph/pull/24046
09/11/2018
- 08:09 PM Documentation #23081 (Resolved): docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searchfilter'
- 08:09 PM Backport #32129 (Resolved): mimic: docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searchfi...
- 08:08 PM Backport #32127 (Resolved): luminous: docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searc...
- 04:02 PM Backport #25217 (In Progress): luminous: valgrind failures related to --max-threads prevent rados...
- 11:16 AM Bug #24364: rgw: s3cmd sync fails
- Kevin, for now, if you're blocked, changing the rgw_frontends string to use beast instead of civetweb will get you mo...
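The frontend switch suggested here would look roughly like this in ceph.conf. The section name and port are placeholders; check the frontend option syntax for your Ceph release:

```ini
# Hypothetical ceph.conf fragment: run RGW on beast instead of civetweb
[client.rgw.gateway1]
rgw_frontends = beast port=8080
```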
- 09:49 AM Bug #24364: rgw: s3cmd sync fails
- Was able to trigger this; I have a fix staged at https://github.com/ceph/civetweb/pull/29