Jaka Močnik
- Login: jkmcnk
- Registered on: 11/23/2021
- Last sign in: 11/25/2021
Issues
| | Open | Closed | Total |
|---|---|---|---|
| Assigned issues | 0 | 0 | 0 |
| Reported issues | 1 | 1 | 2 |
Activity
12/22/2021
- 12:41 PM rgw Bug #53698 (Won't Fix - EOL): a slow reader of a large object receives corrupt object contents from rgw with civetweb frontend
- first, the real-life scenario:
we have been copying a number of large objects from swift interface of rgw of a mim...
11/26/2021
- 09:15 AM rgw Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- fwiw, 12 hours after setting max_time and period on all rgw instances to 1hr, the gc queue is empty. thanks again.
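The comment above reports that shortening the gc processor's run time and scheduling interval to one hour let the queue drain. A minimal sketch of what that change might look like, assuming the standard RGW gc options `rgw_gc_processor_max_time` and `rgw_gc_processor_period` are the ones being referred to (the exact values used are not stated in the excerpt):

```
# ceph.conf fragment for rgw instances -- illustrative values only
[client.rgw]
# maximum time a single gc processing cycle may run (seconds)
rgw_gc_processor_max_time = 3600
# interval between gc processing cycles (seconds)
rgw_gc_processor_period = 3600
```

On a cluster with a centralized config database, the equivalent runtime change would be `ceph config set client.rgw rgw_gc_processor_max_time 3600` (and likewise for the period), followed by restarting the rgw daemons if the option is not applied at runtime.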
11/25/2021
- 12:28 PM rgw Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- I understand. However, as stated, the current behaviour is a bug, as it will result in the gc queue never draining if ...
- 09:59 AM rgw Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- thanks for the insight, pritha.
I was suspecting the fact that large deletes take over 5mins time could be related...
11/24/2021
- 04:03 PM rgw Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- attaching complete log files - save the ones that contain HTTP_X_AUTH_TOKEN - of said rgw instance for some 40 minut...
- 02:21 PM rgw Bug #53384: tail objects that have already been garbage collected remain in the gc queue forever
- explicitly set rgw-related config options:
rgw_override_bucket_index_max_shards = 16
rgw_max_put_size = 109951162...
- 02:13 PM rgw Bug #53384 (Triaged): tail objects that have already been garbage collected remain in the gc queue forever
- running an octopus cluster (upgraded from nautilus a few months ago) of some 0.5PB capacity. it is used exclusively a...