Activity
From 08/27/2020 to 09/25/2020
09/25/2020
- 10:09 PM Bug #46658: Ceph-OSD nautilus/octopus memory leak ?
- Hi @Igor,
Thank you for your help :) To answer your questions:
1) First of all what's the current setting for o...
09/23/2020
- 05:31 PM Bug #45765: BlueStore::_collection_list causes huge latency growth pg deletion
- Thanks for your help, Igor.
> Could you please raise rocksdb_delete_range_threshold to e.g. 1 billion. This wouldn...
- 01:39 PM Backport #46350 (In Progress): octopus: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixCompre...
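For context on the rocksdb_delete_range_threshold suggestion in the #45765 comment above: the option decides whether a batch of deletions is issued as individual per-key deletes or as a single RocksDB range tombstone. Below is a minimal C++ sketch of that kind of gate, with a hypothetical KvTxn interface (not BlueStore's actual code); raising the threshold to e.g. 1 billion effectively forces per-key deletes.

    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for a KV transaction; rm_key/rm_range are
    // illustrative names, not Ceph's KeyValueDB API.
    struct KvTxn {
      void rm_key(const std::string&) { /* per-key tombstone */ }
      void rm_range(const std::string&, const std::string&) { /* range tombstone */ }
    };

    // At or above the threshold, one range tombstone covers the whole
    // span; below it, keys are deleted individually.
    void delete_span(KvTxn& txn, const std::vector<std::string>& keys,
                     uint64_t range_threshold) {
      if (keys.empty())
        return;
      if (keys.size() >= range_threshold) {
        txn.rm_range(keys.front(), keys.back());
      } else {
        for (const auto& k : keys)
          txn.rm_key(k);
      }
    }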
- 08:14 AM Backport #47195: octopus: Default value for 'bluestore_volume_selection_policy' is wrong
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37092
m...
09/22/2020
- 05:58 PM Bug #47053 (Resolved): Default value for 'bluestore_volume_selection_policy' is wrong
- 05:57 PM Backport #47195 (Resolved): octopus: Default value for 'bluestore_volume_selection_policy' is wrong
- 01:21 PM Backport #47195: octopus: Default value for 'bluestore_volume_selection_policy' is wrong
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37092
merged
09/21/2020
- 09:24 PM Bug #45765: BlueStore::_collection_list causes huge latency growth pg deletion
- Andy Allan wrote:
> I'm experiencing a similar problem with our brand new octopus cluster (default settings, radosgw...
- 07:55 PM Bug #45765: BlueStore::_collection_list causes huge latency growth pg deletion
- I'm experiencing a similar problem with our brand new octopus cluster (default settings, radosgw only, 18 NVMe OSDs, ...
09/20/2020
- 09:25 AM Bug #47551 (Fix Under Review): Some structs aren't bound to mempools properly
- 12:54 AM Bug #47551 (In Progress): Some structs aren't bound to mempools properly
- 12:54 AM Bug #47551 (Resolved): Some structs aren't bound to mempools properly
- bluestore_shared_blob_t is dynamically allocated but isn't configured for mempool
bluestore_blob_use_tracker_t might...
- 12:07 AM Bug #46658: Ceph-OSD nautilus/octopus memory leak ?
- @Christophe - given that excessive memory consumption occurs on image export (i.e. reading) I guess this might be rel...
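On the #47551 entries above: binding a type to a Ceph mempool is done with the macros from src/include/mempool.h, so that its allocations are charged to a pool and show up in dump_mempools output. A minimal sketch of the pattern follows, assuming an illustrative struct name and the bluestore_cache_other pool (not the actual fix):

    #include "include/mempool.h"

    // Illustrative struct; the real issue concerns bluestore_shared_blob_t
    // and bluestore_blob_use_tracker_t. MEMPOOL_CLASS_HELPERS() declares
    // pool-aware operator new/delete for the type.
    struct example_blob_t {
      MEMPOOL_CLASS_HELPERS();
      uint64_t sbid = 0;
    };

    // In a .cc file: define the factory so allocations of example_blob_t
    // are accounted against the chosen pool.
    MEMPOOL_DEFINE_OBJECT_FACTORY(example_blob_t, example_blob_t,
                                  bluestore_cache_other);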
09/19/2020
- 09:25 PM Bug #46658: Ceph-OSD nautilus/octopus memory leak ?
- I just upgraded from 15.2.4 to 15.2.5 and am still facing the same memory problem :/
09/18/2020
- 04:42 PM Bug #47211 (Resolved): nautilus: unrecognised rocksdb_option crashes osd process while starting t...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
09/17/2020
- 07:03 PM Backport #47194: nautilus: Default value for 'bluestore_volume_selection_policy' is wrong
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37091
m...
- 06:44 PM Backport #47521: nautilus: unrecognised rocksdb_option crashes osd process while starting the osd
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37055
m...
- 06:42 PM Backport #47521 (Resolved): nautilus: unrecognised rocksdb_option crashes osd process while start...
- 06:41 PM Backport #47521 (Resolved): nautilus: unrecognised rocksdb_option crashes osd process while start...
- https://github.com/ceph/ceph/pull/37055
- 06:40 PM Bug #47211 (Pending Backport): nautilus: unrecognised rocksdb_option crashes osd process while st...
09/16/2020
- 07:03 PM Backport #47194 (Resolved): nautilus: Default value for 'bluestore_volume_selection_policy' is wrong
- 03:52 PM Backport #47194: nautilus: Default value for 'bluestore_volume_selection_policy' is wrong
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37091
merged
09/15/2020
- 08:09 PM Bug #47475 (Fix Under Review): Compressed blobs lack checksums
- 07:20 PM Bug #47475: Compressed blobs lack checksums
- As a result, corrupted data might reach the compressor's header decoder, which asserts/throws an exception.
- 07:19 PM Bug #47475 (Resolved): Compressed blobs lack checksums
- Looks like a bug in BlueStore::_do_allocate_data() func.
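On the #47475 entries above, the fix's intent can be summarized as: verify a checksum over the compressed payload before it ever reaches the compressor's header decoder, so corruption surfaces as a clean I/O error rather than an assert. A hedged sketch with trivial stand-in helpers (not BlueStore's actual read path; the real code uses crc32c and a registered compressor plugin):

    #include <cerrno>
    #include <cstdint>
    #include <numeric>
    #include <vector>

    // Trivial stand-ins so the sketch compiles.
    static uint32_t checksum(const std::vector<uint8_t>& buf) {
      return std::accumulate(buf.begin(), buf.end(), 0u);
    }

    static int decompress(const std::vector<uint8_t>& in,
                          std::vector<uint8_t>& out) {
      out = in;  // identity "decompression" for illustration
      return 0;
    }

    // Verify the stored checksum on the *compressed* payload first, and
    // only then hand it to the decompressor.
    int read_compressed_blob(const std::vector<uint8_t>& raw,
                             uint32_t stored_csum,
                             std::vector<uint8_t>& out) {
      if (checksum(raw) != stored_csum)
        return -EIO;
      return decompress(raw, out);
    }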
- 05:49 PM Bug #46490: osds crashing during deep-scrub
- Lawrence,
I'm not sure that it's enough to remove the object to fix such misreferences. This still leaves inconsistent e...
- 05:32 PM Bug #47211 (Resolved): nautilus: unrecognised rocksdb_option crashes osd process while starting t...
- 08:09 AM Bug #46124 (Fix Under Review): Potential race condition regression around new OSD flock()s
- 08:06 AM Bug #46124: Potential race condition regression around new OSD flock()s
- I think we also need to include https://github.com/ceph/ceph/pull/37153 to properly return the right errno.
- 06:38 AM Bug #46124: Potential race condition regression around new OSD flock()s
- per https://repo.or.cz/glibc.git/blob/HEAD:/sysdeps/posix/flock.c,...
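To make the #46124 retry discussion concrete, here is a hedged sketch of a bounded-retry flock() wrapper (the pattern under debate, not the actual patch): retrying narrows the window in which a dying process still holds the advisory lock, but as noted above it cannot eliminate the race, and the final errno must be propagated unmangled.

    #include <sys/file.h>  // flock()
    #include <unistd.h>    // usleep()
    #include <cerrno>

    // Try to take an exclusive, non-blocking advisory lock on fd,
    // retrying a few times if another process currently holds it.
    // Returns 0 on success or a negative errno on failure.
    int lock_fsid_with_retry(int fd, int attempts = 3) {
      for (int i = 0; i < attempts; ++i) {
        if (::flock(fd, LOCK_EX | LOCK_NB) == 0)
          return 0;                       // lock acquired
        if (errno != EAGAIN && errno != EWOULDBLOCK)
          return -errno;                  // hard failure: keep the errno
        ::usleep(100 * 1000);             // contended: back off, retry
      }
      return -EWOULDBLOCK;                // still held after retries
    }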
- 02:33 AM Bug #47453 (New): checksum failures lead to assert on OSD shutdown in lab tests
- ...
09/14/2020
- 09:19 PM Bug #47443 (Fix Under Review): Hybrid allocator might cause duplicate admin socket command regist...
- 02:32 PM Bug #47443 (Resolved): Hybrid allocator might cause duplicate admin socket command registration.
- This is not that big a deal for Octopus+ releases, which have https://github.com/ceph/ceph/pull/30217
They use a sin...
- 07:46 PM Bug #47446 (New): No snap trim progress after removing large snapshots
- I'm having issues cleaning up a cluster with a relatively large CephFS (201M objects, 372 TiB) after snapshots were ...
- 03:54 PM Bug #46124 (New): Potential race condition regression around new OSD flock()s
- Per the discussion in https://github.com/ceph/ceph/pull/34566, I am reopening this ticket.
- 03:31 PM Bug #46124: Potential race condition regression around new OSD flock()s
- I believe this bug was not fixed properly; the added commit "just retries 3 times".
This makes success more likely...
- 02:04 PM Bug #46124 (Pending Backport): Potential race condition regression around new OSD flock()s
- 03:32 PM Bug #47211: nautilus: unrecognised rocksdb_option crashes osd process while starting the osd
- https://github.com/ceph/ceph/pull/37055 merged
- 06:11 AM Backport #46714 (Resolved): nautilus: Rescue procedure for extremely large bluefs log
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36930
m...
09/13/2020
- 06:49 AM Bug #47271: ceph version 14.2.10-OSD fails
- 1) Is this happening to multiple OSDs? - Yes, and all the OSDs also have the same HDD model.
2) Are OSDs able to start...
09/12/2020
- 02:15 PM Bug #46490: osds crashing during deep-scrub
- Neither the new gcc version nor a new kernel version (upgraded from 4.14 to 5.4) was able to solve this issue for u...
09/11/2020
- 10:08 AM Bug #47174: [BlueStore] Pool/PG deletion(space reclamation) is very slow
- Serg D wrote:
> it looks similar to problem https://tracker.ceph.com/issues/47044
yep, indeed
- 08:46 AM Bug #47174: [BlueStore] Pool/PG deletion(space reclamation) is very slow
- Thank you, Igor, for looking into this one.
FYI, as this is continuously reproduced, this time again we had deleted ...
- 09:13 AM Bug #46886 (Resolved): upgrade/nautilus-x-master: bluefs mount failed to replay log: (14) Bad ad...
09/10/2020
- 06:36 PM Backport #47195 (In Progress): octopus: Default value for 'bluestore_volume_selection_policy' is ...
- 06:34 PM Backport #47194 (In Progress): nautilus: Default value for 'bluestore_volume_selection_policy' is...
09/09/2020
- 11:35 AM Backport #45683 (In Progress): mimic: Large (>=2 GB) writes are incomplete when bluefs_buffered_i...
- https://github.com/ceph/ceph/pull/37056
- 10:22 AM Bug #47174: [BlueStore] Pool/PG deletion(space reclamation) is very slow
- it looks similar to problem https://tracker.ceph.com/issues/47044
09/08/2020
- 09:17 PM Bug #47174 (In Progress): [BlueStore] Pool/PG deletion(space reclamation) is very slow
- 09:15 PM Bug #47211 (Fix Under Review): nautilus: unrecognised rocksdb_option crashes osd process while st...
- Missed backport, specific to Nautilus.
- 08:54 PM Bug #47271: ceph version 14.2.10-OSD fails
- Could you please answer the following questions:
1) Is this happening to multiple OSDs?
2) Are OSDs able to sta...
09/07/2020
- 08:10 PM Bug #46575 (Resolved): os/bluestore: simplify Onode pin/unpin logic.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:44 AM Bug #47330 (Closed): ceph-osd can't start when CURRENT file does not end with newline or content ...
- When the machine is shut down or rebooted via IPMI, some ceph-osds cannot be started. The error message is as follo...
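Background for the #47330 report above: RocksDB's CURRENT file must contain the name of the active MANIFEST terminated by a newline, and a hard power-off can leave the file empty or without that newline, after which the DB (and hence the OSD) refuses to open. An illustrative check of the invariant (not RocksDB's actual parser; path layout assumed):

    #include <fstream>
    #include <iterator>
    #include <string>

    // Returns true if <db_dir>/CURRENT holds a non-empty MANIFEST name
    // terminated by '\n'; on false, the DB would fail to open as in the
    // report above.
    bool current_file_ok(const std::string& db_dir, std::string& manifest) {
      std::ifstream f(db_dir + "/CURRENT", std::ios::binary);
      std::string contents((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
      if (contents.empty() || contents.back() != '\n')
        return false;                           // truncated or no newline
      manifest = contents.substr(0, contents.size() - 1);
      return !manifest.empty();
    }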
09/06/2020
- 10:19 AM Backport #46584 (Resolved): octopus: os/bluestore: simplify Onode pin/unpin logic.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36795
m...
09/02/2020
- 10:46 PM Backport #46714 (In Progress): nautilus: Rescue procedure for extremely large bluefs log
- https://github.com/ceph/ceph/pull/36930
merged
- 12:28 PM Bug #47271 (Closed): ceph version 14.2.10-OSD fails
- Hi,
we have updated Ceph from version 14.2.9 to version 14.2.10, and since then we are getting OSD crashes and the o...
- 12:43 AM Bug #47243 (Duplicate): bluefs _allocate failed then assert
- -4> 2020-09-01T21:55:35.451+0800 ffffb5f50500 3 rocksdb: [le/block_based/filter_policy.cc:584] Using legacy Bloo...
08/31/2020
- 11:57 AM Backport #47213 (In Progress): nautilus: BlueFS volume selector assert
- 11:55 AM Backport #47213: nautilus: BlueFS volume selector assert
- https://github.com/ceph/ceph/pull/36909
- 11:45 AM Backport #47213 (Resolved): nautilus: BlueFS volume selector assert
- https://github.com/ceph/ceph/pull/36909
- 11:06 AM Bug #47211 (Resolved): nautilus: unrecognised rocksdb_option crashes osd process while starting t...
- while upgrading cluster from 14.2.9 to 14.2.11 unrecognised (legacy?) rocksdb_option crashes osd process while starti...
- 10:43 AM Bug #43538 (Pending Backport): BlueFS volume selector assert
- 09:10 AM Bug #43538: BlueFS volume selector assert
- I'm seeing this killing almost every OSD one by one (during a 48h cycle) on one of our production clusters after upgra...
08/28/2020
- 02:38 PM Backport #47195 (Resolved): octopus: Default value for 'bluestore_volume_selection_policy' is wrong
- https://github.com/ceph/ceph/pull/37092
- 02:37 PM Backport #47194 (Resolved): nautilus: Default value for 'bluestore_volume_selection_policy' is wrong
- https://github.com/ceph/ceph/pull/37091
- 02:37 PM Bug #47069 (Resolved): os/bluestore: dump onode that has too many spanning blobs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:36 AM Bug #46124 (Resolved): Potential race condition regression around new OSD flock()s
08/27/2020
- 06:31 PM Bug #47174: [BlueStore] Pool/PG deletion(space reclamation) is very slow
- - Uploaded debug log file and osd.190.perf.dump - ...
- 06:19 PM Bug #47174: [BlueStore] Pool/PG deletion(space reclamation) is very slow
- - dump_mempools:...
- 05:54 PM Bug #47174: [BlueStore] Pool/PG deletion(space reclamation) is very slow
- I have selected one OSD to provide the debug log information; perf top from the same OSD:
ceph 84323 ...
- 05:29 PM Bug #47174 (Resolved): [BlueStore] Pool/PG deletion(space reclamation) is very slow
- Version: 14.2.8; also reproduced in 12.2.12.
- We use cosbench to fill the cluster - obviously for the RGW work...
- 05:21 PM Backport #46584: octopus: os/bluestore: simplify Onode pin/unpin logic.
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/36795
merged
- 02:29 PM Bug #47053 (Pending Backport): Default value for 'bluestore_volume_selection_policy' is wrong
- 02:15 PM Bug #44731 (Closed): Space leak in Bluestore
- Doesn't look like a Ceph issue.
- 02:11 PM Bug #46490 (Need More Info): osds crashing during deep-scrub
- 02:03 PM Bug #46994 (Need More Info): 14.2.11 OSD crash BlueFS.cc: 1662: FAILED ceph_assert(r == 0)
- 10:56 AM Backport #47070 (Resolved): nautilus: os/bluestore: dump onode that has too many spanning blobs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36756
m...