ERROR type entries of pglog do not update min_last_complete_ondisk, potentially ballooning memory usage
We use the rbd discard API to zero the whole range of a very large volume. Many extents of the volume have not yet been written before the discard operation, so those extents map to nonexistent objects. When the OSD executes a DELETE op (initiated by rbd_discard) for such an object, it gets -ENOENT. For the sake of duplicate-op detection, it records an ERROR log entry through a special I/O path (see PrimaryLogPG::submit_log_entries) that does not update last_complete_ondisk. In the normal pglog update path, a replica updates its last_complete_ondisk (ReplicatedBackend::sub_op_commit) and informs the primary (ReplicatedBackend::sub_op_modify_reply); the primary then uses min_last_complete_ondisk as the lower bound for trimming the pglog, keeping its size bounded. So if a PG continuously receives this kind of DELETE op, with no successful write occurring in the meantime, the primary never gets a chance to advance min_last_complete_ondisk and trim the pglog, and these ERROR-type entries keep accumulating.
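The accumulation mechanism can be sketched with a deliberately simplified model (the names `PgLogModel`, `Entry`, and the integer versions are illustrative assumptions, not Ceph's actual data structures; the real logic lives in PGLog and the peering code). The key point it demonstrates: because the error-entry path skips the `min_last_complete_ondisk` update, trimming stalls until a successful write lands.

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

// Hypothetical, simplified model of the behavior described in this ticket.
struct Entry {
    uint64_t version;
    bool is_error;  // e.g. a DELETE that returned -ENOENT
};

struct PgLogModel {
    std::deque<Entry> log;
    uint64_t min_last_complete_ondisk = 0;  // lower bound for trimming

    void append(uint64_t v, bool is_error) {
        log.push_back({v, is_error});
        // Only the normal (successful) write path advances the on-disk
        // completion point; the ERROR-entry path skips this update.
        if (!is_error)
            min_last_complete_ondisk = v;
    }

    void trim() {
        // Entries at or below the lower bound are eligible for trimming.
        while (!log.empty() &&
               log.front().version <= min_last_complete_ondisk)
            log.pop_front();
    }
};
```

With only error entries appended, `trim()` removes nothing and the log grows without bound; a single successful write advances the bound and lets the whole backlog be trimmed.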
#1 Updated by Greg Farnum about 5 years ago
- Project changed from Ceph to RADOS
- Subject changed from ERROR type entries of pglog cannot be trimmed timely caused a large memory usage to ERROR type entries of pglog do not update min_last_complete_ondisk, potentially ballooning memory usage
- Category changed from OSD to Performance/Resource Usage
- Component(RADOS) OSD added
This one's tricky; I'm not sure we want to trim based on error entries in the general case. If a broken client submits error ops constantly, they could go through much more quickly than real ops, and trimming based on that might cause issues if an OSD is rebooting at the same time...
#6 Updated by Josh Durgin over 4 years ago
- Assignee set to Josh Durgin
Running a fix through testing: http://pulpito.ceph.com/joshd-2018-03-09_00:39:29-rados-wip-pg-log-trim-errors-distro-basic-smithi