Bug #21026

PR #16172 causing performance regression

Added by Mark Nelson about 1 year ago. Updated 5 months ago.

Status: Resolved
Priority: Urgent
Assignee:
Category: -
Target version: -
Start date: 08/17/2017
Due date:
% Done: 0%
Source:
Tags:
Backport: luminous, jewel
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:

Description

This is most obvious during 4K random writes to NVMe backed bluestore RBD volumes, but it may be present in other tests as well. A git bisection very clearly points to PR #16172 (introduced with v12.1.2) as the culprit.

Performance in the test described above drops from 30K IOPS to 15K IOPS for a single OSD. A wallclock profile shows extra time spent in pg_log_dup_t get_key_name (~0.7%) and encode (~1.7%) per tp_osd_tp thread. Greg hypothesized that we might be doing unnecessary string manipulation in get_key_name, and indeed there appears to be extra string manipulation and memory copying going on. However, given that we are spending about 1.7% of the time in each tp_osd_tp thread in pg_log_dup_t encode, I suspect the bigger issue is that we are now writing much more pglog data to the KV store; this is less about CPU overhead than about consuming a greater share of our available KV store throughput for pglog. The column family PR might give us a better hint as to whether this is correct.
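To illustrate the kind of string-manipulation overhead suspected in get_key_name, here is a hypothetical sketch (not the actual Ceph code; key layout, field widths, and function names are assumptions for illustration) contrasting a concatenation-heavy key builder with one that formats into a single buffer:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Hypothetical per-entry key such as "dup_0000000012.00000000000000000034".
// The layout and names below are illustrative only, not Ceph's actual format.

// Naive version: each '+' on a std::string can allocate and copy a temporary.
std::string make_dup_key_naive(unsigned epoch, unsigned long long version) {
  std::string key = "dup_";
  char buf[32];
  snprintf(buf, sizeof(buf), "%010u", epoch);
  key = key + buf + ".";                 // temporary string per concatenation
  snprintf(buf, sizeof(buf), "%020llu", version);
  key = key + buf;
  return key;
}

// Tighter version: format once into a stack buffer, construct the string once.
std::string make_dup_key(unsigned epoch, unsigned long long version) {
  char buf[48];
  int n = snprintf(buf, sizeof(buf), "dup_%010u.%020llu", epoch, version);
  return std::string(buf, n);
}
```

Both produce identical keys; the second avoids the intermediate allocations that show up as extra string manipulation in a wallclock profile.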


Related issues

Blocks Ceph - Feature #20298: store longer dup op information Resolved 06/14/2017
Copied to Ceph - Backport #21187: luminous: fix performance regression Resolved
Copied to RADOS - Backport #22400: jewel: PR #16172 causing performance regression Rejected

History

#2 Updated by Mark Nelson about 1 year ago

After discussion with Eric: we can avoid the new code path entirely by increasing osd_min_pg_log_entries to the same value as osd_pg_log_dups_tracked (i.e. currently 1500 -> 3000). This doesn't fix the problem, it only avoids it by never invoking the new code, but it has been verified to restore performance to near pre-PR #16172 levels.
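The workaround above amounts to a ceph.conf change along these lines (a sketch based on the values mentioned in this comment):

```ini
[osd]
# Workaround only: raising the minimum pg log length to match the dup
# tracking window means the new dup-handling code path is never taken.
osd_min_pg_log_entries = 3000
```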

#3 Updated by Josh Durgin about 1 year ago

  • Assignee set to Josh Durgin

It looks to me like this is due to writing out all the dups whenever any are dirty, instead of keeping a 'dirty_to' version that we check like with pg_log_entry_t. Writing out just the dup ops that changed should fix this.
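The 'dirty_to'-style fix described above can be sketched roughly as follows (a hypothetical simplification, not the actual pg_log code; DupLog, persisted_to, and entries_to_write are made-up names, and plain integers stand in for pg_log_dup_t entries):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch: rather than rewriting every dup entry whenever any are dirty,
// remember the highest version already persisted and write only entries
// newer than it, analogous to the 'dirty_to' check for pg_log_entry_t.
struct DupLog {
  std::vector<uint64_t> dups;     // stand-in for pg_log_dup_t entries,
                                  // appended in increasing version order
  uint64_t persisted_to = 0;      // highest version already on disk

  void add(uint64_t version) { dups.push_back(version); }

  // Return only the entries that still need to be written, then mark
  // them as persisted.
  std::vector<uint64_t> entries_to_write() {
    std::vector<uint64_t> out;
    for (uint64_t v : dups)
      if (v > persisted_to)
        out.push_back(v);
    if (!out.empty())
      persisted_to = out.back();
    return out;
  }
};
```

With this structure, appending one new dup op results in one KV write rather than rewriting the whole dup list, which is the behavior the comment identifies as the regression.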

#4 Updated by Josh Durgin about 1 year ago

Mark reported further tests on plain SSD: default: ~10.2K IOPS, osd_min_pg_log_entries=3000: ~18.8K IOPS

#5 Updated by Josh Durgin about 1 year ago

  • Status changed from New to Need Review
  • Backport set to luminous

#6 Updated by Josh Durgin about 1 year ago

  • Status changed from Need Review to Pending Backport

#7 Updated by Nathan Cutler about 1 year ago

#8 Updated by Nathan Cutler about 1 year ago

  • Status changed from Pending Backport to Resolved

#9 Updated by Ken Dreyer 11 months ago

  • Status changed from Resolved to Pending Backport
  • Backport changed from luminous to luminous, jewel

#10 Updated by Ken Dreyer 11 months ago

#11 Updated by Nathan Cutler 11 months ago

  • Copied to Backport #22400: jewel: PR #16172 causing performance regression added

#12 Updated by Nathan Cutler 5 months ago

  • Status changed from Pending Backport to Resolved
