Bug #17999

ec scrub mismatch during teuthology run

Added by Samuel Just over 7 years ago. Updated over 7 years ago.

Status: Resolved
Priority: Urgent
Assignee: Samuel Just
% Done: 0%
Regression: No
Severity: 3 - minor

Description

description: rados/thrash-erasure-code/{leveldb.yaml rados.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/fastread.yaml workloads/ec-small-objects-fast-read.yaml}
duration: 1241.6829888820648
failure_reason: '"2016-11-22 05:10:39.862682 osd.5 172.21.15.22:6800/9032 245 : cluster [ERR] 1.12s2 deep-scrub stat mismatch, got 35/35 objects, 10/10 clones, 35/35 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 41426944/41025536 bytes, 0/0 hit_set_archive bytes." in cluster log'
flavor: basic
owner: scheduled_samuelj@teuthology
success: false

samuelj@teuthology:/a/samuelj-2016-11-22_03:24:33-rados-wip-sam-testing---basic-smithi/568033

I'm going to see if it happens again overnight. If I get it again on the same workload, I can narrow down the set of problem operations.
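For reference, the failing counter can be pulled straight out of a deep-scrub "stat mismatch" log line like the one above. The snippet below is a hypothetical helper (not part of the Ceph or teuthology tooling) that parses each got/expected pair and reports the disagreement; here the only mismatch is the byte count, 41426944 vs 41025536.

```python
import re

# Example deep-scrub "stat mismatch" message, taken from the failure above.
LINE = ('1.12s2 deep-scrub stat mismatch, got 35/35 objects, 10/10 clones, '
        '35/35 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, '
        '41426944/41025536 bytes, 0/0 hit_set_archive bytes.')

def parse_mismatch(line):
    """Return {counter: (got, expected)} for every got/expected pair in the line."""
    stats = {}
    for got, expected, name in re.findall(r'(\d+)/(\d+) ([a-z_ ]+?)(?:,|\.)', line):
        stats[name] = (int(got), int(expected))
    return stats

for name, (got, expected) in parse_mismatch(LINE).items():
    if got != expected:
        # Only the byte count disagrees in this failure.
        print(f'{name}: got {got}, expected {expected} (delta {got - expected})')
```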

History

#2 Updated by Samuel Just over 7 years ago

  • Subject changed from scrub mismatch on ec pool during teuthology run to scrub mismatch during teuthology run

Kefu's failures weren't on ec pools.

#4 Updated by Samuel Just over 7 years ago

  • Subject changed from scrub mismatch during teuthology run to ec scrub mismatch during teuthology run

I was wrong, the errors seem to be on the ec base tier.

#5 Updated by Samuel Just over 7 years ago

  • Status changed from New to In Progress
  • Assignee set to Samuel Just

#6 Updated by Samuel Just over 7 years ago

  • Status changed from In Progress to 7

#7 Updated by Samuel Just over 7 years ago

  • Status changed from 7 to Resolved
