Bug #17867 (closed): jewel: omap deep scrub error during power cycle run

Added by Samuel Just over 7 years ago. Updated over 7 years ago.

Status: Can't reproduce
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

description: 'powercycle/osd/{clusters/3osd-1per-target.yaml fs/btrfs.yaml powercycle/default.yaml tasks/cfuse_workunit_misc.yaml}'
duration: 2109.798131942749
failure_reason: '"2016-11-11 16:12:23.677095 osd.1 172.21.15.52:6800/18240 27 : cluster [ERR] 1.0 shard 1: soid 1:0bd6d154:::602.00000000:head omap_digest 0x436f0117 != best guess omap_digest 0xce3e270e from auth shard 0" in cluster log'
flavor: basic
owner: scheduled_yuriw@teuthology
success: false

Note: cephfs, powercycle, and btrfs.

Is this the directory corruption bug we already know about?
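For context on what the two digests in the failure reason represent: during deep scrub the OSD folds an object's omap header and key/value pairs into a CRC (crc32c in the OSD code) and compares each shard's digest against the one chosen from the authoritative shard. The sketch below is a simplified Python model of that comparison, not the actual OSD implementation; the helper name, the sample keys and values, and the use of zlib.crc32 in place of crc32c are all illustrative assumptions.

# Illustrative model of the deep-scrub omap digest comparison.
# NOT Ceph's code: the OSD computes crc32c over bufferlists;
# zlib.crc32 is used here only to keep the sketch self-contained.
import zlib

def omap_digest(omap_header, omap_entries):
    """Fold the omap header and sorted key/value pairs into one 32-bit CRC."""
    crc = zlib.crc32(omap_header)
    for key in sorted(omap_entries):
        crc = zlib.crc32(key, crc)
        crc = zlib.crc32(omap_entries[key], crc)
    return crc & 0xFFFFFFFF

# Two replicas of the same metadata object should hold identical omap
# contents; a replica whose omap drifted (e.g. lost a write across the
# power cycle) ends up with a different digest.
auth_shard = {b"entry_a": b"value-v2", b"entry_b": b"value-v5"}
shard_1    = {b"entry_a": b"value-v2", b"entry_b": b"value-v4"}  # hypothetical stale replica

auth = omap_digest(b"", auth_shard)
other = omap_digest(b"", shard_1)
if other != auth:
    # Deep scrub reports this as the cluster-log [ERR] line quoted above.
    print("scrub error: omap_digest %#010x != best guess omap_digest %#010x "
          "from auth shard" % (other, auth))

The only point of the model is that the digest is a pure function of the omap contents, so a mismatch means the shards really do hold different omap data, whether from the known metadata-pool bug or from btrfs losing writes across the power cycle.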


Related issues (1): 0 open, 1 closed

Related to Ceph - Bug #17177: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest" (Resolved, Brad Hubbard, 08/31/2016)

#1

Updated by Samuel Just over 7 years ago

  • Related to Bug #17177: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest" added
#2

Updated by Samuel Just over 7 years ago

  • Priority changed from Urgent to High

This may well be a duplicate of #17177, but I haven't dug in enough to confirm it. The main difference is that this run is on btrfs, which may not actually hit the bug fixed by the #17177 patches; in that case this is either a different bug or a good, old-fashioned btrfs powercycle bug.

#3

Updated by Samuel Just over 7 years ago

I wasn't able to find anything obvious in the logs; this could simply be a bug in btrfs.

#4

Updated by Samuel Just over 7 years ago

  • Status changed from New to Can't reproduce
