Bug #17867

jewel: omap deep scrub error during power cycle run

Added by Samuel Just almost 3 years ago. Updated almost 3 years ago.

Status: Can't reproduce
Priority: High
Assignee: -
Category: -
Target version: -
Start date: 11/11/2016
Due date:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

{description: 'powercycle/osd/{clusters/3osd-1per-target.yaml fs/btrfs.yaml powercycle/default.yaml
tasks/cfuse_workunit_misc.yaml}', duration: 2109.798131942749, failure_reason: '"2016-11-11
16:12:23.677095 osd.1 172.21.15.52:6800/18240 27 : cluster [ERR] 1.0 shard 1:
soid 1:0bd6d154:::602.00000000:head omap_digest 0x436f0117 != best guess omap_digest
0xce3e270e from auth shard 0" in cluster log', flavor: basic, owner: scheduled_yuriw@teuthology,
success: false}

Note: this run combines cephfs, powercycle, and btrfs.

Is this the directory corruption bug we already know about?
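
For context, one way to see which shards disagree is to dump the scrub inconsistency report for the PG named in the log line above. The sketch below is not taken from this report; it assumes the rados list-inconsistent-obj command (available in jewel) and guesses at the JSON field names (inconsistents, shards, omap_digest), which may differ between releases.

#!/usr/bin/env python3
# Minimal sketch: print per-shard omap digests for the inconsistent PG
# reported by deep scrub. PG id 1.0 is taken from the cluster log line above.
# JSON field names are assumptions and may vary by Ceph release.
import json
import subprocess

PG = "1.0"

out = subprocess.check_output(
    ["rados", "list-inconsistent-obj", PG, "--format=json"]
)
report = json.loads(out)

for entry in report.get("inconsistents", []):
    name = entry.get("object", {}).get("name", "<unknown>")
    print("object %s:" % name)
    for shard in entry.get("shards", []):
        osd = shard.get("osd")
        digest = shard.get("omap_digest", "<missing>")
        print("  osd.%s: omap_digest %s" % (osd, digest))

If the authoritative copy looks correct, ceph pg repair 1.0 would normally rewrite the divergent shard from the auth shard, though with a suspected btrfs power-cycle problem the underlying filesystem is worth checking first.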


Related issues

Related to Ceph - Bug #17177: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest" Resolved 08/31/2016

History

#1 Updated by Samuel Just almost 3 years ago

  • Related to Bug #17177: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest" added

#2 Updated by Samuel Just almost 3 years ago

  • Priority changed from Urgent to High

This may well be a dup of 17177, but I haven't dug in enough to confirm. The main difference is that this is on btrfs, which may not actually hit the bug fixed by the 17177 patches; in that case this is either a different bug or a good, old-fashioned btrfs powercycle bug.

#3 Updated by Samuel Just almost 3 years ago

I wasn't able to find anything obvious in the logs; this could simply be a bug with btrfs.

#4 Updated by Samuel Just almost 3 years ago

  • Status changed from New to Can't reproduce
