Bug #17867 (closed)

jewel: omap deep scrub error during power cycle run

Added by Samuel Just over 7 years ago. Updated over 7 years ago.

Status:
Can't reproduce
Priority:
High
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

{description: 'powercycle/osd/{clusters/3osd-1per-target.yaml fs/btrfs.yaml powercycle/default.yaml
tasks/cfuse_workunit_misc.yaml}', duration: 2109.798131942749, failure_reason: '"2016-11-11
16:12:23.677095 osd.1 172.21.15.52:6800/18240 27 : cluster [ERR] 1.0 shard 1:
soid 1:0bd6d154:::602.00000000:head omap_digest 0x436f0117 != best guess omap_digest
0xce3e270e from auth shard 0" in cluster log', flavor: basic, owner: scheduled_yuriw@teuthology,
success: false}
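For context, the [ERR] line above is produced when a deep scrub finds that one shard's omap_digest disagrees with the "best guess" digest derived from the authoritative shard. The comparison logic can be sketched as follows; this is not Ceph's actual code (the function names here are hypothetical, and zlib.crc32 stands in for the crc32c Ceph uses over omap data):

```python
import zlib

def omap_digest(omap: dict) -> int:
    """Compute a digest over an object's omap key/value pairs.

    Keys are visited in sorted order so every replica computes the
    same digest for identical omap contents.
    """
    crc = 0
    for key in sorted(omap):
        crc = zlib.crc32(key.encode(), crc)
        crc = zlib.crc32(omap[key].encode(), crc)
    return crc

def find_inconsistent_shards(shards: dict) -> list:
    """Return the shard ids whose omap digest disagrees with the
    majority ("best guess") digest, as a deep scrub would flag them."""
    digests = {s: omap_digest(o) for s, o in shards.items()}
    # Best guess: the digest shared by the most shards.
    best = max(set(digests.values()), key=list(digests.values()).count)
    return [s for s, d in digests.items() if d != best]
```

In this failure, shard 1's digest (0x436f0117) differed from the best-guess digest (0xce3e270e) computed from the auth shard, so shard 1 was reported as inconsistent.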

Note: this run combines cephfs, powercycle, and btrfs.

Is this the directory corruption bug we already know about?


Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #17177: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest" (Resolved, Brad Hubbard, 08/31/2016)
