Bug #17867
Closed
jewel: omap deep scrub error during power cycle run
Description
{description: 'powercycle/osd/{clusters/3osd-1per-target.yaml fs/btrfs.yaml powercycle/default.yaml tasks/cfuse_workunit_misc.yaml}',
 duration: 2109.798131942749,
 failure_reason: '"2016-11-11 16:12:23.677095 osd.1 172.21.15.52:6800/18240 27 : cluster [ERR] 1.0 shard 1: soid 1:0bd6d154:::602.00000000:head omap_digest 0x436f0117 != best guess omap_digest 0xce3e270e from auth shard 0" in cluster log',
 flavor: basic,
 owner: scheduled_yuriw@teuthology,
 success: false}
Note: this run combines cephfs, powercycle, and btrfs.
Is this the directory corruption bug we already know about?
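For context on what the reported error is checking: during a deep scrub, each OSD shard computes a digest over its copy of the object's omap entries, and the primary compares every shard's digest against the one from the authoritative shard; a mismatch produces the "omap_digest != best guess omap_digest" error above. Here is a minimal illustrative sketch of that comparison in Python, using zlib.crc32 as a stand-in checksum; the shard contents, entry names, and the omap_digest helper are hypothetical, not taken from the Ceph code:

```python
import zlib

def omap_digest(entries):
    # Illustrative digest: running CRC32 over sorted omap key/value pairs.
    crc = 0
    for key in sorted(entries):
        crc = zlib.crc32(key.encode(), crc)
        crc = zlib.crc32(entries[key].encode(), crc)
    return crc

# Hypothetical shard contents: shard 1 lost one omap entry after the
# power cycle, so its digest no longer matches the authoritative shard.
auth_shard = {"dentry_a": "inode 601", "dentry_b": "inode 602"}
shard_1 = {"dentry_a": "inode 601"}

if omap_digest(shard_1) != omap_digest(auth_shard):
    print("[ERR] omap_digest 0x%08x != best guess omap_digest 0x%08x "
          "from auth shard" % (omap_digest(shard_1), omap_digest(auth_shard)))
```

The point of the sketch is only that the digests are computed independently per shard, so any divergence in the stored omap entries (here, an entry silently dropped across a power cycle) surfaces as a digest mismatch at the next deep scrub.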
Updated by Samuel Just over 7 years ago
- Related to Bug #17177: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest" added
Updated by Samuel Just over 7 years ago
- Priority changed from Urgent to High
This may well be a dup of 17177, but I haven't dug in enough to confirm. The main difference is that this run is on btrfs, which may not actually hit the bug fixed by the 17177 patches; in that case this is either a different bug or a good, old-fashioned btrfs powercycle bug.
Updated by Samuel Just over 7 years ago
I wasn't able to find anything obvious in the logs; this could simply be a btrfs bug.
Updated by Samuel Just over 7 years ago
- Status changed from New to Can't reproduce