Bug #42490


mimic->master: fsck error: #2:717a0223:::608.00000000:head# has omap that is not per-pool or pgmeta

Added by Sage Weil over 4 years ago. Updated over 4 years ago.

Status: Resolved
Priority: Urgent
Assignee: -
Target version: -
% Done: 0%
Regression: No
Severity: 3 - minor
Pull request ID: 31167

Description

...
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 10 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow  #2:717a0223:::608.00000000:head#
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 -1 bluestore(/var/lib/ceph/osd/ceph-4) fsck error: #2:717a0223:::608.00000000:head# has omap that is not per-pool or pgmeta
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore(/var/lib/ceph/osd/ceph-4) _fsck_check_objects  collection 2.5_head cnode(bits 4)
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 10 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow  #2:a0000000::::head#
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 10 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow  #2:a93a17c2:::604.00000000:head#
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 -1 bluestore(/var/lib/ceph/osd/ceph-4) fsck error: #2:a93a17c2:::604.00000000:head# has omap that is not per-pool or pgmeta
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 10 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow  #2:aa448500:::500.00000000:head#
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.blob(0x55cd4fa2aa10) get_ref 0x0~5a Blob(0x55cd4fa2aa10 blob([0xc8000~4000] csum+has_unused crc32c/0x1000 unused=0xfff0) use_tracker(0x0 0x0) SharedBlob(0x55cd4fa2aa80 sbid 0x0))
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.blob(0x55cd4fa2aa10) get_ref init 0x4000, 4000
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow    0x0~5a: 0x0~5a Blob(0x55cd4fa2aa10 blob([0xc8000~4000] csum+has_unused crc32c/0x1000 unused=0xfff0) use_tracker(0x4000 0x5a) SharedBlob(0x55cd4fa2aa80 sbid 0x0))
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore(/var/lib/ceph/osd/ceph-4) _fsck_check_objects  referenced 0x1 for Blob(0x55cd4fa2aa10 blob([0xc8000~4000] csum+has_unused crc32c/0x1000 unused=0xfff0) use_tracker(0x4000 0x5a) SharedBlob(0x55cd4fa2aa80 sbid 0x0))
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.sharedblob(0x55cd4fa2aa80) put 0x55cd4fa2aa80 removing self from set 0x55cd4fa2c0d0
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.BufferSpace(0x55cd4fa2aa98 in 0x55cd4ebcefc0) _clear
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore(/var/lib/ceph/osd/ceph-4) _fsck_check_objects  collection 2.d_head cnode(bits 4)
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 10 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow  #2:b0000000::::head#
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 10 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow  #2:b50e409b:::mds_snaptable:head#
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.blob(0x55cd4fa2aa10) get_ref 0x0~46 Blob(0x55cd4fa2aa10 blob([0xcc000~4000] csum+has_unused crc32c/0x1000 unused=0xfff0) use_tracker(0x0 0x0) SharedBlob(0x55cd4fa2aaf0 sbid 0x0))
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.blob(0x55cd4fa2aa10) get_ref init 0x4000, 4000
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow    0x0~46: 0x0~46 Blob(0x55cd4fa2aa10 blob([0xcc000~4000] csum+has_unused crc32c/0x1000 unused=0xfff0) use_tracker(0x4000 0x46) SharedBlob(0x55cd4fa2aaf0 sbid 0x0))
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore(/var/lib/ceph/osd/ceph-4) _fsck_check_objects  referenced 0x1 for Blob(0x55cd4fa2aa10 blob([0xcc000~4000] csum+has_unused crc32c/0x1000 unused=0xfff0) use_tracker(0x4000 0x46) SharedBlob(0x55cd4fa2aaf0 sbid 0x0))
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.sharedblob(0x55cd4fa2aaf0) put 0x55cd4fa2aaf0 removing self from set 0x55cd4fa2c490
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore.BufferSpace(0x55cd4fa2ab08 in 0x55cd4ebcfce0) _clear
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 10 bluestore(/var/lib/ceph/osd/ceph-4) fsck_check_objects_shallow  #2:b871830b:::605.00000000:head#
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 -1 bluestore(/var/lib/ceph/osd/ceph-4) fsck error: #2:b871830b:::605.00000000:head# has omap that is not per-pool or pgmeta
2019-10-25T08:26:43.957+0000 7fdd52f9bbc0 20 bluestore(/var/lib/ceph/osd/ceph-4) _fsck_check_objects  collection 3.2_head cnode(bits 4)
...
2019-10-25T08:26:44.407+0000 7fdd52f9bbc0 -1 bluestore(/var/lib/ceph/osd/ceph-4) _mount fsck found 4 errors
2019-10-25T08:26:44.407+0000 7fdd52f9bbc0 -1 osd.4 0 OSD:init: unable to mount object store

/a/sage-2019-10-25_05:04:47-rados-wip-sage2-testing-2019-10-24-1944-distro-basic-smithi/4442860
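
The invariant these errors report, as a minimal self-contained sketch (the flag names and the check below are illustrative assumptions, not the actual BlueStore code): an object that has omap data must either use the per-pool omap key format or be a pgmeta object; anything else is treated as legacy-format omap and counted as an fsck error.

// Illustrative sketch only; simplified flags and check, not BlueStore's code.
#include <iostream>
#include <string>
#include <vector>

// Hypothetical flag values, for illustration.
enum OnodeFlags : unsigned {
  FLAG_OMAP         = 1u << 0,  // object has omap data
  FLAG_PERPOOL_OMAP = 1u << 1,  // omap keys carry a per-pool prefix
  FLAG_PGMETA_OMAP  = 1u << 2,  // pgmeta objects use their own omap format
};

struct Onode {
  std::string oid;
  unsigned flags = 0;
};

// Returns 1 if this object trips the "has omap that is not per-pool or
// pgmeta" check, 0 otherwise.
int fsck_check_omap(const Onode& o) {
  if ((o.flags & FLAG_OMAP) &&
      !(o.flags & (FLAG_PERPOOL_OMAP | FLAG_PGMETA_OMAP))) {
    std::cerr << "fsck error: " << o.oid
              << " has omap that is not per-pool or pgmeta\n";
    return 1;
  }
  return 0;
}

int main() {
  // Object IDs taken from the log above; the flags are made up.
  std::vector<Onode> objects = {
    {"#2:717a0223:::608.00000000:head#", FLAG_OMAP},                      // legacy omap -> error
    {"#2:a93a17c2:::604.00000000:head#", FLAG_OMAP},                      // legacy omap -> error
    {"#2:aa448500:::500.00000000:head#", FLAG_OMAP | FLAG_PERPOOL_OMAP},  // ok
  };
  int errors = 0;
  for (const auto& o : objects)
    errors += fsck_check_omap(o);
  std::cout << "fsck found " << errors << " errors\n";
  return errors ? 1 : 0;
}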
#1

Updated by Sage Weil over 4 years ago

  • Project changed from RADOS to bluestore
#2

Updated by Igor Fedotov over 4 years ago

I think this happens after a "fast" repair, which fixes (sets) the per_pool_omap flag in the DB but bypasses all the omaps for the sake of performance.
So we have two options:
1) do not fix per_pool_omap in shallow repair mode, or
2) perform omap fixing during shallow repair, which is costly (sketched below).
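
A minimal sketch of this failure mode, with hypothetical names throughout (Store, shallow_repair, deep_repair are illustrative, not the real repair code path): setting the store-wide per_pool_omap marker without converting each object's omap leaves the next fsck seeing objects whose on-disk format contradicts the marker.

// Hypothetical names throughout; not the real BlueStore repair path.
#include <iostream>
#include <vector>

struct Object {
  bool has_omap;       // object carries omap data
  bool per_pool_omap;  // omap already converted to the per-pool format
};

struct Store {
  bool per_pool_omap_flag = false;  // store-wide marker kept in the DB
  std::vector<Object> objects;
};

// Option 2 above: convert every object's omap during repair (correct but costly).
void deep_repair(Store& s) {
  for (auto& o : s.objects)
    if (o.has_omap)
      o.per_pool_omap = true;  // rewrite the omap keys with a pool prefix
  s.per_pool_omap_flag = true;
}

// The suspected bug: shallow repair sets the marker but skips the per-object
// conversion, so the store claims a format its objects do not have.
// (Option 1 above would be to simply not set the marker here.)
void shallow_repair(Store& s) {
  s.per_pool_omap_flag = true;
}

// Counts objects whose omap format contradicts the store-wide marker.
int fsck(const Store& s) {
  int errors = 0;
  for (const auto& o : s.objects)
    if (s.per_pool_omap_flag && o.has_omap && !o.per_pool_omap)
      ++errors;
  return errors;
}

int main() {
  // Four legacy-omap objects, echoing the "fsck found 4 errors" in the log.
  Store s{false, {{true, false}, {true, false}, {true, false}, {true, false},
                  {false, false}}};
  shallow_repair(s);
  std::cout << "after shallow repair: " << fsck(s) << " fsck errors\n";  // 4
  deep_repair(s);
  std::cout << "after deep repair: " << fsck(s) << " fsck errors\n";     // 0
}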

#3

Updated by Sage Weil over 4 years ago

  • Status changed from 12 to Fix Under Review
  • Pull request ID set to 31167
#4

Updated by Sage Weil over 4 years ago

  • Status changed from Fix Under Review to Resolved