Bug #16034

OSD: crash on EIO during deep-scrubbing

Added by xie xingguo almost 8 years ago. Updated over 7 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
hammer
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

os/FileStore.cc: 2854: FAILED assert(allow_eio || !m_filestore_fail_eio || got != -5)

ceph version 0.94.7.1 (740be021de39c96a7ddfecae3482c177471798fc)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0xbddac5]
2: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xd1b) [0x91788b]
3: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x311) [0xa0e791]
4: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x2e8) [0x8dabb8]
5: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x213) [0x7e5d93]
6: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x498) [0x7ee1f8]
7: (PG::scrub(ThreadPool::TPHandle&)+0x21c) [0x7ef84c]
8: (OSD::ScrubWQ::_process(PG*, ThreadPool::TPHandle&)+0x29) [0x6c77c9]
9: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa76) [0xbce296]
10: (ThreadPool::WorkThread::entry()+0x10) [0xbcf320]
11: (()+0x7df3) [0x7f3849370df3]
12: (clone()+0x6d) [0x7f3847e5354d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
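
For context, the assert fires in FileStore::read() when the underlying read returns -EIO (-5) and the caller has not opted in to EIO handling via allow_eio (see frame 2 above). Below is a minimal, self-contained sketch of that logic, not the actual Ceph source; do_read and the constant are hypothetical stand-ins for the read path and the filestore_fail_eio option named in the assert.

// Standalone illustration (not Ceph source) of why an EIO from disk
// aborts the OSD: the read path asserts unless the caller explicitly
// allows EIO, and the deep-scrub caller in hammer does not.
#include <cassert>
#include <cerrno>
#include <cstdio>

// Stand-in for the "filestore fail eio" config option.
static const bool m_filestore_fail_eio = true;

// Stand-in for FileStore::read(); 'got' is what the underlying pread()
// returned, e.g. -EIO (-5) on a bad sector.
int do_read(int got, bool allow_eio)
{
  if (got < 0) {
    // Mirrors the assert in the description: the error is only tolerated
    // when the caller opted in with allow_eio, or when the fail-on-eio
    // option is off, or when the error is not EIO.
    assert(allow_eio || !m_filestore_fail_eio || got != -EIO);
  }
  return got;
}

int main()
{
  // A deep-scrub read effectively calls with allow_eio=false, so a -EIO
  // from the disk trips the assert and crashes the OSD.
  do_read(-EIO, /*allow_eio=*/false);   // aborts here
  printf("not reached\n");
  return 0;
}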

Related issues

Copied to Ceph - Backport #16870: hammer: OSD: crash on EIO during deep-scrubbing Resolved

History

#1 Updated by xie xingguo almost 8 years ago

  • Assignee deleted (xie xingguo)

#2 Updated by Nathan Cutler over 7 years ago

  • Status changed from New to Pending Backport

master PR: https://github.com/ceph/ceph/pull/3595

Note that the hammer backport will not be a straightforward cherry-pick. Sam wrote:

"@smithfarm @xiexingguo This backports way too much. I think @yuyuyu101 's commit fixed it basically by accident. We do not want to backport that whole feature merely to fix the two read() callers. Please create a new commit which only restores the correct behavior for the affected callers."

In other words, the backport should not look like this: https://github.com/ceph/ceph/pull/9341
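
To illustrate the direction only (this is a sketch, not the actual backport, and Store, ScrubObject, and deep_scrub_one are hypothetical stand-ins): instead of backporting the whole feature, the two affected read() callers would pass allow_eio and translate a -EIO return into a scrub read error, so the OSD keeps running.

// Sketch of the targeted change described above: the deep-scrub caller
// tolerates EIO from the store and records a read error for the object,
// rather than letting the store-level assert kill the OSD.
#include <cerrno>
#include <cstdio>

struct Store {
  // Stand-in for FileStore::read(); returns -EIO to simulate a bad sector.
  int read(bool allow_eio) { (void)allow_eio; return -EIO; }
};

struct ScrubObject { int read_error = 0; };

// Stand-in for the deep-scrub read path (cf. ReplicatedBackend::be_deep_scrub).
void deep_scrub_one(Store& store, ScrubObject& o)
{
  int r = store.read(/*allow_eio=*/true);
  if (r == -EIO) {
    o.read_error = 1;   // flag the object as unreadable instead of asserting
    printf("deep-scrub: read error recorded, OSD keeps running\n");
    return;
  }
  // ... otherwise checksum the data as usual ...
}

int main()
{
  Store store;
  ScrubObject obj;
  deep_scrub_one(store, obj);
  return 0;
}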

#3 Updated by Nathan Cutler over 7 years ago

  • Copied to Backport #16870: hammer: OSD: crash on EIO during deep-scrubbing added

#4 Updated by Nathan Cutler over 7 years ago

  • Status changed from Pending Backport to Resolved
