Bug #20545

erasure coding = crashes

Added by Bob Bobington almost 7 years ago. Updated over 6 years ago.

Status: Duplicate
Priority: High
Assignee: -
Category: -
% Done: 0%
Source: Community (user)
Regression: No
Severity: 2 - major

Description

Steps to reproduce:

  • Create 4 OSDs and a mon on one machine (4 TB disk per OSD, BlueStore, with dm-crypt), using the Luminous RC built from the tag on GitHub
  • ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=osd
  • ceph osd pool create imagesrep 256 256 erasure myprofile
  • Open an ioctx on the pool using rados.py
  • Walk a large directory, writing each file with ioctx.aio_write(key, data, offset=offset) in 4 MB chunks, without waiting for the completions
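
The write pattern in the last step might look roughly like this. This is only a sketch of the reproduction, assuming the rados Python bindings (Ioctx.aio_write); the chunk_offsets helper and the directory/object naming are my own illustration, not taken from the report:

```python
import os

CHUNK = 4 * 1024 * 1024  # 4 MB chunks, as in the reproduction steps


def chunk_offsets(size, chunk=CHUNK):
    """Byte offsets at which each chunk of a file of `size` bytes starts."""
    return list(range(0, size, chunk))


def fire_and_forget_writes(ioctx, root):
    """Walk `root` and issue aio_write calls for every file, deliberately
    never waiting on the completions (mirroring the bug's write pattern)."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                for offset in chunk_offsets(os.path.getsize(path)):
                    f.seek(offset)
                    data = f.read(CHUNK)
                    # completion callbacks are intentionally omitted
                    ioctx.aio_write(path, data, offset=offset)
```

With k=2 m=1 and failure domain osd, each 4 MB write is striped across two data shards plus one parity shard on three of the four OSDs, so a sustained unacknowledged stream like this keeps all OSDs under EC write load.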

The attached log.txt is the result.

Each time I've tried this, a different OSD has crashed, but all display similar tracebacks.


Files

log.txt (9.33 KB), Bob Bobington, 07/07/2017 05:39 PM

Related issues (1): 0 open, 1 closed

Related to RADOS - Bug #20295: bluestore: Timeout in tp_osd_tp threads when running RBD bench in EC pool w/ overwrites (Resolved, Sage Weil, 06/14/2017)
