Backport #40449

nautilus: "no available blob id" assertion might occur

Added by Nathan Cutler almost 5 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: Igor Fedotov
Target version: v14.2.5
Release: nautilus

ceph-osd.28.log.xz (425 KB) Alexander Patrakov, 11/06/2020 02:36 PM


Related issues

Copied from bluestore - Bug #38272: "no available blob id" assertion might occur Resolved

History

#1 Updated by Nathan Cutler almost 5 years ago

  • Copied from Bug #38272: "no available blob id" assertion might occur added

#2 Updated by Nathan Cutler almost 5 years ago

  • Assignee set to Igor Fedotov

#3 Updated by Nathan Cutler over 4 years ago

  • Status changed from New to Need More Info

The master PR has been merged, but Sage wants it to bake for a while before backporting.

Setting "Needs More Info" for now - I trust Igor will either do the backport himself when the time is right, or let us (backporting team) know.

#4 Updated by Igor Fedotov over 4 years ago

  • Status changed from Need More Info to In Progress

#5 Updated by Yuri Weinstein over 4 years ago

Igor Fedotov wrote:

https://github.com/ceph/ceph/pull/30144

merged

#6 Updated by Igor Fedotov over 4 years ago

  • Status changed from In Progress to Resolved

#7 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)
  • Status changed from Resolved to In Progress

#8 Updated by Nathan Cutler over 4 years ago

  • Status changed from In Progress to Resolved
  • Target version set to v14.2.5

This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
merge commit 95bbbc09d6caab8d6063373dffb0bdd73ecdae43 (v14.2.4-782-g95bbbc09d6)
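For anyone checking which Nautilus releases actually contain this backport, here is a minimal sketch in Python, not part of the backport tooling; it assumes a local clone of ceph.git in ./ceph and uses the merge commit quoted above:

#!/usr/bin/env python3
# Minimal sketch: list the v14.x release tags whose history contains the
# backport merge commit. Assumes a local clone of ceph.git in ./ceph.
import subprocess

MERGE_COMMIT = "95bbbc09d6caab8d6063373dffb0bdd73ecdae43"

# `git tag --contains <sha>` prints every tag that has <sha> in its history;
# v14.2.5 and later Nautilus tags should appear if the backport landed.
out = subprocess.run(
    ["git", "tag", "--contains", MERGE_COMMIT],
    capture_output=True, text=True, check=True, cwd="ceph",
).stdout
print("\n".join(tag for tag in out.split() if tag.startswith("v14.")))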

#9 Updated by Alexander Patrakov over 3 years ago

Nathan Cutler wrote:

This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
merge commit 95bbbc09d6caab8d6063373dffb0bdd73ecdae43 (v14.2.4-782-g95bbbc09d6)

We have a customer who hit this assert on 14.2.8, where it is supposed to be fixed already. The last 50000 lines of the log are attached. I will also ask for their ceph.conf.
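For reference, a minimal sketch (Python standard library only) of how one might scan the attached compressed log for the assert message; the file name is the attachment listed above:

#!/usr/bin/env python3
# Minimal sketch: search the attached xz-compressed OSD log for the
# "no available blob id" assert message and print the matching lines.
import lzma

PATTERN = b"no available blob id"

with lzma.open("ceph-osd.28.log.xz", "rb") as log:
    for lineno, line in enumerate(log, 1):
        if PATTERN in line:
            print(lineno, line.decode(errors="replace").rstrip())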

#10 Updated by Alexander Patrakov over 3 years ago

Alexander Patrakov wrote:

Nathan Cutler wrote:

This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
merge commit 95bbbc09d6caab8d6063373dffb0bdd73ecdae43 (v14.2.4-782-g95bbbc09d6)

We have a customer who hit this assert on 14.2.8, where it is supposed to be fixed already. The last 50000 lines of the log are attached. I will also ask for their ceph.conf.

I asked; nothing suspicious there except that they still use separate CRUSH roots instead of device classes (see the sketch after the config):

[global]
cluster network = 10.15.0.0/24
fsid = 8700e000-d2ac-4393-8380-1cf4779166b5
mon host = [v2:10.15.0.201:3300,v1:10.15.0.201:6789],[v2:10.15.0.202:3300,v1:10.15.0.202:6789],[v2:10.15.0.203:3300,v1:10.15.0.203:6789],[v2:10.15.0.204:3300,v1:10.15.0.204:6789],[v2:10.15.0.205:3300,v1:10.15.0.205:6789]
mon initial members = ceph01,ceph02,ceph03,ceph04,ceph05
osd pool default crush rule = -1
public network = 10.15.0.0/24

[osd]
osd memory target = 11811762995
osd crush update on start = false
osd scrub begin hour = 23
osd scrub end hour = 3
osd scrub load threshold = 0.3
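For context on the device-class remark, a minimal sketch of the device-class alternative to maintaining separate CRUSH roots per media type. These are standard ceph CLI calls driven from Python; the OSD id, rule name, and pool name are illustrative assumptions, not taken from the customer's cluster:

#!/usr/bin/env python3
# Minimal sketch: CRUSH device classes instead of separate roots.
# OSD id, rule name, and pool name below are illustrative only.
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Tag an OSD with a device class (usually auto-detected at OSD creation).
ceph("osd", "crush", "set-device-class", "hdd", "osd.28")

# Create one rule per class under the shared "default" root ...
ceph("osd", "crush", "rule", "create-replicated",
     "replicated_hdd", "default", "host", "hdd")

# ... and point the pool at that rule, instead of at a separate root.
ceph("osd", "pool", "set", "rbd", "crush_rule", "replicated_hdd")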

#11 Updated by Nathan Cutler over 3 years ago

@Alexander - it might make sense to open a new bug in the Bluestore project for that, since this one is closed.

#12 Updated by Igor Fedotov over 3 years ago

Nathan Cutler wrote:

@Alexander - it might make sense to open a new bug in the Bluestore project for that, since this one is closed.

Just created https://tracker.ceph.com/issues/48216 for that
