Backport #40449
Closed
nautilus: "no available blob id" assertion might occur
Updated by Nathan Cutler almost 5 years ago
- Copied from Bug #38272: "no available blob id" assertion might occur added
Updated by Nathan Cutler over 4 years ago
- Status changed from New to Need More Info
The master PR has been merged, but Sage wants it to bake for a while before backporting.
Setting "Needs More Info" for now - I trust Igor will either do the backport himself when the time is right, or let us (backporting team) know.
Updated by Igor Fedotov over 4 years ago
- Status changed from Need More Info to In Progress
Updated by Yuri Weinstein over 4 years ago
Updated by Igor Fedotov over 4 years ago
- Status changed from In Progress to Resolved
Updated by Nathan Cutler over 4 years ago
- Description updated (diff)
- Status changed from Resolved to In Progress
Updated by Nathan Cutler over 4 years ago
- Status changed from In Progress to Resolved
- Target version set to v14.2.5
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
merge commit 95bbbc09d6caab8d6063373dffb0bdd73ecdae43 (v14.2.4-782-g95bbbc09d6)
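Aside: the parenthesized string v14.2.4-782-g95bbbc09d6 is standard `git describe` output, meaning 782 commits after the v14.2.4 tag, at abbreviated commit 95bbbc09d6. A minimal sketch that reproduces the format in a throwaway repo (the tag and commits below are placeholders, not real ceph history):

```shell
# Sketch: reproduce the "<tag>-<N>-g<sha>" version-string format with
# `git describe` in a throwaway repo (placeholder tag/commits, not ceph's).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "release commit"
git tag v14.2.4
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "one commit after the tag"
# Prints the tag, commits since it, and the abbreviated HEAD sha,
# e.g. v14.2.4-1-g<sha>
git describe --tags
```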
Updated by Alexander Patrakov over 3 years ago
- File ceph-osd.28.log.xz added
Nathan Cutler wrote:
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
merge commit 95bbbc09d6caab8d6063373dffb0bdd73ecdae43 (v14.2.4-782-g95bbbc09d6)
We have a customer who hit this assert on 14.2.8, where it should already be fixed. The last 50000 lines of the log are attached. Will also ask for ceph.conf.
Updated by Alexander Patrakov over 3 years ago
Alexander Patrakov wrote:
Nathan Cutler wrote:
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30144
merge commit 95bbbc09d6caab8d6063373dffb0bdd73ecdae43 (v14.2.4-782-g95bbbc09d6)
We have a customer who hit this assert on 14.2.8, where it is supposed to be already fixed. Last 50000 lines from the log are attached. Will also ask for ceph.conf.
Asked; nothing suspicious there, except that they still use separate CRUSH roots instead of device classes:
[global]
cluster network = 10.15.0.0/24
fsid = 8700e000-d2ac-4393-8380-1cf4779166b5
mon host = [v2:10.15.0.201:3300,v1:10.15.0.201:6789],[v2:10.15.0.202:3300,v1:10.15.0.202:6789],[v2:10.15.0.203:3300,v1:10.15.0.203:6789],[v2:10.15.0.204:3300,v1:10.15.0.204:6789],[v2:10.15.0.205:3300,v1:10.15.0.205:6789]
mon initial members = ceph01,ceph02,ceph03,ceph04,ceph05
osd pool default crush rule = -1
public network = 10.15.0.0/24
[osd]
osd memory target = 11811762995
osd crush update on start = false
osd scrub begin hour = 23
osd scrub end hour = 3
osd scrub load threshold = 0.3
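For reference, a separate-roots layout can usually be replaced with device classes, which Ceph assigns to OSDs automatically on recent releases. A hedged sketch of the relevant commands, assuming a live cluster; the rule and pool names ("replicated_hdd", "mypool") are made up for illustration:

```shell
# Sketch only: moving from per-device-type CRUSH roots to device classes.
# Rule and pool names below are hypothetical examples.

# Inspect the device class assigned to each OSD (hdd/ssd/nvme)
ceph osd crush tree --show-shadow

# Create a replicated rule restricted to one device class:
#   create-replicated <rule-name> <root> <failure-domain> <device-class>
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Point an existing pool at the class-restricted rule
ceph osd pool set mypool crush_rule replicated_hdd
```

These commands require a running cluster, so this is an operational fragment rather than a runnable script.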
Updated by Nathan Cutler over 3 years ago
@Alexander - it might make sense to open a new bug in the Bluestore project for that, since this one is closed.
Updated by Igor Fedotov over 3 years ago
Nathan Cutler wrote:
@Alexander - it might make sense to open a new bug in the Bluestore project for that, since this one is closed.
Just created https://tracker.ceph.com/issues/48216 for that