Bug #56045 (closed): Force Promote Pool Fails
Status: Duplicate
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -
Description
rbd mirror pool promote --force fails with an assertion:
-2> 2022-06-14T15:04:08.077+0000 7f7995074640 5 librbd::Watcher: 0x7f797405f260 notifications_blocked: blocked=0
-1> 2022-06-14T15:04:08.080+0000 7f7995875640 -1 ../src/librbd/mirror/snapshot/PromoteRequest.cc: In function 'void librbd::mirror::snapshot::PromoteRequest<ImageCtxT>::rollback() [with ImageCtxT = librbd::ImageCtx]' thread 7f7995875640 time 2022-06-14T15:04:08.077748+0000
../src/librbd/mirror/snapshot/PromoteRequest.cc: 267: FAILED ceph_assert(info != nullptr)
ceph version 17.0.0-11475-g57c2ae079b4 (57c2ae079b42cde0e8f3deedc760183af81503b2) quincy (dev)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11c) [0x7f7999d4953f]
2: (ceph::register_assert_context(ceph::common::CephContext*)+0) [0x7f7999d49746]
...
Steps to reproduce:
- Create 100 images
- Run rbd bench against those 100 images
- Force promote the pool on the secondary cluster
Updated by Ilya Dryomov almost 2 years ago
- Is duplicate of Bug #53537: [rbd-mirror] ceph_assert on rollback_snapshot info during a forcepromote while snapshot is syncing added