
Bug #17588

rbd-mirror: disabling mirroring with option '--force' makes RBD-images inaccessible

Added by Nikita Shalnov 9 months ago. Updated 6 months ago.

Status:
Resolved
Priority:
High
Target version:
-
Start date:
10/17/2016
Due date:
% Done:

0%

Source:
other
Tags:
Backport:
jewel
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Release:
jewel
Needs Doc:
No

Description

Hi!
I have two CEPH clusters and just tried to use a [rbd-mirroring][1]. One cluster operates with ceph version 10.2.2, second with ceph version 10.2.3. I have installed rbd-mirror daemons on both sides, they work fine, I can mirror rbd-images, promote them, resync etc. Then I wanted to test failover capability: what is happening, if the image is mirroring, but the primary cluster is going down? I promote my non-primary image with option --force. The image got primary status in both clusters (maybe it is actually not, but I am saying, how it is shown). Then I disabled the mirroring (with --force) on anyone nodes and tried to remove this image, but I could not.

root@test-hoster-kvm-01:~# rbd rm rbdkvm_sata/test-rbd-mirroring-root
2016-09-28 13:23:54.019417 7fee29ffb700 -1 librbd::journal::StandardPolicy: local image not promoted
2016-09-28 13:23:54.019430 7fee29ffb700 -1 librbd::exclusive_lock::AcquireRequest: failed to allocate journal tag: (1) Operation not permitted
2016-09-28 13:23:54.044298 7fee29ffb700 -1 librbd::ExclusiveLock: failed to acquire exclusive lock:(1) Operation not permitted
2016-09-28 13:23:54.044395 7fee4e368d40 -1 librbd: cannot obtain exclusive lock - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
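For context, the command sequence that led to this state was roughly the following (a reconstruction from the description above, not a verbatim transcript):

```shell
# On the secondary cluster: force-promote the non-primary image
# while the primary cluster is presumed down.
rbd mirror image promote rbdkvm_sata/test-rbd-mirroring-root --force

# Then force-disable mirroring on the (now demoted-then-promoted) image.
rbd mirror image disable rbdkvm_sata/test-rbd-mirroring-root --force

# After this, `rbd rm` on the image fails as shown above.
```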

I see the error message `error: image still has watchers`, but there are no watchers:

root@test-hoster-kvm-01:~# rbd status rbdkvm_sata/test-rbd-mirroring-root
Watchers: none
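(As a side note, not from the original report: watchers can also be checked below the rbd layer, directly on the image's header object; the image id is taken from the block_name_prefix shown in the `rbd info` output below.)

```shell
# Diagnostic sketch: list watchers on the RBD header object itself.
# The image id (f63036b8b4567) comes from the block_name_prefix
# (rbd_data.<id>) reported by `rbd info`.
rados -p rbdkvm_sata listwatchers rbd_header.f63036b8b4567
```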

More info can be seen below:

root@test-hoster-kvm-01:~# rbd info rbdkvm_sata/test-rbd-mirroring-root
rbd image 'test-rbd-mirroring-root':
size 4096 MB in 1024 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.f63036b8b4567
format: 2
features: layering, exclusive-lock, journaling
flags:
journal: f63036b8b4567
mirroring state: disabled

I cannot remove any locks from the image, because rbd does not see any:


root@test-mon-01:~# rbd lock list rbdkvm_sata/test-rbd-mirroring-root
root@test-mon-01:~#

When I try to enable mirroring again and promote the image, I get an error:


root@test-hoster-kvm-buffer-01a:~# rbd mirror image enable rbdkvm_sata/test-rbd-mirroring-root
2016-09-30 17:23:19.798621 7fb16e6f2d40 -1 librbd: cannot enable mirroring: last journal tag not owned by local cluster

Because I use Ceph with libvirt, I now cannot work with my libvirt RBD pool (create images, refresh the pool):


root@test-mon-01:~# virsh pool-refresh rbdkvm_sata
error: Failed to refresh pool rbdkvm_sata
error: Requested operation is not valid: storage pool 'rbdkvm_sata' is not active
root@test-mon-01:~# virsh pool-start rbdkvm_sata
error: Failed to start pool rbdkvm_sata
error: failed to open the RBD image 'test-rbd-mirroring-root': Function not implemented

Only `systemctl restart libvirtd` helps me bring my pool back into active mode.

lsof and ps show that no processes on the monitors or hypervisor hosts are using this image. But on one of the storage nodes the image is held open by ceph-osd. Here is an example:


ms_pipe_w   13602  990366  ceph  146u      REG               8,97       16  805434978 /var/lib/ceph/osd/ceph-9/current/4.147_head/rbd\uid.test-rbd-mirroring-root__head_E0701D47__4

I have tested this many times, and my conclusion is: if you disable image mirroring on the non-primary node (which requires the --force option), your data becomes inaccessible every time, and you cannot remove the image from ceph. I know it is not best practice, but I think it is very bad that we can lose access to disk data because of a single command that should just disable mirroring (at least I could not regain access to my data; maybe you know a method).

I tried to attach such an image to a virtual server as a secondary disk, but then the VS cannot start correctly.

So, I have the following questions:
1) Is this a bug in rbd-mirror?
2) Maybe I am wrong and did something foolish, but how should `rbd mirror image disable {pool/image} --force` work?
3) Can such images be repaired (no locks, no watchers, not mapped, held open only by the storage nodes)?
4) How can I remove such images from ceph, without repairing them?

My Debian versions are 8.5 and 8.6.

Please tell me if you need more information or if anything written above is unclear.


Related issues

Copied to rbd - Backport #17767: jewel: rbd-mirror: disabling mirroring with option '--force' makes RBD-images inaccessible Resolved

History

#1 Updated by Jason Dillaman 9 months ago

  • Status changed from New to Need More Info

If you disable the journaling feature, you should be able to remove the image. Unfortunately, there is an in-progress backport tracker ticket [1] for an issue that would prevent you from disabling journaling now that your image is in such a state. Would it be possible for you to install a master branch version of the rbd CLI into a scratch VM (or similar location) so that you can disable the journaling feature and remove the image?

[1] http://tracker.ceph.com/issues/17060
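The workaround described above, sketched as commands (assuming an rbd CLI new enough to disable journaling on an image in this state, per the linked ticket):

```shell
# Disabling the journaling feature clears the stuck journal/mirroring
# state on the image; afterwards it can be removed normally.
rbd feature disable rbdkvm_sata/test-rbd-mirroring-root journaling
rbd rm rbdkvm_sata/test-rbd-mirroring-root
```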

#2 Updated by Jason Dillaman 9 months ago

  • Status changed from Need More Info to In Progress
  • Assignee set to Jason Dillaman
  • Backport set to jewel

#3 Updated by Nikita Shalnov 9 months ago

Good idea, thank you. It would be possible for me.
But is there no way to restore my data? Is it corrupted?

#4 Updated by Jason Dillaman 9 months ago

@Nikita: no, your data should still be there (although you are attempting to delete the image, so I'm not sure how valuable the data is FWIW). There is an issue where force disabling mirroring on a demoted image isn't force-promoting the image so that it can be used again. The workaround is to disable the journaling feature on the image to remove that invalid state.

I did discover that the current master branch had a recent regression where you won't be able to disable journaling, so you will actually need the current jewel branch head.

#5 Updated by Nikita Shalnov 9 months ago

Jason Dillaman wrote:

@Nikita: no, your data should still be there (although you are attempting to delete the image, so I'm not sure how valuable the data is FWIW).

I was lucky, and those were just test images. I just wanted to know how I could restore images in such a state, in case this happens again and it is not a test stand.
I got it. Thanks, Jason.
I got this. Thanks, Jason.

#6 Updated by Jason Dillaman 9 months ago

  • Status changed from In Progress to Need Review

#7 Updated by Mykola Golub 9 months ago

  • Status changed from Need Review to Pending Backport

#8 Updated by Loic Dachary 9 months ago

  • Copied to Backport #17767: jewel: rbd-mirror: disabling mirroring with option '--force' makes RBD-images inaccessible added

#9 Updated by Loic Dachary 9 months ago

@Mykola this backport looks complicated because it relates to a state of the code base that, for the most part, does not exist in jewel. How do you suggest we go about it?

#10 Updated by Nathan Cutler 6 months ago

  • Status changed from Pending Backport to Resolved
