Bug #14483


Flattening an rbd image with a running Qemu instance causes librbd worker threads to hang.

Added by Christian Theune over 8 years ago. Updated about 8 years ago.

Status: Resolved
Priority: High
Assignee: -
Target version: -
% Done: 0%
Source: other
Backport: hammer, infernalis
Regression: No
Severity: 1 - critical

Description

This is Ceph 0.94.5 and Qemu 2.2.1 with some backported live-migration fixes (though live migration isn't involved here).

We started provisioning VMs based on cloned images instead of freshly unpacking data into an RBD image.

Our golden images are maintained in rbd/fc-15.09-dev and snapshotted for specific releases, like rbd/fc-15.09-dev@da5e5ffc6.

When a VM gets created, we clone from the most recent snapshot of the appropriate branch (dev/staging/prod) and start the VM based on that.
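
For reference, this is roughly equivalent to the following rbd CLI steps (our tooling may call the librbd API directly; the names match the examples above and the logs below):

$ rbd snap create rbd/fc-15.09-dev@da5e5ffc6              # snapshot the golden image for a release
$ rbd snap protect rbd/fc-15.09-dev@da5e5ffc6             # clones require a protected snapshot
$ rbd clone rbd/fc-15.09-dev@da5e5ffc6 test/test22.root   # per-VM clone in the target pool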

I experienced surprising hangs, with the guest kernel complaining about stuck tasks. After looking at this from various angles for a while, I noticed that the VMs run fine until flattening happens (it runs every ten minutes).
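
The flatten itself is the plain operation, e.g. (our provisioning job may invoke it via the API rather than the CLI):

$ rbd flatten test/test22.root    # copy all parent blocks into the clone and detach it from the parent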

I checked that the RBD clients have the appropriate permissions (mon r, osd rwx) and didn't find any permission errors (there are older reports where missing permissions caused similar problems).
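
For completeness, the client caps are set roughly like this (the client name here is made up; the actual keyring/caps on our cluster may be scoped to specific pools):

$ ceph auth get-or-create client.qemu-example mon 'allow r' osd 'allow rwx'
$ ceph auth get client.qemu-example     # shows the caps currently in effect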

The correlation is tight: every single time, RBD gets stuck within less than a second of a flatten, so I think flattening being the trigger is a pretty solid assumption.

When flattening runs, here's what happens to the RBD thread in Qemu:

On the host causing the flattening:

Jan 24 16:50:02 cartman02 configure/vm-images:  Flattening test/test22.root

In Qemu:

2016-01-24 16:50:01.613083 7f4821cfd700 20 librbd::AioRequest: write 0x7f480c0bbce0 rbd_data.10d3181238e1f29.00000000000007c1 159744~4096 should_complete: r = 0
2016-01-24 16:50:01.613099 7f4821cfd700 20 librbd::AioRequest: WRITE_CHECK_GUARD
2016-01-24 16:50:01.613102 7f4821cfd700 20 librbd::AioRequest: complete 0x7f480c0bbce0
2016-01-24 16:50:01.613105 7f4821cfd700 20 librbd::AioCompletion: AioCompletion::complete_request() 0x7f48354c4df0 complete_cb=0x7f48342ecfd5 pending 1
2016-01-24 16:50:01.613109 7f4821cfd700 20 librbd::AioCompletion: AioCompletion::finalize() 0x7f48354c4df0 rval 0 read_buf 0 read_bl 0
2016-01-24 16:50:01.613114 7f4821cfd700 20 librbd::AsyncOperation: 0x7f48354c4f20 finish_op
2016-01-24 16:50:01.613265 7f48214fc700 20 librbd: aio_flush 0x7f483547e4e0 completion 0x7f48354ae140
2016-01-24 16:50:01.613278 7f48214fc700 20 librbd: ictx_check 0x7f483547e4e0
2016-01-24 16:50:01.613285 7f48214fc700 20 librbd::AsyncOperation: 0x7f48354ae270 start_op
2016-01-24 16:50:01.613295 7f48214fc700 20 librbd::AioCompletion: AioCompletion::finish_adding_requests 0x7f48354ae140 pending 2
2016-01-24 16:50:01.613301 7f48214fc700 20 librbd::AioCompletion: AioCompletion::complete_request() 0x7f48354ae140 complete_cb=0x7f48342ecfd5 pending 2
2016-01-24 16:50:01.613333 7f4821cfd700 20 librbd::AioCompletion: AioCompletion::complete_request() 0x7f48354ae140 complete_cb=0x7f48342ecfd5 pending 1
2016-01-24 16:50:01.613339 7f4821cfd700 20 librbd::AioCompletion: AioCompletion::finalize() 0x7f48354ae140 rval 0 read_buf 0 read_bl 0
2016-01-24 16:50:01.613342 7f4821cfd700 20 librbd::AsyncOperation: 0x7f48354ae270 finish_op
2016-01-24 16:50:02.076428 7f4821cfd700 10 librbd::ImageWatcher: 0x7f4835498050 image header updated
2016-01-24 16:50:06.419461 7f48214fc700 20 librbd: aio_write 0x7f483547e4e0 off = 3159449600 len = 4096 buf = 0x7f483548c200
2016-01-24 16:50:06.419498 7f48214fc700 20 librbd: ictx_check 0x7f483547e4e0
2016-01-24 16:50:06.419508 7f48214fc700 20 librbd: ictx_refresh 0x7f483547e4e0
2016-01-24 16:50:06.422687 7f48214fc700 20 librbd: close_image 0x7f48354a0fe0

The next thing I can see from this thread is:

2016-01-24 16:51:09.461851 7f4825f0b700  1 heartbeat_map is_healthy 'librbd::thread_pool thread 0x7f48214fc700' had timed out after 60
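
If a backtrace of the stuck worker thread helps, it can be captured from the running Qemu process with gdb, e.g. (the PID is a placeholder):

$ gdb -p <qemu_pid> -batch -ex 'thread apply all bt' > qemu-threads.txt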

When I talk to the Qemu monitor interface and ask for status, I see librbd doing this; the monitor never returns after that:

(qemu) info block

2016-01-24 17:40:43.944889 7f483461b900 20 librbd: info 0x7f483547e4e0
2016-01-24 17:40:43.944916 7f483461b900 20 librbd: ictx_check 0x7f483547e4e0

Notice that the info/ictx_check comes from a different thread, but it uses the same handle ("cookie? session?") that is mentioned earlier in the log:

2016-01-24 16:50:06.419461 7f48214fc700 20 librbd: aio_write 0x7f483547e4e0 off = 3159449600 len = 4096 buf = 0x7f483548c200
2016-01-24 16:50:06.419498 7f48214fc700 20 librbd: ictx_check 0x7f483547e4e0
2016-01-24 16:50:06.419508 7f48214fc700 20 librbd: ictx_refresh 0x7f483547e4e0

To get the VM killed, I have to kill -9 it; a regular kill won't work.
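
That is, only SIGKILL gets rid of the process (PID placeholder):

$ kill <qemu_pid>       # SIGTERM: the process stays stuck
$ kill -9 <qemu_pid>    # SIGKILL: the only way to get rid of it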

Let me know if you need more data.


Related issues: 2 (0 open, 2 closed)

Copied to rbd - Backport #14610: infernalis: Flattening an rbd image with a running Qemu instance causes librbd worker threads to hang. (Resolved, assignee: Jason Dillaman)
Copied to rbd - Backport #14611: hammer: Flattening an rbd image with a running Qemu instance causes librbd worker threads to hang. (Resolved, assignee: Jason Dillaman)
#1

Updated by Christian Theune over 8 years ago

Here's the full log of the last days, with multiple instances, including the ones I mentioned above:
http://shared00.fe.rzob.gocept.net/~ctheune/ceph-14483-client.log.lzma

#2

Updated by Christian Theune over 8 years ago

Something else that I'm absolutely unsure is relevant: we run Qemu with privileges dropped to 'nobody' and chrooted into an empty directory. We've seen weird behaviour from this in some previous instances (like a missing resolv.conf confusing the monitor connection). I wonder whether the apparent closing/re-opening of the image may cause librbd to try to re-read its configuration? I can't prove that at the moment, though. I'll run the VM with elevated privileges to check it out.

#3

Updated by Christian Theune over 8 years ago

I ran the VM again without dropping privileges and without chroot. As soon as the image was flattened, all IO stalled. :(

#4

Updated by Sage Weil about 8 years ago

  • Project changed from Ceph to rbd
  • Category deleted (librbd)
#5

Updated by Jason Dillaman about 8 years ago

  • Status changed from New to In Progress
  • Priority changed from Normal to High
#6

Updated by Jason Dillaman about 8 years ago

Christian, thank you for the logs!

#7

Updated by Jason Dillaman about 8 years ago

  • Status changed from In Progress to Pending Backport
  • Backport set to hammer, infernalis
#8

Updated by Jason Dillaman about 8 years ago

  • Copied to Backport #14610: infernalis: Flattening an rbd image with a running Qemu instance causes librbd worker threads to hang. added
#9

Updated by Jason Dillaman about 8 years ago

  • Copied to Backport #14611: hammer: Flattening an rbd image with a running Qemu instance causes librbd worker threads to hang. added
#10

Updated by Christian Theune about 8 years ago

Sure thing, thanks for picking this up. Let me know if you need more data - I can reproduce the issue easily.

#11

Updated by Loïc Dachary about 8 years ago

  • Status changed from Pending Backport to Resolved
#12

Updated by Christian Theune about 8 years ago

Hm. I think this is still an issue: I updated everything to 0.94.6 and it still happens. What would be most helpful for you? The straces again?
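
For reference, I'd grab them along these lines (PID placeholder):

$ strace -f -tt -p <qemu_pid> -o qemu-strace.log    # follow threads, microsecond timestamps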

#13

Updated by Christian Theune about 8 years ago

I had a small epiphany about our pool setup today that might be the cause of this: we run a pool per customer and clone the base images from a separate, shared pool. Could it be that flattening images whose parents live in a different pool confuses librbd?
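
In other words, the setup looks roughly like this (pool names other than rbd/ are illustrative):

$ rbd clone rbd/fc-15.09-dev@da5e5ffc6 customer-a/vm01.root   # parent image in the shared pool, clone in the customer pool
$ rbd flatten customer-a/vm01.root                            # the flatten then crosses the pool boundary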
