Bug #489
Memory leak when doing a lot of I/O
Status: closed
Description
I have a virtual machine with the following configuration:
<memory>4194304</memory>
<currentMemory>4194304</currentMemory>
<vcpu>1</vcpu>
<os>
  <type arch='x86_64' machine='pc-0.12'>hvm</type>
  <boot dev='hd'/>
  <boot dev='cdrom'/>
</os>
.....
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha'/>
  <target dev='vda' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-second'/>
  <target dev='vdb' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-third'/>
  <target dev='vdc' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-fourth'/>
  <target dev='vdd' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-fifth'/>
  <target dev='vde' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-sixth'/>
  <target dev='vdf' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-seventh'/>
  <target dev='vdg' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-eigth'/>
  <target dev='vdh' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-ninth'/>
  <target dev='vdi' bus='virtio'/>
</disk>
<disk type='virtual' device='disk'>
  <driver name='qemu' type='rbd' cache='writeback' aio='native'/>
  <source path='rbd:rbd/alpha-tenth'/>
  <target dev='vdj' bus='virtio'/>
</disk>
These are all 64 GB disks; inside the VM I use LVM to group them into one VG with some LVs.
root@client01:~# rbd info alpha-second
rbd image 'alpha-second':
        size 65536 MB in 16384 objects
        order 22 (4096 KB objects)
root@client01:~# rbd info alpha-tenth
rbd image 'alpha-tenth':
        size 65536 MB in 16384 objects
        order 22 (4096 KB objects)
root@client01:~#
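As a sanity check on the geometry rbd reports, the image size follows directly from the object count and the order (order 22 means 2^22-byte objects); a quick shell calculation:

```shell
# rbd reports: size 65536 MB in 16384 objects, order 22 (4096 KB objects)
objects=16384
object_kb=$(( (1 << 22) / 1024 ))          # order 22 -> 2^22 bytes = 4096 KB per object
total_mb=$(( objects * object_kb / 1024 )) # total image size in MB
echo "${total_mb} MB"                      # matches the reported 65536 MB (64 GB)
```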
Inside the VM I first tried mirroring Ubuntu's ISOs, about 40 GB of data, with no problem; at that point I had 5 disks attached.
Then I expanded to the ten disks and tried to mirror Debian's CD images from rsync://mirrors.nl.kernel.org/debian-cd/
While doing so, the memory usage of the qemu process keeps growing until it reaches about 70%, at which point the OOM killer kicks in and kills the process.
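To quantify the growth while reproducing, the qemu process's resident set size can be sampled from /proc. A minimal sketch; the guest name "alpha" in the pgrep pattern and the 10-second interval are assumptions:

```shell
# sample_rss PID: print a process's resident set size (VmRSS) in kB
sample_rss() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Hypothetical usage: watch the qemu process for the guest named "alpha"
# pid=$(pgrep -f 'qemu-system-x86_64.*alpha' | head -n1)
# while kill -0 "$pid" 2>/dev/null; do
#     echo "$(date +%T) $(sample_rss "$pid") kB"
#     sleep 10
# done
```

Logging the samples makes it easy to see whether the RSS growth tracks the amount of I/O done or grows without bound.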
[115005.005783] qemu-system-x86 invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
[115005.005790] qemu-system-x86 cpuset=/ mems_allowed=0
[115005.005795] Pid: 21522, comm: qemu-system-x86 Not tainted 2.6.36-rc7-rbd-20410-g47d3df7 #4
As you can see, I'm running 2.6.36-rc7 (master branch) with:
- qemu-kvm-0.12.3
- Latest RBD code (backported) ( http://zooi.widodh.nl/ceph/qemu-kvm/qemu-kvm_0.12.3+noroms/rbd-support.patch )
I also ran the test with an extra attached 500 GB disk; I thought the problem might be due to the I/O being spread out over the disks, but even with a single disk I saw the memory growth.
root@alpha:~# df -h /srv/mirror/debian-cd
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdk        493G  3.4G  464G   1% /srv/mirror/debian-cd
root@alpha:~#
Currently the mirror to that disk is still running, and the memory growth is still present; it won't take long before the OOM killer gets invoked again.