Bug #181
monitor eats 8G of memory before being OOM-killed (status: closed)
Description
Hi, I installed the latest ceph (commit 0c38b3d63dd24fb8b86283de5e00f260a03d4024) and the latest qemu-rbd (commit e6d8dbce416bfdba88056e5fd53f295e6b5aadf6),
did a full restart of the whole cluster, and cleaned out all rados objects by running rados -p rbd rm in a loop.
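The cleanup step described above can be sketched roughly like this (an assumed idiom, not taken verbatim from the report: rados rm deletes one named object per invocation, so the usual approach is to iterate over the pool listing rather than rely on a shell glob, which would expand local filenames instead of pool objects):

```shell
#!/bin/sh
# Sketch: remove every object in the "rbd" pool, one object per
# "rados rm" call. Assumes the rados CLI is on PATH and a cluster
# is reachable; otherwise just report that and do nothing.
pool=rbd
if command -v rados >/dev/null 2>&1; then
    rados -p "$pool" ls | while IFS= read -r obj; do
        rados -p "$pool" rm "$obj"
    done
else
    echo "rados CLI not available; run this against a live cluster"
fi
```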
(By the way, rbdtool was in a bad mood: rbdtool --delete resulted in "terminate called after throwing an instance of 'ceph::buffer::end_of_buffer*'\nAborted".)
Then I started converting a qemu image to rados using qemu-img convert -O rbd disk0.qcow2 rbd:rbd/testqcow.
Memory usage of mon0 grew to 8G (4G RAM + 4G swap) and it got OOM-killed. mon1 then seems to have taken over and the qemu-img command finished. Now that the ceph cluster is idle, mds1 is using 232.4M resident and mon2 is using 4340K.
Please find the log of mon0 attached.
I did not restart mon0, and mon1 and mon2 are left untouched, in case that helps.
Files