Brano Zarnovican
2013-Aug-07 12:24 UTC
[libvirt-users] libvirt possibly ignoring cache=none ?
Hi,

I have an instance with 8G of RAM assigned. All block devices have caching disabled (cache=none) on the host. However, the cgroup is reporting 4G of cache associated with the instance (on the host):

    # cgget -r memory.stat libvirt/qemu/i-000009fa
    libvirt/qemu/i-000009fa:
    memory.stat: cache 4318011392
            rss 8676360192
            ...

When I drop all system caches on the host..

    # echo 3 > /proc/sys/vm/drop_caches
    #

..the cache associated with the instance drops too:

    # cgget -r memory.stat libvirt/qemu/i-000009fa
    libvirt/qemu/i-000009fa:
    memory.stat: cache 122880
            rss 8674291712
            ...

Can somebody explain what is cached, if there is cache=none everywhere?

Thanks,
Brano Zarnovican

PS: versions:
Scientific Linux release 6.4 (Carbon)
kernel-2.6.32-358.11.1.el6.x86_64
qemu-kvm-0.12.1.2-2.355.el6_4.5.x86_64
libvirt-0.10.2-18.el6_4.5.x86_64
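A quick way to confirm that the cache=none setting actually reached qemu is to compare the domain XML with the running process's command line. A minimal sketch, assuming the domain name i-000009fa from the post above and the qemu-kvm binary name used on EL6:

    # each <disk> <driver> element should carry cache='none'
    virsh dumpxml i-000009fa | grep "cache="

    # qemu should have been started with cache=none on every -drive
    ps -o args= -C qemu-kvm | tr ',' '\n' | grep "cache="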
Martin Kletzander
2013-Aug-08 07:39 UTC
Re: [libvirt-users] libvirt possibly ignoring cache=none ?
On 08/07/2013 02:24 PM, Brano Zarnovican wrote:
> Hi,
>
> I have an instance with 8G of RAM assigned. All block devices have
> caching disabled (cache=none) on the host. However, the cgroup is
> reporting 4G of cache associated with the instance (on the host):
>
> # cgget -r memory.stat libvirt/qemu/i-000009fa
> libvirt/qemu/i-000009fa:
> memory.stat: cache 4318011392
>         rss 8676360192
>         ...
>
> When I drop all system caches on the host..
>
> # echo 3 > /proc/sys/vm/drop_caches
> #
>
> ..the cache associated with the instance drops too:
>
> # cgget -r memory.stat libvirt/qemu/i-000009fa
> libvirt/qemu/i-000009fa:
> memory.stat: cache 122880
>         rss 8674291712
>         ...
>
> Can somebody explain what is cached, if there is cache=none everywhere?

First, let me explain that libvirt is not ignoring cache=none. The setting is propagated to qemu as a parameter for its disk. From qemu's POV (anyone feel free to correct me if I'm mistaken) this means the file is opened with the O_DIRECT flag, and per the open(2) manual, O_DIRECT means "Try to minimize cache effects of the I/O to and from this file...", which doesn't necessarily mean there is no cache at all.

But even if it did, this applies only to the files used as disks, and those disks are not the only files the process is using. You can check what other files the process has mapped, opened, etc. from the '/proc' filesystem or with the 'lsof' utility. All the other files can (and probably will) take some cache, and there is nothing wrong with that.

Are you trying to resolve an issue, or asking just out of curiosity? This is wanted behavior and there should be no need for anyone to minimize it.

Have a nice day,
Martin
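Martin's pointer to '/proc' can be turned into a concrete check. A minimal sketch, assuming the instance name from the thread appears in the qemu command line and an x86 host, where O_DIRECT shows up as the octal bit 040000 in the fdinfo 'flags' field; the fd number 11 is only a placeholder:

    # find the qemu process backing the instance
    pid=$(pgrep -f i-000009fa)

    # list everything the process has open: disk images, logs, sockets, ...
    ls -l /proc/$pid/fd

    # inspect the open flags of one descriptor; a disk opened with
    # cache=none has the O_DIRECT bit (040000 on x86) set in the
    # octal 'flags' line, e.g. flags: 0140002
    cat /proc/$pid/fdinfo/11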
Brano Zarnovican
2013-Aug-08 15:03 UTC
Re: [libvirt-users] libvirt possibly ignoring cache=none ?
On Thu, Aug 8, 2013 at 9:39 AM, Martin Kletzander <mkletzan@redhat.com> wrote:
> First, let me explain that libvirt is not ignoring cache=none. The
> setting is propagated to qemu as a parameter for its disk. From qemu's
> POV (anyone feel free to correct me if I'm mistaken) this means the
> file is opened with the O_DIRECT flag, and per the open(2) manual,
> O_DIRECT means "Try to minimize cache effects of the I/O to and from
> this file...", which doesn't necessarily mean there is no cache at all.

Thanks for the explanation.

> But even if it did, this applies only to the files used as disks, and
> those disks are not the only files the process is using. You can check
> what other files the process has mapped, opened, etc. from the '/proc'
> filesystem or with the 'lsof' utility. All the other files can (and
> probably will) take some cache, and there is nothing wrong with that.

In my case there was 4GB of cache. Just now, I thrashed one instance with many reads and writes on various devices, tens of GB of data in total, but the cache (on the host) did not grow beyond 3MB. I'm not yet able to reproduce the problem.

> Are you trying to resolve an issue, or asking just out of curiosity?
> This is wanted behavior and there should be no need for anyone to
> minimize it.

Once or twice, one of our VMs was OOM-killed because its cgroup reached its memory limit (which libvirt set to roughly 1.5 * the instance RAM). Here is an 8GB instance: libvirt created a cgroup with a 12.3GB memory limit, which we have filled to 98%:

    [root@dev-cmp08 ~]# cgget -r memory.limit_in_bytes -r memory.usage_in_bytes libvirt/qemu/i-000009fa
    libvirt/qemu/i-000009fa:
    memory.limit_in_bytes: 13215727616
    memory.usage_in_bytes: 12998287360

The 4G difference is the cache. That's why I'm so interested in what is consuming the cache for a VM which should be caching in the guest only.

Regards,
Brano Zarnovican
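The numbers in this message and the first one already account for the reported usage: cgroup v1 charges page cache to the cgroup whose task first touches a page, and memory.usage_in_bytes is roughly rss + cache from memory.stat. A worked check using the values quoted in the thread:

    # rss + cache from the first memory.stat sample
    echo $(( 8676360192 + 4318011392 ))    # 12994371584, about 12.1G

    # which is close to the reported usage_in_bytes of 12998287360,
    # i.e. the host-side cache is what pushes the cgroup toward its
    # 13215727616-byte limit
    cgget -r memory.usage_in_bytes libvirt/qemu/i-000009fa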