On Wed, 2017-06-14 at 15:32 -0300, Thiago Oliveira wrote:
[...]
> I can see other thing, for example, change the hda=IDE to virtio.

I'd say switching the disk from IDE to virtio should be the
very first step - and while you're at it, you might as well
use virtio for the network interface too.

--
Andrea Bolognani / Red Hat / Virtualization
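[For illustration, a minimal sketch of what that switch looks like in the libvirt domain XML - the image path and network name below are placeholders, not taken from this thread:]

```xml
<!-- Hypothetical sketch: disk moved from the IDE bus to virtio.
     The image path is a placeholder. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <!-- bus='virtio' here instead of bus='ide' -->
  <target dev='vda' bus='virtio'/>
</disk>

<!-- Network interface using the virtio model as well. -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```

Note that the guest needs virtio drivers installed before it can boot from a virtio disk.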
Hi,

Thank you for your input. We already tried several tweaks, but without
luck. For example, adding io='native' did not help improve the
performance; it behaved exactly the same way before and after.

I've read somewhere that cache='writethrough' could also help improve
the performance, but we cannot do that because we take live snapshots
to back up the machine while it runs. With caching enabled, we observed
that sometimes an external live snapshot cannot be merged with
blockcommit without the host being shut down.

Would you please explain what <cpu mode='host-passthrough' /> should do
to improve the performance?

Switching from IDE to virtio basically means that the host then knows
that it runs on virtualized hardware and can do things differently? But
it also requires modifying the host with specialized drivers that even
influence the boot process. That feels more like a hack than a solution.

We're astonished that the virtualized IO is so much slower. I could
understand a performance penalty of 10% or even 20%, but a drop from
120Mb/s IO read to 1.4Mb/s IO read is suspicious to every one of us.
We'd have expected at least a throughput of 50Mb/s while reading from
disk, which is more than half the IO that the hardware can do. Please
note that we do not observe the hosting machine peaking at 100% CPU or
IO (using top and iotop) while the virtualized host does some IO. Is
there lock contention or something else going on?

When running a virtualized host, for example with VirtualBox, we don't
see such an impact. What does VirtualBox do differently to improve
virtualized IO, and could that help libvirt/qemu/kvm?

On 2017-06-15 04:08, Andrea Bolognani wrote:
> On Wed, 2017-06-14 at 15:32 -0300, Thiago Oliveira wrote:
> [...]
>> I can see other thing, for example, change the hda=IDE to virtio.
> I'd say switching the disk from IDE to virtio should be the
> very first step - and while you're at it, you might as well
> use virtio for the network interface too.
>
> --
> Andrea Bolognani / Red Hat / Virtualization
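[For reference, a sketch of where the tunables mentioned above live in the domain XML - the image path is a placeholder and the combination shown is illustrative, not a recommendation from this thread:]

```xml
<!-- Hypothetical sketch: cache and io are attributes of the disk's
     <driver> element. The image path is a placeholder. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writethrough' io='native'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- host-passthrough exposes the host CPU model unchanged to the guest;
     it mainly helps CPU-bound workloads rather than disk IO. -->
<cpu mode='host-passthrough'/>
```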
I'm in no way a performance expert, so I can't comment on most of the
points you raise; hopefully someone with more experience in the area
will be able to help you. That said...

On Mon, 2017-06-19 at 12:38 +0200, Dominik Psenner wrote:
> Switching from IDE to virtio basically means that the host then knows
> that it runs on virtualized hardware and can do things differently? But
> it also requires modifying the host with specialized drivers that even
> influence the boot process. That feels more like a hack than a solution.

... I don't see why this would be a problem: installing VirtIO drivers
in a guest is not unlike installing drivers that are tailored to the
specific GPU in your laptop rather than relying on the generic VGA
drivers shipped with the OS. Different hardware, different drivers.

Moreover, recent Windows versions ship Enlightened I/O drivers which
AFAIK do for guests running on Hyper-V pretty much what VirtIO drivers
do for those running on QEMU/KVM.

--
Andrea Bolognani / Red Hat / Virtualization
On Mon, Jun 19, 2017 at 12:38 PM, Dominik Psenner <dpsenner@gmail.com> wrote:
> When running a virtualized host for example with virtual box we don't see
> such an impact. What does virtual box do differently to improve virtualized
> IO and could that help libvirt/qemu/kvm?

Hello, let me jump in here and add some information.

Even VirtualBox, in the virtual storage chapter of its manual
(https://www.virtualbox.org/manual/ch05.html), says:

"In general, you should avoid IDE unless it is the only controller
supported by your guest. Whether you use SATA, SCSI or SAS does not
make any real difference. The variety of controllers is only supplied
for VirtualBox for compatibility with existing hardware and other
hypervisors."

In fact, I just tried with a not-so-new version of VirtualBox (4.3.6 -
I don't use it very often): if I create a VM with Windows 2012 as the
OS, by default it puts the OS disk on top of a virtualized SATA
controller, which should be more efficient. Or are you saying that you
explicitly configured VirtualBox and selected IDE as the controller
type for the guest?

Indeed, I verified that a recent version of virt-manager, when you
configure a Windows 2012 R2 qemu/kvm VM, does set IDE as the
controller - hence the performance problems you see. Did you perhaps
accept the proposed default?

In the past I also tried IDE on vSphere and it had the same performance
problems, because it is fully virtualized and unoptimized. In my
opinion you should set SCSI as the controller type if you have a recent
version of libvirt/qemu/kvm.

That said, I don't know what the level of support for W2016 is at the
moment with the virtio and virtio-scsi drivers.
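[A sketch of the virtio-scsi variant in libvirt domain XML, for illustration - controller index and image path below are placeholders:]

```xml
<!-- Hypothetical sketch: a virtio-scsi controller plus a disk attached
     to it. The image path is a placeholder. -->
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <!-- bus='scsi' routes the disk through the virtio-scsi controller -->
  <target dev='sda' bus='scsi'/>
</disk>
```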
You can download ISO and virtual floppy images here:

https://fedoraproject.org/wiki/Windows_Virtio_Drivers

The method could be to add a new disk with the desired controller
(virtio or virtio-scsi) to the guest, then configure it using the ISO
or VFD images. Then shut down the guest, set the boot disk to the same
virtio or virtio-scsi controller, and try to boot again. Having
installed the drivers, it should reconfigure itself automatically (not
tried with W2012 and W2016). If all goes well, shut down the guest
again and remove the second disk.

Also, you can try to install a new guest, changing the controller and
providing the installation process with the VFD image file.

Try it on a test system to see if it works and if it gives you the
desired performance.

HIH a little,
Gianluca
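[The throwaway second disk from the steps above could be described with a fragment like this - disk image name and target dev are hypothetical - and attached to the guest, e.g. with `virsh attach-device --config`:]

```xml
<!-- Hypothetical sketch: a small temporary disk whose only purpose is
     to make Windows install the virtio storage driver. The image path
     is a placeholder. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/dummy.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Once the guest has picked up the driver for this disk, the boot disk's `<target>` can be switched to `bus='virtio'` and the temporary disk removed.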