Matt Schumacher
2018-Aug-08 17:35 UTC
[libvirt-users] Windows Guest I/O performance issues (already using virtio)
List,

I have a number of Windows 2016 servers I am deploying, but I'm having some I/O performance issues. I have done the obvious things like installing virtio drivers, but I'm finding there is more performance to be had from the Hyper-V extensions, from how the hardware clock is virtualized, and from iothreads. I'm using ZVOLs to back the VMs with a 4k block size, which seems to offer the best 4k random read/write performance (mail and database workloads), but maybe I'm missing something at this layer too.

Questions:

1. Does my VM config look reasonable for the latest releases of Windows? Are there features I should be using that would help performance?
2. Why does the hypervclock timer make so much performance difference in Windows VMs?
3. Does my virtualized CPU model make sense? I defined Haswell-noTSX-IBRS and libvirt added the features.
4. Which kernel branch offers the best stability and performance?
5. Are there performance gains in booting the Windows guest with UEFI and defining "<blockio logical_block_size='4096' physical_block_size='4096'/>"? Perhaps better block-size consistency through to the zvol?

Here is my setup:

48-core Haswell CPU
192G RAM
Linux 4.14.61 or 4.9.114 (testing both)
ZFS on an Optane SSD, or ZFS on a dumb HBA with 8 spindles of 15k disks (testing both)
4k block size zvols for the virtual machines
32G ARC cache

Here is my VM:

<domain type='kvm' id='12'>
  <name>testvm</name>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <vcpu placement='static'>12</vcpu>
  <iothreads>1</iothreads>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.12'>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <runtime state='on'/>
      <synic state='on'/>
      <reset state='on'/>
      <vendor_id state='on' value='KVM Hv'/>
    </hyperv>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
    <topology sockets='1' cores='6' threads='2'/>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='f16c'/>
    <feature policy='require' name='rdrand'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='disable' name='spec-ctrl'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='abm'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='yes'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' ioeventfd='on' iothread='1'/>
      <source dev='/dev/zvol/datastore/vm/testvm-vda'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <source bridge='lan'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-12-testvm/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5901' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
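As a point of reference for question 5, the UEFI boot and blockio changes being asked about would look roughly like the sketch below in the domain XML. This is a minimal sketch, not the poster's config: the OVMF firmware paths are assumptions that vary by distribution, and a Windows guest installed in BIOS mode generally has to be converted before it will boot under UEFI.

  <os>
    <type arch='x86_64' machine='pc-i440fx-2.12'>hvm</type>
    <!-- firmware paths are distro-dependent assumptions -->
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/testvm_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  ...
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' ioeventfd='on' iothread='1'/>
    <source dev='/dev/zvol/datastore/vm/testvm-vda'/>
    <!-- advertise the zvol's 4k sectors to the guest -->
    <blockio logical_block_size='4096' physical_block_size='4096'/>
    <target dev='vda' bus='virtio'/>
  </disk>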
Gionatan Danti
2018-Aug-08 18:48 UTC
Re: [libvirt-users] Windows Guest I/O performance issues (already using virtio)
On 08-08-2018 19:35 Matt Schumacher wrote:
> I'm using ZVOLs to back the VMs with a 4k block size, which seems to
> offer the best 4k random read/write performance (mail and database
> workloads), but maybe I'm missing something at this layer too.

For the ZFS part: try increasing your volblocksize to 64/128K. While a 4K volblocksize seems tempting (i.e. no read-modify-write, no wasted storage bandwidth, etc.), it suffers from both a) high metadata overhead and b) ineffective compression (which you should *always* enable, using lz4).

Also try disabling sync via sync=disabled. If VM I/O performance gets a noticeable boost, it means you should use a fast SLOG device for the ZIL.

Finally, simply realize that Win10/2016 are very, *very* I/O heavy. I often wonder how Microsoft could release something so I/O starved...

That said, when running your virtual machine entirely on an Optane drive I would not expect slow I/O at all.

Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8
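A minimal shell sketch of the tuning suggested above, assuming a scratch test zvol (testvm-vdb is a hypothetical name, so the existing disk is left alone); fio is one way to run the 4k random read/write test mentioned in the original post:

  # create a test zvol with a larger volblocksize and lz4 compression
  zfs create -V 50G -o volblocksize=64K -o compression=lz4 datastore/vm/testvm-vdb

  # temporarily disable synchronous writes to see whether a SLOG would help
  zfs set sync=disabled datastore/vm/testvm-vdb

  # 4k random read/write benchmark against the raw zvol (destroys its contents)
  fio --name=randrw --filename=/dev/zvol/datastore/vm/testvm-vdb \
      --rw=randrw --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio \
      --direct=1 --runtime=60 --time_based --group_reporting

  # if the sync=disabled run is noticeably faster, re-enable sync and
  # consider adding a fast SLOG device for the ZIL
  zfs set sync=standard datastore/vm/testvm-vdb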