Allence
2018-Aug-09 03:53 UTC
Re: [libvirt-users] Windows Guest I/O performance issues (already using virtio) (Matt Schumacher)
I think performance is not just about your XML; the host system will have a bigger impact. Maybe you can see this link:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html-single/virtualization_tuning_and_optimization_guide/index

>Date: Wed, 8 Aug 2018 17:35:11 +0000
>From: Matt Schumacher <matt.s@aptalaska.com>
>To: "libvirt-users@redhat.com" <libvirt-users@redhat.com>
>Subject: [libvirt-users] Windows Guest I/O performance issues (already
> using virtio)
>Message-ID: <30EE4896-CF82-4E23-82D5-0C56B38F5E09@aptalaska.com>
>Content-Type: text/plain; charset="utf-8"
>
>List,
>
>I have a number of Windows 2016 servers I am deploying, but I'm having some I/O performance issues. I have done all of the obvious things like virtio drivers, but am finding there is more performance to be found with Hyper-V extensions, how we virtualize the hardware clock, and iothreads. I'm using ZVOLs to back the VM, and I'm using 4k block sizes, which seems to offer the best 4k random read/write performance (mail and database workloads), but maybe I'm missing something at this layer too.
>
>Questions:
>
> 1. Does my VM config look reasonable for the latest releases of Windows? Are there features I should be using that will help performance?
> 2. Why does the hypervclock timer make so much performance difference in Windows VMs?
> 3. Does my virtualized CPU model make sense? I defined Haswell-noTSX-IBRS and libvirt added the features.
> 4. Which kernel branch offers the best stability and performance?
> 5. Are there performance gains in using UEFI booting the Windows guest and defining "<blockio logical_block_size='4096' physical_block_size='4096'/>"? Perhaps better block size consistency through to the zvol?
>
>Here is my setup:
>
>48 core Haswell CPU
>192G RAM
>Linux 4.14.61 or 4.9.114 (testing both)
>ZFS file system on Optane SSD drive or ZFS file system on dumb HBA with 8 spindles of 15k disks (testing both)
>4k block size zvol for virtual machines
>32G ARC cache
>
>Here is my VM:
>
><domain type='kvm' id='12'>
>  <name>testvm</name>
>  <memory unit='KiB'>33554432</memory>
>  <currentMemory unit='KiB'>33554432</currentMemory>
>  <vcpu placement='static'>12</vcpu>
>  <iothreads>1</iothreads>
>  <os>
>    <type arch='x86_64' machine='pc-i440fx-2.12'>hvm</type>
>    <boot dev='cdrom'/>
>    <boot dev='hd'/>
>  </os>
>  <features>
>    <acpi/>
>    <hyperv>
>      <relaxed state='on'/>
>      <vapic state='on'/>
>      <spinlocks state='on' retries='8191'/>
>      <vpindex state='on'/>
>      <runtime state='on'/>
>      <synic state='on'/>
>      <reset state='on'/>
>      <vendor_id state='on' value='KVM Hv'/>
>    </hyperv>
>  </features>
>  <cpu mode='custom' match='exact' check='full'>
>    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
>    <topology sockets='1' cores='6' threads='2'/>
>    <feature policy='require' name='vme'/>
>    <feature policy='require' name='f16c'/>
>    <feature policy='require' name='rdrand'/>
>    <feature policy='require' name='hypervisor'/>
>    <feature policy='require' name='arat'/>
>    <feature policy='disable' name='spec-ctrl'/>
>    <feature policy='require' name='xsaveopt'/>
>    <feature policy='require' name='abm'/>
>  </cpu>
>  <clock offset='localtime'>
>    <timer name='rtc' tickpolicy='catchup'/>
>    <timer name='pit' tickpolicy='delay'/>
>    <timer name='hpet' present='yes'/>
>    <timer name='hypervclock' present='yes'/>
>  </clock>
>  <on_poweroff>destroy</on_poweroff>
>  <on_reboot>restart</on_reboot>
>  <on_crash>destroy</on_crash>
>  <devices>
>    <emulator>/usr/bin/qemu-system-x86_64</emulator>
>    <disk type='block' device='disk'>
>      <driver name='qemu' type='raw' cache='none' io='native' ioeventfd='on' iothread='1'/>
>      <source dev='/dev/zvol/datastore/vm/testvm-vda'/>
>      <backingStore/>
>      <target dev='vda' bus='virtio'/>
>      <alias name='virtio-disk0'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
>    </disk>
>    <disk type='file' device='cdrom'>
>      <driver name='qemu'/>
>      <target dev='hdc' bus='ide'/>
>      <readonly/>
>      <alias name='ide0-1-0'/>
>      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
>    </disk>
>    <controller type='ide' index='0'>
>      <alias name='ide'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
>    </controller>
>    <controller type='usb' index='0' model='piix3-uhci'>
>      <alias name='usb'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
>    </controller>
>    <controller type='pci' index='0' model='pci-root'>
>      <alias name='pci.0'/>
>    </controller>
>    <controller type='virtio-serial' index='0'>
>      <alias name='virtio-serial0'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
>    </controller>
>    <interface type='bridge'>
>      <source bridge='lan'/>
>      <target dev='vnet0'/>
>      <model type='virtio'/>
>      <alias name='net0'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>    </interface>
>    <channel type='unix'>
>      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-12-testvm/org.qemu.guest_agent.0'/>
>      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
>      <alias name='channel0'/>
>      <address type='virtio-serial' controller='0' bus='0' port='1'/>
>    </channel>
>    <input type='tablet' bus='usb'>
>      <alias name='input0'/>
>      <address type='usb' bus='0' port='1'/>
>    </input>
>    <input type='mouse' bus='ps2'>
>      <alias name='input1'/>
>    </input>
>    <input type='keyboard' bus='ps2'>
>      <alias name='input2'/>
>    </input>
>    <graphics type='vnc' port='5901' autoport='no' listen='0.0.0.0'>
>      <listen type='address' address='0.0.0.0'/>
>    </graphics>
>    <video>
>      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
>      <alias name='video0'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
>    </video>
>    <memballoon model='none'/>
>  </devices>
>  <seclabel type='dynamic' model='dac' relabel='yes'>
>    <label>+0:+100</label>
>    <imagelabel>+0:+100</imagelabel>
>  </seclabel>
></domain>
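As an illustration of the kind of host-side tuning the linked guide covers, hugepage backing plus pinning the vCPUs, the iothread, and the emulator threads often matters more than the device XML. This is only a sketch, not taken from the config above; the host core numbers are placeholders and should be chosen on the NUMA node that owns the storage, and hugepages must be reserved on the host first:

<memoryBacking>
  <hugepages/>
</memoryBacking>
<cputune>
  <!-- placeholder host cores; pick cores on the NUMA node local to the Optane/HBA -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <iothreadpin iothread='1' cpuset='4'/>
  <emulatorpin cpuset='5'/>
</cputune>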
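On question 5: if you do try advertising 4k sectors to the guest, <blockio> is a child of the <disk> element. A minimal sketch against the virtio disk quoted above (same zvol path; whether it actually helps the mail/database workloads is something only a benchmark of your setup will show):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' ioeventfd='on' iothread='1'/>
      <source dev='/dev/zvol/datastore/vm/testvm-vda'/>
      <!-- advertise the zvol's 4k sector size to the guest -->
      <blockio logical_block_size='4096' physical_block_size='4096'/>
      <target dev='vda' bus='virtio'/>
    </disk>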