Hi,

How can I improve performance of Windows XP / Windows Server 2003 using .vmdk files?

The guests are extremely slow on .vmdk files compared to raw zvols.

--
Regards,
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com
Piotr Jasiukajtis wrote:
> Hi,
>
> How can I improve performance of Windows XP / Windows Server 2003
> using .vmdk files?
>
> The guests are extremely slow on .vmdk files compared to raw zvols.

This is normal. The problem, especially when you use sparse files, is that the data gets fragmented on disk; on top of that, the additional file system and management layers slow the whole thing down. So it is best to use raw devices; they normally achieve nearly native disk performance.

Florian
Piotr Jasiukajtis wrote:
> Hi,
>
> How can I improve performance of Windows XP / Windows Server 2003
> using .vmdk files?
>
> The guests are extremely slow on .vmdk files compared to raw zvols.

They shouldn't be...

Are you using PV drivers? What's the disk entry on the guest?

  xm list -l <guest>

e.g. here's one of mine..

  (device
      (tap
          (uuid b6493411-5a81-798c-8683-d3e60e817e5a)
          (bootable 0)
          (dev hda:disk)
          (uname tap:vdisk:/tank/guests/windows7/disk0)
          (mode w)
          (backend 0)
      )

How did you create the vmdk file? With vdiskadm?

Is the disk on a local filesystem? If so, UFS or ZFS?

If ZFS, did you set the record size to 8k? (zvols default to 8K.)

i.e.

  # zfs get recordsize tank/guests

  : alpha[1]#; zfs set recordsize=8k tank/guests
  : alpha[1]#; zfs get recordsize tank/guests
  NAME         PROPERTY    VALUE  SOURCE
  tank/guests  recordsize  8K     local
  : alpha[1]#;

NOTE: the record size applies to files created *after* it is set. Anything created before you modify it continues to use the old setting.

MRJ
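As a point of reference, a minimal sketch of creating a vmdk-backed vdisk with vdiskadm on a dataset that already has the 8K record size. The -s/-t option syntax here is an assumption and should be checked against the vdiskadm(1M) man page on your build; the 10g size and the path are illustrative only:

  # zfs set recordsize=8k tank/guests
  # vdiskadm create -s 10g -t vmdk:sparse /tank/guests/winxp/disk0

The resulting path would then be referenced from the guest config as tap:vdisk:/tank/guests/winxp/disk0, as in the device entry shown above.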
On Mon, Feb 16, 2009 at 4:55 PM, Mark Johnson <Mark.Johnson@sun.com> wrote:
> Piotr Jasiukajtis wrote:
>> Hi,
>>
>> How can I improve performance of Windows XP / Windows Server 2003
>> using .vmdk files?
>>
>> The guests are extremely slow on .vmdk files compared to raw zvols.
>
> They shouldn't be...
>
> Are you using PV drivers? What's the disk entry on the guest?

No PV drivers.

> xm list -l <guest>
>
> e.g. here's one of mine..
>
> (device
>     (tap
>         (uuid b6493411-5a81-798c-8683-d3e60e817e5a)
>         (bootable 0)
>         (dev hda:disk)
>         (uname tap:vdisk:/tank/guests/windows7/disk0)
>         (mode w)
>         (backend 0)
>     )

(device
    (tap
        (uname tap:vdisk:/export/xvm/isos/test.vmdk)
        (uuid 64df266d-6b58-0d26-ab4c-29a2b7d21c2f)
        (mode w)
        (dev hda:disk)
        (backend 0)
        (bootable 1)
    )
)

> How did you create the vmdk file? With vdiskadm?

No, it was created from a physical machine via VMware Converter.

> Is the disk on a local filesystem? If so, UFS or ZFS?

Local disk, ZFS root. SXCE107.

> If ZFS, did you set the record size to 8k? (zvols default to 8K.)
>
> i.e.
>
> # zfs get recordsize tank/guests
>
> : alpha[1]#; zfs set recordsize=8k tank/guests
> : alpha[1]#; zfs get recordsize tank/guests
> NAME         PROPERTY    VALUE  SOURCE
> tank/guests  recordsize  8K     local
> : alpha[1]#;

My vmdk file:

# ls -alh /export/xvm/isos/test.vmdk
-rw------- 1 xvm root 1,9G lut 17 11:44 /export/xvm/isos/test.vmdk

# zfs get recordsize rpool/export/xvm/isos
NAME                   PROPERTY    VALUE  SOURCE
rpool/export/xvm/isos  recordsize  128K   default

# zfs get recordsize rpool/export/xvm
NAME              PROPERTY    VALUE  SOURCE
rpool/export/xvm  recordsize  128K   default

My zvol:

# zfs get recordsize rpool/export/xvm/winsrv2_c
NAME                        PROPERTY    VALUE  SOURCE
rpool/export/xvm/winsrv2_c  recordsize  -      -

--
Regards,
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com
Piotr Jasiukajtis wrote:
> On Mon, Feb 16, 2009 at 4:55 PM, Mark Johnson <Mark.Johnson@sun.com> wrote:
>> Piotr Jasiukajtis wrote:
>>> Hi,
>>>
>>> How can I improve performance of Windows XP / Windows Server 2003
>>> using .vmdk files?
>>>
>>> The guests are extremely slow on .vmdk files compared to raw zvols.
>> They shouldn't be...
>>
>> Are you using PV drivers? What's the disk entry on the guest?
> No PV drivers.

You really should be using PV drivers. IO performance is very bad without them :-)

Without PV drivers, all the disk accesses are done in the qemu code.. i.e. you're not using a block backend driver at all (although they are loaded in case you install PV drivers).

>> xm list -l <guest>
>>
>> e.g. here's one of mine..
>>
>> (device
>>     (tap
>>         (uuid b6493411-5a81-798c-8683-d3e60e817e5a)
>>         (bootable 0)
>>         (dev hda:disk)
>>         (uname tap:vdisk:/tank/guests/windows7/disk0)
>>         (mode w)
>>         (backend 0)
>>     )
>
> (device
>     (tap
>         (uname tap:vdisk:/export/xvm/isos/test.vmdk)
>         (uuid 64df266d-6b58-0d26-ab4c-29a2b7d21c2f)
>         (mode w)
>         (dev hda:disk)
>         (backend 0)
>         (bootable 1)
>     )
> )

Looks good.

>> How did you create the vmdk file? With vdiskadm?
> No, it was created from a physical machine via VMware Converter.

OK. On a side note, one of these days we will be putting back convert functionality so you can move between a vmdk, vdi, vhd, a zvol, a disk, etc.

>> Is the disk on a local filesystem? If so, UFS or ZFS?
> Local disk, ZFS root. SXCE107.

I assume you are limiting the ZFS ARC cache? You have to do this for dom0.

>> If ZFS, did you set the record size to 8k? (zvols default to 8K.)
>>
>> i.e.
>>
>> # zfs get recordsize tank/guests
>>
>> : alpha[1]#; zfs set recordsize=8k tank/guests
>> : alpha[1]#; zfs get recordsize tank/guests
>> NAME         PROPERTY    VALUE  SOURCE
>> tank/guests  recordsize  8K     local
>> : alpha[1]#;
>
> My vmdk file:
>
> # ls -alh /export/xvm/isos/test.vmdk
> -rw------- 1 xvm root 1,9G lut 17 11:44 /export/xvm/isos/test.vmdk
>
> # zfs get recordsize rpool/export/xvm/isos
> NAME                   PROPERTY    VALUE  SOURCE
> rpool/export/xvm/isos  recordsize  128K   default

Can you try setting this to 8k, then moving the file out, then back into the zfs filesystem?

MRJ

> # zfs get recordsize rpool/export/xvm
> NAME              PROPERTY    VALUE  SOURCE
> rpool/export/xvm  recordsize  128K   default
>
> My zvol:
>
> # zfs get recordsize rpool/export/xvm/winsrv2_c
> NAME                        PROPERTY    VALUE  SOURCE
> rpool/export/xvm/winsrv2_c  recordsize  -      -
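A rough sketch of both suggestions, using the dataset and file paths from above. The ARC cap goes in /etc/system (the 1 GB value is only an example, and it takes effect on the next dom0 reboot), and rewriting the existing image is what makes it pick up the new recordsize, since only files written after the change use it. The /var/tmp staging location is arbitrary:

  # echo "set zfs:zfs_arc_max = 0x40000000" >> /etc/system
  (reboot dom0 so the ARC cap takes effect)

  # zfs set recordsize=8k rpool/export/xvm/isos
  # mv /export/xvm/isos/test.vmdk /var/tmp/test.vmdk
  # cp /var/tmp/test.vmdk /export/xvm/isos/test.vmdk
  # rm /var/tmp/test.vmdk

The copy back onto the dataset rewrites the vmdk's blocks at the new 8K record size, which then matches the guest's typical I/O size much more closely than the 128K default.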
On Tue, Feb 17, 2009 at 2:37 PM, Mark Johnson <Mark.Johnson@sun.com> wrote:
>>> Are you using PV drivers? What's the disk entry on the guest?
>>
>> No PV drivers.
>
> You really should be using PV drivers. IO performance
> is very bad without them :-)
>
> Without PV drivers, all the disk accesses are done
> in the qemu code.. i.e. you're not using a block backend driver
> at all (although they are loaded in case you install
> PV drivers).

I tried PV drivers on another host, and there is a long way to go to improve performance of HVM systems (Windows, S10).

>>> How did you create the vmdk file? With vdiskadm?
>>
>> No, it was created from a physical machine via VMware Converter.
>
> OK. On a side note, one of these days we will be
> putting back convert functionality so you can
> move between a vmdk, vdi, vhd, a zvol, a disk, etc.

I guess people are waiting for these days :)

>>> Is the disk on a local filesystem? If so, UFS or ZFS?
>>
>> Local disk, ZFS root. SXCE107.
>
> I assume you are limiting the ZFS ARC cache? You have to
> do this for dom0.

Right. Anyway, I found there are some issues with local ZFS pools and dom0.
Sometimes 'zfs snapshot' from dom0 can kill (halt) the machine.
I guess you are aware of that?

>>> If ZFS, did you set the record size to 8k? (zvols default to 8K.)
>>>
>>> i.e.
>>>
>>> # zfs get recordsize tank/guests
>>>
>>> : alpha[1]#; zfs set recordsize=8k tank/guests
>>> : alpha[1]#; zfs get recordsize tank/guests
>>> NAME         PROPERTY    VALUE  SOURCE
>>> tank/guests  recordsize  8K     local
>>> : alpha[1]#;
>>
>> My vmdk file:
>>
>> # ls -alh /export/xvm/isos/test.vmdk
>> -rw------- 1 xvm root 1,9G lut 17 11:44 /export/xvm/isos/test.vmdk
>>
>> # zfs get recordsize rpool/export/xvm/isos
>> NAME                   PROPERTY    VALUE  SOURCE
>> rpool/export/xvm/isos  recordsize  128K   default
>
> Can you try setting this to 8k, then moving the file
> out, then back into the zfs filesystem?

I will try it next time.
I don't have that machine anymore.

--
Regards,
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com
Piotr Jasiukajtis wrote:
> On Tue, Feb 17, 2009 at 2:37 PM, Mark Johnson <Mark.Johnson@sun.com> wrote:
>>>> Are you using PV drivers? What's the disk entry on the guest?
>>> No PV drivers.
>> You really should be using PV drivers. IO performance
>> is very bad without them :-)
>>
>> Without PV drivers, all the disk accesses are done
>> in the qemu code.. i.e. you're not using a block backend driver
>> at all (although they are loaded in case you install
>> PV drivers).
> I tried PV drivers on another host, and there is a long way to go to
> improve performance of HVM systems (Windows, S10).

With respect to metal, or to other virtualization platforms? Were the guests MP? Did you give dom0 some dedicated CPU cores?

>>>> How did you create the vmdk file? With vdiskadm?
>>> No, it was created from a physical machine via VMware Converter.
>> OK. On a side note, one of these days we will be
>> putting back convert functionality so you can
>> move between a vmdk, vdi, vhd, a zvol, a disk, etc.
> I guess people are waiting for these days :)
>
>>>> Is the disk on a local filesystem? If so, UFS or ZFS?
>>> Local disk, ZFS root. SXCE107.
>> I assume you are limiting the ZFS ARC cache? You have to
>> do this for dom0.
> Right. Anyway, I found there are some issues with local ZFS pools and dom0.
> Sometimes 'zfs snapshot' from dom0 can kill (halt) the machine.
> I guess you are aware of that?

No I wasn't.. What build are the dom0 bits?

>>>> If ZFS, did you set the record size to 8k? (zvols default to 8K.)
>>>>
>>>> i.e.
>>>>
>>>> # zfs get recordsize tank/guests
>>>>
>>>> : alpha[1]#; zfs set recordsize=8k tank/guests
>>>> : alpha[1]#; zfs get recordsize tank/guests
>>>> NAME         PROPERTY    VALUE  SOURCE
>>>> tank/guests  recordsize  8K     local
>>>> : alpha[1]#;
>>> My vmdk file:
>>>
>>> # ls -alh /export/xvm/isos/test.vmdk
>>> -rw------- 1 xvm root 1,9G lut 17 11:44 /export/xvm/isos/test.vmdk
>>>
>>> # zfs get recordsize rpool/export/xvm/isos
>>> NAME                   PROPERTY    VALUE  SOURCE
>>> rpool/export/xvm/isos  recordsize  128K   default
>> Can you try setting this to 8k, then moving the file
>> out, then back into the zfs filesystem?
> I will try it next time.
> I don't have that machine anymore.

OK, thanks.

MRJ
On Mon, Mar 2, 2009 at 3:37 PM, Mark Johnson <Mark.Johnson@sun.com> wrote:
>> I tried PV drivers on another host, and there is a long way to go to
>> improve performance of HVM systems (Windows, S10).
>
> With respect to metal, or to other virtualization platforms?

With respect to the metal and to PV domUs like CentOS 5, SXCE and 2008.11.

> Were the guests MP? Did you give dom0 some dedicated CPU cores?

No, I didn't give any dedicated CPU.

>> Right. Anyway, I found there are some issues with local ZFS pools and dom0.
>> Sometimes 'zfs snapshot' from dom0 can kill (halt) the machine.
>> I guess you are aware of that?
>
> No I wasn't.. What build are the dom0 bits?

SXCE104.

--
Regards,
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com
Piotr Jasiukajtis wrote:
> On Mon, Mar 2, 2009 at 3:37 PM, Mark Johnson <Mark.Johnson@sun.com> wrote:
>>> I tried PV drivers on another host, and there is a long way to go to
>>> improve performance of HVM systems (Windows, S10).
>> With respect to metal, or to other virtualization platforms?
> With respect to the metal and to PV domUs like CentOS 5, SXCE and 2008.11.

Ah, OK.. You need to do some fine tuning with HVM guests, and we have more performance work to do there too..

You should have some CPUs dedicated to dom0 for HVM guests. There is a qemu process per HVM guest which runs in dom0.. You don't want the qemu process scheduled on a dom0 CPU that is being used by another guest.

e.g. you can put something like the following in your xen.gz menu.lst entry (which will put dom0 only on cpus 0 and 1):

  dom0_max_vcpus=2 dom0_vcpus_pin=true

You should be running as little as possible in dom0: qemu, IO, and domain management.

If you're doing MP HVM, you really need to dedicate CPUs to the HVM guest. Since Xen doesn't currently have a gang scheduler, things can degrade fast if the guest's vCPUs are not running at the same time.

Even with that, we have some perf work to do with HVM domains.. We will be spending time on that with the 3.3 work.

MRJ

>> Were the guests MP? Did you give dom0 some
>> dedicated CPU cores?
> No, I didn't give any dedicated CPU.
>
>>> Right. Anyway, I found there are some issues with local ZFS pools and dom0.
>>> Sometimes 'zfs snapshot' from dom0 can kill (halt) the machine.
>>> I guess you are aware of that?
>> No I wasn't.. What build are the dom0 bits?
> SXCE104.
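For context, those two options go on the xen.gz line of the dom0 GRUB entry. A sketch of what such a Solaris xVM menu.lst entry might look like; everything except the dom0_* options appended to the xen.gz line is illustrative and should match whatever your existing xVM entry already contains (pool name, boot archive paths, etc.):

  title Solaris xVM (dom0 pinned to 2 vcpus)
  findroot (pool_rpool,0,a)
  kernel$ /boot/$ISADIR/xen.gz dom0_max_vcpus=2 dom0_vcpus_pin=true
  module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix
  module$ /platform/i86pc/$ISADIR/boot_archive

After rebooting into that entry, xm vcpu-list should show Domain-0 restricted to CPUs 0 and 1, leaving the remaining cores free for the guests.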