Hey,

Up until recently I have been using the gentoo-rebased Xen patches for the 2.6.34 kernel line, and it has been working fine. Recently I decided to upgrade a few of my dom0s to the 3.2 kernel, and everything seems fine except for the Linux guests in HVM mode. Drive read and write is painfully slow, yet Windows guests on that same configuration (or any other variation) give normal/expected results. Paravirtual Linux guests do not seem to be affected, and I have tried both 2.6 and 3.2 kernels in the HVM guests, but there was no difference. Xen version = 4.1.2 stable.

Is there anything that I might be missing?

Thanks in advance,
mario
On 1 April 2012 01:33, Mario <mario@slackverse.org> wrote:
> Recently I decided to upgrade a few of my dom0s to the 3.2 kernel, and
> everything seems fine except for the Linux guests in HVM mode. Drive read
> and write is painfully slow, yet Windows guests on that same configuration
> give normal/expected results.
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xen.org
> http://lists.xen.org/xen-users

Hi Mario,

Are you using PVonHVM for your Linux guests? QEMU-emulated disks aren't fast. Check that you have xen_platform_pci=1 in your config file to make sure the PV drivers are enabled.

Are you using Gentoo guests too? Make sure you compile in the Xen drivers if that is the case. I am running many Gentoo 3.2.1 guests in PVHVM mode with excellent disk and network performance (60k IOPS, 600MB/s+ sequential).

Joseph.

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846
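For reference, the option Joseph mentions goes in the domU config file. A minimal HVM guest config of the sort described might look like this (names, paths, and sizes are made up for illustration only):

```
# /etc/xen/linux-hvm.cfg -- illustrative sketch, not a tested config
kernel  = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
name    = "linux-hvm-test"
memory  = 1024
vif     = [ "bridge=xenbr0" ]
disk    = [ "phy:/dev/vg0/linux-hvm,hda,w" ]
boot    = "c"
# The line in question: expose the Xen platform PCI device so a
# PV-aware guest kernel can attach its PV disk/net frontends.
xen_platform_pci = 1
```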
On 04/01/2012 05:00 AM, Joseph Glanville wrote:
> Are you using PVonHVM for your Linux guests? QEMU-emulated disks aren't
> fast. Check that you have xen_platform_pci=1 in your config file to make
> sure the PV drivers are enabled.

I am not using PVonHVM, any more ideas?

m.
On 02.04.2012 11:39, Mario wrote:
> I am not using PVonHVM, any more ideas?

If you want acceptable performance for disk/net I/O on HVM guests (i.e., when not booting them as PV directly), you'll need to enable PVonHVM as Joseph wrote. I have a set of Gentoo PVonHVM DomUs running on Xen 4.1.2 with gentoo-sources-3.3.0, and performance is absolutely fine.

--
--- Heiko.
On 04/02/2012 11:53 AM, Heiko Wundram wrote:
> If you want acceptable performance for disk/net I/O on HVM guests, you'll
> need to enable PVonHVM as Joseph wrote.

Well, this was not the case with a 2.6.34 dom0 kernel, and since Windows HVM does not need PVonHVM, I don't understand why Linux domUs would need it?

m.
On 02.04.2012 12:15, Mario wrote:
> Well, this was not the case with a 2.6.34 dom0 kernel, and since Windows
> HVM does not need PVonHVM, I don't understand why Linux domUs would need it?

This has nothing to do with the Dom0 kernel. PVonHVM is an interface which exposes the PV infrastructure (i.e., paravirtualized disk and net) to fully virtualized guests. The Linux kernel has contained the respective DomU-side code natively since around 2.6.38, and when using Xenified kernels, the corresponding infrastructure dates back to around 2.6.32 (IIRC).

Windows, of course, does not natively support PVonHVM in any way (except when using the corresponding drivers to enable it), so if you get excessively slower I/O speeds on "fully" virtualized Linux DomUs than you do on Windows DomUs which _don't_ have the corresponding PV drivers installed, something else is amiss here; it'd help if you could describe your setup in a little more detail.

--
--- Heiko.
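If in doubt whether a given guest kernel actually carries the PVonHVM frontends described above, a couple of quick checks from inside the guest will tell you (the config path shown is the common default; adjust for your distro):

```shell
# Look for the PVonHVM-related options in the running kernel's config.
# Falls back to a message if the distro doesn't ship the config there.
grep -E 'CONFIG_XEN_PLATFORM_PCI|CONFIG_XEN_BLKDEV_FRONTEND|CONFIG_XEN_NETDEV_FRONTEND' \
    "/boot/config-$(uname -r)" 2>/dev/null || echo "kernel config not found here"

# On a guest where PVonHVM actually attached, the boot log mentions Xen;
# an empty result here suggests the frontends never bound.
dmesg 2>/dev/null | grep -i 'xen' | head -n 5
```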
On 04/02/2012 12:19 PM, Heiko Wundram wrote:
> Windows, of course, does not natively support PVonHVM in any way (except
> when using the corresponding drivers to enable it), so if you get
> excessively slower I/O speeds on "fully" virtualized Linux DomUs than you
> do on Windows DomUs which _don't_ have the corresponding PV drivers
> installed, something else is amiss here; it'd help if you could describe
> your setup in a little more detail.

Why are we still on the PVonHVM subject? I do not want that; I want regular HVM to work with Linux domUs the same way it works with Windows domUs. I don't have the luxury of installing custom drivers on some domUs, so there is no point in trying to force me to use PVonHVM, because I can't.

So, anyone else please? :-)
On 02.04.2012 13:33, Mario wrote:
> Why are we still on the PVonHVM subject? I do not want that; I want regular
> HVM to work with Linux domUs the same way it works with Windows domUs.

Read my last paragraph again, please: Linux fully virtualized DomUs (which use the corresponding NIC and disk emulation as implemented by qemu) shouldn't perform any different than a Windows DomU, I/O-performance-wise, as both of them use the same infrastructure in Dom0 to do I/O (the qemu process). You're saying that they are different, I/O-wise, so: please be a little more concrete about _what_ the problem is that you're seeing. We don't have crystal balls handy, sorry.

--
--- Heiko.
On 04/02/2012 01:55 PM, Heiko Wundram wrote:
> You're saying that they are different, I/O-wise, so: please be a little
> more concrete about _what_ the problem is that you're seeing.

It's actually quite simple; here is an example: Windows HVM domU disk I/O on my test server is ~60MB/s (sequential read or write). Linux HVM guests (using the same config file template) on the same server give ~10MB/s (sequential read or write). I tried pretty much everything, from tuning the scheduler to changing the kernel. I am not sure what to do with it, other than roll back to kernel 2.6.34 on my dom0.
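For anyone wanting to reproduce these numbers, a rough sequential check along the following lines is enough (file name and size are arbitrary; inside a guest, run it on the virtual disk under test, not on tmpfs):

```shell
# Rough sequential-write check: conv=fdatasync forces the data to disk
# before dd reports, so the MB/s figure isn't just page-cache speed.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1

# Sequential read-back of the same file.
dd if=/tmp/ddtest of=/dev/null bs=1M 2>&1

rm -f /tmp/ddtest
```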
On 2 April 2012 22:49, Mario <mario@slackverse.org> wrote:
> It's actually quite simple; here is an example: Windows HVM domU disk I/O
> on my test server is ~60MB/s (sequential read or write). Linux HVM guests
> (using the same config file template) on the same server give ~10MB/s.

Heiko and I have told you what to do to get decent performance. Some examples of fully set-up PVonHVM guests are available on my file mirror: http://mirror.orionvm.com.au

Unless you use the PV drivers there isn't really a whole lot more I can do for you.

I can't explain why performance would differ under 2.6.34 vs 3.2. This makes no sense, as qemu-dm runs in userspace. You would have had to make some changes to the toolstack for this performance to differ.

Joseph.
On 04/02/2012 08:45 PM, Joseph Glanville wrote:
> I can't explain why performance would differ under 2.6.34 vs 3.2. This
> makes no sense, as qemu-dm runs in userspace. You would have had to make
> some changes to the toolstack for this performance to differ.

Performance differences between dom0 kernels aside, what I don't understand is why a Windows HVM domU works fine while Linux doesn't. Isn't HVM supposed to work the same for every guest, or does Linux actually have something against HVM mode? I simply don't get it.
On 3 April 2012 05:17, Mario <mario@slackverse.org> wrote:
> Performance differences between dom0 kernels aside, what I don't understand
> is why a Windows HVM domU works fine while Linux doesn't. Isn't HVM
> supposed to work the same for every guest?

It's definitely not an optimized use case; however, I have never seen performance as low as you are reporting. There are too many reasons to list as to why performance between Windows HVM and Linux HVM would differ.

What Linux guests are you attempting to run?

Joseph.
On 04/02/2012 09:27 PM, Joseph Glanville wrote:
> What Linux guests are you attempting to run?

Running a Slackware guest; I tried various kernels, and it did not make any difference. The Windows in question is 2008 R2.
On 3 April 2012 05:48, Mario <mario@slackverse.org> wrote:
> Running a Slackware guest; I tried various kernels, and it did not make any
> difference. The Windows in question is 2008 R2.

Try using a 3.0 guest kernel with XEN_PLATFORM_PCI=y and all of the Xen device drivers. This will enable PVHVM and you should be right as rain for performance.
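Concretely, the guest-kernel options referred to here are roughly the following (a sketch of the symbol names as they appear in 3.0-era trees; verify against your own kernel's Kconfig):

```
# Guest .config fragment for PVonHVM on a 3.0-era kernel
CONFIG_XEN=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_PLATFORM_PCI=y      # the Xen platform PCI device driver
CONFIG_XEN_BLKDEV_FRONTEND=y   # PV block frontend (xvd* disks)
CONFIG_XEN_NETDEV_FRONTEND=y   # PV network frontend
CONFIG_HVC_XEN=y               # Xen virtual console
```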
On 04/02/2012 09:55 PM, Joseph Glanville wrote:
> Try using a 3.0 guest kernel with XEN_PLATFORM_PCI=y and all of the Xen
> device drivers. This will enable PVHVM and you should be right as rain for
> performance.

So what exactly do I do with Linux guests that I don't have kernel sources for? :-)
On 3 April 2012 06:03, Mario <mario@slackverse.org> wrote:
> So what exactly do I do with Linux guests that I don't have kernel
> sources for? :-)

Guests older than 2.6.32 could be a problem. Everything else should be
fine. Anything earlier than 2.6.32 you probably want to run in pure PV
mode.

Joseph.

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846
On 04/03/2012 06:39 AM, Joseph Glanville wrote:
> Guests older than 2.6.32 could be a problem. Everything else should be
> fine. Anything earlier than 2.6.32 you probably want to run in pure PV
> mode.

I tried 2.6.37 and 3.2.7, same results. OK, it has become clear that
nobody knows, so let's forget about this one. I will keep Xen for PV
guests only; everything else will go into VMware.

Thanks anyway,
mario
On 3 April 2012 17:27, Mario <mario@slackverse.org> wrote:
> I tried 2.6.37 and 3.2.7, same results. OK, it has become clear that
> nobody knows, so let's forget about this one. I will keep Xen for PV
> guests only; everything else will go into VMware.

Did you mean you had poor performance running 3.2.7-based HVM guests?
Are you sure you set xen_platform_pci=1 (or left it out of the config
entirely)? The 3.2.7 kernel, assuming it was built correctly, should
automatically enable the PV drivers, which should give great
performance. All current 3.0-and-later binary distributions ship kernels
that work perfectly in PVonHVM mode.

I highly recommend you use the images I provided to ascertain whether
you have a configuration problem or whether there is a bug/performance
regression in your dom0 kernel.

Joseph.

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846
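For reference, the relevant part of an HVM guest config with the platform device enabled looks something like the fragment below. The name, disk, and bridge values are purely illustrative; under Xen 4.1, xen_platform_pci defaults to on for HVM guests, so omitting the line has the same effect:

```
builder          = "hvm"
memory           = 1024
name             = "linux-hvm-guest"              # illustrative
disk             = [ "phy:/dev/vg0/guest1,hda,w" ] # illustrative backing device
vif              = [ "bridge=xenbr0" ]             # illustrative bridge
xen_platform_pci = 1   # expose the Xen platform PCI device so PV drivers can attach
```

With this device present and a PVHVM-capable guest kernel, the emulated IDE/NIC are unplugged at boot and I/O goes through the PV frontends instead of qemu.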