Fabiano Francesconi
2010-Apr-11 10:30 UTC
[Xen-users] Xen hard-disk performance regression?
Hello there,

I'm noticing a performance regression with the hard disk I pass to the domU (aquaria). The slowdown is very noticeable even when moving small video files (~350 MB).

In the configuration file of the domU, I pass the whole disk ('phy:/dev/sdb,xvdb,w') to the guest OS.

The distribution on both systems is Gentoo/Linux, x86 arch.

The hypervisor is the same in both scenarios (Linux xevelon 2.6.32-xen-r1 #1 SMP Sat Apr 10 13:37:02 CEST 2010 i686 Intel(R) Atom(TM) CPU 330 @ 1.60GHz GenuineIntel GNU/Linux).

The machine is an Atom, so it isn't capable of any VT technology, but so far it's a great machine and Xen is working just fine on it.

I'm attaching a few logs I've gathered.

Thank you

--
Fabiano Francesconi [GPG key: 0x81E53461]
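(For context: the disk line quoted above lives in the domU's xm configuration file. A minimal sketch with only this thread's values filled in and everything else omitted; the config file path is hypothetical:)

# e.g. in /etc/xen/aquaria -- relevant line only
disk = [ 'phy:/dev/sdb,xvdb,w' ]    # whole physical disk, seen as xvdb in the guest, writable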
Andrew Lyon
2010-Apr-11 13:55 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Sun, Apr 11, 2010 at 11:30 AM, Fabiano Francesconi <fabiano.francesconi@gmail.com> wrote:
> Hello there,
> I'm noticing a performance regression with the hard disk I pass to the
> domU (aquaria).
> [...]

Have you checked the performance of the disk in dom0?
Fabiano Francesconi
2010-Apr-11 15:39 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Sun, Apr 11, 2010 at 02:55:24PM +0100, Andrew Lyon wrote:
> Have you checked the performance of the disk in dom0?

It seems, actually, that something has changed, since performance appears to be back to normal. Hmm, I can't explain that... maybe I had some process keeping the CPU busy while I was running that hdparm.

I'll keep this thread up to date if something changes again.

Sorry for the bad report.

--
Fabiano Francesconi [GPG key: 0x81E53461]
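(For reference: the hdparm run mentioned above isn't shown in the thread. The usual quick read benchmark would be something along these lines, with the device name taken from the setup described in the first message:)

$ hdparm -tT /dev/sdb    # -T: cached reads, -t: buffered (raw) disk reads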
chris
2010-Apr-11 16:16 UTC
Re: [Xen-users] Xen hard-disk performance regression?
Just FYI, some Atom CPUs do actually have VT. I got my hands on a netbook with an Atom Z5-series CPU and was able to run a Windows guest under it.

http://ark.intel.com/ProductCollection.aspx?familyID=29035

- chris

On Sun, Apr 11, 2010 at 11:39 AM, Fabiano Francesconi <fabiano.francesconi@gmail.com> wrote:
> It seems, actually, that something has changed, since performance
> appears to be back to normal.
> [...]
Fabiano Francesconi
2010-Apr-11 17:00 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Sun, Apr 11, 2010 at 12:16:46PM -0400, chris wrote:
> Just FYI, some Atom CPUs do actually have VT. I got my hands on a
> netbook with an Atom Z5-series CPU and was able to run a Windows guest
> under it.
>
> http://ark.intel.com/ProductCollection.aspx?familyID=29035

It's an Intel(R) Atom(TM) CPU 330 @ 1.60GHz (http://ark.intel.com/Product.aspx?id=35641&processor=330&spec-codes=SLG9Y), so it does not have any VT.

--
Fabiano Francesconi [GPG key: 0x81E53461]
Fabiano Francesconi
2010-Apr-11 19:44 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Sun, Apr 11, 2010 at 02:55:24PM +0100, Andrew Lyon wrote:
> Have you checked the performance of the disk in dom0?

OK, confirmed. The problem is only visible when copying files, so it's not something related to the hard disk itself.

I sincerely don't know how to dig into this, but I have run some tests myself.

I've tried copying a single AVI file (350 MB) from one partition to another. Both partitions are on the same device (WDC WD10EADS-00M2B0) and both are formatted with the XFS filesystem.

Running 2.6.32-xen-r1 I get the following output:

real    1m35.001s
user    0m0.016s
sys     0m0.722s

Running 2.6.29-xen-r4 I get, instead, the following output:

real    0m20.689s
user    0m0.018s
sys     0m2.047s

How can there be such a difference? Is there some known regression in the XFS filesystem? I might try running a vanilla kernel instead of a xen-patched one.

Any suggestion will be very much appreciated.

--
Fabiano Francesconi [GPG key: 0x81E53461]
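(For reference: the exact copy command isn't shown in the thread. A test along these lines, with hypothetical mount points and file name, would produce `time` output like that quoted above:)

$ time cp /mnt/xfs-src/video.avi /mnt/xfs-dst/    # both mount points on the same XFS-formatted disk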
Olivier B.
2010-Apr-11 23:01 UTC
Re: [Xen-users] Xen hard-disk performance regression?
Hi,

Are you using LVM? Barrier support was implemented in LVM in recent versions, which can probably explain some performance regressions like this.

Olivier

On 11/04/2010 21:44, Fabiano Francesconi wrote:
> OK, confirmed. The problem is only visible when copying files, so it's
> not something related to the hard disk itself.
> [...]
Fabiano Francesconi
2010-Apr-11 23:06 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Mon, Apr 12, 2010 at 01:01:38AM +0200, Olivier B. wrote:
> Are you using LVM? Barrier support was implemented in LVM in recent
> versions, which can probably explain some performance regressions like
> this.

Yes, as a matter of fact, but not for those partitions. Those are simply 4 partitions:

/dev/xvdb1       1   71799  576725436   83  Linux
/dev/xvdb2   71800   97908  209720542+  83  Linux
/dev/xvdb3   97909  117490  157292415   83  Linux
/dev/xvdb4  117491  121601   33021607+  83  Linux

each one formatted with the XFS filesystem.

Another guess I made concerns the destination filesystem's fragmentation (it's at about 90%). But that shouldn't affect the performance of only one kernel. It is so strange.

--
Fabiano Francesconi [GPG key: 0x81E53461]
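(For reference: the ~90% figure above is the kind of number reported by xfs_db's fragmentation check; the partition below is just one of those listed, picked as an example:)

$ xfs_db -r -c frag /dev/xvdb1    # prints the filesystem's file fragmentation factor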
Fabiano Francesconi
2010-Apr-11 23:29 UTC
Re: [Xen-users] Xen hard-disk performance regression?
I've tried with an (almost) vanilla kernel (only the Gentoo patchset).

I have the same issue, so I guess it's either a kernel misconfiguration or a kernel regression, but then something must be wrong upstream too.

Since I've shared my whole case history with you, does any of you have a clue?

--
Fabiano Francesconi [GPG key: 0x81E53461]
Olivier B.
2010-Apr-11 23:41 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On 12/04/2010 01:29, Fabiano Francesconi wrote:
> I've tried with an (almost) vanilla kernel (only the Gentoo patchset).
>
> I have the same issue, so I guess it's either a kernel misconfiguration
> or a kernel regression, but then something must be wrong upstream too.
> [...]

Can you try some more synthetic tests with "dd"?

write latency:  dd oflag=dsync if=/dev/zero of=TESTFILE bs=4k count=10000
write speed:    dd conv=fdatasync if=/dev/zero of=TESTFILE bs=4k count=128000
read speed:     dd if=/dev/sda of=/dev/null bs=4k count=128000    (this one will be greatly affected by cache)

And can you try with different filesystems, ext3 and ext4 for example?

Olivier
Pasi Kärkkäinen
2010-Apr-12 06:28 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Sun, Apr 11, 2010 at 05:39:27PM +0200, Fabiano Francesconi wrote:
> It seems, actually, that something has changed, since performance
> appears to be back to normal.
> [...]
> I'll keep this thread up to date if something changes again.

Have you read this wiki page?:
http://wiki.xensource.com/xenwiki/XenBestPractices

Might help.

-- Pasi
Fabiano Francesconi
2010-Apr-12 08:47 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Mon, Apr 12, 2010 at 01:41:19AM +0200, Olivier B. wrote:
> Can you try some more synthetic tests with "dd"?
> [...]
> And can you try with different filesystems, ext3 and ext4 for example?

I've run the tests you pointed me to. The results are interesting, although I haven't found an explanation for this behaviour.

The dsync run is more than a minute slower on the .32 kernel. The same goes for fdatasync. This concerns the root hard disk (which is *not* the one I've been talking about until now).

The storage hard drive, instead, shows that the dsync transfer is _much_ faster on .32, but the fdatasync one isn't.

These results are very strange.

You'll find both log files attached here. I made them in a way that lets you easily (vim)diff them.

--
Fabiano Francesconi [GPG key: 0x81E53461]
Fabiano Francesconi
2010-Apr-12 09:12 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Mon, Apr 12, 2010 at 09:28:52AM +0300, Pasi Kärkkäinen wrote:
> Have you read this wiki page?:
> http://wiki.xensource.com/xenwiki/XenBestPractices
>
> Might help.

Thank you, I'd missed that.

I've made some changes (like disabling ballooning on dom0 and giving it a higher scheduler weight to make sure it gets enough CPU while performing I/O tasks), but nothing has changed at all.

Please take a look at my previous report. Another pair of eyes could be helpful.

--
Fabiano Francesconi [GPG key: 0x81E53461]
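(For reference: the changes described above correspond to the usual xend-era settings from that wiki page. The values below are illustrative, not necessarily the ones used in this thread:)

# xend-config.sxp: keep dom0's memory fixed
(enable-dom0-ballooning no)
(dom0-min-mem 512)

# give dom0 a higher credit-scheduler weight (the default is 256)
$ xm sched-credit -d Domain-0 -w 512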
Pasi Kärkkäinen
2010-Apr-12 11:10 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Mon, Apr 12, 2010 at 10:47:59AM +0200, Fabiano Francesconi wrote:
> I've run the tests you pointed me to. The results are interesting,
> although I haven't found an explanation for this behaviour.
> [...]
> You'll find both log files attached here. I made them in a way that
> lets you easily (vim)diff them.

How about oflag=direct transfers with dd?

Are both kernels based on the novell/sles/opensuse patches?

-- Pasi

[The two log files attached to the previous message, quoted here in full:]

2.6.29-xen-r4:

-- (guest's root partition, ext3, another hard drive) --

$ time dd oflag=dsync if=/dev/zero of=TESTFILE bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 477,071 s, 85,9 kB/s

real    7m57.077s
user    0m0.004s
sys     0m0.130s

$ time dd conv=fdatasync if=/dev/zero of=TESTFILE bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 28,7759 s, 18,2 MB/s

real    0m28.782s
user    0m0.190s
sys     0m4.361s

$ time dd if=/dev/xvda1 of=/dev/null bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 11,943 s, 43,9 MB/s

real    0m11.951s
user    0m0.014s
sys     0m0.329s

**************************************************************************
-- (storage hard drive, the one fully formatted with XFS) --

$ time dd oflag=dsync if=/dev/zero of=TESTFILE bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 431,506 s, 94,9 kB/s

real    7m11.548s
user    0m0.002s
sys     0m0.274s

$ time dd conv=fdatasync if=/dev/zero of=TESTFILE bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 21,6901 s, 24,2 MB/s

real    0m21.697s
user    0m0.186s
sys     0m3.140s

$ time dd if=/dev/xvdb of=/dev/null bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 4,884 s, 107 MB/s

real    0m4.934s
user    0m0.059s
sys     0m0.650s

2.6.32-xen-r1:

-- (guest's root partition, ext3, another hard drive) --

$ time dd oflag=dsync if=/dev/zero of=TESTFILE bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 550,039 s, 74,5 kB/s

real    9m10.046s
user    0m0.005s
sys     0m0.133s

$ time dd conv=fdatasync if=/dev/zero of=TESTFILE bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 41,5227 s, 12,6 MB/s

real    0m41.544s
user    0m0.177s
sys     0m2.993s

$ time dd if=/dev/xvda1 of=/dev/null bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 12,1582 s, 43,1 MB/s

real    0m12.165s
user    0m0.024s
sys     0m0.241s

**************************************************************************
-- (storage hard drive, the one fully formatted with XFS) --

$ time dd oflag=dsync if=/dev/zero of=TESTFILE bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 302,612 s, 135 kB/s

real    5m2.619s
user    0m0.003s
sys     0m0.117s

$ time dd conv=fdatasync if=/dev/zero of=TESTFILE bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 30,9351 s, 16,9 MB/s

real    0m30.973s
user    0m0.159s
sys     0m2.274s

$ time dd if=/dev/xvdb of=/dev/null bs=4k count=128000
128000+0 records in
128000+0 records out
524288000 bytes (524 MB) copied, 4,80787 s, 109 MB/s

real    0m4.993s
user    0m0.042s
sys     0m0.509s
Fabiano Francesconi
2010-Apr-12 13:21 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Mon, Apr 12, 2010 at 02:10:45PM +0300, Pasi Kärkkäinen wrote:
> How about oflag=direct transfers with dd?
>
> Are both kernels based on the novell/sles/opensuse patches?

Here's the output of `time dd oflag=direct if=/dev/zero of=TESTFILE bs=4k count=10000`:

2.6.29-xen-r4:

10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 150,401 s, 272 kB/s

real    2m30.407s
user    0m0.009s
sys     0m0.226s

2.6.32-xen-r1:

10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 150,063 s, 273 kB/s

real    2m30.097s
user    0m0.005s
sys     0m0.170s

The same timings? Hmm... interesting (although those seem quite slow).

Both kernels use the patchset provided at http://code.google.com/p/gentoo-xen-kernel/downloads/list. If I remember correctly, the .29 patchset was provided by openSUSE; the .32 one uses the patchset from SLE11, as reported at the link above.

--
Fabiano Francesconi [GPG key: 0x81E53461]
Fabiano Francesconi
2010-Apr-12 18:56 UTC
Re: [Xen-users] Xen hard-disk performance regression?
OK, the problem seems solved.

An ##xen IRC user, jamon, told me to add 'elevator=noop' as a kernel boot flag in order to switch the domU to the no-op I/O scheduler.

This is quite interesting since, as he states, "getting different guests all trying to read/write at the same time with different schedulers, or even different kernels (linux cf. windows cf. bsd etc.) isn't as good as letting the host do all the heavy lifting and not have to worry about different write caches, flushing etc. that different guests might be doing".

Actually, I was wondering about my situation: my whole disk is passed entirely to the domU, so this shouldn't be a problem at all.

After rebooting the machine, what I get is:

real    0m16.993s
user    0m0.009s
sys     0m1.143s

4 seconds faster than the .29 kernel! Hurray!

At this point I'm still curious why this behaviour changed so unexpectedly, and why this 'noop' changes things so dramatically even though my disk belongs entirely to a single domU.

--
Fabiano Francesconi [GPG key: 0x81E53461]
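(For reference, two usual ways to apply this; the `extra` line is standard xm domU-config syntax for passing kernel parameters, and the sysfs path assumes the xvdb device from this thread:)

# at boot, in the domU config file:
extra = "elevator=noop"

# or at runtime, per block device, from inside the guest:
$ echo noop > /sys/block/xvdb/queue/scheduler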
jpp@jppozzi.dyndns.org
2010-Apr-12 23:11 UTC
Re: [Xen-users] Xen hard-disk performance regression?
On Monday 12 April 2010 at 20:56 +0200, Fabiano Francesconi wrote:
> OK, the problem seems solved.
> An ##xen IRC user, jamon, told me to add 'elevator=noop' as a kernel
> boot flag in order to switch the domU to the no-op I/O scheduler.
> [...]
> After rebooting the machine, what I get is:
>
> real    0m16.993s
> user    0m0.009s
> sys     0m1.143s

Much work was done on I/O in the last few kernel versions, and the most recent ones are faster than the older ones, but some things changed (for example, the anticipatory scheduler went away in .33) and some tweaking has to be done.

Regards,
JPP