DOGUET Emmanuel
2009-Feb-12 10:47 UTC
[Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I have the same problem on my latest platforms (2 platforms on HP DL 360, 8 GB, quad core...) and only on them. The only difference from the faster platforms is the RAID volume / hard-drive layout (same hardware and software otherwise). Dom0 has the same speed on both the older and the new machines; only domU is at least 2x slower (tested with Oracle and dd on a 5 GB file).

New platform (slow in domU), RHEL 5.1 and RHEL 5.3:
  RAID 5 for dom0 and 2 domUs

"Older" platform, RHEL 5.1:
  RAID 1 for dom0
  RAID 5 for 2 domUs

I must try it on the future platform... but is it necessary to have a hard drive / RAID dedicated to dom0? Can someone confirm?

Best regards.

---
Emmanuel Doguet
Unix System Engineer / DSI
Société MANE
Tel: 04 93 09 70 00, ext. 1442

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Fajar A. Nugraha
2009-Feb-12 13:28 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
On Thu, Feb 12, 2009 at 5:47 PM, DOGUET Emmanuel <Emmanuel.DOGUET@mane.com> wrote:
> Dom0 have same speed on Older or New. only domU is x2 slower minmum. (with
> Oracle and dd ony 5Gb file)

You're not giving enough details.
- Do you use PV or HVM domU?
- What backend do you use for domU's disk? file:? tap:aio:? phy:?

I'm guessing you probably use tap:aio. Try using phy: (i.e. using a disk / partition / LVM volume as domU backend storage).

> I must try it on the future platform... but it's necessary to have
> hard-drive/RAID only for dom0?

If I understand your question correctly, the answer is yes. Dom0 should handle all redundancy (be it hardware or software RAID). You don't need RAID in the domU if it's already done in dom0.

Regards,

Fajar
DOGUET Emmanuel
2009-Feb-12 13:37 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
Oops, sorry!

We use only phy: with LVM. PV only (Linux on domU, Linux on dom0). The LVM is on hardware RAID.

About the RAID, my question was (my English is bad): is it better to have:

*case 1*
dom0 and domU on hard drive 1 (with HP RAID: c0d0)

or

*case 2*
dom0 on hard drive 1 (HP RAID: c0d0)
domU on hard drive 2 (HP RAID: c0d1)

I don't know if this is my problem, but the 2 platforms with slow I/O in domU use case 1. The others, which have good I/O, use case 2.

Best regards.
Fajar A. Nugraha
2009-Feb-12 14:02 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
On Thu, Feb 12, 2009 at 8:37 PM, DOGUET Emmanuel <Emmanuel.DOGUET@mane.com> wrote:
>
> Oops sorry!
>
> We use only phy: with LVM. PV only (Linux on domU,Linux form dom0).
> LVM is on hardware RAID.

That's better :) Now for more questions:
What kind of test did you run? How did you determine that "domU was 2x slower than dom0"?
How much memory did you assign to domU and dom0? Are other programs running? What were the results (how many seconds, how many MB/s, etc.)?

I've had good results so far, with domU's disk I/O performance similar or equal to dom0's. A simple

time dd if=/dev/zero of=test1G bs=1M count=1024

took about 5 seconds and gave me about 200 MB/s on an idle dom0 and domU. This is on IBM hardware RAID, 7x144GB RAID5 + 1 hot spare, 2.5" SAS disks. Both dom0 and domU have 512MB memory.

> For the RAID my question was (I'm bad in English):
>
> It's better to have :
>
> *case 1*
> Dom0 and DomU on hard-drive 1 (with HP raid: c0d0)
>
> Or
>
> *case 2*
> Dom0 on hard-drive 1 (if HP raid: c0d0)
> DomU on hard-drive 2 (if HP raid: c0d1)

Depending on how you use it, it might not matter :)
As a general rule of thumb, more disks should provide higher I/O throughput when set up properly. In general (like when all disks are the same, for general-purpose domUs) I'd simply put all available disks in a RAID5 (or multiple RAID5s for lots of disks) and put them all in a single VG.

Regards,

Fajar
DOGUET Emmanuel
2009-Feb-12 15:02 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>What kind of test did you run? How did you determine that "domU was 2x
>slower than dom0"?
>How much memory did you assign to domU and dom0? Are other programs
>running? What were the results (how many seconds, how many MBps, etc.)

With:
- Oracle (creating a tablespace: time to create, plus iostat), and
- dd if=/dev/zero of=TEST bs=4k count=1250000 (5 GB, to avoid the memory cache).

New platform with:
  dom0:  4 GB (quad core)
  domU1: 4 GB, 2 VCPUs
  domU2: 10 GB, 4 VCPUs

Also tried with only one domU. This problem occurs only on 2 platforms.

Example of a configuration with 2 RAID arrays (HP ML370, 32-bit):

  dom0: 5120000000 bytes (5.1 GB) copied, 139.492 seconds, 36.7 MB/s
  domU: 5120000000 bytes (5.1 GB) copied, 279.251 seconds, 18.3 MB/s

release            : 2.6.18-53.1.21.el5xen
version            : #1 SMP Wed May 7 09:10:58 EDT 2008
machine            : i686
nr_cpus            : 4
nr_nodes           : 1
sockets_per_node   : 2
cores_per_socket   : 1
threads_per_core   : 2
cpu_mhz            : 3051
hw_caps            : bfebfbff:00000000:00000000:00000080:00004400
total_memory       : 4863
free_memory        : 1
xen_major          : 3
xen_minor          : 1
xen_extra          : .0-53.1.21.el5
xen_caps           : xen-3.0-x86_32p
xen_pagesize       : 4096
platform_params    : virt_start=0xf5800000
xen_changeset      : unavailable
cc_compiler        : gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)
cc_compile_by      : brewbuilder
cc_compile_domain  : build.redhat.com
cc_compile_date    : Wed May 7 08:39:04 EDT 2008
xend_config_format : 2

Example of a configuration with 1 RAID array (HP DL360, 64-bit):

  dom0: 5120000000 bytes (5.1 GB) copied, 170.3 seconds, 30.1 MB/s
  domU: 5120000000 bytes (5.1 GB) copied, 666.184 seconds, 7.7 MB/s

release            : 2.6.18-128.el5xen
version            : #1 SMP Wed Dec 17 12:01:40 EST 2008
machine            : x86_64
nr_cpus            : 8
nr_nodes           : 1
sockets_per_node   : 2
cores_per_socket   : 4
threads_per_core   : 1
cpu_mhz            : 2666
hw_caps            : bfebfbff:20000800:00000000:00000140:000ce3bd:00000000:00000001
total_memory       : 18429
free_memory        : 0
node_to_cpu        : node0:0-7
xen_major          : 3
xen_minor          : 1
xen_extra          : .2-128.el5
xen_caps           : xen-3.0-x86_64 xen-3.0-x86_32p
xen_pagesize       : 4096
platform_params    : virt_start=0xffff800000000000
xen_changeset      : unavailable
cc_compiler        : gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)
cc_compile_by      : mockbuild
cc_compile_domain  : redhat.com
cc_compile_date    : Wed Dec 17 11:37:15 EST 2008
xend_config_format : 2

PS: I don't use virt-install; I generate the xmdomain.cfg myself. So, PV or HVM????

Bye bye.
DOGUET Emmanuel
2009-Feb-12 18:10 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I have another piece of information. I see this in the documentation:

"If starting a fully-virtualized domains (ie to run unmodified OS) there are also logs in /var/log/xen/qemu-dm*.log which can contain useful information."

And on our "slow" platforms we have this type of log (/var/log/xen/qemu-dm*.log), and these processes are running:

root 5671 0.0 0.1 75412 6324 ? Sl 16:33 0:00 /usr/lib64/xen/bin/qemu-dm -M xenpv -d 1 -domain-name dom-v1 -vnc 0.0.0.0:0 -vncunused
root 6559 0.0 0.1 75412 6328 ? Sl 16:33 0:00 /usr/lib64/xen/bin/qemu-dm -M xenpv -d 2 -domain-name dom-v2 -vnc 0.0.0.0:0 -vncunused

and not on the "fast" platform! Strange, no? The configuration file is the same...

-------------------------------------------
name = "dom-v1"
uuid = "7d8cdbe4-6728-48fc-92db-baef9c70d7fd"
maxmem = 4096
memory = 4096
vcpus = 2
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1" ]
disk = [ 'phy:/dev/rootvg/bdd-root,xvda1,w',
         'phy:/dev/rootvg/bdd-tmp,xvda2,w',
         'phy:/dev/rootvg/bdd-user,xvda3,w',
         'phy:/dev/rootvg/bdd-var,xvda4,w',
         'phy:/dev/rootvg/bdd-swap,xvda5,w',
         'phy:/dev/rootvg/bdd-oracle,xvda6,w',
         'phy:/dev/rootvg/bdd-data,xvda7,w',
         'phy:/dev/rootvg/bdd-backup,xvda8,w' ]
vif = [ "mac=00:22:64:A1:56:BF,bridge=xenbr0" ]
-------------------------------------------
Fajar A. Nugraha
2009-Feb-13 04:11 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
On Thu, Feb 12, 2009 at 10:02 PM, DOGUET Emmanuel <Emmanuel.DOGUET@mane.com> wrote:
> - dd if=/dev/zero of=TEST bs=4k count=1250000 (5Gb for avoid mem cache).

> dom0: 5120000000 bytes (5.1 GB) copied, 139.492 seconds, 36.7 MB/s
> domU 5120000000 bytes (5.1 GB) copied, 279.251 seconds, 18.3 MB/s

Here's what I get using "dd if=/dev/zero of=testfile bs=4k count=524288":

dom0: 2147483648 bytes (2.1 GB) copied, 14.5523 seconds, 148 MB/s
domU: 2147483648 bytes (2.1 GB) copied, 14.8254 seconds, 145 MB/s

Since I only allocate 512M to dom0 and domU, a 2G test file is enough to avoid memory-cache effects. As you can see, the performance is similar between dom0 and domU. Maybe you're using HVM? Try "uname -a" on your domU: if it shows a Xen kernel, then it's PV.

It might also be due to a difference in the disks used, or another I/O-intensive process running on your server, since I get over 140 MB/s while you only get 36 MB/s on dom0.

My point is: a PV domU should have I/O performance similar to dom0 when configured correctly (e.g. using LVM- or partition-backed storage). If there's a huge difference (like what you get), then maybe the source of the problem is elsewhere, not in Xen.

Regards,

Fajar
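[Editorial note: the thread's dd runs rely on the file being much larger than RAM to defeat the page cache. A sketch of a cache-honest variant (assumes GNU dd, which RHEL 5 ships; conv=fdatasync and oflag=direct are standard coreutils flags):

```shell
# Fairer write benchmark than a plain "dd if=/dev/zero of=FILE":
# conv=fdatasync forces the data to disk before dd reports its rate,
# so the page cache cannot inflate the number even for a small file.
TESTFILE=$(mktemp /tmp/ddtest.XXXXXX)

dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1

# oflag=direct opens the file O_DIRECT and bypasses the page cache
# entirely; it needs aligned I/O and may be slower per request, but it
# shows what the underlying device actually sustains.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 oflag=direct 2>&1 | tail -n1

rm -f "$TESTFILE"
```

With either flag, a 64 MB file already gives a usable number, instead of the 5 GB file the thread uses to outrun the cache.]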
DOGUET Emmanuel
2009-Feb-13 06:13 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I do the 5 GB test because the Linux FS has a very good caching system... and the RAID controller too.

For my domU, I agree with you, but I can't find the problem. And what about the qemu-dm... it seems to be an HVM functionality?

dom0:
Linux host33 2.6.18-128.el5xen #1 SMP Wed Dec 17 12:01:40 EST 2008 x86_64 x86_64 x86_64 GNU/Linux

domU:
Linux host33-v1 2.6.18-128.el5xen #1 SMP Wed Dec 17 12:01:40 EST 2008 x86_64 x86_64 x86_64 GNU/Linux

Bye
DOGUET Emmanuel
2009-Feb-13 09:00 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I have mounted the domU partition on dom0 for testing, and it's OK. But the same partition on the domU side is slow.

Strange.
Javier Guerra Giraldez
2009-Feb-13 12:58 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
DOGUET Emmanuel wrote:
> and this processes is running :
>
> root 5671 0.0 0.1 75412 6324 ? Sl 16:33 0:00
> /usr/lib64/xen/bin/qemu-dm -M xenpv -d 1 -domain-name dom-v1 -vnc 0.0.0.0:0
> -vncunused
> root 6559 0.0 0.1 75412 6328 ? Sl 16:33 0:00
> /usr/lib64/xen/bin/qemu-dm -M xenpv -d 2 -domain-name dom-v2 -vnc 0.0.0.0:0
> -vncunused

obviously your DomUs are HVM. no wonder they're far slower than Dom0.

now, why are they HVM?... that's the question.

--
Javier
DOGUET Emmanuel
2009-Feb-13 13:05 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
After some research: there is no "/usr/lib64/xen/bin/qemu-dm" process if I comment out this line in my conf:

#vfb = [ "type=vnc,vncunused=1" ]

So I'm really in PV mode; qemu-dm was only there to provide the VNC framebuffer. A lot of people have less "bandwidth" in domU... and it's not easy to debug.
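[Editorial note: this qemu-dm confusion is common. With "-M xenpv" qemu-dm only backs the PV framebuffer (the vfb line), so its presence in dom0 does not make a guest HVM. A sketch of a check that can be run inside the guest (paths assume a Linux kernel with Xen support; treat it as illustrative):

```shell
# Distinguish a PV guest from an HVM guest (or bare metal) from inside.
# A PV guest boots a xen kernel and exposes the xenbus interfaces;
# an HVM guest running an unmodified kernel sees neither.
if [ -d /proc/xen ] || [ -e /sys/hypervisor/type ]; then
    if uname -r | grep -q xen; then
        echo "PV guest (xen kernel: $(uname -r))"
    else
        echo "running under Xen but without a xen kernel: likely HVM"
    fi
else
    echo "no Xen interfaces visible: bare metal or fully emulated HVM"
fi
```

On the RHEL 5.3 domU above, which runs 2.6.18-128.el5xen, this would report a PV guest, consistent with Emmanuel's conclusion.]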
Fajar A. Nugraha
2009-Feb-14 05:22 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
2009/2/13 DOGUET Emmanuel <Emmanuel.DOGUET@mane.com>:
>
> I have mount domU partition on dom0 for testing and it's OK.
> But same partiton on domU side is slow.
>
> Strange.

Strange indeed. At least that rules out hardware problems :)
Could you try with a "simple" domU?
- 1 vcpu
- 512 MB memory
- only one vbd

This should isolate whether or not the problem is in your particular domU (e.g. some config parameter actually making the domU slower). Your config file should have only a few lines, like this:

memory = "512"
vcpus = 1
disk = [ 'phy:/dev/rootvg/bdd-root,xvda1,w' ]
vif = [ "mac=00:22:64:A1:56:BF,bridge=xenbr0" ]
vfb = [ 'type=vnc' ]
bootloader = "/usr/bin/pygrub"

Regards,

Fajar
Joris Dobbelsteen
2009-Feb-14 13:47 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
Fajar A. Nugraha wrote, On 14-02-09 06:22:
> Strange indeed. At least that ruled-out hardware problems :)
> Could try with a "simple" domU?
> - 1 vcpu
> - 512 M memory
> - only one vbd
>
> this should isolate whether or not the problem is on your particular
> domU (e.g. some config parameter actually make domU slower).

I have read somewhere that it could be caused by an interaction between LVM and software RAID. You should break up your test into smaller fractions:

* plain disk (no RAID or LVM)
* RAID only
* LVM only
* RAID + LVM

Maybe this isolates the issue a bit.

I'm also struggling with similar numbers for disk performance, but since I run all my network services on top of one Xen box (a personal system, not business), I have little intention of actually changing it now. It's running an outdated Gentoo installation (upgrade attempts broke Xen in my case).

My memory might have failed after a year, but I believe that without RAID or LVM the performance was equal in both Dom0 and DomU.

- Joris
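[Editorial note: Joris's layer-by-layer isolation can be scripted so every layer gets the identical benchmark. The target list here is hypothetical; substitute mount points backed by the plain disk, the MD array, the bare LV, and LVM-on-RAID. By default it only dry-runs against /tmp:

```shell
# Run the same write test against each storage layer and compare.
# Usage: sh layer-test.sh /mnt/plain /mnt/raid /mnt/lvm /mnt/raid-lvm
TARGETS="${*:-/tmp}"   # directories to test; defaults to /tmp

for dir in $TARGETS; do
    f="$dir/xen-io-test.$$"
    # Same 4k block size the thread uses; smaller count for illustration.
    # conv=fdatasync flushes the cache before dd prints its MB/s figure.
    result=$(dd if=/dev/zero of="$f" bs=4k count=16384 conv=fdatasync 2>&1 | tail -n1)
    echo "$dir: $result"
    rm -f "$f"
done
```

Running this once in dom0 and once in a domU, over the same four layers, would show exactly which layer the domU penalty appears at.]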
DOGUET Emmanuel
2009-Feb-24 13:21 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I have run another test on another server (DL 380), and same thing!

I always use this test:

dd if=/dev/zero of=TEST bs=4k count=1250000

(be careful with the memory cache)

TEST WITH 2 RAID 5 ARRAYS (system included on RAID 5, 3x146G + 3x146G)
---------------------------------------------------------------

dom0: 1 GB, 1 CPU, 2 RAID 5 arrays
  rootvg (c0d0p1): 4596207616 bytes (4.6 GB) copied, 158.284 seconds, 29.0 MB/s
  datavg (c0d1p1): 5120000000 bytes (5.1 GB) copied, 155.414 seconds, 32.9 MB/s

domU: 512M, 1 CPU, on system LVM/RAID5 (rootvg)
  5120000000 bytes (5.1 GB) copied, 576.923 seconds, 8.9 MB/s

domU: 512M, 1 CPU, on data LVM/RAID5 (datavg)
  5120000000 bytes (5.1 GB) copied, 582.611 seconds, 8.8 MB/s

domU: 512M, 1 CPU, on the same RAID without LVM
  5120000000 bytes (5.1 GB) copied, 808.957 seconds, 6.3 MB/s

TEST WITH RAID 0 (dom0 system on RAID 1)
---------------------------------------

dom0: 1 GB RAM, 1 CPU
  on system (RAID 1):
    3955544064 bytes (4.0 GB) copied, 57.4314 seconds, 68.9 MB/s
  on direct HD (RAID 0 via cciss), no LVM:
    5120000000 bytes (5.1 GB) copied, 62.5497 seconds, 81.9 MB/s

dom0: 4 GB RAM, 4 CPUs

domU: 4 GB, 4 CPUs, on direct HD (RAID 0), no LVM
  5120000000 bytes (5.1 GB) copied, 51.2684 seconds, 99.9 MB/s

domU: 4 GB, 4 CPUs, same HD but with one LVM on it
  5120000000 bytes (5.1 GB) copied, 51.5937 seconds, 99.2 MB/s

TEST WITH ONLY ONE RAID 5 (6 x 146G)
------------------------------------

dom0: 1024MB, 1 CPU (RHEL 5.3)
  5120000000 bytes (5.1 GB) copied, 231.113 seconds, 22.2 MB/s

512MB, 1 CPU
  5120000000 bytes (5.1 GB) copied, 1039.42 seconds, 4.9 MB/s

512MB, 1 CPU, only 1 VBD [LVM] (root, no swap)
  (too slow... stopped :P)
  4035112960 bytes (4.0 GB) copied, 702.883 seconds, 5.7 MB/s

512MB, 1 CPU, on a file (root, no swap)
  1822666752 bytes (1.8 GB) copied, 2753.91 seconds, 662 kB/s

4GB, 2 CPUs
  5120000000 bytes (5.1 GB) copied, 698.681 seconds, 7.3 MB/s
DOGUET Emmanuel
2009-Feb-24 16:50 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
To summarize:

RAID 0:  dom0: 80 MB/s   domU: 56 MB/s   loss: 30%
RAID 1:  dom0: 80 MB/s   domU: 55 MB/s   loss: 32%
RAID 5:  dom0: 30 MB/s   domU: 9 MB/s    loss: 70%

So the loss seems to be "exponential"?
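[Editorial note: the loss percentages in the summary can be recomputed directly from the reported MB/s figures; the RAID 1 case actually comes out to about 31%. A minimal awk sketch:

```shell
# Recompute domU-vs-dom0 loss per RAID level from the thread's numbers.
# Columns: level, dom0 MB/s, domU MB/s, loss = (1 - domU/dom0) * 100.
awk 'BEGIN {
    print "RAID0", 80, 56, (1 - 56/80) * 100 "%"
    print "RAID1", 80, 55, (1 - 55/80) * 100 "%"
    print "RAID5", 30,  9, (1 - 9/30)  * 100 "%"
}'
# Prints:
# RAID0 80 56 30%
# RAID1 80 55 31.25%
# RAID5 30 9 70%
```

The jump from ~30% on RAID 0/1 to 70% on RAID 5 is the pattern the thread calls "exponential": the relative domU penalty grows sharply once parity RAID is in the write path.]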
DOGUET Emmanuel
2009-Feb-25 17:03 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I have finished my tests on three servers. On each we lose some bandwidth with Xen; across our 10 platforms we always lose some, which I think is normal. Perhaps it is only the benchmark method that differs?

I have benchmarked (write only) hardware versus software RAID under Xen (see attachment). Linux software RAID is always faster than the HP RAID. I must also try the "512MB + write cache" option of the HP controller.

So my problem seems to be here.

-------------------------
HP DL 380
Quad core
-------------------------
Test: dd if=/dev/zero of=TEST bs=4k count=1250000

                        Hardware    Hardware    Software    Software
                        RAID 5      RAID 5      RAID 5      RAID 5
                        4 x 146G    8 x 146G    4 x 146G    8 x 146G
dom0  (1024MB, 1 cpu)   32MB/s      22MB/s      88MB/s (*)  144MB/s (*)
domU  ( 512MB, 1 cpu)    8MB/s       5MB/s      34MB/s       31MB/s
domU  (4096MB, 2 cpu)     --         7MB/s      51MB/s       35MB/s

*: I don't understand this difference.

Does this performance look right to you?

Best regards.

>-----Original Message-----
>From: DOGUET Emmanuel
>Sent: Tuesday 24 February 2009 17:50
>To: DOGUET Emmanuel; Fajar A. Nugraha
>Cc: xen-users; Joris Dobbelsteen
>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>
>To summarize:
>
>on RAID 0:   dom0: 80MB/s   domU: 56MB/s   loss: 30%
>on RAID 1:   dom0: 80MB/s   domU: 55MB/s   loss: 32%
>on RAID 5:   dom0: 30MB/s   domU:  9MB/s   loss: 70%
>
>So the loss seems to be "exponential"?
>
>>-----Original Message-----
>>From: xen-users-bounces@lists.xensource.com
>>[mailto:xen-users-bounces@lists.xensource.com] On behalf of DOGUET Emmanuel
>>Sent: Tuesday 24 February 2009 14:22
>>To: Fajar A. Nugraha
>>Cc: xen-users; Joris Dobbelsteen
>>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>>
>>I have made another test on another server (DL 380), and it is the same thing!
>>
>>I always use this test:
>>
>>    dd if=/dev/zero of=TEST bs=4k count=1250000
>>
>>(be careful with the memory cache)
>>
>>TEST WITH 2 RAID 5 ARRAYS (system also on RAID 5, 3x146G + 3x146G)
>>------------------------------------------------------------------
>>
>>dom0: 1GB, 1 CPU, 2 RAID 5 arrays
>>
>>    rootvg (c0d0p1): 4596207616 bytes (4.6 GB) copied, 158.284 seconds, 29.0 MB/s
>>    datavg (c0d1p1): 5120000000 bytes (5.1 GB) copied, 155.414 seconds, 32.9 MB/s
>>
>>domU: 512M, 1 CPU, on system LVM/RAID 5 (rootvg)
>>
>>    5120000000 bytes (5.1 GB) copied, 576.923 seconds, 8.9 MB/s
>>
>>domU: 512M, 1 CPU, on data LVM/RAID 5 (datavg)
>>
>>    5120000000 bytes (5.1 GB) copied, 582.611 seconds, 8.8 MB/s
>>
>>domU: 512M, 1 CPU, on the same RAID without LVM
>>
>>    5120000000 bytes (5.1 GB) copied, 808.957 seconds, 6.3 MB/s
>>
>>TEST WITH RAID 0 (dom0 system on RAID 1)
>>----------------------------------------
>>
>>dom0: 1GB RAM, 1 CPU
>>
>>    on system (RAID 1):
>>    3955544064 bytes (4.0 GB) copied, 57.4314 seconds, 68.9 MB/s
>>
>>    on direct HD (RAID 0 on cciss), no LVM:
>>    5120000000 bytes (5.1 GB) copied, 62.5497 seconds, 81.9 MB/s
>>
>>dom0: 4GB RAM, 4 CPU
>>
>>domU: 4GB, 4 CPU
>>
>>    on direct HD (RAID 0), no LVM:
>>    5120000000 bytes (5.1 GB) copied, 51.2684 seconds, 99.9 MB/s
>>
>>domU: 4GB, 4 CPU, same HD but with one LVM volume on it
>>
>>    5120000000 bytes (5.1 GB) copied, 51.5937 seconds, 99.2 MB/s
>>
>>TEST WITH ONLY ONE RAID 5 (6 x 146G)
>>------------------------------------
>>
>>dom0: 1024MB, 1 CPU (RHEL 5.3)
>>
>>    5120000000 bytes (5.1 GB) copied, 231.113 seconds, 22.2 MB/s
>>
>>domU: 512MB, 1 CPU
>>
>>    5120000000 bytes (5.1 GB) copied, 1039.42 seconds, 4.9 MB/s
>>
>>domU: 512MB, 1 CPU, only one VBD [LVM] (root, no swap)
>>
>>    (too slow... stopped :P)
>>    4035112960 bytes (4.0 GB) copied, 702.883 seconds, 5.7 MB/s
>>
>>domU: 512MB, 1 CPU, on a file (root, no swap)
>>
>>    1822666752 bytes (1.8 GB) copied, 2753.91 seconds, 662 kB/s
>>
>>domU: 4GB, 2 CPU
>>
>>    5120000000 bytes (5.1 GB) copied, 698.681 seconds, 7.3 MB/s
>>
>>>-----Original Message-----
>>>From: Fajar A. Nugraha [mailto:fajar@fajar.net]
>>>Sent: Saturday 14 February 2009 06:23
>>>To: DOGUET Emmanuel
>>>Cc: xen-users
>>>Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>>>
>>>2009/2/13 DOGUET Emmanuel <Emmanuel.DOGUET@mane.com>:
>>>>
>>>> I have mounted the domU partition on dom0 for testing and it is OK.
>>>> But the same partition on the domU side is slow.
>>>>
>>>> Strange.
>>>
>>>Strange indeed. At least that ruled out hardware problems :)
>>>Could you try with a "simple" domU?
>>>- 1 vcpu
>>>- 512 M memory
>>>- only one vbd
>>>
>>>this should isolate whether or not the problem is on your particular
>>>domU (e.g. some config parameter actually making the domU slower).
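The dd figures in this thread are sensitive to the page cache: without a sync flag, dd can report the speed of memory rather than of the disk, which is the "be careful with memory cache" caveat above. A minimal sketch of the same write test in buffered, flushed, and cache-bypassing forms (the file name is a placeholder and the count is kept small for illustration; scale it well past RAM for a real benchmark):

```shell
#!/bin/sh
# Three variants of the dd write test; only the last two time the actual disk.
FILE=./dd-test.bin

# 1) Buffered write: on a small file this mostly measures the page cache.
dd if=/dev/zero of="$FILE" bs=4k count=1000 2>&1 | tail -n 1

# 2) conv=fdatasync: flush file data to disk before dd reports its timing.
dd if=/dev/zero of="$FILE" bs=4k count=1000 conv=fdatasync 2>&1 | tail -n 1

# 3) oflag=direct: bypass the page cache entirely (not supported on every
#    filesystem; dd prints an error there instead of a throughput line).
dd if=/dev/zero of="$FILE" bs=4k count=1000 oflag=direct 2>&1 | tail -n 1
```

With a count large enough to exceed RAM, variant 1 converges toward variant 2; the gap between them on small files is the cache effect being warned about.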
Olivier B.
2009-Feb-25 19:47 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I did some tests too, on some servers ( time dd if=/dev/zero of=TEST bs=4k count=512000 ):

First server: hardware RAID 1 with a 32-bit PCI 3ware card.
dom0 (ext3) : 39 MB/s
domU (ext3) : 1.4 MB/s !!!
domU (ext4) : 40 MB/s

Second server: software RAID 1 with 2 SATA disks.
dom0 (ext3) : 96 MB/s
domU (ext3) : 91 MB/s
domU (ext4) : 94 MB/s

Note: I use a vanilla kernel in the domU.

So:
- I see a big write problem from the domU on hardware RAID
- the writeback feature of ext4 seems to "erase" this problem

Olivier

DOGUET Emmanuel wrote:
> I have finished my tests on three servers. On each we lose some bandwidth with Xen; across our 10 platforms we always lose some, which I think is normal. Perhaps it is only the benchmark method that differs?
>
> Linux software RAID is always faster than the HP RAID. I must also try the "512MB + write cache" option of the HP controller.
> [...]
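Olivier's ext3-versus-ext4 gap suggests the variable may be the data journaling mode rather than the filesystem itself (ext3 defaults to data=ordered, while ext4 relies on delayed allocation and writeback). A quick read-only way to see which data mode each mounted ext filesystem is actually running with, offered here as a suggestion rather than something from the thread (output naturally depends on the machine):

```shell
#!/bin/sh
# List mountpoint and mount options for every ext3/ext4 filesystem in
# /proc/mounts; an explicit "data=ordered" or "data=writeback" option shows
# the journal data mode (absent means the compiled-in default is in use).
awk '$3 ~ /^ext[34]$/ { print $2 ": " $4 }' /proc/mounts
```

To test the hypothesis directly, one could remount the domU test filesystem as ext3 with -o data=writeback and rerun the same dd command, comparing against the ext4 figure.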
Brian Krusic
2009-Feb-25 20:09 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I've had the same experience using an onboard Intel RAID 1 (mirror) setup.

So I disable RAID and use ext3/xfs to get the expected read/write times. I just do a daily dump/restore (xfsdump/xfsrestore) to my second disk to achieve some form of DR.

I thought it was my motherboard/RAID, so I never bothered posting.

- Brian

On Feb 25, 2009, at 11:47 AM, Olivier B. wrote:

> I did some tests too, on some servers ( time dd if=/dev/zero of=TEST bs=4k count=512000 )
>
> First server: hardware RAID 1 with a 32-bit PCI 3ware card.
> dom0 (ext3) : 39 MB/s
> domU (ext3) : 1.4 MB/s !!!
> domU (ext4) : 40 MB/s
>
> Second server: software RAID 1 with 2 SATA disks.
> dom0 (ext3) : 96 MB/s
> domU (ext3) : 91 MB/s
> domU (ext4) : 94 MB/s
>
> Note: I use a vanilla kernel in the domU.
>
> So:
> - I see a big write problem from the domU on hardware RAID
> - the writeback feature of ext4 seems to "erase" this problem
>
> Olivier
> [...]
DOGUET Emmanuel
2009-Feb-26 08:57 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
If you see differences between ext3 and ext4, could that be due to the FS cache? In your tests, how much memory do dom0 and the domU have?

Our write problem with hardware RAID is very strange :/

For my tests, I use the "standard" Red Hat kernel.

Bye.

>-----Original Message-----
>From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On behalf of Olivier B.
>Sent: Wednesday 25 February 2009 20:47
>Cc: xen-users
>Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>
>I did some tests too, on some servers ( time dd if=/dev/zero of=TEST bs=4k count=512000 )
>
>First server: hardware RAID 1 with a 32-bit PCI 3ware card.
>dom0 (ext3) : 39 MB/s
>domU (ext3) : 1.4 MB/s !!!
>domU (ext4) : 40 MB/s
>
>Second server: software RAID 1 with 2 SATA disks.
>dom0 (ext3) : 96 MB/s
>domU (ext3) : 91 MB/s
>domU (ext4) : 94 MB/s
>
>Note: I use a vanilla kernel in the domU.
>
>So:
>- I see a big write problem from the domU on hardware RAID
>- the writeback feature of ext4 seems to "erase" this problem
>
>Olivier
>[...]
Olivier B.
2009-Feb-26 09:22 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
To be sure, I redid all the tests, adding "conv=fdatasync" to dd:

    dd conv=fdatasync if=/dev/zero of=TEST bs=4k count=128000

The results don't really change.

For memory, all my domUs have 512 MB. On dom0, I checked with 200 MB and with 4 GB of free memory; this doesn't change the results. I adjusted the dd count parameter to create a 512 MB or an 8 GB file, with similar results (there is a small variation of course, about 10%).

I also checked with the Debian Lenny Xen kernel in the domU (so without ext4), and got the same results too.

I really don't think it's an FS issue; but I suppose the writeback feature of ext4 avoids these write problems (I didn't try to disable it).

Olivier

DOGUET Emmanuel wrote:
> If you see differences between ext3 and ext4, could that be due to the FS cache? In your tests, how much memory do dom0 and the domU have?
>
> Our write problem with hardware RAID is very strange :/
>
> For my tests, I use the "standard" Red Hat kernel.
>
> Bye.
> [...]
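Comparing many runs like these by eye is error-prone; dd's final status line can be reduced to just the throughput figure. A small helper along those lines (the function name and temp file path are made up for the example):

```shell
#!/bin/sh
# bench_write FILE COUNT: write COUNT 4 KiB blocks with conv=fdatasync and
# print only dd's reported throughput (the last two fields of its status
# line, e.g. "8.9 MB/s"), then clean up the test file.
bench_write() {
    dd if=/dev/zero of="$1" bs=4k count="$2" conv=fdatasync 2>&1 \
        | awk 'END { print $(NF-1), $NF }'
    rm -f "$1"
}

# Example run: a small 4 MB write.
bench_write ./bench.tmp 1000
```

Run once in dom0 and once in the domU against the same backing store, the two printed figures give the loss percentage directly.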
Joris Dobbelsteen
2009-Feb-26 13:42 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I don't have any experience with tuning such a system, but I can make a few deductions:

1) Hardware RAID does not scale with the number of spindles, which is rather strange. Performance is also rather low, which makes me wonder whether this is also the case without Xen (plain Linux only)? Either that, or your configuration doesn't work out (cache might indeed help).
2) Software RAID scales with the number of spindles, which seems OK.
3) domU speeds have no consistent relation to the speeds attained in dom0...

The question is why... The only reasonable way to figure that out seems to be doing traces in the kernel. You need an expert who has a clue what exactly is going on under the hood, especially since lots and lots of software layers are stacked. The results point to some kind of feature interaction that the software does not like. What the system adds is communication between domains, using some kind of buffering and probably copying. On top of that sits an entire Linux I/O scheduler, and more.

If you can spend the time, it might be interesting to see whether Ubuntu 8.04 LTS or Debian 5.0 do any better, as these have a newer (at least different) kernel than RHEL. There was recently an announcement of a Debian 5.0-based Xen LiveCD that might work(tm). You can also try a different domU first, which is probably a lot easier. That way we can perhaps isolate the problem domain a bit more.

- Joris

________________________________

From: DOGUET Emmanuel [mailto:Emmanuel.DOGUET@MANE.com]
Sent: Wed 25-Feb-2009 18:03
To: DOGUET Emmanuel; Fajar A. Nugraha
Cc: xen-users; Joris Dobbelsteen
Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow

I have finished my tests on 3 servers. On each of them we lose some bandwidth with XEN. Across our 10 platforms we always lose some bandwidth; I think that's normal. Maybe it's just the benchmark method that differs?
I have run a write-only benchmark comparing hardware and software RAID under XEN (see attachment).

Linux software RAID is always faster than the HP RAID. I still have to try the "512MB + write cache" option for the HP RAID.

So my problem seems to be here.

-------------------------
HP DL 380
Quad core
-------------------------
Test: dd if=/dev/zero of=TEST bs=4k count=1250000

                       Hardware   Hardware   Software   Software
                       RAID 5     RAID 5     RAID 5     RAID 5
                       4 x 146G   8 x 146G   4 x 146G   8 x 146G
dom0 (1024MB, 1 cpu)   32MB       22MB        88MB (*)  144MB (*)
domU ( 512MB, 1 cpu)    8MB        5MB        34MB       31MB
domU (4096MB, 2 cpu)    --         7MB        51MB       35MB

*: I don't understand this difference.

Does this performance look right to you?

Best regards.

>-----Original message-----
>From: DOGUET Emmanuel
>Sent: Tuesday 24 February 2009 17:50
>To: DOGUET Emmanuel; Fajar A. Nugraha
>Cc: xen-users; Joris Dobbelsteen
>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>
>To summarize:
>
>on RAID 0
>
>    dom0: 80MB    domU: 56MB    Loss: 30%
>
>on RAID 1
>
>    dom0: 80MB    domU: 55MB    Loss: 32%
>
>on RAID 5
>
>    dom0: 30MB    domU: 9MB     Loss: 70%
>
>So the loss seems to be "exponential"?
>
>
>>-----Original message-----
>>From: xen-users-bounces@lists.xensource.com
>>[mailto:xen-users-bounces@lists.xensource.com] On behalf of
>>DOGUET Emmanuel
>>Sent: Tuesday 24 February 2009 14:22
>>To: Fajar A. Nugraha
>>Cc: xen-users; Joris Dobbelsteen
>>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>>
>>I have made another test on another server (DL 380).
>>
>>And same thing!
>>
>>I always use this test:
>>
>>dd if=/dev/zero of=TEST bs=4k count=1250000
>>
>>(be careful with the memory cache)
>>
>>
>>TEST WITH 2 RAID 5 (including system on RAID 5, 3x146G + 3x146G)
>>---------------------------------------------------------------
>>
>>dom0: 1GB, 1 CPU, 2 RAID 5
>>
>>    rootvg (c0d0p1): 4596207616 bytes (4.6 GB) copied, 158.284 seconds, 29.0 MB/s
>>    datavg (c0d1p1): 5120000000 bytes (5.1 GB) copied, 155.414 seconds, 32.9 MB/s
>>
>>domU: 512M, 1 CPU, on system LVM/RAID5 (rootvg)
>>
>>    5120000000 bytes (5.1 GB) copied, 576.923 seconds, 8.9 MB/s
>>
>>domU: 512M, 1 CPU, on DATA LVM/RAID5 (datavg)
>>
>>    5120000000 bytes (5.1 GB) copied, 582.611 seconds, 8.8 MB/s
>>
>>domU: 512M, 1 CPU, on the same RAID without LVM
>>
>>    5120000000 bytes (5.1 GB) copied, 808.957 seconds, 6.3 MB/s
>>
>>
>>TEST WITH RAID 0 (dom0 system on RAID 1)
>>----------------------------------------
>>
>>dom0: 1GB RAM, 1 CPU
>>
>>    on system (RAID 1):
>>    3955544064 bytes (4.0 GB) copied, 57.4314 seconds, 68.9 MB/s
>>
>>    on direct HD (RAID 0 on cciss), no LVM:
>>    5120000000 bytes (5.1 GB) copied, 62.5497 seconds, 81.9 MB/s
>>
>>dom0: 4GB RAM, 4 CPU
>>
>>domU: 4GB, 4 CPU
>>
>>    on direct HD (RAID 0), no LVM:
>>    5120000000 bytes (5.1 GB) copied, 51.2684 seconds, 99.9 MB/s
>>
>>domU: 4GB, 4 CPU, same HD but with ONE LVM on it
>>
>>    5120000000 bytes (5.1 GB) copied, 51.5937 seconds, 99.2 MB/s
>>
>>
>>TEST with only ONE RAID 5 (6 x 146G)
>>------------------------------------
>>
>>dom0: 1024MB, 1 CPU (RHEL 5.3)
>>
>>    5120000000 bytes (5.1 GB) copied, 231.113 seconds, 22.2 MB/s
>>
>>512MB, 1 CPU
>>
>>    5120000000 bytes (5.1 GB) copied, 1039.42 seconds, 4.9 MB/s
>>
>>512MB, 1 CPU, ONLY 1 VBD [LVM] (root, no swap)
>>
>>    (too slow... stopped :P)
>>    4035112960 bytes (4.0 GB) copied, 702.883 seconds, 5.7 MB/s
>>
>>512MB, 1 CPU, on a file (root, no swap)
>>
>>    1822666752 bytes (1.8 GB) copied, 2753.91 seconds, 662 kB/s
>>
>>4GB, 2 CPU
>>
>>    5120000000 bytes (5.1 GB) copied, 698.681 seconds, 7.3 MB/s
>>
>>
>>>-----Original message-----
>>>From: Fajar A. Nugraha [mailto:fajar@fajar.net]
>>>Sent: Saturday 14 February 2009 06:23
>>>To: DOGUET Emmanuel
>>>Cc: xen-users
>>>Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
>>>
>>>2009/2/13 DOGUET Emmanuel <Emmanuel.DOGUET@mane.com>:
>>>>
>>>> I have mounted the domU partition on dom0 for testing and it's OK.
>>>> But the same partition on the domU side is slow.
>>>>
>>>> Strange.
>>>
>>>Strange indeed. At least that ruled out hardware problems :)
>>>Could you try with a "simple" domU?
>>>- 1 vcpu
>>>- 512 M memory
>>>- only one vbd
>>>
>>>This should isolate whether or not the problem is in your particular
>>>domU (e.g. some config parameter actually making domU slower).
>>>
>>>Your config file should have only a few lines, like this:
>>>
>>>memory = "512"
>>>vcpus = 1
>>>disk = [ 'phy:/dev/rootvg/bdd-root,xvda1,w' ]
>>>vif = [ "mac=00:22:64:A1:56:BF,bridge=xenbr0" ]
>>>vfb = [ 'type=vnc' ]
>>>bootloader = "/usr/bin/pygrub"
>>>
>>>Regards,
>>>
>>>Fajar

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
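All the figures in this thread come from dd writing through the page cache (hence the "be careful with memory cache" caveat above). A cache-safe sketch of the same test, assuming a GNU coreutils dd new enough to support conv=fdatasync (coreutils 6.0+; an older dd can append a timed sync instead):

```shell
# Same sequential 4k write test as in the thread, but the reported rate
# only counts data that actually reached the disk: conv=fdatasync makes
# dd call fdatasync() on TEST before printing its summary line.
# Smaller count than the thread's 1250000, for a quicker illustration.
dd if=/dev/zero of=TEST bs=4k count=25000 conv=fdatasync
rm -f TEST
```

With bs=4k, per-syscall overhead also matters; the relative dom0/domU comparison in the thread still holds either way, but the absolute MB/s numbers are more trustworthy with fdatasync.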
DOGUET Emmanuel
2009-Feb-27 13:03 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I have received my battery pack and 512MB upgrade; now it's fine:

Hardware RAID 5
512M cache
Write cache (25/75)
8 x 146G

dom0 (1024MB, 1 cpu)    213 MB/s
domU ( 512MB, 1 cpu)    192 MB/s
domU (4096MB, 2 cpu)    249 MB/s

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Harper
2009-Feb-28 12:16 UTC
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
> I have received my battery pack and 512MB upgrade; now it's fine.

Thanks for following up.

James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Matthieu Patou
2009-Feb-28 12:32 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
On 02/27/2009 04:03 PM, DOGUET Emmanuel wrote:
> I have received my battery pack and 512MB upgrade; now it's fine:
>
> Hardware RAID 5
> 512M cache
> Write cache (25/75)
> 8 x 146G
>
> dom0 (1024MB, 1 cpu)    213 MB/s
> domU ( 512MB, 1 cpu)    192 MB/s
> domU (4096MB, 2 cpu)    249 MB/s

A lot of hardware RAID cards tend not to use the write cache when there is no battery, because they decide it's unsafe (which is not wrong, to my mind).

There is also a filesystem issue, i.e. xfs vs ext3: the former uses barriers (if the underlying device supports them, which rules out at least LVM). If you benchmark against xfs, it will be light-years away from ext3; remounting xfs with -o nobarrier gives you the performance back. (In short, barriers ensure that your metadata are *really* written to the disk before touching the real data, which journaled filesystems rely on.) Last year there were some articles on LWN about this, and about whether barriers should be turned on by default for ext3.

As a rule of thumb: always buy the battery if you intend to use the cache in your RAID controller.

Matthieu.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
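Matthieu's nobarrier point, expressed as a mount-table sketch (the device and mount point are hypothetical; only run xfs this way when a battery-backed controller cache makes it safe):

```
# /etc/fstab fragment: xfs with write barriers disabled.
# Trades crash safety for throughput -- acceptable only behind a
# battery-backed write cache, per the advice above.
/dev/datavg/bench  /data  xfs  defaults,nobarrier  0  2
```

The same effect can be had at runtime with `mount -o remount,nobarrier /data`, which is the quickest way to re-run the dd test with and without barriers for comparison.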
Olivier B.
2009-May-18 07:04 UTC
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
Mm... stupid me. I found where my problem was: I had copied the xen-tools example "partitions.d/sample-server", which mounts the primary partition in sync mode, so writes are really slow. If I remount the partition without that (with: defaults,noatime,errors=remount-ro), everything is fine and I get the same performance on dom0 and domU.

Olivier

Olivier B. wrote:
> I did some tests too on some servers (time dd if=/dev/zero of=TEST bs=4k count=512000):
>
> First server: hardware RAID 1 with a 32-bit PCI 3ware card.
> dom0 (ext3): 39 MB/s
> domU (ext3): 1.4 MB/s !!!
> domU (ext4): 40 MB/s
>
> Second server: software RAID 1 with 2 SATA disks.
> dom0 (ext3): 96 MB/s
> domU (ext3): 91 MB/s
> domU (ext4): 94 MB/s
>
> Note: I use a vanilla kernel on the domU.
>
> So:
> - I see a big write problem from domU on hardware RAID
> - the writeback feature of ext4 seems to "erase" this problem
>
> Olivier
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
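The fix Olivier describes comes down to a single mount option; a sketch of the two fstab variants (the device name is hypothetical, the options are the ones he quotes):

```
# Slow: the copied xen-tools sample mounts the root partition synchronously,
# so every write blocks until it hits the disk
/dev/xvda1  /  ext3  defaults,sync,errors=remount-ro      0  1

# Fast: normal asynchronous writes, as in his corrected setup
/dev/xvda1  /  ext3  defaults,noatime,errors=remount-ro   0  1
```

This also explains why his dom0 was unaffected: only the domU root filesystem was built from the sample-server template.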