I have a beefy machine (Intel dual quad-core, 16GB memory, 2 x GigE).

I have loaded RHEL5.1-xen on the hardware and have created two logical
systems, each with 4 CPUs, 7.5 GB memory, and 1 x GigE.

Following RHEL guidelines, I have it set up so that eth0->xenbr0 and
eth1->xenbr1. Each of the two RHEL5.1 guests uses one of the interfaces,
and this is verified at the switch by seeing the unique MAC addresses.

If I do a crude test from one guest over NFS:

  dd if=/dev/zero of=/nfs/test bs=32768 count=32768

this yields almost always 95-100MB/sec.

When I run two simultaneously, I cannot seem to get above 25MB/sec from
each. It starts off with a large burst, as if each could do 100MB/sec, but
within a couple of seconds it tapers off to 15-40MB/sec until the dd
finishes.

Things I have tried (applied on the host and the guests):

  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216

  net.ipv4.tcp_no_metrics_save = 1
  net.ipv4.tcp_moderate_rcvbuf = 1
  # recommended to increase this for 1000 BT or higher
  net.core.netdev_max_backlog = 2500

  sysctl -w net.ipv4.tcp_congestion_control=cubic

Any ideas?

--
--tmac
RedHat Certified Engineer #804006984323821 (RHEL4)
RedHat Certified Engineer #805007643429572 (RHEL5)
Principal Consultant, RABA Technologies
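A quick way to sanity-check the guest-to-bridge mapping and the sysctl
changes described above (a minimal sketch; the domain names guest-a and
guest-b are placeholders, and it assumes the standard bridge-utils and xm
tools shipped with RHEL5 Xen):

  # on the dom0: list the bridges and which vif backends are enslaved to each
  brctl show

  # per guest, confirm its vif and the bridge it is attached to
  xm network-list guest-a
  xm network-list guest-b

  # after editing /etc/sysctl.conf, reload the settings without a reboot
  sysctl -p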
Don't get me wrong, but my first thought was: what is the maximum expected
throughput of the NFS server? It would need to be connected to the switch
with at least 2 GBit/s to serve two dd's at ~100MB/s each.

Well, I assume both domU's are using the same NFS server.

Regards,

Stephan

tmac wrote:
> If I do a crude test from one guest over nfs,
> dd if=/dev/zero of=/nfs/test bs=32768 count=32768
>
> This yields almost always 95-100MB/sec
>
> When I run two simultaneously, I cannot seem to get above 25MB/sec from each.

--
Stephan Seitz
Senior System Administrator

netz-haut e.K.
multimediale kommunikation

zweierweg 22
97074 würzburg

fon: +49 931 2876247
fax: +49 931 2876248
web: www.netz-haut.de

registriergericht: amtsgericht würzburg, hra 5054
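For reference, the rough arithmetic behind that question (approximate
figures, ignoring protocol overhead):

  100 MB/s per stream   ~= 800 Mbit/s
  2 streams             ~= 1.6 Gbit/s
  one GigE link tops out at ~1 Gbit/s, i.e. roughly 110-118 MB/s of payload

So if the filer, or any single hop on the path to it, is reached over one
gigabit link, two ~100MB/s writers cannot both be satisfied at once.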
NetApp GX with two heads and 10GigE's. Measured at over 2 Gigabytes/sec!
It should easily handle 200MBytes/sec.

Network path:

  VirtHostA -GigE-> 4948-10G (port 1 ) -10GigE-> 6509 -10GigE-> NetApp
  VirtHostB -GigE-> 4948-10G (port 17) -10GigE-> 6509 -10GigE-> NetApp

On Dec 28, 2007 7:30 PM, Stephan Seitz <s.seitz@netz-haut.de> wrote:
> Don't get me wrong, but my first thought was: what is the maximum
> expected throughput of the NFS server? It should at least be connected
> with 2 GBit/s to the switch, to serve two dd's with each ~100MB/s.
>
> Well, I assume both domU's are using the same NFS server.

--
--tmac
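One way to rule out the network path itself is to measure raw TCP
throughput from both guests at the same time, with NFS out of the picture
(a sketch, assuming iperf is available and some test host on the filer
side of the 6509 can run the server end; "testhost" is a placeholder):

  # on a test host near the NetApp (not the filer itself)
  iperf -s

  # run these simultaneously, one from each guest
  iperf -c testhost -t 30     # from VirtHostA's guest
  iperf -c testhost -t 30     # from VirtHostB's guest

If both streams hold close to ~940 Mbit/s, the network and the Xen bridges
are fine and the bottleneck is on the NFS/storage side.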
tmac wrote:
> NetApp GX with two heads and 10GigE's.
> Measured at over 2 Gigabytes/sec!
> Should easily handle 200MBytes/sec

Ok, I see you're preferring *real* equipment ;)

I don't think the network stack of the domU's has a problem, as one dd
works. Since I'm pretty sure you have already checked that each domU is
connected to its own bridge, I would take a look at the domU config.

Have you tried pinning the VCPUs to dedicated cores? For a quick test, I
would reduce the domU's to three cores each and keep 2 in dom0, e.g.:

  domUa.cfg
    cpus  = '1-3'
    vcpus = '3'
    vif   = [ 'bridge=xenbr0,mac=...' ]

  domUb.cfg
    cpus  = '5-7'
    vcpus = '3'
    vif   = [ 'bridge=xenbr1,mac=...' ]

  xend-config.sxp
    (dom0-cpus 2)

or, temporarily, use xm vcpu-pin / xm vcpu-set.

I have found similar (MP dual-core and MP quad-core Xeon) systems
performing much better if each domU uses only cores located on the same
CPU. Without deeper knowledge about this, I assume it has to do with
better use of the caches.

Regards,

Stephan

--
Stephan Seitz
Senior System Administrator
netz-haut e.K.
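For the quick, non-persistent variant of the pinning suggested above (a
sketch; the domain names guest-a and guest-b are placeholders, and the
core numbers simply mirror the example configs):

  # shrink each guest to 3 VCPUs on the fly
  xm vcpu-set guest-a 3
  xm vcpu-set guest-b 3

  # pin guest A's VCPUs 0-2 to physical cores 1-3,
  # and guest B's VCPUs 0-2 to cores 5-7
  xm vcpu-pin guest-a 0 1
  xm vcpu-pin guest-a 1 2
  xm vcpu-pin guest-a 2 3
  xm vcpu-pin guest-b 0 5
  xm vcpu-pin guest-b 1 6
  xm vcpu-pin guest-b 2 7

  # verify the placement
  xm vcpu-list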
>> If I do a crude test from one guest over nfs,
>> dd if=/dev/zero of=/nfs/test bs=32768 count=32768
>>
>> This yields almost always 95-100MB/sec
>>
>> When I run two simultaneously, I cannot seem to get above 25MB/sec from each.
>> It starts off with a large burst like each can do 100MB/sec, but then
>> in a couple of seconds, tapers off to the 15-40MB/sec until the dd finishes.

IMHO this doesn't sound like a Xen issue... and you'd be well off to test
this without Xen in the picture, or at least from the dom0.

It sounds like you might be expecting too much from your NetApp... even it
is limited by what its hard drives can do. If the drives have to seek back
and forth between two big files, then getting 30-80 MB/sec aggregate is
not bad.

-Tom
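A simple way to take Xen out of the picture, along the lines suggested
above (a sketch; the filer hostname, export path, and mount point are
placeholders, and the mount options match what a RHEL5 NFSv3 client would
typically use):

  # on the dom0, or better on two separate non-virtualized clients
  mkdir -p /mnt/nfstest
  mount -t nfs -o vers=3,tcp,rsize=32768,wsize=32768 filer:/vol/test /mnt/nfstest

  # run the same crude write test, in parallel from both clients
  dd if=/dev/zero of=/mnt/nfstest/test.$(hostname) bs=32768 count=32768

If two bare-metal clients show the same drop, the limit is on the
NFS/storage side; if they hold ~100MB/s each, attention shifts back to the
Xen hosts.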
If you want to get gigabit performance on your domU (using HVM
virtualization), you MUST compile the Xen unmodified_drivers (in
particular netfront) and load those drivers as kernel modules on your
domU. Then you must change the guest machine's Xen config file to use
netfront instead of ioemu for the network interface.

I have written a page on how to do it, but it is written in Italian.
Anyway, if you follow the instructions you should understand by looking
at the bare commands:

https://calcolo.infn.it/wiki/doku.php?id=network_overbust_compilare_e_installare_il_kernel_module_con_il_supporto_netfront

Of course the Xen source code depends on the Xen version you are using on
your dom0. I was not satisfied with the Xen 3.0.2 used on RHEL5, so we
built RPMs for Xen 3.1.2 and are currently using those.

Rick

tmac wrote:
> When I run two simultaneously, I cannot seem to get above 25MB/sec from each.
> It starts off with a large burst like each can do 100MB/sec, but then
> in a couple of seconds, tapers off to the 15-40MB/sec until the dd finishes.
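A rough sketch of the guest-side change being described, under two
assumptions: that the PV network module built from unmodified_drivers is
named xen-vnif (as in the stock Xen tree), and that this xend version
accepts 'netfront' as a vif type; the MAC and bridge values are
placeholders:

  # inside the HVM guest, after installing the compiled modules
  modprobe xen-platform-pci
  modprobe xen-vnif

  # in the guest's xm config file, switch the NIC from emulated to PV
  # (assuming type=netfront is accepted by this xend):
  #   before:  vif = [ 'type=ioemu, mac=00:16:3e:xx:xx:xx, bridge=xenbr0' ]
  #   after:   vif = [ 'type=netfront, mac=00:16:3e:xx:xx:xx, bridge=xenbr0' ]

Note this applies only if the guests are HVM; PV guests already use
netfront.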
Ok, well I have it working....

The following NFS mount options:

  hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600

Here are the changes to the /etc/sysctl.conf file on the guests (for the
host, the last line, sunrpc, is not available, so remove it):

  net.core.netdev_max_backlog = 3000
  net.core.rmem_default = 256960
  net.core.rmem_max = 16777216
  net.core.wmem_default = 256960
  net.core.wmem_max = 16777216
  net.core.rmem_default = 65536
  net.core.wmem_default = 65536
  net.core.rmem_max = 8388608
  net.core.wmem_max = 8388608
  net.ipv4.tcp_rmem = 4096 87380 4194304
  net.ipv4.tcp_wmem = 4096 16384 4194304
  net.ipv4.tcp_mem = 4096 4096 4096
  sunrpc.tcp_slot_table_entries = 128

Also, add "/sbin/sysctl -p" as the first entry in /etc/init.d/netfs to
make sure that the settings get read before any NFS mounts take place.

For the record, I get 95-102MB/sec each with a simple dd.

--tmac

On Dec 30, 2007 7:11 AM, Riccardo Veraldi <Riccardo.Veraldi@cnaf.infn.it> wrote:
> if you want to get Gigabit performance on your domU (using HVM
> virtualization) you MUST compile the xen unmodified_drivers (in
> particular Netfront) and load those drivers as kernel modules on your
> domU.
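For completeness, those mount options in /etc/fstab form (a sketch; the
filer hostname, export, and mount point are placeholders, not the poster's
actual values):

  filer:/vol/test  /nfs  nfs  hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600  0 0

or as a one-off mount:

  mount -t nfs -o hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600 filer:/vol/test /nfs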
Most people seem to be assuming you are running PV guests. Are we correct
in thinking this is the case?

Cheers,
Mark

On Friday 28 December 2007, tmac wrote:
> I have loaded RHEL5.1-xen on the hardware and have created two logical
> systems: 4 cpus, 7.5 GB memory, 1 x GigE
>
> When I run two simultaneously, I cannot seem to get above 25MB/sec from
> each.

--
Dave: Just a question. What use is a unicycle with no seat? And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!