Hi,
I posted the original question to this list a few days ago but got no
reply, perhaps because my question was too open-ended.
I have tried various combinations and found that the biggest difference
to network performance came from the scheduler slice configuration.
If I set up an iperf test routed DomU -> DomU -> DomU then I get about 35 Mbits/s
using the default settings. If I set all the domains to a scheduler period
of 100ms and a slice of 10ms then the throughput leaps to about 100 Mbits/s.
Changing the slice value itself seems to make little difference, but all the
domains must be set to the same values to get the throughput improvement.
Can someone explain why this is?
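For reference, this is roughly how I applied the settings. A minimal sketch,
assuming Xen 3.0's SEDF argument order (domain, period, slice, latency,
extratime, weight, times in ms) and example domain IDs 0-3; the IDs are
placeholders, so check "xm list" for the real ones on your system:

```shell
# Apply the same SEDF period/slice to Dom0 and every DomU.
# Domain IDs 0-3 are assumptions for illustration; substitute your own
# (see "xm list"). Arguments: domain period slice latency extratime weight.
for dom in 0 1 2 3; do
    xm sched-sedf "$dom" 100 10 0 0 0
done
```

The key point is that every domain, including Dom0, gets the identical
period/slice pair, since mixing values seemed to lose the improvement.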
Also, I appreciate that the Celeron 2.8 GHz isn't the fastest processor
around (although it isn't slow by any means). If I want to run a system
with around 4 - 8 virtual machines, would I get a large boost in performance
if I went to a Pentium 4 or even a Pentium D? Is there a table or
performance summary somewhere for different configurations on different
processors, or a set of benchmark results? Even a hand-waving "rule of
thumb" or similar would be useful...
Thanks,
Roger
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-
> bounces@lists.xensource.com] On Behalf Of Roger Lucas
> Sent: 21 June 2006 21:03
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] Expected network throughput
>
> Hi,
>
> I have just started to work with Xen and have a question regarding the
> expected network throughput. Here is my configuration:
>
> Processor: 2.8 GHz Intel Celeron (Socket 775)
> Motherboard: Gigabyte 8I865GVMF-775
> Memory: 1.5 GB
> Basic system: Kubuntu 6.06 Dapper Drake
> Xen version: 3.02 (Latest 3.0 stable download)
>
> I get the following iperf results:
>
> Src Dest Throughput
> Dom0 Dom0 (127.0.0.1) 1.8 Gbits
> DomU DomU (127.0.0.1) 2.1 Gbits
> Dom0 DomU 125 Mbits
> DomU Dom0 80 Mbits
> DomU DomU 55 Mbits
>
> If I go DomU-DomU-DomU then I get about 35 Mbits. For all the above
> tests, the CPU load is 100%.
>
> I have experienced the TCP checksum corruption issue, so I have used the
> "ethtool -K iface tx off" fix. I have also run "xm sched-sedf 0 0 0 0 1 1"
> to help improve the performance (as recommended in one of the mailing list
> posts).
>
> With this class of processor and memory, I would have hoped that the
> network performance between DomU instances would be better. Is this an
> unreasonable hope, or would I get better performance moving to the
> 3.0-Testing release (and would this fix the TCP checksum corruption)?
>
> Thanks,
>
> Roger
>
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users