Ian Pratt
2005-Apr-04 23:30 UTC
RE: [Xen-devel] MPI benchmark performance gap between native linux and domU
> I did the following experiments to explore the MPI application execution
> performance on both native Linux machines and inside unprivileged Xen user
> domains. I used 8 machines with identical HW configurations (498.756 MHz
> dual CPU, 512MB memory, on a 10MB/sec LAN) and the Pallas MPI Benchmarks
> (PMB).
>
> The experiment results show that running the same MPI benchmark in user
> domains usually gives worse (sometimes very bad) performance compared with
> native Linux machines. The following are the results for the PMB SendRecv
> benchmark in both setups (table 1 and table 2 report throughput and latency
> respectively). As you may notice, SendRecv can achieve 14.9MB/sec
> throughput on native Linux machines but reaches at most 7.07MB/sec when
> running inside user domains. The latency results also show a big gap.
>
> I would appreciate your help if you have had a similar experience and are
> willing to share your insights.

Xen (or any kind of virtualization) is not particularly well suited to MPI-type applications, at least unless you're using InfiniBand or some other smart NIC that avoids having to use dom0 to do the I/O virtualization.

However, the results you are seeing are lower than I'd expect.

Are you running dom0 and the domU on the same CPU or on different CPUs? How does changing this affect the results?

Also, are you sure the MTU is the same in all cases?

Further, please can you repeat the experiments with just a dom0 running on each node.

Thanks,
Ian
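For reference, a minimal sketch of the kind of domU configuration file under discussion, assuming Xen 2.x conventions. Only the "cpu" line is the option the thread is about; the kernel path, domain name, and disk mapping are hypothetical placeholders.

# Hypothetical Xen 2.x-style domU config sketch (Python syntax). Only the
# "cpu" line is the option discussed in this thread; the kernel path, domain
# name, and disk mapping below are illustrative placeholders.
kernel = "/boot/vmlinuz-2.6-xenU"    # illustrative guest kernel path
memory = 256                         # MB for the guest
name   = "mpi-node"                  # hypothetical domain name
disk   = ["phy:hda2,sda1,w"]         # illustrative block-device mapping
root   = "/dev/sda1 ro"

# Pin the guest to a known CPU so placement is fixed during benchmarking:
#   cpu = 0  -> domU shares CPU 0 with dom0 (which also does its network I/O)
#   cpu = 1  -> domU runs on the second CPU, separate from dom0
cpu = 1

Comparing runs with cpu = 0 against cpu = 1 is one way to answer the question above about whether dom0 and the domU share a CPU.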
xuehai zhang
2005-Apr-04 23:43 UTC
Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
Ian,

Thanks for the quick response! Replies to your comments are inline below.

> Xen (or any kind of virtualization) is not particularly well suited to
> MPI-type applications, at least unless you're using InfiniBand or some
> other smart NIC that avoids having to use dom0 to do the I/O
> virtualization.
>
> However, the results you are seeing are lower than I'd expect.
>
> Are you running dom0 and the domU on the same CPU or on different CPUs?
> How does changing this affect the results?

I did not specify the "cpu" option in Xen's configuration file, so I think both dom0 and domU run on the same CPU (the 1st CPU). I will try to assign them to different CPUs later.

> Also, are you sure the MTU is the same in all cases?

The output of "ifconfig" shows the MTU is 1500 in all cases.

> Further, please can you repeat the experiments with just a dom0 running
> on each node.

I will do it and update you later.

Thanks again for the help.

Xuehai
xuehai zhang
2005-Apr-05 22:29 UTC
Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
xuehai zhang wrote:
>> Are you running dom0 and the domU on the same CPU or on different CPUs?
>> How does changing this affect the results?
>
> I did not specify the "cpu" option in Xen's configuration file, so I think
> both dom0 and domU run on the same CPU (the 1st CPU). I will try to assign
> them to different CPUs later.

I think I said something wrong here. If I do not specify the "cpu" option in the Xen config file, I observe that Xen usually assigns the 2nd CPU to domU while running dom0 on the 1st CPU.
Mark Williamson
2005-Apr-05 22:34 UTC
Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
On Tuesday 05 April 2005 23:29, xuehai zhang wrote:
>> I did not specify the "cpu" option in Xen's configuration file, so I think
>> both dom0 and domU run on the same CPU (the 1st CPU). I will try to assign
>> them to different CPUs later.
>
> I think I said something wrong here. If I do not specify the "cpu" option
> in the Xen config file, I observe that Xen usually assigns the 2nd CPU to
> domU while running dom0 on the 1st CPU.

Last time I looked, the default was to assign CPUs in a round-robin fashion, i.e. the next domain you create will be on the 1st CPU (with dom0) unless you explicitly specify otherwise - watch out that this doesn't confuse your testing.

Cheers,
Mark
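To make the default concrete, a small illustration (not Xen's actual scheduler code) of the round-robin placement described above, assuming a hypothetical dual-CPU node where dom0 is created first:

# Illustration only, not Xen's scheduler code: round-robin CPU placement as
# described above, on a hypothetical dual-CPU node.
def round_robin_cpu(domain_index, num_cpus=2):
    """CPU assigned to the Nth domain created (dom0 is index 0)."""
    return domain_index % num_cpus

# dom0 (index 0) lands on CPU 0 and the first domU (index 1) on CPU 1, but a
# second domU (index 2) would land back on CPU 0 alongside dom0 -- the case
# that could confuse the benchmark results.
for i in range(3):
    print("domain %d -> CPU %d" % (i, round_robin_cpu(i)))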
xuehai zhang
2005-Apr-05 22:39 UTC
Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
Mark,

Thanks for the advice. I will explicitly specify the "cpu" option in domU's config file.

So far, I think my experiments are not affected by this. In my experiments I only run at most one domU besides dom0, and I think that domU will be assigned to the 2nd CPU even if the assignment policy is round robin.

Xuehai

Mark Williamson wrote:
> Last time I looked, the default was to assign CPUs in a round-robin
> fashion, i.e. the next domain you create will be on the 1st CPU (with
> dom0) unless you explicitly specify otherwise - watch out that this
> doesn't confuse your testing.
Mark Williamson
2005-Apr-05 22:43 UTC
Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
> Thanks for the advice. I will explicitly specify the "cpu" option in
> domU's config file.

Probably good practice although, as you rightly point out, it doesn't matter in this case.

> So far, I think my experiments are not affected by this. In my experiments
> I only run at most one domU besides dom0, and I think that domU will be
> assigned to the 2nd CPU even if the assignment policy is round robin.

Ah yes, the code's been updated to choose the least loaded CPU. Should be fine then.

Cheers,
Mark
xuehai zhang
2005-Apr-06 04:25 UTC
Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
Mark Williamson wrote:
>> So far, I think my experiments are not affected by this. In my experiments
>> I only run at most one domU besides dom0, and I think that domU will be
>> assigned to the 2nd CPU even if the assignment policy is round robin.
>
> Ah yes, the code's been updated to choose the least loaded CPU. Should be
> fine then.

Nice to know. Thanks for the input.

Xuehai