I have tried what you suggested. I pinned 1 core per guest and also pinned 1 core to Dom0 instead of allowing Dom0 to use all 8 cores, but the results remained the same. Below are the details:

[root@HPCNL-SR-2 ~]# xm vcpu-list
Name          ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0       0     0    0  r--       69.4  any cpu
Domain-0       0     1    -  --p        4.7  any cpu
Domain-0       0     2    -  --p        6.2  any cpu
Domain-0       0     3    -  --p        5.5  any cpu
Domain-0       0     4    -  --p        4.7  any cpu
Domain-0       0     5    -  --p        3.5  any cpu
Domain-0       0     6    -  --p        3.8  any cpu
Domain-0       0     7    -  --p        3.5  any cpu
F11-G1S2             0                  0.0  any cpu
F11-G2S2       1     0    1  -b-       14.7  1
F11-G3S2       2     0    2  -b-       14.9  2
F11-G4S2             0                  0.0  any cpu

[root@F11-G2S2 ~]# netserver
Starting netserver at port 12865
Starting netserver at hostname 0.0.0.0 port 12865 and family AF_UNSPEC

[root@F11-G3S2 ~]# netperf -l 60 -H 10.11.21.212
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.11.21.212 (10.11.21.212) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.05         0.29

There is something strange that I have observed in my set-up: when I traceroute a guest, it doesn't reach any destination. I do not get a reply from any hop.

[root@F11-G3S2 ~]# traceroute 10.11.21.212
traceroute to 10.11.21.212 (10.11.21.212), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  *^C

It displays the same stars up to hop 30. Normally this doesn't happen; it should be something like:

[root@F11-G3S2 ~]# traceroute 10.11.21.32
traceroute to 10.11.21.32 (10.11.21.32), 30 hops max, 60 byte packets
 1  10.11.21.32 (10.11.21.32)  0.740 ms  0.710 ms  0.674 ms

I feel there is some network configuration issue. Please guide me on how to find the root cause and resolve the problem. How can I check whether ICMP is being filtered on my Fedora 11 system?

Regards,
Fasiha Ashraf

--- On Sat, 5/9/09, Fajar A. Nugraha <fajar@fajar.net> wrote:

From: Fajar A. Nugraha <fajar@fajar.net>
Subject: Re: [Xen-users] bridge throughput problem
To: "Fasiha Ashraf" <feehapk@yahoo.co.in>
Cc: xen-users@lists.xensource.com
Date: Saturday, 5 September, 2009, 4:59 PM

On Sat, Sep 5, 2009 at 12:06 PM, Fasiha Ashraf <feehapk@yahoo.co.in> wrote:
> What is Guest1 and Guest2?
> These are PV domUs of Fedora 11 (32-bit).
> Is it on the same dom0 or on different dom0?
> Yes, they are on the same host, on the same physical machine.

Perhaps it's a CPU/interrupt issue. Can you make sure that dom0, guest1, and guest2 ONLY use 1 vcpu each, and that they're located on DIFFERENT physical cpus/cores (xm vcpu-set, xm vcpu-pin), and repeat the test.

Also, have another window running for each dom0/domU, and observe CPU load during that test with "top". Which domain uses 100%? Is it user or system?

--
Fajar
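For the ICMP question: a reasonable first check is the guest-side firewall, since traceroute sends UDP probes and relies on ICMP replies coming back. A rough sketch, assuming the stock Fedora 11 iptables service is in use (commands are generic, not specific to this setup):

# On the target guest (e.g. 10.11.21.212), list the active firewall rules and
# look for DROP/REJECT rules matching icmp or udp.
iptables -L -n -v

# Check whether the kernel is configured to ignore ICMP echo requests entirely.
cat /proc/sys/net/ipv4/icmp_echo_ignore_all    # 1 means ping is ignored

# Temporary test only: accept ICMP explicitly, or stop the firewall, then
# re-run traceroute and netperf to see whether the behaviour changes.
iptables -I INPUT 1 -p icmp -j ACCEPT
service iptables stop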
On Mon, Sep 7, 2009 at 5:15 PM, Fasiha Ashraf <feehapk@yahoo.co.in> wrote:
> I have tried what you suggested. I pinned 1 core per guest and also pinned 1
> core to Dom0 instead of allowing Dom0 to use all 8 cores, but the results
> remained the same.

At this point I have to say I don't know. I'm not familiar enough with F11 (especially the kernel) to know what kind of throughput to expect under Xen. In my RHEL5 setup (kernel 2.6.18), inter-domU communication can easily reach 2 Gbps.

Perhaps it's a performance issue with the newer pv_ops kernel. Hopefully others familiar with this setup can help you.

--
Fajar
Fasiha, you're not alone.
I've got a xen-tip/master pv_ops dom0 running, and I get roughly the same figures you do: 0.14 domU to domU, and 12990.91 domU to dom0. The netserver end is completely idle (as reported by sar), as is dom0, during all tests.

Whereas a 2.6.18-based kernel on an old dual P3 Xeon gets 327 and 456 respectively.

On Monday 07 September 2009 11:15:01 Fasiha Ashraf wrote:
> I have tried what you suggested. I pinned 1 core per guest and also pinned 1
> core to Dom0 instead of allowing Dom0 to use all 8 cores, but the results
> remained the same. Below are the details:
> [...]

--
Mike Williams
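For reference, the test behind these figures is essentially the following; IPs and sample counts are placeholders:

# On the receiving domU: start the netperf server (listens on port 12865).
netserver

# On the sending domU: one 60-second TCP_STREAM run against the receiver.
netperf -l 60 -H 10.11.21.212

# Meanwhile, in dom0 and in each domU, watch CPU utilisation for the
# duration of the run (sar from the sysstat package, or plain top).
sar -u 1 60
top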
On Mon, Sep 07, 2009 at 10:16:38PM +0100, Mike Williams wrote:
> Fasiha, you're not alone.
> I've got a xen-tip/master pv_ops dom0 running, and I get roughly the same
> figures you do.

Can you verify that the throughput problem gets fixed if you change the dom0 kernel to a non-pv_ops one (and keep the rest of the configuration and settings unchanged)?

http://xenbits.xen.org/linux-2.6.18-xen.hg

-- Pasi

> 0.14 domU to domU, and 12990.91 domU to dom0.
> The netserver end is completely idle (as reported by sar), as is dom0, during
> all tests.
>
> Whereas a 2.6.18-based kernel on an old dual P3 Xeon gets 327 and 456
> respectively.
> [...]
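A rough sketch of how that tree is typically fetched and built; the defconfig path and the version string are from memory, so double-check them against what the tree actually contains:

# Fetch the xenified 2.6.18 kernel tree with Mercurial.
hg clone http://xenbits.xen.org/linux-2.6.18-xen.hg
cd linux-2.6.18-xen.hg

# Start from one of the shipped Xen dom0 configs (see the buildconfigs/
# directory), answer any new prompts, and tweak drivers as needed.
cp buildconfigs/linux-defconfig_xen_x86_64 .config
make oldconfig
make menuconfig

# Build and install the kernel, modules and an initrd (the version string
# may differ; check "make kernelrelease").
make -j4
make modules_install
make install
mkinitrd /boot/initrd-2.6.18.8-xen.img 2.6.18.8-xen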
Hi,
Thanks for all your suggestions and help. I will surely give what you have suggested a shot. Please tell me: is it OK to build the Dom0 kernel (v2.6.18) on Fedora 11 (v2.6.29), or do I first need to downgrade the kernel version?

Regards,
Fasiha Ashraf

--- On Tue, 8/9/09, Pasi Kärkkäinen <pasik@iki.fi> wrote:

From: Pasi Kärkkäinen <pasik@iki.fi>
Subject: Re: Fw: Re: [Xen-users] bridge throughput problem
To: "Mike Williams" <mike@gaima.co.uk>
Cc: xen-users@lists.xensource.com
Date: Tuesday, 8 September, 2009, 12:56 PM

Can you verify that the throughput problem gets fixed if you change the dom0 kernel to a non-pv_ops one (and keep the rest of the configuration and settings unchanged)?

http://xenbits.xen.org/linux-2.6.18-xen.hg

-- Pasi

[...]
On Tue, Sep 08, 2009 at 03:01:40PM +0530, Fasiha Ashraf wrote:
> Hi,
> Thanks for all your suggestions and help. I will surely give what you have suggested a shot. Please tell me:
> Is it OK to build the Dom0 kernel (v2.6.18) on Fedora 11 (v2.6.29)?
> Or do I first need to downgrade the kernel version?

No need to downgrade the Fedora kernel. Just compile the 2.6.18-xen kernel (and hope it has all the drivers for your hardware).

You could also try running the CentOS5/RHEL5 kernel-xen on dom0 if linux-2.6.18-xen doesn't work.

-- Pasi

> [...]
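If you try the CentOS5/RHEL5 kernel-xen route on a Fedora dom0, one possible way is to install the RPM by hand; the package file name below is only an example, and cross-distro installs can hit dependency issues:

# Download kernel-xen from a CentOS 5 mirror, then install it alongside the
# existing kernels. Use -i, not -U, so the current kernel stays as a fallback.
rpm -ivh kernel-xen-2.6.18-*.el5.i686.rpm

# Check what landed in /boot and add a matching xen.gz + module entry to
# /boot/grub/grub.conf before rebooting.
ls /boot | grep -i xen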
On Tuesday 08 September 2009 08:56:05 Pasi Kärkkäinen wrote:
> On Mon, Sep 07, 2009 at 10:16:38PM +0100, Mike Williams wrote:
> > Fasiha, you're not alone.
> > I've got a xen-tip/master pv_ops dom0 running, and I get roughly the same
> > figures you do.
>
> Can you verify that the throughput problem gets fixed if you change the dom0
> kernel to non-pv_ops? (and keep the rest of the configuration and settings
> unchanged).

Using "netperf -l 60 -H <ip>", doing 2 runs of each:

domU 2.6.31-rc6-g2b8a8d4, dom0 2.6.29-xen-r5 (openSUSE patches):
  domU -> domU  ~9000
  domU -> dom0  ~11500

domU 2.6.29-xen-r5 (openSUSE patches), dom0 2.6.29-xen-r5 (openSUSE patches):
  domU -> domU  ~12900
  domU -> dom0  ~11700

Previously dom0 was 2.6.30-rc3-tip.

--
Mike Williams
Hi,
Some of my menuconfig options are not selected; they are given below. All the others are marked [y].

Symbol: XEN_NETDEV_ACCEL_SFC_BACKEND [=n]
Prompt: Network-device backend driver acceleration for Solarflare NICs
Defined at drivers/xen/Kconfig:104
Depends on: XEN && XEN_NETDEV_BACKEND && SFC && SFC_RESOURCE && X86
Location:
  -> XEN
    -> Backend driver support (XEN_BACKEND [=y])
      -> Network-device backend driver (XEN_NETDEV_BACKEND [=y])
Selects: XEN_NETDEV_ACCEL_SFC_UTIL

Symbol: XEN_XENCOMM [=n]
Symbol: XEN_UNPRIVILEGED_GUEST [=n]

After building this kernel (v2.6.18.8) it fails to boot, displaying:

Error 13: invalid or unsupported executable format

What could be the reason? Is it because I am compiling it on a higher-version Fedora 11 kernel (2.6.29)?

Regards,
Fasiha Ashraf

--- On Tue, 8/9/09, Pasi Kärkkäinen <pasik@iki.fi> wrote:

From: Pasi Kärkkäinen <pasik@iki.fi>
Subject: Re: Fw: Re: [Xen-users] bridge throughput problem
To: "Fasiha Ashraf" <feehapk@yahoo.co.in>
Cc: xen-users@lists.xensource.com
Date: Tuesday, 8 September, 2009, 2:48 PM

No need to downgrade the Fedora kernel. Just compile the 2.6.18-xen kernel (and hope it has all the drivers for your hardware).

You could also try running the CentOS5/RHEL5 kernel-xen on dom0 if linux-2.6.18-xen doesn't work.

-- Pasi

[...]
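The compile host's kernel version is unlikely to be the cause. With a 2.6.18-xen build, GRUB's "Error 13: invalid or unsupported executable format" usually means the dom0 kernel image is being booted directly with a "kernel" line instead of being loaded as a multiboot module under xen.gz. A grub.conf entry for a Xen dom0 typically looks roughly like the sketch below; the paths, version strings, and root= device are examples only:

title Xen with 2.6.18.8-xen dom0
        root (hd0,0)
        # Boot the hypervisor first...
        kernel /xen.gz dom0_mem=1024M
        # ...then load the dom0 kernel and initrd as modules.
        module /vmlinuz-2.6.18.8-xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18.8-xen.img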
Please send me your xend config file, so that I can compare it with mine and correct the problem.

Fasiha Ashraf

--- On Tue, 8/9/09, Mike Williams <mike@gaima.co.uk> wrote:

From: Mike Williams <mike@gaima.co.uk>
Subject: Re: Fw: Re: [Xen-users] bridge throughput problem
To: xen-users@lists.xensource.com
Date: Tuesday, 8 September, 2009, 11:23 PM

Using "netperf -l 60 -H <ip>", doing 2 runs of each:

domU 2.6.31-rc6-g2b8a8d4, dom0 2.6.29-xen-r5 (openSUSE patches):
  domU -> domU  ~9000
  domU -> dom0  ~11500

domU 2.6.29-xen-r5 (openSUSE patches), dom0 2.6.29-xen-r5 (openSUSE patches):
  domU -> domU  ~12900
  domU -> dom0  ~11700

Previously dom0 was 2.6.30-rc3-tip.

--
Mike Williams
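For comparison while waiting: on a default bridged setup, the network-relevant part of /etc/xen/xend-config.sxp usually boils down to the two lines below, with the guest's vif line pointing at the same bridge. Bridge and device names vary between Xen versions, so treat these as illustrative:

# /etc/xen/xend-config.sxp -- bridged networking
(network-script network-bridge)
(vif-script vif-bridge)

# In the domU config file, something like:
# vif = [ 'bridge=eth0' ]    # older setups use xenbr0 as the bridge name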