Mike Kazmier
2009-Jan-13 00:42 UTC
[Xen-users] Lots of udp (multicast) packet loss in domU
Hello,

After a few of us have spent a week googling around for answers, I feel compelled to ask this question: how do I stop packet loss between my dom0 and domU? We are currently running xen-3.3.0 (and have tried xen-3.2.1) on a Gentoo system with a 2.6.18 kernel for both domU and dom0. We have also tried a 2.6.25 kernel for dom0 with exactly the same results.

The goal is to run our multicast processing application in a domU with the BRIDGED configuration. Note: we do not have any problem if we put the network interfaces into PCI passthrough and use them exclusively in the domU, but that is less than ideal, as occasionally other domUs need to communicate with those feeds.

So, when we have a little (sub-100 Mbps) multicast traffic coming in, everything is fine. Over that, we start to see packet loss approaching 5%, but the loss is only seen in the domU. I can run our realtime analysis tool in the dom0 and domU at the same time, on the same multicast feed, and in the dom0 all packets are accounted for. Initially, I found a large number of dropped frames on the VIF interface, but after running "echo -n 256 > /sys/class/net/eth2/rxbuf_min" all reported errors have gone away (i.e., dom0 does not report dropping any packets), yet we still see loss.

Any suggestions at all would be greatly appreciated. I know we have something borked on our end, since this white paper shows high domU UDP performance: http://www.cesnet.cz/doc/techzpravy/2008/effects-of-virtualisation-on-network-processing - but at the same time a Google search shows tons of issues around UDP packet loss with Xen.

The hardware is an 8-core, 2.33 GHz Penryn (Xeon) based system. We have Intel quad GigE NICs with the latest Intel igb drivers.
Here is the current domU config:

----------------------------------
# general
name = "xenF";
memory = 1024;
vcpus = 6;
builder = "linux";
on_poweroff = "destroy";
on_reboot = "restart";
on_crash = "restart";
cpu_weight = 1024  # xenF processes get 4 times default (256) priority
cpu_cap = 600      # Don't use more than 6 real CPUs

# This lets us use the xm console
extra = " console=xvc0 xencons=xvc0";

# booting
kernel = "/boot/vmlinuz-2.6.18-xen-domU";

# virtual harddisk
disk = [ "file:/var/xen/domU-xenF,xvda1,w" ];
root = "/dev/xvda1 ro";

# virtual network
vif = [ "bridge=xenbr0", "bridge=xenbr1", "bridge=xenbr2" ]

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Harper
2009-Jan-13 00:51 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
> After a few of us have spent a week googling around for answers, I feel
> compelled to ask this question: how do I stop packet loss between my dom0
> and domU? [snip]
> The goal is to run our multicast processing application in a domU with the
> BRIDGED configuration.

Googling has probably already led you to these tips, but just in case:

Try 'echo 0 > bridge-nf-call-iptables' if you haven't already. This will stop bridged traffic traversing any of your iptables firewall rules. If you are using IPv6 then also 'echo 0 > bridge-nf-call-ip6tables'.

Another thing to try is turning off checksum offloading. I don't think it is likely to make much difference, but given the little effort required it's probably worthwhile. (ethtool -k to see what settings are on, ethtool -K to modify them.)

Also try pinning Dom0 and DomU to separate physical CPUs. Again, I don't think this is likely to make much difference, but it's easy to test.

Please post back with what you try and what sort of difference it makes, as I'm always on the lookout for ways to improve network performance.

James
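[Editor's note: the two bridge-netfilter knobs named above are relative paths; on most Linux systems they live under /proc/sys/net/bridge/ and exist only while the bridge module with netfilter support is loaded. A minimal sketch, run as root:]

```shell
# Stop bridged frames from traversing iptables/ip6tables rules.
# Paths assume the bridge netfilter support is loaded; if the files
# are absent, bridged traffic is not hitting netfilter anyway.
echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
```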
I have the same issue, and I don't have a firewall in Dom0. Plus I have my dom0 pinned to processors 0 and 1 and all the domUs scattered between cores 2-15. Still the quality is awful. The media is UDP only, since this is IP telephony. Any ideas?

-----Original Message-----
From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of James Harper
Sent: Monday, January 12, 2009 7:52 PM
To: Mike Kazmier; xen-users@lists.xensource.com
Subject: RE: [Xen-users] Lots of udp (multicast) packet loss in domU

[quoted message snipped]
Mike Kazmier
2009-Jan-13 15:23 UTC
Re: [Xen-users] [SOLVED] Lots of udp (multicast) packet loss in domU
Sorry for the top post, but I wanted to let everyone know I solved this issue and how it was solved. By removing the CPU cap (cpu_cap = 600 # Don't use more than 6 real CPUs) everything runs perfectly (well, we lost 20 packets over a 12-hour period, much better than the 1 per second we were losing). You can also apply the settings live with: xm sched-credit -d 1 -w 1024 -c 0 (use your own appropriate weight for your domU).

Now... why did this happen? No idea. Our domU was running with CPU averages around 400 to 430%; since there were 6 cores in my domU, that would appear to leave plenty of cycles to process incoming UDP traffic. Any Xen experts care to weigh in as to why this would happen? Note we are using the credit scheduler; should we be?

Best,
--Mike

On Mon, Jan 12, 2009 at 5:42 PM "Mike Kazmier" <DaKaZ@zenbe.com> wrote:

[original message snipped]
I pinned one application to one processor and the quality jumped dramatically. I wish to understand the second fix proposed by James: the checksum offloading. What is that exactly?

-----Original Message-----
From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of James Harper
Sent: Monday, January 12, 2009 7:52 PM
To: Mike Kazmier; xen-users@lists.xensource.com
Subject: RE: [Xen-users] Lots of udp (multicast) packet loss in domU

[quoted message snipped]

James
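[Editor's note: checksum offloading means the NIC, rather than the kernel, computes and verifies TCP/UDP checksums; under Xen bridging, packets crossing a vif may carry unfinished checksums, which is why disabling it is a common test. The ethtool commands James referred to look roughly like this; eth0 is a placeholder interface name, and the exact offload flags available vary by driver:]

```shell
# Show the current offload settings for an interface
ethtool -k eth0

# Disable receive and transmit checksum offload for a test run.
# The change takes effect immediately but does not survive a reboot.
ethtool -K eth0 rx off tx off
```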
Mike Kazmier
2009-Jan-14 01:52 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
Thanks for the reply James; there are some comments below, but let me start by stating that indeed this problem is NOT solved. When we removed the CPU cap we only moved on to the NEXT problem. Here is the issue now: we are still getting massive packet loss (now upwards of 60%), and it appears the bridge is the culprit. Again, we have approximately 600 Mbps of multicast (UDP) traffic we are trying to pass TO and then back FROM a domU, and then other domUs occasionally grab and use this traffic. Each domU that starts seems to consume >80% CPU in dom0 - but only if it is passed the bridged ethernet ports. So, maybe our architecture just isn't supported? Should we be using a routed configuration (with XORP for a multicast router?) and/or just use PCI passthrough? We don't see any such issues in PCI passthrough, but then our domUs have to be connected via an external switch, and this is something we were hoping to avoid. Any advice here would be great.

On Mon, Jan 12, 2009 at 5:51 PM "James Harper" <james.harper@bendigoit.com.au> wrote:

> Try 'echo 0 > bridge-nf-call-iptables' if you haven't already. This will
> stop bridged traffic traversing any of your iptables firewall rules. If
> you are using ipv6 then also 'echo 0 > bridge-nf-call-ip6tables'

Tried this - no effect - we have no rules in place.

> Another thing to try is turning off checksum offloading. I don't think
> it is likely to make much difference but due to the little effort
> required it's probably worthwhile. (ethtool -k to see what settings are
> on, ethtool -K to modify them)

Again, no difference here.

> Also try pinning Dom0 and DomU to separate physical CPUs. Again I don't
> think this is likely to make much difference but it's easy to test.

Did this, also pinned domU to unused CPUs. Again, no effect.

--Mike
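[Editor's note: for readers following along, the pinning James suggested is done live with xm vcpu-pin. The domain name "xenF" matches the config posted earlier; the CPU numbers here are illustrative, not from the thread:]

```shell
# Pin dom0's vCPU 0 to physical CPU 0
xm vcpu-pin Domain-0 0 0

# Pin the domU's vCPUs onto a disjoint set of physical CPUs
# (CPU numbers are example values)
xm vcpu-pin xenF 0 2
xm vcpu-pin xenF 1 3

# Verify which physical CPUs each vCPU may run on
xm vcpu-list
```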
Mike Kazmier
2009-Jan-14 02:10 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
On Tue, Jan 13, 2009 at 6:52 PM "Mike Kazmier" <DaKaZ@zenbe.com> wrote:

> ... we are still getting massive packet loss (now upwards of 60%) and it
> appears the bridge is the culprit. [snip]

According to http://www.cs.ucl.ac.uk/staff/M.Handley/papers/xen-vrouters.pdf, we may be looking at only using the PCI passthrough model :( They show extremely poor performance between the dom0 and domU.
James Harper
2009-Jan-14 02:11 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
> ... we are still getting massive packet loss (now upwards of 60%) and it
> appears the bridge is the culprit. Again, we have approximately 600 Mbps
> of multicast (udp) traffic we are trying to pass TO and then back FROM a
> domU [snip]

I'm not sure what version the multicast stuff was introduced in, maybe 3.2.1, but before that multicast traffic was treated as broadcast traffic and so echoed to every domain. These days, each DomU network interface should be making Dom0 aware of what multicast traffic it should be receiving, so unless your domU kernels are old that shouldn't be your problem, but maybe someone else can confirm that multicast is definitely in place?

I think that bridging is definitely much lighter on CPU usage than routing, so I don't think that going down that path would help you in any way.

What size are your packets? If they are VoIP then you are probably stuck with tiny packets, but if they are something else that can use big packets then you may be able to improve things by increasing your MTU up to 9000, although that is a road less travelled and may not be supported.

But maybe Xen isn't the right solution to this problem?

James
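[Editor's note: if jumbo frames were an option end-to-end, the MTU change James mentions would look roughly like this on a 2009-era system. The interface, bridge, and vif names are examples; every device on the path, including the physical switch ports, must support the larger MTU or frames will be silently dropped:]

```shell
# Raise the MTU on the physical NIC, the bridge, and the backend vif
# (names are illustrative; all devices on the path must agree)
ifconfig eth3 mtu 9000
ifconfig xenbr2 mtu 9000
ifconfig vif8.2 mtu 9000
```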
Mike Kazmier
2009-Jan-14 15:14 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
Hello again James,

> These days, each DomU network interface should be making Dom0 aware of
> what multicast traffic it should be receiving, so unless your domU
> kernels are old that shouldn't be your problem, but maybe someone else
> can confirm that multicast is definitely in place?

Hmmm, how would I verify that? As far as I can see, the dom0 is in constant promiscuous mode so that it can pass all bridged traffic. This doesn't really matter though; I actually do need all the traffic I am receiving. The problem is that the load is exorbitant between dom0 and domU. I mean, with 600 Mbps of network IO, dom0 consumes an entire 5310 core (2.33 GHz Penryn), whereas if I pin that interface into the domU via PCI passthrough, we only get a 5% CPU load to ingest that traffic. I don't know if it's important or not, but in the dom0, if I use "top" the CPU is 99% idle, while "xm top" is where I see the 100% utilization on dom0.

> I think that bridging is definitely much lighter on CPU usage than
> routing, so I don't think that going down that path would help you in
> any way.

I agree in principle; I just didn't know what the Xen internals looked like, so I thought I would ask.

> What size are your packets? If they are VoIP then you are probably stuck
> with tiny packets, but if they are something else that can use big
> packets then you may be able to improve things by increasing your MTU up
> to 9000, although that is a road less travelled and may not be
> supported.

These are video packets; each packet has a 1316-byte UDP payload. Changing the MTU upstream is not possible for me.

> But maybe Xen isn't the right solution to this problem?

No, I still think it is. We are having great success with Xen in our application, except for this passing of traffic. Until we find the answer, we'll just have to use PCI passthrough and dedicate some NICs to the domU that needs the high bandwidth. Thanks again; I look forward to any more insights or ideas from the community.

--Mike
Mike Kazmier
2009-Jan-14 15:42 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
James, one more item of note:

> These days, each DomU network interface should be making Dom0 aware of
> what multicast traffic it should be receiving, so unless your domU
> kernels are old that shouldn't be your problem, but maybe someone else
> can confirm that multicast is definitely in place?

We are running xen 3.3.0 on a 2.6.18 dom0 and domU, but my domU gets 100% of the traffic even when it's not joined to any of the groups. Do you know which recent kernel versions would support IGMP snooping / pruning, such that my domUs only get what they are registered for?

One last question / idea: I have read a few times about IPv6-related issues... we currently have IPv6 enabled in our kernels but we are not using it. Do you think that disabling it would achieve anything?

Best,
--Mike
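[Editor's note: on kernels of that era the simplest way to rule IPv6 out is to keep the module from loading at all; the runtime disable_ipv6 sysctl only became fully functional in later kernels. A sketch, with the module name and config path being the usual defaults rather than anything from the thread:]

```shell
# Prevent the ipv6 module from loading at boot (takes effect after reboot)
echo "blacklist ipv6" >> /etc/modprobe.d/blacklist.conf

# On newer kernels (roughly 2.6.29 and later) IPv6 can instead be
# disabled at runtime without a reboot:
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
```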
Mike Kazmier
2009-Jan-14 16:19 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
On Wed, Jan 14, 2009 at 8:42 AM "Mike Kazmier" <DaKaZ@zenbe.com> wrote:

> We are running xen 3.3.0 on a 2.6.18 dom0 and domU, but my domU gets 100%
> of the traffic even when it's not joined to any of the groups. Do you know
> which recent kernel versions would support igmp snooping / pruning such
> that my domUs only get what they are registered for?

Just to test, I loaded up a 2.6.25 dom0 we have, and unfortunately we still see all multicast traffic in the domU without requesting it. A quick look at netstat -g shows only the 224.0.0.1 membership. The kernel has multicast enabled, of course ;)

--Mike
James Harper
2009-Jan-14 22:56 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
> Hmmm, how would I verify that? As far as I can see the dom0 is in
> constant promiscuous mode so that it can pass all bridged traffic. This
> doesn't really matter though, I actually do need all the traffic I am
> receiving. The problem is that the load is exorbitant between dom0 and
> domU. I mean, with 600 Mbps of network IO, dom0 consumes an entire 5310
> core (2.33 GHz penryn). Whereas if I pin that interface into the domU via
> PCI-Passthrough, we only get a 5% cpu load to ingest that traffic.

If netback is treating multicast traffic as broadcast traffic, then all multicast traffic will be forwarded to all DomU network interfaces on that bridge. More work for Dom0 and more work for each DomU. If the DomU is telling Dom0 what sort of multicast traffic is desired, then Dom0 doesn't have to work so hard. If, as you say, all your DomUs want all the multicast traffic, then this is probably irrelevant.

> I don't know if it's important or not, but in the dom0, if I use "top"
> the CPU is 99% idle. But if I run "xm top" this is where I see the 100%
> utilization on dom0.

Hmmm... well, if you have 600 Mbps of traffic at 1316 bytes per packet, that is ~75 Mbytes/second / 1316 bytes = ~57,000 packets per second.

While things are going at 600 Mbps, please try the following in both Dom0 and DomU:

cat /proc/interrupts && sleep 10 && cat /proc/interrupts

That should get a very approximate count of interrupts over a 10-second period. What is the difference (after - before) for:

Dom0 physical Ethernet interface
Dom0 vif (backend) interface
DomU eth0

James
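[Editor's note: the packet-rate estimate is easy to sanity-check with shell arithmetic. The 1316-byte figure is the UDP payload size given earlier in the thread; counting Ethernet/IP/UDP framing overhead would lower the rate slightly:]

```shell
# 600 Mbps -> bytes/second -> packets/second at 1316 bytes per packet
bps=600000000
bytes_per_pkt=1316
pps=$(( bps / 8 / bytes_per_pkt ))
echo "$pps packets/second"   # -> 56990 packets/second (~57k)
```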
Mike Kazmier
2009-Jan-15 17:04 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
Hi James, I accidentally sent this directly to you and not the list... so sorry for the double email.

Our physical eth3 is connected to xenbr2, which is used to send info to the domU.

> While things are going at 600Mbps, please try the following in both Dom0
> and DomU:
>
> cat /proc/interrupts && sleep 10 && cat /proc/interrupts
>
> That should get a very approximate count of interrupts over a 10 second
> period. what is the difference (after - before) for:
>
> Dom0 physical Ethernet interface
> Dom0 vif (backend) interface
> DomU eth0

Dom0:
36: 14136682 0 Phys-irq-level eth3, eth4
36: 14176719 0 Phys-irq-level eth3, eth4
Diff: 40037

275: 254326 0 Dynamic-irq-level vif8.2
275: 259086 0 Dynamic-irq-level vif8.2
Diff: 4760

DomU:
277: 1105308 0 0 0 0 Dynamic-irq eth2
277: 1147909 0 0 0 0 Dynamic-irq eth2
Diff: 42601
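[Editor's note: turning the before/after counters into an approximate per-second rate is a one-liner. The counts below are the ones Mike posted; the 10-second interval comes from James's command:]

```shell
# Convert two /proc/interrupts readings taken N seconds apart
# (default 10) into an approximate interrupts-per-second rate
irq_rate() {
    before=$1
    after=$2
    interval=${3:-10}
    echo $(( (after - before) / interval ))
}

irq_rate 14136682 14176719   # eth3/eth4: prints 4003
irq_rate 254326 259086       # vif8.2:    prints 476
irq_rate 1105308 1147909     # domU eth2: prints 4260
```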
James Harper
2009-Jan-15 23:44 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
> Dom0:
> 36: 14136682 0 Phys-irq-level eth3, eth4
> 36: 14176719 0 Phys-irq-level eth3, eth4
> Diff: 40037
>
> 275: 254326 0 Dynamic-irq-level vif8.2
> 275: 259086 0 Dynamic-irq-level vif8.2
> Diff: 4760
>
> DomU:
> 277: 1105308 0 0 0 0 Dynamic-irq eth2
> 277: 1147909 0 0 0 0 Dynamic-irq eth2
> Diff: 42601

So Dom0 is getting around 4000 interrupts a second, and DomU is getting around 4200 interrupts a second, so it sounds like both are doing interrupt moderation, which is good.

I notice that eth3 and eth4 are both sharing an interrupt. Is eth4 active during this time?

James
Dear James,

I need to download the latest and greatest drivers for Windows 2003, one that can conquer the challenges of SMP. Can you please indicate where they are located? I use Intel.

Federico
On Fri, Jan 16, 2009 at 11:48 AM, Venefax <venefax@gmail.com> wrote:

> Dear James
> I need to download the latest and greatest drivers for Windows 2003, one
> that can conquer the challenges of SMP. Can you please indicate where they
> are located? I use Intel.

I might be saying the obvious, but why not use 0.12-pre13 from http://meadowcourt.org/downloads/? I'm using it on 2-CPU W2k3-SP2 domUs, and so far it's working correctly.

Regards,
Fajar
James has a newer one. I never go for the old.

-----Original Message-----
From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Fajar A. Nugraha
Sent: Friday, January 16, 2009 3:47 AM
To: xen-users@lists.xensource.com
Subject: Re: [Xen-users] GPLPV

[quoted message snipped]
Mike Kazmier
2009-Jan-16 15:10 UTC
RE: [Xen-users] Lots of udp (multicast) packet loss in domU
On Thu, Jan 15, 2009 at 4:44 PM "James Harper" <james.harper@bendigoit.com.au> wrote:

> So Dom0 is getting around 4000 interrupts a second, and DomU is getting
> around 4200 interrupts a second, so it sounds like both are doing
> interrupt moderation, which is good.
>
> I notice that eth3 and eth4 are both sharing an interrupt. Is eth4
> active during this time?

No, eth4 is down.

--Mike