I've been running some simple tests trying to find out why the TCP accept() rate has been so low on my Xen guest. The rate at which I can accept new TCP connections is about five times better on a bare-metal machine than on my guest. I've been using netperf with the TCP_CRR test to measure this behavior.

I originally posted this question at Server Fault (http://serverfault.com/questions/272483/why-is-tcp-accept-performance-so-bad-under-xen) along with many more details on how I performed these tests. After a suggestion from a user there, I decided to try this list.

Judging from the number of views the question received at Server Fault, and from it reaching the top 3 on Hacker News, I presume this issue is something a lot of users care about. One user at HN also reported that this is apparently a known issue caused by small-packet performance, affecting both Xen and KVM.

After collecting feedback from SF and HN users, my question is: what can you do to improve small-packet performance in Xen? Is this a fundamentally difficult problem to solve with Xen, or is there a "quick fix"?

Thanks!

--
Carl Byström
http://cgbystrom.com
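For reference, a TCP_CRR run of the kind described above looks roughly like this; the server address and test length below are placeholders, not the exact invocation used in these tests:

    # on the machine under test: start the netperf server
    netserver

    # on the client: measure the TCP connect/request/response rate.
    # Each TCP_CRR transaction opens a new connection, exchanges one
    # request/response pair and closes it, so the result tracks how
    # fast the server side can accept() new connections.
    netperf -H 192.0.2.10 -t TCP_CRR -l 30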
On Mon, May 23, 2011 at 06:31:01PM +0200, Carl Byström wrote:
> I've been running some simple tests trying to find out why the TCP
> accept() rate has been so low on my Xen guest.
> [...]
> Is this a fundamentally difficult problem to solve with Xen, or is there
> a "quick fix"?

Hello,

- Did you try giving dom0 and the VM dedicated CPU cores? Did that help?
  (See http://wiki.xen.org/xenwiki/XenBestPractices)

- Can you use Xen PCI passthrough to dedicate a physical NIC to the VM?

- Can you post your benchmark numbers? We need more info so we know what
  kind of numbers we are talking about. Also post the specs of your
  hardware and the full software/kernel/Xen versions.

-- Pasi
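For anyone wanting to try the pinning and passthrough suggestions, a rough sketch follows; the domain name, CPU numbers and PCI address are placeholders, and the exact steps depend on the toolstack and setup:

    # pin dom0's vCPU to physical CPU 0 and the guest's vCPUs elsewhere
    xl vcpu-pin Domain-0 0 0
    xl vcpu-pin myguest  0 1
    xl vcpu-pin myguest  1 2
    # (with the older toolstack, use "xm vcpu-pin" instead of "xl vcpu-pin")

    # PCI passthrough: hand a physical NIC directly to the guest by adding
    # its BDF address to the guest config, for example:
    #   pci = [ '0000:03:00.0' ]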
In my experience, small-packet performance (header congestion) is a critical issue on all networking gear. An example is an Ethernet switch that needs to apply and strip 802.1q VLAN and CoS tags, or to remark DSCP. Small-business-class switches don't support monitoring of the switching CPU, making it nearly impossible to gauge whether your gear is suffering from this.

I am a dealer for a remote performance-management suite of testing tools geared towards monitoring the performance of hybrid physical and virtual networks. It has the capability to detect (with a high degree of certainty) whether your network has gear that is susceptible to small-packet header congestion. Its diagnostic reads:

"Small packet congestion detected"

Summary
Congestion caused by densely arriving packet headers has been detected.

Recommended action
* Identify devices such as switches, gateways, etc. associated with the Layer 3 hop where the loss first appears.
* Assess the impact of the problem, i.e. determine whether you expect to have dense small-packet bursts or streams across that segment.
* If possible, perform intrusive flooding tests across the segment to isolate the device or software responsible.
* Upgrade the hardware or software of the limiting device and/or turn off the software feature that is responsible.

Detailed explanation
This diagnostic involves a specific form of small-packet loss attributed to some devices having difficulty handling densely arriving packet headers. Unlike regular congestion, which is sensitive to the amount of data rather than the number of headers, this "header congestion" condition affects applications built on small packets, such as real-time voice and video, but only when there are many densely aggregated streams. A single voice stream is unlikely to trigger this condition. The NIC, or some other device in the path, is unable to process headers at sufficiently high rates, and packet loss/corruption is the consequence. Small-packet congestion is distinct from regular congestion, which is attributed more to large packets filling queues/buffers at store-and-forward devices (e.g. routers) or receiving NICs.

Possible secondary messages
* "Limiting network processor or other small packet sensitive constriction detected"
* "May impact real-time traffic such as voice"

Effectively, Xen creates 'virtual switches' to connect the VMs. It's quite likely that performance will suffer versus bare metal, as network connections need to traverse many layers of virtual bridging to reach the VM and to get back out. I don't know whether this might be fixable by increasing dom0's CPU access or by giving higher priority to the networking processes (I'm not sure what they are named).

________________________________
From: Carl Byström
Sent: Monday, May 23, 2011 12:31 PM
Subject: [Xen-users] Bad TCP accept performance

> I've been running some simple tests trying to find out why the TCP
> accept() rate has been so low on my Xen guest.
> [...]
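On the Xen side, the "virtual switch" described above is normally a Linux bridge in dom0 with one vif device per guest interface; the layers a packet traverses can be inspected with something like the following (bridge and interface names vary between setups):

    # list the dom0 bridges and the interfaces attached to them,
    # typically a bridge such as xenbr0 holding the physical NIC plus
    # one vifX.Y device per guest network interface
    brctl show

    # list the Xen-related interfaces themselves
    ifconfig -a | grep -E 'vif|xenbr|peth'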
On Mon, May 23, 2011 at 10:22 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> - Did you try giving dom0 and the VM dedicated CPU cores? Did that help?
>   (See http://wiki.xen.org/xenwiki/XenBestPractices)
>
> - Can you use Xen PCI passthrough to dedicate a physical NIC to the VM?

I haven't tried that, no. The dom0 is hosting other infrastructure servers, so I will need to get hold of our sysadmin to help me with this. Will get back to you.

> - Can you post your benchmark numbers? We need more info so we know what
>   kind of numbers we are talking about. Also post the specs of your
>   hardware and the full software/kernel/Xen versions.

I maintain a list of benchmark numbers at https://gist.github.com/985475, submitted by myself and other users. It includes OS versions but not the Xen version; I will try to find that out. I also tried this on EC2, with discouraging results there as well.

Russ, I tried this using only local loopback (127.0.0.1) to minimize any external factors. Does that make a difference to what you were suggesting?

Appreciate any help I can get on this.

--
Carl Byström
http://cgbystrom.com
On Tue, May 24, 2011 at 11:46:17PM +0200, Carl Byström wrote:
> I maintain a list of benchmark numbers at https://gist.github.com/985475,
> submitted by myself and other users. It includes OS versions but not the
> Xen version; I will try to find that out. I also tried this on EC2, with
> discouraging results there as well.
> [...]

By the way, did you use 64-bit or 32-bit VMs? I'd suggest trying 32-bit as well if you didn't.

-- Pasi
On Wed, May 25, 2011 at 7:54 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> By the way, did you use 64-bit or 32-bit VMs? I'd suggest trying 32-bit as
> well if you didn't.

I've tried that on EC2, but not on our private servers yet. It made no difference there.

Someone has also suggested that increasing the listen backlog (well beyond 128) could help improve things. tcp_syncookies was another suggestion. I don't yet know what effect they will have; I haven't tried them.

I also got a report (see the previous link posted) from someone on KVM with much better performance. Whether those tests were performed correctly I cannot tell, but it is at least interesting if that's the case.

--
Carl Byström
http://cgbystrom.com
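For the record, the backlog suggestion usually involves both the kernel limits and the value the application passes to listen(); a rough sketch of the knobs mentioned above, with illustrative values rather than recommendations:

    # kernel caps on the accept queue and the SYN backlog
    # (the default accept-queue cap is 128, hence "well beyond 128")
    sysctl -w net.core.somaxconn=1024
    sysctl -w net.ipv4.tcp_max_syn_backlog=4096

    # the SYN-cookies suggestion
    sysctl -w net.ipv4.tcp_syncookies=1

    # note: the server application must also pass a larger backlog to
    # listen(); anything above net.core.somaxconn is silently capped.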
None of your Xen tests were performed on a suitably recent kernel: the KVM test ran on a 2.6.38.6 kernel whereas the Xen tests ran on 2.6.32. That's a fair difference. Kernel development has been pretty active lately, and I would not be surprised to see major differences between older and newer kernels when used for virtualisation.

I'm attempting to install Xen on top of a vanilla 2.6.39 kernel today. I will run your tests on my box and see what the results are.

It's quite disheartening to see such a drop in accept performance; I for one hope it's just misconfiguration (or requires further configuration) to bring the VMs up to spec.

Iain

On 25 May 2011, at 08:51, Carl Byström wrote:
> I've tried that on EC2, but not on our private servers yet. It made no
> difference there.
> [...]
Oh, I see. My apologies; I had not seen any mention of the newer kernels, but I only glanced over the post this morning, so that could be why.

I think the fact that you have results from people testing on dom0 showing a similar drop in performance probably means it's unlikely that guest OS configuration alone will sort it out. However, I am hopeful, and as soon as I have finished up at work I'll play with 2.6.39 and post up results.

Iain

On 25 May 2011, at 11:55, Carl Byström wrote:
> 2011/5/25 Iain Kay <spam@iainkay.com>
>> None of your Xen tests were performed on a suitably recent kernel: the
>> KVM test ran on a 2.6.38.6 kernel whereas the Xen tests ran on 2.6.32.
>
> I've tried on EC2 using a 2.6.38 kernel (Ubuntu 11.04 Natty 32-bit, AMI
> ami-e2af508b); same results there. Someone also submitted a report from
> Linode with kernel 2.6.38-x86_64.
>
>> I'm attempting to install Xen on top of a vanilla 2.6.39 kernel today.
>> Will run your tests on my box and see what the results are.
>
> Please do.
>
>> Quite disheartening to see such a drop in accept performance, I for one
>> hope it's just misconfiguration (or requires further configuration) to
>> bring the VMs up to spec.
>
> Yes, I hope so. At least I hope for the possibility of mitigating this
> with some guest OS tuning (but I'm starting to think that's a bit naïve).
>
> --
> Carl
Correct, using 127.0.0.1 should allow the VM to make a ridiculous number of TCP connections to itself (max ~65000 concurrent) in a very short amount of time. However, the loopback address is still 'in' the NIC card, so those tests do not really stay self-contained in the VM: the traffic does need to go through the PV drivers to the virtual network layers and back. What it does accomplish is that you can rule out pretty much anything external to the physical host. I'm not sure whether you mentioned using PV drivers or not, but that's definitely a good idea.

How well 127.0.0.1 performs is kind of a moot point, however, as no real application would make use of it. The real test is how many connections you can spin up from a physical host outside of the virtualization platform. There's also value in knowing how many you can make from VM to VM.

Here is a test I have running across the open internet (see attachment). It breaks down the server response time into how long it took for DNS to resolve, how long it took to create the TCP connection, how long it took after the connection was established to start feeding data, and how long it took to feed all the data. By graphing network responsiveness it also shows how much of the result is due to network response versus server response. Running a test like this while you flood the server might also help to gauge the weakest point and how much load you can actually handle (which is ultimately the final question).

The server in the report above is behind a 10 Mbit fiber link, but this product is capable of testing up to 2 Gbit/sec in a virtual environment. Feel free to contact me for more information.

________________________________
From: Carl Byström
Sent: Tuesday, May 24, 2011 5:46 PM

> Russ, I tried this using only local loopback (127.0.0.1) to minimize any
> external factors. Does that make a difference to what you were suggesting?
On Wed, May 25, 2011 at 10:04:18AM -0400, Russ Purinton wrote:
> Correct, using 127.0.0.1 should allow the VM to make a ridiculous number
> of TCP connections to itself (max ~65000 concurrent) in a very short
> amount of time. However, the loopback address is still 'in' the NIC card,
> so those tests do not really stay self-contained in the VM: the traffic
> does need to go through the PV drivers to the virtual network layers and
> back.

Using localhost (127.0.0.1) goes through the loopback device only, i.e. interface 'lo', so it does NOT go through the NIC card or the Xen/PV drivers.

> How well 127.0.0.1 performs is kind of a moot point, however, as no real
> application would make use of it.

That's true.

-- Pasi
On Wed, May 25, 2011 at 4:08 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>> How well 127.0.0.1 performs is kind of a moot point, however, as no real
>> application would make use of it.
>
> That's true.

I've tested and got about the same connection rate between two separate physical machines. That's what led me into testing using only loopback. My presumption was that if loopback cannot produce the numbers I want, neither would the physical interface. But I guess it's easy to be fooled by what Xen is doing at a low level when virtualized. So don't get me wrong, I'm all ears.

--
Carl Byström
http://cgbystrom.com
> How well 127.0.0.1 performs is kind of a moot point, however, as no real
> application would make use of it. The real test is how many connections
> you can spin up from a physical host outside of the virtualization
> platform. There's also value in knowing how many you can make from VM to VM.

It does reduce the scope of the test to the TCP/IP stack though. If 127.0.0.1 performs well then the test doesn't tell you much, but if 127.0.0.1 performs badly then it definitely tells you where you should be looking for the problem, or at least it tells you that you shouldn't be looking for problems in the DomX<->DomX layer.

James
I thought that even 127.0.0.0/8 traffic still went through the NIC to loop back. I can't seem to find anything on the net supporting this one way or the other, though I've seen multiple posts about pinging 127.0.0.1 to test and verify that the NIC card and drivers are working properly. Not sure what that means.

Also, if it doesn't pass through the NIC, then the question arises how the IP and TCP checksums are being applied if offloading is enabled.

I had thought that on a physical host, the loopback ping would hit the network card. I'm guessing that on a virtual host, the loopback ping would only hit the virtual NIC but likely not the physical NIC. If it goes to the virtual NIC, then it would be passing through the PV drivers to the QEMU layer supporting the virtual networking, right?

The packets shouldn't be visible even from dom0 because they should stay within the vNIC. Again, I'm not finding any supporting documentation on the net, one way or the other, so feel free to prove me wrong.

Thanks

-----Original Message-----
From: Pasi Kärkkäinen
Sent: Wednesday, May 25, 2011 10:08 AM

> Using localhost (127.0.0.1) goes through the loopback device only, i.e.
> interface 'lo', so it does NOT go through the NIC card or the Xen/PV
> drivers.
On Wed, May 25, 2011 at 11:49:21AM -0400, Russ Purinton wrote:
> I thought that even 127.0.0.0/8 traffic still went through the NIC to loop
> back. I can't seem to find anything on the net supporting this one way or
> the other, though I've seen multiple posts about pinging 127.0.0.1 to test
> and verify that the NIC card and drivers are working properly.
> [...]
> Again, I'm not finding any supporting documentation on the net, one way or
> the other, so feel free to prove me wrong.

Just check the "ifconfig -a" output. The 127.0.0.1 IP is on the 'lo' interface, not on an 'ethX' interface. The lo interface is provided by the loopback driver. Even if you don't have any NIC drivers loaded, you still have 'lo', and pinging localhost works. Try it: rmmod your NIC driver and ping localhost.

-- Pasi
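A quick way to see this for yourself; the NIC driver name below (e1000) is only an example, and the physical interface on your box may differ:

    # 'lo' exists and carries 127.0.0.1 regardless of any physical NIC
    ifconfig lo

    # unload the NIC driver; loopback traffic keeps working
    rmmod e1000
    ping -c 3 127.0.0.1

    # loopback packets show up on 'lo', never on the physical interface
    tcpdump -i lo host 127.0.0.1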
I've done some testing with Xen 4.1.0 and a vanilla 2.6.39 kernel. I also tested on the CentOS 5.6 stock kernel and on 2.6.39 without booting xen-4.1.gz.

The results speak for themselves. I can definitely confirm the issue you are reporting on the current Xen stable release (4.1.0) with the latest vanilla 2.6.39 kernel.

I ran each test three times per kernel and took the mean values, which I think is fairest. I then took those values and plotted a graph showing performance with the different kernels:
https://secure.iainkay.net/images/mailing_lists/xen-users/netperf-graph.png

For those of you who would like the raw output from the tests, along with /proc/cpuinfo and /proc/meminfo, here's a link to that:
https://secure.iainkay.net/images/mailing_lists/xen-users/netperf-workings.txt

I think it's interesting to observe that the 2.6.39 kernel performs the best of all without Xen, but when the xen-4.1.gz hypervisor is booted with 2.6.39 loaded as its dom0 module, performance plummets.

I'm pretty knackered for tonight, but I will see what I can do tomorrow and whether I can tune anything to get the performance up.

Iain

On 25 May 2011, at 11:55, Carl Byström wrote:
> I've tried on EC2 using a 2.6.38 kernel (Ubuntu 11.04 Natty 32-bit, AMI
> ami-e2af508b); same results there. Someone also submitted a report from
> Linode with kernel 2.6.38-x86_64.
> [...]
I should probably have stated in my previous post that I performed this testing on Domain-0, hoping to achieve the best performance. I will experiment inside VMs tomorrow as well.

Iain

On 26 May 2011, at 00:09, Iain Kay wrote:
> I've done some testing with Xen 4.1.0 and a vanilla 2.6.39 kernel. I also
> tested on the CentOS 5.6 stock kernel and on 2.6.39 without booting
> xen-4.1.gz.
>
> The results speak for themselves. I can definitely confirm the issue you
> are reporting on the current Xen stable release (4.1.0) with the latest
> vanilla 2.6.39 kernel.
> [...]
> From: Russ Purinton
> Sent: Wednesday, May 25, 2011 11:49 PM
>
> Also, if it doesn't pass through the NIC, then the question arises how
> the IP and TCP checksums are being applied if offloading is enabled.
>
> I had thought that on a physical host, the loopback ping would hit the
> network card.

The loopback driver simply queues the tx packet back onto the rx queue; it has nothing to do with the underlying NIC.

Thanks
Kevin
On Thu, May 26, 2011 at 1:09 AM, Iain Kay <spam@iainkay.com> wrote:
> I've done some testing with Xen 4.1.0 and a vanilla 2.6.39 kernel. I also
> tested on the CentOS 5.6 stock kernel and on 2.6.39 without booting
> xen-4.1.gz.
>
> The results speak for themselves. I can definitely confirm the issue you
> are reporting on the current Xen stable release (4.1.0) with the latest
> vanilla 2.6.39 kernel.

Great work Iain, thanks! That isolates the problem at least; now someone just needs to figure out whether there's an easy way to improve this in Xen or whether it requires major changes.

Can I post your results on Server Fault (where the original question was posted)? I presume a lot of people there are also interested.

--
Carl Byström
http://cgbystrom.com
Carl,

Please do; my results are there to be shared, and feel free to upload and share the graph as you wish too.

I'm going to mess around today and see how things go inside a VM. I thought it might also be of interest to see how the test goes when run on two VMs at once, and then on two VMs plus dom0 at once.

Iain

On 26 May 2011, at 09:44, Carl Byström wrote:
> Can I post your results on Server Fault (where the original question was
> posted)? I presume a lot of people there are also interested.
> [...]
Could there be a difference in loopback behavior between Linux and Windows? I have noticed that TCP connections are established by Apache in about 0.6 ms, while IIS takes about 75 ms to accept them most of the time.

-----Original Message-----
From: Tian, Kevin
Sent: Wednesday, May 25, 2011 10:55 PM

> The loopback driver simply queues the tx packet back onto the rx queue;
> it has nothing to do with the underlying NIC.
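As an aside, one quick way to measure just the connection-establishment time being compared above is curl's timing variables; the URL below is a placeholder:

    # prints how long the TCP handshake took, separately from the total
    # transfer time, for a single request against the server under test
    curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' http://example.com/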