During performance testing we found that increasing txqueuelen can
increase throughput significantly. The following table shows our
results:

txQlen      netperf message size
          512 byte     4096 byte
-----------------------------------
  32       1634.13      8402.32
  64       1292.05     14198.48
 128       4142.58     14677.39
 256       4439.77     14626.80
 512       5251.48     14809.59
1024       4875.96     15358.55

Based on these results, wouldn't it be a good idea to change the
default txqueuelen? Physical devices use a txqueuelen of 1000.

Regards,
Mirek

--
Miroslav Rezanina
Software Engineer - Virtualization Team - XEN kernel
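For reference, a device's default txqueuelen is whatever the driver
assigns to dev->tx_queue_len when it sets the device up. Below is a
minimal sketch of how a vif driver could expose that default as a
module parameter instead of hard-coding it; the names vif_queue_length
and vif_setup are hypothetical, not the actual netback symbols:

    #include <linux/module.h>
    #include <linux/netdevice.h>
    #include <linux/etherdevice.h>

    /* Hypothetical tunable; 1000 matches the physical-NIC default. */
    static unsigned long vif_queue_length = 1000;
    module_param_named(queue_length, vif_queue_length, ulong, 0444);

    static void vif_setup(struct net_device *dev)
    {
            ether_setup(dev);
            dev->tx_queue_len = vif_queue_length;
    }

The value can also be changed at runtime from dom0 with the usual
tools (e.g. "ip link set dev <vif> txqueuelen 1000"), which is
presumably how the numbers above were gathered.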
The expectation was that domU would push enough receive buffers to
dom0 to avoid packet loss; the txqueuelen is just a fallback for that.
Still, yeah, it could be increased if it improves performance given
default domU netfront behaviour.

 -- Keir

On 22/10/2010 10:35, "Miroslav Rezanina" <mrezanin@redhat.com> wrote:

> During performance testing we found that increasing txqueuelen can
> increase throughput significantly. The following table shows our
> results:
>
> [...]
>
> Based on these results, wouldn't it be a good idea to change the
> default txqueuelen? Physical devices use a txqueuelen of 1000.
>
> Regards,
> Mirek
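The fallback works roughly like this: when the shared ring has no free
slots the driver stops its queue, and further packets back up in the
qdisc, which holds at most dev->tx_queue_len of them before dropping.
A rough sketch of that transmit path, not actual netback code (struct
vif, ring_has_free_slots and queue_on_ring are placeholders):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    struct vif { int free_slots; };                 /* placeholder */

    static bool ring_has_free_slots(struct vif *v)  /* placeholder */
    {
            return v->free_slots > 0;
    }

    static void queue_on_ring(struct vif *v, struct sk_buff *skb)
    {
            /* Real code would put the skb on the shared ring. */
            v->free_slots--;
    }

    static netdev_tx_t vif_start_xmit(struct sk_buff *skb,
                                      struct net_device *dev)
    {
            struct vif *vif = netdev_priv(dev);

            if (!ring_has_free_slots(vif)) {
                    /* Ring exhausted: fall back to the qdisc queue,
                     * which buffers up to dev->tx_queue_len packets
                     * before dropping. */
                    netif_stop_queue(dev);
                    return NETDEV_TX_BUSY;
            }

            queue_on_ring(vif, skb);
            return NETDEV_TX_OK;
    }

So a larger txqueuelen only adds slack once the ring itself is full;
it does not make the ring any bigger.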
> The expectation was that domU would push enough receive buffers to
> dom0 to avoid packet loss. The txqueuelen is just a fallback for
> that. Still, yeah, it could be increased if it improves perf given
> default domU netfront behaviour.

I have found the ring a bit small when trying to cope with many small
buffers, but it's workload- and system-dependent, so it should
probably be set on a case-by-case basis.

Are there any disadvantages to increasing the txqueue?

Whatever happened to the new netchannel stuff? Did that promise larger
rings?

James
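The ring is small because it lives in a single shared page, so its
slot count is fixed at build time by the page size (256 tx slots on a
4 KiB page, if memory serves). Roughly how netfront derives it from
the generic ring macros (header paths as in the Linux tree; other
trees may differ):

    #include <xen/interface/io/netif.h>  /* struct netif_tx_sring */
    #include <xen/interface/io/ring.h>   /* __RING_SIZE() */

    /* One page of entries; the macro rounds the count down to a
     * power of two. */
    #define NET_TX_RING_SIZE \
            __RING_SIZE((struct netif_tx_sring *)0, PAGE_SIZE)

Growing the ring itself would mean multi-page rings, i.e. a protocol
change, whereas txqueuelen is just a dom0-side knob.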
I was wondering about the short queues the other day. They definitely
cause problems with short-packet workloads, and the suspicion is that
the shortness is largely historical.

I plan to work on a new netback receive side (moving the grant copy
into the guest) shortly, but I have some other stuff to get through
before I can make a proper start. Hope to have something in a few
weeks though.

  Paul

> -----Original Message-----
> From: xen-devel-bounces@lists.xensource.com [mailto:xen-devel-
> bounces@lists.xensource.com] On Behalf Of James Harper
> Sent: 23 October 2010 00:26
> To: Keir Fraser; Miroslav Rezanina; xen-devel
> Subject: RE: [Xen-devel] Increase txqueuelen of vif devices
>
> [...]