"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM:> > Results: > > > > Netperf, 1 vm: > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046MB/sec).> > Number of exits/sec decreased 6x. > > The same improvement was shown when I tested with 3 vms runningnetperf> > (4086 MB/sec -> 5545 MB/sec). > > > > filebench, 1 vm: > > ops/sec improved by 13% with the polling patch. Number of exits > was reduced by > > 31%. > > The same experiment with 3 vms running filebench showed similarnumbers.> > > > Signed-off-by: Razya Ladelsky <razya at il.ibm.com> > > This really needs more thourough benchmarking report, including > system data. One good example for a related patch: > http://lwn.net/Articles/551179/ > though for virtualization, we need data about host as well, and if you > want to look at streaming benchmarks, you need to test different message > sizes and measure packet size. >Hi Michael, I have already tried running netperf with several message sizes: 64,128,256,512,600,800... But the results are inconsistent even in the baseline/unpatched configuration. For smaller msg sizes, I get consistent numbers. However, at some point, when I increase the msg size I get unstable results. For example, for a 512B msg, I get two scenarios: vm utilization 100%, vhost utilization 75%, throughput ~6300 vm utilization 80%, vhost utilization 13%, throughput ~9400 (line rate) I don't know why vhost is behaving that way for certain message sizes. Do you have any insight to why this is happening? Thank you, Razya
From: Razya Ladelsky
> Hi Michael,
> I have already tried running netperf with several message sizes:
> 64, 128, 256, 512, 600, 800...
> But the results are inconsistent even in the baseline/unpatched configuration.
> For smaller msg sizes I get consistent numbers; however, at some point,
> as I increase the msg size, the results become unstable.
> For example, for a 512B msg I see two scenarios:
> vm utilization 100%, vhost utilization 75%, throughput ~6300
> vm utilization 80%, vhost utilization 13%, throughput ~9400 (line rate)
>
> I don't know why vhost behaves this way for certain message sizes.
> Do you have any insight into why this is happening?

Have you tried looking at the actual ethernet packet sizes?
They may well jump between small packets (the size of the writes)
and full-sized ones.

If you are trying to measure ethernet packet 'cost', you need to use UDP.
However, that probably uses different code paths.

	David
David Laight <David.Laight at ACULAB.COM> wrote on 21/08/2014 05:29:41 PM:

> Subject: RE: [PATCH] vhost: Add polling mode
>
> From: Razya Ladelsky
> > I don't know why vhost behaves this way for certain message sizes.
> > Do you have any insight into why this is happening?
>
> Have you tried looking at the actual ethernet packet sizes?
> They may well jump between small packets (the size of the writes)
> and full-sized ones.

I will check it,
Thanks,
Razya

> If you are trying to measure ethernet packet 'cost', you need to use UDP.
> However, that probably uses different code paths.
>
> David
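(One rough way to check what David suggests, sketched under the assumption
of a Linux host exposing the usual /sys/class/net statistics; "eth0" is a
placeholder for the interface carrying the benchmark traffic, and a packet
capture would give the full size distribution rather than just an average:)

    #!/usr/bin/env python
    # Sample the transmit counters while the benchmark is running and report
    # the average transmitted packet size over the interval.  Note that with
    # TSO/GSO enabled the counters may reflect pre-segmentation packets, so
    # the numbers are only indicative of what is actually on the wire.
    import time

    IFACE = "eth0"      # placeholder interface name
    INTERVAL = 5        # seconds to sample

    def read_stat(name):
        with open("/sys/class/net/%s/statistics/%s" % (IFACE, name)) as f:
            return int(f.read())

    b0, p0 = read_stat("tx_bytes"), read_stat("tx_packets")
    time.sleep(INTERVAL)
    b1, p1 = read_stat("tx_bytes"), read_stat("tx_packets")

    pkts = p1 - p0
    if pkts:
        print("tx packets: %d, avg size: %.1f bytes"
              % (pkts, float(b1 - b0) / pkts))
    else:
        print("no packets transmitted during the sample interval")

For the per-packet 'cost' angle, the same netperf sweep with "-t UDP_STREAM"
keeps message size and packet size aligned as long as the message fits in one
MTU, though, as David notes, that exercises different code paths.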