> > Hi Michael,
> > Sorry for the delay, had some problems with my mailbox, and I realized
> > just now that my reply wasn't sent.
> > The vm indeed ALWAYS utilized 100% cpu, whether polling was enabled or
> > not.
> > The vhost thread utilized less than 100% (of the other cpu) when polling
> > was disabled.
> > Enabling polling increased its utilization to 100% (in which case both
> > cpus were 100% utilized).
>
> Hmm this means the testing wasn't successful then, as you said:
>
>     The idea was to get it 100% loaded, so we can see that the polling is
>     getting it to produce higher throughput.
>
> In fact here you are producing more throughput but spending more power
> to produce it, which can have any number of explanations besides polling
> improving the efficiency. For example, increasing system load might
> disable host power management.

Hi Michael,
I re-ran the tests, this time with the "turbo mode" and "C-states"
features off.

No Polling:
1 VM running netperf (msg size 64B): 1107 Mbits/sec

Polling:
1 VM running netperf (msg size 64B): 1572 Mbits/sec

As you can see from the new results, the numbers are lower, but
relatively (polling on/off) there is no change.

Thank you,
Razya

> --
> MST
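A side note on why enabling polling drives the vhost thread to 100% cpu:
with polling on, the worker spins checking for new virtqueue work instead
of sleeping until the guest kicks it. Below is a toy userspace sketch of
the two behaviours, purely for illustration; it is not the vhost code, and
handle_work() is just a placeholder.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <unistd.h>

    /* Toy sketch, not the vhost patch itself: it only shows why the
     * vhost thread accounts for a full cpu once polling is enabled. */

    static atomic_bool work_pending;

    static void handle_work(void)
    {
            /* stand-in for draining the tx/rx virtqueues */
    }

    /* Polling mode: spin checking for work.  The loop never yields, so
     * the thread always shows 100% cpu, but the guest no longer has to
     * notify (and exit) for every batch of buffers it posts. */
    static void worker_polling(void)
    {
            for (;;)
                    if (atomic_exchange(&work_pending, false))
                            handle_work();
    }

    /* Notification mode: yield the cpu while there is nothing to do
     * (real vhost blocks on the guest's kick eventfd rather than
     * sleeping in a loop).  Yielding keeps utilization below 100%. */
    static void worker_notified(void)
    {
            for (;;) {
                    while (!atomic_exchange(&work_pending, false))
                            usleep(100);
                    handle_work();
            }
    }

    int main(int argc, char **argv)
    {
            /* run with any argument to get the polling variant */
            if (argc > 1)
                    worker_polling();
            else
                    worker_notified();
            return 0;
    }

Running the polling variant and watching it in top shows a core pegged at
100% even though there is never any work to do, which is essentially what
the vhost utilization numbers in this thread reflect.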
On Sun, Aug 17, 2014 at 03:35:39PM +0300, Razya Ladelsky wrote:
> > > Hi Michael,
> > > Sorry for the delay, had some problems with my mailbox, and I realized
> > > just now that my reply wasn't sent.
> > > The vm indeed ALWAYS utilized 100% cpu, whether polling was enabled or
> > > not.
> > > The vhost thread utilized less than 100% (of the other cpu) when
> > > polling was disabled.
> > > Enabling polling increased its utilization to 100% (in which case both
> > > cpus were 100% utilized).
> >
> > Hmm this means the testing wasn't successful then, as you said:
> >
> >     The idea was to get it 100% loaded, so we can see that the polling is
> >     getting it to produce higher throughput.
> >
> > In fact here you are producing more throughput but spending more power
> > to produce it, which can have any number of explanations besides polling
> > improving the efficiency. For example, increasing system load might
> > disable host power management.
>
> Hi Michael,
> I re-ran the tests, this time with the "turbo mode" and "C-states"
> features off.
>
> No Polling:
> 1 VM running netperf (msg size 64B): 1107 Mbits/sec
>
> Polling:
> 1 VM running netperf (msg size 64B): 1572 Mbits/sec
>
> As you can see from the new results, the numbers are lower, but
> relatively (polling on/off) there is no change.
>
> Thank you,
> Razya

That was just one example. There are many other possibilities. Either
actually make the systems load all host CPUs equally, or divide
throughput by host CPU.

> > --
> > MST
> That was just one example. There are many other possibilities. Either
> actually make the systems load all host CPUs equally, or divide
> throughput by host CPU.

The polling patch adds this capability to vhost, reducing costly exit
overhead when the vm is loaded.

In order to load the vm I ran netperf with a msg size of 256B:

Without polling: 2480 Mbits/sec, utilization: vm - 100%, vhost - 64%
With polling:    4160 Mbits/sec, utilization: vm - 100%, vhost - 100%

Therefore, throughput per unit of cpu (vm% + vhost%) is 15.1 without
polling and 20.8 with polling.

My intention was to load vhost as close as possible to 100% utilization
without polling, in order to compare it to the polling case (where vhost
is always at 100%).

The best use case, of course, will be when the shared vhost thread work
(TBD) is integrated; then vhost will actually be using its polling cycles
to handle requests of multiple devices (even from multiple vms).

Thanks,
Razya
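For anyone who wants to reproduce the arithmetic, here is a small check of
the figures above. The assumption that the divisor is simply vm% + vhost%
added together is inferred from the numbers rather than stated explicitly
in the thread.

    #include <stdio.h>

    /* Back-of-the-envelope check of the throughput/cpu numbers quoted
     * above; assumes the divisor is vm% + vhost% (an inference, not
     * something the thread spells out). */
    int main(void)
    {
            double no_poll = 2480.0 / (100 + 64);   /* ~15.1 Mbits/sec per cpu-% */
            double poll    = 4160.0 / (100 + 100);  /* ~20.8 Mbits/sec per cpu-% */

            printf("no polling: %.1f, polling: %.1f\n", no_poll, poll);
            printf("relative improvement: %.0f%%\n", (poll / no_poll - 1) * 100);
            return 0;
    }

With those assumptions the program reproduces the 15.1 and 20.8 figures,
i.e. roughly a 38% gain in throughput per unit of host cpu with polling.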