2015 Nov 12
2
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...polling was specified via ioctl.
>>
>> Tests were done with:
>>
>> - 50 us as busy loop timeout
>> - Netperf 2.6
>> - Two machines with back to back connected ixgbe
>> - Guest with 1 vcpu and 1 queue
>>
>> Results:
>> - For the stream workload, ioexits were reduced dramatically for medium
>> tx sizes (1024-2048, at most -39%) and almost all rx sizes (at most
>> -79%) as a result of polling. This more or less compensates for the
>> possibly wasted cpu cycles, which is probably why we can still see
>> some increase in the...
2015 Nov 12
5
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...x receive socket for a while. The maximum amount of
time (in us) that could be spent on busy polling was specified via an ioctl.
Tests were done with:
- 50 us as busy loop timeout
- Netperf 2.6
- Two machines with back to back connected ixgbe
- Guest with 1 vcpu and 1 queue
Results:
- For the stream workload, ioexits were reduced dramatically for medium
tx sizes (1024-2048, at most -39%) and almost all rx sizes (at most
-79%) as a result of polling. This more or less compensates for the
possibly wasted cpu cycles, which is probably why we can still see
some increase in the normalized throughput in some cases....
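The cover letter above configures the maximum busy-polling time from userspace through an ioctl on the vhost device. As a rough sketch of what that can look like, assuming the VHOST_SET_VRING_BUSYLOOP_TIMEOUT ioctl that the eventually merged version of this work exposes in <linux/vhost.h> (this RFC may have used a different interface, and the queue index below is only an example):

/* Sketch: set a 50 us busy-loop timeout on one vhost-net virtqueue.
 * Assumes the VHOST_SET_VRING_BUSYLOOP_TIMEOUT ioctl from the merged
 * series; the rest of the vhost-net setup is omitted. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	int fd = open("/dev/vhost-net", O_RDWR);
	if (fd < 0) {
		perror("open /dev/vhost-net");
		return 1;
	}

	/* A vhost device needs an owner before most other ioctls succeed. */
	if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0)
		perror("VHOST_SET_OWNER");

	struct vhost_vring_state state = {
		.index = 0,	/* virtqueue index; 0 is used here as an example */
		.num   = 50,	/* busy-loop timeout in us, as in the tests above */
	};

	if (ioctl(fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &state) < 0)
		perror("VHOST_SET_VRING_BUSYLOOP_TIMEOUT");

	return 0;
}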
2015 Nov 12
0
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...> time (in us) that could be spent on busy polling was specified via ioctl.
>
> Tests were done with:
>
> - 50 us as busy loop timeout
> - Netperf 2.6
> - Two machines with back to back connected ixgbe
> - Guest with 1 vcpu and 1 queue
>
> Results:
> - For the stream workload, ioexits were reduced dramatically for medium
> tx sizes (1024-2048, at most -39%) and almost all rx sizes (at most
> -79%) as a result of polling. This more or less compensates for the
> possibly wasted cpu cycles, which is probably why we can still see
> some increase in the normalized throu...
2016 Mar 04
6
[PATCH V4 0/3] basic busy polling support for vhost_net
...r latency (TCP_RR).
- Get an improvement or only a minor regression on most of the TX tests,
but see some regression at the 4096 size.
- Except for 8 sessions of 4096-size RX, have better or the same
performance.
- CPU utilization increased as expected.
TCP_RR:
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
1/ 1/ +8%/ -32%/ +8%/ +8%/ +7%
1/ 50/ +7%/ -19%/ +7%/ +7%/ +1%
1/ 100/ +5%/ -21%/ +5%/ +5%/ 0%
1/ 200/ +5%/ -21%/ +7%/ +7%/ +1%
64/ 1/ +11%/ -29%/ +11%/ +11%/ +10%
64/ 50/ +7%/ -19%/ +8%/ +8%/ +2%
64/...
2015 Nov 13
0
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...
>>> Tests were done with:
>>>
>>> - 50 us as busy loop timeout
>>> - Netperf 2.6
>>> - Two machines with back to back connected ixgbe
>>> - Guest with 1 vcpu and 1 queue
>>>
>>> Results:
>>> - For the stream workload, ioexits were reduced dramatically for medium
>>> tx sizes (1024-2048, at most -39%) and almost all rx sizes (at most
>>> -79%) as a result of polling. This more or less compensates for the
>>> possibly wasted cpu cycles, which is probably why we can still see
>>> some in...
2016 Mar 09
0
[PATCH V4 0/3] basic busy polling support for vhost_net
...or only a minor regression on most of the TX tests, but see
> some regression at the 4096 size.
> - Except for 8 sessions of 4096-size RX, have better or the same
> performance.
> - CPU utilization increased as expected.
>
> TCP_RR:
> size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
> 1/ 1/ +8%/ -32%/ +8%/ +8%/ +7%
> 1/ 50/ +7%/ -19%/ +7%/ +7%/ +1%
> 1/ 100/ +5%/ -21%/ +5%/ +5%/ 0%
> 1/ 200/ +5%/ -21%/ +7%/ +7%/ +1%
> 64/ 1/ +11%/ -29%/ +11%/ +11%/ +10%
> 64/ 50/ +7%/ -19%...
2015 Dec 01
5
[PATCH V2 0/3] basic busy polling support for vhost_net
...receive socket for a while. The maximum amount of
time (in us) that could be spent on busy polling was specified via an ioctl.
Test A was done with:
- 50 us as busy loop timeout
- Netperf 2.6
- Two machines with back to back connected ixgbe
- Guest with 1 vcpu and 1 queue
Results:
- For the stream workload, ioexits were reduced dramatically for medium
tx sizes (1024-2048, at most -43%) and almost all rx sizes (at most
-84%) as a result of polling. This more or less compensates for the
possibly wasted cpu cycles, which is probably why we can still see
some increase in the normalized throughput in some cases.
- Throu...
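On the vhost side, the idea described in these cover letters is to spin for up to the configured number of microseconds on an empty virtqueue or tx/rx socket before falling back to the usual notification/sleep path. The snippet below is not the vhost kernel code, only a hypothetical userspace analogue of that bounded busy-poll idea; busy_poll_recv and budget_us are made-up names for illustration.

/* Conceptual userspace analogue of bounded busy polling: spin on a
 * non-blocking receive for at most budget_us microseconds, then fall
 * back to a normal blocking wait. NOT the vhost implementation. */
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <time.h>

static long elapsed_us(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1000000L +
	       (now.tv_nsec - start->tv_nsec) / 1000L;
}

/* Returns bytes received, 0 on orderly shutdown, or -1 on error. */
ssize_t busy_poll_recv(int sock, void *buf, size_t len, long budget_us)
{
	struct timespec start;

	clock_gettime(CLOCK_MONOTONIC, &start);

	/* Busy loop: keep trying a non-blocking receive until data shows
	 * up or the time budget (e.g. 50 us, as in the tests) runs out. */
	while (elapsed_us(&start) < budget_us) {
		ssize_t n = recv(sock, buf, len, MSG_DONTWAIT);

		if (n >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
			return n;
	}

	/* Budget exhausted: sleep until the socket becomes readable. */
	struct pollfd pfd = { .fd = sock, .events = POLLIN };

	poll(&pfd, 1, -1);
	return recv(sock, buf, len, 0);
}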
2016 Feb 26
7
[PATCH V3 0/3] basic busy polling support for vhost_net
...cted mlx4
- Guest with 8 vcpus and 1 queue
Results:
- TCP_RR was improved noticeably (at most 27%), and cpu utilization was
also improved in this case.
- No obvious differences in Guest RX throughput.
- Guest TX throughput was also improved.
TCP_RR:
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
1/ 1/ +27%/ 0%/ +27%/ +27%/ +25%
1/ 50/ +2%/ +1%/ +2%/ +2%/ -4%
1/ 100/ +2%/ +1%/ +3%/ +3%/ -14%
1/ 200/ +2%/ +2%/ +5%/ +5%/ -15%
64/ 1/ +20%/ -13%/ +20%/ +20%/ +20%
64/ 50/ +17%/ +14%/ +16%/ +16%/ -11%
64/...