Displaying 20 results from an estimated 50000 matches similar to: "Zerocopy VM-to-VM networking using virtio-net"
2015 Apr 22 · 1 · Zerocopy VM-to-VM networking using virtio-net
On Wed, Apr 22, 2015 at 6:46 PM, Cornelia Huck <cornelia.huck at de.ibm.com> wrote:
> On Wed, 22 Apr 2015 18:01:38 +0100
> Stefan Hajnoczi <stefanha at redhat.com> wrote:
>
>> [It may be necessary to remove virtio-dev at lists.oasis-open.org from CC
>> if you are a non-TC member.]
>>
>> Hi,
>> Some modern networking applications bypass the kernel
2015 Apr 22 · 0 · Zerocopy VM-to-VM networking using virtio-net
On Wed, 22 Apr 2015 18:01:38 +0100
Stefan Hajnoczi <stefanha at redhat.com> wrote:
> [It may be necessary to remove virtio-dev at lists.oasis-open.org from CC
> if you are a non-TC member.]
>
> Hi,
> Some modern networking applications bypass the kernel network stack so
> that rx/tx rings and DMA buffers can be directly mapped. This is
> typical in DPDK applications
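As a purely illustrative aside (not part of the thread): the excerpt above describes applications that map rx/tx rings and packet buffers directly, as DPDK does, so that no per-packet system call or copy is needed. The toy C program below sketches that idea under invented names (ring_desc, RING_SIZE, an anonymous mapping standing in for shared guest memory); it is not DPDK, vhost, or virtio code.

    /* Toy "mapped rx ring": a memory region holds a descriptor ring plus
     * packet buffers, and the consumer polls it in place -- no per-packet
     * syscall, no copy into a socket buffer.  All names are made up. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define RING_SIZE 256                 /* descriptors in the ring */
    #define BUF_SIZE  2048                /* bytes per packet buffer */

    struct ring_desc {
        uint32_t len;                     /* packet length in the buffer    */
        uint32_t ready;                   /* set by producer when len valid */
    };

    struct ring {
        struct ring_desc desc[RING_SIZE];
        uint8_t buf[RING_SIZE][BUF_SIZE];
    };

    int main(void)
    {
        /* In a real setup this would be shared memory (or guest RAM mapped
         * by a vhost-user backend); a private mapping keeps the toy
         * self-contained. */
        struct ring *r = mmap(NULL, sizeof(*r), PROT_READ | PROT_WRITE,
                              MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (r == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Producer side (peer VM / NIC / switch): publish one packet. */
        const char payload[] = "hello";
        memcpy(r->buf[0], payload, sizeof(payload));
        r->desc[0].len = sizeof(payload);
        __atomic_store_n(&r->desc[0].ready, 1, __ATOMIC_RELEASE);

        /* Consumer side: poll the ring and read the packet where it lies. */
        for (unsigned i = 0; i < RING_SIZE; i++) {
            if (__atomic_load_n(&r->desc[i].ready, __ATOMIC_ACQUIRE)) {
                printf("slot %u: %u byte packet: %s\n", i,
                       (unsigned)r->desc[i].len, (char *)r->buf[i]);
                r->desc[i].ready = 0;     /* hand the slot back */
            }
        }

        munmap(r, sizeof(*r));
        return 0;
    }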
2015 Apr 27 · 5 · [virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote:
> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote:
>>
>> The motivation for making VM-to-VM fast is that while software
>> switches on the host are efficient today (thanks to vhost-user), there
>> is no efficient solution if the software switch is a VM.
>
>
2014 Mar 07 · 5 · [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs
exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too aggressive in some cases, since
any delay or blocking of a single packet may delay or block the guest
transmission. Consider the following setup:
+-----+ +-----+
| VM1 | | VM2 |
+--+--+
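(The ASCII setup diagram is cut off by the excerpt.) The policy change the subject line describes is: instead of stopping tx handling outright once outstanding zerocopy DMAs reach VHOST_MAX_PEND, keep transmitting and fall back to an ordinary data copy for further packets, so one slow completion cannot stall the guest. A minimal sketch of that decision follows, using invented names (pending_dmas, MAX_PEND, send_zerocopy, send_copy) rather than the real vhost-net state and helpers; it is not the actual patch.

    /* Illustrative decision logic only -- not the vhost-net patch itself. */
    #include <stdio.h>

    #define MAX_PEND 128                 /* cap on outstanding zerocopy DMAs */

    static unsigned pending_dmas;        /* DMAs submitted but not completed */

    static void send_zerocopy(int pkt)
    {
        pending_dmas++;                  /* completion would decrement this */
        printf("pkt %d: zerocopy\n", pkt);
    }

    static void send_copy(int pkt)
    {
        printf("pkt %d: copied\n", pkt);
    }

    static void handle_tx(int pkt)
    {
        /* Old behaviour per the description above: stop handling tx when
         * the limit is hit.  New behaviour: switch to a data copy instead,
         * so transmission never blocks on pending completions. */
        if (pending_dmas < MAX_PEND)
            send_zerocopy(pkt);
        else
            send_copy(pkt);
    }

    int main(void)
    {
        for (int pkt = 0; pkt < 200; pkt++)
            handle_tx(pkt);              /* packets past the cap get copied */
        return 0;
    }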
2014 Feb 25 · 2 · [PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs
exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too aggressive in some cases, since
any delay or blocking of a single packet may delay or block the guest
transmission. Consider the following setup:
+-----+ +-----+
| VM1 | | VM2 |
+--+--+
2014 Feb 26 · 2 · [PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
>> We used to stop the handling of tx when the number of pending DMAs
>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>> of both host and guest. But it was too aggressive in some cases, since
>> any delay or blocking of a single packet
2014 Feb 26 · 2 · [PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
> On 2014/2/26 13:53, Jason Wang wrote:
>> On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
>>> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
>>>> We used to stop the handling of tx when the number of pending DMAs
>>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
2014 Feb 27 · 1 · [PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/26/2014 05:23 PM, Michael S. Tsirkin wrote:
> On Wed, Feb 26, 2014 at 03:11:21PM +0800, Jason Wang wrote:
>> On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
>>> On 2014/2/26 13:53, Jason Wang wrote:
>>>> On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
>>>>> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason
2015 Apr 27 · 4 · [virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On Mon, Apr 27, 2015 at 1:55 PM, Jan Kiszka <jan.kiszka at siemens.com> wrote:
> On 2015-04-27 at 14:35, Jan Kiszka wrote:
>> On 2015-04-27 at 12:17, Stefan Hajnoczi wrote:
>>> On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote:
>>>> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote:
>>>>>
2014 Mar 13 · 3 · [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>> We used to stop the handling of tx when the number of pending DMAs
>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>> of both host and guest. But it was too aggressive in some cases, since
>> any delay or blocking
2019 Apr 25 · 2 · [PATCH net] vhost_net: fix possible infinite loop
When the rx buffer is too small for a packet, we will discard the vq
descriptor and retry it for the next packet:
        while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
                                                      &busyloop_intr))) {
                ...
                /* On overrun, truncate and discard */
                if (unlikely(headcount > UIO_MAXIOV)) {
                        iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
                        err = sock->ops->recvmsg(sock,
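(The excerpt is cut off mid-call.) The failure mode described above is that, when every available rx buffer is too small, the overrun branch discards the packet and retries the next one without ever making progress, so the loop can spin indefinitely. The self-contained toy below illustrates the hazard and one generic guard (an iteration quota); names such as peek_len, ring_buf_size and MAX_ITERATIONS are invented, and this is not the actual vhost_net fix.

    /* Toy model of the retry loop -- not vhost code. */
    #include <stdio.h>

    #define MAX_ITERATIONS 64            /* invented quota for this sketch */

    static int peek_len(void)      { return 4096; }  /* next packet length */
    static int ring_buf_size(void) { return 1500; }  /* largest rx buffer  */

    int main(void)
    {
        int iterations = 0;
        int sock_len;

        while ((sock_len = peek_len()) > 0) {
            if (sock_len > ring_buf_size()) {
                /* "On overrun, truncate and discard", then retry the next
                 * packet.  Every packet here overruns, so without a bound
                 * this loop would never terminate. */
                printf("discarding %d byte packet\n", sock_len);
            } else {
                printf("delivered %d byte packet\n", sock_len);
            }
            if (++iterations >= MAX_ITERATIONS) {
                puts("quota reached, yielding");    /* bound the loop */
                break;
            }
        }
        return 0;
    }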