Displaying results from an estimated 300 matches similar to: "[net-next RFC PATCH 0/5] Series short description"
2012 Jun 25
8
[net-next RFC V4 PATCH 0/4] Multiqueue virtio-net
Hello All:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work, which lets virtio-net use multiple rx/tx queues for
packet reception and transmission. Please review and comment.
Test Environment:
- Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes
- Two directly connected 82599s
Test Summary:
- Highlights: huge improvements on TCP_RR
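At its core, multiqueue virtio-net steers each flow to one of several tx queues so that CPUs do not contend on a single queue. Below is a minimal sketch of that idea in C; select_txq() is an illustrative helper, not the driver's actual ndo_select_queue implementation.

/* Hypothetical per-flow tx queue selection for a multiqueue driver.
 * select_txq() is an illustrative name, not the patch's code. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static u16 select_txq(struct net_device *dev, struct sk_buff *skb)
{
        /* Hashing the flow keeps all packets of one connection on one
         * queue (preserving ordering) while spreading flows across
         * queues. */
        u32 hash = skb_get_hash(skb);

        return (u16)(hash % dev->real_num_tx_queues);
}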
2016 Dec 31
1
[PATCH net-next V3 3/3] tun: rx batching
From: Jason Wang <jasowang at redhat.com>
Date: Fri, 30 Dec 2016 13:20:51 +0800
> @@ -1283,10 +1314,15 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
> skb_probe_transport_header(skb, 0);
>
> rxhash = skb_get_hash(skb);
> +
> #ifndef CONFIG_4KSTACKS
> - local_bh_disable();
> - netif_receive_skb(skb);
> - local_bh_enable();
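The hunk above removes the unconditional per-packet netif_receive_skb() call so that received skbs can be queued and flushed in bursts. A hedged sketch of that batched path, with illustrative names (rx_batched(), RX_BATCH_MAX) rather than the series' exact helpers:

/* Illustrative rx batching: queue skbs and hand them to the stack in
 * one burst, paying the bh-disable cost once per batch instead of
 * once per packet. */
#include <linux/skbuff.h>
#include <linux/netdevice.h>

#define RX_BATCH_MAX    64      /* illustrative flush threshold */

static void rx_batched(struct sk_buff_head *batch, struct sk_buff *skb,
                       bool more)
{
        __skb_queue_tail(batch, skb);

        /* Hold packets while the producer says more are coming
         * (MSG_MORE set) and the batch is not yet full. */
        if (more && skb_queue_len(batch) < RX_BATCH_MAX)
                return;

        local_bh_disable();
        while ((skb = __skb_dequeue(batch)) != NULL)
                netif_receive_skb(skb);
        local_bh_enable();
}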
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement with guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
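On the vhost side the hint is cheap to compute: set MSG_MORE whenever more tx descriptors are already pending in the virtqueue, and clear it otherwise. A hedged sketch of that producer logic; more_pending here stands in for the series' actual virtqueue emptiness check:

/* Illustrative producer side of MSG_MORE-based tx batching: vhost
 * tells the backend socket whether more packets will follow so it can
 * defer delivery. The more_pending flag is a hypothetical stand-in
 * for the series' real virtqueue check. */
#include <linux/net.h>
#include <linux/socket.h>

static int send_one(struct socket *sock, struct msghdr *msg, size_t len,
                    bool more_pending)
{
        msg->msg_flags &= ~MSG_MORE;
        if (more_pending)
                msg->msg_flags |= MSG_MORE;     /* backend may batch */

        return sock->ops->sendmsg(sock, msg, len);
}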
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all:
This series tries to batch submitting packets to the underlying socket
through msg_control during sendmsg(). This is done by:
1) Doing userspace copy inside vhost_net
2) Building XDP buffs
3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them once
through msg_control during sendmsg()
4) Underlying sockets can use the XDP buffs directly when XDP is enabled,
or build skbs based on XDP
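A hedged sketch of step 3) above: collect xdp_buff entries into an array and pass the whole array through msg_control in a single sendmsg(). The tun_msg_ctl layout and type value below are inferred from the description and may not match the series exactly:

/* Illustrative submission of a batch of XDP buffs via msg_control.
 * VHOST_NET_BATCH matches the cover letter; the ctl struct layout and
 * type value are assumptions. */
#include <linux/net.h>
#include <linux/socket.h>
#include <net/xdp.h>

#define VHOST_NET_BATCH 64

struct tun_msg_ctl {
        unsigned short type;    /* assumed tag: "array of XDP buffs" */
        unsigned short num;     /* entries in ptr[] */
        void *ptr;              /* struct xdp_buff array */
};

static int flush_xdp_batch(struct socket *sock, struct msghdr *msg,
                           struct xdp_buff *batch, unsigned short n)
{
        struct tun_msg_ctl ctl = {
                .type = 1,      /* assumed value */
                .num  = n,
                .ptr  = batch,
        };

        msg->msg_control = &ctl;
        msg->msg_controllen = sizeof(ctl);
        return sock->ops->sendmsg(sock, msg, 0);
}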
2018 May 21
20
[RFC PATCH net-next 00/12] XDP batching for TUN/vhost_net
Hi all:
We do not support XDP batching for TUN since it can only receive one
packet at a time from vhost_net. This series tries to remove this
limitation by:
- introduce a TUN specific msg_control that can hold a pointer to an
array of XDP buffs
- try copy and build XDP buff in vhost_net
- store XDP buffs in an array and submit them once for every N packets
from vhost_net
- since TUN can only
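Reusing the tun_msg_ctl layout sketched earlier, here is a hedged look at the TUN side: the point of receiving a whole array through msg_control is that per-burst costs are paid once. process_one_xdp() is a hypothetical stand-in for the real per-buff path:

/* Illustrative TUN-side consumption of a batched XDP array: take the
 * locks once per burst rather than once per packet. process_one_xdp()
 * is hypothetical. */
#include <net/xdp.h>
#include <linux/rcupdate.h>

static void consume_xdp_batch(struct tun_msg_ctl *ctl)
{
        struct xdp_buff *xdp = ctl->ptr;
        unsigned short i;

        local_bh_disable();
        rcu_read_lock();        /* e.g. to dereference the XDP program */
        for (i = 0; i < ctl->num; i++)
                process_one_xdp(&xdp[i]);       /* hypothetical helper */
        rcu_read_unlock();
        local_bh_enable();
}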
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement with guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all:
This series tries to batch submitting packets to the underlying socket
through msg_control during sendmsg(). This is done by:
1) Doing userspace copy inside vhost_net
2) Building XDP buffs
3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them once
through msg_control during sendmsg()
4) Underlying sockets can use the XDP buffs directly when XDP is enabled,
or build skbs based on XDP
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement with guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement with guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
2015 Jan 30
9
[PATCH v2 0/3] Restore UFO support to virtio_net devices
commit 3d0ad09412ffe00c9afa201d01effdb6023d09b4
Author: Ben Hutchings <ben at decadent.org.uk>
Date: Thu Oct 30 18:27:12 2014 +0000
drivers/net: Disable UFO through virtio
Turned off UFO support for virtio-net based devices due to issues
with IPv6 fragment id generation for UFO packets. The issue
was that the IPv6 UFO/GSO implementation expects the fragment id
to be supplied in
2014 Oct 27
3
[PATCH net 0/2] drivers/net,ipv6: Fix IPv6 fragment ID selection for virtio
The virtio net protocol supports UFO but does not provide for passing a
fragment ID for fragmentation of IPv6 packets. We used to generate a
fragment ID wherever such a packet was fragmented, but currently we
always use ID=0!
Ben.
Ben Hutchings (2):
drivers/net: Disable UFO through virtio
drivers/net,ipv6: Select IPv6 fragment idents for virtio UFO packets
drivers/net/macvtap.c | 16
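The second patch's approach is to have the host pick a fragment ID when a virtio UFO packet arrives without one. A minimal sketch of such a call site, assuming the ipv6_proxy_select_ident() helper the series adds (its exact signature has changed across kernel versions):

/* Hedged sketch: for an IPv6 UFO packet from a virtio guest, select a
 * fragment ID instead of letting it default to 0. The signature of
 * ipv6_proxy_select_ident() varies by kernel version. */
#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <net/ipv6.h>

static void fix_ufo_frag_id(struct sk_buff *skb)
{
        if ((skb_shinfo(skb)->gso_type & SKB_GSO_UDP) &&
            skb->protocol == htons(ETH_P_IPV6))
                ipv6_proxy_select_ident(skb);
}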