similar to: [RFC V3 PATCH 0/8] Packed ring for vhost


2018 May 29
9
[RFC V5 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V5 at https://lkml.org/lkml/2018/5/22/138. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in TX PPS when doing pktgen from guest to host. No obvious improvement in RX PPS. We can do lots of optimizations on top, but for simple
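For context on the layout these series implement, below is a minimal, self-contained C sketch of a virtio 1.1 packed-ring descriptor and the wrap-counter availability test a backend such as vhost performs. The field layout and flag bits follow the virtio spec; the struct and helper names are local to this example and are not taken from the patches.

/* Illustrative only: packed-ring descriptor layout and the availability
 * check a consumer performs.  Memory barriers and the handling of the
 * NEXT/WRITE descriptor flags are omitted for brevity. */
#include <stdint.h>
#include <stdbool.h>

#define DESC_F_AVAIL  (1u << 7)   /* toggled by the driver on each ring wrap */
#define DESC_F_USED   (1u << 15)  /* toggled by the device on each ring wrap */

struct packed_desc {
        uint64_t addr;    /* guest physical address of the buffer */
        uint32_t len;     /* buffer length */
        uint16_t id;      /* buffer id, echoed back when marked used */
        uint16_t flags;   /* AVAIL/USED wrap bits plus descriptor flags */
};

/* A descriptor at the device's current position is available when its
 * AVAIL bit matches the device's wrap counter and its USED bit does not. */
static bool desc_is_avail(const struct packed_desc *d, bool wrap_counter)
{
        bool avail = d->flags & DESC_F_AVAIL;
        bool used  = d->flags & DESC_F_USED;

        return avail == wrap_counter && used != wrap_counter;
}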
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in PPS (event index is off). More testing is ongoing. Notes for testers: - Starting from this version, vhost needs QEMU cooperation to work
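Since event index comes up repeatedly in these cover letters, here is the classic event-index test in a self-contained form, mirroring vring_need_event() from the virtio spec. It is included only to illustrate what the feature suppresses, not as code from the series.

/* Notify the other side only if new_idx has moved past event_idx since
 * old_idx; all arithmetic is modulo 2^16. */
#include <stdint.h>
#include <stdbool.h>

static bool need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
{
        return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
}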
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with the pmd implementation by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost and guest. Testpmd (rxonly) in the guest reports 2.4 Mpps. Testpmd (txonly) reports about 2.1 Mpps. Notes: The event
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test virtio-net pmd as well when
2019 Jul 17
17
[PATCH V3 00/15] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues, which were described at [1]. In this version we try to address the performance regression seen with V2. The root cause is that packed virtqueues need more userspace memory accesses, which turn out to be very expensive. Thanks to the help of 7f466032dc9e ("vhost: access vq metadata through kernel virtual address"), such overhead could be
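A conceptual sketch of the optimization referenced above, not the actual code from 7f466032dc9e: translate a metadata address once, then read it through a kernel pointer instead of paying a uaccess call per descriptor. It assumes a roughly 5.x-era GUP API, assumes the metadata fits in a single page, and omits unmapping, page release, and the MMU-notifier invalidation the real patch has to handle.

#include <linux/mm.h>
#include <linux/highmem.h>

struct vq_meta_map {
        struct page *page;
        void *kva;              /* kernel virtual address of the mapping */
};

/* One-time translation of a userspace metadata address. */
static int meta_map(struct vq_meta_map *m, unsigned long uaddr)
{
        int r = get_user_pages_fast(uaddr & PAGE_MASK, 1, FOLL_WRITE, &m->page);

        if (r != 1)
                return r < 0 ? r : -EFAULT;
        m->kva = kmap(m->page) + (uaddr & ~PAGE_MASK);
        return 0;
}

/* Fast path: a plain load instead of get_user(), avoiding the per-access
 * uaccess overhead on every descriptor fetch. */
static u16 meta_read16(struct vq_meta_map *m, unsigned int off)
{
        return READ_ONCE(*(u16 *)(m->kva + off));
}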
2010 Apr 28
6
[PATCHv7] add mergeable buffers support to vhost_net
This patch adds mergeable receive buffer support to vhost_net. Signed-off-by: David L Stevens <dlstevens at us.ibm.com> diff -ruNp net-next-v0/drivers/vhost/net.c net-next-v7/drivers/vhost/net.c --- net-next-v0/drivers/vhost/net.c 2010-04-24 21:36:54.000000000 -0700 +++ net-next-v7/drivers/vhost/net.c 2010-04-28 12:26:18.000000000 -0700 @@ -74,6 +74,23 @@ static int move_iovec_hdr(struct
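For readers new to the feature: the sketch below shows what mergeable receive buffers mean on the wire. The virtio-net header gains a num_buffers field, and the device records how many rx buffers one packet was scattered across. The structs mirror virtio_net_hdr / virtio_net_hdr_mrg_rxbuf from the virtio-net spec; the helper is purely illustrative and not taken from the patch.

#include <stdint.h>

struct virtio_net_hdr {
        uint8_t  flags;
        uint8_t  gso_type;
        uint16_t hdr_len;
        uint16_t gso_size;
        uint16_t csum_start;
        uint16_t csum_offset;
};

/* With VIRTIO_NET_F_MRG_RXBUF negotiated, the header grows by one field. */
struct virtio_net_hdr_mrg_rxbuf {
        struct virtio_net_hdr hdr;
        uint16_t num_buffers;   /* how many rx buffers this packet spans */
};

/* Device side, conceptually: after copying a packet into n guest buffers,
 * record the count in the header at the start of the first buffer.
 * (On the wire the field is little-endian; conversion is omitted here.) */
static void set_num_buffers(struct virtio_net_hdr_mrg_rxbuf *hdr0, uint16_t n)
{
        hdr0->num_buffers = n;
}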
2020 Jun 02
21
[PATCH RFC 00/13] vhost: format independence
We let the specifics of the ring format seep through to vhost API callers - mostly because there was only one format, so it was hard to imagine what an independent API would look like. Now that there's an alternative in the form of the packed ring, it's easier to see the issues, and fixing them is perhaps the cleanest way to add support for more formats. This patchset does this by introducing
2020 Jun 07
17
[PATCH RFC v5 00/13] vhost: ring format independence
This adds the infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and convert that to an iov in a later processing step. The used ring is similar: we fetch into an independent struct first and convert that to an IOV later. The point is that we have a tight loop that fetches descriptors, which is good for cache utilization. This will
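The two-stage idea described above can be sketched as follows; the struct and function names are hypothetical, not the ones used in the patchset. Stage 1 (not shown) walks the split or packed ring and fills a flat array of format-independent records; stage 2 turns those records into an iovec without knowing which ring format produced them.

#include <stdint.h>
#include <stddef.h>
#include <sys/uio.h>

struct desc_rec {               /* ring-format-independent descriptor record */
        uint64_t addr;          /* guest address */
        uint32_t len;
        uint16_t id;
        uint16_t writable;      /* device-writable buffer? */
};

/* Stage 2: format-independent conversion to an iovec.  translate() stands
 * in for whatever maps guest addresses to backend pointers. */
static int recs_to_iov(const struct desc_rec *recs, size_t n,
                       struct iovec *iov, size_t iov_max,
                       void *(*translate)(uint64_t addr, uint32_t len))
{
        size_t i;

        if (n > iov_max)
                return -1;
        for (i = 0; i < n; i++) {
                void *p = translate(recs[i].addr, recs[i].len);

                if (!p)
                        return -1;
                iov[i].iov_base = p;
                iov[i].iov_len  = recs[i].len;
        }
        return (int)n;
}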
2010 Apr 06
1
[PATCH v3] Add Mergeable receive buffer support to vhost_net
This patch adds support for the Mergeable Receive Buffers feature to vhost_net. +-DLS Changes from the previous revision: 1) renamed: vhost_discard_vq_desc -> vhost_discard_desc, vhost_get_heads -> vhost_get_desc_n, vhost_get_vq_desc -> vhost_get_desc 2) added heads as an argument to vhost_get_desc_n 3) changed "vq->heads" from iovec to vring_used_elem, removed casts 4)
2010 Apr 26
1
[PATCH v6] Add mergeable rx buffer support to vhost_net
This patch adds mergeable receive buffer support to vhost_net. +-DLS Signed-off-by: David L Stevens <dlstevens at us.ibm.com> diff -ruNp net-next-v0/drivers/vhost/net.c net-next-v6/drivers/vhost/net.c --- net-next-v0/drivers/vhost/net.c 2010-04-24 21:36:54.000000000 -0700 +++ net-next-v6/drivers/vhost/net.c 2010-04-26 01:13:04.000000000 -0700 @@ -109,7 +109,7 @@ static void
2020 Jun 10
18
[PATCH RFC v7 00/14] vhost: ring format independence
This intentionally leaves "fixup" changes separate - hopefully that is enough to fix vhost-net crashes reported here, but it helps me keep track of what changed. I will naturally squash them later when we are done. This adds infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that