search for: vhost_get_vq_desc_pack


2018 Feb 27
1
[PATCH RFC 2/2] vhost: packed ring support
...to_vhost16(vq, DESC_USED);
> +	} else {
> +		desc->flags &= ~cpu_to_vhost16(vq, DESC_AVAIL);
> +		desc->flags &= ~cpu_to_vhost16(vq, DESC_USED);
> +	}
> +
> +	desc->flags = flags;

This final assignment restores the old flags value, overwriting the AVAIL/USED bits that were just set or cleared above.

> +}
> +
> +static int vhost_get_vq_desc_packed(struct vhost_virtqueue *vq,
> +				    struct iovec iov[], unsigned int iov_size,
> +				    unsigned int *out_num, unsigned int *in_num,
> +				    struct vhost_log *log,
> +				    unsigned int *log_num)
> +{
> +	struct vring_desc_packed desc;
> +	int ret, access, i;
> +...
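In other words, the wrap-counter encoding needs to be folded into the value that is finally stored. A minimal sketch of that shape, assuming the DESC_AVAIL/DESC_USED masks and cpu_to_vhost16() from the excerpt (the helper name and signature are hypothetical, not the author's final code):

/* Sketch only: apply the wrap-counter encoding to the flags value
 * that will actually be stored, and store it once. DESC_AVAIL and
 * DESC_USED are the bit masks used in the patch excerpt above.
 */
static void vhost_set_desc_flags(struct vhost_virtqueue *vq,
				 struct vring_desc_packed *desc,
				 __virtio16 flags, bool wrap_counter)
{
	if (wrap_counter)
		flags |= cpu_to_vhost16(vq, DESC_AVAIL | DESC_USED);
	else
		flags &= ~cpu_to_vhost16(vq, DESC_AVAIL | DESC_USED);

	desc->flags = flags;
}

The later revisions in this thread move in the same direction: the V4 excerpt further down computes the flags value and returns it instead of writing desc->flags twice.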
2018 Feb 14
6
[PATCH RFC 0/2] Packed ring for vhost
Hi all: This RFC implements a subset of the packed ring layout described at https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd07.pdf . The code was tested with the pmd implementation by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost
2018 Feb 14
0
[PATCH RFC 2/2] vhost: packed ring support
...if (wrap_counter) {
+		desc->flags |= cpu_to_vhost16(vq, DESC_AVAIL);
+		desc->flags |= cpu_to_vhost16(vq, DESC_USED);
+	} else {
+		desc->flags &= ~cpu_to_vhost16(vq, DESC_AVAIL);
+		desc->flags &= ~cpu_to_vhost16(vq, DESC_USED);
+	}
+
+	desc->flags = flags;
+}
+
+static int vhost_get_vq_desc_packed(struct vhost_virtqueue *vq,
+				    struct iovec iov[], unsigned int iov_size,
+				    unsigned int *out_num, unsigned int *in_num,
+				    struct vhost_log *log,
+				    unsigned int *log_num)
+{
+	struct vring_desc_packed desc;
+	int ret, access, i;
+	u16 avail_idx = vq->last_avail_idx;...
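Before vhost_get_vq_desc_packed() can read a descriptor at last_avail_idx, it must check that the descriptor is actually available. In the packed layout this is encoded by the AVAIL/USED flag pair relative to the current wrap counter: a descriptor is available when its AVAIL bit equals the wrap counter the device expects and its USED bit differs. A minimal sketch of that test, assuming the DESC_AVAIL/DESC_USED masks from the excerpt and a hypothetical avail_wrap_counter field in struct vhost_virtqueue:

/* Sketch of the packed-ring availability check. vhost16_to_cpu() and
 * the DESC_* masks follow the excerpt; avail_wrap_counter is an
 * assumed per-virtqueue field tracking the device's wrap counter.
 */
static bool desc_is_avail(struct vhost_virtqueue *vq,
			  const struct vring_desc_packed *desc)
{
	u16 flags = vhost16_to_cpu(vq, desc->flags);
	bool avail = flags & DESC_AVAIL;
	bool used = flags & DESC_USED;

	return avail == vq->avail_wrap_counter && used != avail;
}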
2018 May 16
0
[RFC V4 PATCH 7/8] vhost: packed ring support
...flags |= cpu_to_vhost16(vq, DESC_AVAIL);
+		flags |= cpu_to_vhost16(vq, DESC_USED);
+	} else {
+		flags &= ~cpu_to_vhost16(vq, DESC_AVAIL);
+		flags &= ~cpu_to_vhost16(vq, DESC_USED);
+	}
+
+	if (write)
+		flags |= cpu_to_vhost16(vq, VRING_DESC_F_WRITE);
+
+	return flags;
+}
+
+static int vhost_get_vq_desc_packed(struct vhost_virtqueue *vq,
+				    struct vhost_used_elem *used,
+				    struct iovec iov[], unsigned int iov_size,
+				    unsigned int *out_num, unsigned int *in_num,
+				    struct vhost_log *log,
+				    unsigned int *log_num)
+{
+	struct vring_desc_packed desc;
+	int ret, access, i;
+...
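This V4 shape, which computes a flags value and returns it, avoids the double store discussed in the earlier RFC. The other half of the wrap-counter bookkeeping is the index advance: each time the ring position passes vq->num, the counter flips, which inverts the AVAIL/USED encoding for the next lap and keeps stale descriptors from matching the availability check. A sketch with assumed field names:

/* Sketch of ring-index advance with wrap-counter toggling;
 * last_avail_idx and avail_wrap_counter are assumed field names
 * used for illustration only.
 */
static void vhost_packed_advance(struct vhost_virtqueue *vq)
{
	if (++vq->last_avail_idx >= vq->num) {
		vq->last_avail_idx -= vq->num;
		vq->avail_wrap_counter = !vq->avail_wrap_counter;
	}
}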
2018 Jul 16
0
[PATCH net-next V2 6/8] vhost: packed ring support
...ic bool vhost_vring_packed_need_event(struct vhost_virtqueue *vq,
+					 bool wrap, __u16 off_wrap, __u16 new,
+					 __u16 old)
+{
+	int off = off_wrap & ~(1 << 15);
+
+	if (wrap != off_wrap >> 15)
+		off -= vq->num;
+
+	return vring_need_event(off, new, old);
+}
+
+static int vhost_get_vq_desc_packed(struct vhost_virtqueue *vq,
+				    struct vhost_used_elem *used,
+				    struct iovec iov[], unsigned int iov_size,
+				    unsigned int *out_num, unsigned int *in_num,
+				    struct vhost_log *log,
+				    unsigned int *log_num)
+{
+	struct vring_packed_desc desc;
+	int ret, access, i;
+...
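Here off_wrap packs the driver's event offset in its low 15 bits and the driver's wrap counter in bit 15. When the stored wrap bit disagrees with the device's current one, the event index refers to the previous lap of the ring, so it is shifted down by vq->num to bring both indices into a common space. The comparison then reduces to the classic split-ring helper from include/uapi/linux/virtio_ring.h:

/* Event-index test reused by the packed code above: notify iff
 * event_idx lies in [old, new_idx), i.e. the ring index has just
 * moved past event_idx, using unsigned 16-bit wraparound arithmetic.
 */
static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
{
	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
}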
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with the pmd implementation by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost and guest. Testpmd (rxonly) in the guest reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. Notes: The event
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and tweaks were needed on top of Tiwei's code to make it run for event index. Pktgen reports about 20% improvement on PPS (event index is off). More testing is ongoing. Notes for testers: - Starting from this version, vhost needs qemu co-operation to work
2018 Apr 23
11
[RFC V3 PATCH 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and tweaks were needed on top of Tiwei's code to make it run. TCP stream and pktgen do not show an obvious difference compared with the split ring. Changes from V2: - do not use & in checking desc_event_flags - off should be most significant bit -
2018 May 29
9
[RFC V5 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V5 at https://lkml.org/lkml/2018/5/22/138. Some fixups and tweaks were needed on top of Tiwei's code to make it run for event index. Pktgen reports about 20% improvement on TX PPS when doing pktgen from guest to host. No obvious improvement on RX PPS. We can do lots of optimizations on top, but for simple
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/ Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test the virtio-net pmd as well when
2019 Jul 17
17
[PATCH V3 00/15] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues, which were described at [1]. In this version we try to address the performance regression seen with V2. The root cause is that the packed virtqueue needs more userspace memory accesses, which turn out to be very expensive. Thanks to 7f466032dc9e ("vhost: access vq metadata through kernel virtual address"), such overhead could be