search for: vhost_get_avail

Displaying 20 results from an estimated 78 matches for "vhost_get_avail".

2019 Mar 06
1
[RFC PATCH V2 2/5] vhost: fine grain userspace memory accessors
...> +		       &vq->used->idx);
> +}
> +
>  #define vhost_get_user(vq, x, ptr, type) \
>  ({ \
>  	int ret; \
> @@ -907,6 +935,43 @@ static void vhost_dev_unlock_vqs(struct vhost_dev *d)
>  		mutex_unlock(&d->vqs[i]->mutex);
>  }
> 
> +static inline int vhost_get_avail_idx(struct vhost_virtqueue *vq,
> +				      __virtio16 *idx)
> +{
> +	return vhost_get_avail(vq, *idx, &vq->avail->idx);
> +}
> +
> +static inline int vhost_get_avail_head(struct vhost_virtqueue *vq,
> +				       __virtio16 *head, int idx)
> +{
> +	return vhos...
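For readability, here are the helpers this diff introduces, reconstructed from the excerpt above. This is a sketch, not the applied patch: the ring[] subscript in vhost_get_avail_head() is inferred from the function's name and parameters, since the search hit cuts off mid-function.

/* Reconstructed from the excerpt; the ring[] subscript below is an
 * inference, as the excerpt truncates before it. */
static inline int vhost_get_avail_idx(struct vhost_virtqueue *vq,
				      __virtio16 *idx)
{
	/* Read the guest-visible avail index through the checked accessor. */
	return vhost_get_avail(vq, *idx, &vq->avail->idx);
}

static inline int vhost_get_avail_head(struct vhost_virtqueue *vq,
				       __virtio16 *head, int idx)
{
	/* Read the descriptor head the guest published at slot idx. */
	return vhost_get_avail(vq, *head, &vq->avail->ring[idx]);
}

Wrapping each field access this way gives the later patches in the series a single point per field at which to switch between the copy_user() path and a direct load through a kernel mapping.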
2019 Mar 06
0
[RFC PATCH V2 2/5] vhost: fine grain userspace memory accessors
...er(vq, cpu_to_vhost16(vq, vq->last_used_idx),
+		       &vq->used->idx);
+}
+
 #define vhost_get_user(vq, x, ptr, type) \
 ({ \
 	int ret; \
@@ -907,6 +935,43 @@ static void vhost_dev_unlock_vqs(struct vhost_dev *d)
 		mutex_unlock(&d->vqs[i]->mutex);
 }
 
+static inline int vhost_get_avail_idx(struct vhost_virtqueue *vq,
+				      __virtio16 *idx)
+{
+	return vhost_get_avail(vq, *idx, &vq->avail->idx);
+}
+
+static inline int vhost_get_avail_head(struct vhost_virtqueue *vq,
+				       __virtio16 *head, int idx)
+{
+	return vhost_get_avail(vq, *head,
+			       &vq->...
2019 Mar 07
0
[RFC PATCH V2 2/5] vhost: fine grain userspace memory accessors
...);
>> +}
>> +
>>  #define vhost_get_user(vq, x, ptr, type) \
>>  ({ \
>>  	int ret; \
>> @@ -907,6 +935,43 @@ static void vhost_dev_unlock_vqs(struct vhost_dev *d)
>>  		mutex_unlock(&d->vqs[i]->mutex);
>>  }
>> 
>> +static inline int vhost_get_avail_idx(struct vhost_virtqueue *vq,
>> +				      __virtio16 *idx)
>> +{
>> +	return vhost_get_avail(vq, *idx, &vq->avail->idx);
>> +}
>> +
>> +static inline int vhost_get_avail_head(struct vhost_virtqueue *vq,
>> +				       __virtio16 *head, int idx...
2018 Dec 28
4
[RFC PATCH V2 0/3] vhost: accelerate metadata access through vmap()
Hi: This series tries to access virtqueue metadata through kernel virtual addresses instead of the copy_user() friends, since those carry too much overhead: checks, speculation barriers, or even hardware feature toggling. Tests show about a 24% improvement in TX PPS. It should benefit other cases as well. Changes from V1:
- instead of pinning pages, use an MMU notifier to invalidate vmaps and remap during
2017 Sep 28
1
[PATCH net-next RFC 2/5] vhost: introduce helper to prefetch desc index
...> +			       u16 num, bool used_update)
> +{
> +	int ret, ret2;
> +	u16 last_avail_idx, last_used_idx, total, copied;
> +	__virtio16 avail_idx;
> +	struct vring_used_elem __user *used;
> +	int i;
> +
> +	if (unlikely(vhost_get_avail(vq, avail_idx, &vq->avail->idx))) {
> +		vq_err(vq, "Failed to access avail idx at %p\n",
> +		       &vq->avail->idx);
> +		return -EFAULT;
> +	}
> +	last_avail_idx = vq->last_avail_idx & (vq->...
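The visible part of the helper is the standard vhost fetch-and-validate pattern: read the guest's avail index through vhost_get_avail(), return -EFAULT on a faulting userspace pointer, then treat last_avail_idx as a free-running counter masked by the ring size. A minimal sketch of just that pattern; the function name is invented here, and the prefetch/used-update body is elided, matching what the excerpt shows:

/* Hypothetical helper name; only the index fetch is shown. */
static int vhost_fetch_avail_idx(struct vhost_virtqueue *vq)
{
	__virtio16 avail_idx;

	if (unlikely(vhost_get_avail(vq, avail_idx, &vq->avail->idx))) {
		vq_err(vq, "Failed to access avail idx at %p\n",
		       &vq->avail->idx);
		return -EFAULT;
	}
	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);

	/* last_avail_idx is free-running; the elided body masks it with
	 * (vq->num - 1) to index avail->ring[], exactly as in the hunk. */
	return (u16)(vq->avail_idx - vq->last_avail_idx);
}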
2019 Mar 06
12
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
This series tries to access virtqueue metadata through kernel virtual addresses instead of the copy_user() friends, since those carry too much overhead: checks, speculation barriers, or even hardware feature toggling. This is done by setting up kernel addresses through vmap() and registering an MMU notifier for invalidation. Tests show about a 24% improvement in TX PPS. TCP_STREAM doesn't see an obvious improvement.
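In outline, the mechanism the cover letter describes: translate and pin the pages backing a ring, vmap() them into the kernel, and read metadata with plain loads instead of copy_from_user(). The sketch below shows only the mapping step under stated assumptions: the function and struct names are illustrative rather than the series' own, GUP flags vary across kernel versions, it shows the simpler pinned variant even though this revision drops pinning in favor of MMU-notifier invalidation, and error-path unwinding is elided for brevity.

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Illustrative names; not the series' actual API. */
struct vhost_ring_map {
	void *addr;		/* kernel alias of the userspace ring */
	struct page **pages;
	int npages;
};

static int vhost_ring_vmap(struct vhost_ring_map *map,
			   unsigned long uaddr, size_t len)
{
	unsigned long off = uaddr & ~PAGE_MASK;
	void *vaddr;
	int npinned;

	map->npages = DIV_ROUND_UP(off + len, PAGE_SIZE);
	map->pages = kmalloc_array(map->npages, sizeof(*map->pages),
				   GFP_KERNEL);
	if (!map->pages)
		return -ENOMEM;

	/* Pin the backing pages so the alias stays valid; an MMU
	 * notifier must tear the alias down if the VMA changes.
	 * (Unpin/free on the error paths is elided here.) */
	npinned = get_user_pages_fast(uaddr, map->npages, FOLL_WRITE,
				      map->pages);
	if (npinned != map->npages)
		return -EFAULT;

	vaddr = vmap(map->pages, map->npages, VM_MAP, PAGE_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	map->addr = vaddr + off;	/* loads/stores now skip uaccess */
	return 0;
}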
2018 Dec 13
11
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
Hi: This series tries to access virtqueue metadata through kernel virtual addresses instead of the copy_user() friends, since those carry too much overhead: checks, speculation barriers, or even hardware feature toggling. Tests show about a 24% improvement in TX PPS. It should benefit other cases as well. Please review.

Jason Wang (3):
  vhost: generalize adding used elem
  vhost: fine grain userspace memory
2016 Dec 14
1
[PATCH V2] vhost: introduce O(1) vq metadata cache
...(vq, ptr, sizeof(*ptr)); \
+		(__typeof__(ptr)) __vhost_get_user(vq, ptr, \
+						   sizeof(*ptr), \
+						   type); \
 	if (from != NULL) \
 		ret = __get_user(x, from); \
 	else \
@@ -845,6 +901,12 @@ static void __user *__vhost_get_user(struct vhost_virtqueue *vq,
 	ret; \
 })
 
+#define vhost_get_avail(vq, x, ptr) \
+	vhost_get_user(vq, x, ptr, VHOST_ADDR_AVAIL)
+
+#define vhost_get_used(vq, x, ptr) \
+	vhost_get_user(vq, x, ptr, VHOST_ADDR_USED)
+
 static void vhost_dev_lock_vqs(struct vhost_dev *d)
 {
 	int i = 0;
@@ -950,6 +1012,7 @@ int vhost_process_iotlb_msg(struct vhost_dev *dev,
 	ret =...
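What makes this O(1): a virtqueue only ever touches three metadata objects (descriptor table, avail ring, used ring), so the macros tag each access with an address type and the IOTLB translation for each type can be cached per-vq, avoiding an interval-tree walk on every access. A sketch of that shape; the VHOST_ADDR_* constants follow the excerpt, while the struct layout and fetch helper below are a simplified illustration of the cached path, not the applied code.

/* VHOST_ADDR_* as used by the vhost_get_avail()/vhost_get_used()
 * macros above; the cache-hit helper is a simplified illustration. */
enum {
	VHOST_ADDR_DESC  = 0,
	VHOST_ADDR_AVAIL = 1,
	VHOST_ADDR_USED  = 2,
	VHOST_NUM_ADDRS  = 3,
};

struct vhost_vq_meta {
	/* one pre-validated IOTLB translation per metadata region */
	const struct vhost_umem_node *cached[VHOST_NUM_ADDRS];
};

static void __user *vhost_meta_fetch(const struct vhost_vq_meta *meta,
				     u64 addr, u32 size, int type)
{
	const struct vhost_umem_node *node = meta->cached[type];

	/* Cache hit: the remembered node still covers the whole access,
	 * so translate with simple arithmetic, no tree walk. */
	if (node && addr >= node->start && addr + size - 1 <= node->last)
		return (void __user *)(unsigned long)
		       (node->userspace_addr + addr - node->start);

	return NULL;	/* miss: caller falls back to the IOTLB lookup */
}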
2017 Sep 26
2
[PATCH net-next RFC 2/5] vhost: introduce helper to prefetch desc index
...eads,
> +			       u16 num, bool used_update)

why do you need to combine used update with prefetch?

> +{
> +	int ret, ret2;
> +	u16 last_avail_idx, last_used_idx, total, copied;
> +	__virtio16 avail_idx;
> +	struct vring_used_elem __user *used;
> +	int i;
> +
> +	if (unlikely(vhost_get_avail(vq, avail_idx, &vq->avail->idx))) {
> +		vq_err(vq, "Failed to access avail idx at %p\n",
> +		       &vq->avail->idx);
> +		return -EFAULT;
> +	}
> +	last_avail_idx = vq->last_avail_idx & (vq->num - 1);
> +	vq->avail_idx = vhost16_to_cpu...
2016 Dec 14
2
[PATCH] vhost: introduce O(1) vq metadata cache
...(vq, ptr, sizeof(*ptr)); \
+		(__typeof__(ptr)) __vhost_get_user(vq, ptr, \
+						   sizeof(*ptr), \
+						   type); \
 	if (from != NULL) \
 		ret = __get_user(x, from); \
 	else \
@@ -845,6 +901,12 @@ static void __user *__vhost_get_user(struct vhost_virtqueue *vq,
 	ret; \
 })
 
+#define vhost_get_avail(vq, x, ptr) \
+	vhost_get_user(vq, x, ptr, VHOST_ADDR_AVAIL)
+
+#define vhost_get_used(vq, x, ptr) \
+	vhost_get_user(vq, x, ptr, VHOST_ADDR_USED)
+
 static void vhost_dev_lock_vqs(struct vhost_dev *d)
 {
 	int i = 0;
@@ -950,6 +1012,7 @@ int vhost_process_iotlb_msg(struct vhost_dev *dev,
 	ret =...
2018 Dec 29
12
[RFC PATCH V3 0/5]
Hi: This series tries to access virtqueue metadata through kernel virtual addresses instead of the copy_user() friends, since those carry too much overhead: checks, speculation barriers, or even hardware feature toggling. Tests show about a 24% improvement in TX PPS. It should benefit other cases as well. Changes from V2:
- fix buggy range overlapping check
- tear down MMU notifier during vhost ioctl to make sure
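The invalidation half, as the changelog hints: the device registers an MMU notifier on the owner's mm, and when a range of guest memory is invalidated, any kernel alias overlapping it must be torn down before the callback returns (and re-established later). A rough sketch; the callback signature shown is the mmu_notifier_range form, which differs across kernel versions, and the field and helper names here are illustrative, not the series' own.

/* Illustrative field/helper names; signature varies by kernel version. */
static int vhost_invalidate_range_start(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	struct vhost_dev *dev =
		container_of(mn, struct vhost_dev, mmu_notifier);
	int i;

	for (i = 0; i < dev->nvqs; i++) {
		struct vhost_virtqueue *vq = dev->vqs[i];

		/* Tear down the kernel alias if it overlaps the
		 * invalidated range; accessors then fall back to the
		 * copy_user() path until the metadata is remapped. */
		if (vq->meta_uaddr < range->end &&
		    vq->meta_uaddr + vq->meta_size > range->start)
			vhost_unmap_metadata(vq);	/* illustrative */
	}
	return 0;
}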
2018 Nov 23
1
[PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated
...ith descriptor numbers. */
>  	last_avail_idx = vq->last_avail_idx;
> @@ -2034,15 +2035,19 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
> 
>  	/* Grab the next descriptor number they're advertising, and increment
>  	 * the index we've seen. */
> -	if (unlikely(vhost_get_avail(vq, ring_head,
> -		     &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
> -		vq_err(vq, "Failed to read head: idx %d address %p\n",
> -			last_avail_idx,
> -			&vq->avail->ring[last_avail_idx % vq->num]);
> -		return -EFAULT;...
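What the removed hunk is replaced by, conceptually: with VIRTIO_F_IN_ORDER negotiated, the guest uses descriptors strictly in the order it made them available, so the next head is implied by last_avail_idx and the avail->ring[] read (one guest-memory access per descriptor) can be skipped. A sketch of the resulting logic; the actual replacement hunk is truncated out of the excerpt above, so take this as an illustration rather than the applied code.

	/* Sketch; the applied hunk is not fully visible in the excerpt. */
	if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER)) {
		/* In-order devices consume heads sequentially, so compute
		 * the head instead of reading the avail ring. */
		head = last_avail_idx & (vq->num - 1);
	} else if (unlikely(vhost_get_avail(vq, ring_head,
			&vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
		vq_err(vq, "Failed to read head: idx %d address %p\n",
		       last_avail_idx,
		       &vq->avail->ring[last_avail_idx % vq->num]);
		return -EFAULT;
	} else {
		head = vhost16_to_cpu(vq, ring_head);
	}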