Displaying 20 results from an estimated 81 matches for "vhost_copy_to_us".
2018 Apr 11
3
[PATCH] vhost: Fix vhost_copy_to_user()
vhost_copy_to_user is used to copy vring used elements to userspace.
We should use VHOST_ADDR_USED instead of VHOST_ADDR_DESC.
Fixes: f88949138058 ("vhost: introduce O(1) vq metadata cache")
Signed-off-by: Eric Auger <eric.auger at redhat.com>
---
This fixes a stall observed when running an aarch...
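A hedged sketch of what the fix above amounts to, reconstructed from the commit message rather than from the posted diff; the vhost_vq_meta_fetch() helper and the surrounding structure are written from memory and may differ from the tree:

/* Sketch of vhost_copy_to_user() after the fix; details assumed. */
static int vhost_copy_to_user(struct vhost_virtqueue *vq, void __user *to,
			      const void *from, unsigned size)
{
	if (!vq->iotlb)
		return __copy_to_user(to, from, size);

	/* The destination being written is a vring used element, so the
	 * O(1) metadata cache must be queried with VHOST_ADDR_USED; the
	 * bug was querying it with VHOST_ADDR_DESC. */
	{
		void __user *uaddr = vhost_vq_meta_fetch(vq, (u64)(uintptr_t)to,
							 size, VHOST_ADDR_USED);

		if (uaddr)
			return __copy_to_user(uaddr, from, size);
	}

	/* Full IOTLB translation fallback (translate_desc() + copy_to_iter()
	 * in the real function) elided from this sketch. */
	return -EFAULT;
}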
2018 Apr 11
0
[PATCH] vhost: Fix vhost_copy_to_user()
On 2018-04-11 21:30, Eric Auger wrote:
> vhost_copy_to_user is used to copy vring used elements to userspace.
> We should use VHOST_ADDR_USED instead of VHOST_ADDR_DESC.
>
> Fixes: f88949138058 ("vhost: introduce O(1) vq metadata cache")
> Signed-off-by: Eric Auger <eric.auger at redhat.com>
>
> ---
>
> This fixes a...
2018 Dec 13
1
[PATCH net-next 1/3] vhost: generalize adding used elem
On Thu, Dec 13, 2018 at 06:10:20PM +0800, Jason Wang wrote:
> Use one generic vhost_copy_to_user() instead of two dedicated
> accessors. This will simplify the conversion to fine-grained accessors.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>
The reason we did it like this is because it was faster.
Want to try benchmarking before we change it?
> ---
> drivers...
2016 Dec 06
0
[PATCH 06/10] vhost: add missing __user annotations
...8 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -719,7 +719,7 @@ static int memory_access_ok(struct vhost_dev *d, struct vhost_umem *umem,
static int translate_desc(struct vhost_virtqueue *vq, u64 addr, u32 len,
struct iovec iov[], int iov_size, int access);
-static int vhost_copy_to_user(struct vhost_virtqueue *vq, void *to,
+static int vhost_copy_to_user(struct vhost_virtqueue *vq, void __user *to,
const void *from, unsigned size)
{
int ret;
@@ -749,7 +749,7 @@ static int vhost_copy_to_user(struct vhost_virtqueue *vq, void *to,
}
static int vhost_copy_from_user(...
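The hunk above only adds sparse's __user address-space annotation; runtime behaviour is unchanged. A minimal, hypothetical example (not vhost code) of what the annotation buys when the tree is checked with sparse (make C=1):

#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

/* Hypothetical helper: because 'dst' is tagged __user, sparse verifies it
 * is only passed to uaccess helpers such as copy_to_user() and never
 * dereferenced directly in kernel context. */
static int report_status(u32 __user *dst, u32 status)
{
	/* '*dst = status;' here would be flagged by sparse (and is unsafe). */
	return copy_to_user(dst, &status, sizeof(status)) ? -EFAULT : 0;
}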
2018 Dec 13
0
[PATCH net-next 1/3] vhost: generalize adding used elem
Use one generic vhost_copy_to_user() instead of two dedicated
accessors. This will simplify the conversion to fine-grained accessors.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/vhost.c | 11 +----------
1 file changed, 1 insertion(+), 10 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhos...
2018 Dec 29
0
[RFC PATCH V3 1/5] vhost: generalize adding used elem
Use one generic vhost_copy_to_user() instead of two dedicated
accessors. This will simplify the conversion to fine-grained
accessors. About a 2% improvement in PPS was seen during a virtio-user
txonly test.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/vhost.c | 11 +----------
1 file changed, 1 insertion(+...
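A hedged sketch of what "one generic vhost_copy_to_user()" means at the call site: before this patch, __vhost_add_used_n() had a dedicated vhost_put_user() fast path for a single element; afterwards everything goes through the memcpy-style accessor. The body below is pieced together from the descriptions in these results, not copied from the diff, and simplifies away logging and index bookkeeping:

static int __vhost_add_used_n(struct vhost_virtqueue *vq,
			      struct vring_used_elem *heads,
			      unsigned count)
{
	struct vring_used_elem __user *used;
	int start = vq->last_used_idx & (vq->num - 1);

	used = vq->used->ring + start;
	/* One generic copy for both the single- and multi-element case;
	 * the count == 1 fast path via vhost_put_user() is gone. */
	if (vhost_copy_to_user(vq, used, heads, count * sizeof(*used))) {
		vq_err(vq, "Failed to write used");
		return -EFAULT;
	}
	vq->last_used_idx += count;
	return 0;
}

Michael's reply in the 2018 Dec 13 thread notes the split was originally a performance optimization; the posting above reports roughly 2% higher PPS after unifying the path, so the dedicated fast path appears not to have been paying its way.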
2018 Dec 13
11
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
Hi:
This series tries to access virtqueue metadata through kernel virtual
addresses instead of copy_user() and friends, since those have too much
overhead: access checks, speculation barriers, and even hardware feature
toggling.
Testing shows about a 24% improvement in TX PPS. It should benefit other
cases as well.
Please review
Jason Wang (3):
vhost: generalize adding used elem
vhost: fine grain userspace memory
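A rough sketch of the direction the cover letter describes. The V1 approach (per the later changelogs) pins the user pages backing the ring metadata and maps them into kernel virtual address space; everything below, including the function name and the exact GUP call, is an assumption:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Hypothetical: pin the userspace pages backing a ring and vmap() them, so
 * later metadata accesses are plain loads and stores instead of copy_user()
 * calls with their access checks, speculation barriers and hardware feature
 * toggling (e.g. SMAP on x86). */
static void *map_ring_metadata(unsigned long uaddr, size_t size,
			       struct page **pages)
{
	int npages = DIV_ROUND_UP(offset_in_page(uaddr) + size, PAGE_SIZE);
	void *vaddr;

	/* gup_flags form of get_user_pages_fast(); older kernels take an
	 * int 'write' argument here instead of FOLL_WRITE. */
	if (get_user_pages_fast(uaddr & PAGE_MASK, npages, FOLL_WRITE,
				pages) != npages)
		return NULL;		/* unpin/cleanup elided */

	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
	return vaddr ? vaddr + offset_in_page(uaddr) : NULL;
}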
2018 Dec 28
4
[RFC PATCH V2 0/3] vhost: accelerate metadata access through vmap()
Hi:
This series tries to access virtqueue metadata through kernel virtual
addresses instead of copy_user() and friends, since those have too much
overhead: access checks, speculation barriers, and even hardware feature
toggling.
Testing shows about a 24% improvement in TX PPS. It should benefit other
cases as well.
Changes from V1:
- instead of pinning pages, use MMU notifier to invalidate vmaps and
remap during
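The V2 change noted above replaces pinning with an MMU notifier, so the kernel mapping can be dropped when the userspace mapping changes and re-established on the next access. A heavily hedged skeleton of that idea; the struct, field and callback names are mine, and mmu_notifier_ops signatures differ between kernel versions (this uses the ~4.20-era form):

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

struct ring_map {
	struct mmu_notifier mn;
	unsigned long uaddr;
	size_t size;
	void *vaddr;		/* cached vmap; NULL once invalidated */
};

/* When userspace remaps or unmaps memory, drop the cached kernel mapping;
 * the next metadata access re-establishes it. A real implementation would
 * first check that [start, end) overlaps [uaddr, uaddr + size) and take
 * the appropriate locks. */
static void ring_map_invalidate(struct mmu_notifier *mn, struct mm_struct *mm,
				unsigned long start, unsigned long end)
{
	struct ring_map *map = container_of(mn, struct ring_map, mn);

	map->vaddr = NULL;	/* vunmap() and synchronization elided */
}

static const struct mmu_notifier_ops ring_map_mn_ops = {
	.invalidate_range = ring_map_invalidate,
};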
2018 Dec 29
12
[RFC PATCH V3 0/5] Hi:
This series tries to access virtqueue metadata through kernel virtual
addresses instead of copy_user() and friends, since those have too much
overhead: access checks, speculation barriers, and even hardware feature
toggling.
Testing shows about a 24% improvement in TX PPS. It should benefit other
cases as well.
Changes from V2:
- fix buggy range overlapping check (see the sketch after this excerpt)
- tear down MMU notifier during vhost ioctl to make sure
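On the range-overlap item above: the usual correct test for two half-open ranges is symmetric, and inverted or off-by-one comparisons are an easy way to get it wrong. A minimal, generic illustration (not the series' code):

#include <linux/types.h>

/* Half-open ranges [a_start, a_end) and [b_start, b_end) overlap iff each
 * one starts before the other one ends. */
static bool ranges_overlap(unsigned long a_start, unsigned long a_end,
			   unsigned long b_start, unsigned long b_end)
{
	return a_start < b_end && b_start < a_end;
}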
2019 Mar 06
1
[RFC PATCH V2 2/5] vhost: fine grain userspace memory accessors
...*vq)
> +{
> + return vhost_put_user(vq, cpu_to_vhost16(vq, vq->avail_idx),
> + vhost_avail_event(vq));
> +}
> +
> +static inline int vhost_put_used(struct vhost_virtqueue *vq,
> + struct vring_used_elem *head, int idx,
> + int count)
> +{
> + return vhost_copy_to_user(vq, vq->used->ring + idx, head,
> + count * sizeof(*head));
> +}
> +
> +static inline int vhost_put_used_flags(struct vhost_virtqueue *vq)
> +
> +{
> + return vhost_put_user(vq, cpu_to_vhost16(vq, vq->used_flags),
> + &vq->used->flags);
>...
2017 Sep 26
2
[PATCH net-next RFC 2/5] vhost: introduce helper to prefetch desc index
...last_avail_idx = (last_avail_idx + 1) & (vq->num - 1);
> + }
> +
> + if (!used_update)
> + return ret;
> +
> + last_used_idx = vq->last_used_idx & (vq->num - 1);
> + while (total) {
> + copied = min((u16)(vq->num - last_used_idx), total);
> + ret2 = vhost_copy_to_user(vq,
> + &vq->used->ring[last_used_idx],
> + &heads[ret - total],
> + copied * sizeof(*used));
> +
> + if (unlikely(ret2)) {
> + vq_err(vq, "Failed to update used ring!\n");
> + return -EFAULT;
> + }
> +
> + last_used_i...
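The quoted loop handles used-ring wrap-around by splitting the copy at the end of the ring: with, say, vq->num = 256, last_used_idx = 250 and total = 10, the first pass copies min(256 - 250, 10) = 6 elements into slots 250..255 and the second copies the remaining 4 into slots 0..3 (numbers are illustrative, not from the thread). A standalone restatement of just that arithmetic, with a hypothetical copy_fn standing in for the vhost_copy_to_user() call:

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>

static int copy_used_wrapped(u16 ring_size, u16 last_used_idx, u16 total,
			     int (*copy_fn)(u16 start, u16 n))
{
	while (total) {
		/* Never cross the end of the ring in a single copy. */
		u16 chunk = min((u16)(ring_size - last_used_idx), total);

		if (copy_fn(last_used_idx, chunk))
			return -EFAULT;

		/* ring_size is a power of two, so masking wraps the index. */
		last_used_idx = (last_used_idx + chunk) & (ring_size - 1);
		total -= chunk;
	}
	return 0;
}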
2017 Sep 28
1
[PATCH net-next RFC 2/5] vhost: introduce helper to prefetch desc index
...more times.
> +
> + if (!used_update)
> + return ret;
> +
> + last_used_idx = vq->last_used_idx & (vq->num - 1);
> + while (total) {
> + copied = min((u16)(vq->num - last_used_idx), total);
> + ret2 = vhost_copy_to_user(vq,
> + &vq->used->ring[last_used_idx],
> + &heads[ret - total],
> + copied * sizeof(*used));
> +
> + if (unlikely(ret2)) {
> +...
2019 Mar 06
0
[RFC PATCH V2 2/5] vhost: fine grain userspace memory accessors
...e int vhost_put_avail_event(struct vhost_virtqueue *vq)
+{
+ return vhost_put_user(vq, cpu_to_vhost16(vq, vq->avail_idx),
+ vhost_avail_event(vq));
+}
+
+static inline int vhost_put_used(struct vhost_virtqueue *vq,
+ struct vring_used_elem *head, int idx,
+ int count)
+{
+ return vhost_copy_to_user(vq, vq->used->ring + idx, head,
+ count * sizeof(*head));
+}
+
+static inline int vhost_put_used_flags(struct vhost_virtqueue *vq)
+
+{
+ return vhost_put_user(vq, cpu_to_vhost16(vq, vq->used_flags),
+ &vq->used->flags);
+}
+
+static inline int vhost_put_used_idx(s...
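As a usage note (my reading of the series, not a quoted hunk): once these helpers exist, call sites stop open-coding vhost_copy_to_user() against vq->used->ring and go through vhost_put_used() instead, which is what later lets the backing implementation change (for example to a vmap-based path) without touching callers. A hedged sketch of a converted call site, with hypothetical surrounding names:

/* 'heads', 'start' and 'count' as in __vhost_add_used_n() earlier. */
static int add_used_elems(struct vhost_virtqueue *vq,
			  struct vring_used_elem *heads, int start, int count)
{
	/* Previously: vhost_copy_to_user(vq, vq->used->ring + start, heads,
	 *                                count * sizeof(*heads)); */
	return vhost_put_used(vq, heads, start, count);
}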
2019 Mar 07
0
[RFC PATCH V2 2/5] vhost: fine grain userspace memory accessors
...t_put_user(vq, cpu_to_vhost16(vq, vq->avail_idx),
>> + vhost_avail_event(vq));
>> +}
>> +
>> +static inline int vhost_put_used(struct vhost_virtqueue *vq,
>> + struct vring_used_elem *head, int idx,
>> + int count)
>> +{
>> + return vhost_copy_to_user(vq, vq->used->ring + idx, head,
>> + count * sizeof(*head));
>> +}
>> +
>> +static inline int vhost_put_used_flags(struct vhost_virtqueue *vq)
>> +
>> +{
>> + return vhost_put_user(vq, cpu_to_vhost16(vq, vq->used_flags),
>> + &...