Displaying 7 results from an estimated 7 matches for "ptr_ring_consume_bh".
2017 Mar 22
1
[PATCH net-next 1/8] ptr_ring: introduce batch dequeuing
/* Note: callers invoking this in a loop must use a compiler barrier,
* for example cpu_relax().
*/
Also - it looks like it shouldn't matter if reads are reordered, but I wonder.
Thoughts? Including some reasoning about this in the commit log would be nice.
> @@ -297,6 +313,55 @@ static inline void *ptr_ring_consume_bh(struct ptr_ring *r)
> return ptr;
> }
>
> +static inline int ptr_ring_consume_batched(struct ptr_ring *r,
> + void **array, int n)
> +{
> + int ret;
> +
> + spin_lock(&r->consumer_lock);
> + ret = __ptr_ring_consume_batched(r, array, n);
> + spin_unlock(&r->consumer_lock);
> +
> + return ret;
> +}
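For context, the usage pattern the quoted comment asks for looks roughly like
this. A minimal sketch, not code from the thread; ptr_ring_consume() takes the
consumer lock internally, and cpu_relax() doubles as the compiler barrier so
the ring is actually re-read on each pass:

#include <linux/ptr_ring.h>

/* Busy-poll until an entry shows up; cpu_relax() is the compiler
 * barrier the header comment requires, so the loop cannot keep
 * using a cached, stale read of the ring. */
static void *consume_spin(struct ptr_ring *r)
{
	void *ptr;

	while (!(ptr = ptr_ring_consume(r)))
		cpu_relax();

	return ptr;
}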
2017 Mar 21
12
[PATCH net-next 0/8] vhost-net rx batching
Hi all:
This series implements rx batching for vhost-net. This is done by
batching the dequeuing from the skb_array exported by the underlying
socket and passing the skbs back through msg_control to finish the
userspace copying.
Tests show up to a 19% improvement in rx pps.
Please review.
Thanks
Jason Wang (8):
ptr_ring: introduce batch dequeuing
skb_array: introduce batch dequeuing
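The batching the cover letter describes is easy to picture from the consumer
side. A hedged sketch, assuming the skb_array_consume_batched() helper that
patch 2/8 introduces and a hypothetical VHOST_RX_BATCH size; the point is that
one lock round-trip now fetches many skbs instead of one:

#include <linux/skb_array.h>

#define VHOST_RX_BATCH 64	/* hypothetical batch size */

static int fetch_rx_batch(struct skb_array *ring,
			  struct sk_buff **heads)
{
	/* One locked operation dequeues up to VHOST_RX_BATCH skbs;
	 * the caller then feeds them to the userspace-copy path. */
	return skb_array_consume_batched(ring, heads, VHOST_RX_BATCH);
}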
2017 Mar 21
0
[PATCH net-next 1/8] ptr_ring: introduce batch dequeuing
...+ ptr = __ptr_ring_consume(r);
+ if (!ptr)
+ break;
+ array[i++] = ptr;
+ }
+
+ return i;
+}
+
/*
* Note: resize (below) nests producer lock within consumer lock, so if you
* call this in interrupt or BH context, you must disable interrupts/BH when
@@ -297,6 +313,55 @@ static inline void *ptr_ring_consume_bh(struct ptr_ring *r)
return ptr;
}
+static inline int ptr_ring_consume_batched(struct ptr_ring *r,
+ void **array, int n)
+{
+ int ret;
+
+ spin_lock(&r->consumer_lock);
+ ret = __ptr_ring_consume_batched(r, array, n);
+ spin_unlock(&r->consumer_lock);
+
+ return ret;
+}
+...
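The hunk is cut off above ("+..."), so the remaining variants are not shown.
Judging by the existing ptr_ring_consume_bh() named in the hunk header, a
plausible shape for a BH-safe batched variant is the following; a hedged
reconstruction, not a quote from the patch:

static inline int ptr_ring_consume_batched_bh(struct ptr_ring *r,
					      void **array, int n)
{
	int ret;

	/* _bh variant: also disables bottom halves while the
	 * consumer lock is held, per the resize note above. */
	spin_lock_bh(&r->consumer_lock);
	ret = __ptr_ring_consume_batched(r, array, n);
	spin_unlock_bh(&r->consumer_lock);

	return ret;
}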
2017 Mar 21
1
[PATCH net-next 1/8] ptr_ring: introduce batch dequeuing
> + if (!ptr)
> + break;
> + array[i++] = ptr;
> + }
> +
> + return i;
> +}
> +
> /*
> * Note: resize (below) nests producer lock within consumer lock, so if you
> * call this in interrupt or BH context, you must disable interrupts/BH when
> @@ -297,6 +313,55 @@ static inline void *ptr_ring_consume_bh(struct ptr_ring *r)
[...]
MBR, Sergei
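The lock-nesting note quoted above is the constraint worth spelling out:
resize takes the consumer lock first and nests the producer lock inside it, so
a consumer running in interrupt or BH context must disable interrupts/BH when
taking the consumer lock or it can deadlock against a concurrent resize. A
simplified sketch of the nesting order in ptr_ring_resize(); the real function
also allocates and swaps in the new queue:

static void resize_lock_order(struct ptr_ring *r)
{
	unsigned long flags;

	spin_lock_irqsave(&r->consumer_lock, flags);	/* outer lock */
	spin_lock(&r->producer_lock);			/* nested inside */

	/* ... old queue swapped for the new one here ... */

	spin_unlock(&r->producer_lock);
	spin_unlock_irqrestore(&r->consumer_lock, flags);
}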