Results matching "sk_receive_queue" (from an estimated 141 matches).
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
We used to queue tx packets in sk_receive_queue, which is less
efficient since it requires spinlocks to synchronize between producer
and consumer.
This patch tries to address this by:
- introduce a new mode, only enabled with IFF_TX_ARRAY set, and switch
from sk_receive_queue to a fixed-size skb array with 256 entries in this...
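To make the producer/consumer point concrete, here is a minimal sketch (plain C, illustrative names, not the actual skb_array API) of the kind of fixed-size single-producer/single-consumer ring the patch moves to; head is written only by the producer and tail only by the consumer, so no spinlock is shared:

#include <stddef.h>     /* NULL */

#define RING_SIZE 256   /* power of two, matching the 256-entry array above */

struct spsc_ring {
        void *slot[RING_SIZE];
        unsigned int head;      /* written only by the producer */
        unsigned int tail;      /* written only by the consumer */
};

/* Producer side: 0 on success, -1 if the ring is full. */
static int ring_produce(struct spsc_ring *r, void *ptr)
{
        if (r->head - r->tail == RING_SIZE)
                return -1;                      /* full */
        r->slot[r->head & (RING_SIZE - 1)] = ptr;
        /* A real implementation needs a write barrier here so the
         * consumer sees the slot contents before the new head. */
        r->head++;
        return 0;
}

/* Consumer side: the oldest entry, or NULL if the ring is empty. */
static void *ring_consume(struct spsc_ring *r)
{
        void *ptr;

        if (r->tail == r->head)
                return NULL;                    /* empty */
        ptr = r->slot[r->tail & (RING_SIZE - 1)];
        r->tail++;
        return ptr;
}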
2016 Jun 17
0
[PATCH net-next V2] tun: introduce tx skb ring
On Wed, Jun 15, 2016 at 04:38:17PM +0800, Jason Wang wrote:
> We used to queue tx packets in sk_receive_queue, which is less
> efficient since it requires spinlocks to synchronize between producer
> and consumer.
>
> This patch tries to address this by:
>
> - introduce a new mode, only enabled with IFF_TX_ARRAY set, and switch
> from sk_receive_queue to a fixed-size s...
2011 Jan 17
11
[PATCH 1/3] vhost-net: check the support of mergeable buffer outside the receive loop
There is no need to check for mergeable buffer support inside the
receive loop, as the whole of handle_rx() is in the read critical
region. So this patch moves the check ahead of the receiving loop.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/net.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index
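The shape of the change, as a hedged sketch against the vhost API of the time (the receive-path details are elided; only the hoisted test is the point):

/* Before: the feature bit was re-tested on every loop iteration.
 * After: it cannot change while handle_rx() runs in the read critical
 * region, so it is evaluated once, ahead of the loop. */
static void handle_rx_sketch(struct vhost_net *net)
{
        bool mergeable = vhost_has_feature(&net->dev, VIRTIO_NET_F_MRG_RXBUF);

        for (;;) {
                if (mergeable) {
                        /* mergeable-buffer receive path */
                } else {
                        /* single-buffer receive path */
                }
                /* ... receive one packet; break when the queue drains ... */
        }
}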
2016 Jun 30
0
[PATCH net-next V3 6/6] tun: switch to use skb array for tx
We used to queue tx packets in sk_receive_queue, which is less
efficient since it requires spinlocks to synchronize between producer
and consumer.
This patch tries to address this by:
- switch from sk_receive_queue to an skb_array, and resize it when
tx_queue_len is changed.
- introduce a new proto_ops, peek_len, which is used for peeking the...
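A hedged sketch of what the new proto_ops hook might look like on the tun side (tun_file and tx_array follow the series' naming; the real drivers/net/tun.c code may handle locking and the multiqueue case differently):

/* Report the length of the packet at the head of the tx ring without
 * dequeueing it, so a consumer such as vhost-net can size its buffers. */
static int tun_peek_len(struct socket *sock)
{
        struct tun_file *tfile = container_of(sock, struct tun_file, socket);

        return skb_array_peek_len(&tfile->tx_array);
}

The hook would then be wired into tun's proto_ops table, e.g. .peek_len = tun_peek_len.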
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all:
This series switches tun to using an skb array, in order to
eliminate the spinlock contention between producer and consumer. The
conversion was straightforward: just introduce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is keeping the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the ability to resize multiple rings at once, to avoid handling
partial resize failure for multiple rings.
- add support for zero-length rings.
- intro...
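The multi-ring resize point is about failure atomicity: allocate every replacement ring first, and only then swap them in, so an allocation failure leaves all queues untouched. A generic sketch of that pattern, reusing the spsc_ring type sketched earlier and hypothetical alloc_ring()/swap_ring()/free_ring() helpers (none of these are the actual ptr_ring API):

/* Resize n rings all-or-nothing: nothing is swapped until every
 * allocation has succeeded, so mid-way failure rolls back cleanly. */
static int resize_all(struct spsc_ring **rings, int n, unsigned int size)
{
        struct spsc_ring **fresh;
        int i;

        fresh = kcalloc(n, sizeof(*fresh), GFP_KERNEL);
        if (!fresh)
                return -ENOMEM;

        for (i = 0; i < n; i++) {
                fresh[i] = alloc_ring(size);    /* hypothetical helper */
                if (!fresh[i])
                        goto err;               /* nothing swapped yet */
        }
        for (i = 0; i < n; i++)
                swap_ring(rings[i], fresh[i]);  /* migrate entries, retire old */
        kfree(fresh);
        return 0;

err:
        while (--i >= 0)
                free_ring(fresh[i]);
        kfree(fresh);
        return -ENOMEM;
}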
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all:
This series switches tun to using an skb array, in order to
eliminate the spinlock contention between producer and consumer. The
conversion was straightforward: just introduce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is keeping the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the ability to resize multiple rings at once, to avoid handling
partial resize failure for multiple rings.
- add support for zero-length rings.
- intro...
2018 Jul 03
2
[PATCH net-next v4 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...ost_poll_start(poll, sock->file);
> }
>
> +static int sk_has_rx_data(struct sock *sk)
> +{
> + struct socket *sock = sk->sk_socket;
> +
> + if (sock->ops->peek_len)
> + return sock->ops->peek_len(sock);
> +
> + return skb_queue_empty(&sk->sk_receive_queue);
> +}
> +
> +static void vhost_net_busy_poll(struct vhost_net *net,
> + struct vhost_virtqueue *rvq,
> + struct vhost_virtqueue *tvq,
> + bool rx)
> +{
> + unsigned long uninitialized_var(endtime);
> + unsigned long busyloop_timeout;
> + struct socket *sock;...
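One detail worth flagging in the helper quoted above: as posted, the fallback branch returns skb_queue_empty(), which is nonzero exactly when there is no data, the opposite of what the name promises. The mainline version negates that test; a commented sketch:

static int sk_has_rx_data(struct sock *sk)
{
        struct socket *sock = sk->sk_socket;

        /* Sockets with a peek_len op (e.g. tun's skb array) report the
         * head packet's length directly; nonzero means data pending. */
        if (sock->ops->peek_len)
                return sock->ops->peek_len(sock);

        /* Fallback: an empty queue means no data, hence the negation. */
        return !skb_queue_empty(&sk->sk_receive_queue);
}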
2018 Jul 02
1
[PATCH net-next v3 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...ost_poll_start(poll, sock->file);
> }
>
> +static int sk_has_rx_data(struct sock *sk)
> +{
> + struct socket *sock = sk->sk_socket;
> +
> + if (sock->ops->peek_len)
> + return sock->ops->peek_len(sock);
> +
> + return skb_queue_empty(&sk->sk_receive_queue);
> +}
> +
> +static void vhost_net_busy_poll(struct vhost_net *net,
> + struct vhost_virtqueue *rvq,
> + struct vhost_virtqueue *tvq,
> + bool rx)
> +{
> + unsigned long uninitialized_var(endtime);
> + struct socket *sock = rvq->private_data;
> + struct vh...
2018 Jul 03
1
[PATCH net-next v4 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...x_data(struct sock *sk)
>>> +{
>>> + struct socket *sock = sk->sk_socket;
>>> +
>>> + if (sock->ops->peek_len)
>>> + return sock->ops->peek_len(sock);
>>> +
>>> + return skb_queue_empty(&sk->sk_receive_queue);
>>> +}
>>> +
>>> +static void vhost_net_busy_poll(struct vhost_net *net,
>>> + struct vhost_virtqueue *rvq,
>>> + struct vhost_virtqueue *tvq,
>>> + bool rx)
&...
2016 Jul 06
3
[PATCH net-next V4 0/6] switch to use tx skb array in tun
...ng <jasowang at redhat.com> wrote:
> Hi all:
>
> This series switches tun to using an skb array, in order to
> eliminate the spinlock contention between producer and consumer. The
> conversion was straightforward: just introduce a tx skb array and use
> it instead of sk_receive_queue.
I'm seeing the splat below after this series. I'm still wrapping my
head around this code, but it appears to be happening because the
tun_struct passed into tun_queue_resize is uninitialized.
Specifically, iteration over the disabled list_head fails because prev
= next = NULL. This seem...
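For context on the splat: a list_head is only iterable after INIT_LIST_HEAD() points next and prev back at the head itself; walking a zeroed one computes a bogus entry from the NULL next pointer and dereferences it. A minimal illustration with the generic list API (loop bodies intentionally empty):

static void list_init_demo(void)
{
        struct list_head disabled = { NULL, NULL }; /* zeroed, as in the report */
        struct tun_file *tfile;

        /* Crashes: the iterator follows disabled.next == NULL. */
        list_for_each_entry(tfile, &disabled, next)
                ;

        /* After initialization next == prev == &disabled, so the same
         * walk sees an empty list and runs zero iterations. */
        INIT_LIST_HEAD(&disabled);
        list_for_each_entry(tfile, &disabled, next)
                ;
}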
2018 Aug 01
2
[PATCH net-next v7 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...*nvq)
> nvq->done_idx = 0;
> }
>
> +static int sk_has_rx_data(struct sock *sk)
> +{
> + struct socket *sock = sk->sk_socket;
> +
> + if (sock->ops->peek_len)
> + return sock->ops->peek_len(sock);
> +
> + return skb_queue_empty(&sk->sk_receive_queue);
> +}
> +
> +static void vhost_net_busy_poll_try_queue(struct vhost_net *net,
> + struct vhost_virtqueue *vq)
> +{
> + if (!vhost_vq_avail_empty(&net->dev, vq)) {
> + vhost_poll_queue(&vq->poll);
> + } else if (unlikely(vhost_enable_notify(&net->...
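The excerpt cuts off mid-function; loudly hedged, a plausible completion of vhost_net_busy_poll_try_queue() reconstructed from the vhost notification API (not a verbatim copy of the patch):

static void vhost_net_busy_poll_try_queue(struct vhost_net *net,
                                          struct vhost_virtqueue *vq)
{
        if (!vhost_vq_avail_empty(&net->dev, vq)) {
                /* The guest already queued work: schedule the handler. */
                vhost_poll_queue(&vq->poll);
        } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
                /* Re-enabling notification raced with new avail entries;
                 * switch it back off and schedule the handler anyway. */
                vhost_disable_notify(&net->dev, vq);
                vhost_poll_queue(&vq->poll);
        }
}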
2010 Mar 03
1
[RFC][ PATCH 1/3] vhost-net: support multiple buffer heads in receiver
....msg_name = NULL,
@@ -204,10 +213,11 @@
};
size_t len, total_len = 0;
- int err;
+ int err, headcount, datalen;
size_t hdr_size;
struct socket *sock = rcu_dereference(vq->private_data);
- if (!sock || skb_queue_empty(&sock->sk->sk_receive_queue))
+
+ if (!sock || !skb_head_len(&sock->sk->sk_receive_queue))
return;
use_mm(net->dev.mm);
@@ -218,13 +228,10 @@
vq_log = unlikely(vhost_has_feature(&net->dev, VHOST_F_LOG_ALL)) ?
vq->log : NULL;
- for (;;) {
-...
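skb_head_len() is the helper this RFC introduces so the receiver can learn the head packet's length rather than just whether the queue is empty. Its body is not shown in this excerpt; a plausible definition (hedged, and ignoring any queue locking the real one may need):

static unsigned int skb_head_len(struct sk_buff_head *skq)
{
        struct sk_buff *head = skb_peek(skq);

        /* Length of the next packet to be received, 0 if none queued,
         * letting the caller size buffers before dequeueing. */
        return head ? head->len : 0;
}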
-...