search for: skb_dequeu

Displaying 20 results from an estimated 28 matches for "skb_dequeu".

2013 Jul 02
3
[PATCH RFC] xen-netback: remove guest RX path dependence on MAX_SKB_FRAGS
...c void xen_netbk_rx_action(struct xen_netbk *netbk)
 	struct sk_buff *skb;
 	LIST_HEAD(notify);
 	int ret;
-	int nr_frags;
 	int count;
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
@@ -690,20 +676,23 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 	count = 0;
 	while ((skb = skb_dequeue(&netbk->rx_queue)) != NULL) {
+		unsigned int ring_slots_required;
 		vif = netdev_priv(skb->dev);
-		nr_frags = skb_shinfo(skb)->nr_frags;
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
-
-		count += nr_frags + 1;
-		__skb...
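The hunk above replaces the fixed "nr_frags + 1" slot estimate with a per-skb ring_slots_required count. A minimal sketch of how such a count can be derived, assuming each ring slot covers at most PAGE_SIZE bytes (the helper name and the frag accessors are illustrative, not the patch's actual code):

    #include <linux/kernel.h>   /* DIV_ROUND_UP */
    #include <linux/mm.h>       /* offset_in_page, PAGE_SIZE */
    #include <linux/skbuff.h>

    /* Hypothetical sketch: count how many page-sized ring slots an skb
     * spans, including the in-page offset of each region.  Not the
     * actual xen-netback code. */
    static unsigned int ring_slots_for_skb(const struct sk_buff *skb)
    {
    	unsigned int i, slots;

    	/* Linear header: count pages from its starting offset to its end. */
    	slots = DIV_ROUND_UP(offset_in_page(skb->data) + skb_headlen(skb),
    			     PAGE_SIZE);

    	/* Each frag may straddle page boundaries as well. */
    	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
    		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

    		slots += DIV_ROUND_UP(skb_frag_off(frag) + skb_frag_size(frag),
    				      PAGE_SIZE);
    	}
    	return slots;
    }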
2023 Mar 10
0
[RFC PATCH v4 0/4] several updates to virtio/vsock
...common words - we don't need to change skbuff state to update
> 'rx_bytes' and 'fwd_cnt' correctly.
> 2) For SOCK_STREAM, when copying data to user fails, current skbuff is
> not dropped. Next read attempt will use same skbuff and last offset.
> Instead of 'skb_dequeue()', 'skb_peek()' + '__skb_unlink()' are used.
> This behaviour was implemented before skbuff support.
> 3) For SOCK_SEQPACKET it removes unneeded 'skb_pull()' call, because for
> this type of socket each skbuff is used only once: after removing it
> fro...
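Point 2 describes a peek-then-unlink pattern: the skb is removed from the receive queue only after the copy to userspace has fully succeeded, so a failed copy can be retried from the same skb. A stripped-down sketch of that shape, assuming a single reader serialized externally (e.g. by the socket lock); this is illustrative, not the actual virtio/vsock code:

    #include <linux/kernel.h>
    #include <linux/skbuff.h>
    #include <linux/uaccess.h>

    /* Illustrative only: unlink the skb only once the copy succeeded,
     * so a failed copy leaves it queued for the next read attempt. */
    static int recv_one_skb(struct sk_buff_head *queue,
    			void __user *buf, size_t len)
    {
    	struct sk_buff *skb;
    	size_t copied;

    	skb = skb_peek(queue);           /* head stays on the queue */
    	if (!skb)
    		return -EAGAIN;

    	copied = min_t(size_t, len, skb->len);
    	if (copy_to_user(buf, skb->data, copied))
    		return -EFAULT;          /* skb stays queued; retry later */

    	spin_lock_bh(&queue->lock);
    	__skb_unlink(skb, queue);        /* success: now remove it */
    	spin_unlock_bh(&queue->lock);
    	kfree_skb(skb);
    	return copied;
    }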
2023 Mar 09
0
[RFC PATCH v3 0/4] several updates to virtio/vsock
...to change skbuff state to update
>>> 'rx_bytes' and 'fwd_cnt' correctly.
>>> 2) For SOCK_STREAM, when copying data to user fails, current skbuff is
>>> not dropped. Next read attempt will use same skbuff and last offset.
>>> Instead of 'skb_dequeue()', 'skb_peek()' + '__skb_unlink()' are used.
>>> This behaviour was implemented before skbuff support.
>>> 3) For SOCK_SEQPACKET it removes unneeded 'skb_pull()' call, because for
>>> this type of socket each skbuff is used only once: aft...
2009 Nov 04
0
[PATCHv8 1/3] tun: export underlying socket
...tun_chr_read\n", tun->dev->name);
-	len = iov_length(iv, count);
-	if (len < 0) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	add_wait_queue(&tun->socket.wait, &wait);
 	while (len) {
 		current->state = TASK_INTERRUPTIBLE;
 		/* Read frames from the queue */
 		if (!(skb=skb_dequeue(&tun->socket.sk->sk_receive_queue))) {
-			if (file->f_flags & O_NONBLOCK) {
+			if (noblock) {
 				ret = -EAGAIN;
 				break;
 			}
@@ -805,6 +797,27 @@ static ssize_t tun_chr_aio_read(struct kiocb *iocb, const struct iovec *iv,
 		current->state = TASK_RUNNING;
 		remove_wait_...
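The loop in this hunk is the classic blocking read: try skb_dequeue() on the socket's receive queue, return -EAGAIN to non-blocking callers when it is empty, otherwise sleep on the wait queue until a frame arrives. A condensed sketch of that shape (using the generic sk_sleep() accessor rather than tun's embedded wait queue; not the driver verbatim):

    #include <linux/err.h>
    #include <linux/sched/signal.h>
    #include <linux/skbuff.h>
    #include <linux/wait.h>
    #include <net/sock.h>

    /* Sketch: dequeue one frame, sleeping if the queue is empty and
     * blocking is allowed.  Returns an skb or an ERR_PTR code. */
    static struct sk_buff *read_one_frame(struct sock *sk, bool noblock)
    {
    	DECLARE_WAITQUEUE(wait, current);
    	struct sk_buff *skb;

    	add_wait_queue(sk_sleep(sk), &wait);
    	for (;;) {
    		set_current_state(TASK_INTERRUPTIBLE);

    		/* Read a frame from the queue, if one is there. */
    		skb = skb_dequeue(&sk->sk_receive_queue);
    		if (skb)
    			break;
    		if (noblock) {
    			skb = ERR_PTR(-EAGAIN);
    			break;
    		}
    		if (signal_pending(current)) {
    			skb = ERR_PTR(-ERESTARTSYS);
    			break;
    		}
    		schedule();              /* wake on enqueue or signal */
    	}
    	set_current_state(TASK_RUNNING);
    	remove_wait_queue(sk_sleep(sk), &wait);
    	return skb;
    }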
2009 Nov 03
1
[PATCHv7 1/3] tun: export underlying socket
...tun_chr_read\n", tun->dev->name);
-	len = iov_length(iv, count);
-	if (len < 0) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	add_wait_queue(&tun->socket.wait, &wait);
 	while (len) {
 		current->state = TASK_INTERRUPTIBLE;
 		/* Read frames from the queue */
 		if (!(skb=skb_dequeue(&tun->socket.sk->sk_receive_queue))) {
-			if (file->f_flags & O_NONBLOCK) {
+			if (noblock) {
 				ret = -EAGAIN;
 				break;
 			}
@@ -805,6 +797,27 @@ static ssize_t tun_chr_aio_read(struct kiocb *iocb, const struct iovec *iv,
 		current->state = TASK_RUNNING;
 		remove_wait_...
2009 Nov 02
1
[PATCHv6 1/3] tun: export underlying socket
...tun_chr_read\n", tun->dev->name);
-	len = iov_length(iv, count);
-	if (len < 0) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	add_wait_queue(&tun->socket.wait, &wait);
 	while (len) {
 		current->state = TASK_INTERRUPTIBLE;
 		/* Read frames from the queue */
 		if (!(skb=skb_dequeue(&tun->socket.sk->sk_receive_queue))) {
-			if (file->f_flags & O_NONBLOCK) {
+			if (noblock) {
 				ret = -EAGAIN;
 				break;
 			}
@@ -805,6 +797,27 @@ static ssize_t tun_chr_aio_read(struct kiocb *iocb, const struct iovec *iv,
 		current->state = TASK_RUNNING;
 		remove_wait_...
2013 Jul 09
20
[PATCH 1/1] xen/netback: correctly calculate required slots of skb.
When counting the slots an skb requires, netback uses DIV_ROUND_UP on the header length alone to get the slots needed for header data. This is wrong when the header data starts at a non-zero offset within its page, and it is also inconsistent with the subsequent slot calculation in netbk_gop_skb, which computes required slots from both the offset and the length of the header data within the page. It is possible that required slots
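A small worked example makes the bug concrete, assuming 4 KiB pages and taking offset as the header's starting position within its first page (the numbers are hypothetical):

    /* Hypothetical numbers: a 100-byte header starting at offset 4000
     * straddles a page boundary, so it really needs two slots. */
    unsigned long offset = 4000, len = 100;
    unsigned int naive  = DIV_ROUND_UP(len, PAGE_SIZE);          /* = 1 slot  */
    unsigned int actual = DIV_ROUND_UP(offset + len, PAGE_SIZE); /* = 2 slots */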
2010 Jan 27
9
[Bridge] [PATCH 0/3 v3] macvtap driver
This is the third version of the macvtap device driver, following another major restructuring and a lot of bug fixes:
* Change macvtap to be based around a struct sock
* macvtap: fix initialization
* return 0 to netlink
* don't use rcu for q->file and q->vlan pointers
* macvtap: checkpatch.pl fixes
* macvtap: fix tun IFF flags
* Use a struct socket to make tx flow control work
* disable
2009 Dec 03
3
[RFC 0/2] macvtap, second try
I did not get this ready for the merge window, but people asked what the status of this is so I'm posting it now to solicit feedback. The first patch just adds some hooks into macvlan.c and is less invasive than the previous version. That part should be fine and I'd like this to get merged into macvlan for 2.6.33 if people agree that the approach is right. The second patch adds the
2008 Apr 05
11
[PATCH RFC 1/5] vringfd syscall
For virtualization, we've developed virtio_ring for efficient communication. This would also work well for userspace-kernel communication, particularly for things like the tun device. By using the same ABI, we can join guests to the host kernel trivially. These patches are fairly alpha; I've seen some network stalls I have to track down and there are some fixmes. Comments welcome!
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the current single-queue tap cannot scale guest network performance as the number of vcpus increases. The following series therefore implements multi-queue support in tun/tap. To take advantage of this, a multi-queue capable driver and qemu are also needed. I just rebased the latest version of