Displaying 20 results from an estimated 2000 matches similar to: "xennet: skb rides the rocket: 20 slots"
2013 Jul 09
20
[PATCH 1/1] xen/netback: correctly calculate required slots of skb.
When counting the required slots for an skb, netback uses DIV_ROUND_UP directly to
get the slots required by the header data. This is wrong when the offset of the
header data within its page is not zero, and it is also inconsistent with the
following calculation of required slots in netbk_gop_skb.
In netbk_gop_skb, required slots are calculated based on the offset and length of
the header data within its page. It is possible that required slots
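To see why the offset matters, here is a minimal illustrative C sketch (not the netback source; PAGE_SIZE and the helper names are assumptions for the example). Counting from the length alone misses the extra slot needed when the data starts partway into a page:

#include <stdio.h>

#define PAGE_SIZE 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* The questioned count: slots derived from the length alone. */
static unsigned long slots_by_len(unsigned long len)
{
        return DIV_ROUND_UP(len, PAGE_SIZE);
}

/* Offset-aware count: pages actually spanned by [offset, offset + len). */
static unsigned long slots_by_offset(unsigned long offset, unsigned long len)
{
        return DIV_ROUND_UP(offset % PAGE_SIZE + len, PAGE_SIZE);
}

int main(void)
{
        /* 200 bytes starting 96 bytes before a page boundary: */
        printf("%lu\n", slots_by_len(200));           /* prints 1 */
        printf("%lu\n", slots_by_offset(4000, 200));  /* prints 2 */
        return 0;
}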
2013 Jul 10
13
[PATCH v2 1/1] xen/netback: correctly calculate required slots of skb.
When counting the required slots for an skb, netback uses DIV_ROUND_UP directly to
get the slots required by the header data. This is wrong when the offset of the
header data within its page is not zero, and it is also inconsistent with the
following calculation of required slots in netbk_gop_skb.
In netbk_gop_skb, required slots are calculated based on the offset and length of
the header data within its page. It is possible that required slots
2012 Jan 12
9
Re: [PATCH] add netconsole support for xen-netfront
On Wed, Jan 11, 2012 at 04:52:36PM +0800, Zhenzhong Duan wrote:
> add polling interface to xen-netfront device to support netconsole
> 
Ian, any thoughts on the spinlock changes?
> Signed-off-by: Tina.Yang <tina.yang@oracle.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Jeremy Fitzhardinge <jeremy@goop.org>
> Signed-off-by: Zhenzhong.Duan
2012 Aug 13
9
[PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
Hi,
I ran into an issue where the netback driver crashes with BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)). It happens on an Intel 10Gbps network when larger MTU values are used. The problem seems to be the way the slots are counted. After applying this patch, things ran fine in my environment. I request that you validate my changes.
Thanks
Siva
2013 Jul 02
3
[PATCH RFC] xen-netback: remove guest RX path dependence on MAX_SKB_FRAGS
This dependence is undesirable and logically incorrect.
It's undesirable because the Xen network protocol should not depend on an
OS-specific constant.
It's incorrect because the number of ring slots required doesn't correspond to
the number of frags an SKB has (consider compound page frags).
This patch removes this dependence by correctly counting the ring slots
required.
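A hedged sketch of the counting idea (simplified stand-in types, not the xen-netback source): a frag backed by a compound page can span several ring slots, so the slot count must come from each frag's offset and size, not from the number of frags.

#define PAGE_SIZE 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Simplified stand-in for skb_frag_t. */
struct frag {
        unsigned long offset;
        unsigned long size;
};

/* Ring slots needed for the frags; a single compound-page frag may
 * need several slots, so the total is not bounded by nr_frags. */
static unsigned long count_ring_slots(const struct frag *frags, int nr_frags)
{
        unsigned long slots = 0;
        int i;

        for (i = 0; i < nr_frags; i++)
                slots += DIV_ROUND_UP(frags[i].offset % PAGE_SIZE + frags[i].size,
                                      PAGE_SIZE);
        return slots;
}

/* Example: one 64KB compound-page frag is 1 frag but 16 ring slots,
 * so a bound derived from MAX_SKB_FRAGS alone is wrong. */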
2008 Aug 02
0
[PATCH 10/10] drivers/net/xen-netfront.c: Use DIV_ROUND_UP
From: Julia Lawall <julia at diku.dk>
The kernel.h macro DIV_ROUND_UP performs the computation (((n) + (d) - 1) /
(d)), and is perhaps more readable than the open-coded form.
An extract of the semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@haskernel@
@@
#include <linux/kernel.h>
@depends on haskernel@
expression n,d;
@@
(
- (n + d - 1) / d
+
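For illustration, the before/after forms the semantic patch swaps between compute the same value; a small standalone sketch (the macro body is copied from the description above):

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Before the patch: open-coded round-up division. */
static unsigned int pages_open_coded(unsigned int len, unsigned int page_size)
{
        return (len + page_size - 1) / page_size;
}

/* After the patch: the same computation via the kernel.h macro. */
static unsigned int pages_macro(unsigned int len, unsigned int page_size)
{
        return DIV_ROUND_UP(len, page_size);
}

/* pages_open_coded(4200, 4096) == pages_macro(4200, 4096) == 2 */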
2023 Mar 28
1
[PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo
This patch introduces a new function that releases the
xdp shinfo. A subsequent patch will reuse this function.
Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio_net.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 72b9d6ee4024..09aed60e2f51 100644
---
2013 Nov 12
12
[PATCH net-next 1/4] virtio-net: mergeable buffer size should include virtio-net header
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page
frag allocators") changed the mergeable receive buffer size from PAGE_SIZE
to MTU-size. However, the merge buffer size does not take into account the
size of the virtio-net header. Consequently, packets that are MTU-size
will take two buffers instead of one (to store the virtio-net header),
substantially decreasing the
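The arithmetic behind the complaint, as a hedged standalone sketch (the 12-byte header assumes a mergeable-rx virtio-net header; the values are illustrative):

#include <stdio.h>

int main(void)
{
        unsigned int payload = 1500;  /* MTU-size packet */
        unsigned int hdr     = 12;    /* assumed sizeof(virtio_net_hdr_mrg_rxbuf) */
        unsigned int need    = payload + hdr;
        unsigned int buf;

        /* Buffer sized to the MTU alone: header + packet spill into two buffers. */
        buf = 1500;
        printf("buffers used: %u\n", (need + buf - 1) / buf);  /* 2 */

        /* Buffer sized to MTU + header: one buffer suffices. */
        buf = 1512;
        printf("buffers used: %u\n", (need + buf - 1) / buf);  /* 1 */
        return 0;
}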
2013 Oct 31
6
[PATCH net-next 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized and
GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
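A hedged user-space model of what coalescing buys (simplified types, not the kernel API): when the new chunk is physically contiguous with the last frag, the frag is grown in place instead of consuming another frag slot.

#include <stdbool.h>
#include <stddef.h>

struct frag {
        char  *base;
        size_t size;
};

/* Merge [buf, buf + len) into the last frag if it directly follows it;
 * otherwise the caller would have to add a new frag entry. */
static bool coalesce_rx_frag(struct frag *last, char *buf, size_t len)
{
        if (last->base + last->size != buf)
                return false;   /* not contiguous: needs a new frag */
        last->size += len;      /* contiguous: grow the existing frag */
        return true;
}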
2013 Oct 28
8
[PATCH net-next] virtio_net: migrate mergeable rx buffers to page frag allocators
The virtio_net driver's mergeable receive buffer allocator
uses 4KB packet buffers. For MTU-sized traffic, SKB truesize
is > 4KB but only ~1500 bytes of the buffer is used to store
packet data, reducing the effective TCP window size
substantially. This patch addresses the performance concerns
with mergeable receive buffers by allocating MTU-sized packet
buffers using page frag allocators.
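Rough numbers behind the truesize claim, as an illustrative sketch (the 1500-byte payload and 1536-byte frag size are assumptions):

#include <stdio.h>

int main(void)
{
        unsigned int payload = 1500;

        /* 4KB buffer: each MTU-size packet is charged a full page. */
        printf("overhead: %.2fx\n", 4096.0 / payload);  /* ~2.73x */

        /* MTU-size page-frag buffer: the charge tracks actual use,
         * so the same rcvbuf budget admits far more packet data. */
        printf("overhead: %.2fx\n", 1536.0 / payload);  /* ~1.02x */
        return 0;
}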
2012 Jun 23
7
GPLPV xennet bsod when vcpu>15
Hello,
I installed the signed drivers from
http://wiki.univention.de/index.php?title=Installing-signed-GPLPV-drivers and
I ran into a BSOD on a Windows 2008 Server R2 Enterprise domU with a
large number of vcpus. The BSOD is related to xennet.sys.
After some trials I found that it runs fine with up to 15 cores. With 16 or
more, the BSOD kicks in when booting the domU.
The hardware (4 times
2013 Oct 31
4
[PATCH net-next V2 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized and
GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
2010 May 27
11
uninstalling gplpv from win7x64 gives BSOD
I have installed the 210 gplpv (debug) drivers. When I try to update them with
the current gplpv I get a BSOD, and likewise when I try to uninstall them.
I have made a boot entry that specifies NOGPLPV, which gives me a bootable
system after such a crash. By uninstalling the leftover driver packages after
getting a BSOD during uninstall of the main package, I found that uninstalling
the net driver leads to the BSOD. Now
2008 Aug 15
1
No module xennet found for kernel 2.6.18.8-xen
Hi all,
 when I try to build an initial ramdisk with the option --with=xennet, I get
the following error
No module xennet found for kernel 2.6.18.8-xen
I looked at /lib/modules/2.6.18.8-xen/kernel/drivers/xen and there is no
xennet module there, which causes the problem.
How can I get the xennet module?
Thanks,
 Luca
2009 Jan 29
8
Help on setting up a PVM
I'm going to set up a PVM on xen-3.3.1 debian-amd64. I need advice about the
best way to install a fresh Debian on it (i.e. how to choose the kernel for
the PVM). I've installed Xen from source and only have one Xen kernel in
/boot, which I'm using for dom0; should I use the same kernel for domUs?
Is there any problem with using vcpus=2? The other VM is an HVM running
2006 Jun 23
3
No eth0 in DomU in FC5
If I try to ifup eth0, I get the following:
Device eth0 does not seem to be present, delaying initialization.
DomU: Linux fedora1 2.6.16-1.2133_FC5xenU #1 SMP Tue Jun 6 02:58:27 EDT 2006
i686 i686 i386 GNU/Linux
config:
kernel = "/boot/vmlinuz-2.6-xenU"
#ramdisk="/boot/initrd-2.6.16-1.2133_FC5xenU.img"
memory = 128
name = "fedora1"
#dhcp = "dhcp"
disk =