Displaying 20 results from an estimated 30000 matches similar to: "skb_checksum_setup() placement in pv-ops vs. legacy kernel"
2013 Jan 04
31
xennet: skb rides the rocket: 20 slots
Hi Ian,
Today I fired up an old VM with a BitTorrent client, trying to download some torrents.
I seem to be hitting the unlikely case of "xennet: skb rides the rocket: xx slots", and this results in some dropped packets in domU; I don't see any warnings in dom0.
I have added some extra info, but I don't have enough knowledge to tell whether this could/should be prevented from
2011 Jun 24
19
SKB paged fragment lifecycle on receive
When I was preparing Xen's netback driver for upstream, one of the things
I removed was the zero-copy guest transmit (i.e. netback receive)
support.
In this mode guest data pages ("foreign pages") were mapped into the
backend domain (using Xen grant-table functionality) and placed into the
skb's paged frag list (skb_shinfo(skb)->frags, I hope I am using the
right
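For context, this is roughly how a receive path attaches a page to the paged frag list the mail refers to; a minimal sketch using the standard skb helpers, not code taken from netback itself:

#include <linux/skbuff.h>

/* Hypothetical helper for illustration: hang one page off
 * skb_shinfo(skb)->frags and account for its length. */
static void attach_rx_page(struct sk_buff *skb, struct page *page,
			   unsigned int offset, unsigned int len)
{
	int i = skb_shinfo(skb)->nr_frags;

	/* records page/offset/len in frags[i] and bumps nr_frags;
	 * it does not take a reference on the page */
	skb_fill_page_desc(skb, i, page, offset, len);
	skb->len      += len;
	skb->data_len += len;
	skb->truesize += PAGE_SIZE;
}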
2011 Jan 03
13
Re: pvusb drivers for pvops 2.6.32.x kernel
Hello,
Jeremy: See the included patch. If it's OK, it'd be nice
to get it into the xen/stable-2.6.32.x branch.
Thanks Nathanael!
-- Pasi
----- Forwarded message from Nathanael Rensen <nathanael@polymorpheus.com> -----
From: Nathanael Rensen <nathanael@polymorpheus.com>
To: Pasi Kärkkäinen <pasik@iki.fi>
Cc: n_iwamatsu@jp.fujitsu.com
Date: Mon, 3 Jan 2011
2010 Mar 17
11
Checksumming problem in pv_ops dom0 kernel / netback
Hello,
I seem to be having some trouble with the latest 2.6.31.6 and 2.6.32.9 Xen dom0 pv_ops trees.
Our platform:
-Xen 3.4.3-rc3 (also tried 3.4.2 on 2.6.31.6 pv_ops dom0)
-2.6.32.9 pv_ops dom0 kernel, perhaps a week old checkout from xen/stable git (can provide changeset if requested).
-100+ domUs, all PV.
Ever since we switched to a pv_ops dom0 kernel (we were using 2.6.26
2011 Jan 06
11
[RFC PATCH v01] Xen PVSCSI drivers for pvops xen/stable-2.6.32.x kernel
Hello,
http://pasik.reaktio.net/xen/patches/xen-pvscsi-drivers-linux-2.6.32.27-pvops-v01.diff
This is the first version of the Xen PVSCSI drivers, both the scsiback backend and
the scsifront frontend, ported from the Novell SLES11SP1 2.6.32 Xenlinux kernel to
the pvops xen/stable-2.6.32.x branch.
At the moment it's *only* compile-tested with the latest xen/stable-2.6.32.x
git kernel as of today
2010 Jun 02
14
ARP problems with xen 4.0 with pvops kernel
Hello,
Finally I managed to get Xen 4.0 working on Ubuntu 10.04 with a pvops
kernel and libvirt. However, I am having some problems with networking...
After the initial installation with a netinstall image in HVM mode, when I
converted the VM to Xen PV (via pygrub with the current Ubuntu kernel),
networking started to act weird...
Basically I'm not using a network script from Xen. I define a
2011 Dec 17
12
xl and vifname
Hello,
While using xen 4.1-testing.hg (r23202) I noticed that the 'vifname'
config value is not handled by xl, but is handled by xm. Is there a
workaround or patch for this? My firewall and scripts depend on static
vifnames.
xm works well - it adds multiple interfaces to multiple bridges and
configures them appropriately. But with xl it differs -- here only tap
devices are
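For reference, an xm-style vif line that pins the interface name looks like this; the bridge and interface names are made-up examples:

vif = [ 'bridge=xenbr0, vifname=vif-fw0' ]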
2013 Apr 30
6
[PATCH net-next 2/2] xen-netback: avoid allocating variable size array on stack
Tune xen_netbk_count_requests so it does not touch the working array beyond its
limit, allowing the working array size to be made constant.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
drivers/net/xen-netback/netback.c | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index
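The diff itself is truncated above, but the technique named in the subject can be sketched generically as below; the struct, limit and names are hypothetical, not the actual netback code:

#include <linux/errno.h>

#define MAX_WORK_SLOTS 18	/* assumed fixed limit for illustration */

struct slot_req {
	unsigned int gref, offset, size;
};

/* Copy at most MAX_WORK_SLOTS requests into a constant-size working
 * array instead of sizing a stack array from a frontend-supplied count. */
static int copy_requests(const struct slot_req *ring, unsigned int nr_req,
			 struct slot_req work[MAX_WORK_SLOTS])
{
	unsigned int i;

	if (nr_req > MAX_WORK_SLOTS)
		return -E2BIG;		/* fail instead of overrunning */

	for (i = 0; i < nr_req; i++)
		work[i] = ring[i];
	return nr_req;
}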
2006 May 09
4
[PATCH] Fix checksum errors when firewalling in domU
Another checksum offload problem was reported on xen-users, when using a
domU as a firewall:
http://lists.xensource.com/archives/html/xen-users/2006-04/msg01150.html
It also fails without VLANs.
The path from dom0->domU with ip_summed==CHECKSUM_HW/proto_csum_blank==1
is broken.
- skb_checksum_setup() assumes that a checksum will definitely be
calculated in dev_queue_xmit(), but the
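The snippet cuts off, but the general remedy when a deferred checksum will never be filled in by hardware is to compute it in software before the packet leaves the offload path; a sketch using the modern helper and flag names (CHECKSUM_PARTIAL is the successor of CHECKSUM_HW), not the 2006 patch itself:

#include <linux/skbuff.h>
#include <linux/netdevice.h>

static int finish_partial_csum(struct sk_buff *skb)
{
	/* If the checksum was deferred to hardware that will never see
	 * this packet, fill it in in software instead. */
	if (skb->ip_summed == CHECKSUM_PARTIAL)
		return skb_checksum_help(skb);
	return 0;
}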
2012 Aug 13
9
[PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
Hi,
I ran into an issue where the netback driver crashes with BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)). It happens on an Intel 10 Gbps network when larger MTU values are used. The problem seems to be the way the slots are counted. After applying this patch, things ran fine in my environment. Please validate my changes.
Thanks
Siva
2013 Jul 09
20
[PATCH 1/1] xen/netback: correctly calculate required slots of skb.
When counting the required slots for an skb, netback uses DIV_ROUND_UP directly
to get the slots required by the header data. This is wrong when the offset of
the header data within its page is not zero, and it is also inconsistent with
the subsequent slot calculation in netbk_gop_skb.
In netbk_gop_skb, the required slots are calculated based on the offset and
length within the page of the header data. It is possible that the required slots
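The arithmetic being corrected can be illustrated as follows, assuming the usual kernel macros; the helper itself is hypothetical:

#include <linux/kernel.h>	/* DIV_ROUND_UP */
#include <linux/mm.h>		/* offset_in_page, PAGE_SIZE */
#include <linux/skbuff.h>	/* skb_headlen */

static unsigned int head_slots(const struct sk_buff *skb)
{
	/* DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE) undercounts by one slot
	 * whenever the header data starts part-way into a page and spills
	 * over the page boundary; counting from the in-page offset does not. */
	return DIV_ROUND_UP(offset_in_page(skb->data) + skb_headlen(skb),
			    PAGE_SIZE);
}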
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
> On 2014/2/26 13:53, Jason Wang wrote:
>> On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
>>> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
>>>> We used to stop the handling of tx when the number of pending DMAs
>>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
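A toy sketch of the policy under discussion, with made-up names rather than vhost's own structures: instead of stalling tx while too many zero-copy buffers are outstanding, fall back to copying the data.

#define MAX_PEND 128	/* stand-in for VHOST_MAX_PEND */

struct tx_state {
	unsigned int pend;	/* zero-copy transmissions not yet completed */
};

/* Old behaviour: stop handling tx entirely once pend >= MAX_PEND.
 * Proposed behaviour: keep transmitting, but copy the data instead of
 * using zero-copy, so one slow packet cannot block the whole queue. */
static int use_zerocopy(const struct tx_state *s)
{
	return s->pend < MAX_PEND;
}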
2011 Dec 09
4
[PATCH v3 REPOST] xen-netfront: delay gARP until backend switches to Connected
After a guest is live migrated, the xen-netfront driver emits a gratuitous
ARP message, so that networking hardware on the target host's subnet can
take notice, and public routing to the guest is re-established. However,
if the packet appears on the backend interface before the backend is added
to the target host's bridge, the packet is lost, and the migrated guest's
peers become
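A sketch of the idea in the subject line, not the actual patch: trigger the gratuitous ARP only once the backend reports Connected, so it cannot be emitted before the backend vif is on the target host's bridge. The handler shape here is assumed; netdev_notify_peers() is the current kernel helper for sending the gARP.

#include <linux/netdevice.h>
#include <xen/xenbus.h>

static void frontend_backend_changed(struct net_device *netdev,
				     enum xenbus_state backend_state)
{
	if (backend_state == XenbusStateConnected)
		netdev_notify_peers(netdev);	/* gratuitous ARP is safe now */
}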
2012 Apr 01
9
[ANNOUNCE] Prebuilt Xen PV-HVM templates.
Hi guys,
I have started preparing a library of PV-HVM templates for use on Xen
4.X+ and PVOPS dom0.
This was brought up late last year as something that would make Xen
alot easier for beginners to try.
They are also great for testing - I will be setting up some stuff to
do automatic testing of distro kernel compatibility against
xen-unstable.
Mirror page is here:
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
>> We used to stop the handling of tx when the number of pending DMAs
>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>> of both host and guest. But it was too aggressive in some cases, since
>> any delay or blocking of a single packet