Displaying 20 results from an estimated 3000 matches similar to: "[PATCH] VNIF: Using smart polling instead of event notification."
2010 May 05
5
[Pv-ops][PATCH 0/4 v4] Netback multiple threads support
This is netback multithread support patchset version 4.
Main Changes from v3:
1. Patchset is against xen/next tree.
2. Merge group and idx into netif->mapping.
3. Use vmalloc to allocate netbk structures.
Main Changes from v2:
1. Merge "group" and "idx" into "netif->mapping", therefore
page_ext is not used now.
2. Put netbk_add_netif() and netbk_remove_netif()
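For illustration, a minimal sketch of what merging a group number and a per-group index into a single mapping field can look like, assuming a 16/16 bit split (the field widths and helper names are assumptions, not the patchset's actual layout):

/* Pack a netback group and a per-group index into one word, in the
 * spirit of "merge group and idx into netif->mapping".  The 16-bit
 * split below is illustrative only. */
#define NETBK_GROUP_SHIFT 16
#define NETBK_IDX_MASK    0xffffu

static inline unsigned int netbk_make_mapping(unsigned int group,
                                              unsigned int idx)
{
        return (group << NETBK_GROUP_SHIFT) | (idx & NETBK_IDX_MASK);
}

static inline unsigned int netbk_mapping_group(unsigned int mapping)
{
        return mapping >> NETBK_GROUP_SHIFT;
}

static inline unsigned int netbk_mapping_idx(unsigned int mapping)
{
        return mapping & NETBK_IDX_MASK;
}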
2012 Feb 23
5
Pls help: netfront tx ring frozen (any clues appreciated)
Hi,
We are running into a situation where the rsp_prod index in the shared
ring is not getting updated by the netback for the netfront tx ring.
We see that rsp_cons has the same value as rsp_prod, with req_prod 236
slots away (the tx ring is full).
From looking at the netfront driver code, it looks as if xennet_tx_buf_gc
processing only happens if rsp_prod is greater than rsp_cons.
Our
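For reference, a minimal sketch of the consumer-side gate being described, modelled on the generic Xen shared-ring pattern (the reclaim helper is hypothetical; this is not the exact xennet_tx_buf_gc code):

/* Responses are only processed while the backend's producer index is
 * ahead of our consumer index; if rsp_prod never advances, no tx
 * slots are ever reclaimed and the ring stays full. */
RING_IDX cons, prod;

prod = queue->tx.sring->rsp_prod;
rmb();  /* read rsp_prod before reading the responses themselves */

for (cons = queue->tx.rsp_cons; cons != prod; cons++)
        reclaim_tx_slot(queue, cons);   /* hypothetical helper */

queue->tx.rsp_cons = prod;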
2013 Jul 09
20
[PATCH 1/1] xen/netback: correctly calculate required slots of skb.
When counting the required slots for an skb, netback directly uses
DIV_ROUND_UP to get the slots required by the header data. This is wrong
when the offset of the header data within its page is not zero, and it is
also inconsistent with the following calculation of required slots in
netbk_gop_skb.
In netbk_gop_skb, the required slots are calculated based on the offset
and len of the header data within its page. It is possible that the
required slots
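A minimal sketch of the miscount being described (illustrative variables, not the patch's actual hunk): the page count must include the buffer's starting offset within its first page.

/* Wrong: ignores the in-page offset of the header data.  A 100-byte
 * header starting 50 bytes before a page boundary spans two pages,
 * but this reports one slot. */
unsigned int slots_wrong = DIV_ROUND_UP(len, PAGE_SIZE);

/* Consistent with the per-page walk in netbk_gop_skb: count the
 * pages actually touched, starting from the in-page offset. */
unsigned int slots_right = DIV_ROUND_UP(offset + len, PAGE_SIZE);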
2013 Jun 12
26
Interesting observation with network event notification and batching
Hi all
I'm hacking on a netback trying to identify whether TLB flushes cause a
heavy performance penalty on the Tx path. The hack is quite nasty (you
would not want to know, trust me).
Basically what it does is: 1) alter the network protocol to pass along
mfns instead of grant references, 2) when the backend sees a new mfn,
map it RO and cache it in its own address space.
With this
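A minimal sketch of step 2 as described, assuming a hypothetical cache table and mapping helpers (this is not the actual hack):

/* On Tx, look the guest mfn up in a backend-side cache; the first
 * time an mfn is seen, map it read-only and remember the mapping so
 * later packets reuse it without a new map/unmap (and TLB flush). */
static void *backend_map_cached(struct mfn_cache *cache, unsigned long mfn)
{
        void *va = mfn_cache_lookup(cache, mfn);    /* hypothetical */

        if (!va) {
                va = map_foreign_mfn_ro(mfn);       /* hypothetical */
                mfn_cache_insert(cache, mfn, va);   /* hypothetical */
        }
        return va;
}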
2006 Oct 03
1
a domain VTx with the VNIF does hang.
Hi all, my name is Hirofumi Tsujimura.
We are porting and testing PV-on-HVM on IPF.
This is my first time sending mail to the list.
I believe I found a problem when I tested the VNIF.
My test procedure is as follows:
1. Create a domain VTx and attach the VNIF to it.
2. Create a domain U.
3. Send a packet to the domain VTx from the domain U with the ping
command.
Then, the domain VTx
2013 Jun 28
3
[PATCH next] xen: Use more current logging styles
Instead of mixing printk and pr_<level> forms,
just use pr_<level>
Miscellaneous changes around these conversions:
Add a missing newline to avoid message interleaving,
coalesce formats, reflow modified lines to 80 columns.
Signed-off-by: Joe Perches <joe at perches.com>
---
drivers/net/xen-netback/netback.c | 7 +++----
drivers/net/xen-netfront.c | 28
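The conversion being described, in sketch form (illustrative messages, not the actual hunks):

/* Before: mixed styles; a format without its trailing newline can
 * interleave with messages from other CPUs. */
printk(KERN_WARNING "xen_net: could not allocate rx buffer\n");
printk(KERN_INFO "backend features:");
printk(" sg");

/* After: one style, coalesced format, newline included. */
pr_warn("xen_net: could not allocate rx buffer\n");
pr_info("backend features: sg\n");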
2011 May 02
32
[PATCH] blkback: Fix block I/O latency issue
In the blkback driver, after I/O requests are submitted to the Dom-0 block I/O subsystem, blkback effectively goes to 'sleep' without letting blkfront know about it (req_event isn't set appropriately). Hence blkfront doesn't notify blkback when it submits a new I/O, thus delaying the 'dispatch' of the new I/O to the Dom-0 block I/O subsystem. The new I/O is
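For context, a minimal sketch of the standard ring idiom that avoids this (RING_FINAL_CHECK_FOR_REQUESTS comes from Xen's ring.h; the dispatch helper and ring variable are illustrative):

int more_to_do;

do {
        while (RING_HAS_UNCONSUMED_REQUESTS(&blk_ring))
                dispatch_one_request(&blk_ring);    /* hypothetical */

        /* Re-arms req_event (sring->req_event = req_cons + 1) before
         * the final emptiness re-check, so the frontend knows it must
         * notify us about the next request it queues. */
        RING_FINAL_CHECK_FOR_REQUESTS(&blk_ring, more_to_do);
} while (more_to_do);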
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
This is really trying to clean up some virt code, as suggested by Peter, who
said
> You could of course go fix that instead of mutilating things into
> sort-of functional state.
This work is needed for virtio, so it's probably easiest to
merge it through my tree - is this fine by everyone?
Arnd, if you agree, could you ack this please?
Note to arch maintainers: please don't
2013 Jan 04
31
xennet: skb rides the rocket: 20 slots
Hi Ian,
Today I fired up an old VM with a bittorrent client, trying to download some torrents.
I seem to be hitting the unlikely case of "xennet: skb rides the rocket: xx slots", and this results in some dropped packets in domU; I don't see any warnings in dom0.
I have added some extra info, but I don't have enough knowledge of whether this could/should be prevented from
2015 Dec 31
0
[PATCH v2 34/34] xen/io: use virt_xxx barriers
include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.
For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.
Switch to virt_xxx barriers which serve this exact purpose.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
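The substitution the patch makes, sketched with the wrapper macros from ring.h (presentation here is a simplified sketch, not the full hunks):

/* Before: unconditional full barriers, paid even by a UP guest. */
#define xen_wmb() wmb()
#define xen_mb()  mb()

/* After: virt_* barriers compile down to compiler barriers on !SMP
 * builds, yet still order accesses against the (possibly SMP) host. */
#define xen_wmb() virt_wmb()
#define xen_mb()  virt_mb()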
2015 Dec 30
0
[PATCH 32/34] xen/io: use __smp_XXX barriers
include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.
For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.
Switch to __smp_XXX barriers which serve this exact purpose.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
2008 Jun 02
2
problems with netback
Hi,
I have some problems with netback.
1. What is mmap_pages for, which comes from a balloon operation?
mmap_pages = alloc_empty_pages_and_pagevec(MAX_PENDING_REQS)
2. What is the meaning of "refcnt" defined in netif_t (netif->refcnt)?
3. IRQ enable and disable,
such as disable_irq(netif->irq);
why does netback use such enable & disable operations while netfront doesn't use them for its
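On question 3, a minimal sketch of the masking pattern being asked about (illustrative, not netback's exact code):

/* Mask the interrupt so a storm of frontend notifications does not
 * keep re-entering the handler while a batch is being processed,
 * then unmask once the batch is done. */
disable_irq(netif->irq);
process_tx_ring(netif);         /* hypothetical batch worker */
enable_irq(netif->irq);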
2013 Sep 12
15
large packet support in netfront driver and guest network throughput
Hi All,
I am sure this has been answered somewhere on the list in the past, but I can't find it. I was wondering if the Linux guest netfront driver has GRO support in it. tcpdump shows packets coming in with 1500 bytes, although eth0 in dom0 and the vif corresponding to the Linux guest in dom0 show that they receive large packets:
In dom0:
eth0 Link encap:Ethernet HWaddr
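For reference, a minimal sketch of how a NAPI driver opts into GRO in general (generic kernel API; this is not netfront's actual receive path, and the completion helper is hypothetical):

/* In the NAPI poll routine, hand completed skbs to napi_gro_receive()
 * instead of netif_receive_skb() so the stack can coalesce them into
 * larger-than-MTU packets before the guest's IP layer sees them. */
static int rx_poll_sketch(struct napi_struct *napi, int budget)
{
        struct sk_buff *skb;
        int done = 0;

        while (done < budget &&
               (skb = next_completed_rx_skb(napi))) {   /* hypothetical */
                napi_gro_receive(napi, skb);
                done++;
        }
        if (done < budget)
                napi_complete(napi);
        return done;
}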
2006 May 09
4
[PATCH] Fix checksum errors when firewalling in domU
Another checksum offload problem was reported on xen-users, when using a
domU as a firewall:
http://lists.xensource.com/archives/html/xen-users/2006-04/msg01150.html
It also fails without VLANs.
The path from dom0->domU with ip_summed==CHECKSUM_HW/proto_csum_blank==1
is broken.
- skb_checksum_setup() assumes that a checksum will definitely be
calculated in dev_queue_xmit(), but
the
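A minimal sketch of the invariant at issue, using the current kernel's names (CHECKSUM_PARTIAL is the modern spelling of the era's CHECKSUM_HW; skb_checksum_help is a real kernel function, the wrapper is illustrative):

/* A packet marked "checksum still to be computed" must have its
 * checksum filled in software before reaching any path that will
 * not compute it in hardware. */
static int finish_csum_sketch(struct sk_buff *skb)
{
        if (skb->ip_summed == CHECKSUM_PARTIAL)
                return skb_checksum_help(skb);  /* compute and store */
        return 0;
}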
2013 Jul 10
13
[PATCH v2 1/1] xen/netback: correctly calculate required slots of skb.
When counting the required slots for an skb, netback directly uses
DIV_ROUND_UP to get the slots required by the header data. This is wrong
when the offset of the header data within its page is not zero, and it is
also inconsistent with the following calculation of required slots in
netbk_gop_skb.
In netbk_gop_skb, the required slots are calculated based on the offset
and len of the header data within its page. It is possible that the
required slots
2011 Mar 31
3
[PATCH RESEND] net: convert xen-netfront to hw_features
Not tested in any way. The original code for offload setting seems broken,
as it resets the features on every netback reconnect.
This will set GSO_ROBUST at device creation time (earlier than connect time).
RX checksum offload is forced on - so advertise it as such.
Signed-off-by: Michał Mirosław <mirq-linux at rere.qmqm.pl>
---
[I don't know Xen code enough to say this is correct. There
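The shape of an hw_features conversion, sketched with illustrative flag choices (not the actual patch hunks): togglable offloads go in hw_features once at creation, fixed ones stay in features, so a backend reconnect no longer resets them.

/* Advertise togglable offloads once, at netdev creation time,
 * instead of rewriting dev->features on every backend reconnect. */
netdev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;

/* RX checksum is always performed, so expose it as a fixed feature;
 * GSO_ROBUST likewise set at creation time, before connect. */
netdev->features |= NETIF_F_RXCSUM | NETIF_F_GSO_ROBUST;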