similar to: Interesting observation with network event notification and batching

Displaying 20 results from an estimated 3000 matches similar to: "Interesting observation with network event notification and batching"

2013 Sep 12
15
large packet support in netfront driver and guest network throughput
Hi All, I am sure this has been answered somewhere on the list in the past, but I can't find it. I was wondering whether the Linux guest netfront driver has GRO support in it. tcpdump shows packets coming in with 1500 bytes, although eth0 in dom0 and the vif corresponding to the Linux guest in dom0 show that they receive large packets: In dom0: eth0 Link encap:Ethernet HWaddr
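For context, GRO support in a Linux driver comes down to two things: advertising NETIF_F_GRO and handing received buffers to the GRO layer from the NAPI poll loop. The sketch below shows that generic pattern; it is not the netfront source and the function names are made up for illustration.

    #include <linux/netdevice.h>

    /* Generic sketch of GRO support in a Linux network driver (names are
     * illustrative, not taken from xen-netfront). */
    static void example_enable_gro(struct net_device *dev)
    {
        dev->features |= NETIF_F_GRO;      /* advertise GRO, usually at probe time */
    }

    static void example_rx_one(struct napi_struct *napi, struct sk_buff *skb)
    {
        /* Hand the skb to the GRO layer instead of netif_receive_skb(), so the
         * stack can merge consecutive TCP segments into larger packets. */
        napi_gro_receive(napi, skb);
    }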
2006 Apr 11
1
problems with assign
Hello. I have n files in a directory: file1, ..., filen. I read them with the following commands: list=scan(file="list",what=list(nom="")) # in the file list, I have all the filenames. n=length(list[[1]]) for (i in 1:n){ aux <- paste("p",i,sep="") assign(aux, as.matrix(read.table(list[[1]][i]))) } R creates the matrices p1,p2,...,pn. I want
2006 May 25
3
netfront.c: gnttab_query_foreign_access returns non zero in network_tx_buf_gc
I've been working from the netfront.c in the testing tree, using SLES 10 RC1 for i386 on an SMP box. When I stress the network using iperf in a domU, with the domU acting as client on a gigabit network, I occasionally get a panic at the dev_kfree_skb_irq(skb); line. This is the same panic as reported in http://lists.xensource.com/archives/html/xen-devel/2006-05/msg00919.html The trace
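For reference, the transmit garbage-collect path frees an skb only after confirming the backend has dropped its grant on the page; the sequence below is a simplified sketch of that step, paraphrased from memory rather than copied from the testing tree.

    /* Simplified sketch of one slot being reclaimed in network_tx_buf_gc(). */
    id  = txrsp->id;
    skb = np->tx_skbs[id];

    if (unlikely(gnttab_query_foreign_access(np->grant_tx_ref[id]) != 0)) {
        /* Backend still holds the grant; freeing the page now would let the
         * backend scribble on freed memory. */
        printk(KERN_ALERT "network_tx_buf_gc: grant still in use by backend\n");
        BUG();
    }
    gnttab_end_foreign_access_ref(np->grant_tx_ref[id], GNTMAP_readonly);
    gnttab_release_grant_reference(&np->gref_tx_head, np->grant_tx_ref[id]);
    np->grant_tx_ref[id] = GRANT_INVALID_REF;

    dev_kfree_skb_irq(skb);    /* the line where the reported panic occurs */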
2013 Jan 04
31
xennet: skb rides the rocket: 20 slots
Hi Ian, Today I fired up an old VM with a bittorrent client, trying to download some torrents. I seem to be hitting the unlikely case of "xennet: skb rides the rocket: xx slots", and this results in some dropped packets in domU; I don't see any warnings in dom0. I have added some extra info, but I don't have enough knowledge to tell whether this could/should be prevented from
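The warning comes from a guard in netfront's transmit path: an skb whose data would need more ring slots than the protocol allows is dropped rather than queued. Roughly (a sketch of the check, not the exact upstream lines):

    /* Sketch of the slot estimate in xennet_start_xmit(). */
    slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
            xennet_count_skb_frag_slots(skb);
    if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
        net_alert_ratelimited("xennet: skb rides the rocket: %d slots\n", slots);
        dev->stats.tx_dropped++;       /* shows up as dropped packets in domU */
        dev_kfree_skb_any(skb);
        return NETDEV_TX_OK;
    }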
2013 Feb 01
45
netback Oops then xenwatch stuck in D state
We've been hitting the following issue on a variety of hosts and recent Xen/dom0 version combinations. Here's an excerpt from our latest: Xen: 4.1.4 (xenbits @ 23432) Dom0: 3.7.1-x86_64 BUG: unable to handle kernel NULL pointer dereference at 000000000000001c IP: [<ffffffff8141a301>] evtchn_from_irq+0x11/0x40 PGD 0 Oops: 0000 [#1] SMP Modules linked in: ebt_comment
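The faulting offset (0x1c) is what you would expect from reading a field of a NULL per-IRQ info structure in evtchn_from_irq(); a defensive variant of the helper would look roughly like the sketch below (not necessarily what 3.7.1 shipped):

    /* drivers/xen/events.c, approximately: the oops suggests info_for_irq()
     * returned NULL for an IRQ that was already torn down or never set up. */
    static unsigned int evtchn_from_irq(unsigned int irq)
    {
        struct irq_info *info = info_for_irq(irq);

        if (WARN(info == NULL, "invalid irq %u\n", irq))
            return 0;
        return info->evtchn;
    }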
2015 Mar 13
3
Network throughput testing software available for CentOS/Linux
On 12-03-2015 17:39, Digimer wrote: > On 12/03/15 04:29 PM, Gilbert Sebenste wrote: >> Hello everyone, >> >> A network engineer buddy of mine brought up for discussion with me >> that he'd like to do some throughput testing, but he's new to >> Linux/RedHat. Is there any software I can recommend to
2010 May 05
5
[Pv-ops][PATCH 0/4 v4] Netback multiple threads support
This is netback multithread support patchset version 4. Main Changes from v3: 1. Patchset is against xen/next tree. 2. Merge group and idx into netif->mapping. 3. Use vmalloc to allocate netbk structures. Main Changes from v2: 1. Merge "group" and "idx" into "netif->mapping", therefore page_ext is not used now. 2. Put netbk_add_netif() and netbk_remove_netif()
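How a single netif->mapping field can carry both values is easiest to see as bit packing: the thread (group) in the upper bits and the interface's slot within that thread (idx) in the lower bits. The widths and helper names below are hypothetical, purely to illustrate the changelog entry:

    /* Hypothetical packing of group and idx into one mapping word. */
    #define NETBK_IDX_BITS  16
    #define NETBK_IDX_MASK  ((1u << NETBK_IDX_BITS) - 1)

    static inline unsigned int netif_group(unsigned int mapping)
    {
        return mapping >> NETBK_IDX_BITS;   /* which netback thread owns the vif */
    }

    static inline unsigned int netif_idx(unsigned int mapping)
    {
        return mapping & NETBK_IDX_MASK;    /* slot within that thread's table */
    }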
2012 Feb 23
5
Pls help: netfront tx ring frozen (any clues appreciated)
Hi, We are running into a situation where the rsp_prod index in the shared ring is not getting updated for the netfront tx ring by the netback. We see that rsp_cons is the same value as rsp_prod, with req_prod 236 slots away (the tx ring is full). From looking at the netfront driver code, it looks as if xennet_tx_buf_gc processing only happens if rsp_prod is more than rsp_cons. Our
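That matches the ring protocol: the frontend reclaims TX slots only while the backend's response producer is ahead of the frontend's response consumer, so if rsp_prod never moves, nothing is ever freed and the ring stays full. A simplified sketch of the consumer loop (based on xennet_tx_buf_gc, not verbatim):

    /* Nothing is reclaimed while rsp_cons == rsp_prod. */
    do {
        prod = np->tx.sring->rsp_prod;
        rmb();    /* read the responses only after reading the producer index */

        for (cons = np->tx.rsp_cons; cons != prod; cons++) {
            struct xen_netif_tx_response *txrsp = RING_GET_RESPONSE(&np->tx, cons);
            /* ... end foreign access and free the skb recorded for txrsp->id ... */
        }
        np->tx.rsp_cons = prod;

        /* Re-check in case the backend produced more responses meanwhile. */
        RING_FINAL_CHECK_FOR_RESPONSES(&np->tx, more_to_do);
    } while (more_to_do);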
2012 Feb 23
5
Pls help: netfront tx ring frozen (any clues appreciated)
Hi, We are running into a situation where the rsp_prod index in the shared ring is not getting updated for the netfront tx ring by the netback. We see that rsp_cons is the same value as rsp_prod, with req_prod 236 slots away (the tx ring is full). From looking at the netfront driver code, it looks as if xennet_tx_buf_gc processing only happens if rsp_prod is more than rsp_cons. Our
2015 Mar 12
3
Network throughput testing software available for CentOS/Linux
Hello everyone, A network engineer buddy of mine brought up for discussion with me that he'd like to do some throughput testing, but he's new to Linux/RedHat. Is there any software I can recommend to him that any of you find above par for CentOS 6/7? Thanks! Gilbert Sebenste
2014 Apr 18
3
[PATCH] virtio_net: zero is an invalid queue_pairs number
Execute "ethtool -L eth0 combined 0" in the guest: if multiqueue is enabled, virtnet_send_command() will return an -EINVAL error, as there is a validation check in QEMU. But if multiqueue is disabled, virtnet_set_queues() will just return zero (success). We should return an error in this situation as well. Signed-off-by: Amos Kong <akong at redhat.com> --- drivers/net/virtio_net.c | 2 +- 1 file changed,
2014 Apr 18
3
[PATCH] virtio_net: zero is an invalid queue_pairs number
Execute "ethtool -L eth0 combined 0" in the guest: if multiqueue is enabled, virtnet_send_command() will return an -EINVAL error, as there is a validation check in QEMU. But if multiqueue is disabled, virtnet_set_queues() will just return zero (success). We should return an error in this situation as well. Signed-off-by: Amos Kong <akong at redhat.com> --- drivers/net/virtio_net.c | 2 +- 1 file changed,
2014 Jun 16
4
[PATCH 10/11] qspinlock: Paravirt support
On 06/15/2014 08:47 AM, Peter Zijlstra wrote: > > > > +#ifdef CONFIG_PARAVIRT_SPINLOCKS > + > +/* > + * Write a comment about how all this works... > + */ > + > +#define _Q_LOCKED_SLOW (2U << _Q_LOCKED_OFFSET) > + > +struct pv_node { > + struct mcs_spinlock mcs; > + struct mcs_spinlock __offset[3]; > + int cpu, head; > +}; I am wondering why
2014 Jun 16
4
[PATCH 10/11] qspinlock: Paravirt support
On 06/15/2014 08:47 AM, Peter Zijlstra wrote: > > > > +#ifdef CONFIG_PARAVIRT_SPINLOCKS > + > +/* > + * Write a comment about how all this works... > + */ > + > +#define _Q_LOCKED_SLOW (2U << _Q_LOCKED_OFFSET) > + > +struct pv_node { > + struct mcs_spinlock mcs; > + struct mcs_spinlock __offset[3]; > + int cpu, head; > +}; I am wondering why
2013 Jun 24
3
[PATCH v2] xen-netback: add a pseudo pps rate limit
VM traffic is already limited by a throughput limit, but there is no control over the maximum packets per second (PPS). In a DDoS attack the major issue is PPS rather than throughput. With providers offering more bandwidth to VMs, it becomes easy to coordinate a massive attack using VMs. Example: 100Mbit/s is roughly 200kpps using 64B packets. This patch provides a new option to limit a VM's maximum packets per
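A PPS cap can piggyback on netback's existing time-based credit scheme by counting packets per replenish interval instead of (or as well as) bytes. The fields and helper below are hypothetical, sketching the mechanism the changelog describes rather than the patch itself:

    /* Hypothetical per-vif packet budget, refilled once per credit period. */
    static bool tx_pps_exceeded(struct xenvif *vif)
    {
        if (vif->pps_limit == 0)
            return false;                      /* PPS limiting disabled */

        if (time_after_eq(jiffies, vif->pps_next_refill)) {
            vif->pkts_this_period = 0;         /* start a new accounting period */
            vif->pps_next_refill  = jiffies +
                                    usecs_to_jiffies(vif->credit_usec);
        }
        return ++vif->pkts_this_period > vif->pps_limit;
    }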
2013 Jun 28
3
[PATCH next] xen: Use more current logging styles
Instead of mixing printk and pr_<level> forms, just use pr_<level> Miscellaneous changes around these conversions: Add a missing newline to avoid message interleaving, coalesce formats, reflow modified lines to 80 columns. Signed-off-by: Joe Perches <joe at perches.com> --- drivers/net/xen-netback/netback.c | 7 +++---- drivers/net/xen-netfront.c | 28
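For anyone unfamiliar with the convention being applied, the conversion is mechanical; an illustrative before/after (not an actual hunk from this patch):

    /* pr_fmt() must be defined before any includes so pr_<level> picks it up. */
    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

    /* Before: explicit level and a hand-written prefix on every call. */
    printk(KERN_WARNING "xen_net: could not allocate rx buffers\n");

    /* After: the level is encoded in the function name and the prefix is
     * added automatically by pr_fmt(). */
    pr_warn("could not allocate rx buffers\n");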
2013 Jun 28
3
[PATCH next] xen: Use more current logging styles
Instead of mixing printk and pr_<level> forms, just use pr_<level> Miscellaneous changes around these conversions: Add a missing newline to avoid message interleaving, coalesce formats, reflow modified lines to 80 columns. Signed-off-by: Joe Perches <joe at perches.com> --- drivers/net/xen-netback/netback.c | 7 +++---- drivers/net/xen-netfront.c | 28
2013 Jun 28
3
[PATCH next] xen: Use more current logging styles
Instead of mixing printk and pr_<level> forms, just use pr_<level> Miscellaneous changes around these conversions: Add a missing newline to avoid message interleaving, coalesce formats, reflow modified lines to 80 columns. Signed-off-by: Joe Perches <joe at perches.com> --- drivers/net/xen-netback/netback.c | 7 +++---- drivers/net/xen-netfront.c | 28
2013 Nov 28
4
[PATCH net] xen-netback: fix fragment detection in checksum setup
The code to detect fragments in checksum_setup() was missing for IPv4 and too eager for IPv6. (It transpires that Windows seems to send IPv6 packets with a fragment header even if they are not fragments - i.e. the offset is zero and the M bit is not set). Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Cc: Ian Campbell <ian.campbell@citrix.com>
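The two cases reduce to simple header tests: an IPv4 packet is a fragment when the More-Fragments bit or a non-zero offset is set, and an IPv6 fragment header only indicates real fragmentation when it carries a non-zero offset or the M bit. The helpers below restate that generically; they are not the patch hunks.

    #include <net/ip.h>
    #include <net/ipv6.h>

    /* IPv4: fragment if More-Fragments is set or the fragment offset is non-zero. */
    static bool ipv4_is_fragment(const struct iphdr *iph)
    {
        return (iph->frag_off & htons(IP_MF | IP_OFFSET)) != 0;
    }

    /* IPv6: a fragment header alone is not enough; offset == 0 with M == 0
     * (as Windows apparently sends) means the packet is not really fragmented. */
    static bool ipv6_fraghdr_is_real_fragment(const struct frag_hdr *fh)
    {
        return (fh->frag_off & htons(IP6_OFFSET | IP6_MF)) != 0;
    }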
2011 Jun 29
1
[PATCH 4/4] xen/netback: Add module alias for autoloading
Add xen-backend:vif module alias to the xen-netback module. This allows automatic loading of the module. Signed-off-by: Bastian Blank <waldi at debian.org> Acked-by: Ian Campbell <ian.campbell at citrix.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com> --- drivers/net/xen-netback/netback.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git
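The change itself is a one-liner following the xenbus backend alias scheme ("xen-backend:<devicetype>"), so the added line is presumably of this form:

    /* In drivers/net/xen-netback/netback.c: lets udev/modprobe auto-load the
     * module when a "vif" backend device shows up on xenbus. */
    MODULE_ALIAS("xen-backend:vif");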