similar to: Signed bit field; int have_hotplug_status_watch:1

Displaying results from an estimated 6000 matches similar to: "Signed bit field; int have_hotplug_status_watch:1"

2011 Apr 04
0
[PATCH] xen: netback: use unsigned type for one-bit bitfield.
Fixes error from sparse: CHECK drivers/net/xen-netback/xenbus.c drivers/net/xen-netback/xenbus.c:29:40: error: dubious one-bit signed bitfield int have_hotplug_status_watch:1; Reported-by: Dr. David Alan Gilbert <linux at treblig.org> Signed-off-by: Ian Campbell <ian.campbell at citrix.com> Cc: netdev at vger.kernel.org Cc: xen-devel at lists.xensource.com ---
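A minimal sketch of why sparse objects (struct names here are illustrative, not the driver's): in a signed one-bit bitfield the only representable values are 0 and -1, so assigning 1 and later testing for 1 can misfire. The patch simply makes the flag unsigned.

    struct xenvif_sketch {
        int have_hotplug_status_watch:1;          /* dubious: holds only 0 and -1 */
    };

    struct xenvif_fixed {
        unsigned int have_hotplug_status_watch:1; /* holds 0 and 1, as intended */
    };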
2013 Sep 20
5
[PATCH net-next 2/2] xen-netback: handle frontends that fail to transition through Closing
Some old Windows frontends fail to transition through the xenbus Closing state and move directly from Connected to Closed. Handle this case properly. Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Cc: Ian Campbell <ian.campbell@citrix.com> --- drivers/net/xen-netback/xenbus.c | 2 ++ 1
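A hedged sketch of the idea (the handler shape follows the usual xenbus pattern, but this is illustrative, not the patch itself): treat Closed as implying Closing when the frontend never reported it.

    static void frontend_changed_sketch(struct xenbus_device *dev,
                                        enum xenbus_state frontend_state)
    {
        switch (frontend_state) {
        case XenbusStateClosing:
            xenbus_switch_state(dev, XenbusStateClosing);
            break;
        case XenbusStateClosed:
            /* Some old Windows frontends jump here straight from
             * Connected; tear down as if Closing had been seen. */
            xenbus_switch_state(dev, XenbusStateClosed);
            break;
        default:
            break;
        }
    }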
2013 Jun 24
3
[PATCH v2] xen-netback: add a pseudo pps rate limit
VM traffic is already limited by a throughput limit, but there is no control over the maximum packets per second (PPS). In a DDoS attack the major issue is PPS rather than throughput. With providers offering more bandwidth to VMs, it becomes easy to coordinate a massive attack using VMs. Example: 100 Mbit/s ~ 200 kpps using 64B packets. This patch provides a new option to limit VMs' maximum packets per
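The excerpt stops before the mechanism, but a per-second packet cap is commonly a token bucket refilled once per interval; the sketch below is an illustrative guess under that assumption, not the patch's actual code.

    struct pps_limit {
        unsigned long max_pps;     /* configured packets-per-second cap */
        unsigned long tokens;      /* packets still allowed this second */
        unsigned long last_refill; /* jiffies at the last refill */
    };

    static bool pps_allow(struct pps_limit *l)
    {
        if (time_after(jiffies, l->last_refill + HZ)) {
            l->tokens = l->max_pps;   /* refill once per second */
            l->last_refill = jiffies;
        }
        if (!l->tokens)
            return false;             /* over the cap: drop or defer */
        l->tokens--;
        return true;
    }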
2013 Feb 15
1
[PATCH 7/8] netback: split event channels support
Netback and netfront only use one event channel to do tx / rx notification. This may cause unnecessary wake-ups of processing routines. This patch adds a new feature called feature-split-event-channel to netback, enabling it to handle Tx and Rx events separately. Netback will use tx_irq to notify the guest of tx completion, and rx_irq for rx notification. If the frontend doesn't support this feature,
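A hedged sketch of the split, using the kernel's existing interdomain binding helper (the vif fields and handler names are illustrative): each direction gets its own event channel, so a Tx completion no longer wakes the Rx path and vice versa.

    err = bind_interdomain_evtchn_to_irqhandler(vif->domid, tx_evtchn,
                                                tx_interrupt, 0,
                                                "vif-tx", vif);
    if (err < 0)
        goto err_unbind;
    vif->tx_irq = err;

    err = bind_interdomain_evtchn_to_irqhandler(vif->domid, rx_evtchn,
                                                rx_interrupt, 0,
                                                "vif-rx", vif);
    if (err < 0)
        goto err_unbind;
    vif->rx_irq = err;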
2012 Mar 05
11
[PATCH 0001/001] xen: multi page ring support for block devices
From: Santosh Jodh <santosh.jodh at citrix.com> Add support for multi page ring for block devices. The number of pages is configurable for blkback via module parameter. blkback reports max-ring-page-order to blkfront via xenstore. blkfront reports its supported ring-page-order to blkback via xenstore. blkfront reports multi page ring references via ring-refNN in xenstore. The change allows
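A hedged sketch of the frontend side described above (variable names are illustrative, not the patch's code): write one grant reference per ring page under ring-refNN, plus the negotiated page order.

    for (i = 0; i < (1U << ring_page_order); i++) {
        char key[16];

        snprintf(key, sizeof(key), "ring-ref%u", i);
        err = xenbus_printf(xbt, dev->nodename, key, "%u", gref[i]);
        if (err)
            goto abort_transaction;
    }
    err = xenbus_printf(xbt, dev->nodename, "ring-page-order",
                        "%u", ring_page_order);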
2006 Jun 15
2
xenbus api
Hello. I have a lot of problems using the xenbus api (in xen-3.0-testing). I had to modify the network backend driver (file netback.c), and each call to a xenbus function in a virtual machine makes my machine reboot (not the virtual one, the real machine). For example, I've added this line of code (which is useless):
2013 Nov 28
4
[PATCH net] xen-netback: fix fragment detection in checksum setup
The code to detect fragments in checksum_setup() was missing for IPv4 and too eager for IPv6. (It transpires that Windows seems to send IPv6 packets with a fragment header even if they are not a fragment - i.e. offset is zero, and M bit is not set). Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Cc: Ian Campbell <ian.campbell@citrix.com>
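The check the fix needs can be sketched like this (a self-contained, userspace-flavoured illustration; the struct mirrors the on-wire IPv6 fragment header and the masks match Linux's IP6_OFFSET/IP6_MF values): a header whose offset is zero with M clear is not a real fragment.

    #include <stdbool.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    #define IP6_OFFSET 0xfff8  /* 13-bit fragment offset mask */
    #define IP6_MF     0x0001  /* "more fragments" flag */

    struct frag_hdr_sketch {
        uint8_t  nexthdr;
        uint8_t  reserved;
        uint16_t frag_off;       /* offset + flags, network byte order */
        uint32_t identification;
    };

    static bool is_real_fragment(const struct frag_hdr_sketch *fh)
    {
        /* Offset zero with M clear: a whole packet despite the header. */
        return (ntohs(fh->frag_off) & (IP6_OFFSET | IP6_MF)) != 0;
    }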
2011 Dec 01
11
[PATCH 0 of 2] Paging support updates for XCP dom0
This is a cherry pick of two patches that add support for guest paged out frames in the XCP 2.6.32 dom0 patch queue. First patch propagates the ENOENT returned by the hypervisor in the case of a paged out page, all the way up the call chain to the MMAPBATCH_V2 ioctl. The ioctl is mainly used to harvest those return values and retry. The second patch adds retry loops to all backend grant
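The retry part reads as a small loop; a guess at its shape (the mapping helper is hypothetical, not the XCP patch's code): keep retrying while the hypervisor reports the frame paged out.

    int ret;

    do {
        ret = map_grant(arg);  /* hypothetical backend grant operation */
        if (ret == -ENOENT)
            msleep(10);        /* give the pager time to bring the frame in */
    } while (ret == -ENOENT);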
2013 Jun 28
3
[PATCH next] xen: Use more current logging styles
Instead of mixing printk and pr_<level> forms, just use pr_<level> Miscellaneous changes around these conversions: Add a missing newline to avoid message interleaving, coalesce formats, reflow modified lines to 80 columns. Signed-off-by: Joe Perches <joe at perches.com> --- drivers/net/xen-netback/netback.c | 7 +++---- drivers/net/xen-netfront.c | 28
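A representative before/after of the conversion (the message text is invented for illustration; pr_warn and friends are the real helpers):

    /* before: raw printk with an explicit level prefix */
    printk(KERN_WARNING "xen-netback: vif%u: queue full\n", id);

    /* after: pr_<level> helper, format coalesced onto one line,
     * trailing newline kept to avoid message interleaving */
    pr_warn("xen-netback: vif%u: queue full\n", id);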
2007 Nov 20
2
netfront/back documentation on wiki
Hi all, I've taken a stab at documenting the current interface between netfront and netback drivers, here: http://wiki.xensource.com/xenwiki/XenNetFrontBackInterface Currently, the only way for non-Linux implementers to adhere to this interface is to study the Linux netfront driver, which has a great deal of optimizations and is not meant to be documentation. I'd love it if
2011 Dec 09
4
[PATCH v3 REPOST] xen-netfront: delay gARP until backend switches to Connected
After a guest is live migrated, the xen-netfront driver emits a gratuitous ARP message, so that networking hardware on the target host's subnet can take notice, and public routing to the guest is re-established. However, if the packet appears on the backend interface before the backend is added to the target host's bridge, the packet is lost, and the migrated guest's peers become
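The fix amounts to deferring the notification into the backend state handler; a hedged sketch (the pending flag and info struct are illustrative, netif_notify_peers is the helper of that kernel era):

    static void backend_changed_sketch(struct xenbus_device *dev,
                                       enum xenbus_state backend_state)
    {
        struct netfront_info_sketch *np = dev_get_drvdata(&dev->dev);

        switch (backend_state) {
        case XenbusStateConnected:
            if (np->pending_garp) {              /* set at resume time */
                netif_notify_peers(np->netdev);  /* emits the gratuitous ARP */
                np->pending_garp = false;
            }
            break;
        default:
            break;
        }
    }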
2007 Jul 13
12
XEN 3.1: critical bug: vif init failure after creating 15-17 VMs (XENBUS: Timeout connecting to device: device/vif)
We have found a critical problem with the XEN 3.1 release (for those who are running 15-20 VMs on a single server). We are using the official XEN 3.1 release on a rackable server (Dual-Core AMD Opteron, 8GB RAM). The problem we are seeing is that intermittently vifs fail to work properly in VMs after we create around 15-17 VMs on our server (all running at the same time, created one by
2013 Oct 10
3
[PATCH net-next v3 5/5] xen-netback: enable IPv6 TCP GSO to the guest
This patch adds code to handle SKB_GSO_TCPV6 skbs and construct appropriate extra or prefix segments to pass the large packet to the frontend. New xenstore flags, feature-gso-tcpv6 and feature-gso-tcpv6-prefix, are sampled to determine if the frontend is capable of handling such packets. Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Cc: David
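Sampling such a flag could look like this (xenbus_scanf is the real helper; the vif field is illustrative):

    int val;

    if (xenbus_scanf(XBT_NIL, dev->otherend,
                     "feature-gso-tcpv6", "%d", &val) < 0)
        val = 0;               /* flag absent: frontend can't handle it */
    vif->gso_tcpv6 = !!val;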