similar to: VMDQ / netchannel2 support?

Displaying 20 results from an estimated 10000 matches similar to: "VMDQ / netchannel2 support?"

2008 Dec 20
6
[PATCH] Multi-queue support for Netchannel2
The attached patches add vmq (multi-queue, also known as VMDq) support for netchannel2. They are based on a previous implementation written for netchannel1 by Kaushik Kumar Ram, and apply against the latest netchannel2 public trees. Patches 1 and 2 are for the Xen tree and patches 3 and 4 for the Linux tree. This version provides the basic multi-queue functionality but does
2009 Jan 27
5
[PATCH 2/2] Add VMDq support to ixgbe
This patch adds experimental VMDq support (AKA Netchannel2 vmq) to the ixgbe driver. This applies to the Netchannel2 tree, and should NOT be applied to the "normal" development tree. To enable VMDq functionality, load the driver with the command-line parameter VMDQ=<num queues>, as in: $ modprobe ixgbe VMDQ=8 You can then set up PV domains to use the device by modifying your VM
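For readers unfamiliar with the mechanism, the sketch below shows how a Linux driver typically exposes an integer module parameter like the VMDQ=<num queues> knob described above. It is a minimal, self-contained illustration only, not the actual ixgbe source.

/*
 * Minimal sketch of a driver exposing an integer module parameter like the
 * VMDQ=<num queues> knob mentioned above. Illustrative only, not ixgbe code.
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static int VMDQ;                      /* 0 = VMDq disabled */
module_param(VMDQ, int, 0444);        /* visible under /sys/module/.../parameters */
MODULE_PARM_DESC(VMDQ, "Number of VMDq queue pairs to enable (illustrative)");

static int __init vmdq_demo_init(void)
{
        pr_info("vmdq_demo: %d queue pair(s) requested\n", VMDQ);
        return 0;
}

static void __exit vmdq_demo_exit(void)
{
        pr_info("vmdq_demo: unloaded\n");
}

module_init(vmdq_demo_init);
module_exit(vmdq_demo_exit);
MODULE_LICENSE("GPL");

Built as an out-of-tree module, it would be loaded with "insmod vmdq_demo.ko VMDQ=8", mirroring the modprobe invocation quoted in the post.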
2010 Jul 02
1
VMDq SR-IOV
Hello, Is there a HowTo or tutorial about the configuration of VMDq or SR-IOV with XEN? Thanks in advance. Best Regards, -- Houssem MEDHIOUB Research Engineer TELECOM & Management SudParis
2009 Feb 10
3
[PATCH 2/2] Use correct config option for ixgbe VMDq
The correct kernel configuration for VMDq support is CONFIG_XEN_NETDEV2_VMQ, not CONFIG_XEN_NETDEV2_BACKEND. Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> diff -urpN a/drivers/net/ixgbe/ixgbe.h b/drivers/net/ixgbe/ixgbe.h --- a/drivers/net/ixgbe/ixgbe.h 2009-02-06 09:03:44.000000000 -0800 +++ b/drivers/net/ixgbe/ixgbe.h 2009-02-10 14:32:57.000000000 -0800 @@ -35,7 +35,7 @@
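The hunk itself is truncated above, but the kind of compile-time gating it corrects can be illustrated with a generic sketch (not the real ixgbe.h change): code guarded by the dedicated option is built only when that symbol is defined.

/* Generic sketch of gating code on a config symbol, as the patch above does
 * with CONFIG_XEN_NETDEV2_VMQ. Build with "cc -DCONFIG_XEN_NETDEV2_VMQ demo.c"
 * versus plain "cc demo.c" to see the two outcomes. Not the actual driver code. */
#include <stdio.h>

int main(void)
{
#ifdef CONFIG_XEN_NETDEV2_VMQ
        puts("vmq (VMDq) support compiled in");
#else
        puts("vmq (VMDq) support compiled out");
#endif
        return 0;
}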
2009 Feb 10
1
[PATCH 1/2] Fix ixgbe RSS operation
The addition of VMDq support to ixgbe completely broke normal RSS receive operation. Since RSS is the default operating mode, the driver would cause a kernel panic as soon as the interface was opened. This patch fixes the problem by correctly checking the VMDQ_ENABLED flag before attempting any VMDQ-specific call. Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> diff -r
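The guard pattern the fix describes, taking the VMDq-specific path only when its flag is set so the default RSS path is left alone, looks roughly like the following userspace sketch; the flag and function names are hypothetical, styled after ixgbe rather than copied from the patch.

/* Illustrative sketch of guarding VMDq-specific calls behind a flag so that
 * the default RSS path never touches them. All names are hypothetical. */
#include <stdio.h>

#define FLAG_RSS_ENABLED   (1u << 0)
#define FLAG_VMDQ_ENABLED  (1u << 1)

static void setup_rx_queues(unsigned int flags)
{
        if (flags & FLAG_VMDQ_ENABLED)
                puts("setting up per-pool VMDq queues");
        else
                puts("setting up plain RSS queues");   /* default operating mode */
}

int main(void)
{
        setup_rx_queues(FLAG_RSS_ENABLED);                      /* default */
        setup_rx_queues(FLAG_RSS_ENABLED | FLAG_VMDQ_ENABLED);  /* VMDq mode */
        return 0;
}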
2011 Jul 28
1
[RFC net-next PATCH 3/4] ethtool: Add new set commands
On Jul 28, 2011, at 1:38 PM, Rose, Gregory V wrote: > >> From: Anirban Chakraborty [mailto:anirban.chakraborty at qlogic.com] >> Sent: Thursday, July 28, 2011 12:04 PM >> To: Rose, Gregory V >> Cc: David Miller; netdev; Ben Hutchings; Kirsher, Jeffrey T >> Subject: Re: [RFC net-next PATCH 3/4] ethtool: Add new set commands >> >> >> On Jul 28,
2009 Sep 01
1
[RFC] Virtual Machine Device Queues(VMDq) support on KVM
[RFC] Virtual Machine Device Queues (VMDq) support on KVM A network adapter with VMDq technology presents multiple tx/rx queue pairs and provides an L2 sorting mechanism, based on MAC addresses and VLAN tags, for each tx/rx queue pair. Here we present a generic framework in which network traffic to/from a tx/rx queue pair can be directed from/to a KVM guest without any software copy.
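Conceptually, the L2 sorting described here steers each frame to a tx/rx queue pair chosen by its destination MAC address and VLAN tag; the table lookup below is a purely illustrative userspace model of that idea, not the RFC's proposed interface.

/* Illustrative sketch of VMDq-style L2 sorting: a frame is steered to the
 * tx/rx queue pair selected by its destination MAC address and VLAN tag.
 * This only models the idea in userspace; it is not the RFC's code. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

struct l2_filter {
        uint8_t  mac[6];
        uint16_t vlan;
        int      queue_pair;   /* queue pair assigned to one guest */
};

static const struct l2_filter filters[] = {
        { {0x52,0x54,0x00,0x12,0x34,0x56}, 100, 0 },
        { {0x52,0x54,0x00,0x65,0x43,0x21}, 200, 1 },
};

static int classify(const uint8_t mac[6], uint16_t vlan)
{
        for (size_t i = 0; i < sizeof(filters) / sizeof(filters[0]); i++)
                if (!memcmp(filters[i].mac, mac, 6) && filters[i].vlan == vlan)
                        return filters[i].queue_pair;
        return -1;   /* no match: default queue or drop, depending on policy */
}

int main(void)
{
        const uint8_t mac[6] = {0x52,0x54,0x00,0x12,0x34,0x56};
        printf("frame steered to queue pair %d\n", classify(mac, 100));
        return 0;
}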
2008 Oct 14
0
Intel VMDq support in Xen
Is Intel VMDq supported in Xen 3.3 or 3.4? And if not, when will it be?
2012 Oct 11
0
How to use the VMDq of Intel NIC?
Hi all: Does Xen 4.1 with kernel 3.4 support Intel VMDq? How can I make our domU benefit from this? -- 高永超 (flex), Operations Engineer, Douban Inc.
2017 Nov 28
5
[RFC] virtio-net: help live migrate SR-IOV devices
Hi, I'd like to get some feedback on a proposal to enhance virtio-net to ease configuration of a VM and that would enable live migration of passthrough network SR-IOV devices. Today we have SR-IOV network devices (VFs) that can be passed into a VM in order to enable high performance networking direct within the VM. The problem I am trying to address is that this configuration is generally
2017 Nov 30
4
[RFC] virtio-net: help live migrate SR-IOV devices
On Thu, 30 Nov 2017 11:29:56 +0800, Jason Wang wrote: > On 2017-11-29 03:27, Jesse Brandeburg wrote: > > Hi, I'd like to get some feedback on a proposal to enhance virtio-net > > to ease configuration of a VM and that would enable live migration of > > passthrough network SR-IOV devices. > > > > Today we have SR-IOV network devices (VFs) that can be passed
2017 Nov 30
1
[RFC] virtio-net: help live migrate SR-IOV devices
On Thu, 30 Nov 2017 15:54:40 +0200, Michael S. Tsirkin wrote: > On Wed, Nov 29, 2017 at 07:51:38PM -0800, Jakub Kicinski wrote: > > On Thu, 30 Nov 2017 11:29:56 +0800, Jason Wang wrote: > > > On 2017-11-29 03:27, Jesse Brandeburg wrote: > > > > Hi, I'd like to get some feedback on a proposal to enhance virtio-net > > > > to ease configuration of a
2009 Aug 27
5
[PATCHv5 3/3] vhost_net: a kernel-level virtio server
What it is: vhost net is a character device that can be used to reduce the number of system calls involved in virtio networking. Existing virtio net code is used in the guest without modification. There's similarity with vringfd, with some differences and reduced scope - uses eventfd for signalling - structures can be moved around in memory at any time (good for migration) - support memory
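A minimal userspace sketch of the two interfaces this summary names, the /dev/vhost-net character device and eventfd signalling, is shown below; it only opens and claims the device and creates an eventfd, and does not build a working datapath.

/* Minimal sketch: open the vhost-net character device, become its owner, and
 * create an eventfd of the kind vhost uses for kick/call signalling.
 * Demonstrates the interfaces only; no usable datapath is set up. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

int main(void)
{
        int vhost_fd = open("/dev/vhost-net", O_RDWR);
        if (vhost_fd < 0) {
                perror("open /dev/vhost-net");
                return 1;
        }

        /* Claim the device for this process (first step of vhost setup). */
        if (ioctl(vhost_fd, VHOST_SET_OWNER, NULL) < 0)
                perror("VHOST_SET_OWNER");

        /* eventfds are what vhost uses for signalling between guest and host. */
        int call_fd = eventfd(0, EFD_CLOEXEC);
        if (call_fd < 0)
                perror("eventfd");

        printf("vhost fd=%d, eventfd=%d\n", vhost_fd, call_fd);

        if (call_fd >= 0)
                close(call_fd);
        close(vhost_fd);
        return 0;
}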
2008 Nov 21
22
[PATCH 0/13 v7] PCI: Linux kernel SR-IOV support
Greetings, The following patches are intended to support SR-IOV capability in the Linux kernel. With these patches, people can turn a PCI device with the capability into multiple ones from a software perspective, which will benefit KVM and serve other purposes such as QoS and security. The Physical Function and Virtual Function drivers using the SR-IOV APIs will come soon! Major changes from
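As a concrete example of "turning one PCI device into multiple ones", later mainline kernels expose an sriov_numvfs sysfs attribute for enabling Virtual Functions; the sketch below uses it with a placeholder PCI address (this interface postdates the 2008 series above, which instead introduced the in-kernel SR-IOV API).

/* Sketch: request <n> Virtual Functions from an SR-IOV capable device by
 * writing to its sriov_numvfs sysfs attribute (a later mainline interface;
 * the PCI address below is a placeholder). Requires root. */
#include <stdio.h>

int main(void)
{
        const char *path = "/sys/bus/pci/devices/0000:01:00.0/sriov_numvfs";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror("fopen");
                return 1;
        }
        fprintf(f, "4\n");   /* create 4 VFs; write 0 to remove them again */
        fclose(f);
        return 0;
}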