Device passthrough technology allows a guest to bypass the hypervisor and drive the underlying physical device. VMware has been exploring various ways to deliver this technology to users in a manner which is easy to adopt. In this process we have prepared an architecture together with Intel - NPA (Network Plugin Architecture). NPA allows the guest to use the virtualized NIC vmxnet3 to pass through to a number of physical NICs which support it. The document below provides an overview of NPA.

We intend to upgrade the upstreamed vmxnet3 driver to implement NPA so that Linux users can exploit the benefits provided by passthrough devices in a seamless manner while retaining the benefits of virtualization. The document below tries to answer most of the questions which we anticipated. Please let us know your comments and queries.

Thank you.

Signed-off-by: Pankaj Thakkar <pthakkar at vmware.com>


Network Plugin Architecture
---------------------------

VMware has been working on various device passthrough technologies for the past few years. Passthrough technology is interesting as it can result in better performance/CPU utilization for certain demanding applications. In our vSphere product we support direct assignment of PCI devices like networking adapters to a guest virtual machine. This allows the guest to drive the device using the device drivers installed inside the guest. This is similar to the way KVM allows for passthrough of PCI devices to the guests. The hypervisor is bypassed for all I/O and control operations and hence it cannot provide any value-add features such as live migration, suspend/resume, etc.

Network Plugin Architecture (NPA) is an approach which VMware has developed in joint partnership with Intel which allows us to retain the best of passthrough technology and virtualization. NPA allows for passthrough of the fast data (I/O) path and lets the hypervisor deal with the slow control path using traditional emulation/paravirtualization techniques. Through this splitting of the data and control paths the hypervisor can still provide the above mentioned value-add features and exploit the performance benefits of passthrough.

NPA requires SR-IOV hardware, which allows for sharing of a single NIC adapter by multiple guests. SR-IOV hardware has many logically separate functions called virtual functions (VFs) which can be independently assigned to the guest OS. It also has one or more physical functions (PFs) (managed by a PF driver) which are used by the hypervisor to control certain aspects of the VFs and the rest of the hardware.

NPA splits the guest driver into two components called the Shell and the Plugin. The shell is responsible for interacting with the guest networking stack and funneling the control operations to the hypervisor. The plugin is responsible for driving the data path of the virtual function exposed to the guest and is specific to the NIC hardware. NPA also requires an embedded switch in the NIC to allow for switching traffic among the virtual functions. The PF is also used as an uplink to provide connectivity to other VMs which are in emulation mode. The figure below shows the major components in a block diagram.

 +------------------------------+
 |           Guest VM           |
 |                              |
 |  +----------------+          |
 |  | vmxnet3 driver |          |
 |  |     Shell      |          |
 |  | +============+ |          |
 |  | |   Plugin   | |          |
 +--+-+------------+-+----------+
        |        .
   +---------+   .
   | vmxnet3 |   .
   |___+-----+   .
       |         .
       |         .
 +----------------------------+
 |                            |
 |       virtual switch       |
 +----------------------------+
   |             .        \
   |             .         \
 +=============+ .          \
 | PF control  | .           \
 |             | .            \
 |  L2 driver  | .             \
 +-------------+ .              \
       |         .               \
       |         .                \
 +------------------------+    +------------+
 | PF  VF1  VF2  ...  VFn |    |            |
 |                        |    |  regular   |
 |       SR-IOV NIC       |    |    nic     |
 |   +--------------+     |    +------------+
 |   |   embedded   |     |
 |   |    switch    |     |
 |   +--------------+     |
 +------------------------+

NPA offers several benefits:

1. Performance: Critical performance-sensitive paths are not trapped and the guest can directly drive the hardware without incurring virtualization overheads.

2. Hypervisor control: All control operations from the guest such as programming the MAC address go through the hypervisor layer and hence can be subjected to hypervisor policies. The PF driver can further be used to enforce policy decisions such as which VLAN the guest should be on.

3. Guest Management: No hardware-specific drivers need to be installed in the guest virtual machine and hence no overheads are incurred for guest management. All software for the driver (including the PF driver and the plugin) is installed in the hypervisor.

4. IHV independence: The architecture provides guidelines for splitting the functionality between the VFs and the PF but does not dictate how the hardware should be implemented. It gives the IHV the freedom to make asynchronous updates either to the software or to the hardware to work around any defects.

The fundamental tenet of NPA is to let the hypervisor control the passthrough functionality with minimal guest intervention. This gives a lot of flexibility to the hypervisor, which can then treat passthrough as an offload feature (just like TSO, LRO, etc.) which is offered to the guest virtual machine when there are no conflicting features present. For example, if the hypervisor wants to migrate the virtual machine from one host to another, the hypervisor can switch the virtual machine out of passthrough mode into paravirtualized/emulated mode and use existing techniques to migrate the virtual machine. Once the virtual machine is migrated to the destination host the hypervisor can switch the virtual machine back to passthrough mode if a supporting SR-IOV NIC is present. This may involve loading a different plugin corresponding to the new SR-IOV hardware.

Internally we have explored various other options before settling on the NPA approach. For example, there are approaches which create a bonding driver on top of a complete passthrough of a NIC device and an emulated/paravirtualized device. Though this approach allows live migration to work, it adds a lot of complexity and dependencies. First, the hypervisor has to rely on a guest with hot-add support. Second, the hypervisor has to depend on the guest networking stack to cooperate to perform migration. Third, the guest has to carry the driver images for all possible hardware to which the guest may migrate. Fourth, the hypervisor does not get full control over all the policy decisions. Another approach we have considered is to have a uniform interface for the data path between the emulated/paravirtualized device and the hardware device which allows the hypervisor to seamlessly switch from the emulated interface to the hardware interface. Though this approach is very attractive and can work without any guest involvement, it is not acceptable to the IHVs as it does not give them the freedom to fix bugs/errata and differentiate from each other. We believe the NPA approach provides the right level of control and flexibility to the hypervisor while letting the guest exploit the benefits of passthrough.
The plugin image is provided by the IHVs along with the PF driver and is packaged in the hypervisor. The plugin image is OS agnostic and can be loaded either into a Linux VM or a Windows VM. The plugin is written against the Shell API interface, which the shell is responsible for implementing. The API interface allows the plugin to do TX and RX only by programming the hardware rings (along with things like buffer allocation and basic initialization). The virtual machine comes up in paravirtualized/emulated mode when it is booted. The hypervisor allocates the VF and other resources and notifies the shell of the availability of the VF. The hypervisor injects the plugin into a memory location specified by the shell. The shell initializes the plugin by calling into a known entry point and the plugin initializes the data path. The control path is already initialized by the PF driver when the VF is allocated. At this point the shell switches to using the loaded plugin to do all further TX and RX operations. The guest networking stack does not participate in these operations and continues to function normally. All the control operations continue to be trapped by the hypervisor and are directed to the PF driver as needed. For example, if the MAC address changes the hypervisor updates its internal state and changes the state of the embedded switch as well through the PF control API.

We have reworked our existing Linux vmxnet3 driver to accommodate NPA by splitting the driver into two parts: Shell and Plugin. The new split driver is backwards compatible and continues to work on old/existing vmxnet3 device emulations. The shell implements the API interface and contains code to do the bookkeeping for TX/RX buffers along with interrupt management. The shell code also handles the loading of the plugin and verifying the license of the loaded plugin. The plugin contains the code specific to vmxnet3 ring and descriptor management. The plugin uses the same Shell API interface which would be used by other IHVs. This vmxnet3 plugin is compiled statically along with the shell as this is needed to provide connectivity when there is no underlying SR-IOV device present. The IHV plugins are required to be distributed under the GPL license and we are currently looking at ways to verify this both within the hypervisor and within the shell.
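To make the Shell/Plugin contract described above more concrete, here is a minimal sketch of what an OS-agnostic Shell API and plugin entry point could look like. The actual NPA Shell API is not included in this RFC, so every identifier below (Shell_Api, Plugin_Api, NPA_PluginMain and their members) is an illustrative assumption rather than the real interface:

/*
 * Illustrative sketch only: names and signatures are assumptions, not the
 * published NPA Shell API.
 */
#include <stddef.h>
#include <stdint.h>

/* Services the shell (guest-OS specific) exports to the plugin. */
struct Shell_Api {
	/* Allocate DMA-able memory for rings/buffers; returns VA, fills PA. */
	void *(*alloc_dma)(void *shell_ctx, size_t len, uint64_t *pa);
	void  (*free_dma)(void *shell_ctx, void *va, size_t len, uint64_t pa);
	/* Hand a received frame up to the guest networking stack. */
	void  (*indicate_rx)(void *shell_ctx, void *frame, size_t len);
	/* Tell the shell that a previously queued TX frame has completed. */
	void  (*complete_tx)(void *shell_ctx, void *tx_cookie);
};

/* Entry points the plugin (hardware specific, OS agnostic) exports. */
struct Plugin_Api {
	int  (*init)(void *plugin_ctx, void *vf_bar, size_t bar_len);
	int  (*tx)(void *plugin_ctx, void *frame, size_t len, void *tx_cookie);
	int  (*poll_rx)(void *plugin_ctx, unsigned int budget);
	void (*halt)(void *plugin_ctx);	/* quiesce before a mode switch */
};

/*
 * Single well-known entry point: the shell copies the plugin image into the
 * memory region it reserved and calls this to exchange the two API tables.
 */
int NPA_PluginMain(const struct Shell_Api *shell, void *shell_ctx,
		   struct Plugin_Api *plugin_api_out, void **plugin_ctx_out);

In such a split the shell owns everything OS-specific (memory, interrupts, the netdev), while the plugin only touches the VF rings through the BAR mapping it is handed, which matches the "TX and RX only" constraint described above.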
Stephen Hemminger
2010-May-05 00:05 UTC
RFC: Network Plugin Architecture (NPA) for vmxnet3
On Tue, 4 May 2010 16:02:25 -0700 Pankaj Thakkar <pthakkar at vmware.com> wrote:

> Device passthrough technology allows a guest to bypass the hypervisor and drive the underlying physical device. VMware has been exploring various ways to deliver this technology to users in a manner which is easy to adopt. In this process we have prepared an architecture along with Intel - NPA (Network Plugin Architecture). NPA allows the guest to use the virtualized NIC vmxnet3 to passthrough to a number of physical NICs which support it. The document below provides an overview of NPA.
>
> We intend to upgrade the upstreamed vmxnet3 driver to implement NPA so that Linux users can exploit the benefits provided by passthrough devices in a seamless manner while retaining the benefits of virtualization. The document below tries to answer most of the questions which we anticipated. Please let us know your comments and queries.
>
> Thank you.
>
> Signed-off-by: Pankaj Thakkar <pthakkar at vmware.com>

Code please. Also, it has to work for all architectures not just VMware and Intel.
The purpose of this email is to introduce the architecture and the design principles. The overall project involves more than just changes to the vmxnet3 driver and hence we thought an overview email would be better. Once people agree to the design in general we intend to provide the code changes to the vmxnet3 driver.

The architecture supports more than Intel NICs. We started the project with Intel but plan to support all major IHVs including Broadcom, Qlogic, Emulex and others through a certification program. The architecture works on VMware ESX server only as it requires significant support from the hypervisor. Also, the vmxnet3 driver works on the VMware platform only. AFAICT Xen has a different model for supporting SR-IOV devices and allowing live migration, and the document briefly talks about it (paragraph 6).

Thanks,

-pankaj

On Tue, May 04, 2010 at 05:05:31PM -0700, Stephen Hemminger wrote:

> Date: Tue, 4 May 2010 17:05:31 -0700
> From: Stephen Hemminger <shemminger at vyatta.com>
> To: Pankaj Thakkar <pthakkar at vmware.com>
> CC: "linux-kernel at vger.kernel.org" <linux-kernel at vger.kernel.org>, "netdev at vger.kernel.org" <netdev at vger.kernel.org>, "virtualization at lists.linux-foundation.org" <virtualization at lists.linux-foundation.org>, "pv-drivers at vmware.com" <pv-drivers at vmware.com>, Shreyas Bhatewara <sbhatewara at vmware.com>
> Subject: Re: RFC: Network Plugin Architecture (NPA) for vmxnet3
>
> On Tue, 4 May 2010 16:02:25 -0700 Pankaj Thakkar <pthakkar at vmware.com> wrote:
>
> > Device passthrough technology allows a guest to bypass the hypervisor and drive the underlying physical device. VMware has been exploring various ways to deliver this technology to users in a manner which is easy to adopt. In this process we have prepared an architecture along with Intel - NPA (Network Plugin Architecture). NPA allows the guest to use the virtualized NIC vmxnet3 to passthrough to a number of physical NICs which support it. The document below provides an overview of NPA.
> >
> > We intend to upgrade the upstreamed vmxnet3 driver to implement NPA so that Linux users can exploit the benefits provided by passthrough devices in a seamless manner while retaining the benefits of virtualization. The document below tries to answer most of the questions which we anticipated. Please let us know your comments and queries.
> >
> > Thank you.
> >
> > Signed-off-by: Pankaj Thakkar <pthakkar at vmware.com>
>
> Code please. Also, it has to work for all architectures not just VMware and Intel.
* Pankaj Thakkar (pthakkar at vmware.com) wrote:

> We intend to upgrade the upstreamed vmxnet3 driver to implement NPA so that Linux users can exploit the benefits provided by passthrough devices in a seamless manner while retaining the benefits of virtualization. The document below tries to answer most of the questions which we anticipated. Please let us know your comments and queries.

How do the throughput, latency, and host CPU utilization for the normal data path compare with, say, NetQueue? And does this obsolete your UPT implementation?

> Network Plugin Architecture
> ---------------------------
>
> VMware has been working on various device passthrough technologies for the past few years. Passthrough technology is interesting as it can result in better performance/cpu utilization for certain demanding applications. In our vSphere product we support direct assignment of PCI devices like networking adapters to a guest virtual machine. This allows the guest to drive the device using the device drivers installed inside the guest. This is similar to the way KVM allows for passthrough of PCI devices to the guests. The hypervisor is bypassed for all I/O and control operations and hence it can not provide any value add features such as live migration, suspend/resume, etc.
>
> Network Plugin Architecture (NPA) is an approach which VMware has developed in joint partnership with Intel which allows us to retain the best of passthrough technology and virtualization. NPA allows for passthrough of the fast data (I/O) path and lets the hypervisor deal with the slow control path using traditional emulation/paravirtualization techniques. Through this splitting of data and control path the hypervisor can still provide the above mentioned value add features and exploit the performance benefits of passthrough.

How many cards actually support this NPA interface? What does it look like, i.e. where is the NPA specification? (AFAIK, we never got the UPT one.)

> NPA requires SR-IOV hardware which allows for sharing of one single NIC adapter by multiple guests. SR-IOV hardware has many logically separate functions called virtual functions (VF) which can be independently assigned to the guest OS. They also have one or more physical functions (PF) (managed by a PF driver) which are used by the hypervisor to control certain aspects of the VFs and the rest of the hardware.

How do you handle hardware which has a more symmetric view of the SR-IOV world (SR-IOV is only a PCI specification, not a network driver specification)? Or hardware which has multiple functions per physical port (multiqueue, hw filtering, embedded switch, etc.)?

> NPA splits the guest driver into two components called the Shell and the Plugin. The shell is responsible for interacting with the guest networking stack and funneling the control operations to the hypervisor. The plugin is responsible for driving the data path of the virtual function exposed to the guest and is specific to the NIC hardware. NPA also requires an embedded switch in the NIC to allow for switching traffic among the virtual functions. The PF is also used as an uplink to provide connectivity to other VMs which are in emulation mode. The figure below shows the major components in a block diagram.
>
>  +------------------------------+
>  |           Guest VM           |
>  |                              |
>  |  +----------------+          |
>  |  | vmxnet3 driver |          |
>  |  |     Shell      |          |
>  |  | +============+ |          |
>  |  | |   Plugin   | |          |
>  +--+-+------------+-+----------+
>         |        .
>    +---------+   .
>    | vmxnet3 |   .
>    |___+-----+   .
>        |         .
>        |         .
>  +----------------------------+
>  |                            |
>  |       virtual switch       |
>  +----------------------------+
>    |             .        \
>    |             .         \
>  +=============+ .          \
>  | PF control  | .           \
>  |             | .            \
>  |  L2 driver  | .             \
>  +-------------+ .              \
>        |         .               \
>        |         .                \
>  +------------------------+    +------------+
>  | PF  VF1  VF2  ...  VFn |    |            |
>  |                        |    |  regular   |
>  |       SR-IOV NIC       |    |    nic     |
>  |   +--------------+     |    +------------+
>  |   |   embedded   |     |
>  |   |    switch    |     |
>  |   +--------------+     |
>  +------------------------+
>
> NPA offers several benefits:
>
> 1. Performance: Critical performance sensitive paths are not trapped and the guest can directly drive the hardware without incurring virtualization overheads.

Can you demonstrate with data?

> 2. Hypervisor control: All control operations from the guest such as programming MAC address go through the hypervisor layer and hence can be subjected to hypervisor policies. The PF driver can be further used to put policy decisions like which VLAN the guest should be on.

This can happen without NPA as well. The VF simply needs to request the change via the PF (in fact, hw does that right now). Also, we already have a host-side management interface via the PF (see, for example, the RTM_SETLINK IFLA_VF_MAC interface). What is the control plane interface? Just something like a fixed register set?

> 3. Guest Management: No hardware specific drivers need to be installed in the guest virtual machine and hence no overheads are incurred for guest management. All software for the driver (including the PF driver and the plugin) is installed in the hypervisor.

So we have a plugin per hardware VF implementation? And the hypervisor injects this code into the guest?

> 4. IHV independence: The architecture provides guidelines for splitting the functionality between the VFs and PF but does not dictate how the hardware should be implemented. It gives the IHV the freedom to do asynchronous updates either to the software or the hardware to work around any defects.

Yes, this is important, esp. instead of the requirement for hw to implement a specific interface (I suspect you know all about this issue already).

> The fundamental tenet in NPA is to let the hypervisor control the passthrough functionality with minimal guest intervention. This gives a lot of flexibility to the hypervisor which can then treat passthrough as an offload feature (just like TSO, LRO, etc) which is offered to the guest virtual machine when there are no conflicting features present. For example, if the hypervisor wants to migrate the virtual machine from one host to another, the hypervisor can switch the virtual machine out of passthrough mode into paravirtualized/emulated mode and it can use existing technique to migrate the virtual machine. Once the virtual machine is migrated to the destination host the hypervisor can switch the virtual machine back to passthrough mode if a supporting SR-IOV nic is present. This may involve reloading of a different plugin corresponding to the new SR-IOV hardware.
>
> Internally we have explored various other options before settling on the NPA approach. For example there are approaches which create a bonding driver on top of a complete passthrough of a NIC device and an emulated/paravirtualized device. Though this approach allows for live migration to work it adds a lot of complexity and dependency. First the hypervisor has to rely on a guest with hot-add support. Second the hypervisor has to depend on the guest networking stack to cooperate to perform migration. Third the guest has to carry the driver images for all possible hardware to which the guest may migrate to. Fourth the hypervisor does not get full control for all the policy decisions. Another approach we have considered is to have a uniform interface for the data path between the emulated/paravirtualized device and the hardware device which allows the hypervisor to seamlessly switch from the emulated interface to the hardware interface. Though this approach is very attractive and can work without any guest involvement it is not acceptable to the IHVs as it does not give them the freedom to fix bugs/erratas and differentiate from each other. We believe NPA approach provides the right level of control and flexibility to the hypervisors while letting the guest exploit the benefits of passthrough.

> The plugin image is provided by the IHVs along with the PF driver and is packaged in the hypervisor. The plugin image is OS agnostic and can be loaded either into a Linux VM or a Windows VM. The plugin is written against the Shell

And it will need to be GPL AFAICT from what you've said thus far. It does sound worrisome, although I suppose hw firmware isn't particularly different.

> API interface which the shell is responsible for implementing. The API interface allows the plugin to do TX and RX only by programming the hardware rings (along with things like buffer allocation and basic initialization). The virtual machine comes up in paravirtualized/emulated mode when it is booted. The hypervisor allocates the VF and other resources and notifies the shell of the availability of the VF. The hypervisor injects the plugin into memory location specified by the shell. The shell initializes the plugin by calling into a known entry point and the plugin initializes the data path. The control path is already initialized by the PF driver when the VF is allocated. At this point the shell switches to using the loaded plugin to do all further TX and RX operations. The guest networking stack does not participate in these operations and continues to function normally. All the control operations continue being trapped by the hypervisor and are directed to the PF driver as needed. For example, if the MAC address changes the hypervisor updates its internal state and changes the state of the embedded switch as well through the PF control API.

How does the shell switch back to emulated mode for live migration?

> We have reworked our existing Linux vmxnet3 driver to accomodate NPA by splitting the driver into two parts: Shell and Plugin. The new split driver is backwards compatible and continues to work on old/existing vmxnet3 device emulations. The shell implements the API interface and contains code to do the bookkeeping for TX/RX buffers along with interrupt management. The shell code also handles the loading of the plugin and verifying the license of the loaded plugin. The plugin contains the code specific to vmxnet3 ring and descriptor management. The plugin uses the same Shell API interface which would be used by other IHVs. This vmxnet3 plugin is compiled statically along with the shell as this is needed to provide connectivity when there is no underlying SR-IOV device present. The IHV plugins are required to be distributed under GPL license and we are currently looking at ways to verify this both within the hypervisor and within the shell.

Please make this shell API interface and the PF/VF requirements available.

thanks,
-chris
Christoph Hellwig
2010-May-05 17:23 UTC
RFC: Network Plugin Architecture (NPA) for vmxnet3
On Tue, May 04, 2010 at 04:02:25PM -0700, Pankaj Thakkar wrote:

> The plugin image is provided by the IHVs along with the PF driver and is packaged in the hypervisor. The plugin image is OS agnostic and can be loaded either into a Linux VM or a Windows VM. The plugin is written against the Shell API interface which the shell is responsible for implementing. The API

We're not going to add any kind of loader for binary blobs into kernel space, sorry. Don't even bother wasting your time on this.
On 05/05/2010 02:02 AM, Pankaj Thakkar wrote:

> 2. Hypervisor control: All control operations from the guest such as programming MAC address go through the hypervisor layer and hence can be subjected to hypervisor policies. The PF driver can be further used to put policy decisions like which VLAN the guest should be on.

Is this enforced? Since you pass the hardware through, you can't rely on the guest actually doing this, yes?

> The plugin image is provided by the IHVs along with the PF driver and is packaged in the hypervisor. The plugin image is OS agnostic and can be loaded either into a Linux VM or a Windows VM. The plugin is written against the Shell API interface which the shell is responsible for implementing. The API interface allows the plugin to do TX and RX only by programming the hardware rings (along with things like buffer allocation and basic initialization). The virtual machine comes up in paravirtualized/emulated mode when it is booted. The hypervisor allocates the VF and other resources and notifies the shell of the availability of the VF. The hypervisor injects the plugin into memory location specified by the shell. The shell initializes the plugin by calling into a known entry point and the plugin initializes the data path. The control path is already initialized by the PF driver when the VF is allocated. At this point the shell switches to using the loaded plugin to do all further TX and RX operations. The guest networking stack does not participate in these operations and continues to function normally. All the control operations continue being trapped by the hypervisor and are directed to the PF driver as needed. For example, if the MAC address changes the hypervisor updates its internal state and changes the state of the embedded switch as well through the PF control API.

This is essentially a miniature network stack with its own mini bonding layer, mini hotplug, and mini API, except s/API/ABI/. Is this a correct view?

If so, the Linuxy approach would be to use the ordinary drivers and the Linux networking API, and hide the bond setup using namespaces. The bond driver, or perhaps a new, similar, driver can be enhanced to propagate ethtool commands to its (hidden) components, and to have a control channel with the hypervisor.

This would make the approach hypervisor agnostic: you're just pairing two devices and presenting them to the rest of the stack as a single device.

> We have reworked our existing Linux vmxnet3 driver to accomodate NPA by splitting the driver into two parts: Shell and Plugin. The new split driver is

So the Shell would be the reworked or new bond driver, and Plugins would be ordinary Linux network drivers.

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
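A rough sketch of the pairing Avi describes, assuming a hypothetical "paired" driver whose private data tracks whichever lower device (VF or paravirtual) is currently active; the driver name, struct layout and callback are invented for illustration, while the forwarding itself uses standard kernel interfaces (ethtool callbacks run under RTNL, so plain pointer access is enough here):

/*
 * Fragment only, not a complete driver: forward ethtool queries from the
 * single visible device to the currently active hidden lower device.
 */
#include <linux/netdevice.h>
#include <linux/ethtool.h>

struct paired_priv {
	struct net_device *active;	/* VF netdev or paravirt netdev (RTNL-protected) */
};

static u32 paired_get_link(struct net_device *dev)
{
	struct paired_priv *priv = netdev_priv(dev);

	return priv->active ? netif_carrier_ok(priv->active) : 0;
}

static const struct ethtool_ops paired_ethtool_ops = {
	.get_link = paired_get_link,
};

The same forwarding idea would extend to the rest of the ethtool and netdev operations, plus a control channel through which the hypervisor tells the driver which lower device should currently be active.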
Shreyas Bhatewara
2010-May-06 07:25 UTC
[Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
> -----Original Message-----
> From: Scott Feldman [mailto:scofeldm at cisco.com]
> Sent: Wednesday, May 05, 2010 7:04 PM
> To: Shreyas Bhatewara; Arnd Bergmann; Dmitry Torokhov
> Cc: Christoph Hellwig; pv-drivers at vmware.com; netdev at vger.kernel.org; linux-kernel at vger.kernel.org; virtualization at lists.linux-foundation.org; Pankaj Thakkar
> Subject: Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
>
> On 5/5/10 10:29 AM, "Dmitry Torokhov" <dtor at vmware.com> wrote:
>
> > It would not be a binary blob but software properly released under GPL. The current plan is for the shell to enforce the GPL requirement on the plugin code, similar to what the module loader does for regular kernel modules.
>
> On 5/5/10 3:05 PM, "Shreyas Bhatewara" <sbhatewara at vmware.com> wrote:
>
> > The plugin image is not linked against the Linux kernel. It is OS agnostic in fact (e.g. the same plugin works for Linux and Windows VMs).
>
> Are there any issues with injecting the GPL-licensed plug-in into the Windows vmxnet3 NDIS driver?
>
> -scott

Scott,

Thanks for pointing this out. This issue can be resolved by adding an exception to the plugin license which allows it to be linked to a non-free program (http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF).

->Shreyas
Pankaj Thakkar
2010-Jul-14 17:18 UTC
[Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
The plugin is guest agnostic and hence we did not want to rely on any kernel-provided functions. The plugin uses only the interface provided by the shell. The assumption is that since the plugin is really simple and straightforward (all the control/init complexity lies in the PF driver in the hypervisor) we should be able to get by for most things, and for things like memcpy/memset the plugin can write simple functions like this.

-p

________________________________________
From: Greg KH [greg at kroah.com]
Sent: Wednesday, July 14, 2010 2:49 AM
To: Shreyas Bhatewara
Cc: Christoph Hellwig; Stephen Hemminger; Pankaj Thakkar; pv-drivers at vmware.com; netdev at vger.kernel.org; linux-kernel at vger.kernel.org; virtualization at lists.linux-foundation.org
Subject: Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3

Is there some reason that our in-kernel functions that do this type of logic are not working for you to require you to reimplement this?

thanks,

greg k-h
Shreyas Bhatewara
2010-Jul-14 17:19 UTC
[Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, 14 Jul 2010, Greg KH wrote:

> On Mon, Jul 12, 2010 at 08:06:28PM -0700, Shreyas Bhatewara wrote:
> > drivers/net/vmxnet3/vmxnet3_drv.c | 1845
> > +++++++++++++++++++--------------
>
> Your patch is line-wrapped and can not be applied :(
>
> Care to fix your email client?
>
> One thing just jumped out at me when glancing at this:
>
> > +static INLINE void
> > +MoveMemory(void *dst,
> > +           void *src,
> > +           size_t length)
> > +{
> > +   size_t i;
> > +   for (i = 0; i < length; ++i)
> > +      ((u8 *)dst)[i] = ((u8 *)src)[i];
> > +}
> > +
> > +static INLINE void
> > +ZeroMemory(void *memory,
> > +           size_t length)
> > +{
> > +   size_t i;
> > +   for (i = 0; i < length; ++i)
> > +      ((u8 *)memory)[i] = 0;
> > +}
>
> Is there some reason that our in-kernel functions that do this type of logic are not working for you to require you to reimplement this?
>
> thanks,
>
> greg k-h

Greg,

Thanks for pointing out. I will fix both these issues and repost the patch.

->Shreyas
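For reference, one way the fix could look while keeping the plugin itself OS agnostic is for the Linux shell to expose copy/zero helpers through its API table and back them with the kernel's own memcpy/memset instead of open-coded byte loops. The helper names and the idea of routing them through the shell's function table are assumptions for illustration, not taken from the posted patch:

/*
 * Hedged sketch: shell-side helpers backed by the kernel's string routines.
 * npa_shell_copy/npa_shell_zero are invented names; only memcpy/memset are
 * real kernel APIs.
 */
#include <linux/string.h>	/* memcpy(), memset() */
#include <linux/types.h>

static void npa_shell_copy(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);		/* arch-optimized kernel copy */
}

static void npa_shell_zero(void *mem, size_t len)
{
	memset(mem, 0, len);		/* arch-optimized kernel clear */
}

The plugin would then call these through the function table handed to it at load time instead of carrying its own MoveMemory/ZeroMemory implementations.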