Zhao, Yu
2008-Jan-31 09:14 UTC
[Xen-devel] [PATCH] Virtual machine queue NIC support in control panel
This patch enables virtual machine queue NIC support in the control panel (xm/xend), so users can add or remove a dedicated queue for a guest.

Virtual machine queue is a technology for network devices that aims to reduce the burden on the hypervisor while improving network I/O performance on virtualized platforms. Some vendors have already launched products, such as the Intel(R) 82575/82598 (for more information on this technology see http://www.intel.com/technology/platform-technology/virtualization/VMDq_whitepaper.pdf).

This patch requires a vendor-specific utility to control the NIC.

This patch could also be applied to netchannel2.

Signed-off-by: Yu Zhao <yu.zhao@intel.com>
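As an illustration of the interface discussed in this thread (the "vmq" vif option is named below in Kieran's reply; the exact syntax and values here are a hypothetical sketch, not taken from the posted patch):

  # Hypothetical domain config: bind the guest's vif to a VMDq-capable
  # physical NIC via the "vmq" vif option added by the patch.
  vif = [ 'mac=00:16:3e:12:34:56, bridge=xenbr0, vmq=eth2' ]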
Kieran Mansley
2008-Jan-31 10:45 UTC
Re: [Xen-devel] [PATCH] Virtual machine queue NIC support in control panel
On Thu, 2008-01-31 at 17:14 +0800, Zhao, Yu wrote:
> This patch enables virtual machine queue NIC support in the control panel
> (xm/xend), so users can add or remove a dedicated queue for a guest.

I haven't looked in detail, but the "vmq" option to the vif configuration seems to have the same syntax (and similar semantics as far as the user is concerned: make this vif go faster using the specified physical device) as the "accel" option. I wonder if the two could be combined?

Kieran
Santos, Jose Renato G
2008-Jan-31 18:41 UTC
[Xen-devel] RE: [PATCH] Virtual machine queue NIC support in control panel
Yu,

Thanks for the patch. I don't know the Python tools well enough to provide detailed comments on the patch, so I just have one high-level comment.

Using the term "vmq" in the domain configuration file or in commands like "vmq-attach" may give the user the wrong impression that a device queue will be dedicated to the vif. This may or may not be true, depending on how many queues are available and how many other vifs are using them. It seems that we should allow the control tools to bind a vif to a NIC and let netback decide which vifs will use dedicated queues and which will share a common queue. Thus a name like "pdev" seems more appropriate than "vmq". A "pdev" parameter that associates a vif with a physical device could also be used by accelerator plugins, as Kieran suggested.

That said, in the future it will be useful to add commands to list vifs and their vmq mappings and to pin vifs to a vmq, in a similar way to how we list and pin vCPUs.

Renato

> -----Original Message-----
> From: Zhao, Yu [mailto:yu.zhao@intel.com]
> Sent: Thursday, January 31, 2008 1:14 AM
> To: Keir.Fraser@cl.cam.ac.uk; Santos, Jose Renato G
> Cc: xen-devel@lists.xensource.com
> Subject: [PATCH] Virtual machine queue NIC support in control panel
>
> [...]
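For illustration only (hypothetical syntax, not from the posted patch), the naming Renato proposes would bind a vif to a physical device without promising a dedicated queue:

  # Hypothetical: bind the vif to a physical device; netback decides whether
  # the vif gets its own queue or shares the default one.
  vif = [ 'mac=00:16:3e:12:34:56, bridge=xenbr0, pdev=eth2' ]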
Zhao, Yu
2008-Feb-03 06:36 UTC
RE: [Xen-devel] [PATCH] Virtual machine queue NIC support in control panel
> On Thu, 2008-01-31 at 17:14 +0800, Zhao, Yu wrote:
>> This patch enables virtual machine queue NIC support in the control panel
>> (xm/xend), so users can add or remove a dedicated queue for a guest.
>
> I haven't looked in detail, but the "vmq" option to the vif configuration
> seems to have the same syntax (and similar semantics as far as the user is
> concerned: make this vif go faster using the specified physical device) as
> the "accel" option. I wonder if the two could be combined?
>
> Kieran

Thanks for the suggestion. We could make "accel" more general, so that these two options (and other options to come) can be combined.

Currently "accel" only supports static configuration, which means the user has to set its value before the associated VIF is allocated. I'd like to turn "accel" into a dynamic option, so the user can enable an accelerator for a VIF even while that VIF is running.

The content of "accel" could also be in a multi-column format rather than just a NIC name. For example, if "accel" could recognize "eth_name:feature:parameters", then the user could pass flags to the acceleration plug-in. This gives more flexibility.
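A sketch of the two forms, assuming the option sits in the vif config line as the thread implies; "vmq" and "queues=1" are hypothetical examples of a feature and its parameters:

  # Current form (per this thread): the accelerator is selected by NIC name only.
  vif = [ 'mac=00:16:3e:12:34:56, bridge=xenbr0, accel=eth2' ]

  # Proposed multi-column form, "eth_name:feature:parameters".
  vif = [ 'mac=00:16:3e:12:34:56, bridge=xenbr0, accel=eth2:vmq:queues=1' ]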
Zhao, Yu
2008-Feb-03 07:06 UTC
[Xen-devel] RE: [PATCH] Virtual machine queue NIC support in control panel
Renato,

Thanks for your comments.

"vmq-attach/detach" are intended to associate a device queue with a vif while that vif is running. These two "xm" commands take a physical NIC name and a vif reference, and invoke the low-level utility to do the real work. If the physical NIC doesn't have any available queue, the low-level utility is supposed to return an error, so "xm vmq-attach" will report the failure.

Using the "accel" plug-in framework to do this is a decent solution. However, the "accel" plug-in lacks dynamic association, which means the user cannot set up or change an accelerator for a VIF while that VIF is running (as I mentioned in another email to Kieran Mansley). If we can improve the "accel" plug-in to support this, and other features that may be required by other acceleration technologies, "vmq" and other upcoming acceleration options can converge.

If there are any other comments or suggestions, please feel free to let me know. I'm revising this patch to use "accel" and will send it out later.

Regards,
Yu

> -----Original Message-----
> From: Santos, Jose Renato G [mailto:joserenato.santos@hp.com]
> Sent: Friday, February 01, 2008 2:42 AM
> To: Zhao, Yu; Keir.Fraser@cl.cam.ac.uk
> Cc: xen-devel@lists.xensource.com
> Subject: RE: [PATCH] Virtual machine queue NIC support in control panel
>
> [...]
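For illustration, the commands described above might be invoked roughly as follows; the domain name, vif index, NIC name, and argument order are a hypothetical sketch rather than taken from the patch:

  # Attach a dedicated device queue on eth2 to vif 0 of domain "guest1"
  xm vmq-attach guest1 0 eth2

  # Detach it again
  xm vmq-detach guest1 0 eth2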
Kieran Mansley
2008-Feb-04 09:10 UTC
RE: [Xen-devel] [PATCH] Virtual machine queue NIC support in control panel
On Sun, 2008-02-03 at 14:36 +0800, Zhao, Yu wrote:
> Currently "accel" only supports static configuration, which means the user
> has to set its value before the associated VIF is allocated. I'd like to
> turn "accel" into a dynamic option, so the user can enable an accelerator
> for a VIF even while that VIF is running.

That sounds like a good idea, and should be pretty straightforward. It would need a watch on the accel xenbus entry in drivers/xen/netback/accel.c, and then some care to ensure that any previous accelerator is removed before the new one gets added when the watch fires. The frontend already watches the configured accelerator, so that side of things should just work.

> The content of "accel" could also be in a multi-column format rather than
> just a NIC name. For example, if "accel" could recognize
> "eth_name:feature:parameters", then the user could pass flags to the
> acceleration plug-in. This gives more flexibility.

I'm not sure there's a standard xenstore way of doing things like that, but I'd suggest having the features and parameters as separate entries in xenstore; that would make the implementation simpler, since you wouldn't have to parse the option to work out what had changed when the configuration is modified.

Kieran
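A sketch of the two xenstore layouts being compared; the vif backend path is the usual one, but the key names other than "accel" are hypothetical:

  # Single packed string (backend must re-parse it to see what changed):
  /local/domain/0/backend/vif/<domid>/<handle>/accel = "eth2:vmq:queues=1"

  # Separate entries (each can be watched and changed independently):
  /local/domain/0/backend/vif/<domid>/<handle>/accel         = "eth2"
  /local/domain/0/backend/vif/<domid>/<handle>/accel-feature = "vmq"
  /local/domain/0/backend/vif/<domid>/<handle>/accel-params  = "queues=1"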
Santos, Jose Renato G
2008-Feb-04 23:30 UTC
[Xen-devel] RE: [PATCH] Virtual machine queue NIC support in control panel
> -----Original Message-----
> From: Zhao, Yu [mailto:yu.zhao@intel.com]
> Sent: Saturday, February 02, 2008 11:07 PM
> To: Santos, Jose Renato G
> Cc: xen-devel@lists.xensource.com; Keir.Fraser@cl.cam.ac.uk
> Subject: RE: [PATCH] Virtual machine queue NIC support in control panel
>
> [...]

I think we need to consider two use cases:

1) The system automatically controls the allocation of device queues. In this mode, some policy in netback allocates vifs to device queues. Initially this can be a very simple policy that allocates device queues on a first-come, first-served basis until all the queues are used, after which new vifs are mapped to a default shared queue (see the sketch after this message). Over time we can have a more dynamic scheme that uses traffic measurements to change queue assignments on the fly.

2) The user controls the allocation of device queues. In this mode, the user specifies that a particular vif should use a dedicated device queue. The user will expect the command either to allocate a queue to that vif or to fail if it cannot, and will also expect the system not to dynamically reassign that queue to a different vif. Basically, this pins a device queue to a particular vif and prevents the queue from being used by any other vif.

I think we need to support both cases. We should probably assume case 1 by default and switch to case 2 on an explicit user config option or command. It probably makes sense to start with case 1 only and add support for case 2 later.

For case 1, the configuration parameter or command just needs to bind a vif to a physical device and let the system choose whether the vif uses a dedicated queue or a shared queue. In this case I think we can share the same parameter with the Solarflare accelerator plugin framework, since all we are doing is binding a vif to a physical device.

For case 2, I am not sure it is a good idea to share the same framework with the Solarflare accelerator plugin. Since these are two different mechanisms, it seems better to expose them with different commands or config options. This is really a philosophical question: should the user be able to distinguish between pinning a vif to a device queue versus a device context, or should these be hidden under a higher-level abstraction? And what if the same device supports both the multi-queue model and the direct I/O model?

In any case, we need to be clear about the meaning of each command or parameter: is it just binding a vif to a physical device and letting the system automatically choose the allocation of dedicated device queues, or is it allowing the user to directly assign queues to vifs? Please make this clear in your commands and parameters. We will also probably need a command to list the status of a vif (i.e. whether it is using a dedicated queue or a shared queue).

Thanks,

Regards,
Renato
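A minimal sketch of the first-come, first-served policy described for case 1 above, written in Python for readability; the class, function, and vif names are hypothetical and this is not code from netback or the posted patch:

SHARED_QUEUE = 0  # queue 0 assumed to be the shared/default queue

class QueueAllocator:
    def __init__(self, num_queues):
        # Dedicated queues are 1..num_queues-1; queue 0 stays shared.
        self.free_queues = list(range(1, num_queues))
        self.assignment = {}  # vif id -> queue id

    def attach_vif(self, vif):
        """Give the vif a dedicated queue if one is free, else the shared one."""
        queue = self.free_queues.pop(0) if self.free_queues else SHARED_QUEUE
        self.assignment[vif] = queue
        return queue

    def detach_vif(self, vif):
        """Return the vif's queue to the free pool if it was dedicated."""
        queue = self.assignment.pop(vif)
        if queue != SHARED_QUEUE:
            self.free_queues.append(queue)

# Example: a NIC with 4 queues (1 shared + 3 dedicated)
alloc = QueueAllocator(4)
print(alloc.attach_vif("vif1.0"))  # -> 1 (dedicated)
print(alloc.attach_vif("vif2.0"))  # -> 2 (dedicated)
print(alloc.attach_vif("vif3.0"))  # -> 3 (dedicated)
print(alloc.attach_vif("vif4.0"))  # -> 0 (shared: dedicated queues exhausted)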