Hello,

I have been reading and found this "xen IB". What exactly is that? How do
I use it?

Thanks,

Andre Pfeiffer
It was a chemistry project funded by the American Chemical Society (ACS).
Not sure if it actually became mainstream.

Cheers,
Nick.

2011/10/31 "André Almdeida Pfeiffer" <andre@nortecnet.com.br>:
> I have been reading and found this "xen IB". What exactly is that? How do
> I use it?
>> I have been reading and found this "xen IB". What exactly is that? How do
>> I use it?

AFAIK, it wasn't merged into the tree, in part because it had severe
security holes. It is based on an outdated tree, and I had a really hard
time getting it to work.

The idea was to allow paravirtualising IB devices. Basically you had:

- a PV driver that talked to the dom0 driver
- the PV driver could be used to request UARs on the physical card, and
  map them into the domU's space
- the domU could then use the UAR to send/receive data straight through
  the InfiniBand card

IIRC support was limited to a rat... The performance-critical path would go
straight to the hardware, while the rest would go through Xen.

If you're trying to get IB to work in dom0, some will work out of the box,
some won't. See
http://www.mail-archive.com/linux-rdma@vger.kernel.org/msg06851.html

I didn't follow the latest evolutions, but new PCI handling code was
supposed to fix the problem I encountered.

--
Vivien Bernet-Rollande
Systems & Networking Engineer
Alter Way Hosting
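A quick sanity check for dom0-only IB, for anyone following along: load the
stock drivers and query the port state. The module and tool names below
assume a ConnectX-family card with the in-kernel mlx4 driver and the usual
OFED userspace utilities installed, so treat it as a sketch and adjust for
your hardware and distribution:

    # load the HCA driver and IPoIB in the dom0 kernel
    modprobe mlx4_core
    modprobe mlx4_ib
    modprobe ib_ipoib

    # confirm the card is visible and the port comes up
    lspci | grep -i mellanox
    ibstat          # from infiniband-diags
    ibv_devinfo     # from libibverbs-utils

If ibstat shows the port as Active, dom0-side IB is working and the
remaining question is how (or whether) to expose it to the guests.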
Hello Vivien,

Just out of curiosity, which IB devices actually support things like
PCI pass-through or SR-IOV?

Thanks in advance,

Nick.
Hi,

Mellanox ConnectX-2 and ConnectX-3 cards support SR-IOV. Not sure about
other vendors.

Joseph.

On 3 November 2011 03:47, Nick Khamis <symack@gmail.com> wrote:
> Just out of curiosity, which IB devices actually support things like
> PCI pass-through or SR-IOV?

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 |
Mobile: 0428 754 846
Thank you so much for your response. Are you going by the cute images and
slides on the net, or have you seen them at work? Which card and firmware
version are you using? Last I checked, there was no firmware out.

Nick.
Hi,

We use Mellanox ConnectX-2 here at OrionVM. Currently we use a beta driver
package sent to us by Mellanox, but the SR-IOV drivers should be mainstream
shortly.

If you would like to try the firmware out, get in contact with Liran Liss
or Alex Neefus at Mellanox - they can hook you up with a driver/firmware
package. SR-IOV is compatible with the other add-on firmwares too, so you
can do PXEoIB as well as use the new SR-IOV functions.

Joseph.

On 5 November 2011 23:05, Nick Khamis <symack@gmail.com> wrote:
> Are you going by the cute images and slides on the net, or have you seen
> them at work? Which card and firmware version are you using? Last I
> checked, there was no firmware out.

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 |
Mobile: 0428 754 846
Could anyone define what "support" means?

We bought Mellanox ConnectX-2 cards 1+ years ago and we are still waiting
for the drivers which they promised at that time. The last we talked to
them, the only supported drivers are for VMware, and even those are not
using the SR-IOV feature. What they are working on for KVM and Xen,
according to them, is just a driver that presents the IB card to the VM as
a big network pipe, not something that is recognizable with any regular
Mellanox driver or that can be used with regular IB MPI drivers.

I hope I'm wrong and someone has got it working somewhere, because we have
a lot of cash sunk into these cards, but the latest we heard from Mellanox
is that they still have both software and firmware issues to be resolved.

Steve Timm

On Sat, 5 Nov 2011, Joseph Glanville wrote:
> Mellanox ConnectX-2 and ConnectX-3 cards support SR-IOV.
> Not sure about other vendors.

--
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.
Oh, and by the way, the virtualization drivers Mellanox has been working on
all this time are not going to be open source when they do come out.

Steve

--
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.
Do you know how many virtual functions are supported? I actually set up the
current 30-node cluster with Intel X540 10G cards, the reason being that
they come with 64 VFs. Which means the client can add twice as many servers
and still be good to go.

Ninus.
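For what it's worth, on the Intel side the VFs are typically switched on
through a module parameter of the ixgbe driver; the exact VF count and
parameter support depend on the driver and kernel version, so the numbers
below are only indicative:

    # enable 63 VFs per port on an X540/82599-class adapter
    modprobe ixgbe max_vfs=63

    # the VFs then show up as extra PCI functions
    lspci | grep -i "virtual function"

(Later kernels also expose a sysfs knob, sriov_numvfs, for the same thing,
but the module parameter was the usual route at the time.)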
The driver package we have supports 7 VFs per port. So not exactly as many
as the Intel X540, but sufficient for our needs.

Joseph.

On 6 November 2011 09:14, Nick Khamis <symack@gmail.com> wrote:
> Do you know how many virtual functions are supported? I actually set up
> the current 30-node cluster with Intel X540 10G cards, the reason being
> that they come with 64 VFs.

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 |
Mobile: 0428 754 846
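For reference, in the later mainline mlx4 driver SR-IOV for the ConnectX
family is also enabled via module parameters; the beta package mentioned
above may use different names, so this is only a sketch of the general
shape:

    # ask mlx4_core to enable 7 VFs and keep dom0 from probing them
    # (how VFs map to ports depends on the driver/firmware version;
    #  requires SR-IOV-capable firmware and an IOMMU-enabled host)
    cat > /etc/modprobe.d/mlx4.conf <<EOF
    options mlx4_core num_vfs=7 probe_vf=0
    EOF
    modprobe -r mlx4_ib mlx4_core
    modprobe mlx4_core

    # the VFs appear as additional functions on the HCA's PCI slot
    lspci | grep -i mellanox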
Joseph,

Thank you so much for your time. I know your efforts of getting a
high-throughput VM architecture together must have caused you guys at Orion
a great deal of headache. It's individuals like you that constantly raise
the bar by being amongst the first to test the nosebleeds. If it's not too
much to ask, please keep us updated on what type of numbers you are
getting, and when the drivers plateau in terms of stability, added VFs,
etc.

Kind regards,

Ninus Khamis.

On Sun, Nov 6, 2011 at 4:00 AM, Joseph Glanville
<joseph.glanville@orionvm.com.au> wrote:
> The driver package we have supports 7 VFs per port.
> So not exactly as many as the Intel X540, but sufficient for our needs.
Hi Nick,

I received correspondence from Mellanox today that SR-IOV will be available
in the production drivers in Q1 next year. It is also worth noting that
OFED 2 is due in line with the Linux 3.2 kernel release.

I will keep the list in the loop regarding extensions to the VF
capabilities etc.

Joseph.

On 6 November 2011 23:19, Nick Khamis <symack@gmail.com> wrote:
> If it's not too much to ask, please keep us updated on what type of
> numbers you are getting, and when the drivers plateau in terms of
> stability, added VFs, etc.

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 |
Mobile: 0428 754 846
Hi Steven,

Sorry I missed your post, but I thought I would clarify what we have
working, both for the benefit of the list and yourself.

We are currently not using SR-IOV in production. I did however build a test
stack consisting of a ConnectX-2 card, an IOMMU-enabled Intel server (Intel
VT-d) and Xen.org source 4.1. This setup allowed me to create 7 VFs per
port (this was a dual-port card) for a total of 14 virtual adapters.

The beta package I was using was dated March 2011 and seemed to be stable.
It presents 100% virtualized IB adapters that can either be used directly
or passed through to virtual machines. As the VFs appear on the PCI bus,
you must have a server supporting an IOMMU (most recent servers). You can
then use the standard OFED 1.5.X packages within the virtual machines to
access the IB fabric.

The reasons this hasn't been used in production basically fall on the
following:

1) The driver was still beta - though it seemed to be stable, we never
   deploy anything that is not certified to be stable.

2) As far as I can see there isn't a security model in place that would
   allow us to use it in a multi-tenant environment. This might not be an
   issue for most users, but we are a public IaaS platform.

3) Our current software stack isn't able to make use of IOMMU PCI
   pass-through (a limitation of our stack, not the Mellanox hardware; we
   have since resolved this).

The performance is as good as native, and the tooling is simple - standard
PCI pass-through, easy to do with both xl or legacy xm.

I am looking forward to the production release so I can employ SR-IOV on
our command and control stack and, into the future, when enough security
features are available, offer it to clients on our multi-tenant platform.

Joseph.

On 6 November 2011 05:19, Steven Timm <timm@fnal.gov> wrote:
> Could anyone define what "support" means?
> We bought Mellanox ConnectX-2 cards 1+ years ago and we are still waiting
> for the drivers which they promised at that time.

--
Founder | Director | VP Research
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 |
Mobile: 0428 754 846
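To make the pass-through step concrete: once the VFs show up on dom0's PCI
bus, handing one to a guest is plain PCI pass-through. The BDF and the
guest name below are made-up placeholders, and the pciback module name and
exact xl/xm syntax vary a little between Xen and kernel versions, so again
this is only a sketch:

    # hide the VF from dom0 by binding it to pciback
    # (pvops kernels call the module xen-pciback; unbind the VF from any
    #  dom0 driver first if it is already claimed)
    modprobe xen-pciback
    BDF=0000:02:00.1                # placeholder VF address
    echo $BDF > /sys/bus/pci/drivers/xen-pciback/new_slot
    echo $BDF > /sys/bus/pci/drivers/xen-pciback/bind

    # either list it in the guest config:
    #   pci = [ '0000:02:00.1' ]
    # or hot-plug it into a running domU:
    xl pci-attach mydomu 0000:02:00.1   # "mydomu" is a placeholder guest name;
                                        # xm pci-attach works similarly

Inside the guest you then load the normal mlx4/OFED stack against the VF,
exactly as described above.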
On Thu, 10 Nov 2011, Joseph Glanville wrote:

> We are currently not using SR-IOV in production. I did however build a
> test stack consisting of a ConnectX-2 card, an IOMMU-enabled Intel server
> (Intel VT-d) and Xen.org source 4.1. This setup allowed me to create 7
> VFs per port (this was a dual-port card) for a total of 14 virtual
> adapters.

What firmware revision is your card? Compatible with this one?

02:00.0 InfiniBand: Mellanox Technologies MT26418 [ConnectX VPI PCIe 2.0
5GT/s - IB DDR / 10GigE] (rev b0)

> The beta package I was using was dated March 2011 and seemed to be
> stable. It presents 100% virtualized IB adapters that can either be used
> directly or passed through to virtual machines. As the VFs appear on the
> PCI bus, you must have a server supporting an IOMMU (most recent
> servers).

We are running Intel E5640 (Westmere) with the 5520 series chipset; that
should be good enough, shouldn't it? What software, if any, other than the
firmware is needed in dom0?

> The performance is as good as native, and the tooling is simple -
> standard PCI pass-through, easy to do with both xl or legacy xm.

Does this imply that there are some native PCI pass-through routines
available in higher versions of Xen that are not accessible via libvirt,
for instance?

Steve Timm

--
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users