On Thu, 6 Oct 2011, Nick Khamis wrote:
> Hello Everyone,
>
> We are looking to assemble a VM cluster that uses the InfiniBand
> interconnect to work with RDMA. The two important issues we would like
> some clarity on are STONITH and interconnect support, plus some things
> to keep in mind when trying to set up SR-IOV and PCI passthrough.
Nick--what InfiniBand adapters are you using? We have Mellanox adapters
and are having real problems getting SR-IOV-capable drivers for them.
Mellanox promised them over a year ago and they are still not available
for Xen or KVM, just VMware. In fact the drivers don't work with
PCI passthrough either.
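For reference, this is roughly what PCI passthrough of the HCA would
look like under Xen if the drivers cooperated -- a minimal sketch only,
and the address 0000:04:00.0 is just a placeholder for whatever lspci
shows on your box:

    # On dom0: hide the HCA from dom0's own driver and hand it to
    # pciback (driver directory may be "pciback" or "xen-pciback"
    # depending on the kernel). Can also be done at boot with the
    # pciback.hide= / xen-pciback.hide= dom0 kernel parameter.
    echo 0000:04:00.0 > /sys/bus/pci/drivers/mlx4_core/unbind
    echo 0000:04:00.0 > /sys/bus/pci/drivers/pciback/new_slot
    echo 0000:04:00.0 > /sys/bus/pci/drivers/pciback/bind

    # In the domU config file, pass the whole device through:
    #   pci = [ '0000:04:00.0' ]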
>
> In terms of STONITH, I know this can be supported using virt; however,
> is it possible to have the STONITH device that is installed on dom0
> passed through to domU? That way we can have the different VMs
> connected using STONITH.
>
> In regards to our interconnect of choice, how well does Xen support a
> Mellanox card through, for example, OFED? Also, how do VT-d and
> SR-IOV fit into our model?
See above--as far as I know, Mellanox cards do not support SR-IOV or
PCI passthrough at all at the moment, although there are a lot of
presentations you can google on the web saying that their techs are
working on it. And when the driver does come through, the stuff they
are working on won't be open-source and will only present a thick
network pipe to the VM, not something that's recognizable by the mlx4
drivers.
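For comparison, on adapters whose drivers do support SR-IOV, enabling
virtual functions is normally just a module parameter or sysfs knob,
and the VFs then show up as separate PCI functions you can pass
through. A rough illustration only, since none of this works with the
current Mellanox bits (device address again a placeholder):

    # Check whether the card even advertises the SR-IOV capability:
    lspci -s 04:00.0 -vvv | grep -i "Single Root I/O Virtualization"

    # Drivers that support it typically take a module parameter
    # (e.g. something like num_vfs= for mlx4_core, once such support
    # exists) or, on newer kernels, the generic sysfs knob:
    echo 4 > /sys/bus/pci/devices/0000:04:00.0/sriov_numvfs

    # Each VF then appears as its own PCI function and can be
    # assigned to a guest the same way as a whole device:
    lspci | grep -i mellanox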
>
> Also, when selecting our hardware for such an architecture, can you
> please let us know what to keep in mind? What is important for the
> hardware to support, e.g. IOMMU (for CPUs) or FLR (for devices)?
If you haven't actually bought Mellanox yet, try to squeeze them
and say that you won't buy unless they come up with the drivers.
Might not hurt to talk to their competitors as well.
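As for whatever boxes you do buy: IOMMU (VT-d / AMD-Vi) support and a
device's FLR capability can both be verified from dom0 before you
commit to a design. A quick sketch, with the PCI address once more a
placeholder:

    # CPU/chipset side: confirm the hypervisor actually found an IOMMU
    # (it also has to be enabled in the BIOS/firmware):
    xl dmesg | grep -i -e iommu -e vt-d

    # On bare Linux you would look for DMAR/IVRS entries instead:
    dmesg | grep -i -e dmar -e ivrs

    # Device side: check whether the adapter advertises Function Level
    # Reset (FLReset+ in the DevCap line of the lspci output):
    lspci -s 04:00.0 -vv | grep -i flreset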
Steve Timm
>
> I understand that this may be beyond the scope of a developer mailing
> list, and I thank you in advance for any expertise you can share.
>
> Your Help is Greatly Appreciated,
>
> Nick.
>
--
------------------------------------------------------------------
Steven C. Timm, Ph.D (630) 840-8525
timm@fnal.gov http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users