similar to: Iser support for XEN

Displaying 6 results from an estimated 6 matches similar to: "Iser support for XEN"

2007 Oct 07
2
Local network between two DomUs without a Bridge
Hello Xen-Users! Is there a way to connect two DomUs on the same physical host without using a bridge? Figuratively speaking, I'm looking for a cross-over cable between two DomUs. Any ideas are welcome. Volker Jaenisch -- ==================================================== inqbus it-consulting +49 ( 341 ) 5643800 Dr. Volker Jaenisch http://www.inqbus.de Herloßsohnstr. 12
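One common answer to the question above is Xen's routed networking, which wires each domU's vif to dom0 via point-to-point routes instead of a shared bridge. A minimal sketch, assuming classic xend-style configuration; the interface names and addresses are made up for illustration:

```
# Hypothetical sketch: routed (bridge-less) networking between two domUs.
# In /etc/xen/xend-config.sxp, replace the bridge scripts with:
#   (network-script network-route)
#   (vif-script    vif-route)
#
# domU A config file -- vif-route installs a host route to this IP:
vif = [ 'ip=192.168.50.1,script=vif-route' ]
#
# domU B config file:
vif = [ 'ip=192.168.50.2,script=vif-route' ]
```

With IP forwarding enabled in dom0, traffic between the two domUs is routed through dom0 without any bridge device.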
2008 Oct 17
1
Preventing DomU corruption in case of Split-Brain of heartbeat
Hi Xen-Users! We run a large HA Xen system based on heartbeat2. The storage base is an InfiniBand storage cluster exporting iSCSI devices to the frontend HA Xen machines. The iSCSI devices are used as physical devices for the domUs via the block-iscsi mechanism (by the way, thanks for this cool script). Recently we had a split-brain in our heartbeat system. This caused both of our Xen servers to
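For context on the block-iscsi mechanism mentioned above: in xend-era Xen, a custom block hotplug script named block-iscsi in /etc/xen/scripts can be selected with an "iscsi:" prefix in the domU disk line. A minimal sketch; the IQN and device name are invented for illustration:

```
# Hypothetical sketch (xend-era syntax): a domU disk backed by an iSCSI
# target. The 'iscsi:' prefix selects /etc/xen/scripts/block-iscsi, which
# logs into the target and hands the resulting block device to the domU.
disk = [ 'iscsi:iqn.2008-10.de.example:xen-vol0,xvda,w' ]
```

Note that the corruption risk described in the post arises when both cluster nodes start the same domU against the same iSCSI LUN after a split-brain; the standard guard in heartbeat2 deployments is node-level fencing (STONITH), which powers off one node before its resources are taken over.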
2007 Jul 01
2
Xen 3.1: when available?
Hi Debian Xen Maintainers! Thank you for the former work. The 3.0 version of Xen works flawlessly. When can I expect the 3.1 version in, say, experimental? Please give me a roadmap. Best Regards Volker -- ==================================================== inqbus it-consulting +49 ( 341 ) 5643800 Dr. Volker Jaenisch http://www.inqbus.de Herloßsohnstr. 12 041
2009 Apr 29
0
FW: XEN and Infiniband
Hello, I am going to try to install OFED 1.4 on Xen 3.3 and I'd like to realize VMM kernel bypass. Thanks to it, my domU will be able to communicate directly with my HCA and not pass through dom0. How can I realize that? Is there software or something like that to do the kernel bypass? I saw that things like the XEN-IB driver and SoftIB exist; where can I get them? Or is VMM kernel bypass
2008 Feb 29
0
[Fwd: [ofa-general] Announcing the release of MVAPICH 1.0]
Per the announcement from the MVAPICH team, I am pleased to let you know that the MPI-IO support for Lustre has been integrated into the new release of MVAPICH, version 1.0. > - Optimized and high-performance ADIO driver for Lustre > - This MPI-IO support is a contribution from Future Technologies > Group, Oak Ridge National Laboratory. >
2008 Dec 12
0
RE: [ofa-general] Infiniband performance
Hi Jan, I asked almost the exact same question as you about 6 months ago and someone provided some Gen4 results for me (but I can't seem to find them in email); they were a fair bit better than Gen3. With IPoIB, you want connected mode and a large 32KB+ MTU for maximum bandwidth; 1 GByte/sec or more should be possible with Gen 4. Here are some of my original test results on my Opteron
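The connected-mode and large-MTU tuning mentioned above maps to two standard commands on Linux. A minimal sketch, assuming an IPoIB interface named ib0 (the name is an assumption; check with `ip link`):

```shell
# Switch the IPoIB interface from datagram to connected mode.
# Connected mode permits MTUs up to 65520 bytes, versus ~2044 in datagram mode.
echo connected > /sys/class/net/ib0/mode

# Raise the MTU to the connected-mode maximum for best throughput.
ip link set ib0 mtu 65520

# Verify the settings took effect:
cat /sys/class/net/ib0/mode
ip link show ib0
```

These commands require InfiniBand hardware and root privileges; to persist them across reboots, place them in the distribution's network configuration rather than running them by hand.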