I have not had requests for ib-bonding so far. I am sure that since I
let the cat out of the bag someone will ask me about it now...
On Jun 25, 2008, at 3:24 PM, Jody McIntyre wrote:
> Hi all,
>
> I'm having trouble building OFED's ib-bonding module for Software RAID
> OSS kernels and data mover kernels due to OFED bug 651:
> https://bugs.openfabrics.org/show_bug.cgi?id=651
>
> It has been built successfully for compute nodes and normal Lustre
> servers. According to the OFED documentation:
>
>> The ib-bonding driver is a High Availability solution for IPoIB
>> interfaces. It is based on the Linux Ethernet Bonding Driver and was
>> adapted to work with IPoIB. The ib-bonding package contains a bonding
>> driver and a utility called ib-bond to manage and control the driver's
>> operation. The ib-bonding driver comes with the ib-bonding package
>> (run
>> rpm -qi ib-bonding to get the package information).
>
> Is this feature important to anyone? We need to decide between:
>
> 1. Waiting for OFED 1.3.1 in the next version of our stack, which
> contains the fix.
>
> 2. Patching ib-bonding ourselves with the patch from bug 651.
>
> Obviously option 1 is less work, and slightly less risky since we'd be
> staying with a stock version of OFED. This is my preference if nobody
> needs ib-bonding on software RAID OSSes or data movers.
>
> Cheers,
> Jody
>
> --
> Jody McIntyre - Linux Kernel Engineer, Sun HPC
> _______________________________________________
> Linux_hpc_swstack mailing list
> Linux_hpc_swstack at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/linux_hpc_swstack