This was an issue with the HCA firmware. The Mellanox card was not working
well with the QLogic switch. The support guys at Mellanox were pretty
helpful and came up with new firmware for us to try, which solved my
auto-negotiation issues.
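
For anyone who hits the same thing: the firmware level on the HCA can be
checked, and a vendor-supplied image burned, with mstflint. This is only a
rough sketch using the PCI address from the lspci output quoted below; the
image filename is just a placeholder for whatever Mellanox sends you:

hpc116:~ # mstflint -d 10:00.0 query
hpc116:~ # mstflint -d 10:00.0 -i <new-firmware-image>.bin burn

After a reboot, ibstat mlx4_0 1 should report Rate: 40, and ibportstate 20 1
(LID 20 / port 1 taken from the iblinkinfo output quoted below) will show
which link speeds the port is actually advertising.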
On Thu, Feb 11, 2010 at 1:44 PM, Jagga Soorma <jagga13 at gmail.com> wrote:
> Yet more information. Looks like the switch thinks that this could be set
> to 10Gbps (QDR):
>
> hpc116:/mnt/SLES11x86_64 # iblinkinfo.pl -R | grep -i reshpc116
> 1 34[ ] ==( 4X 5.0 Gbps Active / LinkUp)==> 20 1[ ]
> "hpc116 HCA-1" ( Could be 10.0 Gbps)
>
> -J
>
>
> On Thu, Feb 11, 2010 at 1:26 PM, Jagga Soorma <jagga13 at gmail.com> wrote:
>
>> More information:
>>
>> hpc116:/mnt/SLES11x86_64 # lspci | grep -i mellanox
>> 10:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX IB QDR, PCIe 2.0 5GT/s] (rev a0)
>>
>> hpc116:/mnt/SLES11x86_64 # ibstatus
>> Infiniband device 'mlx4_0' port 1 status:
>> default gid: fe80:0000:0000:0000:0002:c903:0006:9109
>> base lid: 0x14
>> sm lid: 0x1
>> state: 4: ACTIVE
>> phys state: 5: LinkUp
>> rate: 20 Gb/sec (4X DDR)
>>
>> Infiniband device 'mlx4_0' port 2 status:
>> default gid: fe80:0000:0000:0000:0002:c903:0006:910a
>> base lid: 0x0
>> sm lid: 0x0
>> state: 1: DOWN
>> phys state: 2: Polling
>> rate: 10 Gb/sec (4X)
>>
>> hpc116:/mnt/SLES11x86_64 # ibstat
>> CA 'mlx4_0'
>> CA type: MT26428
>> Number of ports: 2
>> Firmware version: 2.6.100
>> Hardware version: a0
>> Node GUID: 0x0002c90300069108
>> System image GUID: 0x0002c9030006910b
>> Port 1:
>> State: Active
>> Physical state: LinkUp
>> Rate: 20
>> Base lid: 20
>> LMC: 0
>> SM lid: 1
>> Capability mask: 0x02510868
>> Port GUID: 0x0002c90300069109
>> Port 2:
>> State: Down
>> Physical state: Polling
>> Rate: 10
>> Base lid: 0
>> LMC: 0
>> SM lid: 0
>> Capability mask: 0x02510868
>> Port GUID: 0x0002c9030006910a
>>
>>
>>
>> On Thu, Feb 11, 2010 at 1:21 PM, Jagga Soorma <jagga13 at gmail.com> wrote:
>>
>>> Hi Guys,
>>>
>>> Wanted to give a bit more information. For some reason, the transfer
>>> rates on my IB interfaces are auto-negotiating at 20Gb/s (4X DDR).
>>> However, these are QDR HCAs.
>>>
>>> Here is the hardware that I have:
>>>
>>> HP IB 4X QDR PCI-e G2 Dual Port HCA
>>> HP 3M 4X DDR/QDR QSFP IB Cu Cables
>>> Qlogic 12200 QDR switch
>>>
>>> I am using all the Lustre-provided RPMs on my servers (RHEL 5.3) and
>>> clients (SLES 11). All the servers in this cluster are auto-negotiating
>>> to 20Gb/s (4X DDR) when they should be negotiating 40Gb/s (4X QDR).
>>>
>>> Are any others out there using QDR? If so, did you run into anything
>>> similar to this? Is there any specific configuration needed for the
>>> servers to detect the higher rates?
>>>
>>> Thanks in advance for your assistance.
>>>
>>> -J
>>>
>>
>>
>