I have a QDR IB switch that should support up to 40 Gb/s. After installing
the kernel-ib and Lustre client RPMs on my SuSE nodes I see the following:

hpc102:~ # ibstatus mlx4_0:1
Infiniband device 'mlx4_0' port 1 status:
        default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
        base lid:        0x7
        sm lid:          0x1
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            20 Gb/sec (4X DDR)

Why is this only picking up 4X DDR at 20 Gb/sec? Do the Lustre RPMs not
support QDR? Is there something that I need to do on my side to force
40 Gb/sec on these ports?

Thanks in advance,
-J
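One way to narrow this down, assuming the standard OFED/infiniband-diags
tools are installed, is to ask the HCA itself what silicon and firmware it
is running, independent of the rate the link negotiated. A minimal sketch
(the mlx4_0 device name is taken from the ibstatus output above; everything
else is generic):

  # Show the HCA model ("CA type", e.g. MT26428) and firmware version:
  ibstat mlx4_0

  # Per-port capabilities as libibverbs sees them; active_speed is per
  # lane: 2.5 Gbps = SDR, 5.0 Gbps = DDR, 10.0 Gbps = QDR:
  ibv_devinfo -v | grep -E 'fw_ver|board_id|active_width|active_speed'

If the CA type turns out to be a DDR part, 20 Gb/sec is all it will ever do.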
On Thursday 11 February 2010, Jagga Soorma wrote:
> I have a QDR IB switch that should support up to 40 Gb/s. After installing
> the kernel-ib and Lustre client RPMs on my SuSE nodes I see the following:
>
> hpc102:~ # ibstatus mlx4_0:1
> Infiniband device 'mlx4_0' port 1 status:
>         default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
>         base lid:        0x7
>         sm lid:          0x1
>         state:           4: ACTIVE
>         phys state:      5: LinkUp
>         rate:            20 Gb/sec (4X DDR)
>
> Why is this only picking up 4X DDR at 20 Gb/sec? Do the Lustre RPMs not
> support QDR? Is there something that I need to do on my side to force
> 40 Gb/sec on these ports?

This is a bit OT, but a 20G rate typically means that you have a problem
with one of: switch, HCA, cable. Maybe your HCA is a DDR HCA? Maybe you
need to upgrade the HCA firmware?

/Peter

> Thanks in advance,
> -J
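Expanding on the firmware angle: the Mellanox firmware tools (MFT) can
report exactly what is flashed. A sketch, assuming MFT is installed; the
/dev/mst device path below is only an example and will differ per machine:

  # Start the Mellanox software tools service and list the devices it found:
  mst start
  mst status

  # Query the flashed firmware (device path is illustrative):
  flint -d /dev/mst/mt26428_pci_cr0 query

The FW version and PSID in the output identify the firmware build and
board, which is what the vendor's firmware download pages are keyed on.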
What kind of machine are you using? If it doesn't have a PCIe 2.0 bus it
won't be able to bring the card up to 40 Gb. I have SunFire x4200s with
QDR and DDR cards in them (Lustre routers to DDR IB networks). They are
only able to bring the link up to 20 Gb on the DDR card because of the
PCI bus limitations.

Erik

On Fri, Feb 12, 2010 at 5:09 AM, Peter Kjellstrom <cap at nsc.liu.se> wrote:
> On Thursday 11 February 2010, Jagga Soorma wrote:
>> I have a QDR IB switch that should support up to 40 Gb/s. After installing
>> the kernel-ib and Lustre client RPMs on my SuSE nodes I see the following:
>>
>> hpc102:~ # ibstatus mlx4_0:1
>> Infiniband device 'mlx4_0' port 1 status:
>>         default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
>>         base lid:        0x7
>>         sm lid:          0x1
>>         state:           4: ACTIVE
>>         phys state:      5: LinkUp
>>         rate:            20 Gb/sec (4X DDR)
>>
>> Why is this only picking up 4X DDR at 20 Gb/sec? Do the Lustre RPMs not
>> support QDR? Is there something that I need to do on my side to force
>> 40 Gb/sec on these ports?
>
> This is a bit OT, but a 20G rate typically means that you have a problem
> with one of: switch, HCA, cable. Maybe your HCA is a DDR HCA? Maybe you
> need to upgrade the HCA firmware?
>
> /Peter
>
>> Thanks in advance,
>> -J
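This is easy to verify from the node, assuming lspci is available (the PCI
address and output lines below are illustrative):

  # Find the HCA's PCI address:
  lspci | grep -i mellanox

  # Compare what the slot/card pair can do (LnkCap) with what was
  # actually negotiated (LnkSta); 2.5GT/s per lane is PCIe 1.x,
  # 5GT/s is PCIe 2.0 (root may be needed to see these fields):
  lspci -vv -s 0b:00.0 | grep -E 'LnkCap|LnkSta'

The arithmetic: a PCIe 1.x x8 link moves roughly 16 Gb/s of payload after
8b/10b encoding, while 4X QDR carries 32 Gb/s of data, so a QDR HCA in a
PCIe 1.x slot simply cannot be fed at QDR rates.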
Correction: They are only able to bring the link up to *20 Gb* on the
*QDR* card because of the PCI bus limitations.

On Sat, Feb 13, 2010 at 11:37 AM, Erik Froese <erik.froese at gmail.com> wrote:
> What kind of machine are you using? If it doesn't have a PCIe 2.0 bus
> it won't be able to bring the card up to 40 Gb.
> I have SunFire x4200s with QDR and DDR cards in them (Lustre routers
> to DDR IB networks).
> They are only able to bring the link up to 20 Gb on the DDR card
> because of the PCI bus limitations.
> Erik
>
> On Fri, Feb 12, 2010 at 5:09 AM, Peter Kjellstrom <cap at nsc.liu.se> wrote:
>> On Thursday 11 February 2010, Jagga Soorma wrote:
>>> I have a QDR IB switch that should support up to 40 Gb/s. After installing
>>> the kernel-ib and Lustre client RPMs on my SuSE nodes I see the following:
>>>
>>> hpc102:~ # ibstatus mlx4_0:1
>>> Infiniband device 'mlx4_0' port 1 status:
>>>         default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
>>>         base lid:        0x7
>>>         sm lid:          0x1
>>>         state:           4: ACTIVE
>>>         phys state:      5: LinkUp
>>>         rate:            20 Gb/sec (4X DDR)
>>>
>>> Why is this only picking up 4X DDR at 20 Gb/sec? Do the Lustre RPMs not
>>> support QDR? Is there something that I need to do on my side to force
>>> 40 Gb/sec on these ports?
>>
>> This is a bit OT, but a 20G rate typically means that you have a problem
>> with one of: switch, HCA, cable. Maybe your HCA is a DDR HCA? Maybe you
>> need to upgrade the HCA firmware?
>>
>> /Peter
>>
>>> Thanks in advance,
>>> -J
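On the original "force 40 Gb/sec" question: the speeds a port will
negotiate can be inspected, and in principle set, with ibportstate from
infiniband-diags, though this only helps if HCA, cable, switch port, and
PCIe slot are all QDR-capable. A sketch using the base lid 0x7 from the
ibstatus output above:

  # Show LinkSpeedSupported/Enabled/Active for LID 7, port 1:
  ibportstate 7 1 query

  # Enable all speeds (bitmask: 1=SDR, 2=DDR, 4=QDR; 7 = all three)
  # and retrain the link:
  ibportstate 7 1 speed 7
  ibportstate 7 1 reset

If LinkSpeedSupported already tops out at 5.0 Gbps on either end of the
cable, no setting will get the link to 40 Gb/sec.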