There are a number of LNET routing examples in the Lustre documentation, but
the actual LNET topology requirements don't seem to be spelled out. In the
simplest terms, what are the actual network topology requirements for the
MDS, OSSes, and clients in order for Lustre to work correctly?

MDS NIDs:
    10.13.24.40@o2ib
    10.13.16.40@tcp      (ip/ethernet)

OSS NIDs:
    10.13.24.42@o2ib
    10.13.31.229@tcp     (ipoib subnet, behind the ipoib gateway)

IB client NIDs:
    10.13.25.150@o2ib

Ethernet client NIDs:
    10.13.18.152@tcp     (ip/ethernet)

There is no problem with the o2ib side of things. The IB network is
10.13.24.0/21, so the clients, MDS, and OSSes are all on the same "network".

The ethernet clients have no access to the 10.13.24.0 network. They
communicate with the MDS directly on the 10.13.16.0/22 network. However,
the OSSes on the ethernet side each sit on their own subnetted /30 network,
each of which is bridged by an IPoIB gateway between the OSSes and the
ethernet clients. So the MDS and the clients talk to the OSSes through the
IPoIB gateway, and the OSSes can talk to each other through the IPoIB
gateway (though on separate subnets - there is a good reason for this,
trust me :) ). So there is complete connectivity in the "IP" sense.

However, after mounting the Lustre file system on the ethernet clients
(which succeeds), the clients are always evicted immediately following the
obd_timeout period with a message such as:

    Lustre: ufhpc-MDT0000: haven't heard from client
    f6c6db8a-6fbc-6464-6261-af9ecfc0cb60 (at 10.13.18.152@tcp) in 2462
    seconds. I think it's dead, and I am evicting it.

Of course, once that happens, the client can no longer write to the file
system, although it works fine for the "obd_timeout" period. We don't
completely understand this, because all the players can talk to each other
via IP. Why would the MDT not be "hearing" from the ethernet client? They
are on the same IP network. It seems like the problem is in the LNET
topology? Do we really have to introduce an LNET router even though we have
complete IP connectivity among the various components (MDS, OSS, clients)?

I'm hoping that someone more familiar with the LNET abstraction layer can
help us understand what the problem is.

Thanks,

Charlie Taylor
UF HPC Center
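P.S. In case it helps, the lnet module options on the various node types
look roughly like the sketch below. This is from memory rather than copied
from the actual configs, so treat the interface names (eth0/ib0) and the
network labels as approximations only:

    # MDS - dual-homed on native IB and ethernet
    options lnet networks="o2ib(ib0),tcp(eth0)"

    # OSS - native IB plus a tcp NID on the IPoIB interface
    options lnet networks="o2ib(ib0),tcp(ib0)"

    # IB client
    options lnet networks="o2ib(ib0)"

    # Ethernet client
    options lnet networks="tcp(eth0)"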
On Tue, Nov 27, 2007 at 07:37:55AM -0500, Charles Taylor wrote:
> There are a number of LNET routing examples in the Lustre documentation,
> but the actual LNET topology requirements don't seem to be spelled out.
> In the simplest terms, what are the actual network topology requirements
> for the MDS, OSSes, and clients in order for Lustre to work correctly?

The simplest topology is one where all nodes reside in the same LNET
network - as in your IB network, where they're all in @o2ib.

[......]

> However, after mounting the Lustre file system on the ethernet clients
> (which succeeds), the clients are always evicted immediately following
> the obd_timeout period with a message such as...
>
> Lustre: ufhpc-MDT0000: haven't heard from client
> f6c6db8a-6fbc-6464-6261-af9ecfc0cb60 (at 10.13.18.152@tcp) in 2462
> seconds. I think it's dead, and I am evicting it.
>
> Of course, once that happens, the client can no longer write to the
> file system, although it works fine for the "obd_timeout" period. We
> don't completely understand this, because all the players can talk to
> each other via IP. Why would the MDT not be "hearing" from the
> ethernet client? They are on the same IP network. It seems like the

Can you please run 'lctl ping client_ip@tcp' on the MDT and
'lctl ping mdt_ip@tcp' on the client? It would also be helpful to enable
low-level network error console logging on both nodes before running the
commands:

    echo +neterror > /proc/sys/lnet/printk

> problem is in the LNET topology? Do we really have to introduce an
> LNET router even though we have complete IP connectivity among the
> various components (MDS, OSS, clients)?

No, you don't have to. But for better performance you will perhaps need an
LNET router between the TCP clients and the OSSes. The traffic goes through
different protocol stacks in each case.

With the IPoIB gateway:

    Client (LNET:SOCKLND:TCP:IP) =>
    IPoIB GW (IP:IPoIB:IB) =>
    OSS (IB:IPoIB:IP:TCP:SOCKLND:LNET)

With an LNET router:

    Client (LNET:SOCKLND:TCP:IP) =>
    LNET RTR (IP:TCP:SOCKLND:LNET:O2IBLND:IB) =>
    OSS (IB:O2IBLND:LNET)

The total number of protocol stacks involved in the two paths is the same,
but:

1. The extra stacks on an LNET router (SOCKLND:LNET:O2IBLND) are rather
   lightweight.
2. Running o2iblnd over IB should yield better performance than running
   socklnd over IPoIB.

HTH,
Isaac
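P.S. If you do end up trying an LNET router, the configuration is just the
usual lnet module options on each node. A rough sketch follows - it is only
an illustration, and the interface names, network labels, and the router
addresses (192.168.x.x) are placeholders, not your real ones:

    # LNET router node - one interface on each network, forwarding enabled
    options lnet networks="tcp(eth0),o2ib(ib0)" forwarding="enabled"

    # Ethernet client - reach the @o2ib net via the router's tcp NID
    options lnet networks="tcp(eth0)" routes="o2ib 192.168.16.1@tcp"

    # OSS - reach the @tcp net via the router's o2ib NID
    options lnet networks="o2ib(ib0)" routes="tcp 192.168.24.1@o2ib"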