When I was setting this up, I found that simply specifying multiple interfaces does not make Lustre use them all. I am sure someone more knowledgeable will correct me if I am wrong, but you have to distribute your clients across different subnets to get them to use the different connections. Otherwise Lustre will only use the second interface if the first one goes down, not when it is saturated. I ended up just using the Linux bonding driver.

Ken Smith wrote:
> Thanks for clarifying that.
>
> Also, are there any performance "gotchas" in general when using multiple
> interfaces?
>
> For instance, in my current test-bed environment I have a single MDS, an
> OSS, and one client; the OSS and MDS have two 100Mb/s NICs, while the
> client has a single 100Mb/s NIC. When running bonnie++ on the mounted
> filesystem on the client, I see little or no difference between one and
> two interfaces; that is, throughput is always about 11MB/s.
>
> Cheers,
> Ken Smith
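P.S. For reference, a minimal sketch of a bonding setup, assuming a Red Hat-style distribution; the device names, bonding mode, and addresses are examples only:

    # /etc/modprobe.conf -- load the bonding driver for bond0
    alias bond0 bonding
    options bond0 mode=balance-rr miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
    DEVICE=bond0
    IPADDR=192.168.1.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- enslave eth0
    # (do the same for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Note that mode=balance-rr generally needs matching link-aggregation support on the switch; balance-alb is an alternative that does not.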
On Dec 09, 2005 16:04 -0600, Ken Smith wrote:
> For those of you out there running on two or more interfaces, is it
> preferable to have multiple nids per node, or multiple hostaddrs? E.g.,
> should the lmc usage look like so:
>
> lmc -m lustre.xml --add net --node oss01 --nid nid01oss01 \
>     --hostaddr 192.168.1.1 --nettype tcp
> lmc -m lustre.xml --add net --node oss01 --nid nid02oss01 \
>     --hostaddr 192.168.1.2 --nettype tcp
>
> Or alternatively:
>
> lmc -m lustre.xml --add net --node oss01 --nid nid01oss01 \
>     --hostaddr 192.168.1.1 --hostaddr 192.168.1.2 --nettype tcp

I believe both are equivalent, though I prefer the latter because it makes clear that both addresses belong to the same node; with the former you have to notice that the two "--node" values are the same.

Cheers, Andreas

--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
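To make the latter concrete: if the two OSS interfaces sit on different subnets and the clients are split between those subnets (so that both links actually carry traffic, rather than one sitting idle as a failover path), a hypothetical set of lmc commands might look like the following; all node names, nids, and addresses here are made up:

    # OSS with one nid and two addresses, one per subnet
    lmc -m lustre.xml --add net --node oss01 --nid nid01oss01 \
        --hostaddr 192.168.1.1 --hostaddr 192.168.2.1 --nettype tcp

    # clients split between the two subnets
    lmc -m lustre.xml --add net --node client01 --nid nidclient01 \
        --hostaddr 192.168.1.101 --nettype tcp
    lmc -m lustre.xml --add net --node client02 --nid nidclient02 \
        --hostaddr 192.168.2.101 --nettype tcp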
Thanks for clarifying that.

Also, are there any performance "gotchas" in general when using multiple interfaces?

For instance, in my current test-bed environment I have a single MDS, an OSS, and one client; the OSS and MDS have two 100Mb/s NICs, while the client has a single 100Mb/s NIC. When running bonnie++ on the mounted filesystem on the client, I see little or no difference between one and two interfaces; that is, throughput is always about 11MB/s.

Cheers,
Ken Smith
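A quick way to see whether the second interface carries any traffic during such a test is to watch the per-interface byte counters on the OSS while bonnie++ runs on the client. A minimal sketch, with example interface names and mount point:

    # on the client: run bonnie++ against the Lustre mount
    bonnie++ -d /mnt/lustre -s 1024 -u nobody

    # on the OSS: watch byte counters for both NICs once per second
    watch -n 1 'grep -E "eth0|eth1" /proc/net/dev'

Note also that a single 100Mb/s NIC on the client caps end-to-end throughput at roughly 11-12MB/s no matter how many interfaces the servers have, which is consistent with the figure above.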
For those of you out there running on two or more interfaces, is it preferable to have multiple nids per node, or multiple hostaddrs? E.g., should the lmc usage look like so:

    lmc -m lustre.xml --add net --node oss01 --nid nid01oss01 \
        --hostaddr 192.168.1.1 --nettype tcp
    lmc -m lustre.xml --add net --node oss01 --nid nid02oss01 \
        --hostaddr 192.168.1.2 --nettype tcp

Or alternatively:

    lmc -m lustre.xml --add net --node oss01 --nid nid01oss01 \
        --hostaddr 192.168.1.1 --hostaddr 192.168.1.2 --nettype tcp

Or has anyone noticed any difference?

Cheers,
Ken Smith