On Thu, 2006-04-06 at 12:11 -0700, cliff white wrote:
> Lennard Bakker wrote:
> > With my increasing need of storage, I am looking for solutions to
> > fill this need. And clusterfs looks like a good solution for me :)
> >
> > But I am wondering what kind of hardware setup is used on average. Is
> > this always a SAN (iSCSI?) with a failover OSS setup, or taking a
> > chance on a single-storage OSS? MDS always with failover of a single
> > MDS. ...
>
> Lustre is not a SAN, and attempts to use a SAN as back-end storage
> result in pain. Basically, you want maximum possible IO bandwidth out
> of each OSS machine; file system clients will see the aggregate
> bandwidth of all OSS machines.

I recently took delivery of a number of these devices:

http://www.infrant.com/products_ReadyNAS_NV.htm

It's a SPARC Linux box with 512MB main memory, gigabit ethernet, and (in
my configuration) 1.2TB of usable filesystem. I've been thinking these
would make spiffy (and cute) Lustre OSSes for the low end. I have no
idea if Lustre even runs on SPARC, but at $1.33/GB it would be a
screaming deal, and could do ~100MB/sec/node.

-jwb
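Jeff's back-of-envelope numbers can be sanity-checked with a quick sketch. The per-node price below is a hypothetical figure chosen to be consistent with the quoted ~$1.33/GB; it does not come from the thread. The capacity and per-node throughput are the figures from the message above.

```python
# Rough arithmetic behind the ReadyNAS-as-low-end-OSS idea.
price_per_node_usd = 1600.0    # hypothetical street price per node
usable_gb = 1200.0             # 1.2 TB usable per node, as stated above
per_node_mb_s = 100.0          # ~100 MB/s over gigabit ethernet

cost_per_gb = price_per_node_usd / usable_gb
print(f"cost per GB: ${cost_per_gb:.2f}")   # consistent with ~$1.33/GB

# Lustre clients see roughly the sum of all OSS bandwidths, so even
# modest nodes can add up to respectable aggregate throughput.
for nodes in (4, 8, 16):
    print(f"{nodes:2d} OSS nodes -> ~{nodes * per_node_mb_s:.0f} MB/s aggregate")
```

The point of the sketch is the scaling model, not the exact prices: clients stripe across OSSes, so aggregate bandwidth grows roughly linearly with node count until the network fabric saturates.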
Lennard Bakker wrote:
> With my increasing need of storage, I am looking for solutions to
> fill this need. And clusterfs looks like a good solution for me :)
>
> But I am wondering what kind of hardware setup is used on average. Is
> this always a SAN (iSCSI?) with a failover OSS setup, or taking a
> chance on a single-storage OSS? MDS always with failover of a single
> MDS. ...

Lustre is not a SAN, and attempts to use a SAN as back-end storage
result in pain. Basically, you want maximum possible IO bandwidth out
of each OSS machine; file system clients will see the aggregate
bandwidth of all OSS machines. Typically, the OSS server will have
storage attached via fibre channel, SCSI or another high-performing
interconnect. We haven't seen iSCSI, but it should be doable. (I'm
still not sure iSCSI counts as 'high performance' ;) )

Failover setups require two nodes connected to shared storage, again
typically dual-tailed SCSI or fibre. Failover setups require 'fencing'
to prevent simultaneous access to the shared storage; we work with
standard HA software (Heartbeat, CluManager, etc.). The most popular
'fencing' method is STONITH, which usually requires remote power
control of the failover nodes.

For data integrity, we do recommend using array devices, such as DDN,
Hitachi, etc., which provide RAID functionality to protect data in the
event of single-drive failures. The system is generally robust; we have
users running both with and without failover.

Typically OSS nodes are configured to maximize IO throughput; they do
not require large memory. We see many sites with 2- or 4-CPU machines
as OSS servers. We also see large (Altix) machines used as servers.
64-bit machines are preferred due to the greater IO bandwidth possible
with the wider path. MDS machines do much less IO, and in general can
be smaller than OSS machines. The MDS will take advantage of large
memory with some workloads.

Hope this helps.
cliffw

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@clusterfs.com
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
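A minimal sketch of the two-node OSS failover setup cliffw describes, in Heartbeat v1 style configuration. All hostnames, devices, addresses, and the STONITH plugin choice here are hypothetical illustrations, not from the thread; a real deployment would substitute its own hardware details.

```text
# /etc/ha.d/ha.cf -- two OSS nodes sharing dual-tailed storage (sketch)
node oss1 oss2
keepalive 2            # heartbeat interval, seconds
deadtime 30            # declare the peer dead after 30s of silence
bcast eth1             # dedicated heartbeat interface
auto_failback off
# STONITH: power-cycle the failed peer before taking over its storage
stonith_host oss1 wti_nps power-switch-addr password
stonith_host oss2 wti_nps power-switch-addr password

# /etc/ha.d/haresources -- oss1 normally mounts the OST on the shared disk
oss1 Filesystem::/dev/sdb1::/mnt/ost1::lustre
```

On failure, Heartbeat fences (power-cycles) the dead node before the survivor mounts the shared OST, which is what prevents the simultaneous-access corruption mentioned above.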
On Apr 06, 2006 12:39 -0700, Jeffrey W. Baker wrote:
> I recently took delivery of a number of these devices:
>
> http://www.infrant.com/products_ReadyNAS_NV.htm
>
> It's a SPARC Linux box with 512MB main memory, gigabit ethernet, and
> (in my configuration) 1.2TB of usable filesystem. I've been thinking
> these would make spiffy (and cute) Lustre OSSes for the low end. I
> have no idea if Lustre even runs on SPARC, but at $1.33/GB it would be
> a screaming deal, and could do ~100MB/sec/node.

I don't think we have ever run Lustre on a SPARC system. It may work
out of the box, since we already support 64-bit (x86_64, ia64) and
big-endian (ppc64) machines. It may not... Feedback would be welcome.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
cliff white wrote:
<-- snip a large story -->
> Hope this helps.
> cliffw

This really helped. Now I am looking for the right hardware to meet my
needs. By my current calculations the hardware will cost me EUR 2.51
per 1GB (raw storage, 1TB = 1000GB).

Lennard
With my increasing need of storage, I am looking for solutions to fill
this need. And clusterfs looks like a good solution for me :)

But I am wondering what kind of hardware setup is used on average. Is
this always a SAN (iSCSI?) with a failover OSS setup, or taking a
chance on a single-storage OSS? MDS always with failover of a single
MDS. ...

Lennard