Hi All,

We are currently looking into deploying GlusterFS in our HPC environment. The idea is to export volumes via NFS (or the native GlusterFS protocol) to the compute nodes, and via CIFS to desktops in nearby buildings.

The underlying storage will be either a DDN S2A6620 ( http://www.ddn.com/products/s2a6620 ) or a Nexsan E60 ( http://www.nexsan.com/products/e60 ). Whichever array we end up going with will have the following specs:

- 30 x 3 TB SATA drives
- 30 x 600 GB SAS (15K RPM) drives
- Dual controllers with 12 GB (DDN S2A6620) or 2 GB (Nexsan E60) of cache
- FC (DDN S2A6620) or FC/iSCSI (Nexsan E60) connectivity

The 2 servers (storage nodes) will have the following specs:

- HP ProLiant BL460c G1 (Dual Quad-Core Intel Xeon, 2833 MHz, 16 GB RAM)

The 30 clients (compute nodes) will have the following specs:

- HP ProLiant BL260c G5 (Dual Quad-Core Intel Xeon, 2833 MHz, 16 GB RAM)

All 32 of these servers are attached to a 4X DDR (16 Gbps) InfiniBand fabric. Our plan, therefore, is to present LUNs from the storage array to each of the storage nodes via FC (or iSCSI), and then export those LUNs over IB to the compute nodes.

Does this sound like a solid configuration? It may be possible to increase the number of storage nodes (to improve performance), but do you think that will be necessary?

Thanks in advance,
Mike
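In case it helps frame the question, here is a rough sketch of how we imagine setting this up. Hostnames (storage1/storage2), device paths, brick paths, and the volume name are all placeholders, and we haven't verified the exact options against our target GlusterFS version:

```shell
# On each storage node: format the FC/iSCSI LUN and mount it as a brick
# (device and brick paths are placeholders for our actual LUNs).
mkfs.xfs -i size=512 /dev/mapper/lun0
mkdir -p /bricks/brick0
mount /dev/mapper/lun0 /bricks/brick0

# From one storage node: create a 2-way replicated volume with both TCP
# (for the NFS/CIFS exports) and RDMA (for the IB-attached compute nodes).
gluster volume create hpcvol replica 2 transport tcp,rdma \
    storage1:/bricks/brick0 storage2:/bricks/brick0
gluster volume start hpcvol

# On a compute node: mount natively, requesting the RDMA transport.
mount -t glusterfs -o transport=rdma storage1:/hpcvol /mnt/hpcvol
```

Is a replica-2 layout across the two storage nodes the right call here, or would a plain distribute volume (with availability handled at the array level) make more sense?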