I'm currently evaluating glusterfs to see if it will work for our needs. I've set up a small test environment with 3 fairly fast machines and 4 RAID devices. With gigabit ethernet between the 3 systems, I get about 14%-20% of local-filesystem performance on large files and about 0.5%-17% on small files. I used bonnie++ for the benchmarking; a representative invocation is below.

Since those numbers were so low, and I wanted some idea of whether InfiniBand would help (although I don't have any InfiniBand hardware), I set everything up on one machine using unix sockets to take as much of the network out of the picture as possible. That bumps large-file performance up to about 33% of local, but small-file operations are still quite pitiful (the crude loop further down shows the kind of small-file workload I mean).

Before we get into "did you do x?" or whatever as far as tweaking goes, I really just want to find out what sort of performance is possible from glusterfs with the replication functionality (a sketch of the kind of replicate volfile I mean is at the end). If I'm not going to be able to get something reasonable, then I don't want to spend any more time trying to tweak it.

What I'm looking for is a system with complete redundancy on the file-server end (e.g. a server node or RAID can fail and operations keep going) without what I'd consider a huge performance hit. Even about 50% of the performance the disks can give locally would be acceptable, since that could be compensated for by adding more nodes and RAIDs.
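For concreteness, the bonnie++ runs were along these lines, once against the gluster mount and once against the same RAID mounted locally (the mount points, sizes, and labels here are placeholders, not my exact command lines):

    # -s is the large-file size in MB (should be at least 2x RAM),
    # -n 16 means 16*1024 small files for the create/stat/delete tests,
    # -u is required when running as root.
    bonnie++ -d /mnt/glusterfs -s 8192 -n 16 -u nobody -m gluster
    bonnie++ -d /data/raid1 -s 8192 -n 16 -u nobody -m local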
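To see the small-file problem without bonnie++ in the way, even a crude shell loop like this (placeholder paths again) makes the gap between the gluster mount and the local disk obvious; run it against each and compare the times:

    # create and then remove 10,000 tiny files
    time sh -c 'i=0; while [ $i -lt 10000 ]; do echo x > /mnt/glusterfs/f.$i; i=$((i+1)); done'
    time sh -c 'i=0; while [ $i -lt 10000 ]; do rm /mnt/glusterfs/f.$i; i=$((i+1)); done'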
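And so it's clear what I mean by "the replication functionality": client-side replication via the cluster/replicate (AFR) translator, with a client volfile shaped roughly like the sketch below. Hostnames, the volfile path, and the subvolume names are placeholders, not my actual config:

    cat > /etc/glusterfs/replicate.vol <<'EOF'
    # one protocol/client volume per server brick
    volume remote1
      type protocol/client
      option transport-type tcp
      option remote-host server1         # placeholder hostname
      option remote-subvolume brick
    end-volume

    volume remote2
      type protocol/client
      option transport-type tcp
      option remote-host server2         # placeholder hostname
      option remote-subvolume brick
    end-volume

    # replicate (AFR) across the two remotes
    volume replicate
      type cluster/replicate
      subvolumes remote1 remote2
    end-volume
    EOF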