Hi ...,

We have a requirement where both high throughput and high availability are essential. The storage will be used by animation artists and a render farm with around 300 render nodes.

--------------------------------------------------------------------------------------------------------------------------
1. When a rendering job is fired, we can expect at least 50 render nodes to simultaneously hit the storage to read a single scene (information) file. This file could be anywhere in the range of 100MB to 2GB in size.
2. Once the render is complete, each of these render nodes would write the generated image file back to the storage. The image files would be 10 - 50MB in size. Here again, we can expect most of the renders to finish almost simultaneously, usually within a few seconds of each other.
3. The 100MB - 2GB scene file will almost always be written to by a single artist, i.e. no two artists would be working on the same scene file simultaneously.
4. The 10 - 50MB image files, from different rendering activities, would then be read by another set of nodes for something called 'compositing'. Compositing gives you the final 'shot' output.
------------------------------------------------------------------------------------------------------------------------

*We are trying to cater to both large file (100MB - 2GB) read speed and small file (10 - 50MB) read+write speed.*

For this, we were thinking of having a mix of RAID 5 volumes and individual hard disks. We are looking at having 6 servers, each with a 24-disk JBOD, in a 'replicate' configuration, i.e. 3+3 servers. Out of the 24 disks in each JBOD, disks 1 - 9 would be used for a RAID 5 volume and disks 10 - 24 as individual disks.

*The RAID 5 volume would hold the 100MB - 2GB scene files, while the individual disks would hold the rendered 10 - 50MB images. Can we use the Switch Scheduler (http://gluster.org/community/documentation/index.php/Translators/cluster#Switch_Scheduler) for this?* OR is there a better approach?

Regards,
Indivar Nair
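P.S. To make the question concrete, below is roughly the volfile fragment I have in mind, going by the examples on the page linked above. The volume names are placeholders, I have abbreviated the fifteen individual-disk bricks to three, and I have not tested the switch.case syntax:

    # Sketch only -- all names are placeholders. "raid5vol" would be the
    # RAID 5 brick for scene files; "img1"-"img3" stand in for the fifteen
    # individual-disk bricks; "ns" is the unify namespace volume. All of
    # these would be defined as protocol/client volumes earlier in the file.
    volume render-unify
      type cluster/unify
      option namespace ns
      option scheduler switch
      # Send rendered image files to the individual disks, and (guessing
      # that '*' works as a catch-all case) everything else, i.e. the
      # scene files, to the RAID 5 brick.
      option switch.case *.exr:img1,img2,img3;*.tif:img1,img2,img3;*:raid5vol
      subvolumes raid5vol img1 img2 img3
    end-volume

In particular, is '*' valid as a catch-all case like that, or do files that match none of the patterns fall back to some default scheduler?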