For what it's worth, I once created a stripe volume on ram disks. After the
initial creation of the bricks, I made a copy of all the files gluster created
on them. On each reboot, those files are copied back to the ramdisk before
gluster starts, so after a reboot you effectively have an empty gluster volume
once again.
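Roughly, the setup looked like the sketch below. This is from memory rather
than a tested recipe: the volume name, mount points, and tmpfs size are
placeholders, and depending on your gluster version you may need tar's
--xattrs-include to pick up the trusted.* attributes on the bricks.

    # On each server: create the ramdisk and a brick directory on it
    mount -t tmpfs -o size=64g tmpfs /mnt/ram
    mkdir -p /mnt/ram/brick

    # Once, right after creating and starting the volume: snapshot what
    # gluster wrote to the empty brick (.glusterfs tree plus xattrs)
    gluster volume stop ramvol
    tar --xattrs --xattrs-include='*' -C /mnt/ram -czf /root/ramvol-brick.tgz brick
    gluster volume start ramvol

    # On every boot, before glusterd comes up: rebuild the empty brick
    mount -t tmpfs -o size=64g tmpfs /mnt/ram
    tar --xattrs --xattrs-include='*' -C /mnt/ram -xzf /root/ramvol-brick.tgz
    systemctl start glusterd
    gluster volume start ramvol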
The performance was really good; it maxed out the dual 10GbE links on each
server.
If you need really high IOPS to a file that may be too big for a single
machine's ramdisk, consider a stripe volume across multiple ram disks.
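Creating the stripe itself is a one-liner; again, the node names, brick paths,
and stripe count below are made up, and bear in mind the stripe translator was
later deprecated in favor of sharding:

    # Stripe files in fixed-size chunks across two ramdisk bricks
    gluster volume create ramvol stripe 2 \
        node1:/mnt/ram/brick node2:/mnt/ram/brick
    gluster volume start ramvol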
> On Apr 5, 2016, at 8:53 AM, Sean Delaney <sdelaney at cp.dias.ie> wrote:
>
> Hi all,
>
> I'm considering using my cluster's local scratch SSDs as a shared
> filesystem. I'd like to be able to start glusterfs on a few nodes (say 16),
> run an HPC job on those same nodes (reading/writing on glusterfs), copy the
> final result off to the Panasas storage, and shut down glusterfs until next
> time.
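That bring-up/tear-down cycle maps onto a handful of CLI calls. A rough
sketch, with the volume name and mount points invented for illustration:

    # Before the job: start the volume and mount it on the compute nodes
    gluster volume start scratchvol
    mount -t glusterfs node1:/scratchvol /mnt/scratch

    # After the job: drain results to permanent storage, then tear down
    cp -a /mnt/scratch/results /panfs/project/
    umount /mnt/scratch
    gluster volume stop scratchvol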
>
> I'm interested in this because my workload has shown strong performance
> on the SSDs, which I'd like to scale out a little.
>
> Ultimately, I might be interested in setting up a tiered glusterfs using
> the SSDs as the hot tier. Again, the ability to bring the filesystem up and
> down easily would be of interest.
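Tiering should fit the same pattern. In the 3.7-era CLI this was
attach-tier/detach-tier; the exact syntax has shifted between releases, so
check it against your version's docs rather than taking this sketch at face
value:

    # Attach the per-node scratch SSDs as a hot tier on an existing volume
    gluster volume attach-tier scratchvol \
        node1:/ssd/brick node2:/ssd/brick

    # Later: migrate hot files back down, then remove the tier
    gluster volume detach-tier scratchvol start
    gluster volume detach-tier scratchvol commit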
>
> Example cluster: 32 nodes, 1.5 TB SSD (xfs) per node, a separate HDD for
> the OS, and Panasas storage.
>
> Thanks