During Gluster Summit, we discussed gluster volumes as storage for VM images - feedback on the use case and upcoming features that may benefit it. Some of the points discussed:

* Need to ensure there are no issues when expanding a gluster volume when sharding is turned on (a sketch of the relevant volume options follows below).
* A throttling feature for the self-heal and rebalance processes could be useful for this use case.
* Erasure coded volumes with sharding - seen as a good fit for VM disk storage.
* Performance related:
** Accessing qemu images using the gfapi driver does not perform as well as fuse access. Need to understand why (the two access paths are shown below).
** Using zfs with a cache device, or lvmcache under an xfs filesystem, is seen to improve performance.

If you have any further inputs on this topic, please add to the thread.

thanks!
sahina
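For readers unfamiliar with sharding, here is a minimal sketch of enabling it on a volume used for VM images. The volume name "vmstore" and the 64MB shard size are assumptions for illustration; features.shard, features.shard-block-size and the "group virt" profile are standard gluster volume options.

    # Apply the recommended virt profile (tunes caching, eager locking, etc.)
    gluster volume set vmstore group virt

    # Store large VM images as fixed-size chunks instead of one big file
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB

With sharding on, only the shards touched since a brick outage need self-heal, rather than the whole multi-GB vdisk file, which is why it matters for this use case.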
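On the gfapi vs. fuse point, the two access paths being compared look like this. The hostname, volume and image names are hypothetical; the gluster:// URL is qemu's libgfapi syntax and the mount is the ordinary FUSE client.

    # gfapi: qemu-img talks to the volume directly through libgfapi
    qemu-img create -f qcow2 gluster://server1/vmstore/vm1.qcow2 20G

    # fuse: the same volume mounted through the glusterfs FUSE client
    mount -t glusterfs server1:/vmstore /mnt/vmstore
    qemu-img create -f qcow2 /mnt/vmstore/vm1.qcow2 20G

Since gfapi bypasses the kernel round-trips that FUSE incurs, it would normally be expected to be the faster path, which is why the observation above needs understanding.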
----- Original Message -----
> From: "Sahina Bose" <sabose at redhat.com>
> To: gluster-users at gluster.org
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Tuesday, October 31, 2017 11:46:57 AM
> Subject: [Gluster-users] BoF - Gluster for VM store use case
>
> During Gluster Summit, we discussed gluster volumes as storage for VM
> images - feedback on the use case and upcoming features that may benefit it.
>
> Some of the points discussed:
>
> * Need to ensure there are no issues when expanding a gluster volume when
> sharding is turned on.
> * A throttling feature for the self-heal and rebalance processes could be
> useful for this use case.
> * Erasure coded volumes with sharding - seen as a good fit for VM disk
> storage.

I am working on this with a customer; we have been able to do 400-500 MB/sec writes! Normally things max out at ~150-250. The trick is to use multiple files, create the LVM stack, and use native LVM striping (a sketch follows below). We have found that 4-6 files seems to give the best perf on our setup. I don't think we are using sharding on the EC vols, just multiple files and LVM striping. Sharding may be able to avoid the LVM striping, but I bet dollars to doughnuts you won't see this level of perf :) I am working on a blog post for RHHI and RHEV + RHS performance where in some cases I am able to get 2x+ the performance out of VMs / VM storage. I'd be happy to share my data / findings.

> * Performance related:
> ** Accessing qemu images using the gfapi driver does not perform as well
> as fuse access. Need to understand why.

+1 - I have some ideas here that I have come up with in my research. Happy to share these as well.

> ** Using zfs with a cache device, or lvmcache under an xfs filesystem, is
> seen to improve performance.

I have done some interesting stuff with customers here too, though nothing with VMs IIRC - it was more for backing up bricks without geo-rep (which was too slow for them).

-b

> If you have any further inputs on this topic, please add to the thread.
>
> thanks!
> sahina
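For anyone wanting to reproduce the striping trick Ben describes, here is a minimal sketch run inside the guest. It assumes four image files on the gluster volume have been attached to the VM as virtio disks /dev/vdb through /dev/vde; the device names, VG/LV names and the 256K stripe size are illustrative, not Ben's exact setup.

    # One physical volume per attached image file
    pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
    vgcreate vg_data /dev/vdb /dev/vdc /dev/vdd /dev/vde

    # Stripe the LV across all four PVs (-i 4) so writes fan out
    # over four separate files on the gluster volume
    lvcreate -n lv_data -i 4 -I 256K -l 100%FREE vg_data

    mkfs.xfs /dev/vg_data/lv_data
    mount /dev/vg_data/lv_data /data

The parallelism comes from gluster servicing four files at once instead of serializing on a single image file - much the same effect sharding aims for at the volume layer.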
Paul Cuzner
2017-Nov-01 03:22 UTC
[Gluster-users] [Gluster-devel] BoF - Gluster for VM store use case
Just wanted to pick up on the EC for vm storage domains option..

> > * Erasure coded volumes with sharding - seen as a good fit for VM disk
> > storage.
>
> I am working on this with a customer; we have been able to do 400-500
> MB/sec writes! Normally things max out at ~150-250. The trick is to use
> multiple files, create the LVM stack, and use native LVM striping. We have
> found that 4-6 files seems to give the best perf on our setup. I don't
> think we are using sharding on the EC vols, just multiple files and LVM
> striping. Sharding may be able to avoid the LVM striping, but I bet
> dollars to doughnuts you won't see this level of perf :) I am working on a
> blog post for RHHI and RHEV + RHS performance where in some cases I am
> able to get 2x+ the performance out of VMs / VM storage. I'd be happy to
> share my data / findings.

The main reason IIRC for sharding was to break down the vdisk image file into smaller chunks to improve self-heal efficiency. With EC the vdisk image is already split across bricks (see the example below), so do we really need sharding as well - especially given Ben's findings?
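To illustrate Paul's point: in a dispersed (erasure coded) volume every file is already encoded into fragments spread across the bricks, so a vdisk image is physically split even without sharding. A minimal sketch of creating such a volume; the hostnames, brick paths and the 4+2 geometry are assumptions for illustration.

    # 4+2 dispersed volume: each file is encoded into 6 fragments,
    # any 4 of which suffice to reconstruct the data
    gluster volume create vmstore disperse 6 redundancy 2 \
        server{1..6}:/bricks/brick1/vmstore
    gluster volume start vmstore

    # Sharding can still be layered on top if smaller heal units are wanted
    gluster volume set vmstore features.shard on

One caveat: EC splits each write into fragments, but self-heal still works file by file, so sharding can still reduce how much data is healed after an outage - which seems to be the trade-off behind Paul's question.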
Shyam Ranganathan
2017-Nov-01 11:03 UTC
[Gluster-users] [Gluster-devel] BoF - Gluster for VM store use case
On 10/31/2017 08:36 PM, Ben Turner wrote:
>> * Erasure coded volumes with sharding - seen as a good fit for VM disk
>> storage.
> I am working on this with a customer; we have been able to do 400-500
> MB/sec writes! Normally things max out at ~150-250. The trick is to use
> multiple files, create the LVM stack, and use native LVM striping. We have
> found that 4-6 files seems to give the best perf on our setup. I don't
> think we are using sharding on the EC vols, just multiple files and LVM
> striping. Sharding may be able to avoid the LVM striping, but I bet
> dollars to doughnuts you won't see this level of perf :) I am working on a
> blog post for RHHI and RHEV + RHS performance where in some cases I am
> able to get 2x+ the performance out of VMs / VM storage. I'd be happy to
> share my data / findings.

Ben, we would like to hear more, so please do share your thoughts further. There are a fair number of users in the community who have this use case and may have some interesting questions around the proposed method.

Shyam