On 01/09/2016 01:44 AM, Kyle Harris wrote:
> It's been a while since I last ran GlusterFS so I thought I might give
> it another try here at home in my lab. I am using the 3.7 branch on 2
> systems with a 3rd being an arbiter node. Much like the last time I
> tried GlusterFS, I keep running into issues with the glusterfsd
> process eating up so many resources that the systems sometimes become
> all but unusable. A quick Google search tells me I am not the only
> one to run into this issue but I have yet to find a cure. The last
> time I ran GlusterFS, it was to host web sites and I just chalked the
> problem up to a large number of small files. This time, I am using it
> to host VMs and there are only 7 of them, and while they are running,
> they are not doing anything else.

The performance improvements for self-heal are still a
(stalled_at_the_moment)-work-in-progress. But for VM use cases, you can
turn on sharding [1], which will drastically reduce data self-heal time.
Why don't you give it a spin on your lab setup and let us know how it
goes? You might have to create the VMs again though, since only the files
that are created after enabling the feature will be sharded.

-Ravi

[1] http://blog.gluster.org/2015/12/introducing-shard-translator/
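In case it helps, sharding is toggled through the normal volume-set
interface. A minimal sketch follows; the volume name "gv0" and the 512MB
block size are placeholders only, so check [1] and the docs for the values
appropriate to your setup:

    # Enable the shard translator on the volume
    # (only files created after this point will be sharded)
    gluster volume set gv0 features.shard on

    # Optionally change the shard size; 64MB is the default, and larger
    # values are often suggested for big VM images (512MB is just an example)
    gluster volume set gv0 features.shard-block-size 512MB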
Krutika Dhananjay
2016-Jan-09 02:12 UTC
[Gluster-users] High I/O And Processor Utilization
----- Original Message -----
> From: "Ravishankar N" <ravishankar at redhat.com>
> To: "Kyle Harris" <kyle.harris98 at gmail.com>, gluster-users at gluster.org
> Sent: Saturday, January 9, 2016 7:06:04 AM
> Subject: Re: [Gluster-users] High I/O And Processor Utilization
>
> On 01/09/2016 01:44 AM, Kyle Harris wrote:
> > [...]
> > This time, I am using it to host VMs and there are only 7 of them,
> > and while they are running, they are not doing anything else.
>
> The performance improvements for self-heal are still a
> (stalled_at_the_moment)-work-in-progress. But for VM use cases, you can
> turn on sharding [1], which will drastically reduce data self-heal time.
> Why don't you give it a spin on your lab setup and let us know how it
> goes? You might have to create the VMs again though, since only the files
> that are created after enabling the feature will be sharded.
>
> -Ravi
>
> [1] http://blog.gluster.org/2015/12/introducing-shard-translator/

Kyle,

I would recommend using glusterfs-3.7.6 if you intend to try sharding,
because it contains some crucial bug fixes.

-Krutika
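For anyone following along, it is worth confirming which release is
actually installed before turning sharding on. A minimal sketch (upgrade
steps vary by distribution, so only the version checks are shown):

    # Report the installed GlusterFS version on clients and servers;
    # it should be 3.7.6 or later before enabling sharding
    glusterfs --version
    gluster --version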
Lindsay Mathieson
2016-Jan-09 22:53 UTC
[Gluster-users] High I/O And Processor Utilization
On 9/01/2016 11:36 AM, Ravishankar N wrote:
> The performance improvements for self-heal are still a
> (stalled_at_the_moment)-work-in-progress. But for VM use cases, you
> can turn on sharding [1], which will drastically reduce data self-heal
> time. Why don't you give it a spin on your lab setup and let us know
> how it goes? You might have to create the VMs again though since only
> the files that are created after enabling the feature will be sharded.

I rather thought that the high I/O Kyle was seeing was due to continual
heals across his VMs, which is not normal behavior. Possibly that was due
to the network/firewall issues he described earlier. If those aren't
resolved, the problem will probably continue, will it not? Just with a
finer granularity ;)

--
Lindsay Mathieson
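As a rough way to check whether heals really are running continually, the
self-heal state can be polled from any node. A minimal sketch ("gv0" is
again a placeholder volume name, and the exact subcommands available
depend on the installed version):

    # List files currently pending heal on each brick; a list that stays
    # non-empty while all bricks are up suggests files are being dirtied
    # repeatedly rather than healing once and settling
    gluster volume heal gv0 info

    # Per-brick count of entries still needing heal
    gluster volume heal gv0 statistics heal-count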