I can make the change to sharding and then export/import the VMs to give it a
try. So just to be clear, I am using v3.7.6-1. Is that sufficient? I would
rather not have to compile from source and would probably wait for the next
rpms if that is needed.

Also, given the output below, what would you recommend I use for the shard
block size, and furthermore, how do you determine this?

-rw-r--r-- 1 root root  53G Jan  9 09:34 03070877-9cf4-4d55-a66c-fbd3538eedb9.vhd
-rw-r--r-- 1 root root 2.1M Jan  8 12:27 0b16f938-e859-41e3-bb33-fefba749a578.vhd
-rw-r--r-- 1 root root 1.6G Jan  7 16:39 3d77b504-3109-4c34-a803-e9236e35d8bf.vhd
-rw-r--r-- 1 root root 497M Jan  7 17:27 715ddb6c-67af-4047-9fa0-728019b49d63.vhd
-rw-r--r-- 1 root root 341M Jan  7 16:17 72a33878-59f7-4f6e-b3e1-e137aeb19ced.vhd
-rw-r--r-- 1 root root 2.1G Jan  9 09:34 7b7c8d8a-d223-4a47-bd35-8d72ee6927b9.vhd
-rw-r--r-- 1 root root 8.1M Dec 28 11:07 8b49029c-7e55-4569-bb73-88c3360d6a0c.vhd
-rw-r--r-- 1 root root 2.2G Jan  8 12:25 8c524ed9-e382-40cd-9361-60c23a2c1ae2.vhd
-rw-r--r-- 1 root root 3.2G Jan  9 09:34 930196aa-0b85-4482-97ab-3d05e9928884.vhd
-rw-r--r-- 1 root root 2.0G Jan  8 12:27 940ee016-8288-4369-9fb8-9c64cb3af256.vhd
-rw-r--r-- 1 root root  12G Jan  9 09:34 b0cdf43c-7e6b-44bf-ab2d-efb14e9d2156.vhd
-rw-r--r-- 1 root root 6.8G Jan  7 16:39 b803f735-cf7f-4568-be83-aedd746f6cec.vhd
-rw-r--r-- 1 root root 2.1G Jan  9 09:34 be18622b-042a-48cb-ab94-51541ffe24eb.vhd
-rw-r--r-- 1 root root 2.6G Jan  9 09:34 c2645723-efd9-474b-8cce-fe07ac9fbba9.vhd
-rw-r--r-- 1 root root 2.1G Jan  9 09:34 d2873b74-f6be-43a9-bdf1-276761e3e228.vhd
-rw-r--r-- 1 root root 1.4G Jan  7 17:27 db881623-490d-4fd8-8f12-9c82eea3c53c.vhd
-rw-r--r-- 1 root root 2.1M Jan  8 12:33 eb21c443-6381-4a25-ac7c-f53a82289f10.vhd
-rw-r--r-- 1 root root  13G Jan  7 16:39 f6b9cfba-09ba-478d-b8e0-543dd631e275.vhd

Thanks again.

On Fri, Jan 8, 2016 at 8:34 PM, Ravishankar N <ravishankar at redhat.com> wrote:

> On 01/09/2016 07:42 AM, Krutika Dhananjay wrote:
>
> From: "Ravishankar N" <ravishankar at redhat.com>
> To: "Kyle Harris" <kyle.harris98 at gmail.com>, gluster-users at gluster.org
> Sent: Saturday, January 9, 2016 7:06:04 AM
> Subject: Re: [Gluster-users] High I/O And Processor Utilization
>
> On 01/09/2016 01:44 AM, Kyle Harris wrote:
>
> It's been a while since I last ran GlusterFS so I thought I might give it
> another try here at home in my lab. I am using the 3.7 branch on 2 systems
> with a 3rd being an arbiter node. Much like the last time I tried
> GlusterFS, I keep running into issues with the glusterfsd process eating up
> so many resources that the systems sometimes become all but unusable. A
> quick Google search tells me I am not the only one to run into this issue
> but I have yet to find a cure. The last time I ran GlusterFS, it was to
> host web sites and I just chalked the problem up to a large number of small
> files. This time, I am using it to host VMs and there are only 7 of them,
> and while they are running, they are not doing anything else.
>
> The performance improvements for self-heal are still a
> (stalled_at_the_moment)-work-in-progress. But for VM use cases, you can
> turn on sharding [1], which will drastically reduce data self-heal time.
> Why don't you give it a spin on your lab setup and let us know how it goes?
> You might have to create the VMs again though, since only the files that
> are created after enabling the feature will be sharded.
>
> -Ravi
>
> [1] http://blog.gluster.org/2015/12/introducing-shard-translator/
>
> Kyle,
> I would recommend you to use glusterfs-3.7.6 if you intend to try
> sharding, because it contains some crucial bug fixes.
>
> If you're trying arbiter, it would be good if you can compile the 3.7
> branch and use it, since it has an important fix
> (http://review.gluster.org/#/c/12479/) that will only make it to
> glusterfs-3.7.7. That way you'd get this fix and the sharding ones too
> right away.
>
> -Krutika
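As a minimal sketch of what "turning on sharding" looks like from the CLI,
assuming a hypothetical volume name of gv0 (substitute your own); as Ravi
notes above, only files created after the option is enabled are sharded, so
the VM images would need to be exported and re-imported afterwards:

gluster volume set gv0 features.shard on
gluster volume info gv0 | grep shard    # should now list features.shard: on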
Lindsay Mathieson
2016-Jan-09 23:11 UTC
[Gluster-users] High I/O And Processor Utilization
On 10/01/2016 1:44 AM, Kyle Harris wrote:

> I can make the change to sharding and then export/import the VMs to
> give it a try. So just to be clear, I am using v3.7.6-1. Is that
> sufficient? I would rather not have to compile from source and would
> probably wait for the next rpms if that is needed.

Speaking not as a dev (I'm not), but as a tester/user: 3.7.6 will do for
testing and usage, that's what I am on. Write performance could be better,
and I believe there are some fixes for that due in 3.7.7.

> Also, given the output below, what would you recommend I use for the
> shard block size and furthermore, how do you determine this?

features.shard: on
features.shard-block-size: <size>

Where size takes standard unit sizes, e.g. 64M, 1G, etc. The default is 4M.

Shard size also has some interesting implications for the upcoming SSD tier
volumes (3.8). They are available in 3.7, if you are ok with regular
breakages :)

I bench tested a lot and didn't find all that much difference in results
across shard sizes. These are my current settings:

Volume Name: datastore1
Type: Replicate
Volume ID: 1261175d-64e1-48b1-9158-c32802cc09f0
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/vmdata/datastore1
Brick2: vng.proxmox.softlog:/vmdata/datastore1
Brick3: vna.proxmox.softlog:/vmdata/datastore1
Options Reconfigured:
features.shard: on
features.shard-block-size: 64MB
cluster.self-heal-window-size: 256
server.event-threads: 4
client.event-threads: 4
cluster.quorum-type: auto
cluster.server-quorum-type: server
performance.io-thread-count: 32
performance.cache-refresh-timeout: 4
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off
performance.write-behind: on
performance.strict-write-ordering: on
performance.stat-prefetch: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
network.remote-dio: enable
performance.readdir-ahead: on
performance.write-behind-window-size: 256MB
performance.cache-size: 256MB

--
Lindsay Mathieson
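For completeness, a sketch of how those two options are typically set from
the command line, using the datastore1 volume from the info output above;
the size string follows the same unit convention (e.g. 64MB, 512MB):

gluster volume set datastore1 features.shard on
gluster volume set datastore1 features.shard-block-size 64MB
gluster volume info datastore1 | grep shard   # verify both options took effect

As noted earlier in the thread, only files created after these options are in
place will actually be sharded.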
Krutika Dhananjay
2016-Jan-11 05:37 UTC
[Gluster-users] High I/O And Processor Utilization
----- Original Message -----
> From: "Kyle Harris" <kyle.harris98 at gmail.com>
> To: "Ravishankar N" <ravishankar at redhat.com>, gluster-users at gluster.org
> Sent: Saturday, January 9, 2016 9:14:36 PM
> Subject: Re: [Gluster-users] High I/O And Processor Utilization
>
> I can make the change to sharding and then export/import the VMs to give it
> a try. So just to be clear, I am using v3.7.6-1. Is that sufficient? I would
> rather not have to compile from source and would probably wait for the next
> rpms if that is needed.
>
> Also, given the output below, what would you recommend I use for the shard
> block size and furthermore, how do you determine this?
>
> -rw-r--r-- 1 root root  53G Jan  9 09:34 03070877-9cf4-4d55-a66c-fbd3538eedb9.vhd
> -rw-r--r-- 1 root root 2.1M Jan  8 12:27 0b16f938-e859-41e3-bb33-fefba749a578.vhd
> -rw-r--r-- 1 root root 1.6G Jan  7 16:39 3d77b504-3109-4c34-a803-e9236e35d8bf.vhd
> -rw-r--r-- 1 root root 497M Jan  7 17:27 715ddb6c-67af-4047-9fa0-728019b49d63.vhd
> -rw-r--r-- 1 root root 341M Jan  7 16:17 72a33878-59f7-4f6e-b3e1-e137aeb19ced.vhd
> -rw-r--r-- 1 root root 2.1G Jan  9 09:34 7b7c8d8a-d223-4a47-bd35-8d72ee6927b9.vhd
> -rw-r--r-- 1 root root 8.1M Dec 28 11:07 8b49029c-7e55-4569-bb73-88c3360d6a0c.vhd
> -rw-r--r-- 1 root root 2.2G Jan  8 12:25 8c524ed9-e382-40cd-9361-60c23a2c1ae2.vhd
> -rw-r--r-- 1 root root 3.2G Jan  9 09:34 930196aa-0b85-4482-97ab-3d05e9928884.vhd
> -rw-r--r-- 1 root root 2.0G Jan  8 12:27 940ee016-8288-4369-9fb8-9c64cb3af256.vhd
> -rw-r--r-- 1 root root  12G Jan  9 09:34 b0cdf43c-7e6b-44bf-ab2d-efb14e9d2156.vhd
> -rw-r--r-- 1 root root 6.8G Jan  7 16:39 b803f735-cf7f-4568-be83-aedd746f6cec.vhd
> -rw-r--r-- 1 root root 2.1G Jan  9 09:34 be18622b-042a-48cb-ab94-51541ffe24eb.vhd
> -rw-r--r-- 1 root root 2.6G Jan  9 09:34 c2645723-efd9-474b-8cce-fe07ac9fbba9.vhd
> -rw-r--r-- 1 root root 2.1G Jan  9 09:34 d2873b74-f6be-43a9-bdf1-276761e3e228.vhd
> -rw-r--r-- 1 root root 1.4G Jan  7 17:27 db881623-490d-4fd8-8f12-9c82eea3c53c.vhd
> -rw-r--r-- 1 root root 2.1M Jan  8 12:33 eb21c443-6381-4a25-ac7c-f53a82289f10.vhd
> -rw-r--r-- 1 root root  13G Jan  7 16:39 f6b9cfba-09ba-478d-b8e0-543dd631e275.vhd
>
> Thanks again.

Kyle,

Based on the testing we have done from our end, we've found that 512MB is a
good number that is neither too big nor too small, and provides good
performance both on the I/O side and with respect to self-heal.

-Krutika
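Following that recommendation, a minimal sketch of applying and checking the
setting; the volume name is a placeholder, and `gluster volume get` should be
available on 3.7.x builds (otherwise `gluster volume info` shows the same
value under Options Reconfigured):

gluster volume set <volname> features.shard-block-size 512MB
gluster volume get <volname> features.shard-block-size   # confirm the value

As noted earlier in the thread, only files created after the settings are in
place are sharded accordingly, so the VMs would need to be exported and
re-imported to pick this up.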