Hi Ben,

Thanks for the info.

Cheers,

Ron

On 29/04/15 21:03, Ben Turner wrote:
> ----- Original Message -----
>> From: "Ron Trompert" <ron.trompert at surfsara.nl>
>> To: gluster-users at gluster.org
>> Sent: Wednesday, April 29, 2015 1:25:59 PM
>> Subject: [Gluster-users] Poor performance with small files
>>
>> Hi,
>>
>> We run gluster as the storage solution for our Owncloud-based sync and
>> share service. At the moment we have about 30 million files in the
>> system, which add up to a little more than 30 TB. As you might expect,
>> most of these files are very small, i.e. in the 100 KB ballpark. For
>> about a year everything ran perfectly fine. We run 3.6.2, by the way.
>
> Upgrade to 3.6.3 and set client.event-threads and server.event-threads
> to at least 4:
>
> "Previously, the epoll thread did socket event handling, and the same
> thread was used for serving the client or processing the response
> received from the server. Because of this, other requests were queued
> until the current epoll thread completed its operation. With
> multi-threaded epoll, events are distributed, which improves performance
> due to the parallel processing of requests/responses received."
>
> Here are the guidelines for tuning them:
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Small_File_Performance_Enhancements.html
>
> In my testing with epoll threads set to 4, I saw between a 15% and 50%
> increase, depending on the workload.
>
> There are several small-file performance enhancements in the works:
>
> * http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
>
> * Lookup-unhashed is the next feature and should be ready with 3.7
>   (correct me if I am wrong).
>
> * If you are using RAID 6, you may want to do some testing with RAID 10
>   or JBOD, but the benefits here only come into play with a lot of
>   concurrent access (30+ processes/threads working with different files).
>
> * Tiering may help here if you want to add some SSDs; this is also a
>   3.7 feature.
>
> HTH!
>
> -b
>
>>
>> Now we are trying to commission new hardware. We have done this by
>> adding the new nodes to our cluster and using the add-brick and
>> remove-brick procedure to get the data onto the new nodes. In a week we
>> have migrated only 8.5 TB this way. What are we doing wrong here? Is
>> there a way to improve gluster's performance on small files?
>>
>> I have another question. If you want to set up a gluster volume that
>> will contain lots of very small files, what would be good practice in
>> terms of configuration, size of bricks relative to memory and number of
>> cores, number of bricks per node, etc.?
>>
>>
>> Best regards and thanks in advance,
>>
>> Ron
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
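To make Ben's suggestion concrete, the tuning would be applied with the gluster CLI roughly as follows (a sketch only; "myvol" is a placeholder volume name, and the options are only accepted by releases that ship multi-threaded epoll):

    # 'myvol' is a placeholder volume name
    # raise the number of epoll threads on the client and server side
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4

    # confirm the values under "Options Reconfigured"
    gluster volume info myvol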
> Upgrade to 3.6.3 and set client.event-threads and server.event-threads
> to at least 4:

Hi,

I'm on 3.6.3 and these options are not available:

volume set: failed: option : client.event-threads does not exist
volume set: failed: option : server.event-threads does not exist

Any ideas?

Alex
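A quick way to see whether a given build recognises these options is to search the CLI's list of settable options (a sketch, not taken from the thread):

    # list all volume options known to this glusterfs build
    gluster volume set help | grep -i event-thread

If the grep prints nothing, the installed release does not carry the multi-threaded epoll options.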
----- Original Message -----
> From: "Ron Trompert" <ron.trompert at surfsara.nl>
> To: "Ben Turner" <bturner at redhat.com>
> Cc: gluster-users at gluster.org
> Sent: Thursday, April 30, 2015 1:25:42 AM
> Subject: Re: [Gluster-users] Poor performance with small files
>
> Hi Ben,
>
> Thanks for the info.

My apologies, Ron. I just found out that MT epoll did not land in 3.6, and it won't be available until 3.7. If you want to try the alpha bits, they are available at:

http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-x86_64/

These are alpha builds and should only be used for testing, but if you want to get a feel for what is in the pipeline, they are there for you to try.

-b

> Cheers,
>
> Ron
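For reference, the add-brick/remove-brick migration Ron describes earlier in the thread typically follows a pattern like the following (a sketch; the volume name, host names, and brick paths are placeholders, and replicated volumes need bricks added and removed in whole replica sets):

    # volume name, hosts, and brick paths below are placeholders
    # add bricks on the new nodes
    gluster volume add-brick myvol newnode1:/bricks/b1 newnode2:/bricks/b1

    # start draining data off the old bricks
    gluster volume remove-brick myvol oldnode1:/bricks/b1 oldnode2:/bricks/b1 start

    # watch progress, then commit once the rebalance has completed
    gluster volume remove-brick myvol oldnode1:/bricks/b1 oldnode2:/bricks/b1 status
    gluster volume remove-brick myvol oldnode1:/bricks/b1 oldnode2:/bricks/b1 commit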