Pranith Kumar Karampuri
2017-May-05 11:49 UTC
[Gluster-users] disperse volume brick counts limits in RHES
On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> It is the overall time; an 8TB data disk healed 2x faster in the 8+2
> configuration.

Wow, that is counter-intuitive for me. I will need to explore this to find
out why that could be. Thanks a lot for this feedback!

> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
> <pkarampu at redhat.com> wrote:
> >
> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >>
> >> Healing gets slower as you increase m in an m+n configuration.
> >> We are using a 16+4 configuration without any problems other than heal
> >> speed.
> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on
> >> 8+2 are faster by 2x.
> >
> > As you increase the number of nodes that are participating in an EC set,
> > the number of parallel heals increases. Is the heal speed you saw
> > improved per file, or the overall time it took to heal the data?
> >
> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey <aspandey at redhat.com> wrote:
> >> >
> >> > 8+2 and 8+3 configurations are not a limitation, just suggestions.
> >> > You can create a 16+3 volume without any issue.
> >> >
> >> > Ashish
> >> >
> >> > ________________________________
> >> > From: "Alastair Neil" <ajneil.tech at gmail.com>
> >> > To: "gluster-users" <gluster-users at gluster.org>
> >> > Sent: Friday, May 5, 2017 2:23:32 AM
> >> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
> >> >
> >> > Hi,
> >> >
> >> > We are deploying a large (24-node/45-brick) cluster and noted that the
> >> > RHES guidelines limit the number of data bricks in a disperse set to 8.
> >> > Is there any reason for this? I am aware that you want this to be a
> >> > power of 2, but as we have a large number of nodes we were planning on
> >> > going with 16+3. Dropping to 8+2 or 8+3 will be a real waste for us.
> >> >
> >> > Thanks,
> >> >
> >> > Alastair
> >
> > --
> > Pranith

--
Pranith
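To put rough numbers on the capacity trade-off Alastair mentions, here is a
minimal arithmetic sketch (plain Python, no Gluster involvement; the
describe helper and the geometry list are purely illustrative):

    # Back-of-the-envelope comparison of the disperse geometries discussed
    # in this thread. Pure arithmetic; it does not model heal speed or
    # query Gluster in any way.

    def describe(data_bricks: int, redundancy_bricks: int) -> str:
        total = data_bricks + redundancy_bricks
        usable = data_bricks / total                 # fraction of raw space usable for data
        overhead = redundancy_bricks / data_bricks   # extra raw space spent per unit of data
        return (f"{data_bricks}+{redundancy_bricks}: {total} bricks per set, "
                f"{usable:.1%} usable, {overhead:.1%} overhead, "
                f"survives {redundancy_bricks} brick failures per set")

    for k, r in [(8, 2), (8, 3), (16, 3), (16, 4)]:
        print(describe(k, r))

A 16+3 set keeps roughly 84% of the raw capacity usable versus about 73%
for 8+3, which is the gap behind the "real waste" remark.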
Pranith Kumar Karampuri
2017-May-05 11:54 UTC
[Gluster-users] disperse volume brick counts limits in RHES
On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri
<pkarampu at redhat.com> wrote:
> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
>> It is the overall time; an 8TB data disk healed 2x faster in the 8+2
>> configuration.
>
> Wow, that is counter-intuitive for me. I will need to explore this to find
> out why that could be. Thanks a lot for this feedback!

If I remember correctly, you said you have a lot of small files hosted on
the volume, right? It could be because of the bug that
https://review.gluster.org/17151 is fixing. That is the only reason I could
guess right now. We will try to test this kind of case if you could give us
a bit more detail about the average file size, depth of directories, etc.,
so that we can simulate a similar-looking directory structure.

--
Pranith
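For the kind of details being asked for here, a small generator script can
produce a directory tree with a chosen depth and file size for heal
testing. This is only an illustrative sketch; the make_tree function, the
/tmp/ec-heal-test path and all the numbers are placeholders, not values
from the thread:

    # Hypothetical helper: builds a synthetic directory tree so a heal test
    # can be run against data that roughly matches a reported workload
    # (many small files, a given directory depth). All defaults below are
    # made-up placeholders.
    import os

    def make_tree(root: str, depth: int, dirs_per_level: int,
                  files_per_dir: int, file_size_bytes: int) -> None:
        """Create a tree `depth` levels deep with `dirs_per_level` subdirs
        per directory, each holding `files_per_dir` files of
        `file_size_bytes` bytes."""
        payload = os.urandom(file_size_bytes)
        levels = [root]
        for _ in range(depth):
            next_levels = []
            for parent in levels:
                for d in range(dirs_per_level):
                    path = os.path.join(parent, f"d{d}")
                    os.makedirs(path, exist_ok=True)
                    for f in range(files_per_dir):
                        with open(os.path.join(path, f"f{f}"), "wb") as fh:
                            fh.write(payload)
                    next_levels.append(path)
            levels = next_levels

    if __name__ == "__main__":
        # e.g. 3 levels deep, 4 subdirs per level, 20 files of 64 KiB each
        make_tree("/tmp/ec-heal-test", depth=3, dirs_per_level=4,
                  files_per_dir=20, file_size_bytes=64 * 1024)

Pointing the root at a mounted disperse volume instead of /tmp would give a
data set whose heal times can be compared across 8+2 and 16+4 layouts.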
Xavier Hernandez
2017-May-08 07:02 UTC
[Gluster-users] disperse volume brick counts limits in RHES
On 05/05/17 13:49, Pranith Kumar Karampuri wrote:
> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
>
>     It is the overall time; an 8TB data disk healed 2x faster in the 8+2
>     configuration.
>
> Wow, that is counter-intuitive for me. I will need to explore this to find
> out why that could be. Thanks a lot for this feedback!

Matrix multiplication for encoding/decoding on 8+2 is 4 times faster than
on 16+4 (one 16x16 matrix is composed of 4 submatrices of 8x8); however,
each matrix operation on a 16+4 configuration processes twice the amount of
data of an 8+2, so the net effect is that 8+2 is twice as fast as 16+4.

An 8+2 configuration also uses bigger blocks on each brick, processing the
same amount of data in fewer I/O operations and bigger network packets.

These are probably the reasons why 16+4 is slower than 8+2. See my other
email for a more detailed description.

Xavi
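The arithmetic in Xavi's explanation can be written down in a few lines.
The sketch below only restates that reasoning (matrix work grows with the
square of the data-fragment count, while the data covered per operation
grows linearly); it is a model of the argument, not the actual EC code
path:

    # Back-of-the-envelope model of the argument above, not the real EC
    # implementation: one encode/decode step on a k+r set multiplies a
    # matrix whose size grows with k, so its cost scales ~k^2, while the
    # user data it covers scales ~k.

    def relative_cost_per_byte(k: int) -> float:
        matrix_work = k * k   # a 16x16 matrix is 4 submatrices of 8x8
        data_covered = k      # a stripe of k fragments carries k units of data
        return matrix_work / data_covered   # simplifies to k

    cost_8 = relative_cost_per_byte(8)
    cost_16 = relative_cost_per_byte(16)
    print(f"encode/decode cost per byte, 16+4 vs 8+2: {cost_16 / cost_8:.1f}x")  # 2.0x

    # Fragment size per brick for a given client I/O, e.g. a 128 KiB write:
    # the larger the data-brick count, the smaller each brick's block.
    io = 128 * 1024
    print(f"8+2 fragment per brick: {io // 8} bytes, 16+4: {io // 16} bytes")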