Hi,
Any progress on the patch?
On Thu, Aug 4, 2016 at 10:16 AM, Pranith Kumar Karampuri
<pkarampu at redhat.com> wrote:
>
> On Thu, Aug 4, 2016 at 11:30 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
>>
>> Thanks Pranith,
>> I am waiting for the RPMs to show up; I will do the tests as soon as
>> possible and inform you.
>
>
> I guess on 3.7.x the RPMs are not automatically built. Let me find out how
> it can be done. I will inform you once I know. Give me a day.
>
>>
>>
>> On Wed, Aug 3, 2016 at 11:19 PM, Pranith Kumar Karampuri
>> <pkarampu at redhat.com> wrote:
>> >
>> >
>> > On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri
>> > <pkarampu at redhat.com> wrote:
>> >>
>> >>
>> >>
>> >> On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban <cobanserkan at gmail.com>
>> >> wrote:
>> >>>
>> >>> I use rpms for installation. Redhat/Centos 6.8.
>> >>
>> >>
>> >> http://review.gluster.org/#/c/15084 is the patch. The RPMs will
>> >> actually be built in some time.
>> >
>> >
>> > At the same URL above, the RPMs for fedora/el6/el7 will actually be
>> > posted at the end of the page.
>> >
>> >>
>> >>
>> >> Use: gluster volume set <volname> disperse.shd-max-threads
>> >> <num-threads (range: 1-64)>
>> >>
>> >> While testing this I thought of ways to decrease the number of crawls
>> >> as well, but they are a bit involved. Try creating the same set of data
>> >> and see how long heals take to complete as you increase the number of
>> >> parallel heal threads from 1 to 64.
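The tuning-and-timing experiment suggested above could be sketched roughly as
follows. This is only a sketch: the volume name "testvol" is a placeholder,
and the polling assumes the usual "Number of entries:" lines printed by
`gluster volume heal <vol> info`.

```shell
#!/bin/sh
# Time full-heal completion as disperse.shd-max-threads grows from 1 to 64.
# "testvol" is a hypothetical volume name; run after recreating the same
# damaged data set for each round so the runs are comparable.
for threads in 1 2 4 8 16 32 64; do
    gluster volume set testvol disperse.shd-max-threads "$threads"
    start=$(date +%s)
    gluster volume heal testvol full
    # Poll until heal info reports no pending entries on any brick.
    while gluster volume heal testvol info | grep -q 'Number of entries: [1-9]'; do
        sleep 10
    done
    echo "threads=$threads heal_seconds=$(( $(date +%s) - start ))"
done
```

The per-round output makes it easy to see where adding heal threads stops
paying off on a given set of disks.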
>> >>
>> >>>
>> >>> On Wed, Aug 3, 2016 at 10:16 PM, Pranith Kumar Karampuri
>> >>> <pkarampu at redhat.com> wrote:
>> >>> >
>> >>> >
>> >>> > On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban
>> >>> > <cobanserkan at gmail.com> wrote:
>> >>> >>
>> >>> >> I prefer 3.7 if it is ok for you. Can you also provide build
>> >>> >> instructions?
>> >>> >
>> >>> >
>> >>> > 3.7 should be fine. Do you use rpms/debs/anything-else?
>> >>> >
>> >>> >>
>> >>> >>
>> >>> >> On Wed, Aug 3, 2016 at 10:12 PM, Pranith Kumar Karampuri
>> >>> >> <pkarampu at redhat.com> wrote:
>> >>> >> >
>> >>> >> >
>> >>> >> > On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban
>> >>> >> > <cobanserkan at gmail.com> wrote:
>> >>> >> >>
>> >>> >> >> Yes, but I can create a 2+1 (or 8+2) EC volume using two
>> >>> >> >> servers, right? I have 26 disks on each server.
>> >>> >> >
>> >>> >> >
>> >>> >> > On which release branch do you want the patch? I am testing it
>> >>> >> > on the master branch now.
>> >>> >> >
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> On Wed, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karampuri
>> >>> >> >> <pkarampu at redhat.com> wrote:
>> >>> >> >> >
>> >>> >> >> >
>> >>> >> >> > On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban
>> >>> >> >> > <cobanserkan at gmail.com> wrote:
>> >>> >> >> >>
>> >>> >> >> >> I have two of my storage servers free; I think I can use
>> >>> >> >> >> them for testing. Is a two-server test environment ok for you?
>> >>> >> >> >
>> >>> >> >> >
>> >>> >> >> > I think it would be better if you have at least 3. You can
>> >>> >> >> > test it with a 2+1 EC configuration.
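A minimal sketch of the 2+1 test setup being discussed, assuming three
servers and one brick each; the hostnames, brick paths, and volume name are
placeholders, not from the original thread:

```shell
# Hypothetical 2+1 dispersed test volume: 2 data bricks + 1 redundancy brick,
# one brick per server. Adjust hostnames and brick paths for your cluster.
gluster volume create testvol disperse-data 2 redundancy 1 \
    server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1
gluster volume start testvol
# Confirm the layout: output should show "Type: Disperse" with 1 x (2 + 1) bricks.
gluster volume info testvol
```

With 26 disks per server, the same pattern extends to many bricks per host,
though an 8+2 layout would need bricks spread across more servers to survive
a whole-server failure.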
>> >>> >> >> >
>> >>> >> >> >>
>> >>> >> >> >>
>> >>> >> >> >> On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri
>> >>> >> >> >> <pkarampu at redhat.com> wrote:
>> >>> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban
>> >>> >> >> >> > <cobanserkan at gmail.com> wrote:
>> >>> >> >> >> >>
>> >>> >> >> >> >> Hi,
>> >>> >> >> >> >>
>> >>> >> >> >> >> May I ask if multi-threaded self-heal for distributed
>> >>> >> >> >> >> disperse volumes is implemented in this release?
>> >>> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> > Serkan,
>> >>> >> >> >> >       At the moment I am a bit busy with different work. Is
>> >>> >> >> >> > it possible for you to help test the feature if I provide a
>> >>> >> >> >> > patch? The patch itself should be small; testing is where
>> >>> >> >> >> > most of the time will be spent.
>> >>> >> >> >> >
>> >>> >> >> >> >>
>> >>> >> >> >> >>
>> >>> >> >> >> >> Thanks,
>> >>> >> >> >> >> Serkan
>> >>> >> >> >> >>
>> >>> >> >> >> >> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage
>> >>> >> >> >> >> <dgossage at carouselchecks.com> wrote:
>> >>> >> >> >> >> > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson
>> >>> >> >> >> >> > <lindsay.mathieson at gmail.com> wrote:
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> On 2/08/2016 5:07 PM, Kaushal M wrote:
>> >>> >> >> >> >> >>>
>> >>> >> >> >> >> >>>
>> >>> >> >> >> >> >>> GlusterFS-3.7.14 has been released. This is a regular
>> >>> >> >> >> >> >>> minor release. The release notes are available at
>> >>> >> >> >> >> >>>
>> >>> >> >> >> >> >>> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> Thanks Kaushal, I'll check it out
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >
>> >>> >> >> >> >> > So far on my test box it's working as expected. At least
>> >>> >> >> >> >> > the issues that prevented it from running as before have
>> >>> >> >> >> >> > disappeared. Will need to see how my test VM behaves
>> >>> >> >> >> >> > after a few days.
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >> --
>> >>> >> >> >> >> >> Lindsay Mathieson
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> _______________________________________________
>> >>> >> >> >> >> >> Gluster-users mailing list
>> >>> >> >> >> >> >> Gluster-users at gluster.org
>> >>> >> >> >> >> >> http://www.gluster.org/mailman/listinfo/gluster-users
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> > --
>> >>> >> >> >> > Pranith