Results similar to: "glfsheal-v0.log Too many open files"

Displaying 20 results from an estimated 2000 matches similar to: "glfsheal-v0.log Too many open files"

2017 Aug 29
2
Glusterd process hangs on reboot
Here are the logs after stopping all three volumes and restarting glusterd on all nodes. I waited 70 minutes after the glusterd restart but it is still consuming 100% CPU. https://www.dropbox.com/s/pzl0f198v03twx3/80servers_after_glusterd_restart.zip?dl=0 On Tue, Aug 29, 2017 at 12:37 PM, Gaurav Yadav <gyadav at redhat.com> wrote: > > I believe the logs you have shared consist of
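
A minimal sketch of the reproduction steps described above, for reference; the volume names (v0, v1, v2) and the CentOS 6 service command are assumptions, not details from the thread:

    # Stop all three volumes, then restart glusterd on every node.
    for vol in v0 v1 v2; do
        gluster --mode=script volume stop $vol
    done
    service glusterd restart

    # Sample glusterd CPU usage once a minute to see how long the busy phase lasts.
    while true; do
        top -b -n 1 -p $(pidof glusterd) | tail -1
        sleep 60
    done
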
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the requested logs: https://www.dropbox.com/s/vt187h0gtu5doip/gluster_logs_20_40_80_servers.zip?dl=0 On Tue, Aug 29, 2017 at 7:48 AM, Gaurav Yadav <gyadav at redhat.com> wrote: > So far I haven't found anything significant. > > Can you send me the gluster logs along with the command-history logs for these > scenarios: > Scenario 1: 20 servers > Scenario 2: 40
2017 Aug 28
2
Glusterd process hangs on reboot
Hi Gaurav, Any progress on the problem? On Thursday, August 24, 2017, Serkan Çoban <cobanserkan at gmail.com> wrote: > Thank you Gaurav, > Here are more findings: > The problem does not happen using only 20 servers, each with 68 bricks. > (peer probe only 20 servers) > If we use 40 servers with a single volume, the glusterd 100% CPU state > continues for 5 minutes and it goes to
2017 Aug 29
0
Glusterd process hangs on reboot
glusterd returned to normal; here are the logs: https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0 On Tue, Aug 29, 2017 at 1:47 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Here are the logs after stopping all three volumes and restarting > glusterd on all nodes. I waited 70 minutes after the glusterd restart but > it is still consuming 100% CPU.
2017 Jun 30
2
Multi petabyte gluster
We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the rebuild times are bottlenecked by matrix operations, which scale as the square of the number of data stripes. There are some savings because of larger data chunks, but we ended up using 8+3 and heal times are about half compared to 16+3. -Alastair On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan at gmail.com>
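
Per the scaling argument above, decoding with 16 data stripes costs roughly (16/8)^2 = 4x the matrix work of 8 data stripes per reconstructed block, which is consistent with heals finishing about twice as fast once the larger-chunk savings are counted. For reference, a minimal sketch of creating such an 8+3 disperse volume; the volume name, hostnames, and brick paths are placeholders, not taken from the thread:

    # 8 data + 3 redundancy fragments per subvolume needs 11 bricks.
    gluster volume create bigvol disperse-data 8 redundancy 3 \
        server{1..11}:/bricks/brick1/brick
    gluster volume start bigvol
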
2017 Sep 03
2
Glusterd process hangs on reboot
----- Original Message ----- > From: "Ben Turner" <bturner at redhat.com> > To: "Serkan Çoban" <cobanserkan at gmail.com> > Cc: "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, September 3, 2017 2:30:31 PM > Subject: Re: [Gluster-users] Glusterd process hangs on reboot > > ----- Original Message ----- > >
2017 Aug 21
2
Brick count limit in a volume
Hi, Gluster version is 3.10.5. I am trying to create a 5500 brick volume, but getting an error stating that 4444 bricks is the limit. Is this a known limit? Can I change this with an option? Thanks, Serkan
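
For context, one hypothetical way such a 5500-brick create command could be assembled (500 hosts x 11 bricks each); the hostnames, brick paths, and disperse layout here are placeholders, not details from the thread:

    # Build the brick list, then attempt the create; with 3.10.x this reportedly
    # fails once the brick count exceeds 4444.
    BRICKS=""
    for host in $(seq -f "server%g" 1 500); do
        for disk in $(seq 1 11); do
            BRICKS="$BRICKS $host:/bricks/disk$disk/brick"
        done
    done
    gluster volume create bigvol disperse-data 8 redundancy 3 $BRICKS
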
2017 Aug 24
6
Glusterd process hangs on reboot
Here you can find 10 stack trace samples from glusterd. I waited 10 seconds between each trace. https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0 The content of the first stack trace is here:
Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)):
#0  0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0
#1  0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0
#2
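
A simple sketch of how such a sample set can be captured (the output filenames are arbitrary):

    # Take 10 pstack samples of glusterd, 10 seconds apart.
    pid=$(pidof glusterd)
    for i in $(seq 1 10); do
        pstack $pid > glusterd_pstack_$i.txt
        sleep 10
    done
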
2017 Sep 03
2
Glusterd process hangs on reboot
No worries Serkan, You can continue to use your 40-node clusters. The backtrace has resolved the function names and it *should* be sufficient to debug the issue. Thanks for letting us know. We'll post on this thread again to notify you about the findings. On Sat, Sep 2, 2017 at 2:42 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Milind, > > Anything new about the
2017 Aug 24
2
Glusterd process hangs on reboot
I am working on it and will share my findings as soon as possible. Thanks Gaurav On Thu, Aug 24, 2017 at 3:58 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Restarting glusterd causes the same thing. I tried with 3.12.rc0, > 3.10.5, 3.8.15, and 3.7.20, all with the same behavior. > My OS is CentOS 6.9; I tried with CentOS 6.8 and the problem remains... > The only way to a healthy state is
2017 Aug 29
0
Glusterd process hangs on reboot
I believe the logs you have shared consist of volume creation followed by starting the volume. However, you have mentioned that when a node from the 80-server cluster gets rebooted, the glusterd process hangs. Could you please provide the logs which led glusterd to hang for all the cases, along with the glusterd process utilization. Thanks Gaurav On Tue, Aug 29, 2017 at 2:44 PM, Serkan Çoban
2017 Sep 04
2
Glusterd process hangs on reboot
On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > >1. On the 80-node cluster, did you reboot only one node or multiple ones? > I tried both; the result is the same, but the logs/stacks are from stopping and > starting glusterd on only one server while the others are running. > > >2. Are you sure that the pstack output was always constantly pointing on >
2017 Aug 29
0
Glusterd process hangs on reboot
So far I haven't found anything significant. Can you send me the gluster logs along with the command-history logs for these scenarios: Scenario 1: 20 servers, Scenario 2: 40 servers, Scenario 3: 80 servers. Thanks Gaurav On Mon, Aug 28, 2017 at 11:22 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Gaurav, > Any progress on the problem? > > On Thursday, August 24,
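
A minimal sketch of collecting those logs on each node; the archive name is arbitrary, and /var/log/glusterfs is assumed to be the log directory (cmd_history.log normally lives there):

    # Bundle the glusterfs logs, including the CLI command history, from one node.
    tar czf gluster_logs_$(hostname).tar.gz /var/log/glusterfs/
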
2017 Sep 03
0
Glusterd process hangs on reboot
I usually change the event threads to 4, but those logs are from a default installation. On Sun, Sep 3, 2017 at 9:52 PM, Ben Turner <bturner at redhat.com> wrote: > ----- Original Message ----- >> From: "Ben Turner" <bturner at redhat.com> >> To: "Serkan Çoban" <cobanserkan at gmail.com> >> Cc: "Gluster Users" <gluster-users at
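
Assuming the event threads referred to here are the standard volume options, a sketch of that tuning (the volume name is a placeholder):

    # Raise client- and server-side event threads from the default of 2 to 4.
    gluster volume set v0 client.event-threads 4
    gluster volume set v0 server.event-threads 4
    gluster volume get v0 all | grep event-threads
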
2017 Aug 23
2
Brick count limit in a volume
An upstream bug would be ideal, as GitHub issues are mainly used for enhancements. In the meantime, could you point to the exact failure shown at the command line and the log entry from cli.log? On Wed, Aug 23, 2017 at 12:10 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi, I think this is the line limiting brick count: >
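
One way to pull that information, sketched here for reference (the exact messages will differ; /var/log/glusterfs/cli.log is the usual CLI log location, and $BRICKS is the brick list from the earlier sketch):

    # Re-run the failing create, then grab the most recent cli.log entries.
    gluster volume create bigvol disperse-data 8 redundancy 3 $BRICKS
    tail -50 /var/log/glusterfs/cli.log
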
2017 Jun 30
0
Multi petabyte gluster
Did you test healing by increasing disperse.shd-max-threads? What are your heal times per brick now? On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the > rebuild times are bottlenecked by matrix operations, which scale as the square > of the number of data stripes. There are
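
A sketch of that tuning, assuming a disperse volume named bigvol (the thread count of 8 is only an example):

    # Let the self-heal daemon work on more entries in parallel, then watch progress.
    gluster volume set bigvol disperse.shd-max-threads 8
    gluster volume heal bigvol info
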
2017 Aug 23
2
Glusterd process hangs on reboot
Not yet. Gaurav will be taking a look at it tomorrow. On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Atin, > > Do you have time to check the logs? > > On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanserkan at gmail.com> > wrote: > > The same thing happens with 3.12.rc0. This time perf top shows it hanging in > >
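
For reference, a minimal way to reproduce that kind of profile on the busy process (perf must be installed; the sample length is arbitrary):

    # Live view of where glusterd is spending CPU time.
    perf top -p $(pidof glusterd)

    # Or record a 30-second sample with call graphs for later analysis.
    perf record -g -p $(pidof glusterd) -- sleep 30
    perf report
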
2017 Sep 04
2
Glusterd process hangs on reboot
On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire <mchangir at redhat.com> wrote: > Serkan, > I have gone through the other mails in the thread as well but am responding > to this one specifically. > > Is this a source install or an RPM install? > If this is an RPM install, could you please install the > glusterfs-debuginfo RPM and retry capturing the gdb backtrace. >
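
A sketch of that step on an RPM-based system; the debuginfo package and repository names can vary, so treat this as an assumption rather than exact instructions:

    # Install matching debug symbols, then dump a backtrace of every glusterd thread.
    debuginfo-install -y glusterfs        # or: yum install glusterfs-debuginfo
    gdb -p $(pidof glusterd) -batch -ex 'thread apply all bt' > glusterd_backtrace.txt
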
2017 Sep 01
2
Glusterd process hangs on reboot
Hi, You can find the pstack samples here: https://www.dropbox.com/s/6gw8b6tng8puiox/pstack_with_debuginfo.zip?dl=0 Here is the first one:
Thread 8 (Thread 0x7f92879ae700 (LWP 78909)):
#0  0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0
#1  0x000000310fe37d57 in gf_timer_proc () from /usr/lib64/libglusterfs.so.0
#2  0x0000003d99c07aa1 in start_thread () from /lib64/libpthread.so.0
#3
2017 Sep 03
0
Glusterd process hangs on reboot
----- Original Message ----- > From: "Milind Changire" <mchangir at redhat.com> > To: "Serkan Çoban" <cobanserkan at gmail.com> > Cc: "Gluster Users" <gluster-users at gluster.org> > Sent: Saturday, September 2, 2017 11:44:40 PM > Subject: Re: [Gluster-users] Glusterd process hangs on reboot > > No worries Serkan, > You can