search for: sigwaiting

Displaying 20 results from an estimated 32 matches for "sigwaiting".

Did you mean: sigwaitinfo
2017 Sep 05
0
Glusterd process hangs on reboot
Some corrections to the previous mails. The problem does not happen when no volumes are created. It happens when volumes are created but stopped, and also when volumes are started. Below are the 5 stack traces taken at 10 min intervals with the volumes in the stopped state. --1--
Thread 8 (Thread 0x7f413f3a7700 (LWP 104249)):
#0 0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0
#1
2017 Sep 05
1
Glusterd process hangs on reboot
On Tue, Sep 5, 2017 at 6:13 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Some corrections about the previous mails. Problem does not happen > when no volumes created. > Problem happens volumes created but in stopped state. Problem also > happens when volumes started state. > Below is the 5 stack traces taken by 10 min intervals and volumes stopped > state. > As
2017 Sep 04
2
Glusterd process hangs on reboot
On Mon, 4 Sep 2017 at 20:04, Serkan Çoban <cobanserkan at gmail.com> wrote: > I have been using a 60 server 1560 brick 3.7.11 cluster without > problems for 1 year. I did not see this problem with it. > Note that this problem does not happen when I install packages & start > glusterd & peer probe and create the volumes. But after glusterd > restart. > > Also
2017 Sep 01
2
Glusterd process hangs on reboot
Hi, You can find pstack samples here: https://www.dropbox.com/s/6gw8b6tng8puiox/pstack_with_debuginfo.zip?dl=0 Here is the first one:
Thread 8 (Thread 0x7f92879ae700 (LWP 78909)):
#0 0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0
#1 0x000000310fe37d57 in gf_timer_proc () from /usr/lib64/libglusterfs.so.0
#2 0x0000003d99c07aa1 in start_thread () from /lib64/libpthread.so.0
#3
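For readers skimming the traces: a thread whose top frames are nanosleep() under a timer routine is normally an idle housekeeping thread, not the one that is hung. Below is a minimal C sketch of such a loop (illustrative only; the function names and the 1-second interval are assumptions, not taken from the glusterfs sources):

#include <pthread.h>
#include <time.h>

/* Illustrative timer loop: wakes up periodically to fire expired callbacks.
 * A pstack/gdb sample of this thread almost always lands inside nanosleep(),
 * because that is where it spends nearly all of its time. */
static void *timer_proc(void *arg)
{
    (void)arg;
    struct timespec interval = { .tv_sec = 1, .tv_nsec = 0 };
    for (;;) {
        nanosleep(&interval, NULL);   /* frame #0 in the backtrace */
        /* scan registered timers here and run the ones that expired */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, timer_proc, NULL);   /* appears as start_thread() in pstack */
    pthread_join(t, NULL);                        /* never returns in this sketch */
    return 0;
}

A wedged daemon usually shows up in the other threads instead, e.g. the same blocking frame repeated across consecutive samples, which is why several traces taken at intervals are compared.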
2017 Sep 02
0
Glusterd process hangs on reboot
Hi Milind, Anything new about the issue? Were you able to find the problem? Is there anything else you need? I will continue with two clusters of 40 servers each, so I will not be able to provide any further info for 80 servers. On Fri, Sep 1, 2017 at 10:30 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi, > You can find pstack samples here: >
2017 Sep 03
2
Glusterd process hangs on reboot
No worries Serkan, You can continue to use your 40 node clusters. The backtrace has resolved the function names and it *should* be sufficient to debug the issue. Thanks for letting us know. We'll post on this thread again to notify you about the findings. On Sat, Sep 2, 2017 at 2:42 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Milind, > > Anything new about the
2014 Jul 29
2
[LLVMdev] Sanitizer test failure
Yup, using SIGHUP works. On 29 July 2014 13:14, Evgeniy Stepanov <eugeni.stepanov at gmail.com> wrote: > Could it be that I'm misunderstanding signal semantics, and SIGUSR1 is > not guaranteed to be delivered (to the same process!) before kill() > returns? Could you check if that's what happens by adding a sleep() > somewhere? > > On Tue, Jul 29, 2014 at 2:17 AM,
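For context on the semantics being questioned: POSIX only guarantees that a signal a process sends to itself with kill() (or at least one pending unblocked signal) is delivered before kill() returns when the calling thread has it unblocked and no other thread has it unblocked or is waiting for it in sigwait(). In a multithreaded process the handler may therefore run on a different thread after kill() has already returned, so a check placed right after kill() can race. A small self-contained sketch of the pattern (hypothetical, not the sanitizer test itself):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_usr1 = 0;

static void on_usr1(int sig)
{
    (void)sig;
    got_usr1 = 1;
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_usr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    kill(getpid(), SIGUSR1);

    /* Single-threaded and SIGUSR1 unblocked: the handler has run by the
     * time kill() returns, so this prints 1.  If another thread had the
     * signal unblocked, or was parked in sigwait() for it, delivery could
     * happen on that thread after kill() returns and this check would
     * race, hence the suggestion to add a sleep() while debugging. */
    printf("got_usr1 = %d\n", (int)got_usr1);
    return 0;
}

Whether that is what the sanitizer test hits, and why SIGHUP behaves differently there, would still need to be confirmed against the test's own signal masks.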
2017 Sep 03
2
Glusterd process hangs on reboot
----- Original Message ----- > From: "Ben Turner" <bturner at redhat.com> > To: "Serkan Çoban" <cobanserkan at gmail.com> > Cc: "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, September 3, 2017 2:30:31 PM > Subject: Re: [Gluster-users] Glusterd process hangs on reboot > > ----- Original Message ----- > >
2017 Sep 03
0
Glusterd process hangs on reboot
----- Original Message ----- > From: "Milind Changire" <mchangir at redhat.com> > To: "Serkan Çoban" <cobanserkan at gmail.com> > Cc: "Gluster Users" <gluster-users at gluster.org> > Sent: Saturday, September 2, 2017 11:44:40 PM > Subject: Re: [Gluster-users] Glusterd process hangs on reboot > > No worries Serkan, > You can
2002 Feb 08
3
SCP Problem with OpenSSH 3.0.2p1 linux->solaris
Hello, I am experiencing scp hangs. This command is executed: system("/usr/bin/scp -v -v -v -C root\@$ip:$LOG_DIR_CLIENT$SYSTEM_LOG"."_transfer $LOG_DIR_SERVER$SYSTEM_LOG-$ip >$SSH_STEP3_LOG 2>&1"); from within a Perl script.
2017 Sep 03
0
Glusterd process hangs on reboot
I usually change event threads to 4, but those logs are from a default installation. On Sun, Sep 3, 2017 at 9:52 PM, Ben Turner <bturner at redhat.com> wrote: > ----- Original Message ----- >> From: "Ben Turner" <bturner at redhat.com> >> To: "Serkan Çoban" <cobanserkan at gmail.com> >> Cc: "Gluster Users" <gluster-users at
2017 Sep 01
0
Glusterd process hangs on reboot
Serkan, I have gone through other mails in the mail thread as well but am responding to this one specifically. Is this a source install or an RPM install? If this is an RPM install, could you please install the glusterfs-debuginfo RPM and retry capturing the gdb backtrace. If this is a source install, then you'll need to configure the build with --enable-debug and reinstall and retry
2017 Aug 24
6
Glusterd process hangs on reboot
Here you can find 10 stack trace samples from glusterd. I wait 10 seconds between each trace. https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0 Content of the first stack trace is here:
Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)):
#0 0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0
#1 0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0
#2
2017 Aug 24
2
Glusterd process hangs on reboot
I am working on it and will share my findings as soon as possible. Thanks Gaurav On Thu, Aug 24, 2017 at 3:58 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Restarting glusterd causes the same thing. I tried with 3.12.rc0, > 3.10.5, 3.8.15, 3.7.20 all same behavior. > My OS is centos 6.9, I tried with centos 6.8 problem remains... > Only way to a healthy state is
2017 Aug 24
0
Glusterd process hangs on reboot
Restarting glusterd causes the same thing. I tried with 3.12.rc0, 3.10.5, 3.8.15, 3.7.20; all show the same behavior. My OS is CentOS 6.9; I tried with CentOS 6.8 and the problem remains... The only way back to a healthy state is to destroy the gluster config/rpms, reinstall, and recreate the volumes. On Thu, Aug 24, 2017 at 8:49 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Here you can find 10 stack trace samples
2017 Aug 28
2
Glusterd process hangs on reboot
Hi Gaurav, Any progress on the problem? On Thursday, August 24, 2017, Serkan Çoban <cobanserkan at gmail.com> wrote: > Thank you Gaurav, > Here is more findings: > Problem does not happen using only 20 servers each has 68 bricks. > (peer probe only 20 servers) > If we use 40 servers with single volume, glusterd cpu 100% state > continues for 5 minutes and it goes to
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the requested logs: https://www.dropbox.com/s/vt187h0gtu5doip/gluster_logs_20_40_80_servers.zip?dl=0 On Tue, Aug 29, 2017 at 7:48 AM, Gaurav Yadav <gyadav at redhat.com> wrote: > Till now I haven't found anything significant. > > Can you send me gluster logs along with command-history-logs for these > scenarios: > Scenario1 : 20 servers > Scenario2 : 40
2017 Aug 24
0
Glusterd process hangs on reboot
Thank you Gaurav, Here are more findings: The problem does not happen using only 20 servers, each with 68 bricks (peer probe only 20 servers). If we use 40 servers with a single volume, glusterd stays at 100% CPU for 5 minutes and then returns to a normal state. With 80 servers we have no working state yet... On Thu, Aug 24, 2017 at 1:33 PM, Gaurav Yadav <gyadav at redhat.com> wrote: > > I am
2017 Sep 04
2
Glusterd process hangs on reboot
On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire <mchangir at redhat.com> wrote: > Serkan, > I have gone through other mails in the mail thread as well but responding > to this one specifically. > > Is this a source install or an RPM install ? > If this is an RPM install, could you please install the > glusterfs-debuginfo RPM and retry to capture the gdb backtrace. >
2017 Aug 29
0
Glusterd process hangs on reboot
Till now I haven't found anything significant. Can you send me the gluster logs along with the command-history logs for these scenarios: Scenario 1: 20 servers, Scenario 2: 40 servers, Scenario 3: 80 servers. Thanks Gaurav On Mon, Aug 28, 2017 at 11:22 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Gaurav, > Any progress about the problem? > > On Thursday, August 24,