Displaying 7 results from an estimated 7 matches for "event_pool".
2013 Sep 06 - 2 - [Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
...64 in socket_event_poll_in (this=0x3b6c060) at socket.c:1677
#14 0x00007fc4eeeb0847 in socket_event_handler (fd=<value optimized out>, idx=265, data=0x3b6c060, poll_in=1, poll_out=0, poll_err=<value optimized out>) at socket.c:1792
#15 0x00007fc4f2846464 in event_dispatch_epoll_handler (event_pool=0x177cdf0) at event.c:785
#16 event_dispatch_epoll (event_pool=0x177cdf0) at event.c:847
#17 0x000000000040736a in main (argc=<value optimized out>, argv=0x7fffcb83efc8) at glusterfsd.c:1689
-----Original Message-----
From: jowalker at redhat.com [mailto:jowalker at redhat.com] On Behalf Of...
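For context, a trace with resolved arguments and file:line information like the frames above normally comes from running gdb against the core file. A minimal sketch of doing that for this 3.3.1 client crash follows; the binary and core paths are placeholders rather than details from the report, and matching debug symbols are assumed:

  gdb /usr/sbin/glusterfs /path/to/core
  (gdb) bt                  # prints frames like #14-#17 quoted above
  (gdb) frame 14            # select the socket_event_handler frame
  (gdb) info args           # inspect fd, idx, data, poll_in, poll_out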
2017 Dec 06 - 0 - [Gluster-devel] Crash in glusterd!!!
...f847ff89c in socket_event_handler (fd=<optimized out>,
idx=<optimized out>, data=0x3fff74002210, poll_in=<optimized out>,
poll_out=<optimized out>, poll_err=<optimized out>) at socket.c:2349
#15 0x00003fff88616874 in event_dispatch_epoll_handler
(event=0x3fff83d9d6a0, event_pool=0x10045bc0
<_GLOBAL__sub_I__ZN29DrhIfRhControlPdrProxyC_ActorC2EP12RTControllerP10RTActorRef()+116>)
at event-epoll.c:575
#16 event_dispatch_epoll_worker (data=0x100bb4a0
<main_thread_func__()+1756>) at event-epoll.c:678
#17 0x00003fff884cfb10 in start_thread (arg=0x3fff83d9e160) at
p...
2017 Dec 06 - 1 - [Gluster-devel] Crash in glusterd!!!
...t_handler (fd=<optimized out>,
> idx=<optimized out>, data=0x3fff74002210, poll_in=<optimized out>,
> poll_out=<optimized out>, poll_err=<optimized out>) at socket.c:2349
>
> #15 0x00003fff88616874 in event_dispatch_epoll_handler
> (event=0x3fff83d9d6a0, event_pool=0x10045bc0 <_GLOBAL__sub_I__
> ZN29DrhIfRhControlPdrProxyC_ActorC2EP12RTControllerP10RTActorRef()+116>)
> at event-epoll.c:575
>
> #16 event_dispatch_epoll_worker (data=0x100bb4a0
> <main_thread_func__()+1756>) at event-epoll.c:678
>
> #17 0x00003fff884cfb10 in star...
2017 Dec 06 - 2 - [Gluster-devel] Crash in glusterd!!!
Without the glusterd log file and the core file or the backtrace, I can't
comment on anything.
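For reference, the items being asked for here are usually collected along these lines; this is a sketch only, and the log directory, binary and core paths are common defaults rather than details from the poster's system:

  ls /var/log/glusterfs/        # glusterd's log file lives here on most installs
  # Convert the core into a plain-text backtrace that can be attached to the thread.
  gdb -batch -ex 'thread apply all bt full' /usr/sbin/glusterd /path/to/core > glusterd-bt.txt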
On Wed, Dec 6, 2017 at 3:09 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com>
wrote:
> Any suggestion....
>
> On Dec 6, 2017 11:51, "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com> wrote:
>
>> Hi Team,
>>
>> We are getting the crash in glusterd after
2013 Mar 20 - 2 - Geo-replication broken in 3.4 alpha2?
Dear all,
I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following:
All machines are running CentOS 6.4 and using
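The setup steps are cut off in this excerpt. For orientation only, a 3.4-era geo-replication session was typically driven with commands along these lines; the volume name, slave host and slave path are placeholders, and the exact slave URL syntax changed between releases:

  gluster volume geo-replication mastervol root@slavehost:/data/geo-rep-slave start
  gluster volume geo-replication mastervol root@slavehost:/data/geo-rep-slave status
  gluster volume geo-replication mastervol root@slavehost:/data/geo-rep-slave config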
2011 Jun 09 - 1 - NFS problem
Hi,
I'm hitting the same problem as Juergen.
My volume is a simple replicated volume with 2 hosts, running GlusterFS 3.2.0.
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
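For context, a volume with the layout and options shown above would have been created and tuned roughly as follows; this is a reconstruction from the volume info output, not the poster's actual command history:

  gluster volume create poolsave replica 2 transport tcp \
      ylal2950:/soft/gluster-data ylal2960:/soft/gluster-data
  gluster volume start poolsave
  gluster volume set poolsave diagnostics.brick-log-level DEBUG
  gluster volume set poolsave network.ping-timeout 20
  gluster volume set poolsave performance.cache-size 512MB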
2010 Apr 22 - 1 - Transport endpoint not connected
Hey guys,
I've recently implemented Gluster to share web content read-write between
two servers.
Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse : 2.7.2-1ubuntu2.1
Platform : Ubuntu 8.04 LTS
I used the following command to generate my configs:
/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export
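The volgen command line is cut off in this excerpt. As a general note, "Transport endpoint is not connected" on a FUSE mount usually means the client-side glusterfs process has exited or lost its connection; a typical first check looks something like this, where the mount point, volfile name and log path are examples rather than details from the original post:

  ps aux | grep '[g]lusterfs'                    # has the client process died?
  tail -n 50 /usr/local/var/log/glusterfs/*.log  # log directory depends on the build prefix
  umount /var/www/shared
  glusterfs -f /etc/glusterfs/repstore1-tcp.vol /var/www/shared   # remount from the volgen client volfile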