Displaying 7 results from an estimated 7 matches for "event_dispatch_epoll_handl".
2017 Jul 20 (1 reply): Error while mounting gluster volume
...4.420422] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[1970-01-02 10:54:04.420429] I [glusterfsd-mgmt.c:1824:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[1970-01-02 10:54:04.420480] E [MSGID: 101063]
[event-epoll.c:550:event_dispatch_epoll_handler] 0-epoll: stale fd found on
idx=0, gen=0, events=0, slot->gen=2
[1970-01-02 10:54:04.420511] E [MSGID: 101063]
[event-epoll.c:550:event_dispatch_epoll_handler] 0-epoll: stale fd found on
idx=0, gen=0, events=0, slot->gen=3
[1970-01-02 10:54:04.420534] E [MSGID: 101063]
[event-epoll.c:550:ev...
2013 Sep 06 (2 replies): [Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
...ort.c:489
#13 0x00007fc4eeeb0764 in socket_event_poll_in (this=0x3b6c060) at socket.c:1677
#14 0x00007fc4eeeb0847 in socket_event_handler (fd=<value optimized out>, idx=265, data=0x3b6c060, poll_in=1, poll_out=0, poll_err=<value optimized out>) at socket.c:1792
#15 0x00007fc4f2846464 in event_dispatch_epoll_handler (event_pool=0x177cdf0) at event.c:785
#16 event_dispatch_epoll (event_pool=0x177cdf0) at event.c:847
#17 0x000000000040736a in main (argc=<value optimized out>, argv=0x7fffcb83efc8) at glusterfsd.c:1689
-----Original Message-----
From: jowalker at redhat.com [mailto:jowalker at redhat.com...
2017 Jun 15 (1 reply): peer probe failures
...nt_submit]
0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2,
Proc: 1) to rpc-transport (glusterfs)
[2017-04-03 22:20:24.705739] D [rpc-clnt-ping.c:281:rpc_clnt_start_ping]
0-glusterfs: ping timeout is 0, returning
[2017-04-03 22:20:24.705723] D [MSGID: 0]
[event-epoll.c:591:event_dispatch_epoll_handler] 0-epoll: generation bumped
on idx=1 from gen=1 to slot->gen=2, fd=7, slot->fd=7
[2017-04-03 22:20:27.614881] T [rpc-clnt.c:418:rpc_clnt_reconnect]
0-glusterfs: attempting reconnect
[2017-04-03 22:20:27.615151] T [socket.c:2879:socket_connect] (-->
/usr/lib/x86_64-linux-gnu/libglusterfs....
2017 Dec 06 (0 replies): [Gluster-devel] Crash in glusterd!!!
...ntry=0x3fff74002210)
at socket.c:2236
#14 0x00003fff847ff89c in socket_event_handler (fd=<optimized out>,
idx=<optimized out>, data=0x3fff74002210, poll_in=<optimized out>,
poll_out=<optimized out>, poll_err=<optimized out>) at socket.c:2349
#15 0x00003fff88616874 in event_dispatch_epoll_handler
(event=0x3fff83d9d6a0, event_pool=0x10045bc0
<_GLOBAL__sub_I__ZN29DrhIfRhControlPdrProxyC_ActorC2EP12RTControllerP10RTActorRef()+116>)
at event-epoll.c:575
#16 event_dispatch_epoll_worker (data=0x100bb4a0
<main_thread_func__()+1756>) at event-epoll.c:678
#17 0x00003fff884cfb10 in st...
2017 Dec 06 (1 reply): [Gluster-devel] Crash in glusterd!!!
...et.c:2236
>
> #14 0x00003fff847ff89c in socket_event_handler (fd=<optimized out>,
> idx=<optimized out>, data=0x3fff74002210, poll_in=<optimized out>,
> poll_out=<optimized out>, poll_err=<optimized out>) at socket.c:2349
>
> #15 0x00003fff88616874 in event_dispatch_epoll_handler
> (event=0x3fff83d9d6a0, event_pool=0x10045bc0 <_GLOBAL__sub_I__
> ZN29DrhIfRhControlPdrProxyC_ActorC2EP12RTControllerP10RTActorRef()+116>)
> at event-epoll.c:575
>
> #16 event_dispatch_epoll_worker (data=0x100bb4a0
> <main_thread_func__()+1756>) at event-epoll.c:678...
2017 Dec 06 (2 replies): [Gluster-devel] Crash in glusterd!!!
Without the glusterd log file and the core file or the backtrace, I can't
comment on anything.
On Wed, Dec 6, 2017 at 3:09 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com>
wrote:
> Any suggestion....
>
> On Dec 6, 2017 11:51, "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com> wrote:
>
>> Hi Team,
>>
>> We are getting the crash in glusterd after
2013 Mar 20 (2 replies): Geo-replication broken in 3.4 alpha2?
Dear all,
I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system, and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. To set up geo-replication I did the following:
All machines are running CentOS 6.4 and using