search for: epollerr

Displaying 20 results from an estimated 55 matches for "epollerr".

2017 Aug 06
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
...: Received friend update from uuid: d5a487e3-4c9b-4e5a-91ff-b8d85fd51da9
[2017-08-06 03:12:38.584598] I [MSGID: 106502] [glusterd-handler.c:2762:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-08-06 03:12:38.599340] I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2017-08-06 03:12:38.613745] I [MSGID: 106005] [glusterd-handler.c:5846:__glusterd_brick_rpc_notify] 0-management: Brick taupo:/srv/gluster/gv2/brick1/gvol has disconnected from glusterd.
----------
I checked that cluster.brick-multiplex is off. How can I debug this further? T...
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
...a7c2a, host: web2.dasilva.network, port: 0
[2017-08-20 20:31:04.762187] I [MSGID: 106493] [glusterd-rpc-ops.c:700:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 8f66df4a-e286-4c63-9b0b-257c1ccd08b0
[2017-08-20 20:31:04.763112] I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2017-08-20 20:31:04.763782] I [MSGID: 106005] [glusterd-handler.c:5846:__glusterd_brick_rpc_notify] 0-management: Brick web1.dasilva.network:/data/gluster/etc has disconnected from glusterd.
[2017-08-20 20:31:04.764406] I [socket.c:2474:socket_event_handler] 0-transport: EPOLLE...
2018 Sep 07
3
Auth process sometimes stop responding after upgrade
On Friday, 7 September 2018 at 10:06:00 CEST, Sami Ketola wrote:
> > On 7 Sep 2018, at 11.00, Simone Lazzaris <s.lazzaris at interactive.eu>
> > wrote:
> >
> > The only suspect thing is this:
> >
> > Sep 6 14:45:41 imap-front13 dovecot: director: doveadm: Host
> > 192.168.1.142
> > vhost count changed from 100 to 0
>
2017 Aug 21
0
Glusterd not working with systemd in redhat 7
...tslutpunkten
> är inte förbunden
> [2017-08-20 20:30:40.466745] D [socket.c:733:__socket_disconnect]
> 0-glusterfs: __socket_teardown_connection () failed: Transportslutpunkten
> är inte förbunden
> [2017-08-20 20:30:40.466749] D [socket.c:2474:socket_event_handler]
> 0-transport: EPOLLERR - disconnecting now
> [2017-08-20 20:30:40.466751] D [socket.c:2474:socket_event_handler]
> 0-transport: EPOLLERR - disconnecting now
> [2017-08-20 20:30:40.466764] I [glusterfsd-mgmt.c:2171:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: disconnected from remote-host: web1.dasilva.network
> [...
2017 Aug 20
2
Glusterd not working with systemd in redhat 7
...connection () failed: Transportslutpunkten är inte förbunden
[2017-08-20 20:30:40.466745] D [socket.c:733:__socket_disconnect] 0-glusterfs: __socket_teardown_connection () failed: Transportslutpunkten är inte förbunden
[2017-08-20 20:30:40.466749] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2017-08-20 20:30:40.466751] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2017-08-20 20:30:40.466764] I [glusterfsd-mgmt.c:2171:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: web1.dasilva.network
[2017-08-20 20:30:40.46676...
2018 Sep 07
6
Auth process sometimes stop responding after upgrade
...an instance without the "service_count = 0" configuration directive on pop3-login. I've observed that while the issue is occurring, the director process goes to 100% CPU. I've straced the process. It is seemingly looping:
...
...
epoll_ctl(13, EPOLL_CTL_ADD, 78, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=149035320, u64=149035320}}) = 0
epoll_ctl(13, EPOLL_CTL_DEL, 78, {0, {u32=149035320, u64=149035320}}) = 0
epoll_ctl(13, EPOLL_CTL_ADD, 78, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=149035320, u64=149035320}}) = 0
epoll_ctl(13, EPOLL_CTL_DEL, 78, {0, {u32=149035320, u64=149035320}})...
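The ADD/DEL churn in that trace can be reproduced in miniature. Below is a sketch (Python's `select.epoll`; the names and fd setup are illustrative, not Dovecot's code) of the same syscall pattern: each loop iteration registers an fd and immediately unregisters it, which strace reports as paired EPOLL_CTL_ADD/EPOLL_CTL_DEL calls.

```python
# Reproduce the epoll_ctl ADD/DEL pattern from the strace output above.
# Each iteration issues epoll_ctl(EPOLL_CTL_ADD) then epoll_ctl(EPOLL_CTL_DEL)
# on the same fd, exactly the busy-loop shape seen in the trace.
import select
import socket

a, b = socket.socketpair()
ep = select.epoll()
mask = select.EPOLLIN | select.EPOLLPRI | select.EPOLLERR | select.EPOLLHUP

cycles = 0
for _ in range(3):
    ep.register(a.fileno(), mask)   # epoll_ctl(..., EPOLL_CTL_ADD, fd, ...)
    ep.unregister(a.fileno())       # epoll_ctl(..., EPOLL_CTL_DEL, fd, ...)
    cycles += 1

ep.close()
a.close()
b.close()
```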
2018 Feb 08
2
Thousands of EPOLLERR - disconnecting now
Hello
I have a large cluster in which every node is logging:
I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
At a rate of around 4 or 5 per second per node, which is adding up to a lot of messages. This seems to happen while my cluster is idle.
2018 Feb 08
0
Thousands of EPOLLERR - disconnecting now
On Thu, Feb 8, 2018 at 2:04 PM, Gino Lisignoli <glisignoli at gmail.com> wrote:
> Hello
>
> I have a large cluster in which every node is logging:
>
> I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR -
> disconnecting now
>
> At a rate of around 4 or 5 per second per node, which is adding up to a
> lot of messages. This seems to happen while my cluster is idle.
>
This log message is normally seen repeatedly when there are problems in the network layer. Can you please verif...
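For context on what makes the kernel report EPOLLERR at all: it is delivered whenever the fd is in an error state, whether or not the caller asked for it in the event mask. A minimal sketch (Python's `select.epoll`; assumes Linux) that provokes it by polling the write end of a pipe whose read end has gone away:

```python
# Demonstrate kernel-reported EPOLLERR: poll the write end of a pipe
# after its read end has been closed. EPOLLERR is always delivered,
# even though we registered only for EPOLLOUT.
import os
import select

r, w = os.pipe()
ep = select.epoll()
ep.register(w, select.EPOLLOUT)
os.close(r)                       # peer goes away -> fd enters error state

events = ep.poll(timeout=1)       # returns a list of (fd, event_mask)
err = any(fd == w and mask & select.EPOLLERR for fd, mask in events)

ep.close()
os.close(w)
```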
2019 Mar 08
1
Dovecot v2.3.5 released
On 7.3.2019 23.37, A. Schulze via dovecot wrote:
>
> On 07.03.19 at 17:33, Aki Tuomi via dovecot wrote:
>
>>> test-http-client-errors.c:2989: Assert failed: FALSE
>>> connection timed out ................................................. : FAILED
> Hello Aki,
>
>> Are you running with valgrind or on a really slow system?
> I'm not aware my buildsystem
2015 Jun 21
3
dovecot auth using 100% CPU
...dovecot 2.2.13, I've just upgraded to 2.2.18
strace -r -p 17956 output:
Process 17956 attached
0.000000 lseek(19, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
0.000057 getsockname(19, {sa_family=AF_LOCAL, NULL}, [2]) = 0
0.000043 epoll_ctl(15, EPOLL_CTL_ADD, 19, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=850618928, u64=140128453618224}}) = 0
0.000040 write(19, "VERSION\tauth-worker\t1\t0\nDBHASH\t5"..., 97) = -1 EPIPE (Broken pipe)
0.000035 --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=17956, si_uid=108} ---
0.000020 epoll_wait(15, {{EPOLLIN|EPOLLHUP...
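The EPIPE/SIGPIPE pair in that trace is the ordinary result of writing to a pipe whose reader has exited. A small sketch of the same failure (Python ignores SIGPIPE by default, so the failed write surfaces as `BrokenPipeError` rather than killing the process; the payload string is illustrative):

```python
# Writing to a pipe whose read end is closed fails with EPIPE.
# In C this also raises SIGPIPE; Python masks SIGPIPE, so we just
# get BrokenPipeError carrying errno EPIPE.
import errno
import os

r, w = os.pipe()
os.close(r)                       # reader disappears

caught_epipe = False
try:
    os.write(w, b"VERSION\tauth-worker\t1\t0\n")
except BrokenPipeError as e:
    caught_epipe = (e.errno == errno.EPIPE)

os.close(w)
```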
2014 Nov 12
1
closed fd causes: lmtp(18385): Panic: epoll_ctl(del, 11) failed: Bad file descriptor
...at main.c:123
strace shows this:
read(11, "\27\3\1\0 q\r\252\3551\21\237l\33\330\33\303\340\306l\334k\0360p\303)HF\331\234g"..., 1261) = 37
read(11, 0xf09908, 1224) = -1 EAGAIN (Resource temporarily unavailable)
epoll_ctl(10, EPOLL_CTL_MOD, 11, {EPOLLIN|EPOLLPRI|EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=15754432, u64=15754432}}) = 0
epoll_ctl(10, EPOLL_CTL_MOD, 11, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=15754432, u64=15754432}}) = 0
write(11, "\27\3\1\0 \204\375\373!\253\263n\204\274duj/\n\202\236\373\342\303[\22\17\264\10\23\346\225"..., 90) = 90
epoll_ctl(10, EPOLL_CT...
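The EBADF in the panic follows from epoll semantics: closing an fd removes it from the epoll interest list automatically, so a later `EPOLL_CTL_MOD` or `EPOLL_CTL_DEL` on that fd number fails with EBADF. A sketch using Python's `select.epoll` (assumes Linux):

```python
# Reproduce the "epoll_ctl(...) failed: Bad file descriptor" condition:
# close an fd that is still registered, then try to modify it.
# The kernel has already dropped the closed fd from the epoll set,
# so epoll_ctl(EPOLL_CTL_MOD) returns EBADF.
import errno
import os
import select

r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)
os.close(r)                       # fd closed behind epoll's back

got_ebadf = False
try:
    ep.modify(r, select.EPOLLIN | select.EPOLLOUT)   # epoll_ctl(EPOLL_CTL_MOD)
except OSError as e:
    got_ebadf = (e.errno == errno.EBADF)

ep.close()
os.close(w)
```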
2010 Oct 10
3
pop3 TCP_CORK too late error
I was stracing a pop3 process and noticed that the TCP_CORK option isn't set soon enough:
epoll_wait(8, {{EPOLLOUT, {u32=37481984, u64=37481984}}}, 38, 207) = 1
write(41, "iTxPBrNlaNFao+yQzLhuO4/+tQ5cuiKSe"..., 224) = 224
epoll_ctl(8, EPOLL_CTL_MOD, 41, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=37481984, u64=37481984}}) = 0
pread(19, "AFABQAlAC0AJ\nQAUALQAUAFABQAlAC0AF"..., 8192, 811008) = 8192
setsockopt(41, SOL_TCP, TCP_CORK, [1], 4) = 0
write(41, "\r\nKUWtGCjKO5N8UbW5uYLZbS0nmaNi4ZB"..., 4134) = 4134
write(41, "\r\npckt0KMGuho6r4H1ay0sXbx+YyuC0Sn...
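The complaint is that TCP_CORK is set only after the first write, so that write can leave as its own small segment. A hedged sketch of the intended order (Python on Linux; the loopback server/client setup and payloads are illustrative, not Dovecot's code): cork before the first write, then uncork to flush everything together.

```python
# Cork the socket *before* the first write so small writes coalesce
# into as few TCP segments as possible; uncorking flushes the buffer.
# TCP_CORK is Linux-specific.
import socket

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)   # cork first
conn.sendall(b"+OK 2 messages\r\n")        # would otherwise go out alone
conn.sendall(b"...message body...\r\n")
conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)   # uncork: flush

expected = b"+OK 2 messages\r\n...message body...\r\n"
data = b""
while len(data) < len(expected):
    data += cli.recv(4096)

conn.close()
cli.close()
srv.close()
```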
2018 Sep 07
1
Auth process sometimes stop responding after upgrade
...configuration directive on pop3-login.
>>
>> I've observed that while the issue is occurring, the director process goes to 100% CPU. I've straced the process. It is seemingly looping:
>>
>> ...
>> ...
>> epoll_ctl(13, EPOLL_CTL_ADD, 78, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=149035320, u64=149035320}}) = 0
>> epoll_ctl(13, EPOLL_CTL_DEL, 78, {0, {u32=149035320, u64=149035320}}) = 0
>> epoll_ctl(13, EPOLL_CTL_ADD, 78, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=149035320, u64=149035320}}) = 0
>> epoll_ctl(13, EPOLL_CTL_DEL, 78, {0, {u32=14...
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
...rom uuid: d5a487e3-4c9b-4e5a-91ff-b8d85fd51da9
> [2017-08-06 03:12:38.584598] I [MSGID: 106502] [glusterd-handler.c:2762:__glusterd_handle_friend_update]
> 0-management: Received my uuid as Friend
> [2017-08-06 03:12:38.599340] I [socket.c:2474:socket_event_handler]
> 0-transport: EPOLLERR - disconnecting now
> [2017-08-06 03:12:38.613745] I [MSGID: 106005] [glusterd-handler.c:5846:__glusterd_brick_rpc_notify]
> 0-management: Brick taupo:/srv/gluster/gv2/brick1/gvol has disconnected
> from glusterd.
> ----------
>
> I checked that cluster.brick-multiplex is off. How...
2015 Jun 21
0
dovecot auth using 100% CPU
...o 2.2.18
>
> strace -r -p 17956 output:
>
> Process 17956 attached
> 0.000000 lseek(19, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
> 0.000057 getsockname(19, {sa_family=AF_LOCAL, NULL}, [2]) = 0
> 0.000043 epoll_ctl(15, EPOLL_CTL_ADD, 19,
> {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=850618928,
> u64=140128453618224}}) = 0
> 0.000040 write(19, "VERSION\tauth-worker\t1\t0\nDBHASH\t5"..., 97) = -1 EPIPE (Broken pipe)
> 0.000035 --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=17956, si_uid=108} ---
> 0.000020 epo...
2018 Sep 07
0
Auth process sometimes stop responding after upgrade
...service_count = 0" configuration directive on pop3-login.
>
> I've observed that while the issue is occurring, the director process goes to 100% CPU. I've straced the process. It is seemingly looping:
>
> ...
> ...
> epoll_ctl(13, EPOLL_CTL_ADD, 78, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=149035320, u64=149035320}}) = 0
> epoll_ctl(13, EPOLL_CTL_DEL, 78, {0, {u32=149035320, u64=149035320}}) = 0
> epoll_ctl(13, EPOLL_CTL_ADD, 78, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=149035320, u64=149035320}}) = 0
> epoll_ctl(13, EPOLL_CTL_DEL, 78, {0, {u32=149035320, u64...
2013 Sep 25
3
Dovecot extremely slow!
...9759648824528}}) = 0
18:30:37.066574 inotify_rm_watch(13, 2) = 0
18:30:37.066795 epoll_ctl(9, EPOLL_CTL_DEL, 13, {0, {u32=1413125552, u64=139759648937392}}) = 0
18:30:37.067053 write(11, "8 OK Idle completed.\r\n", 22) = 22
18:30:37.067185 epoll_ctl(9, EPOLL_CTL_ADD, 11, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=1413012688, u64=139759648824528}}) = 0
18:30:37.067301 epoll_wait(9, {{EPOLLIN, {u32=1413012688, u64=139759648824528}}}, 6, 1800000) = 1
18:30:37.291601 read(11, "9 noop\r\n", 8024) = 8
18:30:37.291914 stat("/home/pato/mail/Astro/conferences", {st_mode=S_IFREG|064...
2018 Apr 26
1
director stuck in inifite loop on 2.2.35
...le dovecot with -g and run a version with debugging info, to be able to inspect more when the crash happens again?
"strace" showed this endless loop:
09:27:52.837960 epoll_ctl(14, EPOLL_CTL_DEL, 213, 0x7ffe8e642cdc) = 0
09:27:52.837993 epoll_ctl(14, EPOLL_CTL_ADD, 213, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=3317538000, u64=94320799358160}}) = 0
09:27:52.838035 epoll_ctl(14, EPOLL_CTL_DEL, 213, 0x7ffe8e642cdc) = 0
09:27:52.838070 epoll_ctl(14, EPOLL_CTL_ADD, 213, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=3317538000, u64=94320799358160}}) = 0
"ltrace" showed the same loop as:...
2018 Mar 27
4
[PATCH net V2] vhost: correctly remove wait queue during poll failure
...vhost.c b/drivers/vhost/vhost.c
index 1b3e8d2d..5d5a9d9 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -212,8 +212,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
 	if (mask)
 		vhost_poll_wakeup(&poll->wait, 0, 0, poll_to_key(mask));
 	if (mask & EPOLLERR) {
-		if (poll->wqh)
-			remove_wait_queue(poll->wqh, &poll->wait);
+		vhost_poll_stop(poll);
 		ret = -EINVAL;
 	}
--
2.7.4