Search for: poll_in

Displaying 20 results from an estimated 27 matches for "poll_in".

2013 Sep 06 (2): [Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
...event=<value optimized out>, data=<value optimized out>) at rpc-clnt.c:905
#12 0x00007fc4f25faeb8 in rpc_transport_notify (this=<value optimized out>, event=<value optimized out>, data=<value optimized out>) at rpc-transport.c:489
#13 0x00007fc4eeeb0764 in socket_event_poll_in (this=0x3b6c060) at socket.c:1677
#14 0x00007fc4eeeb0847 in socket_event_handler (fd=<value optimized out>, idx=265, data=0x3b6c060, poll_in=1, poll_out=0, poll_err=<value optimized out>) at socket.c:1792
#15 0x00007fc4f2846464 in event_dispatch_epoll_handler (event_pool=0x177cdf0) at e...
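The frames above trace GlusterFS's event loop: epoll dispatch (#15) decodes readiness into poll_in/poll_out/poll_err flags (#14), a readable socket funnels into socket_event_poll_in (#13), and the result is pushed up to the RPC layer (#12). A rough sketch of that dispatch pattern in plain C (a generic illustration; dispatch_one and event_handler_t are invented names, not GlusterFS source):

    #include <sys/epoll.h>

    /* Hypothetical callback type mirroring the socket_event_handler()
     * signature seen in the backtrace: the dispatcher decodes epoll
     * events into poll_in/poll_out/poll_err flags before calling it. */
    typedef int (*event_handler_t)(int fd, int idx, void *data,
                                   int poll_in, int poll_out, int poll_err);

    /* Simplified dispatch step: translate one epoll_event into the
     * flag-based callback, as the epoll handler frame above does. */
    static int dispatch_one(struct epoll_event *ev, int fd, int idx,
                            void *data, event_handler_t handler)
    {
        int poll_in  = (ev->events & EPOLLIN)  != 0;
        int poll_out = (ev->events & EPOLLOUT) != 0;
        int poll_err = (ev->events & (EPOLLERR | EPOLLHUP)) != 0;

        /* For a socket transport, poll_in leads to a read path like
         * socket_event_poll_in(), which then notifies the RPC layer. */
        return handler(fd, idx, data, poll_in, poll_out, poll_err);
    }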

2017 Dec 06 (0): [Gluster-devel] Crash in glusterd!!!
...mydata=<optimized out>, event=<optimized out>, data=<optimized out>) at rpcsvc.c:799
#12 0x00003fff885514fc in rpc_transport_notify (this=<optimized out>, event=<optimized out>, data=<optimized out>) at rpc-transport.c:546
#13 0x00003fff847fcd44 in socket_event_poll_in (this=this@entry=0x3fff74002210) at socket.c:2236
#14 0x00003fff847ff89c in socket_event_handler (fd=<optimized out>, idx=<optimized out>, data=0x3fff74002210, poll_in=<optimized out>, poll_out=<optimized out>, poll_err=<optimized out>) at socket.c:2349
#15 0x0000...

2017 Dec 06 (1): [Gluster-devel] Crash in glusterd!!!
..., event=<optimized out>, data=<optimized out>) at rpcsvc.c:799
#12 0x00003fff885514fc in rpc_transport_notify (this=<optimized out>, event=<optimized out>, data=<optimized out>) at rpc-transport.c:546
#13 0x00003fff847fcd44 in socket_event_poll_in (this=this@entry=0x3fff74002210) at socket.c:2236
#14 0x00003fff847ff89c in socket_event_handler (fd=<optimized out>, idx=<optimized out>, data=0x3fff74002210, poll_in=<optimized out>, poll_out=<optimized out>, poll_err=<optimized out>) at s...

2017 Dec 06 (2): [Gluster-devel] Crash in glusterd!!!
Without the glusterd log file and the core file or the backtrace I can't comment on anything. On Wed, Dec 6, 2017 at 3:09 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com> wrote:
> Any suggestion....
> On Dec 6, 2017 11:51, "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com> wrote:
>> Hi Team,
>> We are getting the crash in glusterd after

2010 Aug 26 (5): [PATCH 0/4] virtio: console: fixes, SIGIO
Hi Rusty, The main thing in these patches is the introduction of injecting SIGIO on host-side connect/disconnect events and when new data is available for ports. The first two patches fix bugs that I haven't seen, but look like the right thing to do. These have been tested extensively using the test-virtserial test suite. Please apply, Amit. Amit Shah (4): virtio: console: Un-block
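On the receiving side of such SIGIO notifications, a process registers itself with F_SETOWN and enables O_ASYNC on the port's file descriptor. A minimal userspace sketch (the /dev/vport0p1 path is illustrative):

    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigio;

    static void on_sigio(int sig) { (void)sig; got_sigio = 1; }

    int main(void)
    {
        /* /dev/vport0p1 is an illustrative virtio-serial port node. */
        int fd = open("/dev/vport0p1", O_RDWR | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        signal(SIGIO, on_sigio);
        fcntl(fd, F_SETOWN, getpid());                    /* route SIGIO to us */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC); /* enable SIGIO      */

        pause();                                          /* wait for a signal */
        if (got_sigio)
            puts("SIGIO: new data or a connection state change");
        return 0;
    }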

2009 Nov 04 (0): [PATCHv8 1/3] tun: export underlying socket
...ocket.file = NULL; netif_tx_unlock_bh(tun->dev); /* Drop read queue */
@@ -387,7 +389,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* Notify and wake up reader process */
 	if (tun->flags & TUN_FASYNC)
 		kill_fasync(&tun->fasync, SIGIO, POLL_IN);
-	wake_up_interruptible(&tun->socket.wait);
+	wake_up_interruptible_poll(&tun->socket.wait, POLLIN |
+				   POLLRDNORM | POLLRDBAND);
 	return NETDEV_TX_OK;
 drop:
@@ -743,7 +746,7 @@ static __inline__ ssize_t tun_put_user(struct tun_struct *tun,
 	len = min_t(int, skb->len, l...
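The hunk swaps a bare wake_up_interruptible() for wake_up_interruptible_poll(), so only waiters interested in readable events are woken, alongside the existing kill_fasync() SIGIO notification. The general driver-side pattern looks roughly like this (a sketch with invented demo_* names, not the tun driver itself):

    #include <linux/fs.h>
    #include <linux/poll.h>
    #include <linux/wait.h>

    struct demo_dev {
        wait_queue_head_t     read_wait;  /* readers blocked in poll/read */
        struct fasync_struct *fasync;     /* SIGIO subscribers            */
    };

    /* Called when new data has been queued for readers. */
    static void demo_notify_readers(struct demo_dev *d)
    {
        /* Wake only tasks waiting for readable events... */
        wake_up_interruptible_poll(&d->read_wait,
                                   POLLIN | POLLRDNORM | POLLRDBAND);
        /* ...and post SIGIO with band POLL_IN to O_ASYNC subscribers. */
        kill_fasync(&d->fasync, SIGIO, POLL_IN);
    }

    /* Wired to file_operations.fasync to maintain the subscriber list. */
    static int demo_fasync(int fd, struct file *filp, int on)
    {
        struct demo_dev *d = filp->private_data;
        return fasync_helper(fd, filp, on, &d->fasync);
    }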

2009 Nov 03 (1): [PATCHv7 1/3] tun: export underlying socket
...ocket.file = NULL; netif_tx_unlock_bh(tun->dev); /* Drop read queue */
@@ -387,7 +389,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* Notify and wake up reader process */
 	if (tun->flags & TUN_FASYNC)
 		kill_fasync(&tun->fasync, SIGIO, POLL_IN);
-	wake_up_interruptible(&tun->socket.wait);
+	wake_up_interruptible_poll(&tun->socket.wait, POLLIN |
+				   POLLRDNORM | POLLRDBAND);
 	return NETDEV_TX_OK;
 drop:
@@ -743,7 +746,7 @@ static __inline__ ssize_t tun_put_user(struct tun_struct *tun,
 	len = min_t(int, skb->len, l...

2009 Nov 02 (1): [PATCHv6 1/3] tun: export underlying socket
...ocket.file = NULL; netif_tx_unlock_bh(tun->dev); /* Drop read queue */
@@ -387,7 +389,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* Notify and wake up reader process */
 	if (tun->flags & TUN_FASYNC)
 		kill_fasync(&tun->fasync, SIGIO, POLL_IN);
-	wake_up_interruptible(&tun->socket.wait);
+	wake_up_interruptible_poll(&tun->socket.wait, POLLIN |
+				   POLLRDNORM | POLLRDBAND);
 	return NETDEV_TX_OK;
 drop:
@@ -743,7 +746,7 @@ static __inline__ ssize_t tun_put_user(struct tun_struct *tun,
 	len = min_t(int, skb->len, l...

2009 Apr 02 (7): [Lguest] [PATCH 4/5] lguest: use KVM hypercalls
Fri, 27 Mar 2009 at 10:22 +1030, Rusty Russell wrote:
> From: Matias Zabaljauregui <zabaljauregui at gmail.com>
> Impact: cleanup
> This patch allows us to use KVM hypercalls
Something has broken in relation to this change. I'm not sure whether it is this change itself or one that followed, but I get the following error when using lguest: lguest: unhandled trap 6 at 0x418726
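For context, trap 6 on x86 is the invalid-opcode fault (#UD), which is what a guest takes when it executes a vmcall/vmmcall that nothing handles. A KVM-style hypercall reduces to inline assembly along these lines (an illustrative sketch; demo_hypercall1 is an invented name, not the lguest patch):

    /* Minimal KVM-style hypercall: number in EAX, first argument in EBX.
     * "vmcall" is the Intel VMX opcode; AMD uses "vmmcall", and the kernel
     * patches between them at runtime. Executing it under a hypervisor
     * that doesn't handle it raises #UD -- reported as "trap 6". */
    static inline long demo_hypercall1(unsigned int nr, unsigned long p1)
    {
        long ret;
        asm volatile("vmcall"
                     : "=a"(ret)
                     : "a"(nr), "b"(p1)
                     : "memory");
        return ret;
    }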

2013 Mar 20 (2): Geo-replication broken in 3.4 alpha2?
Dear all, I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system, and it doesn't have much data or anything important in it. Currently it has only 2 VMs running, and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following: All machines are running CentOS 6.4 and using

2011 Aug 12 (11): [net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the current single-queue tap cannot satisfy the requirement of scaling guest network performance as the number of vCPUs increases. So the following series implements multiple queue support in tun/tap. To take advantage of this, a multi-queue-capable driver and qemu are also needed. I just rebased the latest version of
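Multiqueue tap support later landed in mainline as the IFF_MULTI_QUEUE flag: each open of /dev/net/tun attached to the same interface name becomes one queue. A userspace sketch of creating two queues (the tap0 name is illustrative):

    #include <fcntl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Open one queue of a multiqueue tap device with the given name. */
    static int open_tap_queue(const char *name)
    {
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);
        if (fd < 0) { perror("open"); return -1; }

        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
        strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
            perror("TUNSETIFF");
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        int q0 = open_tap_queue("tap0");  /* first open creates the device  */
        int q1 = open_tap_queue("tap0");  /* same name attaches a 2nd queue */
        return (q0 < 0 || q1 < 0) ? 1 : 0;
    }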

2019 Oct 31 (37): [Bug 3085] New: seccomp issue after upgrading openssl
https://bugzilla.mindrot.org/show_bug.cgi?id=3085
Bug ID: 3085
Summary: seccomp issue after upgrading openssl
Product: Portable OpenSSH
Version: 8.1p1
Hardware: Other
OS: Linux
Status: NEW
Severity: critical
Priority: P5
Component: sshd
Assignee: unassigned-bugs at mindrot.org
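The usual failure mode behind reports like this: sshd's seccomp-bpf sandbox allows a fixed list of syscalls, and a newer OpenSSL starts issuing one that isn't on the list, so the filter kills the process. A minimal filter of that shape (an illustrative allowlist, not sshd's actual one; the architecture check is omitted for brevity):

    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <stddef.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>

    /* Allow a tiny syscall set; anything else kills the process --
     * exactly how an out-of-date allowlist turns a library upgrade
     * into a crash. */
    static int install_filter(void)
    {
        struct sock_filter filter[] = {
            /* Load the syscall number. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* Allow read/write/exit_group; kill everything else. */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read, 2, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 1, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
            return -1;
        return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
    }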

2010 Apr 22 (1): Transport endpoint not connected
Hey guys, I've recently implemented gluster to share web content read-write between two servers.
Version: glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse: 2.7.2-1ubuntu2.1
Platform: Ubuntu 8.04 LTS
I used the following command to generate my configs: /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1 10.10.130.11:/data/export

2013 Jan 25 (4): [PATCH 0/1] VM Sockets for Linux upstreaming
From: Andy King <acking at vmware.com> *** Introduce VM Sockets *** In an effort to improve the out-of-the-box experience with Linux kernels for VMware users, VMware is working on readying the VM Sockets (VSOCK, formerly VMCI Sockets) (vmw_vsock) kernel module for inclusion in the Linux kernel. The purpose of this post is to acquire feedback on the vmw_vsock kernel module. Unlike previous
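VM Sockets ultimately became the AF_VSOCK address family, addressed by a (CID, port) pair instead of an IP address. A guest-side connect sketch (the port number is illustrative):

    #include <sys/socket.h>
    #include <linux/vm_sockets.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Stream socket over VM Sockets: (CID, port) instead of (IP, port). */
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_vm addr;
        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid    = VMADDR_CID_HOST; /* talk to the hypervisor/host */
        addr.svm_port   = 1234;            /* illustrative service port   */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        write(fd, "ping\n", 5);
        close(fd);
        return 0;
    }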