Displaying 20 results from an estimated 5006 matches for "polls".
2013 Jan 06
2
[PATCH V3 0/2] handle polling errors
This is an updated version of the previous series, fixing the handling of polling errors
in vhost/vhost_net.
Currently, vhost and vhost_net ignore polling errors, which can crash the kernel
when it tries to remove itself from the waitqueue after a polling failure.
Fix this by checking poll->wqh before the removal and reporting an error
when a polling error occurs.
Changes from v2:
- check poll->wqh
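
A minimal userspace model of the idea in this cover letter (the names demo_poll, demo_poll_start and demo_poll_stop are illustrative, not from the series): the start path records the waitqueue head only when polling actually succeeded and returns an error otherwise, and the stop path checks that pointer before detaching, so a failed start can no longer crash a later removal.

#include <stdio.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel's waitqueue objects. */
struct wait_queue_head { int queued; };
struct demo_poll {
    struct wait_queue_head *wqh;    /* non-NULL only while we sit on a waitqueue */
};

/* Pretend backend: returns NULL to simulate a polling error. */
static struct wait_queue_head *backend_attach(int fail)
{
    static struct wait_queue_head head;

    if (fail)
        return NULL;
    head.queued = 1;
    return &head;
}

static int demo_poll_start(struct demo_poll *p, int fail)
{
    p->wqh = backend_attach(fail);
    if (!p->wqh)
        return -1;                  /* report the error instead of silently ignoring it */
    return 0;
}

static void demo_poll_stop(struct demo_poll *p)
{
    if (p->wqh) {                   /* the check the series adds: only detach if queued */
        p->wqh->queued = 0;
        p->wqh = NULL;
    }
}

int main(void)
{
    struct demo_poll p = { NULL };

    if (demo_poll_start(&p, 1))     /* simulated polling failure */
        fprintf(stderr, "poll start failed, not on any waitqueue\n");
    demo_poll_stop(&p);             /* safe: nothing to remove, so no crash */
    return 0;
}

Compiled with a plain C compiler, the simulated failure prints the error and the subsequent stop is a harmless no-op.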
2013 Mar 07
3
[PATCH] vhost_net: remove tx polling state
After commit 2b8b328b61c799957a456a5a8dab8cc7dea68575 (vhost_net: handle polling
errors when setting backend), we in fact track the polling state through
poll->wqh, so there's no need to duplicate that work with an extra
vhost_net_polling_state. This patch removes it and makes the code simpler.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/net.c | 60
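
A tiny sketch of the state-tracking argument above, under the assumption that poll->wqh is non-NULL exactly while the poll sits on a waitqueue (demo_poll_active is a made-up helper, not a vhost symbol):

#include <stdbool.h>
#include <stddef.h>

struct wait_queue_head;             /* opaque here */

struct demo_poll {
    struct wait_queue_head *wqh;    /* NULL <=> not currently polling */
};

/*
 * Instead of keeping a separate tx polling state variable in sync with
 * the poll (the duplication the patch removes), derive the answer from
 * the one field that already encodes it.
 */
static bool demo_poll_active(const struct demo_poll *p)
{
    return p->wqh != NULL;
}

int main(void)
{
    struct demo_poll p = { NULL };

    return demo_poll_active(&p) ? 1 : 0;    /* exits 0: not polling */
}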
2012 Dec 27
3
[PATCH 1/2] vhost_net: correct error handling in vhost_net_set_backend()
Fix the leak of oldubufs and the fd refcount when initialization of the used ring fails.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/net.c | 14 +++++++++++---
1 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index ebd08b2..629d6b5 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -834,8 +834,10 @@ static
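
A hedged sketch of the general error-unwind pattern this fix restores; the resources here are plain allocations standing in for the fd reference and oldubufs mentioned above, not the actual vhost_net objects:

#include <stdio.h>
#include <stdlib.h>

/*
 * Unwind pattern for the bug class this patch addresses: once a later
 * setup step fails, every resource taken earlier on that path has to be
 * released again, in reverse order.
 */
static int demo_set_backend(int fail_late)
{
    char *file_ref = malloc(16);    /* stand-in for the fd reference */
    char *oldubufs;
    int err = 0;

    if (!file_ref)
        return -1;

    oldubufs = malloc(16);          /* stand-in for the old ubufs */
    if (!oldubufs) {
        err = -1;
        goto err_file;
    }

    if (fail_late) {                /* "initialize used ring" failing */
        err = -1;
        goto err_ubufs;             /* without these gotos, both stand-ins would leak */
    }

    /* success path: a real backend keeps the references; the demo frees them */
    free(oldubufs);
    free(file_ref);
    return 0;

err_ubufs:
    free(oldubufs);
err_file:
    free(file_ref);
    return err;
}

int main(void)
{
    printf("late failure handled, err=%d\n", demo_set_backend(1));
    return 0;
}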
2018 Mar 27
4
[PATCH net V2] vhost: correctly remove wait queue during poll failure
We tried to remove the vq poll from the wait queue, but did not check
whether it was actually in a list first. This can lead to a double free.
Fix this by switching to vhost_poll_stop(), which zeros poll->wqh after
removing the poll from the waitqueue, making sure it won't be freed twice.
Cc: Darren Kenny <darren.kenny at oracle.com>
Reported-by: syzbot+c0272972b01b872e604a at
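
A compact model of why zeroing poll->wqh inside the stop helper prevents the double free described above (demo_poll_stop is an illustration, not the kernel's vhost_poll_stop()):

#include <stdio.h>
#include <stddef.h>

struct wait_queue_head { int queued; };
struct demo_poll { struct wait_queue_head *wqh; };

/*
 * Zeroing wqh inside the stop helper makes it safe to call on every
 * teardown path: a second call sees NULL and does nothing, instead of
 * detaching (and effectively freeing) the same entry twice.
 */
static void demo_poll_stop(struct demo_poll *p)
{
    if (p->wqh) {
        p->wqh->queued = 0;         /* stand-in for remove_wait_queue() */
        p->wqh = NULL;              /* record that we are already off the queue */
    }
}

int main(void)
{
    struct wait_queue_head head = { 1 };
    struct demo_poll p = { &head };

    demo_poll_stop(&p);             /* normal teardown */
    demo_poll_stop(&p);             /* duplicate call on an error path: a no-op */
    printf("queued=%d\n", head.queued);
    return 0;
}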
2017 Nov 14
4
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
On 2017/11/13 18:53, Juergen Gross wrote:
> On 13/11/17 11:06, Quan Xu wrote:
>> From: Quan Xu <quan.xu0 at gmail.com>
>>
>> So far, pv_idle_ops.poll is the only op for pv_idle. .poll is called
>> in the idle path and polls for a while before we enter the real idle
>> state.
>>
>> In virtualization, the idle path includes several heavy operations
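
A rough userspace model of the mechanism under discussion, assuming only what the excerpt states: an optional .poll hook runs a bounded busy-wait in the idle path before the expensive real idle is entered (all identifiers below are invented for the sketch):

#include <stdio.h>

struct demo_idle_ops {
    void (*poll)(void);             /* optional polling hook, like pv_idle_ops.poll */
};

static void demo_poll(void)
{
    for (volatile int i = 0; i < 100000; i++)
        ;                           /* bounded busy-wait standing in for the poll window */
}

static struct demo_idle_ops idle_ops = { .poll = demo_poll };

static void demo_idle(void)
{
    if (idle_ops.poll)
        idle_ops.poll();            /* cheap polling first ... */
    puts("entering real idle");     /* ... then the costly halt / VM exit would follow */
}

int main(void)
{
    demo_idle();
    return 0;
}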
2014 Aug 10
7
[PATCH] vhost: Add polling mode
...nabled) for a virtqueue.
+ *
+ * Enabling this mode tells the guest not to notify ("kick") us when it
+ * has made more work available on this virtqueue; rather, we will continuously
+ * poll this virtqueue in the worker thread. If multiple virtqueues are polled,
+ * the worker thread polls them all, e.g., in a round-robin fashion.
+ * Note that vqpoll.enabled doesn't always mean that this virtqueue is
+ * actually being polled: The backend (e.g., net.c) may temporarily disable it
+ * using vhost_disable/enable_notify(), while vqpoll.enabled is unchanged.
+ *
+ * It is assumed tha...
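
An illustrative round-robin sweep matching the comment quoted above: the worker services every virtqueue whose polling flag is set, without waiting for guest kicks (the demo_vq structure and field names are made up, not the patch's vqpoll state):

#include <stdbool.h>
#include <stdio.h>

#define NVQS 4

struct demo_vq {
    bool poll_enabled;              /* stands in for the patch's vqpoll.enabled flag */
    int pending;                    /* stand-in for "the guest queued new work" */
};

/* One round-robin pass over all virtqueues that currently have polling on. */
static void demo_poll_round(struct demo_vq *vqs, int n)
{
    for (int i = 0; i < n; i++) {
        if (!vqs[i].poll_enabled)
            continue;               /* the backend may have disabled polling for this vq */
        if (vqs[i].pending) {
            vqs[i].pending = 0;     /* "process the available work" */
            printf("vq %d serviced without a kick\n", i);
        }
    }
}

int main(void)
{
    struct demo_vq vqs[NVQS] = {
        { true, 1 }, { false, 1 }, { true, 0 }, { true, 2 },
    };

    demo_poll_round(vqs, NVQS);     /* the worker thread would loop over such passes */
    return 0;
}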
2018 Mar 27
1
[PATCH net] vhost: correctly remove wait queue during poll failure
On 2018/03/27 17:28, Darren Kenny wrote:
> Hi Jason,
>
> On Tue, Mar 27, 2018 at 11:47:22AM +0800, Jason Wang wrote:
>> We tried to remove the vq poll from the wait queue, but did not check
>> whether it was in a list first. This can lead to a double free. Fix
>> this by checking poll->wqh to make sure it was in a list.
>
> This text seems at odds with the code
2016 May 30
4
[PATCH V2 0/2] vhost_net polling optimization
Hi:
This series tries to optimize vhost_net polling at two points:
- Stop rx polling to reduce unnecessary wakeups during
handle_rx().
- Conditionally enable tx polling to reduce unnecessary list
traversal and spinlock contention.
Tests show about a 17% improvement in rx pps.
Please review.
Changes from V1:
- use vhost_net_disable_vq()/vhost_net_enable_vq() instead of open
coding.
-
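
A minimal sketch of the first optimization in this series, assuming only what the cover letter says: polling on the rx side is switched off while handle_rx() is draining and switched back on afterwards (the demo_* helpers stand in for vhost_net_disable_vq()/vhost_net_enable_vq()):

#include <stdio.h>

/* Global flag standing in for whether the rx socket poll is armed. */
static int rx_polling_enabled = 1;

static void demo_disable_rx_poll(void) { rx_polling_enabled = 0; }
static void demo_enable_rx_poll(void)  { rx_polling_enabled = 1; }

static void demo_handle_rx(void)
{
    demo_disable_rx_poll();         /* no redundant wakeups while we are already draining */
    /* ... the receive loop would run here ... */
    demo_enable_rx_poll();          /* resume waiting for the next event */
}

int main(void)
{
    demo_handle_rx();
    printf("rx_polling_enabled=%d\n", rx_polling_enabled);
    return 0;
}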
2016 Jun 01
7
[PATCH V3 0/2] vhost_net polling optimization
Hi:
This series tries to optimize vhost_net polling at two points:
- Stop rx polling to reduce unnecessary wakeups during
handle_rx().
- Conditionally enable tx polling to reduce unnecessary list
traversal and spinlock contention.
Tests show about a 17% improvement in rx pps.
Please review.
Changes from V2:
- Don't enable the rx vq if we hit an error or the rx vq is empty
Changes from V1:
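
A small sketch of the V3 changelog line above, taken literally: rx polling is only re-enabled when no error was met and the rx vq is not empty (demo_should_reenable is a made-up helper, not from the series):

#include <stdbool.h>
#include <stdio.h>

/* Re-enable rx polling only when it is useful: not after an error and
 * not when the rx vq has nothing to offer. */
static bool demo_should_reenable(int err, bool rx_vq_empty)
{
    return err == 0 && !rx_vq_empty;
}

int main(void)
{
    printf("%d %d %d\n",
           demo_should_reenable(0, false),    /* normal case: re-enable */
           demo_should_reenable(-1, false),   /* error: leave it off */
           demo_should_reenable(0, true));    /* empty vq: leave it off */
    return 0;
}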
2003 Oct 27
1
Fwd: Re: Asterisk on FreeBSD
Your log file almost looks like a bug in Asterisk, doesn't it?
Why call poll() with a zero timeout while passing only one FD?
And then why do the read when there is no data?
Read the man pages for all the system calls
Take a look at the source chan_sip.c
/* Wait for sched or io */
res = ast_sched_wait(sched);
if ((res < 0) || (res > 1000))
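
For reference, plain POSIX poll(2) usage of the kind the post argues for: one descriptor, a bounded timeout (such as a clamped scheduler wait), and a read only when the descriptor is actually readable. This is generic example code, not the chan_sip.c loop:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
    int timeout_ms = 1000;          /* e.g. a clamped "time until next scheduled job" */
    int res = poll(&pfd, 1, timeout_ms);

    if (res < 0)
        perror("poll");
    else if (res == 0)
        puts("timeout: run the scheduled work");
    else if (pfd.revents & POLLIN)
        puts("fd is readable: now the read makes sense");
    return 0;
}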
2014 Aug 10
0
[PATCH] vhost: Add polling mode
...> + *
> + * Enabling this mode tells the guest not to notify ("kick") us when it
> + * has made more work available on this virtqueue; rather, we will continuously
> + * poll this virtqueue in the worker thread. If multiple virtqueues are polled,
> + * the worker thread polls them all, e.g., in a round-robin fashion.
> + * Note that vqpoll.enabled doesn't always mean that this virtqueue is
> + * actually being polled: The backend (e.g., net.c) may temporarily disable it
> + * using vhost_disable/enable_notify(), while vqpoll.enabled is unchanged.
> + *
>...
2013 Feb 19
13
[PATCH] mini-os: implement poll(2)
It is just a wrapper around select(2).
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
extras/mini-os/include/posix/poll.h | 1 +
extras/mini-os/lib/sys.c | 90 ++++++++++++++++++++++++++++++++++-
2 files changed, 90 insertions(+), 1 deletion(-)
create mode 100644 extras/mini-os/include/posix/poll.h
diff --git a/extras/mini-os/include/posix/poll.h
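
A hedged sketch of the approach the patch subject describes, poll(2) emulated on top of select(2); it is a free-standing userspace function, and the real implementation in extras/mini-os/lib/sys.c differs in detail:

#include <poll.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

static int poll_via_select(struct pollfd *fds, nfds_t nfds, int timeout_ms)
{
    fd_set rd, wr, ex;
    struct timeval tv, *tvp = NULL;
    int maxfd = -1, ret;

    FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
    for (nfds_t i = 0; i < nfds; i++) {
        fds[i].revents = 0;
        if (fds[i].fd < 0)
            continue;
        if (fds[i].events & POLLIN)  FD_SET(fds[i].fd, &rd);
        if (fds[i].events & POLLOUT) FD_SET(fds[i].fd, &wr);
        if (fds[i].events & POLLPRI) FD_SET(fds[i].fd, &ex);
        if (fds[i].fd > maxfd)
            maxfd = fds[i].fd;
    }
    if (timeout_ms >= 0) {                     /* negative timeout means block forever */
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;
        tvp = &tv;
    }
    ret = select(maxfd + 1, &rd, &wr, &ex, tvp);
    if (ret <= 0)
        return ret;                            /* error or timeout maps straight through */

    ret = 0;
    for (nfds_t i = 0; i < nfds; i++) {        /* translate the result sets back */
        if (fds[i].fd < 0)
            continue;
        if (FD_ISSET(fds[i].fd, &rd)) fds[i].revents |= POLLIN;
        if (FD_ISSET(fds[i].fd, &wr)) fds[i].revents |= POLLOUT;
        if (FD_ISSET(fds[i].fd, &ex)) fds[i].revents |= POLLPRI;
        if (fds[i].revents)
            ret++;                             /* poll() counts fds with events */
    }
    return ret;
}

int main(void)
{
    struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };

    printf("poll_via_select -> %d\n", poll_via_select(&pfd, 1, 0));
    return 0;
}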