search for: single_task_running

Displaying 20 results from an estimated 32 matches for "single_task_running".

2016 Feb 28
2
[PATCH V3 3/3] vhost_net: basic polling support
...(struct vhost_dev *dev, > + unsigned long endtime) > +{ > + return likely(!need_resched()) && > + likely(!time_after(busy_clock(), endtime)) && > + likely(!signal_pending(current)) && > + !vhost_has_work(dev) && > + single_task_running(); So I find it quite unfortunate that this still uses single_task_running. This means that for example a SCHED_IDLE task will prevent polling from becoming active, and that seems like a bug, or at least an undocumented feature :). Unfortunately this logic affects the behaviour as observed by use...
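For readability, here is the polling gate quoted in that excerpt reassembled into plain C (a sketch reconstructed from the flattened diff above, not necessarily the final upstream code; busy_clock() and vhost_has_work() are vhost-internal helpers introduced by the same series):

#include <linux/jiffies.h>	/* time_after() */
#include <linux/sched.h>	/* current, need_resched(), signal_pending(), single_task_running() */
#include "vhost.h"		/* struct vhost_dev, vhost_has_work() */

/* The RFC excerpt further down shows the clock source: the scheduler
 * clock shifted down to roughly microseconds. */
static unsigned long busy_clock(void)
{
	return local_clock() >> 10;
}

/* Keep polling only while: nobody wants this CPU back, the timeout has
 * not expired, no signal is pending, no vhost work is queued and we
 * are the only runnable task on this CPU. */
static bool vhost_can_busy_poll(struct vhost_dev *dev,
				unsigned long endtime)
{
	return likely(!need_resched()) &&
	       likely(!time_after(busy_clock(), endtime)) &&
	       likely(!signal_pending(current)) &&
	       !vhost_has_work(dev) &&
	       single_task_running();
}

single_task_running() is the check the reviewer objects to: any other runnable task on the CPU, even a SCHED_IDLE one, makes it return false and therefore disables polling.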
2016 Feb 29
0
[PATCH V3 3/3] vhost_net: basic polling support
...e) >> > +{ >> > + return likely(!need_resched()) && >> > + likely(!time_after(busy_clock(), endtime)) && >> > + likely(!signal_pending(current)) && >> > + !vhost_has_work(dev) && >> > + single_task_running(); > So I find it quite unfortunate that this still uses single_task_running. > This means that for example a SCHED_IDLE task will prevent polling from > becoming active, and that seems like a bug, or at least > an undocumented feature :). Yes, it may need more thoughts. > > Unf...
2016 Feb 28
1
[PATCH V3 3/3] vhost_net: basic polling support
...(struct vhost_dev *dev, > + unsigned long endtime) > +{ > + return likely(!need_resched()) && > + likely(!time_after(busy_clock(), endtime)) && > + likely(!signal_pending(current)) && > + !vhost_has_work(dev) && > + single_task_running(); > +} > + > +static int vhost_net_tx_get_vq_desc(struct vhost_net *net, > + struct vhost_virtqueue *vq, > + struct iovec iov[], unsigned int iov_size, > + unsigned int *out_num, unsigned int *in_num) > +{ > + unsigned long uninitialized_var(endtime); &...
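The snippet above only shows the signature of vhost_net_tx_get_vq_desc() before truncation. As described in this thread, its body spins on the gate until a descriptor shows up or polling has to stop, then falls back to the normal path. A rough sketch under that description (vhost_vq_avail_empty() and the per-vq busyloop_timeout field follow the naming of the later revisions; treat this as illustration, not the exact patch):

static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
				    struct vhost_virtqueue *vq,
				    struct iovec iov[], unsigned int iov_size,
				    unsigned int *out_num, unsigned int *in_num)
{
	unsigned long uninitialized_var(endtime);
	int r = vhost_get_vq_desc(vq, iov, iov_size, out_num, in_num,
				  NULL, NULL);

	if (r == vq->num && vq->busyloop_timeout) {
		/* Ring looked empty: spin for at most busyloop_timeout us,
		 * re-checking the gate on every iteration. */
		preempt_disable();
		endtime = busy_clock() + vq->busyloop_timeout;
		while (vhost_can_busy_poll(&net->dev, endtime) &&
		       vhost_vq_avail_empty(&net->dev, vq))
			cpu_relax();
		preempt_enable();
		r = vhost_get_vq_desc(vq, iov, iov_size, out_num, in_num,
				      NULL, NULL);
	}

	return r;
}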
2016 Jan 25
1
[PATCH V2 0/3] basic busy polling support for vhost_net
...o vhost-net ([1], [2]). Some of them seem relevant for these > > patches as well: > > > > - What happens in overcommit scenarios? > > We have an optimization here: busy polling will end if more than one > process is runnable on the local cpu. This is done by checking > single_task_running() in each iteration. So in the worst case, busy > polling should be as fast as, or only a minor regression compared to, > the normal case. You can see this from the last test result. > > > - Have you checked the effect of polling on some macro benchmarks? > > I'm not sure I get...
2015 Oct 22
4
[PATCH net-next RFC 2/2] vhost_net: basic polling support
...ix would be to record the CPU id and break out of loop if that changes. Also - defer this until we actually know we need it? > + > + return busyloop_timeout && !need_resched() && > + !time_after(now, endtime) && !vhost_has_work(dev) && > + single_task_running(); signal pending as well? > +} > + > /* Expects to be always run from workqueue - which acts as > * read-size critical section for our kind of RCU. */ > static void handle_tx(struct vhost_net *net) > { > struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_...
2016 Jan 20
3
[PATCH V2 3/3] vhost_net: basic polling support
...(struct vhost_dev *dev, > + unsigned long endtime) > +{ > + return likely(!need_resched()) && > + likely(!time_after(busy_clock(), endtime)) && > + likely(!signal_pending(current)) && > + !vhost_has_work(dev) && > + single_task_running(); > +} > + > +static int vhost_net_tx_get_vq_desc(struct vhost_net *net, > + struct vhost_virtqueue *vq, > + struct iovec iov[], unsigned int iov_size, > + unsigned int *out_num, unsigned int *in_num) > +{ > + unsigned long uninitialized_var(endtime); &...
2016 Jan 21
1
[PATCH V2 3/3] vhost_net: basic polling support
...endtime) > >>+{ > >>+ return likely(!need_resched()) && > >>+ likely(!time_after(busy_clock(), endtime)) && > >>+ likely(!signal_pending(current)) && > >>+ !vhost_has_work(dev) && > >>+ single_task_running(); > >>+} > >>+ > >>+static int vhost_net_tx_get_vq_desc(struct vhost_net *net, > >>+ struct vhost_virtqueue *vq, > >>+ struct iovec iov[], unsigned int iov_size, > >>+ unsigned int *out_num, unsigned int *in_num) > >&...
2016 Feb 26
7
[PATCH V3 0/3] basic busy polling support for vhost_net
This series tries to add basic busy polling for vhost net. The idea is simple: at the end of tx/rx processing, busy poll for newly added tx descriptors and on the rx receive socket for a while. The maximum amount of time (in us) that may be spent on busy polling is specified through an ioctl. Tests were done with: - 50 us as busy loop timeout - Netperf 2.6 - Two machines connected back to back with mlx4 - Guest
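The rx side mentioned here works the same way: before falling back to notification, poll the receive socket (and the tx ring) until the gate fails. A hedged sketch of that idea; the helper name below is made up for illustration, only the shape follows the cover letter:

/* Spin on an empty receive queue for at most busyloop_timeout us,
 * using the same vhost_can_busy_poll() gate as the tx path. */
static bool vhost_net_busy_poll_rx(struct vhost_net *net,
				   struct vhost_virtqueue *vq,
				   struct sock *sk)
{
	unsigned long endtime;

	if (!vq->busyloop_timeout)
		return false;

	preempt_disable();
	endtime = busy_clock() + vq->busyloop_timeout;
	while (vhost_can_busy_poll(&net->dev, endtime) &&
	       skb_queue_empty(&sk->sk_receive_queue) &&
	       vhost_vq_avail_empty(&net->dev, vq))
		cpu_relax();
	preempt_enable();

	return !skb_queue_empty(&sk->sk_receive_queue);
}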
2015 Oct 22
4
[PATCH net-next RFC 1/2] vhost: introduce vhost_has_work()
This patch introduces a helper which can give a hint about whether or not there is work queued in the work list. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/vhost.c | 6 ++++++ drivers/vhost/vhost.h | 1 + 2 files changed, 7 insertions(+) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index eec2f11..d42d11e 100644 --- a/drivers/vhost/vhost.c +++
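The helper itself is small. A sketch matching the description and the diffstat (assuming, as the upstream code does, that pending work sits on a work_list inside struct vhost_dev):

/* Hint whether any work is queued on this device's work list.
 * Callers such as the busy-poll gate only need a hint, not a locked,
 * stable answer. */
bool vhost_has_work(struct vhost_dev *dev)
{
	return !list_empty(&dev->work_list);
}
EXPORT_SYMBOL_GPL(vhost_has_work);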
2015 Oct 22
0
[PATCH net-next RFC 2/2] vhost_net: basic polling support
...h(); } +static bool tx_can_busy_poll(struct vhost_dev *dev, + unsigned long endtime) +{ + unsigned long now = local_clock() >> 10; + + return busyloop_timeout && !need_resched() && + !time_after(now, endtime) && !vhost_has_work(dev) && + single_task_running(); +} + /* Expects to be always run from workqueue - which acts as * read-size critical section for our kind of RCU. */ static void handle_tx(struct vhost_net *net) { struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX]; struct vhost_virtqueue *vq = &nvq->vq; + unsign...
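Reassembled from that flattened excerpt, the RFC version of the gate looked like this (busyloop_timeout is a knob defined elsewhere in the RFC patch). Note that it has no signal_pending() check yet, which is exactly what the "signal pending as well?" review comment above asks for, and that it scales local_clock() to roughly microseconds inline:

static bool tx_can_busy_poll(struct vhost_dev *dev,
			     unsigned long endtime)
{
	unsigned long now = local_clock() >> 10;	/* ns -> ~us */

	return busyloop_timeout && !need_resched() &&
	       !time_after(now, endtime) && !vhost_has_work(dev) &&
	       single_task_running();
}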
2016 Mar 04
6
[PATCH V4 0/3] basic busy polling support for vhost_net
...-3%/ -11%/ +23%/ +7%/ +42% 16384/ 4/ -3%/ -3%/ -4%/ +5%/ +115% 16384/ 8/ -1%/ 0%/ -1%/ -3%/ +32% 65535/ 1/ +1%/ 0%/ +2%/ 0%/ +66% 65535/ 4/ -1%/ -1%/ 0%/ +4%/ +492% 65535/ 8/ 0%/ -1%/ -1%/ +4%/ +38% Changes from V3: - drop single_task_running() - use cpu_relax_lowlatency() instead of cpu_relax() Changes from V2: - rename vhost_vq_more_avail() to vhost_vq_avail_empty(), and return false when __get_user() fails. - do not bother with preemption/timers for the good path. - use vhost_vring_state as the ioctl parameter instead of reinventing a new one. - a...
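The note about reusing vhost_vring_state implies userspace sets the per-virtqueue timeout with an ordinary vhost ioctl. A hedged userspace sketch, assuming the VHOST_SET_VRING_BUSYLOOP_TIMEOUT name the interface eventually settled on (the exact ioctl name does not appear in these excerpts):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Set the busy-poll timeout (in microseconds) for one virtqueue of an
 * already opened vhost-net device; 0 disables polling. */
int set_busyloop_timeout(int vhost_fd, unsigned int vq_index,
			 unsigned int timeout_us)
{
	struct vhost_vring_state s = {
		.index = vq_index,	/* which virtqueue */
		.num = timeout_us,	/* timeout in us */
	};

	if (ioctl(vhost_fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &s) < 0) {
		perror("VHOST_SET_VRING_BUSYLOOP_TIMEOUT");
		return -1;
	}
	return 0;
}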