Displaying 20 results from an estimated 301 matches for "preempt_en".
2012 Jun 01
1
[PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()
...weird, because xen_play_dead()
is invoked in the cpu down path, whereas cpu_bringup() (as the name suggests)
is useful in the cpu bringup path.
Getting rid of xen_play_dead()'s dependency on cpu_bringup() helps in hooking
on to the generic SMP booting framework.
Also remove the extra call to preempt_enable() added by commit 41bd956
(xen/smp: Fix CPU online/offline bug triggering a BUG: scheduling while
atomic) because it becomes unnecessary after this change.
Cc: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
Cc: Jeremy Fitzhardinge <jeremy at goop.org>
Cc: Thomas Gleixner <tgl...
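For context, a minimal sketch of the resulting CPU-down path once those two calls are dropped (assuming the usual Xen helpers play_dead_common() and HYPERVISOR_vcpu_op(); an illustration, not the literal patch):

/* xen_play_dead(): CPU-down path only.  With cpu_bringup() and the
 * balancing preempt_enable() from commit 41bd956 removed, bringing the
 * CPU back online is left entirely to the generic SMP boot path. */
static void xen_play_dead(void)
{
	play_dead_common();
	HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
}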
2017 Sep 01
2
[PATCH net] vhost_net: correctly check tx avail during rx busy polling
...et.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 06d0448..1b68253 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -634,7 +634,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
preempt_enable();
- if (vhost_enable_notify(&net->dev, vq))
+ if (!vhost_vq_avail_empty(&net->dev, vq))
vhost_poll_queue(&vq->poll);
mutex_unlock(&vq->mutex);
--
2.7.4
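The two-line hunk is easier to follow with the surrounding busy-poll loop in view. A reconstruction of the relevant part of vhost_net_rx_peek_head_len(), pieced together from the hunks quoted in these threads (the real function may differ in detail):

	mutex_lock(&vq->mutex);
	vhost_disable_notify(&net->dev, vq);

	preempt_disable();
	endtime = busy_clock() + vq->busyloop_timeout;
	/* Spin until the tx ring or the rx socket has work, or the
	 * busy-loop budget runs out. */
	while (vhost_can_busy_poll(&net->dev, endtime) &&
	       !sk_has_rx_data(sk) &&
	       vhost_vq_avail_empty(&net->dev, vq))
		cpu_relax();
	preempt_enable();

	/* The fix: check the tx ring directly.  vhost_enable_notify() only
	 * reports buffers added since the last avail-index sync (which the
	 * loop above just performed), so it can miss buffers that are
	 * already pending. */
	if (!vhost_vq_avail_empty(&net->dev, vq))
		vhost_poll_queue(&vq->poll);
	mutex_unlock(&vq->mutex);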
2018 Jul 03
2
[PATCH net-next v4 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...+
> + preempt_disable();
> + endtime = busy_clock() + busyloop_timeout;
> + while (vhost_can_busy_poll(tvq->dev, endtime) &&
> + !(sock && sk_has_rx_data(sock->sk)) &&
> + vhost_vq_avail_empty(tvq->dev, tvq))
> + cpu_relax();
> + preempt_enable();
> +
> + if ((rx && !vhost_vq_avail_empty(&net->dev, vq)) ||
> + (!rx && (sock && sk_has_rx_data(sock->sk)))) {
> + vhost_poll_queue(&vq->poll);
> + } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
One last questio...
2020 Nov 03
0
[patch V3 06/37] highmem: Provide generic variant of kmap_atomic*
...mic(addr) \
-do { \
- BUILD_BUG_ON(__same_type((addr), struct page *)); \
- kunmap_atomic_high(addr); \
- pagefault_enable(); \
- preempt_enable(); \
+#define kunmap_atomic(__addr) \
+do { \
+ BUILD_BUG_ON(__same_type((__addr), struct page *)); \
+ __kunmap_atomic(__addr); \
+ pagefault_enable(); \
+ preempt_enable(); \
} while (0)
-
/* when CONFIG_HIGHMEM is not set these...
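The calling convention these macros implement is unchanged by the series; a typical caller looks like this (standard kmap_atomic()/kunmap_atomic() usage, shown only for context):

	/* Map a highmem page for a short, non-sleeping access window.  Page
	 * faults and preemption stay disabled until kunmap_atomic(), which is
	 * why the macro above ends with pagefault_enable()/preempt_enable(). */
	void *vaddr = kmap_atomic(page);
	memcpy(vaddr + offset, buf, len);
	kunmap_atomic(vaddr);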
2018 Aug 02
2
[PATCH net-next v7 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
On 2018-08-01 17:52, Tonghao Zhang wrote:
>>> +
>>> + cpu_relax();
>>> + }
>>> +
>>> + preempt_enable();
>>> +
>>> + if (!rx)
>>> + vhost_net_enable_vq(net, vq);
>> No need to enable the rx virtqueue if we are sure handle_rx() will be
>> called soon.
> If we disable rx virtqueue in handle_tx and don't send packets from
> guest anymor...
2018 Aug 01
2
[PATCH net-next v7 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...k->sk) &&
> + !vhost_vq_avail_empty(&net->dev, rvq)) ||
> + !vhost_vq_avail_empty(&net->dev, tvq))
> + break;
Some checks are duplicated in vhost_net_busy_poll_check(). We need to
consider unifying them.
> +
> + cpu_relax();
> + }
> +
> + preempt_enable();
> +
> + if (!rx)
> + vhost_net_enable_vq(net, vq);
No need to enable the rx virtqueue if we are sure handle_rx() will be
called soon.
> +
> + vhost_net_busy_poll_check(net, rvq, tvq, rx);
It looks to me that just open-coding all the checks here is better and
easier to review.
Tha...
2018 Jul 04
2
[PATCH net-next v5 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...+
> + preempt_disable();
> + endtime = busy_clock() + busyloop_timeout;
> + while (vhost_can_busy_poll(tvq->dev, endtime) &&
> + !(sock && sk_has_rx_data(sock->sk)) &&
> + vhost_vq_avail_empty(tvq->dev, tvq))
> + cpu_relax();
> + preempt_enable();
> +
> + if ((rx && !vhost_vq_avail_empty(&net->dev, vq)) ||
> + (!rx && (sock && sk_has_rx_data(sock->sk)))) {
> + vhost_poll_queue(&vq->poll);
> + } else if (vhost_enable_notify(&net->dev, vq) && rx) {
Hmm... on tx...
2018 Jul 02
1
[PATCH net-next v3 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...+
> + preempt_disable();
> + endtime = busy_clock() + busyloop_timeout;
> + while (vhost_can_busy_poll(tvq->dev, endtime) &&
> + !(sock && sk_has_rx_data(sock->sk)) &&
> + vhost_vq_avail_empty(tvq->dev, tvq))
> + cpu_relax();
> + preempt_enable();
> +
> + if ((rx && !vhost_vq_avail_empty(&net->dev, vq)) ||
> + (!rx && (sock && sk_has_rx_data(sock->sk)))) {
> + vhost_poll_queue(&vq->poll);
> + } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> + vhost_dis...
2018 May 21
20
[RFC PATCH net-next 00/12] XDP batching for TUN/vhost_net
Hi all:
We do not support XDP batching for TUN since it can only receive one
packet at a time from vhost_net. This series tries to remove this
limitation by:
- introduce a TUN specific msg_control that can hold a pointer to an
array of XDP buffs
- try copy and build XDP buff in vhost_net
- store XDP buffs in an array and submit them once for every N packets
from vhost_net
- since TUN can only
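The cover letter is cut off above, but the msg_control idea it describes can be sketched roughly as follows (all names are invented for illustration and are not taken from the series):

	/* Illustrative shape of a TUN-specific msg_control that carries a
	 * batch of XDP buffers from vhost_net to the tun sendmsg path. */
	#define EXAMPLE_XDP_BATCH 64

	struct example_tun_msg_ctl {
		int num;                           /* buffers queued so far */
		struct xdp_buff *xdp[EXAMPLE_XDP_BATCH];
	};

	/* vhost_net builds XDP buffs as it copies packets, stores them in the
	 * array, points msghdr.msg_control at the struct, and flushes once
	 * every N packets so TUN can run its XDP program over the whole batch. */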
2018 Jul 03
1
[PATCH net-next v4 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...op_timeout;
>>> + while (vhost_can_busy_poll(tvq->dev, endtime) &&
>>> + !(sock && sk_has_rx_data(sock->sk)) &&
>>> + vhost_vq_avail_empty(tvq->dev, tvq))
>>> + cpu_relax();
>>> + preempt_enable();
>>> +
>>> + if ((rx && !vhost_vq_avail_empty(&net->dev, vq)) ||
>>> + (!rx && (sock && sk_has_rx_data(sock->sk)))) {
>>> + vhost_poll_queue(&vq->poll);
>>> + } else if (unlikely(...
2018 Jul 02
5
[PATCH net-next v4 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve guest receive and transmit performance.
On the handle_tx side, we poll the sock receive queue at the same time;
handle_rx does the same.
For the full performance report, see patch 4.
v3 -> v4:
fix some issues
v2 -> v3:
These patches are split out from the previous big patch:
2020 Nov 03
0
[patch V3 20/37] io-mapping: Cleanup atomic iomap
...omap.h
@@ -13,14 +13,7 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
-void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
-
-static inline void iounmap_atomic(void __iomem *vaddr)
-{
- kunmap_local_indexed((void __force *)vaddr);
- pagefault_enable();
- preempt_enable();
-}
+void __iomem *__iomap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot);
--- a/arch/x86/mm/iomap_32.c
+++ b/arch/x86/mm/iomap_32.c
@@ -44,7 +44,7 @@ void iomap_free(resource_size_t base, un
}
EXPORT_SYMB...
2017 Sep 01
0
[PATCH net] vhost_net: correctly check tx avail during rx busy polling
...644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -634,7 +634,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
In fact, why does it poll the ring at all? I thought this function's
job was to poll the socket, wasn't it?
>
> preempt_enable();
>
> - if (vhost_enable_notify(&net->dev, vq))
> + if (!vhost_vq_avail_empty(&net->dev, vq))
> vhost_poll_queue(&vq->poll);
> mutex_unlock(&vq->mutex);
Adding more context:
mutex_lock(&vq->mutex);
v...
2018 May 21
0
[RFC PATCH net-next 06/12] tuntap: enable preemption early
...ged, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 44d4f3d..24ecd82 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1697,6 +1697,8 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
goto err_xdp;
}
}
+ rcu_read_unlock();
+ preempt_enable();
skb = build_skb(buf, buflen);
if (!skb) {
@@ -1710,9 +1712,6 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
get_page(alloc_frag->page);
alloc_frag->offset += buflen;
- rcu_read_unlock();
- preempt_enable();
-
return skb;
err_redirect:
--
2.7.4