Displaying 14 results from an estimated 14 matches for "kvm_apf_trap_init".
2019 May 27
3
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...fact from before the static_key; an attempt to
make the pv interface less awkward.
Something like the below would work for KVM I suspect, the others
(Hyper-V and Xen are more 'interesting').
---
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -580,7 +580,7 @@ static void __init kvm_apf_trap_init(voi
static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
const struct flush_tlb_info *info)
{
u8 state;
@@ -594,6 +594,9 @@ static void kvm_flush_tlb_others(c...
2019 May 27
1
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
On Mon, May 27, 2019 at 12:21:59PM +0200, Paolo Bonzini wrote:
> On 27/05/19 11:47, Peter Zijlstra wrote:
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -580,7 +580,7 @@ static void __init kvm_apf_trap_init(voi
> >
> > static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
> >
> > -static void kvm_flush_tlb_others(const struct cpumask *cpumask,
> > +static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
> > const struct flush_tlb_info *info)
> >...
2017 Nov 17
2
[PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path
..."(cpuidle_idle_call() --> default_idle_call())..
thanks Xen guys, who has implemented the paravirt framework. I can
implement it
as easy as following:
???????????? --- a/arch/x86/kernel/kvm.c
???????????? +++ b/arch/x86/kernel/kvm.c
???????????? @@ -465,6 +465,12 @@ static void __init
kvm_apf_trap_init(void)
???????????????????? update_intr_gate(X86_TRAP_PF, async_page_fault);
????????????? }
???????????? +static __cpuidle void kvm_safe_halt(void)
???????????? +{
??? ???? +??????? /* 1. POLL, if need_resched() --> return */
??? ???? +
???????????? +??????? asm volatile("sti; hlt&q...
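The snippet above breaks off mid-line, but the idea it sketches is clear: spin briefly on need_resched() before executing the halt, so a wakeup that arrives almost immediately is caught without paying for a full halt and wakeup. Below is a minimal userspace model of that ordering, not kernel code; the names work_pending and poll_before_halt() are invented for this sketch and stand in for need_resched() and the "sti; hlt" path.

/*
 * Illustrative userspace model of "poll before halt", not kernel code.
 * Build: gcc -std=c11 poll_halt.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool work_pending;		/* stands in for need_resched() */

static void poll_before_halt(int poll_loops)
{
	/* 1. POLL: if work shows up while spinning, return without halting. */
	for (int i = 0; i < poll_loops; i++) {
		if (atomic_load_explicit(&work_pending, memory_order_acquire))
			return;
	}
	/*
	 * 2. Nothing arrived: fall back to the expensive "halt", modelled
	 *    here as a sleep; the real code would do "sti; hlt".
	 */
	sleep(1);
}

int main(void)
{
	atomic_store(&work_pending, true);	/* pretend a wakeup raced in */
	poll_before_halt(10000);		/* returns quickly, no sleep */
	puts("polled, no halt needed");
	return 0;
}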
2019 May 27
0
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...o
> make the pv interface less awkward.
>
> Something like the below would work for KVM I suspect, the others
> (Hyper-V and Xen are more 'interesting').
>
> ---
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -580,7 +580,7 @@ static void __init kvm_apf_trap_init(voi
>
> static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
>
> -static void kvm_flush_tlb_others(const struct cpumask *cpumask,
> +static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
> const struct flush_tlb_info *info)
> {
> u8 state;
> @@ -594,6...
2017 Nov 17
0
[PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path
...implemented the paravirt framework. I can implement
> it
> as easily as follows:
>
> --- a/arch/x86/kernel/kvm.c
Your email client is using a very strange formatting.
> +++ b/arch/x86/kernel/kvm.c
> @@ -465,6 +465,12 @@ static void __init kvm_apf_trap_init(void)
>  	update_intr_gate(X86_TRAP_PF, async_page_fault);
>  }
>
> +static __cpuidle void kvm_safe_halt(void)
> +{
> +	/* 1. POLL, if need_resched() --> return */
> +
> +	...
2017 Nov 16
1
[PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path
On 2017-11-16 06:03, Thomas Gleixner wrote:
> On Wed, 15 Nov 2017, Peter Zijlstra wrote:
>
>> On Mon, Nov 13, 2017 at 06:06:02PM +0800, Quan Xu wrote:
>>> From: Yang Zhang <yang.zhang.wz at gmail.com>
>>>
>>> Implement a generic idle poll which resembles the functionality
found in arch/. Provide a weak arch_cpu_idle_poll() function which
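The excerpt ends mid-sentence, but the mechanism it names, a weak default that an architecture may override, is the usual weak-symbol pattern. A minimal standalone sketch follows; the return convention is assumed, and this is not the code from the series.

/*
 * Sketch of the weak-symbol pattern: the weak definition is the generic
 * fallback, and an architecture can supply a strong definition with the
 * same signature to replace it at link time.
 */
#include <stdbool.h>
#include <stdio.h>

/* Generic fallback: no architecture-specific polling. */
__attribute__((weak)) bool arch_cpu_idle_poll(void)
{
	return false;	/* "nothing to poll, go idle as usual" */
}

int main(void)
{
	if (arch_cpu_idle_poll())
		puts("arch poll found work, skip going idle");
	else
		puts("weak default: enter the normal idle path");
	return 0;
}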
2019 Jul 19
0
[PATCH v3 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
..._PROTO(const struct cpumask *cpus,
const struct flush_tlb_info *info),
TP_ARGS(cpus, info),
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index b7f34fe2171e..de40657d9025 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -595,7 +595,7 @@ static void __init kvm_apf_trap_init(void)
static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
const struct flush_tlb_info *info)
{
u8 state;
@@ -609,6 +609,11 @@ static void kvm_flush_tlb_other...
2019 Jul 02
0
[PATCH v2 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
..._PROTO(const struct cpumask *cpus,
const struct flush_tlb_info *info),
TP_ARGS(cpus, info),
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5169b8cc35bb..d00d551d4a2a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -580,7 +580,7 @@ static void __init kvm_apf_trap_init(void)
static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
const struct flush_tlb_info *info)
{
u8 state;
@@ -594,6 +594,11 @@ static void kvm_flush_tlb_other...
2019 May 25
3
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi() that does so. The current
flush_tlb_others() interface is kept, since the paravirtual interfaces need
to be adapted before it can be removed; this is left for future work. In
such PV environments, TLB flushes are, for now, not performed concurrently.
Add a static key to tell
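The excerpt cuts off at the static-key sentence. As a rough illustration of the split it describes, and not the actual patch, the sketch below uses a plain bool in place of the static key: when the backend can flush local and remote TLBs together, flush_tlb_multi() hands over the full mask with the local CPU included; otherwise it falls back to the old remote-then-local order. All names and the userspace framing are invented here.

/*
 * Rough illustration of gating concurrent local+remote TLB flushes on a
 * "static key"; not the actual kernel patch.
 */
#include <stdbool.h>
#include <stdio.h>

static bool tlb_multi_concurrent;	/* "static key": local+remote together? */

static void flush_remote_cpus(unsigned long mask)
{
	printf("flush CPUs in mask 0x%lx\n", mask);
}

static void flush_local_cpu(void)
{
	printf("flush local CPU\n");
}

static void flush_tlb_multi(unsigned long mask, int this_cpu)
{
	unsigned long this_bit = 1UL << this_cpu;

	if (tlb_multi_concurrent) {
		/* Native path: the whole mask, local CPU included. */
		flush_remote_cpus(mask);
	} else {
		/* PV fallback: remote CPUs first, local CPU afterwards. */
		flush_remote_cpus(mask & ~this_bit);
		if (mask & this_bit)
			flush_local_cpu();
	}
}

int main(void)
{
	flush_tlb_multi(0x7, 0);	/* fallback: remote 0x6, then local */
	tlb_multi_concurrent = true;
	flush_tlb_multi(0x7, 0);	/* concurrent: 0x7 in one go */
	return 0;
}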
2019 Jul 02
2
[PATCH v2 0/9] x86: Concurrent TLB flushes
Currently, local and remote TLB flushes are not performed concurrently,
which introduces unnecessary overhead - each INVLPG can take 100s of
cycles. This patch-set allows TLB flushes to be run concurrently: first
request the remote CPUs to initiate the flush, then run it locally, and
finally wait for the remote CPUs to finish their work.
In addition, there are various small optimizations to avoid
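The ordering described in this cover letter (request the remote CPUs, flush locally while they work, then wait) can be modelled in userspace with threads and a barrier. The sketch below is only meant to show that request/local-work/wait structure; it is not kernel code and all names are invented.

/*
 * Userspace model of concurrent local+remote TLB flushing.
 * Build: gcc -pthread concurrent_flush.c
 */
#include <pthread.h>
#include <stdio.h>

#define NR_REMOTE 3

static pthread_barrier_t done;		/* "remote CPUs finished" rendezvous */

static void *remote_flush(void *arg)
{
	printf("cpu %ld: flushing\n", (long)arg);	/* stands in for the flush work */
	pthread_barrier_wait(&done);
	return NULL;
}

int main(void)
{
	pthread_t cpus[NR_REMOTE];

	pthread_barrier_init(&done, NULL, NR_REMOTE + 1);

	/* 1. Request the remote CPUs to initiate their flushes (the "IPI"). */
	for (long i = 0; i < NR_REMOTE; i++)
		pthread_create(&cpus[i], NULL, remote_flush, (void *)(i + 1));

	/* 2. Run the local flush concurrently with the remote ones. */
	printf("cpu 0: flushing locally\n");

	/* 3. Only now wait for the remote CPUs to report completion. */
	pthread_barrier_wait(&done);

	for (long i = 0; i < NR_REMOTE; i++)
		pthread_join(cpus[i], NULL);
	puts("all TLB flushes complete");
	return 0;
}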
2019 Jul 19
5
[PATCH v3 0/9] x86: Concurrent TLB flushes
[ Cover-letter is identical to v2, including benchmark results,
excluding the change log. ]
Currently, local and remote TLB flushes are not performed concurrently,
which introduces unnecessary overhead - each INVLPG can take 100s of
cycles. This patch-set allows TLB flushes to be run concurrently: first
request the remote CPUs to initiate the flush, then run it locally, and
finally wait for