search for: x86_idl

Displaying 4 results from an estimated 4 matches for "x86_idl".

Did you mean: x86_idle
2013 Feb 10
0
[PATCH 16/16] xen idle: make xen-specific macro xen-specific
...kernel/process.c
index ceb05db..b86de21 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -390,7 +390,8 @@ void default_idle(void)
 EXPORT_SYMBOL(default_idle);
 #endif
 
-bool set_pm_idle_to_default(void)
+#ifdef CONFIG_XEN
+bool xen_set_default_idle(void)
 {
 	bool ret = !!x86_idle;
 
@@ -398,6 +399,7 @@ bool set_pm_idle_to_default(void)
 	return ret;
 }
+#endif
 
 void stop_this_cpu(void *dummy)
 {
 	local_irq_disable();
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 8971a26..2b73b5c 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -561,7 +561...
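The excerpt cuts off at the arch/x86/xen/setup.c hunk. For context, here is a minimal sketch of how the Xen side would use the renamed helper; the xen_arch_setup() caller and the WARN_ON() wrapper are assumptions, not taken from the truncated hunk.

/* Sketch only: probable caller-side usage of the renamed helper.
 * xen_arch_setup() and the WARN_ON() wrapper are assumptions. */
#ifdef CONFIG_XEN
void __init xen_arch_setup(void)
{
	/* ... other Xen-specific setup ... */

	/* Install default_idle as x86_idle; warn if some other idle
	 * routine had already been registered before Xen got here. */
	WARN_ON(xen_set_default_idle());
}
#endif

The point of the rename plus #ifdef is that only Xen calls this helper, so nothing outside CONFIG_XEN builds or exports it.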
2017 Nov 13
1
[PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path
...++
 kernel/sched/idle.c | 2 ++
 2 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index c676853..f7db8b5 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -333,6 +333,13 @@ void arch_cpu_idle(void)
 	x86_idle();
 }
 
+#ifdef CONFIG_PARAVIRT
+void arch_cpu_idle_poll(void)
+{
+	paravirt_idle_poll();
+}
+#endif
+
 /*
  * We use this if we don't have any better idle routine..
  */
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 257f4f0..df7c422 100644
--- a/kernel/sched/idle.c
+++ b/kernel...
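The kernel/sched/idle.c hunk is truncated above. A minimal sketch of how the generic idle loop could wire in the new hook, assuming the usual weak-default pattern; the exact call site inside do_idle() is an assumption.

/* Sketch only: weak no-op default so non-paravirt architectures need
 * no change; x86 overrides it with the CONFIG_PARAVIRT version above. */
void __weak arch_cpu_idle_poll(void) { }

static void do_idle(void)
{
	/* ... nohz / polling bookkeeping elided ... */
	while (!need_resched()) {
		arch_cpu_idle_poll();	/* assumed call site: just before idle entry */

		if (cpu_idle_force_poll || tick_check_broadcast_expired())
			cpu_idle_poll();
		else
			cpuidle_idle_call();
	}
	/* ... */
}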
2017 Nov 13
7
[PATCH RFC v3 0/6] x86/idle: add halt poll support
From: Yang Zhang <yang.zhang.wz at gmail.com>
Some latency-intensive workloads have shown an obvious performance drop when running inside a VM. The main reason is that overheads are amplified when running inside a VM, and the largest cost I have seen is in the idle path. This patch introduces a new mechanism to poll for a while before entering the idle state. If a reschedule is needed during the poll, then we...
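The description stops mid-sentence; presumably a pending reschedule during the poll lets the guest return to the scheduler without halting, avoiding the VM exit of a real idle entry. A rough sketch of that poll loop follows; the 200us window, the ktime-based deadline, and the body of paravirt_idle_poll() are assumptions, not the patch's actual code.

/* Sketch only: the basic shape of a halt-poll loop as described in the
 * cover letter. Window length and time source are assumed defaults. */
static unsigned long poll_ns = 200 * NSEC_PER_USEC;

static void paravirt_idle_poll(void)
{
	ktime_t deadline = ktime_add_ns(ktime_get(), poll_ns);

	while (!need_resched() && ktime_before(ktime_get(), deadline))
		cpu_relax();	/* spin in the guest instead of exiting to halt */

	/*
	 * If need_resched() became true during the window, the caller
	 * leaves the idle path without a costly halt/VM exit; otherwise
	 * it proceeds to the real idle routine as before.
	 */
}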