2017 Nov 13
[PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 257f4f0..df7c422 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -74,6 +74,7 @@ static noinline int __cpuidle cpu_idle_poll(void)
 }
 
 /* Weak implementations for optional arch specific functions */
+void __weak arch_cpu_idle_poll(void) { }
 void __weak arch_cpu_idle_prepare(void) { }
 void __weak arch_cpu_idle_enter(void) { }
 void __weak arch_cpu_idle_exit(void) { }
@@ -219,6 +220,7 @@ static void do_idle(void)
 	 */
 	__current_set_polling();
+	arch_cpu_idle_poll();
 	quiet_vmstat();
 	tick_nohz_idle_enter();
--
1.7.1
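For context, a minimal sketch of how an architecture might override the new weak hook. This is not part of the posted series: the poll_ns budget is an assumed tunable, and the loop simply relies on the generic sched_clock(), need_resched() and cpu_relax() helpers.

#include <linux/sched.h>
#include <linux/sched/clock.h>

/* Illustrative only: the poll budget is an assumed value, not taken from the patch. */
static unsigned long poll_ns = 200000;	/* 200us */

void arch_cpu_idle_poll(void)
{
	u64 start = sched_clock();

	/* Spin briefly so a wakeup that arrives right now avoids the halt path. */
	while (!need_resched() && sched_clock() - start < poll_ns)
		cpu_relax();
}

The sketch only shows the shape of an override; how the real series sizes and enables the poll window is described in the cover letter below.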
2017 Nov 13
[PATCH RFC v3 0/6] x86/idle: add halt poll support
From: Yang Zhang <yang.zhang.wz@gmail.com>
Some latency-intensive workloads see an obvious performance drop when running inside a VM. The main reason is that overhead is amplified when running inside a VM, and the largest cost I have seen is in the idle path.
This patch series introduces a new mechanism to poll for a while before entering the real idle state. If a reschedule becomes pending during the poll, the CPU can skip the heavy idle entry path.
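To make the trade-off concrete, here is a small self-contained user-space analogue of the poll-before-idle idea (compile with cc -pthread). It is not taken from the series: all names, the 50us budget, and the condition-variable fallback are illustrative assumptions standing in for the halt path.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

static atomic_bool work_pending;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

static long long now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Cheap path: spin on the wakeup flag for a bounded budget. */
static bool poll_for_work(long long budget_ns)
{
	long long start = now_ns();

	while (now_ns() - start < budget_ns)
		if (atomic_load(&work_pending))
			return true;
	return false;
}

/* One idle iteration: poll first, only then take the expensive blocking path. */
static void idle_once(void)
{
	if (poll_for_work(50000))	/* 50us poll budget, an assumed value */
		return;

	pthread_mutex_lock(&lock);
	while (!atomic_load(&work_pending))
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}

static void *producer(void *arg)
{
	struct timespec d = { 0, 20000 };	/* wake after ~20us: poll path wins */

	(void)arg;
	nanosleep(&d, NULL);
	pthread_mutex_lock(&lock);
	atomic_store(&work_pending, true);
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, producer, NULL);
	idle_once();			/* returns via the polling fast path */
	pthread_join(&t, NULL);
	return 0;
}

The design point mirrors the cover letter: a short spin trades a little CPU time for much lower wakeup latency when work arrives almost immediately, while the blocking (halt-like) path is still taken when nothing shows up within the budget.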