Displaying 3 results from an estimated 3 matches for "257f4f0".
2017 Nov 13
1
[PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path
...cess.c
@@ -333,6 +333,13 @@ void arch_cpu_idle(void)
 	x86_idle();
 }
+#ifdef CONFIG_PARAVIRT
+void arch_cpu_idle_poll(void)
+{
+	paravirt_idle_poll();
+}
+#endif
+
 /*
  * We use this if we don't have any better idle routine..
  */
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 257f4f0..df7c422 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -74,6 +74,7 @@ static noinline int __cpuidle cpu_idle_poll(void)
 }
 
 /* Weak implementations for optional arch specific functions */
+void __weak arch_cpu_idle_poll(void) { }
 void __weak arch_cpu_idle_prepare(void) { }
 void...
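The two hunks above use the classic weak-symbol pattern: kernel/sched/idle.c supplies a __weak no-op arch_cpu_idle_poll(), and the x86 definition (under CONFIG_PARAVIRT, first hunk) overrides it at link time, so architectures that do not implement the hook need no changes. A minimal standalone userspace sketch of that link-time override, assuming GCC or Clang; the file names and puts() payloads are illustrative, not kernel code:

/* weak_default.c -- generic side: a weak no-op, like kernel/sched/idle.c */
void __attribute__((weak)) arch_cpu_idle_poll(void)
{
        /* default: do nothing, just like the kernel's weak stub */
}

/* arch_override.c -- "arch" side: a strong definition wins at link time */
#include <stdio.h>
void arch_cpu_idle_poll(void)
{
        puts("arch override: polling before idle");
}

/* main.c -- build with: cc main.c weak_default.c arch_override.c
 * Linking without arch_override.c falls back to the weak no-op. */
void arch_cpu_idle_poll(void);
int main(void)
{
        arch_cpu_idle_poll();
        return 0;
}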
2017 Nov 13
7
[PATCH RFC v3 0/6] x86/idle: add halt poll support
From: Yang Zhang <yang.zhang.wz at gmail.com>
Some latency-intensive workloads see an obvious performance drop when
running inside a VM. The main reason is that the overhead is amplified
when running inside a VM, and the largest cost I have seen is in the
idle path.
This series introduces a new mechanism to poll for a while before
entering the idle state. If a reschedule is needed during the poll, then we...
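In other words, the series trades a short, bounded busy-wait for the chance to skip an expensive halt, which for a guest means a VM exit. A conceptual kernel-style sketch of that poll-then-halt shape, assuming a hypothetical poll_threshold_ns tunable; this is not the series' actual code:

/* Sketch only: spin briefly checking for work, and enter real idle
 * (halt, i.e. a VM exit for a guest) only if nothing arrives. */
static unsigned long poll_threshold_ns = 200000;   /* hypothetical 200us budget */

static void idle_poll_then_halt(void)
{
        u64 start = ktime_get_ns();

        while (ktime_get_ns() - start < poll_threshold_ns) {
                if (need_resched())
                        return;        /* work arrived: skip the halt entirely */
                cpu_relax();           /* ease pressure on an SMT sibling */
        }

        arch_cpu_idle();               /* nothing came: pay for real idle */
}

The trade-off is CPU cycles burned during the poll window in exchange for avoiding halt-and-wakeup latency on short idle periods.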