search for: contextswitch

Displaying 6 results from an estimated 11 matches for "contextswitch".

2017 Nov 14
2
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...kvm dynamic poll mechanism, even if you could provide a better algorithm, how could you bypass timer access (LAPIC timer or TSC deadline timer), or a hardware context switch between virtual machine and hypervisor? I know this is a tradeoff. Furthermore, here is the data we get when running the benchmark contextswitch to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll: 1163.5 ns/ctxsw -- 205.5 %CPU 3. w/ kvm dynamic poll: 2280.6 ns/ctxsw -- 199.5 %CPU so, these two solutions are qu...
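The contextswitch benchmark itself is not shown in the thread; as a rough, hypothetical illustration of how per-switch latency figures like the ns/ctxsw numbers above are commonly measured, a pipe ping-pong between two processes forces two context switches per round trip:

/* Hypothetical sketch of a pipe ping-pong context-switch benchmark.
 * Two processes bounce one byte over a pair of pipes; each round trip
 * costs two context switches when both are pinned to one CPU. This is
 * NOT the actual "contextswitch" benchmark from the thread.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define ITERS 100000

int main(void)
{
    int p2c[2], c2p[2];          /* parent->child and child->parent pipes */
    char buf = 'x';
    pid_t pid;

    if (pipe(p2c) < 0 || pipe(c2p) < 0) {
        perror("pipe");
        return 1;
    }

    pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {              /* child: echo each byte straight back */
        for (int i = 0; i < ITERS; i++) {
            if (read(p2c[0], &buf, 1) != 1 || write(c2p[1], &buf, 1) != 1)
                _exit(1);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        if (write(p2c[1], &buf, 1) != 1 || read(c2p[0], &buf, 1) != 1)
            return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    wait(NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    /* one iteration = one round trip = two context switches */
    printf("%.1f ns/ctxsw\n", ns / (ITERS * 2.0));
    return 0;
}

Pinning both processes to a single CPU (e.g. taskset -c 0) inside the guest is what makes each round trip cost two switches; the figures quoted above were presumably gathered with a similar scheme, so this sketch shows the kind of measurement, not the exact tool.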
2017 Nov 14
0
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...ism, even if you could provide a > better algorithm, how could you bypass timer access (LAPIC timer or TSC > deadline timer), or a hardware context switch between virtual machine > and hypervisor? I know this is a tradeoff. > > Furthermore, here is the data we get when running the benchmark contextswitch > to measure latency (lower is better): > > 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): > 3402.9 ns/ctxsw -- 199.8 %CPU > > 2. w/ patch and disable kvm dynamic poll: > 1163.5 ns/ctxsw -- 205.5 %CPU > > 3. w/ kvm dynamic poll: > 2280.6 ns/ctxsw...
2017 Nov 13
0
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...running inside a VM. The largest cost I have seen is in the idle path. This patch introduces a new mechanism to poll for a while before entering the idle state. If a reschedule is needed during the poll, then we don't need to go through the heavy overhead path. Here is the data we get when running the benchmark contextswitch to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=2000...
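The mechanism this cover letter describes (spin briefly checking for runnable work before taking the expensive halt path) can be sketched in kernel-style code roughly as follows; the loop structure and the do_idle_with_poll name are illustrative assumptions, and only the halt_poll_threshold parameter comes from the text above:

/* Rough sketch of the halt-poll idea from the cover letter: spin for up
 * to a threshold checking whether the scheduler has work for us, and
 * only fall back to the expensive halt path (VM exit, timer
 * reprogramming) if nothing shows up. Not the actual patch code.
 */
static void do_idle_with_poll(void)
{
	u64 start = ktime_get_ns();

	while (ktime_get_ns() - start < halt_poll_threshold) {
		if (need_resched())
			return;		/* work arrived: skip the heavy halt path */
		cpu_relax();		/* polite busy-wait hint to the CPU */
	}

	safe_halt();			/* no work: take the normal idle/halt path */
}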
2017 Nov 13
2
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...running inside a VM. The largest cost I have seen is in the idle path. This patch introduces a new mechanism to poll for a while before entering the idle state. If a reschedule is needed during the poll, then we don't need to go through the heavy overhead path. Here is the data we get when running the benchmark contextswitch to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=2000...
2017 Nov 14
4
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
On 2017/11/13 18:53, Juergen Gross wrote: > On 13/11/17 11:06, Quan Xu wrote: >> From: Quan Xu <quan.xu0 at gmail.com> >> >> So far, pv_idle_ops.poll is the only op for pv_idle. .poll is called >> in the idle path, which will poll for a while before we enter the real idle >> state. >> >> In virtualization, the idle path includes several heavy operations
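A minimal sketch of the pv_idle_ops shape this description implies, with a single .poll hook invoked from the idle path, might look like the following; only pv_idle_ops.poll is named in the thread, so the call-site helper here is an assumption, not the patch's actual code:

/* Sketch of a paravirt ops structure carrying one .poll hook that the
 * idle path calls before entering the real idle state, as the patch
 * description says. Helper naming is illustrative.
 */
struct pv_idle_ops {
	void (*poll)(void);	/* poll for a while before real idle */
};

extern struct pv_idle_ops pv_idle_ops;

static inline void paravirt_idle_poll(void)
{
	if (pv_idle_ops.poll)
		pv_idle_ops.poll();
}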
2017 Nov 13
7
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...running inside a VM. The largest cost I have seen is in the idle path. This patch introduces a new mechanism to poll for a while before entering the idle state. If a reschedule is needed during the poll, then we don't need to go through the heavy overhead path. Here is the data we get when running the benchmark contextswitch to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=2000...