search for: halt_poll_ns

Displaying 20 results from an estimated 22 matches for "halt_poll_ns".

2017 Nov 13
0
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...sm to poll for a while before entering the idle state. If a reschedule is needed during the poll, then we don't need to go through the heavy overhead path. Here is the data we get when running the contextswitch benchmark to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=20000 -- 1149.7 ns/ctxsw -- 199.9 %CPU halt_poll_threshold=30000 -- 1151.0 ns/ctxsw -- 199.9 %CPU...
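The excerpt above captures the core idea: on entry to idle, spin for a bounded time and only fall back to the expensive halt/wakeup path if nothing becomes runnable within the halt_poll_threshold budget. Below is a minimal userspace sketch of that strategy; it is an illustration only, and the names need_resched_flag, halt_poll_threshold_ns and do_real_idle are hypothetical stand-ins for the kernel's need-resched check and real idle entry.

/*
 * Illustrative userspace sketch only -- not the kernel patch itself.
 * It models the idea behind halt_poll_threshold: busy-poll for pending
 * work for a bounded time before paying for the expensive "real idle"
 * path (here simulated by a blocking sleep). All identifiers are
 * hypothetical.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static atomic_bool need_resched_flag;              /* "work arrived" signal */
static uint64_t halt_poll_threshold_ns = 20000;    /* poll budget, in ns    */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static void do_real_idle(void)
{
    /* Stand-in for the heavy path (HLT / VM exit / scheduler round trip). */
    usleep(1000);
}

static void idle_enter(void)
{
    uint64_t start = now_ns();

    /* Poll for a bounded time; if work shows up, skip the heavy path. */
    while (now_ns() - start < halt_poll_threshold_ns) {
        if (atomic_load(&need_resched_flag))
            return;               /* cheap exit: no halt needed */
    }
    do_real_idle();               /* poll budget exhausted */
}

int main(void)
{
    idle_enter();
    puts("left idle");
    return 0;
}

Raising halt_poll_threshold_ns trades CPU time (spinning) for lower wakeup latency, which is consistent with the ns/ctxsw versus %CPU numbers quoted in the results above.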
2017 Nov 13
2
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...sm to poll for a while before entering the idle state. If a reschedule is needed during the poll, then we don't need to go through the heavy overhead path. Here is the data we get when running the contextswitch benchmark to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=20000 -- 1149.7 ns/ctxsw -- 199.9 %CPU halt_poll_threshold=30000 -- 1151.0 ns/ctxsw -- 199.9 %CPU...
2017 Nov 13
7
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...sm to poll for a while before entering the idle state. If a reschedule is needed during the poll, then we don't need to go through the heavy overhead path. Here is the data we get when running the contextswitch benchmark to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=20000 -- 1149.7 ns/ctxsw -- 199.9 %CPU halt_poll_threshold=30000 -- 1151.0 ns/ctxsw -- 199.9 %CPU...
2017 Nov 14
2
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...> Cc: xen-devel at lists.xenproject.org >>> Hmm, is the idle entry path really so critical to performance that a new >>> pvops function is necessary? >> Juergen, Here is the data we get when running benchmark netperf: >> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): >> 29031.6 bit/s -- 76.1 %CPU >> >> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): >> 35787.7 bit/s -- 129.4 %CPU >> >> 3. w/ kvm dynamic poll: >> 35735.6 bit/s -- 200.0 %CPU > Actually we can reduce the CPU utilization...
2017 Nov 14
4
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...inux-kernel at vger.kernel.org >> Cc: xen-devel at lists.xenproject.org > Hmm, is the idle entry path really so critical to performance that a new > pvops function is necessary? Juergen, Here is the data we get when running benchmark netperf: 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 29031.6 bit/s -- 76.1 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): 35787.7 bit/s -- 129.4 %CPU 3. w/ kvm dynamic poll: 35735.6 bit/s -- 200.0 %CPU 4. w/ patch and w/ kvm dynamic poll: 42225.3 bit/s -- 198.7 %CPU 5. idle=poll 37081.7 bit/s -...
2017 Nov 14
2
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...> Cc: xen-devel at lists.xenproject.org >>> Hmm, is the idle entry path really so critical to performance that a new >>> pvops function is necessary? >> Juergen, Here is the data we get when running benchmark netperf: >> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): >> 29031.6 bit/s -- 76.1 %CPU >> >> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): >> 35787.7 bit/s -- 129.4 %CPU >> >> 3. w/ kvm dynamic poll: >> 35735.6 bit/s -- 200.0 %CPU >> >> 4. w/ patch and w/ kvm dynam...
2017 Nov 14
0
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...t;>>> >>>> Hmm, is the idle entry path really so critical to performance that a new >>>> pvops function is necessary? >>> >>> Juergen, Here is the data we get when running benchmark netperf: >>> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): >>> 29031.6 bit/s -- 76.1 %CPU >>> >>> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): >>> 35787.7 bit/s -- 129.4 %CPU >>> >>> 3. w/ kvm dynamic poll: >>> 35735.6 bit/s -- 200.0 %CPU >> >> A...
2017 Nov 14
1
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...>> Hmm, is the idle entry path really so critical to performance that a >>>>> new >>>>> pvops function is necessary? >>>> Juergen, Here is the data we get when running benchmark netperf: >>>> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): >>>> 29031.6 bit/s -- 76.1 %CPU >>>> >>>> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): >>>> 35787.7 bit/s -- 129.4 %CPU >>>> >>>> 3. w/ kvm dynamic poll: >>>> 35735.6 bit/s...
2017 Nov 14
0
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...>> Cc: xen-devel at lists.xenproject.org >> >> Hmm, is the idle entry path really so critical to performance that a new >> pvops function is necessary? > > Juergen, Here is the data we get when running benchmark netperf: > 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): > 29031.6 bit/s -- 76.1 %CPU > > 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): > 35787.7 bit/s -- 129.4 %CPU > > 3. w/ kvm dynamic poll: > 35735.6 bit/s -- 200.0 %CPU Actually we can reduce the CPU utilization by sleeping a period of time as what...
2017 Nov 14
0
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...ernel.org >>> Cc: xen-devel at lists.xenproject.org >> Hmm, is the idle entry path really so critical to performance that a new >> pvops function is necessary? > Juergen, Here is the data we get when running benchmark netperf: > 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): > 29031.6 bit/s -- 76.1 %CPU > > 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): > 35787.7 bit/s -- 129.4 %CPU > > 3. w/ kvm dynamic poll: > 35735.6 bit/s -- 200.0 %CPU > > 4. w/ patch and w/ kvm dynamic poll: > 42225.3 bit/s -- 198....
2017 Nov 14
0
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...roject.org >>>> Hmm, is the idle entry path really so critical to performance that a >>>> new >>>> pvops function is necessary? >>> Juergen, Here is the data we get when running benchmark netperf: >>> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): >>> 29031.6 bit/s -- 76.1 %CPU >>> >>> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): >>> 35787.7 bit/s -- 129.4 %CPU >>> >>> 3. w/ kvm dynamic poll: >>> 35735.6 bit/s -- 200.0 %CPU >>> >...
2017 Nov 14
0
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...is the idle entry path really so critical to performance that a >>>>>> new >>>>>> pvops function is necessary? >>>>> Juergen, Here is the data we get when running benchmark netperf: >>>>> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): >>>>> 29031.6 bit/s -- 76.1 %CPU >>>>> >>>>> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): >>>>> 35787.7 bit/s -- 129.4 %CPU >>>>> >>>>> 3. w/ kvm dynamic poll: >>>...
2017 Nov 13
2
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
From: Quan Xu <quan.xu0 at gmail.com> So far, pv_idle_ops.poll is the only op for pv_idle. .poll is called in the idle path and polls for a while before we enter the real idle state. In virtualization, the idle path includes several heavy operations, including timer access (LAPIC timer or TSC-deadline timer), which hurt performance, especially for latency-intensive workloads like message
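The last entry above is the cover text for the patch that introduces the hook itself. Below is a minimal userspace sketch of that pvops-style indirection, assuming a single .poll callback as the cover letter describes; the function names (default_poll, kvm_guest_poll, enter_real_idle) and the calling sequence are illustrative stand-ins, not the actual kernel code.

/*
 * Illustrative sketch of the pvops-style indirection described in the
 * cover letter: a single .poll hook that the generic idle path calls
 * before entering the real idle state. Self-contained userspace model;
 * all identifiers are hypothetical.
 */
#include <stdio.h>

struct pv_idle_ops {
    void (*poll)(void);           /* invoked once on every idle entry */
};

static void default_poll(void)
{
    /* Bare metal / no paravirt: nothing to poll, fall through to idle. */
}

static void kvm_guest_poll(void)
{
    /* A KVM guest would spin here for up to the poll threshold,
     * checking whether a reschedule became pending, to avoid the
     * heavyweight halt/VM-exit path. */
    puts("guest poll before halt");
}

static struct pv_idle_ops pv_idle_ops = { .poll = default_poll };

static void enter_real_idle(void)
{
    puts("real idle (HLT / deeper C-state)");
}

static void do_idle(void)
{
    pv_idle_ops.poll();           /* paravirt hook first ...          */
    enter_real_idle();            /* ... then the normal idle path    */
}

int main(void)
{
    do_idle();                            /* bare-metal behaviour      */
    pv_idle_ops.poll = kvm_guest_poll;    /* guest overrides the hook  */
    do_idle();
    return 0;
}

The point of the indirection is that bare metal pays only one indirect call to a no-op, while a guest can plug in a polling implementation without touching the generic idle path, which is what the thread above is debating the cost of.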