2017 Nov 13
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...ugh the heavy overhead path.
Here is the data we get when running the contextswitch benchmark to
measure latency (lower is better):
1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
3402.9 ns/ctxsw -- 199.8 %CPU
2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU
halt_poll_threshold=20000 -- 1149.7 ns/ctxsw -- 199.9 %CPU
halt_poll_threshold=30000 -- 1151.0 ns/ctxsw -- 199.9 %CPU
halt_poll_threshold=40000 -- 1155.4 ns/ctxsw -- 199.3 %CPU
halt_poll_threshold=50000 -- 1161.0 ns/ctxsw -- 200.0...
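The "contextswitch" benchmark itself is not shown in the excerpt; a minimal pipe ping-pong microbenchmark in the same spirit (and reporting in the same ns/ctxsw unit) might look like the sketch below. The function name and iteration count are illustrative, not taken from the patch series.

```python
import os
import time

def ctxsw_bench(iters=20000):
    """Ping-pong one byte between parent and child over two pipes.

    Each round trip blocks both processes in turn, forcing two
    context switches, so the per-switch cost is elapsed / (2 * iters).
    """
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: echo each byte back to the parent, then exit.
        for _ in range(iters):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    t0 = time.monotonic_ns()
    for _ in range(iters):
        os.write(w1, b"x")
        os.read(r2, 1)
    t1 = time.monotonic_ns()
    os.waitpid(pid, 0)
    return (t1 - t0) / (2 * iters)  # ns per context switch

if __name__ == "__main__":
    print(f"{ctxsw_bench():.1f} ns/ctxsw")
```

With host-side polling disabled (halt_poll_ns=0), each blocking read in a guest vCPU takes the full halt/wakeup path, which is what the ~3400 ns baseline above reflects; guest-side polling avoids that exit.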