search for: ctxsw

Displaying 20 results from an estimated 61 matches for "ctxsw".

2017 Nov 13
0
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...tering idle state. If a reschedule is needed during the poll, we don't need to go through the heavy overhead path. Here is the data we get when running the contextswitch benchmark to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=20000 -- 1149.7 ns/ctxsw -- 199.9 %CPU halt_poll_threshold=30000 -- 1151.0 ns/ctxsw -- 199.9 %CPU halt_poll_threshol...
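The mechanism the excerpt describes — spin briefly checking for pending work before paying for a full halt, so a quick wakeup avoids the expensive exit path — can be sketched in plain C. This is only an illustrative user-space sketch; the function names, the stubbed work_pending() check, and the 10000 ns threshold are assumptions standing in for the kernel code, not the actual patch.

/* Poll-before-halt sketch: busy-poll for up to a threshold looking for
 * pending work, and only take the heavy sleep path if nothing arrives.
 * All names and the threshold are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

static uint64_t now_ns(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Stand-in for need_resched(): reports whether work became runnable. */
static bool work_pending(void)
{
        return false;
}

static void idle_with_poll(uint64_t poll_threshold_ns)
{
        uint64_t start = now_ns();

        /* Cheap path: poll until the threshold expires. */
        while (now_ns() - start < poll_threshold_ns) {
                if (work_pending())
                        return;   /* reschedule without the heavy exit */
        }

        /* Heavy path: the real halt (HLT / VM exit in the guest case). */
        usleep(1000);
}

int main(void)
{
        idle_with_poll(10000);    /* corresponds to halt_poll_threshold=10000 above */
        return 0;
}

Under this model the threshold trades CPU time for wakeup latency, which matches the excerpt's numbers: %CPU stays around 200% while latency drops from ~3400 ns to ~1150 ns per context switch.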
2017 Nov 13
2
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...tering idle state. If a reschedule is needed during the poll, we don't need to go through the heavy overhead path. Here is the data we get when running the contextswitch benchmark to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=20000 -- 1149.7 ns/ctxsw -- 199.9 %CPU halt_poll_threshold=30000 -- 1151.0 ns/ctxsw -- 199.9 %CPU halt_poll_threshol...
2017 Nov 13
7
[PATCH RFC v3 0/6] x86/idle: add halt poll support
...tering idle state. If a reschedule is needed during the poll, we don't need to go through the heavy overhead path. Here is the data we get when running the contextswitch benchmark to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0): halt_poll_threshold=10000 -- 1151.4 ns/ctxsw -- 200.1 %CPU halt_poll_threshold=20000 -- 1149.7 ns/ctxsw -- 199.9 %CPU halt_poll_threshold=30000 -- 1151.0 ns/ctxsw -- 199.9 %CPU halt_poll_threshol...
2019 Feb 17
0
[PATCH] gr/gf100-: correctly expose fecs methods for ctxsw start and stop
Allow fecs to potentially set both methods: - 0x38 STOP_CTXSW - 0x39 START_CTXSW At present the code only ever starts context switching and never pauses it, although pausing appears to be the intent of one caller of gf100_gr_fecs_ctrl_ctxsw(). Cc: Ben Skeggs <bskeggs at redhat.com> Fixes: 2642e0b5 ("gr/gf100-: expose fecs methods for pausing ctxsw") Signed-off-...
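A minimal sketch of the pattern the patch describes — one control helper that can submit either the stop (0x38) or start (0x39) FECS method, so a caller can actually pause and resume context switching — is below. Only the two method numbers come from the excerpt; the function and type names are hypothetical stand-ins for the nouveau internals.

/* Hypothetical sketch: a single helper that can issue either FECS method.
 * Only the method numbers 0x38/0x39 are taken from the patch description. */
#include <stdint.h>
#include <stdio.h>

enum fecs_ctxsw_method {
        FECS_STOP_CTXSW  = 0x38,
        FECS_START_CTXSW = 0x39,
};

/* Stand-in for handing the method to the firmware. */
static int fecs_submit(uint32_t method)
{
        printf("submit FECS method 0x%02x\n", (unsigned)method);
        return 0;
}

static int gr_fecs_ctrl_ctxsw(enum fecs_ctxsw_method m)
{
        return fecs_submit((uint32_t)m);
}

int main(void)
{
        gr_fecs_ctrl_ctxsw(FECS_STOP_CTXSW);   /* pause context switching */
        gr_fecs_ctrl_ctxsw(FECS_START_CTXSW);  /* resume it */
        return 0;
}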
2007 Jun 01
2
lguest problem on boot of guest kernel
Hi ! Kernel 2.6.21 (kernel.org) Patch lguest-2.6.21-254.patch Distro Slackware 11.0 GCC 3.4.6 GLIBC 2.3.6 HW model name : AMD Duron(tm) processor Module Size Used by tun 7680 0 lg 54600 0 just started playing with lguest - patching, compiling and booting the host-kernel goes ok - compiling lguest is ok as well after
2017 Nov 14
2
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...line timer), or a hardware context switch between virtual machine and hypervisor. I know this is a tradeoff. Furthermore, here is the data we get when running the contextswitch benchmark to measure latency (lower is better): 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): 3402.9 ns/ctxsw -- 199.8 %CPU 2. w/ patch and disable kvm dynamic poll: 1163.5 ns/ctxsw -- 205.5 %CPU 3. w/ kvm dynamic poll: 2280.6 ns/ctxsw -- 199.5 %CPU so, these two solutions are quite similar, but not duplicates. that's also why we add a generic idle poll before entering the real idle path. When a resc...
2017 Nov 14
0
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
...xt switch between virtual machine > and hypervisor. I know this is a tradeoff. > > Furthermore, here is the data we get when running the contextswitch benchmark > to measure latency (lower is better): > > 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0): > 3402.9 ns/ctxsw -- 199.8 %CPU > > 2. w/ patch and disable kvm dynamic poll: > 1163.5 ns/ctxsw -- 205.5 %CPU > > 3. w/ kvm dynamic poll: > 2280.6 ns/ctxsw -- 199.5 %CPU > > so, these two solutions are quite similar, but not duplicates. > > that's also why we add a generic idle p...
2016 Oct 30
2
Nouveau regression since kernel 4.3: loading NVIDIA's firmware files
...s you're well-aware, your commit 8539b37acef73949861a16808b60cb8b5b9b3bab (drm/nouveau/gr: use NVIDIA-provided external firmwares) broke tons of existing setups for people who were using extracted firmware files (stored in the "nouveau" firmware directory) as a result of nouveau's ctxsw fw being ... lacking. This is especially common on GK106s for some reason. The arguments for doing this at the time were that (a) all the bugs in nouveau's fw had been fixed, and thus those people don't need to be using those extracted firmware files, and (b) NVIDIA was going to release...
2015 Aug 11
3
Odd text behavior on Websites and others
...> > using > > the Fedora 22 Update repository. > > I am not familiar with the details, I can just report what happens right > > now... > > OK, there's no EXA support for Maxwell, so you're using glamor. > Before > kernel 4.1, unless you had extracted your own ctxsw firmware, you > didn't have acceleration at all; that was likely the change that > triggered the issue. > > The glamor integration in nouveau is, sadly, broken. But it's unclear > whether that's the cause of your issue. You can either add > > Option "NoAccel"...
2008 Apr 15
4
NFS Performance
Hi, With help from Oleg we got the right patches applied and NFS working well. Maximum performance was about 60 MB/sec. Last week that dropped to about 12.5 MB/sec and I cannot find a reason. Lustre clients all obtain 100+ MB/sec on GigE. Each OST is good for 270 MB/sec. When mounting the client on one of the OSSs I get 230 MB/sec. Seems the speed is there. How can NFS and Lustre be tuned
2017 Jan 11
3
GP106M+Intel Skylake, Kernel 4.10-rc3 : No display on HDMI or DP
Hi all, On my recent MSI Apache Pro laptop with Intel Skylake + GP106M Nvidia chip, I am unable to use the external HDMI and DP outputs. The outputs are available through xrandr and can be activated, but the connected monitors always show a black screen. Note that the integrated LCD is connected to the Skylake GPU, while the HDMI and DP outputs are connected to the NVIDIA GP106M GPU. The
2017 Nov 17
2
[PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path
...make power/idle state decisions, is the last idle state's residency time. IIUC this data is the duration from idle to wakeup, which may be triggered by a reschedule irq or another irq. I also tested that the reschedule irq overlaps by more than 90% (tracing the need_resched status after cpuidle_idle_call) when I run ctxsw/netperf for one minute. Given the overlap, I think I can use the last idle state's residency time to make decisions about probabilistic polling, as @dev->last_residency does. It is much easier to get this data. 2. Do an HV-specific idle driver (function). So far, power management is not exposed...
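The decision being sketched in the excerpt — use the previous idle residency to decide whether polling before the real idle path is worth it — can be illustrated in a few lines of C. The threshold, structure, and function names below are hypothetical; only the idea of keying off the last residency (as @dev->last_residency does) comes from the source.

/* Hypothetical sketch: poll before halting only when the previous idle
 * period was short, i.e. wakeups are arriving frequently enough that a
 * brief poll will probably catch the next one. */
#include <stdbool.h>
#include <stdint.h>

#define POLL_IF_RESIDENCY_BELOW_NS 200000ull   /* illustrative cutoff */

struct idle_stats {
        uint64_t last_residency_ns;   /* duration of the previous idle period */
};

static bool should_poll_before_halt(const struct idle_stats *s)
{
        /* Short residencies suggest another wakeup is imminent, so a short
         * poll is likely to avoid the heavy halt/exit path. */
        return s->last_residency_ns < POLL_IF_RESIDENCY_BELOW_NS;
}

int main(void)
{
        struct idle_stats s = { .last_residency_ns = 50000 };
        return should_poll_before_halt(&s) ? 0 : 1;
}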
2015 Aug 11
3
Odd text behavior on Websites and others
Online, I just now noticed that some websites come up OK, but then characters rapidly begin to disappear or get replaced by wrong characters. If I click on any words in this post several times, the missing characters "Kr" intermittently appear in the example above and disappear in this sentence. I also just noticed that the same behavior occurs when I move the mouse in and out of the text
2014 Dec 05
2
errors with GeForce GTX 650 Ti and 3 monitors
There were fixes in 3.17 that were supposed to help this, but apparently they didn't help enough. See https://bugs.freedesktop.org/show_bug.cgi?id=72180 -- basically some sort of card setup failure on our part is causing our ctxsw to die, but nvidia's appears to be more resilient to the screwups. On Fri, Dec 5, 2014 at 1:16 PM, Rob Jansen <rob.g.jansen at nrl.navy.mil> wrote: > Well, I downgraded back to 2 monitors and it turns out that this problem has reappeared when using only 2 monitors as well. There must...