Displaying 12 results from an estimated 600 matches similar to: "[PATCH 1/2] export cpu_tlbstate to modules"
2018 Jan 04
2
possible issue with nvidia and new patches?
Twitter user stintel mentions a possible problem with the new patches and
the nvidia driver in this thread:
https://twitter.com/stintel/status/948499157282623488
"As if the @Intel bug isn't bad enough, #KPTI renders @nvidia driver
incompatible due to GPL-only symbol 'cpu_tlbstate'. #epicfail"
Also:
https://twitter.com/tomasz_gwozdz/status/948590364679655429
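For context: the incompatibility stems from the kernel's license-gated
exports. A symbol exported with EXPORT_SYMBOL_GPL() can only be resolved by
modules whose MODULE_LICENSE() is GPL-compatible, so a proprietary module
that starts needing such a symbol fails to load with an "Unknown symbol"
error. A minimal, illustrative sketch of the mechanism (the struct layout
is made up; only the export macros are the real interface):

#include <linux/module.h>
#include <linux/percpu.h>

/* Illustrative stand-in for the real struct tlb_state. */
struct tlb_state_sketch {
	int loaded_mm_asid;
};

DEFINE_PER_CPU(struct tlb_state_sketch, cpu_tlbstate);

/* GPL-only export: modules without a GPL-compatible MODULE_LICENSE()
 * cannot link against this symbol at load time. */
EXPORT_PER_CPU_SYMBOL_GPL(cpu_tlbstate);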
2007 Apr 28
3
[PATCH] i386: introduce voyager smp_ops, fix voyager build
This adds an smp_ops for voyager and hooks things up appropriately. It is
the first baby step toward making the subarch runtime-switchable.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
---
arch/i386/kernel/Makefile | 1
arch/i386/kernel/smp.c
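The pattern introduced here is a table of function pointers that the
generic SMP code dispatches through, so the subarch implementation can be
selected at runtime instead of being fixed at compile time. A hedged sketch
of the idea; the struct fields and function names below are illustrative,
not the actual i386 definitions:

/* Per-subarch operations table; generic code calls through it. */
struct smp_ops_sketch {
	void (*smp_send_reschedule)(int cpu);
	void (*smp_send_stop)(void);
};

static void voyager_send_reschedule(int cpu)
{
	/* Voyager-specific IPI delivery would go here. */
}

static void voyager_send_stop(void)
{
	/* Voyager-specific "stop all CPUs" handling would go here. */
}

/* Installed once at boot; other subarchs provide their own table. */
static struct smp_ops_sketch smp_ops = {
	.smp_send_reschedule	= voyager_send_reschedule,
	.smp_send_stop		= voyager_send_stop,
};

static inline void smp_send_reschedule(int cpu)
{
	smp_ops.smp_send_reschedule(cpu);	/* dispatch via the table */
}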
2019 Jul 22
2
[PATCH v3 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
On Thu, Jul 18, 2019 at 05:58:32PM -0700, Nadav Amit wrote:
> @@ -709,8 +716,9 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
> * doing a speculative memory access.
> */
> if (info->freed_tables) {
> - smp_call_function_many(cpumask, flush_tlb_func_remote,
> - (void *)info, 1);
> + __smp_call_function_many(cpumask, flush_tlb_func_remote,
2019 Jun 13
4
[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi() to do so. The current
flush_tlb_others() interface is kept, since the paravirtual interfaces
need to be adapted before it can be removed; that is left for future work.
In such PV environments, TLB flushes are not yet performed concurrently.
Add a static key to tell
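The truncated sentence above refers to the kernel's static-key mechanism:
a branch whose direction is patched into the instruction stream at
runtime, so choosing between the new and old flush paths costs no
conditional in the hot path. A small sketch using the real jump-label API
but with illustrative names:

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(flush_tlb_multi_enabled);

/* Illustrative stubs for the two strategies. */
static void flush_concurrent(void)
{
	/* Flush remote and local TLBs together. */
}

static void flush_sequential(void)
{
	/* Legacy path: flush_tlb_others(), then the local flush. */
}

static void flush_dispatch(void)
{
	/* The branch is patched at runtime, not tested per call. */
	if (static_branch_likely(&flush_tlb_multi_enabled))
		flush_concurrent();
	else
		flush_sequential();
}

/* Flipped once during boot when the new path is usable:
 *	static_branch_enable(&flush_tlb_multi_enabled);
 */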
2007 Apr 18
2
[PATCH 21/21] i386 Ldt context inline
I was also able to move the LDT switching functionality out of the
critical path in switch_mm, which reduces the number of function calls,
potential TLB misses, and code size.
Signed-off-by: Zachary Amsden <zach@vmware.com>
Index: linux-2.6.14-zach-work/include/asm-i386/desc.h
===================================================================
---
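The win described above comes from turning the common no-LDT case into an
inline test, so switch_mm takes no function call on the fast path. An
illustrative sketch of the shape of the change (the mm layout and helper
names are made up, not the actual patch):

/* Illustrative mm context: almost no process installs a private LDT. */
struct mm_sketch {
	void *ldt;		/* NULL in the overwhelmingly common case */
};

static void load_ldt_slowpath(struct mm_sketch *next)
{
	/* Out-of-line LDT reload would go here. */
}

static inline void switch_ldt(struct mm_sketch *prev, struct mm_sketch *next)
{
	/* Inline fast path: no call, and less I-cache and TLB pressure
	 * when neither the old nor the new mm has an LDT. */
	if (prev->ldt || next->ldt)
		load_ldt_slowpath(next);
}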
2019 Jul 02
2
[PATCH v2 0/9] x86: Concurrent TLB flushes
Currently, local and remote TLB flushes are not performed concurrently,
which introduces unnecessary overhead; each INVLPG can take hundreds of
cycles. This patch-set allows TLB flushes to be run concurrently: first
request the remote CPUs to initiate the flush, then run it locally, and
finally wait for the remote CPUs to finish their work.
In addition, there are various small optimizations to avoid
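The three steps described in the cover letter can be sketched with a plain
atomic acknowledgement counter. This is a hedged illustration of the
pattern only, not the actual series (which wires the waiting into the
smp_call_function machinery); it assumes mask contains only remote CPUs
and that the caller has preemption disabled:

#include <linux/smp.h>
#include <linux/atomic.h>
#include <linux/cpumask.h>

static atomic_t remote_pending;

static void local_flush_tlb_sketch(void)
{
	/* Architecture-specific local TLB flush would go here. */
}

static void remote_flush_fn(void *info)
{
	/* Runs in IPI context on each remote CPU. */
	local_flush_tlb_sketch();
	atomic_dec(&remote_pending);	/* acknowledge completion */
}

static void flush_tlb_multi_sketch(const struct cpumask *mask)
{
	atomic_set(&remote_pending, cpumask_weight(mask));

	/* Step 1: kick the remote CPUs without waiting (wait = false). */
	smp_call_function_many(mask, remote_flush_fn, NULL, false);

	/* Step 2: do the local flush while the IPIs are in flight. */
	local_flush_tlb_sketch();

	/* Step 3: wait for every remote CPU to acknowledge. */
	while (atomic_read(&remote_pending))
		cpu_relax();
}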
2020 Apr 04
0
[PATCH 4/6] kernel: move use_mm/unuse_mm to kthread.c
These helpers are only for use with kernel threads, and I will tie them
more into the kthread infrastructure going forward. Also move the
prototypes to kthread.h; mmu_context.h was a little odd to start with, as
it otherwise contains very low-level MM bits.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h | 1 +
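For reference, the helper pair lets a kernel thread temporarily adopt a
user process's mm so that copy_to_user()/copy_from_user() resolve against
that address space. A hedged usage sketch with the names of this era (the
wrapper function itself is hypothetical):

#include <linux/mmu_context.h>	/* use_mm()/unuse_mm() before the move */
#include <linux/uaccess.h>
#include <linux/errno.h>

/* Hypothetical helper: copy from a given user mm inside a kthread. */
static int kthread_copy_from(struct mm_struct *mm, void *dst,
			     const void __user *src, size_t len)
{
	int ret = 0;

	use_mm(mm);			/* adopt the target address space */
	if (copy_from_user(dst, src, len))
		ret = -EFAULT;
	unuse_mm(mm);			/* drop it again */

	return ret;
}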
2019 Jul 19
5
[PATCH v3 0/9] x86: Concurrent TLB flushes
[ The cover letter is identical to v2, including benchmark results but
excluding the change log. ]
Currently, local and remote TLB flushes are not performed concurrently,
which introduces unnecessary overhead; each INVLPG can take hundreds of
cycles. This patch-set allows TLB flushes to be run concurrently: first
request the remote CPUs to initiate the flush, then run it locally, and
finally wait for
2009 Aug 13
0
[PATCHv3 1/2] mm: export use_mm/unuse_mm to modules
The vhost net module wants to copy to/from user space from a kernel
thread, which needs use_mm() (as fs/aio does). Move that into mm/ and
export it to modules.
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
fs/aio.c | 47 +----------------------------------
include/linux/mmu_context.h | 9 ++++++
2009 Aug 19
0
[PATCHv4 1/2] mm: export use_mm/unuse_mm to modules
The vhost net module wants to copy to/from user space from a kernel
thread, which needs use_mm() (as fs/aio does). Move that into mm/ and
export it to modules.
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
fs/aio.c | 47
2009 Sep 17
0
[PATCHv3 1/2] mm: move use_mm/unuse_mm from aio.c to mm/
Anyone who wants to copy to/from user space from a kernel thread needs
use_mm() (as fs/aio does). Move it into mm/, to make reusing and exporting
easier down the line, and make aio use it. The next intended user, besides
aio, will be vhost-net.
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
fs/aio.c |
2007 Apr 18
4
paravirt repo rebased to 2.6.21-rc6-mm1
Seems to work OK for native and Xen. I had to play a bit with the
paravirt-sched-clock patch to deal with the VMI changes. Zach, can you
check that it still works?
Thanks,
J