search for: stride_shift

Displaying 6 results from an estimated 6 matches for "stride_shift".

2019 Jul 22
2
[PATCH v3 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...mm;                      /*  0  8 */
	long unsigned int   start;        /*  8  8 */
	long unsigned int   end;          /* 16  8 */
	u64                 new_tlb_gen;  /* 24  8 */
	unsigned int        stride_shift; /* 32  4 */
	bool                freed_tables; /* 36  1 */

	/* size: 40, cachelines: 1, members: 6 */
	/* padding: 3 */
	/* last cacheline: 40 bytes */
};

IIRC what you did was make void *__call_single_data::info the last member and...
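The dump above is pahole output for the series' flush_tlb_info. Reconstructed from the offsets and sizes it reports, the struct would look roughly like this; the first member's type is elided in the match, so treat it as a sketch rather than the exact tree state:

	struct flush_tlb_info {
		struct mm_struct	*mm;		/* offset  0, size 8 */
		unsigned long		start;		/* offset  8, size 8 */
		unsigned long		end;		/* offset 16, size 8 */
		u64			new_tlb_gen;	/* offset 24, size 8 */
		unsigned int		stride_shift;	/* offset 32, size 4 */
		bool			freed_tables;	/* offset 36, size 1 */
		/* 3 bytes of tail padding: total 40 bytes, one cacheline */
	};
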
2019 Jul 22
0
[PATCH v3 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
.../* 0 8 */
> long unsigned int   start;        /*  8  8 */
> long unsigned int   end;          /* 16  8 */
> u64                 new_tlb_gen;  /* 24  8 */
> unsigned int        stride_shift; /* 32  4 */
> bool                freed_tables; /* 36  1 */
>
> /* size: 40, cachelines: 1, members: 6 */
> /* padding: 3 */
> /* last cacheline: 40 bytes */
> };
>
> IIRC what you did was make void *__call_si...
2019 Jul 19
5
[PATCH v3 0/9] x86: Concurrent TLB flushes
[ Cover-letter is identical to v2, including benchmark results, excluding the change log. ] Currently, local and remote TLB flushes are not performed concurrently, which introduces unnecessary overhead - each INVLPG can take 100s of cycles. This patch-set allows TLB flushes to be run concurrently: first request the remote CPUs to initiate the flush, then run it locally, and finally wait for
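The ordering the cover letter describes (kick off the remote flushes, do the local flush while they run, then wait) reduces to a pattern like the sketch below. The helpers request_remote_flush_async() and wait_for_remote_flush() are hypothetical stand-ins, not the patch's actual API:

	static void flush_tlb_concurrently(const struct flush_tlb_info *info,
					   const struct cpumask *remote_cpus)
	{
		/* 1. Ask the remote CPUs to start flushing; do not wait yet. */
		request_remote_flush_async(remote_cpus, info);	/* hypothetical */

		/* 2. Flush the local TLB while the remote CPUs do the same. */
		flush_tlb_func_local(info);

		/* 3. Only now block until every remote CPU has finished. */
		wait_for_remote_flush(remote_cpus);		/* hypothetical */
	}
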
2019 Jul 02
0
[PATCH v2 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...flushes TLBs on multiple cpus
 *
 * ..but the i386 has somewhat limited tlb flushing capabilities,
 * and page-granular flushes are available only on i486 and up.
@@ -563,13 +563,14 @@
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
				unsigned long end, unsigned int stride_shift,
				bool freed_tables);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+extern void flush_tlb_func_local(const struct flush_tlb_info *info);

 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
	flush_tlb_mm_range(vma->vm_mm, a, a...
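In this interface, stride_shift is the log2 spacing between the addresses to invalidate: PAGE_SHIFT when flushing 4 KB entries, PMD_SHIFT when the range is mapped with 2 MB huge pages. A usage sketch with illustrative variable names:

	/* Flush a single 4 KB page at addr: */
	flush_tlb_mm_range(mm, addr, addr + PAGE_SIZE, PAGE_SHIFT, false);

	/* Flush a huge-page range in 2 MB strides; pass freed_tables = true
	 * only if page-table pages were freed during the unmap: */
	flush_tlb_mm_range(mm, start, end, PMD_SHIFT, false);
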
2019 Jul 02
2
[PATCH v2 0/9] x86: Concurrent TLB flushes
Currently, local and remote TLB flushes are not performed concurrently, which introduces unnecessary overhead - each INVLPG can take 100s of cycles. This patch-set allows TLB flushes to be run concurrently: first request the remote CPUs to initiate the flush, then run it locally, and finally wait for the remote CPUs to finish their work. In addition, there are various small optimizations to avoid
2019 Jul 19
0
[PATCH v3 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...any(cond_cpumask, flush_tlb_func_remote,
+		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
+					 flush_tlb_func_local, (void *)info, 1);
	}
}
@@ -818,16 +827,20 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
	info = get_flush_tlb_info(mm, start, end, stride_shift,
				  freed_tables, new_tlb_gen);

-	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
+	/*
+	 * flush_tlb_multi() is not optimized for the common case in which only
+	 * a local TLB flush is needed. Optimize this use-case by calling
+	 * flush_tlb_func_local() directly in this case.
+	 */
+	if...
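The comment in this hunk points at a local-only fast path: when no other CPU can hold stale entries for the mm, skip the cross-CPU machinery and flush directly. A sketch of that shape (the exact condition in the patch differs; flush_tlb_multi() and flush_tlb_func_local() are the patchset's names):

	preempt_disable();
	if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id()))) {
		/* Only this CPU ever ran the mm: flush locally, no IPIs. */
		flush_tlb_func_local(info);
	} else {
		/* Remote CPUs may cache stale entries: concurrent flush. */
		flush_tlb_multi(mm_cpumask(mm), info);
	}
	preempt_enable();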