search for: pte_none

Displaying 20 results from an estimated 104 matches for "pte_none".

2020 Jul 31
1
[PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
...holds a migration entry (so the page can't be faulted and the CPU page table set > valid again), and there are no extra page references (pins), the page > "should not be modified". That is the physical page though, it doesn't prove nobody else is reading the PTE. > For pte_none()/is_zero_pfn() entries, migrate_vma_setup() leaves the > pte_none()/is_zero_pfn() entry in place but does still call > mmu_notifier_invalidate_range_start() for the whole range being migrated. Ok.. > In the migrate_vma_pages() step, the pte page table is locked and the > pte entry ch...
2020 Jul 28
2
[PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote: > When migrating the special zero page, migrate_vma_pages() calls > mmu_notifier_invalidate_range_start() before replacing the zero page > PFN in the CPU page tables. This is unnecessary since the range was > invalidated in migrate_vma_setup() and the page table entry is checked > to be sure it hasn't changed
2020 Jul 28
0
[PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
...te_vma_setup() stage, and the page is isolated from the LRU cache, locked, unmapped, and the page table holds a migration entry (so the page can't be faulted and the CPU page table set valid again), and there are no extra page references (pins), the page "should not be modified". For pte_none()/is_zero_pfn() entries, migrate_vma_setup() leaves the pte_none()/is_zero_pfn() entry in place but does still call mmu_notifier_invalidate_range_start() for the whole range being migrated. In the migrate_vma_pages() step, the pte page table is locked and the pte entry checked to be sure it is sti...
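
To make the pattern discussed in this thread concrete, here is a minimal sketch of the check-under-lock step it describes (install_if_still_none is a hypothetical helper, not the kernel's actual migrate_vma_pages() code): the PTE page table is locked and the entry is re-checked to still be pte_none() before the new PFN is installed, which is why the invalidation already issued in migrate_vma_setup() is sufficient.

#include <linux/mm.h>

/*
 * Hypothetical helper sketching the check-under-lock pattern:
 * install newpte only if the entry is still empty, so a page
 * faulted in concurrently is never silently overwritten.
 */
static bool install_if_still_none(struct mm_struct *mm, pmd_t *pmd,
                                  unsigned long addr, pte_t newpte)
{
        spinlock_t *ptl;
        pte_t *ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
        bool installed = false;

        if (pte_none(*ptep)) {
                set_pte_at(mm, addr, ptep, newpte);
                installed = true;
        }
        pte_unmap_unlock(ptep, ptl);
        return installed;
}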
2006 Mar 14
12
[RFC] VMI for Xen?
I'm sure everyone has seen the drop of VMI patches for Linux at this point, but just in case, the link is included below. I've read this version of the VMI spec and have made my way through most of the patches. While I wasn't really that impressed with the first spec wrt Xen, the second version seems to be much more palatable. Specifically, the code inlining and
2020 Nov 03
0
[patch V3 24/37] sched: highmem: Store local kmaps in task struct
...ar kmaps */ + for (i = 0; i < tsk->kmap_ctrl.idx; i++) { + pte_t pteval = tsk->kmap_ctrl.pteval[i]; + unsigned long addr; + int idx; + + /* With debug all even slots are unmapped and act as guard */ + if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) { + WARN_ON_ONCE(!pte_none(pteval)); + continue; + } + if (WARN_ON_ONCE(pte_none(pteval))) + continue; + + /* + * This is a horrible hack for XTENSA to calculate the + * coloured PTE index. Uses the PFN encoded into the pteval + * and the map index calculation because the actual mapped + * virtual address is n...
2007 Apr 18
0
[PATCH 3/9] 00mm3 lazy mmu mode hooks.patch
...spin_unlock(src_ptl); pte_unmap_nested(src_pte - 1); add_mm_rss(dst_mm, rss[0], rss[1]); @@ -628,6 +630,7 @@ static unsigned long zap_pte_range(struc int anon_rss = 0; pte = pte_offset_map_lock(mm, pmd, addr, &ptl); + arch_enter_lazy_mmu_mode(); do { pte_t ptent = *pte; if (pte_none(ptent)) { @@ -694,6 +697,7 @@ static unsigned long zap_pte_range(struc } while (pte++, addr += PAGE_SIZE, (addr != end && *zap_work > 0)); add_mm_rss(mm, file_rss, anon_rss); + arch_leave_lazy_mmu_mode(); pte_unmap_unlock(pte - 1, ptl); return addr; @@ -1109,6 +1113,7 @@ stat...
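
The hunk reduces to a simple bracketing pattern around batched PTE updates; a condensed sketch of the zap_pte_range() loop it modifies follows (the teardown body and the *zap_work accounting are elided):

        pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        arch_enter_lazy_mmu_mode();     /* backend may queue PTE updates */
        do {
                pte_t ptent = *pte;

                if (pte_none(ptent))
                        continue;       /* nothing mapped at this address */
                /* ... tear down the mapping ... */
        } while (pte++, addr += PAGE_SIZE, addr != end);
        arch_leave_lazy_mmu_mode();     /* flush queued updates in one go */
        pte_unmap_unlock(pte - 1, ptl);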
2020 May 20
2
[PATCH] nouveau/hmm: fix migrate zero page to GPU
When calling OpenCL clEnqueueSVMMigrateMem() on a region of memory that is backed by pte_none() or zero pages, migrate_vma_setup() will fill the source PFN array with an entry indicating the source page is zero. Use this to optimize migration to device private memory by allocating GPU memory and zero filling it instead of failing to migrate the page. Signed-off-by: Ralph Campbell <rcamp...
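
A hedged sketch of the optimization this patch describes (handle_zero_entry is a hypothetical name, and the real patch zero-fills with the GPU rather than the CPU): a source entry that is marked migratable but has no backing page is the pte_none()/zero-page case, so the destination page can simply be allocated and cleared instead of the migration failing.

#include <linux/migrate.h>
#include <linux/highmem.h>

/* Hypothetical sketch: dpage is a freshly allocated destination page. */
static void handle_zero_entry(struct migrate_vma *args, unsigned long i,
                              struct page *dpage)
{
        struct page *spage = migrate_pfn_to_page(args->src[i]);

        if (!(args->src[i] & MIGRATE_PFN_MIGRATE))
                return;         /* entry was not collected for migration */

        if (!spage) {
                /* pte_none()/zero page: no source data, just zero-fill */
                clear_highpage(dpage);
        } else {
                /* normal case: copy spage to dpage (elided here) */
        }

        args->dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
}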
2020 May 20
1
[PATCH] nouveau/hmm: fix migrate zero page to GPU
On 5/20/20 12:20 PM, Jason Gunthorpe wrote: > On Wed, May 20, 2020 at 11:36:52AM -0700, Ralph Campbell wrote: >> When calling OpenCL clEnqueueSVMMigrateMem() on a region of memory that >> is backed by pte_none() or zero pages, migrate_vma_setup() will fill the >> source PFN array with an entry indicating the source page is zero. >> Use this to optimize migration to device private memory by allocating >> GPU memory and zero filling it instead of failing to migrate the page. >> >...
2020 Nov 03
0
[patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
...p; - - type = kmap_atomic_idx_push(); - - idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id(); - vaddr = __fix_to_virt(idx); -#ifdef CONFIG_DEBUG_HIGHMEM - /* - * With debugging enabled, kunmap_atomic forces that entry to 0. - * Make sure it was indeed properly unmapped. - */ - BUG_ON(!pte_none(get_fixmap_pte(vaddr))); -#endif - /* - * When debugging is off, kunmap_atomic leaves the previous mapping - * in place, so the contained TLB flush ensures the TLB is updated - * with the new mapping. - */ - set_fixmap_pte(idx, mk_pte(page, prot)); - - return (void *)vaddr; -} -EXPORT_SYMBOL(km...
2011 Mar 20
6
PATCH: Hugepage support for Domains booting with 4KB pages
We have implemented hugepage support for guests in the following manner. In our implementation we added a parameter, hugepage_num, which is specified in the config file of the DomU. It is the number of hugepages that the guest is guaranteed to receive whenever the kernel asks for hugepages using its boot-time parameter or by reserving them after booting (e.g. using echo XX > /proc/sys/vm/nr_hugepages).
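
For illustration, the guarantee described above might be requested with a single line in the DomU config file (a hypothetical snippet; the exact syntax depends on the toolstack version):

        hugepage_num = 512   # guarantee this guest 512 hugepages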
2020 May 20
0
[PATCH] nouveau/hmm: fix migrate zero page to GPU
On Wed, May 20, 2020 at 11:36:52AM -0700, Ralph Campbell wrote: > When calling OpenCL clEnqueueSVMMigrateMem() on a region of memory that > is backed by pte_none() or zero pages, migrate_vma_setup() will fill the > source PFN array with an entry indicating the source page is zero. > Use this to optimize migration to device private memory by allocating > GPU memory and zero filling it instead of failing to migrate the page. > > Signed-off-by:...
2020 Sep 02
0
[PATCH v2 1/7] mm/thp: fix __split_huge_pmd_locked() for migration PMD
..._pmd(*pmd)) { /* * FIXME: Do we want to invalidate secondary mmu by calling * mmu_notifier_invalidate_range() see comments below inside @@ -2117,30 +2117,34 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, pte = pte_offset_map(&_pmd, addr); BUG_ON(!pte_none(*pte)); set_pte_at(mm, addr, pte, entry); - atomic_inc(&page[i]._mapcount); - pte_unmap(pte); - } - - /* - * Set PG_double_map before dropping compound_mapcount to avoid - * false-negative page_mapped(). - */ - if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {...
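
For context, a condensed sketch of the loop this hunk sits in (the per-iteration value of entry and the surrounding locking are elided from the excerpt): the page table just installed under the split PMD was freshly allocated, so every slot must still be empty, which BUG_ON(!pte_none(*pte)) asserts before each install.

        for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
                pte_t *pte = pte_offset_map(&_pmd, addr);

                BUG_ON(!pte_none(*pte));  /* new table: slot must be empty */
                set_pte_at(mm, addr, pte, entry);
                pte_unmap(pte);
        }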
2007 Apr 18
0
[PATCH 3/5] Fix missing pte update.patch
...lear(xp) do { set_pmd(xp, __pmd(0)); } while (0) -#define __HAVE_ARCH_PTEP_GET_AND_CLEAR -#define ptep_get_and_clear(mm,addr,xp) __pte(xchg(&(xp)->pte_low, 0)) +#define raw_ptep_get_and_clear(xp) __pte(xchg(&(xp)->pte_low, 0)) #define pte_page(x) pfn_to_page(pte_pfn(x)) #define pte_none(x) (!(x).pte_low) diff -r f1dd818c2f06 include/asm-i386/pgtable-3level.h --- a/include/asm-i386/pgtable-3level.h Thu Oct 19 03:03:09 2006 -0700 +++ b/include/asm-i386/pgtable-3level.h Thu Oct 19 03:03:18 2006 -0700 @@ -119,8 +119,7 @@ static inline void pmd_clear(pmd_t *pmd) *(tmp + 1) = 0; }...
2007 Apr 18
1
[PATCH 3/4] Pte xchg optimization.patch
...ep); + return res; +} + +#ifdef CONFIG_SMP static inline pte_t native_ptep_get_and_clear(pte_t *xp) { return __pte(xchg(&xp->pte_low, 0)); } +#else +#define native_ptep_get_and_clear(xp) native_local_ptep_get_and_clear(xp) +#endif #define pte_page(x) pfn_to_page(pte_pfn(x)) #define pte_none(x) (!(x).pte_low) diff -r 47495b2532b3 include/asm-i386/pgtable-3level.h --- a/include/asm-i386/pgtable-3level.h Wed Apr 11 18:23:01 2007 -0700 +++ b/include/asm-i386/pgtable-3level.h Wed Apr 11 18:23:05 2007 -0700 @@ -139,6 +139,17 @@ static inline void pud_clear (pud_t * pu #define pmd_offset(p...
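
Reconstructed from the visible context, the optimization amounts to the following (a sketch of the era's i386 code, not a verbatim copy): on a uniprocessor kernel no other CPU can touch the PTE, so the locked xchg can be replaced by a plain read followed by a clear.

static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
{
        pte_t res = *ptep;      /* plain read: no cross-CPU racer on UP */
        pte_clear(NULL, 0, ptep);
        return res;
}

#ifdef CONFIG_SMP
/* On SMP the atomic xchg is kept so a concurrent hardware A/D-bit
 * update from another CPU cannot be lost. */
static inline pte_t native_ptep_get_and_clear(pte_t *xp)
{
        return __pte(xchg(&xp->pte_low, 0));
}
#else
#define native_ptep_get_and_clear(xp) native_local_ptep_get_and_clear(xp)
#endif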
2007 Apr 18
0
[PATCH 5/9] 00mm6 kpte flush.patch
...================================================= --- a/arch/i386/mm/highmem.c +++ b/arch/i386/mm/highmem.c @@ -44,22 +44,19 @@ void *kmap_atomic(struct page *page, enu idx = type + KM_TYPE_NR*smp_processor_id(); vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); -#ifdef CONFIG_DEBUG_HIGHMEM if (!pte_none(*(kmap_pte-idx))) BUG(); -#endif set_pte(kmap_pte-idx, mk_pte(page, kmap_prot)); - __flush_tlb_one(vaddr); return (void*) vaddr; } void kunmap_atomic(void *kvaddr, enum km_type type) { -#ifdef CONFIG_DEBUG_HIGHMEM unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; enum...
2020 Nov 03
45
[patch V3 00/37] mm/highmem: Preemptible variant of kmap_atomic & friends
Following up to the discussion in: https://lore.kernel.org/r/20200914204209.256266093@linutronix.de and the second version of this: https://lore.kernel.org/r/20201029221806.189523375@linutronix.de this series provides a preemptible variant of kmap_atomic & related interfaces. This is achieved by: - Removing the RT dependency from migrate_disable/enable() - Consolidating all