search for: pte_free

Displaying 18 results from an estimated 18 matches for "pte_free".

2007 Jan 10
1
[PATCH] linux/i386: allow CONFIG_HIGHPTE on i386 (take 2)
...t_op op;
+
+		kmap_flush_unused();
+		op.cmd = MMUEXT_PIN_L1_TABLE;
+		op.arg1.mfn = pfn_to_mfn(page_to_pfn(pte));
+		BUG_ON(HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF) < 0);
+	}
 #else
 	pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
+#endif
 	if (pte) {
 		SetPageForeign(pte, pte_free);
 		set_page_count(pte, 1);
 	}
-#endif
 	return pte;
 }

 void pte_free(struct page *pte)
 {
-	unsigned long va = (unsigned long)__va(page_to_pfn(pte)<<PAGE_SHIFT);
+	unsigned long pfn = page_to_pfn(pte);

-	if (!pte_write(*virt_to_ptep(va)))
-		BUG_ON(HYPERVISOR_update_va_mapping(
-			va...
2007 Apr 18
1
[PATCH 1/5] Add pagetable allocation notifiers
...\
+	SetPagePTE(pte);					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -33,7 +41,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	tlb_remove_page((tlb),(pte));				\
+	ClearPagePTE(pte);					\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
Index: linux-2.6.13/include/asm-i386/mach-default/mach_pgalloc.h...
2007 Apr 18
1
[PATCH 1/5] Add pagetable allocation notifiers
...\
+	SetPagePTE(pte);					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -33,7 +41,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	tlb_remove_page((tlb),(pte));				\
+	ClearPagePTE(pte);					\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
Index: linux-2.6.13/include/asm-i386/mach-default/mach_pgalloc.h...
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
...if (!is_huge_zero_pmd(*pmdp))
+			goto unlock_abort;
+		flush = true;
+	} else if (!pmd_none(*pmdp))
+		goto unlock_abort;
+
+	get_page(page);
+	page_add_new_anon_rmap(page, vma, haddr, true);
+	if (!is_device_private_page(page))
+		lru_cache_add_active_or_unevictable(page, vma);
+	if (flush) {
+		pte_free(mm, pgtable);
+		flush_cache_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
+		pmdp_invalidate(vma, haddr, pmdp);
+	} else {
+		pgtable_trans_huge_deposit(mm, pmdp, pgtable);
+		mm_inc_nr_ptes(mm);
+	}
+	set_pmd_at(mm, haddr, pmdp, entry);
+	update_mmu_cache_pmd(vma, haddr, pmdp);
+	add_mm_counter(mm,...
2007 Apr 18
0
[PATCH 1/5] Paravirt page alloc.patch
...lloc_pt(page_to_pfn(pte));					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -32,7 +50,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	paravirt_release_pt(page_to_pfn(pte));			\
+	tlb_remove_page((tlb),(pte));				\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
2007 Apr 18
0
[PATCH 1/5] Paravirt page alloc.patch
...lloc_pt(page_to_pfn(pte));					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -32,7 +50,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	paravirt_release_pt(page_to_pfn(pte));			\
+	tlb_remove_page((tlb),(pte));				\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
2007 Apr 18
0
[PATCH 1/6] Page allocation hooks for VMI backend
...lloc_pt(page_to_pfn(pte));					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -32,7 +50,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	paravirt_release_pt(page_to_pfn(pte));			\
+	tlb_remove_page((tlb),(pte));				\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
2007 Apr 18
0
[PATCH 1/6] Page allocation hooks for VMI backend
...lloc_pt(page_to_pfn(pte));					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -32,7 +50,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	paravirt_release_pt(page_to_pfn(pte));			\
+	tlb_remove_page((tlb),(pte));				\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...rt;
> +		flush = true;
> +	} else if (!pmd_none(*pmdp))
> +		goto unlock_abort;
> +
> +	get_page(page);
> +	page_add_new_anon_rmap(page, vma, haddr, true);
> +	if (!is_device_private_page(page))
> +		lru_cache_add_active_or_unevictable(page, vma);
> +	if (flush) {
> +		pte_free(mm, pgtable);
> +		flush_cache_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
> +		pmdp_invalidate(vma, haddr, pmdp);
> +	} else {
> +		pgtable_trans_huge_deposit(mm, pmdp, pgtable);
> +		mm_inc_nr_ptes(mm);
> +	}
> +	set_pmd_at(mm, haddr, pmdp, entry);
> +	update_mmu_cache_pmd(...
2007 Apr 18
2
pgd_alloc and [cd]tors
Is there any real use in having a ctor/dtor for the pgd cache? Given that all pgd allocation happens via pgd_alloc/pgd_free, why not just fold the [cd]tor in? I'm asking because Xen wants pgd[3] to be unshared in the PAE case, and it looks to me like the easiest way to handle that is by making pgd_alloc/free pv-ops and doing the appropriate thing in the Xen code. Would need to sort out the
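A minimal sketch of the idea being floated here, assuming the non-PAE i386 layout: the former ctor work is folded straight into pgd_alloc(), leaving a backend hook where Xen could unshare pgd[3] in the PAE case. The hook name pv_pgd_alloc_hook() is invented for illustration and is not an existing kernel symbol.

#include <linux/mm.h>
#include <linux/string.h>
#include <asm/pgalloc.h>

/* Sketch only: fold the former pgd ctor into pgd_alloc() and leave a
 * paravirt backend hook; pv_pgd_alloc_hook() is a made-up name. */
pgd_t *pgd_alloc(struct mm_struct *mm)
{
	pgd_t *pgd = (pgd_t *)__get_free_page(GFP_KERNEL);

	if (!pgd)
		return NULL;

	/* What the ctor used to do: clear the user slots and copy the
	 * kernel mappings from swapper_pg_dir. */
	memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
	memcpy(pgd + USER_PTRS_PER_PGD,
	       swapper_pg_dir + USER_PTRS_PER_PGD,
	       (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));

	/* Backend hook: Xen could unshare pgd[3] here for PAE. */
	pv_pgd_alloc_hook(mm, pgd);
	return pgd;
}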
2007 Apr 18
2
pgd_alloc and [cd]tors
Is there any real use in having a ctor/dtor for the pgd cache? Given that all pgd allocation happens via pgd_alloc/pgd_free, why not just fold the [cd]tor in? I'm asking because Xen wants pgd[3] to be unshared in the PAE case, and it looks to me like the easiest way to handle that is by making pgd_alloc/free pv-ops and doing the appropriate thing in the Xen code. Would need to sort out the
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...if (!is_huge_zero_pmd(*pmdp))
+			goto unlock_abort;
+		flush = true;
+	} else if (!pmd_none(*pmdp))
+		goto unlock_abort;
+
+	get_page(page);
+	page_add_new_anon_rmap(page, vma, haddr, true);
+	if (!is_zone_device_page(page))
+		lru_cache_add_inactive_or_unevictable(page, vma);
+	if (flush) {
+		pte_free(mm, pgtable);
+		flush_cache_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
+		pmdp_invalidate(vma, haddr, pmdp);
+	} else {
+		pgtable_trans_huge_deposit(mm, pmdp, pgtable);
+		mm_inc_nr_ptes(mm);
+	}
+	set_pmd_at(mm, haddr, pmdp, entry);
+	update_mmu_cache_pmd(vma, haddr, pmdp);
+	add_mm_counter(mm,...
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM selftests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split
2007 Apr 18
1
[RFC, PATCH 19/24] i386 Vmi mmu changes
...tup_pte(page_to_pfn(pte));					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -33,7 +41,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	tlb_remove_page((tlb),(pte));				\
+	mach_release_pte(page_to_pfn(pte));			\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
Index: linux-2.6.16-rc5/include/asm-i386/pgtable-2...
2007 Apr 18
1
[RFC, PATCH 19/24] i386 Vmi mmu changes
...tup_pte(page_to_pfn(pte));					\
 	set_pmd(pmd, __pmd(_PAGE_TABLE +			\
 		((unsigned long long)page_to_pfn(pte) <<	\
-			(unsigned long long) PAGE_SHIFT)))
+			(unsigned long long) PAGE_SHIFT)));	\
+} while (0)
+
 /*
  * Allocate and free page tables.
  */
@@ -33,7 +41,11 @@ static inline void pte_free(struct page
 }

-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+#define __pte_free_tlb(tlb,pte)					\
+do {								\
+	tlb_remove_page((tlb),(pte));				\
+	mach_release_pte(page_to_pfn(pte));			\
+} while (0)

 #ifdef CONFIG_X86_PAE
 /*
Index: linux-2.6.16-rc5/include/asm-i386/pgtable-2...
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted previously [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts but there may be some merge conflicts depending on
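For orientation, a rough consumer-side sketch of the migrate_vma_*() call sequence that this series extends to huge pages, assuming the API of roughly this era. The struct setup is abbreviated (later kernels also take selection flags and a pgmap owner), and demo_migrate_range() is an invented wrapper name, not part of the series.

#include <linux/migrate.h>

/* Rough sketch of a driver walking the migrate_vma_*() sequence; the
 * helper name and the abbreviated field setup are illustrative only. */
static int demo_migrate_range(struct vm_area_struct *vma,
			      unsigned long start, unsigned long end,
			      unsigned long *src, unsigned long *dst)
{
	struct migrate_vma args = {
		.vma   = vma,
		.start = start,
		.end   = end,
		.src   = src,	/* filled in by migrate_vma_setup() */
		.dst   = dst,	/* driver fills with destination entries */
	};
	int ret;

	ret = migrate_vma_setup(&args);		/* isolate and unmap sources */
	if (ret || !args.cpages)
		return ret;

	/* ... allocate device pages, copy the data, populate args.dst ... */

	migrate_vma_pages(&args);		/* install the new pages */
	migrate_vma_finalize(&args);		/* clean up and release sources */
	return 0;
}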
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted previously [1]. This version now supports splitting a THP midway in the migration process which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan
2006 Mar 14
12
[RFC] VMI for Xen?
I'm sure everyone has seen the drop of VMI patches for Linux at this point, but just in case, the link is included below. I've read this version of the VMI spec and have made my way through most of the patches. While I wasn't really that impressed with the first spec wrt Xen, the second version seems to be much more palatable. Specifically, the code inlining and