search for: pte_present

Displaying 16 results from an estimated 89 matches for "pte_present".

2006 Mar 14
12
[RFC] VMI for Xen?
I'm sure everyone has seen the drop of VMI patches for Linux at this point, but just in case, the link is included below. I've read this version of the VMI spec and have made my way through most of the patches. While I wasn't really that impressed with the first spec wrt Xen, the second version seems to be much more palatable. Specifically, the code inlining and
2008 May 27
3
[PATCH] VT-d: IOTLB flush fixups
...rivers/passthrough/vtd/iommu.c --- a/xen/drivers/passthrough/vtd/iommu.c Tue May 27 11:46:52 2008 +0100 +++ b/xen/drivers/passthrough/vtd/iommu.c Tue May 27 17:16:51 2008 +0100 @@ -1525,6 +1525,7 @@ struct iommu *iommu; struct dma_pte *page = NULL, *pte = NULL; u64 pg_maddr; + int pte_present; drhd = list_entry(acpi_drhd_units.next, typeof(*drhd), list); iommu = drhd->iommu; @@ -1540,6 +1541,7 @@ return -ENOMEM; page = (struct dma_pte *)map_vtd_domain_page(pg_maddr); pte = page + (gfn & LEVEL_MASK); + pte_present = dma_pte_present(*pte); dma...
2008 May 15
0
[PATCH] linux/x86: utilize lookup_address() for virt_to_ptep()
...irt_to_machine(__va) \ -({ \ - maddr_t m = (maddr_t)pte_mfn(*virt_to_ptep(__va)) << PAGE_SHIFT;\ - m | ((unsigned long)(__va) & (PAGE_SIZE-1)); \ -}) +#define virt_to_ptep(va) \ +({ \ + pte_t *__ptep = lookup_address((unsigned long)(va)); \ + BUG_ON(!__ptep || !pte_present(*__ptep)); \ + __ptep; \ +}) + +#define arbitrary_virt_to_machine(va) \ + (((maddr_t)pte_mfn(*virt_to_ptep(va)) << PAGE_SHIFT) \ + | ((unsigned long)(va) & (PAGE_SIZE - 1))) #endif /* !__ASSEMBLY__ */ Index: head-2008-05-08/include/asm-x86_64/mach-xen/asm/pgtable.h ===...
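Reconstructed from the truncated excerpt, the replacement macros this patch introduces read:

    #define virt_to_ptep(va)                                             \
    ({                                                                   \
            pte_t *__ptep = lookup_address((unsigned long)(va));         \
            BUG_ON(!__ptep || !pte_present(*__ptep));                    \
            __ptep;                                                      \
    })

    #define arbitrary_virt_to_machine(va)                                \
            (((maddr_t)pte_mfn(*virt_to_ptep(va)) << PAGE_SHIFT)         \
             | ((unsigned long)(va) & (PAGE_SIZE - 1)))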
2008 Mar 19
10
Illegal PV kernel mfn/pfn translations on PROT_NONE ioremaps
...ptes from being mfn-based to pfn-based when the hardware _PAGE_PRESENT bit is cleared. We do this for PROT_NONE pages, which appear to the HV to be non-present, but which are special-cased in the kernel to appear present (a different bit in the pte remains set for these pages and is caught by the pte_present() tests.) Unfortunately, it looks like recent X servers are attempting to do mprotect(PROT_NONE) and back on regions of ioremap()ed memory. When we do so, the translation of mfn to pfn results on x86_64 in end_pfn: maddr.h: static inline unsigned long mfn_to_pfn(unsigned long mfn) { ... if (unl...
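The special-casing described here matches the x86 definition quoted in the paravirt_ops hit further down: pte_present() tests _PAGE_PROTNONE as well as _PAGE_PRESENT, so a PROT_NONE pte looks absent to the hardware (and hence the hypervisor) but present to the kernel. A stand-alone sketch of that distinction, with bit values as in the i386 headers of that era (illustrative, not the kernel's code):

    #include <stdio.h>

    #define _PAGE_PRESENT  0x001UL   /* hardware present bit */
    #define _PAGE_PROTNONE 0x080UL   /* software bit left set for PROT_NONE */

    /* What the hardware page walk (and so the hypervisor) tests. */
    static int hw_present(unsigned long pte)
    {
            return !!(pte & _PAGE_PRESENT);
    }

    /* What the kernel tests: PROT_NONE pages still count as present. */
    static int pte_present(unsigned long pte)
    {
            return !!(pte & (_PAGE_PRESENT | _PAGE_PROTNONE));
    }

    int main(void)
    {
            unsigned long prot_none_pte = _PAGE_PROTNONE; /* _PAGE_PRESENT clear */

            printf("hardware sees present: %d\n", hw_present(prot_none_pte));
            printf("kernel sees present:   %d\n", pte_present(prot_none_pte));
            return 0;
    }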
2008 Feb 01
0
[PATCH] linux/x86: make xen_change_pte_range() compatible with CONFIG_HIGHPTE
...= --- head-2008-01-28.orig/arch/i386/mm/hypervisor.c 2007-10-19 17:18:37.000000000 +0200 +++ head-2008-01-28/arch/i386/mm/hypervisor.c 2008-01-31 17:38:56.000000000 +0100 @@ -569,7 +569,9 @@ int xen_change_pte_range(struct mm_struc pte = pte_offset_map_lock(mm, pmd, addr, &ptl); do { if (pte_present(*pte)) { - u[i].ptr = virt_to_machine(pte) | MMU_PT_UPDATE_PRESERVE_AD; + u[i].ptr = (__pmd_val(*pmd) & PHYSICAL_PAGE_MASK) + | ((unsigned long)pte & ~PAGE_MASK) + | MMU_PT_UPDATE_PRESERVE_AD; u[i].val = __pte_val(pte_modify(*pte, newprot)); if (++i == MAX_BATCHED_FU...
2007 Apr 18
0
[PATCH 2/5] Add subarch mmu queue flush hook
...======================================================== --- linux-2.6.13.orig/arch/i386/mm/fault.c 2005-08-24 09:30:53.000000000 -0700 +++ linux-2.6.13/arch/i386/mm/fault.c 2005-08-24 09:43:27.000000000 -0700 @@ -562,6 +562,15 @@ vmalloc_fault: pte_k = pte_offset_kernel(pmd_k, address); if (!pte_present(*pte_k)) goto no_context; + + /* + * We have just updated this root with a copy of the kernel + * pmd. To return without flushing would introduce a fault + * loop if running on a hypervisor which uses queued page + * table updates. + */ + update_mmu_cache(vma, address, pte_k); +...
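Reflowed from the truncated diff, the patched tail of the i386 vmalloc_fault path is roughly:

    vmalloc_fault:
            pte_k = pte_offset_kernel(pmd_k, address);
            if (!pte_present(*pte_k))
                    goto no_context;

            /*
             * We have just updated this root with a copy of the kernel
             * pmd.  To return without flushing would introduce a fault
             * loop if running on a hypervisor which uses queued page
             * table updates.
             */
            update_mmu_cache(vma, address, pte_k);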
2020 Mar 16
0
[PATCH 4/4] mm: check the device private page owner in hmm_range_fault
...mm_range *range, + swp_entry_t entry) +{ + return is_device_private_entry(entry) && + device_private_entry_to_page(entry)->pgmap->owner == + range->dev_private_owner; +} + static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte) { if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte)) @@ -254,7 +262,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, * Never fault in device private pages pages, but just report * the PFN even if not present. */ - if (is_device_private_entry(entry)) { + if (hmm_is_device_private_ent...
2020 Mar 16
0
[PATCH 2/4] mm: handle multiple owners of device private pages in migrate_vma
...md_t *pmdp, arch_enter_lazy_mmu_mode(); for (; addr < end; addr += PAGE_SIZE, ptep++) { - unsigned long mpfn, pfn; + unsigned long mpfn = 0, pfn; struct page *page; swp_entry_t entry; pte_t pte; @@ -2255,8 +2255,6 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, } if (!pte_present(pte)) { - mpfn = 0; - /* * Only care about unaddressable device page special * page table entry. Other special swap entries are not @@ -2267,11 +2265,16 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, goto next; page = device_private_entry_to_page(entry); + if (page-...
2020 Mar 16
14
ensure device private pages have an owner v2
When acting on device private mappings a driver needs to know if the device (or other entity in case of kvmppc) actually owns this private mapping. This series adds an owner field and converts the migrate_vma code over to check it. I looked into doing the same for hmm_range_fault, but as far as I can tell that code has never been wired up to actually work for device private memory, so instead of
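The owner test the series adds shows up in the hmm_range_fault hit above; reflowed, with the signature reconstructed from the truncated excerpt, the new helper is roughly:

    static inline bool hmm_is_device_private_entry(struct hmm_range *range,
                                                   swp_entry_t entry)
    {
            /* Claim the entry only if the page's pgmap owner matches the
             * cookie the caller passed in range->dev_private_owner. */
            return is_device_private_entry(entry) &&
                   device_private_entry_to_page(entry)->pgmap->owner ==
                           range->dev_private_owner;
    }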
2020 Mar 16
4
ensure device private pages have an owner
When acting on device private mappings a driver needs to know if the device (or other entity in case of kvmppc) actually owns this private mapping. This series adds an owner field and converts the migrate_vma code over to check it. I looked into doing the same for hmm_range_fault, but as far as I can tell that code has never been wired up to actually work for device private memory, so instead of
2007 Apr 18
0
[PATCH 3/9] 00mm3 lazy mmu mode hooks.patch
...=============================================================== --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -34,6 +34,7 @@ static void change_pte_range(struct mm_s spinlock_t *ptl; pte = pte_offset_map_lock(mm, pmd, addr, &ptl); + arch_enter_lazy_mmu_mode(); do { oldpte = *pte; if (pte_present(oldpte)) { @@ -70,6 +71,7 @@ static void change_pte_range(struct mm_s } } while (pte++, addr += PAGE_SIZE, addr != end); + arch_leave_lazy_mmu_mode(); pte_unmap_unlock(pte - 1, ptl); } =================================================================== --- a/mm/mremap.c +++ b/mm/mremap....
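Reflowed, the change_pte_range() hunk shows the bracketing pattern: enter lazy MMU mode after taking the PTE lock and leave it before unlocking, so a paravirtualized backend can batch every PTE write in between:

    pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
    arch_enter_lazy_mmu_mode();
    do {
            oldpte = *pte;
            if (pte_present(oldpte)) {
                    /* ... rewrite the pte with the new protections ... */
            }
    } while (pte++, addr += PAGE_SIZE, addr != end);
    arch_leave_lazy_mmu_mode();
    pte_unmap_unlock(pte - 1, ptl);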
2020 Mar 16
0
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
...mp; range->flags[HMM_PFN_WRITE]; - *fault = true; - } - return; - } /* If CPU page table is not valid then we need to fault */ *fault = !(cpu_flags & range->flags[HMM_PFN_VALID]); @@ -259,25 +250,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, if (!pte_present(pte)) { swp_entry_t entry = pte_to_swp_entry(pte); - /* - * This is a special swap entry, ignore migration, use - * device and report anything else as error. - */ - if (is_device_private_entry(entry)) { - cpu_flags = range->flags[HMM_PFN_VALID] | - range->flags[HMM_PFN_DEVIC...
2008 Mar 20
0
[RFC/PATCH 02/15] preparation: host memory management changes for s390 kvm
...y & _PAGE_REFERENCED) + rcp_set_bits(ptep, _PAGE_RCP_GR); + if (rcp_test_and_clear_bits(ptep, _PAGE_RCP_HC)) + SetPageDirty(page); + if (rcp_test_and_clear_bits(ptep, _PAGE_RCP_HR)) + SetPageReferenced(page); +#endif +} + /* * query functions pte_write/pte_dirty/pte_young only work if * pte_present() is true. Undefined behaviour if not.. @@ -599,6 +668,8 @@ static inline void pmd_clear(pmd_t *pmd) static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { + if (mm->context.pgstes) + ptep_rcp_copy(ptep); pte_val(*ptep) = _PAGE_TYPE_EMPTY; if (mm->conte...
2007 Apr 18
0
[RFC/PATCH PV_OPS X86_64 08/17] paravirt_ops - memory managment
....\n", __FILE__, __LINE__, &(e), pud_val(e)) +#define pgd_ERROR(e) \ + printk("%s:%d: bad pgd %p(%016lx).\n", __FILE__, __LINE__, &(e), pgd_val(e)) struct mm_struct; @@ -238,7 +250,6 @@ static inline unsigned long pmd_bad(pmd_ #define pte_none(x) (!pte_val(x)) #define pte_present(x) (pte_val(x) & (_PAGE_PRESENT | _PAGE_PROTNONE)) -#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0) #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT)) /* FIXME: is this right? */ @@ -247,11 +258,11 @@ static inline unsigned long pmd_bad(pmd_...
2007 Apr 18
1
[RFC, PATCH 19/24] i386 Vmi mmu changes
..._context; set_pmd(pmd, *pmd_k); + /* + * Needed. We have just updated this root with a copy of + * the kernel pmd. To return without flushing would + * introduce a fault loop. + */ + update_mmu_cache(NULL, pmd, pmd_k->pmd); + pte_k = pte_offset_kernel(pmd_k, address); if (!pte_present(*pte_k)) goto no_context; Index: linux-2.6.16-rc5/arch/i386/mm/init.c =================================================================== --- linux-2.6.16-rc5.orig/arch/i386/mm/init.c 2006-03-10 12:55:05.000000000 -0800 +++ linux-2.6.16-rc5/arch/i386/mm/init.c 2006-03-10 15:57:08.000000000 -080...
2007 Apr 18
1
[RFC/PATCH LGUEST X86_64 01/13] HV VM Fix map area for HV.
...(hvvm_lock); + +static DECLARE_BITMAP(hvvm_avail_pages, NR_HV_PAGES); + + +static void hvvm_pte_unmap(pmd_t *pmd, unsigned long addr) +{ + pte_t *pte; + pte_t ptent; + + pte = pte_offset_kernel(pmd, addr); + ptent = ptep_get_and_clear(&init_mm, addr, pte); + WARN_ON(!pte_none(ptent) && !pte_present(ptent)); +} + +static inline void hvvm_pmd_unmap(pud_t *pud, unsigned long addr) +{ + pmd_t *pmd; + + pmd = pmd_offset(pud, addr); + if (pmd_none_or_clear_bad(pmd)) + return; + hvvm_pte_unmap(pmd, addr); +} + +static inline void hvvm_pud_unmap(pgd_t *pgd, unsigned long addr) +{ + pud_t *pud; + + p...