search for: pmd

Displaying 20 results from an estimated 760 matches for "pmd".

2020 Mar 20
0
[PATCH 2/2] mm/thp: Rename pmd_mknotpresent() as pmd_mknotvalid()
pmd_present() is expected to test positive after pmdp_mknotpresent() as the PMD entry still points to a valid huge page in memory. pmdp_mknotpresent() implies that the given PMD entry has just been invalidated from the MMU's perspective, while still holding on to the valid huge page referred to by pmd_page(), thus also clearing p...
2020 Apr 22
1
[PATCH V2 0/2] mm/thp: Rename pmd_mknotpresent() as pmd_mkinvalid()
This series renames pmd_mknotpresent() as pmd_mkinvalid(). Before that, it drops an existing pmd_mknotpresent() definition from the powerpc platform, which was never required as powerpc defines its pmdp_invalidate() by subscribing to __HAVE_ARCH_PMDP_INVALIDATE. This does not create any functional change. This rename was sug...
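For context, a minimal sketch of how a generic pmdp_invalidate() ties into the helper being renamed, modelled on mm/pgtable-generic.c (treat the exact signatures here as assumptions rather than the merged code): the entry is made invalid for the MMU, yet pmd_page() on it still refers to the live huge page, which is what the THP splitting code relies on.

/*
 * Sketch only: roughly how a generic pmdp_invalidate() would use
 * pmd_mkinvalid(). The helper clears what the MMU needs to treat the
 * entry as invalid, but the entry still carries the huge page's pfn,
 * so pmd_page() keeps working while the split is in progress.
 */
pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
		      pmd_t *pmdp)
{
	pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp));

	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return old;
}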
2020 Mar 20
4
[PATCH 0/2] mm/thp: Rename pmd_mknotpresent() as pmd_mknotvalid()
This series renames pmd_mknotpresent() as pmd_mknotvalid(). Before that, it drops an existing pmd_mknotpresent() definition from the powerpc platform, which was never required as powerpc defines its pmdp_invalidate() by subscribing to __HAVE_ARCH_PMDP_INVALIDATE. This does not create any functional change. This rename was su...
2020 Sep 02
1
[PATCH v2 1/7] mm/thp: fix __split_huge_pmd_locked() for migration PMD
On 2 Sep 2020, at 12:58, Ralph Campbell wrote:
> A migrating transparent huge page has to already be unmapped. Otherwise,
> the page could be modified while it is being copied to a new page and
> data could be lost. The function __split_huge_pmd() checks for a PMD
> migration entry before calling __split_huge_pmd_locked() leading one to
> think that __split_huge_pmd_locked() can handle splitting a migrating PMD.
> However, the code always increments the page->_mapcount and adjusts the
> memory control group accounting assumi...
2020 Sep 02
0
[PATCH v2 1/7] mm/thp: fix __split_huge_pmd_locked() for migration PMD
A migrating transparent huge page has to already be unmapped. Otherwise, the page could be modified while it is being copied to a new page and data could be lost. The function __split_huge_pmd() checks for a PMD migration entry before calling __split_huge_pmd_locked() leading one to think that __split_huge_pmd_locked() can handle splitting a migrating PMD. However, the code always increments the page->_mapcount and adjusts the memory control group accounting assuming the page is mappe...
2020 Sep 03
1
[PATCH v3] mm/thp: fix __split_huge_pmd_locked() for migration PMD
A migrating transparent huge page has to already be unmapped. Otherwise, the page could be modified while it is being copied to a new page and data could be lost. The function __split_huge_pmd() checks for a PMD migration entry before calling __split_huge_pmd_locked() leading one to think that __split_huge_pmd_locked() can handle splitting a migrating PMD. However, the code always increments the page->_mapcount and adjusts the memory control group accounting assuming the page is mappe...
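The shape of the fix, offered as a hedged sketch rather than the merged diff: the per-subpage refcount and rmap accounting is only valid when the PMD held a present mapping, so the migration-entry case has to be separated out before any of that bookkeeping runs.

/*
 * Illustrative sketch only; names and bookkeeping are simplified from
 * the real __split_huge_pmd_locked(). A migration entry carries a pfn
 * but is not a present mapping, so the mapped-page accounting is
 * skipped for it.
 */
bool pmd_migration = is_pmd_migration_entry(old_pmd);
struct page *page;

if (unlikely(pmd_migration)) {
	swp_entry_t entry = pmd_to_swp_entry(old_pmd);

	page = pfn_to_page(swp_offset(entry));
} else {
	page = pmd_page(old_pmd);
	/* one extra reference per PTE-mapped subpage, present case only */
	page_ref_add(page, HPAGE_PMD_NR - 1);
}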
2007 Apr 18
0
[PATCH 1/5] Paravirt page alloc.patch
....flush_tlb_single = native_flush_tlb_single,
+	.alloc_pt = (void *)native_nop,
+	.alloc_pd = (void *)native_nop,
+	.alloc_pd_clone = (void *)native_nop,
+	.release_pt = (void *)native_nop,
+	.release_pd = (void *)native_nop,
+
 	.set_pte = native_set_pte,
 	.set_pte_at = native_set_pte_at,
 	.set_pmd = native_set_pmd,
===================================================================
--- a/arch/i386/mm/init.c
+++ b/arch/i386/mm/init.c
@@ -62,6 +62,7 @@ static pmd_t * __init one_md_table_init(
 #ifdef CONFIG_X86_PAE
 	pmd_table = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+	paravirt_allo...
2007 Apr 18
1
[PATCH 1/5] Add pagetable allocation notifiers
...86/mm/init.c
===================================================================
--- linux-2.6.13.orig/arch/i386/mm/init.c	2005-08-24 09:31:05.000000000 -0700
+++ linux-2.6.13/arch/i386/mm/init.c	2005-08-24 09:31:31.000000000 -0700
@@ -79,6 +79,7 @@ static pte_t * __init one_page_table_ini
 {
 	if (pmd_none(*pmd)) {
 		pte_t *page_table = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+		SetPagePTE(virt_to_page(page_table));
 		set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE));
 		if (page_table != pte_offset_kernel(pmd, 0))
 			BUG();
Index: linux-2.6.13/arch/i386/mm/pageattr.c
===============...
2006 Mar 14
12
[RFC] VMI for Xen?
I'm sure everyone has seen the drop of VMI patches for Linux at this point, but just in case, the link is included below. I've read this version of the VMI spec and have made my way through most of the patches. While I wasn't really that impressed with the first spec wrt Xen, the second version seems to be much more palatable. Specifically, the code inlining and
2007 Apr 18
0
[PATCH 1/6] Page allocation hooks for VMI backend
...while it is protected under the pgd lock to correctly copy the shadow. We don't need to allocate pgds in PAE mode (PDPs in Intel terminology), as they only have 4 entries and are cached entirely by the processor, which makes shadowing them rather simple. For base page table level allocation, pmd_populate provides the exact hook point we need. Also, we need to allocate pages when splitting a large page, and we must release pages before returning the page to any free pool. Despite being required with these slightly odd semantics for VMI, Xen also uses these hooks to determine the exact mo...
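A hedged sketch of the hook shape this excerpt describes, with invented names (pv_pmd_populate and pv_notify_alloc_pt are illustrative, not the 2.6-era paravirt_ops interface): pmd_populate() is the single choke point where a freshly allocated PTE page goes live, so that is where a hypervisor backend learns the page now holds page table entries.

/*
 * Illustrative only; the names below are assumptions. The point is
 * that the notification fires exactly when the PTE page is installed
 * into the PMD, before the guest can use it for translations.
 */
static inline void pv_pmd_populate(struct mm_struct *mm, pmd_t *pmd,
				   struct page *pte_page)
{
	/* tell the hypervisor backend this page now holds PTEs */
	pv_notify_alloc_pt(page_to_pfn(pte_page));
	pmd_populate(mm, pmd, pte_page);	/* normal page table update */
}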
2006 Jun 22
9
OT -- BASH
Can someone explain why this:
find . -depth -print0 | cpio --null -pmd /tmp/test
will copy all files in and below the current directory -and- this:
find . -depth -print | grep -v .iso$ | wc -l
will count all the non-iso files -and- this:
find . -depth -print | grep .iso$ | wc -l
will count *only* the iso files -but- this:
find . -depth -print0 | grep -v .iso$ |...
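A likely answer, offered as an assumption since the thread is truncated here: without -z, grep reads the NUL-delimited -print0 output as one giant newline-free "line", so $ never anchors where expected and -v drops or keeps everything at once. GNU grep's -z (--null-data) keeps the records NUL-terminated to match -print0 and cpio --null:

# hedged fix: -z makes grep use NUL as the record separator, matching -print0;
# the dot is also escaped so '.iso' cannot match a stray character
find . -depth -print0 | grep -z -v '\.iso$' | cpio --null -pmd /tmp/test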
2020 Apr 20
2
[PATCH 2/2] mm/thp: Rename pmd_mknotpresent() as pmd_mknotvalid()
On Fri, Mar 20, 2020 at 10:24:17AM +0530, Anshuman Khandual wrote:
> pmd_present() is expected to test positive after pmdp_mknotpresent() as the
> PMD entry still points to a valid huge page in memory. pmdp_mknotpresent()
> implies that the given PMD entry has just been invalidated from the MMU's
> perspective, while still holding on to the valid huge page referred to by
> pmd_page(), thus...
2020 Jun 22
1
[PATCH 09/16] mm/hmm: add output flag for compound page mapping
...
> > > than order 0 (PAGE_SIZE), there is no indication that the page is mapped
> > > using a larger page size. To be fully general, hmm_range_fault() would need
> > > to return the mapping size to handle cases like a 1GB compound page being
> > > mapped with 2MB PMD entries. However, the most common case is the mapping
> > > size is the same as the underlying compound page size.
> > > Add a new output flag to indicate this so that callers know it is safe to
> > > use a large device page table mapping if one is available.
> > &...
2020 Jun 22
2
[PATCH 09/16] mm/hmm: add output flag for compound page mapping
...nd_order(page) but if the page is larger
> than order 0 (PAGE_SIZE), there is no indication that the page is mapped
> using a larger page size. To be fully general, hmm_range_fault() would need
> to return the mapping size to handle cases like a 1GB compound page being
> mapped with 2MB PMD entries. However, the most common case is the mapping
> size is the same as the underlying compound page size.
> Add a new output flag to indicate this so that callers know it is safe to
> use a large device page table mapping if one is available.

But what size should the caller use? You...
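A hedged caller-side sketch of what the proposed flag enables (HMM_PFN_COMPOUND is the flag name from this series as quoted; treat it and the surrounding details as assumptions, not a confirmed merged API): when the flag is set, the driver can size its device mapping from the compound page instead of defaulting to PAGE_SIZE.

/*
 * Sketch under assumptions: hmm_pfn_to_page() is the hmm.h helper for
 * decoding a range->hmm_pfns[] entry; HMM_PFN_COMPOUND is the output
 * flag proposed in this series. Not a definitive implementation.
 */
unsigned long pfn = range->hmm_pfns[i];
struct page *page = hmm_pfn_to_page(pfn);
unsigned int order = 0;

if (pfn & HMM_PFN_COMPOUND)
	/* the whole compound page is mapped at its natural size */
	order = compound_order(compound_head(page));

/* the driver may now install one device entry covering 1 << order pages */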
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...+void prep_compound_page(struct page *page, unsigned int order);
>
> #ifdef CONFIG_MMU
> /*
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 78c84bee7e29..25d95f7b1e98 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1663,23 +1663,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> 	} else {
> 		struct page *page = NULL;
> 		int flush_needed = 1;
> +		bool is_anon = false;
>
> 		if (pmd_present(orig_pmd)) {
> 			page = pmd_page(orig_pmd);
> +			is_anon = PageAnon(page);
> 			page_remove_rma...
2020 Apr 28
0
[PATCH v3 22/75] x86/boot/compressed/64: Add set_page_en/decrypted() helpers
...<asm/trap_defs.h>
+#include <asm/cmpxchg.h>
 
 /* Use the static base for this part of the boot process */
 #undef __PAGE_OFFSET
 #define __PAGE_OFFSET __PAGE_OFFSET_BASE
@@ -157,6 +158,139 @@ void initialize_identity_maps(void)
 	write_cr3(top_level_pgt);
 }
+static pte_t *split_large_pmd(struct x86_mapping_info *info,
+			      pmd_t *pmdp, unsigned long __address)
+{
+	unsigned long page_flags;
+	unsigned long address;
+	pte_t *pte;
+	pmd_t pmd;
+	int i;
+
+	pte = (pte_t *)info->alloc_pgt_page(info->context);
+	if (!pte)
+		return NULL;
+
+	address = __address & PMD_...
2007 Apr 18
1
[RFC/PATCH LGUEST X86_64 01/13] HV VM Fix map area for HV.
...e.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/interrupt.h>
+
+#include <asm/hv_vm.h>
+
+static DEFINE_MUTEX(hvvm_lock);
+
+static DECLARE_BITMAP(hvvm_avail_pages, NR_HV_PAGES);
+
+
+static void hvvm_pte_unmap(pmd_t *pmd, unsigned long addr)
+{
+	pte_t *pte;
+	pte_t ptent;
+
+	pte = pte_offset_kernel(pmd, addr);
+	ptent = ptep_get_and_clear(&init_mm, addr, pte);
+	WARN_ON(!pte_none(ptent) && !pte_present(ptent));
+}
+
+static inline void hvvm_pmd_unmap(pud_t *pud, unsigned long addr)
+{
+	pmd_t *...