search for: pmds

Displaying 20 results from an estimated 63 matches for "pmds".

2007 Feb 14
2
[PATCH 8/8] 2.6.17: scan DMI early
...it phys_pmd_init(pmd_t *pmd, unsigned long address, unsigned long end) @@ -648,9 +670,9 @@ void __init extend_init_mapping(unsigned } } -static void __init find_early_table_space(unsigned long end) +static unsigned long __init find_early_table_space(unsigned long end) { - unsigned long puds, pmds, ptes, tables; + unsigned long puds, pmds, ptes, tables, fixmap_tables; puds = (end + PUD_SIZE - 1) >> PUD_SHIFT; pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT; @@ -660,7 +682,16 @@ static void __init find_early_table_spac round_up(pmds * 8, PAGE_SIZE) + round_up(ptes * 8, PAGE...
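The truncated hunk sizes the boot-time page tables: the range to be mapped needs one 8-byte entry per PUD-, PMD-, and page-sized chunk at each level, with each level's tables rounded up to whole pages. A standalone sketch of that arithmetic (the helper name is ours; the shifts, sizes, and round_up() are the real x86-64 ones):

static unsigned long __init early_table_bytes(unsigned long end)
{
	unsigned long puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
	unsigned long pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
	unsigned long ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;

	/* 8 bytes per 64-bit entry; each level's tables are rounded
	 * up to a whole page, matching the snippet's formula. */
	return round_up(puds * 8, PAGE_SIZE) +
	       round_up(pmds * 8, PAGE_SIZE) +
	       round_up(ptes * 8, PAGE_SIZE);
}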
2009 Sep 21
1
[PATCH 2/5] lguest: use set_pte/set_pmd uniformly for real page table entries
...is overkill here. */ - native_set_pmd(&pmd, __pmd(((unsigned long)(linear + i) - - mem_base) | _PAGE_PRESENT | _PAGE_RW | _PAGE_USER)); + pmd = pfn_pmd(((unsigned long)&linear[i] - mem_base)/PAGE_SIZE, + __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER)); if (copy_to_user(&pmds[j], &pmd, sizeof(pmd)) != 0) return -EFAULT; } /* One PGD entry, pointing to that PMD page. */ - set_pgd(&pgd, __pgd(((u32)pmds - mem_base) | _PAGE_PRESENT)); + pgd = __pgd(((unsigned long)pmds - mem_base) | _PAGE_PRESENT); /* Copy it in as the first PGD entry (ie. addresses 0-1...
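What the hunk changes, laid out for readability: instead of hand-packing a physical address and flag bits through native_set_pmd(), the entry is built with the pfn_pmd() constructor from a page frame number and a pgprot. Reassembled from the snippet (linear and mem_base follow the names there):

/* Before: pack the physical address and flags by hand. */
native_set_pmd(&pmd, __pmd(((unsigned long)(linear + i) - mem_base)
			   | _PAGE_PRESENT | _PAGE_RW | _PAGE_USER));

/* After: derive the pfn and let pfn_pmd() encode the entry. */
pmd = pfn_pmd(((unsigned long)&linear[i] - mem_base) / PAGE_SIZE,
	      __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER));
if (copy_to_user(&pmds[j], &pmd, sizeof(pmd)) != 0)
	return -EFAULT;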
2012 Dec 27
30
[PATCH v3 00/11] xen: Initial kexec/kdump implementation
Hi, This set of patches contains the initial kexec/kdump implementation for Xen v3. Currently only dom0 is supported; however, almost all infrastructure required for domU support is ready. Jan Beulich suggested merging the Xen x86 assembler code with the baremetal x86 code. This could simplify the kernel code and reduce its size a bit. However, this solution requires some changes in the baremetal x86 code. First of
2011 Jul 18
2
[PATCH tip/x86/mm] x86_32: calculate additional memory needed by the fixmap
...++++++++++++++ 1 files changed, 47 insertions(+), 0 deletions(-) diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index e72c9f8..a7ee16b 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -33,6 +33,9 @@ static void __init find_early_table_space(unsigned long start, { unsigned long pmds = 0, ptes = 0, tables = 0, good_end = end, pud_mapped = 0, pmd_mapped = 0, size = end - start; + int kmap_begin_pmd_idx, kmap_end_pmd_idx; + int fixmap_begin_pmd_idx, fixmap_end_pmd_idx; + int btmap_begin_pmd_idx; phys_addr_t base; pud_mapped = DIV_ROUND_UP(PFN_PHYS(max_pfn_mapped), @@...
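The new index variables record which pmd slots the kmap, fixmap, and boot-time mapping regions fall into, so their page tables can be added to the early estimate. A hedged sketch of that indexing (pmd_no() is our name; FIXADDR_START/FIXADDR_TOP are the real x86_32 region bounds; this is a flat pmd number across the address space, not the per-table pmd_index()):

static inline unsigned long pmd_no(unsigned long vaddr)
{
	/* one pmd entry covers PMD_SIZE of virtual address space */
	return vaddr >> PMD_SHIFT;
}

/* Extra pte pages the fixmap needs beyond the linear-map estimate:
 * one per pmd the region spans (our reading of the begin/end pairs). */
unsigned long fixmap_pmds = pmd_no(FIXADDR_TOP) - pmd_no(FIXADDR_START) + 1;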
2009 Jun 05
1
[PATCH] lguest: PAE support
...entry's not present, there's nothing to release. */ + if (pgd_flags(*spgd) & _PAGE_PRESENT) { + unsigned int i; + pmd_t *pmdpage = __va(pgd_pfn(*spgd) << PAGE_SHIFT); + + for (i = 0; i < PTRS_PER_PMD; i++) + release_pmd(&pmdpage[i]); + + /* Now we can free the page of PMDs */ + free_page((long)pmdpage); + /* And zero out the PGD entry so we never release it twice. */ + set_pgd(spgd, __pgd(0)); + } +} + +#else /* !CONFIG_X86_PAE */ /*H:450 If we chase down the release_pgd() code, it looks like this: */ static void release_pgd(pgd_t *spgd) { @@ -341,7 +494,7 @@ s...
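Reassembled from the hunk for readability, the PAE release path: a present top-level entry points at a whole page of pmds, so each pmd's pte page is released first, then the pmd page itself is freed and the entry zeroed so it is never released twice.

static void release_pgd(pgd_t *spgd)
{
	/* If the entry's not present, there's nothing to release. */
	if (pgd_flags(*spgd) & _PAGE_PRESENT) {
		unsigned int i;
		pmd_t *pmdpage = __va(pgd_pfn(*spgd) << PAGE_SHIFT);

		for (i = 0; i < PTRS_PER_PMD; i++)
			release_pmd(&pmdpage[i]);

		/* Now we can free the page of PMDs */
		free_page((long)pmdpage);
		/* And zero the PGD entry so we never release it twice. */
		set_pgd(spgd, __pgd(0));
	}
}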
2009 Apr 16
1
NULL pointer dereference at __switch_to() ( __unlazy_fpu ) with lguest PAE patch
...ntry's not present, there's nothing to release. */ + if (pgd_flags(*spgd) & _PAGE_PRESENT) { + unsigned int i; + pmd_t *pmdpage = __va(pgd_pfn(*spgd) << PAGE_SHIFT); + + for (i = 0; i < PTRS_PER_PMD; i++) + release_pmd(&pmdpage[i]); + + /* Now we can free the page of PMDs */ + free_page((long)pmdpage); + /* And zero out the PGD entry so we never release it twice. */ + native_set_pud ((pud_t *)spgd, __pud(0)); + } +} + +#else /* !CONFIG_X86_PAE */ + /*H:450 If we chase down the release_pgd() code, it looks like this: */ -static void release_pgd(struct lguest *lg,...
2007 Apr 18
0
[PATCH 7/9] 00mma remove set pte atomic.patch
...el.h +++ b/include/asm-m32r/pgtable-2level.h @@ -44,7 +44,7 @@ static inline int pgd_present(pgd_t pgd) */ #define set_pte(pteptr, pteval) (*(pteptr) = pteval) #define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) -#define set_pte_atomic(pteptr, pteval) set_pte(pteptr, pteval) + /* * (pmds are folded into pgds so this doesn't get actually called, * but the define is needed for a generic inline function.)
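The folding the comment mentions, sketched in the style of the kernel's generic headers (include/asm-generic/pgtable-nopmd.h; shapes assumed for this sketch): on a two-level configuration the pmd is just the upper-level entry wearing a different type, so generic page-table walkers still compile but the extra level costs nothing at runtime.

typedef struct { pud_t pud; } pmd_t;
#define PTRS_PER_PMD	1

static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
	/* no separate pmd table: just reinterpret the upper level */
	return (pmd_t *)pud;
}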
2007 Apr 18
1
[RFC/PATCH LGUEST X86_64 00/13] Lguest for the x86_64
...pmd that holds the pte. And if needed, we can find the pud that holds the pmd, and the pgd/cr3 that holds the pud. This facilitates managing the page tables. TODO: ===== To prevent a guest from stealing all the host's memory pages, we can use these hashes to also limit the number of puds, pmds, and ptes. If a page is not pinned (currently used), we can set up LRU lists, find those pages that are somewhat stale, and free them. This can be done safely since we have all the info we need to put them back if the guest needs them again. cr3: ==== Right now we hold many more cr3/pgd&...
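A hedged sketch of what that TODO could look like; every type and name below is hypothetical, not code from the series. Unpinned shadow page-table pages go on an LRU, and reclaim frees from the cold end once a guest exceeds its budget; since the guest can always be made to refault the mapping, dropping a stale shadow is safe.

struct shadow_page {
	struct list_head lru;	/* cold pages drift to the tail */
	unsigned long gpa;	/* guest page this shadow backs */
	bool pinned;		/* in active use: not reclaimable */
};

static LIST_HEAD(shadow_lru);

static void shadow_reclaim_one(void)
{
	struct shadow_page *sp =
		list_last_entry(&shadow_lru, struct shadow_page, lru);

	list_del(&sp->lru);
	/* release the pmds/ptes it holds, then free the page itself;
	 * the guest simply refaults if it touches the mapping again */
}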
2007 Apr 18
1
[RFC, PATCH 19/24] i386 Vmi mmu changes
...= 1) { + mach_setup_pgd(__pa(pgd) >> PAGE_SHIFT, + __pa(swapper_pg_dir) >> PAGE_SHIFT, + USER_PTRS_PER_PGD, + PTRS_PER_PGD - USER_PTRS_PER_PGD); + return pgd; + } + + /* PAE mode will set up the pmds here */ + mach_setup_pgd(__pa(pgd) >> PAGE_SHIFT, + __pa(swapper_pg_dir) >> PAGE_SHIFT, + USER_PTRS_PER_PGD, + PTRS_PER_PGD - USER_PTRS_PER_PGD); for (i = 0; i < USER_PTRS_PER_PGD; ++i) { pmd_t *pmd = kmem_cache_a...
2004 Jul 26
0
FW: IA64 test report: 2.6.8-rc1 /tiger 2004-7-20: Boot Hang!
...xc418) vector 59 ACPI: PCI interrupt 0000:12:01.0[A] -> GSI 120 (level, low) -> IRQ 59 GSI 143 (level, low) -> CPU 2 (0xc418) vector 60 ACPI: PCI interrupt 0000:12:1f.0[A] -> GSI 143 (level, low) -> IRQ 60 perfmon: version 2.0 IRQ 238 perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits) PAL Information Facility v0.5 perfmon: added sampling format default_format perfmon_default_smpl: default_format v2.0 registered Total HugeTLB memory allocated, 0 Installing knfsd (copyright (C) 1996 okir at monad.swb.de). udf: registering filesystem Initializing Crypto...
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
..._vma); } end = -1; mapping = NULL; - anon_vma_lock_write(anon_vma); } else { mapping = head->mapping; @@ -2686,13 +2719,19 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) /* * Racy check if we can split the page, before unmap_page() will * split PMDs + * If we are splitting a migrating THP, there is no check needed + * because the page is already unmapped and isolated from the LRU. */ - if (!can_split_huge_page(head, &extra_pins)) { + if (!remap) + extra_pins = thp_nr_pages(page) - 1 + + is_device_private_page(head); + else if (!can...
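The reasoning in the hunk: splitting a normally-mapped THP needs the racy can_split_huge_page() pin check, but a migrating THP is already unmapped and isolated from the LRU, so its expected extra pin count can be computed directly. Restated from the diff (remap distinguishes the two cases; the failure path is truncated in the excerpt):

if (!remap) {
	/* migrating THP: one expected pin per tail page, plus one
	 * if the head is a device-private page */
	extra_pins = thp_nr_pages(page) - 1 + is_device_private_page(head);
} else if (!can_split_huge_page(head, &extra_pins)) {
	/* ordinary split: bail out as before (truncated above) */
}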
2008 May 23
6
[PATCH 0 of 4] mm+paravirt+xen: add pte read-modify-write abstraction
Hi all, This little series adds a new transaction-like abstraction for doing RMW updates to a pte, hooks it into paravirt_ops, and then makes use of it in Xen. The basic problem is that mprotect is very slow under Xen (up to 50x slower than native), primarily because of the ptent = ptep_get_and_clear(mm, addr, pte); ptent = pte_modify(ptent, newprot); /* ... */ set_pte_at(mm, addr, pte,
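The pattern the cover letter quotes is three separate pte operations, each of which traps to the hypervisor under Xen; the series wraps the sequence in a start/commit pair so a paravirt backend can batch it. A sketch of both shapes (the start/commit names follow the ptep_modify_prot_* hooks this series is associated with in mainline; treat that mapping as our assumption):

/* Before: three round trips per pte under a trap-and-emulate setup. */
ptent = ptep_get_and_clear(mm, addr, pte);
ptent = pte_modify(ptent, newprot);
set_pte_at(mm, addr, pte, ptent);

/* After: a transaction-like pair a hypervisor backend can batch. */
ptent = ptep_modify_prot_start(mm, addr, pte);
ptent = pte_modify(ptent, newprot);
ptep_modify_prot_commit(mm, addr, pte, ptent);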