search for: pmd_shift

Displaying 20 results from an estimated 27 matches for "pmd_shift".

2019 Sep 11
0
[vhost:linux-next 16/17] include/linux/page_reporting.h:9:34: note: in expansion of macro 'pageblock_order'
...from include/linux/kobject.h:20,
   from include/linux/device.h:16,
   from drivers/scsi/snic/snic_attrs.c:19:
include/linux/page_reporting.h: In function '__del_page_from_reported_list':
arch/riscv/include/asm/page.h:24:22: error: 'PMD_SHIFT' undeclared (first use in this function); did you mean 'NMI_SHIFT'?
 #define HPAGE_SHIFT PMD_SHIFT
                     ^
arch/riscv/include/asm/page.h:27:34: note: in expansion of macro 'HPAGE_SHIFT'
 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)...
2011 Jul 18
2
[PATCH tip/x86/mm] x86_32: calculate additional memory needed by the fixmap
...d_mapped = DIV_ROUND_UP(PFN_PHYS(max_pfn_mapped),
@@ -92,6 +95,50 @@ static void __init find_early_table_space(unsigned long start,
     } else
         ptes = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+#ifdef CONFIG_X86_32
+    fixmap_begin_pmd_idx = __fix_to_virt(__end_of_fixed_addresses - 1)
+            >> PMD_SHIFT;
+    /*
+     * fixmap_end_pmd_idx is the end of the fixmap minus the PMD that
+     * has been defined in the data section by head_32.S (see
+     * initial_pg_fixmap).
+     * Note: This is similar to what early_ioremap_page_table_range_init
+     * does except that the "end" has PMD_SIZE expunged as pe...
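The hunk above finds the fixmap's first PMD slot by shifting a fixmap virtual address right by PMD_SHIFT. A minimal standalone sketch of the same arithmetic, counting how many PMD entries a virtual range spans (constants assume x86 with PAE or x86_64; nothing here comes from the patch itself):

#include <stdio.h>

#define PMD_SHIFT 21                     /* assumption: 2M PMD coverage */
#define PMD_SIZE  (1UL << PMD_SHIFT)

/* Number of PMD entries touched by the virtual range [start, end). */
static unsigned long pmds_spanned(unsigned long start, unsigned long end)
{
        return ((end - 1) >> PMD_SHIFT) - (start >> PMD_SHIFT) + 1;
}

int main(void)
{
        /* A range just over 2M that straddles a PMD boundary needs 2 PMDs. */
        printf("%lu\n", pmds_spanned(PMD_SIZE - 4096, PMD_SIZE + 4096));
        return 0;
}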
2007 May 29
0
Fw: [RFC] makedumpfile: xen extraction
...L(frametable_pg_dir) - DIRECTMAP_VIRT_START;
+    dirp += ((addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)) * sizeof(unsigned long long);
+    if (!readpmem(info, dirp, &entry, sizeof(entry)))
+        return FALSE;
+
+    dirp = entry & _PFN_MASK;
+    if (!dirp)
+        return 0;
+    dirp += ((addr >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) * sizeof(unsigned long long);
+    if (!readpmem(info, dirp, &entry, sizeof(entry)))
+        return FALSE;
+
+    dirp = entry & _PFN_MASK;
+    if (!dirp)
+        return 0;
+    dirp += ((addr >> PAGESHIFT()) & (PTRS_PER_PTE - 1)) * sizeof(unsigned long long);
+    if (!readpme...
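The walk above repeats one pattern per level: extract the table index from the virtual address with that level's shift, then turn it into a byte offset into the directory page read out of the dump. A hedged standalone rendering of that step (constants are illustrative; the real values depend on the dumped kernel's paging mode):

#define PMD_SHIFT    21      /* assumed: 4K pages, 512-entry directories */
#define PTRS_PER_PMD 512

/* Byte offset of the PMD entry for addr inside a directory that
 * starts at table_base (the pattern used at every level above). */
static unsigned long long pmd_entry_offset(unsigned long long table_base,
                                           unsigned long long addr)
{
        unsigned long long idx = (addr >> PMD_SHIFT) & (PTRS_PER_PMD - 1);

        return table_base + idx * sizeof(unsigned long long);
}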
2007 Feb 14
2
[PATCH 8/8] 2.6.17: scan DMI early
...rly_table_space(unsigned long end)
+static unsigned long __init find_early_table_space(unsigned long end)
 {
-    unsigned long puds, pmds, ptes, tables;
+    unsigned long puds, pmds, ptes, tables, fixmap_tables;
     puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
     pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
@@ -660,7 +682,16 @@ static void __init find_early_table_spac
         round_up(pmds * 8, PAGE_SIZE) +
         round_up(ptes * 8, PAGE_SIZE);
-    extend_init_mapping(tables);
+    /* Also reserve pages for fixmaps that need to be set up early.
+     * Their pud is shared with the kernel pud.
+     */
+    pmds = (PMD_...
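The puds/pmds/ptes lines above are round-up divisions in disguise: (end + PMD_SIZE - 1) >> PMD_SHIFT is the number of 2M units needed to cover end bytes. A small sketch checking that equivalence (values assumed, not taken from the patch):

#include <assert.h>

#define PMD_SHIFT 21
#define PMD_SIZE  (1UL << PMD_SHIFT)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned long end = (3UL << PMD_SHIFT) + 1;   /* just past 3 PMDs */

        /* The shift form used in find_early_table_space() equals a
         * round-up division by PMD_SIZE. */
        assert(((end + PMD_SIZE - 1) >> PMD_SHIFT) ==
               DIV_ROUND_UP(end, PMD_SIZE));
        return 0;
}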
2007 Apr 18
0
[RFC/PATCH LGUEST X86_64 04/13] Useful debugging
...d_idx, u64 pud_idx, u64 pmd_idx, u64 pte_idx)
+{
+    printk(" %3llx: %llx\n", pte_idx, pte);
+    printk (" (%llx)\n",
+        ((pgd_idx&(1<<8)?(-1ULL):0ULL)<<48) |
+        (pgd_idx<<PGDIR_SHIFT) |
+        (pud_idx<<PUD_SHIFT) |
+        (pmd_idx<<PMD_SHIFT) |
+        (pte_idx<<PAGE_SHIFT));
+}
+
+static void print_pmd(struct lguest_vcpu *vcpu,
+        u64 pmd, u64 pgd_idx, u64 pud_idx, u64 pmd_idx)
+{
+    u64 pte;
+    u64 ptr;
+    u64 i;
+
+    printk(" %3llx: %llx\n", pmd_idx, pmd);
+
+    /* 2M page? */
+    if (pmd & (1<<7)) {
+        pri...
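The second printk above rebuilds the virtual address being dumped from its four table indices; bit 8 of the PGD index is virtual-address bit 47 (PGDIR_SHIFT 39 + 8), so it decides whether bits 48-63 must be sign-extended. A standalone sketch of that reconstruction (hypothetical helper, x86_64 4-level constants):

#include <stdint.h>

#define PAGE_SHIFT  12
#define PMD_SHIFT   21
#define PUD_SHIFT   30
#define PGDIR_SHIFT 39

/* Rebuild a canonical x86_64 virtual address from 4-level indices. */
static uint64_t rebuild_vaddr(uint64_t pgd_idx, uint64_t pud_idx,
                              uint64_t pmd_idx, uint64_t pte_idx)
{
        uint64_t va = (pgd_idx << PGDIR_SHIFT) | (pud_idx << PUD_SHIFT) |
                      (pmd_idx << PMD_SHIFT) | (pte_idx << PAGE_SHIFT);

        if (pgd_idx & (1ULL << 8))      /* VA bit 47 set: upper half */
                va |= 0xffffULL << 48;  /* sign-extend bits 48..63 */
        return va;
}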
2020 Jun 30
0
[PATCH v2 2/5] mm/hmm: add output flags for PMD/PUD page mapping
...@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
     i = (start - range->start) >> PAGE_SHIFT;
     pfn_req_flags = range->hmm_pfns[i];
     cpu_flags = pte_to_hmm_pfn_flags(range, entry);
+    if (hshift >= PUD_SHIFT)
+        cpu_flags |= HMM_PFN_PUD;
+    else if (hshift >= PMD_SHIFT)
+        cpu_flags |= HMM_PFN_PMD;
     required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, cpu_flags);
     if (required_fault) {
-- 
2.20.1
2020 Jul 01
0
[PATCH v3 2/5] mm/hmm: add hmm_mapping order
...9a545751108..de04bbed47b3 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -170,7 +170,10 @@ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
 {
     if (pmd_protnone(pmd))
         return 0;
-    return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
+    return ((unsigned long)(PMD_SHIFT - PAGE_SHIFT) <<
+            HMM_PFN_ORDER_SHIFT) |
+        pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) :
+        HMM_PFN_VALID;
 }
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -389,7 +392,10 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range,
 {
     if (!pud_present(pud))
         return...
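Note a C precedence hazard in the hunk above: '|' binds tighter than '?:', so the new return expression parses as (order_bits | pmd_write(pmd)) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID, which always takes the true branch (order_bits is nonzero) and drops the order bits entirely. A parenthesized rendering that keeps both (macro values mirror the 64-bit kernel's but are defined here only for illustration):

#define HMM_PFN_VALID       (1UL << 63)   /* assumed 64-bit values */
#define HMM_PFN_WRITE       (1UL << 62)
#define HMM_PFN_ORDER_SHIFT 56
#define PMD_SHIFT           21
#define PAGE_SHIFT          12

static unsigned long pmd_flags(int writable)
{
        /* Parentheses around the conditional keep the order bits
         * from being swallowed as the ?: condition. */
        return ((unsigned long)(PMD_SHIFT - PAGE_SHIFT) << HMM_PFN_ORDER_SHIFT) |
               (writable ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID);
}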
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD sized or PUD sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
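A sketch of how a driver might act on these flags when choosing a device MMU mapping size (the flag bit positions and size constants here are assumptions for illustration; only the flag names come from the series):

#define HMM_PFN_VALID (1UL << 63)   /* assumed bit positions */
#define HMM_PFN_PMD   (1UL << 59)
#define HMM_PFN_PUD   (1UL << 58)

/* Pick a device PTE size from one hmm_range_fault() output entry. */
static unsigned long device_map_size(unsigned long hmm_pfn)
{
        if (!(hmm_pfn & HMM_PFN_VALID))
                return 0;
        if (hmm_pfn & HMM_PFN_PUD)
                return 1UL << 30;   /* CPU used a PUD entry (1G on x86_64) */
        if (hmm_pfn & HMM_PFN_PMD)
                return 1UL << 21;   /* CPU used a PMD entry (2M on x86_64) */
        return 1UL << 12;           /* base 4K page */
}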
2007 Apr 18
1
[RFC/PATCH LGUEST X86_64 01/13] HV VM Fix map area for HV.
...ffset];
+
+    if (!(pgd & 1))
+        return 0;
+
+    p = get_vaddr(pgd);
+
+    offset = (unsigned long)addr;
+    offset >>= PUD_SHIFT;
+    offset &= PTRS_PER_PUD-1;
+
+    pud = p[offset];
+
+    if (!(pud & 1))
+        return 0;
+
+    p = get_vaddr(pud);
+
+    offset = (unsigned long)addr;
+    offset >>= PMD_SHIFT;
+    offset &= PTRS_PER_PMD-1;
+
+    pmd = p[offset];
+
+    if (!(pmd & 1))
+        return 0;
+
+    /* Now check to see if we are 2M pages or 4K pages */
+    if (pmd & (1 << 7)) {
+        /* stop here, we are 2M pages */
+        pte = pmd;
+        mask = (1<<21)-1;
+        goto calc;
+    }
+
+    p = get_vaddr(pmd...
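Once the walk above reaches a leaf, it jumps to a calc step that splices the page-frame bits of the final entry together with the in-page offset, with mask = (1<<21)-1 for a 2M leaf or (1<<12)-1 for a 4K one. A hedged standalone form of that step (the physical-address mask is the usual x86_64 bits 12..51; the real code also strips NX and flag bits):

#include <stdint.h>

#define PHYS_FRAME_MASK 0x000ffffffffff000ULL   /* x86_64 PA bits 12..51 */

/* Combine a leaf entry with the low address bits it does not translate. */
static uint64_t leaf_to_phys(uint64_t entry, uint64_t addr, uint64_t mask)
{
        return (entry & PHYS_FRAME_MASK & ~mask) | (addr & mask);
}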
2020 Jul 01
8
[PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order() function. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using a larger sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's hmm tree. These were
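v3 replaces v2's discrete PMD/PUD flags with an order field encoded in the PFN value; hmm_pfn_to_map_order() decodes it, and the driver derives the CPU mapping size directly. A sketch of the decode and its use (shift value and field width are assumptions for the sketch):

#define HMM_PFN_ORDER_SHIFT 56   /* assumed: order lives in bits 56..60 */
#define PAGE_SHIFT          12

static unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn)
{
        return (hmm_pfn >> HMM_PFN_ORDER_SHIFT) & 0x1F;
}

/* Bytes covered by the CPU page-table entry backing this PFN:
 * size = PAGE_SIZE << order, e.g. order 9 -> 2M with 4K pages. */
static unsigned long cpu_mapping_size(unsigned long hmm_pfn)
{
        return (1UL << PAGE_SHIFT) << hmm_pfn_to_map_order(hmm_pfn);
}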
2008 Feb 25
6
[PATCH 0/4] ia64/xen: paravirtualization of hand written assembly code
Hi. The patch I sent before was too large, so it was dropped from the mailing list. I'm sending it again in smaller pieces. This patch set is the Xen paravirtualization of hand-written assembly code, and I expect that much cleanup is necessary before merge. We really need feedback before starting the actual cleanup, as Eddie already said. Eddie discussed how to clean up and suggested
2012 Nov 20
12
[PATCH v2 00/11] xen: Initial kexec/kdump implementation
Hi, this set of patches contains the initial kexec/kdump implementation for Xen, v2 (the previous version was posted to a few people by mistake; sorry for that). Currently only dom0 is supported; however, almost all of the infrastructure required for domU support is ready. Jan Beulich suggested merging the Xen x86 assembler code with the bare-metal x86 code, which could simplify the kernel code and reduce its size a bit.
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts, but there may be some merge conflicts depending on
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted [1]. This version now supports splitting a THP midway through the migration process, which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan
2019 Sep 11
6
[PATCH 0/4] HMM tests and minor fixes
These changes are based on Jason's latest hmm branch. Patch 1 was previously posted here [1] but was dropped from the original series. Hopefully, the tests will reduce concerns about edge conditions. I'm sure more tests could usefully be added, but I thought this was a good starting point. [1] https://lore.kernel.org/linux-mm/20190726005650.2566-6-rcampbell at nvidia.com/ Ralph Campbell