search for: pmd_size

Displaying 20 results from an estimated 66 matches for "pmd_size".

2020 Jul 14
0
[PATCH v4 17/75] x86/boot/compressed/64: Change add_identity_map() to take start and end
...* Adds the specified range to the identity mappings. */
-static void add_identity_map(unsigned long start, unsigned long size)
+static void add_identity_map(unsigned long start, unsigned long end)
 {
-	unsigned long end = start + size;
-
 	/* Align boundary to 2M. */
 	start = round_down(start, PMD_SIZE);
 	end = round_up(end, PMD_SIZE);
@@ -107,8 +105,6 @@ static void add_identity_map(unsigned long start, unsigned long size)
 /* Locates and clears a region for a new top level page table. */
 void initialize_identity_maps(void)
 {
-	unsigned long start, size;
-
 	/* If running as an SEV guest, the...
2020 Jul 14
0
[PATCH v4 15/75] x86/boot/compressed/64: Always switch to own page-table
...identity mappings.
+ * Once all ranges have been added, the new mapping is activated by calling
+ * finalize_identity_maps() below.
+ */
+void add_identity_map(unsigned long start, unsigned long size)
+{
+	unsigned long end = start + size;
+
+	/* Align boundary to 2M. */
+	start = round_down(start, PMD_SIZE);
+	end = round_up(end, PMD_SIZE);
+	if (start >= end)
+		return;
+
+	/* Build the mapping. */
+	kernel_ident_mapping_init(&mapping_info, (pgd_t *)top_level_pgt,
+				  start, end);
+}
+
 /* Locates and clears a region for a new top level page table. */
 void initialize_identity_maps(void)...
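
The rounding above is the core of how the decompressor maps memory on demand: whatever range is requested, the whole 2M-aligned region containing it gets identity mapped. Below is a minimal, standalone sketch of just that arithmetic. It is an illustration only: PMD_SHIFT/PMD_SIZE and the simplified rounding macros are redefined here so the example builds as a plain user-space program; in the kernel they come from the paging headers.

#include <stdio.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)	/* 2 MiB with 4K pages */
/* Simplified power-of-two rounding, standing in for the kernel helpers. */
#define round_down(x, y)	((x) & ~((y) - 1))
#define round_up(x, y)		(((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
	unsigned long start = 0x123456UL;	/* arbitrary example range */
	unsigned long size  = 0x1000UL;
	unsigned long end   = start + size;

	/* Align boundary to 2M, as add_identity_map() does. */
	start = round_down(start, PMD_SIZE);
	end   = round_up(end, PMD_SIZE);

	/* Prints 0x0 - 0x200000: the whole 2M page containing the range. */
	printf("identity-map %#lx - %#lx\n", start, end);
	return 0;
}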
2020 Nov 06
1
[PATCH v3 1/6] mm/thp: add prep_transhuge_device_private_page()
...);
> +	/* Only the head page has a reference to the pgmap. */
> +	percpu_ref_put_many(page->pgmap->ref, HPAGE_PMD_NR - 1);
> +}
> +EXPORT_SYMBOL_GPL(prep_transhuge_device_private_page);

Something else that may interest you from my patch series is support for page sizes other than PMD_SIZE. I don't know what page sizes your hardware supports. There's no support for page sizes other than PMD for anonymous memory, so this might not be too useful for you yet.
2020 Apr 28
0
[PATCH v3 24/75] x86/boot/compressed/64: Unmap GHCB page before booting the kernel
...+int set_page_non_present(unsigned long address)
+{
+	return set_clr_page_flags(&mapping_info, address, 0, _PAGE_PRESENT);
+}
+
 void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
 {
-	unsigned long address = native_read_cr2() & PMD_MASK;
-	unsigned long end = address + PMD_SIZE;
+	unsigned long address = native_read_cr2();
+	unsigned long end;
+	bool ghcb_fault;
+
+	ghcb_fault = sev_es_check_ghcb_fault(address);
+
+	address &= PMD_MASK;
+	end = address + PMD_SIZE;

 	/*
 	 * Check for unexpected error codes. Unexpected are:
@@ -302,9 +313,13 @@ void do_boot_...
2011 Jul 18
2
[PATCH tip/x86/mm] x86_32: calculate additional memory needed by the fixmap
...)
+			>> PMD_SHIFT;
+	/*
+	 * fixmap_end_pmd_idx is the end of the fixmap minus the PMD that
+	 * has been defined in the data section by head_32.S (see
+	 * initial_pg_fixmap).
+	 * Note: This is similar to what early_ioremap_page_table_range_init
+	 * does except that the "end" has PMD_SIZE expunged as per previous
+	 * comment.
+	 */
+	fixmap_end_pmd_idx = (FIXADDR_TOP - 1) >> PMD_SHIFT;
+	btmap_begin_pmd_idx = __fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT;
+
+	size = fixmap_end_pmd_idx - fixmap_begin_pmd_idx;
+	/*
+	 * early_ioremap_init has already allocated a PMD at
+	 *...
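
For context, the hunk above sizes the fixmap by turning virtual addresses into PMD indices with >> PMD_SHIFT. Here is a tiny standalone sketch of that index arithmetic; the FIXADDR_* values are made-up examples (the real ones depend on the x86_32 configuration, and PMD_SHIFT is 21 only with PAE), so this only demonstrates the shift, not the patch's exact numbers.

#include <stdio.h>

#define PMD_SHIFT	21			/* PAE: one PMD maps 2 MiB */
#define FIXADDR_TOP	0xfffff000UL		/* example value only */
#define FIXADDR_START	0xffc00000UL		/* example value only */

int main(void)
{
	/* Index of the PMD containing the first/last fixmap address. */
	unsigned long begin_idx = FIXADDR_START >> PMD_SHIFT;
	unsigned long end_idx = (FIXADDR_TOP - 1) >> PMD_SHIFT;

	/* The patch sizes its allocation from the difference of such indices. */
	printf("fixmap: PMD index %#lx .. %#lx (%lu apart)\n",
	       begin_idx, end_idx, end_idx - begin_idx);
	return 0;
}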
2007 Feb 14
2
[PATCH 8/8] 2.6.17: scan DMI early
...tic void __init find_early_table_space(unsigned long end)
+static unsigned long __init find_early_table_space(unsigned long end)
 {
-	unsigned long puds, pmds, ptes, tables;
+	unsigned long puds, pmds, ptes, tables, fixmap_tables;

 	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
 	pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
@@ -660,7 +682,16 @@ static void __init find_early_table_spac
 		 round_up(pmds * 8, PAGE_SIZE) +
 		 round_up(ptes * 8, PAGE_SIZE);
-	extend_init_mapping(tables);
+	/* Also reserve pages for fixmaps that need to be set up early.
+	 * Their pud is shared with the kernel p...
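
The visible hunk sizes the early page-table reservation from the number of PUD and PMD entries needed to map memory up to 'end' (each entry is 8 bytes, rounded up to whole pages). The following standalone sketch works that arithmetic through for a hypothetical 4 GiB mapping, using the usual x86_64 constants; it deliberately ignores the ptes term that the excerpt cuts off.

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PMD_SHIFT	21
#define PUD_SHIFT	30
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define round_up(x, y)	(((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
	unsigned long end = 4UL << 30;		/* map the first 4 GiB */

	/* One PUD entry per 1G and one PMD entry per 2M, rounded up. */
	unsigned long puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
	unsigned long pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;

	/* 8 bytes per entry, each level padded out to whole pages. */
	unsigned long tables = round_up(puds * 8, PAGE_SIZE) +
			       round_up(pmds * 8, PAGE_SIZE);

	/* 4 GiB -> 4 PUD entries, 2048 PMD entries -> 20480 bytes (5 pages). */
	printf("puds=%lu pmds=%lu table bytes=%lu\n", puds, pmds, tables);
	return 0;
}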
2020 Jul 14
0
[PATCH v4 14/75] x86/boot/compressed/64: Add page-fault handler
...+	if (error_code & (X86_PF_PROT | X86_PF_USER | X86_PF_RSVD))
+		do_pf_error("Unexpected page-fault:", error_code, address, regs->ip);
+
+	/*
+	 * Error code is sane - now identity map the 2M region around
+	 * the faulting address.
+	 */
+	add_identity_map(address & PMD_MASK, PMD_SIZE);
+}
diff --git a/arch/x86/boot/compressed/idt_64.c b/arch/x86/boot/compressed/idt_64.c
index 082cd6bca033..5f083092a86d 100644
--- a/arch/x86/boot/compressed/idt_64.c
+++ b/arch/x86/boot/compressed/idt_64.c
@@ -40,5 +40,7 @@ void load_stage2_idt(void)
 {
 	boot_idt_desc.address = (unsigned long)bo...
2020 Jun 30
0
[PATCH v2 2/5] mm/hmm: add output flags for PMD/PUD page mapping
...the page memory can be written to (requires HMM_PFN_VALID)
 * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
 *                 fail. ie poisoned memory, special pages, no vma, etc
+ * HMM_PFN_PMD - if HMM_PFN_VALID is set, the page is at least of size
+ *               PMD_SIZE and fully mapped by the CPU with consistent
+ *               protection (e.g., all writeable if HMM_PFN_WRITE is set).
+ * HMM_PFN_PUD - if HMM_PFN_VALID is set, the page is at least of size
+ *               PUD_SIZE and fully mapped by the CPU with consistent
+ *               protection...
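
For a sense of how a driver would consume these proposed flags, here is a rough sketch written against the API as described in this series. HMM_PFN_PMD is the flag proposed in this patch (not necessarily what was merged upstream), device_map_huge() and device_map_4k() are hypothetical driver helpers, and the range is assumed to start PMD-aligned for simplicity.

#include <linux/hmm.h>
#include <linux/mm.h>

/* Hypothetical driver helpers - stand-ins for real device MMU code. */
void device_map_huge(unsigned long addr, unsigned long entry, unsigned long size);
void device_map_4k(unsigned long addr, unsigned long entry);

static void map_range_on_device(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i = 0;

	while (i < npages) {
		unsigned long entry = range->hmm_pfns[i];
		unsigned long addr = range->start + (i << PAGE_SHIFT);

		if (!(entry & HMM_PFN_VALID)) {
			i++;
			continue;
		}

		if (entry & HMM_PFN_PMD) {
			/*
			 * The whole 2M region is mapped consistently by the
			 * CPU, so one PMD-sized device PTE can cover it.
			 */
			device_map_huge(addr, entry, PMD_SIZE);
			i += PMD_SIZE >> PAGE_SHIFT;
		} else {
			device_map_4k(addr, entry);
			i++;
		}
	}
}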
2020 Jul 14
0
[PATCH v4 16/75] x86/boot/compressed/64: Don't pre-map memory in KASLR code
.../
@@ -436,11 +430,6 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	/* Enumerate the immovable memory regions */
 	num_immovable_mem = count_immovable_mem_regions();
-
-#ifdef CONFIG_X86_VERBOSE_BOOTUP
-	/* Make sure video RAM can be used. */
-	add_identity_map(0, PMD_SIZE);
-#endif
 }

 /*
@@ -919,19 +908,8 @@ void choose_random_location(unsigned long input,
 		warn("Physical KASLR disabled: no suitable memory region!");
 	} else {
 		/* Update the new physical address location. */
-		if (*output != random_addr) {
-			add_identity_map(random_addr, output_...
2020 Apr 02
0
[PATCH 14/70] x86/boot/compressed/64: Add page-fault handler
...+ */
> +	if (error_code & (X86_PF_PROT | X86_PF_USER | X86_PF_RSVD))
> +		pf_error(error_code, address, regs);
> +
> +	/*
> +	 * Error code is sane - now identity map the 2M region around
> +	 * the faulting address.
> +	 */
> +	add_identity_map(address & PMD_MASK, PMD_SIZE);
> +}
> diff --git a/arch/x86/boot/compressed/idt_64.c b/arch/x86/boot/compressed/idt_64.c
> index 46ecea671b90..84ba57d9d436 100644
> --- a/arch/x86/boot/compressed/idt_64.c
> +++ b/arch/x86/boot/compressed/idt_64.c
> @@ -39,5 +39,7 @@ void load_stage2_idt(void)
> {
> 	b...
2011 Feb 09
8
reboot after "scrubbing free ram"
Hello, I have some diagnostic Info for you. Tried to get xen-4.0.1 working on another node of my cluster. Only difference: Newer revision of the Hardware. Exactly same Software Stack. Stack: Ubuntu Lucid 64bit, xen-4.0.1 from Source, selfbuilt linux-2.6.32.27 from xen/stable-2.6.32.x Nodes working: HP DL380G5 16GB (Intel X5460) Node not working: HP DL380G6 16GB (Intel X5550) Effect: Xen
2020 Nov 03
0
[patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
...via kprobes, jump labels, etc. */
 	FIX_TEXT_POKE0,
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -2,7 +2,7 @@
 #ifndef _ASM_HIGHMEM_H
 #define _ASM_HIGHMEM_H

-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>

 #define PKMAP_BASE	(PAGE_OFFSET - PMD_SIZE)
 #define LAST_PKMAP	PTRS_PER_PTE
@@ -46,19 +46,32 @@ extern pte_t *pkmap_page_table;
 #ifdef ARCH_NEEDS_KMAP_HIGH_GET
 extern void *kmap_high_get(struct page *page);
-#else
+
+static inline void *arch_kmap_local_high_get(struct page *page)
+{
+	if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !c...
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted previously [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts but there may be some merge conflicts depending on
2007 Jun 15
11
[PATCH 00/10] paravirt/subarchitecture boot protocol
This series updates the boot protocol to 2.07 and uses it to implement paravirtual booting. This allows the bootloader to tell the kernel what kind of hardware/pseudo-hardware environment it's coming up under, and the kernel can use the appropriate boot sequence code. Specifically: - Update the boot protocol to 2.07, which adds fields to specify the hardware subarchitecture and some
2007 Jun 20
9
[PATCH 0/9] x86 boot protocol updates
[ This patch depends on the cross-architecture ELF cleanup patch. ] This series updates the boot protocol to 2.07 and uses it to implement paravirtual booting. This allows the bootloader to tell the kernel what kind of hardware/pseudo-hardware environment it's coming up under, and the kernel can use the appropriate boot sequence code. Specifically: - Update the boot protocol to 2.07, which
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD sized or PUD sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted previously [1]. This version now supports splitting a THP midway in the migration process which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan