search for: init_mm

Displaying 20 results from an estimated 189 matches for "init_mm".

2006 Apr 29
0
RE: [PATCH 2/2] balloon driver: don't use apply_to_page_range for xenLinux/ia64
>From: Keir Fraser
>Sent: 28 April 2006 21:57
>On 28 Apr 2006, at 10:14, Isaku Yamahata wrote:
>
>> Unfortunately no.
>> Roughly init_mm is only used to map vmalloc area on Linux/ia64.
>> init_mm of Linux/ia64 doesn't map the area of [PAGE_OFFSET, ...].
>> Traversing init_mm with a virtual address of the area
>> gives a zero-filled pte entry.
>> It also populates unnecessary pud/pmd/pte pages.
>...
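
A minimal sketch (not from the thread) of the kind of walk apply_to_page_range() performs over init_mm's page tables; the helpers are the standard page-table APIs of that era:

    /*
     * On Linux/ia64 only the vmalloc area is mapped in init_mm, so
     * walking it with a direct-mapped [PAGE_OFFSET, ...] address finds
     * nothing useful, and allocating the missing levels on the way
     * down just wastes pud/pmd/pte pages, as the message points out.
     */
    static pte_t *walk_init_mm_sketch(unsigned long addr)
    {
            pgd_t *pgd = pgd_offset_k(addr);        /* init_mm's page tables */
            pud_t *pud;
            pmd_t *pmd;

            if (pgd_none(*pgd))
                    return NULL;
            pud = pud_offset(pgd, addr);
            if (pud_none(*pud))
                    return NULL;
            pmd = pmd_offset(pud, addr);
            if (pmd_none(*pmd))
                    return NULL;
            return pte_offset_kernel(pmd, addr);    /* may be zero-filled here */
    }
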
2020 Nov 03
0
[patch V3 24/37] sched: highmem: Store local kmaps in task struct
...cpu_sub_return(__kmap_local_idx, KM_INCR);
-
-	BUG_ON(idx < 0);
+	current->kmap_ctrl.idx -= KM_INCR;
+	BUG_ON(current->kmap_ctrl.idx < 0);
 }

 #ifndef arch_kmap_local_post_map
@@ -461,6 +457,7 @@ void *__kmap_local_pfn_prot(unsigned lon
 	pteval = pfn_pte(pfn, prot);
 	set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval);
 	arch_kmap_local_post_map(vaddr, pteval);
+	current->kmap_ctrl.pteval[kmap_local_idx()] = pteval;
 	preempt_enable();

 	return (void *)vaddr;
@@ -505,10 +502,92 @@ void kunmap_local_indexed(void *vaddr)
 	arch_kmap_local_pre_unmap(addr);
 	pte_clear(&init_...
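
A condensed sketch of the per-task state the patch introduces; the field names follow the excerpt, while the array bound is an assumption:

    /*
     * Each task records the pte values of its live local kmaps so the
     * scheduler can tear them down at switch-out and re-establish them
     * at switch-in.
     */
    struct kmap_ctrl {
            int     idx;                    /* top of the per-task kmap stack */
            pte_t   pteval[KM_MAX_IDX];     /* saved ptes; bound is assumed */
    };
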
2007 Apr 18
2
pte_offset_map + lazy mmu
...h a Xen implementation. But I think it's probably an excess pv_op for a relatively minor corner case. It seems to me that it would be better to define kpte_clear_flush as:

    #define kpte_clear_flush(ptep, vaddr)           \
    do {                                            \
            arch_enter_lazy_mmu_mode();             \
            pte_clear(&init_mm, vaddr, ptep);       \
            __flush_tlb_one(vaddr);                 \
            arch_leave_lazy_mmu_mode();             \
    } while (0)

and take advantage of mmu batching to make this operation efficient. But I'm not sure if this is safe. (Also, kmap_atomic could use set_pte_at rather than set_pte.) What do...
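
An illustration (not from the message) of why the lazy bracket helps: a paravirt backend may queue every pte operation issued inside it and flush the queue as one batch on leave, rather than one hypercall per update. Whether that batching is safe here is exactly the question the message raises.

    arch_enter_lazy_mmu_mode();
    for (i = 0; i < n; i++) {
            pte_clear(&init_mm, vaddr[i], ptep[i]);  /* may be queued, not issued */
            __flush_tlb_one(vaddr[i]);               /* likewise batchable */
    }
    arch_leave_lazy_mmu_mode();                      /* one combined flush */
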
2019 Jul 02
0
[PATCH v2 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...void *))tlb_remove_page,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5c9b1607191d..074288a6916e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	 * garbage into our TLB. Since switching to init_mm is barely
 	 * slower than a minimal flush, just switch to init_mm.
 	 *
-	 * This should be rare, with native_flush_tlb_others skipping
+	 * This should be rare, with native_flush_tlb_multi() skipping
 	 * IPIs to lazy TLB mode CPUs.
 	 */
 	switch_mm_irqs_off(NULL, &init_mm, NULL);
@@...
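
Condensed from the quoted hunk, the decision it documents looks roughly like this (the is_lazy field name is taken from kernels of this era and is an assumption here):

    if (this_cpu_read(cpu_tlbstate.is_lazy)) {
            /*
             * Lazy TLB mode: switching to init_mm is barely slower
             * than a minimal flush, and native_flush_tlb_multi() can
             * then skip IPIs to this CPU entirely.
             */
            switch_mm_irqs_off(NULL, &init_mm, NULL);
            return;
    }
    /* ...otherwise perform the requested ranged or full flush... */
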
2019 Jun 13
4
[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
..., void *))tlb_remove_page,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index c34bcf03f06f..db73d5f1dd43 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	 * garbage into our TLB. Since switching to init_mm is barely
 	 * slower than a minimal flush, just switch to init_mm.
 	 *
-	 * This should be rare, with native_flush_tlb_others skipping
+	 * This should be rare, with native_flush_tlb_multi skipping
 	 * IPIs to lazy TLB mode CPUs.
 	 */
 	switch_mm_irqs_off(NULL, &init_mm, NULL);
@@ -6...
2009 Aug 10
1
[PATCH 1/2] export cpu_tlbstate to modules
...h/x86/mm/tlb.c | 1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 821e970..e33a5f0 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -13,6 +13,7 @@
 DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate)
 			= { &init_mm, 0, };
+EXPORT_PER_CPU_SYMBOL_GPL(cpu_tlbstate);

 /*
  * Smarter SMP flushing macros.
--
1.6.2.5
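
A hypothetical module-side use that the export enables (a sketch, not from the patch), reading this CPU's TLB state with the era's per-cpu accessors:

    struct tlb_state *ts = &get_cpu_var(cpu_tlbstate);
    struct mm_struct *active = ts->active_mm;   /* &init_mm on lazy/idle CPUs */

    put_cpu_var(cpu_tlbstate);
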
2007 Apr 18
0
[PATCH 5/9] 00mm6 kpte flush.patch
...thout first remap it
+ * Force other mappings to Oops if they'll try to access this pte
+ * without first remap it. Keeping stale mappings around is a bad idea
+ * also, in case the page changes cacheability attributes or becomes
+ * a protected page in a hypervisor.
  */
-	pte_clear(&init_mm, vaddr, kmap_pte-idx);
-	__flush_tlb_one(vaddr);
-#endif
+	kpte_clear_flush(kmap_pte-idx, vaddr);

 	dec_preempt_count();
 	preempt_check_resched();
@@ -94,7 +91,6 @@ void *kmap_atomic_pfn(unsigned long pfn,
 	idx = type + KM_TYPE_NR*smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx...
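
Condensed from the diff above, the resulting kunmap path: a single kpte_clear_flush() replaces the open-coded pte_clear() plus __flush_tlb_one() pair, so a paravirt kernel can batch or optimize it as one operation:

    /* in kunmap_atomic(), after the patch: */
    kpte_clear_flush(kmap_pte - idx, vaddr);    /* was pte_clear() + __flush_tlb_one() */
    dec_preempt_count();
    preempt_check_resched();
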
2008 May 20
4
[PATCH O/4] BIO tracking take2
Hi all, With this series of patches, you can determine the owners of any type of I/O. I ported the previous version to linux-2.6.26-rc2-mm1. This enables dm-ioband, an I/O bandwidth controller, to control block I/O bandwidth even when it accepts delayed write requests. Dm-ioband can find the owner cgroup of each request. It is also possible that OpenVz team and NEC Uchida-san team
2007 Apr 18
1
[RFC/PATCH LGUEST X86_64 01/13] HV VM Fix map area for HV.
...h>
+
+#include <asm/hv_vm.h>
+
+static DEFINE_MUTEX(hvvm_lock);
+
+static DECLARE_BITMAP(hvvm_avail_pages, NR_HV_PAGES);
+
+
+static void hvvm_pte_unmap(pmd_t *pmd, unsigned long addr)
+{
+	pte_t *pte;
+	pte_t ptent;
+
+	pte = pte_offset_kernel(pmd, addr);
+	ptent = ptep_get_and_clear(&init_mm, addr, pte);
+	WARN_ON(!pte_none(ptent) && !pte_present(ptent));
+}
+
+static inline void hvvm_pmd_unmap(pud_t *pud, unsigned long addr)
+{
+	pmd_t *pmd;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none_or_clear_bad(pmd))
+		return;
+	hvvm_pte_unmap(pmd, addr);
+}
+
+static inline void hvvm_...
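
A sketch (an assumption, by analogy with the helpers shown) of the top-level walk these unmap helpers would plug into; the hvvm_pud_unmap() level is hypothetical, since only the pmd and pte levels appear in the excerpt:

    static void hvvm_unmap_addr(unsigned long addr)
    {
            pgd_t *pgd = pgd_offset_k(addr);        /* walk init_mm */

            if (pgd_none_or_clear_bad(pgd))
                    return;
            hvvm_pud_unmap(pgd, addr);              /* hypothetical helper */
            __flush_tlb_one(addr);
    }
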
2013 Oct 31
1
[PATCH 3/3] x86: Support compiling out userspace I/O (iopl and ioperm)
...io_bitmap);
> -
> -	/*
> -	 * <= is required because the CPU will access up to
> -	 * 8 bits beyond the end of the IO permission bitmap.
> -	 */
> -	for (i = 0; i <= IO_BITMAP_LONGS; i++)
> -		t->io_bitmap[i] = ~0UL;
> +	init_tss_io(t);
>
> 	atomic_inc(&init_mm.mm_count);
> 	me->active_mm = &init_mm;
> @@ -1351,7 +1343,7 @@ void cpu_init(void)
> 	load_TR_desc();
> 	load_LDT(&init_mm.context);
>
> -	t->x86_tss.io_bitmap_base = offsetof(struct tss_struct, io_bitmap);
> +	init_tss_io(t);

This patch is too big. I think...
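
A sketch of the helper the patch introduces, inferred from the two call sites in the quoted diff; the config symbol and the fallback value are assumptions:

    static inline void init_tss_io(struct tss_struct *t)
    {
    #ifdef CONFIG_X86_IOPORT
            int i;

            t->x86_tss.io_bitmap_base = offsetof(struct tss_struct, io_bitmap);
            /* <= because the CPU reads up to 8 bits past the bitmap's end */
            for (i = 0; i <= IO_BITMAP_LONGS; i++)
                    t->io_bitmap[i] = ~0UL;
    #else
            t->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
    #endif
    }
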
2007 Apr 18
1
how set_pte_at()'s vaddr and ptep args relate
...ich corresponds to the vaddr, but is this necessarily the case? For example, it is valid to pass a non-highmem page to kmap_atomic(), which will simply return a direct pointer to the page. kunmap_atomic() takes this address, as well as the kmap slot index, and ends up calling:

    set_pte_at(&init_mm, lowmem_vaddr, kmap_ptep, 0);

i.e., the vaddr and the ptep bear no relationship to each other. Is this a bug in kunmap_atomic (it shouldn't try to clear the pte for lowmem addresses), or should set_pte_at's implementation be able to cope with this? Certainly at the moment, having mismatc...
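
An illustration of the mismatch described (not verbatim kernel code): for a lowmem page, kmap_atomic() simply returns the direct-map address, yet kunmap_atomic() still clears the fixmap slot's pte, so the two arguments describe different mappings:

    void *kvaddr = kmap_atomic(page, KM_USER0);  /* lowmem: direct-map pointer */
    /* ... use the mapping ... */
    set_pte_at(&init_mm, (unsigned long)kvaddr, kmap_ptep, __pte(0));
    /* vaddr is a lowmem address; kmap_ptep points at a fixmap pte */
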
2008 Mar 28
12
[PATCH 00/12] Xen arch portability patches (take 4)
Hi Jeremy. Following your suggestion, I recreated the patches for Ingo's x86.git tree. This patch series also includes Eddie's modifications. Please review and forward them (or push back for a respin). Recently the xen-ia64 community started to make efforts to merge xen/ia64 Linux to upstream. The first step is to merge the domU portion. This patchset is preliminary for xen/ia64 domU linux