search for: flush_cache_page

Displaying 15 results from an estimated 30 matches for "flush_cache_page".

2019 Mar 11
4
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...ally tagged caches? >> >> >> Anything different that you worry? > > If caches have virtual tags then kernel and userspace view of memory > might not be automatically in sync if they access memory > through different virtual addresses. You need to do things like > flush_cache_page, probably multiple times. "flush_dcache_page()"
2019 Mar 11
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/8 10:12, Christoph Hellwig wrote: > On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote: >> This series tries to access virtqueue metadata through kernel virtual >> address instead of copy_user() friends since they had too much >> overheads like checks, spec barriers or even hardware feature >> toggling. This is done through setup kernel address
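
[The series under discussion replaces per-access copy_user() calls with a long-lived kernel mapping of the virtqueue metadata. A minimal sketch of that idea, with hypothetical helper names (not the actual vhost patch; note that get_user_pages_fast()'s third argument changed from a write flag to gup_flags across kernel versions):

#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Pin npages of user memory at uaddr and map them contiguously into
 * kernel virtual address space, so the caller can dereference the
 * metadata directly instead of using copy_{to,from}_user(). */
static void *map_user_range(unsigned long uaddr, int npages,
                            struct page **pages)
{
        int got = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);

        if (got != npages) {
                while (got > 0)
                        put_page(pages[--got]);
                return NULL;
        }
        return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
}

static void unmap_user_range(void *kaddr, struct page **pages, int npages)
{
        int i;

        vunmap(kaddr);
        for (i = 0; i < npages; i++)
                put_page(pages[i]);
}

The thread's objection, picked up in the results below, is that on virtually tagged cache hardware this kernel alias and the user mapping of the same pages are not automatically coherent.]
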
2019 Mar 12
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...outside of > ptrace context. kmap has been used generally either to access whole > pages (i.e. copy_user_page), so ptrace may actually be the only use > case with subpage granularity access. > > #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ > do { \ > flush_cache_page(vma, vaddr, page_to_pfn(page)); \ > memcpy(dst, src, len); \ > flush_ptrace_access(vma, page, vaddr, src, len, 0); \ > } while (0) > > So I wouldn't rule out the need for a dual model, until we solve how > to run this stable on non-x86 arches with not physically tagge...
2019 Mar 12
9
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...> > > Anything different that you worry? > > > If caches have virtual tags then kernel and userspace view of memory > > > might not be automatically in sync if they access memory > > > through different virtual addresses. You need to do things like > > > flush_cache_page, probably multiple times. > > "flush_dcache_page()" > > > I get this. Then I think the current set_bit_to_user() is suspicious, we > probably miss a flush_dcache_page() there: > > > static int set_bit_to_user(int nr, void __user *addr) > { > un...
2007 Apr 18
0
[patch 6/9] Guest page hinting: writable page table entries.
...update_mmu_cache(vma, addr, pte_val); diff -urpN linux-2.6/mm/memory.c linux-2.6-patched/mm/memory.c --- linux-2.6/mm/memory.c 2006-09-01 12:50:24.000000000 +0200 +++ linux-2.6-patched/mm/memory.c 2006-09-01 12:50:24.000000000 +0200 @@ -1558,6 +1558,7 @@ static int do_wp_page(struct mm_struct * flush_cache_page(vma, address, pte_pfn(orig_pte)); entry = pte_mkyoung(orig_pte); entry = maybe_mkwrite(pte_mkdirty(entry), vma); + page_check_writable(old_page, entry); ptep_set_access_flags(vma, address, page_table, entry, 1); update_mmu_cache(vma, address, entry); lazy_mmu_prot_update(entry); @@...
2019 Mar 11
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...his going to work for CPUs with virtually tagged caches? > > > Anything different that you worry? If caches have virtual tags then kernel and userspace view of memory might not be automatically in sync if they access memory through different virtual addresses. You need to do things like flush_cache_page, probably multiple times. > I can have a test but do you know any > archs that use virtual tag cache? sparc I believe. > Thanks
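
[The aliasing problem described here can be pictured with a small hypothetical helper: on virtually tagged data caches (e.g. sparc), a write through the kernel's mapping of a page can sit in a cache line tagged with the kernel virtual address, so a userspace read through its own virtual address misses the update. flush_dcache_page() is the hook that pushes such a write out; it is a no-op on physically tagged caches like x86. Sketch only, the helper name is invented:

#include <linux/highmem.h>
#include <linux/string.h>

/* Write into a page through a kernel alias (kaddr), then make the
 * data visible to any user mapping of the same page on virtually
 * tagged (aliasing) data caches. */
static void write_through_kernel_alias(void *kaddr, struct page *page,
                                       const void *src, size_t len)
{
        memcpy(kaddr, src, len);
        flush_dcache_page(page); /* no-op on x86; required on e.g. sparc */
}
]
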
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...luate what happens if this is used outside of ptrace context. kmap has been used generally either to access whole pages (i.e. copy_user_page), so ptrace may actually be the only use case with subpage granularity access. #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ flush_cache_page(vma, vaddr, page_to_pfn(page)); \ memcpy(dst, src, len); \ flush_ptrace_access(vma, page, vaddr, src, len, 0); \ } while (0) So I wouldn't rule out the need for a dual model, until we solve how to run this stable on non-x86 arches with not physically tagged caches. Thanks, Andrea
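
[For context on where this macro actually runs: its canonical caller is the ptrace write path (access_process_vm() and friends), which brackets copy_to_user_page() with kmap()/kunmap(). Roughly, simplified from the write branch in mm/memory.c, with page lookup, locking, and error handling omitted:

#include <linux/mm.h>
#include <linux/highmem.h>

/* Simplified from the __access_remote_vm() write path; the page and
 * vma are assumed to have been obtained via get_user_pages(). */
static void ptrace_write_page(struct vm_area_struct *vma, struct page *page,
                              unsigned long addr, const void *buf, int bytes)
{
        void *maddr = kmap(page);

        /* copy_to_user_page() flushes the user alias, does the memcpy,
         * then performs the arch's post-write flush for the range. */
        copy_to_user_page(vma, page, addr,
                          maddr + offset_in_page(addr), buf, bytes);
        set_page_dirty_lock(page);
        kunmap(page);
}
]
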
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...gged caches? >>> >>> Anything different that you worry? >> If caches have virtual tags then kernel and userspace view of memory >> might not be automatically in sync if they access memory >> through different virtual addresses. You need to do things like >> flush_cache_page, probably multiple times. > "flush_dcache_page()" I get this. Then I think the current set_bit_to_user() is suspicious, we probably miss a flush_dcache_page() there: static int set_bit_to_user(int nr, void __user *addr) { unsigned long log = (unsigned long)addr; ...
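
[What the suggested fix would look like: the helper below paraphrases vhost's set_bit_to_user() as it stood at the time (drivers/vhost/vhost.c), with the flush_dcache_page() call the thread argues is missing added after the kmap_atomic() write. Treat the exact gup flags as version-dependent:

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/bitops.h>

static int set_bit_to_user(int nr, void __user *addr)
{
        unsigned long log = (unsigned long)addr;
        struct page *page;
        void *base;
        int bit = nr + (log % PAGE_SIZE) * 8;
        int r;

        r = get_user_pages_fast(log, 1, FOLL_WRITE, &page);
        if (r < 0)
                return r;
        base = kmap_atomic(page);
        set_bit(bit, base);
        kunmap_atomic(base);
        /* Proposed addition: make the write through the kernel alias
         * visible to the user mapping on virtually tagged cache
         * architectures. */
        flush_dcache_page(page);
        set_page_dirty_lock(page);
        put_page(page);
        return 0;
}
]
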
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...ifferent that you worry? >>>> If caches have virtual tags then kernel and userspace view of >>>> memory >>>> might not be automatically in sync if they access memory >>>> through different virtual addresses. You need to do things like >>>> flush_cache_page, probably multiple times. >>> "flush_dcache_page()" >> >> I get this. Then I think the current set_bit_to_user() is suspicious, >> we >> probably miss a flush_dcache_page() there: >> >> >> static int set_bit_to_user(int nr, void __user *a...
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...>> Anything different that you worry? >>>> If caches have virtual tags then kernel and userspace view of memory >>>> might not be automatically in sync if they access memory >>>> through different virtual addresses. You need to do things like >>>> flush_cache_page, probably multiple times. >>> "flush_dcache_page()" >> >> I get this. Then I think the current set_bit_to_user() is suspicious, we >> probably miss a flush_dcache_page() there: >> >> >> static int set_bit_to_user(int nr, void __user *addr) >...
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...cause the CPU has the L1 > cache within easy reach. The only event when flush takes a large > amount time is if we actually have dirty data to write back to main > memory. The double hit is in parisc copy_to_user_page: #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ flush_cache_page(vma, vaddr, page_to_pfn(page)); \ memcpy(dst, src, len); \ flush_kernel_dcache_range_asm((unsigned long)dst, (unsigned long)dst + len); \ } while (0) That is executed just before kunmap: static inline void kunmap(struct page *page) { flush_kernel_dcache_page_addr(page_address(page)); } Can...
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote: > > On 2019/3/9 3:48, Andrea Arcangeli wrote: > > Hello Jeson, > > > > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote: > > > Just to make sure I understand here. For boosting through huge TLB, do > > > you mean we can do that in the future (e.g by mapping more userspace > >
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
...ate_vma_insert_page(struct migrate_vma *migrate, goto unlock_abort; inc_mm_counter(mm, MM_ANONPAGES); + get_page(page); page_add_new_anon_rmap(page, vma, addr, false); if (!is_zone_device_page(page)) lru_cache_add_active_or_unevictable(page, vma); - get_page(page); if (flush) { flush_cache_page(vma, addr, pte_pfn(*ptep)); @@ -2850,7 +2995,6 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, } pte_unmap_unlock(ptep, ptl); - *src = MIGRATE_PFN_MIGRATE; return; unlock_abort: diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 48eb0f1410d4..a852ed2f204c 100644 ---...
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...> goto unlock_abort; > > inc_mm_counter(mm, MM_ANONPAGES); > + get_page(page); > page_add_new_anon_rmap(page, vma, addr, false); > if (!is_zone_device_page(page)) > lru_cache_add_active_or_unevictable(page, vma); > - get_page(page); > > if (flush) { > flush_cache_page(vma, addr, pte_pfn(*ptep)); > @@ -2850,7 +2995,6 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, > } > > pte_unmap_unlock(ptep, ptl); > - *src = MIGRATE_PFN_MIGRATE; > return; > > unlock_abort: > diff --git a/mm/page_alloc.c b/mm/page_alloc.c ...
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...e_vma_insert_page(struct migrate_vma *migrate, goto unlock_abort; inc_mm_counter(mm, MM_ANONPAGES); + get_page(page); page_add_new_anon_rmap(page, vma, addr, false); if (!is_zone_device_page(page)) lru_cache_add_inactive_or_unevictable(page, vma); - get_page(page); if (flush) { flush_cache_page(vma, addr, pte_pfn(*ptep)); @@ -2957,7 +3215,6 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, } pte_unmap_unlock(ptep, ptl); - *src = MIGRATE_PFN_MIGRATE; return; unlock_abort: @@ -2988,11 +3245,23 @@ void migrate_vma_pages(struct migrate_vma *migrate) struct addr...