Displaying 20 results from an estimated 69 matches for "flush_dcache_pag".
2019 Mar 11
4
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...nything different that you worry?
>
> If caches have virtual tags then kernel and userspace view of memory
> might not be automatically in sync if they access memory
> through different virtual addresses. You need to do things like
> flush_cache_page, probably multiple times.
"flush_dcache_page()"
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...al tags then kernel and userspace view of
>>>> memory
>>>> might not be automatically in sync if they access memory
>>>> through different virtual addresses. You need to do things like
>>>> flush_cache_page, probably multiple times.
>>> "flush_dcache_page()"
>>
>> I get this. Then I think the current set_bit_to_user() is suspicious,
>> we
>> probably miss a flush_dcache_page() there:
>>
>>
>> static int set_bit_to_user(int nr, void __user *addr)
>> {
>> unsigned long log = (unsign...
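For reference, here is a hedged reconstruction of the set_bit_to_user() under discussion, assembled from the snippets in these results and the [PATCH net] hunk further down. The bit-offset computation and the get_user_pages_fast() arguments are filled in from memory of the vhost code of that era and are assumptions, not quotes; the flush_dcache_page() call marks the spot the thread argues is missing.

#include <linux/mm.h>		/* get_user_pages_fast(), set_page_dirty_lock(), put_page() */
#include <linux/highmem.h>	/* kmap_atomic(), kunmap_atomic(), flush_dcache_page() */
#include <linux/bitops.h>	/* set_bit() */

static int set_bit_to_user(int nr, void __user *addr)
{
	unsigned long log = (unsigned long)addr;
	struct page *page;
	void *base;
	int bit = nr + (log % PAGE_SIZE) * 8;	/* assumed: bit offset within the pinned page */
	int r;

	r = get_user_pages_fast(log, 1, 1, &page);	/* pin the userspace page for writing */
	if (r < 0)
		return r;
	BUG_ON(r != 1);
	base = kmap_atomic(page);	/* map the page into the kernel */
	set_bit(bit, base);		/* set the dirty-log bit through the kernel alias */
	kunmap_atomic(base);
	flush_dcache_page(page);	/* the flush the thread argues is missing */
	set_page_dirty_lock(page);
	put_page(page);
	return 0;
}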
2019 Mar 12
9
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...> If caches have virtual tags then kernel and userspace view of memory
> > > might not be automatically in sync if they access memory
> > > through different virtual addresses. You need to do things like
> > > flush_cache_page, probably multiple times.
> > "flush_dcache_page()"
>
>
> I get this. Then I think the current set_bit_to_user() is suspicious, we
> probably miss a flush_dcache_page() there:
>
>
> static int set_bit_to_user(int nr, void __user *addr)
> {
> 	unsigned long log = (unsigned long)addr;
> 	struct...
2019 Apr 09
2
[PATCH net] vhost: flush dcache page when logging dirty pages
...-git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 351af88231ad..34a1cedbc5ba 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -1711,6 +1711,7 @@ static int set_bit_to_user(int nr, void __user *addr)
 	base = kmap_atomic(page);
 	set_bit(bit, base);
 	kunmap_atomic(base);
+	flush_dcache_page(page);
 	set_page_dirty_lock(page);
 	put_page(page);
 	return 0;
--
2.19.1
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...; > See James's reply - I stand corrected we do kunmap so no need to
> > flush.
>
> Well, I said that's what we do on Parisc. The cachetlb document
> definitely says if you alter the data between kmap and kunmap you are
> responsible for the flush. It's just that flush_dcache_page() is a no-
> op on x86 so they never remember to add it and since it will crash
> parisc if you get it wrong we finally gave up trying to make them.
>
> But that's the point: it is a no-op on your favourite architecture so
> it costs you nothing to add it.
Yes, the fact Parisc...
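As a concrete illustration of the rule quoted above (whoever alters the data between kmap and kunmap owns the flush), here is a minimal sketch; zero_shared_page() is a hypothetical helper for illustration, not an existing kernel function.

#include <linux/mm.h>		/* struct page, PAGE_SIZE */
#include <linux/highmem.h>	/* kmap_atomic(), kunmap_atomic(), flush_dcache_page() */
#include <linux/string.h>	/* memset() */

/* Hypothetical helper: zero a page that userspace may also have mapped. */
static void zero_shared_page(struct page *page)
{
	void *base = kmap_atomic(page);

	memset(base, 0, PAGE_SIZE);	/* alter the data through the kernel alias */
	kunmap_atomic(base);
	flush_dcache_page(page);	/* compiles to a no-op on x86, a real flush on e.g. parisc */
}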
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On Tue, Mar 12, 2019 at 02:19:15PM -0700, James Bottomley wrote:
> I mean in the sequence
>
> flush_dcache_page(page);
> flush_dcache_page(page);
>
> The first flush_dcache_page did all the work and the second is a
> tightly pipelined no-op. That's what I mean by there not really being
> a double hit.
Ok I wasn't sure it was clear there was a double (profiling) hit on
that function...
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...or all architectures
> > that need cache flushing and then remove the explicit flushing in
> > the callers..
>
> Well, it's already done on parisc ... I can help with this if we agree
> it's the best way forward. It's really only architectures that
> implement flush_dcache_page that would need modifying.
>
> It may also improve performance because some kmap/use/flush/kunmap
> sequences have flush_dcache_page() instead of
> flush_kernel_dcache_page() and the former is hugely expensive and
> usually unnecessary because GUP already flushed all the user alias...
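A rough sketch of the cheaper sequence mentioned above, assuming the user aliases were already flushed when the page was pinned via get_user_pages(); fill_pinned_page() is a made-up name, and flush_kernel_dcache_page() is used here as it existed in kernels of that era.

#include <linux/types.h>	/* size_t */
#include <linux/highmem.h>	/* kmap(), kunmap(), flush_kernel_dcache_page() */
#include <linux/string.h>	/* memcpy() */

/* Hypothetical helper: fill a page that was pinned with get_user_pages(). */
static void fill_pinned_page(struct page *page, const void *src, size_t len)
{
	void *base = kmap(page);

	memcpy(base, src, len);		/* modify through the kernel alias only */
	flush_kernel_dcache_page(page);	/* flush just the kernel alias, cheaper than flush_dcache_page() */
	kunmap(page);
}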
2010 Dec 07
9
[PATCH] Btrfs: pwrite blocked when writing from the mmaped buffer of the same page
...ge, i, offset, count);
- pagefault_enable();
+ if (unlikely(iov_iter_fault_in_readable(i, count)))
+ return -EFAULT;
+
+ /* Copy data from userspace to the current page */
+ copied = iov_iter_copy_from_user(page, i, offset, count);
/* Flush processor's dcache for this page */
flush_dcache_page(page);
@@ -978,15 +974,6 @@ static ssize_t btrfs_file_aio_write(struct kiocb *iocb,
if (ret)
goto out;
- /*
- * fault pages before locking them in prepare_pages
- * to avoid recursive lock
- */
- if (unlikely(iov_iter_fault_in_readable(&i, write_bytes))) {
-...
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...ent that you worry?
>> If caches have virtual tags then kernel and userspace view of memory
>> might not be automatically in sync if they access memory
>> through different virtual addresses. You need to do things like
>> flush_cache_page, probably multiple times.
> "flush_dcache_page()"
I get this. Then I think the current set_bit_to_user() is suspicious, we
probably miss a flush_dcache_page() there:
static int set_bit_to_user(int nr, void __user *addr)
{
	unsigned long log = (unsigned long)addr;
	struct page *page;
	void *base;
	int b...
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...caches have virtual tags then kernel and userspace view of memory
>>>> might not be automatically in sync if they access memory
>>>> through different virtual addresses. You need to do things like
>>>> flush_cache_page, probably multiple times.
>>> "flush_dcache_page()"
>>
>> I get this. Then I think the current set_bit_to_user() is suspicious, we
>> probably miss a flush_dcache_page() there:
>>
>>
>> static int set_bit_to_user(int nr, void __user *addr)
>> {
>> 	unsigned long log = (unsigned long)ad...
2019 Apr 09
0
[PATCH net] vhost: flush dcache page when logging dirty pages
...host/vhost.c
> index 351af88231ad..34a1cedbc5ba 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -1711,6 +1711,7 @@ static int set_bit_to_user(int nr, void __user *addr)
>  	base = kmap_atomic(page);
>  	set_bit(bit, base);
>  	kunmap_atomic(base);
> +	flush_dcache_page(page);
>  	set_page_dirty_lock(page);
>  	put_page(page);
>  	return 0;
Ignoring the question of whether this actually helps, I doubt
flush_dcache_page is appropriate here. Pls take a look at
Documentation/core-api/cachetlb.rst as well as the actual
implementation.
I think you meant fl...
2023 Mar 02
1
[PATCH] ocfs2: Fix data corruption after failed write
...ot bother zeroing the page. Invalidate
+ * it instead so that writeback does not get confused
+ * put page & buffer dirty bits into inconsistent
+ * state.
+ */
+ block_invalidate_folio(page_folio(wc->w_target_page),
+ 0, PAGE_SIZE);
+ }
}
if (wc->w_target_page)
flush_dcache_page(wc->w_target_page);
--
2.35.3
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...the cache flushing on
copy_to_user_page and copy_user_page, not on kunmap.
#define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)
void __cpu_copy_user_page(void *kto, const void *kfrom, unsigned long vaddr)
{
	struct page *page = virt_to_page(kto);

	copy_page(kto, kfrom);
	flush_dcache_page(page);
}
#define copy_user_page(to, from, vaddr, page)	\
	do {	copy_page(to, from);		\
		sparc_flush_page_to_ram(page);	\
	} while (0)
And they do nothing on kunmap:
static inline void kunmap(struct page *page)
{
	BUG_ON(in_interrupt());
	if (!PageHighMem(page))
		return;
	kunmap_high(page);
}
v...
2019 Sep 06
0
[vhost:linux-next 13/15] arch/ia64/include/asm/page.h:51:23: warning: "hpage_shift" is not defined, evaluates to 0
...asm-ia64/page.h Linus Torvalds 2005-04-16 68 /*
^1da177e4c3f41 include/asm-ia64/page.h Linus Torvalds 2005-04-16 69 * clear_user_page() and copy_user_page() can't be inline functions because
^1da177e4c3f41 include/asm-ia64/page.h Linus Torvalds 2005-04-16 70 * flush_dcache_page() can't be defined until later...
^1da177e4c3f41 include/asm-ia64/page.h Linus Torvalds 2005-04-16 71 */
^1da177e4c3f41 include/asm-ia64/page.h Linus Torvalds 2005-04-16 72 #define clear_user_page(addr, vaddr, page) \
^1da177e4c3f41 include/asm-ia64/page.h Linus Tor...
2023 Mar 20
2
FAILED: patch "[PATCH] ocfs2: fix data corruption after failed write" failed to apply to 5.10-stable tree
...ot bother zeroing the page. Invalidate
+ * it instead so that writeback does not get confused
+ * put page & buffer dirty bits into inconsistent
+ * state.
+ */
+ block_invalidate_folio(page_folio(wc->w_target_page),
+ 0, PAGE_SIZE);
+ }
}
if (wc->w_target_page)
flush_dcache_page(wc->w_target_page);
2023 Mar 20
1
FAILED: patch "[PATCH] ocfs2: fix data corruption after failed write" failed to apply to 4.19-stable tree
...ot bother zeroing the page. Invalidate
+ * it instead so that writeback does not get confused
+ * put page & buffer dirty bits into inconsistent
+ * state.
+ */
+ block_invalidate_folio(page_folio(wc->w_target_page),
+ 0, PAGE_SIZE);
+ }
}
if (wc->w_target_page)
flush_dcache_page(wc->w_target_page);
2013 Nov 27
0
[PATCH 07/25] block: Convert bio_for_each_segment() to bvec_iter
...mem += vec.bv_len;
+ transfered += vec.bv_len;
}
bio_endio(bio, 0);
}
diff --git a/block/blk-core.c b/block/blk-core.c
index 5c2ab2c..5da8e90 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2746,10 +2746,10 @@ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
 void rq_flush_dcache_pages(struct request *rq)
 {
 	struct req_iterator iter;
-	struct bio_vec *bvec;
+	struct bio_vec bvec;
 	rq_for_each_segment(bvec, rq, iter)
-		flush_dcache_page(bvec->bv_page);
+		flush_dcache_page(bvec.bv_page);
 }
EXPORT_SYMBOL_GPL(rq_flush_dcache_pages);
#endif
diff --git a/block/blk-merge....