Displaying 7 results from an estimated 7 matches for "cachetlb".
2019 Apr 09 · 2 replies · [PATCH net] vhost: flush dcache page when logging dirty pages
We set the dirty bit through kmaps and access it through a kernel
virtual address; this may result in aliasing in virtually tagged
caches, which requires a dcache flush afterwards.
Cc: Christoph Hellwig <hch at infradead.org>
Cc: James Bottomley <James.Bottomley at HansenPartnership.com>
Cc: Andrea Arcangeli <aarcange at redhat.com>
Fixes: 3a4d5c94e9593 ("vhost_net: a
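The fix this excerpt describes follows the pattern sketched below: write the dirty bit through the kernel's temporary kmap alias, then flush the dcache so the userspace alias of the same page sees the update. This is a hedged sketch, not the actual patch; the helper name is illustrative.

```c
#include <linux/highmem.h>
#include <linux/bitops.h>
#include <linux/mm.h>

/* Illustrative helper, not the real vhost code: set a dirty bit via a
 * temporary kernel mapping, then flush the dcache so that userspace,
 * which maps the same page at a different virtual address, observes
 * the update on virtually tagged cache architectures (e.g. parisc).
 * On x86, flush_dcache_page() is a no-op, which is why a missing
 * flush goes unnoticed there. */
static void log_dirty_bit(struct page *page, unsigned long nr)
{
	void *base = kmap_atomic(page);

	set_bit(nr, (unsigned long *)base);  /* write via the kernel alias */
	kunmap_atomic(base);
	flush_dcache_page(page);             /* publish to the user alias */
}
```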
2005 Dec 07 · 6 replies · PG_arch_1
Xenlinux uses a special architecture-dependent bit in the page structure,
called PG_arch_1, to indicate that a page is "foreign" (PG_foreign).
It also apparently uses it to determine whether a page is pinned (PG_pinned).
Linux/ia64 (and apparently Linux/ppc and Linux/ppc64) uses the PG_arch_1
bit for other purposes. On Linux/ia64, it is used to determine if
the instruction cache needs to be
2019 Apr 09 · 0 replies · [PATCH net] vhost: flush dcache page when logging dirty pages
...redhat.com>
I am not sure this is a good idea.
The region in question is supposed to be accessed
by userspace at the same time, through atomic operations.
How do we know userspace didn't access it just before?
Is that an issue at all given we use
atomics for access? Documentation/core-api/cachetlb.rst does
not mention atomics.
Which architectures are affected?
Assuming atomics actually do need a flush, then don't we need
a flush in the other direction too? How are atomics
supposed to work at all?
I really think we need new APIs along the lines of
set_bit_to_user.
> ---
> driver...
2019 Mar 12 · 0 replies · [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...atomic ops per bit is way too expensive.
> > >
> > >
> > > Yes.
> > >
> > > Thanks
> >
> > See James's reply - I stand corrected: we do kunmap, so no need to
> > flush.
>
> Well, I said that's what we do on parisc. The cachetlb document
> definitely says that if you alter the data between kmap and kunmap
> you are responsible for the flush. It's just that flush_dcache_page()
> is a no-op on x86, so they never remember to add it, and since it
> will crash parisc if you get it wrong, we finally gave up trying to m...
2019 Mar 12 · 9 replies · [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On Tue, Mar 12, 2019 at 10:59:09AM +0800, Jason Wang wrote:
>
> On 2019/3/12 2:14, David Miller wrote:
> > From: "Michael S. Tsirkin" <mst at redhat.com>
> > Date: Mon, 11 Mar 2019 09:59:28 -0400
> >
> > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> > > > On 2019/3/8 10:12, Christoph Hellwig wrote:
> > >