search for: copyxuser

Displaying 9 results from an estimated 13 matches for "copyxuser".

Did you mean: copy_user
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...do any TLB flushes (except on 32bit archs if
> > the page is above the direct mapping but it never happens on 64bit
> > archs).
>
> I see, I believe we don't care much about the performance of 32bit archs (or
> we can just fallback to copy_to_user() friends).

Using copyXuser is better I guess.

> Using direct mapping (I
> guess kernel will always try hugepage for that?) should be better and we can
> even use it for the data transfer not only for the metadata.
>
> Thanks

We can't really. The big issue is get user pages. Doing that on data path will...
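The trade-off discussed in this message can be sketched as two small helpers: one taking the uaccess copy path that is safe on every arch, and one assuming the metadata has already been pinned and mapped to a kernel virtual address. This is an illustration only, not the actual vhost code; the demo_* names and the vring_avail_demo struct are invented for the example:

    #include <linux/errno.h>
    #include <linux/types.h>
    #include <linux/uaccess.h>

    struct vring_avail_demo {
            __u16 flags;
            __u16 idx;
            __u16 ring[];
    };

    /* Path 1: uaccess copy -- works everywhere, including 32bit. */
    static int demo_get_avail_idx_copy(struct vring_avail_demo __user *avail,
                                       __u16 *idx)
    {
            return copy_from_user(idx, &avail->idx, sizeof(*idx)) ? -EFAULT : 0;
    }

    /* Path 2: metadata already pinned and mapped into kernel VA, so the
     * hot path is a plain load.  The pinning/mapping setup cost is what
     * makes this model unattractive to repeat on the data path. */
    static int demo_get_avail_idx_kva(struct vring_avail_demo *avail_kva,
                                      __u16 *idx)
    {
            *idx = avail_kva->idx;
            return 0;
    }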
2019 Mar 12
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...archs if
> > > > the page is above the direct mapping but it never happens on 64bit
> > > > archs).
> > > I see, I believe we don't care much about the performance of 32bit archs (or
> > > we can just fallback to copy_to_user() friends).
> > Using copyXuser is better I guess.
>
> Ok.
>
> > > Using direct mapping (I
> > > guess kernel will always try hugepage for that?) should be better and we can
> > > even use it for the data transfer not only for the metadata.
> > >
> > > ...
2019 Mar 12
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...y TLB flushes (except on 32bit archs if
>>> the page is above the direct mapping but it never happens on 64bit
>>> archs).
>> I see, I believe we don't care much about the performance of 32bit archs (or
>> we can just fallback to copy_to_user() friends).
> Using copyXuser is better I guess.

Ok.

>
>> Using direct mapping (I
>> guess kernel will always try hugepage for that?) should be better and we can
>> even use it for the data transfer not only for the metadata.
>>
>> Thanks
> We can't really. The big issue is get user p...
2019 Mar 12
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/11 9:43, Andrea Arcangeli wrote:
> On Mon, Mar 11, 2019 at 08:48:37AM -0400, Michael S. Tsirkin wrote:
>> Using copyXuser is better I guess.
> It certainly would be faster there, but I don't think it's needed if
> that would be the only use case left that justifies supporting two
> different models. On small 32bit systems with little RAM kmap won't
> perform measurably different on 32bit or 64b...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 08:48:37AM -0400, Michael S. Tsirkin wrote:
> Using copyXuser is better I guess.

It certainly would be faster there, but I don't think it's needed if
that would be the only use case left that justifies supporting two
different models. On small 32bit systems with little RAM kmap won't
perform measurably different on 32bit or 64bit systems. If the...
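For context on the kmap point: on 32bit, a pinned highmem page may have no permanent kernel mapping, so each access maps and unmaps it, while on 64bit kmap() simply returns page_address() and the cost disappears. A minimal sketch, with an invented helper name:

    #include <linux/highmem.h>
    #include <linux/types.h>

    static u16 demo_read_u16(struct page *page, unsigned int offset)
    {
            void *kva = kmap(page);  /* real mapping work only on 32bit highmem */
            u16 val = *(u16 *)(kva + offset);

            kunmap(page);
            return val;
    }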
2019 Mar 08
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jason,

On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> Just to make sure I understand here. For boosting through huge TLB, do
> you mean we can do that in the future (e.g. by mapping more userspace
> pages to kernel) or it can be done by this series (only about three 4K
> pages were vmapped per virtqueue)?

When I answered about the advantages of mmu notifier and
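The "three 4K pages vmapped per virtqueue" refers to pinning the userspace metadata pages and giving them one contiguous kernel virtual address. A hedged sketch of that setup step, not the series' actual code: demo_map_metadata is invented, error handling is minimal, and it uses today's gup_flags signature of get_user_pages_fast():

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    static void *demo_map_metadata(unsigned long uaddr, int npages,
                                   struct page **pages)
    {
            int got = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);

            if (got != npages) {
                    while (got > 0)
                            put_page(pages[--got]);
                    return NULL;
            }
            /* One contiguous kernel VA spanning the pinned user pages. */
            return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
    }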
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...Our internal cache line is around
> 32 bytes (some have 16 and some have 64) but that means we need 128
> flushes for a page ... we definitely can't pipeline them all. So I
> agree duplicate flush elimination would be a small improvement.
>
> James

I suspect we'll keep the copyXuser path around for 32 bit anyway -
right Jason? So we can also keep using that on parisc...

--
MST
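James' arithmetic: with a 32-byte cache line, one 4096-byte page needs 4096 / 32 = 128 per-line flushes. A toy illustration of why that loop cannot be avoided when the cache must be maintained line by line; flush_one_line is a stand-in parameter for the arch's per-line primitive, not a real kernel API:

    #define DEMO_PAGE_SIZE  4096u
    #define DEMO_LINE_SIZE    32u

    static void demo_flush_page(char *page, void (*flush_one_line)(void *))
    {
            unsigned int off;

            for (off = 0; off < DEMO_PAGE_SIZE; off += DEMO_LINE_SIZE)
                    flush_one_line(page + off);  /* 4096/32 = 128 iterations */
    }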
2019 Mar 12
9
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On Tue, Mar 12, 2019 at 10:59:09AM +0800, Jason Wang wrote:
>
> On 2019/3/12 2:14, David Miller wrote:
> > From: "Michael S. Tsirkin" <mst at redhat.com>
> > Date: Mon, 11 Mar 2019 09:59:28 -0400
> >
> > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> > > > On 2019/3/8 10:12, Christoph Hellwig wrote:
> > >
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...is around
>> 32 bytes (some have 16 and some have 64) but that means we need 128
>> flushes for a page ... we definitely can't pipeline them all. So I
>> agree duplicate flush elimination would be a small improvement.
>>
>> James
> I suspect we'll keep the copyXuser path around for 32 bit anyway -
> right Jason?

Yes since we don't want to slow down 32bit.

Thanks

> So we can also keep using that on parisc...
>
> --
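The outcome agreed on here (keep the copy_{to,from}_user() path on 32bit, use the mapped-metadata path where it pays off) could be keyed off CONFIG_64BIT along these lines; this is an illustration only, vhost has no helper by this name:

    static inline bool demo_use_mapped_metadata(void)
    {
    #ifdef CONFIG_64BIT
            return true;    /* vmap/direct-map fast path is cheap here */
    #else
            return false;   /* keep the copy_{to,from}_user() path on 32bit */
    #endif
    }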