search for: copy_to_user_page

Displaying 8 results from an estimated 13 matches for "copy_to_user_page".

2019 Mar 12
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
> ...fter the kmap. Supposedly with vmap, the vmap layer should have taken
> care of that (I didn't verify that yet).

vmap_page_range()/free_unmap_vmap_area() will call
flush_cache_vmap()/flush_cache_vunmap(), so the vmap layer should be ok.
Thanks.

> There are some accessories like copy_to_user_page()/copy_from_user_page()
> that could work and obviously define to a raw memcpy on x86 (the main con
> is they don't provide word-granular access), and at least on sparc they're
> tailored to ptrace assumptions, so then we'd need to evaluate what happens
> if this is used outs...
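In other words, the vmap layer brackets the PTE setup and teardown with the
arch cache hooks itself. A minimal sketch of that bracketing, with made-up
helper names standing in for the real mm/vmalloc.c internals (only the
placement of flush_cache_vmap()/flush_cache_vunmap() reflects the point
above):

#include <linux/mm.h>
#include <asm/cacheflush.h>

/* placeholders for the PTE work the real mm/vmalloc.c code performs */
static int vmap_pte_setup(unsigned long start, unsigned long end,
			  pgprot_t prot, struct page **pages);
static void vmap_pte_teardown(unsigned long start, unsigned long end);

/* establish the kernel mapping, then flush; a no-op on x86, real dcache
 * maintenance on virtually indexed cache architectures */
static int map_area_sketch(unsigned long start, unsigned long end,
			   pgprot_t prot, struct page **pages)
{
	int err = vmap_pte_setup(start, end, prot, pages);

	if (err)
		return err;
	flush_cache_vmap(start, end);
	return 0;
}

/* flush the alias before the PTEs go away */
static void unmap_area_sketch(unsigned long start, unsigned long end)
{
	flush_cache_vunmap(start, end);
	vmap_pte_teardown(start, end);
}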
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
> ...timize you can use the referenced and dirty bits on the kmapped pte to
> tell you what operation to do, but if your flush is your invalidate, you
> simply assume the data needs flushing on kunmap without checking anything.

Except other archs like arm64 and sparc do the cache flushing on
copy_to_user_page and copy_user_page, not on kunmap.

#define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)

void __cpu_copy_user_page(void *kto, const void *kfrom, unsigned long vaddr)
{
	struct page *page = virt_to_page(kto);

	copy_page(kto, kfrom);
	flush_dcache_page(page);
}

#define copy...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...opy-user model requires. As a rule of thumb, any arch where copy_user_page
doesn't define as copy_page will require some additional cache flushing after
the kmap. Supposedly with vmap, the vmap layer should have taken care of that
(I didn't verify that yet). There are some accessories like
copy_to_user_page()/copy_from_user_page() that could work and obviously
define to a raw memcpy on x86 (the main con is they don't provide
word-granular access), and at least on sparc they're tailored to ptrace
assumptions, so then we'd need to evaluate what happens if this is used
outside of ptrace contex...
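For reference, on x86 these accessors come from the generic header and really
are just memcpy plus an icache hook that is a no-op there. A rough sketch of
what asm-generic/cacheflush.h provides (the flush hook name varies a bit
across kernel versions, so treat this as illustrative):

#include <linux/string.h>
#include <asm/cacheflush.h>

#define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
	do {							\
		memcpy(dst, src, len);				\
		flush_icache_user_range(vma, page, vaddr, len);	\
	} while (0)

#define copy_from_user_page(vma, page, vaddr, dst, src, len)	\
	memcpy(dst, src, len)

Architectures such as sparc override both with real cache maintenance keyed
to the user address, which is what the ptrace-oriented assumptions mentioned
above refer to.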
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote:
>
> On 2019/3/9 ??3:48, Andrea Arcangeli wrote:
> > Hello Jason,
> >
> > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> > > Just to make sure I understand here. For boosting through huge TLB, do
> > > you mean we can do that in the future (e.g. by mapping more userspace
2019 Mar 12
9
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On Tue, Mar 12, 2019 at 10:59:09AM +0800, Jason Wang wrote:
>
> On 2019/3/12 ??2:14, David Miller wrote:
> > From: "Michael S. Tsirkin" <mst at redhat.com>
> > Date: Mon, 11 Mar 2019 09:59:28 -0400
> >
> > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> > > > On 2019/3/8 ??10:12, Christoph Hellwig wrote:
2007 Apr 18
0
[RFC/PATCH LGUEST X86_64 09/13] lguest64 devices
...>len[si]
+	       && di < LGUEST_MAX_DMA_SECTIONS && dst->len[di]) {
+		u32 len = min(src->len[si] - srcoff, dst->len[di] - dstoff);
+
+		if (!maddr)
+			maddr = kmap(pages[di]);
+
+		/* FIXME: This is not completely portable, since
+		   archs do different things for copy_to_user_page. */
+		if (copy_from_user(maddr + (dst->addr[di] + dstoff)%PAGE_SIZE,
+				   (void *__user)src->addr[si], len) != 0) {
+			totlen = 0;
+			break;
+		}
+
+		totlen += len;
+		srcoff += len;
+		dstoff += len;
+		if (srcoff == src->len[si]) {
+			si++;
+			srcoff = 0;
+		}
+		if (dstoff ==...
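The FIXME is about cache aliasing: the host writes the data through a
kmap()'ed kernel alias while the guest reads it through its own mapping,
which is exactly the situation copy_to_user_page() handles on architectures
with virtually indexed caches. A hedged sketch of one portable way to do such
a copy (not the lguest code; the helper name and structure are made up for
illustration):

#include <linux/highmem.h>
#include <linux/uaccess.h>

/*
 * Copy user data into a page that another context will read through a
 * different mapping.  flush_dcache_page() is a no-op on x86 but performs
 * the dcache maintenance on aliasing architectures, roughly what the
 * copy_to_user_page() accessor would otherwise have done.
 */
static int copy_user_to_shared_page(struct page *page, unsigned int offset,
				    const void __user *from, unsigned int len)
{
	void *maddr = kmap(page);
	int ret = 0;

	if (copy_from_user(maddr + offset, from, len))
		ret = -EFAULT;

	flush_dcache_page(page);
	kunmap(page);
	return ret;
}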
2007 May 09
1
[patch 3/9] lguest: the host code
...>len[si]
+	       && di < LGUEST_MAX_DMA_SECTIONS && dst->len[di]) {
+		u32 len = min(src->len[si] - srcoff, dst->len[di] - dstoff);
+
+		if (!maddr)
+			maddr = kmap(pages[di]);
+
+		/* FIXME: This is not completely portable, since
+		   archs do different things for copy_to_user_page. */
+		if (copy_from_user(maddr + (dst->addr[di] + dstoff)%PAGE_SIZE,
+				   (void *__user)src->addr[si], len) != 0) {
+			kill_guest(srclg, "bad address in sending DMA");
+			totlen = 0;
+			break;
+		}
+
+		totlen += len;
+		srcoff += len;
+		dstoff += len;
+		if (srcoff == src-...
2007 Sep 25
50
[patch 00/43] lguest: Patches for 2.6.24 (and patchbomb test)
Hi all,

These are the patches I'm planning to submit for 2.6.24. Comments gratefully
accepted. Along with the usual cleanups and improvements are Jes'
de-i386-ification patches, and a new "virtio" mechanism designed to be shared
with KVM (and hopefully other hypervisors).

Cheers,
Rusty.

 Documentation/lguest/Makefile | 30
 Documentation/lguest/lguest.c