search for: vumap

Displaying 4 results from an estimated 4 matches for "vumap".

Did you mean: vmap
2019 Mar 07
5
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 12:56:45PM -0500, Michael S. Tsirkin wrote: > On Thu, Mar 07, 2019 at 10:47:22AM -0500, Michael S. Tsirkin wrote: > > On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote: > > > +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = { > > > + .invalidate_range = vhost_invalidate_range, > > > +}; > > > + > >
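For context, the quoted fragment registers an invalidate_range callback with the MMU notifier framework. A minimal sketch of that shape is below; the names vhost_dev_sketch, vhost_invalidate_range_sketch and the embedded field layout are illustrative assumptions, not the structures used in the actual patch.

/*
 * Illustrative sketch only -- not the actual vhost patch.  It assumes a
 * hypothetical device structure that embeds a struct mmu_notifier and
 * shows the registration shape implied by the quoted fragment.
 */
#include <linux/mmu_notifier.h>
#include <linux/sched.h>
#include <linux/mm.h>

struct vhost_dev_sketch {
	struct mmu_notifier mmu_notifier;
	/* ... vq metadata mappings would live here ... */
};

static void vhost_invalidate_range_sketch(struct mmu_notifier *mn,
					  struct mm_struct *mm,
					  unsigned long start,
					  unsigned long end)
{
	/*
	 * Called while the primary MMU invalidates [start, end): any kernel
	 * mapping of userspace pages in that range must be torn down so the
	 * vhost worker cannot keep writing through a stale vmap() address.
	 */
}

static const struct mmu_notifier_ops vhost_mmu_notifier_ops_sketch = {
	.invalidate_range = vhost_invalidate_range_sketch,
};

static int vhost_register_notifier_sketch(struct vhost_dev_sketch *dev)
{
	dev->mmu_notifier.ops = &vhost_mmu_notifier_ops_sketch;
	/* Register against the owner process's mm, as vhost does at setup. */
	return mmu_notifier_register(&dev->mmu_notifier, current->mm);
}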
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...the page is allowed without any lock. > > Locking will happen once the userspace ptes are torn down through the > > page table lock. > > > Can I simply call set_page_dirty() before vunmap() in the mmu notifier > callback, or is there any reason that it must be called within vumap()? > > Thanks I think this is what Jerome is saying, yes. Maybe add a patch to the mmu notifier doc file, documenting this? > > > > > > It's because of all these issues that I preferred just accessing > > > userspace memory and handling faults. Unfortunately...
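The ordering being discussed (set_page_dirty() on the pinned pages first, then vunmap(), all from the invalidate_range callback) could look roughly like the sketch below. The vq_meta_map_sketch structure and its fields are assumptions for illustration, not vhost's actual bookkeeping, and the page references would still be released separately.

/*
 * Sketch of the ordering from the thread: mark pages dirty, then drop the
 * kernel mapping, from within the mmu notifier callback.
 */
#include <linux/mm.h>
#include <linux/vmalloc.h>

struct vq_meta_map_sketch {
	void *addr;		/* address returned by vmap() */
	struct page **pages;	/* pinned userspace pages backing it */
	int npages;
};

static void vq_meta_unmap_sketch(struct vq_meta_map_sketch *map)
{
	int i;

	/*
	 * Per the discussion: set_page_dirty() here is allowed without extra
	 * locking, because the userspace PTEs are only torn down later under
	 * the page table lock.
	 */
	for (i = 0; i < map->npages; i++)
		set_page_dirty(map->pages[i]);

	vunmap(map->addr);
	map->addr = NULL;
}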
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...and calling set_page_dirty on the page is allowed without any lock. > Locking will happen once the userspace ptes are torn down through the > page table lock. Can I simply call set_page_dirty() before vunmap() in the mmu notifier callback, or is there any reason that it must be called within vumap()? Thanks > >> It's because of all these issues that I preferred just accessing >> userspace memory and handling faults. Unfortunately there does not >> appear to exist an API that whitelists a specific driver along the lines >> of "I checked this code for spe...
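The alternative preferred at the end of the snippet, just accessing userspace memory and handling faults, amounts to reading the metadata through copy_from_user() instead of through a vmap()'d kernel address. A minimal sketch under that assumption follows; the avail_idx_sketch layout and the helper name are illustrative, not taken from the thread or from vhost.

/*
 * Sketch of the "access userspace memory and handle faults" alternative.
 * The struct and the __user pointer are illustrative.
 */
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct avail_idx_sketch {
	__le16 flags;
	__le16 idx;
};

/* Returns 0 on success, -EFAULT if the userspace page was not accessible. */
static int read_avail_idx_sketch(const void __user *uaddr, u16 *idx)
{
	struct avail_idx_sketch hdr;

	/* copy_from_user() absorbs the fault handling the thread refers to. */
	if (copy_from_user(&hdr, uaddr, sizeof(hdr)))
		return -EFAULT;

	*idx = le16_to_cpu(hdr.idx);
	return 0;
}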