search for: vhost_set_vmap_dirty

Displaying 9 results from an estimated 9 matches for "vhost_set_vmap_dirty".

2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote: > > On 2019/3/7 12:31, Michael S. Tsirkin wrote: > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used) > > > +{ > > > + int i; > > > + > > > + for (i = 0; i < used->npages; i++) > > > + set_page_dirty_lock(used->pages[i]); > > This seems to rely on page lock to mark page dirty. > > > > Could it happen...
2019 Mar 06
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...;num), false); > +} > + > +static int vhost_setup_used_vmap(struct vhost_virtqueue *vq, > + unsigned long used) > +{ > + return vhost_init_vmap(vq->dev, &vq->used_ring, used, > + vhost_get_used_size(vq, vq->num), true); > +} > + > +static void vhost_set_vmap_dirty(struct vhost_vmap *used) > +{ > + int i; > + > + for (i = 0; i < used->npages; i++) > + set_page_dirty_lock(used->pages[i]); This seems to rely on page lock to mark page dirty. Could it happen that page writeback will check the page, find it clean, and then you mark it d...
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/7 12:31, Michael S. Tsirkin wrote: >> +static void vhost_set_vmap_dirty(struct vhost_vmap *used) >> +{ >> + int i; >> + >> + for (i = 0; i < used->npages; i++) >> + set_page_dirty_lock(used->pages[i]); > This seems to rely on page lock to mark page dirty. > > Could it happen that page writeback will check the > page,...
2019 Mar 06
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...g, desc, + vhost_get_desc_size(vq, vq->num), false); +} + +static int vhost_setup_used_vmap(struct vhost_virtqueue *vq, + unsigned long used) +{ + return vhost_init_vmap(vq->dev, &vq->used_ring, used, + vhost_get_used_size(vq, vq->num), true); +} + +static void vhost_set_vmap_dirty(struct vhost_vmap *used) +{ + int i; + + for (i = 0; i < used->npages; i++) + set_page_dirty_lock(used->pages[i]); +} + void vhost_dev_cleanup(struct vhost_dev *dev) { int i; @@ -662,8 +815,12 @@ void vhost_dev_cleanup(struct vhost_dev *dev) kthread_stop(dev->worker); dev->...
2019 Mar 07
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:34:39AM -0500, Michael S. Tsirkin wrote: > On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote: > > > > On 2019/3/7 12:31, Michael S. Tsirkin wrote: > > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used) > > > > +{ > > > > + int i; > > > > + > > > > + for (i = 0; i < used->npages; i++) > > > > + set_page_dirty_lock(used->pages[i]); > > > This seems to rely on page lock to mark page dirty. > >...
2019 Mar 06
12
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
This series tries to access virtqueue metadata through a kernel virtual address instead of the copy_user() friends, since those carry too much overhead: checks, speculation barriers, or even hardware feature toggling. This is done by setting up a kernel address through vmap() and registering an MMU notifier for invalidation. Tests show about a 24% improvement in TX PPS; TCP_STREAM doesn't see an obvious improvement.