search for: vhost_set_map_dirty

Displaying 14 results from an estimated 34 matches for "vhost_set_map_dirty".

2019 Jul 23
2
[PATCH 5/6] vhost: mark dirty pages during map uninit
...--git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c > index 89c9f08b5146..5b8821d00fe4 100644 > --- a/drivers/vhost/vhost.c > +++ b/drivers/vhost/vhost.c > @@ -306,6 +306,18 @@ static void vhost_map_unprefetch(struct vhost_map *map) > kfree(map); > } > > +static void vhost_set_map_dirty(struct vhost_virtqueue *vq, > + struct vhost_map *map, int index) > +{ > + struct vhost_uaddr *uaddr = &vq->uaddrs[index]; > + int i; > + > + if (uaddr->write) { > + for (i = 0; i < map->npages; i++) > + set_page_dirty(map->pages[i]); > + } > +...
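The excerpt above cuts the helper off mid-body; the full function is visible across the hits below. Reassembled out of the quoted diff for readability (a reconstruction from the visible snippets on this page, not a copy of the applied kernel source):

    static void vhost_set_map_dirty(struct vhost_virtqueue *vq,
                                    struct vhost_map *map, int index)
    {
            struct vhost_uaddr *uaddr = &vq->uaddrs[index];
            int i;

            /* Only writable mappings can have been dirtied by the
             * kernel-side accessors, so only those pages need
             * set_page_dirty() before the map is torn down. */
            if (uaddr->write) {
                    for (i = 0; i < map->npages; i++)
                            set_page_dirty(map->pages[i]);
            }
    }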
2019 Jul 23
0
[PATCH 5/6] vhost: mark dirty pages during map uninit
..., 16 insertions(+), 6 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 89c9f08b5146..5b8821d00fe4 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -306,6 +306,18 @@ static void vhost_map_unprefetch(struct vhost_map *map) kfree(map); } +static void vhost_set_map_dirty(struct vhost_virtqueue *vq, + struct vhost_map *map, int index) +{ + struct vhost_uaddr *uaddr = &vq->uaddrs[index]; + int i; + + if (uaddr->write) { + for (i = 0; i < map->npages; i++) + set_page_dirty(map->pages[i]); + } +} + static void vhost_uninit_vq_maps(struct vhost...
2019 Jul 23
0
[PATCH 5/6] vhost: mark dirty pages during map uninit
...ivers/vhost/vhost.c >> index 89c9f08b5146..5b8821d00fe4 100644 >> --- a/drivers/vhost/vhost.c >> +++ b/drivers/vhost/vhost.c >> @@ -306,6 +306,18 @@ static void vhost_map_unprefetch(struct vhost_map *map) >> kfree(map); >> } >> >> +static void vhost_set_map_dirty(struct vhost_virtqueue *vq, >> + struct vhost_map *map, int index) >> +{ >> + struct vhost_uaddr *uaddr = &vq->uaddrs[index]; >> + int i; >> + >> + if (uaddr->write) { >> + for (i = 0; i < map->npages; i++) >> + set_page_dirty(ma...
2019 Aug 07
12
[PATCH V4 0/9] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review. Changes from V3: - remove the unnecessary patch Changes from V2: - use the seqlock helper to synchronize the MMU notifier with the vhost worker Changes from V1: - try not to use RCU to synchronize the MMU notifier with the vhost worker - set dirty pages after there are no readers - return -EAGAIN only when we find the
2019 Aug 07
11
[PATCH V3 00/10] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review. Changes from V2: - use the seqlock helper to synchronize the MMU notifier with the vhost worker Changes from V1: - try not to use RCU to synchronize the MMU notifier with the vhost worker - set dirty pages after there are no readers - return -EAGAIN only when we find that the range overlaps with the metadata Jason Wang (9):
2019 Aug 09
11
[PATCH V5 0/9] Fixes for vhost metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review. Changes from V4: - switch to using a spinlock to synchronize the MMU notifier with accessors Changes from V3: - remove the unnecessary patch Changes from V2: - use the seqlock helper to synchronize the MMU notifier with the vhost worker Changes from V1: - try not to use RCU to synchronize the MMU notifier with the vhost
2019 Jul 23
10
[PATCH 0/6] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review. Jason Wang (6): vhost: don't set uaddr for invalid address vhost: validate MMU notifier registration vhost: fix vhost map leak vhost: reset invalidate_count in vhost_set_vring_num_addr() vhost: mark dirty pages during map uninit vhost: don't do synchronize_rcu() in
2019 Jul 31
14
[PATCH V2 0/9] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review. Changes from V1: - Try not to use RCU to synchronize the MMU notifier with the vhost worker - set dirty pages after there are no readers - return -EAGAIN only when we find that the range overlaps with the metadata Jason Wang (9): vhost: don't set uaddr for invalid address vhost: validate MMU notifier
2019 Jul 31
1
[PATCH V2 9/9] vhost: do not return -EAGAIN for non-blocking invalidation too early
...range_overlap(uaddr, start, end)) > - return; > + return 0; > + else if (!blockable) > + return -EAGAIN; > > spin_lock(&vq->mmu_lock); > ++vq->invalidate_count; > @@ -423,6 +426,8 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq, > vhost_set_map_dirty(vq, map, index); > vhost_map_unprefetch(map); > } > + > + return 0; > } > > static void vhost_invalidate_vq_end(struct vhost_virtqueue *vq, > @@ -443,18 +448,19 @@ static int vhost_invalidate_range_start(struct mmu_notifier *mn, > { > struct vhost_dev *dev...
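The hunk above is the crux of this patch: when the MMU notifier runs in non-blocking mode, the callback must not sleep on mmu_lock, so it reports -EAGAIN up front for any range that overlaps the metadata. A minimal sketch of the control flow (the parameter list and the elided body are assumptions pieced together from this and the sibling hunks on this page, not the exact patched source):

    static int vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
                                         struct vhost_uaddr *uaddr,
                                         unsigned long start,
                                         unsigned long end,
                                         bool blockable)
    {
            /* A range that never touches the metadata is always fine. */
            if (!vhost_map_range_overlap(uaddr, start, end))
                    return 0;
            /* Overlapping range, but the caller cannot sleep: punt
             * with -EAGAIN instead of contending on mmu_lock. */
            else if (!blockable)
                    return -EAGAIN;

            spin_lock(&vq->mmu_lock);
            ++vq->invalidate_count;
            /* ... mark the affected maps dirty and unprefetch them,
             * as in the vhost_set_map_dirty() hunk quoted above ... */
            spin_unlock(&vq->mmu_lock);

            return 0;
    }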
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...struct vhost_virtqueue *vq) > > spin_lock(&vq->mmu_lock); > for (i = 0; i < VHOST_NUM_ADDRS; i++) { > - map[i] = rcu_dereference_protected(vq->maps[i], > - lockdep_is_held(&vq->mmu_lock)); > + map[i] = vq->maps[i]; > if (map[i]) { > vhost_set_map_dirty(vq, map[i], i); > - rcu_assign_pointer(vq->maps[i], NULL); > + vq->maps[i] = NULL; > } > } > spin_unlock(&vq->mmu_lock); > > - /* No need for synchronize_rcu() or kfree_rcu() since we are > - * serialized with memory accessors (e.g vq mutex held)....
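Pieced together from the quoted hunk, the uninit path after this patch reads roughly as follows; only the loop is visible in the excerpt, so the declarations and the tail are assumptions:

    static void vhost_uninit_vq_maps(struct vhost_virtqueue *vq)
    {
            struct vhost_map *map[VHOST_NUM_ADDRS];
            int i;

            spin_lock(&vq->mmu_lock);
            for (i = 0; i < VHOST_NUM_ADDRS; i++) {
                    /* Plain loads and stores: mmu_lock now serializes
                     * against the MMU notifier, so the old
                     * rcu_dereference_protected()/rcu_assign_pointer()
                     * pairs are no longer needed. */
                    map[i] = vq->maps[i];
                    if (map[i]) {
                            vhost_set_map_dirty(vq, map[i], i);
                            vq->maps[i] = NULL;
                    }
            }
            spin_unlock(&vq->mmu_lock);

            /* ... the detached maps can then be freed outside the
             * lock, with no synchronize_rcu()/kfree_rcu() ... */
    }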
2019 Aug 07
2
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...struct vhost_virtqueue *vq) > > spin_lock(&vq->mmu_lock); > for (i = 0; i < VHOST_NUM_ADDRS; i++) { > - map[i] = rcu_dereference_protected(vq->maps[i], > - lockdep_is_held(&vq->mmu_lock)); > + map[i] = vq->maps[i]; > if (map[i]) { > vhost_set_map_dirty(vq, map[i], i); > - rcu_assign_pointer(vq->maps[i], NULL); > + vq->maps[i] = NULL; > } > } > spin_unlock(&vq->mmu_lock); > > - /* No need for synchronize_rcu() or kfree_rcu() since we are > - * serialized with memory accessors (e.g vq mutex held)....
2019 Sep 08
3
[PATCH 2/2] vhost: re-introducing metadata acceleration through kernel virtual address
...tic void vhost_vq_meta_reset(struct vhost_dev *d) > __vhost_vq_meta_reset(d->vqs[i]); > } > > +#if VHOST_ARCH_CAN_ACCEL_UACCESS > +static void vhost_map_unprefetch(struct vhost_map *map) > +{ > + kfree(map->pages); > + kfree(map); > +} > + > +static void vhost_set_map_dirty(struct vhost_virtqueue *vq, > + struct vhost_map *map, int index) > +{ > + struct vhost_uaddr *uaddr = &vq->uaddrs[index]; > + int i; > + > + if (uaddr->write) { > + for (i = 0; i < map->npages; i++) > + set_page_dirty(map->pages[i]); > + } > +...
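The re-introduction brings both helpers back, now under a VHOST_ARCH_CAN_ACCEL_UACCESS guard. Pulled out of the quoted diff for readability (vhost_set_map_dirty is cut off in this excerpt; its body matches the July version reproduced near the top of this page):

    /* Release a prefetched map: free the page array first, then the
     * map structure itself. */
    static void vhost_map_unprefetch(struct vhost_map *map)
    {
            kfree(map->pages);
            kfree(map);
    }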
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...>> spin_lock(&vq->mmu_lock); >> for (i = 0; i < VHOST_NUM_ADDRS; i++) { >> - map[i] = rcu_dereference_protected(vq->maps[i], >> - lockdep_is_held(&vq->mmu_lock)); >> + map[i] = vq->maps[i]; >> if (map[i]) { >> vhost_set_map_dirty(vq, map[i], i); >> - rcu_assign_pointer(vq->maps[i], NULL); >> + vq->maps[i] = NULL; >> } >> } >> spin_unlock(&vq->mmu_lock); >> >> - /* No need for synchronize_rcu() or kfree_rcu() since we are >> - * serialized with me...
2019 Aug 07
0
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...,16 @@ static void vhost_uninit_vq_maps(struct vhost_virtqueue *vq) spin_lock(&vq->mmu_lock); for (i = 0; i < VHOST_NUM_ADDRS; i++) { - map[i] = rcu_dereference_protected(vq->maps[i], - lockdep_is_held(&vq->mmu_lock)); + map[i] = vq->maps[i]; if (map[i]) { vhost_set_map_dirty(vq, map[i], i); - rcu_assign_pointer(vq->maps[i], NULL); + vq->maps[i] = NULL; } } spin_unlock(&vq->mmu_lock); - /* No need for synchronize_rcu() or kfree_rcu() since we are - * serialized with memory accessors (e.g vq mutex held). + /* No need for synchronization since w...
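One consequence of dropping RCU that no excerpt on this page shows directly: the per-virtqueue map pointers would also lose their __rcu annotation. A hypothetical before/after of the vhost_virtqueue field, inferred from the conversion in the hunks above rather than taken from the patch itself:

    /* Before: RCU-managed, accessed via rcu_dereference() and
     * rcu_assign_pointer() */
    struct vhost_map __rcu *maps[VHOST_NUM_ADDRS];

    /* After: a plain pointer, protected by vq->mmu_lock */
    struct vhost_map *maps[VHOST_NUM_ADDRS];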