search for: vhost_invalidate_vq_start

Displaying 20 results from an estimated 34 matches for "vhost_invalidate_vq_start".

2019 Jul 31
1
[PATCH V2 9/9] vhost: do not return -EAGAIN for non-blocking invalidation too early
...ivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index fc2da8a0c671..96c6aeb1871f 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -399,16 +399,19 @@ static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
>  	smp_mb();
>  }
>
> -static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
> -				      int index,
> -				      unsigned long start,
> -				      unsigned long end)
> +static int vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
> +				     int index,
> +				     unsigned long start,
> +				     unsigned long end,
> ...
2019 Jul 31
0
[PATCH V2 9/9] vhost: do not return -EAGAIN for non-blocking invalidation too early
...ons(+), 13 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index fc2da8a0c671..96c6aeb1871f 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -399,16 +399,19 @@ static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
 	smp_mb();
 }

-static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
-				      int index,
-				      unsigned long start,
-				      unsigned long end)
+static int vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
+				     int index,
+				     unsigned long start,
+				     unsigned long end,
+				     bool blockable)
 {
 	struct vh...
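Read together, the two copies of this hunk show the point of the change: vhost_invalidate_vq_start() now returns int and takes a blockable flag, so the MMU notifier can fail with -EAGAIN, but only after the overlap check (the V2 cover letter below spells this out: return -EAGAIN only when the range overlaps the metadata). A minimal sketch of that control flow; everything past the overlap check is truncated out of the hunk and is assumed here, not the author's code:

static int vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
				     int index,
				     unsigned long start,
				     unsigned long end,
				     bool blockable)
{
	struct vhost_uaddr *uaddr = &vq->uaddrs[index];

	/* No overlap with this vq's metadata: nothing to invalidate,
	 * so a non-blocking caller must not be failed with -EAGAIN. */
	if (!vhost_map_range_overlap(uaddr, start, end))
		return 0;

	/* The range overlaps, and tearing down the map may have to
	 * wait for readers; only now is -EAGAIN the right answer for
	 * a caller that cannot block. */
	if (!blockable)
		return -EAGAIN;

	/* ... tear down vq->maps[index] (elided in the excerpt) ... */
	return 0;
}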
2019 Jul 23
2
[PATCH 5/6] vhost: mark dirty pages during map uninit
...[i],
>  					lockdep_is_held(&vq->mmu_lock));
> -		if (map[i])
> +		if (map[i]) {
> +			vhost_set_map_dirty(vq, map[i], i);
>  			rcu_assign_pointer(vq->maps[i], NULL);
> +		}
>  	}
>  	spin_unlock(&vq->mmu_lock);
>
> @@ -354,7 +368,6 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>  {
>  	struct vhost_uaddr *uaddr = &vq->uaddrs[index];
>  	struct vhost_map *map;
> -	int i;
>
>  	if (!vhost_map_range_overlap(uaddr, start, end))
>  		return;
> @@ -365,10 +378,7 @@ static void vhost_invalidate_vq_start(struct vhost_v...
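The hunk only shows the call site: each map is reported dirty before its pointer is cleared under vq->mmu_lock. The helper body is not in the excerpt; a plausible sketch, assuming struct vhost_map tracks its pinned pages in pages/npages and that only writable uaddrs need marking (all three names are assumptions, not the author's layout):

static void vhost_set_map_dirty(struct vhost_virtqueue *vq,
				struct vhost_map *map, int index)
{
	struct vhost_uaddr *uaddr = &vq->uaddrs[index];
	int i;

	/* Pages the device may have written through the map must be
	 * marked dirty before they are unpinned, otherwise writeback
	 * or swap can lose the data. */
	if (uaddr->write) {
		for (i = 0; i < map->npages; i++)
			set_page_dirty_lock(map->pages[i]);
	}
}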
2019 Jul 31
14
[PATCH V2 0/9] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata
acceleration series. Please review.

Changes from V1:
- try not to use RCU to synchronize the MMU notifier with the vhost worker
- set dirty pages after there are no readers
- return -EAGAIN only when we find the range overlaps with metadata

Jason Wang (9):
  vhost: don't set uaddr for invalid address
  vhost: validate MMU notifier
2019 Aug 07
11
[PATCH V3 00/10] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata
acceleration series. Please review.

Changes from V2:
- use the seqlock helpers to synchronize the MMU notifier with the vhost worker

Changes from V1:
- try not to use RCU to synchronize the MMU notifier with the vhost worker
- set dirty pages after there are no readers
- return -EAGAIN only when we find the range overlaps with metadata

Jason Wang (9):
2019 Aug 09
11
[PATCH V5 0/9] Fixes for vhost metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata
acceleration series. Please review.

Changes from V4:
- switch to using a spinlock to synchronize the MMU notifier with the accessors

Changes from V3:
- remove the unnecessary patch

Changes from V2:
- use the seqlock helpers to synchronize the MMU notifier with the vhost worker

Changes from V1:
- try not to use RCU to synchronize the MMU notifier with the vhost
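V5 drops the counter games entirely: both the metadata accessors and the invalidation path take vq->mmu_lock, which the hunks quoted elsewhere on this page show the notifier already holds. A sketch of what an accessor could look like under that scheme; the helper name, the map->addr field and the VHOST_ADDR_AVAIL index are illustrative assumptions, not the merged code:

/* Sketch of a spinlock-synchronized accessor.  Holding vq->mmu_lock
 * across the access means the MMU notifier, which takes the same
 * lock before clearing vq->maps[index], cannot free the map
 * underneath us; when the map is already gone we return -EAGAIN so
 * the caller can fall back to copy_from_user(). */
static int vhost_get_avail_idx_sketch(struct vhost_virtqueue *vq,
				      __virtio16 *idx)
{
	struct vhost_map *map;
	int ret = -EAGAIN;

	spin_lock(&vq->mmu_lock);
	map = vq->maps[VHOST_ADDR_AVAIL];	/* index name assumed */
	if (map) {
		struct vring_avail *avail = map->addr;	/* field assumed */

		*idx = avail->idx;
		ret = 0;
	}
	spin_unlock(&vq->mmu_lock);
	return ret;
}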
2019 Aug 07
12
[PATCH V4 0/9] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata
acceleration series. Please review.

Changes from V3:
- remove the unnecessary patch

Changes from V2:
- use the seqlock helpers to synchronize the MMU notifier with the vhost worker

Changes from V1:
- try not to use RCU to synchronize the MMU notifier with the vhost worker
- set dirty pages after there are no readers
- return -EAGAIN only when we find the
2019 Jul 23
0
[PATCH 5/6] vhost: mark dirty pages during map uninit
...map[i] = rcu_dereference_protected(vq->maps[i],
 					lockdep_is_held(&vq->mmu_lock));
-		if (map[i])
+		if (map[i]) {
+			vhost_set_map_dirty(vq, map[i], i);
 			rcu_assign_pointer(vq->maps[i], NULL);
+		}
 	}
 	spin_unlock(&vq->mmu_lock);

@@ -354,7 +368,6 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
 {
 	struct vhost_uaddr *uaddr = &vq->uaddrs[index];
 	struct vhost_map *map;
-	int i;

 	if (!vhost_map_range_overlap(uaddr, start, end))
 		return;
@@ -365,10 +378,7 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
 	map = rcu_dereference_pr...
2019 Jul 23
0
[PATCH 5/6] vhost: mark dirty pages during map uninit
...->mmu_lock));
>> -		if (map[i])
>> +		if (map[i]) {
>> +			vhost_set_map_dirty(vq, map[i], i);
>>  			rcu_assign_pointer(vq->maps[i], NULL);
>> +		}
>>  	}
>>  	spin_unlock(&vq->mmu_lock);
>>
>> @@ -354,7 +368,6 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>>  {
>>  	struct vhost_uaddr *uaddr = &vq->uaddrs[index];
>>  	struct vhost_map *map;
>> -	int i;
>>
>>  	if (!vhost_map_range_overlap(uaddr, start, end))
>>  		return;
>> @@ -365,10 +378,7 @@ static void vh...
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...);
> +		}
> +	}

This is basically read_seqcount_begin() with a schedule instead of cpu_relax.

> +	/* Make sure ref counter was checked before any other
> +	 * operations that was dene on map. */
> +	smp_mb();

should be in a smp_load_acquire()

> +}
> +
>  static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>  				      int index,
>  				      unsigned long start,
> @@ -376,16 +413,15 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>  	spin_lock(&vq->mmu_lock);
>  	++vq->invalidate_count;
>
> -	map = rcu_dereference_prot...
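The reviewer's point: the trailing smp_mb() only orders the counter check against the later free, which is exactly what an acquire load provides. A sketch of the suggested shape; the helper name is hypothetical, and vq->ref is taken from the V2 hunk quoted further down the page:

/* Sketch of the reviewer's suggestion, not the merged code.  The
 * acquire load pairs with the reader's release store when it bumps
 * vq->ref, so once a new value is observed, everything the old
 * reader did to the map is visible and no separate smp_mb() is
 * needed after the loop. */
static void vhost_wait_ref_flip(struct vhost_virtqueue *vq, int ref)
{
	while (smp_load_acquire(&vq->ref) == ref)
		schedule();
}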
2019 Jul 23
10
[PATCH 0/6] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata
acceleration series. Please review.

Jason Wang (6):
  vhost: don't set uaddr for invalid address
  vhost: validate MMU notifier registration
  vhost: fix vhost map leak
  vhost: reset invalidate_count in vhost_set_vring_num_addr()
  vhost: mark dirty pages during map uninit
  vhost: don't do synchronize_rcu() in
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...>
>
>> +	/* Make sure ref counter was checked before any other
>> +	 * operations that was dene on map. */
>> +	smp_mb();
> should be in a smp_load_acquire()

Right, if we use smp_load_acquire() to load the counter.

>
>> +}
>> +
>>  static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>>  				      int index,
>>  				      unsigned long start,
>> @@ -376,16 +413,15 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>>  	spin_lock(&vq->mmu_lock);
>>  	++vq->invalidate_count;
>> ...
2019 Aug 07
0
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...e sure no reader can see
+		 * previous map */
+		while (raw_read_seqcount(&vq->seq) == seq) {
+			if (need_resched())
+				schedule();
+		}
+	}
+	/* Make sure seq counter was checked before map is
+	 * freed. Paired with smp_wmb() in write_seqcount_end().
+	 */
+	smp_mb();
+}
+
 static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
 				      int index,
 				      unsigned long start,
@@ -376,16 +409,15 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
 	spin_lock(&vq->mmu_lock);
 	++vq->invalidate_count;

-	map = rcu_dereference_protected(vq->maps[index],
-					loc...
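Pulled out of the diff markers, the V4 scheme reads: the vhost worker wraps its metadata accesses in write_seqcount_begin()/write_seqcount_end() on vq->seq, and the notifier samples the counter and spins until it moves before freeing the map. A self-contained sketch under those assumptions; the sampling of seq and the odd-count check are truncated out of the hunk, so their exact shape here is guessed:

/* Sketch of the V4 wait loop quoted above; the helper name and the
 * odd-count test are assumptions. */
static void vhost_vq_wait_readers(struct vhost_virtqueue *vq)
{
	unsigned int seq = raw_read_seqcount(&vq->seq);

	/* An odd raw count means a reader sits between
	 * write_seqcount_begin() and write_seqcount_end(); once the
	 * counter moves, no reader can still see the previous map. */
	if (seq & 1) {
		while (raw_read_seqcount(&vq->seq) == seq) {
			if (need_resched())
				schedule();
		}
	}
	/* Make sure the counter check happens before the map is freed.
	 * Paired with the smp_wmb() in write_seqcount_end(). */
	smp_mb();
}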
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...When ref change, we are sure no reader can see
+		 * previous map */
+		while (READ_ONCE(vq->ref) == ref) {
+			set_current_state(TASK_RUNNING);
+			schedule();
+		}
+	}
+	/* Make sure ref counter was checked before any other
+	 * operations that was dene on map. */
+	smp_mb();
+}
+
 static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
 				      int index,
 				      unsigned long start,
@@ -376,16 +413,15 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
 	spin_lock(&vq->mmu_lock);
 	++vq->invalidate_count;

-	map = rcu_dereference_protected(vq->maps[index],
-					loc...
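For comparison with the V4 seqcount version above, V2 open-codes the same idea with a plain counter: the worker bumps vq->ref around each access and the notifier waits for it to move. A reconstruction of the quoted loop; the sampling and the odd-count test come from the review that follows, not from the hunk itself, so both are assumptions:

static void vhost_vq_sync_access_v2(struct vhost_virtqueue *vq)
{
	int ref = READ_ONCE(vq->ref);

	/* An odd count means a reader is between its two increments
	 * of vq->ref (assumed from the review below). */
	if (ref & 1) {
		/* Once the counter changes, no reader can still see the
		 * previous map. */
		while (READ_ONCE(vq->ref) == ref) {
			set_current_state(TASK_RUNNING);
			schedule();
		}
	}
	/* Order the counter check before any later operation on the
	 * map; the review earlier on this page suggests folding this
	 * into an smp_load_acquire() of vq->ref. */
	smp_mb();
}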
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...got preempted with an odd ref. So I suspect we'll have to disable
preemption when we make ref odd.

> +	}
> +	/* Make sure ref counter was checked before any other
> +	 * operations that was dene on map. */

was dene -> were done?

> +	smp_mb();
> +}
> +
>  static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>  				      int index,
>  				      unsigned long start,
> @@ -376,16 +413,15 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
>  	spin_lock(&vq->mmu_lock);
>  	++vq->invalidate_count;
>
> -	map = rcu_dereference_prot...