Displaying 18 results from an estimated 18 matches for "vhost_vq_sync_access".
2019 Aug 08
3
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
.../* Ensure map was read after increasing the counter.
 * Paired with smp_mb() in vhost_vq_sync_access().
 */
smp_mb();...
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...ONCE(vq->ref);
If the write to vq->ref is not locked, this algorithm won't work; if it
is locked, the READ_ONCE is not needed.
> + /* Make sure vq access is done before increasing ref counter */
> + smp_store_release(&vq->ref, ref + 1);
> +}
> +
> +static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
> +{
> + int ref;
> +
> + /* Make sure map change was done before checking ref counter */
> + smp_mb();
This is probably smp_rmb after reading ref, and if you are setting ref
with smp_store_release then this should be smp_load_acquire() without
an explici...
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...>ref is not locked, this algorithm won't work; if it
> is locked, the READ_ONCE is not needed.
Yes.
>
>> + /* Make sure vq access is done before increasing ref counter */
>> + smp_store_release(&vq->ref, ref + 1);
>> +}
>> +
>> +static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
>> +{
>> + int ref;
>> +
>> + /* Make sure map change was done before checking ref counter */
>> + smp_mb();
> This is probably smp_rmb after reading ref, and if you are setting ref
> with smp_store_release then this should be smp_load...
2019 Jul 31
14
[PATCH V2 0/9] Fixes for metadata acceleration
Hi all:
This series tries to fix several issues introduced by the metadata
acceleration series. Please review.
Changes from V1:
- Try not to use RCU to synchronize MMU notifier with vhost worker
- set dirty pages after no readers
- return -EAGAIN only when we find the range is overlapped with
metadata
Jason Wang (9):
vhost: don't set uaddr for invalid address
vhost: validate MMU notifier
2019 Jul 31
1
[PATCH V2 9/9] vhost: do not return -EAGAIN for non blocking invalidation too early
...+++-------------
> 1 file changed, 19 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index fc2da8a0c671..96c6aeb1871f 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -399,16 +399,19 @@ static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
> smp_mb();
> }
>
> -static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
> - int index,
> - unsigned long start,
> - unsigned long end)
> +static int vhost_invalidate_vq_start(struct vhost_virtqueue *v...
2019 Aug 07
2
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On Wed, Aug 07, 2019 at 03:06:15AM -0400, Jason Wang wrote:
> We used to use RCU to synchronize MMU notifier with worker. This leads
> to calling synchronize_rcu() in invalidate_range_start(). But on a busy
> system, there would be many factors that may slow down the
> synchronize_rcu() which makes it unsuitable to be called in MMU
> notifier.
>
> So this patch switches use
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...t; +
> +static void inline vhost_vq_access_map_end(struct vhost_virtqueue *vq)
> +{
> + int ref = READ_ONCE(vq->ref);
> +
> + /* Make sure vq access is done before increasing ref counter */
> + smp_store_release(&vq->ref, ref + 1);
> +}
> +
> +static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
> +{
> + int ref;
> +
> + /* Make sure map change was done before checking ref counter */
> + smp_mb();
> +
> + ref = READ_ONCE(vq->ref);
> + if (ref & 0x1) {
Please document the even/odd trick here too, not just in the commit log.
> +...
2019 Aug 03
1
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...s_map_end(struct vhost_virtqueue *vq)
> {
> - int ref = READ_ONCE(vq->ref);
> -
> - /* Make sure vq access is done before increasing ref counter */
> - smp_store_release(&vq->ref, ref + 1);
> + write_seqcount_end(&vq->seq);
> }
>
> static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
> {
> - int ref;
> + unsigned int ret;
>
> /* Make sure map change was done before checking ref counter */
> smp_mb();
> -
> - ref = READ_ONCE(vq->ref);
> - if (ref & 0x1) {
> - /* When ref change, we are sure no reader can se...
2019 Aug 07
11
[PATCH V3 00/10] Fixes for metadata acceleration
Hi all:
This series tries to fix several issues introduced by the metadata
acceleration series. Please review.
Changes from V2:
- use the seqlock helper to synchronize MMU notifier with vhost worker
Changes from V1:
- try not to use RCU to synchronize MMU notifier with vhost worker
- set dirty pages after no readers
- return -EAGAIN only when we find the range is overlapped with
metadata
Jason Wang (9):
2019 Aug 01
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...seq);
}
static void inline vhost_vq_access_map_end(struct vhost_virtqueue *vq)
{
- int ref = READ_ONCE(vq->ref);
-
- /* Make sure vq access is done before increasing ref counter */
- smp_store_release(&vq->ref, ref + 1);
+ write_seqcount_end(&vq->seq);
}
static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
{
- int ref;
+ unsigned int ret;
/* Make sure map change was done before checking ref counter */
smp_mb();
-
- ref = READ_ONCE(vq->ref);
- if (ref & 0x1) {
- /* When ref change, we are sure no reader can see
+ ret = raw_read_seqcount(&vq->seq);
+ if...
2019 Aug 07
12
[PATCH V4 0/9] Fixes for metadata acceleration
Hi all:
This series tries to fix several issues introduced by the metadata
acceleration series. Please review.
Changes from V3:
- remove the unnecessary patch
Changes from V2:
- use the seqlock helper to synchronize MMU notifier with vhost worker
Changes from V1:
- try not to use RCU to synchronize MMU notifier with vhost worker
- set dirty pages after no readers
- return -EAGAIN only when we find the
2019 Aug 07
0
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...>uaddr - 1 + uaddr->size);
}
+static void inline vhost_vq_access_map_begin(struct vhost_virtqueue *vq)
+{
+ write_seqcount_begin(&vq->seq);
+}
+
+static void inline vhost_vq_access_map_end(struct vhost_virtqueue *vq)
+{
+ write_seqcount_end(&vq->seq);
+}
+
+static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
+{
+ unsigned int seq;
+
+ /* Make sure any changes to map was done before checking seq
+ * counter. Paired with smp_wmb() in write_seqcount_begin().
+ */
+ smp_mb();
+ seq = raw_read_seqcount(&vq->seq);
+ /* Odd means the map was currently accessed by vhost wor...
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...map */
+ smp_load_acquire(&vq->ref);
+}
+
+static void inline vhost_vq_access_map_end(struct vhost_virtqueue *vq)
+{
+ int ref = READ_ONCE(vq->ref);
+
+ /* Make sure vq access is done before increasing ref counter */
+ smp_store_release(&vq->ref, ref + 1);
+}
+
+static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
+{
+ int ref;
+
+ /* Make sure map change was done before checking ref counter */
+ smp_mb();
+
+ ref = READ_ONCE(vq->ref);
+ if (ref & 0x1) {
+ /* When ref change, we are sure no reader can see
+ * previous map */
+ while (READ_ONCE(vq->ref) == ref) {
+...
2019 Jul 31
0
[PATCH V2 9/9] vhost: do not return -EAGAIN for non blocking invalidation too early
...vhost/vhost.c | 32 +++++++++++++++++++-------------
1 file changed, 19 insertions(+), 13 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index fc2da8a0c671..96c6aeb1871f 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -399,16 +399,19 @@ static void inline vhost_vq_sync_access(struct vhost_virtqueue *vq)
smp_mb();
}
-static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
- int index,
- unsigned long start,
- unsigned long end)
+static int vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
+ int index,
+ unsign...