search for: vhost_iotlb_notify_vq

Displaying 20 results from an estimated 50 matches for "vhost_iotlb_notify_vq".

2018 Nov 30
3
[PATCH] vhost: fix IOTLB locking
Commit 78139c94dc8c ("net: vhost: lock the vqs one by one") moved the vq lock to improve scalability, but introduced a possible deadlock in vhost-iotlb. vhost_iotlb_notify_vq() now takes vq->mutex while holding the device's IOTLB spinlock. And on the vhost_iotlb_miss() path, the spinlock is taken while holding vq->mutex. As long as we hold dev->mutex to prevent an ioctl from modifying vq->poll concurrently, we can safely call vhost_poll_queue() without...
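Pieced together from the hunks quoted further down, the problematic ordering looks roughly like the sketch below. The if-condition and the mutex_lock()/vhost_poll_queue() lines come from the quoted diffs; the surrounding loop and the iotlb_lock/pending_list names are assumptions about the rest of the function, not verbatim drivers/vhost/vhost.c.

/* Sketch only -- simplified from the hunks quoted in this thread. */
static void vhost_iotlb_notify_vq(struct vhost_dev *d,
				  struct vhost_iotlb_msg *msg)
{
	struct vhost_msg_node *node, *n;

	spin_lock(&d->iotlb_lock);			/* device IOTLB spinlock */

	list_for_each_entry_safe(node, n, &d->pending_list, node) {
		struct vhost_iotlb_msg *vq_msg = &node->msg.iotlb;

		if (msg->iova <= vq_msg->iova &&
		    msg->iova + msg->size - 1 >= vq_msg->iova &&
		    vq_msg->type == VHOST_IOTLB_MISS) {
			mutex_lock(&node->vq->mutex);	/* vq lock taken under the spinlock */
			vhost_poll_queue(&node->vq->poll);
			mutex_unlock(&node->vq->mutex);
			list_del(&node->node);
			kfree(node);
		}
	}

	spin_unlock(&d->iotlb_lock);
}

/* The vhost_iotlb_miss() path does the reverse: the worker already holds
 * vq->mutex and then takes the same IOTLB spinlock to queue the miss
 * message, so the two paths can deadlock against each other. */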
2018 Nov 29
2
[REBASE PATCH net-next v9 1/4] net: vhost: lock the vqs one by one
...nt i = 0; > - for (i = 0; i < d->nvqs; ++i) > - mutex_unlock(&d->vqs[i]->mutex); > -} > - > static int vhost_new_umem_range(struct vhost_umem *umem, > u64 start, u64 size, u64 end, > u64 userspace_addr, int perm) > @@ -954,7 +943,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d, > if (msg->iova <= vq_msg->iova && > msg->iova + msg->size - 1 >= vq_msg->iova && > vq_msg->type == VHOST_IOTLB_MISS) { > + mutex_lock(&node->vq->mutex); This seems to introduce a deadlock (and sl...
2018 Nov 30
0
[PATCH] vhost: fix IOTLB locking
On Fri, Nov 30, 2018 at 11:37:02AM +0000, Jean-Philippe Brucker wrote: > Commit 78139c94dc8c ("net: vhost: lock the vqs one by one") moved the vq > lock to improve scalability, but introduced a possible deadlock in > vhost-iotlb. vhost_iotlb_notify_vq() now takes vq->mutex while holding > the device's IOTLB spinlock. Indeed spin_lock is just outside this snippet. Yack. > And on the vhost_iotlb_miss() path, the > spinlock is taken while holding vq->mutex. > > As long as we hold dev->mutex to prevent an ioctl from mo...
2018 Nov 30
1
[PATCH v2] vhost: fix IOTLB locking
Commit 78139c94dc8c ("net: vhost: lock the vqs one by one") moved the vq lock to improve scalability, but introduced a possible deadlock in vhost-iotlb. vhost_iotlb_notify_vq() now takes vq->mutex while holding the device's IOTLB spinlock. And on the vhost_iotlb_miss() path, the spinlock is taken while holding vq->mutex. Since calling vhost_poll_queue() doesn't require any lock, avoid the deadlock by not taking vq->mutex. Fixes: 78139c94dc8c ("ne...
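In other words, the v2 fix simply drops the vq->mutex acquisition. A sketch of the resulting branch, under the same assumptions as the sketch above (not verbatim code):

		if (msg->iova <= vq_msg->iova &&
		    msg->iova + msg->size - 1 >= vq_msg->iova &&
		    vq_msg->type == VHOST_IOTLB_MISS) {
			/* vhost_poll_queue() only schedules the vq's poll work
			 * and needs no lock, so nothing is taken under the
			 * device IOTLB spinlock and the inversion disappears. */
			vhost_poll_queue(&node->vq->poll);
			list_del(&node->node);
			kfree(node);
		}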
2018 Nov 30
0
[REBASE PATCH net-next v9 1/4] net: vhost: lock the vqs one by one
...< d->nvqs; ++i) >> - mutex_unlock(&d->vqs[i]->mutex); >> -} >> - >> static int vhost_new_umem_range(struct vhost_umem *umem, >> u64 start, u64 size, u64 end, >> u64 userspace_addr, int perm) >> @@ -954,7 +943,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d, >> if (msg->iova <= vq_msg->iova && >> msg->iova + msg->size - 1 >= vq_msg->iova && >> vq_msg->type == VHOST_IOTLB_MISS) { >> + mutex_lock(&node->vq->mutex); > This seems to intro...
2018 Jul 22
2
[PATCH net-next v6 1/4] net: vhost: lock the vqs one by one
...nt i = 0; > - for (i = 0; i < d->nvqs; ++i) > - mutex_unlock(&d->vqs[i]->mutex); > -} > - > static int vhost_new_umem_range(struct vhost_umem *umem, > u64 start, u64 size, u64 end, > u64 userspace_addr, int perm) > @@ -953,7 +942,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d, > if (msg->iova <= vq_msg->iova && > msg->iova + msg->size - 1 > vq_msg->iova && > vq_msg->type == VHOST_IOTLB_MISS) { > + mutex_lock(&node->vq->mutex); > vhost_poll_queue(&node->vq-&...
2018 Jan 23
5
[PATCH net 1/2] vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
We used to call mutex_lock() in vhost_dev_lock_vqs(), which tries to hold the mutexes of all virtqueues. This may confuse lockdep into reporting a possible deadlock, because it sees several locks of the same class held at once. Switch to mutex_lock_nested() to avoid the false positive. Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API") Reported-by: syzbot+dbb7c1161485e61b0241 at
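The change in question, roughly (a sketch of vhost_dev_lock_vqs() with a per-vq lockdep subclass; simplified, not verbatim kernel code):

static void vhost_dev_lock_vqs(struct vhost_dev *d)
{
	int i;

	/* Give each vq mutex its own lockdep subclass so that holding all
	 * of them at once is not flagged as recursion on a single lock
	 * class -- the false positive described above. */
	for (i = 0; i < d->nvqs; ++i)
		mutex_lock_nested(&d->vqs[i]->mutex, i);
}

The unlock side needs no annotation; only lockdep's class bookkeeping changes, not the locking behaviour.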
2018 Sep 25
6
[REBASE PATCH net-next v9 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve the guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For the detailed performance report, see patch 4. Tonghao Zhang (4): net: vhost: lock the vqs one by one net: vhost: replace magic number of lock annotation net: vhost: factor out busy
2016 Nov 18
1
[PATCH 1/2] vhost: remove unused feature bit
Signed-off-by: Jason Wang <jasowang at redhat.com> --- include/uapi/linux/vhost.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h index 56b7ab5..60180c0 100644 --- a/include/uapi/linux/vhost.h +++ b/include/uapi/linux/vhost.h @@ -172,8 +172,6 @@ struct vhost_memory { #define VHOST_F_LOG_ALL 26 /* vhost-net should add
2018 Jan 23
0
[PATCH net 2/2] vhost: do not try to access device IOTLB when not initialized
...--- drivers/vhost/vhost.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 549771a..5727b18 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -1015,6 +1015,10 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev, vhost_iotlb_notify_vq(dev, msg); break; case VHOST_IOTLB_INVALIDATE: + if (!dev->iotlb) { + ret = -EFAULT; + break; + } vhost_vq_meta_reset(dev); vhost_del_umem_range(dev->iotlb, msg->iova, msg->iova + msg->size - 1); -- 2.7.4
2018 Jun 30
0
[PATCH net-next v3 1/4] net: vhost: lock the vqs one by one
...v_unlock_vqs(struct vhost_dev *d) -{ - int i = 0; - for (i = 0; i < d->nvqs; ++i) - mutex_unlock(&d->vqs[i]->mutex); -} - static int vhost_new_umem_range(struct vhost_umem *umem, u64 start, u64 size, u64 end, u64 userspace_addr, int perm) @@ -950,7 +939,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d, if (msg->iova <= vq_msg->iova && msg->iova + msg->size - 1 > vq_msg->iova && vq_msg->type == VHOST_IOTLB_MISS) { + mutex_lock(&node->vq->mutex); vhost_poll_queue(&node->vq->poll); + mutex_unloc...
2018 Jul 21
0
[PATCH net-next v6 1/4] net: vhost: lock the vqs one by one
...v_unlock_vqs(struct vhost_dev *d) -{ - int i = 0; - for (i = 0; i < d->nvqs; ++i) - mutex_unlock(&d->vqs[i]->mutex); -} - static int vhost_new_umem_range(struct vhost_umem *umem, u64 start, u64 size, u64 end, u64 userspace_addr, int perm) @@ -953,7 +942,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d, if (msg->iova <= vq_msg->iova && msg->iova + msg->size - 1 > vq_msg->iova && vq_msg->type == VHOST_IOTLB_MISS) { + mutex_lock(&node->vq->mutex); vhost_poll_queue(&node->vq->poll); + mutex_unloc...
2018 Sep 25
0
[REBASE PATCH net-next v9 1/4] net: vhost: lock the vqs one by one
...v_unlock_vqs(struct vhost_dev *d) -{ - int i = 0; - for (i = 0; i < d->nvqs; ++i) - mutex_unlock(&d->vqs[i]->mutex); -} - static int vhost_new_umem_range(struct vhost_umem *umem, u64 start, u64 size, u64 end, u64 userspace_addr, int perm) @@ -954,7 +943,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d, if (msg->iova <= vq_msg->iova && msg->iova + msg->size - 1 >= vq_msg->iova && vq_msg->type == VHOST_IOTLB_MISS) { + mutex_lock(&node->vq->mutex); vhost_poll_queue(&node->vq->poll); + mutex_unlo...
2018 Jul 25
0
[PATCH net-next v6 1/4] net: vhost: lock the vqs one by one
...->vqs[i]->mutex); > > -} > > - > > static int vhost_new_umem_range(struct vhost_umem *umem, > > u64 start, u64 size, u64 end, > > u64 userspace_addr, int perm) > > @@ -953,7 +942,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d, > > if (msg->iova <= vq_msg->iova && > > msg->iova + msg->size - 1 > vq_msg->iova && > > vq_msg->type == VHOST_IOTLB_MISS) { > > + mutex_lock(&a...
2018 Jul 21
7
[PATCH net-next v6 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve the guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For the detailed performance report, see patch 4. v5->v6: rebase the code. Tonghao Zhang (4): net: vhost: lock the vqs one by one net: vhost: replace magic number of lock annotation