search for: vhost_process_iotlb_msg

Displaying 20 results from an estimated 82 matches for "vhost_process_iotlb_msg".

2018 May 22
3
[PATCH net] vhost: synchronize IOTLB message with dev cleanup
DaeRyong Jeong reports a race between vhost_dev_cleanup() and vhost_process_iotlb_msg(): Thread interleaving: CPU0 (vhost_process_iotlb_msg) CPU1 (vhost_dev_cleanup) (In the case of both VHOST_IOTLB_UPDATE and VHOST_IOTLB_INVALIDATE) ===== ===== vhost_umem_clean(dev->iotlb); if (!dev->iotlb) { ret = -EFAULT; break; } dev->iotlb = NULL;...
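A minimal userspace model of the check-then-use pattern this entry describes, with pthreads standing in for the kernel mutex and hypothetical struct/function names (an illustration of the synchronization idea, not the driver code): once both sides take the same lock, the message handler either sees a valid table or sees NULL and skips the access, never a freed pointer.

    #include <pthread.h>
    #include <stdlib.h>

    struct iotlb { int entries; };
    struct dev { struct iotlb *iotlb; pthread_mutex_t mutex; };

    static void *msg_handler(void *arg)          /* plays vhost_process_iotlb_msg() */
    {
        struct dev *d = arg;

        pthread_mutex_lock(&d->mutex);
        if (d->iotlb)                            /* check ...                       */
            d->iotlb->entries++;                 /* ... and use, under one lock     */
        pthread_mutex_unlock(&d->mutex);
        return NULL;
    }

    static void *dev_cleanup(void *arg)          /* plays vhost_dev_cleanup()       */
    {
        struct dev *d = arg;

        pthread_mutex_lock(&d->mutex);
        free(d->iotlb);                          /* "vhost_umem_clean(dev->iotlb)"  */
        d->iotlb = NULL;                         /* later checks now fail cleanly   */
        pthread_mutex_unlock(&d->mutex);
        return NULL;
    }

    int main(void)
    {
        struct dev d = { calloc(1, sizeof(struct iotlb)), PTHREAD_MUTEX_INITIALIZER };
        pthread_t a, b;

        pthread_create(&a, NULL, msg_handler, &d);
        pthread_create(&b, NULL, dev_cleanup, &d);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }

Without the shared lock, the handler's NULL check can pass just before the cleanup thread frees and clears the pointer, which is exactly the interleaving shown above.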
2018 May 21
2
KASAN: use-after-free Read in vhost_chr_write_iter
...escribe more at the end of this > > > report. Our analysis shows that the race occurs when invoking two > > > syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER. > > > > > > > > > Analysis: > > > We think the concurrent execution of vhost_process_iotlb_msg() and > > > vhost_dev_cleanup() causes the crash. > > > Both of functions can run concurrently (please see call sequence below), > > > and possibly, there is a race on dev->iotlb. > > > If the switch occurs right after vhost_dev_cleanup() frees > > >...
2018 May 18
3
KASAN: use-after-free Read in vhost_chr_write_iter
...RaceFuzzer (a modified > version of Syzkaller), which we describe more at the end of this > report. Our analysis shows that the race occurs when invoking two > syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER. > > > Analysis: > We think the concurrent execution of vhost_process_iotlb_msg() and > vhost_dev_cleanup() causes the crash. > Both of functions can run concurrently (please see call sequence below), > and possibly, there is a race on dev->iotlb. > If the switch occurs right after vhost_dev_cleanup() frees > dev->iotlb, vhost_process_iotlb_msg() still see...
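The "call sequence below" referenced in this report is cut off by the snippet. Reconstructed from the vhost-net driver layout of that period (the intermediate frames are an approximation, not a quote from the report), the two racing entry points are roughly:

    write to /dev/vhost-net (write$vnet):
        vhost_net_chr_write_iter()
          -> vhost_chr_write_iter()
            -> vhost_process_iotlb_msg()      /* reads dev->iotlb */

    ioctl(VHOST_RESET_OWNER):
        vhost_net_ioctl()
          -> vhost_net_reset_owner()
            -> vhost_dev_reset_owner()
              -> vhost_dev_cleanup()          /* frees dev->iotlb */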
2018 May 22
0
KASAN: use-after-free Read in vhost_chr_write_iter
...re at the end of this >>>> report. Our analysis shows that the race occurs when invoking two >>>> syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER. >>>> >>>> >>>> Analysis: >>>> We think the concurrent execution of vhost_process_iotlb_msg() and >>>> vhost_dev_cleanup() causes the crash. >>>> Both of functions can run concurrently (please see call sequence below), >>>> and possibly, there is a race on dev->iotlb. >>>> If the switch occurs right after vhost_dev_cleanup() frees >>...
2018 May 21
0
KASAN: use-after-free Read in vhost_chr_write_iter
...> version of Syzkaller), which we describe more at the end of this >> report. Our analysis shows that the race occurs when invoking two >> syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER. >> >> >> Analysis: >> We think the concurrent execution of vhost_process_iotlb_msg() and >> vhost_dev_cleanup() causes the crash. >> Both of functions can run concurrently (please see call sequence below), >> and possibly, there is a race on dev->iotlb. >> If the switch occurs right after vhost_dev_cleanup() frees >> dev->iotlb, vhost_process_i...
2018 Jul 22
2
[PATCH net-next v6 1/4] net: vhost: lock the vqs one by one
...>type == VHOST_IOTLB_MISS) { > + mutex_lock(&node->vq->mutex); > vhost_poll_queue(&node->vq->poll); > + mutex_unlock(&node->vq->mutex); > + > list_del(&node->node); > kfree(node); > } > @@ -985,7 +977,6 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev, > int ret = 0; > > mutex_lock(&dev->mutex); > - vhost_dev_lock_vqs(dev); > switch (msg->type) { > case VHOST_IOTLB_UPDATE: > if (!dev->iotlb) { > @@ -1019,7 +1010,6 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,...
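Stripped of the diff markers, the locking change in this hunk narrows the scope roughly as follows (a sketch built only from the helpers visible above, not a literal copy of the resulting functions; dev->mutex stays held around the whole message in both versions):

    /* Before: every virtqueue mutex was taken for the duration of the
     * IOTLB message. */
    vhost_dev_lock_vqs(dev);
    /* ... VHOST_IOTLB_UPDATE / INVALIDATE / MISS handling ... */
    vhost_dev_unlock_vqs(dev);

    /* After: only the vq that owns a pending IOTLB_MISS is locked, and
     * only around waking its poll work. */
    mutex_lock(&node->vq->mutex);
    vhost_poll_queue(&node->vq->poll);
    mutex_unlock(&node->vq->mutex);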
2018 Aug 03
4
[PATCH net-next] vhost: switch to use new message format
...-315,6 +315,7 @@ static void vhost_vq_reset(struct vhost_dev *dev, vq->log_addr = -1ull; vq->private_data = NULL; vq->acked_features = 0; + vq->acked_backend_features = 0; vq->log_base = NULL; vq->error_ctx = NULL; vq->kick = NULL; @@ -1027,28 +1028,40 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev, ssize_t vhost_chr_write_iter(struct vhost_dev *dev, struct iov_iter *from) { - struct vhost_msg_node node; - unsigned size = sizeof(struct vhost_msg); - size_t ret; - int err; + struct vhost_iotlb_msg msg; + size_t offset; + int type, ret; - if (iov_iter_count(fr...
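For reference, the two uAPI message layouts the rewritten vhost_chr_write_iter() has to tell apart look roughly like this (reconstructed from memory of the vhost uapi header of that period; check include/uapi/linux/vhost_types.h in your tree before relying on exact field order or padding):

    struct vhost_iotlb_msg {
        __u64 iova;
        __u64 size;
        __u64 uaddr;
        __u8  perm;        /* VHOST_ACCESS_RO / _WO / _RW             */
        __u8  type;        /* VHOST_IOTLB_UPDATE / INVALIDATE / MISS  */
    };

    struct vhost_msg {     /* legacy VHOST_IOTLB_MSG format */
        int type;          /* a 4-byte hole follows on 64-bit, since
                            * the union below is 8-byte aligned        */
        union {
            struct vhost_iotlb_msg iotlb;
            __u8 padding[64];
        };
    };

    struct vhost_msg_v2 {  /* VHOST_IOTLB_MSG_V2 format */
        __u32 type;
        __u32 reserved;    /* makes the layout explicit, no hidden hole */
        union {
            struct vhost_iotlb_msg iotlb;
            __u8 padding[64];
        };
    };

That implicit hole in the legacy struct is why the new write path peeks at the leading type word and then skips a format-dependent offset before copying the embedded vhost_iotlb_msg.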
2019 Mar 26
1
INFO: task hung in vhost_net_stop_vq
...INFO: possible recursive locking detected ] > [ 221.744944] 4.7.0+ #1 Not tainted > [ 221.745326] --------------------------------------------- > [ 221.746128] syz-executor1/6823 is trying to acquire lock: > [ 221.746737] (&vq->mutex){+.+...}, at: [<ffffffff84484b70>] vhost_process_iotlb_msg+0xe0/0x9e0 > [ 221.747789] > [ 221.747789] but task is already holding lock: > [ 221.748470] (&vq->mutex){+.+...}, at: [<ffffffff84484b70>] vhost_process_iotlb_msg+0xe0/0x9e0 > [ 221.749535] > [ 221.749535] other info that might help us debug this: > [ 221.7502...
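What lockdep is complaining about in that trace is several mutexes of the same lock class being acquired in a loop. A generic sketch of the pattern and the usual annotation (hypothetical loop, not the exact function from that kernel):

    /* Plain mutex_lock() on N locks of one class trips
     * "possible recursive locking detected": */
    for (i = 0; i < d->nvqs; ++i)
        mutex_lock(&d->vqs[i]->mutex);

    /* The conventional remedy is a per-iteration subclass annotation: */
    for (i = 0; i < d->nvqs; ++i)
        mutex_lock_nested(&d->vqs[i]->mutex, i);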
2019 Mar 25
2
INFO: task hung in vhost_net_stop_vq
Looks like more iotlb locking mess? On Tue, Mar 19, 2019 at 10:21:00PM -0700, syzbot wrote: > syzbot has bisected this bug to: > > commit 6b1e6cc7855b09a0a9bfa1d9f30172ba366f161c > Author: Jason Wang <jasowang at redhat.com> > Date: Thu Jun 23 06:04:32 2016 +0000 > > vhost: new device IOTLB API > > bisection log:
2018 Nov 30
3
[PATCH] vhost: fix IOTLB locking
...ile holding the device's IOTLB spinlock. And on the vhost_iotlb_miss() path, the spinlock is taken while holding vq->mutex. As long as we hold dev->mutex to prevent an ioctl from modifying vq->poll concurrently, we can safely call vhost_poll_queue() without holding vq->mutex. Since vhost_process_iotlb_msg() holds dev->mutex when calling vhost_iotlb_notify_vq(), avoid the deadlock by not taking vq->mutex. Fixes: 78139c94dc8c ("net: vhost: lock the vqs one by one") Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker at arm.com> --- drivers/vhost/vhost.c | 6 +++--- 1 fi...
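Written out, the ordering problem the patch describes (a sketch based only on the commit message above, with "iotlb spinlock" meaning the device's IOTLB/pending-list lock):

    vhost_iotlb_miss():        vq->mutex      ->  iotlb spinlock
    vhost_iotlb_notify_vq():   iotlb spinlock ->  vq->mutex     (inverted order, and a
                                                                  sleeping lock taken in
                                                                  atomic context)

Dropping the vq->mutex acquisition from the notify path restores a single order; the vq->poll access stays safe because vhost_process_iotlb_msg() already holds dev->mutex when it calls vhost_iotlb_notify_vq().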
2019 Oct 03
1
[PATCH 07/11] vhost: convert vhost_umem_interval_tree to half closed intervals
...4,7 @@ static int vhost_new_umem_range(struct vhost_umem *umem, node->start = start; node->size = size; - node->last = end; + node->end = end; node->userspace_addr = userspace_addr; node->perm = perm; INIT_LIST_HEAD(&node->link); @@ -1112,7 +1112,7 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev, } vhost_vq_meta_reset(dev); if (vhost_new_umem_range(dev->iotlb, msg->iova, msg->size, - msg->iova + msg->size - 1, + msg->iova + msg->size, msg->uaddr, msg->perm)) { ret = -ENOMEM; break; @@ -1126,7 +1126,7 @@ stat...
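A worked example of the interval change: with iova = 0x1000 and size = 0x1000,

    closed form (old):      node->last = iova + size - 1 = 0x1fff    covers [0x1000, 0x1fff]
    half-open form (new):   node->end  = iova + size     = 0x2000    covers [0x1000, 0x2000)

so a containment test changes from (addr <= node->last) to (addr < node->end), which is why the calls above pass msg->iova + msg->size instead of msg->iova + msg->size - 1.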
2016 Dec 06
0
[PATCH 06/10] vhost: add missing __user annotations
...vhost_virtqueue *vq, void *to, } static void __user *__vhost_get_user(struct vhost_virtqueue *vq, - void *addr, unsigned size) + void __user *addr, unsigned size) { int ret; @@ -934,8 +934,8 @@ static int umem_access_ok(u64 uaddr, u64 size, int access) return 0; } -int vhost_process_iotlb_msg(struct vhost_dev *dev, - struct vhost_iotlb_msg *msg) +static int vhost_process_iotlb_msg(struct vhost_dev *dev, + struct vhost_iotlb_msg *msg) { int ret = 0; -- MST
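For context on what the added annotation buys (general sparse behaviour, not something specific to this patch):

    void __user *uptr;    /* pointer into the user address space              */
    void *kptr = uptr;    /* sparse (make C=1) flags the address-space        */
                          /* mismatch; data must move via copy_from_user()    */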
2018 Jun 30
0
[PATCH net-next v3 1/4] net: vhost: lock the vqs one by one
...q_msg->iova && vq_msg->type == VHOST_IOTLB_MISS) { + mutex_lock(&node->vq->mutex); vhost_poll_queue(&node->vq->poll); + mutex_unlock(&node->vq->mutex); + list_del(&node->node); kfree(node); } @@ -982,7 +974,6 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev, int ret = 0; mutex_lock(&dev->mutex); - vhost_dev_lock_vqs(dev); switch (msg->type) { case VHOST_IOTLB_UPDATE: if (!dev->iotlb) { @@ -1016,7 +1007,6 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev, break; } - vhost_dev_unlock_vqs(d...