We flush under vq mutex when changing backends.
This creates a deadlock as the workqueue being flushed
needs this lock as well.

https://bugzilla.redhat.com/show_bug.cgi?id=612421

Drop the vq mutex before flush: we have the device mutex
which is sufficient to prevent another ioctl from touching
the vq.

Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
 drivers/vhost/net.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 28d7786..50df58e6 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -534,11 +534,16 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	rcu_assign_pointer(vq->private_data, sock);
 	vhost_net_enable_vq(n, vq);
 done:
+	mutex_unlock(&vq->mutex);
+
 	if (oldsock) {
 		vhost_net_flush_vq(n, index);
 		fput(oldsock->file);
 	}
 
+	mutex_unlock(&n->dev.mutex);
+	return 0;
+
 err_vq:
 	mutex_unlock(&vq->mutex);
 err:
-- 
1.7.2.rc0.14.g41c1c
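For reference, here is roughly what the tail of vhost_net_set_backend() looks like with this patch applied. This is a sketch reconstructed from the diff above, not the verbatim upstream source: the backend lookup and validation at the top of the function are elided, and the error path is filled in from context. The comments spell out the locking rule the patch establishes.

	static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
	{
		/* Held for the whole ioctl: keeps every other ioctl off
		 * this vq, which is what makes dropping vq->mutex early
		 * safe. */
		mutex_lock(&n->dev.mutex);

		/* ... fd-to-socket lookup, ring validation, and the swap
		 * of vq->private_data all happen here, under vq->mutex,
		 * setting r on failure ... */

	done:
		/* Drop vq->mutex before flushing: the work being flushed
		 * takes vq->mutex itself, so flushing while holding it
		 * deadlocks (the flush waits on work that is waiting on
		 * us). */
		mutex_unlock(&vq->mutex);

		if (oldsock) {
			/* In-flight work may still reference the old
			 * socket; wait for it before dropping the ref. */
			vhost_net_flush_vq(n, index);
			fput(oldsock->file);
		}

		mutex_unlock(&n->dev.mutex);
		return 0;

	err_vq:
		mutex_unlock(&vq->mutex);
	err:
		mutex_unlock(&n->dev.mutex);
		return r;
	}

The invariant after the patch: vhost_net_flush_vq() is only ever called with vq->mutex released, while n->dev.mutex still guards against a concurrent backend change on the same vq.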
On Thu, Jul 15, 2010 at 03:19:12PM +0300, Michael S. Tsirkin wrote:
> We flush under vq mutex when changing backends.
> This creates a deadlock as the workqueue being flushed
> needs this lock as well.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=612421
> 
> Drop the vq mutex before flush: we have the device mutex
> which is sufficient to prevent another ioctl from touching
> the vq.
> 
> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>

Dave, just to clarify: I'll send a pull request to merge this through
my tree, so there's no need for you to bother with it.

> ---
>  drivers/vhost/net.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 28d7786..50df58e6 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -534,11 +534,16 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
> 	rcu_assign_pointer(vq->private_data, sock);
> 	vhost_net_enable_vq(n, vq);
> done:
> +	mutex_unlock(&vq->mutex);
> +
> 	if (oldsock) {
> 		vhost_net_flush_vq(n, index);
> 		fput(oldsock->file);
> 	}
> 
> +	mutex_unlock(&n->dev.mutex);
> +	return 0;
> +
> err_vq:
> 	mutex_unlock(&vq->mutex);
> err:
> -- 
> 1.7.2.rc0.14.g41c1c
On Thu, 2010-07-15 at 15:19 +0300, Michael S. Tsirkin wrote:
> We flush under vq mutex when changing backends.
> This creates a deadlock as the workqueue being flushed
> needs this lock as well.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=612421
> 
> Drop the vq mutex before flush: we have the device mutex
> which is sufficient to prevent another ioctl from touching
> the vq.

Why do we need to flush the vq when we try to set the backend and find
that it is already set? Is this just an optimization?

Thanks
Sridhar

> 
> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
> ---
>  drivers/vhost/net.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 28d7786..50df58e6 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -534,11 +534,16 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
> 	rcu_assign_pointer(vq->private_data, sock);
> 	vhost_net_enable_vq(n, vq);
> done:
> +	mutex_unlock(&vq->mutex);
> +
> 	if (oldsock) {
> 		vhost_net_flush_vq(n, index);
> 		fput(oldsock->file);
> 	}
> 
> +	mutex_unlock(&n->dev.mutex);
> +	return 0;
> +
> err_vq:
> 	mutex_unlock(&vq->mutex);
> err:
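For readers skimming the thread, the lock dependency behind the reported deadlock sits on the worker side: the handlers that the flush waits for take the same vq mutex. A simplified sketch of that side follows, with names per drivers/vhost/net.c of that era and the handler body elided; the real handle_tx() does far more.

	/* Runs from the vhost work queue; shown only to illustrate the
	 * lock dependency that made flushing under vq->mutex deadlock. */
	static void handle_tx(struct vhost_net *net)
	{
		struct vhost_virtqueue *vq = &net->dev.vqs[VHOST_NET_VQ_TX];

		/* Same lock the ioctl path previously held across the
		 * flush: if the flusher holds it, this work item can never
		 * finish, and the flush waits on it forever. */
		mutex_lock(&vq->mutex);
		/* ... pull buffers from the ring and sendmsg() them ... */
		mutex_unlock(&vq->mutex);
	}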