Jason Wang
2023-Jul-24 06:52 UTC
[PATCH net-next v4 2/2] virtio-net: add cond_resched() to the command waiting loop
On Mon, Jul 24, 2023 at 2:46 PM Michael S. Tsirkin <mst at redhat.com> wrote:
>
> On Fri, Jul 21, 2023 at 10:18:03PM +0200, Maxime Coquelin wrote:
> >
> >
> > On 7/21/23 17:10, Michael S. Tsirkin wrote:
> > > On Fri, Jul 21, 2023 at 04:58:04PM +0200, Maxime Coquelin wrote:
> > > >
> > > >
> > > > On 7/21/23 16:45, Michael S. Tsirkin wrote:
> > > > > On Fri, Jul 21, 2023 at 04:37:00PM +0200, Maxime Coquelin wrote:
> > > > > >
> > > > > >
> > > > > > On 7/20/23 23:02, Michael S. Tsirkin wrote:
> > > > > > > On Thu, Jul 20, 2023 at 01:26:20PM -0700, Shannon Nelson wrote:
> > > > > > > > On 7/20/23 1:38 AM, Jason Wang wrote:
> > > > > > > > >
> > > > > > > > > Adding cond_resched() to the command waiting loop for a better
> > > > > > > > > co-operation with the scheduler. This allows to give CPU a breath to
> > > > > > > > > run other task(workqueue) instead of busy looping when preemption is
> > > > > > > > > not allowed on a device whose CVQ might be slow.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Jason Wang <jasowang at redhat.com>
> > > > > > > >
> > > > > > > > This still leaves hung processes, but at least it doesn't pin the CPU any
> > > > > > > > more. Thanks.
> > > > > > > > Reviewed-by: Shannon Nelson <shannon.nelson at amd.com>
> > > > > > > >
> > > > > > >
> > > > > > > I'd like to see a full solution
> > > > > > > 1- block until interrupt
> > > > > >
> > > > > > Would it make sense to also have a timeout?
> > > > > > And when timeout expires, set FAILED bit in device status?
> > > > >
> > > > > virtio spec does not set any limits on the timing of vq
> > > > > processing.
> > > >
> > > > Indeed, but I thought the driver could decide it is too long for it.
> > > >
> > > > The issue is we keep waiting with rtnl locked, it can quickly make the
> > > > system unusable.
> > >
> > > if this is a problem we should find a way not to keep rtnl
> > > locked indefinitely.
> >
> > From the tests I have done, I think it is. With OVS, a reconfiguration is
> > performed when the VDUSE device is added, and when a MLX5 device is
> > in the same bridge, it ends up doing an ioctl() that tries to take the
> > rtnl lock. In this configuration, it is not possible to kill OVS because
> > it is stuck trying to acquire rtnl lock for mlx5 that is held by virtio-
> > net.
>
> So for sure, we can queue up the work and process it later.
> The somewhat tricky part is limiting the memory consumption.

And it needs to sync with rtnl somehow, e.g device unregistering which
seems not easy.

Thanks

> > > > > > > 2- still handle surprise removal correctly by waking in that case
> > > > > > >
> > > > > > > > > ---
> > > > > > > > >  drivers/net/virtio_net.c | 4 +++-
> > > > > > > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > > > index 9f3b1d6ac33d..e7533f29b219 100644
> > > > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > > > @@ -2314,8 +2314,10 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
> > > > > > > > >  	 * into the hypervisor, so the request should be handled immediately.
> > > > > > > > >  	 */
> > > > > > > > >  	while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> > > > > > > > > -	       !virtqueue_is_broken(vi->cvq))
> > > > > > > > > +	       !virtqueue_is_broken(vi->cvq)) {
> > > > > > > > > +		cond_resched();
> > > > > > > > >  		cpu_relax();
> > > > > > > > > +	}
> > > > > > > > >
> > > > > > > > >  	return vi->ctrl->status == VIRTIO_NET_OK;
> > > > > > > > >  }
> > > > > > > > > --
> > > > > > > > > 2.39.3
> > > > > > > > >
> > > > > > > > > _______________________________________________
> > > > > > > > > Virtualization mailing list
> > > > > > > > > Virtualization at lists.linux-foundation.org
> > > > > > > > > https://lists.linuxfoundation.org/mailman/listinfo/virtualization
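Michael's "block until interrupt" item is only named in the thread, not implemented. As a rough illustration of what it could mean, here is a minimal, hypothetical sketch: it assumes a struct completion field (cvq_done) added to struct virtnet_info and a callback registered on the control virtqueue, neither of which exists in the driver as posted, and it deliberately leaves open the timeout/FAILED policy Maxime asks about.

/*
 * Hypothetical sketch only: sleep for the cvq interrupt instead of
 * busy polling. cvq_done and virtnet_cvq_done() are invented names.
 */
static void virtnet_cvq_done(struct virtqueue *cvq)
{
	struct virtnet_info *vi = cvq->vdev->priv;

	complete(&vi->cvq_done);
}

static bool virtnet_cvq_wait(struct virtnet_info *vi)
{
	unsigned int tmp;

	/*
	 * Sleep until the device returns the command buffer. The timeout
	 * only serves to re-check the broken flag so a surprise-removed
	 * device does not leave us sleeping forever; what to do about a
	 * genuinely stuck device is left open here.
	 */
	while (!virtqueue_get_buf(vi->cvq, &tmp) &&
	       !virtqueue_is_broken(vi->cvq))
		wait_for_completion_timeout(&vi->cvq_done, HZ);

	return vi->ctrl->status == VIRTIO_NET_OK;
}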
Michael S. Tsirkin
2023-Jul-24 07:18 UTC
[PATCH net-next v4 2/2] virtio-net: add cond_resched() to the command waiting loop
On Mon, Jul 24, 2023 at 02:52:49PM +0800, Jason Wang wrote:
> On Mon, Jul 24, 2023 at 2:46 PM Michael S. Tsirkin <mst at redhat.com> wrote:
> >
> > On Fri, Jul 21, 2023 at 10:18:03PM +0200, Maxime Coquelin wrote:
> > >
> > >
> > > On 7/21/23 17:10, Michael S. Tsirkin wrote:
> > > > On Fri, Jul 21, 2023 at 04:58:04PM +0200, Maxime Coquelin wrote:
> > > > >
> > > > >
> > > > > On 7/21/23 16:45, Michael S. Tsirkin wrote:
> > > > > > On Fri, Jul 21, 2023 at 04:37:00PM +0200, Maxime Coquelin wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 7/20/23 23:02, Michael S. Tsirkin wrote:
> > > > > > > > On Thu, Jul 20, 2023 at 01:26:20PM -0700, Shannon Nelson wrote:
> > > > > > > > > On 7/20/23 1:38 AM, Jason Wang wrote:
> > > > > > > > > >
> > > > > > > > > > Adding cond_resched() to the command waiting loop for a better
> > > > > > > > > > co-operation with the scheduler. This allows to give CPU a breath to
> > > > > > > > > > run other task(workqueue) instead of busy looping when preemption is
> > > > > > > > > > not allowed on a device whose CVQ might be slow.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Jason Wang <jasowang at redhat.com>
> > > > > > > > >
> > > > > > > > > This still leaves hung processes, but at least it doesn't pin the CPU any
> > > > > > > > > more. Thanks.
> > > > > > > > > Reviewed-by: Shannon Nelson <shannon.nelson at amd.com>
> > > > > > > > >
> > > > > > > >
> > > > > > > > I'd like to see a full solution
> > > > > > > > 1- block until interrupt
> > > > > > >
> > > > > > > Would it make sense to also have a timeout?
> > > > > > > And when timeout expires, set FAILED bit in device status?
> > > > > >
> > > > > > virtio spec does not set any limits on the timing of vq
> > > > > > processing.
> > > > >
> > > > > Indeed, but I thought the driver could decide it is too long for it.
> > > > >
> > > > > The issue is we keep waiting with rtnl locked, it can quickly make the
> > > > > system unusable.
> > > >
> > > > if this is a problem we should find a way not to keep rtnl
> > > > locked indefinitely.
> > >
> > > From the tests I have done, I think it is. With OVS, a reconfiguration is
> > > performed when the VDUSE device is added, and when a MLX5 device is
> > > in the same bridge, it ends up doing an ioctl() that tries to take the
> > > rtnl lock. In this configuration, it is not possible to kill OVS because
> > > it is stuck trying to acquire rtnl lock for mlx5 that is held by virtio-
> > > net.
> >
> > So for sure, we can queue up the work and process it later.
> > The somewhat tricky part is limiting the memory consumption.
>
> And it needs to sync with rtnl somehow, e.g device unregistering which
> seems not easy.
>
> Thanks

since when does device unregister need to send cvq commands?

> > > > > > > > 2- still handle surprise removal correctly by waking in that case
> > > > > > > >
> > > > > > > > > > ---
> > > > > > > > > >  drivers/net/virtio_net.c | 4 +++-
> > > > > > > > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > > > > > > >
> > > > > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > > > > index 9f3b1d6ac33d..e7533f29b219 100644
> > > > > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > > > > @@ -2314,8 +2314,10 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
> > > > > > > > > >  	 * into the hypervisor, so the request should be handled immediately.
> > > > > > > > > >  	 */
> > > > > > > > > >  	while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> > > > > > > > > > -	       !virtqueue_is_broken(vi->cvq))
> > > > > > > > > > +	       !virtqueue_is_broken(vi->cvq)) {
> > > > > > > > > > +		cond_resched();
> > > > > > > > > >  		cpu_relax();
> > > > > > > > > > +	}
> > > > > > > > > >
> > > > > > > > > >  	return vi->ctrl->status == VIRTIO_NET_OK;
> > > > > > > > > >  }
> > > > > > > > > > --
> > > > > > > > > > 2.39.3
> > > > > > > > > >
> > > > > > > > > > _______________________________________________
> > > > > > > > > > Virtualization mailing list
> > > > > > > > > > Virtualization at lists.linux-foundation.org
> > > > > > > > > > https://lists.linuxfoundation.org/mailman/listinfo/virtualization
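The "queue up the work and process it later" approach Michael and Jason discuss is likewise only an idea in the thread. The sketch below is a hypothetical illustration, not the driver's code: it assumes a list, lock, counter and work item (ctrl_list, ctrl_lock, ctrl_pending, ctrl_work) added to struct virtnet_info, uses a hard cap to bound memory consumption (Michael's "tricky part"), and only hints at the rtnl/unregister synchronization Jason flags as not easy.

/*
 * Hypothetical sketch only: defer control-vq commands to a workqueue so
 * callers do not spin with rtnl held. All field and struct names here
 * are invented for illustration.
 */
struct cvq_cmd {
	struct list_head node;
	u8 class, cmd;
	/* command-specific payload would be carried here */
};

#define VIRTNET_CVQ_MAX_PENDING	64	/* bound memory consumption */

static int virtnet_queue_cvq_cmd(struct virtnet_info *vi, struct cvq_cmd *c)
{
	spin_lock_bh(&vi->ctrl_lock);
	if (vi->ctrl_pending >= VIRTNET_CVQ_MAX_PENDING) {
		spin_unlock_bh(&vi->ctrl_lock);
		return -ENOSPC;
	}
	list_add_tail(&c->node, &vi->ctrl_list);
	vi->ctrl_pending++;
	spin_unlock_bh(&vi->ctrl_lock);

	schedule_work(&vi->ctrl_work);
	return 0;
}

static void virtnet_ctrl_work_fn(struct work_struct *work)
{
	struct virtnet_info *vi = container_of(work, struct virtnet_info,
					       ctrl_work);
	struct cvq_cmd *c, *n;
	LIST_HEAD(todo);

	spin_lock_bh(&vi->ctrl_lock);
	list_splice_init(&vi->ctrl_list, &todo);
	vi->ctrl_pending = 0;
	spin_unlock_bh(&vi->ctrl_lock);

	list_for_each_entry_safe(c, n, &todo, node) {
		/*
		 * Send the command on the cvq and wait for the reply here,
		 * outside of rtnl; device removal would need to
		 * cancel_work_sync() this work and drop pending commands.
		 */
		list_del(&c->node);
		kfree(c);
	}
}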