Jason Wang
2021-Apr-08 09:45 UTC
[PATCH 5/5] vdpa/mlx5: Fix suspend/resume index restoration
On 2021/4/8 5:10 PM, Eli Cohen wrote:
> When we suspend the VM, the VDPA interface will be reset. When the VM is
> resumed again, clear_virtqueues() will clear the available and used
> indices resulting in hardware virtqueue objects becoming out of sync.
> We can avoid this function altogether since qemu will clear them if
> required, e.g. when the VM went through a reboot.
>
> Moreover, since the hw available and used indices should always be
> identical on query and should be restored to the same value
> for virtqueues that complete in order, we set the single value provided
> by set_vq_state(). In get_vq_state() we return the value of the hardware
> used index.
>
> Fixes: b35ccebe3ef7 ("vdpa/mlx5: Restore the hardware used index after change map")
> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> Signed-off-by: Eli Cohen <elic at nvidia.com>
> ---

Acked-by: Jason Wang <jasowang at redhat.com>

>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 21 ++++++++-------------
>  1 file changed, 8 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 6fe61fc57790..4d2809c7d4e3 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1169,6 +1169,7 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
>                  return;
>          }
>          mvq->avail_idx = attr.available_index;
> +        mvq->used_idx = attr.used_index;
>  }
>
>  static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> @@ -1426,6 +1427,7 @@ static int mlx5_vdpa_set_vq_state(struct vdpa_device *vdev, u16 idx,
>                  return -EINVAL;
>          }
>
> +        mvq->used_idx = state->avail_index;
>          mvq->avail_idx = state->avail_index;
>          return 0;
>  }
> @@ -1443,7 +1445,11 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
>           * that cares about emulating the index after vq is stopped.
>           */
>          if (!mvq->initialized) {
> -                state->avail_index = mvq->avail_idx;
> +                /* Firmware returns a wrong value for the available index.
> +                 * Since both values should be identical, we take the value of
> +                 * used_idx which is reported correctly.
> +                 */
> +                state->avail_index = mvq->used_idx;
>                  return 0;
>          }
>
> @@ -1452,7 +1458,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
>                  mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n");
>                  return err;
>          }
> -        state->avail_index = attr.available_index;
> +        state->avail_index = attr.used_index;
>          return 0;
>  }
>
> @@ -1540,16 +1546,6 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
>          }
>  }
>
> -static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
> -{
> -        int i;
> -
> -        for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
> -                ndev->vqs[i].avail_idx = 0;
> -                ndev->vqs[i].used_idx = 0;
> -        }
> -}
> -
>  /* TODO: cross-endian support */
>  static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
>  {
> @@ -1785,7 +1781,6 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
>          if (!status) {
>                  mlx5_vdpa_info(mvdev, "performing device reset\n");
>                  teardown_driver(ndev);
> -                clear_virtqueues(ndev);
>                  mlx5_vdpa_destroy_mr(&ndev->mvdev);
>                  ndev->mvdev.status = 0;
>                  ndev->mvdev.mlx_features = 0;
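
As a side note, the index handling the patch settles on can be sketched in a
few lines. The sketch below is illustrative only: the struct and function
names (demo_vq, demo_vq_suspend() and friends) are invented for this example
and are not part of mlx5_vnet.c or the vdpa core. It simply assumes a device
that completes buffers in order, so the available and used indices are equal
whenever the queue is quiesced.

#include <linux/types.h>

/* Hypothetical sketch, not driver code: shows how one index can stand in
 * for both the available and used indices of an in-order device.
 */
struct demo_vq {
        u16 avail_idx;  /* available ring index */
        u16 used_idx;   /* used ring index */
};

/* Suspend path: capture both indices from the hardware so they can still
 * be reported after the hardware queue object is destroyed.
 */
static void demo_vq_suspend(struct demo_vq *vq, u16 hw_avail, u16 hw_used)
{
        vq->avail_idx = hw_avail;
        vq->used_idx = hw_used;
}

/* set_vq_state() carries a single index; since the two indices must match
 * for in-order completion, mirror the value into both fields.
 */
static void demo_vq_set_state(struct demo_vq *vq, u16 idx)
{
        vq->avail_idx = idx;
        vq->used_idx = idx;
}

/* get_vq_state() reports the used index, the value the hardware is known
 * to return reliably.
 */
static u16 demo_vq_get_state(const struct demo_vq *vq)
{
        return vq->used_idx;
}

With this arrangement a set_vq_state(get_vq_state()) round trip restores the
queue correctly across suspend/resume even though only one index is carried
in the state.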