Jason Wang
2021-Dec-09 09:12 UTC
[PATCH v1 6/7] vdpa: Add support for querying control virtqueue index
On Thu, Dec 9, 2021 at 4:26 PM Eli Cohen <elic at nvidia.com> wrote:
>
> On Thu, Dec 09, 2021 at 04:17:22PM +0800, Jason Wang wrote:
> > On Thu, Dec 9, 2021 at 4:04 PM Eli Cohen <elic at nvidia.com> wrote:
> > >
> > > On Thu, Dec 09, 2021 at 03:55:48PM +0800, Jason Wang wrote:
> > > > On Thu, Dec 9, 2021 at 3:09 PM Eli Cohen <elic at nvidia.com> wrote:
> > > > >
> > > > > On Thu, Dec 09, 2021 at 01:44:56PM +0800, Jason Wang wrote:
> > > > > > On Thu, Dec 9, 2021 at 4:15 AM Eli Cohen <elic at nvidia.com> wrote:
> > > > > > >
> > > > > > > Add netlink attribute and callback function to query the control VQ
> > > > > > > index of a device.
> > > > > > >
> > > > > > > Example:
> > > > > > >
> > > > > > > $ vdpa dev config show vdpa-a
> > > > > > > vdpa-a: mac 00:00:00:00:88:88 link up link_announce false max_vq_pairs 5 \
> > > > > > >         mtu 9000 ctrl_vq_idx 10
> > > > > >
> > > > > > I still wonder about the motivation for this.
> > > > >
> > > > > To be able to show the stats for CVQ.
> > > >
> > > > Right.
> > > >
> > > > > > And as discussed, the
> > > > > > ctrl_vq_idx varies depending on whether MQ is negotiated.
> > > > >
> > > > > I handle this according to the spec and it depends on MQ.
> > > >
> > > > Yes, but there could be a chance that the cvq index changes after the
> > > > vdpa dev config show.
> > >
> > > It depends on:
> > > VIRTIO_NET_F_CTRL_VQ
> > > VIRTIO_NET_F_MQ
> > >
> > > which can change only if the features are re-negotiated and that happens
> > > on driver initialization.
> > >
> > > And on max_virtqueue_pairs which is also set at driver initialization.
> > >
> > > So I don't see any real issue here. Am I missing something?
> >
> > No. I meant technically there could be a re-negotiation that happens
> > in the middle:
> >
> > 1) vdpa dev config show
> > 2) feature renegotiation which changes the cvq index
> > 3) getting cvq stats
> >
> > So the cvq index might be changed. E.g. it could be changed from a cvq
> > to a rx queue. It might be easier to make the get index and stats
> > atomic.
>
> The interface to read VQ stats is based on the user providing the index
> and getting the statistics.
>
> If we want to follow your suggestion then maybe we need to use a
> slightly modified command:
>
> For regular VQ:
> vdpa dev vstats show vdpa-a qidx 0
>
> For CVQ:
> vdpa dev vstats show vdpa-a cvq

This might be better, but we need to make sure cvq only works for the
device that has a cvq.

>
> And that raises another question. Do we want to report the CVQ index in
> config show?

If we go with stats show vdpa-a cvq, we don't need to report the cvq
index.

Thanks

> > Thanks
> >
> > > > Thanks
> > > >
> > > > > > Thanks
> > > > > >
> > > > > > > Signed-off-by: Eli Cohen <elic at nvidia.com>
> > > > > > > ---
> > > > > > > v0 -> v1:
> > > > > > > 1. Use logic defined in the spec to deduce virtqueue index.
> > > > > > >
> > > > > > >  drivers/vdpa/vdpa.c       | 25 +++++++++++++++++++++++++
> > > > > > >  include/uapi/linux/vdpa.h |  1 +
> > > > > > >  2 files changed, 26 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > > > > > > index 3bf016e03512..b4d4b8a7ca4e 100644
> > > > > > > --- a/drivers/vdpa/vdpa.c
> > > > > > > +++ b/drivers/vdpa/vdpa.c
> > > > > > > @@ -712,6 +712,27 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
> > > > > > >  	return msg->len;
> > > > > > >  }
> > > > > > >
> > > > > > > +static int vdpa_dev_net_ctrl_vq_fill(struct vdpa_device *vdev,
> > > > > > > +				     struct sk_buff *msg,
> > > > > > > +				     struct virtio_net_config *config,
> > > > > > > +				     u64 features)
> > > > > > > +{
> > > > > > > +	u16 N;
> > > > > > > +
> > > > > > > +	/* control VQ index, if available, is deduced according to the logic
> > > > > > > +	 * described in the virtio spec in section 5.1.2
> > > > > > > +	 */
> > > > > > > +	if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
> > > > > > > +		return 0;
> > > > > > > +
> > > > > > > +	if (features & BIT_ULL(VIRTIO_NET_F_MQ))
> > > > > > > +		N = le16_to_cpu(config->max_virtqueue_pairs);
> > > > > > > +	else
> > > > > > > +		N = 1;
> > > > > > > +
> > > > > > > +	return nla_put_u16(msg, VDPA_ATTR_DEV_CTRL_VQ_IDX, 2 * N);
> > > > > > > +}
> > > > > > > +
> > > > > > >  static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
> > > > > > >  				       struct sk_buff *msg, u64 features,
> > > > > > >  				       const struct virtio_net_config *config)
> > > > > > > @@ -730,6 +751,7 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > > > > > >  	struct virtio_net_config config = {};
> > > > > > >  	u64 features;
> > > > > > >  	u16 val_u16;
> > > > > > > +	int err;
> > > > > > >
> > > > > > >  	vdpa_get_config(vdev, 0, &config, sizeof(config));
> > > > > > >
> > > > > > > @@ -746,6 +768,9 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > > > > > >  		return -EMSGSIZE;
> > > > > > >
> > > > > > >  	features = vdev->config->get_driver_features(vdev);
> > > > > > > +	err = vdpa_dev_net_ctrl_vq_fill(vdev, msg, &config, features);
> > > > > > > +	if (err)
> > > > > > > +		return err;
> > > > > > >
> > > > > > >  	return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
> > > > > > >  }
> > > > > > > diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> > > > > > > index a252f06f9dfd..2e3a7f89f42d 100644
> > > > > > > --- a/include/uapi/linux/vdpa.h
> > > > > > > +++ b/include/uapi/linux/vdpa.h
> > > > > > > @@ -34,6 +34,7 @@ enum vdpa_attr {
> > > > > > >  	VDPA_ATTR_DEV_MAX_VQS,			/* u32 */
> > > > > > >  	VDPA_ATTR_DEV_MAX_VQ_SIZE,		/* u16 */
> > > > > > >  	VDPA_ATTR_DEV_MIN_VQ_SIZE,		/* u16 */
> > > > > > > +	VDPA_ATTR_DEV_CTRL_VQ_IDX,		/* u16 */
> > > > > > >
> > > > > > >  	VDPA_ATTR_DEV_NET_CFG_MACADDR,		/* binary */
> > > > > > >  	VDPA_ATTR_DEV_NET_STATUS,		/* u8 */
> > > > > > > --
> > > > > > > 2.33.1
> > > > > > >
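
For context on how user space would consume the new attribute, below is a
minimal sketch of how a libmnl-based client (the iproute2 vdpa tool is built
on libmnl) could pull VDPA_ATTR_DEV_CTRL_VQ_IDX out of a
VDPA_CMD_DEV_CONFIG_GET reply. It is illustrative only and not part of this
series; the callback names attr_cb and config_cb are made up, and the
request construction and socket handling are omitted.

/* Hypothetical user-space sketch (not part of this patch): parse the new
 * VDPA_ATTR_DEV_CTRL_VQ_IDX attribute from a VDPA_CMD_DEV_CONFIG_GET reply
 * using libmnl.
 */
#include <stdio.h>
#include <libmnl/libmnl.h>
#include <linux/genetlink.h>
#include <linux/vdpa.h>

static int attr_cb(const struct nlattr *attr, void *data)
{
	const struct nlattr **tb = data;
	uint16_t type = mnl_attr_get_type(attr);

	/* index the attributes we care about by type */
	if (type <= VDPA_ATTR_MAX)
		tb[type] = attr;
	return MNL_CB_OK;
}

/* Per-message callback, e.g. passed to mnl_cb_run() on the reply. */
static int config_cb(const struct nlmsghdr *nlh, void *data)
{
	const struct nlattr *tb[VDPA_ATTR_MAX + 1] = {};

	/* skip the genetlink header, then collect attributes */
	mnl_attr_parse(nlh, GENL_HDRLEN, attr_cb, tb);

	if (tb[VDPA_ATTR_DEV_NAME])
		printf("%s: ", mnl_attr_get_str(tb[VDPA_ATTR_DEV_NAME]));
	if (tb[VDPA_ATTR_DEV_CTRL_VQ_IDX])
		printf("ctrl_vq_idx %u",
		       mnl_attr_get_u16(tb[VDPA_ATTR_DEV_CTRL_VQ_IDX]));
	printf("\n");
	return MNL_CB_OK;
}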
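
On the "make the get index and stats atomic" point, here is a hedged sketch
of what a cvq-selector stats handler in drivers/vdpa/vdpa.c could look like,
re-deriving the control VQ index from the currently negotiated features
right before filling the counters. vdpa_dev_vq_stats_fill() is a
hypothetical helper, and real code would still need to serialize against
feature re-negotiation; none of this is in the posted series.

/* Hedged sketch (not in this patch): if the stats command grows a "cvq"
 * selector, the handler can recompute the control VQ index at query time,
 * so a re-negotiation between "config show" and the stats query cannot
 * silently redirect the request to a data queue.
 */
static int vdpa_dev_cvq_stats_fill(struct vdpa_device *vdev,
				   struct sk_buff *msg)
{
	struct virtio_net_config config = {};
	u64 features;
	u16 n = 1;

	features = vdev->config->get_driver_features(vdev);
	if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
		return -EOPNOTSUPP;	/* device has no control VQ */

	vdpa_get_config(vdev, 0, &config, sizeof(config));
	if (features & BIT_ULL(VIRTIO_NET_F_MQ))
		n = le16_to_cpu(config.max_virtqueue_pairs);

	/* virtio spec 5.1.2: the control VQ is queue 2 * N */
	return vdpa_dev_vq_stats_fill(vdev, msg, 2 * n);	/* hypothetical helper */
}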