On 2021/8/16 1:47 PM, Eli Cohen wrote:
> On Mon, Aug 16, 2021 at 12:16:14PM +0800, Jason Wang wrote:
>> On 2021/8/12 5:50 PM, Eli Cohen wrote:
>>> On Thu, Aug 12, 2021 at 03:04:35PM +0800, Jason Wang wrote:
>>>> On 2021/8/12 3:01 PM, Eli Cohen wrote:
>>>>> On Thu, Aug 12, 2021 at 02:47:06PM +0800, Jason Wang wrote:
>>>>>>> On Thu, Aug 12, 2021 at 12:55 PM Eli Cohen <elic at nvidia.com> wrote:
>>>>>>>> On Thu, Aug 12, 2021 at 11:19:19AM +0800, Jason Wang wrote:
>>>>>>>> On 2021/8/11 7:04 PM, Eli Cohen wrote:
>>>>>>>>> On Wed, Aug 11, 2021 at 04:37:44PM +0800, Jason Wang wrote:
>>>>>>>>>> On 2021/8/11 3:53 PM, Eli Cohen wrote:
>>>>>>>>>>>> One thing we need to solve for mq is the following:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> +static u16 ctrl_vq_idx(struct mlx5_vdpa_dev *mvdev)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +	return 2 * mlx5_vdpa_max_qps(mvdev->max_vqs);
>>>>>>>>>>>>> +}
>>>>>>>>>>>> We should handle the case when MQ is supported by the device but not the
>>>>>>>>>>>> driver.
>>>>>>>>>>>>
>>>>>>>>>>>> E.g. in the case when we have 2 queue pairs:
>>>>>>>>>>>>
>>>>>>>>>>>> When MQ is enabled, cvq is queue 4
>>>>>>>>>>>>
>>>>>>>>>>>> When MQ is not enabled, cvq is queue 2
>>>>>>>>>>>>
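To make that dependency concrete, here is a minimal sketch of the idea (illustrative only, not the driver's actual code; it assumes the features acked by the driver are kept in an actual_features field):

static u16 ctrl_vq_idx(struct mlx5_vdpa_dev *mvdev)
{
	/* Assumption: actual_features holds what the driver acked in set_features(). */
	if (!(mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_MQ)))
		return 2;	/* single queue pair: data queues 0/1, cvq is queue 2 */

	return 2 * mlx5_vdpa_max_qps(mvdev->max_vqs);
}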
>>>>>>>>>>> There's some issue with this. I get callbacks targeting specific
>>>>>>>>>>> virtqueues before feature negotiation has been completed.
>>>>>>>>>>>
>>>>>>>>>>> Specifically, I get set_vq_cb() calls. At this point I must know the
>>>>>>>>>>> control vq index.
>>>>>>>>>> So I think we need to do both:
>>>>>>>>>>
>>>>>>>>>> 1) On one hand, it's a bug for the userspace to use vq_index before the
>>>>>>>>>> features are negotiated
>>>>>>>>>>
>>>>>>>>>> (It looks like a bug in my cvq series that calls SET_VRING_CALL before the
>>>>>>>>>> features are negotiated, which I will look into.)
>>>>>>>>>>
>>>>>>>>>> 2) On the other hand, the driver should be able to deal with that
>>>>>>>>>>
>>>>>>>>> All I can do is drop callbacks for VQs before feature negotiation has
>>>>>>>>> been completed.
>>>>>>>> Or just leave queue indices 0 and 1 working.
>>>>>>>>
>>>>>>>> Since they are not expected to be changed.
>>>>>>>>
>>>>>>> Right, will do.
>>>>>>>
>>>>>>>>>>> I think the CVQ index must not depend on the negotiated features but
>>>>>>>>>>> rather depend on the value the device driver provides in the call to
>>>>>>>>>>> _vdpa_register_device().
>>>>>>>>>> At the virtio level, it's too late to change that and it breaks backward
>>>>>>>>>> compatibility.
>>>>>>>>>>
>>>>>>>>>> But at the vDPA level, the underlying device can map the virtio cvq to any
>>>>>>>>>> of its virtqueues.
>>>>>>>>>>
>>>>>>>>>> E.g. map the cvq (index 2) to the mlx5 cvq (the last one).
>>>>>>>>>>
>>>>>>>>> I am not following you here. I still don't know which index the cvq is.
>>>>>>>> Right, we still need to wait for the features to be negotiated in order
>>>>>>>> to proceed.
>>>>>>>>
>>>>>>> So to summarise, before feature negotiation completes, I accept calls
>>>>>>> referring to VQs only for indices 0 and 1.
>>>>>>> After feature negotiation completes, I know the CVQ index and will accept
>>>>>>> indices 0 up to the cvq index.
>>>>>> I don't get this "accept indices 0 to cvq index".
>>>>> What I meant to say is that there are several callbacks that refer to
>>>>> specific virtqueues, e.g. set_vq_address(), set_vq_num() etc. They all
>>>>> accept a virtqueue index as an argument.
>>>>>
>>>>> What we want to do is verify whether the index provided is valid or
>>>>> not. If it is not valid, either return an error (if the callback can
>>>>> return a value) or just avoid processing it. If the index is valid then
>>>>> we process it normally.
>>>>>
>>>>> Now we need to decide which indices are valid. We need something
>>>>> like this to identify the valid index range:
>>>>>
>>>>> CVQ clear: 0 and 1
>>>>> CVQ set, MQ clear: 0, 1 and 2 (for CVQ).
>>>>> CVQ set, MQ set: 0..nvq where nvq is whatever was provided to
>>>>> _vdpa_register_device()
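A rough sketch of that check (illustrative only; it assumes the negotiated features and the registered VQ count are reachable from the device, e.g. as actual_features and max_vqs):

static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
{
	u64 features = mvdev->actual_features;

	/* CVQ clear: only data queues 0 and 1 exist. */
	if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
		return idx <= 1;

	/* CVQ set, MQ clear: data queues 0/1 plus the cvq at index 2. */
	if (!(features & BIT_ULL(VIRTIO_NET_F_MQ)))
		return idx <= 2;

	/* CVQ set, MQ set: anything below the count given to _vdpa_register_device(). */
	return idx < mvdev->max_vqs;
}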
>>>> Yes.
>>>>
>>> Unfortunately it does not work.
>>> set_vq_cb() for all the multiqueue VQs is called before feature
>>> negotiation. If I apply the above logic, I will lose these settings.
>>>
>>> I can make an exception for set_vq_cb(), save the callbacks and restore
>>> them afterwards. This looks too convoluted and maybe we should seek
>>> another solution.
>>
>> I agree.
>>
>>
>>> Let me know what you think.
>>
>> Rethinking about this issue, it looks like the only issue we face is
>> set_vq_cb().
>>
>> With the assumption that the userspace can use the index correctly (even
>> before set_features), I wonder if the following works.
>>
>> Instead of checking whether the index is the cvq in set_vq_cb(), how about:
>>
>> 1) decouple the event_cb out of mlx5_vdpa_virtqueue and mlx5_control_vq
>> 2) have a dedicated event_cb array in mlx5_vdpa_net
>> 3) then we can do
>>
>> ndev->event_cbs[index] = *cb;
>>
> So actually you're suggesting to save all the callback configurations in
> an array and evaluate the cvq index after feature negotiation has been
> completed.
Yes.
>
> I think that could work. I will code this and update.
Thanks.
>
>> in mlx5_vdpa_set_vq_cb()
>>
>> 4) in mlx5_cvq_kick_handler(), we know the features are negotiated and we
>> can use the correct index there.
>>
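A rough sketch of points 1)-3) above (illustrative only; the array size macro and the cast helpers are assumptions, not the final code):

struct mlx5_vdpa_net {
	/* ... existing fields ... */
	struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
};

static void mlx5_vdpa_set_vq_cb(struct vdpa_device *vdev, u16 idx,
				struct vdpa_callback *cb)
{
	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);

	/* Only record the callback here; whether idx is the cvq is known
	 * only after feature negotiation, so resolve that later, e.g. in
	 * mlx5_cvq_kick_handler() via ctrl_vq_idx().
	 */
	ndev->event_cbs[idx] = *cb;
}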
>> In the meantime, I will look at the Qemu code to see if we can guarantee
>> that set_features is called before set_vq_callback. (At first glance, it's
>> not trivial but let's see.)
>>
>> Thanks
>>
>>
>>>>> And while writing this, I think this logic does not belong in mlx5_vdpa
>>>>> but probably in vdpa.c
>>>> The problem is that vdpa should be unaware of a specific device type. E.g.
>>>> the above indices may work only for virtio-net but not others.
>>>>
>>>> Thanks
>>>>
>>>>
>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>