On Tue, Feb 07, 2023 at 02:45:39PM -0500, Stefan Hajnoczi wrote:
> On Tue, Feb 07, 2023 at 11:14:46AM +0100, Peter-Jan Gootzen wrote:
> > Hi,
> >
[cc German]
> > For my MSc thesis project in collaboration with IBM
> > (https://github.com/IBM/dpu-virtio-fs) we are looking to improve the
> > performance of the virtio-fs driver in high-throughput scenarios. We
> > think the main bottleneck is the fact that the virtio-fs driver does
> > not support multi-queue (while the spec does). A big factor in this is
> > that our setup on the virtio-fs device side (a DPU) does not easily
> > allow multiple cores to tend to a single virtio queue.
This is an interesting limitation of the DPU.
> >
> > We are therefore looking to implement multi-queue functionality in the
> > virtio-fs driver. The request queues seem to already get created at
> > probe, but are left unused afterwards. The current plan is to select
> > the queue for a request based on the current smp processor id and to
> > set the virtio queue interrupt affinity for each core accordingly at
> > probe.
> >
> > This is my first time contributing to the Linux kernel, so I am here
> > to ask what the maintainers' thoughts are about this plan.
In general, we have talked about multiqueue support in the past but
nothing actually made it upstream. So if there are patches to make it
happen, it should be reasonable to look at and review them.
Is it just a theory at this point, or have you implemented it and seen
a significant performance benefit with multiqueue?
Thanks
Vivek
> Hi,
> Sounds good. Assigning vqs round-robin is the strategy that virtio-net
> and virtio-blk use. virtio-blk could be an interesting example as it's
> similar to virtiofs. The Linux multiqueue block layer and core virtio
> irq allocation handle CPU affinity in the case of virtio-blk.
>
> Which DPU are you targeting?
>
> Stefan
>
> >
> > Best,
> > Peter-Jan Gootzen
> > MSc student at VU University Amsterdam & IBM Research Zurich
> >