On Wed, 19 Apr 2023 23:19:22 -0700, Christoph Hellwig <hch at infradead.org> wrote:
> On Wed, Apr 19, 2023 at 09:45:06AM -0700, Jakub Kicinski wrote:
> > > Can you explain what the actual use case is?
> > >
> > > From the original patchset I suspect it is dma mapping something very
> > > long term and then maybe doing syncs on it as needed?
> >
> > In this case yes, pinned user memory, it gets sliced up into MTU sized
> > chunks, fed into an Rx queue of a device, and user can see packets
> > without any copies.
>
> How long is the life time of these mappings? Because dma_map_*
> assumes a temporary mapping and not one that is pinned basically
> forever.
>
> > Quite similar use case #2 is upcoming io_uring / "direct placement"
> > patches (former from Meta, latter for Google) which will try to receive
> > just the TCP data into pinned user memory.
>
> I don't think we can just long term pin user memory here. E.g. for
> confidential computing cases we can't even ever do DMA straight to
> userspace. I had that conversation with Meta's block folks who
> want to do something similar with io_uring and the only option is
> an allocator for memory that is known DMAable, e.g. through dma-bufs.
>
> You guys really all need to get together and come up with a scheme
> that actually works instead of piling these hacks over hacks.
I think the cases Jakub mentioned are new requirements. From an
implementation point of view, expanding the DMA API so it can accommodate
this kind of special hardware looks like a better solution than the patch I
submitted. I know the current DMA API was designed only for real physical
hardware, but could it be modified or extended?

To repeat my earlier idea: could we add a new set of ops (separate from
dma_ops) to the device, which the driver sets up, so that the DMA API can
work with such special hardware? A rough sketch of what I have in mind is
below.
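Something like the following (a minimal sketch only; every name here is
hypothetical and nothing below is existing kernel API; the wrapper struct
stands in for the ops pointer that would really live in struct device):

#include <linux/device.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical per-device ops table, separate from dma_ops, installed by
 * the driver for hardware that the normal direct/IOMMU paths cannot handle.
 */
struct special_dma_map_ops {
	dma_addr_t (*map_page)(struct device *dev, struct page *page,
			       unsigned long offset, size_t size,
			       enum dma_data_direction dir);
	void (*unmap_page)(struct device *dev, dma_addr_t addr,
			   size_t size, enum dma_data_direction dir);
};

/*
 * In the real proposal the ops pointer would be a new member of struct
 * device and be consulted by dma_map_page() itself; a driver-private
 * wrapper is used here only to keep the sketch self-contained.
 */
struct special_dma_dev {
	struct device *dev;
	const struct special_dma_map_ops *ops;	/* set by the driver at probe */
};

static dma_addr_t special_dma_map_page(struct special_dma_dev *sdev,
				       struct page *page,
				       unsigned long offset, size_t size,
				       enum dma_data_direction dir)
{
	/* Divert to the driver-provided ops for the special hardware ... */
	if (sdev->ops && sdev->ops->map_page)
		return sdev->ops->map_page(sdev->dev, page, offset, size, dir);

	/* ... and fall back to the regular DMA API for normal devices. */
	return dma_map_page(sdev->dev, page, offset, size, dir);
}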
Thanks.