Linus Walleij
2020-Nov-05 10:07 UTC
[Nouveau] [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Overall I like this, just an inline question:

On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann at suse.de> wrote:

> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.

(...)

> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst: The dma-buf mapping structure
> + * @src: The source buffer
> + * @len: The number of bytes in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> +	if (dst->is_iomem)
> +		memcpy_toio(dst->vaddr_iomem, src, len);
> +	else
> +		memcpy(dst->vaddr, src, len);
> +}

Are these going to be really big memcpy() operations?

Some platforms have DMA offload engines that can perform memcpy()
(drivers/dma, include/linux/dmaengine.h), especially if the CPU doesn't
really need to touch the contents and flush caches etc. An example
exists in some MTD drivers that move large quantities of data off flash
memory like this: drivers/mtd/nand/raw/cadence-nand-controller.c

Notice that DMAengine and DMAbuf do not have much in common; the names
can be deceiving.

The value of this varies with the system architecture. It is not just a
question about performance but also about power and the CPU being able
to do other stuff in parallel for large transfers. So *when* to use
this facility to accelerate memcpy() is a delicate question.

What I'm after here is: if these can be really big, do we want (in the
long run, not now) to open up to the idea of slotting in
hardware-accelerated memcpy() here?

Yours,
Linus Walleij
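[Editor's note: for readers outside the kernel tree, the dispatch Linus quotes can be modeled in plain userspace C. This is a simplified sketch, not the kernel implementation: `struct fake_dma_buf_map` is an invented cut-down stand-in for the kernel's `struct dma_buf_map`, and a plain memcpy() stands in for memcpy_toio(), since userspace has no I/O memory.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Cut-down userspace model of the kernel's struct dma_buf_map. */
struct fake_dma_buf_map {
	union {
		void *vaddr;        /* mapping in system memory */
		void *vaddr_iomem;  /* mapping in I/O memory (__iomem in the kernel) */
	};
	bool is_iomem;
};

/*
 * Models dma_buf_map_memcpy_to(): the helper, not the caller, picks the
 * access method based on where the buffer lives. In the kernel the
 * is_iomem branch must use memcpy_toio(); here plain memcpy() stands in.
 */
static void fake_map_memcpy_to(struct fake_dma_buf_map *dst,
			       const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len); /* memcpy_toio() in the kernel */
	else
		memcpy(dst->vaddr, src, len);
}
```

In this model both branches end up at the same bytes; the point of the interface is that callers never have to branch on is_iomem themselves.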
Thomas Zimmermann
2020-Nov-05 10:37 UTC
[Nouveau] [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
>
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann at suse.de> wrote:
>
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
>
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst: The dma-buf mapping structure
>> + * @src: The source buffer
>> + * @len: The number of bytes in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +	if (dst->is_iomem)
>> +		memcpy_toio(dst->vaddr_iomem, src, len);
>> +	else
>> +		memcpy(dst->vaddr, src, len);
>> +}
>
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB (4 bytes *
horizontal resolution). Updating a full framebuffer can sum up to
several MiB.

> Some platforms have DMA offload engines that can perform memcpy(),

They could be.

> drivers/dma, include/linux/dmaengine.h
> especially if the CPU doesn't really need to touch the contents
> and flush caches etc. An example exists in some MTD drivers that
> move large quantities of data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
>
> Notice that DMAengine and DMAbuf do not have much in common; the
> names can be deceiving.
>
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
>
> What I'm after here is: if these can be really big, do we want
> (in the long run, not now) to open up to the idea of slotting in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.

Best regards
Thomas

> Yours,
> Linus Walleij

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
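[Editor's note: the shadow-buffer flush Thomas describes — a few KiB per scanline, summing to MiB per frame — can be sketched in userspace C. This is a hedged model, not DRM code: `flush_damage` and `struct damage_rect` are invented names for illustration; in a real driver the per-scanline copy would go through dma_buf_map_memcpy_to() and the destination offset would advance via the pointer-increment helper this patch adds.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical damage rectangle, in pixels; x2/y2 are exclusive. */
struct damage_rect {
	unsigned int x1, y1, x2, y2;
};

/*
 * Flush the damaged region of a shadow buffer into a destination
 * mapping, one scanline memcpy at a time. 'pitch' is the byte stride of
 * both buffers; 4 bytes per pixel as in Thomas's estimate, so each
 * scanline copy is (4 bytes * damaged width).
 */
static void flush_damage(uint8_t *dst, const uint8_t *shadow,
			 size_t pitch, const struct damage_rect *r)
{
	const size_t bpp = 4;
	size_t off = (size_t)r->y1 * pitch + (size_t)r->x1 * bpp;
	size_t len = (size_t)(r->x2 - r->x1) * bpp;
	unsigned int y;

	for (y = r->y1; y < r->y2; y++) {
		memcpy(dst + off, shadow + off, len); /* one scanline */
		off += pitch;                         /* pointer increment */
	}
}
```

For a full-screen update the loop degenerates to copying every scanline, which is where the several-MiB totals (and the DMA-offload question) come from.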
Daniel Vetter
2020-Nov-05 12:54 UTC
[Nouveau] [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
On Thu, Nov 05, 2020 at 11:37:08AM +0100, Thomas Zimmermann wrote:
> Hi
>
> Am 05.11.20 um 11:07 schrieb Linus Walleij:
> > Overall I like this, just an inline question:
> >
> > On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann at suse.de> wrote:
> >
> >> To do framebuffer updates, one needs memcpy from system memory and a
> >> pointer-increment function. Add both interfaces with documentation.
> >
> > (...)
> >> +/**
> >> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> >> + * @dst: The dma-buf mapping structure
> >> + * @src: The source buffer
> >> + * @len: The number of bytes in src
> >> + *
> >> + * Copies data into a dma-buf mapping. The source buffer is in system
> >> + * memory. Depending on the buffer's location, the helper picks the correct
> >> + * method of accessing the memory.
> >> + */
> >> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> >> +{
> >> +	if (dst->is_iomem)
> >> +		memcpy_toio(dst->vaddr_iomem, src, len);
> >> +	else
> >> +		memcpy(dst->vaddr, src, len);
> >> +}
> >
> > Are these going to be really big memcpy() operations?
>
> Individually, each could be a scanline, so a few KiB (4 bytes *
> horizontal resolution). Updating a full framebuffer can sum up to
> several MiB.
>
> > Some platforms have DMA offload engines that can perform memcpy(),
>
> They could be.
>
> > drivers/dma, include/linux/dmaengine.h
> > especially if the CPU doesn't really need to touch the contents
> > and flush caches etc. An example exists in some MTD drivers that
> > move large quantities of data off flash memory like this:
> > drivers/mtd/nand/raw/cadence-nand-controller.c
> >
> > Notice that DMAengine and DMAbuf do not have much in common; the
> > names can be deceiving.
> >
> > The value of this varies with the system architecture. It is not just
> > a question about performance but also about power and the CPU
> > being able to do other stuff in parallel for large transfers. So *when*
> > to use this facility to accelerate memcpy() is a delicate question.
> >
> > What I'm after here is: if these can be really big, do we want
> > (in the long run, not now) to open up to the idea of slotting in
> > hardware-accelerated memcpy() here?
>
> We currently use this functionality for the graphical framebuffer
> console that most DRM drivers provide. It's non-accelerated and slow,
> but this has not been much of a problem so far.
>
> Within DRM, we're more interested in removing console code from drivers
> and going for the generic implementation.
>
> Most of the graphics HW allocates framebuffers from video RAM, system
> memory or CMA pools and does not really need these memcpys. Only a few
> systems with small video RAM require a shadow buffer, which we flush
> into VRAM as needed. Those might benefit.
>
> OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
> it from the DRM code. I think it all depends on how invasive that change
> would be.

I wouldn't; all the additional locks this would pull in sound like a
nightmare. And when an oops happens, this might be the only thing that
manages to get the oops to the user.

Unless someone really starts caring about fbcon acceleration I really
wouldn't bother. Ok, maybe it also matters for fbdev, but the problem is
that the page-fault interception alone is already expensive, so the only
real solution if you care about performance in that case is to use kms
natively and use a dirty-rectangle flip (or the DIRTYFB ioctl). And in
there drivers should (and do) use any dma engines they have to upload
the frames already.
-Daniel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch