search for: kmapped

Displaying 20 results from an estimated 396 matches for "kmapped".

2019 Apr 29
0
[PATCH v3 15/19] drm/mgag200: Replace mapping code with drm_gem_vram_{kmap/kunmap}()
The mgag200 driver establishes several memory mappings for frame buffers and cursors. This patch converts the driver to use the equivalent drm_gem_vram_kmap() functions. It removes the dependencies on TTM and cleans up the code. --- drivers/gpu/drm/mgag200/mgag200_cursor.c | 35 +++++++++++------------- drivers/gpu/drm/mgag200/mgag200_drv.h | 1 - drivers/gpu/drm/mgag200/mgag200_fb.c | 22
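A minimal sketch of the pattern this conversion targets, assuming the drm_gem_vram_kmap()/drm_gem_vram_kunmap() signatures from this series; the surrounding cursor-update logic is hypothetical and error paths are simplified:

    #include <drm/drm_gem_vram_helper.h>
    #include <linux/err.h>
    #include <linux/string.h>

    /* Illustrative only: map a VRAM BO, write into it, then drop the mapping. */
    static int sketch_update_cursor(struct drm_gem_vram_object *gbo,
                                    const void *src, size_t len)
    {
        bool is_iomem;
        void *dst = drm_gem_vram_kmap(gbo, true, &is_iomem);

        if (IS_ERR(dst))
            return PTR_ERR(dst);

        memcpy(dst, src, len);  /* real code would use memcpy_toio() when is_iomem is set */
        drm_gem_vram_kunmap(gbo);
        return 0;
    }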
2020 Sep 29
0
[PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function
The parameters map and is_iomem always have the same value. Remove them to prepare the function for conversion to struct dma_buf_map. Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> --- drivers/gpu/drm/drm_gem_vram_helper.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c
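A hedged before/after sketch of what removing invariant parameters means here; the function name mirrors the internal helper in drm_gem_vram_helper.c, but the prototypes below are illustrative rather than the actual diff:

    /* Before: every caller passes map=true and a throwaway is_iomem pointer. */
    static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
                                          bool map, bool *is_iomem);

    /* After: the invariant arguments are gone, simplifying all call sites. */
    static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo);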
2020 Oct 15
0
[PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
The parameters map and is_iomem always have the same value. Remove them to prepare the function for conversion to struct dma_buf_map. v4: * don't check for !kmap->virtual; it will always be false Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch> --- drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
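For context, a rough sketch of the struct dma_buf_map that the helper is being prepared for (as merged around this series and later renamed struct iosys_map); it carries the is_iomem flag together with the pointer, and the consumer below is illustrative:

    #include <linux/dma-buf-map.h>
    #include <linux/io.h>
    #include <linux/string.h>

    /* Write into a mapping without caring whether it is I/O or system memory. */
    static void sketch_write_map(struct dma_buf_map *map, const void *src, size_t len)
    {
        if (map->is_iomem)
            memcpy_toio(map->vaddr_iomem, src, len);
        else
            memcpy(map->vaddr, src, len);
    }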
2018 Dec 12
0
[PATCH v2 16/18] drm/qxl: implement prime kmap/kunmap
Generic fbdev emulation needs this. Also, we must keep track of the number of mappings now, so we don't unmap early when two users want a kmap of the same bo. Add a sanity check to the destroy callback to make sure kmap/kunmap calls are balanced. Signed-off-by: Gerd Hoffmann <kraxel at redhat.com> --- drivers/gpu/drm/qxl/qxl_drv.h | 1 + drivers/gpu/drm/qxl/qxl_object.c | 6 ++++++
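A generic sketch of the mapping-count idea described above; the bo type and the one-shot map/unmap primitives are hypothetical stand-ins, not the qxl functions:

    #include <linux/bug.h>

    struct sketch_bo {
        unsigned int map_count;   /* number of active kmap users */
        void *vaddr;              /* kernel mapping, valid while map_count > 0 */
    };

    /* Hypothetical one-shot primitives that do the actual (un)mapping. */
    extern void *sketch_do_kmap(struct sketch_bo *bo);
    extern void sketch_do_kunmap(struct sketch_bo *bo);

    static void *sketch_bo_kmap(struct sketch_bo *bo)
    {
        if (bo->map_count++ == 0)
            bo->vaddr = sketch_do_kmap(bo);   /* first user creates the mapping */
        return bo->vaddr;
    }

    static void sketch_bo_kunmap(struct sketch_bo *bo)
    {
        WARN_ON(bo->map_count == 0);          /* sanity check: calls must balance */
        if (--bo->map_count == 0) {
            sketch_do_kunmap(bo);             /* last user tears the mapping down */
            bo->vaddr = NULL;
        }
    }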
2019 Apr 29
4
[PATCH v3 01/19] drm: Add |struct drm_gem_vram_object| and helpers
Hi Thomas. Some minor things and some bikeshedding too. One general^Wbikeshedding thing - uint32_t is used in many places, and then s64 in one place. It seems like two concepts are mixed; maybe be consistent and use u32, s32 everywhere? I did not read the code carefully enough to understand it. I cannot give an r-b or a-b, as I feel I need to understand it to do so. Sam On Mon, Apr 29, 2019 at
2019 May 06
0
[PATCH v4 01/19] drm: Add |struct drm_gem_vram_object| and helpers
The type |struct drm_gem_vram_object| implements a GEM object for simple framebuffer devices with dedicated video memory. The BO is located either in VRAM or in system memory. The implementation has been created from the respective code in ast, bochs and mgag200. These drivers copied their implementations from each other, except for the names of several data types. The helpers are currently built with
2019 Apr 29
0
[PATCH v3 01/19] drm: Add |struct drm_gem_vram_object| and helpers
The type |struct drm_gem_vram_object| implements a GEM object for simple framebuffer devices with dedicated video memory. The BO is located either in VRAM or in system memory. The implementation has been created from the respective code in ast, bochs and mgag200. These drivers copied their implementations from each other, except for the names of several data types. The helpers are currently built with
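For orientation, a rough sketch of the object described above; the fields follow the cover text (a TTM BO backed by VRAM or system memory plus an embedded GEM object), but their exact names and order are illustrative, not copied from the patch:

    #include <drm/drm_gem.h>
    #include <drm/ttm/ttm_bo_api.h>
    #include <drm/ttm/ttm_placement.h>

    /* Illustrative shape of the VRAM-backed GEM object. */
    struct sketch_gem_vram_object {
        struct ttm_buffer_object bo;      /* TTM BO; lives in VRAM or system memory */
        struct ttm_bo_kmap_obj kmap;      /* cached kernel mapping of the BO */
        struct drm_gem_object gem;        /* GEM object exposed to userspace */
        struct ttm_placement placement;   /* allowed placements */
        struct ttm_place placements[2];   /* VRAM and SYSTEM */
        int pin_count;                    /* pin refcount for scanout/cursor use */
    };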
2020 Nov 03
0
[patch V3 22/37] highmem: High implementation details and document API
Move the gory details of kmap et al. into a private header and only document the interfaces which are usable by drivers. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- include/linux/highmem-internal.h | 174 +++++++++++++++++++++++++ include/linux/highmem.h | 270 ++++++++++++++------------------- mm/highmem.c | 11 - 3
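A short usage sketch of the driver-facing side of that split: drivers keep including linux/highmem.h and use the documented interfaces, while the mapping internals live in the private header. The helper below is illustrative:

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Copy a buffer into a (possibly highmem) page via a temporary kernel mapping. */
    static void sketch_copy_to_page(struct page *page, const void *src, size_t len)
    {
        void *dst = kmap(page);   /* documented interface; implementation details are hidden */

        memcpy(dst, src, len);
        kunmap(page);
    }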
2020 Nov 03
0
[patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
There is no reason to have the same code in every architecture. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Russell King <linux at armlinux.org.uk> Cc: Arnd Bergmann <arnd at arndb.de> Cc: linux-arm-kernel at lists.infradead.org --- V3: Remove the kmap types cruft --- arch/arm/Kconfig | 1 arch/arm/include/asm/fixmap.h | 4 -
2020 Nov 03
0
[patch V3 24/37] sched: highmem: Store local kmaps in task struct
Instead of storing the map per CPU, provide and use per-task storage. That prepares for local kmaps which are preemptible. The context switch code is preparatory and not yet in use because kmap_atomic() runs with preemption disabled. It will be made usable in the next step. The context switch logic is safe even when an interrupt happens after clearing or before restoring the kmaps. The kmap index in
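A heavily simplified sketch of the idea: keep the live local mappings in per-task state and replay them around a context switch so they survive preemption. Every name below is hypothetical and stands in for the real kernel structures:

    #include <linux/mm.h>

    #define SKETCH_KM_MAX 16

    /* Hypothetical per-task bookkeeping for preemptible local kmaps. */
    struct sketch_kmap_ctrl {
        int idx;                        /* number of live local mappings */
        pte_t pteval[SKETCH_KM_MAX];    /* PTEs to re-establish after switch-in */
    };

    /* Hypothetical arch hooks that install/remove a fixmap-style mapping slot. */
    extern void sketch_map_slot(int i, pte_t pte);
    extern void sketch_unmap_slot(int i);

    /* Conceptually called from the context-switch path. */
    static void sketch_switch_kmaps(struct sketch_kmap_ctrl *prev,
                                    struct sketch_kmap_ctrl *next)
    {
        int i;

        for (i = 0; i < prev->idx; i++)
            sketch_unmap_slot(i);                  /* tear down outgoing task's maps */
        for (i = 0; i < next->idx; i++)
            sketch_map_slot(i, next->pteval[i]);   /* restore incoming task's maps */
    }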
2019 Apr 29
21
[PATCH v3 00/19] Share TTM code among DRM framebuffer drivers
Several simple framebuffer drivers copy most of the TTM code from each other. The implementation is always the same, except for the names of some data structures. As recently discussed, this patch set provides generic memory-management code for simple framebuffers with dedicated video memory. It further converts the respective drivers to the generic code. The shared code is basically the same
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote: > > On 2019/3/9 3:48, Andrea Arcangeli wrote: > > Hello Jeson, > > > > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote: > > > Just to make sure I understand here. For boosting through huge TLB, do > > > you mean we can do that in the future (e.g by mapping more userspace > >
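For context, a rough sketch of the mechanism under discussion: pin a handful of userspace pages and give them a contiguous kernel virtual address with vmap(). Error handling is trimmed, and the modern flags-based get_user_pages_fast() signature is assumed:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /* Pin n user pages starting at uaddr and map them into kernel VA space. */
    static void *sketch_map_user_range(unsigned long uaddr, int n, struct page **pages)
    {
        int pinned = get_user_pages_fast(uaddr, n, FOLL_WRITE, pages);

        if (pinned != n)
            return NULL;   /* a real implementation would unpin the partial result */
        return vmap(pages, n, VM_MAP, PAGE_KERNEL);
    }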
2013 Sep 19
3
[PATCH] xen/balloon: don't alloc page while non-preemptible
From: David Vrabel <david.vrabel@citrix.com> get_balloon_scratch_page() disables preemption so we cannot call alloc_page() in between get/put_balloon_scratch_page(). Shuffle bits around in decrease_reservation() to avoid this. Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- drivers/xen/balloon.c | 23 +++++++++++------------ 1 files changed, 11 insertions(+), 12
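The fix follows a general pattern: do anything that may sleep, such as a page allocation, before entering the non-preemptible region rather than inside it. A sketch of the wrong and right orderings; the allocation flags and the wrapper function are illustrative, while the scratch-page accessors are the ones named in the patch:

    #include <linux/gfp.h>

    /* Scratch-page accessors from drivers/xen/balloon.c, per the patch above. */
    struct page *get_balloon_scratch_page(void);
    void put_balloon_scratch_page(void);

    static void sketch_ordering(void)
    {
        struct page *page, *scratch;

        /* Wrong: alloc_page() may sleep while preemption is disabled. */
        scratch = get_balloon_scratch_page();   /* disables preemption */
        page = alloc_page(GFP_KERNEL);          /* may sleep here - bug */
        put_balloon_scratch_page();

        /* Right: allocate first, then enter the non-preemptible section. */
        page = alloc_page(GFP_KERNEL);
        scratch = get_balloon_scratch_page();
        /* ... use scratch and page ... */
        put_balloon_scratch_page();
    }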
2009 Aug 18
1
[PATCH 1/2] drm/nouveau: minor gem cleanups
Signed-off-by: Pekka Paalanen <pq at iki.fi> --- drivers/gpu/drm/nouveau/nouveau_gem.c | 7 ++++--- 1 files changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 64e59fb..75cae79 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -595,7 +595,7 @@
2009 Aug 04
5
[PATCH 1/6] drm/nouveau: bo read/write wrappers for nv04_crtc.c
Introduce accessors for TTM buffer object memory that has been mapped into the kernel virtual address space or as IO memory. IO memory needs to be accessed via special accessor functions, not by dereferencing the iomem cookie. The wrappers hide the details of 32-bit access and honour the TTM map type. nv04_crtc_cursor_set() is changed to use the new wrappers. 'cursor' is received from
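A hedged sketch of what such a wrapper looks like: query the TTM mapping for its type and pick the matching 32-bit accessor. ttm_kmap_obj_virtual() is the TTM helper referred to above; the wrapper name is illustrative and the header location varies between kernel versions:

    #include <linux/io.h>
    #include <drm/ttm/ttm_bo_api.h>

    /* Read a 32-bit word from a kmapped TTM BO, honouring the map type. */
    static u32 sketch_bo_rd32(struct ttm_bo_kmap_obj *kmap, unsigned int index)
    {
        bool is_iomem;
        void *mem = ttm_kmap_obj_virtual(kmap, &is_iomem);

        if (is_iomem)
            return ioread32((void __iomem *)mem + index * 4);
        return ((u32 *)mem)[index];
    }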
2019 Mar 08
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jeson, On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote: > Just to make sure I understand here. For boosting through huge TLB, do > you mean we can do that in the future (e.g by mapping more userspace > pages to kernel) or it can be done by this series (only about three 4K > pages were vmapped per virtqueue)? When I answered about the advantages of mmu notifier and
2019 May 06
2
[PATCH v4 01/19] drm: Add |struct drm_gem_vram_object| and helpers
> +/**
> + * drm_gem_vram_unpin() - Unpins a GEM VRAM object
> + * @gbo: the GEM VRAM object
> + *
> + * Returns:
> + * 0 on success, or
> + * a negative error code otherwise.
> + */
> +int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> +{
> +	int i, ret;
> +	struct ttm_operation_ctx ctx = { false, false };
> +
> +	if (!gbo->pin_count)
> +
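For context, a hedged usage sketch of the pin/unpin pair being reviewed; the placement flag and offset helper follow the naming used in this patch series, and the scanout wrapper itself is illustrative:

    static int sketch_pin_for_scanout(struct drm_gem_vram_object *gbo)
    {
        int ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);

        if (ret)
            return ret;
        /* ... program the hardware with drm_gem_vram_offset(gbo) ... */
        drm_gem_vram_unpin(gbo);
        return 0;
    }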