search for: kmap

Displaying 20 results from an estimated 389 matches for "kmap".

2019 Apr 29
0
[PATCH v3 15/19] drm/mgag200: Replace mapping code with drm_gem_vram_{kmap/kunmap}()
The mgag200 driver establishes several memory mappings for frame buffers and cursors. This patch converts the driver to use the equivalent drm_gem_vram_kmap() functions. It removes the dependencies on TTM and cleans up the code. --- drivers/gpu/drm/mgag200/mgag200_cursor.c | 35 +++++++++++------------- drivers/gpu/drm/mgag200/mgag200_drv.h | 1 - drivers/gpu/drm/mgag200/mgag200_fb.c | 22 +++++++++------ drivers/gpu/drm/mgag200/mgag200_mode.c...
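
As a rough orientation, the sketch below shows the map/write/unmap pattern such a conversion targets. It assumes the helper signatures of this 2019 series, drm_gem_vram_kmap(gbo, map, is_iomem) and drm_gem_vram_kunmap(gbo) (later patches in these results simplify them), and that the object is already pinned; the function example_write_vram is a hypothetical illustration, not code from the patch.

#include <linux/err.h>
#include <linux/io.h>
#include <linux/string.h>
#include <drm/drm_gem_vram_helper.h>

/* Sketch only: map a pinned VRAM-backed GEM object, copy data in, unmap. */
static int example_write_vram(struct drm_gem_vram_object *gbo,
                              const void *src, size_t len)
{
        bool is_iomem;
        void *dst;

        dst = drm_gem_vram_kmap(gbo, true, &is_iomem);
        if (IS_ERR(dst))
                return PTR_ERR(dst);

        /* VRAM mappings may be I/O memory; plain memcpy() is only valid
         * for system-memory placements. */
        if (is_iomem)
                memcpy_toio((void __iomem *)dst, src, len);
        else
                memcpy(dst, src, len);

        drm_gem_vram_kunmap(gbo);
        return 0;
}
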
2020 Sep 29
0
[PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function
...drm/drm_gem_vram_helper.c index 3fe4b326e18e..256b346664f2 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -382,16 +382,16 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, - bool map, bool *is_iomem) +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) { int ret; struct ttm_bo_kmap_obj *kmap = &gbo->kmap; + bool is_iomem; if (gbo->kmap_use_count > 0) goto out; - if (kmap->...
2020 Oct 15
0
[PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
The parameters map and is_iomem are always of the same value. Removed them to prepare the function for conversion to struct dma_buf_map. v4: * don't check for !kmap->virtual; will always be false Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch> --- drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) diff --git a/drivers/...
2018 Dec 12
0
[PATCH v2 16/18] drm/qxl: implement prime kmap/kunmap
Generic fbdev emulation needs this. Also: We must keep track of the number of mappings now, so we don't unmap early in case two users want a kmap of the same bo. Add a sanity check to destroy callback to make sure kmap/kunmap is balanced. Signed-off-by: Gerd Hoffmann <kraxel at redhat.com> --- drivers/gpu/drm/qxl/qxl_drv.h | 1 + drivers/gpu/drm/qxl/qxl_object.c | 6 ++++++ drivers/gpu/drm/qxl/qxl_prime.c | 17 +++++++++++++---...
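
The counting scheme described here is generic enough to show in isolation: the first user creates the mapping, later users share it, and it is only torn down when the last user is done, with a balance check at destroy time. The stand-alone sketch below (all names hypothetical, with malloc()/free() standing in for the real kmap/kunmap of a buffer object) only illustrates that bookkeeping, not the qxl code itself.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a buffer object with a shared kernel mapping. */
struct example_bo {
        void *vaddr;            /* current mapping, NULL while unmapped */
        unsigned int map_count; /* number of active users of the mapping */
        size_t size;
};

/* First caller creates the mapping; later callers share it. */
static void *example_kmap(struct example_bo *bo)
{
        if (bo->map_count++ == 0)
                bo->vaddr = malloc(bo->size);   /* stand-in for the real map */
        return bo->vaddr;
}

/* The mapping is released only when the last user drops it. */
static void example_kunmap(struct example_bo *bo)
{
        if (--bo->map_count == 0) {
                free(bo->vaddr);                /* stand-in for the real unmap */
                bo->vaddr = NULL;
        }
}

int main(void)
{
        struct example_bo bo = { .size = 4096 };
        void *a = example_kmap(&bo);
        void *b = example_kmap(&bo);    /* second user: no second mapping */

        assert(a == b);
        example_kunmap(&bo);            /* still mapped: one user remains */
        example_kunmap(&bo);            /* last user gone: mapping dropped */

        /* The destroy-time sanity check boils down to this balance test. */
        assert(bo.map_count == 0 && bo.vaddr == NULL);
        printf("kmap/kunmap balanced\n");
        return 0;
}
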
2019 Apr 29
4
[PATCH v3 01/19] drm: Add |struct drm_gem_vram_object| and helpers
...veral data types. The helpers are > currently build with TTM, but this is considered an implementation > detail and may change in future updates. > > v2: > * rename to |struct drm_gem_vram_object| > * move drm_is_gem_ttm() to a later patch in the series > * add drm_gem_vram_kmap_at() > * return is_iomem from kmap functions > * redefine TTM placement flags for public interface > * documentation fixes > > Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> > --- > Documentation/gpu/drm-mm.rst | 12 + > drivers/gpu/drm/Kco...
2019 May 06
0
[PATCH v4 01/19] drm: Add |struct drm_gem_vram_object| and helpers
...tes. v4: * cleanups from checkpatch.pl * removed several fixed-size types from interfaces * DRM_VRAM_HELPER now selects DRM_TTM * remove separate config option for GEM VRAM v2: * rename to |struct drm_gem_vram_object| * move drm_is_gem_ttm() to a later patch in the series * add drm_gem_vram_kmap_at() * return is_iomem from kmap functions * redefine TTM placement flags for public interface * documentation fixes Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> --- Documentation/gpu/drm-mm.rst | 15 + drivers/gpu/drm/Kconfig | 7 + drivers/g...
2019 Apr 29
0
[PATCH v3 01/19] drm: Add |struct drm_gem_vram_object| and helpers
...h other; except for the names of several data types. The helpers are currently build with TTM, but this is considered an implementation detail and may change in future updates. v2: * rename to |struct drm_gem_vram_object| * move drm_is_gem_ttm() to a later patch in the series * add drm_gem_vram_kmap_at() * return is_iomem from kmap functions * redefine TTM placement flags for public interface * documentation fixes Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> --- Documentation/gpu/drm-mm.rst | 12 + drivers/gpu/drm/Kconfig | 13 + drivers/g...
2020 Nov 03
0
[patch V3 22/37] highmem: High implementation details and document API
Move the gory details of kmap & al into a private header and only document the interfaces which are usable by drivers. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- include/linux/highmem-internal.h | 174 +++++++++++++++++++++++++ include/linux/highmem.h | 270 ++++++++++++++...
2020 Nov 03
0
[patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
No reason having the same code in every architecture. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Russell King <linux at armlinux.org.uk> Cc: Arnd Bergmann <arnd at arndb.de> Cc: linux-arm-kernel at lists.infradead.org --- V3: Remove the kmap types cruft --- arch/arm/Kconfig | 1 arch/arm/include/asm/fixmap.h | 4 - arch/arm/include/asm/highmem.h | 33 +++++++--- arch/arm/include/asm/kmap_types.h | 10 --- arch/arm/mm/Makefile | 1 arch/arm/mm/highmem.c | 121 ------------...
2020 Nov 03
0
[patch V3 24/37] sched: highmem: Store local kmaps in task struct
Instead of storing the map per CPU provide and use per task storage. That prepares for local kmaps which are preemptible. The context switch code is preparatory and not yet in use because kmap_atomic() runs with preemption disabled. Will be made usable in the next step. The context switch logic is safe even when an interrupt happens after clearing or before restoring the kmaps. The kmap index...
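
The end point of this preparation is the kmap_local_*() interface: a short-lived mapping tied to the task, valid across preemption in that task but not meant to be handed to other contexts. A minimal usage sketch; the surrounding helper is hypothetical, not code from the series.

#include <linux/highmem.h>
#include <linux/string.h>

/* Sketch only: copy out of a (possibly highmem) page via a task-local map. */
static void example_copy_from_page(struct page *page, void *dst, size_t len)
{
        void *src = kmap_local_page(page);

        /* Unlike kmap_atomic(), this section may be preempted; the per-task
         * kmap slots are switched together with the task. */
        memcpy(dst, src, len);
        kunmap_local(src);
}
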
2019 Apr 29
21
[PATCH v3 00/19] Share TTM code among DRM framebuffer drivers
...re VRAM MM callback structure among drivers * move VRAM MM instances to drm_device and share rsp. code v2: * rename |struct drm_gem_ttm_object| to |struct drm_gem_vram_object| * rename |struct drm_simple_ttm| to |struct drm_vram_mm| * make drm_is_gem_ttm() an internal helper * add drm_gem_vram_kmap_at() * return is_iomem from kmap functions * redefine TTM placement flags for public interface * add drm_vram_mm_mmap() helper * replace almost all of driver's TTM code with these helpers * documentation fixes Thomas Zimmermann (19): drm: Add |struct drm_gem_vram_object| and helpers d...
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...or it can be done by this series (only about three 4K > > > pages were vmapped per virtqueue)? > > When I answered about the advantages of mmu notifier and I mentioned > > guaranteed 2m/gigapages where available, I overlooked the detail you > > were using vmap instead of kmap. So with vmap you're actually doing > > the opposite, it slows down the access because it will always use a 4k > > TLB even if QEMU runs on THP or gigapages hugetlbfs. > > > > If there's just one page (or a few pages) in each vmap there's no need > > of vm...
2013 Sep 19
3
[PATCH] xen/balloon: don't alloc page while non-preemptible
...E_PVMMU if (xen_pv_domain() && !PageHighMem(page)) { ret = HYPERVISOR_update_va_mapping( @@ -422,24 +426,19 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) BUG_ON(ret); } #endif - } - - /* Ensure that ballooned highmem pages don't have kmaps. */ - kmap_flush_unused(); - flush_tlb_all(); - - /* No more mappings: invalidate P2M and add to balloon. */ - for (i = 0; i < nr_pages; i++) { - pfn = mfn_to_pfn(frame_list[i]); if (!xen_feature(XENFEAT_auto_translated_physmap)) { unsigned long p; p = page_to_pfn(scratch_page);...
2009 Aug 18
1
[PATCH 1/2] drm/nouveau: minor gem cleanups
...bbo = nouveau_gem_object(gem); ret = ttm_bo_reserve(&pbbo->bo, false, false, true, chan->fence.sequence); @@ -669,7 +669,8 @@ nouveau_gem_ioctl_pushbuf_call(struct drm_device *dev, void *data, /* Apply any relocations that are required */ if (do_reloc) { - ret = ttm_bo_kmap(&pbbo->bo, 0, pbbo->bo.mem.num_pages, &pbbo->kmap); + ret = ttm_bo_kmap(&pbbo->bo, 0, pbbo->bo.mem.num_pages, + &pbbo->kmap); if (ret) { NV_ERROR(dev, "kmap pb: %d\n", ret); goto out; -- 1.6.3.3
2009 Aug 04
5
[PATCH 1/6] drm/nouveau: bo read/write wrappers for nv04_crtc.c
...diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index d59ffc4..442bab7 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -172,6 +172,29 @@ nouveau_bo_unmap(struct nouveau_bo *nvbo) ttm_bo_kunmap(&nvbo->kmap); } +u32 +nouveau_bo_rd32(struct nouveau_bo *nvbo, unsigned index) +{ + bool is_iomem; + u32 *mem = ttm_kmap_obj_virtual(&nvbo->kmap, &is_iomem); + mem = &mem[index]; + if (is_iomem) + return ioread32_native((void __force __iomem *)mem); + else + return *mem; +} + +void +nouveau...
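
The matching store path presumably branches on is_iomem in the same way; as a purely illustrative counterpart (hypothetical name, plain iowrite32() instead of the driver's native-endian variant, not the patch's code):

#include <linux/io.h>
#include <linux/types.h>
#include <drm/ttm/ttm_bo_api.h>

/* Sketch only: write one 32-bit word through an existing TTM kmap. */
static void example_bo_wr32(struct ttm_bo_kmap_obj *kmap, unsigned int index,
                            u32 val)
{
        bool is_iomem;
        u32 *mem = ttm_kmap_obj_virtual(kmap, &is_iomem);

        if (is_iomem)
                iowrite32(val, (void __force __iomem *)&mem[index]);
        else
                mem[index] = val;
}
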
2019 Mar 08
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...more userspace > pages to kenrel) or it can be done by this series (only about three 4K > pages were vmapped per virtqueue)? When I answered about the advantages of mmu notifier and I mentioned guaranteed 2m/gigapages where available, I overlooked the detail you were using vmap instead of kmap. So with vmap you're actually doing the opposite, it slows down the access because it will always use a 4k TLB even if QEMU runs on THP or gigapages hugetlbfs. If there's just one page (or a few pages) in each vmap there's no need of vmap, the linearity vmap provides doesn't pay of...
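
The trade-off being debated: vmap() stitches the pages into a new virtual range built from 4K PTEs no matter how the underlying memory is backed, while touching each page individually through kmap()/the direct map keeps whatever large kernel mappings already exist. A schematic sketch of the two access styles (names hypothetical):

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Style A: one linear window over a few pages -- convenient, but the
 * window is always made of 4K PTEs. */
static void *example_map_linear(struct page **pages, unsigned int npages)
{
        return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
}

/* Style B: per-page access -- lowmem pages resolve to the direct map,
 * so existing huge kernel mappings are preserved. */
static void example_copy_page(struct page *page, void *dst)
{
        void *src = kmap(page);

        memcpy(dst, src, PAGE_SIZE);
        kunmap(page);
}
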
2019 May 06
2
[PATCH v4 01/19] drm: Add |struct drm_gem_vram_object| and helpers
...ush_to_system(struct drm_gem_vram_object *gbo) > +{ > + int i, ret; > + struct ttm_operation_ctx ctx = { false, false }; > + > + if (!gbo->pin_count) > + return 0; Likewise. > + --gbo->pin_count; > + if (gbo->pin_count) > + return 0; > + > + if (gbo->kmap.virtual) > + ttm_bo_kunmap(&gbo->kmap); > + > + drm_gem_vram_placement(gbo, TTM_PL_FLAG_SYSTEM); > + for (i = 0; i < gbo->placement.num_placement ; ++i) > + gbo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; > + > + ret = ttm_bo_validate(&gbo->bo, &g...