search for: default_cache

Displaying 20 results from an estimated 67 matches for "default_cache".

2017 Aug 27
7
[Bug 102430] New: nv4x - memory problems when starting graphical application - logs included
https://bugs.freedesktop.org/show_bug.cgi?id=102430 Bug ID: 102430 Summary: nv4x - memory problems when starting graphical application - logs included Product: xorg Version: unspecified Hardware: x86-64 (AMD64) OS: Linux (All) Status: NEW Severity: normal Priority: medium
2020 Feb 18
5
[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses
On 2/18/20 1:44 PM, Christian König wrote: > On 18.02.20 at 13:40, Thomas Zimmermann wrote: >> Hi >> >> On 17.02.20 at 16:04, Nirmoy Das wrote: >>> GPU address handling is device specific and should be handled by its >>> device >>> driver. >>> >>> Signed-off-by: Nirmoy Das <nirmoy.das at amd.com> >>> ---
2020 Feb 18
2
[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses
On 18.02.20 at 19:16, Thomas Zimmermann wrote: > Hi > > On 18.02.20 at 18:13, Nirmoy wrote: >> On 2/18/20 1:44 PM, Christian König wrote: >>> On 18.02.20 at 13:40, Thomas Zimmermann wrote: >>>> Hi >>>> >>>> On 17.02.20 at 16:04, Nirmoy Das wrote: >>>>> GPU address handling is device specific and should be handled by its
2014 May 19
2
[RFC] drm/nouveau: disable caching for VRAM BOs on ARM
This patch is not meant to be merged, but rather to try and understand why this is needed and what a more suitable solution could be. Allowing BOs to be write-cached results in the following happening when trying to run any program on Tegra/GK20A: Unhandled fault: external abort on non-linefetch (0x1008) at 0xf0036010 ... (nouveau_bo_rd32) from [<c0357d00>] (nouveau_fence_update+0x5c/0x80)
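A minimal sketch of the approach being discussed, assuming the old-style TTM placement flags (TTM_PL_FLAG_*) of that kernel era; the helper name is hypothetical, and the actual RFC changed nouveau's BO placement code directly.

#include <linux/kernel.h>
#include <drm/ttm/ttm_placement.h>

/* Hypothetical helper: choose placement flags for a VRAM BO.  Write-cached
 * CPU mappings of VRAM trigger external aborts on Tegra/GK20A, so fall back
 * to an uncached mapping on ARM. */
static u32 vram_placement_flags(void)
{
	u32 flags = TTM_PL_FLAG_VRAM;

	if (IS_ENABLED(CONFIG_ARM) || IS_ENABLED(CONFIG_ARM64))
		flags |= TTM_PL_FLAG_UNCACHED;	/* avoid external aborts */
	else
		flags |= TTM_PL_FLAG_WC;	/* write-combined elsewhere */

	return flags;
}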
2020 Feb 18
2
[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses
On 18.02.20 at 19:28, Thomas Zimmermann wrote: > Hi > > On 18.02.20 at 19:23, Christian König wrote: >> On 18.02.20 at 19:16, Thomas Zimmermann wrote: >>> Hi >>> >>> On 18.02.20 at 18:13, Nirmoy wrote: >>>> On 2/18/20 1:44 PM, Christian König wrote: >>>>> On 18.02.20 at 13:40, Thomas Zimmermann wrote:
2020 Feb 18
0
[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses
On 18.02.20 at 18:13, Nirmoy wrote: > > On 2/18/20 1:44 PM, Christian König wrote: >> On 18.02.20 at 13:40, Thomas Zimmermann wrote: >>> Hi >>> >>> On 17.02.20 at 16:04, Nirmoy Das wrote: >>>> GPU address handling is device specific and should be handled by its >>>> device >>>> driver. >>>> >>>>
2020 Feb 18
0
[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses
Hi On 18.02.20 at 18:13, Nirmoy wrote: > > On 2/18/20 1:44 PM, Christian König wrote: >> On 18.02.20 at 13:40, Thomas Zimmermann wrote: >>> Hi >>> >>> On 17.02.20 at 16:04, Nirmoy Das wrote: >>>> GPU address handling is device specific and should be handled by its >>>> device >>>> driver. >>>>
2020 Feb 18
0
[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses
Hi On 18.02.20 at 19:23, Christian König wrote: > On 18.02.20 at 19:16, Thomas Zimmermann wrote: >> Hi >> >> On 18.02.20 at 18:13, Nirmoy wrote: >>> On 2/18/20 1:44 PM, Christian König wrote: >>>> On 18.02.20 at 13:40, Thomas Zimmermann wrote: >>>>> Hi >>>>> >>>>> On 17.02.20 at 16:04, Nirmoy Das wrote:
2020 Feb 18
0
[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses
On Tue, Feb 18, 2020 at 07:37:44PM +0100, Christian König wrote: > On 18.02.20 at 19:28, Thomas Zimmermann wrote: > > Hi > > > > On 18.02.20 at 19:23, Christian König wrote: > > > On 18.02.20 at 19:16, Thomas Zimmermann wrote: > > > > Hi > > > > > > > > On 18.02.20 at 18:13, Nirmoy wrote: > > > > > On 2/18/20
2009 Aug 19
1
[PATCH] drm/nouveau: Add a MM for mappable VRAM that isn't usable as scanout.
Dynamically resizing the framebuffer on nv04 was like playing Russian roulette (and it often happened gratuitously) because the hardware seems unable to scan out from buffers above 16MB. This patch splits the mappable VRAM into two chunks when that's the case, and makes the higher one usable as well when applicable. Signed-off-by: Francisco Jerez <currojerez at riseup.net> ---
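A rough illustration of the split described above, not the patch itself: manage the mappable VRAM as two separate ranges, with only the low 16 MiB eligible for scanout. drm_mm is used here purely for illustration; the original patch worked with nouveau's own memory manager of the time.

#include <linux/kernel.h>
#include <drm/drm_mm.h>

#define NV04_SCANOUT_LIMIT	(16ULL << 20)	/* nv04 cannot scan out above 16 MiB */

/* Hypothetical helper: split mappable VRAM into a scanout-capable range and
 * a non-scanout range when the card has more than 16 MiB. */
static void split_mappable_vram(struct drm_mm *scanout_mm,
				struct drm_mm *nonscanout_mm,
				u64 mappable_size)
{
	u64 lo = min_t(u64, mappable_size, NV04_SCANOUT_LIMIT);

	drm_mm_init(scanout_mm, 0, lo);
	if (mappable_size > lo)
		drm_mm_init(nonscanout_mm, lo, mappable_size - lo);
}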
2014 May 19
0
[RFC] drm/nouveau: disable caching for VRAM BOs on ARM
On Monday, 19.05.2014, 18:46 +0900, Alexandre Courbot wrote: > This patch is not meant to be merged, but rather to try and understand > why this is needed and what a more suitable solution could be. > > Allowing BOs to be write-cached results in the following happening when > trying to run any program on Tegra/GK20A: > > Unhandled fault: external abort on non-linefetch
2014 Jun 27
5
[PATCH 1/2] drm/nouveau/bar: add noncached ioremap property
Some BARs (like GK20A's) do not support being ioremapped write-combined. Add a boolean property to the BAR structure and handle that case in the Nouveau BO implementation. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drivers/gpu/drm/nouveau/core/include/subdev/bar.h | 3 +++ drivers/gpu/drm/nouveau/nouveau_bo.c | 17 ++++++++++++----- 2 files changed, 15
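A minimal sketch of the idea in the description: a boolean on the BAR structure selects between a write-combined and a plain (uncached) ioremap. The struct and field names below are stand-ins, not necessarily what the patch adds.

#include <linux/io.h>
#include <linux/types.h>

struct bar_info {
	bool iomap_uncached;	/* true for BARs (e.g. GK20A's) that cannot be mapped WC */
};

static void __iomem *bar_map(struct bar_info *bar,
			     resource_size_t start, resource_size_t size)
{
	return bar->iomap_uncached ? ioremap(start, size)
				   : ioremap_wc(start, size);
}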
2014 Jun 27
3
[PATCH v3 0/2] drm: nouveau: memory coherency for ARM
v2 was doing some pretty nasty things with the DMA API, so I took a different approach for this v3. As suggested, this version uses ttm_dma_populate() to populate BOs. The reason for doing this was that it would entitle us to use the DMA sync functions, but since the memory returned is already coherent anyway, we do not even need to call these functions anymore. So this series has turned into
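For illustration, a populate/unpopulate pair built on TTM's DMA page allocator, as the cover letter describes, might look roughly like this; ttm_dma_populate() and ttm_dma_unpopulate() are the helpers of that kernel era and their exact signatures have changed since, so treat this only as a sketch.

#include <drm/ttm/ttm_page_alloc.h>

/* Pages come from the coherent DMA allocator, so no dma_sync_*() calls are
 * needed around subsequent CPU or GPU accesses. */
static int bo_populate_coherent(struct ttm_dma_tt *ttm_dma, struct device *dev)
{
	return ttm_dma_populate(ttm_dma, dev);
}

static void bo_unpopulate_coherent(struct ttm_dma_tt *ttm_dma, struct device *dev)
{
	ttm_dma_unpopulate(ttm_dma, dev);
}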
2019 Apr 09
0
[PATCH 13/15] drm/vboxvideo: Convert vboxvideo driver to Simple TTM
Hi, On 08-04-19 11:21, Thomas Zimmermann wrote: > Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> Patch looks good to me (although perhaps it needs a commit msg): Reviewed-by: Hans de Goede <hdegoede at redhat.com> Regards, Hans > --- > drivers/gpu/drm/vboxvideo/Kconfig | 1 + > drivers/gpu/drm/vboxvideo/vbox_drv.h | 6 +- >
2019 Apr 24
0
[PATCH v2 07/17] drm/ast: Convert AST driver to VRAM MM
The data structure |struct drm_vram_mm| and its helpers replace ast's TTM-based memory manager. It's the same implementation, except for the type names. v2: * implement ast_mmap() with drm_vram_mm_mmap() Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de> --- drivers/gpu/drm/ast/Kconfig | 1 + drivers/gpu/drm/ast/ast_drv.h | 12 +--- drivers/gpu/drm/ast/ast_main.c |
2019 May 06
0
[PATCH v4 12/19] drm/bochs: Convert bochs driver to VRAM MM
The data structure |struct drm_vram_mm| and its helpers replace bochs' TTM-based memory manager. It's the same implementation, except for the type names. v4: * don't select DRM_TTM or DRM_VRAM_MM_HELPER v3: * use drm_gem_vram_mm_funcs * convert driver to drm_device-based instance v2: * implement bochs_mmap() with drm_vram_mm_mmap() Signed-off-by: Thomas Zimmermann <tzimmermann
2014 Jun 09
2
[PATCH 4/4] drm/nouveau: introduce CPU cache flushing macro
On Mon, May 19, 2014 at 6:22 PM, Lucas Stach <l.stach at pengutronix.de> wrote: > On Monday, 19.05.2014, 11:02 +0200, Thierry Reding wrote: >> On Mon, May 19, 2014 at 04:10:58PM +0900, Alexandre Courbot wrote: >> > Some architectures (e.g. ARM) need the CPU buffers to be explicitly >> > flushed for a memory write to take effect. Not doing so results in
2019 Apr 24
0
[PATCH v2 05/17] drm: Add VRAM MM, a simple memory manager for dedicated VRAM
The VRAM MM memory manager is a helper library that manages dedicated video memory of simple framebuffer devices. It is supposed to be used with struct drm_gem_vram_object, but does not depend on it. The implementation is based on the respective code from ast, bochs, and mgag200. These drivers share the exact same implementation except for type names. The helpers are currently built with TTM.
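A sketch of how a simple framebuffer driver might instantiate the helper at load time; drm_vram_helper_alloc_mm() is named after the helper in this series, but its exact signature has varied between kernel versions, so this is illustrative only.

#include <drm/drm_gem_vram_helper.h>
#include <linux/err.h>
#include <linux/pci.h>

/* Illustrative init: hand the device's VRAM BAR to the VRAM MM helper. */
static int example_vram_mm_init(struct drm_device *dev, struct pci_dev *pdev)
{
	struct drm_vram_mm *vmm;

	vmm = drm_vram_helper_alloc_mm(dev,
				       pci_resource_start(pdev, 0),	/* VRAM base */
				       pci_resource_len(pdev, 0));	/* VRAM size */
	return PTR_ERR_OR_ZERO(vmm);
}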
2014 Mar 26
2
[PATCH 00/12] drm/nouveau: support for GK20A, cont'd
Hi Lucas, On Mon, Mar 24, 2014 at 10:19 PM, Lucas Stach <l.stach at pengutronix.de> wrote: > Hi Alexandre, > > On Monday, 24.03.2014, 17:42 +0900, Alexandre Courbot wrote: >> Hi everyone, > [...] >> >> A few lines of hacks (not included here) are still needed to deal with cached >> mappings triggering external aborts and CPU/GPU memory coherency
2019 Apr 08
1
selftest, help with a single test
On 4/8/2019 12:49 AM, Manfred wrote: >> Hi, >> >> Yes, you're right, the problems are due to the selftest environment >> failing to start up. In this case, you could just reproduce the same >> problem with: >> SELFTEST_TESTENV=s4member:local make testenv > > This actually reveals something: > [user at s4member samba]$ ping