
Displaying 20 results from an estimated 1000 matches similar to: "[PATCH AUTOSEL 5.5 074/542] drm/nouveau/nouveau: fix incorrect sizeof on args.src an args.dst"

2020 Feb 14 · 0 replies
[PATCH AUTOSEL 5.4 065/459] drm/nouveau/nouveau: fix incorrect sizeof on args.src an args.dst
From: Colin Ian King <colin.king at canonical.com> [ Upstream commit f42e4b337b327b1336c978c4b5174990a25f68a0 ] The sizeof is currently taken on args.src and args.dst but should be taken on *args.src and *args.dst. Fortunately these sizes happen to be the same, so it worked; however, this should be fixed, and the fix also cleans up static analysis warnings. Addresses-Coverity: ("sizeof not portable") …
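
This bug class is easy to demonstrate in plain C. The sketch below is illustrative only (the struct is a stand-in, not the nouveau args structure); it shows why sizeof on the pointer and on the pointee can coincide on LP64 and still be wrong:

#include <stdio.h>
#include <stdlib.h>

struct args {
    unsigned long *src;
    unsigned long *dst;
};

int main(void)
{
    struct args args;

    /* Buggy: sizeof(args.src) is the size of the pointer itself. */
    args.src = malloc(sizeof(args.src));

    /* Intended: sizeof(*args.dst) is the size of one element. On
     * LP64 both happen to be 8 bytes, which is why the bug was
     * harmless in practice but still flagged by static analysis. */
    args.dst = malloc(sizeof(*args.dst));

    printf("sizeof(args.src)  = %zu\n", sizeof(args.src));
    printf("sizeof(*args.src) = %zu\n", sizeof(*args.src));

    free(args.src);
    free(args.dst);
    return 0;
}
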
2019 Nov 29 · 0 replies
[PATCH] nouveau: fix incorrect sizeof on args.src an args.dst
From: Colin Ian King <colin.king at canonical.com> The sizeof is currently taken on args.src and args.dst but should be taken on *args.src and *args.dst. Fortunately these sizes happen to be the same, so it worked; however, this should be fixed, and the fix also cleans up static analysis warnings. Addresses-Coverity: ("sizeof not portable") Fixes: f268307ec7c7 ("nouveau: simplify nouveau_dmem_migrate_vma") …
2019 Jul 29 · 0 replies
[PATCH 6/9] nouveau: simplify nouveau_dmem_migrate_vma
Factor the main copy-page-to-VRAM routine out into a helper that acts on a single page and doesn't require the nouveau_dmem_migrate structure for argument passing. As an added benefit, the new version allocates the dma address array only once and reuses it for each subsequent chunk of work. Signed-off-by: Christoph Hellwig <hch at lst.de> ---
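
A condensed sketch of the refactoring pattern being described; the function names and surrounding structure are hypothetical stand-ins, not the actual nouveau symbols, and error unwinding of already-mapped pages is elided:

static int copy_one_page_to_vram(struct device *dev, struct page *spage,
                                 dma_addr_t *dma_addr)
{
    /* Map one source page for DMA; the hardware copy of this single
     * page would be issued here. */
    *dma_addr = dma_map_page(dev, spage, 0, PAGE_SIZE, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, *dma_addr))
        return -EIO;
    return 0;
}

static int migrate_pages_chunked(struct device *dev, struct page **pages,
                                 unsigned long npages)
{
    dma_addr_t *dma;   /* allocated once, reused for every chunk */
    unsigned long i;
    int ret = 0;

    dma = kmalloc_array(npages, sizeof(*dma), GFP_KERNEL);
    if (!dma)
        return -ENOMEM;
    for (i = 0; i < npages && !ret; i++)
        ret = copy_one_page_to_vram(dev, pages[i], &dma[i]);
    kfree(dma);
    return ret;
}
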
2019 Jul 29 · 0 replies
[PATCH 1/9] mm: turn migrate_vma upside down
There isn't any good reason to pass callbacks to migrate_vma. Instead we can just export the three steps done by this function to drivers and let them sequence the operation without callbacks. This removes a lot of boilerplate code as-is, and will allow the drivers to drastically improve code flow and error handling further on. Signed-off-by: Christoph Hellwig <hch at lst.de> ---
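
migrate_vma_setup(), migrate_vma_pages() and migrate_vma_finalize() are the three exported steps this series introduces. The condensed driver-side sketch below shows the callback-free sequence; the pfn arrays and the actual copy are elided:

static int driver_migrate_range(struct vm_area_struct *vma,
                                unsigned long start, unsigned long end,
                                unsigned long *src, unsigned long *dst)
{
    struct migrate_vma args = {
        .vma   = vma,
        .start = start,
        .end   = end,
        .src   = src,   /* caller-provided pfn arrays */
        .dst   = dst,
    };
    int ret;

    ret = migrate_vma_setup(&args);  /* step 1: collect and isolate pages */
    if (ret)
        return ret;

    /* step 2: the driver copies data for each migratable entry in
     * args.src and fills args.dst with the new pages (elided) */

    migrate_vma_pages(&args);        /* step 3a: remap to the new pages */
    migrate_vma_finalize(&args);     /* step 3b: unlock and put the pages */
    return 0;
}
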
2019 Aug 14 · 0 replies
[PATCH 01/10] mm: turn migrate_vma upside down
There isn't any good reason to pass callbacks to migrate_vma. Instead we can just export the three steps done by this function to drivers and let them sequence the operation without callbacks. This removes a lot of boilerplate code as-is, and will allow the drivers to drastically improve code flow and error handling further on. Signed-off-by: Christoph Hellwig <hch at lst.de>
2019 Jul 31 · 1 reply
[PATCH 1/9] mm: turn migrate_vma upside down
On 7/29/19 7:28 AM, Christoph Hellwig wrote: > There isn't any good reason to pass callbacks to migrate_vma. Instead > we can just export the three steps done by this function to drivers and > let them sequence the operation without callbacks. This removes a lot > of boilerplate code as-is, and will allow the drivers to drastically > improve code flow and error handling …
2020 Mar 16 · 0 replies
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
No driver has actually properly wired up and supported this feature. There is various code related to it in nouveau, but as far as I can tell it never actually got turned on, and the only changes since the initial commit are global cleanups. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 - drivers/gpu/drm/nouveau/nouveau_dmem.c | 37 …
2019 Jun 14 · 0 replies
[PATCH v2] drm/nouveau/dmem: missing mutex_lock in error path
In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before calling nouveau_dmem_chunk_alloc() as shown when CONFIG_PROVE_LOCKING is enabled: [ 1294.871933] ===================================== [ 1294.876656] WARNING: bad unlock balance detected! [ 1294.881375] 5.2.0-rc3+ #5 Not tainted [ 1294.885048] ------------------------------------- [ 1294.889773] test-malloc-vra/6299 is …
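
A condensed sketch of the locking pattern at issue (not the exact nouveau code; the list name is approximate): the mutex is legitimately dropped around the blocking chunk allocation, but the retry path must take it again before touching the protected state:

    mutex_lock(&drm->dmem->mutex);
    while (npages) {
        if (list_empty(&drm->dmem->chunk_empty)) {
            mutex_unlock(&drm->dmem->mutex);  /* drop across allocation */
            ret = nouveau_dmem_chunk_alloc(drm);
            if (ret)
                return ret;                   /* lock already released */
            mutex_lock(&drm->dmem->mutex);    /* the missing re-lock */
            continue;                         /* re-check under the lock */
        }
        /* ... carve pages out of the first empty chunk ... */
    }
    mutex_unlock(&drm->dmem->mutex);
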
2019 Jul 26 · 0 replies
[PATCH AUTOSEL 5.2 85/85] drm/nouveau/dmem: missing mutex_lock in error path
From: Ralph Campbell <rcampbell at nvidia.com> [ Upstream commit d304654bd79332ace9ac46b9a3d8de60acb15da3 ] In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before calling nouveau_dmem_chunk_alloc() as shown when CONFIG_PROVE_LOCKING is enabled: [ 1294.871933] ===================================== [ 1294.876656] WARNING: bad unlock balance detected! [ 1294.881375] …
2019 Aug 13 · 0 replies
[PATCH] nouveau/hmm: map pages after migration
On Wed, Aug 07, 2019 at 08:02:14AM -0700, Ralph Campbell wrote: > When memory is migrated to the GPU it is likely to be accessed by GPU > code soon afterwards. Instead of waiting for a GPU fault, map the > migrated memory into the GPU page tables with the same access permissions > as the source CPU page table entries. This preserves copy on write > semantics. …
2020 Jun 23 · 1 reply
[RESEND PATCH 2/3] nouveau: fix mixed normal and device private page migration
On 2020-06-22 16:38, Ralph Campbell wrote: > The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will > migrate memory in the given address range to device private memory. The > source pages might already have been migrated to device private memory. > In that case, the source struct page is not checked to see if it is > a device private page, and the GPU's physical address is computed incorrectly …
2019 Jun 14 · 1 reply
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
On 6/13/19 5:49 PM, John Hubbard wrote: > On 6/13/19 5:11 PM, Ralph Campbell wrote: >> In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before >> calling nouveau_dmem_chunk_alloc(). >> Reacquire the lock before continuing to the next page. >> >> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> >> --- >> >> I found …
2020 Jun 22 · 0 replies
[RESEND PATCH 2/3] nouveau: fix mixed normal and device private page migration
The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will migrate memory in the given address range to device private memory. The source pages might already have been migrated to device private memory. In that case, the source struct page is not checked to see if it is a device private page, and the GPU's physical address of local memory is computed incorrectly, leading to data …
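
A minimal sketch of the missing check, assuming a hypothetical helper nouveau_vram_offset_of() for the device-local address computation; the is_device_private_page() test is the point here:

static u64 migrate_src_addr(struct page *spage)
{
    /* The source of a migration may itself already be a device
     * private page; treating it as ordinary system RAM computes a
     * bogus physical address. Check before translating. */
    if (is_device_private_page(spage))
        return nouveau_vram_offset_of(spage);  /* hypothetical helper */
    return 0;  /* ordinary system page: caller DMA-maps it instead */
}
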
2020 Mar 03 · 2 replies
[PATCH v2] nouveau/hmm: map pages after migration
When memory is migrated to the GPU, it is likely to be accessed by GPU code soon afterwards. Instead of waiting for a GPU fault, map the migrated memory into the GPU page tables with the same access permissions as the source CPU page table entries. This preserves copy on write semantics. Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> Cc: Christoph Hellwig <hch at lst.de> …
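
The permission rule can be sketched in a couple of lines; gpu_map_page() and gpu_as are hypothetical stand-ins for the driver's page-table update, while pte_write() is the standard CPU-side test:

    /* Map with write permission only if the CPU PTE is writable, so a
     * page shared after fork() stays read-only on the GPU too, and a
     * GPU write still goes through the copy-on-write fault path. */
    bool writable = pte_write(pte);
    gpu_map_page(gpu_as, addr, page, writable);
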
2020 Mar 16 · 6 replies
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
On 3/16/20 10:52 AM, Christoph Hellwig wrote: > No driver has actually properly wired up and supported this feature. > There is various code related to it in nouveau, but as far as I can tell > it never actually got turned on, and the only > changes since the initial commit are global cleanups. This is not actually true. OpenCL 2.x does support SVM with nouveau and device private …
2020 Mar 16 · 0 replies
[PATCH 4/4] mm: check the device private page owner in hmm_range_fault
hmm_range_fault() will succeed for any kind of device private memory, even if it doesn't belong to the calling entity. While nouveau has some crude checks for that, they are broken because they assume nouveau is the only user of device private memory. Fix this by passing in an expected pgmap owner in the hmm_range_fault structure. Signed-off-by: Christoph Hellwig <hch at lst.de> Fixes: …
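
On the consumer side this amounts to one extra field when faulting a range. A minimal sketch, with the pfn array and notifier setup elided (the exact hmm_range_fault() signature varies across kernel versions):

    struct hmm_range range = {
        .start             = start,
        .end               = end,
        /* pfn array and mmu notifier setup elided */
        .dev_private_owner = drm,  /* must match pgmap->owner */
    };

    ret = hmm_range_fault(&range);  /* rejects foreign device private pages */
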
2020 Jul 20 · 0 replies
[PATCH v2 2/5] mm/migrate: add a direction parameter to migrate_vma
On 7/20/20 11:36 AM, Jason Gunthorpe wrote: > On Mon, Jul 13, 2020 at 10:21:46AM -0700, Ralph Campbell wrote: >> The src_owner field in struct migrate_vma is being used for two purposes, >> it implies the direction of the migration and it identifies device private >> pages owned by the caller. Split this into separate parameters so the >> src_owner field can be used just …
2020 Jul 20 · 2 replies
[PATCH v2 2/5] mm/migrate: add a direction parameter to migrate_vma
On Mon, Jul 13, 2020 at 10:21:46AM -0700, Ralph Campbell wrote: > The src_owner field in struct migrate_vma is being used for two purposes, > it implies the direction of the migration and it identifies device private > pages owned by the caller. Split this into separate parameters so the > src_owner field can be used just to identify device private pages owned > by the caller of …
2020 Mar 16 · 0 replies
[PATCH 1/2] mm: handle multiple owners of device private pages in migrate_vma
Add a new opaque owner field to struct dev_pagemap, which will allow the hmm and migrate_vma code to identify who owns ZONE_DEVICE memory, and refuse to work on mappings not owned by the calling entity. Signed-off-by: Christoph Hellwig <hch at lst.de> --- arch/powerpc/kvm/book3s_hv_uvmem.c | 4 ++++ drivers/gpu/drm/nouveau/nouveau_dmem.c | 3 +++ include/linux/memremap.h …
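
On the producer side, the driver stamps its ZONE_DEVICE region with an owner token when creating the pagemap. A minimal sketch, with drm standing in for whatever unique pointer the driver picks:

    pgmap->type  = MEMORY_DEVICE_PRIVATE;
    pgmap->owner = drm;  /* opaque token; only pointer equality matters */
    addr = devm_memremap_pages(dev, pgmap);
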
2020 Jul 06 · 0 replies
[PATCH 2/5] mm/migrate: add a direction parameter to migrate_vma
The src_owner field in struct migrate_vma is being used for two purposes: it implies the direction of the migration, and it identifies device private pages owned by the caller. Split this into separate parameters so the src_owner field can be used just to identify device private pages owned by the caller of migrate_vma_setup(). Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> ---
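
A sketch of the proposed split: direction and ownership become independent fields. The .dir value here is a hypothetical placeholder following the posted patch loosely; the interface that eventually landed upstream expresses the direction as MIGRATE_VMA_SELECT_* flags instead:

    struct migrate_vma args = {
        .vma         = vma,
        .start       = start,
        .end         = end,
        .pgmap_owner = drm,  /* only identifies our device private pages */
        .dir         = MIGRATE_VMA_FROM_SYSTEM,  /* hypothetical value */
    };

    ret = migrate_vma_setup(&args);
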