Displaying 11 results from an estimated 11 matches for "dma_nr".
2019 Jul 29
0
[PATCH 2/9] nouveau: reset dma_nr in nouveau_dmem_migrate_alloc_and_copy
When we start a new batch of dma_map operations, we need to reset dma_nr,
as we start filling a newly allocated array.
Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 38416798abd4..e...
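A minimal sketch of the pattern the commit message above describes, with the struct and field names (nouveau_migrate, dma, dma_nr) borrowed from the surrounding patches; the allocation is only illustrative, the point is resetting the fill index:

#include <linux/slab.h>

/* Hedged sketch: when a fresh dma_addr_t array is allocated for the next
 * batch of mappings, the fill index must be reset so that later unmap
 * loops do not walk stale entries. */
static int example_alloc_dma_batch(struct nouveau_migrate *migrate,
				   unsigned long npages)
{
	migrate->dma = kmalloc_array(npages, sizeof(*migrate->dma),
				     GFP_KERNEL);
	if (!migrate->dma)
		return -ENOMEM;
	migrate->dma_nr = 0;	/* the fix: start filling the new array at 0 */
	return 0;
}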
2019 Jul 29
0
[PATCH 6/9] nouveau: simplify nouveau_dmem_migrate_vma
...struct nouveau_dmem *page_to_dmem(struct page *page)
 	return container_of(page->pgmap, struct nouveau_dmem, pagemap);
 }
 
-struct nouveau_migrate {
-	struct vm_area_struct *vma;
-	struct nouveau_drm *drm;
-	struct nouveau_fence *fence;
-	unsigned long npages;
-	dma_addr_t *dma;
-	unsigned long dma_nr;
-};
-
 static unsigned long nouveau_dmem_page_addr(struct page *page)
 {
 	struct nouveau_dmem_chunk *chunk = page->zone_device_data;
@@ -569,131 +558,67 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 	drm->dmem = NULL;
 }
 
-static void
-nouveau_dmem_migrate_alloc_and_copy(struct vm_area_st...
2019 Jul 03
0
[PATCH] nouveau: remove bogus uses of DMA_ATTR_SKIP_CPU_SYNC
...e, 0, PAGE_SIZE,
+				     PCI_DMA_BIDIRECTIONAL);
 		if (dma_mapping_error(dev, fault->dma[fault->npages])) {
 			dst_pfns[i] = MIGRATE_PFN_ERROR;
 			__free_page(dpage);
@@ -706,9 +705,8 @@ nouveau_dmem_migrate_alloc_and_copy(struct vm_area_struct *vma,
 		}
 
 		migrate->dma[migrate->dma_nr] =
-			dma_map_page_attrs(dev, spage, 0, PAGE_SIZE,
-					   PCI_DMA_BIDIRECTIONAL,
-					   DMA_ATTR_SKIP_CPU_SYNC);
+			dma_map_page(dev, spage, 0, PAGE_SIZE,
+				     PCI_DMA_BIDIRECTIONAL);
 		if (dma_mapping_error(dev, migrate->dma[migrate->dma_nr])) {
 			nouveau_dmem_page_free_locked(...
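For context, a hedged sketch contrasting the two mapping calls the diff above switches between; a plain dma_map_page() performs the CPU cache maintenance that DMA_ATTR_SKIP_CPU_SYNC suppressed, which is why the attribute was bogus here. The helper name and parameters are placeholders, and DMA_BIDIRECTIONAL is the modern spelling of the PCI_DMA_BIDIRECTIONAL seen in the diff:

#include <linux/dma-mapping.h>

/* Illustration only: maps one source page and records the bus address,
 * mirroring the migrate->dma[] fill visible in the diff. */
static int example_map_source_page(struct device *dev, struct page *spage,
				   dma_addr_t *slot)
{
	/* before the patch (cache maintenance explicitly skipped):
	 *   *slot = dma_map_page_attrs(dev, spage, 0, PAGE_SIZE,
	 *                              DMA_BIDIRECTIONAL,
	 *                              DMA_ATTR_SKIP_CPU_SYNC);
	 */
	*slot = dma_map_page(dev, spage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, *slot))
		return -EIO;
	return 0;
}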
2019 Jul 29
0
[PATCH 4/9] nouveau: factor out dmem fence completion
...igrate->fence, true, false);
-		nouveau_fence_unref(&migrate->fence);
-	} else {
-		/*
-		 * FIXME wait for channel to be IDLE before finalizing
-		 * the hmem object below (nouveau_migrate_hmem_fini()) ?
-		 */
-	}
+	nouveau_dmem_fence_done(&migrate->fence);
 
 	while (migrate->dma_nr--) {
 		dma_unmap_page(drm->dev->dev, migrate->dma[migrate->dma_nr],
-- 
2.20.1
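The nouveau_dmem_fence_done() helper itself is cut off in the snippet above; a hedged reconstruction from the open-coded wait/unref pair it replaces (an illustration, not the actual patch body):

/* Sketch of the factored-out completion helper, built from the
 * nouveau_fence_wait()/nouveau_fence_unref() pair removed above. */
static void example_dmem_fence_done(struct nouveau_fence **fence)
{
	if (*fence) {
		nouveau_fence_wait(*fence, true, false);
		nouveau_fence_unref(fence);
	}
}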
2019 Aug 08
10
turn hmm migrate_vma upside down v2
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which starts revamping the
migrate_vma functionality.  The prime idea is to export three slightly
lower level functions and thus avoid the need for migrate_vma_ops
callbacks.
Diffstat:
    5 files changed, 281 insertions(+), 607 deletions(-)
A git tree is also available at:
    git://git.infradead.org/users/hch/misc.git
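The cover letter snippet does not name the three lower-level functions; as a hedged sketch of the driver-side flow such an export enables, the sequence below uses migrate_vma_setup()/migrate_vma_pages()/migrate_vma_finalize(), the names the series eventually used upstream, purely to illustrate replacing migrate_vma_ops callbacks:

#include <linux/migrate.h>

/* Hedged sketch: the driver drives the migration directly instead of
 * supplying migrate_vma_ops callbacks. */
static int example_migrate_range(struct vm_area_struct *vma,
				 unsigned long start, unsigned long end,
				 unsigned long *src, unsigned long *dst)
{
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src,
		.dst	= dst,
	};
	int ret;

	ret = migrate_vma_setup(&args);	/* collect and isolate source pages */
	if (ret)
		return ret;

	/* ... driver allocates destination pages, copies data, fills dst ... */

	migrate_vma_pages(&args);	/* install the destination pages */
	migrate_vma_finalize(&args);	/* restore CPU access and drop refs */
	return 0;
}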
2019 Jul 29
24
turn the hmm migrate_vma upside down
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which starts revamping the
migrate_vma functionality.  The prime idea is to export three slightly
lower level functions and thus avoid the need for migrate_vma_ops
callbacks.
Diffstat:
    4 files changed, 285 insertions(+), 602 deletions(-)
A git tree is also available at:
    git://git.infradead.org/users/hch/misc.git
2019 Aug 14
20
turn hmm migrate_vma upside down v3
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which starts revamping the
migrate_vma functionality.  The prime idea is to export three slightly
lower level functions and thus avoid the need for migrate_vma_ops
callbacks.
Diffstat:
    7 files changed, 282 insertions(+), 614 deletions(-)
A git tree is also available at:
    git://git.infradead.org/users/hch/misc.git
2019 Jul 29
0
[PATCH 3/9] nouveau: factor out device memory address calculation
...deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e696157f771e..d469bc334438 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -102,6 +102,14 @@ struct nouveau_migrate {
 	unsigned long dma_nr;
 };
 
+static unsigned long nouveau_dmem_page_addr(struct page *page)
+{
+	struct nouveau_dmem_chunk *chunk = page->zone_device_data;
+	unsigned long idx = page_to_pfn(page) - chunk->pfn_first;
+
+	return (idx << PAGE_SHIFT) + chunk->bo->bo.offset;
+}
+
 static void nouveau_dmem_...
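A short, commented restatement of the helper added above, to spell out the address calculation: the page's index within its device-memory chunk is converted to a byte offset and added to the chunk's buffer-object offset (field names are taken straight from the diff):

/* Hedged restatement of nouveau_dmem_page_addr() from the diff above. */
static unsigned long example_dmem_page_addr(struct page *page)
{
	struct nouveau_dmem_chunk *chunk = page->zone_device_data;
	/* this page's index within the chunk, in units of pages */
	unsigned long idx = page_to_pfn(page) - chunk->pfn_first;

	/* byte offset inside the chunk plus the chunk's VRAM BO offset */
	return (idx << PAGE_SHIFT) + chunk->bo->bo.offset;
}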
2019 Jun 17
34
dev_pagemap related cleanups v2
Hi Dan, Jérôme and Jason,
below is a series that cleans up the dev_pagemap interface so that
it is more easily usable. This removes the need to wrap it in hmm
and thus allows a lot of code to be killed.
Note: this series is on top of the rdma/hmm branch + the dev_pagemap
release fix series from Dan that went into 5.2-rc5.
Git tree:
    git://git.infradead.org/users/hch/misc.git
2019 Jun 13
57
dev_pagemap related cleanups
Hi Dan, Jérôme and Jason,
below is a series that cleans up the dev_pagemap interface so that
it is more easily usable. This removes the need to wrap it in hmm
and thus allows a lot of code to be killed.
Diffstat:
 22 files changed, 245 insertions(+), 802 deletions(-)
Git tree:
    git://git.infradead.org/users/hch/misc.git hmm-devmem-cleanup
Gitweb:
   
2019 Jun 26
41
dev_pagemap related cleanups v3
Hi Dan, Jérôme and Jason,
below is a series that cleans up the dev_pagemap interface so that
it is more easily usable. This removes the need to wrap it in hmm
and thus allows a lot of code to be killed.
Note: this series is on top of Linux 5.2-rc5 and has some minor
conflicts with the hmm tree that are easy to resolve.
Diffstat summary:
 32 files changed, 361 insertions(+), 1012 deletions(-)
Git