Displaying 14 results from an estimated 14 matches for "lpg_shift".
2015 Apr 17
3
[PATCH 4/6] drm: enable big page mapping for small pages when IOMMU is available
...u_bo.c
> @@ -221,6 +221,11 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
>         if (drm->client.vm) {
>                 if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
>                         nvbo->page_shift = drm->client.vm->mmu->lpg_shift;
> +
> +               if ((flags & TTM_PL_FLAG_TT) &&
> +                               drm->client.vm->mmu->iommu_capable &&
> +                               (size % (1 << drm->client.vm->mmu->lpg_shift)) == 0)
> +...
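The hunk is cut off above, but the rule it starts to spell out is easy to sketch: a TTM_PL_TT buffer becomes a candidate for big pages when the MMU sits behind an IOMMU and the size divides evenly into big pages. A minimal illustration, borrowing only the iommu_capable flag and lpg_shift field visible in the quoted patch:

#include <linux/types.h>

/* Sketch only, not the patch's code: eligibility check for big-page
 * mapping of a TTM_PL_TT buffer. */
static bool can_use_big_pages(u64 size, unsigned lpg_shift, bool iommu_capable)
{
	u64 big_page = 1ULL << lpg_shift;	/* e.g. 128 KiB when lpg_shift == 17 */

	return iommu_capable && (size % big_page) == 0;
}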
2015 Apr 16
15
[PATCH 0/6] map big page by platform IOMMU
Hi,
Generally, imported buffers with memory type TTM_PL_TT are mapped as
small pages, probably due to the lack of big page allocation. But a
platform device which also uses memory type TTM_PL_TT, like GK20A, can
*allocate* big pages through the IOMMU hardware inside the SoC. This is an
attempt to map the imported buffers as big pages in the GMMU by the
platform IOMMU. With some preparation work to
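A rough sketch of how a SoC driver can look for such a platform IOMMU with the stock Linux IOMMU API; this is generic kernel API usage, not code from the series:

#include <linux/iommu.h>
#include <linux/platform_device.h>

/* Sketch only: check for an IOMMU on the platform bus and attach a domain
 * for the device, so scattered small pages can later be remapped to a
 * contiguous range of IOMMU addresses. */
static struct iommu_domain *probe_platform_iommu(struct device *dev)
{
	struct iommu_domain *domain;

	if (!iommu_present(&platform_bus_type))
		return NULL;			/* no IOMMU behind the platform bus */

	domain = iommu_domain_alloc(&platform_bus_type);
	if (!domain)
		return NULL;

	if (iommu_attach_device(domain, dev)) {
		iommu_domain_free(domain);
		return NULL;
	}
	return domain;	/* callers can now iommu_map() into this domain */
}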
2015 Apr 16
2
[PATCH 6/6] mmu: gk20a: implement IOMMU mapping for big pages
...void **priv)
> +{
> +       struct nvkm_vm *vm = vma->vm;
> +       struct nvkm_mmu *mmu = vm->mmu;
> +       struct nvkm_mm_node *node;
> +       struct nouveau_platform_device *plat;
> +       struct gk20a_mmu_iommu_mapping *p;
> +       int npages = 1 << (mmu->lpg_shift - mmu->spg_shift);
> +       int i, ret;
> +       u64 addr;
> +
> +       plat = nv_device_to_platform(nv_device(&mmu->base));
> +
> +       *priv = kzalloc(sizeof(struct gk20a_mmu_iommu_mapping), GFP_KERNEL);
> +       if (!*priv)
> +               return;
> +...
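The excerpt stops right after the allocation, but the idea it sets up is that one big GMMU page is backed by 1 << (lpg_shift - spg_shift) small pages remapped contiguously by the IOMMU. A sketch of that step using the generic IOMMU API; illustration only, not the actual gk20a code:

#include <linux/iommu.h>

/* Sketch only: back one big GMMU page by mapping its constituent small
 * pages at consecutive IOMMU addresses.  With GK20A-style shifts
 * (spg_shift == 12, lpg_shift == 17) that is 32 small pages per big page. */
static int map_big_page(struct iommu_domain *domain, unsigned long iova,
			phys_addr_t *pages, int spg_shift, int lpg_shift)
{
	int npages = 1 << (lpg_shift - spg_shift);
	size_t spg_size = 1UL << spg_shift;
	int i, ret;

	for (i = 0; i < npages; i++) {
		ret = iommu_map(domain, iova + i * spg_size, pages[i],
				spg_size, IOMMU_READ | IOMMU_WRITE);
		if (ret)
			goto undo;
	}
	return 0;

undo:
	if (i)
		iommu_unmap(domain, iova, i * spg_size);	/* roll back partial mapping */
	return ret;
}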
2013 Aug 11
2
Fixing nouveau for >4k PAGE_SIZE
...Now, to do that, I need a better understanding of the various things
in there since I'm not familiar with nouveau at all. Here is what I think
I've figured out, along with a few questions; it would be awesome if you
could answer them so I can have a shot at fixing it all :-)
 - There are spg_shift and lpg_shift in the backend vmm. My understanding
is that those correspond to the supported small and large page shifts,
respectively, in the card's MMU, correct? On nv41 they are both 12.
 - vma->node->type indicates the desired page shift for a given vma
object we are trying to map. It may or may not matc...
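The root of the thread is the mismatch between the CPU page size and the GPU's page sizes; a tiny standalone illustration of the arithmetic, with values chosen only as an example (64 KiB CPU pages, nv50-style 64 KiB large pages):

#include <stdio.h>

int main(void)
{
	unsigned page_shift = 16;	/* 64 KiB CPU pages, e.g. a ppc64 config */
	unsigned spg_shift  = 12;	/* 4 KiB GPU small pages */
	unsigned lpg_shift  = 16;	/* 64 KiB GPU large pages, as on nv50 */

	/* One CPU page has to be described by several small-page PTEs... */
	printf("%u GPU PTEs per CPU page\n", 1u << (page_shift - spg_shift));
	/* ...or by exactly one large-page PTE when the sizes line up. */
	printf("%u large page(s) per CPU page\n", 1u << (page_shift - lpg_shift));
	return 0;
}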
2013 Aug 11
2
Fixing nouveau for >4k PAGE_SIZE
...gpu/drm/nouveau/nouveau_bo.c
@@ -226,7 +226,7 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
 	nvbo->page_shift = 12;
 	if (drm->client.base.vm) {
 		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
-			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
+			nvbo->page_shift = lpg_shift;
 	}
 
 	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index ca5492a..494cf88 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/...
2013 Nov 29
2
Fixing nouveau for >4k PAGE_SIZE
...ndup(*size, PAGE_SIZE);
 }
 
@@ -221,7 +221,7 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
 	nvbo->page_shift = 12;
 	if (drm->client.base.vm) {
 		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
-			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
+			nvbo->page_shift = lpg_shift;
 	}
 
 	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index 0843ebc..f255ff8 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/...
2013 Aug 11
2
Fixing nouveau for >4k PAGE_SIZE
...226,7 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
> >  	nvbo->page_shift = 12;
> >  	if (drm->client.base.vm) {
> >  		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
> > -			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
> > +			nvbo->page_shift = lpg_shift;
> >  	}
> >  
> >  	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
> Hm.. I don't know if it will be an issue here. But I'm concerned about the cases where nouveau_vm can end up unaligned.
> This will not b...
2013 Aug 11
2
Fixing nouveau for >4k PAGE_SIZE
...vs. large
> pages (in the card MMU) (though I assume big is always 0 on nv40 unless I
> missed something; I want to make sure I'm not breaking everything
> else...).
>
> Thus I assume that each "pte" in a "big" page table maps a page size
> of 1 << vmm->lpg_shift, is that correct?
Correct, nv50+ are the only ones that support large pages.
> vmm->pgt_bits is always the same, however, so I assume that PDEs always
> map the same amount of space, but regions for large pages just have
> fewer PTEs, which seems to correspond to what the code does her...
2013 Aug 11
0
Fixing nouveau for >4k PAGE_SIZE
...o.c
> @@ -226,7 +226,7 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
>  	nvbo->page_shift = 12;
>  	if (drm->client.base.vm) {
>  		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
> -			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
> +			nvbo->page_shift = lpg_shift;
>  	}
>  
>  	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
Hm.. I don't know if it will be an issue here. But I'm concerned about the cases where nouveau_vm can end up unaligned.
This will not be an issue for the bar mapping...
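A compact way to state the alignment worry, purely as an illustration and not taken from any of the patches:

#include <linux/types.h>

/* Sketch only: an offset destined for big pages must sit on a big-page
 * boundary, otherwise the PTE index arithmetic in the VM code breaks. */
static bool vma_offset_aligned(u64 offset, unsigned page_shift)
{
	return (offset & ((1ULL << page_shift) - 1)) == 0;
}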
2013 Dec 11
0
Fixing nouveau for >4k PAGE_SIZE
...truct drm_device *dev, int size, int align,
>         nvbo->page_shift = 12;
>         if (drm->client.base.vm) {
>                 if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
> -                       nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
> +                       nvbo->page_shift = lpg_shift;
>         }
Ack both hunks.
>
>         nouveau_bo_fixup_align(nvbo, flags, &align, &size);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
> index 0843ebc..f255ff...
2013 Nov 12
0
[PATCH 6/7] drm/nouveau: more paranoia in nouveau_bo_fixup_align
...ile_mode);
 		}
 	} else {
 		*size = roundup(*size, (1 << nvbo->page_shift));
@@ -228,8 +224,14 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
 		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
 			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
 	}
-
 	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
+	if (size <= 0) {
+		nv_warn(drm, "invalid size %x after setting alignment %x\n",
+			size, align);
+		kfree(nvbo);
+		return -EINVAL;
+	}
+
 	nvbo->bo.mem.num_pages = size >> PAGE_SHIFT;
 	nouveau_bo_placem...
2013 Aug 29
0
Fixing nouveau for >4k PAGE_SIZE
...v, int size, int align,
>> >     nvbo->page_shift = 12;
>> >     if (drm->client.base.vm) {
>> >             if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
>> > -                   nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
>> > +                   nvbo->page_shift = lpg_shift;
>> >     }
>> >
>> >     nouveau_bo_fixup_align(nvbo, flags, &align, &size);
>> Hm.. I don't know if it will be an issue here. But I'm concerned about the cases where nouveau_vm can...
2013 Aug 11
0
Fixing nouveau for >4k PAGE_SIZE
...eparate page tables for small vs. large
pages (in the card MMU) (though I assume big is always 0 on nv40 unless I
missed something; I want to make sure I'm not breaking everything
else...).
Thus I assume that each "pte" in a "big" page table maps a page size
of 1 << vmm->lpg_shift, is that correct?
vmm->pgt_bits is always the same, however, so I assume that PDEs always
map the same amount of space, but regions for large pages just have
fewer PTEs, which seems to correspond to what the code does here:
	u32 pte  = (offset & ((1 << vmm->pgt_bits) - 1)) >>...
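A small standalone rendering of that index calculation, with made-up values, just to show that a PDE spans the same range either way while a big-page PGT simply holds fewer PTEs:

#include <stdio.h>

int main(void)
{
	unsigned pgt_bits  = 17;			/* assumed example value */
	unsigned spg_shift = 12, lpg_shift = 16;	/* 4 KiB / 64 KiB pages */
	unsigned bits   = lpg_shift - spg_shift;	/* extra shift for big pages */
	unsigned offset = 0x12345;			/* offset in small-page units */

	unsigned pde = offset >> pgt_bits;
	unsigned pte = (offset & ((1u << pgt_bits) - 1)) >> bits;

	printf("pde=%u pte=%u (big-page PGT holds %u entries)\n",
	       pde, pte, 1u << (pgt_bits - bits));
	return 0;
}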
2013 Nov 12
6
[PATCH 1/7] drm/nouveau: fix m2mf copy to tiled gart
From: Maarten Lankhorst <maarten.lankhorst at canonical.com>
Commit de7b7d59d54852c introduced tiled GART, but a linear copy is
still performed. This may result in errors on eviction; fix it by
checking tiling from memtype.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com>
Cc: stable at vger.kernel.org #3.10+
---
 drivers/gpu/drm/nouveau/nouveau_bo.c | 33