Displaying 3 results from an estimated 3 matches for "gk20a_vm_map_iommu".
2015 Apr 20 (3 messages)
[PATCH 3/6] mmu: map small pages into big page(s) by IOMMU if possible
On Sat, Apr 18, 2015 at 12:37 AM, Terje Bergstrom <tbergstrom at nvidia.com> wrote:
>
> On 04/17/2015 02:11 AM, Alexandre Courbot wrote:
>>
>> Tracking the PDE and PTE of each memory chunk can probably be avoided
>> if you change your unmapping strategy. Currently you are going through
>> the list of nvkm_vm_bp_list, but you know your PDE and PTE are always
2015 Apr 16 (2 messages)
[PATCH 6/6] mmu: gk20a: implement IOMMU mapping for big pages
...->offset;
> +		phys |= (u64)tag << (32 + 12);
> +		ltc->tags_clear(ltc, tag, 1);
> +	}
> +
> +	nv_wo32(pgt, pte + 0, lower_32_bits(phys));
> +	nv_wo32(pgt, pte + 4, upper_32_bits(phys));
> +}
> +
> +static void
> +gk20a_vm_map_iommu(struct nvkm_vma *vma, struct nvkm_gpuobj *pgt,
> +		   struct nvkm_mem *mem, u32 pte, dma_addr_t *list,
> +		   void **priv)
> +{
> +	struct nvkm_vm *vm = vma->vm;
> +	struct nvkm_mmu *mmu = vm->mmu;
> +	struct nvkm_mm_node *node;
> +...
2015 Apr 16
15
[PATCH 0/6] map big page by platform IOMMU
Hi,
Generally, imported buffers with memory type TTM_PL_TT are mapped as small
pages, probably due to the lack of big page allocation. But a platform device
which also uses memory type TTM_PL_TT, like GK20A, can *allocate* big pages
through the IOMMU hardware inside the SoC. This is an attempt to map the
imported buffers as big pages in the GMMU via the platform IOMMU. With
some preparation work to