Displaying 3 results from an estimated 3 matches for "gk20a_mmu_iommu_map".
2015 Apr 16 (2 messages): [PATCH 6/6] mmu: gk20a: implement IOMMU mapping for big pages
> +#include <subdev/mmu.h>
> +
> +#ifdef __KERNEL__
> +#include <linux/iommu.h>
> +#include <nouveau_platform.h>
> +#endif
> +
> +#include "gf100.h"
> +
> +struct gk20a_mmu_priv {
> + struct nvkm_mmu base;
> +};
> +
> +struct gk20a_mmu_iommu_mapping {
> + struct nvkm_mm_node *node;
> + u64 iova;
> +};
> +
> +extern const u8 gf100_pte_storage_type_map[256];
> +
> +static void
> +gk20a_vm_map(struct nvkm_vma *vma, struct nvkm_gpuobj *pgt,
> + struct nvkm_mem *mem, u32 pte, u64 list)
> +...
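
The excerpt cuts off before the mapping logic itself, but the gk20a_mmu_iommu_mapping structure above (an nvkm_mm_node plus an IOVA) suggests that each big page keeps a record of its IOMMU-side allocation so it can be torn down later. Below is a rough, non-authoritative sketch of the core step, using only the generic Linux IOMMU API as it looked in this era (iommu_map() without the later gfp argument); the function name, the size constants and the error handling are illustrative assumptions, not code from the patch.

/*
 * Illustrative sketch only (not from the patch): map a set of
 * discontiguous 4 KiB pages to one contiguous IOVA range so that the
 * GMMU can point a single 128 KiB big-page PTE at it.  The caller is
 * assumed to have allocated the IOVA range and to hold a valid
 * iommu_domain for the GPU.
 */
#include <linux/iommu.h>
#include <linux/errno.h>

#define GK20A_BIG_PAGE_SIZE	(128 << 10)	/* 128 KiB big pages (assumed) */
#define GK20A_SMALL_PAGE_SIZE	(4 << 10)	/* 4 KiB small pages */

static int
gk20a_iommu_map_big_page(struct iommu_domain *domain, unsigned long iova,
			 const phys_addr_t *pages, unsigned int npages)
{
	unsigned int i;
	int ret;

	/* Only a run of small pages that fills a whole big page qualifies. */
	if (npages * GK20A_SMALL_PAGE_SIZE != GK20A_BIG_PAGE_SIZE)
		return -EINVAL;

	for (i = 0; i < npages; i++) {
		ret = iommu_map(domain, iova + i * GK20A_SMALL_PAGE_SIZE,
				pages[i], GK20A_SMALL_PAGE_SIZE,
				IOMMU_READ | IOMMU_WRITE);
		if (ret)
			goto unwind;
	}
	return 0;

unwind:
	/* Undo the partially built mapping before reporting the error. */
	iommu_unmap(domain, iova, i * GK20A_SMALL_PAGE_SIZE);
	return ret;
}

At teardown, the IOVA recorded in gk20a_mmu_iommu_mapping would presumably let the driver release the whole region with a single iommu_unmap() of GK20A_BIG_PAGE_SIZE.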
2015 Apr 20 (3 messages): [PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
On Sat, Apr 18, 2015 at 12:37 AM, Terje Bergstrom <tbergstrom at nvidia.com> wrote:
>
> On 04/17/2015 02:11 AM, Alexandre Courbot wrote:
>>
>> Tracking the PDE and PTE of each memory chunk can probably be avoided
>> if you change your unmapping strategy. Currently you are going through
>> the list of nvkm_vm_bp_list, but you know your PDE and PTE are always
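
The quote is truncated here, but the argument being made is that the page directory and page table indices do not need to be recorded per memory chunk, because they follow directly from the buffer's virtual address once the big-page shift is fixed. A rough illustration of that arithmetic follows; the helper name and the parameterized shifts are assumptions for illustration, not code from the series.

/*
 * Illustration of the argument above: with a known big-page shift and a
 * known amount of address space covered by one PDE, the indices can be
 * recomputed from the virtual address at unmap time instead of being
 * tracked in a per-chunk list.
 */
#include <linux/types.h>

static inline void
vaddr_to_indices(u64 vaddr, unsigned int pde_shift, unsigned int lpg_shift,
		 u32 *pde, u32 *pte)
{
	*pde = vaddr >> pde_shift;			/* which page directory entry */
	*pte = (vaddr & ((1ULL << pde_shift) - 1)) >> lpg_shift;
						/* big-page entry inside that PDE */
}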
2015 Apr 16 (15 messages): [PATCH 0/6] map big page by platform IOMMU
Hi,
Generally, imported buffers with memory type TTM_PL_TT are mapped as small
pages, probably due to the lack of big page allocation. But a platform device
that also uses memory type TTM_PL_TT, like GK20A, can *allocate* big pages
through the IOMMU hardware inside the SoC. This series is an attempt to map
the imported buffers as big pages in the GMMU via the platform IOMMU. With
some preparation work to
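
The cover letter is truncated here, but the gist is that the driver should opportunistically take the IOMMU big-page path when it can, and fall back to ordinary small-page GMMU mappings otherwise. A hedged sketch of such a gatekeeping check is below; the helper name and the 128 KiB constant are assumptions for illustration, not taken from the series.

/*
 * Illustrative sketch only: decide whether an imported TTM_PL_TT buffer
 * is a candidate for the IOMMU big-page path.  The device must sit
 * behind an IOMMU domain, and the buffer must divide into whole big
 * pages; otherwise the usual 4 KiB GMMU mapping is used.
 */
#include <linux/iommu.h>
#include <linux/kernel.h>

#define GK20A_BIG_PAGE_SIZE	(128 << 10)	/* assumed big-page size */

static bool
gk20a_can_use_big_pages(struct iommu_domain *domain, size_t size)
{
	/* No IOMMU translating for the GPU: nothing to gain. */
	if (!domain)
		return false;

	/* Only whole big pages can be remapped this way. */
	return IS_ALIGNED(size, GK20A_BIG_PAGE_SIZE);
}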