search for: nvkm_vm_bp_list

Displaying 3 results from an estimated 3 matches for "nvkm_vm_bp_list".

2015 Apr 17
2
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
...a chunk of memory is mapped through the IOMMU can be tested by checking if the IOMMU bit is set in the address recorded in the PTE. So has_iommu_bp looks redundant here.

> };
>
> struct nvkm_vm {
> @@ -37,6 +39,13 @@ struct nvkm_vm {
>         u32 lpde;
> };
>
> +struct nvkm_vm_bp_list {
> +	struct list_head head;
> +	u32 pde;
> +	u32 pte;
> +	void *priv;
> +};
> +

Tracking the PDE and PTE of each memory chunk can probably be avoided if you change your unmapping strategy. Currently you are going through the list of nvkm_vm_bp_list, but y...
2015 Apr 20
3
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
...e Bergstrom <tbergstrom at nvidia.com> wrote:
>
> On 04/17/2015 02:11 AM, Alexandre Courbot wrote:
>>
>> Tracking the PDE and PTE of each memory chunk can probably be avoided
>> if you change your unmapping strategy. Currently you are going through
>> the list of nvkm_vm_bp_list, but you know your PDE and PTE are always
>> going to be adjacent, since a nvkm_vma represents a contiguous block
>> in the GPU VA. So when unmapping, you can simply check for each PTE
>> entry whether the IOMMU bit is set, and unmap from the IOMMU space
>> after unmapping f...
2015 Apr 16
15
[PATCH 0/6] map big page by platform IOMMU
Hi,

Generally, imported buffers with memory type TTM_PL_TT are mapped as small pages, probably due to the lack of big page allocation. But a platform device that also uses memory type TTM_PL_TT, like GK20A, can *allocate* big pages through the IOMMU hardware inside the SoC. This is an attempt to map the imported buffers as big pages in the GMMU via the platform IOMMU. With some preparation work to