search for: lpde

Displaying 5 results from an estimated 5 matches for "lpde".

2013 Mar 05 (4) [RFC PATCH] drm/nouveau: use vmalloc for pgt allocation
.../vm/base.c index 77c67fc..e66fb77 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/vm/base.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/vm/base.c
@@ -362,7 +362,7 @@ nouveau_vm_create(struct nouveau_vmmgr *vmm, u64 offset, u64 length,
 	vm->fpde = offset >> (vmm->pgt_bits + 12);
 	vm->lpde = (offset + length - 1) >> (vmm->pgt_bits + 12);

-	vm->pgt = kcalloc(vm->lpde - vm->fpde + 1, sizeof(*vm->pgt), GFP_KERNEL);
+	vm->pgt = vzalloc((vm->lpde - vm->fpde + 1) * sizeof(*vm->pgt));
 	if (!vm->pgt) {
 		kfree(vm);
 		return -ENOMEM;
@@ -371,7 +371,7...
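
For context, a minimal sketch of the allocation pattern the patch switches to (hypothetical helper names, not the actual nouveau code): the page-table array grows with the size of the address space, so a zeroed, virtually contiguous vzalloc() replaces the physically contiguous kcalloc(), and the matching teardown has to use vfree() instead of kfree().

#include <linux/vmalloc.h>	/* vzalloc(), vfree() */
#include <linux/slab.h>		/* kfree() for the containing object */

/* Hypothetical helper mirroring the hunk above: one entry per PDE. */
static int alloc_pgt_array(struct nouveau_vm *vm)
{
	u32 entries = vm->lpde - vm->fpde + 1;

	/* Potentially large allocation: virtually contiguous, zero-filled. */
	vm->pgt = vzalloc(entries * sizeof(*vm->pgt));
	if (!vm->pgt)
		return -ENOMEM;
	return 0;
}

/* Hypothetical teardown counterpart. */
static void free_pgt_array(struct nouveau_vm *vm)
{
	vfree(vm->pgt);		/* vzalloc'd memory goes back via vfree(), not kfree() */
	vm->pgt = NULL;
}

The trade-off is that the array is only virtually contiguous, which is fine for a CPU-side lookup table like this one that is never handed to hardware as a single physical block.
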
2013 Jul 29 (0) [PATCH] drm/nouveau: protect vm refcount with mutex
...u_vm_ref(vm, &vma->vm, NULL);
 	vma->offset = (u64)vma->node->offset << 12;
 #ifdef NOUVEAU_VM_POISON
 	if (vm->poison)
@@ -353,7 +355,7 @@ nouveau_vm_put(struct nouveau_vma *vma)
 {
 	struct nouveau_vm *vm = vma->vm;
 	struct nouveau_vmmgr *vmm = vm->vmm;
-	u32 fpde, lpde;
+	u32 fpde, lpde, ref;

 	if (unlikely(vma->node == NULL))
 		return;
@@ -363,9 +365,12 @@ nouveau_vm_put(struct nouveau_vma *vma)
 	mutex_lock(&nv_subdev(vmm)->mutex);
 	nouveau_vm_unmap_pgt(vm, vma->node->type != vmm->spg_shift, fpde, lpde);
 	nouveau_mm_free(&vm->mm,...
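
For illustration, a minimal sketch of the locking pattern this patch is after (not the actual patch body; the refcount field name is assumed and the nouveau headers are taken as given): the counter is only touched while nv_subdev(vmm)->mutex is held, and the value sampled under the lock decides whether to tear the VM down after unlocking.

/* Illustrative sketch: drop one reference to a VM under the vmmgr mutex. */
static void example_vm_unref(struct nouveau_vm *vm)
{
	struct nouveau_vmmgr *vmm = vm->vmm;
	u32 ref;

	mutex_lock(&nv_subdev(vmm)->mutex);
	ref = --vm->refcount;		/* field name assumed for this sketch */
	mutex_unlock(&nv_subdev(vmm)->mutex);

	if (ref == 0) {
		/* last user gone: free outside the lock */
		vfree(vm->pgt);
		kfree(vm);
	}
}
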
2013 Jun 11 (0) [RFC PATCH] drm/nouveau: use vmalloc for pgt allocation
....e66fb77 100644
> --- a/drivers/gpu/drm/nouveau/core/subdev/vm/base.c
> +++ b/drivers/gpu/drm/nouveau/core/subdev/vm/base.c
> @@ -362,7 +362,7 @@ nouveau_vm_create(struct nouveau_vmmgr *vmm, u64 offset, u64 length,
> 	vm->fpde = offset >> (vmm->pgt_bits + 12);
> 	vm->lpde = (offset + length - 1) >> (vmm->pgt_bits + 12);
>
> -	vm->pgt = kcalloc(vm->lpde - vm->fpde + 1, sizeof(*vm->pgt), GFP_KERNEL);
> +	vm->pgt = vzalloc((vm->lpde - vm->fpde + 1) * sizeof(*vm->pgt));
> 	if (!vm->pgt) {
> 		kfree(vm);
> 		re...
2015 Apr 17 (2) [PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
...bool has_iommu_bp;

Whether a chunk of memory is mapped through the IOMMU can be tested by
checking if the IOMMU bit is set in the address recorded in the PTE. So
has_iommu_bp looks redundant here.

> };
>
> struct nvkm_vm {
> @@ -37,6 +39,13 @@ struct nvkm_vm {
> 	u32 lpde;
> };
>
> +struct nvkm_vm_bp_list {
> +	struct list_head head;
> +	u32 pde;
> +	u32 pte;
> +	void *priv;
> +};
> +

Tracking the PDE and PTE of each memory chunk can probably be avoided if you
change your unmapping strategy. Currently you are goin...
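
As an illustration of the reviewer's first point (the bit position and helper name below are assumed, purely to show the idea): whether a chunk went through the SoC IOMMU can be recovered from the address recorded in the PTE, so a separate has_iommu_bp flag carries no extra information.

/* Hypothetical: an IOMMU bit ORed into the bus address, as on GK20A-class SoCs. */
#define EXAMPLE_IOMMU_ADDR_BIT	34	/* assumed position, for illustration only */

static inline bool addr_is_iommu_mapped(u64 pte_addr)
{
	/* The bit set in the recorded address marks an IOMMU translation. */
	return pte_addr & (1ULL << EXAMPLE_IOMMU_ADDR_BIT);
}
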
2015 Apr 16 (15) [PATCH 0/6] map big page by platform IOMMU
Hi, Generally, imported buffers which have memory type TTM_PL_TT are mapped as small pages, probably due to the lack of big page allocation. But a platform device which also uses memory type TTM_PL_TT, like GK20A, can *allocate* big pages through the IOMMU hardware inside the SoC. This is an attempt to map the imported buffers as big pages in the GMMU via the platform IOMMU. With some preparation work to...
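
The general idea, sketched below with the plain kernel IOMMU API of that era rather than the series' actual helpers (the function name and parameters are illustrative): the scattered 4 KiB pages of an imported buffer are laid out contiguously in IOMMU space, and the GMMU can then cover the whole range with a single big-page PTE pointing at that IOVA.

#include <linux/iommu.h>
#include <linux/mm.h>

/*
 * Conceptual sketch only: back one GPU big page with scattered 4 KiB
 * system pages by lining them up in IOMMU space starting at `iova`.
 */
static int map_big_page_via_iommu(struct iommu_domain *domain,
				  unsigned long iova,
				  const phys_addr_t *pages,
				  unsigned int npages)
{
	unsigned int i;
	int ret;

	for (i = 0; i < npages; i++) {
		ret = iommu_map(domain, iova + i * PAGE_SIZE, pages[i],
				PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
		if (ret) {
			/* roll back the pages mapped so far */
			if (i)
				iommu_unmap(domain, iova, i * PAGE_SIZE);
			return ret;
		}
	}
	return 0;
}

Whether this pays off depends on the platform IOMMU being able to hand out a contiguous IOVA range of the GPU's big page size for each chunk.
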