search for: nvkm_vma

Displaying 20 results from an estimated 41 matches for "nvkm_vma".

2018 Feb 13
2
[drm-nouveau-mmu] question about potential NULL pointer dereference
Hi all, while doing some static analysis I ran into the following piece of code at drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c:957:

957 #define node(root, dir) ((root)->head.dir == &vmm->list) ? NULL :          \
958         list_entry((root)->head.dir, struct nvkm_vma, head)
959
960 void
961 nvkm_vmm_unmap_region(struct nvkm_vmm *vmm, struct nvkm_vma *vma)
962 {
963         struct nvkm_vma *next;
964
965         nvkm_memory_tags_put(vma->memory, vmm->mmu->subdev.device, &vma->tags);
966         nvkm_memory_unref(&vma->memory);...
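The question being raised: node(vma, prev) evaluates to NULL when the neighbouring element is the list head itself, yet the surrounding code dereferences the result without a check. A minimal, self-contained userspace mimic of the pattern (simplified types, a stand-in list_entry(), and the list head passed explicitly instead of captured from vmm — not the driver's actual code) makes the analyzer's complaint concrete:

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

struct vma {
	struct list_head head;
	void *memory;
};

/* container_of-style cast, as in the kernel's list_entry() */
#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Mimics the node() macro: NULL when the neighbour is the list head. */
#define node(vmm_list, root, dir) \
	(((root)->head.dir == (vmm_list)) ? NULL : \
	 list_entry((root)->head.dir, struct vma, head))

int main(void)
{
	struct list_head vmm_list = { &vmm_list, &vmm_list };
	struct vma vma = { .head = { &vmm_list, &vmm_list } };

	struct vma *prev = node(&vmm_list, &vma, prev);

	/* prev is NULL here; dereferencing it unconditionally is exactly
	 * what the static analyzer reports in nvkm_vmm_unmap_region(). */
	if (prev && !prev->memory)
		printf("previous region is unmapped\n");
	return 0;
}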
2015 Apr 16
15
[PATCH 0/6] map big page by platform IOMMU
Hi, generally the imported buffers which have memory type TTM_PL_TT are mapped as small pages, probably due to the lack of big page allocation. But a platform device which also uses memory type TTM_PL_TT, like GK20A, can *allocate* big pages through the IOMMU hardware inside the SoC. This is an attempt to map the imported buffers as big pages in the GMMU via the platform IOMMU. With some preparation work to
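The underlying trick is to use the SoC IOMMU to stitch scattered small pages into one contiguous IOVA range, which the GMMU can then cover with a single big-page PTE. A rough sketch of that idea with the kernel IOMMU API, assuming GK20A's 128 KiB big pages; map_as_big_page() and its error handling are illustrative, not the series' actual code:

#include <linux/iommu.h>
#include <linux/scatterlist.h>
#include <linux/sizes.h>

#define GK20A_BIG_PAGE_SIZE	SZ_128K

/* Hypothetical helper: map the small pages backing a buffer into one
 * contiguous IOVA block, so a GPU big-page PTE can cover them all. */
static int map_as_big_page(struct iommu_domain *domain, unsigned long iova,
			   struct scatterlist *sgl, unsigned int nents)
{
	struct scatterlist *sg;
	unsigned long offset = 0;
	unsigned int i;
	int ret;

	for_each_sg(sgl, sg, nents, i) {
		ret = iommu_map(domain, iova + offset, sg_phys(sg),
				sg->length, IOMMU_READ | IOMMU_WRITE);
		if (ret) {
			/* roll back whatever was already mapped */
			iommu_unmap(domain, iova, offset);
			return ret;
		}
		offset += sg->length;
	}

	/* [iova, iova + offset) is now contiguous in IOMMU space; if offset
	 * is a multiple of GK20A_BIG_PAGE_SIZE, the GMMU can reference it
	 * with big-page PTEs instead of one PTE per small page. */
	return 0;
}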
2015 Apr 20
3
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
...ourbot wrote:
>>
>> Tracking the PDE and PTE of each memory chunk can probably be avoided
>> if you change your unmapping strategy. Currently you are going through
>> the list of nvkm_vm_bp_list, but you know your PDE and PTE are always
>> going to be adjacent, since a nvkm_vma represents a contiguous block
>> in the GPU VA. So when unmapping, you can simply check for each PTE
>> entry whether the IOMMU bit is set, and unmap from the IOMMU space
>> after unmapping from the GPU VA space, in a loop similar to that of
>> nvkm_vm_unmap_at().
>> &...
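The suggested strategy can be sketched as a teardown loop over the PTEs of a VA range; the PTE layout and the flag position NV_PTE_IOMMU_BIT are assumptions for illustration, not the hardware's actual format:

#include <linux/bits.h>
#include <linux/iommu.h>

#define NV_PTE_IOMMU_BIT	34	/* assumed position of the IOMMU flag */

/* Illustrative only: clear each GPU PTE, and if it carried the IOMMU
 * bit, also release the corresponding IOMMU-space mapping afterwards,
 * as proposed in the review comment above. */
static void unmap_range(struct iommu_domain *domain, u64 *pte, u32 npte,
			u64 big_page_size)
{
	u32 i;

	for (i = 0; i < npte; i++) {
		u64 entry = pte[i];

		pte[i] = 0;	/* unmap from the GPU VA space first */

		if (entry & BIT_ULL(NV_PTE_IOMMU_BIT)) {
			/* the address recorded in the PTE, minus the flag,
			 * is the IOVA handed out by the IOMMU */
			u64 iova = entry & ~BIT_ULL(NV_PTE_IOMMU_BIT);

			iommu_unmap(domain, iova, big_page_size);
		}
	}
}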
2015 Apr 17
2
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
...rtions(+), 7 deletions(-)
>
> diff --git a/drm/nouveau/include/nvkm/subdev/mmu.h b/drm/nouveau/include/nvkm/subdev/mmu.h
> index 3a5368776c31..3230d31a7971 100644
> --- a/drm/nouveau/include/nvkm/subdev/mmu.h
> +++ b/drm/nouveau/include/nvkm/subdev/mmu.h
> @@ -22,6 +22,8 @@ struct nvkm_vma {
> 	struct nvkm_mm_node *node;
> 	u64 offset;
> 	u32 access;
> +	struct list_head bp;
> +	bool has_iommu_bp;

Whether a chunk of memory is mapped through the IOMMU can be tested by checking if the IOMMU bit is set in the address recorded in the PTE....
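In other words, the has_iommu_bp flag duplicates state the PTE already encodes. A sketch of the test the reviewer has in mind, reusing the assumed bit position from the earlier sketch:

#include <linux/bits.h>
#include <linux/types.h>

#define NV_PTE_IOMMU_BIT	34	/* assumed flag position, for illustration */

/* True if the address recorded in this PTE came from IOMMU space,
 * making a separate has_iommu_bp bool unnecessary. */
static inline bool pte_is_iommu_mapped(u64 pte)
{
	return pte & BIT_ULL(NV_PTE_IOMMU_BIT);
}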
2018 Feb 13
0
[drm-nouveau-mmu] question about potential NULL pointer dereference
...all,
>
> While doing some static analysis I ran into the following piece of code at
> drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c:957:
>
> 957 #define node(root, dir) ((root)->head.dir == &vmm->list) ? NULL : \
> 958 	list_entry((root)->head.dir, struct nvkm_vma, head)
> 959
> 960 void
> 961 nvkm_vmm_unmap_region(struct nvkm_vmm *vmm, struct nvkm_vma *vma)
> 962 {
> 963 	struct nvkm_vma *next;
> 964
> 965 	nvkm_memory_tags_put(vma->memory, vmm->mmu->subdev.device, &vma->tags);
> 966 	nvkm_...
2015 Jul 07
5
CUDA fixed VA allocations and sparse mappings
...size;   /* in, bytes */
	uint32_t flags;  /* in */
	uint64_t offset; /* in/out, byte address */
};

struct drm_nouveau_as_free {
	uint64_t offset; /* in, byte address */
};

These ioctls just call into the allocator to allocate a range of addresses, resulting in a struct nvkm_vma that tracks that allocation (or releases the struct nvkm_vma back into the virtual address pool in the case of the free ioctl). If NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET is set, offset specifies the requested virtual address. Otherwise, an arbitrary address will be allocated. In addition to this, a...
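From userspace, the proposed interface would be driven roughly as below. The leading fields of drm_nouveau_as_alloc are cut off in the excerpt above, so the struct layout, the flag value, and the ioctl number here are all reconstructions for illustration; this ABI was a proposal and was never merged in this form:

#include <stdint.h>
#include <sys/ioctl.h>

/* Partially reconstructed from the excerpt above -- illustrative only. */
struct drm_nouveau_as_alloc {
	uint64_t size;   /* in, bytes */
	uint32_t flags;  /* in */
	uint64_t offset; /* in/out, byte address */
};

#define NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET	0x1	/* assumed value */
#define DRM_IOCTL_NOUVEAU_AS_ALLOC \
	_IOWR('d', 0x50, struct drm_nouveau_as_alloc)	/* hypothetical number */

/* Reserve `size` bytes of GPU VA at a caller-chosen address `va`. */
static int alloc_fixed_va(int drm_fd, uint64_t va, uint64_t size)
{
	struct drm_nouveau_as_alloc req = {
		.size   = size,
		.flags  = NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET,
		.offset = va,	/* in: requested VA; out: VA actually allocated */
	};

	return ioctl(drm_fd, DRM_IOCTL_NOUVEAU_AS_ALLOC, &req);
}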
2020 Jun 22
0
[RESEND PATCH 3/3] nouveau: make nvkm_vmm_ctor() and nvkm_mmu_ptp_get() static
...anaged, u64 addr, u64 size,
	     struct lock_class_key *, const char *name, struct nvkm_vmm **);
-int nvkm_vmm_ctor(const struct nvkm_vmm_func *, struct nvkm_mmu *,
-		  u32 pd_header, bool managed, u64 addr, u64 size,
-		  struct lock_class_key *, const char *name, struct nvkm_vmm *);
 struct nvkm_vma *nvkm_vmm_node_search(struct nvkm_vmm *, u64 addr);
 struct nvkm_vma *nvkm_vmm_node_split(struct nvkm_vmm *, struct nvkm_vma *,
				     u64 addr, u64 size);
--
2.20.1
2020 Jun 22
7
[RESEND PATCH 0/3] nouveau: fixes for SVM
These are based on 5.8.0-rc2 and intended for Ben Skeggs' nouveau tree. I believe the changes can be queued for 5.8-rcX after being reviewed. These were part of a larger series, but I'm resending them separately as suggested by Jason Gunthorpe. https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampbell@nvidia.com/ Note that in order to exercise/test patch 2 here, you will need a
2018 Mar 10
17
[RFC PATCH 00/13] SVM (share virtual memory) with HMM in nouveau
...atches have already been posted on the mesa mailing list. There are two aspects that need to be sorted out before this can be considered ready. First, we want to decide how to update the GPU page table from HMM. In this patchset I added new methods to vmm that allow the GPU page table to be updated without an nvkm_memory or nvkm_vma object (see patches 7 and 8, the special mapping method for HMM). It just takes an array of pages and flags, which allows system and device-private memory to be interleaved. The second aspect is how to create an HMM-enabled channel. Channel is the term used for an NVIDIA GPU command queue; each process usin...
2015 Apr 16
2
[PATCH 6/6] mmu: gk20a: implement IOMMU mapping for big pages
...+struct gk20a_mmu_priv {
> +	struct nvkm_mmu base;
> +};
> +
> +struct gk20a_mmu_iommu_mapping {
> +	struct nvkm_mm_node *node;
> +	u64 iova;
> +};
> +
> +extern const u8 gf100_pte_storage_type_map[256];
> +
> +static void
> +gk20a_vm_map(struct nvkm_vma *vma, struct nvkm_gpuobj *pgt,
> +	     struct nvkm_mem *mem, u32 pte, u64 list)
> +{
> +	u32 target = (vma->access & NV_MEM_ACCESS_NOSNOOP) ? 7 : 5;
> +	u64 phys;
> +
> +	pte <<= 3;
> +	phys = gf100_vm_addr(vma, list, mem->memty...
2015 Jul 07
2
CUDA fixed VA allocations and sparse mappings
...fset; /* in/out, byte address */
> > };
> >
> > struct drm_nouveau_as_free {
> > 	uint64_t offset; /* in, byte address */
> > };
> >
> > These ioctls just call into the allocator to allocate a range of addresses,
> > resulting in a struct nvkm_vma that tracks that allocation (or releases the
> > struct nvkm_vma back into the virtual address pool in the case of the free
> > ioctl). If NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET is set, offset specifies the
> > requested virtual address. Otherwise, an arbitrary address will be
>...
2015 Jun 15
2
[PATCH v2 2/2] drm/nouveau: add GEM_SET_TILING staging ioctl
...t;
> +	struct nouveau_drm *drm = nouveau_drm(dev);
> +	struct nouveau_cli *cli = nouveau_cli(file_priv);
> +	struct nvkm_fb *pfb = nvxx_fb(&drm->device);
> +	struct drm_nouveau_gem_set_tiling *req = data;
> +	struct drm_gem_object *gem;
> +	struct nouveau_bo *nvbo;
> +	struct nvkm_vma *vma;
> +	int ret = 0;
> +
> +	if (!nouveau_staging_tiling)
> +		return -EINVAL;
> +
> +	if (!pfb->memtype_valid(pfb, req->tile_flags)) {
> +		NV_PRINTK(error, cli, "bad page flags: 0x%08x\n", req->tile_flags);
> +		return -EINVAL;
> +	}
> +
> +	g...
2015 Jun 15
4
[PATCH v2 0/2] drm/nouveau: option for staging ioctls and new GEM_SET_TILING ioctl
Second version of this patchset addressing Ben's comments and fixing a few extra things. This patchset proposes to introduce a "staging" module option to dynamically enable features (mostly ioctls) that are merged but may be refined before they are declared "stable". The second patch illustrates the use of this staging option with the SET_TILING ioctl, which can be used to
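The gating pattern itself is small; a minimal sketch of a staging module parameter guarding an ioctl handler, mirroring the nouveau_staging_tiling check visible in the patch excerpt above (the parameter name and includes are illustrative, not the patchset's exact code):

#include <linux/module.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>

/* Staging knob: off by default, so unstable ioctls stay hidden
 * unless explicitly enabled at module load time. */
static bool nouveau_staging;
module_param_named(staging, nouveau_staging, bool, 0400);
MODULE_PARM_DESC(staging, "enable staging (unstable) ioctls");

static int
nouveau_gem_ioctl_set_tiling(struct drm_device *dev, void *data,
			     struct drm_file *file_priv)
{
	if (!nouveau_staging)
		return -EINVAL;	/* behave as if the ioctl did not exist */

	/* ... staging implementation proper goes here ... */
	return 0;
}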
2015 Nov 11
2
[PATCH] instmem/gk20a: use DMA API CPU mapping
...mu *node = gk20a_instobj_iommu(memory);
+	struct gk20a_instmem *imem = node->base.imem;
+	struct nvkm_ltc *ltc = imem->base.subdev.device->ltc;
 	unsigned long flags;

 	spin_lock_irqsave(&imem->lock, flags);
@@ -284,27 +280,6 @@ gk20a_instobj_map(struct nvkm_memory *memory, struct nvkm_vma *vma, u64 offset)
 	nvkm_vm_map_at(vma, offset, &node->mem);
 }

-/*
- * Clear the CPU mapping of an instobj if it exists
- */
-static void
-gk20a_instobj_dtor(struct gk20a_instobj *node)
-{
-	struct gk20a_instmem *imem = node->imem;
-	unsigned long flags;
-
-	spin_lock_irqsave(&imem...
2019 Mar 16
6
[PATCH 0/4] NV50/GF100 behind constrained hierarchies
Hi Ben, I've been working with an mmio-constrained pci hierarchy intended almost solely for nvme devices and switches. Binding nouveau to an NV50-based gpu results in a kernel panic as the device cannot be fully mapped. I've made modifications in nv50 and vmm to unbind the driver from this hierarchy, and modified gf100 assuming it will have the same issue. 1/4 also includes a fix where
2015 Nov 11
0
[PATCH] instmem/gk20a: use DMA API CPU mapping
...(memory);
> +	struct gk20a_instmem *imem = node->base.imem;
> +	struct nvkm_ltc *ltc = imem->base.subdev.device->ltc;
> 	unsigned long flags;
>
> 	spin_lock_irqsave(&imem->lock, flags);
> @@ -284,27 +280,6 @@ gk20a_instobj_map(struct nvkm_memory *memory, struct nvkm_vma *vma, u64 offset)
> 	nvkm_vm_map_at(vma, offset, &node->mem);
> }
>
> -/*
> - * Clear the CPU mapping of an instobj if it exists
> - */
> -static void
> -gk20a_instobj_dtor(struct gk20a_instobj *node)
> -{
> -	struct gk20a_instmem *imem = node->imem;
>...
2015 May 20
3
[PATCH 0/2] drm/nouveau: option for staging ioctls and new SET_TILING ioctl
This patchset proposes to introduce a "staging" module option to dynamically enable features (mostly ioctls) that are merged but may be refined before they are declared "stable". The second patch illustrates the use of this staging option with the SET_TILING ioctl, which can be used to specify the tiling options of a PRIME-imported buffer. The staging parameter will allow us
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process' page tables. The PFN can be used to get the struct page with hmm_pfn_to_page() and the page size order can be determined with compound_order(page) but if the page is larger than order 0 (PAGE_SIZE), there is no indication that the page is mapped using a larger page size. To
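The lookup chain described here can be written down directly; a minimal fragment assuming a populated entry from the hmm_range_fault() output array (kernel-internal API as of the 5.7-era HMM rework that this series builds on):

#include <linux/hmm.h>
#include <linux/mm.h>

/* Given one entry from the hmm_range_fault() output array, recover the
 * backing page and the size of the compound page behind it. The order
 * of that page is the information the cover letter says is not exposed
 * through the per-PFN flags. */
static unsigned long hmm_entry_mapping_size(unsigned long hmm_pfn)
{
	struct page *page;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return 0;

	page = hmm_pfn_to_page(hmm_pfn);
	return PAGE_SIZE << compound_order(compound_head(page));
}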
2016 Jan 18
6
[PATCH v2 0/5] nouveau: add secure boot support for dGPU and Tegra
This is a heavily revised version of the first patch series that adds secure boot support to Nouveau. This code still depends on NVIDIA releasing official firmware files, but the files released with SHIELD TV and Pixel C can already be used on a Jetson TX1. As you know, we are working hard to release the official firmware files; however, in the meantime it doesn't hurt to review the code so it
2016 Nov 02
0
[PATCH v3 07/15] secboot: generate HS BL descriptor in hook
...*gsb)
 * gm200_secboot_run_hs_blob() - run the given high-secure blob
 */
static int
-gm200_secboot_run_hs_blob(struct gm200_secboot *gsb, struct nvkm_gpuobj *blob,
-			  struct gm200_flcn_bl_desc *desc)
+gm200_secboot_run_hs_blob(struct gm200_secboot *gsb, struct nvkm_gpuobj *blob)
 {
 	struct nvkm_vma vma;
-	u64 vma_addr;
 	const u32 bl_desc_size = gsb->func->bl_desc_size;
+	const struct hsf_load_header *load_hdr;
 	u8 bl_desc[bl_desc_size];
 	int ret;

+	/* Find the bootloader descriptor for our blob and copy it */
+	if (blob == gsb->acr_load_blob) {
+		load_hdr = &gsb->load_bl...