search for: __gfp_dma

Displaying 20 results from an estimated 22 matches for "__gfp_dma".

2019 May 16
2
[PATCH 06/10] s390/cio: add basic protected virtualization support
> ...WARN_ON_ONCE(dev && !dev->coherent_dma_mask);
>
> 	if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
> 		return cpu_addr;
>
> 	/* let the implementation decide on the zone to allocate from: */
> 	flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
>
> that is, the GFP flags are dropped, which implies that we really want
> cdev->dev restricted to 31 bit addressable memory, because we can't tell
> (with the current common DMA code) "hey, but this piece of DMA mem you
> are about to allocate for me must be 31 bit addressable"...
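[A minimal sketch, not code from the thread: since the zone GFP flags are stripped as quoted above, the one knob the common DMA code does honour is the device's coherent DMA mask, so a driver needing 31 bit addressable memory would express that via dma_set_coherent_mask(). The ccw_dma_setup() name is hypothetical.]

	#include <linux/dma-mapping.h>

	/* Hypothetical setup step: restrict coherent allocations for this
	 * device to 31 bit addressable memory, since any GFP zone flags
	 * passed to dma_alloc_coherent() are discarded by the common code. */
	static int ccw_dma_setup(struct device *dev)
	{
		return dma_set_coherent_mask(dev, DMA_BIT_MASK(31));
	}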
2019 May 18
0
[PATCH 06/10] s390/cio: add basic protected virtualization support
> > ...WARN_ON_ONCE(dev && !dev->coherent_dma_mask);
> >
> > 	if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
> > 		return cpu_addr;
> >
> > 	/* let the implementation decide on the zone to allocate from: */
> > 	flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
> >
> > that is, the GFP flags are dropped, which implies that we really want
> > cdev->dev restricted to 31 bit addressable memory, because we can't
> > tell (with the current common DMA code) "hey, but this piece of DMA
> > mem you are about to allocate for me must be 31 bit addressable"...
2019 May 15
0
[PATCH 06/10] s390/cio: add basic protected virtualization support
...void *cpu_addr;

	WARN_ON_ONCE(dev && !dev->coherent_dma_mask);

	if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
		return cpu_addr;

	/* let the implementation decide on the zone to allocate from: */
	flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);

that is, the GFP flags are dropped, which implies that we really want cdev->dev restricted to 31 bit addressable memory, because we can't tell (with the current common DMA code) "hey, but this piece of DMA mem you are about to allocate for me must be 31 bit addressable"...
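[With the coherent mask set as in the sketch above, an allocation helper needs no zone flags at all; the zone choice is the DMA layer's. A hedged illustration; ccw_alloc_ctrl() is an invented name, not from the patch set.]

	#include <linux/dma-mapping.h>

	/* Illustrative helper: the returned buffer honours the device's
	 * coherent DMA mask, so no __GFP_DMA* flag is needed or allowed. */
	static void *ccw_alloc_ctrl(struct device *dev, size_t size,
				    dma_addr_t *dma_handle)
	{
		return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
	}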
2019 May 13
4
[PATCH 06/10] s390/cio: add basic protected virtualization support
On Fri, 26 Apr 2019 20:32:41 +0200 Halil Pasic <pasic at linux.ibm.com> wrote: > As virtio-ccw devices are channel devices, we need to use the dma area > for any communication with the hypervisor. > > This patch addresses the most basic stuff (mostly what is required for > virtio-ccw), and does take care of QDIO or any devices. "does not take care of QDIO",
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
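[For context, a minimal sketch of the accounting idiom the cover letter refers to; struct ioas_entry is a hypothetical structure, not one of iommufd's.]

	#include <linux/slab.h>

	/* Hypothetical iommufd-style object: GFP_KERNEL_ACCOUNT charges
	 * the allocation to the memory cgroup of the allocating task, so
	 * an fd cannot pin unbounded kernel memory. */
	struct ioas_entry {
		unsigned long iova;
		size_t length;
	};

	static struct ioas_entry *ioas_entry_alloc(void)
	{
		return kzalloc(sizeof(struct ioas_entry), GFP_KERNEL_ACCOUNT);
	}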
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
The internal mechanisms support this, but instead of exposing the gfp to the caller it is wrapped into iommu_map() and iommu_map_atomic(). Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.

Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
 arch/arm/mm/dma-mapping.c | 11 +++++++----
 .../gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 3 ++-
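[A sketch of what a caller looks like once the gfp is exposed; the map_one_page() wrapper name is invented, but iommu_map() taking a gfp_t argument is what this series introduces.]

	#include <linux/iommu.h>

	/* After this change, callers choose the gfp for IOPTE allocation,
	 * e.g. GFP_KERNEL_ACCOUNT so it is charged to the memory cgroup. */
	static int map_one_page(struct iommu_domain *domain, unsigned long iova,
				phys_addr_t paddr)
	{
		return iommu_map(domain, iova, paddr, PAGE_SIZE,
				 IOMMU_READ | IOMMU_WRITE, GFP_KERNEL_ACCOUNT);
	}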
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2005 Jul 27
20
Xen 3.0 Status update
At OLS we had a couple of "Xen Mini Summit" sessions. Although there weren't any formal minutes, here's a rough summary of what we discussed/concluded.

Status as of 24 July 2005
=========================

Summary: We have a couple of annoying bugs, but things are coming together nicely. x86_32p (PAE 16GB) and x86_64 ports are close to being feature complete with
2020 Sep 15
0
[PATCH 15/18] dma-mapping: add a new dma_alloc_pages API
...struct page *dma_alloc_pages(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+	struct page *page;
+
+	if (WARN_ON_ONCE(!dev->coherent_dma_mask))
+		return NULL;
+	if (WARN_ON_ONCE(gfp & (__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM)))
+		return NULL;
+
+	size = PAGE_ALIGN(size);
+	if (dma_alloc_direct(dev, ops))
+		page = dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
+	else if (ops->alloc_pages)
+		page = ops->alloc_pages(dev, size, dma_handle, dir, gfp);
+	else
+		return NULL;
+...
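[A hedged usage sketch of the API added above; rx_buf_alloc() is an invented name. Callers pass a direction and a plain gfp, and the zone flags are rejected by the WARN_ON_ONCE checks quoted in the patch.]

	#include <linux/dma-mapping.h>

	/* Illustrative caller: allocate device-mappable pages for receive
	 * buffers; no __GFP_DMA/__GFP_DMA32/__GFP_HIGHMEM may be passed. */
	static struct page *rx_buf_alloc(struct device *dev, size_t size,
					 dma_addr_t *handle)
	{
		return dma_alloc_pages(dev, size, handle, DMA_FROM_DEVICE,
				       GFP_KERNEL);
	}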
2020 Aug 19
0
[PATCH 19/28] dma-mapping: replace DMA_ATTR_NON_CONSISTENT with dma_{alloc, free}_pages
..._attrs);
+void *dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		enum dma_data_direction dir, gfp_t gfp)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+	void *vaddr;
+
+	if (WARN_ON_ONCE(!dev->coherent_dma_mask))
+		return NULL;
+	if (WARN_ON_ONCE(gfp & (__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM)))
+		return NULL;
+
+	size = PAGE_ALIGN(size);
+	if (dma_alloc_direct(dev, ops))
+		vaddr = dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
+	else if (ops->alloc_pages)
+		vaddr = ops->alloc_pages(dev, size, dma_handle, dir, gfp);
+	else
+		return NULL;...
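[Since dma_alloc_pages() may hand back non-coherent memory, the cache maintenance that DMA_ATTR_NON_CONSISTENT used to signal stays with the driver. A minimal sketch under that assumption; rx_complete() is an invented name.]

	#include <linux/dma-mapping.h>

	/* Before the CPU reads data the device wrote, hand ownership back:
	 * this performs any cache invalidation the platform requires. */
	static void rx_complete(struct device *dev, dma_addr_t handle, size_t len)
	{
		dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
	}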
2019 Sep 08
7
[PATCH v6 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver.

Change-log:
V6:
-add more details to the description of patch 001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch
-rename handle_deferred_device to iommu_dma_deferred_attach
-fix double tabs in 0003-iommu-dma-iommu-Handle-deferred-devices.patch
V5:
-Rebase on
2019 Jun 13
8
[PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver.

Change-log:
V4:
-Rebase on top of linux-next
-Split the removing of the unnecessary locking in the amd iommu driver into a separate patch
-refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments
v3:
-rename dma_limit to dma_mask
-exit
2020 Sep 14
20
a saner API for allocating DMA addressable pages v2
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages can finally be properly supported without incurring bounce buffering. I'm still a
2020 Sep 15
32
a saner API for allocating DMA addressable pages v3
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages can finally be properly supported without incurring bounce buffering. As a follow up I
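[To make the replacement concrete, a hedged before/after sketch of the conversion the series describes; the helper names are invented and no particular driver is implied.]

	#include <linux/dma-mapping.h>

	/* Before: the non-consistent attr was only honoured on some
	 * platforms, silently giving coherent memory elsewhere. */
	static void *buf_alloc_old(struct device *dev, size_t size, dma_addr_t *h)
	{
		return dma_alloc_attrs(dev, size, h, GFP_KERNEL,
				       DMA_ATTR_NON_CONSISTENT);
	}

	/* After: dma_alloc_pages() is available on all platforms. */
	static struct page *buf_alloc_new(struct device *dev, size_t size,
					  dma_addr_t *h)
	{
		return dma_alloc_pages(dev, size, h, DMA_BIDIRECTIONAL,
				       GFP_KERNEL);
	}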