search for: phys_addr_t

Displaying 20 results from an estimated 299 matches for "phys_addr_t".

2019 Sep 08
7
[PATCH v6 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Change-log: V6: -add more details to the description of patch 001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch -rename handle_deferred_device to iommu_dma_deferred_attach -fix double tabs in 0003-iommu-dma-iommu-Handle-deferred-devices.patch V5: -Rebase on
2019 Jun 13
8
[PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Change-log: V4: -Rebase on top of linux-next -Split the removing of the unnecessary locking in the amd iommu driver into a separate patch -refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments v3: -rename dma_limit to dma_mask -exit
2015 Dec 07
0
[PATCH RFC 1/3] xen: export xen_phys_to_bus, xen_bus_to_phys and xen_virt_to_bus
...rivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 79bc493..56014d5 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -75,37 +75,6 @@ static unsigned long xen_io_tlb_nslabs; static u64 start_dma_addr; -/* - * Both of these functions should avoid PFN_PHYS because phys_addr_t - * can be 32bit when dma_addr_t is 64bit leading to a loss in - * information if the shift is done before casting to 64bit. - */ -static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr) -{ - unsigned long bfn = pfn_to_bfn(PFN_DOWN(paddr)); - dma_addr_t dma = (dma_addr_t)bfn << PAGE_SHIFT...
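To see why the removed comment matters, here is a small standalone sketch (not from the patch; plain C with stdint stand-ins for a 32-bit phys_addr_t and a 64-bit dma_addr_t) showing the information loss when the shift happens before the widening cast:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
        /* A PAE-style frame number whose address needs more than 32 bits. */
        uint32_t bfn = 0x150000;                       /* -> 0x1_5000_0000 */

        uint64_t bad  = (uint64_t)(bfn << PAGE_SHIFT); /* shifted in 32 bits */
        uint64_t good = (uint64_t)bfn << PAGE_SHIFT;   /* widened first */

        printf("bad  = %#llx\n", (unsigned long long)bad);  /* 0x50000000  */
        printf("good = %#llx\n", (unsigned long long)good); /* 0x150000000 */
        return 0;
}

This is exactly what xen_phys_to_bus() gets right: it casts the frame number to dma_addr_t before applying << PAGE_SHIFT.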
2020 May 20
1
[PATCH v3 51/75] x86/sev-es: Handle MMIO events
...ernel/sev-es.c b/arch/x86/kernel/sev-es.c > index f4ce3b475464..e3662723ed76 100644 > --- a/arch/x86/kernel/sev-es.c > +++ b/arch/x86/kernel/sev-es.c > @@ -294,6 +294,25 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt, > return ES_EXCEPTION; > } > > +static phys_addr_t vc_slow_virt_to_phys(struct ghcb *ghcb, unsigned long vaddr) > +{ > + unsigned long va = (unsigned long)vaddr; > + unsigned int level; > + phys_addr_t pa; > + pgd_t *pgd; > + pte_t *pte; > + > + pgd = pgd_offset(current->active_mm, va); > + pte = lookup_address_in_pgd(...
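The excerpt is cut off mid-walk; for orientation, here is a sketch of how such a slow path typically finishes. This is not necessarily the patch's exact code -- pte_pfn() and page_level_mask() are existing x86 helpers:

static phys_addr_t slow_virt_to_phys_sketch(unsigned long va)
{
        unsigned int level;
        phys_addr_t pa;
        pgd_t *pgd;
        pte_t *pte;

        pgd = pgd_offset(current->active_mm, va);
        pte = lookup_address_in_pgd(pgd, va, &level);
        if (!pte)
                return 0;      /* unmapped: caller must treat this as a fault */

        /* Widen the PFN before shifting (see the swiotlb-xen comment above),
         * then re-add the offset within the page for this mapping level. */
        pa  = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
        pa |= va & ~page_level_mask(level);
        return pa;
}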
2020 Apr 28
0
[PATCH 5/5] virtio: Add bounce DMA ops
...> + > +#include <linux/virtio.h> > +#include <linux/io.h> > +#include <linux/swiotlb.h> > +#include <linux/dma-mapping.h> > +#include <linux/of_reserved_mem.h> > +#include <linux/of.h> > +#include <linux/of_fdt.h> > + > +static phys_addr_t bounce_buf_paddr; > +static void *bounce_buf_vaddr; > +static size_t bounce_buf_size; > +struct swiotlb_pool *virtio_pool; > + > +#define VIRTIO_MAX_BOUNCE_SIZE (16*4096) > + > +static void *virtio_alloc_coherent(struct device *dev, size_t size, > + dma_addr_t *dma_handle,...
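The declarations above carve out a fixed physical window for the device. A minimal sketch of the idea follows -- the actual patch allocates from a swiotlb pool, so the bump allocator and the _sketch names below are illustrative only:

static size_t bounce_buf_used;  /* hypothetical cursor into the window */

static void *virtio_alloc_coherent_sketch(struct device *dev, size_t size,
                                          dma_addr_t *dma_handle, gfp_t gfp)
{
        void *va;

        size = ALIGN(size, PAGE_SIZE);
        if (bounce_buf_used + size > bounce_buf_size)
                return NULL;                    /* reserved window exhausted */

        /* Both addresses come from the same pre-reserved region, so the
         * device only ever sees DMA inside [bounce_buf_paddr, +size). */
        *dma_handle = bounce_buf_paddr + bounce_buf_used;
        va = bounce_buf_vaddr + bounce_buf_used;
        bounce_buf_used += size;
        return va;
}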
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...entry->prot); + entry->prot, GFP_KERNEL); if (ret) goto out; map_size = 0; @@ -2360,31 +2360,26 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova, return ret; } -static int _iommu_map(struct iommu_domain *domain, unsigned long iova, - phys_addr_t paddr, size_t size, int prot, gfp_t gfp) +int iommu_map(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot, gfp_t gfp) { const struct iommu_domain_ops *ops = domain->ops; int ret; + might_sleep_if(gfpflags_allow_blocking(gfp)); + ret = __io...
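A usage sketch for the reworked entry point shown in the diff (domain, iova, and paddr are assumed to come from the caller): the new gfp argument is what the IOPTE allocations will use, and the added might_sleep_if() catches callers that pass a sleeping gfp from atomic context:

static int map_one_page(struct iommu_domain *domain, unsigned long iova,
                        phys_addr_t paddr)
{
        /* GFP_KERNEL may sleep; atomic callers must pass GFP_ATOMIC here. */
        return iommu_map(domain, iova, paddr, SZ_4K,
                         IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
}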
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
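For illustration, what the charging looks like at an allocation site: GFP_KERNEL_ACCOUNT bills the memory to the allocating task's memory cgroup, so a misbehaving user exhausts its own memcg limit rather than the host. The struct name below is hypothetical:

struct iopt_meta {                      /* hypothetical per-area bookkeeping */
        unsigned long npages;
};

static struct iopt_meta *iopt_meta_alloc(void)
{
        /* Charged to the current task's memory cgroup, like KVM's
         * internal allocations and iommufd's other data structures. */
        return kzalloc(sizeof(struct iopt_meta), GFP_KERNEL_ACCOUNT);
}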
2008 Dec 22
17
[PATCH 0 of 9] swiotlb: use phys_addr_t for pages
Hi all, Here's a work in progress series which does a partial revert of the previous swiotlb changes, and does a partial replacement with Becky Bruce's series. The most important difference is Becky's use of phys_addr_t rather than page+offset to represent arbitrary pages. This turns out to be simpler. I didn't replicate the map_single_page changes, since I'm not exactly sure of ppc's requirements here, and it seemed like something that could be easily added. Quick testing showed no p...
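The gist of the representation change, as a sketch: collapse the (page, offset) pair to a single phys_addr_t at the boundary and convert back only where a struct page is really needed:

static phys_addr_t buf_to_phys(struct page *page, unsigned long offset)
{
        return page_to_phys(page) + offset;      /* one value to pass around */
}

static struct page *buf_to_page(phys_addr_t paddr)
{
        return pfn_to_page(paddr >> PAGE_SHIFT); /* recover the page late */
}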
2020 Aug 19
0
[PATCH 08/28] MIPS: make dma_sync_*_for_cpu a little less overzealous
...DMA_FROM_DEVICE: + case DMA_BIDIRECTIONAL: + dma_cache_inv((unsigned long)addr, size); + break; default: BUG(); } @@ -82,7 +94,7 @@ static inline void dma_sync_virt(void *addr, size_t size, * configured then the bulk of this loop gets optimized out. */ static inline void dma_sync_phys(phys_addr_t paddr, size_t size, - enum dma_data_direction dir) + enum dma_data_direction dir, bool for_device) { struct page *page = pfn_to_page(paddr >> PAGE_SHIFT); unsigned long offset = paddr & ~PAGE_MASK; @@ -90,18 +102,20 @@ static inline void dma_sync_phys(phys_addr_t paddr, size_t size...
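The policy the patch is tightening, sketched with the MIPS cache primitives visible in the diff (the exact table may differ from the final code): write back when the device is about to read, invalidate when the CPU is about to read, and do nothing in the cases that need nothing:

static void dma_sync_virt_sketch(unsigned long addr, size_t size,
                                 enum dma_data_direction dir, bool for_device)
{
        if (for_device) {
                switch (dir) {
                case DMA_TO_DEVICE:
                        dma_cache_wback(addr, size);     /* device will read */
                        break;
                case DMA_FROM_DEVICE:
                        dma_cache_inv(addr, size);       /* device will write */
                        break;
                case DMA_BIDIRECTIONAL:
                        dma_cache_wback_inv(addr, size);
                        break;
                default:
                        BUG();
                }
        } else {
                /* for_cpu, matching the hunk above: nothing to do for
                 * DMA_TO_DEVICE; otherwise drop stale lines so the CPU
                 * does not read cached data over fresh DMA. */
                if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
                        dma_cache_inv(addr, size);
        }
}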
2023 May 19
1
[PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront is enabling
On Fri, May 19, 2023 at 01:49:46PM +0100, Andrew Cooper wrote: > > The alternative would be to finally merge swiotlb-xen into swiotlb, in > > which case we might be able to do this later. Let me see what I can > > do there. > > If that is an option, it would be great to reduce the special-casing. I think it's doable, and I've been wanting it for a while. I just
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering can finally be properly supported. I'm still a
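A usage sketch of the new allocator (signature as it was eventually merged; the series was still evolving when this was posted): the driver receives real pages and owns cache maintenance itself, instead of hiding the semantics behind DMA_ATTR_NON_CONSISTENT:

static struct page *alloc_streaming_buf(struct device *dev, size_t size,
                                        dma_addr_t *dma)
{
        struct page *page;

        page = dma_alloc_pages(dev, size, dma, DMA_BIDIRECTIONAL, GFP_KERNEL);
        if (!page)
                return NULL;

        /* Memory may be non-coherent: call dma_sync_single_for_device()
         * before handing *dma to hardware, and ..._for_cpu() after. */
        return page;
}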
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 May 12
2
[PATCH vhost v8 01/12] virtio_ring: split: separate dma codes
As said before, please don't try to do weird runtime checks based on the scatterlist. What you have works for now, but there are plans to replace the page + offset tuple in the scatterlist with just a phys_addr_t. And with that your "clever" scheme will break instantly.
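The defensive way to write such code today, sketched: go through the sg_phys() accessor rather than pairing sg_page() with sg->offset by hand, so the logic survives a future switch of the scatterlist to a bare phys_addr_t:

static phys_addr_t sg_element_phys(struct scatterlist *sg)
{
        /* Currently equivalent to page_to_phys(sg_page(sg)) + sg->offset,
         * but insulated from the planned representation change. */
        return sg_phys(sg);
}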