Displaying 20 results from an estimated 103 matches for "unmap_page".
2020 Feb 04
2
[PATCH 5/5] vdpasim: vDPA device simulator
On 2020/2/4 4:21 PM, Zhu Lingshan wrote:
>> +static const struct dma_map_ops vdpasim_dma_ops = {
>> +	.map_page = vdpasim_map_page,
>> +	.unmap_page = vdpasim_unmap_page,
>> +	.alloc = vdpasim_alloc_coherent,
>> +	.free = vdpasim_free_coherent,
>> +};
>> +
>
> Hey Jason,
>
> IMHO, it would be nice if dma_ops of the parent device could be
> re-used. vdpa_device is expecting to represent a physical de...
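For reference, the hookup under discussion is a single registration call on the simulated device; a minimal sketch (the field path is assumed, and the declaring header has moved between kernel versions):

	/* Route DMA API calls on the simulated device through the
	 * simulator's own ops instead of any parent device's ops. */
	set_dma_ops(&vdpasim->vdpa.dev, &vdpasim_dma_ops);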
2020 Sep 15
0
[PATCH 15/18] dma-mapping: add a new dma_alloc_pages API
...vice *dev, struct scatterlist
const struct dma_map_ops arm_nommu_dma_ops = {
.alloc = arm_nommu_dma_alloc,
.free = arm_nommu_dma_free,
+ .alloc_pages = dma_direct_alloc_pages,
+ .free_pages = dma_direct_free_pages,
.mmap = arm_nommu_dma_mmap,
.map_page = arm_nommu_dma_map_page,
.unmap_page = arm_nommu_dma_unmap_page,
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 8a8949174b1c06..7738b4d23f692f 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -199,6 +199,8 @@ static int arm_dma_supported(struct device *dev, u64 mask)
const struct dm...
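The new API pairs pages with a DMA handle while leaving cache maintenance to the caller; a hedged usage sketch (the wrapper function and buffer size are placeholders, not from the patch):

	/* Allocate one page of non-coherent DMA memory; the caller must
	 * use dma_sync_single_for_{cpu,device}() around CPU accesses. */
	static int example_alloc(struct device *dev, struct page **ret_page,
				 dma_addr_t *ret_dma)
	{
		struct page *page;

		page = dma_alloc_pages(dev, SZ_4K, ret_dma,
				       DMA_BIDIRECTIONAL, GFP_KERNEL);
		if (!page)
			return -ENOMEM;
		*ret_page = page;
		return 0;	/* free later with dma_free_pages() */
	}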
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
...atch series enables xen-swiotlb on arm and arm64.
It has been heavily reworked compared to the previous versions in order
to achieve better performance and to address review comments.
We are not using dma_mark_clean to ensure coherency anymore. We call the
platform implementation of map_page and unmap_page.
We assume that dom0 has been mapped 1:1 (physical address ==
machine address), that is what Xen on ARM currently does.
As a consequence we only use the swiotlb to handle dma
requests involving pages corresponding to grant refs. Obviously these
pages cannot be part of the 1:1 because they belong t...
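The mapping decision described above can be pictured roughly as follows (a conceptual sketch only; every helper name here is hypothetical, not the driver's code):

	/* Under a 1:1 dom0 mapping, local pages need no bouncing; only
	 * pages backing foreign grant refs go through the swiotlb. */
	static dma_addr_t sketch_map_page(struct page *page)
	{
		phys_addr_t phys = page_to_phys(page);

		if (!is_foreign_grant_page(phys))	/* hypothetical */
			return platform_map_page(page);	/* hypothetical */
		return swiotlb_bounce_page(page);	/* hypothetical */
	}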
2023 Jan 18
4
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
Change the sg_alloc_table_from_pages() allocation that was hardwired to
GFP_KERNEL to use the gfp parameter like the other allocations in this
function.
Auditing says this is never called from an atomic context, so it is safe
as is, but reads wrong.
Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
drivers/iommu/dma-iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
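The fix itself is a one-line substitution, roughly (reconstructed to illustrate the commit message, not copied from the diff):

	-	if (sg_alloc_table_from_pages(sgt, pages, count, 0, size,
	-				      GFP_KERNEL))
	+	if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp))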
2018 Jul 30
1
[RFC 1/4] virtio: Define virtio_direct_dma_ops structure
...FORM.
> + */
> +dma_addr_t virtio_direct_map_page(struct device *dev, struct page *page,
> + unsigned long offset, size_t size,
> + enum dma_data_direction dir,
> + unsigned long attrs)
All these functions should probably be marked static.
> +void virtio_direct_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
> + size_t size, enum dma_data_direction dir,
> + unsigned long attrs)
> +{
> +}
No need to implement no-op callbacks in struct dma_map_ops.
> +
> +int virtio_direct_mapping_error(struct device *hwdev, dma_addr_t dma_addr)
> +{
&...
2023 Sep 04
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...; switch (cap) {
> case IOMMU_CAP_CACHE_COHERENCY:
> return true;
> + case IOMMU_CAP_DEFERRED_FLUSH:
> + return true;
> default:
> return false;
> }
> @@ -1069,6 +1080,7 @@ static struct iommu_ops viommu_ops = {
> .map_pages = viommu_map_pages,
> .unmap_pages = viommu_unmap_pages,
> .iova_to_phys = viommu_iova_to_phys,
> + .flush_iotlb_all = viommu_flush_iotlb_all,
> .iotlb_sync = viommu_iotlb_sync,
> .iotlb_sync_map = viommu_iotlb_sync_map,
> .free = viommu_domain_free,
>
> --
> 2.39.2
>
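A plausible shape for the new callback, inferred from the iotlb_sync_map hunk visible elsewhere in these results (not verified against the actual patch):

	static void viommu_flush_iotlb_all(struct iommu_domain *domain)
	{
		struct viommu_domain *vdomain = to_viommu_domain(domain);

		/* Flush by pushing all queued requests out to the device. */
		viommu_sync_req(vdomain->viommu);
	}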
2013 Nov 11
0
[GIT PULL] (xen) stable/for-linus-3.13-rc0-tag
...: get_dma_ops: return xen_dma_ops if we are running as xen_initial_domain
arm64/xen: get_dma_ops: return xen_dma_ops if we are running as xen_initial_domain
xen: introduce xen_alloc/free_coherent_pages
swiotlb-xen: use xen_alloc/free_coherent_pages
xen: introduce xen_dma_map/unmap_page and xen_dma_sync_single_for_cpu/device
swiotlb-xen: use xen_dma_map/unmap_page, xen_dma_sync_single_for_cpu/device
swiotlb: print a warning when the swiotlb is full
arm,arm64: do not always merge biovec if we are running on Xen
grant-table: call set_phys_to_machine after map...
2016 Jun 02
0
[RFC v3 02/45] dma-mapping: Use unsigned long for dma_attrs
...dma_attrs *attrs);
+ dma_addr_t, size_t, unsigned long attrs);
dma_addr_t (*map_page)(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir,
- struct dma_attrs *attrs);
+ unsigned long attrs);
void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir,
- struct dma_attrs *attrs);
+ unsigned long attrs);
/*
* map_sg returns 0 on error and a value > 0 on success.
* It should never return a value < 0.
*/
int (*map_sg)(struct devi...
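At a call site, the conversion trades the stack-allocated attrs object for a plain bitmask; roughly (an illustrative before/after, not a hunk from the series):

	-	DEFINE_DMA_ATTRS(attrs);
	-	dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
	-	handle = dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE,
	-				      &attrs);
	+	handle = dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE,
	+				      DMA_ATTR_SKIP_CPU_SYNC);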
2020 Aug 19
0
[PATCH 19/28] dma-mapping: replace DMA_ATTR_NON_CONSISTENT with dma_{alloc, free}_pages
...vice *dev, struct scatterlist
const struct dma_map_ops arm_nommu_dma_ops = {
.alloc = arm_nommu_dma_alloc,
.free = arm_nommu_dma_free,
+ .alloc_pages = dma_direct_alloc_pages,
+ .free_pages = dma_direct_free_pages,
.mmap = arm_nommu_dma_mmap,
.map_page = arm_nommu_dma_map_page,
.unmap_page = arm_nommu_dma_unmap_page,
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 8a8949174b1c06..7738b4d23f692f 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -199,6 +199,8 @@ static int arm_dma_supported(struct device *dev, u64 mask)
const struct dm...
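For drivers, the replacement swaps the attribute for the dedicated allocator; approximately (illustrative call sites, not from the series):

	-	vaddr = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL,
	-				DMA_ATTR_NON_CONSISTENT);
	+	page = dma_alloc_pages(dev, size, &dma, DMA_BIDIRECTIONAL,
	+			       GFP_KERNEL);
	+	vaddr = page_address(page);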
2023 Sep 06
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...FLUSH:
> > > > + return true;
> > > > default:
> > > > return false;
> > > > }
> > > > @@ -1069,6 +1080,7 @@ static struct iommu_ops viommu_ops = {
> > > > .map_pages = viommu_map_pages,
> > > > .unmap_pages = viommu_unmap_pages,
> > > > .iova_to_phys = viommu_iova_to_phys,
> > > > + .flush_iotlb_all = viommu_flush_iotlb_all,
> > > > .iotlb_sync = viommu_iotlb_sync,
> > > > .iotlb_sync_map = viommu_iotlb_sync_map,
> > > >...
2020 Feb 04
0
[PATCH 5/5] vdpasim: vDPA device simulator
On Tue, Feb 04, 2020 at 04:28:27PM +0800, Jason Wang wrote:
>
> On 2020/2/4 4:21 PM, Zhu Lingshan wrote:
> > > +static const struct dma_map_ops vdpasim_dma_ops = {
> > > +	.map_page = vdpasim_map_page,
> > > +	.unmap_page = vdpasim_unmap_page,
> > > +	.alloc = vdpasim_alloc_coherent,
> > > +	.free = vdpasim_free_coherent,
> > > +};
> > > +
> >
> > Hey Jason,
> >
> > IMHO, it would be nice if dma_ops of the parent device could be re-used.
> > v...
2023 Jan 20
0
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
...n for any level of
the API internals to pick up as appropriate, rather than propagate
per-call gfp flags everywhere. As it stands we're still missing
potential pagetable and other domain-related allocations by drivers in
.attach_dev and even (in probably-shouldn't-really-happen cases)
.unmap_pages...
Thanks,
Robin.
> Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
> ---
> drivers/iommu/dma-iommu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 8c2788633c1766..e4bf1bb15...
2023 Sep 04
0
[PATCH 1/2] iommu/virtio: Make use of ops->iotlb_sync_map
...> + return viommu_sync_req(vdomain->viommu);
> +}
> +
> static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> {
> struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> @@ -1058,6 +1070,7 @@ static struct iommu_ops viommu_ops = {
> .unmap_pages = viommu_unmap_pages,
> .iova_to_phys = viommu_iova_to_phys,
> .iotlb_sync = viommu_iotlb_sync,
> + .iotlb_sync_map = viommu_iotlb_sync_map,
> .free = viommu_domain_free,
> }
> };
>
> --
> 2.39.2
>
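Filling in the truncated hunk, the callback presumably reads in full (a reconstruction: the signature comes from struct iommu_ops, only the return statement is visible above):

	static int viommu_iotlb_sync_map(struct iommu_domain *domain,
					 unsigned long iova, size_t size)
	{
		struct viommu_domain *vdomain = to_viommu_domain(domain);

		/* Make all queued MAP requests visible to the device. */
		return viommu_sync_req(vdomain->viommu);
	}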
2013 Dec 09
1
[PATCH] xen/arm64: do not call the swiotlb functions twice
On arm64 the dma_map_ops implementation is based on the swiotlb.
swiotlb-xen, used by default in dom0 on Xen, is also based on the
swiotlb.
Avoid calling into the default arm64 dma_map_ops functions from
xen_dma_map_page, xen_dma_unmap_page, xen_dma_sync_single_for_cpu, and
xen_dma_sync_single_for_device otherwise we end up calling into the
swiotlb twice.
When arm64 gets a non-swiotlb based implementation of dma_map_ops, we'll
probably have to reintroduce dma_map_ops calls in page-coherent.h.
Signed-off-by: Stefano Stabelli...
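In effect the fix turns the arm64 xen_dma_* helpers into no-ops; schematically (paraphrased, not the exact hunk, with the signature assumed from that era's API):

	static inline void xen_dma_map_page(struct device *hwdev,
			struct page *page, unsigned long offset, size_t size,
			enum dma_data_direction dir, struct dma_attrs *attrs)
	{
		/* swiotlb-xen already performed the mapping work on arm64;
		 * calling the default ops here would bounce through the
		 * swiotlb a second time. */
	}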
2018 Jul 20
0
[RFC 1/4] virtio: Define virtio_direct_dma_ops structure
...n IOVA in absence of VIRTIO_F_IOMMU_PLATFORM.
+ */
+dma_addr_t virtio_direct_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ unsigned long attrs)
+{
+ return page_to_phys(page) + offset;
+}
+
+void virtio_direct_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs)
+{
+}
+
+int virtio_direct_mapping_error(struct device *hwdev, dma_addr_t dma_addr)
+{
+ return 0;
+}
+
+void *virtio_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handl...
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...+ anon_vma_lock_write(anon_vma);
}
end = -1;
mapping = NULL;
- anon_vma_lock_write(anon_vma);
} else {
mapping = head->mapping;
@@ -2686,13 +2719,19 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
/*
* Racy check if we can split the page, before unmap_page() will
* split PMDs
+ * If we are splitting a migrating THP, there is no check needed
+ * because the page is already unmapped and isolated from the LRU.
*/
- if (!can_split_huge_page(head, &extra_pins)) {
+ if (!remap)
+ extra_pins = thp_nr_pages(page) - 1 +
+ is_device_private_page...
2023 May 15
3
[PATCH v2 0/2] iommu/virtio: Fixes
One fix reported by Akihiko, and another found while going over the
driver.
Jean-Philippe Brucker (2):
iommu/virtio: Detach domain on endpoint release
iommu/virtio: Return size mapped for a detached domain
drivers/iommu/virtio-iommu.c | 57 ++++++++++++++++++++++++++----------
1 file changed, 41 insertions(+), 16 deletions(-)
--
2.40.0