Displaying 14 results from an estimated 14 matches for "io_tlb_start".
2013 Jan 24
1
[PATCH 35/35] x86: Don't panic if can not alloc buffer for swiotlb
...mips_dma_map_ops = &octeon_linear_dma_map_ops.dma_map_ops;
}
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index af47e75..1d94316 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -231,7 +231,9 @@ retry:
}
start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
if (early) {
- swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
+ if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
+ verbose))
+ panic("Cannot allocate SWIOTLB buffer");
rc = 0;
} else
rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, x...
2020 Apr 29
0
[PATCH 1/5] swiotlb: Introduce concept of swiotlb_pool
...64
If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp at intel.com>
All errors (new ones prefixed by >>):
drivers/iommu/intel-iommu.c: In function 'bounce_map_single':
>> drivers/iommu/intel-iommu.c:3990:24: error: 'io_tlb_start' undeclared (first use in this function); did you mean 'swiotlb_start'?
__phys_to_dma(dev, io_tlb_start),
^~~~~~~~~~~~
swiotlb_start
drivers/iommu/intel-iommu.c:3990:24: note: each undeclared identifier is reported only on...
2008 Dec 22
17
[PATCH 0 of 9] swiotlb: use phys_addr_t for pages
Hi all,
Here's a work-in-progress series which does a partial revert of the
previous swiotlb changes and a partial replacement with Becky
Bruce's series.
The most important difference is Becky''s use of phys_addr_t rather
than page+offset to represent arbitrary pages. This turns out to be
simpler.
I didn't replicate the map_single_page changes, since
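The two representations are interchangeable, which is what makes a single phys_addr_t enough to name an arbitrary page; a minimal sketch of the conversion (illustrative helpers, not code from the series):

#include <linux/io.h>
#include <linux/mm.h>
#include <linux/pfn.h>

/* page + offset -> a single phys_addr_t describing the same byte */
static inline phys_addr_t buf_to_phys(struct page *page, unsigned long offset)
{
	return page_to_phys(page) + offset;
}

/* and back: recover the page (the low bits are the offset within it) */
static inline struct page *buf_to_page(phys_addr_t paddr)
{
	return pfn_to_page(PFN_DOWN(paddr));
}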
2008 Nov 13
69
[PATCH 00 of 38] xen: add more Xen dom0 support
Hi Ingo,
Here's the chunk of patches to add Xen Dom0 support (it's probably
worth creating a new xen/dom0 topic branch for it).
A dom0 Xen domain is basically the same as a normal domU domain, but
it has extra privileges to directly access hardware. There are two
issues to deal with:
- translating to and from the domain's pseudo-physical addresses and
real machine
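The address-translation half of that issue boils down to the existing pfn_to_mfn()/mfn_to_pfn() accessors; a rough sketch of the idea (the wrapper names below are made up for illustration):

#include <asm/xen/page.h>

/* dom0 must hand real machine frames to hardware it drives directly */
static unsigned long guest_to_machine_frame(unsigned long pfn)
{
	return pfn_to_mfn(pfn);		/* pseudo-physical -> machine */
}

static unsigned long machine_to_guest_frame(unsigned long mfn)
{
	return mfn_to_pfn(mfn);		/* machine -> pseudo-physical */
}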
2018 May 11
0
[patch] swiotlb: fix ignored DMA_ATTR_NO_WARN request
...Signed-off-by: Mike Galbraith <efault at gmx.de>
---
lib/swiotlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -714,7 +714,7 @@ swiotlb_alloc_buffer(struct device *dev,
phys_addr = swiotlb_tbl_map_single(dev,
__phys_to_dma(dev, io_tlb_start),
- 0, size, DMA_FROM_DEVICE, 0);
+ 0, size, DMA_FROM_DEVICE, attrs);
if (phys_addr == SWIOTLB_MAP_ERROR)
goto out_warn;
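For context, a sketch of the caller-side contract the fix restores (placeholder driver code, not part of the patch): DMA_ATTR_NO_WARN means the driver handles allocation failure itself, so the bounce-buffer path should stay silent as well.

#include <linux/dma-mapping.h>

static void *alloc_dma_quietly(struct device *dev, size_t size,
			       dma_addr_t *dma_handle)
{
	/* caller copes with NULL itself; no allocation-failure spew wanted */
	return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL,
			       DMA_ATTR_NO_WARN);
}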
2019 Feb 07
0
[PATCH v7 2/5] swiotlb: Add is_swiotlb_active() function
...0b..c873f9cc2146 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -667,3 +667,12 @@ size_t swiotlb_max_mapping_size(struct device *dev)
{
return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
}
+
+bool is_swiotlb_active(void)
+{
+ /*
+ * When SWIOTLB is initialized, even if io_tlb_start points to physical
+ * address zero, io_tlb_end surely doesn't.
+ */
+ return io_tlb_end != 0;
+}
--
2.17.1
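An illustrative caller of the new helper (not part of the patch): combined with swiotlb_max_mapping_size() from the same file, a driver can cap its request sizes whenever bouncing through the swiotlb is possible.

#include <linux/kernel.h>
#include <linux/swiotlb.h>

static size_t cap_mapping_size(struct device *dev, size_t want)
{
	/* if the swiotlb may bounce us, stay within one bounce mapping */
	if (is_swiotlb_active())
		want = min(want, swiotlb_max_mapping_size(dev));
	return want;
}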
2018 May 11
2
kernel spew from nouveau/ swiotlb
On Thu, 2018-05-10 at 12:28 +0200, Mike Galbraith wrote:
> On Thu, 2018-05-10 at 11:10 +0200, Mike Galbraith wrote:
> > Greetings,
> >
> > When box is earning its keep, nouveau/swiotlb grumble.. a LOT. The
> > below is from master.today.
> >
> > [12594.640959] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes)
> > [12594.693000] nouveau
2019 Dec 21
0
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
...CONFIG_SWIOTLB
+ /*
+ * If both the physical buffer start address and size are
+ * page aligned, we don't need to use a bounce page.
+ */
+ if (iommu_needs_bounce_buffer(dev)
+ && !iova_offset(iovad, phys | org_size)) {
+ phys = swiotlb_tbl_map_single(dev,
+ __phys_to_dma(dev, io_tlb_start),
+ phys, org_size, aligned_size, dir, attrs);
+
+ if (phys == DMA_MAPPING_ERROR)
+ return DMA_MAPPING_ERROR;
+
+ /* Cleanup the padding area. */
+ void *padding_start = phys_to_virt(phys);
+ size_t padding_size = aligned_size;
+
+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+...
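The comment in the hunk compresses the alignment rule into one expression; a standalone restatement of that test (illustrative names only, not the patch's code):

#include <linux/types.h>

/*
 * OR-ing the start address and the size exposes sub-granule bits in
 * either value with a single mask test: only a fully aligned buffer can
 * skip the bounce-and-pad path.
 */
static bool has_subgranule_bits(phys_addr_t phys, size_t size,
				unsigned long granule)
{
	return (phys | size) & (granule - 1);	/* granule is a power of two */
}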
2019 Feb 07
5
[PATCH v7 0/5] Fix virtio-blk issue with SWIOTLB
Hi,
here is the next version of this patch-set. Previous
versions can be found here:
V1: https://lore.kernel.org/lkml/20190110134433.15672-1-joro at 8bytes.org/
V2: https://lore.kernel.org/lkml/20190115132257.6426-1-joro at 8bytes.org/
V3: https://lore.kernel.org/lkml/20190123163049.24863-1-joro at 8bytes.org/
V4: https://lore.kernel.org/lkml/20190129084342.26030-1-joro at 8bytes.org/
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api.
While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
This issue is most likely in the i915 driver and is most likely caused by the
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all,
this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages without incurring bounce buffering can finally
be properly supported.
I'm still a
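As merged, the driver-facing side of the new interface looks roughly like the sketch below (placeholder wrapper names; dma_alloc_pages()/dma_free_pages() are the calls the series adds):

#include <linux/dma-mapping.h>

static struct page *grab_dma_pages(struct device *dev, size_t size,
				   dma_addr_t *dma)
{
	/* non-coherent allocation; no DMA_ATTR_NON_CONSISTENT flag needed */
	return dma_alloc_pages(dev, size, dma, DMA_BIDIRECTIONAL, GFP_KERNEL);
}

static void put_dma_pages(struct device *dev, size_t size,
			  struct page *page, dma_addr_t dma)
{
	dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);
}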
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
..._lock: the red-black tree is not modified at run time;
- add "swiotlb-xen: introduce xen_swiotlb_set_dma_mask";
- add "xen: introduce xen_alloc/free_coherent_pages";
- add "swiotlb-xen: use xen_alloc/free_coherent_pages";
- add "swiotlb: don''t assume that io_tlb_start-io_tlb_end is coherent".
Changes in v4:
- rename XENMEM_get_dma_buf to XENMEM_exchange_and_pin;
- rename XENMEM_put_dma_buf to XENMEM_unpin;
- improve the documentation of the new hypercalls;
- add a note about out.address_bits for XENMEM_exchange;
- code style fixes;
- add err_out label in x...