search for: swiotlb_tbl_map_single

Displaying 20 results from an estimated 21 matches for "swiotlb_tbl_map_single".

2018 May 10
4
kernel spew from nouveau/ swiotlb
...: swiotlb buffer is full (sz: 2097152 bytes) [12594.713787] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12594.743413] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12594.796740] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12607.000774] swiotlb_tbl_map_single: 54 callbacks suppressed [12607.000776] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12607.347941] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12608.677038] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) homer:/novell/ssh # dmesg|grep ...
2018 May 11
2
kernel spew from nouveau/ swiotlb
...tes) > > [12594.713787] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > > [12594.743413] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > > [12594.796740] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > > [12607.000774] swiotlb_tbl_map_single: 54 callbacks suppressed > > [12607.000776] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > > [12607.347941] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > > [12608.677038] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) >...
2018 May 11
0
[patch] swiotlb: fix ignored DMA_ATTR_NO_WARN request
In the trace below, swiotlb_alloc() is called with __GFP_NOWARN, it ORs attrs with DMA_ATTR_NO_WARN and passes it to swiotlb_alloc_buffer(), which does NOT pass it on to swiotlb_tbl_map_single(), leading to an ever-repeating warning that the caller of swiotlb_alloc() explicitly asked to be squelched. Pass the caller's request for silence onward. Xorg-3170 [006] .... 963.866098: swiotlb_alloc+0x1d/0x1a0: gfp & __GFP_NOWARN Xorg-3170 [006] .... 963.866101: <stack trace...
2013 Aug 22
2
[PATCH] tracing/events: Add bounce tracing to swiotbl-xen
...wiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this @@ -358,6 +362,9 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page, /* * Oh well, have to allocate and map a bounce buffer. */ + + trace_bounced(dev, dev_addr, size, swiotlb_force); + map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir); if (map == SWIOTLB_MAP_ERROR) return DMA_ERROR_CODE; diff --git a/include/trace/events/swiotlb-xen.h b/include/trace/events/swiotlb-xen.h new file mode 100644 index 0000000..cbe2dca --- /dev/null +++ b/include/trace/events/swiotlb-xen.h @@ -0,0 +1,46 @@ +...
2018 May 10
1
kernel spew from nouveau/ swiotlb
...oherent() flags |= __GFP_NOWARN; swiotlb_alloc_coherent(..flags) swiotlb_alloc_coherent(..flags) attrs = (flags & __GFP_NOWARN) ? DMA_ATTR_NO_WARN : 0; swiotlb_alloc_buffer(..attr) swiotlb_alloc_buffer(..0) <== hm, pass zero instead of attr? swiotlb_tbl_map_single() gripeage ...that? -Mike
2013 Sep 04
1
[PATCHv2] tracing/events: Add bounce tracing to swiotbl
..._tbl_sync_single_*, to see if the memory was in fact allocated by this @@ -358,6 +361,8 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page, /* * Oh well, have to allocate and map a bounce buffer. */ + trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force); + map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir); if (map == SWIOTLB_MAP_ERROR) return DMA_ERROR_CODE; diff --git a/include/trace/events/swiotlb.h b/include/trace/events/swiotlb.h new file mode 100644 index 0000000..6d21410 --- /dev/null +++ b/include/trace/events/swiotlb.h @@ -0,0 +1,46 @@ +#undef TRACE...
2018 May 10
0
kernel spew from nouveau/ swiotlb
...t; bytes) > [12594.713787] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 > bytes) > [12594.743413] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 > bytes) > [12594.796740] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 > bytes) > [12607.000774] swiotlb_tbl_map_single: 54 callbacks suppressed > [12607.000776] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 > bytes) > [12607.347941] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 > bytes) > [12608.677038] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 > bytes) >...
2018 May 10
0
kernel spew from nouveau/ swiotlb
...full (sz: 2097152 bytes) > [12594.713787] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > [12594.743413] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > [12594.796740] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > [12607.000774] swiotlb_tbl_map_single: 54 callbacks suppressed > [12607.000776] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > [12607.347941] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > [12608.677038] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > homer:/novell/...
2013 Jan 24
1
[PATCH 35/35] x86: Don't panic if can not alloc buffer for swiotlb
...t_with_default_size(64 * (1<<20), verbose); /* default to 64MB */ + if (io_tlb_start) + free_bootmem(io_tlb_start, + PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT)); + pr_warn("Cannot allocate SWIOTLB buffer"); + no_iotlb_memory = true; } /* @@ -405,6 +413,9 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, unsigned long offset_slots; unsigned long max_slots; + if (no_iotlb_memory) + panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer"); + mask = dma_get_seg_boundary(hwdev); tbl_dma_addr &= mask; -- 1...
2019 Dec 21
0
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
...- size = iova_align(iovad, size + iova_off); +#ifdef CONFIG_SWIOTLB + /* + * If both the physical buffer start address and size are + * page aligned, we don't need to use a bounce page. + */ + if (iommu_needs_bounce_buffer(dev) + && !iova_offset(iovad, phys | org_size)) { + phys = swiotlb_tbl_map_single(dev, + __phys_to_dma(dev, io_tlb_start), + phys, org_size, aligned_size, dir, attrs); + + if (phys == DMA_MAPPING_ERROR) + return DMA_MAPPING_ERROR; + + /* Cleanup the padding area. */ + void *padding_start = phys_to_virt(phys); + size_t padding_size = aligned_size; + + if (!(attrs &am...
2020 Apr 29
0
[PATCH 1/5] swiotlb: Introduce concept of swiotlb_pool
...f2d4e Lu Baolu 2019-09-06 3986 * page aligned, we don't need to use a bounce page. cfb94a372f2d4e Lu Baolu 2019-09-06 3987 */ cfb94a372f2d4e Lu Baolu 2019-09-06 3988 if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) { cfb94a372f2d4e Lu Baolu 2019-09-06 3989 tlb_addr = swiotlb_tbl_map_single(dev, cfb94a372f2d4e Lu Baolu 2019-09-06 @3990 __phys_to_dma(dev, io_tlb_start), cfb94a372f2d4e Lu Baolu 2019-09-06 3991 paddr, size, aligned_size, dir, attrs); cfb94a372f2d4e Lu Baolu 2019-09-06 3992 if (tlb_addr == DMA_MAPPING_ERROR) { cfb94a372f2d4e Lu Baolu 2019-09...
2012 Oct 12
13
Dom0 physical networking/swiotlb/something issue in 3.7-rc1
Hi Konrad, The following patch causes fairly large packet loss when transmitting from dom0 to the physical network, at least with my tg3 hardware, but I assume it can impact anything which uses this interface. I suspect that the issue is that the compound pages allocated in this way are not backed by contiguous mfns and so things fall apart when the driver tries to do DMA. However I
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api. While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here: https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg This issue is most likely in the i915 driver and is most likely caused by the
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering can finally be properly supported. I'm still a