search for: io_tlb_segsize

Displaying 7 results from an estimated 7 matches for "io_tlb_segsize".

2019 Feb 07
0
[PATCH v7 2/5] swiotlb: Add is_swiotlb_active() function
...d swiotlb_print_info(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 9cb21259cb0b..c873f9cc2146 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -667,3 +667,12 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 {
 	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
 }
+
+bool is_swiotlb_active(void)
+{
+	/*
+	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
+	 * address zero, io_tlb_end surely doesn't.
+	 */
+	return io_tlb_end != 0;
+}
-- 
2.17.1
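
For context, a minimal sketch of how a driver might consume the two helpers this patch exposes, swiotlb_max_mapping_size() and is_swiotlb_active(); the wrapper function name and the clamping policy below are illustrative assumptions, not code from the series:

/*
 * Illustrative sketch only (not part of the patch): clamp a driver's
 * advertised maximum transfer size when bounce buffering is in use.
 */
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/swiotlb.h>

static size_t my_max_transfer_size(struct device *dev, size_t hw_max)
{
	/*
	 * When SWIOTLB is active, mappings larger than its segment limit
	 * will fail, so never advertise more than it can bounce.
	 */
	if (is_swiotlb_active())
		return min(hw_max, swiotlb_max_mapping_size(dev));

	return hw_max;
}
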
2019 Feb 07
5
[PATCH v7 0/5] Fix virtio-blk issue with SWIOTLB
Hi, here is the next version of this patch-set. Previous versions can be found here: V1: https://lore.kernel.org/lkml/20190110134433.15672-1-joro at 8bytes.org/ V2: https://lore.kernel.org/lkml/20190115132257.6426-1-joro at 8bytes.org/ V3: https://lore.kernel.org/lkml/20190123163049.24863-1-joro at 8bytes.org/ V4: https://lore.kernel.org/lkml/20190129084342.26030-1-joro at 8bytes.org/
2008 Dec 22
17
[PATCH 0 of 9] swiotlb: use phys_addr_t for pages
Hi all, Here's a work-in-progress series which does a partial revert of the previous swiotlb changes, and a partial replacement with Becky Bruce's series. The most important difference is Becky's use of phys_addr_t rather than page+offset to represent arbitrary pages. This turns out to be simpler. I didn't replicate the map_single_page changes, since
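
To make the design point concrete, a hedged sketch (the helper below is hypothetical, not code from either series) of why a single phys_addr_t is simpler to carry around than a page + offset pair:

/*
 * Hypothetical helper, for illustration only: a (page, offset) pair
 * collapses into one phys_addr_t, which is easier to pass through the
 * swiotlb bounce-buffer paths than two separate values.
 */
#include <linux/mm.h>
#include <linux/io.h>

static phys_addr_t buffer_to_phys(struct page *page, unsigned long offset)
{
	/* page_to_phys() returns the physical base address of the frame. */
	return page_to_phys(page) + offset;
}
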
2011 Sep 06
3
Memory fragmentation and PCI passthrough
Hello, I've hit the known problem with dynamic memory management - memory fragmentation... This dynamic memory management basically does xl mem-set to balance memory. After some time of running the system, Xen memory is so fragmented that it is impossible to start a new VM with a PCI device. Sometimes it crashes during boot (no 64MB contiguous memory for SWIOTLB), or later - e.g. iwlagn cannot
2013 Jun 24
3
[konrad.wilk@oracle.com: [PATCH] drm/i915: make compact dma scatter lists creation work with SWIOTLB backend.]
...ugging traced it down to dma_map_sg failing (in i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB). Those are unfortunately sizes that the SWIOTLB is incapable of handling - the maximum it can handle is an entry of 512KB of virtually contiguous memory for its bounce buffer (see IO_TLB_SEGSIZE). Prior to the above-mentioned git commit the SG entries were 4KB each, and the code introduced by that commit squashed CPU-contiguous PFNs into one big virtual address provided to the DMA API. This patch is a simple semi-revert - where we emulate the old behavior if we detect that SWIOTLB is on...
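
A minimal sketch of the idea described above (the helper is hypothetical; only the one-page-per-entry policy reflects the patch): when SWIOTLB may bounce the buffer, fill the scatterlist one page per entry so no segment exceeds what the bounce buffer can map:

#include <linux/mm.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical helper, for illustration: build the sg list page by page
 * instead of coalescing contiguous pages, so every entry stays well under
 * the SWIOTLB bounce-buffer segment limit (IO_TLB_SEGSIZE slots).
 * The caller is assumed to have allocated and sg_init_table()'d the list
 * with nr_pages entries.
 */
static void fill_sg_page_by_page(struct scatterlist *sg,
				 struct page **pages, int nr_pages)
{
	int i;

	for (i = 0; i < nr_pages; i++) {
		sg_set_page(sg, pages[i], PAGE_SIZE, 0);
		sg = sg_next(sg);
	}
}
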
2008 Nov 13
69
[PATCH 00 of 38] xen: add more Xen dom0 support
Hi Ingo, Here's the chunk of patches to add Xen Dom0 support (it's probably worth creating a new xen/dom0 topic branch for it). A dom0 Xen domain is basically the same as a normal domU domain, but it has extra privileges to directly access hardware. There are two issues to deal with: - translating to and from the domain's pseudo-physical addresses and real machine
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
Hi all, this patch series enables xen-swiotlb on arm and arm64. It has been heavily reworked compared to the previous versions in order to achieve better performance and to address review comments. We are not using dma_mark_clean to ensure coherency anymore. We call the platform implementation of map_page and unmap_page. We assume that dom0 has been mapped 1:1 (physical address == machine
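
To illustrate the 1:1 assumption mentioned above (a sketch with a hypothetical name, not the series' code): with dom0 mapped 1:1, translating a guest physical address to a machine/bus address is the identity, which is what lets swiotlb-xen hand buffers straight to the platform's map_page/unmap_page implementations for cache maintenance:

#include <linux/types.h>

/*
 * Hypothetical helper, for illustration only: under a 1:1 dom0 mapping the
 * pseudo-physical to machine translation degenerates to the identity, so a
 * physical address can be used directly as the bus address.
 */
static inline dma_addr_t sketch_phys_to_bus(phys_addr_t paddr)
{
	return (dma_addr_t)paddr;	/* physical address == machine address */
}
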