search for: io_tlb_shift

Displaying 8 results from an estimated 8 matches for "io_tlb_shift".

2013 Jan 24
1
[PATCH 35/35] x86: Don't panic if can not alloc buffer for swiotlb
...rc = 0; } else rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs); diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 071d62c..2de42f9 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -23,7 +23,7 @@ extern int swiotlb_force; #define IO_TLB_SHIFT 11 extern void swiotlb_init(int verbose); -extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose); +int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose); extern unsigned long swiotlb_nr_tbl(void); extern int swiotlb_late_init_with_tbl(char *tlb, unsi...
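
The hunk shown above changes swiotlb_init_with_tbl() from returning void to returning int, so early boot code can react to a failed bounce-buffer allocation instead of panicking. A minimal caller sketch follows (hypothetical code, not part of the patch; the helper name and the warning message are assumptions), using only the prototype visible in the hunk.

    #include <linux/init.h>
    #include <linux/printk.h>
    #include <linux/swiotlb.h>

    /* Hypothetical early-init caller: propagate failure instead of panicking. */
    static void __init example_swiotlb_setup(char *tlb, unsigned long nslabs)
    {
    	/* Each slab is 1 << IO_TLB_SHIFT bytes (2 KiB with IO_TLB_SHIFT == 11). */
    	if (swiotlb_init_with_tbl(tlb, nslabs, 1 /* verbose */))
    		pr_warn("swiotlb: init failed, continuing without bounce buffers\n");
    }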
2008 Dec 22
17
[PATCH 0 of 9] swiotlb: use phys_addr_t for pages
Hi all, Here's a work-in-progress series which does a partial revert of the previous swiotlb changes, and a partial replacement with Becky Bruce's series. The most important difference is Becky's use of phys_addr_t rather than page+offset to represent arbitrary pages. This turns out to be simpler. I didn't replicate the map_single_page changes, since
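
A minimal sketch of the representation change described above (hypothetical helper name; assumes page_to_phys() pulled in via <linux/io.h>): a single phys_addr_t can stand in for the (page, offset) pair, including highmem pages that have no kernel virtual address.

    #include <linux/io.h>		/* page_to_phys() via asm/io.h */
    #include <linux/mm_types.h>	/* struct page */
    #include <linux/types.h>	/* phys_addr_t */

    /* Hypothetical helper: collapse a (page, offset) pair into one phys_addr_t. */
    static inline phys_addr_t example_page_to_phys_addr(struct page *page,
    						    unsigned long offset)
    {
    	return page_to_phys(page) + offset;
    }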
2008 Nov 13
69
[PATCH 00 of 38] xen: add more Xen dom0 support
Hi Ingo, Here's the chunk of patches to add Xen Dom0 support (it's probably worth creating a new xen/dom0 topic branch for it). A dom0 Xen domain is basically the same as a normal domU domain, but it has extra privileges to directly access hardware. There are two issues to deal with: - translating to and from the domain's pseudo-physical addresses and real machine
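
A conceptual sketch of that first issue (hypothetical names and a hypothetical p2m[] array, not the Xen implementation): a paravirtualized domain sees contiguous pseudo-physical frame numbers and must translate them to the machine frame numbers the hypervisor actually assigned, typically through a per-domain lookup table.

    #include <linux/mm.h>	/* PAGE_SHIFT, PAGE_MASK */

    static unsigned long *example_p2m;	/* hypothetical table: indexed by pfn, holds mfn */

    static unsigned long example_pfn_to_mfn(unsigned long pfn)
    {
    	return example_p2m[pfn];
    }

    static unsigned long example_phys_to_machine(unsigned long paddr)
    {
    	/* Translate the frame number, keep the offset within the page. */
    	return (example_pfn_to_mfn(paddr >> PAGE_SHIFT) << PAGE_SHIFT) |
    	       (paddr & ~PAGE_MASK);
    }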
2019 Feb 07
0
[PATCH v7 2/5] swiotlb: Add is_swiotlb_active() function
...*/ extern void swiotlb_print_info(void); diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 9cb21259cb0b..c873f9cc2146 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -667,3 +667,12 @@ size_t swiotlb_max_mapping_size(struct device *dev) { return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE; } + +bool is_swiotlb_active(void) +{ + /* + * When SWIOTLB is initialized, even if io_tlb_start points to physical + * address zero, io_tlb_end surely doesn't. + */ + return io_tlb_end != 0; +} -- 2.17.1
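
A hedged usage sketch (hypothetical driver helper, not taken from the series): a driver that cannot split large requests can query both helpers to cap its per-request size whenever bouncing through SWIOTLB is possible. With IO_TLB_SHIFT = 11 (2 KiB slabs) and IO_TLB_SEGSIZE = 128, swiotlb_max_mapping_size() works out to 2 KiB * 128 = 256 KiB.

    #include <linux/kernel.h>	/* min() */
    #include <linux/swiotlb.h>	/* is_swiotlb_active(), swiotlb_max_mapping_size() */

    /* Hypothetical helper: bound the per-request size by what SWIOTLB can map. */
    static size_t example_max_transfer_size(struct device *dev, size_t hw_limit)
    {
    	if (is_swiotlb_active())
    		return min(hw_limit, swiotlb_max_mapping_size(dev));
    	return hw_limit;
    }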
2019 Feb 07
5
[PATCH v7 0/5] Fix virtio-blk issue with SWIOTLB
Hi, here is the next version of this patch-set. Previous versions can be found here: V1: https://lore.kernel.org/lkml/20190110134433.15672-1-joro at 8bytes.org/ V2: https://lore.kernel.org/lkml/20190115132257.6426-1-joro at 8bytes.org/ V3: https://lore.kernel.org/lkml/20190123163049.24863-1-joro at 8bytes.org/ V4: https://lore.kernel.org/lkml/20190129084342.26030-1-joro at 8bytes.org/
2011 Sep 06
3
Memory fragmentation and PCI passthrough
Hello, I've hit a known problem with dynamic memory management - memory fragmentation... This dynamic memory management basically does xl mem-set to balance memory. After the system has been running for some time, Xen memory is so fragmented that it is impossible to start a new VM with a PCI device. Sometimes it crashes during boot (no 64MB contiguous memory for SWIOTLB), or later - e.g. iwlagn cannot
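
For scale (an aside, assuming the kernel's default 64 MB bounce buffer the post refers to): the SWIOTLB needs that 64 MB as a single physically contiguous block, and with IO_TLB_SHIFT = 11 it is carved into 2 KiB slabs.

    /* Back-of-the-envelope: 64 MB SWIOTLB carved into 2 KiB slabs. */
    #define EXAMPLE_IO_TLB_SHIFT	11		/* 1 << 11 = 2 KiB per slab */
    #define EXAMPLE_SWIOTLB_BYTES	(64UL << 20)	/* 64 MB contiguous allocation */
    #define EXAMPLE_NSLABS		(EXAMPLE_SWIOTLB_BYTES >> EXAMPLE_IO_TLB_SHIFT)	/* = 32768 slabs */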
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
Hi all, this patch series enables xen-swiotlb on arm and arm64. It has been heavily reworked compared to the previous versions in order to achieve better performance and to address review comments. We are not using dma_mark_clean to ensure coherency anymore. We call the platform implementation of map_page and unmap_page. We assume that dom0 has been mapped 1:1 (physical address == machine
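
An illustration of that 1:1 assumption (hypothetical helper, not the series' code): when physical and machine addresses coincide, the bus address handed to a device is simply the CPU physical address, so no pseudo-physical-to-machine translation is needed on that path.

    #include <linux/types.h>	/* phys_addr_t, dma_addr_t */

    /* Hypothetical helper, valid only under the 1:1 dom0 mapping. */
    static inline dma_addr_t example_phys_to_bus_1to1(phys_addr_t paddr)
    {
    	return (dma_addr_t)paddr;
    }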