Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [GIT PULL REPOST] xen/dom0/pci - Xen dom0 PCI access
Hi Ingo,

This is a repost of the Xen PCI access changes. There are no differences
from the last repost, and no outstanding issues.

This branch adds the core pieces to allow PCI access and DMA to work,
including mapping of device memory into the Xen domain and rearranging
the guest kernel memory to be physically contiguous for DMA.

Please pull into tip.git as x86/xen/dom0/pci.

Thanks,
	J

The following changes since commit ce791368bb4a53d05e78e1588bac0aacde8db84c:

  Jeremy Fitzhardinge (1):
        xen/i386: make sure initial VGA/ISA mappings are not overridden

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git for-ingo/xen/dom0/pci

Alex Nixon (7):
      xen: Don't disable the I/O space
      xen: Allow unprivileged Xen domains to create iomap pages
      Xen: Rename the balloon lock
      xen: Add xen_create_contiguous_region
      x86/PCI: Clean up pci_cache_line_size
      x86/PCI: Enable scanning of all pci functions
      Xen/x86/PCI: Add support for the Xen PCI subsystem

Jeremy Fitzhardinge (3):
      x86/pci: make sure _PAGE_IOMAP is set on pci mappings
      xen/pci: clean up Kconfig a bit
      xen: define BIOVEC_PHYS_MERGEABLE()

 arch/x86/Kconfig                 |    4 +
 arch/x86/include/asm/io.h        |   15 ++
 arch/x86/include/asm/pci.h       |    8 +-
 arch/x86/include/asm/pci_x86.h   |    2 +
 arch/x86/include/asm/xen/iommu.h |   12 ++
 arch/x86/kernel/pci-dma.c        |    3 +
 arch/x86/pci/Makefile            |    1 +
 arch/x86/pci/common.c            |   18 ++-
 arch/x86/pci/i386.c              |    3 +
 arch/x86/pci/init.c              |    6 +
 arch/x86/pci/xen.c               |   51 +++++++
 arch/x86/xen/Kconfig             |    2 +
 arch/x86/xen/enlighten.c         |    6 +-
 arch/x86/xen/mmu.c               |  225 +++++++++++++++++++++++++++++++-
 arch/x86/xen/setup.c             |    3 -
 drivers/pci/Makefile             |    2 +
 drivers/pci/xen-iommu.c          |  271 ++++++++++++++++++++++++++++++++++++++
 drivers/xen/Makefile             |    2 +-
 drivers/xen/balloon.c            |   15 +--
 drivers/xen/biomerge.c           |   14 ++
 include/asm-generic/pci.h        |    2 +
 include/xen/interface/memory.h   |   50 +++++++
 include/xen/xen-ops.h            |    6 +
 23 files changed, 693 insertions(+), 28 deletions(-)
 create mode 100644 arch/x86/include/asm/xen/iommu.h
 create mode 100644 arch/x86/pci/xen.c
 create mode 100644 drivers/pci/xen-iommu.c
 create mode 100644 drivers/xen/biomerge.c

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 01/10] xen: Don't disable the I/O space
From: Alex Nixon <alex.nixon@citrix.com>

If a guest domain wants to access PCI devices through the frontend
driver (coming later in the patch series), it will need access to the
I/O space.

[ Impact: Allow for domU IO access, preparing for pci passthrough ]

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/setup.c |    3 ---
 1 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 805ae53..2439456 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -230,8 +230,5 @@ void __init xen_arch_setup(void)

 	pm_idle = xen_idle;

-	if (!xen_initial_domain())
-		paravirt_disable_iospace();
-
 	fiddle_vdso();
 }
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 02/10] xen: Allow unprivileged Xen domains to create iomap pages
From: Alex Nixon <alex.nixon@citrix.com>

PV DomU domains are allowed to map hardware MFNs for PCI passthrough,
but are not generally allowed to map raw machine pages.  In particular,
various pieces of code try to map DMI and ACPI tables in the ISA ROM
range.  We disallow _PAGE_IOMAP for those mappings, so that they are
redirected to a set of local zeroed pages we reserve for that purpose.

[ Impact: prevent passthrough of ISA space, as we only allow PCI ]

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/enlighten.c |    6 +++---
 arch/x86/xen/mmu.c       |   18 +++++++++++++++---
 2 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 12e4d9c..adb4fe0 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1051,11 +1051,11 @@ asmlinkage void __init xen_start_kernel(void)

 	/* Prevent unwanted bits from being set in PTEs. */
 	__supported_pte_mask &= ~_PAGE_GLOBAL;
-	if (xen_initial_domain())
-		__supported_pte_mask |= _PAGE_IOMAP;
-	else
+	if (!xen_initial_domain())
 		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);

+	__supported_pte_mask |= _PAGE_IOMAP;
+
 #ifdef CONFIG_X86_64
 	/* Work out if we support NX */
 	check_efer();
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 331e52d..370e1b8 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -49,6 +49,7 @@
 #include <asm/mmu_context.h>
 #include <asm/setup.h>
 #include <asm/paravirt.h>
+#include <asm/e820.h>
 #include <asm/linkage.h>

 #include <asm/xen/hypercall.h>
@@ -378,7 +379,7 @@ static bool xen_page_pinned(void *ptr)

 static bool xen_iomap_pte(pte_t pte)
 {
-	return xen_initial_domain() && (pte_flags(pte) & _PAGE_IOMAP);
+	return pte_flags(pte) & _PAGE_IOMAP;
 }

 static void xen_set_iomap_pte(pte_t *ptep, pte_t pteval)
@@ -580,10 +581,21 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_pgd_val);

 pte_t xen_make_pte(pteval_t pte)
 {
-	if (unlikely(xen_initial_domain() && (pte & _PAGE_IOMAP)))
+	phys_addr_t addr = (pte & PTE_PFN_MASK);
+
+	/*
+	 * Unprivileged domains are allowed to do IOMAPpings for
+	 * PCI passthrough, but not map ISA space.  The ISA
+	 * mappings are just dummy local mappings to keep other
+	 * parts of the kernel happy.
+	 */
+	if (unlikely(pte & _PAGE_IOMAP) &&
+	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
 		pte = iomap_pte(pte);
-	else
+	} else {
+		pte &= ~_PAGE_IOMAP;
 		pte = pte_pfn_to_mfn(pte);
+	}

 	return native_make_pte(pte);
 }
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 03/10] Xen: Rename the balloon lock
From: Alex Nixon <alex.nixon@citrix.com>

* xen_create_contiguous_region needs access to the balloon lock to
  ensure memory doesn't change under its feet, so expose the balloon
  lock
* Change the name of the lock to xen_reservation_lock, to reflect its
  now less-specific usage.

[ Impact: cleanup ]

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/mmu.c             |    7 +++++++
 drivers/xen/balloon.c          |   15 ++++-----------
 include/xen/interface/memory.h |    8 ++++++++
 3 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 370e1b8..9cee943 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -67,6 +67,13 @@

 #define MMU_UPDATE_HISTO	30

+/*
+ * Protects atomic reservation decrease/increase against concurrent increases.
+ * Also protects non-atomic updates of current_pages and driver_pages, and
+ * balloon lists.
+ */
+DEFINE_SPINLOCK(xen_reservation_lock);
+
 #ifdef CONFIG_XEN_DEBUG_FS

 static struct {
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index f5bbd9e..46a8b39 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -84,13 +84,6 @@ static struct sys_device balloon_sysdev;

 static int register_balloon(struct sys_device *sysdev);

-/*
- * Protects atomic reservation decrease/increase against concurrent increases.
- * Also protects non-atomic updates of current_pages and driver_pages, and
- * balloon lists.
- */
-static DEFINE_SPINLOCK(balloon_lock);
-
 static struct balloon_stats balloon_stats;

 /* We increase/decrease in batches which fit in a page */
@@ -209,7 +202,7 @@ static int increase_reservation(unsigned long nr_pages)
 	if (nr_pages > ARRAY_SIZE(frame_list))
 		nr_pages = ARRAY_SIZE(frame_list);

-	spin_lock_irqsave(&balloon_lock, flags);
+	spin_lock_irqsave(&xen_reservation_lock, flags);

 	page = balloon_first_page();
 	for (i = 0; i < nr_pages; i++) {
@@ -267,7 +260,7 @@ static int increase_reservation(unsigned long nr_pages)
 	totalram_pages = balloon_stats.current_pages;

  out:
-	spin_unlock_irqrestore(&balloon_lock, flags);
+	spin_unlock_irqrestore(&xen_reservation_lock, flags);

 	return 0;
 }
@@ -312,7 +305,7 @@ static int decrease_reservation(unsigned long nr_pages)
 	kmap_flush_unused();
 	flush_tlb_all();

-	spin_lock_irqsave(&balloon_lock, flags);
+	spin_lock_irqsave(&xen_reservation_lock, flags);

 	/* No more mappings: invalidate P2M and add to balloon. */
 	for (i = 0; i < nr_pages; i++) {
@@ -329,7 +322,7 @@ static int decrease_reservation(unsigned long nr_pages)
 	balloon_stats.current_pages -= nr_pages;
 	totalram_pages = balloon_stats.current_pages;

-	spin_unlock_irqrestore(&balloon_lock, flags);
+	spin_unlock_irqrestore(&xen_reservation_lock, flags);

 	return need_sleep;
 }
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index f548f7c..9ddf473 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -9,6 +9,8 @@
 #ifndef __XEN_PUBLIC_MEMORY_H__
 #define __XEN_PUBLIC_MEMORY_H__

+#include <linux/spinlock.h>
+
 /*
  * Increase or decrease the specified domain's memory reservation. Returns a
  * -ve errcode on failure, or the # extents successfully allocated or freed.
@@ -184,4 +186,10 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_memory_map);
  */
 #define XENMEM_machine_memory_map	10

+/*
+ * Prevent the balloon driver from changing the memory reservation
+ * during a driver critical region.
+ */
+extern spinlock_t xen_reservation_lock;
+
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 04/10] xen: Add xen_create_contiguous_region
From: Alex Nixon <alex.nixon@citrix.com>

A memory region must be physically contiguous in order to be accessed
through DMA.  This patch adds xen_create_contiguous_region, which
ensures a region of contiguous virtual memory is also physically
contiguous.

Based on Stephen Tweedie's port of the 2.6.18-xen version.  Remove
contiguous_bitmap[] as it's no longer needed.

Ported from linux-2.6.18-xen.hg 707:e410857fd83c

[ Impact: add Xen-internal API to make pages phys-contig ]

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 arch/x86/xen/mmu.c             |  200 ++++++++++++++++++++++++++++++++++++++++
 include/xen/interface/memory.h |   42 +++++++++
 include/xen/xen-ops.h          |    6 +
 3 files changed, 248 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 9cee943..fed27f1 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -51,6 +51,7 @@
 #include <asm/paravirt.h>
 #include <asm/e820.h>
 #include <asm/linkage.h>
+#include <asm/page.h>

 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
@@ -2099,6 +2100,205 @@ const struct pv_mmu_ops xen_mmu_ops __initdata = {

 };

+/* Protected by xen_reservation_lock. */
+#define MAX_CONTIG_ORDER 9 /* 2MB */
+static unsigned long discontig_frames[1<<MAX_CONTIG_ORDER];
+
+#define VOID_PTE (mfn_pte(0, __pgprot(0)))
+static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
+				unsigned long *in_frames,
+				unsigned long *out_frames)
+{
+	int i;
+	struct multicall_space mcs;
+
+	xen_mc_batch();
+	for (i = 0; i < (1UL<<order); i++, vaddr += PAGE_SIZE) {
+		mcs = __xen_mc_entry(0);
+
+		if (in_frames)
+			in_frames[i] = virt_to_mfn(vaddr);
+
+		MULTI_update_va_mapping(mcs.mc, vaddr, VOID_PTE, 0);
+		set_phys_to_machine(virt_to_pfn(vaddr), INVALID_P2M_ENTRY);
+
+		if (out_frames)
+			out_frames[i] = virt_to_pfn(vaddr);
+	}
+	xen_mc_issue(0);
+}
+
+/*
+ * Update the pfn-to-mfn mappings for a virtual address range, either to
+ * point to an array of mfns, or contiguously from a single starting
+ * mfn.
+ */
+static void xen_remap_exchanged_ptes(unsigned long vaddr, int order,
+				     unsigned long *mfns,
+				     unsigned long first_mfn)
+{
+	unsigned i, limit;
+	unsigned long mfn;
+
+	xen_mc_batch();
+
+	limit = 1u << order;
+	for (i = 0; i < limit; i++, vaddr += PAGE_SIZE) {
+		struct multicall_space mcs;
+		unsigned flags;
+
+		mcs = __xen_mc_entry(0);
+		if (mfns)
+			mfn = mfns[i];
+		else
+			mfn = first_mfn + i;
+
+		if (i < (limit - 1))
+			flags = 0;
+		else {
+			if (order == 0)
+				flags = UVMF_INVLPG | UVMF_ALL;
+			else
+				flags = UVMF_TLB_FLUSH | UVMF_ALL;
+		}
+
+		MULTI_update_va_mapping(mcs.mc, vaddr,
+				mfn_pte(mfn, PAGE_KERNEL), flags);
+
+		set_phys_to_machine(virt_to_pfn(vaddr), mfn);
+	}
+
+	xen_mc_issue(0);
+}
+
+/*
+ * Perform the hypercall to exchange a region of our pfns to point to
+ * memory with the required contiguous alignment.  Takes the pfns as
+ * input, and populates mfns as output.
+ *
+ * Returns a success code indicating whether the hypervisor was able to
+ * satisfy the request or not.
+ */
+static int xen_exchange_memory(unsigned long extents_in, unsigned int order_in,
+			       unsigned long *pfns_in,
+			       unsigned long extents_out, unsigned int order_out,
+			       unsigned long *mfns_out,
+			       unsigned int address_bits)
+{
+	long rc;
+	int success;
+
+	struct xen_memory_exchange exchange = {
+		.in = {
+			.nr_extents   = extents_in,
+			.extent_order = order_in,
+			.extent_start = pfns_in,
+			.domid        = DOMID_SELF
+		},
+		.out = {
+			.nr_extents   = extents_out,
+			.extent_order = order_out,
+			.extent_start = mfns_out,
+			.address_bits = address_bits,
+			.domid        = DOMID_SELF
+		}
+	};
+
+	BUG_ON(extents_in << order_in != extents_out << order_out);
+
+	rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);
+	success = (exchange.nr_exchanged == extents_in);
+
+	BUG_ON(!success && ((exchange.nr_exchanged != 0) || (rc == 0)));
+	BUG_ON(success && (rc != 0));
+
+	return success;
+}
+
+int xen_create_contiguous_region(unsigned long vstart, unsigned int order,
+				 unsigned int address_bits)
+{
+	unsigned long *in_frames = discontig_frames, out_frame;
+	unsigned long flags;
+	int success;
+
+	/*
+	 * Currently an auto-translated guest will not perform I/O, nor will
+	 * it require PAE page directories below 4GB. Therefore any calls to
+	 * this function are redundant and can be ignored.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return 0;
+
+	if (unlikely(order > MAX_CONTIG_ORDER))
+		return -ENOMEM;
+
+	memset((void *) vstart, 0, PAGE_SIZE << order);
+
+	vm_unmap_aliases();
+
+	spin_lock_irqsave(&xen_reservation_lock, flags);
+
+	/* 1. Zap current PTEs, remembering MFNs. */
+	xen_zap_pfn_range(vstart, order, in_frames, NULL);
+
+	/* 2. Get a new contiguous memory extent. */
+	out_frame = virt_to_pfn(vstart);
+	success = xen_exchange_memory(1UL << order, 0, in_frames,
+				      1, order, &out_frame,
+				      address_bits);
+
+	/* 3. Map the new extent in place of old pages. */
+	if (success)
+		xen_remap_exchanged_ptes(vstart, order, NULL, out_frame);
+	else
+		xen_remap_exchanged_ptes(vstart, order, in_frames, 0);
+
+	spin_unlock_irqrestore(&xen_reservation_lock, flags);
+
+	return success ? 0 : -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(xen_create_contiguous_region);
+
+void xen_destroy_contiguous_region(unsigned long vstart, unsigned int order)
+{
+	unsigned long *out_frames = discontig_frames, in_frame;
+	unsigned long flags;
+	int success;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	if (unlikely(order > MAX_CONTIG_ORDER))
+		return;
+
+	memset((void *) vstart, 0, PAGE_SIZE << order);
+
+	vm_unmap_aliases();
+
+	spin_lock_irqsave(&xen_reservation_lock, flags);
+
+	/* 1. Find start MFN of contiguous extent. */
+	in_frame = virt_to_mfn(vstart);
+
+	/* 2. Zap current PTEs. */
+	xen_zap_pfn_range(vstart, order, NULL, out_frames);
+
+	/* 3. Do the exchange for non-contiguous MFNs. */
+	success = xen_exchange_memory(1, order, &in_frame, 1UL << order,
+				      0, out_frames, 0);
+
+	/* 4. Map new pages in place of old pages. */
+	if (success)
+		xen_remap_exchanged_ptes(vstart, order, out_frames, 0);
+	else
+		xen_remap_exchanged_ptes(vstart, order, NULL, in_frame);
+
+	spin_unlock_irqrestore(&xen_reservation_lock, flags);
+}
+EXPORT_SYMBOL_GPL(xen_destroy_contiguous_region);
+
 #ifdef CONFIG_XEN_DEBUG_FS

 static struct dentry *d_mmu_debug;
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index 9ddf473..48fc968 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -55,6 +55,48 @@ struct xen_memory_reservation {
 DEFINE_GUEST_HANDLE_STRUCT(xen_memory_reservation);

 /*
+ * An atomic exchange of memory pages. If return code is zero then
+ * @out.extent_list provides GMFNs of the newly-allocated memory.
+ * Returns zero on complete success, otherwise a negative error code.
+ * On complete success then always @nr_exchanged == @in.nr_extents.
+ * On partial success @nr_exchanged indicates how much work was done.
+ */
+#define XENMEM_exchange		11
+struct xen_memory_exchange {
+	/*
+	 * [IN] Details of memory extents to be exchanged (GMFN bases).
+	 * Note that @in.address_bits is ignored and unused.
+	 */
+	struct xen_memory_reservation in;
+
+	/*
+	 * [IN/OUT] Details of new memory extents.
+	 * We require that:
+	 *  1. @in.domid == @out.domid
+	 *  2. @in.nr_extents  << @in.extent_order ==
+	 *     @out.nr_extents << @out.extent_order
+	 *  3. @in.extent_start and @out.extent_start lists must not overlap
+	 *  4. @out.extent_start lists GPFN bases to be populated
+	 *  5. @out.extent_start is overwritten with allocated GMFN bases
+	 */
+	struct xen_memory_reservation out;
+
+	/*
+	 * [OUT] Number of input extents that were successfully exchanged:
+	 *  1. The first @nr_exchanged input extents were successfully
+	 *     deallocated.
+	 *  2. The corresponding first entries in the output extent list
+	 *     correctly indicate the GMFNs that were successfully exchanged.
+	 *  3. All other input and output extents are untouched.
+	 *  4. If not all input extents are exchanged then the return code of
+	 *     this command will be non-zero.
+	 *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
+	 */
+	unsigned long nr_exchanged;
+};
+
+DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
+/*
  * Returns the maximum machine frame number of mapped RAM in this system.
  * This command always succeeds (it never returns an error code).
  * arg == NULL.
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 883a21b..d789c93 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -14,4 +14,10 @@ void xen_mm_unpin_all(void);
 void xen_timer_resume(void);
 void xen_arch_resume(void);

+extern unsigned long *xen_contiguous_bitmap;
+int xen_create_contiguous_region(unsigned long vstart, unsigned int order,
+				unsigned int address_bits);
+
+void xen_destroy_contiguous_region(unsigned long vstart, unsigned int order);
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 05/10] x86/PCI: Clean up pci_cache_line_size
From: Alex Nixon <alex.nixon@citrix.com>

Separate out x86 cache_line_size initialisation code into its own
function (so it can be shared by Xen later in this patch series)

[ Impact: cleanup ]

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: "H. Peter Anvin" <hpa@zytor.com>
Reviewed-by: Matthew Wilcox <willy@linux.intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
---
 arch/x86/include/asm/pci_x86.h |    1 +
 arch/x86/pci/common.c          |   17 +++++++++++------
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h
index e60fd3e..5401ca2 100644
--- a/arch/x86/include/asm/pci_x86.h
+++ b/arch/x86/include/asm/pci_x86.h
@@ -45,6 +45,7 @@ enum pci_bf_sort_state {
 extern unsigned int pcibios_max_latency;

 void pcibios_resource_survey(void);
+void pcibios_set_cache_line_size(void);

 /* pci-pc.c */

diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index 2202b62..011ff45 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -412,26 +412,31 @@ struct pci_bus * __devinit pcibios_scan_root(int busnum)

 extern u8 pci_cache_line_size;

-int __init pcibios_init(void)
+void __init pcibios_set_cache_line_size(void)
 {
 	struct cpuinfo_x86 *c = &boot_cpu_data;

-	if (!raw_pci_ops) {
-		printk(KERN_WARNING "PCI: System does not support PCI\n");
-		return 0;
-	}
-
 	/*
 	 * Assume PCI cacheline size of 32 bytes for all x86s except K7/K8
 	 * and P4. It's also good for 386/486s (which actually have 16)
 	 * as quite a few PCI devices do not support smaller values.
 	 */
+	pci_cache_line_size = 32 >> 2;
 	if (c->x86 >= 6 && c->x86_vendor == X86_VENDOR_AMD)
 		pci_cache_line_size = 64 >> 2;	/* K7 & K8 */
 	else if (c->x86 > 6 && c->x86_vendor == X86_VENDOR_INTEL)
 		pci_cache_line_size = 128 >> 2;	/* P4 */
+}
+
+int __init pcibios_init(void)
+{
+	if (!raw_pci_ops) {
+		printk(KERN_WARNING "PCI: System does not support PCI\n");
+		return 0;
+	}
+
+	pcibios_set_cache_line_size();
+
 	pcibios_resource_survey();

 	if (pci_bf_sort >= pci_force_bf)
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 06/10] x86/PCI: Enable scanning of all pci functions
From: Alex Nixon <alex.nixon@citrix.com>

Xen may want to enable scanning of all pci functions - if for example
the device at function 0 is not passed through to the guest, but the
device at function 1 is.

Jesse objected to the "#undef pcibios_scan_all_fns"'s ugliness, so
replace it with the more common HAVE_ARCH_ idiom.

[ Impact: allow passthrough of just some PCI functions. ]

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: "H. Peter Anvin" <hpa@zytor.com>
Reviewed-by: Matthew Wilcox <willy@linux.intel.com>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
---
 arch/x86/include/asm/pci.h |    8 +++++++-
 arch/x86/pci/common.c      |    1 +
 include/asm-generic/pci.h  |    2 ++
 3 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/pci.h b/arch/x86/include/asm/pci.h
index b51a1e8..cabea93 100644
--- a/arch/x86/include/asm/pci.h
+++ b/arch/x86/include/asm/pci.h
@@ -21,6 +21,7 @@ struct pci_sysdata {
 extern int pci_routeirq;
 extern int noioapicquirk;
 extern int noioapicreroute;
+extern int pci_scan_all_fns;

 /* scan a bus after allocating a pci_sysdata for it */
 extern struct pci_bus *pci_scan_bus_on_node(int busno, struct pci_ops *ops,
@@ -48,7 +49,11 @@ extern unsigned int pcibios_assign_all_busses(void);
 #else
 #define pcibios_assign_all_busses()	0
 #endif
-#define pcibios_scan_all_fns(a, b)	0
+
+static inline int pcibios_scan_all_fns(struct pci_bus *bus, int devfn)
+{
+	return pci_scan_all_fns;
+}

 extern unsigned long pci_mem_start;
 #define PCIBIOS_MIN_IO		0x1000
@@ -129,6 +134,7 @@ extern void pci_iommu_alloc(void);
 #include <asm-generic/pci-dma-compat.h>

 /* generic pci stuff */
+#define HAVE_ARCH_PCIBIOS_SCAN_ALL_FNS
 #include <asm-generic/pci.h>

 #ifdef CONFIG_NUMA
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index 011ff45..6a522c2 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -22,6 +22,7 @@ unsigned int pci_probe = PCI_PROBE_BIOS | PCI_PROBE_CONF1 | PCI_PROBE_CONF2 |
 unsigned int pci_early_dump_regs;
 static int pci_bf_sort;
 int pci_routeirq;
+int pci_scan_all_fns;
 int noioapicquirk;
 #ifdef CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS
 int noioapicreroute = 0;
diff --git a/include/asm-generic/pci.h b/include/asm-generic/pci.h
index c36a77d..9ad9cb7 100644
--- a/include/asm-generic/pci.h
+++ b/include/asm-generic/pci.h
@@ -43,7 +43,9 @@ pcibios_select_root(struct pci_dev *pdev, struct resource *res)
 	return root;
 }

+#ifndef HAVE_ARCH_PCIBIOS_SCAN_ALL_FNS
 #define pcibios_scan_all_fns(a, b)	0
+#endif

 #ifndef HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 07/10] Xen/x86/PCI: Add support for the Xen PCI subsystem
From: Alex Nixon <alex.nixon@citrix.com> On boot, the system will search to see if a Xen iommu/pci subsystem is available. If the kernel detects it''s running in a domain rather than on bare hardware, this subsystem will be used. Otherwise, it falls back to using hardware as usual. The frontend stub lives in arch/x86/pci-xen.c, alongside other sub-arch PCI init code (e.g. olpc.c) (All subsequent fixes, API changes and swiotlb operations folded in.) [ Impact: add core of Xen PCI support ] Signed-off-by: Alex Nixon <alex.nixon@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Reviewed-by: "H. Peter Anvin" <hpa@zytor.com> Reviewed-by: Matthew Wilcox <willy@linux.intel.com> --- arch/x86/Kconfig | 4 + arch/x86/include/asm/io.h | 2 + arch/x86/include/asm/pci_x86.h | 1 + arch/x86/include/asm/xen/iommu.h | 12 ++ arch/x86/kernel/pci-dma.c | 3 + arch/x86/pci/Makefile | 1 + arch/x86/pci/init.c | 6 + arch/x86/pci/xen.c | 51 +++++++ drivers/pci/Makefile | 2 + drivers/pci/xen-iommu.c | 271 ++++++++++++++++++++++++++++++++++++++ 10 files changed, 353 insertions(+), 0 deletions(-) create mode 100644 arch/x86/include/asm/xen/iommu.h create mode 100644 arch/x86/pci/xen.c create mode 100644 drivers/pci/xen-iommu.c diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index df9e885..15cc23a 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1829,6 +1829,10 @@ config PCI_OLPC def_bool y depends on PCI && OLPC && (PCI_GOOLPC || PCI_GOANY) +config PCI_XEN + def_bool y + depends on XEN_PCI_PASSTHROUGH || XEN_DOM0_PCI + config PCI_DOMAINS def_bool y depends on PCI diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h index 7373932..57c7b26 100644 --- a/arch/x86/include/asm/io.h +++ b/arch/x86/include/asm/io.h @@ -7,6 +7,8 @@ #include <asm-generic/int-ll64.h> #include <asm/page.h> +extern int isapnp_disable; + #define build_mmio_read(name, size, type, reg, barrier) \ static inline type 
name(const volatile void __iomem *addr) \ { type ret; asm volatile("mov" size " %1,%0":reg (ret) \ diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h index 5401ca2..34f03a4 100644 --- a/arch/x86/include/asm/pci_x86.h +++ b/arch/x86/include/asm/pci_x86.h @@ -107,6 +107,7 @@ extern int pci_direct_probe(void); extern void pci_direct_init(int type); extern void pci_pcbios_init(void); extern int pci_olpc_init(void); +extern int pci_xen_init(void); extern void __init dmi_check_pciprobe(void); extern void __init dmi_check_skip_isa_align(void); diff --git a/arch/x86/include/asm/xen/iommu.h b/arch/x86/include/asm/xen/iommu.h new file mode 100644 index 0000000..75df312 --- /dev/null +++ b/arch/x86/include/asm/xen/iommu.h @@ -0,0 +1,12 @@ +#ifndef ASM_X86__XEN_IOMMU_H + +#ifdef CONFIG_PCI_XEN +extern void xen_iommu_init(void); +#else +static inline void xen_iommu_init(void) +{ +} +#endif + +#endif + diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c index 745579b..e486c40 100644 --- a/arch/x86/kernel/pci-dma.c +++ b/arch/x86/kernel/pci-dma.c @@ -10,6 +10,7 @@ #include <asm/gart.h> #include <asm/calgary.h> #include <asm/amd_iommu.h> +#include <asm/xen/iommu.h> static int forbid_dac __read_mostly; @@ -275,6 +276,8 @@ static int __init pci_iommu_init(void) dma_debug_add_bus(&pci_bus_type); #endif + xen_iommu_init(); + calgary_iommu_init(); intel_iommu_init(); diff --git a/arch/x86/pci/Makefile b/arch/x86/pci/Makefile index d49202e..64182c5 100644 --- a/arch/x86/pci/Makefile +++ b/arch/x86/pci/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_PCI_BIOS) += pcbios.o obj-$(CONFIG_PCI_MMCONFIG) += mmconfig_$(BITS).o direct.o mmconfig-shared.o obj-$(CONFIG_PCI_DIRECT) += direct.o obj-$(CONFIG_PCI_OLPC) += olpc.o +obj-$(CONFIG_PCI_XEN) += xen.o obj-y += fixup.o obj-$(CONFIG_ACPI) += acpi.o diff --git a/arch/x86/pci/init.c b/arch/x86/pci/init.c index 25a1f8e..4e2f90a 100644 --- a/arch/x86/pci/init.c +++ b/arch/x86/pci/init.c @@ -15,10 +15,16 @@ static 
__init int pci_arch_init(void) if (!(pci_probe & PCI_PROBE_NOEARLY)) pci_mmcfg_early_init(); +#ifdef CONFIG_PCI_XEN + if (!pci_xen_init()) + return 0; +#endif + #ifdef CONFIG_PCI_OLPC if (!pci_olpc_init()) return 0; /* skip additional checks if it''s an XO */ #endif + #ifdef CONFIG_PCI_BIOS pci_pcbios_init(); #endif diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c new file mode 100644 index 0000000..1b922aa --- /dev/null +++ b/arch/x86/pci/xen.c @@ -0,0 +1,51 @@ +/* + * Xen PCI Frontend Stub - puts some "dummy" functions in to the Linux + * x86 PCI core to support the Xen PCI Frontend + * + * Author: Ryan Wilson <hap9@epoch.ncsc.mil> + */ +#include <linux/module.h> +#include <linux/init.h> +#include <linux/pci.h> +#include <linux/acpi.h> + +#include <asm/io.h> +#include <asm/pci_x86.h> + +#include <asm/xen/hypervisor.h> + +static int xen_pcifront_enable_irq(struct pci_dev *dev) +{ + return 0; +} + +int __init pci_xen_init(void) +{ + if (!xen_pv_domain() || xen_initial_domain()) + return -ENODEV; + + printk(KERN_INFO "PCI: setting up Xen PCI frontend stub\n"); + + pcibios_set_cache_line_size(); + + pcibios_enable_irq = xen_pcifront_enable_irq; + pcibios_disable_irq = NULL; + +#ifdef CONFIG_ACPI + /* Keep ACPI out of the picture */ + acpi_noirq = 1; +#endif + +#ifdef CONFIG_ISAPNP + /* Stop isapnp from probing */ + isapnp_disable = 1; +#endif + + /* Ensure a device still gets scanned even if it''s fn number + * is non-zero. 
+	 */
+	pci_scan_all_fns = 1;
+
+	return 0;
+}
+
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index ba6af16..8db0cb5 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -27,6 +27,8 @@ obj-$(CONFIG_HT_IRQ) += htirq.o
 # Build Intel IOMMU support
 obj-$(CONFIG_DMAR) += dmar.o iova.o intel-iommu.o
 
+# Build Xen IOMMU support
+obj-$(CONFIG_PCI_XEN) += xen-iommu.o
 obj-$(CONFIG_INTR_REMAP) += dmar.o intr_remapping.o
 
 obj-$(CONFIG_PCI_IOV) += iov.o
diff --git a/drivers/pci/xen-iommu.c b/drivers/pci/xen-iommu.c
new file mode 100644
index 0000000..ac6bcdb
--- /dev/null
+++ b/drivers/pci/xen-iommu.c
@@ -0,0 +1,271 @@
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/version.h>
+#include <linux/scatterlist.h>
+#include <linux/io.h>
+#include <linux/bug.h>
+
+#include <xen/interface/xen.h>
+#include <xen/grant_table.h>
+#include <xen/page.h>
+#include <xen/xen-ops.h>
+
+#include <asm/iommu.h>
+#include <asm/swiotlb.h>
+#include <asm/tlbflush.h>
+
+#define IOMMU_BUG_ON(test)				\
+do {							\
+	if (unlikely(test)) {				\
+		printk(KERN_ALERT "Fatal DMA error! "	\
+		       "Please use 'swiotlb=force'\n");	\
+		BUG();					\
+	}						\
+} while (0)
+
+/* Print address range with message */
+#define PAR(msg, addr, size)				\
+do {							\
+	printk(msg "[%#llx - %#llx]\n",			\
+	       (unsigned long long)addr,		\
+	       (unsigned long long)addr + size);	\
+} while (0)
+
+static inline int address_needs_mapping(struct device *hwdev,
+					dma_addr_t addr)
+{
+	dma_addr_t mask = DMA_BIT_MASK(32);
+	int ret;
+
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev)
+		mask = *hwdev->dma_mask;
+
+	ret = (addr & ~mask) != 0;
+
+	if (ret) {
+		printk(KERN_ERR "dma address needs mapping\n");
+		printk(KERN_ERR "mask: %#llx\n address: [%#llx]\n", mask, addr);
+	}
+	return ret;
+}
+
+static int check_pages_physically_contiguous(unsigned long pfn,
+					     unsigned int offset,
+					     size_t length)
+{
+	unsigned long next_mfn;
+	int i;
+	int nr_pages;
+
+	next_mfn = pfn_to_mfn(pfn);
+	nr_pages = (offset + length + PAGE_SIZE-1) >> PAGE_SHIFT;
+
+	for (i = 1; i < nr_pages; i++) {
+		if (pfn_to_mfn(++pfn) != ++next_mfn)
+			return 0;
+	}
+	return 1;
+}
+
+static int range_straddles_page_boundary(phys_addr_t p, size_t size)
+{
+	unsigned long pfn = PFN_DOWN(p);
+	unsigned int offset = p & ~PAGE_MASK;
+
+	if (offset + size <= PAGE_SIZE)
+		return 0;
+	if (check_pages_physically_contiguous(pfn, offset, size))
+		return 0;
+	return 1;
+}
+
+static inline void xen_dma_unmap_page(struct page *page)
+{
+	/* Xen TODO: 2.6.18 xen calls __gnttab_dma_unmap_page here
+	 * to deal with foreign pages.  We'll need similar logic here at
+	 * some point.
+	 */
+}
+
+/* Gets dma address of a page */
+static inline dma_addr_t xen_dma_map_page(struct page *page)
+{
+	/* Xen TODO: 2.6.18 xen calls __gnttab_dma_map_page here to deal
+	 * with foreign pages.  We'll need similar logic here at some
+	 * point.
+	 */
+	return ((dma_addr_t)pfn_to_mfn(page_to_pfn(page))) << PAGE_SHIFT;
+}
+
+static int xen_map_sg(struct device *hwdev, struct scatterlist *sg,
+		      int nents,
+		      enum dma_data_direction direction,
+		      struct dma_attrs *attrs)
+{
+	struct scatterlist *s;
+	struct page *page;
+	int i, rc;
+
+	BUG_ON(direction == DMA_NONE);
+	WARN_ON(nents == 0 || sg[0].length == 0);
+
+	for_each_sg(sg, s, nents, i) {
+		BUG_ON(!sg_page(s));
+		page = sg_page(s);
+		s->dma_address = xen_dma_map_page(page) + s->offset;
+		s->dma_length = s->length;
+		IOMMU_BUG_ON(range_straddles_page_boundary(
+				page_to_phys(page), s->length));
+	}
+
+	rc = nents;
+
+	flush_write_buffers();
+	return rc;
+}
+
+static void xen_unmap_sg(struct device *hwdev, struct scatterlist *sg,
+			 int nents,
+			 enum dma_data_direction direction,
+			 struct dma_attrs *attrs)
+{
+	struct scatterlist *s;
+	struct page *page;
+	int i;
+
+	for_each_sg(sg, s, nents, i) {
+		page = pfn_to_page(mfn_to_pfn(PFN_DOWN(s->dma_address)));
+		xen_dma_unmap_page(page);
+	}
+}
+
+static void *xen_alloc_coherent(struct device *dev, size_t size,
+				dma_addr_t *dma_handle, gfp_t gfp)
+{
+	void *ret;
+	unsigned int order = get_order(size);
+	unsigned long vstart;
+	u64 mask;
+
+	/* ignore region specifiers */
+	gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
+
+	if (dma_alloc_from_coherent(dev, size, dma_handle, &ret))
+		return ret;
+
+	if (dev == NULL || (dev->coherent_dma_mask < DMA_BIT_MASK(32)))
+		gfp |= GFP_DMA;
+
+	vstart = __get_free_pages(gfp, order);
+	ret = (void *)vstart;
+
+	if (dev != NULL && dev->coherent_dma_mask)
+		mask = dev->coherent_dma_mask;
+	else
+		mask = DMA_BIT_MASK(32);
+
+	if (ret != NULL) {
+		if (xen_create_contiguous_region(vstart, order,
+						 fls64(mask)) != 0) {
+			free_pages(vstart, order);
+			return NULL;
+		}
+		memset(ret, 0, size);
+		*dma_handle = virt_to_machine(ret).maddr;
+	}
+	return ret;
+}
+
+static void xen_free_coherent(struct device *dev, size_t size,
+			      void *vaddr, dma_addr_t dma_addr)
+{
+	int order = get_order(size);
+
+	if (dma_release_from_coherent(dev, order, vaddr))
+		return;
+
+	xen_destroy_contiguous_region((unsigned long)vaddr, order);
+	free_pages((unsigned long)vaddr, order);
+}
+
+static dma_addr_t xen_map_page(struct device *dev, struct page *page,
+			       unsigned long offset, size_t size,
+			       enum dma_data_direction direction,
+			       struct dma_attrs *attrs)
+{
+	dma_addr_t dma;
+
+	BUG_ON(direction == DMA_NONE);
+
+	WARN_ON(size == 0);
+
+	dma = xen_dma_map_page(page) + offset;
+
+	IOMMU_BUG_ON(address_needs_mapping(dev, dma));
+	flush_write_buffers();
+	return dma;
+}
+
+static void xen_unmap_page(struct device *dev, dma_addr_t dma_addr,
+			   size_t size,
+			   enum dma_data_direction direction,
+			   struct dma_attrs *attrs)
+{
+	BUG_ON(direction == DMA_NONE);
+	xen_dma_unmap_page(pfn_to_page(mfn_to_pfn(PFN_DOWN(dma_addr))));
+}
+
+static struct dma_map_ops xen_dma_ops = {
+	.dma_supported = NULL,
+
+	.alloc_coherent = xen_alloc_coherent,
+	.free_coherent = xen_free_coherent,
+
+	.map_page = xen_map_page,
+	.unmap_page = xen_unmap_page,
+
+	.map_sg = xen_map_sg,
+	.unmap_sg = xen_unmap_sg,
+
+	.mapping_error = NULL,
+
+	.is_phys = 0,
+};
+
+static struct dma_map_ops xen_swiotlb_dma_ops = {
+	.dma_supported = swiotlb_dma_supported,
+
+	.alloc_coherent = xen_alloc_coherent,
+	.free_coherent = xen_free_coherent,
+
+	.map_page = swiotlb_map_page,
+	.unmap_page = swiotlb_unmap_page,
+
+	.map_sg = swiotlb_map_sg_attrs,
+	.unmap_sg = swiotlb_unmap_sg_attrs,
+
+	.mapping_error = swiotlb_dma_mapping_error,
+
+	.is_phys = 0,
+};
+
+void __init xen_iommu_init(void)
+{
+	if (!xen_pv_domain())
+		return;
+
+	printk(KERN_INFO "Xen: Initializing Xen DMA ops\n");
+
+	force_iommu = 0;
+	dma_ops = &xen_dma_ops;
+
+	if (swiotlb) {
+		printk(KERN_INFO "Xen: Enabling DMA fallback to swiotlb\n");
+		dma_ops = &xen_swiotlb_dma_ops;
+	}
+}
+
-- 
1.6.0.6

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
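The contiguity test at the heart of the patch above can be exercised outside the kernel. The sketch below (not part of the patch) mirrors the logic of check_pages_physically_contiguous() and range_straddles_page_boundary(), with pfn_to_mfn() replaced by a small hypothetical lookup table standing in for the Xen p2m: a buffer crossing a page boundary is only DMA-safe if the underlying machine frames are consecutive.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* Hypothetical stand-in for the Xen p2m: guest pfns 0..3 -> machine
 * frames.  Frames for pfn 0 and 1 are machine-contiguous (100, 101);
 * the frame for pfn 2 is not (500). */
static unsigned long p2m[] = { 100, 101, 500, 501 };

static unsigned long pfn_to_mfn(unsigned long pfn)
{
	return p2m[pfn];
}

/* Same logic as the patch: every page the range touches must map to
 * consecutive machine frames. */
static int check_pages_physically_contiguous(unsigned long pfn,
					     unsigned int offset,
					     size_t length)
{
	unsigned long next_mfn = pfn_to_mfn(pfn);
	int nr_pages = (offset + length + PAGE_SIZE - 1) >> PAGE_SHIFT;
	int i;

	for (i = 1; i < nr_pages; i++) {
		if (pfn_to_mfn(++pfn) != ++next_mfn)
			return 0;
	}
	return 1;
}

static int range_straddles_page_boundary(unsigned long p, size_t size)
{
	unsigned long pfn = PFN_DOWN(p);
	unsigned int offset = p & ~PAGE_MASK;

	if (offset + size <= PAGE_SIZE)
		return 0;	/* fits within one page: always safe */
	if (check_pages_physically_contiguous(pfn, offset, size))
		return 0;	/* crosses pages, but machine-contiguous */
	return 1;		/* would trip IOMMU_BUG_ON() in the patch */
}
```

With the table above, a range starting at offset 2048 in pfn 0 may span into pfn 1 (mfns 100 and 101 are adjacent), while the same-sized range starting in pfn 1 fails because mfn 101 is followed by 500.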
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 08/10] x86/pci: make sure _PAGE_IOMAP is set on pci mappings
From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

When mapping pci space via /sys or /proc, make sure we're really
doing a hardware mapping by setting _PAGE_IOMAP.

[ Impact: bugfix; make PCI mappings map the right pages ]

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: "H. Peter Anvin" <hpa@zytor.com>
Reviewed-by: Matthew Wilcox <willy@linux.intel.com>
---
 arch/x86/pci/i386.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index a85bef2..88a1080 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -278,6 +278,9 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		return -EINVAL;
 
 	prot = pgprot_val(vma->vm_page_prot);
+
+	prot |= _PAGE_IOMAP; /* creating a mapping for IO */
+
 	if (pat_enabled && write_combine)
 		prot |= _PAGE_CACHE_WC;
 	else if (pat_enabled || boot_cpu_data.x86 > 3)
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 09/10] xen/pci: clean up Kconfig a bit
Cut down on the maze of PCI-related config options.

[ Impact: Kconfig cleanup ]

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Matthew Wilcox <willy@linux.intel.com>
---
 arch/x86/Kconfig     |    4 ++--
 arch/x86/xen/Kconfig |    2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 15cc23a..3d9d2cb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1830,8 +1830,8 @@ config PCI_OLPC
 	depends on PCI && OLPC && (PCI_GOOLPC || PCI_GOANY)
 
 config PCI_XEN
-	def_bool y
-	depends on XEN_PCI_PASSTHROUGH || XEN_DOM0_PCI
+	bool
+	select SWIOTLB
 
 config PCI_DOMAINS
 	def_bool y
diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index fe69286..87c13db 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -55,6 +55,7 @@ config XEN_PRIVILEGED_GUEST
 config XEN_PCI_PASSTHROUGH
 	bool #"Enable support for Xen PCI passthrough devices"
 	depends on XEN && PCI
+	select PCI_XEN
 	help
 	  Enable support for passing PCI devices through to
 	  unprivileged domains. (COMPLETELY UNTESTED)
@@ -62,3 +63,4 @@ config XEN_PCI_PASSTHROUGH
 config XEN_DOM0_PCI
 	def_bool y
 	depends on XEN_DOM0 && PCI
+	select PCI_XEN
-- 
1.6.0.6
Jeremy Fitzhardinge
2009-May-27 07:24 UTC
[Xen-devel] [PATCH 10/10] xen: define BIOVEC_PHYS_MERGEABLE()
When running in Xen domain with device access, we need to make
sure the block subsystem doesn't merge requests across pages
which aren't machine physically contiguous.

To do this, we define our own BIOVEC_PHYS_MERGEABLE.  When
CONFIG_XEN isn't enabled, or we're not running in a Xen domain,
this has identical behaviour to the normal implementation.  When
running under Xen, we also make sure the underlying machine
pages are the same or adjacent.

[ Impact: allow Xen control of bio merging ]

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/io.h |   13 +++++++++++++
 drivers/xen/Makefile      |    2 +-
 drivers/xen/biomerge.c    |   14 ++++++++++++++
 3 files changed, 28 insertions(+), 1 deletions(-)
 create mode 100644 drivers/xen/biomerge.c

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 57c7b26..c75e9eb 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -7,6 +7,8 @@
 #include <asm-generic/int-ll64.h>
 #include <asm/page.h>
 
+#include <asm/xen/hypervisor.h>
+
 extern int isapnp_disable;
 
 #define build_mmio_read(name, size, type, reg, barrier) \
@@ -201,6 +203,17 @@ extern void __iomem *early_memremap(resource_size_t phys_addr,
 				    unsigned long size);
 extern void early_iounmap(void __iomem *addr, unsigned long size);
 
+#ifdef CONFIG_XEN
+struct bio_vec;
+
+extern bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+				      const struct bio_vec *vec2);
+
+#define BIOVEC_PHYS_MERGEABLE(vec1, vec2)			\
+	(__BIOVEC_PHYS_MERGEABLE(vec1, vec2) &&			\
+	 (!xen_domain() || xen_biovec_phys_mergeable(vec1, vec2)))
+#endif	/* CONFIG_XEN */
+
 #define IO_SPACE_LIMIT 0xffff
 
 #endif /* _ASM_X86_IO_H */
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index ec2a39b..e6c8b85 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -1,4 +1,4 @@
-obj-y	+= grant-table.o features.o events.o manage.o
+obj-y	+= grant-table.o features.o events.o manage.o biomerge.o
 obj-y	+= xenbus/
 
 obj-$(CONFIG_HOTPLUG_CPU)	+= cpu_hotplug.o
diff --git a/drivers/xen/biomerge.c b/drivers/xen/biomerge.c
new file mode 100644
index 0000000..d40f534
--- /dev/null
+++ b/drivers/xen/biomerge.c
@@ -0,0 +1,14 @@
+#include <linux/bio.h>
+#include <asm/io.h>
+#include <xen/page.h>
+
+bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+			       const struct bio_vec *vec2)
+{
+	unsigned long mfn1 = pfn_to_mfn(page_to_pfn(vec1->bv_page));
+	unsigned long mfn2 = pfn_to_mfn(page_to_pfn(vec2->bv_page));
+
+	return __BIOVEC_PHYS_MERGEABLE(vec1, vec2) &&
+		((mfn1 == mfn2) || ((mfn1+1) == mfn2));
+}
+
-- 
1.6.0.6
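The extra condition xen_biovec_phys_mergeable() adds can be illustrated with a standalone sketch (not from the patch): two segments may merge only if, beyond the generic physical check, their machine frames are the same or adjacent. Here pfn_to_mfn() is again a hypothetical lookup table, and xen_mfns_mergeable() is an illustrative helper, not a kernel function.

```c
#include <assert.h>

/* Hypothetical p2m table: guest pfns 0..3 -> machine frames.
 * pfns 0 and 1 are machine-adjacent; pfns 1 and 2 are not. */
static unsigned long p2m[] = { 100, 101, 500, 501 };

static unsigned long pfn_to_mfn(unsigned long pfn)
{
	return p2m[pfn];
}

/* Mirror of the patch's Xen-specific merge rule: the machine frames
 * of the two segments must be identical or consecutive. */
static int xen_mfns_mergeable(unsigned long pfn1, unsigned long pfn2)
{
	unsigned long mfn1 = pfn_to_mfn(pfn1);
	unsigned long mfn2 = pfn_to_mfn(pfn2);

	return (mfn1 == mfn2) || ((mfn1 + 1) == mfn2);
}
```

So even when two guest pages are pseudo-physically adjacent, a bio merge is refused if their machine frames are not, which is exactly what keeps DMA from spanning a discontiguity the hardware would see.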