Stefano Stabellini
2012-Jan-09 17:58 UTC
[PATCH v4 00/25] xen: ARMv7 with virtualization extensions
Hello everyone,
this is the fourth version of the patch series that introduces ARMv7 with
virtualization extensions support in Xen. The series allows Xen and Dom0 to
boot on a Cortex-A15 based Versatile Express simulator.

See the following announcement email for more information about what we are
trying to achieve, as well as the original git history:

http://marc.info/?l=xen-devel&m=132257857628098&w=2

The first 7 patches affect generic Xen code and are not ARM specific; often
they fix real issues, hidden in the default X86 configuration.
The following 18 patches introduce ARMv7 with virtualization extensions
support: makefiles first, then the asm-arm header files and finally
everything else, ordered in a way that should make the patches easier to
read.

Changes in v4:
- fix ARM build after rebasing on xen-unstable 87c607efbfece009360f615b2bf98959f4ea48e8;
- use ABS() in __ldivmod_helper;
- return a negative integer in case of errors in elf_load_image.

Changes in v3:
- introduce clear_guest for x86 and ia64 (I kept the ia64 version of
  clear_user for symmetry but it is not actually used anywhere);
- rename the current ARM *_user functions to *_guest;
- use raw_clear_guest and raw_copy_to_guest in elf_load_image.

Changes in v2:
- introduce CONFIG_XENOPROF;
- make _srodata and _erodata const char[];
- do not include p2m.h ifdef __ia64__;
- remove wrong comment about pfn.h;
- introduce HAS_KEXEC and CONFIG_KEXEC;
- use long in __do_clear_user;
- remove the div64 patch, implement __aeabi_ldivmod and __aeabi_uldivmod instead;
- move "arm: makefiles" at the end of the series.

Stefano Stabellini (25):
  Move cpufreq option parsing to cpufreq.c
  Include some header files that are not automatically included on all archs
  A collection of fixes to Xen common files
  xen: implement a signed 64 bit division helper function
  Introduce clear_user and clear_guest
  libelf-loader: introduce elf_load_image
  xen/common/Makefile: introduce HAS_CPUFREQ, HAS_PCI, HAS_PASSTHROUGH, HAS_NS16550, HAS_KEXEC
  arm: compile tmem
  arm: header files
  arm: bit manipulation, copy and division libraries
  arm: entry.S and head.S
  arm: domain
  arm: domain_build
  arm: driver for CoreLink GIC-400 Generic Interrupt Controller
  arm: mmio handlers
  arm: irq
  arm: mm and p2m
  arm: pl011 UART driver
  arm: early setup code
  arm: shutdown, smp and smpboot
  arm: driver for the generic timer for ARMv7
  arm: trap handlers
  arm: vgic emulation
  arm: vtimer
  arm: makefiles

 config/arm.mk | 18 +
 tools/libxc/xc_dom_elfloader.c | 8 +-
 tools/libxc/xc_hvm_build.c | 5 +-
 xen/arch/arm/Makefile | 76 ++++
 xen/arch/arm/Rules.mk | 29 ++
 xen/arch/arm/asm-offsets.c | 76 ++++
 xen/arch/arm/domain.c | 269 ++++++++++++++
 xen/arch/arm/domain_build.c | 212 +++++++++++
 xen/arch/arm/dummy.S | 72 ++++
 xen/arch/arm/entry.S | 107 ++++++
 xen/arch/arm/gic.c | 473 +++++++++++++++++++++++++
 xen/arch/arm/gic.h | 154 ++++++++
 xen/arch/arm/guestcopy.c | 81 +++++
 xen/arch/arm/head.S | 298 ++++++++++++++++
 xen/arch/arm/io.c | 51 +++
 xen/arch/arm/io.h | 55 +++
 xen/arch/arm/irq.c | 180 ++++++++++
 xen/arch/arm/lib/Makefile | 5 +
 xen/arch/arm/lib/assembler.h | 49 +++
 xen/arch/arm/lib/bitops.h | 36 ++
 xen/arch/arm/lib/changebit.S | 18 +
 xen/arch/arm/lib/clearbit.S | 19 +
 xen/arch/arm/lib/copy_template.S | 266 ++++++++++++++
 xen/arch/arm/lib/div64.S | 149 ++++++++
 xen/arch/arm/lib/findbit.S | 115 ++++++
 xen/arch/arm/lib/lib1funcs.S | 302 ++++++++++++++++
 xen/arch/arm/lib/memcpy.S | 64 ++++
 xen/arch/arm/lib/memmove.S | 200 +++++++++++
 xen/arch/arm/lib/memset.S | 129 +++++++
 xen/arch/arm/lib/memzero.S | 127 +++++++
 xen/arch/arm/lib/setbit.S | 18 +
 xen/arch/arm/lib/testchangebit.S | 18 +
 xen/arch/arm/lib/testclearbit.S | 18 +
 xen/arch/arm/lib/testsetbit.S | 18 +
 xen/arch/arm/mm.c | 321 +++++++++++++++++
 xen/arch/arm/p2m.c | 214 +++++++++++
 xen/arch/arm/setup.c | 206 +++++++++++
 xen/arch/arm/shutdown.c | 23 ++
 xen/arch/arm/smp.c | 29 ++
 xen/arch/arm/smpboot.c | 50 +++
 xen/arch/arm/time.c | 181 ++++++++++
 xen/arch/arm/traps.c | 609 ++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic.c | 605 +++++++++++++++++++++++++++++++
 xen/arch/arm/vtimer.c | 148 ++++++++
 xen/arch/arm/vtimer.h | 35 ++
 xen/arch/arm/xen.lds.S | 141 ++++++++
 xen/arch/ia64/Rules.mk | 5 +
 xen/arch/ia64/linux/memcpy_mck.S | 177 +++++++++
 xen/arch/x86/Rules.mk | 5 +
 xen/arch/x86/domain_build.c | 7 +-
 xen/arch/x86/hvm/hvm.c | 107 ++++++
 xen/arch/x86/usercopy.c | 36 ++
 xen/common/Makefile | 2 +-
 xen/common/domain.c | 37 +--
 xen/common/domctl.c | 1 +
 xen/common/grant_table.c | 1 +
 xen/common/irq.c | 1 +
 xen/common/kernel.c | 2 +-
 xen/common/keyhandler.c | 1 +
 xen/common/lib.c | 19 +
 xen/common/libelf/libelf-dominfo.c | 6 +
 xen/common/libelf/libelf-loader.c | 28 ++-
 xen/common/memory.c | 4 +-
 xen/common/sched_credit2.c | 6 -
 xen/common/shutdown.c | 4 +
 xen/common/spinlock.c | 1 +
 xen/common/tmem.c | 3 +-
 xen/common/tmem_xen.c | 4 +-
 xen/common/wait.c | 1 +
 xen/common/xencomm.c | 111 ++++++
 xen/drivers/Makefile | 6 +-
 xen/drivers/char/Makefile | 3 +-
 xen/drivers/char/console.c | 5 +
 xen/drivers/char/pl011.c | 266 ++++++++++++++
 xen/drivers/cpufreq/cpufreq.c | 31 ++
 xen/include/asm-arm/asm_defns.h | 18 +
 xen/include/asm-arm/atomic.h | 236 ++++++++++++
 xen/include/asm-arm/bitops.h | 195 ++++++++++
 xen/include/asm-arm/bug.h | 15 +
 xen/include/asm-arm/byteorder.h | 16 +
 xen/include/asm-arm/cache.h | 20 +
 xen/include/asm-arm/config.h | 122 +++++++
 xen/include/asm-arm/cpregs.h | 207 +++++++++++
 xen/include/asm-arm/current.h | 60 ++++
 xen/include/asm-arm/debugger.h | 15 +
 xen/include/asm-arm/delay.h | 15 +
 xen/include/asm-arm/desc.h | 12 +
 xen/include/asm-arm/div64.h | 235 ++++++++++++
 xen/include/asm-arm/domain.h | 82 +++++
 xen/include/asm-arm/elf.h | 33 ++
 xen/include/asm-arm/event.h | 41 +++
 xen/include/asm-arm/flushtlb.h | 31 ++
 xen/include/asm-arm/grant_table.h | 35 ++
 xen/include/asm-arm/guest_access.h | 131 +++++++
 xen/include/asm-arm/hardirq.h | 28 ++
 xen/include/asm-arm/hypercall.h | 24 ++
 xen/include/asm-arm/init.h | 12 +
 xen/include/asm-arm/io.h | 12 +
 xen/include/asm-arm/iocap.h | 20 +
 xen/include/asm-arm/irq.h | 30 ++
 xen/include/asm-arm/mm.h | 315 +++++++++++++++++
 xen/include/asm-arm/multicall.h | 23 ++
 xen/include/asm-arm/nmi.h | 15 +
 xen/include/asm-arm/numa.h | 21 ++
 xen/include/asm-arm/p2m.h | 88 +++++
 xen/include/asm-arm/page.h | 335 ++++++++++++++++++
 xen/include/asm-arm/paging.h | 13 +
 xen/include/asm-arm/percpu.h | 28 ++
 xen/include/asm-arm/processor.h | 269 ++++++++++++++
 xen/include/asm-arm/regs.h | 43 +++
 xen/include/asm-arm/setup.h | 20 +
 xen/include/asm-arm/smp.h | 25 ++
 xen/include/asm-arm/softirq.h | 15 +
 xen/include/asm-arm/spinlock.h | 144 ++++++++
 xen/include/asm-arm/string.h | 38 ++
 xen/include/asm-arm/system.h | 202 +++++++++++
 xen/include/asm-arm/time.h | 26 ++
 xen/include/asm-arm/trace.h | 12 +
 xen/include/asm-arm/types.h | 57 +++
 xen/include/asm-arm/xenoprof.h | 12 +
 xen/include/asm-ia64/config.h | 2 +
 xen/include/asm-ia64/uaccess.h | 12 +
 xen/include/asm-x86/config.h | 3 +
 xen/include/asm-x86/guest_access.h | 18 +
 xen/include/asm-x86/hvm/guest_access.h | 1 +
 xen/include/asm-x86/uaccess.h | 1 +
 xen/include/public/arch-arm.h | 125 +++++++
 xen/include/public/xen.h | 2 +
 xen/include/xen/domain.h | 2 +
 xen/include/xen/grant_table.h | 1 +
 xen/include/xen/guest_access.h | 6 +
 xen/include/xen/irq.h | 13 +
 xen/include/xen/kernel.h | 12 +-
 xen/include/xen/libelf.h | 4 +-
 xen/include/xen/list.h | 1 +
 xen/include/xen/paging.h | 2 +-
 xen/include/xen/sched.h | 4 +
 xen/include/xen/serial.h | 2 +
 xen/include/xen/time.h | 1 +
 xen/include/xen/timer.h | 1 +
 xen/include/xen/tmem_xen.h | 1 +
 xen/include/xen/xencomm.h | 24 ++
 142 files changed, 10679 insertions(+), 62 deletions(-)

A git branch is available here, based on xen-unstable (git CS 87c607efbfece009360f615b2bf98959f4ea48e8):

git://xenbits.xen.org/people/sstabellini/xen-unstable.git arm-v4

Cheers,

Stefano
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 01/25] Move cpufreq option parsing to cpufreq.c
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/domain.c | 35 ++---------------------------------
 xen/drivers/cpufreq/cpufreq.c | 31 +++++++++++++++++++++++++++++++
 xen/include/xen/domain.h | 2 ++
 3 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 52a63ef..1100517 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -31,8 +31,8 @@
 #include <xen/grant_table.h>
 #include <xen/xenoprof.h>
 #include <xen/irq.h>
-#include <acpi/cpufreq/cpufreq.h>
 #include <asm/debugger.h>
+#include <asm/processor.h>
 #include <public/sched.h>
 #include <public/sysctl.h>
 #include <public/vcpu.h>
@@ -45,40 +45,9 @@
 unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;

 /* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
-static bool_t opt_dom0_vcpus_pin;
+bool_t opt_dom0_vcpus_pin;
 boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);

-/* set xen as default cpufreq */
-enum cpufreq_controller cpufreq_controller = FREQCTL_xen;
-
-static void __init setup_cpufreq_option(char *str)
-{
-    char *arg;
-
-    if ( !strcmp(str, "dom0-kernel") )
-    {
-        xen_processor_pmbits &= ~XEN_PROCESSOR_PM_PX;
-        cpufreq_controller = FREQCTL_dom0_kernel;
-        opt_dom0_vcpus_pin = 1;
-        return;
-    }
-
-    if ( !strcmp(str, "none") )
-    {
-        xen_processor_pmbits &= ~XEN_PROCESSOR_PM_PX;
-        cpufreq_controller = FREQCTL_none;
-        return;
-    }
-
-    if ( (arg = strpbrk(str, ",:")) != NULL )
-        *arg++ = '\0';
-
-    if ( !strcmp(str, "xen") )
-        if ( arg && *arg )
-            cpufreq_cmdline_parse(arg);
-}
-custom_param("cpufreq", setup_cpufreq_option);
-
 /* Protect updates/reads (resp.) of domain_list and domain_hash. */
 DEFINE_SPINLOCK(domlist_update_lock);
 DEFINE_RCU_READ_LOCK(domlist_read_lock);
diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index f49ea1c..34ea2a9 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -61,6 +61,37 @@ static LIST_HEAD_READ_MOSTLY(cpufreq_dom_list_head);
 struct cpufreq_governor *__read_mostly cpufreq_opt_governor;
 LIST_HEAD_READ_MOSTLY(cpufreq_governor_list);

+/* set xen as default cpufreq */
+enum cpufreq_controller cpufreq_controller = FREQCTL_xen;
+
+static void __init setup_cpufreq_option(char *str)
+{
+    char *arg;
+
+    if ( !strcmp(str, "dom0-kernel") )
+    {
+        xen_processor_pmbits &= ~XEN_PROCESSOR_PM_PX;
+        cpufreq_controller = FREQCTL_dom0_kernel;
+        opt_dom0_vcpus_pin = 1;
+        return;
+    }
+
+    if ( !strcmp(str, "none") )
+    {
+        xen_processor_pmbits &= ~XEN_PROCESSOR_PM_PX;
+        cpufreq_controller = FREQCTL_none;
+        return;
+    }
+
+    if ( (arg = strpbrk(str, ",:")) != NULL )
+        *arg++ = '\0';
+
+    if ( !strcmp(str, "xen") )
+        if ( arg && *arg )
+            cpufreq_cmdline_parse(arg);
+}
+custom_param("cpufreq", setup_cpufreq_option);
+
 bool_t __read_mostly cpufreq_verbose;

 struct cpufreq_governor *__find_governor(const char *governor)
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 765e132..de3e8db 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -85,4 +85,6 @@ int continue_hypercall_on_cpu(
 extern unsigned int xen_processor_pmbits;

+extern bool_t opt_dom0_vcpus_pin;
+
 #endif /* __XEN_DOMAIN_H__ */
--
1.7.2.5
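For reference, the code being moved accepts three forms on the Xen command
line, exactly as implemented in setup_cpufreq_option() above:
"cpufreq=dom0-kernel" (hand Px-state control to the dom0 kernel, which also
pins dom0 VCPUs), "cpufreq=none" (disable frequency control), and
"cpufreq=xen[,options]" or "cpufreq=xen:options", where everything after the
"," or ":" separator is forwarded to cpufreq_cmdline_parse().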
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 02/25] Include some header files that are not automatically included on all archs
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v2:
- include asm header files after xen header files;
- remove incorrect comment;
- do not include asm/p2m.h under ia64.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
---
 xen/common/domctl.c | 1 +
 xen/common/grant_table.c | 1 +
 xen/common/irq.c | 1 +
 xen/common/kernel.c | 2 +-
 xen/common/keyhandler.c | 1 +
 xen/common/memory.c | 4 ++--
 xen/common/spinlock.c | 1 +
 xen/common/wait.c | 1 +
 xen/drivers/char/console.c | 1 +
 xen/include/xen/grant_table.h | 1 +
 xen/include/xen/list.h | 1 +
 xen/include/xen/sched.h | 4 ++++
 xen/include/xen/timer.h | 1 +
 xen/include/xen/tmem_xen.h | 1 +
 14 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index d6ae09b..14ab515 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -24,6 +24,7 @@
 #include <xen/paging.h>
 #include <xen/hypercall.h>
 #include <asm/current.h>
+#include <asm/page.h>
 #include <public/domctl.h>
 #include <xsm/xsm.h>
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 014734d..b024016 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -38,6 +38,7 @@
 #include <xen/paging.h>
 #include <xen/keyhandler.h>
 #include <xsm/xsm.h>
+#include <asm/flushtlb.h>

 #ifndef max_nr_grant_frames
 unsigned int max_nr_grant_frames = DEFAULT_MAX_NR_GRANT_FRAMES;
diff --git a/xen/common/irq.c b/xen/common/irq.c
index 6d37dd4..3e55dfa 100644
--- a/xen/common/irq.c
+++ b/xen/common/irq.c
@@ -1,5 +1,6 @@
 #include <xen/config.h>
 #include <xen/irq.h>
+#include <xen/errno.h>

 int init_one_irq_desc(struct irq_desc *desc)
 {
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 7decc1d..f51d387 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -18,8 +18,8 @@
 #include <public/version.h>
 #ifdef CONFIG_X86
 #include <asm/shared.h>
-#include <asm/setup.h>
 #endif
+#include <asm/setup.h>

 #ifndef COMPAT
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index f22fe05..1051a86 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -15,6 +15,7 @@
 #include <xen/compat.h>
 #include <xen/ctype.h>
 #include <xen/perfc.h>
+#include <xen/init.h>
 #include <asm/debugger.h>
 #include <asm/div64.h>
diff --git a/xen/common/memory.c b/xen/common/memory.c
index c796137..8d45439 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -23,8 +23,8 @@
 #include <xen/tmem_xen.h>
 #include <asm/current.h>
 #include <asm/hardirq.h>
-#ifdef CONFIG_X86
-# include <asm/p2m.h>
+#ifndef __ia64__
+#include <asm/p2m.h>
 #endif
 #include <xen/numa.h>
 #include <public/memory.h>
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ecf5b44..bfb9670 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -8,6 +8,7 @@
 #include <xen/preempt.h>
 #include <public/sysctl.h>
 #include <asm/processor.h>
+#include <asm/atomic.h>

 #ifndef NDEBUG
diff --git a/xen/common/wait.c b/xen/common/wait.c
index 2fb2309..92d1a4f 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -23,6 +23,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/wait.h>
+#include <xen/errno.h>

 struct waitqueue_vcpu {
     struct list_head list;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 8a4c684..89cf4f8 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -12,6 +12,7 @@

 #include <xen/version.h>
 #include <xen/lib.h>
+#include <xen/init.h>
 #include <xen/event.h>
 #include <xen/console.h>
 #include <xen/serial.h>
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index c161705..cb0596a 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -26,6 +26,7 @@

 #include <xen/config.h>
 #include <public/grant_table.h>
+#include <asm/page.h>
 #include <asm/grant_table.h>

 /* Active grant entry - used for shadowing GTF_permit_access grants. */
diff --git a/xen/include/xen/list.h b/xen/include/xen/list.h
index b87682f..18443a4 100644
--- a/xen/include/xen/list.h
+++ b/xen/include/xen/list.h
@@ -8,6 +8,7 @@
 #define __XEN_LIST_H__

 #include <xen/lib.h>
+#include <xen/prefetch.h>
 #include <asm/system.h>

 /* These are non-NULL pointers that will result in page faults
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3904afe..6546757 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -14,6 +14,10 @@
 #include <xen/nodemask.h>
 #include <xen/radix-tree.h>
 #include <xen/multicall.h>
+#include <xen/tasklet.h>
+#include <xen/mm.h>
+#include <xen/smp.h>
+#include <asm/atomic.h>
 #include <public/xen.h>
 #include <public/domctl.h>
 #include <public/sysctl.h>
diff --git a/xen/include/xen/timer.h b/xen/include/xen/timer.h
index d209142..7c465fb 100644
--- a/xen/include/xen/timer.h
+++ b/xen/include/xen/timer.h
@@ -12,6 +12,7 @@
 #include <xen/time.h>
 #include <xen/string.h>
 #include <xen/list.h>
+#include <xen/percpu.h>

 struct timer {
     /* System time expiry value (nanoseconds since boot). */
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 76ab440..fdbeed1 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -11,6 +11,7 @@

 #include <xen/config.h>
 #include <xen/mm.h> /* heap alloc/free */
+#include <xen/pfn.h>
 #include <xen/xmalloc.h> /* xmalloc/xfree */
 #include <xen/sched.h>  /* struct domain */
 #include <xen/guest_access.h> /* copy_from_guest */
--
1.7.2.5
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 03/25] A collection of fixes to Xen common files
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

- call free_xenoprof_pages only ifdef CONFIG_XENOPROF;
- define PRI_stime as PRId64 in a header file;
- respect boundaries in is_kernel_*;
- implement is_kernel_rodata;
- guest_physmap_add_page should be ((void)0).

Changes in v4:
- fix guest_physmap_add_page;

Changes in v2:
- introduce CONFIG_XENOPROF;
- define _srodata and _erodata as const char*.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
---
 xen/common/domain.c | 2 ++
 xen/common/sched_credit2.c | 6 ------
 xen/include/asm-ia64/config.h | 1 +
 xen/include/asm-x86/config.h | 2 ++
 xen/include/xen/kernel.h | 12 +++++++++---
 xen/include/xen/paging.h | 2 +-
 xen/include/xen/time.h | 1 +
 7 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1100517..3c6c5af 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -635,7 +635,9 @@ static void complete_domain_destroy(struct rcu_head *head)
     sched_destroy_domain(d);

     /* Free page used by xen oprofile buffer. */
+#ifdef CONFIG_XENOPROF
     free_xenoprof_pages(d);
+#endif

     xfree(d->mem_event);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 65825b4..ac2be2a 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -26,12 +26,6 @@
 #include <xen/trace.h>
 #include <xen/cpu.h>

-#if __i386__
-#define PRI_stime "lld"
-#else
-#define PRI_stime "ld"
-#endif
-
 #define d2printk(x...)
 //#define d2printk printk
diff --git a/xen/include/asm-ia64/config.h b/xen/include/asm-ia64/config.h
index be94b48..0173487 100644
--- a/xen/include/asm-ia64/config.h
+++ b/xen/include/asm-ia64/config.h
@@ -20,6 +20,7 @@
 #define CONFIG_EFI
 #define CONFIG_EFI_PCDP
 #define CONFIG_SERIAL_SGI_L1_CONSOLE
+#define CONFIG_XENOPROF 1

 #define CONFIG_XEN_SMP
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 40a7b8c..905c0f8 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -43,6 +43,8 @@
 #define CONFIG_HOTPLUG 1
 #define CONFIG_HOTPLUG_CPU 1

+#define CONFIG_XENOPROF 1
+
 #define HZ 100

 #define OPT_CONSOLE_STR "vga"
diff --git a/xen/include/xen/kernel.h b/xen/include/xen/kernel.h
index fd03f74..92de428 100644
--- a/xen/include/xen/kernel.h
+++ b/xen/include/xen/kernel.h
@@ -66,19 +66,25 @@
 extern char _start[], _end[];
 #define is_kernel(p) ({ \
     char *__p = (char *)(unsigned long)(p); \
-    (__p >= _start) && (__p <= _end); \
+    (__p >= _start) && (__p < _end); \
 })

 extern char _stext[], _etext[];
 #define is_kernel_text(p) ({ \
     char *__p = (char *)(unsigned long)(p); \
-    (__p >= _stext) && (__p <= _etext); \
+    (__p >= _stext) && (__p < _etext); \
+})
+
+extern const char _srodata[], _erodata[];
+#define is_kernel_rodata(p) ({ \
+    const char *__p = (const char *)(unsigned long)(p); \
+    (__p >= _srodata) && (__p < _erodata); \
 })

 extern char _sinittext[], _einittext[];
 #define is_kernel_inittext(p) ({ \
     char *__p = (char *)(unsigned long)(p); \
-    (__p >= _sinittext) && (__p <= _einittext); \
+    (__p >= _sinittext) && (__p < _einittext); \
 })

 #endif /* _LINUX_KERNEL_H */
diff --git a/xen/include/xen/paging.h b/xen/include/xen/paging.h
index abe276d..a5d3261 100644
--- a/xen/include/xen/paging.h
+++ b/xen/include/xen/paging.h
@@ -20,7 +20,7 @@
 #define paging_mode_translate(d)              (0)
 #define paging_mode_external(d)               (0)

-#define guest_physmap_add_page(d, p, m, o)    (0)
+#define guest_physmap_add_page(d, p, m, o)    ((void)0)
 #define guest_physmap_remove_page(d, p, m, o) ((void)0)

 #endif
diff --git a/xen/include/xen/time.h b/xen/include/xen/time.h
index a194340..31c9ce5 100644
--- a/xen/include/xen/time.h
+++ b/xen/include/xen/time.h
@@ -30,6 +30,7 @@ struct vcpu;
  */

 typedef s64 s_time_t;
+#define PRI_stime PRId64

 s_time_t get_s_time(void);
 unsigned long get_localtime(struct domain *d);
--
1.7.2.5
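A word on the is_kernel_* comparison change above: the linker-provided end
symbols (_end, _etext, _erodata, _einittext) conventionally point one byte
past their region, so the half-open comparison with '<' is the correct one.
A minimal illustration, assuming that same one-past-the-end convention:

/* _start.._end is half-open: _end points just past the last byte of the
 * image, so a pointer equal to _end is NOT inside the hypervisor image. */
extern char _start[], _end[];

static inline int in_image(const void *p)
{
    const char *q = p;
    return (q >= _start) && (q < _end);   /* '<', not '<=' */
}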
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 04/25] xen: implement a signed 64 bit division helper function
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Implement a C function to perform 64 bit signed division and return both
quotient and remainder. Useful as a helper function to implement
__aeabi_ldivmod.

Changes in v4:
- use ABS().

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/lib.c | 19 +++++++++++++++++++
 1 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/xen/common/lib.c b/xen/common/lib.c
index 4ae637c..e9d0637 100644
--- a/xen/common/lib.c
+++ b/xen/common/lib.c
@@ -399,6 +399,25 @@ s64 __moddi3(s64 a, s64 b)
     return (neg ? -urem : urem);
 }

+/*
+ * Quotient and remainder of unsigned long long division
+ */
+s64 __ldivmod_helper(s64 a, s64 b, s64 *r)
+{
+    u64 ua, ub, rem, quot;
+
+    ua = ABS(a);
+    ub = ABS(b);
+    quot = __qdivrem(ua, ub, &rem);
+    if ( a < 0 )
+        *r = -rem;
+    else
+        *r = rem;
+    if ( (a < 0) ^ (b < 0) )
+        return -quot;
+    else
+        return quot;
+}
 #endif /* BITS_PER_LONG == 32 */

 /* Compute with 96 bit intermediate result: (a*b)/c */
--
1.7.2.5
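The __aeabi_ldivmod entry point itself lands later in the series ("arm: bit
manipulation, copy and division libraries"); purely as an illustration of
how this helper slots into the EABI contract, a C model might look like the
sketch below. The struct return is an assumption standing in for the real
convention of returning the quotient in r0-r1 and the remainder in r2-r3:

#include <stdint.h>

int64_t __ldivmod_helper(int64_t a, int64_t b, int64_t *r);

/* Hypothetical C model of __aeabi_ldivmod, for illustration only. */
typedef struct { int64_t quot; int64_t rem; } ldivmod_ret;

static ldivmod_ret aeabi_ldivmod_model(int64_t n, int64_t d)
{
    ldivmod_ret ret;
    ret.quot = __ldivmod_helper(n, d, &ret.rem);  /* quotient + remainder */
    return ret;
}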
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 05/25] Introduce clear_user and clear_guest
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Introduce clear_user for x86 and ia64, shamelessly taken from Linux. The x86
version is the 32 bit clear_user implementation.

Introduce clear_guest for x86 and ia64. The x86 implementation is based on
clear_user and a new clear_user_hvm function. The ia64 implementation is
actually in xencomm and it is based on xencomm_copy_to_guest.

Changes in v3:
- introduce clear_guest.

Changes in v2:
- change d0 to be a long;
- cast addr to long.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/ia64/linux/memcpy_mck.S | 177 ++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c | 107 +++++++++++++++
 xen/arch/x86/usercopy.c | 36 +++++++
 xen/common/xencomm.c | 111 ++++++++++++++++
 xen/include/asm-ia64/uaccess.h | 12 ++
 xen/include/asm-x86/guest_access.h | 18 +++
 xen/include/asm-x86/hvm/guest_access.h | 1 +
 xen/include/asm-x86/uaccess.h | 1 +
 xen/include/xen/guest_access.h | 6 +
 xen/include/xen/xencomm.h | 24 +++++
 10 files changed, 493 insertions(+), 0 deletions(-)

diff --git a/xen/arch/ia64/linux/memcpy_mck.S b/xen/arch/ia64/linux/memcpy_mck.S
index 6f308e6..8b07006 100644
--- a/xen/arch/ia64/linux/memcpy_mck.S
+++ b/xen/arch/ia64/linux/memcpy_mck.S
@@ -659,3 +659,180 @@ EK(.ex_handler,  (p17) st8 [dst1]=r39,8); \
 /* end of McKinley specific optimization */
 END(__copy_user)
+
+/*
+ * Theory of operations:
+ *  - we check whether or not the buffer is small, i.e., less than 17
+ *    in which case we do the byte by byte loop.
+ *
+ *  - Otherwise we go progressively from 1 byte store to 8byte store in
+ *    the head part, the body is a 16byte store loop and we finish we the
+ *    tail for the last 15 bytes.
+ *    The good point about this breakdown is that the long buffer handling
+ *    contains only 2 branches.
+ *
+ *  The reason for not using shifting & masking for both the head and the
+ *  tail is to stay semantically correct. This routine is not supposed
+ *  to write bytes outside of the buffer. While most of the time this would
+ *  be ok, we can't tolerate a mistake. A classical example is the case
+ *  of multithreaded code were to the extra bytes touched is actually owned
+ *  by another thread which runs concurrently to ours. Another, less likely,
+ *  example is with device drivers where reading an I/O mapped location may
+ *  have side effects (same thing for writing).
+ */
+GLOBAL_ENTRY(__do_clear_user)
+    .prologue
+    .save ar.pfs, saved_pfs
+    alloc saved_pfs=ar.pfs,2,0,0,0
+    cmp.eq p6,p0=r0,len     // check for zero length
+    .save ar.lc, saved_lc
+    mov saved_lc=ar.lc      // preserve ar.lc (slow)
+    .body
+    ;;                      // avoid WAW on CFM
+    adds tmp=-1,len         // br.ctop is repeat/until
+    mov ret0=len            // return value is length at this point
+(p6) br.ret.spnt.many rp
+    ;;
+    cmp.lt p6,p0=16,len     // if len > 16 then long memset
+    mov ar.lc=tmp           // initialize lc for small count
+(p6) br.cond.dptk .long_do_clear
+    ;;                      // WAR on ar.lc
+    //
+    // worst case 16 iterations, avg 8 iterations
+    //
+    // We could have played with the predicates to use the extra
+    // M slot for 2 stores/iteration but the cost the initialization
+    // the various counters compared to how long the loop is supposed
+    // to last on average does not make this solution viable.
+    //
+1:
+    EX( .Lexit1, st1 [buf]=r0,1 )
+    adds len=-1,len         // countdown length using len
+    br.cloop.dptk 1b
+    ;;                      // avoid RAW on ar.lc
+    //
+    // .Lexit4: comes from byte by byte loop
+    //          len contains bytes left
+.Lexit1:
+    mov ret0=len            // faster than using ar.lc
+    mov ar.lc=saved_lc
+    br.ret.sptk.many rp     // end of short clear_user
+
+
+    //
+    // At this point we know we have more than 16 bytes to copy
+    // so we focus on alignment (no branches required)
+    //
+    // The use of len/len2 for countdown of the number of bytes left
+    // instead of ret0 is due to the fact that the exception code
+    // changes the values of r8.
+    //
+.long_do_clear:
+    tbit.nz p6,p0=buf,0     // odd alignment (for long_do_clear)
+    ;;
+    EX( .Lexit3, (p6) st1 [buf]=r0,1 )  // 1-byte aligned
+(p6) adds len=-1,len;;      // sync because buf is modified
+    tbit.nz p6,p0=buf,1
+    ;;
+    EX( .Lexit3, (p6) st2 [buf]=r0,2 )  // 2-byte aligned
+(p6) adds len=-2,len;;
+    tbit.nz p6,p0=buf,2
+    ;;
+    EX( .Lexit3, (p6) st4 [buf]=r0,4 )  // 4-byte aligned
+(p6) adds len=-4,len;;
+    tbit.nz p6,p0=buf,3
+    ;;
+    EX( .Lexit3, (p6) st8 [buf]=r0,8 )  // 8-byte aligned
+(p6) adds len=-8,len;;
+    shr.u cnt=len,4         // number of 128-bit (2x64bit) words
+    ;;
+    cmp.eq p6,p0=r0,cnt
+    adds tmp=-1,cnt
+(p6) br.cond.dpnt .dotail   // we have less than 16 bytes left
+    ;;
+    adds buf2=8,buf         // setup second base pointer
+    mov ar.lc=tmp
+    ;;
+
+    //
+    // 16bytes/iteration core loop
+    //
+    // The second store can never generate a fault because
+    // we come into the loop only when we are 16-byte aligned.
+    // This means that if we cross a page then it will always be
+    // in the first store and never in the second.
+    //
+    //
+    // We need to keep track of the remaining length. A possible (optimistic)
+    // way would be to use ar.lc and derive how many byte were left by
+    // doing : left= 16*ar.lc + 16. this would avoid the addition at
+    // every iteration.
+    // However we need to keep the synchronization point. A template
+    // M;;MB does not exist and thus we can keep the addition at no
+    // extra cycle cost (use a nop slot anyway). It also simplifies the
+    // (unlikely) error recovery code
+    //
+
+2:  EX(.Lexit3, st8 [buf]=r0,16 )
+    ;;                      // needed to get len correct when error
+    st8 [buf2]=r0,16
+    adds len=-16,len
+    br.cloop.dptk 2b
+    ;;
+    mov ar.lc=saved_lc
+    //
+    // tail correction based on len only
+    //
+    // We alternate the use of len3,len2 to allow parallelism and correct
+    // error handling. We also reuse p6/p7 to return correct value.
+    // The addition of len2/len3 does not cost anything more compared to
+    // the regular memset as we had empty slots.
+    //
+.dotail:
+    mov len2=len            // for parallelization of error handling
+    mov len3=len
+    tbit.nz p6,p0=len,3
+    ;;
+    EX( .Lexit2, (p6) st8 [buf]=r0,8 )  // at least 8 bytes
+(p6) adds len3=-8,len2
+    tbit.nz p7,p6=len,2
+    ;;
+    EX( .Lexit2, (p7) st4 [buf]=r0,4 )  // at least 4 bytes
+(p7) adds len2=-4,len3
+    tbit.nz p6,p7=len,1
+    ;;
+    EX( .Lexit2, (p6) st2 [buf]=r0,2 )  // at least 2 bytes
+(p6) adds len3=-2,len2
+    tbit.nz p7,p6=len,0
+    ;;
+    EX( .Lexit2, (p7) st1 [buf]=r0 )    // only 1 byte left
+    mov ret0=r0             // success
+    br.ret.sptk.many rp     // end of most likely path
+
+    //
+    // Outlined error handling code
+    //
+
+    //
+    // .Lexit3: comes from core loop, need restore pr/lc
+    //          len contains bytes left
+    //
+    //
+    // .Lexit2:
+    //      if p6 -> coming from st8 or st2 : len2 contains what's left
+    //      if p7 -> coming from st4 or st1 : len3 contains what's left
+    // We must restore lc/pr even though might not have been used.
+.Lexit2:
+    .pred.rel "mutex", p6, p7
+(p6) mov len=len2
+(p7) mov len=len3
+    ;;
+    //
+    // .Lexit4: comes from head, need not restore pr/lc
+    //          len contains bytes left
+    //
+.Lexit3:
+    mov ret0=len
+    mov ar.lc=saved_lc
+    br.ret.sptk.many rp
+END(__do_clear_user)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 160a47f..de1a0ed 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2390,6 +2390,96 @@ static enum hvm_copy_result __hvm_copy(
     return HVMCOPY_okay;
 }

+static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
+{
+    struct vcpu *curr = current;
+    unsigned long gfn, mfn;
+    p2m_type_t p2mt;
+    char *p;
+    int count, todo = size;
+    uint32_t pfec = PFEC_page_present | PFEC_write_access;
+
+    /*
+     * XXX Disable for 4.1.0: PV-on-HVM drivers will do grant-table ops
+     * such as query_size. Grant-table code currently does copy_to/from_guest
+     * accesses under the big per-domain lock, which this test would disallow.
+     * The test is not needed until we implement sleeping-on-waitqueue when
+     * we access a paged-out frame, and that's post 4.1.0 now.
+     */
+#if 0
+    /*
+     * If the required guest memory is paged out, this function may sleep.
+     * Hence we bail immediately if called from atomic context.
+     */
+    if ( in_atomic() )
+        return HVMCOPY_unhandleable;
+#endif
+
+    while ( todo > 0 )
+    {
+        count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
+
+        gfn = paging_gva_to_gfn(curr, addr, &pfec);
+        if ( gfn == INVALID_GFN )
+        {
+            if ( pfec == PFEC_page_paged )
+                return HVMCOPY_gfn_paged_out;
+            if ( pfec == PFEC_page_shared )
+                return HVMCOPY_gfn_shared;
+            return HVMCOPY_bad_gva_to_gfn;
+        }
+
+        mfn = mfn_x(get_gfn_unshare(curr->domain, gfn, &p2mt));
+
+        if ( p2m_is_paging(p2mt) )
+        {
+            p2m_mem_paging_populate(curr->domain, gfn);
+            put_gfn(curr->domain, gfn);
+            return HVMCOPY_gfn_paged_out;
+        }
+        if ( p2m_is_shared(p2mt) )
+        {
+            put_gfn(curr->domain, gfn);
+            return HVMCOPY_gfn_shared;
+        }
+        if ( p2m_is_grant(p2mt) )
+        {
+            put_gfn(curr->domain, gfn);
+            return HVMCOPY_unhandleable;
+        }
+        if ( !p2m_is_ram(p2mt) )
+        {
+            put_gfn(curr->domain, gfn);
+            return HVMCOPY_bad_gfn_to_mfn;
+        }
+        ASSERT(mfn_valid(mfn));
+
+        p = (char *)map_domain_page(mfn) + (addr & ~PAGE_MASK);
+
+        if ( p2mt == p2m_ram_ro )
+        {
+            static unsigned long lastpage;
+            if ( xchg(&lastpage, gfn) != gfn )
+                gdprintk(XENLOG_DEBUG, "guest attempted write to read-only"
+                         " memory page. gfn=%#lx, mfn=%#lx\n",
+                         gfn, mfn);
+        }
+        else
+        {
+            memset(p, 0x00, count);
+            paging_mark_dirty(curr->domain, mfn);
+        }
+
+        unmap_domain_page(p);
+
+        addr += count;
+        todo -= count;
+        put_gfn(curr->domain, gfn);
+    }
+
+    return HVMCOPY_okay;
+}
+
 enum hvm_copy_result hvm_copy_to_guest_phys(
     paddr_t paddr, void *buf, int size)
 {
@@ -2476,6 +2566,23 @@
     return rc ? len : 0; /* fake a copy_to_user() return code */
 }

+unsigned long clear_user_hvm(void *to, unsigned int len)
+{
+    int rc;
+
+#ifdef __x86_64__
+    if ( !current->arch.hvm_vcpu.hcall_64bit &&
+         is_compat_arg_xlat_range(to, len) )
+    {
+        memset(to, 0x00, len);
+        return 0;
+    }
+#endif
+
+    rc = __hvm_clear((unsigned long)to, len);
+    return rc ? len : 0; /* fake a copy_to_user() return code */
+}
+
 unsigned long copy_from_user_hvm(void *to, const void *from, unsigned len)
 {
     int rc;
diff --git a/xen/arch/x86/usercopy.c b/xen/arch/x86/usercopy.c
index d88e635..47dadae 100644
--- a/xen/arch/x86/usercopy.c
+++ b/xen/arch/x86/usercopy.c
@@ -110,6 +110,42 @@
     return n;
 }

+#define __do_clear_user(addr,size) \
+do { \
+    long __d0; \
+    __asm__ __volatile__( \
+        "0: rep; stosl\n" \
+        "   movl %2,%0\n" \
+        "1: rep; stosb\n" \
+        "2:\n" \
+        ".section .fixup,\"ax\"\n" \
+        "3: lea 0(%2,%0,4),%0\n" \
+        "   jmp 2b\n" \
+        ".previous\n" \
+        _ASM_EXTABLE(0b,3b) \
+        _ASM_EXTABLE(1b,2b) \
+        : "=&c"(size), "=&D" (__d0) \
+        : "r"(size & 3), "0"(size / 4), "1"((long)addr), "a"(0)); \
+} while (0)
+
+/**
+ * clear_user: - Zero a block of memory in user space.
+ * @to:   Destination address, in user space.
+ * @n:    Number of bytes to zero.
+ *
+ * Zero a block of memory in user space.
+ *
+ * Returns number of bytes that could not be cleared.
+ * On success, this will be zero.
+ */
+unsigned long
+clear_user(void __user *to, unsigned n)
+{
+    if ( access_ok(to, n) )
+        __do_clear_user(to, n);
+    return n;
+}
+
 /**
  * copy_from_user: - Copy a block of data from user space.
  * @to:   Destination address, in kernel space.
diff --git a/xen/common/xencomm.c b/xen/common/xencomm.c
index 2475454..9f6c1c5 100644
--- a/xen/common/xencomm.c
+++ b/xen/common/xencomm.c
@@ -414,6 +414,117 @@ out:
     return n - from_pos;
 }

+static int
+xencomm_clear_chunk(
+    unsigned long paddr, unsigned int len)
+{
+    struct page_info *page;
+    int res;
+
+    do {
+        res = xencomm_get_page(paddr, &page);
+    } while ( res == -EAGAIN );
+
+    if ( res )
+        return res;
+
+    memset(xencomm_vaddr(paddr, page), 0x00, len);
+    xencomm_mark_dirty((unsigned long)xencomm_vaddr(paddr, page), len);
+    put_page(page);
+
+    return 0;
+}
+
+static unsigned long
+xencomm_inline_clear_guest(
+    void *to, unsigned int n, unsigned int skip)
+{
+    unsigned long dest_paddr = xencomm_inline_addr(to) + skip;
+
+    while ( n > 0 )
+    {
+        unsigned int chunksz, bytes;
+
+        chunksz = PAGE_SIZE - (dest_paddr % PAGE_SIZE);
+        bytes = min(chunksz, n);
+
+        if ( xencomm_clear_chunk(dest_paddr, bytes) )
+            return n;
+        dest_paddr += bytes;
+        n -= bytes;
+    }
+
+    /* Always successful. */
+    return 0;
+}
+
+/**
+ * xencomm_clear_guest: Clear a block of data in domain space.
+ * @to:   Physical address to xencomm buffer descriptor.
+ * @n:    Number of bytes to copy.
+ * @skip: Number of bytes from the start to skip.
+ *
+ * Clear domain data
+ *
+ * Returns number of bytes that could not be cleared
+ * On success, this will be zero.
+ */
+unsigned long
+xencomm_clear_guest(
+    void *to, unsigned int n, unsigned int skip)
+{
+    struct xencomm_ctxt ctxt;
+    unsigned int from_pos = 0;
+    unsigned int to_pos = 0;
+    unsigned int i = 0;
+
+    if ( xencomm_is_inline(to) )
+        return xencomm_inline_clear_guest(to, n, skip);
+
+    if ( xencomm_ctxt_init(to, &ctxt) )
+        return n;
+
+    /* Iterate through the descriptor, copying up to a page at a time */
+    while ( (from_pos < n) && (i < xencomm_ctxt_nr_addrs(&ctxt)) )
+    {
+        unsigned long dest_paddr;
+        unsigned int pgoffset, chunksz, chunk_skip;
+
+        if ( xencomm_ctxt_next(&ctxt, i) )
+            goto out;
+        dest_paddr = *xencomm_ctxt_address(&ctxt);
+        if ( dest_paddr == XENCOMM_INVALID )
+        {
+            i++;
+            continue;
+        }
+
+        pgoffset = dest_paddr % PAGE_SIZE;
+        chunksz = PAGE_SIZE - pgoffset;
+
+        chunk_skip = min(chunksz, skip);
+        to_pos += chunk_skip;
+        chunksz -= chunk_skip;
+        skip -= chunk_skip;
+
+        if ( skip == 0 && chunksz > 0 )
+        {
+            unsigned int bytes = min(chunksz, n - from_pos);
+
+            if ( xencomm_clear_chunk(dest_paddr + chunk_skip, bytes) )
+                goto out;
+            from_pos += bytes;
+            to_pos += bytes;
+        }
+
+        i++;
+    }
+
+out:
+    xencomm_ctxt_done(&ctxt);
+    return n - from_pos;
+}
+
 static int xencomm_inline_add_offset(void **handle, unsigned int bytes)
 {
     *handle += bytes;
diff --git a/xen/include/asm-ia64/uaccess.h b/xen/include/asm-ia64/uaccess.h
index 32ef415..2ececb1 100644
--- a/xen/include/asm-ia64/uaccess.h
+++ b/xen/include/asm-ia64/uaccess.h
@@ -236,6 +236,18 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
     __cu_len; \
 })

+extern unsigned long __do_clear_user (void __user * to, unsigned long count);
+
+#define clear_user(to, n) \
+({ \
+    void __user *__cu_to = (to); \
+    long __cu_len = (n); \
+ \
+    if (__access_ok(__cu_to)) \
+        __cu_len = __do_clear_user(__cu_to, __cu_len); \
+    __cu_len; \
+})
+
 #define copy_from_user(to, from, n) \
 ({ \
     void *__cu_to = (to); \
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 99ea64d..2b429c2 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -21,6 +21,10 @@
     (is_hvm_vcpu(current) ? \
      copy_from_user_hvm((dst), (src), (len)) : \
      copy_from_user((dst), (src), (len)))
+#define raw_clear_guest(dst, len) \
+    (is_hvm_vcpu(current) ? \
+     clear_user_hvm((dst), (len)) : \
+     clear_user((dst), (len)))
 #define __raw_copy_to_guest(dst, src, len) \
     (is_hvm_vcpu(current) ? \
      copy_to_user_hvm((dst), (src), (len)) : \
@@ -29,6 +33,10 @@
     (is_hvm_vcpu(current) ? \
      copy_from_user_hvm((dst), (src), (len)) : \
      __copy_from_user((dst), (src), (len)))
+#define __raw_clear_guest(dst, len) \
+    (is_hvm_vcpu(current) ? \
+     clear_user_hvm((dst), (len)) : \
+     clear_user((dst), (len)))

 /* Is the guest handle a NULL reference? */
 #define guest_handle_is_null(hnd)        ((hnd).p == NULL)
@@ -69,6 +77,11 @@
     raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
 })

+#define clear_guest_offset(hnd, off, nr) ({ \
+    void *_d = (hnd).p; \
+    raw_clear_guest(_d+(off), nr); \
+})
+
 /* Copy sub-field of a structure to guest context via a guest handle. */
 #define copy_field_to_guest(hnd, ptr, field) ({ \
     const typeof(&(ptr)->field) _s = &(ptr)->field; \
@@ -110,6 +123,11 @@
     __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
 })

+#define __clear_guest_offset(hnd, off, nr) ({ \
+    void *_d = (hnd).p; \
+    __raw_clear_guest(_d+(off), nr); \
+})
+
 #define __copy_field_to_guest(hnd, ptr, field) ({ \
     const typeof(&(ptr)->field) _s = &(ptr)->field; \
     void *_d = &(hnd).p->field; \
diff --git a/xen/include/asm-x86/hvm/guest_access.h b/xen/include/asm-x86/hvm/guest_access.h
index 7a89e81..b92dbe9 100644
--- a/xen/include/asm-x86/hvm/guest_access.h
+++ b/xen/include/asm-x86/hvm/guest_access.h
@@ -2,6 +2,7 @@
 #define __ASM_X86_HVM_GUEST_ACCESS_H__

 unsigned long copy_to_user_hvm(void *to, const void *from, unsigned len);
+unsigned long clear_user_hvm(void *to, unsigned int len);
 unsigned long copy_from_user_hvm(void *to, const void *from, unsigned len);

 #endif /* __ASM_X86_HVM_GUEST_ACCESS_H__ */
diff --git a/xen/include/asm-x86/uaccess.h b/xen/include/asm-x86/uaccess.h
index e3e541b..d6f4458 100644
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -16,6 +16,7 @@
 #endif

 unsigned long copy_to_user(void *to, const void *from, unsigned len);
+unsigned long clear_user(void *to, unsigned len);
 unsigned long copy_from_user(void *to, const void *from, unsigned len);
 /* Handles exceptions in both to and from, but doesn't do access_ok */
 unsigned long __copy_to_user_ll(void *to, const void *from, unsigned n);
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index 0b9fb07..373454e 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -15,10 +15,16 @@
 #define copy_from_guest(ptr, hnd, nr) \
     copy_from_guest_offset(ptr, hnd, 0, nr)

+#define clear_guest(hnd, nr) \
+    clear_guest_offset(hnd, 0, nr)
+
 #define __copy_to_guest(hnd, ptr, nr) \
     __copy_to_guest_offset(hnd, 0, ptr, nr)

 #define __copy_from_guest(ptr, hnd, nr) \
     __copy_from_guest_offset(ptr, hnd, 0, nr)

+#define __clear_guest(hnd, nr) \
+    __clear_guest_offset(hnd, 0, nr)
+
 #endif /* __XEN_GUEST_ACCESS_H__ */
diff --git a/xen/include/xen/xencomm.h b/xen/include/xen/xencomm.h
index bce2ca7..730da7c 100644
--- a/xen/include/xen/xencomm.h
+++ b/xen/include/xen/xencomm.h
@@ -27,6 +27,8 @@ unsigned long xencomm_copy_to_guest(
     void *to, const void *from, unsigned int len, unsigned int skip);
 unsigned long xencomm_copy_from_guest(
     void *to, const void *from, unsigned int len, unsigned int skip);
+unsigned long xencomm_clear_guest(
+    void *to, unsigned int n, unsigned int skip);
 int xencomm_add_offset(void **handle, unsigned int bytes);
 int xencomm_handle_is_null(void *ptr);

@@ -41,6 +43,16 @@ static inline unsigned long xencomm_inline_addr(const void *handle)
     return (unsigned long)handle & ~XENCOMM_INLINE_FLAG;
 }

+#define raw_copy_to_guest(dst, src, len) \
+    xencomm_copy_to_guest(dst, src, len, 0)
+#define raw_copy_from_guest(dst, src, len) \
+    xencomm_copy_from_guest(dst, src, nr, 0)
+#define raw_clear_guest(dst, len) \
+    xencomm_clear_guest(dst, len, 0)
+#define __raw_copy_to_guest raw_copy_to_guest
+#define __raw_copy_from_guest raw_copy_from_guest
+#define __raw_clear_guest raw_clear_guest
+
 /* Is the guest handle a NULL reference? */
 #define guest_handle_is_null(hnd) \
     ((hnd).p == NULL || xencomm_handle_is_null((hnd).p))
@@ -82,6 +94,13 @@ static inline unsigned long xencomm_inline_addr(const void *handle)
 #define copy_from_guest_offset(ptr, hnd, idx, nr) \
     __copy_from_guest_offset(ptr, hnd, idx, nr)

+/*
+ * Clear an array of objects in guest context via a guest handle.
+ * Optionally specify an offset into the guest array.
+ */
+#define clear_guest_offset(hnd, idx, nr) \
+    __clear_guest_offset(hnd, idx, nr)
+
 /* Copy sub-field of a structure from guest context via a guest handle. */
 #define copy_field_from_guest(ptr, hnd, field) \
     __copy_field_from_guest(ptr, hnd, field)
@@ -115,6 +134,11 @@ static inline unsigned long xencomm_inline_addr(const void *handle)
     xencomm_copy_from_guest(_d, _s, sizeof(*_d), _off); \
 })

+#define __clear_guest_offset(hnd, idx, nr) ({ \
+    void *_d = (hnd).p; \
+    xencomm_clear_guest(_d, nr, idx); \
+})
+
 #ifdef CONFIG_XENCOMM_MARK_DIRTY
 extern void xencomm_mark_dirty(unsigned long addr, unsigned int len);
 #else
--
1.7.2.5
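As a usage sketch only (hypothetical caller, not code from the series): a
hypercall handler that must hand a zeroed buffer back to the guest can now
do so directly, without bouncing a zero-filled page through copy_to_guest.
The function name and handle type below are illustrative assumptions:

/* Hypothetical helper: zero `len` bytes of a guest buffer via handle `hnd`.
 * clear_guest() returns the number of bytes that could NOT be cleared. */
static long zero_guest_buffer(XEN_GUEST_HANDLE(void) hnd, unsigned int len)
{
    if ( clear_guest(hnd, len) != 0 )
        return -EFAULT;   /* partial clear: report failure to the guest */
    return 0;
}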
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 06/25] libelf-loader: introduce elf_load_image
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Implement a new function, called elf_load_image, to perform the actual copy
of the elf image and clear the padding. The function is implemented as
memcpy and memset when the library is built as part of the tools, but it is
implemented as raw_copy_to_guest and raw_clear_guest when built as part of
Xen, so that it can be safely called with an HVM style dom0.

Changes in v4:
- check for return values in elf_load_image.

Changes in v3:
- switch to raw_copy_to_guest and raw_clear_guest.

Changes in v2:
- remove CONFIG_KERNEL_NO_RELOC.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
---
 tools/libxc/xc_dom_elfloader.c | 8 +++++++-
 tools/libxc/xc_hvm_build.c | 5 +++--
 xen/arch/x86/domain_build.c | 7 ++++++-
 xen/common/libelf/libelf-loader.c | 28 +++++++++++++++++++++++++---
 xen/include/xen/libelf.h | 2 +-
 5 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/tools/libxc/xc_dom_elfloader.c b/tools/libxc/xc_dom_elfloader.c
index 4d7b8e0..2e69559 100644
--- a/tools/libxc/xc_dom_elfloader.c
+++ b/tools/libxc/xc_dom_elfloader.c
@@ -310,9 +310,15 @@ static int xc_dom_parse_elf_kernel(struct xc_dom_image *dom)
 static int xc_dom_load_elf_kernel(struct xc_dom_image *dom)
 {
     struct elf_binary *elf = dom->private_loader;
+    int rc;

     elf->dest = xc_dom_seg_to_ptr(dom, &dom->kernel_seg);
-    elf_load_binary(elf);
+    rc = elf_load_binary(elf);
+    if ( rc < 0 )
+    {
+        DOMPRINTF("%s: failed to load elf binary", __FUNCTION__);
+        return rc;
+    }
     if ( dom->parms.bsd_symtab )
         xc_dom_load_elf_symtab(dom, elf, 1);
     return 0;
diff --git a/tools/libxc/xc_hvm_build.c b/tools/libxc/xc_hvm_build.c
index 9831bab..1fa5658 100644
--- a/tools/libxc/xc_hvm_build.c
+++ b/tools/libxc/xc_hvm_build.c
@@ -109,8 +109,9 @@ static int loadelfimage(
     elf->dest += elf->pstart & (PAGE_SIZE - 1);

     /* Load the initial elf image. */
-    elf_load_binary(elf);
-    rc = 0;
+    rc = elf_load_binary(elf);
+    if ( rc < 0 )
+        PERROR("Failed to load elf binary\n");

     munmap(elf->dest, pages << PAGE_SHIFT);
     elf->dest = NULL;
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 1b3818f..b3c5d4c 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -903,7 +903,12 @@ int __init construct_dom0(

     /* Copy the OS image and free temporary buffer. */
     elf.dest = (void*)vkern_start;
-    elf_load_binary(&elf);
+    rc = elf_load_binary(&elf);
+    if ( rc < 0 )
+    {
+        printk("Failed to load the kernel binary\n");
+        return rc;
+    }
     bootstrap_map(NULL);

     if ( UNSET_ADDR != parms.virt_hypercall )
diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
index 1ccf7d3..4cdb03b 100644
--- a/xen/common/libelf/libelf-loader.c
+++ b/xen/common/libelf/libelf-loader.c
@@ -107,11 +107,32 @@ void elf_set_log(struct elf_binary *elf, elf_log_callback *log_callback,
     elf->log_caller_data = log_caller_data;
     elf->verbose = verbose;
 }
+
+static int elf_load_image(void *dst, const void *src, uint64_t filesz, uint64_t memsz)
+{
+    memcpy(dst, src, filesz);
+    memset(dst + filesz, 0, memsz - filesz);
+    return 0;
+}
 #else
+#include <asm/guest_access.h>
+
 void elf_set_verbose(struct elf_binary *elf)
 {
     elf->verbose = 1;
 }
+
+static int elf_load_image(void *dst, const void *src, uint64_t filesz, uint64_t memsz)
+{
+    int rc;
+    rc = raw_copy_to_guest(dst, src, filesz);
+    if ( rc != 0 )
+        return -rc;
+    rc = raw_clear_guest(dst + filesz, memsz - filesz);
+    if ( rc != 0 )
+        return -rc;
+    return 0;
+}
 #endif

 /* Calculate the required additional kernel space for the elf image */
@@ -237,7 +258,7 @@ void elf_parse_binary(struct elf_binary *elf)
             __FUNCTION__, elf->pstart, elf->pend);
 }

-void elf_load_binary(struct elf_binary *elf)
+int elf_load_binary(struct elf_binary *elf)
 {
     const elf_phdr *phdr;
     uint64_t i, count, paddr, offset, filesz, memsz;
@@ -256,11 +277,12 @@ void elf_load_binary(struct elf_binary *elf)
         dest = elf_get_ptr(elf, paddr);
         elf_msg(elf, "%s: phdr %" PRIu64 " at 0x%p -> 0x%p\n",
                 __func__, i, dest, dest + filesz);
-        memcpy(dest, elf->image + offset, filesz);
-        memset(dest + filesz, 0, memsz - filesz);
+        if ( elf_load_image(dest, elf->image + offset, filesz, memsz) != 0 )
+            return -1;
     }

     elf_load_bsdsyms(elf);
+    return 0;
 }

 void *elf_get_ptr(struct elf_binary *elf, unsigned long addr)
diff --git a/xen/include/xen/libelf.h b/xen/include/xen/libelf.h
index 9de84eb..d77bda6 100644
--- a/xen/include/xen/libelf.h
+++ b/xen/include/xen/libelf.h
@@ -198,7 +198,7 @@ void elf_set_log(struct elf_binary *elf, elf_log_callback*,
 #endif

 void elf_parse_binary(struct elf_binary *elf);
-void elf_load_binary(struct elf_binary *elf);
+int elf_load_binary(struct elf_binary *elf);

 void *elf_get_ptr(struct elf_binary *elf, unsigned long addr);
 uint64_t elf_lookup_addr(struct elf_binary *elf, const char *symbol);
--
1.7.2.5
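One subtlety worth noting in the Xen-side elf_load_image above:
raw_copy_to_guest and raw_clear_guest return the number of bytes they could
NOT transfer, so returning -rc turns any partial copy/clear into the
negative error indicator promised in the v4 changelog. A hypothetical caller
sketch, for illustration only:

/* Hypothetical wrapper, not from the series: load and report failure. */
static int try_load(struct elf_binary *elf)
{
    int rc = elf_load_binary(elf);   /* 0 on success, negative on failure */
    if ( rc < 0 )
        printk("kernel image load failed: %d\n", rc);
    return rc;
}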
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 07/25] xen/common/Makefile: introduce HAS_CPUFREQ, HAS_PCI, HAS_PASSTHROUGH, HAS_NS16550, HAS_KEXEC
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

- make the compilation of ns16550.c depend upon HAS_NS16550;
- make the compilation of cpufreq depend upon HAS_CPUFREQ;
- make the compilation of pci depend upon HAS_PCI;
- make the compilation of passthrough depend upon HAS_PASSTHROUGH;
- make the compilation of kexec depend upon HAS_KEXEC.

Changes in v2:
- introduce HAS_KEXEC and CONFIG_KEXEC.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
---
 xen/arch/ia64/Rules.mk | 5 +++++
 xen/arch/x86/Rules.mk | 5 +++++
 xen/common/Makefile | 2 +-
 xen/common/shutdown.c | 4 ++++
 xen/drivers/Makefile | 6 +++---
 xen/drivers/char/Makefile | 2 +-
 xen/drivers/char/console.c | 4 ++++
 xen/include/asm-ia64/config.h | 1 +
 xen/include/asm-x86/config.h | 1 +
 9 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/xen/arch/ia64/Rules.mk b/xen/arch/ia64/Rules.mk
index bef11c3..054b4de 100644
--- a/xen/arch/ia64/Rules.mk
+++ b/xen/arch/ia64/Rules.mk
@@ -4,6 +4,11 @@ ia64 := y

 HAS_ACPI := y
 HAS_VGA := y
+HAS_CPUFREQ := y
+HAS_PCI := y
+HAS_PASSTHROUGH := y
+HAS_NS16550 := y
+HAS_KEXEC := y
 xenoprof := y
 no_warns ?= n
 vti_debug ?= n
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index bf77aef..1e48877 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -3,6 +3,11 @@

 HAS_ACPI := y
 HAS_VGA := y
+HAS_CPUFREQ := y
+HAS_PCI := y
+HAS_PASSTHROUGH := y
+HAS_NS16550 := y
+HAS_KEXEC := y
 xenoprof := y

 #
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 1d85e65..9249845 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -8,7 +8,7 @@ obj-y += grant_table.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += keyhandler.o
-obj-y += kexec.o
+obj-$(HAS_KEXEC) += kexec.o
 obj-y += lib.o
 obj-y += memory.o
 obj-y += multicall.o
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index e356e86..b18ef5d 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -6,7 +6,9 @@
 #include <xen/delay.h>
 #include <xen/shutdown.h>
 #include <xen/console.h>
+#ifdef CONFIG_KEXEC
 #include <xen/kexec.h>
+#endif
 #include <asm/debugger.h>
 #include <public/sched.h>

@@ -58,7 +60,9 @@
     case SHUTDOWN_watchdog:
     {
         printk("Domain 0 shutdown: watchdog rebooting machine.\n");
+#ifdef CONFIG_KEXEC
         kexec_crash();
+#endif
         machine_restart(0);
         break; /* not reached */
     }
diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
index eb4fb61..7239375 100644
--- a/xen/drivers/Makefile
+++ b/xen/drivers/Makefile
@@ -1,6 +1,6 @@
 subdir-y += char
-subdir-y += cpufreq
-subdir-y += pci
-subdir-y += passthrough
+subdir-$(HAS_CPUFREQ) += cpufreq
+subdir-$(HAS_PCI) += pci
+subdir-$(HAS_PASSTHROUGH) += passthrough
 subdir-$(HAS_ACPI) += acpi
 subdir-$(HAS_VGA) += video
diff --git a/xen/drivers/char/Makefile b/xen/drivers/char/Makefile
index ded9a94..19250c8 100644
--- a/xen/drivers/char/Makefile
+++ b/xen/drivers/char/Makefile
@@ -1,3 +1,3 @@
 obj-y += console.o
-obj-y += ns16550.o
+obj-$(HAS_NS16550) += ns16550.o
 obj-y += serial.o
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 89cf4f8..19f021c 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -22,7 +22,9 @@
 #include <xen/guest_access.h>
 #include <xen/shutdown.h>
 #include <xen/vga.h>
+#ifdef CONFIG_KEXEC
 #include <xen/kexec.h>
+#endif
 #include <asm/debugger.h>
 #include <asm/div64.h>
 #include <xen/hypercall.h> /* for do_console_io */
@@ -961,7 +963,9 @@ void panic(const char *fmt, ...)

     debugger_trap_immediate();

+#ifdef CONFIG_KEXEC
     kexec_crash();
+#endif

     if ( opt_noreboot )
     {
diff --git a/xen/include/asm-ia64/config.h b/xen/include/asm-ia64/config.h
index 0173487..6e9fc98 100644
--- a/xen/include/asm-ia64/config.h
+++ b/xen/include/asm-ia64/config.h
@@ -20,6 +20,7 @@
 #define CONFIG_EFI
 #define CONFIG_EFI_PCDP
 #define CONFIG_SERIAL_SGI_L1_CONSOLE
+#define CONFIG_KEXEC 1
 #define CONFIG_XENOPROF 1

 #define CONFIG_XEN_SMP
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 905c0f8..cf6a4cc 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -44,6 +44,7 @@
 #define CONFIG_HOTPLUG_CPU 1

 #define CONFIG_XENOPROF 1
+#define CONFIG_KEXEC 1

 #define HZ 100
--
1.7.2.5
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 08/25] arm: compile tmem
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Include a few missing header files; introduce defined(CONFIG_ARM) where
required.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/tmem.c | 3 ++-
 xen/common/tmem_xen.c | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 115465b..dd276df 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -22,6 +22,7 @@
 #include <xen/rbtree.h>
 #include <xen/radix-tree.h>
 #include <xen/list.h>
+#include <xen/init.h>

 #define EXPORT /* indicates code other modules are dependent upon */
 #define FORWARD
@@ -49,7 +50,7 @@
 #define INVERT_SENTINEL(_x,_y) _x->sentinel = ~_y##_SENTINEL
 #define ASSERT_SENTINEL(_x,_y) \
     ASSERT(_x->sentinel != ~_y##_SENTINEL);ASSERT(_x->sentinel == _y##_SENTINEL)
-#ifdef __i386__
+#if defined(__i386__) || defined(CONFIG_ARM)
 #define POOL_SENTINEL 0x87658765
 #define OBJ_SENTINEL 0x12345678
 #define OBJNODE_SENTINEL 0xfedcba09
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 15f1806..9b2a22c 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -12,6 +12,8 @@
 #include <xen/paging.h>
 #include <xen/domain_page.h>
 #include <xen/cpu.h>
+#include <xen/init.h>
+#include <asm/p2m.h>

 #define EXPORT /* indicates code other modules are dependent upon */

@@ -87,7 +89,7 @@ void tmh_copy_page(char *to, char*from)
 #endif
 }

-#ifdef __ia64__
+#if defined(__ia64__) || defined (CONFIG_ARM)
 static inline void *cli_get_page(tmem_cli_mfn_t cmfn, unsigned long *pcli_mfn,
                                  pfp_t **pcli_pfp, bool_t cli_write)
 {
--
1.7.2.5
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 09/25] arm: header files
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> A simple implementation of everything under asm-arm and arch-arm.h; some of these files are shamelessly taken from Linux. Changes in v4: - bring atomic access routines in line with upstream changes; - fix build for -wunused-values; Changes in v2: - remove div64. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/include/asm-arm/atomic.h | 236 ++++++++++++++++++++++++++++++++ xen/include/asm-arm/bitops.h | 195 +++++++++++++++++++++++++++ xen/include/asm-arm/bug.h | 15 ++ xen/include/asm-arm/byteorder.h | 16 +++ xen/include/asm-arm/cache.h | 20 +++ xen/include/asm-arm/config.h | 122 +++++++++++++++++ xen/include/asm-arm/cpregs.h | 207 ++++++++++++++++++++++++++++ xen/include/asm-arm/current.h | 60 ++++++++ xen/include/asm-arm/debugger.h | 15 ++ xen/include/asm-arm/delay.h | 15 ++ xen/include/asm-arm/desc.h | 12 ++ xen/include/asm-arm/div64.h | 235 ++++++++++++++++++++++++++++++++ xen/include/asm-arm/elf.h | 33 +++++ xen/include/asm-arm/event.h | 41 ++++++ xen/include/asm-arm/flushtlb.h | 31 +++++ xen/include/asm-arm/grant_table.h | 35 +++++ xen/include/asm-arm/hardirq.h | 28 ++++ xen/include/asm-arm/hypercall.h | 24 ++++ xen/include/asm-arm/init.h | 12 ++ xen/include/asm-arm/io.h | 12 ++ xen/include/asm-arm/iocap.h | 20 +++ xen/include/asm-arm/multicall.h | 23 +++ xen/include/asm-arm/nmi.h | 15 ++ xen/include/asm-arm/numa.h | 21 +++ xen/include/asm-arm/paging.h | 13 ++ xen/include/asm-arm/percpu.h | 28 ++++ xen/include/asm-arm/processor.h | 269 +++++++++++++++++++++++++++++++++++++ xen/include/asm-arm/regs.h | 43 ++++++ xen/include/asm-arm/setup.h | 16 +++ xen/include/asm-arm/smp.h | 25 ++++ xen/include/asm-arm/softirq.h | 15 ++ xen/include/asm-arm/spinlock.h | 144 ++++++++++++++++++++ xen/include/asm-arm/string.h | 38 +++++ xen/include/asm-arm/system.h | 202 ++++++++++++++++++++++++++++ xen/include/asm-arm/trace.h | 12 ++ xen/include/asm-arm/types.h | 57 ++++++++ xen/include/asm-arm/xenoprof.h | 12 ++ xen/include/public/arch-arm.h | 125 +++++++++++++++++ xen/include/public/xen.h | 2 + 39 files changed, 2444 insertions(+), 0 deletions(-) create mode 100644 xen/include/asm-arm/atomic.h create mode 100644 xen/include/asm-arm/bitops.h create mode 100644 xen/include/asm-arm/bug.h create mode 100644 xen/include/asm-arm/byteorder.h create mode 100644 xen/include/asm-arm/cache.h create mode 100644 xen/include/asm-arm/config.h create mode 100644 xen/include/asm-arm/cpregs.h create mode 100644 xen/include/asm-arm/current.h create mode 100644 xen/include/asm-arm/debugger.h create mode 100644 xen/include/asm-arm/delay.h create mode 100644 xen/include/asm-arm/desc.h create mode 100644 xen/include/asm-arm/div64.h create mode 100644 xen/include/asm-arm/elf.h create mode 100644 xen/include/asm-arm/event.h create mode 100644 xen/include/asm-arm/flushtlb.h create mode 100644 xen/include/asm-arm/grant_table.h create mode 100644 xen/include/asm-arm/hardirq.h create mode 100644 xen/include/asm-arm/hypercall.h create mode 100644 xen/include/asm-arm/init.h create mode 100644 xen/include/asm-arm/io.h create mode 100644 xen/include/asm-arm/iocap.h create mode 100644 xen/include/asm-arm/multicall.h create mode 100644 xen/include/asm-arm/nmi.h create mode 100644 xen/include/asm-arm/numa.h create mode 100644 xen/include/asm-arm/paging.h create mode 100644 xen/include/asm-arm/percpu.h create mode 100644 
xen/include/asm-arm/processor.h create mode 100644 xen/include/asm-arm/regs.h create mode 100644 xen/include/asm-arm/setup.h create mode 100644 xen/include/asm-arm/smp.h create mode 100644 xen/include/asm-arm/softirq.h create mode 100644 xen/include/asm-arm/spinlock.h create mode 100644 xen/include/asm-arm/string.h create mode 100644 xen/include/asm-arm/system.h create mode 100644 xen/include/asm-arm/trace.h create mode 100644 xen/include/asm-arm/types.h create mode 100644 xen/include/asm-arm/xenoprof.h create mode 100644 xen/include/public/arch-arm.h diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h new file mode 100644 index 0000000..c7eadd6 --- /dev/null +++ b/xen/include/asm-arm/atomic.h @@ -0,0 +1,236 @@ +/* + * arch/arm/include/asm/atomic.h + * + * Copyright (C) 1996 Russell King. + * Copyright (C) 2002 Deep Blue Solutions Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ +#ifndef __ARCH_ARM_ATOMIC__ +#define __ARCH_ARM_ATOMIC__ + +#include <xen/config.h> +#include <asm/system.h> + +#define build_atomic_read(name, size, type, reg) \ +static inline type name(const volatile type *addr) \ +{ \ + type ret; \ + asm volatile("ldr" size " %0,%1" \ + : reg (ret) \ + : "m" (*(volatile type *)addr)); \ + return ret; \ +} + +#define build_atomic_write(name, size, type, reg) \ +static inline void name(volatile type *addr, type val) \ +{ \ + asm volatile("str" size " %1,%0" \ + : "=m" (*(volatile type *)addr) \ + : reg (val)); \ +} + +build_atomic_read(read_u8_atomic, "b", uint8_t, "=q") +build_atomic_read(read_u16_atomic, "h", uint16_t, "=r") +build_atomic_read(read_u32_atomic, "", uint32_t, "=r") +//build_atomic_read(read_u64_atomic, "d", uint64_t, "=r") +build_atomic_read(read_int_atomic, "", int, "=r") + +build_atomic_write(write_u8_atomic, "b", uint8_t, "q") +build_atomic_write(write_u16_atomic, "h", uint16_t, "r") +build_atomic_write(write_u32_atomic, "", uint32_t, "r") +//build_atomic_write(write_u64_atomic, "d", uint64_t, "r") +build_atomic_write(write_int_atomic, "", int, "r") + +void __bad_atomic_size(void); + +#define read_atomic(p) ({ \ + typeof(*p) __x; \ + switch ( sizeof(*p) ) { \ + case 1: __x = (typeof(*p))read_u8_atomic((uint8_t *)p); break; \ + case 2: __x = (typeof(*p))read_u16_atomic((uint16_t *)p); break; \ + case 4: __x = (typeof(*p))read_u32_atomic((uint32_t *)p); break; \ + default: __x = 0; __bad_atomic_size(); break; \ + } \ + __x; \ +}) + +#define write_atomic(p, x) ({ \ + typeof(*p) __x = (x); \ + switch ( sizeof(*p) ) { \ + case 1: write_u8_atomic((uint8_t *)p, (uint8_t)__x); break; \ + case 2: write_u16_atomic((uint16_t *)p, (uint16_t)__x); break; \ + case 4: write_u32_atomic((uint32_t *)p, (uint32_t)__x); break; \ + default: __bad_atomic_size(); break; \ + } \ + __x; \ +}) + +/* + * NB. I've pushed the volatile qualifier into the operations. This allows + * fast accessors such as _atomic_read() and _atomic_set() which don't give + * the compiler a fit. + */ +typedef struct { int counter; } atomic_t; + +#define ATOMIC_INIT(i) { (i) } + +/* + * On ARM, ordinary assignment (str instruction) doesn't clear the local + * strex/ldrex monitor on some implementations. The reason we can use it for + * atomic_set() is the clrex or dummy strex done on every exception return.
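A usage sketch of the read_atomic()/write_atomic() helpers above, with purely hypothetical names: they guarantee a single, untorn ldr/str for 1-, 2- and 4-byte objects (anything larger fails to link via __bad_atomic_size()), but impose no ordering, so callers needing ordering must still use the barriers from asm/system.h.

    /* Sketch: publishing a flag to another CPU without tearing.
     * 'struct shared_flag' and its field are illustrative only. */
    struct shared_flag { uint32_t pending; };

    static void signal(struct shared_flag *s)
    {
        write_atomic(&s->pending, 1);      /* compiles to a single str */
    }

    static int poll(struct shared_flag *s)
    {
        return read_atomic(&s->pending);   /* compiles to a single ldr */
    }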
+ */ +#define _atomic_read(v) ((v).counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) + +#define _atomic_set(v,i) (((v).counter) = (i)) +#define atomic_set(v,i) (((v)->counter) = (i)) + +/* + * ARMv6 UP and SMP safe atomic ops. We use load exclusive and + * store exclusive to ensure that these are atomic. We may loop + * to ensure that the update happens. + */ +static inline void atomic_add(int i, atomic_t *v) +{ + unsigned long tmp; + int result; + + __asm__ __volatile__("@ atomic_add\n" +"1: ldrex %0, [%3]\n" +" add %0, %0, %4\n" +" strex %1, %0, [%3]\n" +" teq %1, #0\n" +" bne 1b" + : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) + : "r" (&v->counter), "Ir" (i) + : "cc"); +} + +static inline int atomic_add_return(int i, atomic_t *v) +{ + unsigned long tmp; + int result; + + smp_mb(); + + __asm__ __volatile__("@ atomic_add_return\n" +"1: ldrex %0, [%3]\n" +" add %0, %0, %4\n" +" strex %1, %0, [%3]\n" +" teq %1, #0\n" +" bne 1b" + : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) + : "r" (&v->counter), "Ir" (i) + : "cc"); + + smp_mb(); + + return result; +} + +static inline void atomic_sub(int i, atomic_t *v) +{ + unsigned long tmp; + int result; + + __asm__ __volatile__("@ atomic_sub\n" +"1: ldrex %0, [%3]\n" +" sub %0, %0, %4\n" +" strex %1, %0, [%3]\n" +" teq %1, #0\n" +" bne 1b" + : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) + : "r" (&v->counter), "Ir" (i) + : "cc"); +} + +static inline int atomic_sub_return(int i, atomic_t *v) +{ + unsigned long tmp; + int result; + + smp_mb(); + + __asm__ __volatile__("@ atomic_sub_return\n" +"1: ldrex %0, [%3]\n" +" sub %0, %0, %4\n" +" strex %1, %0, [%3]\n" +" teq %1, #0\n" +" bne 1b" + : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) + : "r" (&v->counter), "Ir" (i) + : "cc"); + + smp_mb(); + + return result; +} + +static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new) +{ + unsigned long oldval, res; + + smp_mb(); + + do { + __asm__ __volatile__("@ atomic_cmpxchg\n" + "ldrex %1, [%3]\n" + "mov %0, #0\n" + "teq %1, %4\n" + "strexeq %0, %5, [%3]\n" + : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter) + : "r" (&ptr->counter), "Ir" (old), "r" (new) + : "cc"); + } while (res); + + smp_mb(); + + return oldval; +} + +static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr) +{ + unsigned long tmp, tmp2; + + __asm__ __volatile__("@ atomic_clear_mask\n" +"1: ldrex %0, [%3]\n" +" bic %0, %0, %4\n" +" strex %1, %0, [%3]\n" +" teq %1, #0\n" +" bne 1b" + : "=&r" (tmp), "=&r" (tmp2), "+Qo" (*addr) + : "r" (addr), "Ir" (mask) + : "cc"); +} + +#define atomic_inc(v) atomic_add(1, v) +#define atomic_dec(v) atomic_sub(1, v) + +#define atomic_inc_and_test(v) (atomic_add_return(1, v) == 0) +#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0) +#define atomic_inc_return(v) (atomic_add_return(1, v)) +#define atomic_dec_return(v) (atomic_sub_return(1, v)) +#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) + +#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0) + +static inline atomic_t atomic_compareandswap( + atomic_t old, atomic_t new, atomic_t *v) +{ + atomic_t rc; + rc.counter = __cmpxchg(&v->counter, old.counter, new.counter, sizeof(int)); + return rc; +} + +#endif /* __ARCH_ARM_ATOMIC__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/bitops.h b/xen/include/asm-arm/bitops.h new file mode 100644 index 0000000..3d6b30b --- /dev/null +++ b/xen/include/asm-arm/bitops.h @@ 
-0,0 +1,195 @@ +/* + * Copyright 1995, Russell King. + * Various bits and pieces copyrights include: + * Linus Torvalds (test_bit). + * Big endian support: Copyright 2001, Nicolas Pitre + * reworked by rmk. + */ + +#ifndef _ARM_BITOPS_H +#define _ARM_BITOPS_H + +extern void _set_bit(int nr, volatile void * p); +extern void _clear_bit(int nr, volatile void * p); +extern void _change_bit(int nr, volatile void * p); +extern int _test_and_set_bit(int nr, volatile void * p); +extern int _test_and_clear_bit(int nr, volatile void * p); +extern int _test_and_change_bit(int nr, volatile void * p); + +#define set_bit(n,p) _set_bit(n,p) +#define clear_bit(n,p) _clear_bit(n,p) +#define change_bit(n,p) _change_bit(n,p) +#define test_and_set_bit(n,p) _test_and_set_bit(n,p) +#define test_and_clear_bit(n,p) _test_and_clear_bit(n,p) +#define test_and_change_bit(n,p) _test_and_change_bit(n,p) + +#define BIT(nr) (1UL << (nr)) +#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG)) +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG) +#define BITS_PER_BYTE 8 + +#define ADDR (*(volatile long *) addr) +#define CONST_ADDR (*(const volatile long *) addr) + +/** + * __test_and_set_bit - Set a bit and return its old value + * @nr: Bit to set + * @addr: Address to count from + * + * This operation is non-atomic and can be reordered. + * If two examples of this operation race, one can appear to succeed + * but actually fail. You must protect multiple accesses with a lock. + */ +static inline int __test_and_set_bit(int nr, volatile void *addr) +{ + unsigned long mask = BIT_MASK(nr); + volatile unsigned long *p = ((volatile unsigned long *)addr) + BIT_WORD(nr); + unsigned long old = *p; + + *p = old | mask; + return (old & mask) != 0; +} + +/** + * __test_and_clear_bit - Clear a bit and return its old value + * @nr: Bit to clear + * @addr: Address to count from + * + * This operation is non-atomic and can be reordered. + * If two examples of this operation race, one can appear to succeed + * but actually fail. You must protect multiple accesses with a lock. + */ +static inline int __test_and_clear_bit(int nr, volatile void *addr) +{ + unsigned long mask = BIT_MASK(nr); + volatile unsigned long *p = ((volatile unsigned long *)addr) + BIT_WORD(nr); + unsigned long old = *p; + + *p = old & ~mask; + return (old & mask) != 0; +} + +/* WARNING: non atomic and it can be reordered! */ +static inline int __test_and_change_bit(int nr, + volatile void *addr) +{ + unsigned long mask = BIT_MASK(nr); + volatile unsigned long *p = ((volatile unsigned long *)addr) + BIT_WORD(nr); + unsigned long old = *p; + + *p = old ^ mask; + return (old & mask) != 0; +} + +/** + * test_bit - Determine whether a bit is set + * @nr: bit number to test + * @addr: Address to start counting from + */ +static inline int test_bit(int nr, const volatile void *addr) +{ + const volatile unsigned long *p = (const volatile unsigned long *)addr; + return 1UL & (p[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1))); +} + + +extern unsigned int _find_first_bit( + const unsigned long *addr, unsigned int size); +extern unsigned int _find_next_bit( + const unsigned long *addr, unsigned int size, unsigned int offset); +extern unsigned int _find_first_zero_bit( + const unsigned long *addr, unsigned int size); +extern unsigned int _find_next_zero_bit( + const unsigned long *addr, unsigned int size, unsigned int offset); + +/* + * These are the little endian, atomic definitions.
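As a sketch of the usual pattern built on these primitives, combining the find_first_zero_bit() wrapper defined just below with the atomic test_and_set_bit() entry point; alloc_slot and its parameters are hypothetical:

    /* Sketch: claim the first free slot in a shared bitmap. The atomic
     * test_and_set_bit() arbitrates between racing CPUs; the loop retries
     * if another CPU wins the slot between the scan and the claim. */
    static int alloc_slot(unsigned long *bitmap, unsigned int nr_slots)
    {
        unsigned int bit;

        do {
            bit = find_first_zero_bit(bitmap, nr_slots);
            if ( bit >= nr_slots )
                return -1;                 /* no free slot */
        } while ( test_and_set_bit(bit, bitmap) );

        return bit;
    }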
+ */ +#define find_first_zero_bit(p,sz) _find_first_zero_bit(p,sz) +#define find_next_zero_bit(p,sz,off) _find_next_zero_bit(p,sz,off) +#define find_first_bit(p,sz) _find_first_bit(p,sz) +#define find_next_bit(p,sz,off) _find_next_bit(p,sz,off) + +static inline int constant_fls(int x) +{ + int r = 32; + + if (!x) + return 0; + if (!(x & 0xffff0000u)) { + x <<= 16; + r -= 16; + } + if (!(x & 0xff000000u)) { + x <<= 8; + r -= 8; + } + if (!(x & 0xf0000000u)) { + x <<= 4; + r -= 4; + } + if (!(x & 0xc0000000u)) { + x <<= 2; + r -= 2; + } + if (!(x & 0x80000000u)) { + x <<= 1; + r -= 1; + } + return r; +} + +/* + * On ARMv5 and above those functions can be implemented around + * the clz instruction for much better code efficiency. + */ + +static inline int fls(int x) +{ + int ret; + + if (__builtin_constant_p(x)) + return constant_fls(x); + + asm("clz\t%0, %1" : "=r" (ret) : "r" (x)); + ret = 32 - ret; + return ret; +} + +#define ffs(x) ({ unsigned long __t = (x); fls(__t & -__t); }) + +/** + * find_first_set_bit - find the first set bit in @word + * @word: the word to search + * + * Returns the bit-number of the first set bit (first bit being 0). + * The input must *not* be zero. + */ +static inline unsigned int find_first_set_bit(unsigned long word) +{ + return ffs(word) - 1; +} + +/** + * hweightN - returns the hamming weight of a N-bit word + * @x: the word to weigh + * + * The Hamming Weight of a number is the total number of bits set in it. + */ +#define hweight64(x) generic_hweight64(x) +#define hweight32(x) generic_hweight32(x) +#define hweight16(x) generic_hweight16(x) +#define hweight8(x) generic_hweight8(x) + +#endif /* _ARM_BITOPS_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/bug.h b/xen/include/asm-arm/bug.h new file mode 100644 index 0000000..bc2532c --- /dev/null +++ b/xen/include/asm-arm/bug.h @@ -0,0 +1,15 @@ +#ifndef __ARM_BUG_H__ +#define __ARM_BUG_H__ + +#define BUG() __bug(__FILE__, __LINE__) +#define WARN() __warn(__FILE__, __LINE__) + +#endif /* __ARM_BUG_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/byteorder.h b/xen/include/asm-arm/byteorder.h new file mode 100644 index 0000000..f6ad883 --- /dev/null +++ b/xen/include/asm-arm/byteorder.h @@ -0,0 +1,16 @@ +#ifndef __ASM_ARM_BYTEORDER_H__ +#define __ASM_ARM_BYTEORDER_H__ + +#define __BYTEORDER_HAS_U64__ + +#include <xen/byteorder/little_endian.h> + +#endif /* __ASM_ARM_BYTEORDER_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/cache.h b/xen/include/asm-arm/cache.h new file mode 100644 index 0000000..41b6291 --- /dev/null +++ b/xen/include/asm-arm/cache.h @@ -0,0 +1,20 @@ +#ifndef __ARCH_ARM_CACHE_H +#define __ARCH_ARM_CACHE_H + +#include <xen/config.h> + +/* L1 cache line size */ +#define L1_CACHE_SHIFT (CONFIG_ARM_L1_CACHE_SHIFT) +#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT) + +#define __read_mostly __attribute__((__section__(".data.read_mostly"))) + +#endif +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h new file mode 100644 index 0000000..12285dd --- /dev/null +++ b/xen/include/asm-arm/config.h @@ -0,0 +1,122 @@
+/****************************************************************************** + * config.h + * + * A Linux-style configuration list. + */ + +#ifndef __ARM_CONFIG_H__ +#define __ARM_CONFIG_H__ + +#define CONFIG_PAGING_LEVELS 3 + +#define CONFIG_ARM 1 + +#define CONFIG_ARM_L1_CACHE_SHIFT 7 /* XXX */ + +#define CONFIG_SMP 1 + +#define CONFIG_DOMAIN_PAGE 1 + +#define OPT_CONSOLE_STR "com1" + +#ifdef MAX_PHYS_CPUS +#define NR_CPUS MAX_PHYS_CPUS +#else +#define NR_CPUS 128 +#endif + +#define MAX_VIRT_CPUS 128 /* XXX */ +#define MAX_HVM_VCPUS MAX_VIRT_CPUS + +#define asmlinkage /* Nothing needed */ + +/* Linkage for ARM */ +#define __ALIGN .align 2 +#define __ALIGN_STR ".align 2" +#ifdef __ASSEMBLY__ +#define ALIGN __ALIGN +#define ALIGN_STR __ALIGN_STR +#define ENTRY(name) \ + .globl name; \ + ALIGN; \ + name: +#define END(name) \ + .size name, .-name +#define ENDPROC(name) \ + .type name, %function; \ + END(name) +#endif + +/* + * Memory layout: + * 0 - 2M Unmapped + * 2M - 4M Xen text, data, bss + * 4M - 6M Fixmap: special-purpose 4K mapping slots + * + * 32M - 128M Frametable: 24 bytes per page for 16GB of RAM + * + * 1G - 2G Xenheap: always-mapped memory + * 2G - 4G Domheap: on-demand-mapped + */ + +#define XEN_VIRT_START 0x00200000 +#define FIXMAP_ADDR(n) (0x00400000 + (n) * PAGE_SIZE) +#define FRAMETABLE_VIRT_START 0x02000000 +#define XENHEAP_VIRT_START 0x40000000 +#define DOMHEAP_VIRT_START 0x80000000 + +#define HYPERVISOR_VIRT_START mk_unsigned_long(XEN_VIRT_START) + +#define DOMHEAP_ENTRIES 1024 /* 1024 2MB mapping slots */ + +/* Fixmap slots */ +#define FIXMAP_CONSOLE 0 /* The primary UART */ +#define FIXMAP_PT 1 /* Temporary mappings of pagetable pages */ +#define FIXMAP_MISC 2 /* Ephemeral mappings of hardware */ +#define FIXMAP_GICD 3 /* Interrupt controller: distributor registers */ +#define FIXMAP_GICC1 4 /* Interrupt controller: CPU registers (first page) */ +#define FIXMAP_GICC2 5 /* Interrupt controller: CPU registers (second page) */ +#define FIXMAP_GICH 6 /* Interrupt controller: virtual interface control registers */ + +#define PAGE_SHIFT 12 + +#ifndef __ASSEMBLY__ +#define PAGE_SIZE (1L << PAGE_SHIFT) +#else +#define PAGE_SIZE (1 << PAGE_SHIFT) +#endif +#define PAGE_MASK (~(PAGE_SIZE-1)) +#define PAGE_FLAG_MASK (~0) + +#define STACK_ORDER 3 +#define STACK_SIZE (PAGE_SIZE << STACK_ORDER) + +#ifndef __ASSEMBLY__ +extern unsigned long xen_phys_start; +extern unsigned long xenheap_phys_end; +extern unsigned long frametable_virt_end; +#endif + +#define supervisor_mode_kernel (0) + +#define watchdog_disable() ((void)0) +#define watchdog_enable() ((void)0) + +/* Board-specific: base address of PL011 UART */ +#define EARLY_UART_ADDRESS 0x1c090000 +/* Board-specific: base address of GIC + its regs */ +#define GIC_BASE_ADDRESS 0x2c000000 +#define GIC_DR_OFFSET 0x1000 +#define GIC_CR_OFFSET 0x2000 +#define GIC_HR_OFFSET 0x4000 /* Guess work http://lists.infradead.org/pipermail/linux-arm-kernel/2011-September/064219.html */ +#define GIC_VR_OFFSET 0x6000 /* Virtual Machine CPU interface */ + +#endif /* __ARM_CONFIG_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h new file mode 100644 index 0000000..3a4028d --- /dev/null +++ b/xen/include/asm-arm/cpregs.h @@ -0,0 +1,207 @@ +#ifndef __ASM_ARM_CPREGS_H +#define __ASM_ARM_CPREGS_H + +#include <xen/stringify.h> + +/* Co-processor registers */ + +/* Layout as used in assembly, with
src/dest registers mixed in */ +#define __CP32(r, coproc, opc1, crn, crm, opc2) coproc, opc1, r, crn, crm, opc2 +#define __CP64(r1, r2, coproc, opc, crm) coproc, opc, r1, r2, crm +#define CP32(r, name...) __CP32(r, name) +#define CP64(r, name...) __CP64(r, name) + +/* Stringified for inline assembly */ +#define LOAD_CP32(r, name...) "mrc " __stringify(CP32(%r, name)) ";" +#define STORE_CP32(r, name...) "mcr " __stringify(CP32(%r, name)) ";" +#define LOAD_CP64(r, name...) "mrrc " __stringify(CP64(%r, %H##r, name)) ";" +#define STORE_CP64(r, name...) "mcrr " __stringify(CP64(%r, %H##r, name)) ";" + +/* C wrappers */ +#define READ_CP32(name...) ({ \ + register uint32_t _r; \ + asm volatile(LOAD_CP32(0, name) : "=r" (_r)); \ + _r; }) + +#define WRITE_CP32(v, name...) do { \ + register uint32_t _r = (v); \ + asm volatile(STORE_CP32(0, name) : : "r" (_r)); \ +} while (0) + +#define READ_CP64(name...) ({ \ + register uint64_t _r; \ + asm volatile(LOAD_CP64(0, name) : "=r" (_r)); \ + _r; }) + +#define WRITE_CP64(v, name...) do { \ + register uint64_t _r = (v); \ + asm volatile(STORE_CP64(0, name) : : "r" (_r)); \ +} while (0) + +#define __HSR_CPREG_c0 0 +#define __HSR_CPREG_c1 1 +#define __HSR_CPREG_c2 2 +#define __HSR_CPREG_c3 3 +#define __HSR_CPREG_c4 4 +#define __HSR_CPREG_c5 5 +#define __HSR_CPREG_c6 6 +#define __HSR_CPREG_c7 7 +#define __HSR_CPREG_c8 8 +#define __HSR_CPREG_c9 9 +#define __HSR_CPREG_c10 10 +#define __HSR_CPREG_c11 11 +#define __HSR_CPREG_c12 12 +#define __HSR_CPREG_c13 13 +#define __HSR_CPREG_c14 14 +#define __HSR_CPREG_c15 15 + +#define __HSR_CPREG_0 0 +#define __HSR_CPREG_1 1 +#define __HSR_CPREG_2 2 +#define __HSR_CPREG_3 3 +#define __HSR_CPREG_4 4 +#define __HSR_CPREG_5 5 +#define __HSR_CPREG_6 6 +#define __HSR_CPREG_7 7 + +#define _HSR_CPREG32(cp,op1,crn,crm,op2) \ + ((__HSR_CPREG_##crn) << HSR_CP32_CRN_SHIFT) | \ + ((__HSR_CPREG_##crm) << HSR_CP32_CRM_SHIFT) | \ + ((__HSR_CPREG_##op1) << HSR_CP32_OP1_SHIFT) | \ + ((__HSR_CPREG_##op2) << HSR_CP32_OP2_SHIFT) + +#define _HSR_CPREG64(cp,op1,crm) \ + ((__HSR_CPREG_##crm) << HSR_CP64_CRM_SHIFT) | \ + ((__HSR_CPREG_##op1) << HSR_CP64_OP1_SHIFT) + +/* Encode a register as per HSR ISS pattern */ +#define HSR_CPREG32(X) _HSR_CPREG32(X) +#define HSR_CPREG64(X) _HSR_CPREG64(X) + +/* + * Order registers by Coprocessor-> CRn-> Opcode 1-> CRm-> Opcode 2 + * + * This matches the ordering used in the ARM as well as the groupings + * which the CP registers are allocated in. + * + * This is slightly different to the form of the instruction + * arguments, which are cp,opc1,crn,crm,opc2. + */ + +/* Coprocessor 15 */ + +/* CP15 CR0: CPUID and Cache Type Registers */ +#define ID_PFR0 p15,0,c0,c1,0 /* Processor Feature Register 0 */ +#define ID_PFR1 p15,0,c0,c1,1 /* Processor Feature Register 1 */ +#define CCSIDR p15,1,c0,c0,0 /* Cache Size ID Registers */ +#define CLIDR p15,1,c0,c0,1 /* Cache Level ID Register */ +#define CSSELR p15,2,c0,c0,0 /* Cache Size Selection Register */ + +/* CP15 CR1: System Control Registers */ +#define SCTLR p15,0,c1,c0,0 /* System Control Register */ +#define SCR p15,0,c1,c1,0 /* Secure Configuration Register */ +#define NSACR p15,0,c1,c1,2 /* Non-Secure Access Control Register */ +#define HSCTLR p15,4,c1,c0,0 /* Hyp. System Control Register */ +#define HCR p15,4,c1,c1,0 /* Hyp. Configuration Register */ + +/* CP15 CR2: Translation Table Base and Control Registers */ +#define TTBR0 p15,0,c2,c0,0 /* Translation Table Base Reg. 0 */ +#define TTBR1 p15,0,c2,c0,1 /* Translation Table Base Reg. 
1 */ +#define TTBCR p15,0,c2,c0,2 /* Translation Table Base Control Register */ +#define HTTBR p15,4,c2 /* Hyp. Translation Table Base Register */ +#define HTCR p15,4,c2,c0,2 /* Hyp. Translation Control Register */ +#define VTCR p15,4,c2,c1,2 /* Virtualization Translation Control Register */ +#define VTTBR p15,6,c2 /* Virtualization Translation Table Base Register */ + +/* CP15 CR3: Domain Access Control Register */ + +/* CP15 CR4: */ + +/* CP15 CR5: Fault Status Registers */ +#define DFSR p15,0,c5,c0,0 /* Data Fault Status Register */ +#define IFSR p15,0,c5,c0,1 /* Instruction Fault Status Register */ +#define HSR p15,4,c5,c2,0 /* Hyp. Syndrome Register */ + +/* CP15 CR6: Fault Address Registers */ +#define DFAR p15,0,c6,c0,0 /* Data Fault Address Register */ +#define IFAR p15,0,c6,c0,2 /* Instruction Fault Address Register */ +#define HDFAR p15,4,c6,c0,0 /* Hyp. Data Fault Address Register */ +#define HIFAR p15,4,c6,c0,2 /* Hyp. Instruction Fault Address Register */ +#define HPFAR p15,4,c6,c0,4 /* Hyp. IPA Fault Address Register */ + +/* CP15 CR7: Cache and address translation operations */ +#define PAR p15,0,c7 /* Physical Address Register */ +#define ICIALLUIS p15,0,c7,c1,0 /* Invalidate all instruction caches to PoU inner shareable */ +#define BPIALLIS p15,0,c7,c1,6 /* Invalidate entire branch predictor array inner shareable */ +#define ICIALLU p15,0,c7,c5,0 /* Invalidate all instruction caches to PoU */ +#define BPIALL p15,0,c7,c5,6 /* Invalidate entire branch predictor array */ +#define ATS1CPR p15,0,c7,c8,0 /* Address Translation Stage 1. Non-Secure Kernel Read */ +#define ATS1CPW p15,0,c7,c8,1 /* Address Translation Stage 1. Non-Secure Kernel Write */ +#define ATS1CUR p15,0,c7,c8,2 /* Address Translation Stage 1. Non-Secure User Read */ +#define ATS1CUW p15,0,c7,c8,3 /* Address Translation Stage 1. Non-Secure User Write */ +#define ATS12NSOPR p15,0,c7,c8,4 /* Address Translation Stage 1+2 Non-Secure Kernel Read */ +#define ATS12NSOPW p15,0,c7,c8,5 /* Address Translation Stage 1+2 Non-Secure Kernel Write */ +#define ATS12NSOUR p15,0,c7,c8,6 /* Address Translation Stage 1+2 Non-Secure User Read */ +#define ATS12NSOUW p15,0,c7,c8,7 /* Address Translation Stage 1+2 Non-Secure User Write */ +#define DCCMVAC p15,0,c7,c10,1 /* Clean data or unified cache line by MVA to PoC */ +#define DCCISW p15,0,c7,c14,2 /* Clean and invalidate data cache line by set/way */ +#define ATS1HR p15,4,c7,c8,0 /* Address Translation Stage 1 Hyp. Read */ +#define ATS1HW p15,4,c7,c8,1 /* Address Translation Stage 1 Hyp. Write */ + +/* CP15 CR8: TLB maintenance operations */ +#define TLBIALLIS p15,0,c8,c3,0 /* Invalidate entire TLB inner shareable */ +#define TLBIMVAIS p15,0,c8,c3,1 /* Invalidate unified TLB entry by MVA inner shareable */ +#define TLBIASIDIS p15,0,c8,c3,2 /* Invalidate unified TLB by ASID match inner shareable */ +#define TLBIMVAAIS p15,0,c8,c3,3 /* Invalidate unified TLB entry by MVA all ASID inner shareable */ +#define DTLBIALL p15,0,c8,c6,0 /* Invalidate data TLB */ +#define DTLBIMVA p15,0,c8,c6,1 /* Invalidate data TLB entry by MVA */ +#define DTLBIASID p15,0,c8,c6,2 /* Invalidate data TLB by ASID match */ +#define TLBIALLHIS p15,4,c8,c3,0 /* Invalidate Entire Hyp. Unified TLB inner shareable */ +#define TLBIMVAHIS p15,4,c8,c3,1 /* Invalidate Unified Hyp. TLB by MVA inner shareable */ +#define TLBIALLNSNHIS p15,4,c8,c3,4 /* Invalidate Entire Non-Secure Non-Hyp. Unified TLB inner shareable */ +#define TLBIALLH p15,4,c8,c7,0 /* Invalidate Entire Hyp.
Unified TLB */ +#define TLBIMVAH p15,4,c8,c7,1 /* Invalidate Unified Hyp. TLB by MVA */ +#define TLBIALLNSNH p15,4,c8,c7,4 /* Invalidate Entire Non-Secure Non-Hyp. Unified TLB */ + +/* CP15 CR9: */ + +/* CP15 CR10: */ +#define MAIR0 p15,0,c10,c2,0 /* Memory Attribute Indirection Register 0 */ +#define MAIR1 p15,0,c10,c2,1 /* Memory Attribute Indirection Register 1 */ +#define HMAIR0 p15,4,c10,c2,0 /* Hyp. Memory Attribute Indirection Register 0 */ +#define HMAIR1 p15,4,c10,c2,1 /* Hyp. Memory Attribute Indirection Register 1 */ + +/* CP15 CR11: DMA Operations for TCM Access */ + +/* CP15 CR12: */ +#define HVBAR p15,4,c12,c0,0 /* Hyp. Vector Base Address Register */ + +/* CP15 CR13: */ +#define FCSEIDR p15,0,c13,c0,0 /* FCSE Process ID Register */ +#define CONTEXTIDR p15,0,c13,c0,1 /* Context ID Register */ + +/* CP15 CR14: */ +#define CNTPCT p15,0,c14 /* Time counter value */ +#define CNTFRQ p15,0,c14,c0,0 /* Time counter frequency */ +#define CNTKCTL p15,0,c14,c1,0 /* Time counter kernel control */ +#define CNTP_TVAL p15,0,c14,c2,0 /* Physical Timer value */ +#define CNTP_CTL p15,0,c14,c2,1 /* Physical Timer control register */ +#define CNTVCT p15,1,c14 /* Time counter value + offset */ +#define CNTP_CVAL p15,2,c14 /* Physical Timer comparator */ +#define CNTVOFF p15,4,c14 /* Time counter offset */ +#define CNTHCTL p15,4,c14,c1,0 /* Time counter hyp. control */ +#define CNTHP_TVAL p15,4,c14,c2,0 /* Hyp. Timer value */ +#define CNTHP_CTL p15,4,c14,c2,1 /* Hyp. Timer control register */ +#define CNTHP_CVAL p15,6,c14 /* Hyp. Timer comparator */ + +/* CP15 CR15: Implementation Defined Registers */ + +#endif +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/current.h b/xen/include/asm-arm/current.h new file mode 100644 index 0000000..826efa5 --- /dev/null +++ b/xen/include/asm-arm/current.h @@ -0,0 +1,60 @@ +#ifndef __ARM_CURRENT_H__ +#define __ARM_CURRENT_H__ + +#include <xen/config.h> +#include <xen/percpu.h> +#include <public/xen.h> + +#ifndef __ASSEMBLY__ + +struct vcpu; + +struct cpu_info { + struct cpu_user_regs guest_cpu_user_regs; + unsigned long elr; + unsigned int processor_id; + struct vcpu *current_vcpu; + unsigned long per_cpu_offset; +}; + +static inline struct cpu_info *get_cpu_info(void) +{ + register unsigned long sp asm ("sp"); + return (struct cpu_info *)((sp & ~(STACK_SIZE - 1)) + STACK_SIZE - sizeof(struct cpu_info)); +} + +#define get_current() (get_cpu_info()->current_vcpu) +#define set_current(vcpu) (get_cpu_info()->current_vcpu = (vcpu)) +#define current (get_current()) + +#define get_processor_id() (get_cpu_info()->processor_id) +#define set_processor_id(id) do { \ + struct cpu_info *ci__ = get_cpu_info(); \ + ci__->per_cpu_offset = __per_cpu_offset[ci__->processor_id = (id)]; \ +} while (0) + +#define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs) + +#define reset_stack_and_jump(__fn) \ + __asm__ __volatile__ ( \ + "mov sp,%0; b "STR(__fn) \ + : : "r" (guest_cpu_user_regs()) : "memory" ) +#endif + + +/* + * Which VCPU's state is currently running on each CPU? + * This is not necessarily the same as 'current' as a CPU may be + * executing a lazy state switch.
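To make the sp-masking trick in get_cpu_info() above concrete: struct cpu_info sits at the very top of each per-CPU stack (STACK_SIZE == PAGE_SIZE << STACK_ORDER == 32KB with the values in config.h), so masking any in-stack sp recovers it without a per-CPU lookup. A worked sketch with a purely hypothetical stack pointer value:

    unsigned long sp   = 0x40107f30;                  /* hypothetical */
    unsigned long base = sp & ~(STACK_SIZE - 1);      /* 0x40100000 */
    struct cpu_info *ci =
        (struct cpu_info *)(base + STACK_SIZE - sizeof(struct cpu_info));
    /* ci is identical for every sp within this 32KB stack, which is why
     * current, get_processor_id() etc. are cheap accessor macros. */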
+ */ +DECLARE_PER_CPU(struct vcpu *, curr_vcpu); + +#endif /* __ARM_CURRENT_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/debugger.h b/xen/include/asm-arm/debugger.h new file mode 100644 index 0000000..452613a --- /dev/null +++ b/xen/include/asm-arm/debugger.h @@ -0,0 +1,15 @@ +#ifndef __ARM_DEBUGGER_H__ +#define __ARM_DEBUGGER_H__ + +#define debugger_trap_fatal(v, r) ((void) 0) +#define debugger_trap_immediate() ((void) 0) + +#endif /* __ARM_DEBUGGER_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/delay.h b/xen/include/asm-arm/delay.h new file mode 100644 index 0000000..6250774 --- /dev/null +++ b/xen/include/asm-arm/delay.h @@ -0,0 +1,15 @@ +#ifndef _ARM_DELAY_H +#define _ARM_DELAY_H + +extern void __udelay(unsigned long usecs); +#define udelay(n) __udelay(n) + +#endif /* defined(_ARM_DELAY_H) */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/desc.h b/xen/include/asm-arm/desc.h new file mode 100644 index 0000000..3989e8a --- /dev/null +++ b/xen/include/asm-arm/desc.h @@ -0,0 +1,12 @@ +#ifndef __ARCH_DESC_H +#define __ARCH_DESC_H + +#endif /* __ARCH_DESC_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/div64.h b/xen/include/asm-arm/div64.h new file mode 100644 index 0000000..7b00808 --- /dev/null +++ b/xen/include/asm-arm/div64.h @@ -0,0 +1,235 @@ +/* Taken from Linux arch/arm */ +#ifndef __ASM_ARM_DIV64 +#define __ASM_ARM_DIV64 + +#include <asm/system.h> +#include <xen/types.h> + +/* + * The semantics of do_div() are: + * + * uint32_t do_div(uint64_t *n, uint32_t base) + * { + * uint32_t remainder = *n % base; + * *n = *n / base; + * return remainder; + * } + * + * In other words, a 64-bit dividend with a 32-bit divisor producing + * a 64-bit result and a 32-bit remainder. To accomplish this optimally + * we call a special __do_div64 helper with completely non-standard + * calling convention for arguments and results (beware). + */ + +#ifdef __ARMEB__ +#define __xh "r0" +#define __xl "r1" +#else +#define __xl "r0" +#define __xh "r1" +#endif + +#define __do_div_asm(n, base) \ +({ \ + register unsigned int __base asm("r4") = base; \ + register unsigned long long __n asm("r0") = n; \ + register unsigned long long __res asm("r2"); \ + register unsigned int __rem asm(__xh); \ + asm( __asmeq("%0", __xh) \ + __asmeq("%1", "r2") \ + __asmeq("%2", "r0") \ + __asmeq("%3", "r4") \ + "bl __do_div64" \ + : "=r" (__rem), "=r" (__res) \ + : "r" (__n), "r" (__base) \ + : "ip", "lr", "cc"); \ + n = __res; \ + __rem; \ +}) + +#if __GNUC__ < 4 + +/* + * gcc versions earlier than 4.0 are simply too problematic for the + * optimized implementation below. First there is gcc PR 15089 that + * tends to trigger on more complex constructs, spurious .global __udivsi3 + * are inserted even if none of those symbols are referenced in the + * generated code, and those gcc versions are not able to do constant + * propagation on long long values anyway.
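As a usage sketch of the do_div() semantics spelled out above (values purely illustrative): the macro divides its 64-bit first argument in place and hands back the 32-bit remainder, which is why it takes an lvalue rather than a pointer.

    uint64_t ns  = 3123456789ULL;             /* hypothetical value */
    uint32_t rem = do_div(ns, 1000000000u);   /* split into s and ns */
    /* now ns == 3 and rem == 123456789 */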
+ */ +#define do_div(n, base) __do_div_asm(n, base) + +#elif __GNUC__ >= 4 + +#include <asm/bug.h> + +/* + * If the divisor happens to be constant, we determine the appropriate + * inverse at compile time to turn the division into a few inline + * multiplications instead which is much faster. And yet only if compiling + * for ARMv4 or higher (we need umull/umlal) and if the gcc version is + * sufficiently recent to perform proper long long constant propagation. + * (It is unfortunate that gcc doesn't perform all this internally.) + */ +#define do_div(n, base) \ +({ \ + unsigned int __r, __b = (base); \ + if (!__builtin_constant_p(__b) || __b == 0) { \ + /* non-constant divisor (or zero): slow path */ \ + __r = __do_div_asm(n, __b); \ + } else if ((__b & (__b - 1)) == 0) { \ + /* Trivial: __b is constant and a power of 2 */ \ + /* gcc does the right thing with this code. */ \ + __r = n; \ + __r &= (__b - 1); \ + n /= __b; \ + } else { \ + /* Multiply by inverse of __b: n/b = n*(p/b)/p */ \ + /* We rely on the fact that most of this code gets */ \ + /* optimized away at compile time due to constant */ \ + /* propagation and only a couple inline assembly */ \ + /* instructions should remain. Better avoid any */ \ + /* code construct that might prevent that. */ \ + unsigned long long __res, __x, __t, __m, __n = n; \ + unsigned int __c, __p, __z = 0; \ + /* preserve low part of n for remainder computation */ \ + __r = __n; \ + /* determine number of bits to represent __b */ \ + __p = 1 << __div64_fls(__b); \ + /* compute __m = ((__p << 64) + __b - 1) / __b */ \ + __m = (~0ULL / __b) * __p; \ + __m += (((~0ULL % __b + 1) * __p) + __b - 1) / __b; \ + /* compute __res = __m*(~0ULL/__b*__b-1)/(__p << 64) */ \ + __x = ~0ULL / __b * __b - 1; \ + __res = (__m & 0xffffffff) * (__x & 0xffffffff); \ + __res >>= 32; \ + __res += (__m & 0xffffffff) * (__x >> 32); \ + __t = __res; \ + __res += (__x & 0xffffffff) * (__m >> 32); \ + __t = (__res < __t) ? (1ULL << 32) : 0; \ + __res = (__res >> 32) + __t; \ + __res += (__m >> 32) * (__x >> 32); \ + __res /= __p; \ + /* Now sanitize and optimize what we've got. */ \ + if (~0ULL % (__b / (__b & -__b)) == 0) { \ + /* those cases can be simplified with: */ \ + __n /= (__b & -__b); \ + __m = ~0ULL / (__b / (__b & -__b)); \ + __p = 1; \ + __c = 1; \ + } else if (__res != __x / __b) { \ + /* We can't get away without a correction */ \ + /* to compensate for bit truncation errors. */ \ + /* To avoid it we'd need an additional bit */ \ + /* to represent __m which would overflow it. */ \ + /* Instead we do m=p/b and n/b=(n*m+m)/p. */ \ + __c = 1; \ + /* Compute __m = (__p << 64) / __b */ \ + __m = (~0ULL / __b) * __p; \ + __m += ((~0ULL % __b + 1) * __p) / __b; \ + } else { \ + /* Reduce __m/__p, and try to clear bit 31 */ \ + /* of __m when possible otherwise that'll */ \ + /* need extra overflow handling later. */ \ + unsigned int __bits = -(__m & -__m); \ + __bits |= __m >> 32; \ + __bits = (~__bits) << 1; \ + /* If __bits == 0 then setting bit 31 is */ \ + /* unavoidable. Simply apply the maximum */ \ + /* possible reduction in that case. */ \ + /* Otherwise the MSB of __bits indicates the */ \ + /* best reduction we should apply. */ \ + if (!__bits) { \ + __p /= (__m & -__m); \ + __m /= (__m & -__m); \ + } else { \ + __p >>= __div64_fls(__bits); \ + __m >>= __div64_fls(__bits); \ + } \ + /* No correction needed.
*/ \ + __c = 0; \ + } \ + /* Now we have a combination of 2 conditions: */ \ + /* 1) whether or not we need a correction (__c), and */ \ + /* 2) whether or not there might be an overflow in */ \ + /* the cross product (__m & ((1<<63) | (1<<31))) */ \ + /* Select the best insn combination to perform the */ \ + /* actual __m * __n / (__p << 64) operation. */ \ + if (!__c) { \ + asm ( "umull %Q0, %R0, %1, %Q2\n\t" \ + "mov %Q0, #0" \ + : "=&r" (__res) \ + : "r" (__m), "r" (__n) \ + : "cc" ); \ + } else if (!(__m & ((1ULL << 63) | (1ULL << 31)))) { \ + __res = __m; \ + asm ( "umlal %Q0, %R0, %Q1, %Q2\n\t" \ + "mov %Q0, #0" \ + : "+&r" (__res) \ + : "r" (__m), "r" (__n) \ + : "cc" ); \ + } else { \ + asm ( "umull %Q0, %R0, %Q1, %Q2\n\t" \ + "cmn %Q0, %Q1\n\t" \ + "adcs %R0, %R0, %R1\n\t" \ + "adc %Q0, %3, #0" \ + : "=&r" (__res) \ + : "r" (__m), "r" (__n), "r" (__z) \ + : "cc" ); \ + } \ + if (!(__m & ((1ULL << 63) | (1ULL << 31)))) { \ + asm ( "umlal %R0, %Q0, %R1, %Q2\n\t" \ + "umlal %R0, %Q0, %Q1, %R2\n\t" \ + "mov %R0, #0\n\t" \ + "umlal %Q0, %R0, %R1, %R2" \ + : "+&r" (__res) \ + : "r" (__m), "r" (__n) \ + : "cc" ); \ + } else { \ + asm ( "umlal %R0, %Q0, %R2, %Q3\n\t" \ + "umlal %R0, %1, %Q2, %R3\n\t" \ + "mov %R0, #0\n\t" \ + "adds %Q0, %1, %Q0\n\t" \ + "adc %R0, %R0, #0\n\t" \ + "umlal %Q0, %R0, %R2, %R3" \ + : "+&r" (__res), "+&r" (__z) \ + : "r" (__m), "r" (__n) \ + : "cc" ); \ + } \ + __res /= __p; \ + /* The remainder can be computed with 32-bit regs */ \ + /* only, and gcc is good at that. */ \ + { \ + unsigned int __res0 = __res; \ + unsigned int __b0 = __b; \ + __r -= __res0 * __b0; \ + } \ + /* BUG_ON(__r >= __b || __res * __b + __r != n); */ \ + n = __res; \ + } \ + __r; \ +}) + +/* our own fls implementation to make sure constant propagation is fine */ +#define __div64_fls(bits) \ +({ \ + unsigned int __left = (bits), __nr = 0; \ + if (__left & 0xffff0000) __nr += 16, __left >>= 16; \ + if (__left & 0x0000ff00) __nr += 8, __left >>= 8; \ + if (__left & 0x000000f0) __nr += 4, __left >>= 4; \ + if (__left & 0x0000000c) __nr += 2, __left >>= 2; \ + if (__left & 0x00000002) __nr += 1; \ + __nr; \ +}) + +#endif + +#endif +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/elf.h b/xen/include/asm-arm/elf.h new file mode 100644 index 0000000..12d487c --- /dev/null +++ b/xen/include/asm-arm/elf.h @@ -0,0 +1,33 @@ +#ifndef __ARM_ELF_H__ +#define __ARM_ELF_H__ + +typedef struct { + unsigned long r0; + unsigned long r1; + unsigned long r2; + unsigned long r3; + unsigned long r4; + unsigned long r5; + unsigned long r6; + unsigned long r7; + unsigned long r8; + unsigned long r9; + unsigned long r10; + unsigned long r11; + unsigned long r12; + unsigned long sp; + unsigned long lr; + unsigned long pc; +} ELF_Gregset; + +#endif /* __ARM_ELF_H__ */ + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h new file mode 100644 index 0000000..6b2fb7c --- /dev/null +++ b/xen/include/asm-arm/event.h @@ -0,0 +1,41 @@ +#ifndef __ASM_EVENT_H__ +#define __ASM_EVENT_H__ + +void vcpu_kick(struct vcpu *v); +void vcpu_mark_events_pending(struct vcpu *v); + +static inline int local_events_need_delivery(void) +{ + /* TODO + * return (vcpu_info(v, evtchn_upcall_pending) && + !vcpu_info(v, evtchn_upcall_mask)); */ + return 0; +} + +int
local_event_delivery_is_enabled(void); + +static inline void local_event_delivery_disable(void) +{ + /* TODO current->vcpu_info->evtchn_upcall_mask = 1; */ +} + +static inline void local_event_delivery_enable(void) +{ + /* TODO current->vcpu_info->evtchn_upcall_mask = 0; */ +} + +/* No arch specific virq definition now. Default to global. */ +static inline int arch_virq_is_global(int virq) +{ + return 1; +} + +#endif +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h new file mode 100644 index 0000000..c8486fc --- /dev/null +++ b/xen/include/asm-arm/flushtlb.h @@ -0,0 +1,31 @@ +#ifndef __FLUSHTLB_H__ +#define __FLUSHTLB_H__ + +#include <xen/cpumask.h> + +/* + * Filter the given set of CPUs, removing those that definitely flushed their + * TLB since @page_timestamp. + */ +/* XXX lazy implementation just doesn't clear anything.... */ +#define tlbflush_filter(mask, page_timestamp) \ +do { \ +} while ( 0 ) + +#define tlbflush_current_time() (0) + +/* Flush local TLBs */ +void flush_tlb_local(void); + +/* Flush specified CPUs' TLBs */ +void flush_tlb_mask(const cpumask_t *mask); + +#endif /* __FLUSHTLB_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h new file mode 100644 index 0000000..66fe9bf --- /dev/null +++ b/xen/include/asm-arm/grant_table.h @@ -0,0 +1,35 @@ +#ifndef __ASM_GRANT_TABLE_H__ +#define __ASM_GRANT_TABLE_H__ + +#include <xen/grant_table.h> + +#define INVALID_GFN (-1UL) +#define INITIAL_NR_GRANT_FRAMES 1 + +void gnttab_clear_flag(unsigned long nr, uint16_t *addr); +int create_grant_host_mapping(unsigned long gpaddr, + unsigned long mfn, unsigned int flags, unsigned int + cache_flags); +#define gnttab_host_mapping_get_page_type(op, d, rd) (0) +int replace_grant_host_mapping(unsigned long gpaddr, unsigned long mfn, + unsigned long new_gpaddr, unsigned int flags); +void gnttab_mark_dirty(struct domain *d, unsigned long l); +#define gnttab_create_status_page(d, t, i) do {} while (0) +#define gnttab_create_shared_page(d, t, i) do {} while (0) +#define gnttab_shared_gmfn(d, t, i) (0) +#define gnttab_status_gmfn(d, t, i) (0) +#define gnttab_release_host_mappings(domain) 1 +static inline int replace_grant_supported(void) +{ + return 1; +} + +#endif /* __ASM_GRANT_TABLE_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/hardirq.h b/xen/include/asm-arm/hardirq.h new file mode 100644 index 0000000..9c031a8 --- /dev/null +++ b/xen/include/asm-arm/hardirq.h @@ -0,0 +1,28 @@ +#ifndef __ASM_HARDIRQ_H +#define __ASM_HARDIRQ_H + +#include <xen/config.h> +#include <xen/cache.h> +#include <xen/smp.h> + +typedef struct { + unsigned long __softirq_pending; + unsigned int __local_irq_count; +} __cacheline_aligned irq_cpustat_t; + +#include <xen/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */ + +#define in_irq() (local_irq_count(smp_processor_id()) != 0) + +#define irq_enter() (local_irq_count(smp_processor_id())++) +#define irq_exit() (local_irq_count(smp_processor_id())--) + +#endif /* __ASM_HARDIRQ_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/hypercall.h
b/xen/include/asm-arm/hypercall.h new file mode 100644 index 0000000..90a87ef --- /dev/null +++ b/xen/include/asm-arm/hypercall.h @@ -0,0 +1,24 @@ +#ifndef __ASM_ARM_HYPERCALL_H__ +#define __ASM_ARM_HYPERCALL_H__ + +#include <public/domctl.h> /* for arch_do_domctl */ + +struct vcpu; +extern long +arch_do_vcpu_op( + int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg); + +extern long +arch_do_sysctl( + struct xen_sysctl *op, + XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl); + +#endif /* __ASM_ARM_HYPERCALL_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/init.h b/xen/include/asm-arm/init.h new file mode 100644 index 0000000..5f44929 --- /dev/null +++ b/xen/include/asm-arm/init.h @@ -0,0 +1,12 @@ +#ifndef _XEN_ASM_INIT_H +#define _XEN_ASM_INIT_H + +#endif /* _XEN_ASM_INIT_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/io.h b/xen/include/asm-arm/io.h new file mode 100644 index 0000000..1babbab --- /dev/null +++ b/xen/include/asm-arm/io.h @@ -0,0 +1,12 @@ +#ifndef _ASM_IO_H +#define _ASM_IO_H + +#endif +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/iocap.h b/xen/include/asm-arm/iocap.h new file mode 100644 index 0000000..f647f30 --- /dev/null +++ b/xen/include/asm-arm/iocap.h @@ -0,0 +1,20 @@ +#ifndef __ARM_IOCAP_H__ +#define __ARM_IOCAP_H__ + +#define cache_flush_permitted(d) \ + (!rangeset_is_empty((d)->iomem_caps)) + +#define multipage_allocation_permitted(d, order) \ + (((order) <= 9) || /* allow 2MB superpages */ \ + !rangeset_is_empty((d)->iomem_caps)) + +#endif + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/multicall.h b/xen/include/asm-arm/multicall.h new file mode 100644 index 0000000..c800940 --- /dev/null +++ b/xen/include/asm-arm/multicall.h @@ -0,0 +1,23 @@ +#ifndef __ASM_ARM_MULTICALL_H__ +#define __ASM_ARM_MULTICALL_H__ + +#define do_multicall_call(_call) \ + do { \ + __asm__ __volatile__ ( \ + ".word 0xe7f000f0@; do_multicall_call\n" \ + " mov r0,#0; @ do_multicall_call\n" \ + " str r0, [r0];\n" \ + : \ + : \ + : ); \ + } while ( 0 ) + +#endif /* __ASM_ARM_MULTICALL_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/nmi.h b/xen/include/asm-arm/nmi.h new file mode 100644 index 0000000..e0f19f9 --- /dev/null +++ b/xen/include/asm-arm/nmi.h @@ -0,0 +1,15 @@ +#ifndef ASM_NMI_H +#define ASM_NMI_H + +#define register_guest_nmi_callback(a) (-ENOSYS) +#define unregister_guest_nmi_callback() (-ENOSYS) + +#endif /* ASM_NMI_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h new file mode 100644 index 0000000..cffee5c --- /dev/null +++ b/xen/include/asm-arm/numa.h @@ -0,0 +1,21 @@ +#ifndef __ARCH_ARM_NUMA_H +#define __ARCH_ARM_NUMA_H + +/* Fake one node for now...
*/ +#define cpu_to_node(cpu) 0 +#define node_to_cpumask(node) (cpu_online_map) + +static inline __attribute__((pure)) int phys_to_nid(paddr_t addr) +{ + return 0; +} + +#endif /* __ARCH_ARM_NUMA_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/paging.h b/xen/include/asm-arm/paging.h new file mode 100644 index 0000000..4dc340f --- /dev/null +++ b/xen/include/asm-arm/paging.h @@ -0,0 +1,13 @@ +#ifndef _XEN_PAGING_H +#define _XEN_PAGING_H + +#endif /* XEN_PAGING_H */ + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/percpu.h b/xen/include/asm-arm/percpu.h new file mode 100644 index 0000000..9d369eb --- /dev/null +++ b/xen/include/asm-arm/percpu.h @@ -0,0 +1,28 @@ +#ifndef __ARM_PERCPU_H__ +#define __ARM_PERCPU_H__ + +#ifndef __ASSEMBLY__ +extern char __per_cpu_start[], __per_cpu_data_end[]; +extern unsigned long __per_cpu_offset[NR_CPUS]; +void percpu_init_areas(void); +#endif + +/* Separate out the type, so (int[3], foo) works. */ +#define __DEFINE_PER_CPU(type, name, suffix) \ + __attribute__((__section__(".bss.percpu" #suffix))) \ + __typeof__(type) per_cpu_##name + +#define per_cpu(var, cpu) ((&per_cpu__##var)[cpu?0:0]) +#define __get_cpu_var(var) per_cpu__##var + +#define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name + +#endif /* __ARM_PERCPU_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h new file mode 100644 index 0000000..1f85d31 --- /dev/null +++ b/xen/include/asm-arm/processor.h @@ -0,0 +1,269 @@ +#ifndef __ASM_ARM_PROCESSOR_H +#define __ASM_ARM_PROCESSOR_H + +#include <asm/cpregs.h> + +/* PSR bits (CPSR, SPSR)*/ + +/* 0-4: Mode */ +#define PSR_MODE_MASK 0x1f +#define PSR_MODE_USR 0x10 +#define PSR_MODE_FIQ 0x11 +#define PSR_MODE_IRQ 0x12 +#define PSR_MODE_SVC 0x13 +#define PSR_MODE_MON 0x16 +#define PSR_MODE_ABT 0x17 +#define PSR_MODE_HYP 0x1a +#define PSR_MODE_UND 0x1b +#define PSR_MODE_SYS 0x1f + +#define PSR_THUMB (1<<5) /* Thumb Mode enable */ +#define PSR_FIQ_MASK (1<<6) /* Fast Interrupt mask */ +#define PSR_IRQ_MASK (1<<7) /* Interrupt mask */ +#define PSR_ABT_MASK (1<<8) /* Asynchronous Abort mask */ +#define PSR_BIG_ENDIAN (1<<9) /* Big Endian Mode */ +#define PSR_JAZELLE (1<<24) /* Jazelle Mode */ + +/* TTBCR Translation Table Base Control Register */ +#define TTBCR_N_MASK 0x07 +#define TTBCR_N_16KB 0x00 +#define TTBCR_N_8KB 0x01 +#define TTBCR_N_4KB 0x02 +#define TTBCR_N_2KB 0x03 +#define TTBCR_N_1KB 0x04 + +/* SCTLR System Control Register. */ +/* HSCTLR is a subset of this. 
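The SCTLR/HSCTLR bit definitions that follow pair naturally with the READ_CP32()/WRITE_CP32() accessors from cpregs.h. A sketch, illustrative only, of the usual read-modify-write-isb sequence on such a system register:

    uint32_t sctlr = READ_CP32(SCTLR);
    sctlr |= SCTLR_M;          /* e.g. set the MMU-enable bit */
    WRITE_CP32(sctlr, SCTLR);
    isb();                     /* barrier from asm/system.h, so the new
                                * setting applies to following instructions */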
*/ +#define SCTLR_TE (1<<30) +#define SCTLR_AFE (1<<29) +#define SCTLR_TRE (1<<28) +#define SCTLR_NMFI (1<<27) +#define SCTLR_EE (1<<25) +#define SCTLR_VE (1<<24) +#define SCTLR_U (1<<22) +#define SCTLR_FI (1<<21) +#define SCTLR_WXN (1<<19) +#define SCTLR_HA (1<<17) +#define SCTLR_RR (1<<14) +#define SCTLR_V (1<<13) +#define SCTLR_I (1<<12) +#define SCTLR_Z (1<<11) +#define SCTLR_SW (1<<10) +#define SCTLR_B (1<<7) +#define SCTLR_C (1<<2) +#define SCTLR_A (1<<1) +#define SCTLR_M (1<<0) + +#define SCTLR_BASE 0x00c50078 +#define HSCTLR_BASE 0x30c51878 + +/* HCR Hyp Configuration Register */ +#define HCR_TGE (1<<27) +#define HCR_TVM (1<<26) +#define HCR_TTLB (1<<25) +#define HCR_TPU (1<<24) +#define HCR_TPC (1<<23) +#define HCR_TSW (1<<22) +#define HCR_TAC (1<<21) +#define HCR_TIDCP (1<<20) +#define HCR_TSC (1<<19) +#define HCR_TID3 (1<<18) +#define HCR_TID2 (1<<17) +#define HCR_TID1 (1<<16) +#define HCR_TID0 (1<<15) +#define HCR_TWE (1<<14) +#define HCR_TWI (1<<13) +#define HCR_DC (1<<12) +#define HCR_BSU_MASK (3<<10) +#define HCR_FB (1<<9) +#define HCR_VA (1<<8) +#define HCR_VI (1<<7) +#define HCR_VF (1<<6) +#define HCR_AMO (1<<5) +#define HCR_IMO (1<<4) +#define HCR_FMO (1<<3) +#define HCR_PTW (1<<2) +#define HCR_SWIO (1<<1) +#define HCR_VM (1<<0) + +#define HSR_EC_WFI_WFE 0x01 +#define HSR_EC_CP15_32 0x03 +#define HSR_EC_CP15_64 0x04 +#define HSR_EC_CP14_32 0x05 +#define HSR_EC_CP14_DBG 0x06 +#define HSR_EC_CP 0x07 +#define HSR_EC_CP10 0x08 +#define HSR_EC_JAZELLE 0x09 +#define HSR_EC_BXJ 0x0a +#define HSR_EC_CP14_64 0x0c +#define HSR_EC_SVC 0x11 +#define HSR_EC_HVC 0x12 +#define HSR_EC_INSTR_ABORT_GUEST 0x20 +#define HSR_EC_INSTR_ABORT_HYP 0x21 +#define HSR_EC_DATA_ABORT_GUEST 0x24 +#define HSR_EC_DATA_ABORT_HYP 0x25 + +#ifndef __ASSEMBLY__ +union hsr { + uint32_t bits; + struct { + unsigned long iss:25; /* Instruction Specific Syndrome */ + unsigned long len:1; /* Instruction length */ + unsigned long ec:6; /* Exception Class */ + }; + + struct hsr_cp32 { + unsigned long read:1; /* Direction */ + unsigned long crm:4; /* CRm */ + unsigned long reg:4; /* Rt */ + unsigned long sbzp:1; + unsigned long crn:4; /* CRn */ + unsigned long op1:3; /* Op1 */ + unsigned long op2:3; /* Op2 */ + unsigned long cc:4; /* Condition Code */ + unsigned long ccvalid:1;/* CC Valid */ + unsigned long len:1; /* Instruction length */ + unsigned long ec:6; /* Exception Class */ + } cp32; /* HSR_EC_CP15_32, CP14_32, CP10 */ + + struct hsr_cp64 { + unsigned long read:1; /* Direction */ + unsigned long crm:4; /* CRm */ + unsigned long reg1:4; /* Rt1 */ + unsigned long sbzp1:1; + unsigned long reg2:4; /* Rt2 */ + unsigned long sbzp2:2; + unsigned long op1:4; /* Op1 */ + unsigned long cc:4; /* Condition Code */ + unsigned long ccvalid:1;/* CC Valid */ + unsigned long len:1; /* Instruction length */ + unsigned long ec:6; /* Exception Class */ + } cp64; /* HSR_EC_CP15_64, HSR_EC_CP14_64 */ + + struct hsr_dabt { + unsigned long dfsc:6; /* Data Fault Status Code */ + unsigned long write:1; /* Write / not Read */ + unsigned long s1ptw:1; /* */ + unsigned long cache:1; /* Cache Maintenance */ + unsigned long eat:1; /* External Abort Type */ + unsigned long sbzp0:6; + unsigned long reg:4; /* Register */ + unsigned long sbzp1:1; + unsigned long sign:1; /* Sign extend */ + unsigned long size:2; /* Access Size */ + unsigned long valid:1; /* Syndrome Valid */ + unsigned long len:1; /* Instruction length */ + unsigned long ec:6; /* Exception Class */ + } dabt; /* HSR_EC_DATA_ABORT_* */ +}; +#endif + +/* HSR.EC == 
HSR_CP{15,14,10}_32 */ +#define HSR_CP32_OP2_MASK (0x000e0000) +#define HSR_CP32_OP2_SHIFT (17) +#define HSR_CP32_OP1_MASK (0x0001c000) +#define HSR_CP32_OP1_SHIFT (14) +#define HSR_CP32_CRN_MASK (0x00003c00) +#define HSR_CP32_CRN_SHIFT (10) +#define HSR_CP32_CRM_MASK (0x0000001e) +#define HSR_CP32_CRM_SHIFT (1) +#define HSR_CP32_REGS_MASK (HSR_CP32_OP1_MASK|HSR_CP32_OP2_MASK|\ + HSR_CP32_CRN_MASK|HSR_CP32_CRM_MASK) + +/* HSR.EC == HSR_CP{15,14}_64 */ +#define HSR_CP64_OP1_MASK (0x000f0000) +#define HSR_CP64_OP1_SHIFT (16) +#define HSR_CP64_CRM_MASK (0x0000001e) +#define HSR_CP64_CRM_SHIFT (1) +#define HSR_CP64_REGS_MASK (HSR_CP64_OP1_MASK|HSR_CP64_CRM_MASK) + +/* Physical Address Register */ +#define PAR_F (1<<0) + +/* .... If F == 1 */ +#define PAR_FSC_SHIFT (1) +#define PAR_FSC_MASK (0x3f<<PAR_FSC_SHIFT) +#define PAR_STAGE21 (1<<8) /* Stage 2 Fault During Stage 1 Walk */ +#define PAR_STAGE2 (1<<9) /* Stage 2 Fault */ + +/* If F == 0 */ +#define PAR_MAIR_SHIFT 56 /* Memory Attributes */ +#define PAR_MAIR_MASK (0xffLL<<PAR_MAIR_SHIFT) +#define PAR_NS (1<<9) /* Non-Secure */ +#define PAR_SH_SHIFT 7 /* Shareability */ +#define PAR_SH_MASK (3<<PAR_SH_SHIFT) + +/* Fault Status Register */ +/* + * 543210 BIT + * 00XXLL -- XX Fault Level LL + * ..01LL -- Translation Fault LL + * ..10LL -- Access Fault LL + * ..11LL -- Permission Fault LL + * 01xxxx -- Abort/Parity + * 10xxxx -- Other + * 11xxxx -- Implementation Defined + */ +#define FSC_TYPE_MASK (0x3<<4) +#define FSC_TYPE_FAULT (0x00<<4) +#define FSC_TYPE_ABT (0x01<<4) +#define FSC_TYPE_OTH (0x02<<4) +#define FSC_TYPE_IMPL (0x03<<4) + +#define FSC_FLT_TRANS (0x04) +#define FSC_FLT_ACCESS (0x08) +#define FSC_FLT_PERM (0x0c) +#define FSC_SEA (0x10) /* Synchronous External Abort */ +#define FSC_SPE (0x18) /* Memory Access Synchronous Parity Error */ +#define FSC_APE (0x11) /* Memory Access Asynchronous Parity Error */ +#define FSC_SEATT (0x14) /* Sync. Ext. Abort Translation Table */ +#define FSC_SPETT (0x1c) /* Sync. Parity. Error Translation Table */ +#define FSC_AF (0x21) /* Alignment Fault */ +#define FSC_DE (0x22) /* Debug Event */ +#define FSC_LKD (0x34) /* Lockdown Abort */ +#define FSC_CPR (0x3a) /* Coprocessor Abort */ + +#define FSC_LL_MASK (0x03<<0) + +/* Time counter hypervisor control register */ +#define CNTHCTL_PA (1u<<0) /* Kernel/user access to physical counter */ +#define CNTHCTL_TA (1u<<1) /* Kernel/user access to CNTP timer */ + +/* Timer control registers */ +#define CNTx_CTL_ENABLE (1u<<0) /* Enable timer */ +#define CNTx_CTL_MASK (1u<<1) /* Mask IRQ */ +#define CNTx_CTL_PENDING (1u<<2) /* IRQ pending */ + +/* CPUID bits */ +#define ID_PFR1_GT_MASK 0x000F0000 /* Generic Timer interface support */ +#define ID_PFR1_GT_v1 0x00010000 + +#define MSR(reg,val) asm volatile ("msr "#reg", %0\n" : : "r" (val)) +#define MRS(val,reg) asm volatile ("mrs %0,"#reg"\n" : "=r" (val)) + +#ifndef __ASSEMBLY__ +extern uint32_t hyp_traps_vector[8]; + +void panic_PAR(uint64_t par, const char *when); + +void show_execution_state(struct cpu_user_regs *regs); +void show_registers(struct cpu_user_regs *regs); +//#define dump_execution_state() run_in_exception_handler(show_execution_state) +#define dump_execution_state() asm volatile (".word 0xe7f000f0\n"); /* XXX */ + +#define cpu_relax() barrier() /* Could yield?
*/ + +/* All a bit UP for the moment */ +#define cpu_to_core(_cpu) (0) +#define cpu_to_socket(_cpu) (0) + +#endif /* __ASSEMBLY__ */ +#endif /* __ASM_ARM_PROCESSOR_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/regs.h b/xen/include/asm-arm/regs.h new file mode 100644 index 0000000..ee095bf --- /dev/null +++ b/xen/include/asm-arm/regs.h @@ -0,0 +1,43 @@ +#ifndef __ARM_REGS_H__ +#define __ARM_REGS_H__ + +#include <xen/types.h> +#include <public/xen.h> +#include <asm/processor.h> + +#define psr_mode(psr,m) (((psr) & PSR_MODE_MASK) == m) + +#define usr_mode(r) psr_mode((r)->cpsr,PSR_MODE_USR) +#define fiq_mode(r) psr_mode((r)->cpsr,PSR_MODE_FIQ) +#define irq_mode(r) psr_mode((r)->cpsr,PSR_MODE_IRQ) +#define svc_mode(r) psr_mode((r)->cpsr,PSR_MODE_SVC) +#define mon_mode(r) psr_mode((r)->cpsr,PSR_MODE_MON) +#define abt_mode(r) psr_mode((r)->cpsr,PSR_MODE_ABT) +#define hyp_mode(r) psr_mode((r)->cpsr,PSR_MODE_HYP) +#define und_mode(r) psr_mode((r)->cpsr,PSR_MODE_UND) +#define sys_mode(r) psr_mode((r)->cpsr,PSR_MODE_SYS) + +#define guest_mode(r) \ +({ \ + unsigned long diff = (char *)guest_cpu_user_regs() - (char *)(r); \ + /* Frame pointer must point into current CPU stack. */ \ + ASSERT(diff < STACK_SIZE); \ + /* If not a guest frame, it must be a hypervisor frame. */ \ + ASSERT((diff == 0) || hyp_mode(r)); \ + /* Return TRUE if it's a guest frame. */ \ + (diff == 0); \ +}) + +#define return_reg(v) ((v)->arch.user_regs.r0) + +#define CTXT_SWITCH_STACK_BYTES (sizeof(struct cpu_user_regs)) + +#endif /* __ARM_REGS_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h new file mode 100644 index 0000000..c27d438 --- /dev/null +++ b/xen/include/asm-arm/setup.h @@ -0,0 +1,16 @@ +#ifndef __ARM_SETUP_H_ +#define __ARM_SETUP_H_ + +#include <public/version.h> + +void arch_get_xen_caps(xen_capabilities_info_t *info); + +#endif +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h new file mode 100644 index 0000000..9cdd87f --- /dev/null +++ b/xen/include/asm-arm/smp.h @@ -0,0 +1,25 @@ +#ifndef __ASM_SMP_H +#define __ASM_SMP_H + +#ifndef __ASSEMBLY__ +#include <xen/config.h> +#include <xen/cpumask.h> +#include <asm/current.h> +#endif + +DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask); +DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask); + +#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu)) + +#define raw_smp_processor_id() (get_processor_id()) + +#endif +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/softirq.h b/xen/include/asm-arm/softirq.h new file mode 100644 index 0000000..536af38 --- /dev/null +++ b/xen/include/asm-arm/softirq.h @@ -0,0 +1,15 @@ +#ifndef __ASM_SOFTIRQ_H__ +#define __ASM_SOFTIRQ_H__ + +#define VGIC_SOFTIRQ (NR_COMMON_SOFTIRQS + 0) +#define NR_ARCH_SOFTIRQS 1 + +#endif /* __ASM_SOFTIRQ_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/spinlock.h b/xen/include/asm-arm/spinlock.h new file mode 100644 index 0000000..b1825c9 --- /dev/null +++ b/xen/include/asm-arm/spinlock.h @@ -0,0 +1,144
+#ifndef __ASM_SPINLOCK_H
+#define __ASM_SPINLOCK_H
+
+#include <xen/config.h>
+#include <xen/lib.h>
+
+static inline void dsb_sev(void)
+{
+ __asm__ __volatile__ (
+ "dsb\n"
+ "sev\n"
+ );
+}
+
+typedef struct {
+ volatile unsigned int lock;
+} raw_spinlock_t;
+
+#define _RAW_SPIN_LOCK_UNLOCKED { 0 }
+
+#define _raw_spin_is_locked(x) ((x)->lock != 0)
+
+static always_inline void _raw_spin_unlock(raw_spinlock_t *lock)
+{
+ ASSERT(_raw_spin_is_locked(lock));
+
+ smp_mb();
+
+ __asm__ __volatile__(
+" str %1, [%0]\n"
+ :
+ : "r" (&lock->lock), "r" (0)
+ : "cc");
+
+ dsb_sev();
+}
+
+static always_inline int _raw_spin_trylock(raw_spinlock_t *lock)
+{
+ unsigned long tmp;
+
+ __asm__ __volatile__(
+" ldrex %0, [%1]\n"
+" teq %0, #0\n"
+" strexeq %0, %2, [%1]"
+ : "=&r" (tmp)
+ : "r" (&lock->lock), "r" (1)
+ : "cc");
+
+ if (tmp == 0) {
+ smp_mb();
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
+typedef struct {
+ volatile unsigned int lock;
+} raw_rwlock_t;
+
+#define _RAW_RW_LOCK_UNLOCKED { 0 }
+
+static always_inline int _raw_read_trylock(raw_rwlock_t *rw)
+{
+ unsigned long tmp, tmp2 = 1;
+
+ __asm__ __volatile__(
+"1: ldrex %0, [%2]\n"
+" adds %0, %0, #1\n"
+" strexpl %1, %0, [%2]\n"
+ : "=&r" (tmp), "+r" (tmp2)
+ : "r" (&rw->lock)
+ : "cc");
+
+ smp_mb();
+ return tmp2 == 0;
+}
+
+static always_inline int _raw_write_trylock(raw_rwlock_t *rw)
+{
+ unsigned long tmp;
+
+ __asm__ __volatile__(
+"1: ldrex %0, [%1]\n"
+" teq %0, #0\n"
+" strexeq %0, %2, [%1]"
+ : "=&r" (tmp)
+ : "r" (&rw->lock), "r" (0x80000000)
+ : "cc");
+
+ if (tmp == 0) {
+ smp_mb();
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
+static inline void _raw_read_unlock(raw_rwlock_t *rw)
+{
+ unsigned long tmp, tmp2;
+
+ smp_mb();
+
+ __asm__ __volatile__(
+"1: ldrex %0, [%2]\n"
+" sub %0, %0, #1\n"
+" strex %1, %0, [%2]\n"
+" teq %1, #0\n"
+" bne 1b"
+ : "=&r" (tmp), "=&r" (tmp2)
+ : "r" (&rw->lock)
+ : "cc");
+
+ if (tmp == 0)
+ dsb_sev();
+}
+
+static inline void _raw_write_unlock(raw_rwlock_t *rw)
+{
+ smp_mb();
+
+ __asm__ __volatile__(
+ "str %1, [%0]\n"
+ :
+ : "r" (&rw->lock), "r" (0)
+ : "cc");
+
+ dsb_sev();
+}
+
+#define _raw_rw_is_locked(x) ((x)->lock != 0)
+#define _raw_rw_is_write_locked(x) ((x)->lock == 0x80000000)
+
+#endif /* __ASM_SPINLOCK_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/string.h b/xen/include/asm-arm/string.h
new file mode 100644
index 0000000..f2d643d
--- /dev/null
+++ b/xen/include/asm-arm/string.h
@@ -0,0 +1,38 @@
+#ifndef __ARM_STRING_H__
+#define __ARM_STRING_H__
+
+#include <xen/config.h>
+
+#define __HAVE_ARCH_MEMCPY
+extern void * memcpy(void *, const void *, __kernel_size_t);
+
+/* Some versions of gcc don't have this builtin. It's non-critical anyway. */
+#define __HAVE_ARCH_MEMMOVE
+extern void *memmove(void *dest, const void *src, size_t n);
+
+#define __HAVE_ARCH_MEMSET
+extern void * memset(void *, int, __kernel_size_t);
+
+extern void __memzero(void *ptr, __kernel_size_t n);
+
+#define memset(p,v,n) \
+ ({ \
+ void *__p = (p); size_t __n = n; \
+ if ((__n) != 0) { \
+ if (__builtin_constant_p((v)) && (v) == 0) \
+ __memzero((__p),(__n)); \
+ else \
+ memset((__p),(v),(__n)); \
+ } \
+ (__p); \
+ })
+
+#endif /* __ARM_STRING_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
new file mode 100644
index 0000000..731d89f
--- /dev/null
+++ b/xen/include/asm-arm/system.h
@@ -0,0 +1,202 @@
+/* Portions taken from Linux arch/arm */
+#ifndef __ASM_SYSTEM_H
+#define __ASM_SYSTEM_H
+
+#include <xen/lib.h>
+#include <asm/processor.h>
+
+#define nop() \
+ asm volatile ( "nop" )
+
+#define xchg(ptr,x) \
+ ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+
+#define isb() __asm__ __volatile__ ("isb" : : : "memory")
+#define dsb() __asm__ __volatile__ ("dsb" : : : "memory")
+#define dmb() __asm__ __volatile__ ("dmb" : : : "memory")
+
+#define mb() dsb()
+#define rmb() dsb()
+#define wmb() mb()
+
+#define smp_mb() dmb()
+#define smp_rmb() dmb()
+#define smp_wmb() dmb()
+
+/*
+ * This is used to ensure the compiler did actually allocate the register we
+ * asked it for some inline assembly sequences. Apparently we can't trust
+ * the compiler from one version to another so a bit of paranoia won't hurt.
+ * This string is meant to be concatenated with the inline asm string and
+ * will cause compilation to stop on mismatch.
+ * (for details, see gcc PR 15089)
+ */
+#define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t"
+
+extern void __bad_xchg(volatile void *, int);
+
+static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+{
+ unsigned long ret;
+ unsigned int tmp;
+
+ smp_mb();
+
+ switch (size) {
+ case 1:
+ asm volatile("@ __xchg1\n"
+ "1: ldrexb %0, [%3]\n"
+ " strexb %1, %2, [%3]\n"
+ " teq %1, #0\n"
+ " bne 1b"
+ : "=&r" (ret), "=&r" (tmp)
+ : "r" (x), "r" (ptr)
+ : "memory", "cc");
+ break;
+ case 4:
+ asm volatile("@ __xchg4\n"
+ "1: ldrex %0, [%3]\n"
+ " strex %1, %2, [%3]\n"
+ " teq %1, #0\n"
+ " bne 1b"
+ : "=&r" (ret), "=&r" (tmp)
+ : "r" (x), "r" (ptr)
+ : "memory", "cc");
+ break;
+ default:
+ __bad_xchg(ptr, size), ret = 0;
+ break;
+ }
+ smp_mb();
+
+ return ret;
+}
+
+/*
+ * Atomic compare and exchange. Compare OLD with MEM, if identical,
+ * store NEW in MEM. Return the initial value in MEM. Success is
+ * indicated by comparing RETURN with OLD.
+ */
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
+
+static always_inline unsigned long __cmpxchg(
+ volatile void *ptr, unsigned long old, unsigned long new, int size)
+{
+ unsigned long /*long*/ oldval, res;
+
+ switch (size) {
+ case 1:
+ do {
+ asm volatile("@ __cmpxchg1\n"
+ " ldrexb %1, [%2]\n"
+ " mov %0, #0\n"
+ " teq %1, %3\n"
+ " strexbeq %0, %4, [%2]\n"
+ : "=&r" (res), "=&r" (oldval)
+ : "r" (ptr), "Ir" (old), "r" (new)
+ : "memory", "cc");
+ } while (res);
+ break;
+ case 2:
+ do {
+ asm volatile("@ __cmpxchg2\n"
+ " ldrexh %1, [%2]\n"
+ " mov %0, #0\n"
+ " teq %1, %3\n"
+ " strexheq %0, %4, [%2]\n"
+ : "=&r" (res), "=&r" (oldval)
+ : "r" (ptr), "Ir" (old), "r" (new)
+ : "memory", "cc");
+ } while (res);
+ break;
+ case 4:
+ do {
+ asm volatile("@ __cmpxchg4\n"
+ " ldrex %1, [%2]\n"
+ " mov %0, #0\n"
+ " teq %1, %3\n"
+ " strexeq %0, %4, [%2]\n"
+ : "=&r" (res), "=&r" (oldval)
+ : "r" (ptr), "Ir" (old), "r" (new)
+ : "memory", "cc");
+ } while (res);
+ break;
+#if 0
+ case 8:
+ do {
+ asm volatile("@ __cmpxchg8\n"
+ " ldrexd %1, [%2]\n"
+ " mov %0, #0\n"
+ " teq %1, %3\n"
+ " strexdeq %0, %4, [%2]\n"
+ : "=&r" (res), "=&r" (oldval)
+ : "r" (ptr), "Ir" (old), "r" (new)
+ : "memory", "cc");
+ } while (res);
+ break;
+#endif
+ default:
+ __bad_cmpxchg(ptr, size);
+ oldval = 0;
+ }
+
+ return oldval;
+}
+#define cmpxchg(ptr,o,n) \
+ ((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)(o), \
+ (unsigned long)(n),sizeof(*(ptr))))
+
+#define local_irq_disable() asm volatile ( "cpsid i @ local_irq_disable\n" : : : "cc" )
+#define local_irq_enable() asm volatile ( "cpsie i @ local_irq_enable\n" : : : "cc" )
+
+#define local_save_flags(x) \
+({ \
+ BUILD_BUG_ON(sizeof(x) != sizeof(long)); \
+ asm volatile ( "mrs %0, cpsr @ local_save_flags\n" \
+ : "=r" (x) :: "memory", "cc" ); \
+})
+#define local_irq_save(x) \
+({ \
+ local_save_flags(x); \
+ local_irq_disable(); \
+})
+#define local_irq_restore(x) \
+({ \
+ BUILD_BUG_ON(sizeof(x) != sizeof(long)); \
+ asm volatile ( \
+ "msr cpsr_c, %0 @ local_irq_restore\n" \
+ : \
+ : "r" (x) \
+ : "memory", "cc"); \
+})
+
+static inline int local_irq_is_enabled(void)
+{
+ unsigned long flags;
+ local_save_flags(flags);
+ return !(flags & PSR_IRQ_MASK);
+}
+
+#define local_fiq_enable() __asm__("cpsie f @ __stf\n" : : : "memory", "cc")
+#define local_fiq_disable() __asm__("cpsid f @ __clf\n" : : : "memory", "cc")
+
+#define local_abort_enable() __asm__("cpsie a @ __sta\n" : : : "memory", "cc")
+#define local_abort_disable() __asm__("cpsid a @ __sta\n" : : : "memory", "cc")
+
+static inline int local_fiq_is_enabled(void)
+{
+ unsigned long flags;
+ local_save_flags(flags);
+ return !(flags & PSR_FIQ_MASK);
+}
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/trace.h b/xen/include/asm-arm/trace.h
new file mode 100644
index 0000000..db84541
--- /dev/null
+++ b/xen/include/asm-arm/trace.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_TRACE_H__
+#define __ASM_TRACE_H__
+
+#endif /* __ASM_TRACE_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/types.h b/xen/include/asm-arm/types.h
new file mode 100644
index 0000000..48864f9
--- /dev/null
+++ b/xen/include/asm-arm/types.h
@@ -0,0 +1,57 @@
+#ifndef __ARM_TYPES_H__
+#define __ARM_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+#include <xen/config.h>
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8; + +typedef __signed__ short __s16; +typedef unsigned short __u16; + +typedef __signed__ int __s32; +typedef unsigned int __u32; + +#if defined(__GNUC__) && !defined(__STRICT_ANSI__) +typedef __signed__ long long __s64; +typedef unsigned long long __u64; +#endif + +typedef signed char s8; +typedef unsigned char u8; + +typedef signed short s16; +typedef unsigned short u16; + +typedef signed int s32; +typedef unsigned int u32; + +typedef signed long long s64; +typedef unsigned long long u64; +typedef u64 paddr_t; +#define INVALID_PADDR (~0ULL) +#define PRIpaddr "016llx" + +typedef unsigned long size_t; + +typedef char bool_t; +#define test_and_set_bool(b) xchg(&(b), 1) +#define test_and_clear_bool(b) xchg(&(b), 0) + +#endif /* __ASSEMBLY__ */ + +#define BITS_PER_LONG 32 +#define BYTES_PER_LONG 4 +#define LONG_BYTEORDER 2 + +#endif /* __ARM_TYPES_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/xenoprof.h b/xen/include/asm-arm/xenoprof.h new file mode 100644 index 0000000..131ac13 --- /dev/null +++ b/xen/include/asm-arm/xenoprof.h @@ -0,0 +1,12 @@ +#ifndef __ASM_XENOPROF_H__ +#define __ASM_XENOPROF_H__ + +#endif /* __ASM_XENOPROF_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h new file mode 100644 index 0000000..4d1daa9 --- /dev/null +++ b/xen/include/public/arch-arm.h @@ -0,0 +1,125 @@ +/****************************************************************************** + * arch-arm.h + * + * Guest OS interface to ARM Xen. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. 
+ * + * Copyright 2011 (C) Citrix Systems + */ + +#ifndef __XEN_PUBLIC_ARCH_ARM_H__ +#define __XEN_PUBLIC_ARCH_ARM_H__ + +#ifndef __ASSEMBLY__ +#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \ + typedef struct { type *p; } __guest_handle_ ## name + +#define __DEFINE_XEN_GUEST_HANDLE(name, type) \ + ___DEFINE_XEN_GUEST_HANDLE(name, type); \ + ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type) +#define DEFINE_XEN_GUEST_HANDLE(name) __DEFINE_XEN_GUEST_HANDLE(name, name) +#define __XEN_GUEST_HANDLE(name) __guest_handle_ ## name +#define XEN_GUEST_HANDLE(name) __XEN_GUEST_HANDLE(name) +#define set_xen_guest_handle_raw(hnd, val) do { (hnd).p = val; } while (0) +#ifdef __XEN_TOOLS__ +#define get_xen_guest_handle(val, hnd) do { val = (hnd).p; } while (0) +#endif +#define set_xen_guest_handle(hnd, val) set_xen_guest_handle_raw(hnd, val) + +struct cpu_user_regs +{ + uint32_t r0; + uint32_t r1; + uint32_t r2; + uint32_t r3; + uint32_t r4; + uint32_t r5; + uint32_t r6; + uint32_t r7; + uint32_t r8; + uint32_t r9; + uint32_t r10; + union { + uint32_t r11; + uint32_t fp; + }; + uint32_t r12; + + uint32_t sp; /* r13 - SP: Valid for Hyp. frames only, o/w banked (see below) */ + uint32_t lr; /* r14 - LR: Valid for Hyp. Same physical register as lr_usr. */ + + uint32_t pc; /* Return IP */ + uint32_t cpsr; /* Return mode */ + uint32_t pad0; /* Doubleword-align the kernel half of the frame */ + + /* Outer guest frame only from here on... */ + + uint32_t r8_fiq, r9_fiq, r10_fiq, r11_fiq, r12_fiq; + + uint32_t sp_usr, sp_svc, sp_abt, sp_und, sp_irq, sp_fiq; + uint32_t lr_usr, lr_svc, lr_abt, lr_und, lr_irq, lr_fiq; + + uint32_t spsr_svc, spsr_abt, spsr_und, spsr_irq, spsr_fiq; +}; +typedef struct cpu_user_regs cpu_user_regs_t; +DEFINE_XEN_GUEST_HANDLE(cpu_user_regs_t); + +typedef uint64_t xen_pfn_t; +#define PRI_xen_pfn PRIx64 + +/* Maximum number of virtual CPUs in legacy multi-processor guests. */ +/* Only one. All other VCPUS must use VCPUOP_register_vcpu_info */ +#define XEN_LEGACY_MAX_VCPUS 1 + +typedef uint32_t xen_ulong_t; + +struct vcpu_guest_context { + struct cpu_user_regs user_regs; /* User-level CPU registers */ + union { + uint32_t reg[16]; + struct { + uint32_t __pad[12]; + uint32_t sp; /* r13 */ + uint32_t lr; /* r14 */ + uint32_t pc; /* r15 */ + }; + }; +}; +typedef struct vcpu_guest_context vcpu_guest_context_t; +DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t); + +struct arch_vcpu_info { }; +typedef struct arch_vcpu_info arch_vcpu_info_t; + +struct arch_shared_info { }; +typedef struct arch_shared_info arch_shared_info_t; +#endif + +#endif /* __XEN_PUBLIC_ARCH_ARM_H__ */ + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h index 41b14ea..68bce71 100644 --- a/xen/include/public/xen.h +++ b/xen/include/public/xen.h @@ -33,6 +33,8 @@ #include "arch-x86/xen.h" #elif defined(__ia64__) #include "arch-ia64.h" +#elif defined(__arm__) +#include "arch-arm.h" #else #error "Unsupported architecture" #endif -- 1.7.2.5
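[Illustrative note, not part of the series: the HSR_CP32_* masks and shifts defined in asm-arm/processor.h above are meant to be combined along the following lines when decoding a 32-bit coprocessor access trap. This is only a sketch; it assumes 'hsr' holds the Hyp Syndrome Register value, and the helper names here are invented for the example.]

    /* Sketch: unpack the Op1/CRn fields of an HSR_CP{15,14,10}_32 trap
     * using the masks and shifts from processor.h above. */
    static inline unsigned int hsr_cp32_op1(uint32_t hsr)
    {
        return (hsr & HSR_CP32_OP1_MASK) >> HSR_CP32_OP1_SHIFT;
    }

    static inline unsigned int hsr_cp32_crn(uint32_t hsr)
    {
        return (hsr & HSR_CP32_CRN_MASK) >> HSR_CP32_CRN_SHIFT;
    }

    /* To match one specific register access, comparing the syndrome
     * masked with HSR_CP32_REGS_MASK against a precomputed pattern
     * avoids unpacking every field separately. */
    #define HSR_CP32_PATTERN(op1, op2, crn, crm)                         \
        (((op1) << HSR_CP32_OP1_SHIFT) | ((op2) << HSR_CP32_OP2_SHIFT) | \
         ((crn) << HSR_CP32_CRN_SHIFT) | ((crm) << HSR_CP32_CRM_SHIFT))

    static int is_midr_access(uint32_t hsr)  /* MRC p15, 0, <Rt>, c0, c0, 0 */
    {
        return (hsr & HSR_CP32_REGS_MASK) == HSR_CP32_PATTERN(0, 0, 0, 0);
    }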
Stefano Stabellini
2012-Jan-09 17:59 UTC
[PATCH v4 10/25] arm: bit manipulation, copy and division libraries
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Bit manipulation, division and memcpy & friends implementations for the
ARM architecture, shamelessly taken from Linux.

Changes in v2:
- implement __aeabi_uldivmod and __aeabi_ldivmod.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
---
 xen/arch/arm/lib/Makefile | 5 +
 xen/arch/arm/lib/assembler.h | 49 ++++++
 xen/arch/arm/lib/bitops.h | 36 +++++
 xen/arch/arm/lib/changebit.S | 18 +++
 xen/arch/arm/lib/clearbit.S | 19 +++
 xen/arch/arm/lib/copy_template.S | 266 +++++++++++++++++++++++++++++++++
 xen/arch/arm/lib/div64.S | 149 +++++++++++++++++++
 xen/arch/arm/lib/findbit.S | 115 +++++++++++++++
 xen/arch/arm/lib/lib1funcs.S | 302 ++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/lib/memcpy.S | 64 ++++++++
 xen/arch/arm/lib/memmove.S | 200 +++++++++++++++++++++++++
 xen/arch/arm/lib/memset.S | 129 ++++++++++++++++
 xen/arch/arm/lib/memzero.S | 127 ++++++++++++++++
 xen/arch/arm/lib/setbit.S | 18 +++
 xen/arch/arm/lib/testchangebit.S | 18 +++
 xen/arch/arm/lib/testclearbit.S | 18 +++
 xen/arch/arm/lib/testsetbit.S | 18 +++
 17 files changed, 1551 insertions(+), 0 deletions(-)
 create mode 100644 xen/arch/arm/lib/Makefile
 create mode 100644 xen/arch/arm/lib/assembler.h
 create mode 100644 xen/arch/arm/lib/bitops.h
 create mode 100644 xen/arch/arm/lib/changebit.S
 create mode 100644 xen/arch/arm/lib/clearbit.S
 create mode 100644 xen/arch/arm/lib/copy_template.S
 create mode 100644 xen/arch/arm/lib/div64.S
 create mode 100644 xen/arch/arm/lib/findbit.S
 create mode 100644 xen/arch/arm/lib/lib1funcs.S
 create mode 100644 xen/arch/arm/lib/memcpy.S
 create mode 100644 xen/arch/arm/lib/memmove.S
 create mode 100644 xen/arch/arm/lib/memset.S
 create mode 100644 xen/arch/arm/lib/memzero.S
 create mode 100644 xen/arch/arm/lib/setbit.S
 create mode 100644 xen/arch/arm/lib/testchangebit.S
 create mode 100644 xen/arch/arm/lib/testclearbit.S
 create mode 100644 xen/arch/arm/lib/testsetbit.S

diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/lib/Makefile
new file mode 100644
index 0000000..cbbed68
--- /dev/null
+++ b/xen/arch/arm/lib/Makefile
@@ -0,0 +1,5 @@
+obj-y += memcpy.o memmove.o memset.o memzero.o
+obj-y += findbit.o
+obj-y += setbit.o clearbit.o changebit.o
+obj-y += testsetbit.o testclearbit.o testchangebit.o
+obj-y += lib1funcs.o div64.o
diff --git a/xen/arch/arm/lib/assembler.h b/xen/arch/arm/lib/assembler.h
new file mode 100644
index 0000000..f8f0961
--- /dev/null
+++ b/xen/arch/arm/lib/assembler.h
@@ -0,0 +1,49 @@
+#ifndef __ARCH_ARM_LIB_ASSEMBLER_H__
+#define __ARCH_ARM_LIB_ASSEMBLER_H__
+
+/* From Linux arch/arm/include/asm/assembler.h */
+/*
+ * Data preload for architectures that support it
+ */
+#define PLD(code...) code
+
+/*
+ * This can be used to enable code to cacheline align the destination
+ * pointer when bulk writing to memory. Experiments on StrongARM and
+ * XScale didn't show this a worthwhile thing to do when the cache is not
+ * set to write-allocate (this would need further testing on XScale when WA
+ * is used).
+ *
+ * On Feroceon there is much to gain however, regardless of cache mode.
+ */
+#ifdef CONFIG_CPU_FEROCEON /* Not in Xen... */
+#define CALGN(code...) code
+#else
+#define CALGN(code...)
+#endif
+
+// No Thumb, hence:
+#define W(instr) instr
+#define ARM(instr...) instr
+#define THUMB(instr...)
+
+#ifdef CONFIG_ARM_UNWIND
+#define UNWIND(code...) code
+#else
+#define UNWIND(code...)
+#endif
+
+#define pull lsl
+#define push lsr
+#define get_byte_0 lsr #24
+#define get_byte_1 lsr #16
+#define get_byte_2 lsr #8
+#define get_byte_3 lsl #0
+#define put_byte_0 lsl #24
+#define put_byte_1 lsl #16
+#define put_byte_2 lsl #8
+#define put_byte_3 lsl #0
+
+#define smp_dmb dmb
+
+#endif /* __ARCH_ARM_LIB_ASSEMBLER_H__ */
diff --git a/xen/arch/arm/lib/bitops.h b/xen/arch/arm/lib/bitops.h
new file mode 100644
index 0000000..e56d4e8
--- /dev/null
+++ b/xen/arch/arm/lib/bitops.h
@@ -0,0 +1,36 @@
+ .macro bitop, instr
+ ands ip, r1, #3
+ strneb r1, [ip] @ assert word-aligned
+ mov r2, #1
+ and r3, r0, #31 @ Get bit offset
+ mov r0, r0, lsr #5
+ add r1, r1, r0, lsl #2 @ Get word offset
+ mov r3, r2, lsl r3
+1: ldrex r2, [r1]
+ \instr r2, r2, r3
+ strex r0, r2, [r1]
+ cmp r0, #0
+ bne 1b
+ bx lr
+ .endm
+
+ .macro testop, instr, store
+ ands ip, r1, #3
+ strneb r1, [ip] @ assert word-aligned
+ mov r2, #1
+ and r3, r0, #31 @ Get bit offset
+ mov r0, r0, lsr #5
+ add r1, r1, r0, lsl #2 @ Get word offset
+ mov r3, r2, lsl r3 @ create mask
+ smp_dmb
+1: ldrex r2, [r1]
+ ands r0, r2, r3 @ save old value of bit
+ \instr r2, r2, r3 @ toggle bit
+ strex ip, r2, [r1]
+ cmp ip, #0
+ bne 1b
+ smp_dmb
+ cmp r0, #0
+ movne r0, #1
+2: bx lr
+ .endm
diff --git a/xen/arch/arm/lib/changebit.S b/xen/arch/arm/lib/changebit.S
new file mode 100644
index 0000000..62954bc
--- /dev/null
+++ b/xen/arch/arm/lib/changebit.S
@@ -0,0 +1,18 @@
+/*
+ * linux/arch/arm/lib/changebit.S
+ *
+ * Copyright (C) 1995-1996 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <xen/config.h>
+
+#include "assembler.h"
+#include "bitops.h"
+ .text
+
+ENTRY(_change_bit)
+ bitop eor
+ENDPROC(_change_bit)
diff --git a/xen/arch/arm/lib/clearbit.S b/xen/arch/arm/lib/clearbit.S
new file mode 100644
index 0000000..42ce416
--- /dev/null
+++ b/xen/arch/arm/lib/clearbit.S
@@ -0,0 +1,19 @@
+/*
+ * linux/arch/arm/lib/clearbit.S
+ *
+ * Copyright (C) 1995-1996 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <xen/config.h>
+
+#include "assembler.h"
+#include "bitops.h"
+ .text
+
+ENTRY(_clear_bit)
+ bitop bic
+ENDPROC(_clear_bit)
diff --git a/xen/arch/arm/lib/copy_template.S b/xen/arch/arm/lib/copy_template.S
new file mode 100644
index 0000000..7f7f4d5
--- /dev/null
+++ b/xen/arch/arm/lib/copy_template.S
@@ -0,0 +1,266 @@
+/*
+ * linux/arch/arm/lib/copy_template.s
+ *
+ * Code template for optimized memory copy functions
+ *
+ * Author: Nicolas Pitre
+ * Created: Sep 28, 2005
+ * Copyright: MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * Theory of operation
+ * -------------------
+ *
+ * This file provides the core code for a forward memory copy used in
+ * the implementation of memcpy(), copy_to_user() and copy_from_user().
+ *
+ * The including file must define the following accessor macros
+ * according to the need of the given function:
+ *
+ * ldr1w ptr reg abort
+ *
+ * This loads one word from 'ptr', stores it in 'reg' and increments
+ * 'ptr' to the next word. The 'abort' argument is used for fixup tables.
+ *
+ * ldr4w ptr reg1 reg2 reg3 reg4 abort
+ * ldr8w ptr reg1 reg2 reg3 reg4 reg5 reg6 reg7 reg8 abort
+ *
+ * This loads four or eight words starting from 'ptr', stores them
+ * in provided registers and increments 'ptr' past those words.
+ * The 'abort' argument is used for fixup tables.
+ *
+ * ldr1b ptr reg cond abort
+ *
+ * Similar to ldr1w, but it loads a byte and increments 'ptr' one byte.
+ * It also must apply the condition code if provided, otherwise the
+ * "al" condition is assumed by default.
+ *
+ * str1w ptr reg abort
+ * str8w ptr reg1 reg2 reg3 reg4 reg5 reg6 reg7 reg8 abort
+ * str1b ptr reg cond abort
+ *
+ * Same as their ldr* counterparts, but data is stored to 'ptr' location
+ * rather than being loaded.
+ *
+ * enter reg1 reg2
+ *
+ * Preserve the provided registers on the stack plus any additional
+ * data as needed by the implementation including this code. Called
+ * upon code entry.
+ *
+ * exit reg1 reg2
+ *
+ * Restore registers with the values previously saved with the
+ * 'preserv' macro. Called upon code termination.
+ *
+ * LDR1W_SHIFT
+ * STR1W_SHIFT
+ *
+ * Correction to be applied to the "ip" register when branching into
+ * the ldr1w or str1w instructions (some of these macros may expand to
+ * more than one 32bit instruction in Thumb-2)
+ */
+
+ enter r4, lr
+
+ subs r2, r2, #4
+ blt 8f
+ ands ip, r0, #3
+ PLD( pld [r1, #0] )
+ bne 9f
+ ands ip, r1, #3
+ bne 10f
+
+1: subs r2, r2, #(28)
+ stmfd sp!, {r5 - r8}
+ blt 5f
+
+ CALGN( ands ip, r0, #31 )
+ CALGN( rsb r3, ip, #32 )
+ CALGN( sbcnes r4, r3, r2 ) @ C is always set here
+ CALGN( bcs 2f )
+ CALGN( adr r4, 6f )
+ CALGN( subs r2, r2, r3 ) @ C gets set
+ CALGN( add pc, r4, ip )
+
+ PLD( pld [r1, #0] )
+2: PLD( subs r2, r2, #96 )
+ PLD( pld [r1, #28] )
+ PLD( blt 4f )
+ PLD( pld [r1, #60] )
+ PLD( pld [r1, #92] )
+
+3: PLD( pld [r1, #124] )
+4: ldr8w r1, r3, r4, r5, r6, r7, r8, ip, lr, abort=20f
+ subs r2, r2, #32
+ str8w r0, r3, r4, r5, r6, r7, r8, ip, lr, abort=20f
+ bge 3b
+ PLD( cmn r2, #96 )
+ PLD( bge 4b )
+
+5: ands ip, r2, #28
+ rsb ip, ip, #32
+#if LDR1W_SHIFT > 0
+ lsl ip, ip, #LDR1W_SHIFT
+#endif
+ addne pc, pc, ip @ C is always clear here
+ b 7f
+6:
+ .rept (1 << LDR1W_SHIFT)
+ W(nop)
+ .endr
+ ldr1w r1, r3, abort=20f
+ ldr1w r1, r4, abort=20f
+ ldr1w r1, r5, abort=20f
+ ldr1w r1, r6, abort=20f
+ ldr1w r1, r7, abort=20f
+ ldr1w r1, r8, abort=20f
+ ldr1w r1, lr, abort=20f
+
+#if LDR1W_SHIFT < STR1W_SHIFT
+ lsl ip, ip, #STR1W_SHIFT - LDR1W_SHIFT
+#elif LDR1W_SHIFT > STR1W_SHIFT
+ lsr ip, ip, #LDR1W_SHIFT - STR1W_SHIFT
+#endif
+ add pc, pc, ip
+ nop
+ .rept (1 << STR1W_SHIFT)
+ W(nop)
+ .endr
+ str1w r0, r3, abort=20f
+ str1w r0, r4, abort=20f
+ str1w r0, r5, abort=20f
+ str1w r0, r6, abort=20f
+ str1w r0, r7, abort=20f
+ str1w r0, r8, abort=20f
+ str1w r0, lr, abort=20f
+
+ CALGN( bcs 2b )
+
+7: ldmfd sp!, {r5 - r8}
+
+8: movs r2, r2, lsl #31
+ ldr1b r1, r3, ne, abort=21f
+ ldr1b r1, r4, cs, abort=21f
+ ldr1b r1, ip, cs, abort=21f
+ str1b r0, r3, ne, abort=21f
+ str1b r0, r4, cs, abort=21f
+ str1b r0, ip, cs, abort=21f
+
+ exit r4, pc
+
+9: rsb ip, ip, #4
+ cmp ip, #2
+ ldr1b r1, r3, gt, abort=21f
+ ldr1b r1, r4, ge, abort=21f
+ ldr1b r1, lr, abort=21f
+ str1b r0, r3, gt, abort=21f
+ str1b r0, r4, ge, abort=21f
+ subs r2, r2, ip
+ str1b r0, lr, abort=21f
+ blt 8b
+ ands ip, r1, #3
+ beq 1b
+
+10: bic r1, r1, #3
+ cmp ip, #2
+ ldr1w r1, lr, abort=21f
+ beq 17f
+ bgt 18f
+
+
+ .macro forward_copy_shift pull push
+
+ subs r2, r2, #28
+
blt 14f + + CALGN( ands ip, r0, #31 ) + CALGN( rsb ip, ip, #32 ) + CALGN( sbcnes r4, ip, r2 ) @ C is always set here + CALGN( subcc r2, r2, ip ) + CALGN( bcc 15f ) + +11: stmfd sp!, {r5 - r9} + + PLD( pld [r1, #0] ) + PLD( subs r2, r2, #96 ) + PLD( pld [r1, #28] ) + PLD( blt 13f ) + PLD( pld [r1, #60] ) + PLD( pld [r1, #92] ) + +12: PLD( pld [r1, #124] ) +13: ldr4w r1, r4, r5, r6, r7, abort=19f + mov r3, lr, pull #\pull + subs r2, r2, #32 + ldr4w r1, r8, r9, ip, lr, abort=19f + orr r3, r3, r4, push #\push + mov r4, r4, pull #\pull + orr r4, r4, r5, push #\push + mov r5, r5, pull #\pull + orr r5, r5, r6, push #\push + mov r6, r6, pull #\pull + orr r6, r6, r7, push #\push + mov r7, r7, pull #\pull + orr r7, r7, r8, push #\push + mov r8, r8, pull #\pull + orr r8, r8, r9, push #\push + mov r9, r9, pull #\pull + orr r9, r9, ip, push #\push + mov ip, ip, pull #\pull + orr ip, ip, lr, push #\push + str8w r0, r3, r4, r5, r6, r7, r8, r9, ip, , abort=19f + bge 12b + PLD( cmn r2, #96 ) + PLD( bge 13b ) + + ldmfd sp!, {r5 - r9} + +14: ands ip, r2, #28 + beq 16f + +15: mov r3, lr, pull #\pull + ldr1w r1, lr, abort=21f + subs ip, ip, #4 + orr r3, r3, lr, push #\push + str1w r0, r3, abort=21f + bgt 15b + CALGN( cmp r2, #0 ) + CALGN( bge 11b ) + +16: sub r1, r1, #(\push / 8) + b 8b + + .endm + + + forward_copy_shift pull=8 push=24 + +17: forward_copy_shift pull=16 push=16 + +18: forward_copy_shift pull=24 push=8 + + +/* + * Abort preamble and completion macros. + * If a fixup handler is required then those macros must surround it. + * It is assumed that the fixup code will handle the private part of + * the exit macro. + */ + + .macro copy_abort_preamble +19: ldmfd sp!, {r5 - r9} + b 21f +20: ldmfd sp!, {r5 - r8} +21: + .endm + + .macro copy_abort_end + ldmfd sp!, {r4, pc} + .endm + diff --git a/xen/arch/arm/lib/div64.S b/xen/arch/arm/lib/div64.S new file mode 100644 index 0000000..2584772 --- /dev/null +++ b/xen/arch/arm/lib/div64.S @@ -0,0 +1,149 @@ +/* + * linux/arch/arm/lib/div64.S + * + * Optimized computation of 64-bit dividend / 32-bit divisor + * + * Author: Nicolas Pitre + * Created: Oct 5, 2003 + * Copyright: Monta Vista Software, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <xen/config.h> +#include "assembler.h" + +#define xl r0 +#define xh r1 +#define yl r2 +#define yh r3 + +/* + * __do_div64: perform a division with 64-bit dividend and 32-bit divisor. + * + * Note: Calling convention is totally non standard for optimal code. + * This is meant to be used by do_div() from include/asm/div64.h only. + * + * Input parameters: + * xh-xl = dividend (clobbered) + * r4 = divisor (preserved) + * + * Output values: + * yh-yl = result + * xh = remainder + * + * Clobbered regs: xl, ip + */ + +ENTRY(__do_div64) + + @ Test for easy paths first. + subs ip, r4, #1 + bls 9f @ divisor is 0 or 1 + tst ip, r4 + beq 8f @ divisor is power of 2 + + @ See if we need to handle upper 32-bit result. + cmp xh, r4 + mov yh, #0 + blo 3f + + @ Align divisor with upper part of dividend. + @ The aligned divisor is stored in yl preserving the original. + @ The bit position is stored in ip. + + clz yl, r4 + clz ip, xh + sub yl, yl, ip + mov ip, #1 + mov ip, ip, lsl yl + mov yl, r4, lsl yl + + @ The division loop for needed upper bit positions. + @ Break out early if dividend reaches 0. 
+2: cmp xh, yl
+ orrcs yh, yh, ip
+ subcss xh, xh, yl
+ movnes ip, ip, lsr #1
+ mov yl, yl, lsr #1
+ bne 2b
+
+ @ See if we need to handle lower 32-bit result.
+3: cmp xh, #0
+ mov yl, #0
+ cmpeq xl, r4
+ movlo xh, xl
+ movlo pc, lr
+
+ @ The division loop for lower bit positions.
+ @ Here we shift remainder bits leftwards rather than moving the
+ @ divisor for comparisons, considering the carry-out bit as well.
+ mov ip, #0x80000000
+4: movs xl, xl, lsl #1
+ adcs xh, xh, xh
+ beq 6f
+ cmpcc xh, r4
+5: orrcs yl, yl, ip
+ subcs xh, xh, r4
+ movs ip, ip, lsr #1
+ bne 4b
+ mov pc, lr
+
+ @ The top part of remainder became zero. If carry is set
+ @ (the 33rd bit) this is a false positive so resume the loop.
+ @ Otherwise, if lower part is also null then we are done.
+6: bcs 5b
+ cmp xl, #0
+ moveq pc, lr
+
+ @ We still have remainder bits in the low part. Bring them up.
+
+ clz xh, xl @ we know xh is zero here so...
+ add xh, xh, #1
+ mov xl, xl, lsl xh
+ mov ip, ip, lsr xh
+
+ @ Current remainder is now 1. It is worthless to compare with
+ @ divisor at this point since divisor can not be smaller than 3 here.
+ @ If possible, branch for another shift in the division loop.
+ @ If no bit position left then we are done.
+ movs ip, ip, lsr #1
+ mov xh, #1
+ bne 4b
+ mov pc, lr
+
+8: @ Division by a power of 2: determine what that divisor order is
+ @ then simply shift values around
+
+ clz ip, r4
+ rsb ip, ip, #31
+
+ mov yh, xh, lsr ip
+ mov yl, xl, lsr ip
+ rsb ip, ip, #32
+ ARM( orr yl, yl, xh, lsl ip )
+ THUMB( lsl xh, xh, ip )
+ THUMB( orr yl, yl, xh )
+ mov xh, xl, lsl ip
+ mov xh, xh, lsr ip
+ mov pc, lr
+
+ @ eq -> division by 1: obvious enough...
+9: moveq yl, xl
+ moveq yh, xh
+ moveq xh, #0
+ moveq pc, lr
+
+ @ Division by 0:
+ str lr, [sp, #-8]!
+ bl __div0
+
+ @ as wrong as it could be...
+ mov yl, #0
+ mov yh, #0
+ mov xh, #0
+ ldr pc, [sp], #8
+
+ENDPROC(__do_div64)
diff --git a/xen/arch/arm/lib/findbit.S b/xen/arch/arm/lib/findbit.S
new file mode 100644
index 0000000..5669b91
--- /dev/null
+++ b/xen/arch/arm/lib/findbit.S
@@ -0,0 +1,115 @@
+/*
+ * linux/arch/arm/lib/findbit.S
+ *
+ * Copyright (C) 1995-2000 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * 16th March 2001 - John Ripley <jripley@sonicblue.com>
+ * Fixed so that "size" is an exclusive not an inclusive quantity.
+ * All users of these functions expect exclusive sizes, and may
+ * also call with zero size.
+ * Reworked by rmk.
+ */
+
+#include <xen/config.h>
+
+#include "assembler.h"
+ .text
+
+/*
+ * Purpose : Find a 'zero' bit
+ * Prototype: int find_first_zero_bit(void *addr, unsigned int maxbit);
+ */
+ENTRY(_find_first_zero_bit)
+ teq r1, #0
+ beq 3f
+ mov r2, #0
+1:
+ ARM( ldrb r3, [r0, r2, lsr #3] )
+ THUMB( lsr r3, r2, #3 )
+ THUMB( ldrb r3, [r0, r3] )
+ eors r3, r3, #0xff @ invert bits
+ bne .L_found @ any now set - found zero bit
+ add r2, r2, #8 @ next bit pointer
+2: cmp r2, r1 @ any more?
+ blo 1b
+3: mov r0, r1 @ no free bits
+ mov pc, lr
+ENDPROC(_find_first_zero_bit)
+
+/*
+ * Purpose : Find next 'zero' bit
+ * Prototype: int find_next_zero_bit(void *addr, unsigned int maxbit, int offset)
+ */
+ENTRY(_find_next_zero_bit)
+ teq r1, #0
+ beq 3b
+ ands ip, r2, #7
+ beq 1b @ If new byte, goto old routine
+ ARM( ldrb r3, [r0, r2, lsr #3] )
+ THUMB( lsr r3, r2, #3 )
+ THUMB( ldrb r3, [r0, r3] )
+ eor r3, r3, #0xff @ now looking for a 1 bit
+ movs r3, r3, lsr ip @ shift off unused bits
+ bne .L_found
+ orr r2, r2, #7 @ if zero, then no bits here
+ add r2, r2, #1 @ align bit pointer
+ b 2b @ loop for next bit
+ENDPROC(_find_next_zero_bit)
+
+/*
+ * Purpose : Find a 'one' bit
+ * Prototype: int find_first_bit(const unsigned long *addr, unsigned int maxbit);
+ */
+ENTRY(_find_first_bit)
+ teq r1, #0
+ beq 3f
+ mov r2, #0
+1:
+ ARM( ldrb r3, [r0, r2, lsr #3] )
+ THUMB( lsr r3, r2, #3 )
+ THUMB( ldrb r3, [r0, r3] )
+ movs r3, r3
+ bne .L_found @ any now set - found one bit
+ add r2, r2, #8 @ next bit pointer
+2: cmp r2, r1 @ any more?
+ blo 1b
+3: mov r0, r1 @ no free bits
+ mov pc, lr
+ENDPROC(_find_first_bit)
+
+/*
+ * Purpose : Find next 'one' bit
+ * Prototype: int find_next_bit(void *addr, unsigned int maxbit, int offset)
+ */
+ENTRY(_find_next_bit)
+ teq r1, #0
+ beq 3b
+ ands ip, r2, #7
+ beq 1b @ If new byte, goto old routine
+ ARM( ldrb r3, [r0, r2, lsr #3] )
+ THUMB( lsr r3, r2, #3 )
+ THUMB( ldrb r3, [r0, r3] )
+ movs r3, r3, lsr ip @ shift off unused bits
+ bne .L_found
+ orr r2, r2, #7 @ if zero, then no bits here
+ add r2, r2, #1 @ align bit pointer
+ b 2b @ loop for next bit
+ENDPROC(_find_next_bit)
+
+/*
+ * One or more bits in the LSB of r3 are assumed to be set.
+ */
+.L_found:
+ rsb r0, r3, #0
+ and r3, r3, r0
+ clz r3, r3
+ rsb r3, r3, #31
+ add r0, r2, r3
+ cmp r1, r0 @ Clamp to maxbit
+ movlo r0, r1
+ mov pc, lr
+
diff --git a/xen/arch/arm/lib/lib1funcs.S b/xen/arch/arm/lib/lib1funcs.S
new file mode 100644
index 0000000..828e688
--- /dev/null
+++ b/xen/arch/arm/lib/lib1funcs.S
@@ -0,0 +1,302 @@
+/*
+ * linux/arch/arm/lib/lib1funcs.S: Optimized ARM division routines
+ *
+ * Author: Nicolas Pitre <nico@fluxnic.net>
+ * - contributed to gcc-3.4 on Sep 30, 2003
+ * - adapted for the Linux kernel on Oct 2, 2003
+ */
+
+/* Copyright 1995, 1996, 1998, 1999, 2000, 2003 Free Software Foundation, Inc.
+
+This file is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 2, or (at your option) any
+later version.
+
+In addition to the permissions in the GNU General Public License, the
+Free Software Foundation gives you unlimited permission to link the
+compiled version of this file into combinations with other programs,
+and to distribute those combinations without any restriction coming
+from the use of this file. (The General Public License restrictions
+do apply in other respects; for example, they cover modification of
+the file, and distribution when not linked into a combine
+executable.)
+
+This file is distributed in the hope that it will be useful, but
+WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; see the file COPYING. If not, write to
+the Free Software Foundation, 59 Temple Place - Suite 330,
+Boston, MA 02111-1307, USA. */
+
+
+#include <xen/config.h>
+#include "assembler.h"
+
+.macro ARM_DIV_BODY dividend, divisor, result, curbit
+
+ clz \curbit, \divisor
+ clz \result, \dividend
+ sub \result, \curbit, \result
+ mov \curbit, #1
+ mov \divisor, \divisor, lsl \result
+ mov \curbit, \curbit, lsl \result
+ mov \result, #0
+
+ @ Division loop
+1: cmp \dividend, \divisor
+ subhs \dividend, \dividend, \divisor
+ orrhs \result, \result, \curbit
+ cmp \dividend, \divisor, lsr #1
+ subhs \dividend, \dividend, \divisor, lsr #1
+ orrhs \result, \result, \curbit, lsr #1
+ cmp \dividend, \divisor, lsr #2
+ subhs \dividend, \dividend, \divisor, lsr #2
+ orrhs \result, \result, \curbit, lsr #2
+ cmp \dividend, \divisor, lsr #3
+ subhs \dividend, \dividend, \divisor, lsr #3
+ orrhs \result, \result, \curbit, lsr #3
+ cmp \dividend, #0 @ Early termination?
+ movnes \curbit, \curbit, lsr #4 @ No, any more bits to do?
+ movne \divisor, \divisor, lsr #4
+ bne 1b
+
+.endm
+
+
+.macro ARM_DIV2_ORDER divisor, order
+
+ clz \order, \divisor
+ rsb \order, \order, #31
+
+.endm
+
+
+.macro ARM_MOD_BODY dividend, divisor, order, spare
+
+ clz \order, \divisor
+ clz \spare, \dividend
+ sub \order, \order, \spare
+ mov \divisor, \divisor, lsl \order
+
+ @ Perform all needed subtractions to keep only the remainder.
+ @ Do comparisons in batch of 4 first.
+ subs \order, \order, #3 @ yes, 3 is intended here
+ blt 2f
+
+1: cmp \dividend, \divisor
+ subhs \dividend, \dividend, \divisor
+ cmp \dividend, \divisor, lsr #1
+ subhs \dividend, \dividend, \divisor, lsr #1
+ cmp \dividend, \divisor, lsr #2
+ subhs \dividend, \dividend, \divisor, lsr #2
+ cmp \dividend, \divisor, lsr #3
+ subhs \dividend, \dividend, \divisor, lsr #3
+ cmp \dividend, #1
+ mov \divisor, \divisor, lsr #4
+ subges \order, \order, #4
+ bge 1b
+
+ tst \order, #3
+ teqne \dividend, #0
+ beq 5f
+
+ @ Either 1, 2 or 3 comparison/subtractions are left.
+2: cmn \order, #2
+ blt 4f
+ beq 3f
+ cmp \dividend, \divisor
+ subhs \dividend, \dividend, \divisor
+ mov \divisor, \divisor, lsr #1
+3: cmp \dividend, \divisor
+ subhs \dividend, \dividend, \divisor
+ mov \divisor, \divisor, lsr #1
+4: cmp \dividend, \divisor
+ subhs \dividend, \dividend, \divisor
+5:
+.endm
+
+
+ENTRY(__udivsi3)
+ENTRY(__aeabi_uidiv)
+UNWIND(.fnstart)
+
+ subs r2, r1, #1
+ moveq pc, lr
+ bcc Ldiv0
+ cmp r0, r1
+ bls 11f
+ tst r1, r2
+ beq 12f
+
+ ARM_DIV_BODY r0, r1, r2, r3
+
+ mov r0, r2
+ mov pc, lr
+
+11: moveq r0, #1
+ movne r0, #0
+ mov pc, lr
+
+12: ARM_DIV2_ORDER r1, r2
+
+ mov r0, r0, lsr r2
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__udivsi3)
+ENDPROC(__aeabi_uidiv)
+
+ENTRY(__umodsi3)
+UNWIND(.fnstart)
+
+ subs r2, r1, #1 @ compare divisor with 1
+ bcc Ldiv0
+ cmpne r0, r1 @ compare dividend with divisor
+ moveq r0, #0
+ tsthi r1, r2 @ see if divisor is power of 2
+ andeq r0, r0, r2
+ movls pc, lr
+
+ ARM_MOD_BODY r0, r1, r2, r3
+
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__umodsi3)
+
+ENTRY(__divsi3)
+ENTRY(__aeabi_idiv)
+UNWIND(.fnstart)
+
+ cmp r1, #0
+ eor ip, r0, r1 @ save the sign of the result.
+ beq Ldiv0
+ rsbmi r1, r1, #0 @ loops below use unsigned.
+ subs r2, r1, #1 @ division by 1 or -1 ?
+ beq 10f
+ movs r3, r0
+ rsbmi r3, r0, #0 @ positive dividend value
+ cmp r3, r1
+ bls 11f
+ tst r1, r2 @ divisor is power of 2 ?
+ beq 12f
+
+ ARM_DIV_BODY r3, r1, r0, r2
+
+ cmp ip, #0
+ rsbmi r0, r0, #0
+ mov pc, lr
+
+10: teq ip, r0 @ same sign ?
+ rsbmi r0, r0, #0
+ mov pc, lr
+
+11: movlo r0, #0
+ moveq r0, ip, asr #31
+ orreq r0, r0, #1
+ mov pc, lr
+
+12: ARM_DIV2_ORDER r1, r2
+
+ cmp ip, #0
+ mov r0, r3, lsr r2
+ rsbmi r0, r0, #0
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__divsi3)
+ENDPROC(__aeabi_idiv)
+
+ENTRY(__modsi3)
+UNWIND(.fnstart)
+
+ cmp r1, #0
+ beq Ldiv0
+ rsbmi r1, r1, #0 @ loops below use unsigned.
+ movs ip, r0 @ preserve sign of dividend
+ rsbmi r0, r0, #0 @ if negative make positive
+ subs r2, r1, #1 @ compare divisor with 1
+ cmpne r0, r1 @ compare dividend with divisor
+ moveq r0, #0
+ tsthi r1, r2 @ see if divisor is power of 2
+ andeq r0, r0, r2
+ bls 10f
+
+ ARM_MOD_BODY r0, r1, r2, r3
+
+10: cmp ip, #0
+ rsbmi r0, r0, #0
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__modsi3)
+
+ENTRY(__aeabi_uidivmod)
+UNWIND(.fnstart)
+UNWIND(.save {r0, r1, ip, lr} )
+
+ stmfd sp!, {r0, r1, ip, lr}
+ bl __aeabi_uidiv
+ ldmfd sp!, {r1, r2, ip, lr}
+ mul r3, r0, r2
+ sub r1, r1, r3
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__aeabi_uidivmod)
+
+ENTRY(__aeabi_idivmod)
+UNWIND(.fnstart)
+UNWIND(.save {r0, r1, ip, lr} )
+ stmfd sp!, {r0, r1, ip, lr}
+ bl __aeabi_idiv
+ ldmfd sp!, {r1, r2, ip, lr}
+ mul r3, r0, r2
+ sub r1, r1, r3
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__aeabi_idivmod)
+
+ENTRY(__aeabi_uldivmod)
+UNWIND(.fnstart)
+UNWIND(.save {lr} )
+ sub sp, sp, #8
+ stmfd sp!, {sp, lr}
+ bl __qdivrem
+ ldr lr, [sp, #4]
+ add sp, sp, #8
+ ldmfd sp!, {r2, r3}
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__aeabi_uldivmod)
+
+ENTRY(__aeabi_ldivmod)
+UNWIND(.fnstart)
+UNWIND(.save {lr} )
+ sub sp, sp, #16
+ stmfd sp!, {sp, lr}
+ bl __ldivmod_helper
+ ldr lr, [sp, #4]
+ add sp, sp, #16
+ ldmfd sp!, {r2, r3}
+ mov pc, lr
+
+UNWIND(.fnend)
+ENDPROC(__aeabi_ldivmod)
+
+Ldiv0:
+UNWIND(.fnstart)
+UNWIND(.pad #4)
+UNWIND(.save {lr})
+ str lr, [sp, #-8]!
+ bl __div0
+ mov r0, #0 @ About as wrong as it could be.
+ ldr pc, [sp], #8
+UNWIND(.fnend)
+ENDPROC(Ldiv0)
diff --git a/xen/arch/arm/lib/memcpy.S b/xen/arch/arm/lib/memcpy.S
new file mode 100644
index 0000000..f4bad5c
--- /dev/null
+++ b/xen/arch/arm/lib/memcpy.S
@@ -0,0 +1,64 @@
+/*
+ * linux/arch/arm/lib/memcpy.S
+ *
+ * Author: Nicolas Pitre
+ * Created: Sep 28, 2005
+ * Copyright: MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <xen/config.h>
+
+#include "assembler.h"
+
+#define LDR1W_SHIFT 0
+#define STR1W_SHIFT 0
+
+ .macro ldr1w ptr reg abort
+ W(ldr) \reg, [\ptr], #4
+ .endm
+
+ .macro ldr4w ptr reg1 reg2 reg3 reg4 abort
+ ldmia \ptr!, {\reg1, \reg2, \reg3, \reg4}
+ .endm
+
+ .macro ldr8w ptr reg1 reg2 reg3 reg4 reg5 reg6 reg7 reg8 abort
+ ldmia \ptr!, {\reg1, \reg2, \reg3, \reg4, \reg5, \reg6, \reg7, \reg8}
+ .endm
+
+ .macro ldr1b ptr reg cond=al abort
+ ldr\cond\()b \reg, [\ptr], #1
+ .endm
+
+ .macro str1w ptr reg abort
+ W(str) \reg, [\ptr], #4
+ .endm
+
+ .macro str8w ptr reg1 reg2 reg3 reg4 reg5 reg6 reg7 reg8 abort
+ stmia \ptr!, {\reg1, \reg2, \reg3, \reg4, \reg5, \reg6, \reg7, \reg8}
+ .endm
+
+ .macro str1b ptr reg cond=al abort
+ str\cond\()b \reg, [\ptr], #1
+ .endm
+
+ .macro enter reg1 reg2
+ stmdb sp!, {r0, \reg1, \reg2}
+ .endm
+
+ .macro exit reg1 reg2
+ ldmfd sp!, {r0, \reg1, \reg2}
+ .endm
+
+ .text
+
+/* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
+
+ENTRY(memcpy)
+
+#include "copy_template.S"
+
+ENDPROC(memcpy)
diff --git a/xen/arch/arm/lib/memmove.S b/xen/arch/arm/lib/memmove.S
new file mode 100644
index 0000000..4e142b8
--- /dev/null
+++ b/xen/arch/arm/lib/memmove.S
@@ -0,0 +1,200 @@
+/*
+ * linux/arch/arm/lib/memmove.S
+ *
+ * Author: Nicolas Pitre
+ * Created: Sep 28, 2005
+ * Copyright: (C) MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <xen/config.h>
+
+#include "assembler.h"
+
+ .text
+
+/*
+ * Prototype: void *memmove(void *dest, const void *src, size_t n);
+ *
+ * Note:
+ *
+ * If the memory regions don't overlap, we simply branch to memcpy which is
+ * normally a bit faster. Otherwise the copy is done going downwards. This
+ * is a transposition of the code from copy_template.S but with the copy
+ * occurring in the opposite direction.
+ */
+
+ENTRY(memmove)
+
+ subs ip, r0, r1
+ cmphi r2, ip
+ bls memcpy
+
+ stmfd sp!, {r0, r4, lr}
+ add r1, r1, r2
+ add r0, r0, r2
+ subs r2, r2, #4
+ blt 8f
+ ands ip, r0, #3
+ PLD( pld [r1, #-4] )
+ bne 9f
+ ands ip, r1, #3
+ bne 10f
+
+1: subs r2, r2, #(28)
+ stmfd sp!, {r5 - r8}
+ blt 5f
+
+ CALGN( ands ip, r0, #31 )
+ CALGN( sbcnes r4, ip, r2 ) @ C is always set here
+ CALGN( bcs 2f )
+ CALGN( adr r4, 6f )
+ CALGN( subs r2, r2, ip ) @ C is set here
+ CALGN( rsb ip, ip, #32 )
+ CALGN( add pc, r4, ip )
+
+ PLD( pld [r1, #-4] )
+2: PLD( subs r2, r2, #96 )
+ PLD( pld [r1, #-32] )
+ PLD( blt 4f )
+ PLD( pld [r1, #-64] )
+ PLD( pld [r1, #-96] )
+
+3: PLD( pld [r1, #-128] )
+4: ldmdb r1!, {r3, r4, r5, r6, r7, r8, ip, lr}
+ subs r2, r2, #32
+ stmdb r0!, {r3, r4, r5, r6, r7, r8, ip, lr}
+ bge 3b
+ PLD( cmn r2, #96 )
+ PLD( bge 4b )
+
+5: ands ip, r2, #28
+ rsb ip, ip, #32
+ addne pc, pc, ip @ C is always clear here
+ b 7f
+6: W(nop)
+ W(ldr) r3, [r1, #-4]!
+ W(ldr) r4, [r1, #-4]!
+ W(ldr) r5, [r1, #-4]!
+ W(ldr) r6, [r1, #-4]!
+ W(ldr) r7, [r1, #-4]!
+ W(ldr) r8, [r1, #-4]!
+ W(ldr) lr, [r1, #-4]!
+
+ add pc, pc, ip
+ nop
+ W(nop)
+ W(str) r3, [r0, #-4]!
+ W(str) r4, [r0, #-4]!
+ W(str) r5, [r0, #-4]!
+ W(str) r6, [r0, #-4]!
+ W(str) r7, [r0, #-4]!
+ W(str) r8, [r0, #-4]!
+ W(str) lr, [r0, #-4]!
+
+ CALGN( bcs 2b )
+
+7: ldmfd sp!, {r5 - r8}
+
+8: movs r2, r2, lsl #31
+ ldrneb r3, [r1, #-1]!
+ ldrcsb r4, [r1, #-1]!
+ ldrcsb ip, [r1, #-1]
+ strneb r3, [r0, #-1]!
+ strcsb r4, [r0, #-1]!
+ strcsb ip, [r0, #-1] + ldmfd sp!, {r0, r4, pc} + +9: cmp ip, #2 + ldrgtb r3, [r1, #-1]! + ldrgeb r4, [r1, #-1]! + ldrb lr, [r1, #-1]! + strgtb r3, [r0, #-1]! + strgeb r4, [r0, #-1]! + subs r2, r2, ip + strb lr, [r0, #-1]! + blt 8b + ands ip, r1, #3 + beq 1b + +10: bic r1, r1, #3 + cmp ip, #2 + ldr r3, [r1, #0] + beq 17f + blt 18f + + + .macro backward_copy_shift push pull + + subs r2, r2, #28 + blt 14f + + CALGN( ands ip, r0, #31 ) + CALGN( sbcnes r4, ip, r2 ) @ C is always set here + CALGN( subcc r2, r2, ip ) + CALGN( bcc 15f ) + +11: stmfd sp!, {r5 - r9} + + PLD( pld [r1, #-4] ) + PLD( subs r2, r2, #96 ) + PLD( pld [r1, #-32] ) + PLD( blt 13f ) + PLD( pld [r1, #-64] ) + PLD( pld [r1, #-96] ) + +12: PLD( pld [r1, #-128] ) +13: ldmdb r1!, {r7, r8, r9, ip} + mov lr, r3, push #\push + subs r2, r2, #32 + ldmdb r1!, {r3, r4, r5, r6} + orr lr, lr, ip, pull #\pull + mov ip, ip, push #\push + orr ip, ip, r9, pull #\pull + mov r9, r9, push #\push + orr r9, r9, r8, pull #\pull + mov r8, r8, push #\push + orr r8, r8, r7, pull #\pull + mov r7, r7, push #\push + orr r7, r7, r6, pull #\pull + mov r6, r6, push #\push + orr r6, r6, r5, pull #\pull + mov r5, r5, push #\push + orr r5, r5, r4, pull #\pull + mov r4, r4, push #\push + orr r4, r4, r3, pull #\pull + stmdb r0!, {r4 - r9, ip, lr} + bge 12b + PLD( cmn r2, #96 ) + PLD( bge 13b ) + + ldmfd sp!, {r5 - r9} + +14: ands ip, r2, #28 + beq 16f + +15: mov lr, r3, push #\push + ldr r3, [r1, #-4]! + subs ip, ip, #4 + orr lr, lr, r3, pull #\pull + str lr, [r0, #-4]! + bgt 15b + CALGN( cmp r2, #0 ) + CALGN( bge 11b ) + +16: add r1, r1, #(\pull / 8) + b 8b + + .endm + + + backward_copy_shift push=8 pull=24 + +17: backward_copy_shift push=16 pull=16 + +18: backward_copy_shift push=24 pull=8 + +ENDPROC(memmove) diff --git a/xen/arch/arm/lib/memset.S b/xen/arch/arm/lib/memset.S new file mode 100644 index 0000000..d2937a3 --- /dev/null +++ b/xen/arch/arm/lib/memset.S @@ -0,0 +1,129 @@ +/* + * linux/arch/arm/lib/memset.S + * + * Copyright (C) 1995-2000 Russell King + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * ASM optimised string functions + */ + +#include <xen/config.h> + +#include "assembler.h" + + .text + .align 5 + .word 0 + +1: subs r2, r2, #4 @ 1 do we have enough + blt 5f @ 1 bytes to align with? + cmp r3, #2 @ 1 + strltb r1, [r0], #1 @ 1 + strleb r1, [r0], #1 @ 1 + strb r1, [r0], #1 @ 1 + add r2, r2, r3 @ 1 (r2 = r2 - (4 - r3)) +/* + * The pointer is now aligned and the length is adjusted. Try doing the + * memset again. + */ + +ENTRY(memset) + ands r3, r0, #3 @ 1 unaligned? + bne 1b @ 1 +/* + * we know that the pointer in r0 is aligned to a word boundary. + */ + orr r1, r1, r1, lsl #8 + orr r1, r1, r1, lsl #16 + mov r3, r1 + cmp r2, #16 + blt 4f + +#if ! CALGN(1)+0 + +/* + * We need an extra register for this loop - save the return address and + * use the LR + */ + str lr, [sp, #-4]! + mov ip, r1 + mov lr, r1 + +2: subs r2, r2, #64 + stmgeia r0!, {r1, r3, ip, lr} @ 64 bytes at a time. + stmgeia r0!, {r1, r3, ip, lr} + stmgeia r0!, {r1, r3, ip, lr} + stmgeia r0!, {r1, r3, ip, lr} + bgt 2b + ldmeqfd sp!, {pc} @ Now <64 bytes to go. 
+/*
+ * No need to correct the count; we're only testing bits from now on
+ */
+ tst r2, #32
+ stmneia r0!, {r1, r3, ip, lr}
+ stmneia r0!, {r1, r3, ip, lr}
+ tst r2, #16
+ stmneia r0!, {r1, r3, ip, lr}
+ ldr lr, [sp], #4
+
+#else
+
+/*
+ * This version aligns the destination pointer in order to write
+ * whole cache lines at once.
+ */
+
+ stmfd sp!, {r4-r7, lr}
+ mov r4, r1
+ mov r5, r1
+ mov r6, r1
+ mov r7, r1
+ mov ip, r1
+ mov lr, r1
+
+ cmp r2, #96
+ tstgt r0, #31
+ ble 3f
+
+ and ip, r0, #31
+ rsb ip, ip, #32
+ sub r2, r2, ip
+ movs ip, ip, lsl #(32 - 4)
+ stmcsia r0!, {r4, r5, r6, r7}
+ stmmiia r0!, {r4, r5}
+ tst ip, #(1 << 30)
+ mov ip, r1
+ strne r1, [r0], #4
+
+3: subs r2, r2, #64
+ stmgeia r0!, {r1, r3-r7, ip, lr}
+ stmgeia r0!, {r1, r3-r7, ip, lr}
+ bgt 3b
+ ldmeqfd sp!, {r4-r7, pc}
+
+ tst r2, #32
+ stmneia r0!, {r1, r3-r7, ip, lr}
+ tst r2, #16
+ stmneia r0!, {r4-r7}
+ ldmfd sp!, {r4-r7, lr}
+
+#endif
+
+4: tst r2, #8
+ stmneia r0!, {r1, r3}
+ tst r2, #4
+ strne r1, [r0], #4
+/*
+ * When we get here, we've got less than 4 bytes to zero. We
+ * may have an unaligned pointer as well.
+ */
+5: tst r2, #2
+ strneb r1, [r0], #1
+ strneb r1, [r0], #1
+ tst r2, #1
+ strneb r1, [r0], #1
+ mov pc, lr
+ENDPROC(memset)
diff --git a/xen/arch/arm/lib/memzero.S b/xen/arch/arm/lib/memzero.S
new file mode 100644
index 0000000..ce25aca
--- /dev/null
+++ b/xen/arch/arm/lib/memzero.S
@@ -0,0 +1,127 @@
+/*
+ * linux/arch/arm/lib/memzero.S
+ *
+ * Copyright (C) 1995-2000 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <xen/config.h>
+
+#include "assembler.h"
+
+ .text
+ .align 5
+ .word 0
+/*
+ * Align the pointer in r0. r3 contains the number of bytes that we are
+ * mis-aligned by, and r1 is the number of bytes. If r1 < 4, then we
+ * don't bother; we use byte stores instead.
+ */
+1: subs r1, r1, #4 @ 1 do we have enough
+ blt 5f @ 1 bytes to align with?
+ cmp r3, #2 @ 1
+ strltb r2, [r0], #1 @ 1
+ strleb r2, [r0], #1 @ 1
+ strb r2, [r0], #1 @ 1
+ add r1, r1, r3 @ 1 (r1 = r1 - (4 - r3))
+/*
+ * The pointer is now aligned and the length is adjusted. Try doing the
+ * memzero again.
+ */
+
+ENTRY(__memzero)
+ mov r2, #0 @ 1
+ ands r3, r0, #3 @ 1 unaligned?
+ bne 1b @ 1
+/*
+ * r3 = 0, and we know that the pointer in r0 is aligned to a word boundary.
+ */
+ cmp r1, #16 @ 1 we can skip this chunk if we
+ blt 4f @ 1 have < 16 bytes
+
+#if ! CALGN(1)+0
+
+/*
+ * We need an extra register for this loop - save the return address and
+ * use the LR
+ */
+ str lr, [sp, #-4]! @ 1
+ mov ip, r2 @ 1
+ mov lr, r2 @ 1
+
+3: subs r1, r1, #64 @ 1 write 32 bytes out per loop
+ stmgeia r0!, {r2, r3, ip, lr} @ 4
+ stmgeia r0!, {r2, r3, ip, lr} @ 4
+ stmgeia r0!, {r2, r3, ip, lr} @ 4
+ stmgeia r0!, {r2, r3, ip, lr} @ 4
+ bgt 3b @ 1
+ ldmeqfd sp!, {pc} @ 1/2 quick exit
+/*
+ * No need to correct the count; we're only testing bits from now on
+ */
+ tst r1, #32 @ 1
+ stmneia r0!, {r2, r3, ip, lr} @ 4
+ stmneia r0!, {r2, r3, ip, lr} @ 4
+ tst r1, #16 @ 1 16 bytes or more?
+ stmneia r0!, {r2, r3, ip, lr} @ 4
+ ldr lr, [sp], #4 @ 1
+
+#else
+
+/*
+ * This version aligns the destination pointer in order to write
+ * whole cache lines at once.
+ */
+
+ stmfd sp!, {r4-r7, lr}
+ mov r4, r2
+ mov r5, r2
+ mov r6, r2
+ mov r7, r2
+ mov ip, r2
+ mov lr, r2
+
+ cmp r1, #96
+ andgts ip, r0, #31
+ ble 3f
+
+ rsb ip, ip, #32
+ sub r1, r1, ip
+ movs ip, ip, lsl #(32 - 4)
+ stmcsia r0!, {r4, r5, r6, r7}
+ stmmiia r0!, {r4, r5}
+ movs ip, ip, lsl #2
+ strcs r2, [r0], #4
+
+3: subs r1, r1, #64
+ stmgeia r0!, {r2-r7, ip, lr}
+ stmgeia r0!, {r2-r7, ip, lr}
+ bgt 3b
+ ldmeqfd sp!, {r4-r7, pc}
+
+ tst r1, #32
+ stmneia r0!, {r2-r7, ip, lr}
+ tst r1, #16
+ stmneia r0!, {r4-r7}
+ ldmfd sp!, {r4-r7, lr}
+
+#endif
+
+4: tst r1, #8 @ 1 8 bytes or more?
+ stmneia r0!, {r2, r3} @ 2
+ tst r1, #4 @ 1 4 bytes or more?
+ strne r2, [r0], #4 @ 1
+/*
+ * When we get here, we've got less than 4 bytes to zero. We
+ * may have an unaligned pointer as well.
+ */
+5: tst r1, #2 @ 1 2 bytes or more?
+ strneb r2, [r0], #1 @ 1
+ strneb r2, [r0], #1 @ 1
+ tst r1, #1 @ 1 a byte left over
+ strneb r2, [r0], #1 @ 1
+ mov pc, lr @ 1
+ENDPROC(__memzero)
diff --git a/xen/arch/arm/lib/setbit.S b/xen/arch/arm/lib/setbit.S
new file mode 100644
index 0000000..c828851
--- /dev/null
+++ b/xen/arch/arm/lib/setbit.S
@@ -0,0 +1,18 @@
+/*
+ * linux/arch/arm/lib/setbit.S
+ *
+ * Copyright (C) 1995-1996 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <xen/config.h>
+
+#include "assembler.h"
+#include "bitops.h"
+ .text
+
+ENTRY(_set_bit)
+ bitop orr
+ENDPROC(_set_bit)
diff --git a/xen/arch/arm/lib/testchangebit.S b/xen/arch/arm/lib/testchangebit.S
new file mode 100644
index 0000000..a7f527c
--- /dev/null
+++ b/xen/arch/arm/lib/testchangebit.S
@@ -0,0 +1,18 @@
+/*
+ * linux/arch/arm/lib/testchangebit.S
+ *
+ * Copyright (C) 1995-1996 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <xen/config.h>
+
+#include "assembler.h"
+#include "bitops.h"
+ .text
+
+ENTRY(_test_and_change_bit)
+ testop eor, str
+ENDPROC(_test_and_change_bit)
diff --git a/xen/arch/arm/lib/testclearbit.S b/xen/arch/arm/lib/testclearbit.S
new file mode 100644
index 0000000..8f39c72
--- /dev/null
+++ b/xen/arch/arm/lib/testclearbit.S
@@ -0,0 +1,18 @@
+/*
+ * linux/arch/arm/lib/testclearbit.S
+ *
+ * Copyright (C) 1995-1996 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <xen/config.h>
+
+#include "assembler.h"
+#include "bitops.h"
+ .text
+
+ENTRY(_test_and_clear_bit)
+ testop bicne, strne
+ENDPROC(_test_and_clear_bit)
diff --git a/xen/arch/arm/lib/testsetbit.S b/xen/arch/arm/lib/testsetbit.S
new file mode 100644
index 0000000..1b8d273
--- /dev/null
+++ b/xen/arch/arm/lib/testsetbit.S
@@ -0,0 +1,18 @@
+/*
+ * linux/arch/arm/lib/testsetbit.S
+ *
+ * Copyright (C) 1995-1996 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <xen/config.h>
+
+#include "assembler.h"
+#include "bitops.h"
+ .text
+
+ENTRY(_test_and_set_bit)
+ testop orreq, streq
+ENDPROC(_test_and_set_bit)
--
1.7.2.5
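[Illustrative note, not part of the series: the __aeabi_uidivmod, __aeabi_idivmod, __aeabi_uldivmod and __aeabi_ldivmod entry points above are what the compiler itself emits calls to under the ARM EABI, so once they are linked in, ordinary C division works throughout the hypervisor. A minimal sketch, with an invented function and constant; on 32-bit ARM there is no hardware 64-bit divide, so gcc lowers the operation to a library call.]

    /* Sketch: gcc compiles the 64-bit '/' below into a call to
     * __aeabi_uldivmod, which this patch implements (the signed variant
     * goes through __ldivmod_helper, per the v4 changelog). */
    #include <stdint.h>

    static uint64_t ticks_to_us(uint64_t ticks, uint32_t freq_hz)
    {
        /* 64-by-64 unsigned division -> __aeabi_uldivmod */
        return (ticks * 1000000ULL) / freq_hz;
    }

Without these routines the build would fail at link time with undefined references to __aeabi_uldivmod as soon as any C file divides a 64-bit quantity.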
Stefano Stabellini
2012-Jan-09 17:59 UTC
[PATCH v4 11/25] arm: entry.S and head.S
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Low level assembly routines, including entry.S and head.S. Also the linker script and a collection of dummy functions that we plan to reduce to zero as soon as possible. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/asm-offsets.c | 76 ++++++++++ xen/arch/arm/dummy.S | 72 ++++++++++ xen/arch/arm/entry.S | 107 ++++++++++++++ xen/arch/arm/head.S | 298 +++++++++++++++++++++++++++++++++++++++ xen/arch/arm/xen.lds.S | 141 ++++++++++++++++++ xen/include/asm-arm/asm_defns.h | 18 +++ 6 files changed, 712 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/asm-offsets.c create mode 100644 xen/arch/arm/dummy.S create mode 100644 xen/arch/arm/entry.S create mode 100644 xen/arch/arm/head.S create mode 100644 xen/arch/arm/xen.lds.S create mode 100644 xen/include/asm-arm/asm_defns.h diff --git a/xen/arch/arm/asm-offsets.c b/xen/arch/arm/asm-offsets.c new file mode 100644 index 0000000..ee5d5d4 --- /dev/null +++ b/xen/arch/arm/asm-offsets.c @@ -0,0 +1,76 @@ +/* + * Generate definitions needed by assembly language modules. + * This code generates raw asm output which is post-processed + * to extract and format the required data. + */ +#define COMPILE_OFFSETS + +#include <xen/config.h> +#include <xen/types.h> +#include <public/xen.h> +#include <asm/current.h> + +#define DEFINE(_sym, _val) \ + __asm__ __volatile__ ( "\n->" #_sym " %0 " #_val : : "i" (_val) ) +#define BLANK() \ + __asm__ __volatile__ ( "\n->" : : ) +#define OFFSET(_sym, _str, _mem) \ + DEFINE(_sym, offsetof(_str, _mem)); + +/* base-2 logarithm */ +#define __L2(_x) (((_x) & 0x00000002) ? 1 : 0) +#define __L4(_x) (((_x) & 0x0000000c) ? ( 2 + __L2( (_x)>> 2)) : __L2( _x)) +#define __L8(_x) (((_x) & 0x000000f0) ? ( 4 + __L4( (_x)>> 4)) : __L4( _x)) +#define __L16(_x) (((_x) & 0x0000ff00) ? ( 8 + __L8( (_x)>> 8)) : __L8( _x)) +#define LOG_2(_x) (((_x) & 0xffff0000) ? 
(16 + __L16((_x)>>16)) : __L16(_x)) + +void __dummy__(void) +{ + OFFSET(UREGS_sp, struct cpu_user_regs, sp); + OFFSET(UREGS_lr, struct cpu_user_regs, lr); + OFFSET(UREGS_pc, struct cpu_user_regs, pc); + OFFSET(UREGS_cpsr, struct cpu_user_regs, cpsr); + + OFFSET(UREGS_LR_usr, struct cpu_user_regs, lr_usr); + OFFSET(UREGS_SP_usr, struct cpu_user_regs, sp_usr); + + OFFSET(UREGS_SP_svc, struct cpu_user_regs, sp_svc); + OFFSET(UREGS_LR_svc, struct cpu_user_regs, lr_svc); + OFFSET(UREGS_SPSR_svc, struct cpu_user_regs, spsr_svc); + + OFFSET(UREGS_SP_abt, struct cpu_user_regs, sp_abt); + OFFSET(UREGS_LR_abt, struct cpu_user_regs, lr_abt); + OFFSET(UREGS_SPSR_abt, struct cpu_user_regs, spsr_abt); + + OFFSET(UREGS_SP_und, struct cpu_user_regs, sp_und); + OFFSET(UREGS_LR_und, struct cpu_user_regs, lr_und); + OFFSET(UREGS_SPSR_und, struct cpu_user_regs, spsr_und); + + OFFSET(UREGS_SP_irq, struct cpu_user_regs, sp_irq); + OFFSET(UREGS_LR_irq, struct cpu_user_regs, lr_irq); + OFFSET(UREGS_SPSR_irq, struct cpu_user_regs, spsr_irq); + + OFFSET(UREGS_SP_fiq, struct cpu_user_regs, sp_fiq); + OFFSET(UREGS_LR_fiq, struct cpu_user_regs, lr_fiq); + OFFSET(UREGS_SPSR_fiq, struct cpu_user_regs, spsr_fiq); + + OFFSET(UREGS_R8_fiq, struct cpu_user_regs, r8_fiq); + OFFSET(UREGS_R9_fiq, struct cpu_user_regs, r9_fiq); + OFFSET(UREGS_R10_fiq, struct cpu_user_regs, r10_fiq); + OFFSET(UREGS_R11_fiq, struct cpu_user_regs, r11_fiq); + OFFSET(UREGS_R12_fiq, struct cpu_user_regs, r12_fiq); + + OFFSET(UREGS_kernel_sizeof, struct cpu_user_regs, cpsr); + DEFINE(UREGS_user_sizeof, sizeof(struct cpu_user_regs)); + BLANK(); + + DEFINE(CPUINFO_sizeof, sizeof(struct cpu_info)); +} +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S new file mode 100644 index 0000000..5bc4f21 --- /dev/null +++ b/xen/arch/arm/dummy.S @@ -0,0 +1,72 @@ +/* Nothing is mapped at 1G, for the moment */ +#define DUMMY(x) \ + .globl x; \ +x: .word 0xe7f000f0 +/* x: mov r0, #0x40000000 ; str r0, [r0]; b x */ + +#define NOP(x) \ + .globl x; \ +x: mov pc, lr + +DUMMY(alloc_pirq_struct); +DUMMY(alloc_vcpu_guest_context); +DUMMY(arch_do_domctl); +DUMMY(arch_do_sysctl); +DUMMY(arch_do_vcpu_op); +DUMMY(arch_get_info_guest); +DUMMY(arch_get_xen_caps); +DUMMY(arch_memory_op); +DUMMY(arch_set_info_guest); +DUMMY(arch_vcpu_reset); +DUMMY(create_grant_host_mapping); +DUMMY(__cpu_die); +DUMMY(__cpu_disable); +DUMMY(__cpu_up); +DUMMY(do_get_pm_info); +DUMMY(domain_get_maximum_gpfn); +DUMMY(domain_relinquish_resources); +DUMMY(domain_set_time_offset); +DUMMY(dom_cow); +DUMMY(donate_page); +DUMMY(do_pm_op); +DUMMY(flush_tlb_mask); +DUMMY(free_vcpu_guest_context); +DUMMY(get_page); +DUMMY(get_page_type); +DUMMY(gmfn_to_mfn); +DUMMY(gnttab_clear_flag); +DUMMY(gnttab_host_mapping_get_page_type); +DUMMY(gnttab_mark_dirty); +DUMMY(hypercall_create_continuation); +DUMMY(iommu_map_page); +DUMMY(iommu_unmap_page); +DUMMY(is_iomem_page); +DUMMY(local_event_delivery_enable); +DUMMY(local_events_need_delivery); +DUMMY(machine_to_phys_mapping_valid); +DUMMY(max_page); +DUMMY(node_online_map); +DUMMY(nr_irqs_gsi); +DUMMY(p2m_pod_decrease_reservation); +DUMMY(guest_physmap_mark_populate_on_demand); +DUMMY(page_get_owner_and_reference); +DUMMY(page_is_ram_type); +DUMMY(per_cpu__cpu_core_mask); +DUMMY(per_cpu__cpu_sibling_mask); +DUMMY(__per_cpu_offset); +DUMMY(pirq_guest_bind); +DUMMY(pirq_guest_unbind); +DUMMY(pirq_set_affinity); +DUMMY(put_page); 
+DUMMY(put_page_type); +DUMMY(replace_grant_host_mapping); +DUMMY(send_timer_event); +DUMMY(share_xen_page_with_privileged_guests); +DUMMY(smp_send_state_dump); +DUMMY(steal_page); +DUMMY(sync_vcpu_execstate); +DUMMY(__udelay); +NOP(update_vcpu_system_time); +DUMMY(vcpu_mark_events_pending); +DUMMY(vcpu_show_execution_state); +DUMMY(wallclock_time); diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S new file mode 100644 index 0000000..16a8f36 --- /dev/null +++ b/xen/arch/arm/entry.S @@ -0,0 +1,107 @@ +#include <xen/config.h> +#include <asm/asm_defns.h> + +#define SAVE_ONE_BANKED(reg) mrs r11, reg; str r11, [sp, #UREGS_##reg] +#define RESTORE_ONE_BANKED(reg) ldr r11, [sp, #UREGS_##reg]; msr reg, r11 + +#define SAVE_BANKED(mode) \ + SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode) + +#define RESTORE_BANKED(mode) \ + RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode) + +#define SAVE_ALL \ + sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */ \ + push {r0-r12}; /* Save R0-R12 */ \ + \ + mrs r11, ELR_hyp; /* ELR_hyp is return address. */ \ + str r11, [sp, #UREGS_pc]; \ + \ + str lr, [sp, #UREGS_lr]; \ + \ + add r11, sp, #UREGS_kernel_sizeof+4; \ + str r11, [sp, #UREGS_sp]; \ + \ + mrs r11, SPSR_hyp; \ + str r11, [sp, #UREGS_cpsr]; \ + and r11, #PSR_MODE_MASK; \ + cmp r11, #PSR_MODE_HYP; \ + blne save_guest_regs + +save_guest_regs: + ldr r11, [sp, #UREGS_lr] + str r11, [sp, #UREGS_LR_usr] + ldr r11, =0xffffffff /* Clobber SP which is only valid for hypervisor frames. */ + str r11, [sp, #UREGS_sp] + SAVE_ONE_BANKED(SP_usr) + SAVE_BANKED(svc) + SAVE_BANKED(abt) + SAVE_BANKED(und) + SAVE_BANKED(irq) + SAVE_BANKED(fiq) + SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq) + SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq); + mov pc, lr + +#define DEFINE_TRAP_ENTRY(trap) \ + ALIGN; \ +trap_##trap: \ + SAVE_ALL; \ + adr lr, return_from_trap; \ + mov r0, sp; \ + mov r11, sp; \ + bic sp, #7; /* Align the stack pointer (noop on guest trap) */ \ + b do_trap_##trap + +.globl hyp_traps_vector + .align 5 +hyp_traps_vector: + .word 0 /* 0x00 - Reset */ + b trap_undefined_instruction /* 0x04 - Undefined Instruction */ + b trap_supervisor_call /* 0x08 - Supervisor Call */ + b trap_prefetch_abort /* 0x0c - Prefetch Abort */ + b trap_data_abort /* 0x10 - Data Abort */ + b trap_hypervisor /* 0x14 - Hypervisor */ + b trap_irq /* 0x18 - IRQ */ + b trap_fiq /* 0x1c - FIQ */ + +DEFINE_TRAP_ENTRY(undefined_instruction) +DEFINE_TRAP_ENTRY(supervisor_call) +DEFINE_TRAP_ENTRY(prefetch_abort) +DEFINE_TRAP_ENTRY(data_abort) +DEFINE_TRAP_ENTRY(hypervisor) +DEFINE_TRAP_ENTRY(irq) +DEFINE_TRAP_ENTRY(fiq) + +ENTRY(return_from_trap) + ldr r11, [sp, #UREGS_cpsr] + and r11, #PSR_MODE_MASK + cmp r11, #PSR_MODE_HYP + beq return_to_hypervisor + +ENTRY(return_to_guest) + mov r11, sp + bic sp, #7 /* Align the stack pointer */ + bl leave_hypervisor_tail + ldr r11, [sp, #UREGS_pc] + msr ELR_hyp, r11 + ldr r11, [sp, #UREGS_cpsr] + msr SPSR_hyp, r11 + RESTORE_ONE_BANKED(SP_usr) + RESTORE_BANKED(svc) + RESTORE_BANKED(abt) + RESTORE_BANKED(und) + RESTORE_BANKED(irq) + RESTORE_BANKED(fiq) + RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq) + RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq); + ldr lr, [sp, #UREGS_LR_usr] + pop {r0-r12} + add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */ + eret + +ENTRY(return_to_hypervisor) + ldr lr, [sp, #UREGS_lr] + pop {r0-r12} + add sp, 
#(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */ + eret diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S new file mode 100644 index 0000000..b98c921 --- /dev/null +++ b/xen/arch/arm/head.S @@ -0,0 +1,298 @@ +/* + * xen/arch/arm/head.S + * + * Start-of-day code for an ARMv7-A with virt extensions. + * + * Tim Deegan <tim@xen.org> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <asm/config.h> +#include <asm/page.h> +#include <asm/asm_defns.h> + + +/* Macro to print a string to the UART, if there is one. + * Clobbers r0-r3. */ +#ifdef EARLY_UART_ADDRESS +#define PRINT(_s) \ + adr r0, 98f ; \ + bl puts ; \ + b 99f ; \ +98: .asciz _s ; \ + .align 2 ; \ +99: +#else +#define PRINT(s) +#endif + + .arm + + /* This must be the very first address in the loaded image. + * It should be linked at XEN_VIRT_START, and loaded at any + * 2MB-aligned address. All of text+data+bss must fit in 2MB, + * or the initial pagetable code below will need adjustment. */ + .global start +start: + cpsid aif /* Disable all interrupts */ + + /* Save the bootloader arguments in less-clobberable registers */ + mov r7, r1 /* r7 := ARM-linux machine type */ + mov r8, r2 /* r8 := ATAG base address */ + + /* Find out where we are */ + ldr r0, =start + adr r9, start /* r9 := paddr (start) */ + sub r10, r9, r0 /* r10 := phys-offset */ + +#ifdef EARLY_UART_ADDRESS + /* Say hello */ + ldr r11, =EARLY_UART_ADDRESS /* r11 := UART base address */ + bl init_uart +#endif + + /* Check that this CPU has Hyp mode */ + mrc CP32(r0, ID_PFR1) + and r0, r0, #0xf000 /* Bits 12-15 define virt extensions */ + teq r0, #0x1000 /* Must == 0x1 or may be incompatible */ + beq 1f + bl putn + PRINT("- CPU doesn't support the virtualization extensions -\r\n") + b fail +1: + /* Check if we're already in it */ + mrs r0, cpsr + and r0, r0, #0x1f /* Mode is in the low 5 bits of CPSR */ + teq r0, #0x1a /* Hyp Mode? */ + bne 1f + PRINT("- Started in Hyp mode -\r\n") + b hyp +1: + /* Otherwise, it must have been Secure Supervisor mode */ + mrc CP32(r0, SCR) + tst r0, #0x1 /* Not-Secure bit set? */ + beq 1f + PRINT("- CPU is not in Hyp mode or Secure state -\r\n") + b fail +1: + /* OK, we're in Secure state. */ + PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n") + + /* Dance into Hyp mode */ + cpsid aif, #0x16 /* Enter Monitor mode */ + mrc CP32(r0, SCR) + orr r0, r0, #0x100 /* Set HCE */ + orr r0, r0, #0xb1 /* Set SCD, AW, FW and NS */ + bic r0, r0, #0xe /* Clear EA, FIQ and IRQ */ + mcr CP32(r0, SCR) + /* Ugly: the system timer's frequency register is only + * programmable in Secure state. Since we don't know where its + * memory-mapped control registers live, we can't find out the + * right frequency. Use the VE model's default frequency here. 
*/ + ldr r0, =0x5f5e100 /* 100 MHz */ + mcr CP32(r0, CNTFRQ) + ldr r0, =0x40c00 /* SMP, c11, c10 in non-secure mode */ + mcr CP32(r0, NSACR) + /* Continuing ugliness: Set up the GIC so NS state owns interrupts */ + mov r0, #GIC_BASE_ADDRESS + add r0, r0, #GIC_DR_OFFSET + mov r1, #0 + str r1, [r0] /* Disable delivery in the distributor */ + add r0, r0, #0x80 /* GICD_IGROUP0 */ + mov r2, #0xffffffff /* All interrupts to group 1 */ + str r2, [r0] + str r2, [r0, #4] + str r2, [r0, #8] + /* Must drop priority mask below 0x80 before entering NS state */ + mov r0, #GIC_BASE_ADDRESS + add r0, r0, #GIC_CR_OFFSET + ldr r1, =0xff + str r1, [r0, #0x4] /* -> GICC_PMR */ + /* Reset a few config registers */ + mov r0, #0 + mcr CP32(r0, FCSEIDR) + mcr CP32(r0, CONTEXTIDR) + /* FIXME: ought to reset some other NS control regs here */ + adr r1, 1f + adr r0, hyp /* Store paddr (hyp entry point) */ + str r0, [r1] /* where we can use it for RFE */ + isb /* Ensure we see the stored target address */ + rfeia r1 /* Enter Hyp mode */ + +1: .word 0 /* PC to enter Hyp mode at */ + .word 0x000001da /* CPSR: LE, Abort/IRQ/FIQ off, Hyp */ + +hyp: + PRINT("- Setting up control registers -\r\n") + + /* Set up memory attribute type tables */ + ldr r0, =MAIR0VAL + ldr r1, =MAIR1VAL + mcr CP32(r0, MAIR0) + mcr CP32(r1, MAIR1) + mcr CP32(r0, HMAIR0) + mcr CP32(r1, HMAIR1) + + /* Set up the HTCR: + * PT walks use Outer-Shareable accesses, + * PT walks are write-back, no-write-allocate in both cache levels, + * Full 32-bit address space goes through this table. */ + ldr r0, =0x80002500 + mcr CP32(r0, HTCR) + + /* Set up the HSCTLR: + * Exceptions in LE ARM, + * Low-latency IRQs disabled, + * Write-implies-XN disabled (for now), + * I-cache and d-cache enabled, + * Alignment checking enabled, + * MMU translation disabled (for now). 
*/ + ldr r0, =(HSCTLR_BASE|SCTLR_A|SCTLR_C) + mcr CP32(r0, HSCTLR) + + /* Write Xen's PT's paddr into the HTTBR */ + ldr r4, =xen_pgtable + add r4, r4, r10 /* r4 := paddr (xen_pagetable) */ + mov r5, #0 /* r4:r5 is paddr (xen_pagetable) */ + mcrr CP64(r4, r5, HTTBR) + + /* Build the baseline idle pagetable's first-level entries */ + ldr r1, =xen_second + add r1, r1, r10 /* r1 := paddr (xen_second) */ + mov r3, #0x0 + orr r2, r1, #0xe00 /* r2:r3 := table map of xen_second */ + orr r2, r2, #0x07f /* (+ rights for linear PT) */ + strd r2, r3, [r4, #0] /* Map it in slot 0 */ + add r2, r2, #0x1000 + strd r2, r3, [r4, #8] /* Map 2nd page in slot 1 */ + add r2, r2, #0x1000 + strd r2, r3, [r4, #16] /* Map 3rd page in slot 2 */ + add r2, r2, #0x1000 + strd r2, r3, [r4, #24] /* Map 4th page in slot 3 */ + + /* Now set up the second-level entries */ + orr r2, r9, #0xe00 + orr r2, r2, #0x07d /* r2:r3 := 2MB normal map of Xen */ + mov r4, r9, lsr #18 /* Slot for paddr(start) */ + strd r2, r3, [r1, r4] /* Map Xen there */ + ldr r4, =start + lsr r4, #18 /* Slot for vaddr(start) */ + strd r2, r3, [r1, r4] /* Map Xen there too */ +#ifdef EARLY_UART_ADDRESS + ldr r3, =(1<<(54-32)) /* NS for device mapping */ + lsr r2, r11, #21 + lsl r2, r2, #21 /* 2MB-aligned paddr of UART */ + orr r2, r2, #0xe00 + orr r2, r2, #0x071 /* r2:r3 := 2MB dev map including UART */ + add r4, r4, #8 + strd r2, r3, [r1, r4] /* Map it in the fixmap's slot */ +#endif + + PRINT("- Turning on paging -\r\n") + + ldr r1, =paging /* Explicit vaddr, not RIP-relative */ + mrc CP32(r0, HSCTLR) + orr r0, r0, #0x1 /* Add in the MMU enable bit */ + dsb /* Flush PTE writes and finish reads */ + mcr CP32(r0, HSCTLR) /* now paging is enabled */ + isb /* Now, flush the icache */ + mov pc, r1 /* Get a proper vaddr into PC */ +paging: + +#ifdef EARLY_UART_ADDRESS + /* Recover the UART address in the new address space */ + lsl r11, #11 + lsr r11, #11 /* UART base's offset from 2MB base */ + adr r0, start + add r0, r0, #0x200000 /* vaddr of the fixmap's 2MB slot */ + add r11, r11, r0 /* r11 := vaddr (UART base address) */ +#endif + + PRINT("- Entering C -\r\n") + + ldr sp, =init_stack /* Supply a stack */ + add sp, #STACK_SIZE /* (which grows down from the top). */ + sub sp, #CPUINFO_sizeof /* Make room for CPU save record */ + mov r0, r10 /* Marshal args: - phys_offset */ + mov r1, r7 /* - machine type */ + mov r2, r8 /* - ATAG address */ + b start_xen /* and disappear into the land of C */ + +/* Fail-stop + * r0: string explaining why */ +fail: PRINT("- Boot failed -\r\n") +1: wfe + b 1b + +#ifdef EARLY_UART_ADDRESS + +/* Bring up the UART. Specific to the PL011 UART. + * Clobbers r0-r2 */ +init_uart: + mov r1, #0x0 + str r1, [r11, #0x28] /* -> UARTFBRD (Baud divisor fraction) */ + mov r1, #0x4 /* 7.3728MHz / 0x4 == 16 * 115200 */ + str r1, [r11, #0x24] /* -> UARTIBRD (Baud divisor integer) */ + mov r1, #0x60 /* 8n1 */ + str r1, [r11, #0x2c] /* -> UARTLCR_H (Line control) */ + ldr r1, =0x00000301 /* RXE | TXE | UARTEN */ + str r1, [r11, #0x30] /* -> UARTCR (Control Register) */ + adr r0, 1f + b puts +1: .asciz "- UART enabled -\r\n" + .align 4 + +/* Print early debug messages. Specific to the PL011 UART. + * r0: Nul-terminated string to print. 
+ * Clobbers r0-r2 */ +puts: + ldr r2, [r11, #0x18] /* <- UARTFR (Flag register) */ + tst r2, #0x8 /* Check BUSY bit */ + bne puts /* Wait for the UART to be ready */ + ldrb r2, [r0], #1 /* Load next char */ + teq r2, #0 /* Exit on nul*/ + moveq pc, lr + str r2, [r11] /* -> UARTDR (Data Register) */ + b puts + +/* Print a 32-bit number in hex. Specific to the PL011 UART. + * r0: Number to print. + * clobbers r0-r3 */ +putn: + adr r1, hex + mov r3, #8 +1: ldr r2, [r11, #0x18] /* <- UARTFR (Flag register) */ + tst r2, #0x8 /* Check BUSY bit */ + bne 1b /* Wait for the UART to be ready */ + and r2, r0, #0xf0000000 /* Mask off the top nybble */ + ldrb r2, [r1, r2, lsr #28] /* Convert to a char */ + str r2, [r11] /* -> UARTDR (Data Register) */ + lsl r0, #4 /* Roll it through one nybble at a time */ + subs r3, r3, #1 + bne 1b + adr r0, crlf /* Finish with a newline */ + b puts + +crlf: .asciz "\r\n" +hex: .ascii "0123456789abcdef" + .align 2 + +#else /* EARLY_UART_ADDRESS */ + +init_uart: +.global early_puts +early_puts: +puts: +putn: mov pc, lr + +#endif /* EARLY_UART_ADDRESS */ diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S new file mode 100644 index 0000000..5a62e2c --- /dev/null +++ b/xen/arch/arm/xen.lds.S @@ -0,0 +1,141 @@ +/* Excerpts written by Martin Mares <mj@atrey.karlin.mff.cuni.cz> */ +/* Modified for i386/x86-64 Xen by Keir Fraser */ +/* Modified for ARM Xen by Ian Campbell */ + +#include <xen/config.h> +#include <xen/cache.h> +#include <asm/page.h> +#include <asm/percpu.h> +#undef ENTRY +#undef ALIGN + +ENTRY(start) + +OUTPUT_ARCH(arm) + +PHDRS +{ + text PT_LOAD /* XXX should be AT ( XEN_PHYS_START ) */ ; +} +SECTIONS +{ + . = XEN_VIRT_START; + _start = .; + .text : /* XXX should be AT ( XEN_PHYS_START ) */ { + _stext = .; /* Text section */ + *(.text) + *(.fixup) + *(.gnu.warning) + _etext = .; /* End of text section */ + } :text = 0x9090 + + . = ALIGN(PAGE_SIZE); + .rodata : { + _srodata = .; /* Read-only data */ + *(.rodata) + *(.rodata.*) + _erodata = .; /* End of read-only data */ + } :text + + .data : { /* Data */ + . = ALIGN(PAGE_SIZE); + *(.data.page_aligned) + *(.data) + *(.data.rel) + *(.data.rel.*) + CONSTRUCTORS + } :text + + . = ALIGN(SMP_CACHE_BYTES); + .data.read_mostly : { + /* Exception table */ + __start___ex_table = .; + *(.ex_table) + __stop___ex_table = .; + + /* Pre-exception table */ + __start___pre_ex_table = .; + *(.ex_table.pre) + __stop___pre_ex_table = .; + + *(.data.read_mostly) + *(.data.rel.ro) + *(.data.rel.ro.*) + } :text + +#ifdef LOCK_PROFILE + . = ALIGN(32); + __lock_profile_start = .; + .lockprofile.data : { *(.lockprofile.data) } :text + __lock_profile_end = .; +#endif + + . = ALIGN(PAGE_SIZE); /* Init code and data */ + __init_begin = .; + .init.text : { + _sinittext = .; + *(.init.text) + _einittext = .; + } :text + . = ALIGN(PAGE_SIZE); + .init.data : { + *(.init.rodata) + *(.init.rodata.str*) + *(.init.data) + *(.init.data.rel) + *(.init.data.rel.*) + } :text + . = ALIGN(32); + .init.setup : { + __setup_start = .; + *(.init.setup) + __setup_end = .; + } :text + .initcall.init : { + __initcall_start = .; + *(.initcallpresmp.init) + __presmp_initcall_end = .; + *(.initcall1.init) + __initcall_end = .; + } :text + .xsm_initcall.init : { + __xsm_initcall_start = .; + *(.xsm_initcall.init) + __xsm_initcall_end = .; + } :text + . = ALIGN(STACK_SIZE); + __init_end = .; + + .bss : { /* BSS */ + __bss_start = .; + *(.bss.stack_aligned) + . = ALIGN(PAGE_SIZE); + *(.bss.page_aligned) + *(.bss) + . 
= ALIGN(SMP_CACHE_BYTES); + __per_cpu_start = .; + *(.bss.percpu) + . = ALIGN(SMP_CACHE_BYTES); + *(.bss.percpu.read_mostly) + . = ALIGN(SMP_CACHE_BYTES); + __per_cpu_data_end = .; + } :text + _end = . ; + + /* Sections to be discarded */ + /DISCARD/ : { + *(.exit.text) + *(.exit.data) + *(.exitcall.exit) + *(.eh_frame) + } + + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } +} diff --git a/xen/include/asm-arm/asm_defns.h b/xen/include/asm-arm/asm_defns.h new file mode 100644 index 0000000..c59fb6c --- /dev/null +++ b/xen/include/asm-arm/asm_defns.h @@ -0,0 +1,18 @@ +#ifndef __ARM_ASM_DEFNS_H__ +#define __ARM_ASM_DEFNS_H__ + +#ifndef COMPILE_OFFSETS +/* NB. Auto-generated from arch/.../asm-offsets.c */ +#include <asm/asm-offsets.h> +#endif +#include <asm/processor.h> + +#endif /* __ARM_ASM_DEFNS_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
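A note on the DEFINE()/OFFSET() trick in asm-offsets.c above, for readers who have not met it before: compiling that file to assembly makes the compiler emit each "->SYMBOL value" marker as a literal line of asm output, with the constant substituted via the "i" constraint; the build then post-processes those markers into asm/asm-offsets.h, which asm_defns.h pulls into entry.S. The generated header looks roughly like this (the offset values are illustrative and depend on the struct cpu_user_regs layout):

    #define UREGS_sp   52
    #define UREGS_lr   56
    #define UREGS_pc   60
    #define UREGS_cpsr 64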
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 12/25] arm: domain
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Domain creation and destruction, vcpu initialization and destruction, arch specific scheduling functions called by common code. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/domain.c | 253 ++++++++++++++++++++++++++++++++++++++++++ xen/include/asm-arm/domain.h | 43 +++++++ 2 files changed, 296 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/domain.c create mode 100644 xen/include/asm-arm/domain.h diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c new file mode 100644 index 0000000..d706b5f --- /dev/null +++ b/xen/arch/arm/domain.c @@ -0,0 +1,253 @@ +#include <xen/config.h> +#include <xen/init.h> +#include <xen/lib.h> +#include <xen/sched.h> +#include <xen/softirq.h> +#include <xen/wait.h> +#include <xen/errno.h> + +#include <asm/current.h> +#include <asm/regs.h> +#include <asm/p2m.h> +#include <asm/irq.h> + +DEFINE_PER_CPU(struct vcpu *, curr_vcpu); + +static void continue_idle_domain(struct vcpu *v) +{ + reset_stack_and_jump(idle_loop); +} + +static void continue_nonidle_domain(struct vcpu *v) +{ + /* check_wakeup_from_wait(); */ + reset_stack_and_jump(return_from_trap); +} + +void idle_loop(void) +{ + for ( ; ; ) + { + /* TODO + if ( cpu_is_offline(smp_processor_id()) ) + play_dead(); + (*pm_idle)(); + BUG(); + */ + do_tasklet(); + do_softirq(); + } +} + +static void ctxt_switch_from(struct vcpu *p) +{ + +} + +static void ctxt_switch_to(struct vcpu *n) +{ + p2m_load_VTTBR(n->domain); +} + +static void __context_switch(void) +{ + struct cpu_user_regs *stack_regs = guest_cpu_user_regs(); + unsigned int cpu = smp_processor_id(); + struct vcpu *p = per_cpu(curr_vcpu, cpu); + struct vcpu *n = current; + + ASSERT(p != n); + ASSERT(cpumask_empty(n->vcpu_dirty_cpumask)); + + if ( !is_idle_vcpu(p) ) + { + memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES); + ctxt_switch_from(p); + } + + if ( !is_idle_vcpu(n) ) + { + memcpy(stack_regs, &n->arch.user_regs, CTXT_SWITCH_STACK_BYTES); + ctxt_switch_to(n); + } + + per_cpu(curr_vcpu, cpu) = n; + +} + +static void schedule_tail(struct vcpu *v) +{ + if ( is_idle_vcpu(v) ) + continue_idle_domain(v); + else + continue_nonidle_domain(v); +} + +void context_switch(struct vcpu *prev, struct vcpu *next) +{ + unsigned int cpu = smp_processor_id(); + + ASSERT(local_irq_is_enabled()); + + printk("context switch %d:%d%s -> %d:%d%s\n", + prev->domain->domain_id, prev->vcpu_id, is_idle_vcpu(prev) ? " (idle)" : "", + next->domain->domain_id, next->vcpu_id, is_idle_vcpu(next) ? " (idle)" : ""); + + /* TODO + if (prev != next) + update_runstate_area(prev); + */ + + local_irq_disable(); + + set_current(next); + + if ( (per_cpu(curr_vcpu, cpu) == next) || + (is_idle_vcpu(next) && cpu_online(cpu)) ) + { + local_irq_enable(); + } + else + { + __context_switch(); + + /* Re-enable interrupts before restoring state which may fault. 
*/ + local_irq_enable(); + } + + context_saved(prev); + + /* TODO + if (prev != next) + update_runstate_area(next); + */ + + schedule_tail(next); + BUG(); + +} + +void continue_running(struct vcpu *same) +{ + schedule_tail(same); + BUG(); +} + +int __sync_local_execstate(void) +{ + unsigned long flags; + int switch_required; + + local_irq_save(flags); + + switch_required = (this_cpu(curr_vcpu) != current); + + if ( switch_required ) + { + ASSERT(current == idle_vcpu[smp_processor_id()]); + __context_switch(); + } + + local_irq_restore(flags); + + return switch_required; +} + +void sync_local_execstate(void) +{ + (void)__sync_local_execstate(); +} + +void startup_cpu_idle_loop(void) +{ + struct vcpu *v = current; + + ASSERT(is_idle_vcpu(v)); + /* TODO + cpumask_set_cpu(v->processor, v->domain->domain_dirty_cpumask); + cpumask_set_cpu(v->processor, v->vcpu_dirty_cpumask); + */ + + reset_stack_and_jump(idle_loop); +} + +struct domain *alloc_domain_struct(void) +{ + struct domain *d; + BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE); + d = alloc_xenheap_pages(0, 0); + if ( d != NULL ) + clear_page(d); + return d; +} + +void free_domain_struct(struct domain *d) +{ + free_xenheap_page(d); +} + +void dump_pageframe_info(struct domain *d) +{ + +} + +struct vcpu *alloc_vcpu_struct(void) +{ + struct vcpu *v; + BUILD_BUG_ON(sizeof(*v) > PAGE_SIZE); + v = alloc_xenheap_pages(0, 0); + if ( v != NULL ) + clear_page(v); + return v; +} + +void free_vcpu_struct(struct vcpu *v) +{ + free_xenheap_page(v); +} + +int vcpu_initialise(struct vcpu *v) +{ + int rc = 0; + + return rc; +} + +void vcpu_destroy(struct vcpu *v) +{ + +} + +int arch_domain_create(struct domain *d, unsigned int domcr_flags) +{ + int rc; + + d->max_vcpus = 8; + + rc = 0; +fail: + return rc; +} + +void arch_domain_destroy(struct domain *d) +{ + /* p2m_destroy */ + /* domain_vgic_destroy */ +} + +void arch_dump_domain_info(struct domain *d) +{ +} + +void arch_dump_vcpu_info(struct vcpu *v) +{ +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h new file mode 100644 index 0000000..c226bdf --- /dev/null +++ b/xen/include/asm-arm/domain.h @@ -0,0 +1,43 @@ +#ifndef __ASM_DOMAIN_H__ +#define __ASM_DOMAIN_H__ + +#include <xen/config.h> +#include <xen/cache.h> +#include <asm/page.h> +#include <asm/p2m.h> + +struct pending_irq +{ + int irq; + struct irq_desc *desc; /* only set if the irq corresponds to a physical irq */ + uint8_t priority; + struct list_head link; +}; + +struct arch_domain +{ +} __cacheline_aligned; + +struct arch_vcpu +{ + struct cpu_user_regs user_regs; + + uint32_t sctlr; + uint32_t ttbr0, ttbr1, ttbcr; + +} __cacheline_aligned; + +void vcpu_show_execution_state(struct vcpu *); +void vcpu_show_registers(const struct vcpu *); + +#endif /* __ASM_DOMAIN_H__ */ + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
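The curr_vcpu / current split in the code above is the usual lazy context switch: switching to the idle vcpu leaves the outgoing vcpu's register state loaded on the CPU, and it is only written back when something else actually needs the CPU. A minimal sketch of the resulting invariant, assuming the semantics of __context_switch() above:

    /* After context_switch(prev, idle) we can have
     *     current == idle_vcpu  but  this_cpu(curr_vcpu) == prev,
     * so code that needs prev's state saved does: */
    if ( this_cpu(curr_vcpu) != current )   /* stale state on this CPU? */
        __context_switch();                 /* save it out now */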
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 13/25] arm: domain_build
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Functions to build dom0: memory allocation, p2m construction, mappings of the MMIO regions, ATAG setup. Changes in v2: - set elf.dest. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/domain_build.c | 212 ++++++++++++++++++++++++++++++++++++ xen/common/libelf/libelf-dominfo.c | 6 + xen/include/asm-arm/setup.h | 2 + xen/include/xen/libelf.h | 2 +- 4 files changed, 221 insertions(+), 1 deletions(-) create mode 100644 xen/arch/arm/domain_build.c diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c new file mode 100644 index 0000000..b4e0aaa --- /dev/null +++ b/xen/arch/arm/domain_build.c @@ -0,0 +1,212 @@ +#include <xen/config.h> +#include <xen/init.h> +#include <xen/lib.h> +#include <xen/mm.h> +#include <xen/domain_page.h> +#include <xen/sched.h> +#include <xen/libelf.h> +#include <asm/irq.h> + +#include "gic.h" + +static unsigned int __initdata opt_dom0_max_vcpus; +integer_param("dom0_max_vcpus", opt_dom0_max_vcpus); + +struct vcpu *__init alloc_dom0_vcpu0(void) +{ + dom0->vcpu = xmalloc_array(struct vcpu *, opt_dom0_max_vcpus); + if ( !dom0->vcpu ) + { + printk("failed to alloc dom0->vcpu\n"); + return NULL; + } + memset(dom0->vcpu, 0, opt_dom0_max_vcpus * sizeof(*dom0->vcpu)); + dom0->max_vcpus = opt_dom0_max_vcpus; + + return alloc_vcpu(dom0, 0, 0); +} + +extern void guest_mode_entry(void); + +static void copy_from_flash(void *dst, paddr_t flash, unsigned long len) +{ + void *src = (void *)FIXMAP_ADDR(FIXMAP_MISC); + unsigned long offs; + + printk("Copying %#lx bytes from flash %"PRIpaddr" to %p-%p: [", + len, flash, dst, dst+(1<<23)); + for ( offs = 0; offs < len ; offs += PAGE_SIZE ) + { + if ( ( offs % (1<<20) ) == 0 ) + printk("."); + set_fixmap(FIXMAP_MISC, (flash+offs) >> PAGE_SHIFT, DEV_SHARED); + memcpy(dst+offs, src, PAGE_SIZE); + } + printk("]\n"); + + clear_fixmap(FIXMAP_MISC); +} + +static void setup_linux_atag(paddr_t tags, paddr_t ram_s, paddr_t ram_e) +{ + paddr_t ma = gvirt_to_maddr(tags); + void *map = map_domain_page(ma>>PAGE_SHIFT); + void *p = map + (tags & (PAGE_SIZE - 1)); + char cmdline[] = "earlyprintk=xenboot console=ttyAMA1 root=/dev/mmcblk0 debug rw"; + + /* not enough room on this page for all the tags */ + BUG_ON(PAGE_SIZE - (tags & (PAGE_SIZE - 1)) < 8 * sizeof(uint32_t)); + +#define TAG(type, val) *(type*)p = val; p+= sizeof(type) + + /* ATAG_CORE */ + TAG(uint32_t, 2); + TAG(uint32_t, 0x54410001); + + /* ATAG_MEM */ + TAG(uint32_t, 4); + TAG(uint32_t, 0x54410002); + TAG(uint32_t, (ram_e - ram_s) & 0xFFFFFFFF); + TAG(uint32_t, ram_s & 0xFFFFFFFF); + + /* ATAG_CMDLINE */ + TAG(uint32_t, 2 + ((strlen(cmdline) + 4) >> 2)); + TAG(uint32_t, 0x54410009); + memcpy(p, cmdline, strlen(cmdline) + 1); + p += ((strlen(cmdline) + 4) >> 2) << 2; + + /* ATAG_NONE */ + TAG(uint32_t, 0); + TAG(uint32_t, 0); + +#undef TAG + + unmap_domain_page(map); +} + +/* Store kernel in first 8M of flash */ +#define KERNEL_FLASH_ADDRESS 0x00000000UL +#define KERNEL_FLASH_SIZE 0x00800000UL + +int construct_dom0(struct domain *d) +{ + int rc, kernel_order; + void *kernel_img; + + struct vcpu *v = d->vcpu[0]; + struct cpu_user_regs *regs = &v->arch.user_regs; + + struct elf_binary elf; + struct elf_dom_parms parms; + + /* Sanity! */ + BUG_ON(d->domain_id != 0); + BUG_ON(d->vcpu[0] == NULL); + BUG_ON(v->is_initialised); + + printk("*** LOADING DOMAIN 0 ***\n"); + + kernel_order = get_order_from_bytes(KERNEL_FLASH_SIZE); + kernel_img = alloc_xenheap_pages(kernel_order, 0); + if ( kernel_img == NULL ) + panic("Cannot allocate temporary buffer for kernel.\n"); + + copy_from_flash(kernel_img, KERNEL_FLASH_ADDRESS, KERNEL_FLASH_SIZE); + + d->max_pages = ~0U; + + if ( (rc = elf_init(&elf, kernel_img, KERNEL_FLASH_SIZE )) != 0 ) + return rc; + + memset(regs, 0, sizeof(*regs)); +#ifdef VERBOSE + elf_set_verbose(&elf); +#endif + elf_parse_binary(&elf); + if ( (rc = elf_xen_parse(&elf, &parms)) != 0 ) + return rc; + + if ( (rc = p2m_alloc_table(d)) != 0 ) + return rc; + + /* 128M at 3G physical */ + /* TODO size and location according to platform info */ + printk("Populate P2M %#llx->%#llx\n", 0xc0000000ULL, 0xc8000000ULL); + p2m_populate_ram(d, 0xc0000000ULL, 0xc8000000ULL); + + printk("Map CS2 MMIO regions 1:1 in the P2M %#llx->%#llx\n", 0x18000000ULL, 0x1BFFFFFFULL); + map_mmio_regions(d, 0x18000000, 0x1BFFFFFF, 0x18000000); + printk("Map CS3 MMIO regions 1:1 in the P2M %#llx->%#llx\n", 0x1C000000ULL, 0x1FFFFFFFULL); + map_mmio_regions(d, 0x1C000000, 0x1FFFFFFF, 0x1C000000); + printk("Map VGIC MMIO regions 1:1 in the P2M %#llx->%#llx\n", 0x2C008000ULL, 0x2DFFFFFFULL); + map_mmio_regions(d, 0x2C008000, 0x2DFFFFFF, 0x2C008000); + + gicv_setup(d); + + printk("Routing peripheral interrupts to guest\n"); + /* TODO Get from device tree */ + /*gic_route_irq_to_guest(d, 37, "uart0"); -- XXX used by Xen*/ + gic_route_irq_to_guest(d, 38, "uart1"); + gic_route_irq_to_guest(d, 39, "uart2"); + gic_route_irq_to_guest(d, 40, "uart3"); + gic_route_irq_to_guest(d, 41, "mmc0-1"); + gic_route_irq_to_guest(d, 42, "mmc0-2"); + gic_route_irq_to_guest(d, 44, "keyboard"); + gic_route_irq_to_guest(d, 45, "mouse"); + gic_route_irq_to_guest(d, 46, "lcd"); + gic_route_irq_to_guest(d, 47, "eth"); + + /* Enable second stage translation */ + WRITE_CP32(READ_CP32(HCR) | HCR_VM, HCR); + isb(); + + /* The following load uses domain's p2m */ + p2m_load_VTTBR(d); + + printk("Loading ELF image into guest memory\n"); + elf.dest = (void*)(unsigned long)parms.virt_kstart; + elf_load_binary(&elf); + + printk("Free temporary kernel buffer\n"); + free_xenheap_pages(kernel_img, kernel_order); + + setup_linux_atag(0xc0000100ULL, 0xc0000000ULL, 0xc8000000ULL); + + clear_bit(_VPF_down, &v->pause_flags); + + memset(regs, 0, sizeof(*regs)); + + regs->pc = (uint32_t)parms.virt_entry; + + regs->cpsr = PSR_ABT_MASK|PSR_FIQ_MASK|PSR_IRQ_MASK|PSR_MODE_SVC; + +/* FROM LINUX head.S + + * Kernel startup entry point. + * --------------------------- + * + * This is normally called from the decompressor code. The requirements + * are: MMU = off, D-cache = off, I-cache = dont care, r0 = 0, + * r1 = machine nr, r2 = atags or dtb pointer. + *... 
+ */ + + regs->r0 = 0; /* SBZ */ + regs->r1 = 2272; /* Machine NR: Versatile Express */ + regs->r2 = 0xc0000100; /* ATAGS */ + + WRITE_CP32(SCTLR_BASE, SCTLR); + + WRITE_CP32(HCR_AMO|HCR_IMO|HCR_VM, HCR); + isb(); + + local_abort_enable(); + + return 0; +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c index c569a48..523837f 100644 --- a/xen/common/libelf/libelf-dominfo.c +++ b/xen/common/libelf/libelf-dominfo.c @@ -341,6 +341,12 @@ static int elf_xen_note_check(struct elf_binary *elf, return 0; } + if ( elf_uval(elf, elf->ehdr, e_machine) == EM_ARM ) + { + elf_msg(elf, "%s: Not bothering with notes on ARM\n", __FUNCTION__); + return 0; + } + /* Check the contents of the Xen notes or guest string. */ if ( ((strlen(parms->loader) == 0) || strncmp(parms->loader, "generic", 7)) && diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index c27d438..1dc3f97 100644 --- a/xen/include/asm-arm/setup.h +++ b/xen/include/asm-arm/setup.h @@ -5,6 +5,8 @@ void arch_get_xen_caps(xen_capabilities_info_t *info); +int construct_dom0(struct domain *d); + #endif /* * Local variables: diff --git a/xen/include/xen/libelf.h b/xen/include/xen/libelf.h index d77bda6..0ff8b5b 100644 --- a/xen/include/xen/libelf.h +++ b/xen/include/xen/libelf.h @@ -23,7 +23,7 @@ #ifndef __XEN_LIBELF_H__ #define __XEN_LIBELF_H__ -#if defined(__i386__) || defined(__x86_64__) || defined(__ia64__) +#if defined(__i386__) || defined(__x86_64__) || defined(__ia64__) || defined(__arm__) #define XEN_ELF_LITTLE_ENDIAN #else #error define architectural endianness -- 1.7.2.5
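For reference, the tag list that setup_linux_atag() writes at guest physical 0xc0000100 decodes as below (the values follow from the code above and the 62-character command line shown there; the first word of each tag is its size in words, the second the tag ID):

    0x00000002 0x54410001                          ATAG_CORE
    0x00000004 0x54410002 0x08000000 0xc0000000    ATAG_MEM (128MB at 3GB)
    0x00000012 0x54410009 "earlyprintk=xenboot..." ATAG_CMDLINE
    0x00000000 0x00000000                          ATAG_NONE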
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 14/25] arm: driver for CoreLink GIC-400 Generic Interrupt Controller
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> - GICC, GICD and GICH initialization; - interrupt routing, acking and EOI; - interrupt injection into guests; - a maintenance interrupt handler, which takes care of EOIing physical interrupts on behalf of the guest; - a function to remap the virtual cpu interface into the guest address space, where the guest expects the GICC to be. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/domain.c | 2 + xen/arch/arm/gic.c | 473 +++++++++++++++++++++++++++++++++++++++++++++++++ xen/arch/arm/gic.h | 151 ++++++++++++++++ 3 files changed, 626 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/gic.c create mode 100644 xen/arch/arm/gic.h diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index d706b5f..ecbc5b7 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -11,6 +11,8 @@ #include <asm/p2m.h> #include <asm/irq.h> +#include "gic.h" + DEFINE_PER_CPU(struct vcpu *, curr_vcpu); static void continue_idle_domain(struct vcpu *v) diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c new file mode 100644 index 0000000..9643a7d --- /dev/null +++ b/xen/arch/arm/gic.c @@ -0,0 +1,473 @@ +/* + * xen/arch/arm/gic.c + * + * ARM Generic Interrupt Controller support + * + * Tim Deegan <tim@xen.org> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <xen/config.h> +#include <xen/lib.h> +#include <xen/init.h> +#include <xen/mm.h> +#include <xen/irq.h> +#include <xen/sched.h> +#include <xen/errno.h> +#include <xen/softirq.h> +#include <asm/p2m.h> +#include <asm/domain.h> + +#include "gic.h" + +/* Access to the GIC Distributor registers through the fixmap */ +#define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD)) +#define GICC ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICC1) \ + + (GIC_CR_OFFSET & 0xfff))) +#define GICH ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICH) \ + + (GIC_HR_OFFSET & 0xfff))) + +/* Global state */ +static struct { + paddr_t dbase; /* Address of distributor registers */ + paddr_t cbase; /* Address of CPU interface registers */ + paddr_t hbase; /* Address of virtual interface registers */ + unsigned int lines; + unsigned int cpus; + spinlock_t lock; +} gic; + +irq_desc_t irq_desc[NR_IRQS]; +unsigned nr_lrs; + +static unsigned int gic_irq_startup(struct irq_desc *desc) +{ + uint32_t enabler; + int irq = desc->irq; + + /* Enable routing */ + enabler = GICD[GICD_ISENABLER + irq / 32]; + GICD[GICD_ISENABLER + irq / 32] = enabler | (1u << (irq % 32)); + + return 0; +} + +static void gic_irq_shutdown(struct irq_desc *desc) +{ + uint32_t enabler; + int irq = desc->irq; + + /* Disable routing */ + enabler = GICD[GICD_ICENABLER + irq / 32]; + GICD[GICD_ICENABLER + irq / 32] = enabler | (1u << (irq % 32)); +} + +static void gic_irq_enable(struct irq_desc *desc) +{ + +} + +static void gic_irq_disable(struct irq_desc *desc) +{ + +} + +static void gic_irq_ack(struct irq_desc *desc) +{ + /* No ACK -- reading IAR has done this for us */ +} + +static void gic_host_irq_end(struct irq_desc *desc) +{ + int irq = desc->irq; + /* Lower the priority */ + GICC[GICC_EOIR] = irq; + /* Deactivate */ + GICC[GICC_DIR] = irq; +} + +static void gic_guest_irq_end(struct irq_desc *desc) +{ + int irq = desc->irq; + /* Lower the priority of the IRQ */ + GICC[GICC_EOIR] = irq; + /* Deactivation happens in maintenance interrupt / via GICV */ +} + +static void gic_irq_set_affinity(struct irq_desc *desc, const cpumask_t *mask) +{ + BUG(); +} + +/* XXX different for level vs edge */ +static hw_irq_controller gic_host_irq_type = { + .typename = "gic", + .startup = gic_irq_startup, + .shutdown = gic_irq_shutdown, + .enable = gic_irq_enable, + .disable = gic_irq_disable, + .ack = gic_irq_ack, + .end = gic_host_irq_end, + .set_affinity = gic_irq_set_affinity, +}; +static hw_irq_controller gic_guest_irq_type = { + .typename = "gic", + .startup = gic_irq_startup, + .shutdown = gic_irq_shutdown, + .enable = gic_irq_enable, + .disable = gic_irq_disable, + .ack = gic_irq_ack, + .end = gic_guest_irq_end, + .set_affinity = gic_irq_set_affinity, +}; + +/* Program the GIC to route an interrupt */ +static int gic_route_irq(unsigned int irq, bool_t level, + unsigned int cpu_mask, unsigned int priority) +{ + volatile unsigned char *bytereg; + uint32_t cfg, edgebit; + struct irq_desc *desc = irq_to_desc(irq); + unsigned long flags; + + ASSERT(!(cpu_mask & ~0xff)); /* Targets bitmap only supports 8 CPUs */ + ASSERT(priority <= 0xff); /* Only 8 bits of priority */ + ASSERT(irq < gic.lines + 32); /* Can't route interrupts that don't exist */ + + spin_lock_irqsave(&desc->lock, flags); + spin_lock(&gic.lock); + + if ( desc->action != NULL ) + { + spin_unlock(&gic.lock); + spin_unlock_irqrestore(&desc->lock, flags); + return -EBUSY; + } + + desc->handler = &gic_host_irq_type; + + /* Disable interrupt */ + desc->handler->shutdown(desc); + + /* Set edge / level */ + cfg = GICD[GICD_ICFGR + irq / 16]; + edgebit = 2u << (2 * (irq % 16)); + if ( level ) + cfg &= ~edgebit; + else + cfg |= edgebit; + GICD[GICD_ICFGR + irq / 16] = cfg; + + /* Set target CPU mask (RAZ/WI on uniprocessor) */ + bytereg = (unsigned char *) (GICD + GICD_ITARGETSR); + bytereg[irq] = cpu_mask; + + /* Set priority */ + bytereg = (unsigned char *) (GICD + GICD_IPRIORITYR); + bytereg[irq] = priority; + + spin_unlock(&gic.lock); + spin_unlock_irqrestore(&desc->lock, flags); + return 0; +} + +static void __init gic_dist_init(void) +{ + uint32_t type; + uint32_t cpumask = 1 << smp_processor_id(); + int i; + + cpumask |= cpumask << 8; + cpumask |= cpumask << 16; + + /* Disable the distributor */ + GICD[GICD_CTLR] = 0; + + type = GICD[GICD_TYPER]; + gic.lines = 32 * (type & GICD_TYPE_LINES); + gic.cpus = 1 + ((type & GICD_TYPE_CPUS) >> 5); + printk("GIC: %d lines, %d cpu%s%s (IID %8.8x).\n", + gic.lines, gic.cpus, (gic.cpus == 1) ? "" : "s", + (type & GICD_TYPE_SEC) ? ", secure" : "", + GICD[GICD_IIDR]); + + /* Default all global IRQs to level, active low */ + for ( i = 32; i < gic.lines; i += 16 ) + GICD[GICD_ICFGR + i / 16] = 0x0; + + /* Route all global IRQs to this CPU */ + for ( i = 32; i < gic.lines; i += 4 ) + GICD[GICD_ITARGETSR + i / 4] = cpumask; + + /* Default priority for global interrupts */ + for ( i = 32; i < gic.lines; i += 4 ) + GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0; + + /* Disable all global interrupts */ + for ( i = 32; i < gic.lines; i += 32 ) + GICD[GICD_ICENABLER + i / 32] = ~0ul; + + /* Turn on the distributor */ + GICD[GICD_CTLR] = GICD_CTL_ENABLE; +} + +static void __cpuinit gic_cpu_init(void) +{ + int i; + + /* Disable all PPI and enable all SGI */ + GICD[GICD_ICENABLER] = 0xffff0000; /* Disable all PPI */ + GICD[GICD_ISENABLER] = 0x0000ffff; /* Enable all SGI */ + /* Set PPI and SGI priorities */ + for (i = 0; i < 32; i += 4) + GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0; + + /* Local settings: interface controller */ + GICC[GICC_PMR] = 0xff; /* Don't mask by priority */ + GICC[GICC_BPR] = 0; /* Finest granularity of priority */ + GICC[GICC_CTLR] = GICC_CTL_ENABLE|GICC_CTL_EOI; /* Turn on delivery */ +} + +static void __cpuinit gic_hyp_init(void) +{ + uint32_t vtr; + + vtr = GICH[GICH_VTR]; + nr_lrs = (vtr & GICH_VTR_NRLRGS) + 1; + printk("GICH: %d list registers available\n", nr_lrs); + + GICH[GICH_HCR] = GICH_HCR_EN; + GICH[GICH_MISR] = GICH_MISR_EOI; +} + +/* Set up the GIC */ +void gic_init(void) +{ + /* XXX FIXME get this from devicetree */ + gic.dbase = GIC_BASE_ADDRESS + GIC_DR_OFFSET; + gic.cbase = GIC_BASE_ADDRESS + GIC_CR_OFFSET; + gic.hbase = GIC_BASE_ADDRESS + GIC_HR_OFFSET; + set_fixmap(FIXMAP_GICD, gic.dbase >> PAGE_SHIFT, DEV_SHARED); + BUILD_BUG_ON(FIXMAP_ADDR(FIXMAP_GICC1) != + FIXMAP_ADDR(FIXMAP_GICC2)-PAGE_SIZE); + set_fixmap(FIXMAP_GICC1, gic.cbase >> PAGE_SHIFT, DEV_SHARED); + set_fixmap(FIXMAP_GICC2, (gic.cbase >> PAGE_SHIFT) + 1, DEV_SHARED); + set_fixmap(FIXMAP_GICH, gic.hbase >> PAGE_SHIFT, DEV_SHARED); + + /* Global settings: interrupt distributor */ + spin_lock_init(&gic.lock); + spin_lock(&gic.lock); + + gic_dist_init(); + gic_cpu_init(); + gic_hyp_init(); + + spin_unlock(&gic.lock); +} + +void gic_route_irqs(void) +{ + /* XXX should get these from DT */ + /* GIC maintenance */ + gic_route_irq(25, 1, 1u << smp_processor_id(), 0xa0); + /* Hypervisor Timer */ + gic_route_irq(26, 1, 1u << smp_processor_id(), 0xa0); + /* Timer */ + gic_route_irq(30, 1, 1u << smp_processor_id(), 0xa0); + /* UART */ + gic_route_irq(37, 0, 1u << smp_processor_id(), 0xa0); +} + +void __init release_irq(unsigned int irq) +{ + struct irq_desc *desc; + unsigned long flags; + struct irqaction *action; + + desc = irq_to_desc(irq); + + spin_lock_irqsave(&desc->lock,flags); + action = desc->action; + desc->action = NULL; + desc->status |= IRQ_DISABLED; + + spin_lock(&gic.lock); + desc->handler->shutdown(desc); + spin_unlock(&gic.lock); + + spin_unlock_irqrestore(&desc->lock,flags); + + /* Wait to make sure it's not being used on another CPU */ + do { smp_mb(); } while ( desc->status & IRQ_INPROGRESS ); + + if (action && action->free_on_release) + xfree(action); +} + +static int __setup_irq(struct irq_desc *desc, unsigned int irq, + struct irqaction *new) +{ + if ( desc->action != NULL ) + return -EBUSY; + + desc->action = new; + desc->status &= ~IRQ_DISABLED; + dsb(); + + desc->handler->startup(desc); + + return 0; +} + +int __init setup_irq(unsigned int irq, struct irqaction *new) +{ + int rc; + unsigned long flags; + struct irq_desc *desc; + + desc = irq_to_desc(irq); + + spin_lock_irqsave(&desc->lock, flags); + + rc = __setup_irq(desc, irq, new); + + spin_unlock_irqrestore(&desc->lock,flags); + + return rc; +} + +void gic_set_guest_irq(unsigned int virtual_irq, + unsigned int state, unsigned int priority) +{ + BUG_ON(virtual_irq > nr_lrs); + GICH[GICH_LR + virtual_irq] = state | + GICH_LR_MAINTENANCE_IRQ | + ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) | + ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT); +} + +void gic_inject_irq_start(void) +{ + uint32_t hcr; + hcr = READ_CP32(HCR); + WRITE_CP32(hcr | HCR_VI, HCR); + isb(); +} + +void gic_inject_irq_stop(void) +{ + uint32_t hcr; + hcr = READ_CP32(HCR); + if (hcr & HCR_VI) { + WRITE_CP32(hcr & ~HCR_VI, HCR); + isb(); + } +} + +int gic_route_irq_to_guest(struct domain *d, unsigned int irq, + const char * devname) +{ + struct irqaction *action; + struct irq_desc *desc = irq_to_desc(irq); + unsigned long flags; + int retval; + + action = xmalloc(struct irqaction); + if (!action) + return -ENOMEM; + + action->dev_id = d; + action->name = devname; + + spin_lock_irqsave(&desc->lock, flags); + + desc->handler = &gic_guest_irq_type; + desc->status |= IRQ_GUEST; + + retval = __setup_irq(desc, irq, action); + if (retval) { + xfree(action); + goto out; + } + +out: + spin_unlock_irqrestore(&desc->lock, flags); + return retval; +} + +/* Accept an interrupt from the GIC and dispatch its handler */ +void gic_interrupt(struct cpu_user_regs *regs, int is_fiq) +{ + uint32_t intack = GICC[GICC_IAR]; + unsigned int irq = intack & GICC_IA_IRQ; + + if ( irq == 1023 ) + /* Spurious interrupt */ + return; + + do_IRQ(regs, irq, is_fiq); +} + +void gicv_setup(struct domain *d) +{ + /* map the gic virtual cpu interface in the gic cpu interface region of + * the guest */ + printk("mapping GICC at %#"PRIx32" to %#"PRIx32"\n", + GIC_BASE_ADDRESS + GIC_CR_OFFSET, + GIC_BASE_ADDRESS + GIC_VR_OFFSET); + map_mmio_regions(d, GIC_BASE_ADDRESS + GIC_CR_OFFSET, + GIC_BASE_ADDRESS + GIC_CR_OFFSET + (2 * PAGE_SIZE) - 1, + GIC_BASE_ADDRESS + GIC_VR_OFFSET); +} + +static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs) +{ + int i, virq; + uint32_t lr; + uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32); + + for ( i = 0; i < 64; i++ ) { + if ( eisr & ((uint64_t)1 << i) ) { + struct pending_irq *p; + + lr = GICH[GICH_LR + i]; + virq = lr & GICH_LR_VIRTUAL_MASK; + GICH[GICH_LR + i] = 0; + + spin_lock(&current->arch.vgic.lock); + p = irq_to_pending(current, virq); + if ( p->desc != NULL ) { + 
p->desc->status &= ~IRQ_INPROGRESS; + GICC[GICC_DIR] = virq; + } + gic_inject_irq_stop(); + list_del(&p->link); + INIT_LIST_HEAD(&p->link); + cpu_raise_softirq(current->processor, VGIC_SOFTIRQ); + spin_unlock(&current->arch.vgic.lock); + } + } +} + +void __cpuinit init_maintenance_interrupt(void) +{ + request_irq(25, maintenance_interrupt, 0, "irq-maintenance", NULL); +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/gic.h b/xen/arch/arm/gic.h new file mode 100644 index 0000000..63b6648 --- /dev/null +++ b/xen/arch/arm/gic.h @@ -0,0 +1,151 @@ +/* + * xen/arch/arm/gic.h + * + * ARM Generic Interrupt Controller support + * + * Tim Deegan <tim@xen.org> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#ifndef __ARCH_ARM_GIC_H__ +#define __ARCH_ARM_GIC_H__ + +#define GICD_CTLR (0x000/4) +#define GICD_TYPER (0x004/4) +#define GICD_IIDR (0x008/4) +#define GICD_IGROUPR (0x080/4) +#define GICD_IGROUPRN (0x0FC/4) +#define GICD_ISENABLER (0x100/4) +#define GICD_ISENABLERN (0x17C/4) +#define GICD_ICENABLER (0x180/4) +#define GICD_ICENABLERN (0x1fC/4) +#define GICD_ISPENDR (0x200/4) +#define GICD_ISPENDRN (0x27C/4) +#define GICD_ICPENDR (0x280/4) +#define GICD_ICPENDRN (0x2FC/4) +#define GICD_ISACTIVER (0x300/4) +#define GICD_ISACTIVERN (0x37C/4) +#define GICD_ICACTIVER (0x380/4) +#define GICD_ICACTIVERN (0x3FC/4) +#define GICD_IPRIORITYR (0x400/4) +#define GICD_IPRIORITYRN (0x7F8/4) +#define GICD_ITARGETSR (0x800/4) +#define GICD_ITARGETSRN (0xBF8/4) +#define GICD_ICFGR (0xC00/4) +#define GICD_ICFGRN (0xCFC/4) +#define GICD_NSACR (0xE00/4) +#define GICD_NSACRN (0xEFC/4) +#define GICD_ICPIDR2 (0xFE8/4) +#define GICD_SGIR (0xF00/4) +#define GICD_CPENDSGIR (0xF10/4) +#define GICD_CPENDSGIRN (0xF1C/4) +#define GICD_SPENDSGIR (0xF20/4) +#define GICD_SPENDSGIRN (0xF2C/4) +#define GICD_ICPIDR2 (0xFE8/4) + +#define GICC_CTLR (0x0000/4) +#define GICC_PMR (0x0004/4) +#define GICC_BPR (0x0008/4) +#define GICC_IAR (0x000C/4) +#define GICC_EOIR (0x0010/4) +#define GICC_RPR (0x0014/4) +#define GICC_HPPIR (0x0018/4) +#define GICC_APR (0x00D0/4) +#define GICC_NSAPR (0x00E0/4) +#define GICC_DIR (0x1000/4) + +#define GICH_HCR (0x00/4) +#define GICH_VTR (0x04/4) +#define GICH_VMCR (0x08/4) +#define GICH_MISR (0x10/4) +#define GICH_EISR0 (0x20/4) +#define GICH_EISR1 (0x24/4) +#define GICH_ELRSR0 (0x30/4) +#define GICH_ELRSR1 (0x34/4) +#define GICH_APR (0xF0/4) +#define GICH_LR (0x100/4) + +/* Register bits */ +#define GICD_CTL_ENABLE 0x1 + +#define GICD_TYPE_LINES 0x01f +#define GICD_TYPE_CPUS 0x0e0 +#define GICD_TYPE_SEC 0x400 + +#define GICC_CTL_ENABLE 0x1 +#define GICC_CTL_EOI (0x1 << 9) + +#define GICC_IA_IRQ 0x03ff +#define GICC_IA_CPU 0x1c00 + +#define GICH_HCR_EN (1 << 0) +#define GICH_HCR_UIE (1 << 1) +#define GICH_HCR_LRENPIE (1 << 2) +#define GICH_HCR_NPIE (1 << 3) +#define GICH_HCR_VGRP0EIE (1 << 4) +#define GICH_HCR_VGRP0DIE (1 << 5) +#define GICH_HCR_VGRP1EIE (1 << 6) +#define GICH_HCR_VGRP1DIE (1 << 7) +
+#define GICH_MISR_EOI (1 << 0) +#define GICH_MISR_U (1 << 1) +#define GICH_MISR_LRENP (1 << 2) +#define GICH_MISR_NP (1 << 3) +#define GICH_MISR_VGRP0E (1 << 4) +#define GICH_MISR_VGRP0D (1 << 5) +#define GICH_MISR_VGRP1E (1 << 6) +#define GICH_MISR_VGRP1D (1 << 7) + +#define GICH_LR_VIRTUAL_MASK 0x3ff +#define GICH_LR_VIRTUAL_SHIFT 0 +#define GICH_LR_PHYSICAL_MASK 0x3ff +#define GICH_LR_PHYSICAL_SHIFT 10 +#define GICH_LR_STATE_MASK 0x3 +#define GICH_LR_STATE_SHIFT 28 +#define GICH_LR_PRIORITY_SHIFT 23 +#define GICH_LR_MAINTENANCE_IRQ (1<<19) +#define GICH_LR_PENDING (1<<28) +#define GICH_LR_ACTIVE (1<<29) +#define GICH_LR_GRP1 (1<<30) +#define GICH_LR_HW (1<<31) +#define GICH_LR_CPUID_SHIFT 9 +#define GICH_VTR_NRLRGS 0x3f + +extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq); + +extern void gic_route_irqs(void); + +extern void __cpuinit init_maintenance_interrupt(void); +extern void gic_set_guest_irq(unsigned int irq, + unsigned int state, unsigned int priority); +extern void gic_inject_irq_start(void); +extern void gic_inject_irq_stop(void); +extern int gic_route_irq_to_guest(struct domain *d, unsigned int irq, + const char * devname); + +/* Accept an interrupt from the GIC and dispatch its handler */ +extern void gic_interrupt(struct cpu_user_regs *regs, int is_fiq); +/* Bring up the interrupt controller */ +extern void gic_init(void); +/* setup the gic virtual interface for a guest */ +extern void gicv_setup(struct domain *d); +#endif + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
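To make the list-register layout concrete, this is the word gic_set_guest_irq() builds for, say, virtual IRQ 38 at priority 0xa0 in the pending state, using the GICH_LR_* definitions above:

    uint32_t lr = GICH_LR_PENDING                         /* 1 << 28    */
                | GICH_LR_MAINTENANCE_IRQ                 /* 1 << 19    */
                | ((0xa0 >> 3) << GICH_LR_PRIORITY_SHIFT) /* 0x14 << 23 */
                | ((38 & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
    /* == 0x1a080026 */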
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 15/25] arm: mmio handlers
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Basic infrastructure to emulate mmio reads and writes. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/io.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++ xen/arch/arm/io.h | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 103 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/io.c create mode 100644 xen/arch/arm/io.h diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c new file mode 100644 index 0000000..8789705 --- /dev/null +++ b/xen/arch/arm/io.c @@ -0,0 +1,50 @@ +/* + * xen/arch/arm/io.c + * + * ARM I/O handlers + * + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <xen/config.h> +#include <xen/lib.h> +#include <asm/current.h> + +#include "io.h" + +static const struct mmio_handler *const mmio_handlers[] = +{ +}; +#define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers) + +int handle_mmio(mmio_info_t *info) +{ + struct vcpu *v = current; + int i; + + for ( i = 0; i < MMIO_HANDLER_NR; i++ ) + if ( mmio_handlers[i]->check_handler(v, info->gpa) ) + return info->dabt.write ? + mmio_handlers[i]->write_handler(v, info) : + mmio_handlers[i]->read_handler(v, info); + + return 0; +} +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h new file mode 100644 index 0000000..d7847e3 --- /dev/null +++ b/xen/arch/arm/io.h @@ -0,0 +1,53 @@ +/* + * xen/arch/arm/io.h + * + * ARM I/O handlers + * + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#ifndef __ARCH_ARM_IO_H__ +#define __ARCH_ARM_IO_H__ + +#include <xen/lib.h> +#include <asm/processor.h> + +typedef struct +{ + struct hsr_dabt dabt; + uint32_t gva; + paddr_t gpa; +} mmio_info_t; + +typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info); +typedef int (*mmio_write_t)(struct vcpu *v, mmio_info_t *info); +typedef int (*mmio_check_t)(struct vcpu *v, paddr_t addr); + +struct mmio_handler { + mmio_check_t check_handler; + mmio_read_t read_handler; + mmio_write_t write_handler; +}; + +extern int handle_mmio(mmio_info_t *info); + +#endif + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
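The mmio_handlers[] table is empty at this point in the series; the vgic patch later on supplies the first real entry. As a sketch of what a registered emulator would look like -- the handler names and the address range here are invented for illustration:

    /* Hypothetical device emulator; range and names are made up. */
    static int demo_mmio_check(struct vcpu *v, paddr_t addr)
    {
        return addr >= 0x1c090000 && addr < 0x1c091000;
    }
    static int demo_mmio_read(struct vcpu *v, mmio_info_t *info)
    {
        return 1;  /* claim the access; a real device would decode info->dabt */
    }
    static int demo_mmio_write(struct vcpu *v, mmio_info_t *info)
    {
        return 1;
    }
    static const struct mmio_handler demo_mmio_handler = {
        .check_handler = demo_mmio_check,
        .read_handler  = demo_mmio_read,
        .write_handler = demo_mmio_write,
    };
    /* ...listed by adding &demo_mmio_handler to mmio_handlers[] above. */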
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 16/25] arm: irq
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> A simple do_IRQ and request_irq implementation for ARM. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/irq.c | 179 +++++++++++++++++++++++++++++++++++++++++++ xen/include/asm-arm/irq.h | 30 +++++++ xen/include/asm-arm/setup.h | 2 + xen/include/xen/irq.h | 13 +++ 4 files changed, 224 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/irq.c create mode 100644 xen/include/asm-arm/irq.h diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c new file mode 100644 index 0000000..5663762 --- /dev/null +++ b/xen/arch/arm/irq.c @@ -0,0 +1,179 @@ +/* + * xen/arch/arm/irq.c + * + * ARM Interrupt support + * + * Ian Campbell <ian.campbell@citrix.com> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <xen/config.h> +#include <xen/lib.h> +#include <xen/spinlock.h> +#include <xen/irq.h> +#include <xen/init.h> +#include <xen/errno.h> +#include <xen/sched.h> + +#include "gic.h" + +static void enable_none(struct irq_desc *irq) { } +static unsigned int startup_none(struct irq_desc *irq) { return 0; } +static void disable_none(struct irq_desc *irq) { } +static void ack_none(struct irq_desc *irq) +{ + printk("unexpected IRQ trap at irq %02x\n", irq->irq); +} + +#define shutdown_none disable_none +#define end_none enable_none + +hw_irq_controller no_irq_type = { + .typename = "none", + .startup = startup_none, + .shutdown = shutdown_none, + .enable = enable_none, + .disable = disable_none, + .ack = ack_none, + .end = end_none +}; + +int __init arch_init_one_irq_desc(struct irq_desc *desc) +{ + return 0; +} + + +static int __init init_irq_data(void) +{ + int irq; + + for (irq = 0; irq < NR_IRQS; irq++) { + struct irq_desc *desc = irq_to_desc(irq); + init_one_irq_desc(desc); + desc->irq = irq; + desc->action = NULL; + } + return 0; +} + +void __init init_IRQ(void) +{ + BUG_ON(init_irq_data() < 0); +} + +int __init request_irq(unsigned int irq, + void (*handler)(int, void *, struct cpu_user_regs *), + unsigned long irqflags, const char * devname, void *dev_id) +{ + struct irqaction *action; + int retval; + + /* + * Sanity-check: shared interrupts must pass in a real dev-ID, + * otherwise we'll have trouble later trying to figure out + * which interrupt is which (messes up the interrupt freeing + * logic etc). 
+ */ + if (irq >= nr_irqs) + return -EINVAL; + if (!handler) + return -EINVAL; + + action = xmalloc(struct irqaction); + if (!action) + return -ENOMEM; + + action->handler = handler; + action->name = devname; + action->dev_id = dev_id; + action->free_on_release = 1; + + retval = setup_irq(irq, action); + if (retval) + xfree(action); + + return retval; +} + +/* Dispatch an interrupt */ +void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq) +{ + struct irq_desc *desc = irq_to_desc(irq); + struct irqaction *action = desc->action; + + /* TODO: perfc_incr(irqs); */ + + /* TODO: this_cpu(irq_count)++; */ + + irq_enter(); + + spin_lock(&desc->lock); + desc->handler->ack(desc); + + if ( action == NULL ) + { + printk("Unknown %s %#3.3x\n", + is_fiq ? "FIQ" : "IRQ", irq); + goto out; + } + + if ( desc->status & IRQ_GUEST ) + { + struct domain *d = action->dev_id; + + desc->handler->end(desc); + + desc->status |= IRQ_INPROGRESS; + + /* XXX: inject irq into the guest */ + goto out_no_end; + } + + desc->status |= IRQ_PENDING; + + /* + * Since we set PENDING, if another processor is handling a different + * instance of this same irq, the other processor will take care of it. + */ + if ( desc->status & (IRQ_DISABLED | IRQ_INPROGRESS) ) + goto out; + + desc->status |= IRQ_INPROGRESS; + + action = desc->action; + while ( desc->status & IRQ_PENDING ) + { + desc->status &= ~IRQ_PENDING; + spin_unlock_irq(&desc->lock); + action->handler(irq, action->dev_id, regs); + spin_lock_irq(&desc->lock); + } + + desc->status &= ~IRQ_INPROGRESS; + +out: + desc->handler->end(desc); +out_no_end: + spin_unlock(&desc->lock); + irq_exit(); +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h new file mode 100644 index 0000000..8e65a2e --- /dev/null +++ b/xen/include/asm-arm/irq.h @@ -0,0 +1,30 @@ +#ifndef _ASM_HW_IRQ_H +#define _ASM_HW_IRQ_H + +#include <xen/config.h> + +#define NR_VECTORS 256 /* XXX */ + +typedef struct { + DECLARE_BITMAP(_bits,NR_VECTORS); +} vmask_t; + +struct arch_pirq +{ +}; + +struct irq_cfg { +#define arch_irq_desc irq_cfg +}; + +void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq); + +#endif /* _ASM_HW_IRQ_H */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index 1dc3f97..2041f06 100644 --- a/xen/include/asm-arm/setup.h +++ b/xen/include/asm-arm/setup.h @@ -7,6 +7,8 @@ void arch_get_xen_caps(xen_capabilities_info_t *info); int construct_dom0(struct domain *d); +void init_IRQ(void); + #endif /* * Local variables: diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h index a2a90e1..5a711cc 100644 --- a/xen/include/xen/irq.h +++ b/xen/include/xen/irq.h @@ -107,6 +107,19 @@ extern irq_desc_t irq_desc[NR_VECTORS]; #define request_irq(irq, handler, irqflags, devname, devid) \ request_irq_vector(irq_to_vector(irq), handler, irqflags, devname, devid) + +#elif defined(__arm__) + +#define NR_IRQS 1024 +#define nr_irqs NR_IRQS +extern irq_desc_t irq_desc[NR_IRQS]; + +extern int setup_irq(unsigned int irq, struct irqaction *); +extern void release_irq(unsigned int irq); +extern int request_irq(unsigned int irq, + void (*handler)(int, void *, struct cpu_user_regs *), + unsigned long irqflags, const char * devname, void *dev_id); + #else extern int setup_irq(unsigned int irq, struct irqaction *); 
extern void release_irq(unsigned int irq); -- 1.7.2.5
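[Editorial sketch, not part of the series: a usage example for the request_irq() added above. A driver asks for an IRQ once at init time and its handler is then invoked from do_IRQ() with the trapping register state. The IRQ number and names below are invented for illustration.]

    /* Hypothetical consumer of request_irq() (illustration only). */
    static void example_dev_interrupt(int irq, void *dev_id,
                                      struct cpu_user_regs *regs)
    {
        /* Acknowledge and service the device here. */
    }

    static int __init example_dev_init(void)
    {
        /* 37 is an arbitrary interrupt number for this sketch. */
        return request_irq(37, example_dev_interrupt, 0,
                           "example-dev", NULL);
    }

If setup_irq() fails, request_irq() frees the action and propagates the error, so the caller only needs to check the return value.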
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Functions to set up pagetables, handle the p2m, map and unmap domain pages, copy data to/from guest addresses. The implementation is based on the LPAE extension for ARMv7 and makes use of the two-level translation mechanism. Changes in v4: - fix build for -Wunused-values; Changes in v3: - rename copy_to_user and copy_from_user to raw_copy_to_guest and raw_copy_from_guest. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/domain.c | 4 + xen/arch/arm/guestcopy.c | 81 +++++++++ xen/arch/arm/mm.c | 321 ++++++++++++++++++++++++++++++++++ xen/arch/arm/p2m.c | 214 +++++++++++++++++++++++ xen/include/asm-arm/domain.h | 2 + xen/include/asm-arm/guest_access.h | 131 ++++++++++++++ xen/include/asm-arm/mm.h | 315 +++++++++++++++++++++++++++++++++ xen/include/asm-arm/p2m.h | 88 ++++++++++ xen/include/asm-arm/page.h | 335 ++++++++++++++++++++++++++++++++++++ 9 files changed, 1491 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/guestcopy.c create mode 100644 xen/arch/arm/mm.c create mode 100644 xen/arch/arm/p2m.c create mode 100644 xen/include/asm-arm/guest_access.h create mode 100644 xen/include/asm-arm/mm.h create mode 100644 xen/include/asm-arm/p2m.h create mode 100644 xen/include/asm-arm/page.h diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index ecbc5b7..0844b37 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -224,6 +224,10 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags) { int rc; + rc = -ENOMEM; + if ( (rc = p2m_init(d)) != 0 ) + goto fail; + d->max_vcpus = 8; rc = 0; diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c new file mode 100644 index 0000000..d9eb7ac --- /dev/null +++ b/xen/arch/arm/guestcopy.c @@ -0,0 +1,81 @@ +#include <xen/config.h> +#include <xen/lib.h> +#include <xen/domain_page.h> + +#include <asm/mm.h> +#include <asm/guest_access.h> + +unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len) +{ + /* XXX needs to handle faults */ + unsigned offset = ((unsigned long)to & ~PAGE_MASK); + + while ( len ) + { + paddr_t g = gvirt_to_maddr((uint32_t) to); + void *p = map_domain_page(g>>PAGE_SHIFT); + unsigned size = min(len, (unsigned)PAGE_SIZE - offset); + + p += offset; + memcpy(p, from, size); + + unmap_domain_page(p - offset); + len -= size; + from += size; + to += size; + offset = 0; + } + + return 0; +} + +unsigned long raw_clear_guest(void *to, unsigned len) +{ + /* XXX needs to handle faults */ + unsigned offset = ((unsigned long)to & ~PAGE_MASK); + + while ( len ) + { + paddr_t g = gvirt_to_maddr((uint32_t) to); + void *p = map_domain_page(g>>PAGE_SHIFT); + unsigned size = min(len, (unsigned)PAGE_SIZE - offset); + + p += offset; + memset(p, 0x00, size); + + unmap_domain_page(p - offset); + len -= size; + to += size; + offset = 0; + } + + return 0; +} + +unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len) +{ + while ( len ) + { + paddr_t g = gvirt_to_maddr((uint32_t) from & PAGE_MASK); + void *p = map_domain_page(g>>PAGE_SHIFT); + unsigned size = min(len, (unsigned)(PAGE_SIZE - ((unsigned)from & (~PAGE_MASK)))); + + p += ((unsigned long)from & (~PAGE_MASK)); + + memcpy(to, p, size); + + unmap_domain_page(p); + len -= size; + from += size; + to += size; + } + return 0; +} +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * 
indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c new file mode 100644 index 0000000..613d084 --- /dev/null +++ b/xen/arch/arm/mm.c @@ -0,0 +1,321 @@ +/* + * xen/arch/arm/mm.c + * + * MMU code for an ARMv7-A with virt extensions. + * + * Tim Deegan <tim@xen.org> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <xen/config.h> +#include <xen/compile.h> +#include <xen/types.h> +#include <xen/init.h> +#include <xen/mm.h> +#include <xen/preempt.h> +#include <asm/page.h> +#include <asm/current.h> + +struct domain *dom_xen, *dom_io; + +/* Static start-of-day pagetables that we use before the allocators are up */ +lpae_t xen_pgtable[LPAE_ENTRIES] __attribute__((__aligned__(4096))); +lpae_t xen_second[LPAE_ENTRIES*4] __attribute__((__aligned__(4096*4))); +static lpae_t xen_fixmap[LPAE_ENTRIES] __attribute__((__aligned__(4096))); +static lpae_t xen_xenmap[LPAE_ENTRIES] __attribute__((__aligned__(4096))); + +/* Limits of the Xen heap */ +unsigned long xenheap_mfn_start, xenheap_mfn_end; +unsigned long xenheap_virt_end; + +unsigned long frametable_virt_end; + +/* Map a 4k page in a fixmap entry */ +void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes) +{ + lpae_t pte = mfn_to_xen_entry(mfn); + pte.pt.table = 1; /* 4k mappings always have this bit set */ + pte.pt.ai = attributes; + write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte); + flush_xen_data_tlb_va(FIXMAP_ADDR(map)); +} + +/* Remove a mapping from a fixmap entry */ +void clear_fixmap(unsigned map) +{ + lpae_t pte = {0}; + write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte); + flush_xen_data_tlb_va(FIXMAP_ADDR(map)); +} + +/* Map a page of domheap memory */ +void *map_domain_page(unsigned long mfn) +{ + unsigned long flags; + lpae_t *map = xen_second + second_linear_offset(DOMHEAP_VIRT_START); + unsigned long slot_mfn = mfn & ~LPAE_ENTRY_MASK; + uint32_t va; + lpae_t pte; + int i, slot; + + local_irq_save(flags); + + /* The map is laid out as an open-addressed hash table where each + * entry is a 2MB superpage pte. We use the available bits of each + * PTE as a reference count; when the refcount is zero the slot can + * be reused. */ + for ( slot = (slot_mfn >> LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0; + i < DOMHEAP_ENTRIES; + slot = (slot + 1) % DOMHEAP_ENTRIES, i++ ) + { + if ( map[slot].pt.avail == 0 ) + { + /* Commandeer this 2MB slot */ + pte = mfn_to_xen_entry(slot_mfn); + pte.pt.avail = 1; + write_pte(map + slot, pte); + break; + } + else if ( map[slot].pt.avail < 0xf && map[slot].pt.base == slot_mfn ) + { + /* This slot already points to the right place; reuse it */ + map[slot].pt.avail++; + break; + } + } + /* If the map fills up, the callers have misbehaved. */ + BUG_ON(i == DOMHEAP_ENTRIES); + +#ifndef NDEBUG + /* Searching the hash could get slow if the map starts filling up. 
+ * Cross that bridge when we come to it */ + { + static int max_tries = 32; + if ( i >= max_tries ) + { + dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i); + max_tries *= 2; + } + } +#endif + + local_irq_restore(flags); + + va = (DOMHEAP_VIRT_START + + (slot << SECOND_SHIFT) + + ((mfn & LPAE_ENTRY_MASK) << THIRD_SHIFT)); + + /* + * We may not have flushed this specific subpage at map time, + * since we only flush the 4k page not the superpage + */ + flush_xen_data_tlb_va(va); + + return (void *)va; +} + +/* Release a mapping taken with map_domain_page() */ +void unmap_domain_page(const void *va) +{ + unsigned long flags; + lpae_t *map = xen_second + second_linear_offset(DOMHEAP_VIRT_START); + int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT; + + local_irq_save(flags); + + ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES); + ASSERT(map[slot].pt.avail != 0); + + map[slot].pt.avail--; + + local_irq_restore(flags); +} + + +/* Boot-time pagetable setup. + * Changes here may need matching changes in head.S */ +void __init setup_pagetables(unsigned long boot_phys_offset) +{ + paddr_t xen_paddr, phys_offset; + unsigned long dest_va; + lpae_t pte, *p; + int i; + + if ( boot_phys_offset != 0 ) + { + /* Remove the old identity mapping of the boot paddr */ + pte.bits = 0; + dest_va = (unsigned long)_start + boot_phys_offset; + write_pte(xen_second + second_linear_offset(dest_va), pte); + } + + xen_paddr = XEN_PADDR; + + /* Map the destination in the empty L2 above the fixmap */ + dest_va = FIXMAP_ADDR(0) + (1u << SECOND_SHIFT); + pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT); + write_pte(xen_second + second_table_offset(dest_va), pte); + + /* Calculate virt-to-phys offset for the new location */ + phys_offset = xen_paddr - (unsigned long) _start; + + /* Copy */ + memcpy((void *) dest_va, _start, _end - _start); + + /* Beware! Any state we modify between now and the PT switch may be + * discarded when we switch over to the copy. */ + + /* Update the copy of xen_pgtable to use the new paddrs */ + p = (void *) xen_pgtable + dest_va - (unsigned long) _start; + for ( i = 0; i < 4; i++) + p[i].pt.base += (phys_offset - boot_phys_offset) >> PAGE_SHIFT; + p = (void *) xen_second + dest_va - (unsigned long) _start; + for ( i = 0; i < 4 * LPAE_ENTRIES; i++) + if ( p[i].pt.valid ) + p[i].pt.base += (phys_offset - boot_phys_offset) >> PAGE_SHIFT; + + /* Change pagetables to the copy in the relocated Xen */ + asm volatile ( + STORE_CP64(0, HTTBR) /* Change translation base */ + "dsb;" /* Ensure visibility of HTTBR update */ + STORE_CP32(0, TLBIALLH) /* Flush hypervisor TLB */ + STORE_CP32(0, BPIALL) /* Flush branch predictor */ + "dsb;" /* Ensure completion of TLB+BP flush */ + "isb;" + : : "r" ((unsigned long) xen_pgtable + phys_offset) : "memory"); + + /* Undo the temporary map */ + pte.bits = 0; + write_pte(xen_second + second_table_offset(dest_va), pte); + /* + * Have removed a mapping previously used for .text. Flush everything + * for safety. + */ + asm volatile ( + "dsb;" /* Ensure visibility of PTE write */ + STORE_CP32(0, TLBIALLH) /* Flush hypervisor TLB */ + STORE_CP32(0, BPIALL) /* Flush branch predictor */ + "dsb;" /* Ensure completion of TLB+BP flush */ + "isb;" + : : "r" (i /*dummy*/) : "memory"); + + /* Link in the fixmap pagetable */ + pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset) + >> PAGE_SHIFT); + pte.pt.table = 1; + write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte); + /* + * No flush required here. 
Individual flushes are done in + * set_fixmap as entries are used. + */ + + /* Break up the Xen mapping into 4k pages and protect them separately. */ + for ( i = 0; i < LPAE_ENTRIES; i++ ) + { + unsigned long mfn = paddr_to_pfn(xen_paddr) + i; + unsigned long va = XEN_VIRT_START + (i << PAGE_SHIFT); + if ( !is_kernel(va) ) + break; + pte = mfn_to_xen_entry(mfn); + pte.pt.table = 1; /* 4k mappings always have this bit set */ + if ( is_kernel_text(va) || is_kernel_inittext(va) ) + { + pte.pt.xn = 0; + pte.pt.ro = 1; + } + if ( is_kernel_rodata(va) ) + pte.pt.ro = 1; + write_pte(xen_xenmap + i, pte); + /* No flush required here as page table is not hooked in yet. */ + } + pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset) + >> PAGE_SHIFT); + pte.pt.table = 1; + write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte); + /* Have changed a mapping used for .text. Flush everything for safety. */ + asm volatile ( + "dsb;" /* Ensure visibility of PTE write */ + STORE_CP32(0, TLBIALLH) /* Flush hypervisor TLB */ + STORE_CP32(0, BPIALL) /* Flush branch predictor */ + "dsb;" /* Ensure completion of TLB+BP flush */ + "isb;" + : : "r" (i /*dummy*/) : "memory"); + + /* From now on, no mapping may be both writable and executable. */ + WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR); +} + +/* Create Xen's mappings of memory. + * Base and virt must be 32MB aligned and size a multiple of 32MB. */ +static void __init create_mappings(unsigned long virt, + unsigned long base_mfn, + unsigned long nr_mfns) +{ + unsigned long i, count; + lpae_t pte, *p; + + ASSERT(!((virt >> PAGE_SHIFT) % (16 * LPAE_ENTRIES))); + ASSERT(!(base_mfn % (16 * LPAE_ENTRIES))); + ASSERT(!(nr_mfns % (16 * LPAE_ENTRIES))); + + count = nr_mfns / LPAE_ENTRIES; + p = xen_second + second_linear_offset(virt); + pte = mfn_to_xen_entry(base_mfn); + pte.pt.hint = 1; /* These maps are in 16-entry contiguous chunks. */ + for ( i = 0; i < count; i++ ) + { + write_pte(p + i, pte); + pte.pt.base += 1 << LPAE_SHIFT; + } + flush_xen_data_tlb(); +} + +/* Set up the xenheap: up to 1GB of contiguous, always-mapped memory. */ +void __init setup_xenheap_mappings(unsigned long base_mfn, + unsigned long nr_mfns) +{ + create_mappings(XENHEAP_VIRT_START, base_mfn, nr_mfns); + + /* Record where the xenheap is, for translation routines. 
*/ + xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE; + xenheap_mfn_start = base_mfn; + xenheap_mfn_end = base_mfn + nr_mfns; +} + +/* Map a frame table to cover physical addresses ps through pe */ +void __init setup_frametable_mappings(paddr_t ps, paddr_t pe) +{ + unsigned long nr_pages = (pe - ps) >> PAGE_SHIFT; + unsigned long frametable_size = nr_pages * sizeof(struct page_info); + unsigned long base_mfn; + + /* Round up to 32M boundary */ + frametable_size = (frametable_size + 0x1ffffff) & ~0x1ffffff; + base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 5); + create_mappings(FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT); + + memset(&frame_table[0], 0, nr_pages * sizeof(struct page_info)); + memset(&frame_table[nr_pages], -1, + frametable_size - (nr_pages * sizeof(struct page_info))); + + frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info)); +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c new file mode 100644 index 0000000..a1d026d --- /dev/null +++ b/xen/arch/arm/p2m.c @@ -0,0 +1,214 @@ +#include <xen/config.h> +#include <xen/sched.h> +#include <xen/lib.h> +#include <xen/errno.h> +#include <xen/domain_page.h> + +void p2m_load_VTTBR(struct domain *d) +{ + struct p2m_domain *p2m = &d->arch.p2m; + paddr_t maddr = page_to_maddr(p2m->first_level); + uint64_t vttbr = maddr; + + vttbr |= ((uint64_t)p2m->vmid&0xff)<<48; + + printk("VTTBR dom%d = %"PRIx64"\n", d->domain_id, vttbr); + + WRITE_CP64(vttbr, VTTBR); + isb(); /* Ensure update is visible */ +} + +static int p2m_create_entry(struct domain *d, + lpae_t *entry) +{ + struct p2m_domain *p2m = &d->arch.p2m; + struct page_info *page; + void *p; + lpae_t pte; + + BUG_ON(entry->p2m.valid); + + page = alloc_domheap_page(d, 0); + if ( page == NULL ) + return -ENOMEM; + + page_list_add(page, &p2m->pages); + + p = __map_domain_page(page); + clear_page(p); + unmap_domain_page(p); + + pte = mfn_to_p2m_entry(page_to_mfn(page)); + + write_pte(entry, pte); + + return 0; +} + +static int create_p2m_entries(struct domain *d, + int alloc, + paddr_t start_gpaddr, + paddr_t end_gpaddr, + paddr_t maddr) +{ + int rc; + struct p2m_domain *p2m = &d->arch.p2m; + lpae_t *first = NULL, *second = NULL, *third = NULL; + paddr_t addr; + unsigned long cur_first_offset = ~0, cur_second_offset = ~0; + + /* XXX Don't actually handle 40 bit guest physical addresses */ + BUG_ON(start_gpaddr & 0x8000000000ULL); + BUG_ON(end_gpaddr & 0x8000000000ULL); + + first = __map_domain_page(p2m->first_level); + + for(addr = start_gpaddr; addr < end_gpaddr; addr += PAGE_SIZE) + { + if ( !first[first_table_offset(addr)].p2m.valid ) + { + rc = p2m_create_entry(d, &first[first_table_offset(addr)]); + if ( rc < 0 ) { + printk("p2m_populate_ram: L1 failed\n"); + goto out; + } + } + + BUG_ON(!first[first_table_offset(addr)].p2m.valid); + + if ( cur_first_offset != first_table_offset(addr) ) + { + if (second) unmap_domain_page(second); + second = map_domain_page(first[first_table_offset(addr)].p2m.base); + cur_first_offset = first_table_offset(addr); + } + /* else: second already valid */ + + if ( !second[second_table_offset(addr)].p2m.valid ) + { + rc = p2m_create_entry(d, &second[second_table_offset(addr)]); + if ( rc < 0 ) { + printk("p2m_populate_ram: L2 failed\n"); + goto out; + } + } + + BUG_ON(!second[second_table_offset(addr)].p2m.valid); + + if ( cur_second_offset != 
second_table_offset(addr) ) + { + /* map third level */ + if (third) unmap_domain_page(third); + third = map_domain_page(second[second_table_offset(addr)].p2m.base); + cur_second_offset = second_table_offset(addr); + } + /* else: third already valid */ + + BUG_ON(third[third_table_offset(addr)].p2m.valid); + + /* Allocate a new RAM page and attach */ + if (alloc) + { + struct page_info *page; + lpae_t pte; + + rc = -ENOMEM; + page = alloc_domheap_page(d, 0); + if ( page == NULL ) { + printk("p2m_populate_ram: failed to allocate page\n"); + goto out; + } + + pte = mfn_to_p2m_entry(page_to_mfn(page)); + + write_pte(&third[third_table_offset(addr)], pte); + } else { + lpae_t pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT); + write_pte(&third[third_table_offset(addr)], pte); + maddr += PAGE_SIZE; + } + } + + rc = 0; + +out: + spin_lock(&p2m->lock); + + if (third) unmap_domain_page(third); + if (second) unmap_domain_page(second); + if (first) unmap_domain_page(first); + + spin_unlock(&p2m->lock); + + return rc; +} + +int p2m_populate_ram(struct domain *d, + paddr_t start, + paddr_t end) +{ + return create_p2m_entries(d, 1, start, end, 0); +} + +int map_mmio_regions(struct domain *d, + paddr_t start_gaddr, + paddr_t end_gaddr, + paddr_t maddr) +{ + return create_p2m_entries(d, 0, start_gaddr, end_gaddr, maddr); +} + +int p2m_alloc_table(struct domain *d) +{ + struct p2m_domain *p2m = &d->arch.p2m; + struct page_info *page; + void *p; + + /* First level P2M is 2 consecutive pages */ + page = alloc_domheap_pages(d, 1, 0); + if ( page == NULL ) + return -ENOMEM; + + spin_lock(&p2m->lock); + + page_list_add(page, &p2m->pages); + + /* Clear both first level pages */ + p = __map_domain_page(page); + clear_page(p); + unmap_domain_page(p); + + p = __map_domain_page(page + 1); + clear_page(p); + unmap_domain_page(p); + + p2m->first_level = page; + + spin_unlock(&p2m->lock); + + return 0; +} + +int p2m_init(struct domain *d) +{ + struct p2m_domain *p2m = &d->arch.p2m; + + spin_lock_init(&p2m->lock); + INIT_PAGE_LIST_HEAD(&p2m->pages); + + /* XXX allocate properly */ + /* Zero is reserved */ + p2m->vmid = d->domain_id + 1; + + p2m->first_level = NULL; + + return 0; +} +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index c226bdf..2226a24 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -16,6 +16,8 @@ struct pending_irq struct arch_domain { + struct p2m_domain p2m; + } __cacheline_aligned; struct arch_vcpu diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h new file mode 100644 index 0000000..0fceae6 --- /dev/null +++ b/xen/include/asm-arm/guest_access.h @@ -0,0 +1,131 @@ +#ifndef __ASM_ARM_GUEST_ACCESS_H__ +#define __ASM_ARM_GUEST_ACCESS_H__ + +#include <xen/guest_access.h> +#include <xen/errno.h> + +/* Guests have their own complete address space */ +#define access_ok(addr,size) (1) + +#define array_access_ok(addr,count,size) \ + (likely(count < (~0UL/size)) && access_ok(addr,count*size)) + +unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len); +unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len); +unsigned long raw_clear_guest(void *to, unsigned len); + +#define __raw_copy_to_guest raw_copy_to_guest +#define __raw_copy_from_guest raw_copy_from_guest +#define __raw_clear_guest raw_clear_guest + +/* Remainder copied from x86 -- could be common? 
*/ + +/* Is the guest handle a NULL reference? */ +#define guest_handle_is_null(hnd) ((hnd).p == NULL) + +/* Offset the given guest handle into the array it refers to. */ +#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr)) +#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr)) + +/* Cast a guest handle to the specified type of handle. */ +#define guest_handle_cast(hnd, type) ({ \ + type *_x = (hnd).p; \ + (XEN_GUEST_HANDLE(type)) { _x }; \ +}) + +#define guest_handle_from_ptr(ptr, type) \ + ((XEN_GUEST_HANDLE(type)) { (type *)ptr }) +#define const_guest_handle_from_ptr(ptr, type) \ + ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr }) + +/* + * Copy an array of objects to guest context via a guest handle, + * specifying an offset into the guest array. + */ +#define copy_to_guest_offset(hnd, off, ptr, nr) ({ \ + const typeof(*(ptr)) *_s = (ptr); \ + char (*_d)[sizeof(*_s)] = (void *)(hnd).p; \ + ((void)((hnd).p == (ptr))); \ + raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr)); \ +}) + +/* + * Clear an array of objects in guest context via a guest handle, + * specifying an offset into the guest array. + */ +#define clear_guest_offset(hnd, off, ptr, nr) ({ \ + char (*_d)[sizeof(*(ptr))] = (void *)(hnd).p; \ + raw_clear_guest(_d+(off), nr); \ +}) + +/* + * Copy an array of objects from guest context via a guest handle, + * specifying an offset into the guest array. + */ +#define copy_from_guest_offset(ptr, hnd, off, nr) ({ \ + const typeof(*(ptr)) *_s = (hnd).p; \ + typeof(*(ptr)) *_d = (ptr); \ + raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\ +}) + +/* Copy sub-field of a structure to guest context via a guest handle. */ +#define copy_field_to_guest(hnd, ptr, field) ({ \ + const typeof(&(ptr)->field) _s = &(ptr)->field; \ + void *_d = &(hnd).p->field; \ + ((void)(&(hnd).p->field == &(ptr)->field)); \ + raw_copy_to_guest(_d, _s, sizeof(*_s)); \ +}) + +/* Copy sub-field of a structure from guest context via a guest handle. */ +#define copy_field_from_guest(ptr, hnd, field) ({ \ + const typeof(&(ptr)->field) _s = &(hnd).p->field; \ + typeof(&(ptr)->field) _d = &(ptr)->field; \ + raw_copy_from_guest(_d, _s, sizeof(*_d)); \ +}) + +/* + * Pre-validate a guest handle. + * Allows use of faster __copy_* functions. 
+ */ +/* All ARM guests are paging mode external and hence safe */ +#define guest_handle_okay(hnd, nr) (1) +#define guest_handle_subrange_okay(hnd, first, last) (1) + +#define __copy_to_guest_offset(hnd, off, ptr, nr) ({ \ + const typeof(*(ptr)) *_s = (ptr); \ + char (*_d)[sizeof(*_s)] = (void *)(hnd).p; \ + ((void)((hnd).p == (ptr))); \ + __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\ +}) + +#define __clear_guest_offset(hnd, off, ptr, nr) ({ \ + char (*_d)[sizeof(*(ptr))] = (void *)(hnd).p; \ + __raw_clear_guest(_d+(off), nr); \ +}) + +#define __copy_from_guest_offset(ptr, hnd, off, nr) ({ \ + const typeof(*(ptr)) *_s = (hnd).p; \ + typeof(*(ptr)) *_d = (ptr); \ + __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\ +}) + +#define __copy_field_to_guest(hnd, ptr, field) ({ \ + const typeof(&(ptr)->field) _s = &(ptr)->field; \ + void *_d = &(hnd).p->field; \ + ((void)(&(hnd).p->field == &(ptr)->field)); \ + __raw_copy_to_guest(_d, _s, sizeof(*_s)); \ +}) + +#define __copy_field_from_guest(ptr, hnd, field) ({ \ + const typeof(&(ptr)->field) _s = &(hnd).p->field; \ + typeof(&(ptr)->field) _d = &(ptr)->field; \ + __raw_copy_from_guest(_d, _s, sizeof(*_d)); \ +}) + +#endif /* __ASM_ARM_GUEST_ACCESS_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h new file mode 100644 index 0000000..f721c54 --- /dev/null +++ b/xen/include/asm-arm/mm.h @@ -0,0 +1,315 @@ +#ifndef __ARCH_ARM_MM__ +#define __ARCH_ARM_MM__ + +#include <xen/config.h> +#include <xen/kernel.h> +#include <asm/page.h> +#include <public/xen.h> + +/* Find a suitable place at the top of memory for Xen to live */ +/* XXX For now, use the top of the VE's 4GB RAM, at a 40-bit alias */ +#define XEN_PADDR 0x80ffe00000ull + +/* + * Per-page-frame information. + * + * Every architecture must ensure the following: + * 1. 'struct page_info' contains a 'struct page_list_entry list'. + * 2. Provide a PFN_ORDER() macro for accessing the order of a free page. + */ +#define PFN_ORDER(_pfn) ((_pfn)->v.free.order) + +struct page_info +{ + /* Each frame can be threaded onto a doubly-linked list. */ + struct page_list_entry list; + + /* Reference count and various PGC_xxx flags and fields. */ + unsigned long count_info; + + /* Context-dependent fields follow... */ + union { + /* Page is in use: ((count_info & PGC_count_mask) != 0). */ + struct { + /* Type reference count and various PGT_xxx flags and fields. */ + unsigned long type_info; + } inuse; + /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */ + struct { + /* Do TLBs need flushing for safety before next page use? */ + bool_t need_tlbflush; + } free; + + } u; + + union { + /* Page is in use, but not as a shadow. */ + struct { + /* Owner of this page (zero if page is anonymous). */ + struct domain *domain; + } inuse; + + /* Page is on a free list. */ + struct { + /* Order-size of the free chunk this page is the head of. */ + unsigned int order; + } free; + + } v; + + union { + /* + * Timestamp from 'TLB clock', used to avoid extra safety flushes. + * Only valid for: a) free pages, and b) pages with zero type count + */ + u32 tlbflush_timestamp; + }; + u64 pad; +}; + +#define PG_shift(idx) (BITS_PER_LONG - (idx)) +#define PG_mask(x, idx) (x ## UL << PG_shift(idx)) + +#define PGT_none PG_mask(0, 4) /* no special uses of this page */ +#define PGT_writable_page PG_mask(7, 4) /* has writable mappings? 
*/ +#define PGT_shared_page PG_mask(8, 4) /* CoW sharable page */ +#define PGT_type_mask PG_mask(15, 4) /* Bits 28-31 or 60-63. */ + + /* Owning guest has pinned this page to its current type? */ +#define _PGT_pinned PG_shift(5) +#define PGT_pinned PG_mask(1, 5) + + /* Count of uses of this frame as its current type. */ +#define PGT_count_width PG_shift(9) +#define PGT_count_mask ((1UL<<PGT_count_width)-1) + + /* Cleared when the owning guest 'frees' this page. */ +#define _PGC_allocated PG_shift(1) +#define PGC_allocated PG_mask(1, 1) + /* Page is Xen heap? */ +#define _PGC_xen_heap PG_shift(2) +#define PGC_xen_heap PG_mask(1, 2) +/* ... */ +/* Page is broken? */ +#define _PGC_broken PG_shift(7) +#define PGC_broken PG_mask(1, 7) + /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */ +#define PGC_state PG_mask(3, 9) +#define PGC_state_inuse PG_mask(0, 9) +#define PGC_state_offlining PG_mask(1, 9) +#define PGC_state_offlined PG_mask(2, 9) +#define PGC_state_free PG_mask(3, 9) +#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st) + +/* Count of references to this frame. */ +#define PGC_count_width PG_shift(9) +#define PGC_count_mask ((1UL<<PGC_count_width)-1) + +extern unsigned long xenheap_mfn_start, xenheap_mfn_end; +extern unsigned long xenheap_virt_end; + +#define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page)) +#define is_xen_heap_mfn(mfn) ({ \ + unsigned long _mfn = (mfn); \ + (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end); \ +}) +#define is_xen_fixed_mfn(mfn) is_xen_heap_mfn(mfn) + +#define page_get_owner(_p) (_p)->v.inuse.domain +#define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d)) + +#define maddr_get_owner(ma) (page_get_owner(maddr_to_page((ma)))) +#define vaddr_get_owner(va) (page_get_owner(virt_to_page((va)))) + +#define XENSHARE_writable 0 +#define XENSHARE_readonly 1 +extern void share_xen_page_with_guest( + struct page_info *page, struct domain *d, int readonly); +extern void share_xen_page_with_privileged_guests( + struct page_info *page, int readonly); + +#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START) + +extern unsigned long max_page; +extern unsigned long total_pages; + +/* Boot-time pagetable setup */ +extern void setup_pagetables(unsigned long boot_phys_offset); +/* Set up the xenheap: up to 1GB of contiguous, always-mapped memory. + * Base must be 32MB aligned and size a multiple of 32MB. */ +extern void setup_xenheap_mappings(unsigned long base_mfn, unsigned long nr_mfns); +/* Map a frame table to cover physical addresses ps through pe */ +extern void setup_frametable_mappings(paddr_t ps, paddr_t pe); +/* Map a 4k page in a fixmap entry */ +extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes); +/* Remove a mapping from a fixmap entry */ +extern void clear_fixmap(unsigned map); + + +#define mfn_valid(mfn) ({ \ + unsigned long __m_f_n = (mfn); \ + likely(__m_f_n < max_page); \ +}) + +#define max_pdx max_page +/* XXX Assume everything in the 40-bit physical alias 0x8000000000 for now */ +#define pfn_to_pdx(pfn) ((pfn) - 0x8000000UL) +#define pdx_to_pfn(pdx) ((pdx) + 0x8000000UL) +#define virt_to_pdx(va) virt_to_mfn(va) +#define pdx_to_virt(pdx) mfn_to_virt(pdx) + +/* Convert between machine frame numbers and page-info structures. 
*/ +#define mfn_to_page(mfn) (frame_table + pfn_to_pdx(mfn)) +#define page_to_mfn(pg) pdx_to_pfn((unsigned long)((pg) - frame_table)) +#define __page_to_mfn(pg) page_to_mfn(pg) +#define __mfn_to_page(mfn) mfn_to_page(mfn) + +/* Convert between machine addresses and page-info structures. */ +#define maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT) +#define page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT) + +/* Convert between frame number and address formats. */ +#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT) +#define paddr_to_pfn(pa) ((unsigned long)((pa) >> PAGE_SHIFT)) +#define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa)) + + +static inline paddr_t virt_to_maddr(void *va) +{ + uint64_t par = va_to_par((uint32_t)va); + return (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK); +} + +static inline void *maddr_to_virt(paddr_t ma) +{ + ASSERT(is_xen_heap_mfn(ma >> PAGE_SHIFT)); + ma -= pfn_to_paddr(xenheap_mfn_start); + return (void *)(unsigned long) ma + XENHEAP_VIRT_START; +} + +static inline paddr_t gvirt_to_maddr(uint32_t va) +{ + uint64_t par = gva_to_par(va); + return (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK); +} + +/* Convert between Xen-heap virtual addresses and machine addresses. */ +#define __pa(x) (virt_to_maddr(x)) +#define __va(x) (maddr_to_virt(x)) + +/* Convert between Xen-heap virtual addresses and machine frame numbers. */ +#define virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT) +#define mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT)) + + +static inline int get_order_from_bytes(paddr_t size) +{ + int order; + size = (size-1) >> PAGE_SHIFT; + for ( order = 0; size; order++ ) + size >>= 1; + return order; +} + +static inline int get_order_from_pages(unsigned long nr_pages) +{ + int order; + nr_pages--; + for ( order = 0; nr_pages; order++ ) + nr_pages >>= 1; + return order; +} + + +/* Convert between Xen-heap virtual addresses and page-info structures. */ +static inline struct page_info *virt_to_page(const void *v) +{ + unsigned long va = (unsigned long)v; + ASSERT(va >= XENHEAP_VIRT_START); + ASSERT(va < xenheap_virt_end); + + return frame_table + ((va - XENHEAP_VIRT_START) >> PAGE_SHIFT); +} + +static inline void *page_to_virt(const struct page_info *pg) +{ + ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < frametable_virt_end); + return (void *)(XENHEAP_VIRT_START + + ((unsigned long)pg - FRAMETABLE_VIRT_START) / + (sizeof(*pg) / (sizeof(*pg) & -sizeof(*pg))) * + (PAGE_SIZE / (sizeof(*pg) & -sizeof(*pg)))); + +} + +struct domain *page_get_owner_and_reference(struct page_info *page); +void put_page(struct page_info *page); +int get_page(struct page_info *page, struct domain *domain); + +/* + * The MPT (machine->physical mapping table) is an array of word-sized + * values, indexed on machine frame number. It is expected that guest OSes + * will use it to store a "physical" frame number to give the appearance of + * contiguous (or near contiguous) physical memory. 
+ */ +#undef machine_to_phys_mapping +#define machine_to_phys_mapping ((unsigned long *)RDWR_MPT_VIRT_START) +#define INVALID_M2P_ENTRY (~0UL) +#define VALID_M2P(_e) (!((_e) & (1UL<<(BITS_PER_LONG-1)))) +#define SHARED_M2P_ENTRY (~0UL - 1UL) +#define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY) + +#define _set_gpfn_from_mfn(mfn, pfn) ({ \ + struct domain *d = page_get_owner(__mfn_to_page(mfn)); \ + if(d && (d == dom_cow)) \ + machine_to_phys_mapping[(mfn)] = SHARED_M2P_ENTRY; \ + else \ + machine_to_phys_mapping[(mfn)] = (pfn); \ + }) + +#define put_gfn(d, g) ((void)0) + +#define INVALID_MFN (~0UL) + +/* Xen always owns P2M on ARM */ +#define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0) +#define mfn_to_gmfn(_d, mfn) (mfn) + + +/* Arch-specific portion of memory_op hypercall. */ +long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg); + +int steal_page( + struct domain *d, struct page_info *page, unsigned int memflags); +int donate_page( + struct domain *d, struct page_info *page, unsigned int memflags); + +#define domain_set_alloc_bitsize(d) ((void)0) +#define domain_clamp_alloc_bitsize(d, b) (b) + +unsigned long domain_get_maximum_gpfn(struct domain *d); + +extern struct domain *dom_xen, *dom_io, *dom_cow; + +#define memguard_init(_s) (_s) +#define memguard_guard_stack(_p) ((void)0) +#define memguard_guard_range(_p,_l) ((void)0) +#define memguard_unguard_range(_p,_l) ((void)0) +int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn, + unsigned int order); + +extern void put_page_type(struct page_info *page); +static inline void put_page_and_type(struct page_info *page) +{ + put_page_type(page); + put_page(page); +} + +#endif /* __ARCH_ARM_MM__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h new file mode 100644 index 0000000..aec52f7 --- /dev/null +++ b/xen/include/asm-arm/p2m.h @@ -0,0 +1,88 @@ +#ifndef _XEN_P2M_H +#define _XEN_P2M_H + +#include <xen/mm.h> + +struct domain; + +/* Per-p2m-table state */ +struct p2m_domain { + /* Lock that protects updates to the p2m */ + spinlock_t lock; + + /* Pages used to construct the p2m */ + struct page_list_head pages; + + /* Root of p2m page tables, 2 contiguous pages */ + struct page_info *first_level; + + /* Current VMID in use */ + uint8_t vmid; +}; + +/* Init the datastructures for later use by the p2m code */ +int p2m_init(struct domain *d); + +/* Allocate a new p2m table for a domain. + * + * Returns 0 for success or -errno. + */ +int p2m_alloc_table(struct domain *d); + +/* */ +void p2m_load_VTTBR(struct domain *d); + +/* Setup p2m RAM mapping for domain d from start-end. */ +int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end); +/* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range + * in the guest physical address space to map, starting from the machine + * address maddr. */ +int map_mmio_regions(struct domain *d, paddr_t start_gaddr, + paddr_t end_gaddr, paddr_t maddr); + +unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn); + +/* + * Populate-on-demand + */ + +/* Call when decreasing memory reservation to handle PoD entries properly. 
+ * Will return '1' if all entries were handled and nothing more need be done.*/ +int +p2m_pod_decrease_reservation(struct domain *d, + xen_pfn_t gpfn, + unsigned int order); + +/* Compatibility function exporting the old untyped interface */ +static inline unsigned long get_gfn_untyped(struct domain *d, unsigned long gpfn) +{ + return gmfn_to_mfn(d, gpfn); +} + +int get_page_type(struct page_info *page, unsigned long type); +int is_iomem_page(unsigned long mfn); +static inline int get_page_and_type(struct page_info *page, + struct domain *domain, + unsigned long type) +{ + int rc = get_page(page, domain); + + if ( likely(rc) && unlikely(!get_page_type(page, type)) ) + { + put_page(page); + rc = 0; + } + + return rc; +} + +#endif /* _XEN_P2M_H */ + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h new file mode 100644 index 0000000..6dc1659 --- /dev/null +++ b/xen/include/asm-arm/page.h @@ -0,0 +1,335 @@ +#ifndef __ARM_PAGE_H__ +#define __ARM_PAGE_H__ + +#include <xen/config.h> + +#define PADDR_BITS 40 +#define PADDR_MASK ((1ULL << PADDR_BITS)-1) + +#define VADDR_BITS 32 +#define VADDR_MASK (~0UL) + +/* Shareability values for the LPAE entries */ +#define LPAE_SH_NON_SHAREABLE 0x0 +#define LPAE_SH_UNPREDICTABLE 0x1 +#define LPAE_SH_OUTER 0x2 +#define LPAE_SH_INNER 0x3 + +/* LPAE Memory region attributes, to match Linux's (non-LPAE) choices. + * Indexed by the AttrIndex bits of an LPAE entry; + * the 8-bit fields are packed little-endian into MAIR0 and MAIR1 + * + * ai encoding + * UNCACHED 000 0000 0000 -- Strongly Ordered + * BUFFERABLE 001 0100 0100 -- Non-Cacheable + * WRITETHROUGH 010 1010 1010 -- Write-through + * WRITEBACK 011 1110 1110 -- Write-back + * DEV_SHARED 100 0000 0100 -- Device + * ?? 101 + * reserved 110 + * WRITEALLOC 111 1111 1111 -- Write-back write-allocate + * + * DEV_NONSHARED 100 (== DEV_SHARED) + * DEV_WC 001 (== BUFFERABLE) + * DEV_CACHED 011 (== WRITEBACK) + */ +#define MAIR0VAL 0xeeaa4400 +#define MAIR1VAL 0xff000004 + +#define UNCACHED 0x0 +#define BUFFERABLE 0x1 +#define WRITETHROUGH 0x2 +#define WRITEBACK 0x3 +#define DEV_SHARED 0x4 +#define WRITEALLOC 0x7 +#define DEV_NONSHARED DEV_SHARED +#define DEV_WC BUFFERABLE +#define DEV_CACHED WRITEBACK + + +#ifndef __ASSEMBLY__ + +#include <xen/types.h> +#include <xen/lib.h> + +/* WARNING! Unlike the Intel pagetable code, where l1 is the lowest + * level and l4 is the root of the trie, the ARM pagetables follow ARM's + * documentation: the levels are called first, second &c in the order + * that the MMU walks them (i.e. "first" is the root of the trie). */ + +/****************************************************************************** + * ARMv7-A LPAE pagetables: 3-level trie, mapping 40-bit input to + * 40-bit output addresses. Tables at all levels have 512 64-bit entries + * (i.e. are 4KB long). + * + * The bit-shuffling that has the permission bits in branch nodes in a + * different place from those in leaf nodes seems to be to allow linear + * pagetable tricks. If we're not doing that then the set of permission + * bits that's not in use in a given node type can be used as + * extra software-defined bits. */ + +typedef struct { + /* These are used in all kinds of entry. */ + unsigned long valid:1; /* Valid mapping */ + unsigned long table:1; /* == 1 in 4k map entries too */ + + /* These ten bits are only used in Block entries and are ignored + * in Table entries. 
*/ + unsigned long ai:3; /* Attribute Index */ + unsigned long ns:1; /* Not-Secure */ + unsigned long user:1; /* User-visible */ + unsigned long ro:1; /* Read-Only */ + unsigned long sh:2; /* Shareability */ + unsigned long af:1; /* Access Flag */ + unsigned long ng:1; /* Not-Global */ + + /* The base address must be appropriately aligned for Block entries */ + unsigned long base:28; /* Base address of block or next table */ + unsigned long sbz:12; /* Must be zero */ + + /* These seven bits are only used in Block entries and are ignored + * in Table entries. */ + unsigned long hint:1; /* In a block of 16 contiguous entries */ + unsigned long pxn:1; /* Privileged-XN */ + unsigned long xn:1; /* eXecute-Never */ + unsigned long avail:4; /* Ignored by hardware */ + + /* These 5 bits are only used in Table entries and are ignored in + * Block entries */ + unsigned long pxnt:1; /* Privileged-XN */ + unsigned long xnt:1; /* eXecute-Never */ + unsigned long apt:2; /* Access Permissions */ + unsigned long nst:1; /* Not-Secure */ +} __attribute__((__packed__)) lpae_pt_t; + +/* The p2m tables have almost the same layout, but some of the permission + * and cache-control bits are laid out differently (or missing) */ +typedef struct { + /* These are used in all kinds of entry. */ + unsigned long valid:1; /* Valid mapping */ + unsigned long table:1; /* == 1 in 4k map entries too */ + + /* These ten bits are only used in Block entries and are ignored + * in Table entries. */ + unsigned long mattr:4; /* Memory Attributes */ + unsigned long read:1; /* Read access */ + unsigned long write:1; /* Write access */ + unsigned long sh:2; /* Shareability */ + unsigned long af:1; /* Access Flag */ + unsigned long sbz4:1; + + /* The base address must be appropriately aligned for Block entries */ + unsigned long base:28; /* Base address of block or next table */ + unsigned long sbz3:12; + + /* These seven bits are only used in Block entries and are ignored + * in Table entries. */ + unsigned long hint:1; /* In a block of 16 contiguous entries */ + unsigned long sbz2:1; + unsigned long xn:1; /* eXecute-Never */ + unsigned long avail:4; /* Ignored by hardware */ + + unsigned long sbz1:5; +} __attribute__((__packed__)) lpae_p2m_t; + +typedef union { + uint64_t bits; + lpae_pt_t pt; + lpae_p2m_t p2m; +} lpae_t; + +/* Standard entry type that we'll use to build Xen's own pagetables. + * We put the same permissions at every level, because they're ignored + * by the walker in non-leaf entries. */ +static inline lpae_t mfn_to_xen_entry(unsigned long mfn) +{ + paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT; + lpae_t e = (lpae_t) { + .pt = { + .xn = 1, /* No need to execute outside .text */ + .ng = 1, /* Makes TLB flushes easier */ + .af = 1, /* No need for access tracking */ + .sh = LPAE_SH_OUTER, /* Xen mappings are globally coherent */ + .ns = 1, /* Hyp mode is in the non-secure world */ + .user = 1, /* See below */ + .ai = WRITEALLOC, + .table = 0, /* Set to 1 for links and 4k maps */ + .valid = 1, /* Mappings are present */ + }}; + /* Setting the User bit is strange, but the ATS1H[RW] instructions + * don't seem to work otherwise, and since we never run on Xen + * pagetables in User mode it's OK. 
If this changes, remember + * to update the hard-coded values in head.S too */ + + ASSERT(!(pa & ~PAGE_MASK)); + ASSERT(!(pa & ~PADDR_MASK)); + + // XXX shifts + e.bits |= pa; + return e; +} + +static inline lpae_t mfn_to_p2m_entry(unsigned long mfn) +{ + paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT; + lpae_t e = (lpae_t) { + .p2m.xn = 0, + .p2m.af = 1, + .p2m.sh = LPAE_SH_OUTER, + .p2m.write = 1, + .p2m.read = 1, + .p2m.mattr = 0xf, + .p2m.table = 1, + .p2m.valid = 1, + }; + + ASSERT(!(pa & ~PAGE_MASK)); + ASSERT(!(pa & ~PADDR_MASK)); + + e.bits |= pa; + + return e; +} + +/* Write a pagetable entry */ +static inline void write_pte(lpae_t *p, lpae_t pte) +{ + asm volatile ( + /* Safely write the entry (STRD is atomic on CPUs that support LPAE) */ + "strd %0, %H0, [%1];" + /* Push this cacheline to the PoC so the rest of the system sees it. */ + STORE_CP32(1, DCCMVAC) + : : "r" (pte.bits), "r" (p) : "memory"); +} + +/* + * Flush all hypervisor mappings from the data TLB. This is not + * sufficient when changing code mappings or for self modifying code. + */ +static inline void flush_xen_data_tlb(void) +{ + register unsigned long r0 asm ("r0"); + asm volatile("dsb;" /* Ensure preceding are visible */ + STORE_CP32(0, TLBIALLH) + "dsb;" /* Ensure completion of the TLB flush */ + "isb;" + : : "r" (r0) /* dummy */: "memory"); +} + +/* + * Flush one VA's hypervisor mappings from the data TLB. This is not + * sufficient when changing code mappings or for self modifying code. + */ +static inline void flush_xen_data_tlb_va(unsigned long va) +{ + asm volatile("dsb;" /* Ensure preceding are visible */ + STORE_CP32(0, TLBIMVAH) + "dsb;" /* Ensure completion of the TLB flush */ + "isb;" + : : "r" (va) : "memory"); +} + +/* Flush all non-hypervisor mappings from the TLB */ +static inline void flush_guest_tlb(void) +{ + register unsigned long r0 asm ("r0"); + WRITE_CP32(r0 /* dummy */, TLBIALLNSNH); +} + +/* Ask the MMU to translate a VA for us */ +static inline uint64_t __va_to_par(uint32_t va) +{ + uint64_t par, tmp; + tmp = READ_CP64(PAR); + WRITE_CP32(va, ATS1HR); + isb(); /* Ensure result is available. */ + par = READ_CP64(PAR); + WRITE_CP64(tmp, PAR); + return par; +} + +static inline uint64_t va_to_par(uint32_t va) +{ + uint64_t par = __va_to_par(va); + /* It is not OK to call this with an invalid VA */ + if ( par & PAR_F ) panic_PAR(par, "Hypervisor"); + return par; +} + +/* Ask the MMU to translate a Guest VA for us */ +static inline uint64_t __gva_to_par(uint32_t va) +{ + uint64_t par, tmp; + tmp = READ_CP64(PAR); + WRITE_CP32(va, ATS12NSOPR); + isb(); /* Ensure result is available. */ + par = READ_CP64(PAR); + WRITE_CP64(tmp, PAR); + return par; +} +static inline uint64_t gva_to_par(uint32_t va) +{ + uint64_t par = __gva_to_par(va); + /* It is not OK to call this with an invalid VA */ + /* XXX harsh for a guest address... */ + if ( par & PAR_F ) panic_PAR(par, "Guest"); + return par; +} +static inline uint64_t __gva_to_ipa(uint32_t va) +{ + uint64_t par, tmp; + tmp = READ_CP64(PAR); + WRITE_CP32(va, ATS1CPR); + isb(); /* Ensure result is available. */ + par = READ_CP64(PAR); + WRITE_CP64(tmp, PAR); + return par; +} +static inline uint64_t gva_to_ipa(uint32_t va) +{ + uint64_t par = __gva_to_ipa(va); + /* It is not OK to call this with an invalid VA */ + /* XXX harsh for a guest address... 
*/ + if ( par & PAR_F ) panic_PAR(par, "Guest"); + return (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK); +} +/* Bits in the PAR returned by va_to_par */ +#define PAR_FAULT 0x1 + +#endif /* __ASSEMBLY__ */ + +/* These numbers add up to a 39-bit input address space. The ARMv7-A + * architecture actually specifies a 40-bit input address space for the p2m, + * with an 8K (1024-entry) top-level table. */ + +#define LPAE_SHIFT 9 +#define LPAE_ENTRIES (1u << LPAE_SHIFT) +#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1) + +#define THIRD_SHIFT PAGE_SHIFT +#define SECOND_SHIFT (THIRD_SHIFT + LPAE_SHIFT) +#define FIRST_SHIFT (SECOND_SHIFT + LPAE_SHIFT) + +/* Calculate the offsets into the pagetables for a given VA */ +#define first_linear_offset(va) (va >> FIRST_SHIFT) +#define second_linear_offset(va) (va >> SECOND_SHIFT) +#define third_linear_offset(va) (va >> THIRD_SHIFT) +#define first_table_offset(va) (first_linear_offset(va)) +#define second_table_offset(va) (second_linear_offset(va) & LPAE_ENTRY_MASK) +#define third_table_offset(va) (third_linear_offset(va) & LPAE_ENTRY_MASK) + +#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE) + +#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK) + +#endif /* __ARM_PAGE_H__ */ + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
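[Editorial sketch, not part of the series: a worked example of how a 32-bit virtual address splits under the offset macros above, as a standalone program. It assumes PAGE_SHIFT == 12, so THIRD_SHIFT == 12, SECOND_SHIFT == 21 and FIRST_SHIFT == 30.]

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t va = 0x40201234;

        printf("first  table offset: %#x\n", va >> 30);           /* 0x1 */
        printf("second table offset: %#x\n", (va >> 21) & 0x1ff); /* 0x1 */
        printf("third  table offset: %#x\n", (va >> 12) & 0x1ff); /* 0x1 */
        printf("page offset:         %#x\n", va & 0xfff);         /* 0x234 */
        return 0;
    }

Each table level consumes LPAE_SHIFT == 9 bits of the address, which together with the 12-bit page offset gives the 39-bit input address space mentioned in the comment above.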
Stefano Stabellini
2012-Jan-09 17:59 UTC
[PATCH v4 19/25] arm: early setup code
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/setup.c | 206 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 206 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/setup.c diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c new file mode 100644 index 0000000..33c880e --- /dev/null +++ b/xen/arch/arm/setup.c @@ -0,0 +1,206 @@ +/* + * xen/arch/arm/setup.c + * + * Early bringup code for an ARMv7-A with virt extensions. + * + * Tim Deegan <tim@xen.org> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <xen/config.h> +#include <xen/compile.h> +#include <xen/domain_page.h> +#include <xen/types.h> +#include <xen/string.h> +#include <xen/serial.h> +#include <xen/sched.h> +#include <xen/console.h> +#include <xen/init.h> +#include <xen/irq.h> +#include <xen/mm.h> +#include <xen/softirq.h> +#include <xen/keyhandler.h> +#include <xen/cpu.h> +#include <asm/page.h> +#include <asm/current.h> +#include <asm/setup.h> +#include "gic.h" + +/* maxcpus: maximum number of CPUs to activate. */ +static unsigned int __initdata max_cpus = NR_CPUS; + +/* Xen stack for bringing up the first CPU. */ +unsigned char init_stack[STACK_SIZE] __attribute__((__aligned__(STACK_SIZE))); + +extern char __init_begin[], __init_end[], __bss_start[]; + +static __attribute_used__ void init_done(void) +{ + /* TODO: free (or page-protect) the init areas. + memset(__init_begin, 0xcc, __init_end - __init_begin); + free_xen_data(__init_begin, __init_end); + */ + printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>10); + + startup_cpu_idle_loop(); +} + +static void __init init_idle_domain(void) +{ + scheduler_init(); + set_current(idle_vcpu[0]); + this_cpu(curr_vcpu) = current; + /* TODO: setup_idle_pagetable(); */ +} + +void __init start_xen(unsigned long boot_phys_offset, + unsigned long arm_type, + unsigned long atag_paddr) + +{ + int i; + + setup_pagetables(boot_phys_offset); + +#ifdef EARLY_UART_ADDRESS + /* Map the UART */ + /* TODO Need to get device tree or command line for UART address */ + set_fixmap(FIXMAP_CONSOLE, EARLY_UART_ADDRESS >> PAGE_SHIFT, DEV_SHARED); + pl011_init(0, FIXMAP_ADDR(FIXMAP_CONSOLE)); + console_init_preirq(); +#endif + + set_current((struct vcpu *)0xfffff000); /* debug sanity */ + idle_vcpu[0] = current; + set_processor_id(0); /* needed early, for smp_processor_id() */ + + /* TODO: smp_prepare_boot_cpu(void) */ + cpumask_set_cpu(smp_processor_id(), &cpu_online_map); + cpumask_set_cpu(smp_processor_id(), &cpu_present_map); + + smp_prepare_cpus(max_cpus); + + init_xen_time(); + + /* TODO: This needs some thought, as well as device-tree mapping. 
+ * For testing, assume that the whole xenheap is contiguous in RAM */ + setup_xenheap_mappings(0x8000000, 0x40000); /* 1 GB @ 512GB */ + /* Must pass a single mapped page for populating bootmem_region_list. */ + init_boot_pages(pfn_to_paddr(xenheap_mfn_start), + pfn_to_paddr(xenheap_mfn_start+1)); + + /* Add non-xenheap memory */ + init_boot_pages(0x8040000000, 0x80c0000000); /* 2 GB @513GB */ + + /* TODO Make sure Xen's own pages aren't added + * -- the memory above doesn't include our relocation target. */ + /* TODO Handle payloads too */ + + /* TODO Need to find actual memory, for now use 4GB at 512GB */ + setup_frametable_mappings(0x8000000000ULL, 0x8100000000UL); + + /* Add xenheap memory */ + init_xenheap_pages(pfn_to_paddr(xenheap_mfn_start+1), + pfn_to_paddr(xenheap_mfn_end)); + + end_boot_allocator(); + + /* Setup Hyp vector base */ + WRITE_CP32((uint32_t) hyp_traps_vector, HVBAR); + printk("Set hyp vector base to %"PRIx32" (expected %p)\n", + READ_CP32(HVBAR), hyp_traps_vector); + + /* Setup Stage 2 address translation */ + /* SH0=10, ORGN0=IRGN0=01 + * SL0=01 (Level-1) + * T0SZ=(1)1000 = -8 (40 bit physical addresses) + */ + WRITE_CP32(0x80002558, VTCR); isb(); + + softirq_init(); + tasklet_subsys_init(); + + init_IRQ(); + + gic_init(); + + gic_route_irqs(); + + init_maintenance_interrupt(); + init_timer_interrupt(); + + timer_init(); + + init_idle_domain(); + + rcu_init(); + + local_irq_enable(); + + initialize_keytable(); + + console_init_postirq(); + + do_presmp_initcalls(); + + for_each_present_cpu ( i ) + { + if ( (num_online_cpus() < max_cpus) && !cpu_online(i) ) + { + int ret = cpu_up(i); + if ( ret != 0 ) + printk("Failed to bring up CPU %u (error %d)\n", i, ret); + } + } + + printk("Brought up %ld CPUs\n", (long)num_online_cpus()); + /* TODO: smp_cpus_done(); */ + + do_initcalls(); + + /* Create initial domain 0. */ + dom0 = domain_create(0, 0, 0); + if ( dom0 == NULL ) + printk("domain_create failed\n"); + if ( (dom0 == NULL) || (alloc_dom0_vcpu0() == NULL) ) + panic("Error creating domain 0\n"); + + dom0->is_privileged = 1; + dom0->target = NULL; + + if ( construct_dom0(dom0) != 0) + panic("Could not set up DOM0 guest OS\n"); + + /* Scrub RAM that is still free and so may go to an unprivileged domain. + XXX too slow in simulator + scrub_heap_pages(); + */ + + console_endboot(); + + /* Hide UART from DOM0 if we're using it */ + serial_endboot(); + + domain_unpause_by_systemcontroller(dom0); + + reset_stack_and_jump(init_done); +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
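[Editorial sketch, not part of the series: the VTCR write above uses a precomputed constant, and the sketch below rebuilds 0x80002558 from named fields so the encoding is easier to audit. It assumes the ARMv7 VTCR layout: T0SZ in bits [3:0], S in bit 4, SL0 in bits [7:6], IRGN0 in [9:8], ORGN0 in [11:10], SH0 in [13:12], and bit 31 reserved-as-one.]

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t vtcr = 0;

        vtcr |= 1u << 31;  /* reserved, must be one */
        vtcr |= 2u << 12;  /* SH0   = 10: Outer Shareable */
        vtcr |= 1u << 10;  /* ORGN0 = 01: Write-Back Write-Allocate */
        vtcr |= 1u << 8;   /* IRGN0 = 01: Write-Back Write-Allocate */
        vtcr |= 1u << 6;   /* SL0   = 01: walk starts at level 1 */
        vtcr |= 1u << 4;   /* S     = 1: sign bit of T0SZ */
        vtcr |= 8u << 0;   /* T0SZ  = (1)1000 = -8: 40-bit input size */

        assert(vtcr == 0x80002558u);
        return 0;
    }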
Stefano Stabellini
2012-Jan-09 17:59 UTC
[PATCH v4 20/25] arm: shutdown, smp and smpboot
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Dummy implementation of machine_* and smp_* Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/shutdown.c | 23 +++++++++++++++++++++ xen/arch/arm/smp.c | 29 +++++++++++++++++++++++++++ xen/arch/arm/smpboot.c | 50 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 102 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/shutdown.c create mode 100644 xen/arch/arm/smp.c create mode 100644 xen/arch/arm/smpboot.c diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c new file mode 100644 index 0000000..2e35d2d --- /dev/null +++ b/xen/arch/arm/shutdown.c @@ -0,0 +1,23 @@ +#include <xen/config.h> +#include <xen/lib.h> + +void machine_halt(void) +{ + /* TODO: halt */ + while(1) ; +} + +void machine_restart(unsigned int delay_millisecs) +{ + /* TODO: restart */ + printk("Cannot restart yet\n"); + while(1); +} +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c new file mode 100644 index 0000000..677c71a --- /dev/null +++ b/xen/arch/arm/smp.c @@ -0,0 +1,29 @@ +#include <xen/config.h> +#include <asm/smp.h> + +void smp_call_function( + void (*func) (void *info), + void *info, + int wait) +{ + /* TODO: No SMP just now, does not include self so nothing to do. + cpumask_t allbutself = cpu_online_map; + cpu_clear(smp_processor_id(), allbutself); + on_selected_cpus(&allbutself, func, info, wait); + */ +} +void smp_send_event_check_mask(const cpumask_t *mask) +{ + /* TODO: No SMP just now, does not include self so nothing to do. + send_IPI_mask(mask, EVENT_CHECK_VECTOR); + */ +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c new file mode 100644 index 0000000..8287473 --- /dev/null +++ b/xen/arch/arm/smpboot.c @@ -0,0 +1,50 @@ +/* + * xen/arch/arm/smpboot.c + * + * Dummy smpboot support + * + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <xen/cpumask.h> +#include <xen/smp.h> +#include <xen/init.h> + +cpumask_t cpu_online_map; +EXPORT_SYMBOL(cpu_online_map); +cpumask_t cpu_present_map; +EXPORT_SYMBOL(cpu_present_map); +cpumask_t cpu_possible_map; +EXPORT_SYMBOL(cpu_possible_map); + +void __init +smp_prepare_cpus (unsigned int max_cpus) +{ + set_processor_id(0); /* needed early, for smp_processor_id() */ + + cpumask_clear(&cpu_online_map); + cpumask_clear(&cpu_present_map); + cpumask_clear(&cpu_possible_map); + cpumask_set_cpu(0, &cpu_online_map); + cpumask_set_cpu(0, &cpu_present_map); + cpumask_set_cpu(0, &cpu_possible_map); + return; +} +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
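[Editorial sketch, not part of the series: once real SMP support arrives, the stubs above grow into the usual cross-call pattern. The example_flush names below are invented for illustration.]

    /* Hypothetical caller of smp_call_function() (illustration only). */
    static void example_flush(void *info)
    {
        /* Per-CPU work, e.g. a local TLB flush. */
    }

    void example_global_flush(void)
    {
        /* Intended semantics: run example_flush on every other online
         * CPU and, with wait != 0, spin until they have all finished. */
        smp_call_function(example_flush, NULL, 1);
    }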
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 21/25] arm: driver for the generic timer for ARMv7
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Driver for the generic timer for ARMv7 with virtualization extensions. Currently it is based on the kernel timer rather than the hypervisor timer because the latter does not work correctly in our test environment. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/time.c | 181 ++++++++++++++++++++++++++++++++++++++++++++ xen/include/asm-arm/time.h | 26 ++++++ 2 files changed, 207 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/time.c create mode 100644 xen/include/asm-arm/time.h diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c new file mode 100644 index 0000000..13c1254 --- /dev/null +++ b/xen/arch/arm/time.c @@ -0,0 +1,181 @@ +/* + * xen/arch/arm/time.c + * + * Time and timer support, using the ARM Generic Timer interfaces + * + * Tim Deegan <tim@xen.org> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <xen/config.h> +#include <xen/console.h> +#include <xen/init.h> +#include <xen/irq.h> +#include <xen/lib.h> +#include <xen/mm.h> +#include <xen/softirq.h> +#include <xen/time.h> +#include <asm/system.h> + +/* Unfortunately the hypervisor timer interrupt appears to be buggy */ +#define USE_HYP_TIMER 0 + +/* For fine-grained timekeeping, we use the ARM "Generic Timer", a + * register-mapped time source in the SoC. */ +static uint32_t __read_mostly cntfrq; /* Ticks per second */ +static uint64_t __read_mostly boot_count; /* Counter value at boot time */ +
+/*static inline*/ s_time_t ticks_to_ns(uint64_t ticks) +{ + return muldiv64(ticks, SECONDS(1), cntfrq); +} + +/*static inline*/ uint64_t ns_to_ticks(s_time_t ns) +{ + return muldiv64(ns, cntfrq, SECONDS(1)); +} + +/* TODO: On a real system the firmware would have set the frequency in + the CNTFRQ register. Also we'd need to use devicetree to find + the RTC. When we've seen some real systems, we can delete this.
+static uint32_t calibrate_timer(void) +{ + uint32_t sec; + uint64_t start, end; + paddr_t rtc_base = 0x1C170000ull; + volatile uint32_t *rtc; + + ASSERT(!local_irq_is_enabled()); + set_fixmap(FIXMAP_MISC, rtc_base >> PAGE_SHIFT, DEV_SHARED); + rtc = (uint32_t *) FIXMAP_ADDR(FIXMAP_MISC); + + printk("Calibrating timer against RTC..."); + // Turn on the RTC + rtc[3] = 1; + // Wait for an edge + sec = rtc[0] + 1; + do {} while ( rtc[0] != sec ); + // Now time a few seconds + start = READ_CP64(CNTPCT); + do {} while ( rtc[0] < sec + 32 ); + end = READ_CP64(CNTPCT); + printk("done.\n"); + + clear_fixmap(FIXMAP_MISC); + return (end - start) / 32; +} +*/ + +/* Set up the timer on the boot CPU */ +int __init init_xen_time(void) +{ + /* Check that this CPU supports the Generic Timer interface */ + if ( (READ_CP32(ID_PFR1) & ID_PFR1_GT_MASK) != ID_PFR1_GT_v1 ) + panic("CPU does not support the Generic Timer v1 interface.\n"); + + cntfrq = READ_CP32(CNTFRQ); + boot_count = READ_CP64(CNTPCT); + printk("Using generic timer at %"PRIu32" Hz\n", cntfrq); + + return 0; +} + +/* Return number of nanoseconds since boot */ +s_time_t get_s_time(void) +{ + uint64_t ticks = READ_CP64(CNTPCT) - boot_count; + return ticks_to_ns(ticks); +} + +/* Set the timer to wake us up at a particular time. + * Timeout is a Xen system time (nanoseconds since boot); 0 disables the timer. + * Returns 1 on success; 0 if the timeout is too soon or is in the past. */ +int reprogram_timer(s_time_t timeout) +{ + uint64_t deadline; + + if ( timeout == 0 ) + { +#if USE_HYP_TIMER + WRITE_CP32(0, CNTHP_CTL); +#else + WRITE_CP32(0, CNTP_CTL); +#endif + return 1; + } + + deadline = ns_to_ticks(timeout) + boot_count; +#if USE_HYP_TIMER + WRITE_CP64(deadline, CNTHP_CVAL); + WRITE_CP32(CNTx_CTL_ENABLE, CNTHP_CTL); +#else + WRITE_CP64(deadline, CNTP_CVAL); + WRITE_CP32(CNTx_CTL_ENABLE, CNTP_CTL); +#endif + isb(); + + /* No need to check for timers in the past; the Generic Timer fires + * on a signed 63-bit comparison. */ + return 1; +} + +/* Handle the firing timer */ +static void timer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs) +{ + if ( irq == 26 && READ_CP32(CNTHP_CTL) & CNTx_CTL_PENDING ) + { + /* Signal the generic timer code to do its work */ + raise_softirq(TIMER_SOFTIRQ); + /* Disable the timer to avoid more interrupts */ + WRITE_CP32(0, CNTHP_CTL); + } + + if (irq == 30 && READ_CP32(CNTP_CTL) & CNTx_CTL_PENDING ) + { + /* Signal the generic timer code to do its work */ + raise_softirq(TIMER_SOFTIRQ); + /* Disable the timer to avoid more interrupts */ + WRITE_CP32(0, CNTP_CTL); + } +} + +/* Set up the timer interrupt on this CPU */ +void __cpuinit init_timer_interrupt(void) +{ + /* Sensible defaults */ + WRITE_CP64(0, CNTVOFF); /* No VM-specific offset */ + WRITE_CP32(0, CNTKCTL); /* No user-mode access */ +#if USE_HYP_TIMER + /* Let the VMs read the physical counter and timer so they can tell time */ + WRITE_CP32(CNTHCTL_PA|CNTHCTL_TA, CNTHCTL); +#else + /* Cannot let VMs access physical counter if we are using it */ + WRITE_CP32(0, CNTHCTL); +#endif + WRITE_CP32(0, CNTP_CTL); /* Physical timer disabled */ + WRITE_CP32(0, CNTHP_CTL); /* Hypervisor's timer disabled */ + isb(); + + /* XXX Need to find this IRQ number from devicetree?
*/ + request_irq(26, timer_interrupt, 0, "hyptimer", NULL); + request_irq(30, timer_interrupt, 0, "phytimer", NULL); +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h new file mode 100644 index 0000000..8cc9e78 --- /dev/null +++ b/xen/include/asm-arm/time.h @@ -0,0 +1,26 @@ +#ifndef __ARM_TIME_H__ +#define __ARM_TIME_H__ + +typedef unsigned long cycles_t; + +static inline cycles_t get_cycles (void) +{ + return 0; +} + +struct tm; +struct tm wallclock_time(void); + + +/* Set up the timer interrupt on this CPU */ +extern void __cpuinit init_timer_interrupt(void); + +#endif /* __ARM_TIME_H__ */ +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
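Both conversions lean on muldiv64() so the multiply happens at full width before the divide, and neither direction overflows for realistic counter frequencies. A host-compilable sketch of the arithmetic, using an invented 100 MHz cntfrq (the real value comes from CNTFRQ at boot); the __uint128_t intermediate stands in for muldiv64()'s wide multiply:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cntfrq = 100000000;   /* assumed: 100 MHz, i.e. 1e8 ticks/s */
        uint64_t ticks  = 250000000;   /* 2.5 seconds' worth of ticks */

        /* ticks_to_ns(): muldiv64(ticks, SECONDS(1), cntfrq) */
        uint64_t ns   = (uint64_t)((__uint128_t)ticks * 1000000000ull / cntfrq);
        /* ns_to_ticks(): muldiv64(ns, cntfrq, SECONDS(1)) */
        uint64_t back = (uint64_t)((__uint128_t)ns * cntfrq / 1000000000ull);

        /* 250000000 ticks -> 2500000000 ns -> 250000000 ticks */
        printf("%llu ticks -> %llu ns -> %llu ticks\n",
               (unsigned long long)ticks, (unsigned long long)ns,
               (unsigned long long)back);
        return 0;
    }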
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 22/25] arm: trap handlers
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Functions executed when exiting from the guest and returning to the guest: trap and hypercall handlers and leave_hypervisor_tail. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/traps.c | 609 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 609 insertions(+), 0 deletions(-) create mode 100644 xen/arch/arm/traps.c diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c new file mode 100644 index 0000000..4346dd7 --- /dev/null +++ b/xen/arch/arm/traps.c @@ -0,0 +1,609 @@ +/* + * xen/arch/arm/traps.c + * + * ARM Trap handlers + * + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <xen/config.h> +#include <xen/init.h> +#include <xen/string.h> +#include <xen/version.h> +#include <xen/smp.h> +#include <xen/symbols.h> +#include <xen/irq.h> +#include <xen/lib.h> +#include <xen/mm.h> +#include <xen/errno.h> +#include <xen/hypercall.h> +#include <xen/softirq.h> +#include <public/xen.h> +#include <asm/regs.h> +#include <asm/cpregs.h> + +#include "io.h" +#include "vtimer.h" +#include "gic.h" + +/* The base of the stack must always be double-word aligned, which means + * that both the kernel half of struct cpu_user_regs (which is pushed in + * entry.S) and struct cpu_info (which lives at the bottom of a Xen + * stack) must be doubleword-aligned in size. */ +static inline void check_stack_alignment_constraints(void) { + BUILD_BUG_ON((sizeof (struct cpu_user_regs)) & 0x7); + BUILD_BUG_ON((offsetof(struct cpu_user_regs, r8_fiq)) & 0x7); + BUILD_BUG_ON((sizeof (struct cpu_info)) & 0x7); +} + +static int debug_stack_lines = 20; +integer_param("debug_stack_lines", debug_stack_lines); + +#define stack_words_per_line 8 + +asmlinkage void __div0(void) +{ + printk("Division by zero in hypervisor.\n"); + BUG(); +} + +/* XXX could/should be common code */ +static void print_xen_info(void) +{ + char taint_str[TAINT_STRING_MAX_LEN]; + char debug = 'n'; + +#ifndef NDEBUG + debug = 'y'; +#endif + + printk("----[ Xen-%d.%d%s x86_64 debug=%c %s ]----\n", + xen_major_version(), xen_minor_version(), xen_extra_version(), + debug, print_tainted(taint_str)); +} + +static const char *decode_fsc(uint32_t fsc, int *level) +{ + const char *msg = NULL; + + switch ( fsc & 0x3f ) + { + case FSC_FLT_TRANS ... FSC_FLT_TRANS + 3: + msg = "Translation fault"; + *level = fsc & FSC_LL_MASK; + break; + case FSC_FLT_ACCESS ... FSC_FLT_ACCESS + 3: + msg = "Access fault"; + *level = fsc & FSC_LL_MASK; + break; + case FSC_FLT_PERM ... FSC_FLT_PERM + 3: + msg = "Permission fault"; + *level = fsc & FSC_LL_MASK; + break; + + case FSC_SEA: + msg = "Synchronous External Abort"; + break; + case FSC_SPE: + msg = "Memory Access Synchronous Parity Error"; + break; + case FSC_APE: + msg = "Memory Access Asynchronous Parity Error"; + break; + case FSC_SEATT ... FSC_SEATT + 3: + msg = "Sync. Ext.
Abort Translation Table"; + *level = fsc & FSC_LL_MASK; + break; + case FSC_SPETT ... FSC_SPETT + 3: + msg = "Sync. Parity. Error Translation Table"; + *level = fsc & FSC_LL_MASK; + break; + case FSC_AF: + msg = "Alignment Fault"; + break; + case FSC_DE: + msg = "Debug Event"; + break; + + case FSC_LKD: + msg = "Implementation Fault: Lockdown Abort"; + break; + case FSC_CPR: + msg = "Implementation Fault: Coprocossor Abort"; + break; + + default: + msg = "Unknown Failure"; + break; + } + return msg; +} + +static const char *fsc_level_str(int level) +{ + switch ( level ) + { + case -1: return ""; + case 1: return " at level 1"; + case 2: return " at level 2"; + case 3: return " at level 3"; + default: return " (level invalid)"; + } +} + +void panic_PAR(uint64_t par, const char *when) +{ + if ( par & PAR_F ) + { + const char *msg; + int level = -1; + int stage = par & PAR_STAGE2 ? 2 : 1; + int second_in_first = !!(par & PAR_STAGE21); + + msg = decode_fsc( (par&PAR_FSC_MASK) >> PAR_FSC_SHIFT, &level); + + printk("PAR: %010"PRIx64": %s stage %d%s%s\n", + par, msg, + stage, + second_in_first ? " during second stage lookup" : "", + fsc_level_str(level)); + } + else + { + printk("PAR: %010"PRIx64": paddr:%010"PRIx64 + " attr %"PRIx64" sh %"PRIx64" %s\n", + par, par & PADDR_MASK, par >> PAR_MAIR_SHIFT, + (par & PAR_SH_MASK) >> PAR_SH_SHIFT, + (par & PAR_NS) ? "Non-Secure" : "Secure"); + } + panic("Error during %s-to-physical address translation\n", when); +} + +void show_registers(struct cpu_user_regs *regs) +{ + static const char *mode_strings[] = { + [PSR_MODE_USR] = "USR", + [PSR_MODE_FIQ] = "FIQ", + [PSR_MODE_IRQ] = "IRQ", + [PSR_MODE_SVC] = "SVC", + [PSR_MODE_MON] = "MON", + [PSR_MODE_ABT] = "ABT", + [PSR_MODE_HYP] = "HYP", + [PSR_MODE_UND] = "UND", + [PSR_MODE_SYS] = "SYS" + }; + + print_xen_info(); + printk("CPU: %d\n", smp_processor_id()); + printk("PC: %08"PRIx32, regs->pc); + if ( !guest_mode(regs) ) + print_symbol(" %s", regs->pc); + printk("\n"); + printk("CPSR: %08"PRIx32" MODE:%s\n", regs->cpsr, + mode_strings[regs->cpsr & PSR_MODE_MASK]); + printk(" R0: %08"PRIx32" R1: %08"PRIx32" R2: %08"PRIx32" R3: %08"PRIx32"\n", + regs->r0, regs->r1, regs->r2, regs->r3); + printk(" R4: %08"PRIx32" R5: %08"PRIx32" R6: %08"PRIx32" R7: %08"PRIx32"\n", + regs->r4, regs->r5, regs->r6, regs->r7); + printk(" R8: %08"PRIx32" R9: %08"PRIx32" R10:%08"PRIx32" R11:%08"PRIx32" R12:%08"PRIx32"\n", + regs->r8, regs->r9, regs->r10, regs->r11, regs->r12); + + if ( guest_mode(regs) ) + { + printk("USR: SP: %08"PRIx32" LR: %08"PRIx32" CPSR:%08"PRIx32"\n", + regs->sp_usr, regs->lr_usr, regs->cpsr); + printk("SVC: SP: %08"PRIx32" LR: %08"PRIx32" SPSR:%08"PRIx32"\n", + regs->sp_svc, regs->lr_svc, regs->spsr_svc); + printk("ABT: SP: %08"PRIx32" LR: %08"PRIx32" SPSR:%08"PRIx32"\n", + regs->sp_abt, regs->lr_abt, regs->spsr_abt); + printk("UND: SP: %08"PRIx32" LR: %08"PRIx32" SPSR:%08"PRIx32"\n", + regs->sp_und, regs->lr_und, regs->spsr_und); + printk("IRQ: SP: %08"PRIx32" LR: %08"PRIx32" SPSR:%08"PRIx32"\n", + regs->sp_irq, regs->lr_irq, regs->spsr_irq); + printk("FIQ: SP: %08"PRIx32" LR: %08"PRIx32" SPSR:%08"PRIx32"\n", + regs->sp_fiq, regs->lr_fiq, regs->spsr_fiq); + printk("FIQ: R8: %08"PRIx32" R9: %08"PRIx32" R10:%08"PRIx32" R11:%08"PRIx32" R12:%08"PRIx32"\n", + regs->r8_fiq, regs->r9_fiq, regs->r10_fiq, regs->r11_fiq, regs->r11_fiq); + printk("\n"); + printk("TTBR0 %08"PRIx32" TTBR1 %08"PRIx32" TTBCR %08"PRIx32"\n", + READ_CP32(TTBR0), READ_CP32(TTBR1), READ_CP32(TTBCR)); + printk("SCTLR %08"PRIx32"\n", 
READ_CP32(SCTLR)); + printk("VTTBR %010"PRIx64"\n", READ_CP64(VTTBR)); + printk("\n"); + } + else + { + printk(" SP: %08"PRIx32" LR: %08"PRIx32"\n", regs->sp, regs->lr); + printk("\n"); + } + + printk("HTTBR %"PRIx64"\n", READ_CP64(HTTBR)); + printk("HDFAR %"PRIx32"\n", READ_CP32(HDFAR)); + printk("HIFAR %"PRIx32"\n", READ_CP32(HIFAR)); + printk("HPFAR %"PRIx32"\n", READ_CP32(HPFAR)); + printk("HCR %08"PRIx32"\n", READ_CP32(HCR)); + printk("HSR %"PRIx32"\n", READ_CP32(HSR)); + printk("\n"); + + printk("DFSR %"PRIx32" DFAR %"PRIx32"\n", READ_CP32(DFSR), READ_CP32(DFAR)); + printk("IFSR %"PRIx32" IFAR %"PRIx32"\n", READ_CP32(IFSR), READ_CP32(IFAR)); + printk("\n"); +} + +static void show_guest_stack(struct cpu_user_regs *regs) +{ + printk("GUEST STACK GOES HERE\n"); +} + +#define STACK_BEFORE_EXCEPTION(regs) ((uint32_t*)(regs)->sp) + +static void show_trace(struct cpu_user_regs *regs) +{ + uint32_t *frame, next, addr, low, high; + + printk("Xen call trace:\n "); + + printk("[<%p>]", _p(regs->pc)); + print_symbol(" %s\n ", regs->pc); + + /* Bounds for range of valid frame pointer. */ + low = (uint32_t)(STACK_BEFORE_EXCEPTION(regs)/* - 2*/); + high = (low & ~(STACK_SIZE - 1)) + + (STACK_SIZE - sizeof(struct cpu_info)); + + /* Frame: + * (largest address) + * | cpu_info + * | [...] | + * | return addr <-----------------, | + * | fp --------------------------------+----' + * | [...] | + * | return addr <------------, | + * | fp ---------------------------+----' + * | [...] | + * | return addr <- regs->fp | + * | fp ---------------------------' + * | + * v (smallest address, sp) + */ + + /* The initial frame pointer. */ + next = regs->fp; + + for ( ; ; ) + { + if ( (next < low) || (next >= high) ) + break; + { + /* Ordinary stack frame. */ + frame = (uint32_t *)next; + next = frame[-1]; + addr = frame[0]; + } + + printk("[<%p>]", _p(addr)); + print_symbol(" %s\n ", addr); + + low = (uint32_t)&frame[1]; + } + + printk("\n"); +} + +void show_stack(struct cpu_user_regs *regs) +{ + uint32_t *stack = STACK_BEFORE_EXCEPTION(regs), addr; + int i; + + if ( guest_mode(regs) ) + return show_guest_stack(regs); + + printk("Xen stack trace from sp=%p:\n ", stack); + + for ( i = 0; i < (debug_stack_lines*stack_words_per_line); i++ ) + { + if ( ((long)stack & (STACK_SIZE-BYTES_PER_LONG)) == 0 ) + break; + if ( (i != 0) && ((i % stack_words_per_line) == 0) ) + printk("\n "); + + addr = *stack++; + printk(" %p", _p(addr)); + } + if ( i == 0 ) + printk("Stack empty."); + printk("\n"); + + show_trace(regs); +} + +void show_execution_state(struct cpu_user_regs *regs) +{ + show_registers(regs); + show_stack(regs); +} + +static void do_unexpected_trap(const char *msg, struct cpu_user_regs *regs) +{ + printk("Unexpected Trap: %s\n", msg); + show_execution_state(regs); + while(1); +} + +asmlinkage void do_trap_undefined_instruction(struct cpu_user_regs *regs) +{ + do_unexpected_trap("Undefined Instruction", regs); +} + +asmlinkage void do_trap_supervisor_call(struct cpu_user_regs *regs) +{ + do_unexpected_trap("Supervisor Call", regs); +} + +asmlinkage void do_trap_prefetch_abort(struct cpu_user_regs *regs) +{ + do_unexpected_trap("Prefetch Abort", regs); +} + +asmlinkage void do_trap_data_abort(struct cpu_user_regs *regs) +{ + do_unexpected_trap("Data Abort", regs); +} + +unsigned long do_arch_0(unsigned int cmd, unsigned long long value) +{ + printk("do_arch_0 cmd=%x arg=%llx\n", cmd, value); + return 0; +} + +typedef unsigned long arm_hypercall_t( + unsigned int, unsigned int, unsigned int, unsigned int, unsigned
int, + unsigned int, unsigned int, unsigned int, unsigned int, unsigned int); + +#define HYPERCALL(x) \ + [ __HYPERVISOR_ ## x ] = (arm_hypercall_t *) do_ ## x + +static arm_hypercall_t *arm_hypercall_table[] = { + HYPERCALL(arch_0), + HYPERCALL(sched_op), + HYPERCALL(console_io), +}; + +static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code) +{ + uint32_t reg, *r; + + switch ( code ) { + case 0xe0 ... 0xef: + reg = code - 0xe0; + r = &regs->r0 + reg; + printk("R%d = %#010"PRIx32" at %#010"PRIx32"\n", reg, *r, regs->pc); + break; + case 0xfd: + printk("Reached %08"PRIx32"\n", regs->pc); + break; + case 0xfe: + printk("%c", (char)(regs->r0 & 0xff)); + break; + case 0xff: + printk("DEBUG\n"); + show_execution_state(regs); + break; + default: + panic("Unhandled debug trap %#x\n", code); + break; + } +} + +static void do_trap_hypercall(struct cpu_user_regs *regs, unsigned long iss) +{ + local_irq_enable(); + + regs->r0 = arm_hypercall_table[iss](regs->r0, + regs->r1, + regs->r2, + regs->r3, + regs->r4, + regs->r5, + regs->r6, + regs->r7, + regs->r8, + regs->r9); +} + +static void do_cp15_32(struct cpu_user_regs *regs, + union hsr hsr) +{ + struct hsr_cp32 cp32 = hsr.cp32; + uint32_t *r = &regs->r0 + cp32.reg; + + if ( !cp32.ccvalid ) { + dprintk(XENLOG_ERR, "cp_15(32): need to handle invalid condition codes\n"); + domain_crash_synchronous(); + } + if ( cp32.cc != 0xe ) { + dprintk(XENLOG_ERR, "cp_15(32): need to handle condition codes %x\n", + cp32.cc); + domain_crash_synchronous(); + } + + switch ( hsr.bits & HSR_CP32_REGS_MASK ) + { + case HSR_CPREG32(CLIDR): + if ( !cp32.read ) + { + dprintk(XENLOG_ERR, + "attempt to write to read-only register CLIDR\n"); + domain_crash_synchronous(); + } + *r = READ_CP32(CLIDR); + break; + case HSR_CPREG32(CCSIDR): + if ( !cp32.read ) + { + dprintk(XENLOG_ERR, + "attempt to write to read-only register CCSIDR\n"); + domain_crash_synchronous(); + } + *r = READ_CP32(CCSIDR); + break; + case HSR_CPREG32(DCCISW): + if ( cp32.read ) + { + dprintk(XENLOG_ERR, + "attempt to read from write-only register DCCISW\n"); + domain_crash_synchronous(); + } + WRITE_CP32(*r, DCCISW); + break; + case HSR_CPREG32(CNTP_CTL): + case HSR_CPREG32(CNTP_TVAL): + /* emulate timer */ + break; + default: + printk("%s p15, %d, r%d, cr%d, cr%d, %d @ %#08x\n", + cp32.read ? "mrc" : "mcr", + cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc); + panic("unhandled 32-bit CP15 access %#x\n", hsr.bits & HSR_CP32_REGS_MASK); + } + regs->pc += cp32.len ? 4 : 2; + +} + +static void do_cp15_64(struct cpu_user_regs *regs, + union hsr hsr) +{ + struct hsr_cp64 cp64 = hsr.cp64; + + if ( !cp64.ccvalid ) { + dprintk(XENLOG_ERR, "cp_15(64): need to handle invalid condition codes\n"); + domain_crash_synchronous(); + } + if ( cp64.cc != 0xe ) { + dprintk(XENLOG_ERR, "cp_15(64): need to handle condition codes %x\n", + cp64.cc); + domain_crash_synchronous(); + } + + switch ( hsr.bits & HSR_CP64_REGS_MASK ) + { + case HSR_CPREG64(CNTPCT): + /* emulate timer */ + break; + default: + printk("%s p15, %d, r%d, r%d, cr%d @ %#08x\n", + cp64.read ? "mrrc" : "mcrr", + cp64.op1, cp64.reg1, cp64.reg2, cp64.crm, regs->pc); + panic("unhandled 64-bit CP15 access %#x\n", hsr.bits & HSR_CP64_REGS_MASK); + } + regs->pc += cp64.len ?
4 : 2; + +} + +static void do_trap_data_abort_guest(struct cpu_user_regs *regs, + struct hsr_dabt dabt) +{ + const char *msg; + int level = -1; + mmio_info_t info; + + if (dabt.s1ptw) + goto bad_data_abort; + + info.dabt = dabt; + info.gva = READ_CP32(HDFAR); + info.gpa = gva_to_ipa(info.gva); + + if (handle_mmio(&info)) + { + regs->pc += dabt.len ? 4 : 2; + return; + } + +bad_data_abort: + + msg = decode_fsc( dabt.dfsc, &level); + + printk("Guest data abort: %s%s%s\n" + " gva=%"PRIx32" gpa=%"PRIpaddr"\n", + msg, dabt.s1ptw ? " S2 during S1" : "", + fsc_level_str(level), + info.gva, info.gpa); + if (dabt.valid) + printk(" size=%d sign=%d write=%d reg=%d\n", + dabt.size, dabt.sign, dabt.write, dabt.reg); + else + printk(" instruction syndrome invalid\n"); + printk(" eat=%d cm=%d s1ptw=%d dfsc=%d\n", + dabt.eat, dabt.cache, dabt.s1ptw, dabt.dfsc); + + show_execution_state(regs); + panic("Unhandled guest data abort\n"); +} + +asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs) +{ + union hsr hsr = { .bits = READ_CP32(HSR) }; + + switch (hsr.ec) { + case HSR_EC_CP15_32: + do_cp15_32(regs, hsr); + break; + case HSR_EC_CP15_64: + do_cp15_64(regs, hsr); + break; + case HSR_EC_HVC: + if ( (hsr.iss & 0xff00) == 0xff00 ) + return do_debug_trap(regs, hsr.iss & 0x00ff); + do_trap_hypercall(regs, hsr.iss); + break; + case HSR_EC_DATA_ABORT_GUEST: + do_trap_data_abort_guest(regs, hsr.dabt); + break; + default: + printk("Hypervisor Trap. HSR=0x%x EC=0x%x IL=%x Syndrome=%"PRIx32"\n", + hsr.bits, hsr.ec, hsr.len, hsr.iss); + do_unexpected_trap("Hypervisor", regs); + } +} + +asmlinkage void do_trap_irq(struct cpu_user_regs *regs) +{ + gic_interrupt(regs, 0); +} + +asmlinkage void do_trap_fiq(struct cpu_user_regs *regs) +{ + gic_interrupt(regs, 1); +} + +asmlinkage void leave_hypervisor_tail(void) +{ + while (1) + { + local_irq_disable(); + if (!softirq_pending(smp_processor_id())) + return; + local_irq_enable(); + do_softirq(); + } +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ -- 1.7.2.5
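Note how do_trap_hypercall() takes the hypercall number from the HVC immediate (hsr.iss) and the arguments straight from r0-r9, so a guest-side stub is a single trapping instruction. The sketch below is an illustration only, assuming GCC inline assembly and the __HYPERVISOR_console_io/CONSOLEIO_write numbering (18 and 0) from the public headers; it is not part of the patch:

    /* Hypothetical guest-side stub matching the convention decoded above:
     * hypercall number in the HVC #imm16, args in r0..., result in r0. */
    static inline long hvc_console_write(const char *buf, int count)
    {
        register unsigned long r0 asm("r0") = 0;  /* CONSOLEIO_write (assumed) */
        register unsigned long r1 asm("r1") = count;
        register unsigned long r2 asm("r2") = (unsigned long)buf;

        asm volatile("hvc #18"       /* __HYPERVISOR_console_io (assumed) */
                     : "+r" (r0)
                     : "r" (r1), "r" (r2)
                     : "memory");
        return r0;
    }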
<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 23/25] arm: vgic emulation
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> - emulation of the GICD interface for the guest; - interrupt injection into the guest; - keep track of inflight irqs using a list, ordered by priority. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/domain.c | 6 + xen/arch/arm/gic.h | 3 + xen/arch/arm/io.c | 1 + xen/arch/arm/io.h | 2 + xen/arch/arm/irq.c | 3 +- xen/arch/arm/vgic.c | 605 ++++++++++++++++++++++++++++++++++++++++++ xen/include/asm-arm/domain.h | 30 ++ 7 files changed, 649 insertions(+), 1 deletions(-) create mode 100644 xen/arch/arm/vgic.c diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 0844b37..7e681ab 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -212,6 +212,9 @@ int vcpu_initialise(struct vcpu *v) { int rc = 0; + if ( (rc = vcpu_vgic_init(v)) != 0 ) + return rc; + return rc; } @@ -230,6 +233,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags) d->max_vcpus = 8; + if ( (rc = domain_vgic_init(d)) != 0 ) + goto fail; + rc = 0; fail: return rc; diff --git a/xen/arch/arm/gic.h b/xen/arch/arm/gic.h index 63b6648..81c388d 100644 --- a/xen/arch/arm/gic.h +++ b/xen/arch/arm/gic.h @@ -121,6 +121,9 @@ #define GICH_LR_CPUID_SHIFT 9 #define GICH_VTR_NRLRGS 0x3f +extern int domain_vgic_init(struct domain *d); +extern int vcpu_vgic_init(struct vcpu *v); +extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual); extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq); extern void gic_route_irqs(void); diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c index 8789705..4461225 100644 --- a/xen/arch/arm/io.c +++ b/xen/arch/arm/io.c @@ -24,6 +24,7 @@ static const struct mmio_handler *const mmio_handlers[] = { + &vgic_distr_mmio_handler, }; #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers) diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h index d7847e3..8cc5ca7 100644 --- a/xen/arch/arm/io.h +++ b/xen/arch/arm/io.h @@ -39,6 +39,8 @@ struct mmio_handler { mmio_write_t write_handler; }; +extern const struct mmio_handler vgic_distr_mmio_handler; + extern int handle_mmio(mmio_info_t *info); #endif diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c index 5663762..7820310 100644 --- a/xen/arch/arm/irq.c +++ b/xen/arch/arm/irq.c @@ -136,7 +136,8 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq) desc->status |= IRQ_INPROGRESS; - /* XXX: inject irq into the guest */ + /* XXX: inject irq into all guest vcpus */ + vgic_vcpu_inject_irq(d->vcpu[0], irq, 0); goto out_no_end; } diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c new file mode 100644 index 0000000..26eae55 --- /dev/null +++ b/xen/arch/arm/vgic.c @@ -0,0 +1,605 @@ +/* + * xen/arch/arm/vgic.c + * + * ARM Virtual Generic Interrupt Controller support + * + * Ian Campbell <ian.campbell@citrix.com> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details.
+ */ + +#include <xen/config.h> +#include <xen/lib.h> +#include <xen/init.h> +#include <xen/softirq.h> +#include <xen/irq.h> +#include <xen/sched.h> + +#include <asm/current.h> + +#include "io.h" +#include "gic.h" + +#define VGIC_DISTR_BASE_ADDRESS 0x000000002c001000 + +#define REG(n) (n/4) + +/* Number of ranks of interrupt registers for a domain */ +#define DOMAIN_NR_RANKS(d) (((d)->arch.vgic.nr_lines+31)/32) + +/* + * Rank containing GICD_<FOO><n> for GICD_<FOO> with + * <b>-bits-per-interrupt + */ +static inline int REG_RANK_NR(int b, uint32_t n) +{ + switch ( b ) + { + case 8: return n >> 3; + case 4: return n >> 2; + case 2: return n >> 1; + default: BUG(); + } +} + +/* + * Offset of GICD_<FOO><n> with its rank, for GICD_<FOO> with + * <b>-bits-per-interrupt. + */ +#define REG_RANK_INDEX(b, n) ((n) & ((b)-1)) + +/* + * Returns rank corresponding to a GICD_<FOO><n> register for + * GICD_<FOO> with <b>-bits-per-interrupt. + */ +static struct vgic_irq_rank *vgic_irq_rank(struct vcpu *v, int b, int n) +{ + int rank = REG_RANK_NR(b, n); + + if ( rank == 0 ) + return &v->arch.vgic.private_irqs; + else if ( rank <= DOMAIN_NR_RANKS(v->domain) ) + return &v->domain->arch.vgic.shared_irqs[rank - 1]; + else + return NULL; +} + +int domain_vgic_init(struct domain *d) +{ + int i; + + d->arch.vgic.ctlr = 0; + d->arch.vgic.nr_lines = 32; + d->arch.vgic.shared_irqs = + xmalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d)); + d->arch.vgic.pending_irqs = + xmalloc_array(struct pending_irq, + d->arch.vgic.nr_lines + (32 * d->max_vcpus)); + for (i=0; i<d->arch.vgic.nr_lines + (32 * d->max_vcpus); i++) + INIT_LIST_HEAD(&d->arch.vgic.pending_irqs[i].link); + for (i=0; i<DOMAIN_NR_RANKS(d); i++) + spin_lock_init(&d->arch.vgic.shared_irqs[i].lock); + return 0; +} + +int vcpu_vgic_init(struct vcpu *v) +{ + int i; + memset(&v->arch.vgic.private_irqs, 0, sizeof(v->arch.vgic.private_irqs)); + + spin_lock_init(&v->arch.vgic.private_irqs.lock); + + /* For SGI and PPI the target is always this CPU */ + for ( i = 0 ; i < 8 ; i++ ) + v->arch.vgic.private_irqs.itargets[i] = + (1<<(v->vcpu_id+0)) + | (1<<(v->vcpu_id+8)) + | (1<<(v->vcpu_id+16)) + | (1<<(v->vcpu_id+24)); + INIT_LIST_HEAD(&v->arch.vgic.inflight_irqs); + spin_lock_init(&v->arch.vgic.lock); + + return 0; +} + +#define vgic_lock(v) spin_lock(&(v)->domain->arch.vgic.lock) +#define vgic_unlock(v) spin_unlock(&(v)->domain->arch.vgic.lock) + +#define vgic_lock_rank(v, r) spin_lock(&(r)->lock) +#define vgic_unlock_rank(v, r) spin_unlock(&(r)->lock) + +static uint32_t byte_read(uint32_t val, int sign, int offset) +{ + int byte = offset & 0x3; + + val = val >> (8*byte); + if ( sign && (val & 0x80) ) + val |= 0xffffff00; + else + val &= 0x000000ff; + return val; +} + +static void byte_write(uint32_t *reg, uint32_t var, int offset) +{ + int byte = offset & 0x3; + + var &= (0xff << (8*byte)); + + *reg &= ~(0xff << (8*byte)); + *reg |= var; +} + +static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info) +{ + struct hsr_dabt dabt = info->dabt; + struct cpu_user_regs *regs = guest_cpu_user_regs(); + uint32_t *r = &regs->r0 + dabt.reg; + struct vgic_irq_rank *rank; + int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS); + int gicd_reg = REG(offset); + + switch ( gicd_reg ) + { + case GICD_CTLR: + if ( dabt.size != 2 ) goto bad_width; + vgic_lock(v); + *r = v->domain->arch.vgic.ctlr; + vgic_unlock(v); + return 1; + case GICD_TYPER: + if ( dabt.size != 2 ) goto bad_width; + /* No secure world support for guests.
*/ + vgic_lock(v); + *r = ( (v->domain->max_vcpus<<5) & GICD_TYPE_CPUS ) + |( ((v->domain->arch.vgic.nr_lines/32)) & GICD_TYPE_LINES ); + vgic_unlock(v); + return 1; + case GICD_IIDR: + if ( dabt.size != 2 ) goto bad_width; + /* + * XXX Do we need a JEP106 manufacturer ID? + * Just use the physical h/w value for now + */ + *r = 0x0000043b; + return 1; + + /* Implementation defined -- read as zero */ + case REG(0x020) ... REG(0x03c): + goto read_as_zero; + + case GICD_IGROUPR ... GICD_IGROUPRN: + /* We do not implement security extensions for guests, read zero */ + goto read_as_zero; + + case GICD_ISENABLER ... GICD_ISENABLERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ISENABLER); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = rank->ienable; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ICENABLER ... GICD_ICENABLERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ICENABLER); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = rank->ienable; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ISPENDR ... GICD_ISPENDRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ISPENDR); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = byte_read(rank->ipend, dabt.sign, offset); + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ICPENDR ... GICD_ICPENDRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ICPENDR); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = byte_read(rank->ipend, dabt.sign, offset); + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ISACTIVER ... GICD_ISACTIVERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ISACTIVER); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = rank->iactive; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ICACTIVER ... GICD_ICACTIVERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ICACTIVER); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = rank->iactive; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ITARGETSR ... GICD_ITARGETSRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ITARGETSR); + if ( rank == NULL) goto read_as_zero; + + vgic_lock_rank(v, rank); + *r = rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR)]; + if ( dabt.size == 0 ) + *r = byte_read(*r, dabt.sign, offset); + vgic_unlock_rank(v, rank); + return 1; + + case GICD_IPRIORITYR ... GICD_IPRIORITYRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_IPRIORITYR); + if ( rank == NULL) goto read_as_zero; + + vgic_lock_rank(v, rank); + *r = rank->ipriority[REG_RANK_INDEX(8, gicd_reg - GICD_IPRIORITYR)]; + if ( dabt.size == 0 ) + *r = byte_read(*r, dabt.sign, offset); + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ICFGR ... GICD_ICFGRN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 2, gicd_reg - GICD_ICFGR); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)]; + vgic_unlock_rank(v, rank); + return 0; + + case GICD_NSACR ... 
GICD_NSACRN: + /* We do not implement security extensions for guests, read zero */ + goto read_as_zero; + + case GICD_SGIR: + if ( dabt.size != 2 ) goto bad_width; + /* Write only -- read unknown */ + *r = 0xdeadbeef; + return 1; + + case GICD_CPENDSGIR ... GICD_CPENDSGIRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_CPENDSGIR); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = byte_read(rank->pendsgi, dabt.sign, offset); + vgic_unlock_rank(v, rank); + return 1; + + case GICD_SPENDSGIR ... GICD_SPENDSGIRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_SPENDSGIR); + if ( rank == NULL) goto read_as_zero; + vgic_lock_rank(v, rank); + *r = byte_read(rank->pendsgi, dabt.sign, offset); + vgic_unlock_rank(v, rank); + return 1; + + /* Implementation defined -- read as zero */ + case REG(0xfd0) ... REG(0xfe4): + goto read_as_zero; + + case GICD_ICPIDR2: + if ( dabt.size != 2 ) goto bad_width; + printk("vGICD: unhandled read from ICPIDR2\n"); + return 0; + + /* Implementation defined -- read as zero */ + case REG(0xfec) ... REG(0xffc): + goto read_as_zero; + + /* Reserved -- read as zero */ + case REG(0x00c) ... REG(0x01c): + case REG(0x040) ... REG(0x07c): + case REG(0x7fc): + case REG(0xbfc): + case REG(0xf04) ... REG(0xf0c): + case REG(0xf30) ... REG(0xfcc): + goto read_as_zero; + + default: + printk("vGICD: unhandled read r%d offset %#08x\n", + dabt.reg, offset); + return 0; + } + +bad_width: + printk("vGICD: bad read width %d r%d offset %#08x\n", + dabt.size, dabt.reg, offset); + domain_crash_synchronous(); + return 0; + +read_as_zero: + if ( dabt.size != 2 ) goto bad_width; + *r = 0; + return 1; +} + +static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info) +{ + struct hsr_dabt dabt = info->dabt; + struct cpu_user_regs *regs = guest_cpu_user_regs(); + uint32_t *r = &regs->r0 + dabt.reg; + struct vgic_irq_rank *rank; + int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS); + int gicd_reg = REG(offset); + + switch ( gicd_reg ) + { + case GICD_CTLR: + if ( dabt.size != 2 ) goto bad_width; + /* Ignore all but the enable bit */ + v->domain->arch.vgic.ctlr = (*r) & GICD_CTL_ENABLE; + return 1; + + /* R/O -- write ignored */ + case GICD_TYPER: + case GICD_IIDR: + goto write_ignore; + + /* Implementation defined -- write ignored */ + case REG(0x020) ... REG(0x03c): + goto write_ignore; + + case GICD_IGROUPR ... GICD_IGROUPRN: + /* We do not implement security extensions for guests, write ignore */ + goto write_ignore; + + case GICD_ISENABLER ... GICD_ISENABLERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ISENABLER); + if ( rank == NULL) goto write_ignore; + vgic_lock_rank(v, rank); + rank->ienable |= *r; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ICENABLER ... GICD_ICENABLERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ICENABLER); + if ( rank == NULL) goto write_ignore; + vgic_lock_rank(v, rank); + rank->ienable &= ~*r; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ISPENDR ... GICD_ISPENDRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + printk("vGICD: unhandled %s write %#"PRIx32" to ISPENDR%d\n", + dabt.size ? "word" : "byte", *r, gicd_reg - GICD_ISPENDR); + return 0; + + case GICD_ICPENDR ...
GICD_ICPENDRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + printk("vGICD: unhandled %s write %#"PRIx32" to ICPENDR%d\n", + dabt.size ? "word" : "byte", *r, gicd_reg - GICD_ICPENDR); + return 0; + + case GICD_ISACTIVER ... GICD_ISACTIVERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ISACTIVER); + if ( rank == NULL) goto write_ignore; + vgic_lock_rank(v, rank); + rank->iactive &= ~*r; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ICACTIVER ... GICD_ICACTIVERN: + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ICACTIVER); + if ( rank == NULL) goto write_ignore; + vgic_lock_rank(v, rank); + rank->iactive &= ~*r; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ITARGETSR ... GICD_ITARGETSR + 7: + /* SGI/PPI target is read only */ + goto write_ignore; + + case GICD_ITARGETSR + 8 ... GICD_ITARGETSRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_ITARGETSR); + if ( rank == NULL) goto write_ignore; + vgic_lock_rank(v, rank); + if ( dabt.size == 2 ) + rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR)] = *r; + else + byte_write(&rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR)], + *r, offset); + vgic_unlock_rank(v, rank); + return 1; + + case GICD_IPRIORITYR ... GICD_IPRIORITYRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 8, gicd_reg - GICD_IPRIORITYR); + if ( rank == NULL) goto write_ignore; + vgic_lock_rank(v, rank); + if ( dabt.size == 2 ) + rank->ipriority[REG_RANK_INDEX(8, gicd_reg - GICD_IPRIORITYR)] = *r; + else + byte_write(&rank->ipriority[REG_RANK_INDEX(8, gicd_reg - GICD_IPRIORITYR)], + *r, offset); + vgic_unlock_rank(v, rank); + return 1; + + case GICD_ICFGR: /* SGIs */ + goto write_ignore; + case GICD_ICFGR + 1: /* PPIs */ + /* It is implementation defined if these are writeable. We chose not */ + goto write_ignore; + case GICD_ICFGR + 2 ... GICD_ICFGRN: /* SPIs */ + if ( dabt.size != 2 ) goto bad_width; + rank = vgic_irq_rank(v, 2, gicd_reg - GICD_ICFGR); + if ( rank == NULL) goto write_ignore; + vgic_lock_rank(v, rank); + rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)] = *r; + vgic_unlock_rank(v, rank); + return 1; + + case GICD_NSACR ... GICD_NSACRN: + /* We do not implement security extensions for guests, write ignore */ + goto write_ignore; + + case GICD_SGIR: + if ( dabt.size != 2 ) goto bad_width; + printk("vGICD: unhandled write %#"PRIx32" to GICD_SGIR\n", + *r); + return 0; + + case GICD_CPENDSGIR ... GICD_CPENDSGIRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + printk("vGICD: unhandled %s write %#"PRIx32" to ICPENDSGIR%d\n", + dabt.size ? "word" : "byte", *r, gicd_reg - GICD_CPENDSGIR); + return 0; + + case GICD_SPENDSGIR ... GICD_SPENDSGIRN: + if ( dabt.size != 0 && dabt.size != 2 ) goto bad_width; + printk("vGICD: unhandled %s write %#"PRIx32" to ISPENDSGIR%d\n", + dabt.size ? "word" : "byte", *r, gicd_reg - GICD_SPENDSGIR); + return 0; + + /* Implementation defined -- write ignored */ + case REG(0xfd0) ... REG(0xfe4): + goto write_ignore; + + /* R/O -- write ignore */ + case GICD_ICPIDR2: + goto write_ignore; + + /* Implementation defined -- write ignored */ + case REG(0xfec) ... REG(0xffc): + goto write_ignore; + + /* Reserved -- write ignored */ + case REG(0x00c) ... REG(0x01c): + case REG(0x040) ... REG(0x07c): + case REG(0x7fc): + case REG(0xbfc): + case REG(0xf04) ... REG(0xf0c): + case REG(0xf30) ...
REG(0xfcc): + goto write_ignore; + + default: + printk("vGICD: unhandled write r%d=%"PRIx32" offset %#08x\n", + dabt.reg, *r, offset); + return 0; + } + +bad_width: + printk("vGICD: bad write width %d r%d=%"PRIx32" offset %#08x\n", + dabt.size, dabt.reg, *r, offset); + domain_crash_synchronous(); + return 0; + +write_ignore: + if ( dabt.size != 2 ) goto bad_width; + return 0; +} + +static int vgic_distr_mmio_check(struct vcpu *v, paddr_t addr) +{ + return addr >= VGIC_DISTR_BASE_ADDRESS && addr < (VGIC_DISTR_BASE_ADDRESS+PAGE_SIZE); +} + +const struct mmio_handler vgic_distr_mmio_handler = { + .check_handler = vgic_distr_mmio_check, + .read_handler = vgic_distr_mmio_read, + .write_handler = vgic_distr_mmio_write, +}; + +struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq) +{ + struct pending_irq *n; + /* Pending irqs allocation strategy: the first vgic.nr_lines irqs + * are used for SPIs; the rest are used for per cpu irqs */ + if ( irq < 32 ) + n = &v->domain->arch.vgic.pending_irqs[irq + (v->vcpu_id * 32) + + v->domain->arch.vgic.nr_lines]; + else + n = &v->domain->arch.vgic.pending_irqs[irq - 32]; + return n; +} + +void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual) +{ + int idx = irq >> 2, byte = irq & 0x3; + uint8_t priority; + struct vgic_irq_rank *rank = vgic_irq_rank(v, 8, idx); + struct pending_irq *iter, *n = irq_to_pending(v, irq); + + /* irq still pending */ + if (!list_empty(&n->link)) + return; + + priority = byte_read(rank->ipriority[REG_RANK_INDEX(8, idx)], 0, byte); + + n->irq = irq; + n->priority = priority; + if (!virtual) + n->desc = irq_to_desc(irq); + else + n->desc = NULL; + + gic_set_guest_irq(irq, GICH_LR_PENDING, priority); + + spin_lock(&v->arch.vgic.lock); + list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, link ) + { + if ( iter->priority < priority ) + { + list_add_tail(&n->link, &iter->link); + spin_unlock(&v->arch.vgic.lock); + return; + } + } + list_add(&n->link, &v->arch.vgic.inflight_irqs); + spin_unlock(&v->arch.vgic.lock); + /* we have a new higher priority irq, inject it into the guest */ + cpu_raise_softirq(v->processor, VGIC_SOFTIRQ); +} + +static void vgic_softirq(void) +{ + if (list_empty(&current->arch.vgic.inflight_irqs)) + return; + + gic_inject_irq_start(); +} + +static int __init init_vgic_softirq(void) +{ + open_softirq(VGIC_SOFTIRQ, vgic_softirq); + return 0; +} +__initcall(init_vgic_softirq); +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ + diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 2226a24..2cd0bd3 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -6,6 +6,15 @@ #include <asm/page.h> #include <asm/p2m.h> +/* Represents state corresponding to a block of 32 interrupts */ +struct vgic_irq_rank { + spinlock_t lock; /* Covers access to all other members of this struct */ + uint32_t ienable, iactive, ipend, pendsgi; + uint32_t icfg[2]; + uint32_t ipriority[8]; + uint32_t itargets[8]; +}; + struct pending_irq { int irq; @@ -18,6 +27,22 @@ struct arch_domain { struct p2m_domain p2m; + struct { + /* + * Covers access to other members of this struct _except_ for + * shared_irqs where each member contains its own locking. + * + * If both classes of lock are required then this lock must be + * taken first. If multiple rank locks are required (including + * the per-vcpu private_irqs rank) then they must be taken in + * rank order.
+ */ + spinlock_t lock; + int ctlr; + int nr_lines; + struct vgic_irq_rank *shared_irqs; + struct pending_irq *pending_irqs; + } vgic; } __cacheline_aligned; struct arch_vcpu @@ -27,6 +52,11 @@ struct arch_vcpu uint32_t sctlr; uint32_t ttbr0, ttbr1, ttbcr; + struct { + struct vgic_irq_rank private_irqs; + struct list_head inflight_irqs; + spinlock_t lock; + } vgic; } __cacheline_aligned; void vcpu_show_execution_state(struct vcpu *); -- 1.7.2.5
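The byte_read()/byte_write() helpers are what let one 32-bit field back the byte-wide guest accesses to registers such as GICD_IPRIORITYR and GICD_ITARGETSR. A standalone, host-compilable illustration of the arithmetic (the register value is invented):

    #include <stdint.h>
    #include <stdio.h>

    /* Copies of the vgic.c helpers above, for illustration only. */
    static uint32_t byte_read(uint32_t val, int sign, int offset)
    {
        int byte = offset & 0x3;
        val = val >> (8*byte);
        if ( sign && (val & 0x80) )
            val |= 0xffffff00;
        else
            val &= 0x000000ff;
        return val;
    }

    static void byte_write(uint32_t *reg, uint32_t var, int offset)
    {
        int byte = offset & 0x3;
        var &= (0xff << (8*byte));
        *reg &= ~(0xff << (8*byte));
        *reg |= var;
    }

    int main(void)
    {
        uint32_t ipriority = 0xa0804020;  /* priorities of four irqs */

        /* A byte load at offset 2 sees the third irq's priority: 0x80 */
        printf("%#x\n", byte_read(ipriority, 0, 2));

        /* A byte store at offset 1: the value arrives already shifted
         * into place, as the handler passes the faulting register
         * through unmodified and byte_write() masks the right lane. */
        byte_write(&ipriority, 0x00005500, 1);
        printf("%#x\n", ipriority);       /* 0xa0805520 */
        return 0;
    }

<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 24/25] arm: vtimer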
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Emulation of the generic timer kernel registers. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- xen/arch/arm/domain.c | 4 + xen/arch/arm/traps.c | 4 +- xen/arch/arm/vtimer.c | 148 ++++++++++++++++++++++++++++++++++++++++++ xen/arch/arm/vtimer.h | 35 ++++++++++ xen/include/asm-arm/domain.h | 7 ++ 5 files changed, 196 insertions(+), 2 deletions(-) create mode 100644 xen/arch/arm/vtimer.c create mode 100644 xen/arch/arm/vtimer.h diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 7e681ab..70f71c3 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -12,6 +12,7 @@ #include <asm/irq.h> #include "gic.h" +#include "vtimer.h" DEFINE_PER_CPU(struct vcpu *, curr_vcpu); @@ -215,6 +216,9 @@ int vcpu_initialise(struct vcpu *v) if ( (rc = vcpu_vgic_init(v)) != 0 ) return rc; + if ( (rc = vcpu_vtimer_init(v)) != 0 ) + return rc; + return rc; } diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 4346dd7..395d0af 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -468,7 +468,7 @@ static void do_cp15_32(struct cpu_user_regs *regs, break; case HSR_CPREG32(CNTP_CTL): case HSR_CPREG32(CNTP_TVAL): - /* emulate timer */ + BUG_ON(!vtimer_emulate(regs, hsr)); break; default: printk("%s p15, %d, r%d, cr%d, cr%d, %d @ %#08x\n", @@ -498,7 +498,7 @@ static void do_cp15_64(struct cpu_user_regs *regs, switch ( hsr.bits & HSR_CP64_REGS_MASK ) { case HSR_CPREG64(CNTPCT): - /* emulate timer */ + BUG_ON(!vtimer_emulate(regs, hsr)); break; default: printk("%s p15, %d, r%d, r%d, cr%d @ %#08x\n", diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c new file mode 100644 index 0000000..3ebf5b1 --- /dev/null +++ b/xen/arch/arm/vtimer.c @@ -0,0 +1,148 @@ +/* + * xen/arch/arm/vtimer.c + * + * ARM Virtual Timer emulation support + * + * Ian Campbell <ian.campbell@citrix.com> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <xen/config.h> +#include <xen/lib.h> +#include <xen/timer.h> +#include <xen/sched.h> +#include "gic.h" + +extern s_time_t ticks_to_ns(uint64_t ticks); +extern uint64_t ns_to_ticks(s_time_t ns); + +static void vtimer_expired(void *data) +{ + struct vcpu *v = data; + v->arch.vtimer.ctl |= CNTx_CTL_PENDING; + v->arch.vtimer.ctl &= ~CNTx_CTL_MASK; + vgic_vcpu_inject_irq(v, 30, 1); +} + +int vcpu_vtimer_init(struct vcpu *v) +{ + init_timer(&v->arch.vtimer.timer, + vtimer_expired, v, + smp_processor_id()); + v->arch.vtimer.ctl = 0; + v->arch.vtimer.offset = NOW(); + v->arch.vtimer.cval = NOW(); + return 0; +} + +static int vtimer_emulate_32(struct cpu_user_regs *regs, union hsr hsr) +{ + struct vcpu *v = current; + struct hsr_cp32 cp32 = hsr.cp32; + uint32_t *r = &regs->r0 + cp32.reg; + s_time_t now; + + switch ( hsr.bits & HSR_CP32_REGS_MASK ) + { + case HSR_CPREG32(CNTP_CTL): + if ( cp32.read ) + { + *r = v->arch.vtimer.ctl; + } + else + { + v->arch.vtimer.ctl = *r; + + if ( v->arch.vtimer.ctl & CNTx_CTL_ENABLE ) + { + set_timer(&v->arch.vtimer.timer, + v->arch.vtimer.cval + v->arch.vtimer.offset); + } + else + stop_timer(&v->arch.vtimer.timer); + } + + return 1; + + case HSR_CPREG32(CNTP_TVAL): + now = NOW() - v->arch.vtimer.offset; + if ( cp32.read ) + { + *r = (uint32_t)(ns_to_ticks(v->arch.vtimer.cval - now) & 0xffffffffull); + } + else + { + v->arch.vtimer.cval = now + ticks_to_ns(*r); + if ( v->arch.vtimer.ctl & CNTx_CTL_ENABLE ) + { + set_timer(&v->arch.vtimer.timer, + v->arch.vtimer.cval + v->arch.vtimer.offset); + } + } + + return 1; + + default: + return 0; + } +} + +static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr) +{ + struct vcpu *v = current; + struct hsr_cp64 cp64 = hsr.cp64; + uint32_t *r1 = &regs->r0 + cp64.reg1; + uint32_t *r2 = &regs->r0 + cp64.reg2; + s_time_t now; + + switch ( hsr.bits & HSR_CP64_REGS_MASK ) + { + case HSR_CPREG64(CNTPCT): + if ( cp64.read ) + { + now = NOW() - v->arch.vtimer.offset; + *r1 = (uint32_t)(now & 0xffffffff); + *r2 = (uint32_t)(now >> 32); + return 1; + } + else + { + printk("WRITE to R/O CNTPCT\n"); + return 0; + } + + default: + return 0; + } +} + +int vtimer_emulate(struct cpu_user_regs *regs, union hsr hsr) +{ + switch (hsr.ec) { + case HSR_EC_CP15_32: + return vtimer_emulate_32(regs, hsr); + case HSR_EC_CP15_64: + return vtimer_emulate_64(regs, hsr); + default: + return 0; + } +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/vtimer.h b/xen/arch/arm/vtimer.h new file mode 100644 index 0000000..d87bb25 --- /dev/null +++ b/xen/arch/arm/vtimer.h @@ -0,0 +1,35 @@ +/* + * xen/arch/arm/vtimer.h + * + * ARM Virtual Timer emulation support + * + * Ian Campbell <ian.campbell@citrix.com> + * Copyright (c) 2011 Citrix Systems. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details.
+ */ + +#ifndef __ARCH_ARM_VTIMER_H__ +#define __ARCH_ARM_VTIMER_H__ + +extern int vcpu_vtimer_init(struct vcpu *v); +extern int vtimer_emulate(struct cpu_user_regs *regs, union hsr hsr); + +#endif + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 2cd0bd3..3372d14 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -57,6 +57,13 @@ struct arch_vcpu struct list_head inflight_irqs; spinlock_t lock; } vgic; + + struct { + struct timer timer; + uint32_t ctl; + s_time_t offset; + s_time_t cval; + } vtimer; } __cacheline_aligned; void vcpu_show_execution_state(struct vcpu *); -- 1.7.2.5
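Architecturally CNTP_TVAL is a signed 32-bit downcounter equal to CVAL minus the current count; the emulation keeps cval as nanoseconds relative to the per-vcpu offset and converts on each access. A worked sketch of the round trip, with an invented 100 MHz clock (tick/ns helpers as in xen/arch/arm/time.c):

    #include <stdint.h>
    #include <stdio.h>

    static const uint32_t cntfrq = 100000000;   /* assumed frequency */
    static uint64_t ticks_to_ns(uint64_t t)
    { return (uint64_t)((__uint128_t)t * 1000000000ull / cntfrq); }
    static uint64_t ns_to_ticks(uint64_t n)
    { return (uint64_t)((__uint128_t)n * cntfrq / 1000000000ull); }

    int main(void)
    {
        uint64_t now  = 5000000000ull;  /* ns since vtimer.offset */
        uint32_t tval = 100000000;      /* guest arms 1s worth of ticks */

        /* write path above: v->arch.vtimer.cval = now + ticks_to_ns(*r) */
        uint64_t cval = now + ticks_to_ns(tval);

        /* read path above: *r = (uint32_t)ns_to_ticks(cval - now) */
        uint32_t readback = (uint32_t)(ns_to_ticks(cval - now) & 0xffffffffull);

        printf("tval=%u cval=%llu ns readback=%u\n",
               tval, (unsigned long long)cval, readback); /* readback == tval */
        return 0;
    }

<stefano.stabellini@eu.citrix.com>
2012-Jan-09 17:59 UTC
[PATCH v4 25/25] arm: makefiles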
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Makefile and config options for the ARM architecture. Changes in v2: - move patch at the end of the series. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com> --- config/arm.mk | 18 +++++++++++ xen/arch/arm/Makefile | 76 +++++++++++++++++++++++++++++++++++++++++++++++++ xen/arch/arm/Rules.mk | 29 ++++++++++++++++++ 3 files changed, 123 insertions(+), 0 deletions(-) create mode 100644 config/arm.mk create mode 100644 xen/arch/arm/Makefile create mode 100644 xen/arch/arm/Rules.mk diff --git a/config/arm.mk b/config/arm.mk new file mode 100644 index 0000000..f64f0c1 --- /dev/null +++ b/config/arm.mk @@ -0,0 +1,18 @@ +CONFIG_ARM := y +CONFIG_ARM_32 := y +CONFIG_ARM_$(XEN_OS) := y + +# -march= -mcpu= +# Explicitly specify 32-bit ARM ISA since toolchain default can be -mthumb: +CFLAGS += -marm + +HAS_PL011 := y + +# Use only if calling $(LD) directly. +#LDFLAGS_DIRECT_OpenBSD = _obsd +#LDFLAGS_DIRECT_FreeBSD = _fbsd +LDFLAGS_DIRECT_Linux = _linux +LDFLAGS_DIRECT += -marmelf$(LDFLAGS_DIRECT_$(XEN_OS))_eabi + +CONFIG_LOAD_ADDRESS ?= 0x80000000 diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile new file mode 100644 index 0000000..5a07ae7 --- /dev/null +++ b/xen/arch/arm/Makefile @@ -0,0 +1,76 @@ +subdir-y += lib + +obj-y += dummy.o +obj-y += entry.o +obj-y += domain.o +obj-y += domain_build.o +obj-y += gic.o +obj-y += io.o +obj-y += irq.o +obj-y += mm.o +obj-y += p2m.o +obj-y += guestcopy.o +obj-y += setup.o +obj-y += time.o +obj-y += smpboot.o +obj-y += smp.o +obj-y += shutdown.o +obj-y += traps.o +obj-y += vgic.o +obj-y += vtimer.o + +#obj-bin-y += ....o + +ALL_OBJS := head.o $(ALL_OBJS) + +$(TARGET): $(TARGET)-syms + # XXX: VE model loads by VMA so instead of + # making a proper ELF we link with LMA == VMA and adjust crudely + $(OBJCOPY) --change-addresses +0x7fe00000 $< $@ + # XXX strip it + +#$(TARGET): $(TARGET)-syms $(efi-y) boot/mkelf32 +# ./boot/mkelf32 $(TARGET)-syms $(TARGET) 0x100000 \ +# `$(NM) -nr $(TARGET)-syms | head -n 1 | sed -e 's/^\([^ ]*\).*/0x\1/'` + +ifeq ($(lto),y) +# Gather all LTO objects together +prelink_lto.o: $(ALL_OBJS) + $(LD_LTO) -r -o $@ $^ + +# Link it with all the binary objects +prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o + $(LD) $(LDFLAGS) -r -o $@ $^ +else +prelink.o: $(ALL_OBJS) + $(LD) $(LDFLAGS) -r -o $@ $^ +endif + +$(BASEDIR)/common/symbols-dummy.o: + $(MAKE) -f $(BASEDIR)/Rules.mk -C $(BASEDIR)/common symbols-dummy.o + +$(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o + $(LD) $(LDFLAGS) -T xen.lds -N prelink.o \ + $(BASEDIR)/common/symbols-dummy.o -o $(@D)/.$(@F).0 + $(NM) -n $(@D)/.$(@F).0 | $(BASEDIR)/tools/symbols >$(@D)/.$(@F).0.S + $(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).0.o + $(LD) $(LDFLAGS) -T xen.lds -N prelink.o \ + $(@D)/.$(@F).0.o -o $(@D)/.$(@F).1 + $(NM) -n $(@D)/.$(@F).1 | $(BASEDIR)/tools/symbols >$(@D)/.$(@F).1.S + $(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).1.o + $(LD) $(LDFLAGS) -T xen.lds -N prelink.o \ + $(@D)/.$(@F).1.o -o $@ + rm -f $(@D)/.$(@F).[0-9]* + +asm-offsets.s: asm-offsets.c + $(CC) $(filter-out -flto,$(CFLAGS)) -S -o $@ $< + +xen.lds: xen.lds.S + $(CC) -P -E -Ui386 $(AFLAGS) -DXEN_PHYS_START=$(CONFIG_LOAD_ADDRESS) -o $@ $< + sed -e 's/xen\.lds\.o:/xen\.lds:/g' <.xen.lds.d >.xen.lds.d.new + mv -f .xen.lds.d.new .xen.lds.d + +.PHONY: clean +clean:: + rm -f asm-offsets.s
xen.lds + rm -f $(BASEDIR)/.xen-syms.[0-9]* diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk new file mode 100644 index 0000000..336e209 --- /dev/null +++ b/xen/arch/arm/Rules.mk @@ -0,0 +1,29 @@ +######################################## +# arm-specific definitions + +# +# If you change any of these configuration options then you must +# 'make clean' before rebuilding. +# + +CFLAGS += -fno-builtin -fno-common -Wredundant-decls +CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe +CFLAGS += -I$(BASEDIR)/include + +# Prevent floating-point variables from creeping into Xen. +CFLAGS += -msoft-float + +$(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS)) +$(call cc-option-add,CFLAGS,CC,-Wnested-externs) + +arm := y + +ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n) +CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE +endif + +CFLAGS += -march=armv7-a -mcpu=cortex-a15 + +# Require GCC v3.4+ (to avoid issues with alignment constraints in Xen headers) +check-$(gcc) = $(call cc-ver-check,CC,0x030400,"Xen requires at least gcc-3.4") +$(eval $(check-y)) -- 1.7.2.5
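The --change-addresses fudge in the Makefile only adds up against whatever VMA xen.lds picks. Assuming the hypervisor links at virtual address 0x00200000 (an assumption about the linker script, which is not shown in this patch), the stated +0x7fe00000 shift lands exactly on the stated CONFIG_LOAD_ADDRESS:

    /* Back-of-envelope check of the objcopy adjustment above; the
     * assumed_vma value is a guess, the other two numbers are from
     * the Makefile and config/arm.mk. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long assumed_vma = 0x00200000;        /* assumed link VMA */
        unsigned long lma = assumed_vma + 0x7fe00000;  /* objcopy shift    */
        printf("%#lx\n", lma);  /* 0x80000000 == CONFIG_LOAD_ADDRESS */
        return 0;
    }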
On 09/01/12 17:59, stefano.stabellini@eu.citrix.com wrote:> > +static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info) > +{[...]> + case GICD_ICFGR ... GICD_ICFGRN: > + if ( dabt.size != 2 ) goto bad_width; > + rank = vgic_irq_rank(v, 2, gicd_reg - GICD_ICFGR); > + if ( rank == NULL) goto read_as_zero; > + vgic_lock_rank(v, rank); > + *r = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)]; > + vgic_unlock_rank(v, rank); > + return 0;This needs to return 1 or recent kernels will crash when they try and read these registers. David From 8c2377a9b4a10cba57fba9f8a19177ac73339d78 Mon Sep 17 00:00:00 2001 From: David Vrabel <david.vrabel@citrix.com> Date: Mon, 9 Jan 2012 15:17:22 +0000 Subject: [PATCH] ARM: allow guest to read GICD_ICFGRn registers Signed-off-by: David Vrabel <david.vrabel@citrix.com> --- xen/arch/arm/vgic.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c index 26eae55..584e682 100644 --- a/xen/arch/arm/vgic.c +++ b/xen/arch/arm/vgic.c @@ -266,7 +266,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info) vgic_lock_rank(v, rank); *r = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)]; vgic_unlock_rank(v, rank); - return 0; + return 1; case GICD_NSACR ... GICD_NSACRN: /* We do not implement securty extensions for guests, read zero */
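The one-liner matters because of how the fault path in traps.c interprets the result: handle_mmio() propagates the handler's return value, and do_trap_data_abort_guest() treats 0 as "not emulated" and falls through to the "Unhandled guest data abort" panic. A toy model of that contract (a paraphrase, not a copy of io.c):

    #include <stdio.h>

    /* Stand-in for the buggy ICFGR read handler: it fills in a value
     * but reports "not handled"; David's fix turns the 0 into 1. */
    static int icfgr_read(unsigned int *r)
    {
        *r = 0;      /* emulated value */
        return 0;
    }

    int main(void)
    {
        unsigned int r;
        if ( icfgr_read(&r) )
            printf("emulated: advance guest pc past the access\n");
        else
            printf("panic: Unhandled guest data abort\n"); /* what the kernels hit */
        return 0;
    }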
On 09/01/12 17:59, stefano.stabellini@eu.citrix.com wrote:
>
> +int construct_dom0(struct domain *d)
> +{
[...]
> +    printk("Routing peripheral interrupts to guest\n");
> +    /* TODO Get from device tree */

Can you route interrupt 34 (timer0) to dom0 as well? Current mainline
kernels are using this timer.

David

From 88148e85b2d8d9bf60564d4b5eb2ac73d8389fa5 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 9 Jan 2012 15:21:37 +0000
Subject: [PATCH] ARM: route timer0 interrupt to dom0

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/arch/arm/domain_build.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c36b888..cbbc0b9 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -108,6 +108,7 @@ int construct_dom0(struct domain *d)
 
     printk("Routing peripheral interrupts to guest\n");
     /* TODO Get from device tree */
+    gic_route_irq_to_guest(d, 34, "timer0");
     /*gic_route_irq_to_guest(d, 37, "uart0"); -- XXX used by Xen*/
     gic_route_irq_to_guest(d, 38, "uart1");
     gic_route_irq_to_guest(d, 39, "uart2");
--
1.7.2.5
Ian Campbell
2012-Jan-10 08:32 UTC
Re: [PATCH v4 10/25] arm: bit manipulation, copy and division libraries
On Mon, 2012-01-09 at 17:59 +0000, stefano.stabellini@eu.citrix.com wrote:
> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> Bit manipulation, division and memcpy & friends implementations for the
> ARM architecture, shamelessly taken from Linux.

When I initially imported these I did so with the minimal changes
possible to integrate them in the Xen tree, so as to aid future merges of
this code from Linux.

This meant there was quite a lot of ifdef'd code (in particular for
previous ARM architectures via __LINUX_ARM_ARCH__) but I think that is a
price worth paying to keep these files somewhat in sync. I used a pretty
ugly "#if 1 /* __LINUX_ARM_ARCH__ >= 5 */" construct to minimise changes,
but perhaps it would be better to simply define __LINUX_ARM_ARCH__
appropriately within the lib subdirectory?

Ian.

> Changes in v2:
>
> - implement __aeabi_uldivmod and __aeabi_ldivmod.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
> ---
>  xen/arch/arm/lib/Makefile        |    5 +
>  xen/arch/arm/lib/assembler.h     |   49 ++++++
>  xen/arch/arm/lib/bitops.h        |   36 +++++
>  xen/arch/arm/lib/changebit.S     |   18 +++
>  xen/arch/arm/lib/clearbit.S      |   19 +++
>  xen/arch/arm/lib/copy_template.S |  266 +++++++++++++++++++++++++++
>  xen/arch/arm/lib/div64.S         |  149 +++++++++++++++
>  xen/arch/arm/lib/findbit.S       |  115 +++++++++++++
>  xen/arch/arm/lib/lib1funcs.S     |  302 ++++++++++++++++++++++++++++++++
>  xen/arch/arm/lib/memcpy.S        |   64 ++++++++
>  xen/arch/arm/lib/memmove.S       |  200 +++++++++++++++++++
>  xen/arch/arm/lib/memset.S        |  129 ++++++++++++++++
>  xen/arch/arm/lib/memzero.S       |  127 ++++++++++++++++
>  xen/arch/arm/lib/setbit.S        |   18 +++
>  xen/arch/arm/lib/testchangebit.S |   18 +++
>  xen/arch/arm/lib/testclearbit.S  |   18 +++
>  xen/arch/arm/lib/testsetbit.S    |   18 +++
>  17 files changed, 1551 insertions(+), 0 deletions(-)
>  create mode 100644 xen/arch/arm/lib/Makefile
>  create mode 100644 xen/arch/arm/lib/assembler.h
>  create mode 100644 xen/arch/arm/lib/bitops.h
>  create mode 100644 xen/arch/arm/lib/changebit.S
>  create mode 100644 xen/arch/arm/lib/clearbit.S
>  create mode 100644 xen/arch/arm/lib/copy_template.S
>  create mode 100644 xen/arch/arm/lib/div64.S
>  create mode 100644 xen/arch/arm/lib/findbit.S
>  create mode 100644 xen/arch/arm/lib/lib1funcs.S
>  create mode 100644 xen/arch/arm/lib/memcpy.S
>  create mode 100644 xen/arch/arm/lib/memmove.S
>  create mode 100644 xen/arch/arm/lib/memset.S
>  create mode 100644 xen/arch/arm/lib/memzero.S
>  create mode 100644 xen/arch/arm/lib/setbit.S
>  create mode 100644 xen/arch/arm/lib/testchangebit.S
>  create mode 100644 xen/arch/arm/lib/testclearbit.S
>  create mode 100644 xen/arch/arm/lib/testsetbit.S
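Ian's suggestion would amount to something like the following sketch, assuming a small private header included by everything under xen/arch/arm/lib/ (the file and macro placement are illustrative, not code from the series):

/* Hypothetical lib-local header: pin the Linux architecture level once
 * so the imported sources can keep their original
 * "#if __LINUX_ARM_ARCH__ >= 5" blocks untouched. */
#ifndef __LINUX_ARM_ARCH__
#define __LINUX_ARM_ARCH__ 7    /* Xen targets ARMv7-A (Cortex-A15) */
#endif

With that in place the "#if 1" edits could be reverted, keeping the files closer to their Linux originals.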
On Mon, 2012-01-09 at 18:25 +0000, David Vrabel wrote:
> On 09/01/12 17:59, stefano.stabellini@eu.citrix.com wrote:
> >
> > +static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
> > +{
> [...]
> > +    case GICD_ICFGR ... GICD_ICFGRN:
> > +        if ( dabt.size != 2 ) goto bad_width;
> > +        rank = vgic_irq_rank(v, 2, gicd_reg - GICD_ICFGR);
> > +        if ( rank == NULL) goto read_as_zero;
> > +        vgic_lock_rank(v, rank);
> > +        *r = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)];
> > +        vgic_unlock_rank(v, rank);
> > +        return 0;
>
> This needs to return 1 or recent kernels will crash when they try and
> read these registers.
>
> David
>
> From 8c2377a9b4a10cba57fba9f8a19177ac73339d78 Mon Sep 17 00:00:00 2001
> From: David Vrabel <david.vrabel@citrix.com>
> Date: Mon, 9 Jan 2012 15:17:22 +0000
> Subject: [PATCH] ARM: allow guest to read GICD_ICFGRn registers
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/vgic.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 26eae55..584e682 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -266,7 +266,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
>          vgic_lock_rank(v, rank);
>          *r = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)];
>          vgic_unlock_rank(v, rank);
> -        return 0;
> +        return 1;
>
>      case GICD_NSACR ... GICD_NSACRN:
>          /* We do not implement securty extensions for guests, read zero */
On Mon, 2012-01-09 at 18:29 +0000, David Vrabel wrote:
> On 09/01/12 17:59, stefano.stabellini@eu.citrix.com wrote:
> >
> > +int construct_dom0(struct domain *d)
> > +{
> [...]
> > +    printk("Routing peripheral interrupts to guest\n");
> > +    /* TODO Get from device tree */
>
> Can you route interrupt 34 (timer0) to dom0 as well? Current mainline
> kernels are using this timer.
>
> David
>
> From 88148e85b2d8d9bf60564d4b5eb2ac73d8389fa5 Mon Sep 17 00:00:00 2001
> From: David Vrabel <david.vrabel@citrix.com>
> Date: Mon, 9 Jan 2012 15:21:37 +0000
> Subject: [PATCH] ARM: route timer0 interrupt to dom0

This is the peripheral timer rather than one of the generic timers
provided by the processor. Exposing it to dom0 is correct.

> Signed-off-by: David Vrabel <david.vrabel@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/domain_build.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index c36b888..cbbc0b9 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -108,6 +108,7 @@ int construct_dom0(struct domain *d)
>
>      printk("Routing peripheral interrupts to guest\n");
>      /* TODO Get from device tree */
> +    gic_route_irq_to_guest(d, 34, "timer0");
>      /*gic_route_irq_to_guest(d, 37, "uart0"); -- XXX used by Xen*/
>      gic_route_irq_to_guest(d, 38, "uart1");
>      gic_route_irq_to_guest(d, 39, "uart2");
Jan Beulich
2012-Jan-10 10:02 UTC
Re: [PATCH v4 06/25] libelf-loader: introduce elf_load_image
>>> On 09.01.12 at 18:59, <stefano.stabellini@eu.citrix.com> wrote:
> --- a/xen/common/libelf/libelf-loader.c
> +++ b/xen/common/libelf/libelf-loader.c
> @@ -107,11 +107,32 @@ void elf_set_log(struct elf_binary *elf, elf_log_callback *log_callback,
>      elf->log_caller_data = log_caller_data;
>      elf->verbose = verbose;
>  }
> +
> +static int elf_load_image(void *dst, const void *src, uint64_t filesz, uint64_t memsz)
> +{
> +    memcpy(dst, src, filesz);
> +    memset(dst + filesz, 0, memsz - filesz);
> +    return 0;
> +}
>  #else
> +#include <asm/guest_access.h>
> +
>  void elf_set_verbose(struct elf_binary *elf)
>  {
>      elf->verbose = 1;
>  }
> +
> +static int elf_load_image(void *dst, const void *src, uint64_t filesz, uint64_t memsz)
> +{
> +    int rc;
> +    rc = raw_copy_to_guest(dst, src, filesz);
> +    if ( rc != 0 )
> +        return -rc;
> +    rc = raw_clear_guest(dst + filesz, memsz - filesz);
> +    if ( rc != 0 )
> +        return -rc;
> +    return 0;
> +}

I'm afraid a little more care is needed here: filesz and memsz being
64-bit values permits them to overflow the "long" of the functions
called. I think simply checking that both values fit in an unsigned long
will do for now.

Also, if you want to return a meaningful error code here, you also
need to consider that fact as well as the counts being unsigned (or
otherwise you could e.g. just return "bool").

Jan

> #endif
>
> /* Calculate the required additional kernel space for the elf image */
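To make the truncation concrete: on a 32-bit build unsigned long is 32 bits wide, so a large uint64_t count silently loses its top half when passed through. A standalone illustration (not code from the series):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t memsz = 1ULL << 32;  /* a 4GiB segment size from the ELF headers */
    unsigned long n = memsz;      /* implicit conversion truncates to 0
                                     when unsigned long is 32 bits */
    printf("%lu\n", n);           /* 0 on 32-bit ARM, 4294967296 on LP64 */
    return 0;
}

so raw_clear_guest() could be asked to clear nothing at all while the caller believes the whole segment was zeroed.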
Ian Campbell
2012-Jan-10 10:04 UTC
Re: [PATCH v4 14/25] arm: driver for CoreLink GIC-400 Generic Interrupt Controller
On Mon, 2012-01-09 at 17:59 +0000, stefano.stabellini@eu.citrix.com wrote:
>
> +static unsigned int gic_irq_startup(struct irq_desc *desc)
> +{
> +    uint32_t enabler;
> +    int irq = desc->irq;
> +

Some hard tabs appear to have snuck in at least a couple of times in
this file and elsewhere:

$ find xen/arch/arm/ xen/include/asm-arm/ -name \*.[ch] | xargs grep '	' -l
xen/arch/arm/smp.c
xen/arch/arm/lib/bitops.h
xen/arch/arm/irq.c
xen/arch/arm/gic.c
xen/arch/arm/vtimer.c
xen/arch/arm/setup.c
xen/arch/arm/domain.c
xen/include/asm-arm/numa.h
xen/include/asm-arm/grant_table.h
xen/include/asm-arm/div64.h

I think hard tabs are OK in code from Linux (e.g.
xen/arch/arm/lib/bitops.h) and in .S files but not elsewhere.

Ian.
Jan Beulich
2012-Jan-10 10:06 UTC
Re: [PATCH v4 00/25] xen: ARMv7 with virtualization extensions
>>> On 09.01.12 at 18:58, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> Hello everyone,
> this is the fourth version of the patch series that introduces ARMv7
> with virtualization extensions support in Xen.
> The series allows Xen and Dom0 to boot on a Cortex-A15 based Versatile
> Express simulator.
> See the following announce email for more informations about what we
> are trying to achieve, as well as the original git history:
>
> See http://marc.info/?l=xen-devel&m=132257857628098&w=2
>
> The first 7 patches affect generic Xen code and are not ARM specific;
> often they fix real issues, hidden in the default X86 configuration.

Out of the first 8 patches, I think all but #6 could go in if you're
okay with them. That would hopefully reduce Stefano's effort a little
to maintain them.

> The following 18 patches introduce ARMv7 with virtualization extensions
> support: makefiles first, then the asm-arm header files and finally
> everything else, ordered in a way that should make the patches easier
> to read.

All of those that don't touch anything outside xen/*/*arm/ (and don't
depend on #6) could go in imo too. I didn't look too closely to tell
precisely which those are.

What do you think?

Jan
Stefano Stabellini
2012-Jan-10 11:13 UTC
Re: [PATCH v4 14/25] arm: driver for CoreLink GIC-400 Generic Interrupt Controller
On Tue, 10 Jan 2012, Ian Campbell wrote:
> On Mon, 2012-01-09 at 17:59 +0000, stefano.stabellini@eu.citrix.com
> wrote:
> >
> > +static unsigned int gic_irq_startup(struct irq_desc *desc)
> > +{
> > +    uint32_t enabler;
> > +    int irq = desc->irq;
> > +
>
> Some hard tabs appear to have snuck in at least a couple of times in
> this file and elsewhere:
>
> $ find xen/arch/arm/ xen/include/asm-arm/ -name \*.[ch] | xargs grep '	' -l
> xen/arch/arm/smp.c
> xen/arch/arm/lib/bitops.h
> xen/arch/arm/irq.c
> xen/arch/arm/gic.c
> xen/arch/arm/vtimer.c
> xen/arch/arm/setup.c
> xen/arch/arm/domain.c
> xen/include/asm-arm/numa.h
> xen/include/asm-arm/grant_table.h
> xen/include/asm-arm/div64.h
>
> I think hard tabs are OK in code from Linux (e.g.
> xen/arch/arm/lib/bitops.h) and in .S files but not elsewhere.

Agreed, evidently my smart vim runes are not smart enough.
On Mon, 9 Jan 2012, David Vrabel wrote:
> On 09/01/12 17:59, stefano.stabellini@eu.citrix.com wrote:
> >
> > +static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
> > +{
> [...]
> > +    case GICD_ICFGR ... GICD_ICFGRN:
> > +        if ( dabt.size != 2 ) goto bad_width;
> > +        rank = vgic_irq_rank(v, 2, gicd_reg - GICD_ICFGR);
> > +        if ( rank == NULL) goto read_as_zero;
> > +        vgic_lock_rank(v, rank);
> > +        *r = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)];
> > +        vgic_unlock_rank(v, rank);
> > +        return 0;
>
> This needs to return 1 or recent kernels will crash when they try and
> read these registers.

Not just a comment but a patch! Wonderful!
I'll merge it with this patch, thanks!

> From 8c2377a9b4a10cba57fba9f8a19177ac73339d78 Mon Sep 17 00:00:00 2001
> From: David Vrabel <david.vrabel@citrix.com>
> Date: Mon, 9 Jan 2012 15:17:22 +0000
> Subject: [PATCH] ARM: allow guest to read GICD_ICFGRn registers
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  xen/arch/arm/vgic.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 26eae55..584e682 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -266,7 +266,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
>          vgic_lock_rank(v, rank);
>          *r = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR)];
>          vgic_unlock_rank(v, rank);
> -        return 0;
> +        return 1;
>
>      case GICD_NSACR ... GICD_NSACRN:
>          /* We do not implement securty extensions for guests, read zero */
On Mon, 9 Jan 2012, David Vrabel wrote:
> On 09/01/12 17:59, stefano.stabellini@eu.citrix.com wrote:
> >
> > +int construct_dom0(struct domain *d)
> > +{
> [...]
> > +    printk("Routing peripheral interrupts to guest\n");
> > +    /* TODO Get from device tree */
>
> Can you route interrupt 34 (timer0) to dom0 as well? Current mainline
> kernels are using this timer.

Good idea, it will be in the next version of the series.

> From 88148e85b2d8d9bf60564d4b5eb2ac73d8389fa5 Mon Sep 17 00:00:00 2001
> From: David Vrabel <david.vrabel@citrix.com>
> Date: Mon, 9 Jan 2012 15:21:37 +0000
> Subject: [PATCH] ARM: route timer0 interrupt to dom0
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  xen/arch/arm/domain_build.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index c36b888..cbbc0b9 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -108,6 +108,7 @@ int construct_dom0(struct domain *d)
>
>      printk("Routing peripheral interrupts to guest\n");
>      /* TODO Get from device tree */
> +    gic_route_irq_to_guest(d, 34, "timer0");
>      /*gic_route_irq_to_guest(d, 37, "uart0"); -- XXX used by Xen*/
>      gic_route_irq_to_guest(d, 38, "uart1");
>      gic_route_irq_to_guest(d, 39, "uart2");
> --
> 1.7.2.5
Stefano Stabellini
2012-Jan-10 11:22 UTC
Re: [PATCH v4 10/25] arm: bit manipulation, copy and division libraries
On Tue, 10 Jan 2012, Ian Campbell wrote:
> On Mon, 2012-01-09 at 17:59 +0000, stefano.stabellini@eu.citrix.com
> wrote:
> > From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >
> > Bit manipulation, division and memcpy & friends implementations for the
> > ARM architecture, shamelessly taken from Linux.
>
> When I initially imported these I did so with the minimal changes
> possible to integrate them in the Xen tree, so as to aid future merges of
> this code from Linux.
>
> This meant there was quite a lot of ifdef'd code (in particular for
> previous ARM architectures via __LINUX_ARM_ARCH__) but I think that is a
> price worth paying to keep these files somewhat in sync. I used a pretty
> ugly "#if 1 /* __LINUX_ARM_ARCH__ >= 5 */" construct to minimise changes,
> but perhaps it would be better to simply define __LINUX_ARM_ARCH__
> appropriately within the lib subdirectory?

I am not a great fan of manually sync'ed source files, however I
understand your concerns. At that point we might have to move
__aeabi_uldivmod and __aeabi_ldivmod to a different location, if we
really want to keep these files identical.
Ian Campbell
2012-Jan-10 11:29 UTC
Re: [PATCH v4 10/25] arm: bit manipulation, copy and division libraries
On Tue, 2012-01-10 at 11:22 +0000, Stefano Stabellini wrote:
> On Tue, 10 Jan 2012, Ian Campbell wrote:
> > On Mon, 2012-01-09 at 17:59 +0000, stefano.stabellini@eu.citrix.com
> > wrote:
> > > From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > >
> > > Bit manipulation, division and memcpy & friends implementations for the
> > > ARM architecture, shamelessly taken from Linux.
> >
> > When I initially imported these I did so with the minimal changes
> > possible to integrate them in the Xen tree, so as to aid future merges of
> > this code from Linux.
> >
> > This meant there was quite a lot of ifdef'd code (in particular for
> > previous ARM architectures via __LINUX_ARM_ARCH__) but I think that is a
> > price worth paying to keep these files somewhat in sync. I used a pretty
> > ugly "#if 1 /* __LINUX_ARM_ARCH__ >= 5 */" construct to minimise changes,
> > but perhaps it would be better to simply define __LINUX_ARM_ARCH__
> > appropriately within the lib subdirectory?
>
> I am not a great fan of manually sync'ed source files, however I
> understand your concerns. At that point we might have to move
> __aeabi_uldivmod and __aeabi_ldivmod to a different location, if we
> really want to keep these files identical.

Let's just agree that necessary changes are OK. I don't think they need
to be identical, just not unnecessarily changed, since that makes
resyncing harder for no gain.

Ian.
Stefano Stabellini
2012-Jan-10 13:49 UTC
Re: [PATCH v4 06/25] libelf-loader: introduce elf_load_image
On Tue, 10 Jan 2012, Jan Beulich wrote:
> >>> On 09.01.12 at 18:59, <stefano.stabellini@eu.citrix.com> wrote:
> > --- a/xen/common/libelf/libelf-loader.c
> > +++ b/xen/common/libelf/libelf-loader.c
> > @@ -107,11 +107,32 @@ void elf_set_log(struct elf_binary *elf, elf_log_callback *log_callback,
> >      elf->log_caller_data = log_caller_data;
> >      elf->verbose = verbose;
> >  }
> > +
> > +static int elf_load_image(void *dst, const void *src, uint64_t filesz, uint64_t memsz)
> > +{
> > +    memcpy(dst, src, filesz);
> > +    memset(dst + filesz, 0, memsz - filesz);
> > +    return 0;
> > +}
> >  #else
> > +#include <asm/guest_access.h>
> > +
> >  void elf_set_verbose(struct elf_binary *elf)
> >  {
> >      elf->verbose = 1;
> >  }
> > +
> > +static int elf_load_image(void *dst, const void *src, uint64_t filesz, uint64_t memsz)
> > +{
> > +    int rc;
> > +    rc = raw_copy_to_guest(dst, src, filesz);
> > +    if ( rc != 0 )
> > +        return -rc;
> > +    rc = raw_clear_guest(dst + filesz, memsz - filesz);
> > +    if ( rc != 0 )
> > +        return -rc;
> > +    return 0;
> > +}
>
> I'm afraid a little more care is needed here: filesz and memsz being
> 64-bit values permits them to overflow the "long" of the functions
> called. I think simply checking that both values fit in an unsigned long
> will do for now.

OK

> Also, if you want to return a meaningful error code here, you also
> need to consider that fact as well as the counts being unsigned (or
> otherwise you could e.g. just return "bool").

I'll just return -1, to be consistent with elf_init.
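Folding both review points into the helper gives roughly this sketch (a minimal sketch assuming ULONG_MAX is usable in this context; the final form in the series may differ in detail):

static int elf_load_image(void *dst, const void *src,
                          uint64_t filesz, uint64_t memsz)
{
    int rc;

    /* Reject counts that would truncate in the unsigned long
     * parameters of the raw_* helpers, as Jan suggests. */
    if ( filesz > ULONG_MAX || memsz > ULONG_MAX )
        return -1;
    rc = raw_copy_to_guest(dst, src, filesz);
    if ( rc != 0 )
        return -1;
    rc = raw_clear_guest(dst + filesz, memsz - filesz);
    if ( rc != 0 )
        return -1;
    return 0;
}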
Ian Jackson
2012-Jan-10 17:15 UTC
Re: [PATCH v4 04/25] xen: implement an signed 64 bit division helper function
stefano.stabellini@eu.citrix.com writes ("[Xen-devel] [PATCH v4 04/25] xen: implement an signed 64 bit division helper function"):
> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> Implement a C function to perform 64 bit signed division and return both
> quotient and remainder.
> Useful as an helper function to implement __aeabi_ldivmod.

Are we sure having callers of this function is desirable? I was under
the impression that this is very slow.

Ian.
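For context, the usual shape of such a helper divides the magnitudes with an unsigned 64-bit routine and then restores C's sign rules, roughly as in this sketch (udiv64 and the helper name are assumptions for illustration, not the code from the patch):

#include <stdint.h>

/* Assumed to exist: unsigned 64-bit division returning the remainder
 * through *rem (e.g. built on the imported lib/ division code). */
extern uint64_t udiv64(uint64_t n, uint64_t d, uint64_t *rem);

static int64_t ldivmod_helper(int64_t n, int64_t d, int64_t *r)
{
    uint64_t q, rem;

    /* Divide the magnitudes; -(uint64_t)x yields |x| even for INT64_MIN. */
    q = udiv64(n < 0 ? -(uint64_t)n : (uint64_t)n,
               d < 0 ? -(uint64_t)d : (uint64_t)d, &rem);
    /* Restore C semantics: the quotient truncates toward zero and the
     * remainder takes the sign of the dividend. */
    if ( (n < 0) != (d < 0) )
        q = -q;
    *r = (n < 0) ? -(int64_t)rem : (int64_t)rem;
    return (int64_t)q;
}

Every __aeabi_ldivmod call then pays for a full software division plus the sign fixups, which supports Ian's concern: on cores without a 64-bit hardware divider this is slow, so hot paths are better off avoiding 64-bit signed division entirely.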