Ian Campbell
2013-Oct-07 11:37 UTC
[PATCH 0/5] xen: arm: fixups for systems with RAM above 4GB
This is primarily an attempt to get arm64 Xen working on systems which do not have any RAM at all below 4GB, but there are also small fixes for systems with highmem generally.

I've been testing this with a hack DTB which uses only the AEM fastmodel's 36-bit alias of DRAM and a hacked up boot-wrapper to load at the appropriate addresses etc.

The first patch, "xen: correct xenheap_bits after "xen: support RAM at addresses 0 and 4096"", has been previously posted as a standalone patch.

Ian.
Ian Campbell
2013-Oct-07 11:38 UTC
[PATCH 1/5] xen: correct xenheap_bits after "xen: support RAM at addresses 0 and 4096"
This is incorrect after commit 1aac966e24e which shuffled the zones up by one.

I've observed failures on arm64 systems with RAM at 0x8,00000000-0x8,7fffffff since xenheap_bits ends up as 35 instead of 36 (which is the zone with all the RAM).

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Tim Deegan <tim@xen.org>
---
I suppose that MEMZONE_XEN is not really useful when !CONFIG_SEPARATE_XENHEAP so in principle 1aac966e24e could be made conditional, but in reality MEMZONE_XEN is at least referenced when !CONFIG_SEPARATE_XENHEAP so at least some other cleanup would be needed. This fix seems simpler/clearer.
---
 xen/common/page_alloc.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index fb8187b..4c17fbd 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1364,7 +1364,7 @@ static unsigned int __read_mostly xenheap_bits;
 
 void __init xenheap_max_mfn(unsigned long mfn)
 {
-    xenheap_bits = fls(mfn) + PAGE_SHIFT - 1;
+    xenheap_bits = fls(mfn) + PAGE_SHIFT;
 }
 
 void init_xenheap_pages(paddr_t ps, paddr_t pe)
-- 
1.7.10.4
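To make the arithmetic concrete: for a bank ending at 0x8,7fffffff the highest frame number is 0x87ffff, whose most significant set bit is bit 23, so fls() returns 24 and the heap needs to span 24 + 12 = 36 bits; the old "- 1" yields 35 and misses the zone holding all of the RAM. A standalone sketch (not Xen code; the fls() below is only assumed to match the usual 1-based most-significant-bit semantics):

#include <stdio.h>

static int fls(unsigned long long x)
{
    int r = 0;

    while ( x )
    {
        x >>= 1;
        r++;
    }
    return r;
}

int main(void)
{
    const int PAGE_SHIFT = 12;
    /* Highest frame of a bank at 0x8,00000000-0x8,7fffffff. */
    unsigned long long max_mfn = 0x87fffffffULL >> PAGE_SHIFT;  /* 0x87ffff */

    printf("old: %d bits\n", fls(max_mfn) + PAGE_SHIFT - 1);    /* 35 */
    printf("new: %d bits\n", fls(max_mfn) + PAGE_SHIFT);        /* 36 */
    return 0;
}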
Ian Campbell
2013-Oct-07 11:38 UTC
[PATCH 2/5] xen: arm: Enable 40 bit addressing in VTCR for arm64
This requires setting the v8 specific VTCR_EL2.PS field. These bits are UNK/SBZP on v7.

Also the T0SZ field is described slightly differently for v8, so update the comment to reflect this.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c |   11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0b53200..e48d473 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -386,9 +386,16 @@ void __cpuinit setup_virt_paging(void)
     /* Setup Stage 2 address translation */
     /* SH0=00, ORGN0=IRGN0=01
      * SL0=01 (Level-1)
-     * T0SZ=(1)1000 = -8 (40 bit physical addresses)
+     * ARMv7: T0SZ=(1)1000 = -8 (32-(-8) = 40 bit physical addresses)
+     * ARMv8: T0SZ=01 1000 = 24 (64-24 = 40 bit physical addresses)
+     * PS=010 == 40 bits
      */
-    WRITE_SYSREG32(0x80002558, VTCR_EL2); isb();
+#ifdef CONFIG_ARM_32
+    WRITE_SYSREG32(0x80002558, VTCR_EL2);
+#else
+    WRITE_SYSREG32(0x80022558, VTCR_EL2);
+#endif
+    isb();
 }
 
 static inline lpae_t pte_of_xenaddr(vaddr_t va)
-- 
1.7.10.4
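For reference, the two register values differ only in the v8-specific PS field, which makes the relationship easy to check. A minimal sketch (the macro names below are illustrative assumptions, not the ones Xen defines):

#define VTCR_VAL_ARM32  0x80002558u                          /* value already used on v7 */
#define VTCR_PS_40BIT   (0x2u << 16)                         /* VTCR_EL2.PS, bits [18:16]: 010 = 40-bit PA */
#define VTCR_VAL_ARM64  (VTCR_VAL_ARM32 | VTCR_PS_40BIT)     /* == 0x80022558 */

Since PS occupies bits that are UNK/SBZP on v7, only the arm64 build needs the extra field set.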
Currently we only map regions which are not part of boot modules. However we subsequently free at least some of those modules to the heaps in discard_initial_modules, and if we were unlucky with sizing/location we might end up adding unmapped pages to the heap.

The heaps on 64-bit use 1GB mappings, so in practice this is probably pretty unlikely and I've not actually seen it.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/setup.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 49f344c..6300802 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -519,6 +519,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
         xenheap_pages += (bank_size >> PAGE_SHIFT);
 
+        setup_xenheap_mappings(bank_start>>PAGE_SHIFT, bank_size>>PAGE_SHIFT);
+
         /* XXX we assume that the ram regions are ordered */
         s = bank_start;
         while ( s < bank_end )
@@ -535,8 +537,6 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
             if ( e > bank_end )
                 e = bank_end;
 
-            setup_xenheap_mappings(s>>PAGE_SHIFT, (e-s)>>PAGE_SHIFT);
-
             xenheap_mfn_end = e;
 
             dt_unreserved_regions(s, e, init_boot_pages, 0);
-- 
1.7.10.4
Ian Campbell
2013-Oct-07 11:38 UTC
[PATCH 4/5] xen: arm: make sure pagetable mask macros have appropriate size
{ZEROETH,FIRST,SECOND,THIRD}_MASK are used with physical addresses which may be larger than 32 bits. Therefore ensure that they are wide enough by casting to paddr_t, otherwise we may truncate addresses on 32-bit.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/page.h |   20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 3d0f8a9..d468418 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -323,17 +323,17 @@ static inline int gva_to_ipa(vaddr_t va, paddr_t *paddr)
 #define LPAE_ENTRIES    (1u << LPAE_SHIFT)
 #define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
 
-#define THIRD_SHIFT    PAGE_SHIFT
-#define THIRD_SIZE     (1u << THIRD_SHIFT)
-#define THIRD_MASK     (~(THIRD_SIZE - 1))
-#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
-#define SECOND_SIZE    (1u << SECOND_SHIFT)
-#define SECOND_MASK    (~(SECOND_SIZE - 1))
-#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
-#define FIRST_SIZE     (1u << FIRST_SHIFT)
-#define FIRST_MASK     (~(FIRST_SIZE - 1))
+#define THIRD_SHIFT    (PAGE_SHIFT)
+#define THIRD_SIZE     ((paddr_t)1 << THIRD_SHIFT)
+#define THIRD_MASK     (~(THIRD_SIZE - 1))
+#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
+#define SECOND_SIZE    ((paddr_t)1 << SECOND_SHIFT)
+#define SECOND_MASK    (~(SECOND_SIZE - 1))
+#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
+#define FIRST_SIZE     ((paddr_t)1 << FIRST_SHIFT)
+#define FIRST_MASK     (~(FIRST_SIZE - 1))
 #define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
-#define ZEROETH_SIZE   (1u << ZEROETH_SHIFT)
+#define ZEROETH_SIZE   ((paddr_t)1 << ZEROETH_SHIFT)
 #define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
 
 /* Calculate the offsets into the pagetables for a given VA */
-- 
1.7.10.4
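A standalone sketch of the truncation being fixed (not Xen code; FIRST_MASK_BAD/FIRST_MASK_GOOD are illustrative names, and paddr_t is assumed to be a 64-bit type on 32-bit builds). For ZEROETH_SIZE the unsuffixed form would be worse still, since 1u << 39 is undefined when unsigned int is 32 bits wide:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t paddr_t;

#define PAGE_SHIFT      12
#define LPAE_SHIFT      9
#define FIRST_SHIFT     (PAGE_SHIFT + 2*LPAE_SHIFT)             /* 30: 1GB entries */

#define FIRST_MASK_BAD  (~((1u << FIRST_SHIFT) - 1))            /* unsigned int: 0xc0000000 */
#define FIRST_MASK_GOOD (~(((paddr_t)1 << FIRST_SHIFT) - 1))    /* 0xffffffffc0000000 */

int main(void)
{
    paddr_t addr = 0x87fe56000ULL;   /* a physical address above 4GB */

    /* The 32-bit mask zero-extends when promoted, wiping bits 32 and up. */
    printf("bad:  %#" PRIx64 "\n", addr & FIRST_MASK_BAD);   /* 0x40000000  */
    printf("good: %#" PRIx64 "\n", addr & FIRST_MASK_GOOD);  /* 0x840000000 */
    return 0;
}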
Ian Campbell
2013-Oct-07 11:38 UTC
[PATCH 5/5] xen: arm: correctly round down MFN to 1GB boundary
~FIRST_MASK is nothing like correct for rounding down an MFN. It is the inverse *and* an address not a frame number, so wrong in every dimension!

We cannot use FIRST_MASK since that would mask off any zeroeth level bits. Instead calculate the correct value from FIRST_SIZE.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index e48d473..887930a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -656,7 +656,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
     end_mfn = base_mfn + nr_mfns;
 
     /* Align to previous 1GB boundary */
-    base_mfn &= ~FIRST_MASK;
+    base_mfn &= ~((FIRST_SIZE>>PAGE_SHIFT)-1);
 
     offset = base_mfn - xenheap_mfn_start;
     vaddr = DIRECTMAP_VIRT_START + offset*PAGE_SIZE;
-- 
1.7.10.4
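As a worked example of the difference (standalone sketch, not Xen code, reusing the paddr_t-wide macros introduced by the previous patch; the frame number is made up):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t paddr_t;

#define PAGE_SHIFT  12
#define LPAE_SHIFT  9
#define FIRST_SHIFT (PAGE_SHIFT + 2*LPAE_SHIFT)   /* 30 */
#define FIRST_SIZE  ((paddr_t)1 << FIRST_SHIFT)   /* 1GB, in bytes */
#define FIRST_MASK  (~(FIRST_SIZE - 1))           /* a byte-address mask */

int main(void)
{
    unsigned long base_mfn = 0x820345;  /* frame number of address 0x8,20345000 */

    /* Buggy: ~FIRST_MASK keeps the low 30 bits (it is the inverse of an
     * address mask, applied here to a frame number), so nothing gets aligned. */
    unsigned long buggy = base_mfn & (unsigned long)~FIRST_MASK;

    /* Fixed: 1GB expressed as a count of 4K frames is 0x40000, so round the
     * frame number down to a multiple of that. */
    unsigned long fixed = base_mfn & ~((unsigned long)(FIRST_SIZE >> PAGE_SHIFT) - 1);

    printf("buggy: %#lx\n", buggy);   /* 0x820345 -- unchanged */
    printf("fixed: %#lx\n", fixed);   /* 0x800000 -- a 1GB-aligned frame */
    return 0;
}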
Ian Campbell
2013-Oct-07 11:43 UTC
Re: [PATCH 0/5] xen: arm: fixups for systems with RAM above 4GB
On Mon, 2013-10-07 at 12:37 +0100, Ian Campbell wrote:
> I've been testing this with a hack DTB which uses only the AEM
> fastmodel's 36-bit alias of DRAM and a hacked up boot-wrapper to load
> at the appropriate addresses etc.

I forgot to say that this currently gets as far as loading the dom0 kernel before it fails because the "at s12e1r" instruction used by gva_to_ma_par (and therefore copy_to_user) is failing.

This appears to be because it is truncating the high address given as input to 32 bits as part of the stage 1 translation, and so the stage 2 translation fails because the input IPA is invalid.

I have reported this as a potential model bug because stage 1 translation is not enabled (SCTLR_EL1.M == 0) and therefore IPA<0:47> is supposed to be equal to VA<0:47>.

Ian.
Ian Campbell
2013-Oct-10 08:40 UTC
Re: [PATCH 0/5] xen: arm: fixups for systems with RAM above 4GB
On Mon, 2013-10-07 at 12:43 +0100, Ian Campbell wrote:
> On Mon, 2013-10-07 at 12:37 +0100, Ian Campbell wrote:
> > I've been testing this with a hack DTB which uses only the AEM
> > fastmodel's 36-bit alias of DRAM and a hacked up boot-wrapper to load
> > at the appropriate addresses etc.
> 
> I forgot to say that this currently gets as far as loading the dom0
> kernel before it fails because the "at s12e1r" instruction used by
> gva_to_ma_par (and therefore copy_to_user) is failing.
> 
> This appears to be because it is truncating the high address given as
> input to 32 bits as part of the stage 1 translation, and so the stage 2
> translation fails because the input IPA is invalid.
> 
> I have reported this as a potential model bug because stage 1
> translation is not enabled (SCTLR_EL1.M == 0) and therefore IPA<0:47> is
> supposed to be equal to VA<0:47>.

ARM support very kindly pointed out that HCR_EL2.RW (register-width bit) was configured for a 32-bit EL1, hence the truncation. Setting the bit correctly for a 64-bit EL1 appears to fix this issue; I just need to decide the cleanest way to do it properly.

Ian.
Julien Grall
2013-Oct-21 15:23 UTC
Re: [PATCH 2/5] xen: arm: Enable 40 bit addressing in VTCR for arm64
On 10/07/2013 12:38 PM, Ian Campbell wrote:
> This requires setting the v8 specific VTCR_EL2.PS field. These bits are
> UNK/SBZP on v7.
> 
> Also the T0SZ field is described slightly differently for v8, so update the
> comment to reflect this.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
>  xen/arch/arm/mm.c |   11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0b53200..e48d473 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -386,9 +386,16 @@ void __cpuinit setup_virt_paging(void)
>      /* Setup Stage 2 address translation */
>      /* SH0=00, ORGN0=IRGN0=01
>       * SL0=01 (Level-1)
> -     * T0SZ=(1)1000 = -8 (40 bit physical addresses)
> +     * ARMv7: T0SZ=(1)1000 = -8 (32-(-8) = 40 bit physical addresses)
> +     * ARMv8: T0SZ=01 1000 = 24 (64-24 = 40 bit physical addresses)
> +     * PS=010 == 40 bits
>       */
> -    WRITE_SYSREG32(0x80002558, VTCR_EL2); isb();
> +#ifdef CONFIG_ARM_32
> +    WRITE_SYSREG32(0x80002558, VTCR_EL2);
> +#else
> +    WRITE_SYSREG32(0x80022558, VTCR_EL2);
> +#endif
> +    isb();
>  }
>  
>  static inline lpae_t pte_of_xenaddr(vaddr_t va)
> 

-- 
Julien Grall
Julien Grall
2013-Oct-21 15:29 UTC
Re: [PATCH 4/5] xen: arm: make sure pagetable mask macros have appropriate size
On 10/07/2013 12:38 PM, Ian Campbell wrote:
> {ZEROETH,FIRST,SECOND,THIRD}_MASK are used with physical addresses which may
> be larger than 32 bits. Therefore ensure that they are wide enough by casting
> to paddr_t otherwise we may truncate addresses on 32-bit.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Julien Grall <julien.grall@linaro.org>

-- 
Julien Grall
Julien Grall
2013-Oct-21 15:53 UTC
Re: [PATCH 4/5] xen: arm: make sure pagetable mask macros have appropriate size
On 10/21/2013 04:29 PM, Julien Grall wrote:
> 
> 
> On 10/07/2013 12:38 PM, Ian Campbell wrote:
>> {ZEROETH,FIRST,SECOND,THIRD}_MASK are used with physical addresses
>> which may
>> be larger than 32 bits. Therefore ensure that they are wide enough by
>> casting
>> to paddr_t otherwise we may truncate addresses on 32-bit.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>

I have just noticed that I have acked the wrong version of the patch series. Sorry...

-- 
Julien Grall
Ian Campbell
2013-Oct-22 10:18 UTC
Re: [PATCH 4/5] xen: arm: make sure pagetable mask macros have appropriate size
On Mon, 2013-10-21 at 16:53 +0100, Julien Grall wrote:
> 
> On 10/21/2013 04:29 PM, Julien Grall wrote:
> > 
> > 
> > On 10/07/2013 12:38 PM, Ian Campbell wrote:
> >> {ZEROETH,FIRST,SECOND,THIRD}_MASK are used with physical addresses
> >> which may
> >> be larger than 32 bits. Therefore ensure that they are wide enough by
> >> casting
> >> to paddr_t otherwise we may truncate addresses on 32-bit.
> >>
> >> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Acked-by: Julien Grall <julien.grall@linaro.org>
> 
> I have just noticed that I have acked the wrong version of the patch
> series. Sorry...

No worries, so long as you are happy for me to carry the ack onto v2 (or v3 etc if it comes to that) then I'll do so.

Ian.