Glauber de Oliveira Costa
2007-Nov-09 13:28 UTC
[PATCH 0/24] paravirt_ops for unified x86 - that's me again!
Hey folks,

Here's a new spin of the pvops64 patch series. We didn't get that many
comments from the last time, so it should probably be almost ready to
get in. Heya!

From the last version, the most notable changes are:

* consolidation of system.h, merging Jeremy's comments about ordering
  concerns
* consolidation of smp functions that go through smp_ops. They're
  sharing a bunch of code now.

Other than that, just some issues that arose from the rebase.

Please note that this patch series _does not_ apply over Linus' git
anymore, but rather over the tglx cleanup series.

The first patch in this series is already on Linus', but not on tglx',
so I'm sending it again, because you'll need it if you want to compile
it anyway.

tglx, in the absence of any outstanding NACKs, or any very big call for
improvements, could you please pull it in your tree?

Have fun,
Glauber de Oliveira Costa
2007-Nov-09 13:28 UTC
[PATCH 1/24] mm/sparse-vmemmap.c: make sure init_mm is included
mm/sparse-vmemmap.c uses init_mm in some places. However, it is not
present in any of the headers currently included in the file.

init_mm is defined as extern in sched.h, so we add that header to the
include list.

Up to now, this problem was masked by the fact that functions like
set_pte_at() and pmd_populate_kernel() are usually macros that expand to
simpler variants that do not use the first parameter at all.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/sparse-vmemmap.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index d3b718b..22620f6 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
+#include <linux/sched.h>
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
--
1.4.4.2
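A quick illustration of the masking effect described above: in the
non-paravirt build, set_pte_at() is a macro that throws its mm argument
away before the compiler ever sees it, so a reference to an undeclared
init_mm can hide in plain sight. The snippet below is a standalone
sketch with stand-in macros, not the kernel's real definitions:

#include <stdio.h>

/* Illustrative stand-ins for the kernel macros (not the real types). */
#define set_pte(ptep, pteval)                   (*(ptep) = (pteval))
#define set_pte_at(mm, addr, ptep, pteval)      set_pte(ptep, pteval)

int main(void)
{
        unsigned long pte = 0;

        /*
         * "undeclared_mm" never reaches the compiler: the preprocessor
         * drops the first argument of set_pte_at() entirely. This is
         * how the missing init_mm declaration stayed hidden until a
         * variant of the macro started using its first parameter.
         */
        set_pte_at(&undeclared_mm, 0, &pte, 42UL);
        printf("pte = %lu\n", pte);
        return 0;
}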
Glauber de Oliveira Costa
2007-Nov-09 13:28 UTC
[PATCH 12/24] provide native irq initialization function
The interrupt initialization routine becomes native_init_IRQ and will be
overridden later in case paravirt is on. The interrupt array is made
visible to guests such as lguest, which will need their own
initialization mechanism (though using most of the same irq lines)
later on.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 arch/x86/kernel/i8259_64.c |    7 +++++--
 include/asm-x86/irq_64.h   |    3 +++
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/i8259_64.c b/arch/x86/kernel/i8259_64.c
index 3041e59..53955f4 100644
--- a/arch/x86/kernel/i8259_64.c
+++ b/arch/x86/kernel/i8259_64.c
@@ -77,7 +77,7 @@ BUILD_16_IRQS(0xc) BUILD_16_IRQS(0xd) BUILD_16_IRQS(0xe) BUILD_16_IRQS(0xf)
         IRQ(x,c), IRQ(x,d), IRQ(x,e), IRQ(x,f)

 /* for the irq vectors */
-static void (*interrupt[NR_VECTORS - FIRST_EXTERNAL_VECTOR])(void) = {
+void (*interrupt[NR_VECTORS - FIRST_EXTERNAL_VECTOR])(void) = {
         IRQLIST_16(0x2), IRQLIST_16(0x3),
         IRQLIST_16(0x4), IRQLIST_16(0x5), IRQLIST_16(0x6), IRQLIST_16(0x7),
         IRQLIST_16(0x8), IRQLIST_16(0x9), IRQLIST_16(0xa), IRQLIST_16(0xb),
@@ -456,7 +456,10 @@ void __init init_ISA_irqs (void)
         }
 }

-void __init init_IRQ(void)
+/* Overridden in paravirt.c */
+void init_IRQ(void) __attribute__((weak, alias("native_init_IRQ")));
+
+void __init native_init_IRQ(void)
 {
         int i;

diff --git a/include/asm-x86/irq_64.h b/include/asm-x86/irq_64.h
index 5006c6e..4f02446 100644
--- a/include/asm-x86/irq_64.h
+++ b/include/asm-x86/irq_64.h
@@ -46,6 +46,9 @@ static __inline__ int irq_canonicalize(int irq)
 extern void fixup_irqs(cpumask_t map);
 #endif

+#include <linux/init.h>
+void native_init_IRQ(void);
+
 #define __ARCH_HAS_DO_SOFTIRQ 1

 #endif /* _ASM_IRQ_H */
--
1.4.4.2
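For readers unfamiliar with the trick in the hunk above: init_IRQ is
declared as a weak alias of native_init_IRQ, so the native path is the
default, and a paravirt build can supply a strong init_IRQ that wins at
link time. A minimal userspace sketch of the same pattern (the names
here are illustrative, not from the patch):

#include <stdio.h>

void native_init(void)
{
        puts("native path");
}

/* Weak alias: this definition is only the fallback. */
void init_hw(void) __attribute__((weak, alias("native_init")));

int main(void)
{
        /*
         * Runs native_init() unless another object file defines a
         * strong init_hw(), which would silently take precedence.
         */
        init_hw();
        return 0;
}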
Export the math_state_restore symbol, so it can be used by hypervisors.
They are commonly loaded as modules (lguest being an example).

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 arch/x86/kernel/traps_64.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/traps_64.c b/arch/x86/kernel/traps_64.c
index 4d752a8..0876692 100644
--- a/arch/x86/kernel/traps_64.c
+++ b/arch/x86/kernel/traps_64.c
@@ -1069,6 +1069,7 @@ asmlinkage void math_state_restore(void)
         task_thread_info(me)->status |= TS_USEDFPU;
         me->fpu_counter++;
 }
+EXPORT_SYMBOL_GPL(math_state_restore);

 void __init trap_init(void)
 {
--
1.4.4.2
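To show what this export enables, here is a sketch of a
hypervisor-style module calling the symbol. The module is hypothetical,
and the declaring header is assumed from this tree:

#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/i387.h>           /* assumed to declare math_state_restore() */

static int __init fpu_demo_init(void)
{
        /*
         * A hypervisor module can now hand the FPU back to the current
         * task after a device-not-available (#NM) style exit.
         */
        math_state_restore();
        return 0;
}

static void __exit fpu_demo_exit(void)
{
}

module_init(fpu_demo_init);
module_exit(fpu_demo_exit);
MODULE_LICENSE("GPL");          /* required: the export is EXPORT_SYMBOL_GPL */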
Glauber de Oliveira Costa
2007-Nov-09 13:28 UTC
[PATCH 13/24] report ring kernel is running without paravirt
When paravirtualization is disabled, the kernel is always running at
ring 0, so report that in the appropriate macro.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 include/asm-x86/segment_64.h |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/include/asm-x86/segment_64.h b/include/asm-x86/segment_64.h
index 04b8ab2..240c1bf 100644
--- a/include/asm-x86/segment_64.h
+++ b/include/asm-x86/segment_64.h
@@ -50,4 +50,8 @@
 #define GDT_SIZE (GDT_ENTRIES * 8)
 #define TLS_SIZE (GDT_ENTRY_TLS_ENTRIES * 8)

+#ifndef CONFIG_PARAVIRT
+#define get_kernel_rpl()  0
+#endif
+
 #endif
--
1.4.4.2
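For context, a sketch of where get_kernel_rpl() matters: the RPL is the
low two bits of a segment selector, and code that checks whether a
selector belongs to the kernel must compare it against the ring the
kernel actually runs in, which under paravirt is no longer necessarily
0. The helper below is illustrative, not from the patch:

/* Non-paravirt definition, as added by the hunk above. */
#define get_kernel_rpl()  0

/*
 * Illustrative check: does this selector carry the kernel's privilege
 * level? A paravirt guest running at, say, ring 1 would redefine
 * get_kernel_rpl() and the comparison keeps working unchanged.
 */
static inline int selector_is_kernel(unsigned short sel)
{
        return (sel & 3) == get_kernel_rpl();
}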
Glauber de Oliveira Costa
2007-Nov-09 13:29 UTC
[PATCH 15/24] native versions for set pagetables
This patch turns the set_p{te,md,ud,gd} functions into their native_
versions. There is no need to patch any caller.

Also, it adds pte_update() and pte_update_defer() calls whenever we
modify a page table entry. This last part was coded to match i386 as
closely as possible.

Pieces of the header are moved below the #ifdef CONFIG_PARAVIRT site,
as they are users of the newly defined set_* macros.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 include/asm-x86/pgtable_64.h |  192 ++++++++++++++++++++++++++++--------------
 1 files changed, 128 insertions(+), 64 deletions(-)

diff --git a/include/asm-x86/pgtable_64.h b/include/asm-x86/pgtable_64.h
index 9b0ff47..592d613 100644
--- a/include/asm-x86/pgtable_64.h
+++ b/include/asm-x86/pgtable_64.h
@@ -57,56 +57,107 @@ extern unsigned long empty_zero_page[PAGE_SIZE/sizeof(unsigned long)];
  */
 #define PTRS_PER_PTE    512

-#ifndef __ASSEMBLY__
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt.h>
+#else
+
+#define set_pte native_set_pte
+#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+#define set_pmd native_set_pmd
+#define set_pud native_set_pud
+#define set_pgd native_set_pgd
+#define pte_clear(mm, addr, xp)                 \
+do {                                            \
+        set_pte_at(mm, addr, xp, __pte(0));     \
+} while (0)

-#define pte_ERROR(e) \
-        printk("%s:%d: bad pte %p(%016lx).\n", __FILE__, __LINE__, &(e), pte_val(e))
-#define pmd_ERROR(e) \
-        printk("%s:%d: bad pmd %p(%016lx).\n", __FILE__, __LINE__, &(e), pmd_val(e))
-#define pud_ERROR(e) \
-        printk("%s:%d: bad pud %p(%016lx).\n", __FILE__, __LINE__, &(e), pud_val(e))
-#define pgd_ERROR(e) \
-        printk("%s:%d: bad pgd %p(%016lx).\n", __FILE__, __LINE__, &(e), pgd_val(e))
+#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
+#define pud_clear native_pud_clear
+#define pgd_clear native_pgd_clear
+#define pte_update(mm, addr, ptep) do { } while (0)
+#define pte_update_defer(mm, addr, ptep) do { } while (0)

-#define pgd_none(x)     (!pgd_val(x))
-#define pud_none(x)     (!pud_val(x))
+#endif

-static inline void set_pte(pte_t *dst, pte_t val)
+#ifndef __ASSEMBLY__
+
+static inline void native_set_pte(pte_t *dst, pte_t val)
 {
-        pte_val(*dst) = pte_val(val);
+        dst->pte = pte_val(val);
 }
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)

-static inline void set_pmd(pmd_t *dst, pmd_t val)
+static inline void native_set_pmd(pmd_t *dst, pmd_t val)
 {
-        pmd_val(*dst) = pmd_val(val);
+        dst->pmd = pmd_val(val);
 }

-static inline void set_pud(pud_t *dst, pud_t val)
+static inline void native_set_pud(pud_t *dst, pud_t val)
 {
-        pud_val(*dst) = pud_val(val);
+        dst->pud = pud_val(val);
 }

-static inline void pud_clear (pud_t *pud)
+static inline void native_set_pgd(pgd_t *dst, pgd_t val)
 {
-        set_pud(pud, __pud(0));
+        dst->pgd = pgd_val(val);
 }

-static inline void set_pgd(pgd_t *dst, pgd_t val)
+static inline void native_pud_clear(pud_t *pud)
 {
-        pgd_val(*dst) = pgd_val(val);
-}
+        set_pud(pud, __pud(0));
+}

-static inline void pgd_clear (pgd_t * pgd)
+static inline void native_pgd_clear(pgd_t *pgd)
 {
         set_pgd(pgd, __pgd(0));
 }

-#define ptep_get_and_clear(mm,addr,xp)  __pte(xchg(&(xp)->pte, 0))
+static inline void native_set_pte_at(struct mm_struct *mm, unsigned long addr,
+                                     pte_t *ptep, pte_t pteval)
+{
+        native_set_pte(ptep, pteval);
+}
+
+static inline void native_pte_clear(struct mm_struct *mm, unsigned long addr,
+                                    pte_t *ptep)
+{
+        native_set_pte_at(mm, addr, ptep, __pte(0));
+}
+
+static inline void native_pmd_clear(pmd_t *pmd)
+{
+        native_set_pmd(pmd, __pmd(0));
+}
+
+
+#define pte_ERROR(e) \
+        printk("%s:%d: bad pte %p(%016llx).\n", \
+               __FILE__, __LINE__, &(e), (u64)pte_val(e))
+#define pmd_ERROR(e) \
+        printk("%s:%d: bad pmd %p(%016llx).\n", \
+               __FILE__, __LINE__, &(e), (u64)pmd_val(e))
+#define pud_ERROR(e) \
+        printk("%s:%d: bad pud %p(%016llx).\n", \
+               __FILE__, __LINE__, &(e), (u64)pud_val(e))
+#define pgd_ERROR(e) \
+        printk("%s:%d: bad pgd %p(%016llx).\n", \
+               __FILE__, __LINE__, &(e), (u64)pgd_val(e))
+
+#define pgd_none(x)     (!pgd_val(x))
+#define pud_none(x)     (!pud_val(x))

 struct mm_struct;

-static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, unsigned long addr, pte_t *ptep, int full)
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+                                       unsigned long addr, pte_t *ptep)
+{
+        pte_t pte = __pte(xchg(&ptep->pte, 0));
+        pte_update(mm, addr, ptep);
+        return pte;
+}
+
+static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
+                                            unsigned long addr, pte_t *ptep,
+                                            int full)
 {
         pte_t pte;
         if (full) {
@@ -246,7 +297,6 @@ static inline unsigned long pmd_bad(pmd_t pmd)

 #define pte_none(x)     (!pte_val(x))
 #define pte_present(x)  (pte_val(x) & (_PAGE_PRESENT | _PAGE_PROTNONE))
-#define pte_clear(mm,addr,xp)   do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)

 #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))  /* FIXME: is this right? */

@@ -255,11 +305,11 @@ static inline unsigned long pmd_bad(pmd_t pmd)

 static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
 {
-        pte_t pte;
-        pte_val(pte) = (page_nr << PAGE_SHIFT);
-        pte_val(pte) |= pgprot_val(pgprot);
-        pte_val(pte) &= __supported_pte_mask;
-        return pte;
+        unsigned long pte;
+        pte = (page_nr << PAGE_SHIFT);
+        pte |= pgprot_val(pgprot);
+        pte &= __supported_pte_mask;
+        return __pte(pte);
 }

 /*
@@ -283,30 +333,6 @@ static inline pte_t pte_mkwrite(pte_t pte)      { set_pte(&pte, __pte(pte_val(pte) |
 static inline pte_t pte_mkhuge(pte_t pte)       { set_pte(&pte, __pte(pte_val(pte) | _PAGE_PSE)); return pte; }
 static inline pte_t pte_clrhuge(pte_t pte)      { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_PSE)); return pte; }

-struct vm_area_struct;
-
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
-{
-        if (!pte_young(*ptep))
-                return 0;
-        return test_and_clear_bit(_PAGE_BIT_ACCESSED, &ptep->pte);
-}
-
-static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
-{
-        clear_bit(_PAGE_BIT_RW, &ptep->pte);
-}
-
-/*
- * Macro to mark a page protection value as "uncacheable".
- */
-#define pgprot_noncached(prot)  (__pgprot(pgprot_val(prot) | _PAGE_PCD | _PAGE_PWT))
-
-static inline int pmd_large(pmd_t pte) {
-        return (pmd_val(pte) & __LARGE_PTE) == __LARGE_PTE;
-}
-
-
 /*
  * Conversion functions: convert a page and protection to a page entry,
  * and a page entry and page directory to the page they refer to.
@@ -340,7 +366,6 @@ static inline int pmd_large(pmd_t pte) {
                         pmd_index(address))
 #define pmd_none(x)     (!pmd_val(x))
 #define pmd_present(x)  (pmd_val(x) & _PAGE_PRESENT)
-#define pmd_clear(xp)   do { set_pmd(xp, __pmd(0)); } while (0)
 #define pfn_pmd(nr,prot) (__pmd(((nr) << PAGE_SHIFT) | pgprot_val(prot)))
 #define pmd_pfn(x)  ((pmd_val(x) & __PHYSICAL_MASK) >> PAGE_SHIFT)

@@ -352,15 +377,53 @@ static inline int pmd_large(pmd_t pte) {

 /* page, protection -> pte */
 #define mk_pte(page, pgprot)    pfn_pte(page_to_pfn(page), (pgprot))
-#define mk_pte_huge(entry) (pte_val(entry) |= _PAGE_PRESENT | _PAGE_PSE)
-
+
+static inline pte_t __mk_pte_huge(pte_t entry)
+{
+        unsigned long pte;
+        pte = pte_val(entry);
+        pte |= _PAGE_PRESENT | _PAGE_PSE;
+        return __pte(pte);
+}
+#define mk_pte_huge(entry) ((entry) = __mk_pte_huge(entry))
+
+#include <linux/mm_types.h>
+static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+                                            unsigned long addr, pte_t *ptep)
+{
+        int ret = 0;
+        if (!pte_young(*ptep))
+                return 0;
+        ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, &ptep->pte);
+        pte_update(vma->vm_mm, addr, ptep);
+        return ret;
+}
+
+static inline void ptep_set_wrprotect(struct mm_struct *mm,
+                                      unsigned long addr, pte_t *ptep)
+{
+        clear_bit(_PAGE_BIT_RW, &ptep->pte);
+        pte_update(mm, addr, ptep);
+}
+
+/*
+ * Macro to mark a page protection value as "uncacheable".
+ */
+#define pgprot_noncached(prot)  (__pgprot(pgprot_val(prot) | _PAGE_PCD | _PAGE_PWT))
+
+static inline int pmd_large(pmd_t pte)
+{
+        return (pmd_val(pte) & __LARGE_PTE) == __LARGE_PTE;
+}
+
 /* Change flags of a PTE */
-static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+static inline pte_t pte_modify(pte_t pte_old, pgprot_t newprot)
 {
-        pte_val(pte) &= _PAGE_CHG_MASK;
-        pte_val(pte) |= pgprot_val(newprot);
-        pte_val(pte) &= __supported_pte_mask;
-        return pte;
+        unsigned long pte = pte_val(pte_old);
+        pte &= _PAGE_CHG_MASK;
+        pte |= pgprot_val(newprot);
+        pte &= __supported_pte_mask;
+        return __pte(pte);
 }

 #define pte_index(address) \
@@ -387,6 +450,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
         int __changed = !pte_same(*(__ptep), __entry);                  \
         if (__changed && __dirty) {                                     \
                 set_pte(__ptep, __entry);                               \
+                pte_update_defer((__vma)->vm_mm, (__address), (__ptep)); \
                 flush_tlb_page(__vma, __address);                       \
         }                                                               \
         __changed;                                                      \
--
1.4.4.2
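To make the split above concrete: with CONFIG_PARAVIRT enabled, the
set_* names resolve to hooks in asm/paravirt.h instead of the native_
functions. The structure below is a simplified sketch of that
indirection; the type and field names are illustrative and need not
match the real paravirt_ops layout:

/* Hypothetical, abridged pv hook table for the mmu operations. */
struct pv_mmu_sketch {
        void (*set_pte)(pte_t *ptep, pte_t pteval);
        void (*set_pmd)(pmd_t *pmdp, pmd_t pmdval);
        void (*pte_update)(struct mm_struct *mm, unsigned long addr,
                           pte_t *ptep);
};

extern struct pv_mmu_sketch pv_mmu;  /* filled with native_* on bare metal */

/*
 * What a paravirt set_pte boils down to: one indirect call that a
 * hypervisor backend can intercept to validate or shadow the entry.
 */
static inline void set_pte_sketch(pte_t *ptep, pte_t pteval)
{
        pv_mmu.set_pte(ptep, pteval);
}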
Glauber de Oliveira Costa
2007-Nov-09 13:29 UTC
[PATCH 19/24] turn privileged operation into a macro in head_64.S
Under paravirt, reading cr2 cannot be issued directly anymore. So wrap
it in a macro, defined to the operation itself when paravirt is off,
but to something else when paravirt is in the game.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 arch/x86/kernel/head_64.S |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index b6167fe..c31b1c9 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -19,6 +19,13 @@
 #include <asm/msr.h>
 #include <asm/cache.h>

+#ifdef CONFIG_PARAVIRT
+#include <asm/asm-offsets.h>
+#include <asm/paravirt.h>
+#else
+#define GET_CR2_INTO_RCX movq %cr2, %rcx
+#endif
+
 /* we are not able to switch in one step to the final KERNEL ADRESS SPACE
  * because we need identity-mapped pages.
  *
@@ -267,7 +274,7 @@ ENTRY(early_idt_handler)
         xorl %eax,%eax
         movq 8(%rsp),%rsi       # get rip
         movq (%rsp),%rdx
-        movq %cr2,%rcx
+        GET_CR2_INTO_RCX
         leaq early_idt_msg(%rip),%rdi
         call early_printk
         cmpl $2,early_recursion_flag(%rip)
--
1.4.4.2
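A note on why the wrapper is needed: on bare metal, reading %cr2 is a
single privileged instruction, but under a hypervisor the register is
virtualized per guest, so the read has to go through a paravirt hook. A
C-level analogue of the same idea (the function pointer here is
illustrative; the real redirection happens via asm-offsets inside the
macro):

static unsigned long native_read_cr2(void)
{
        unsigned long val;

        /* The direct, bare-metal read the macro expands to. */
        asm volatile("movq %%cr2, %0" : "=r" (val));
        return val;
}

/*
 * A paravirt build would point this at the hypervisor's per-guest copy
 * of cr2 instead of the hardware register.
 */
static unsigned long (*read_cr2_hook)(void) = native_read_cr2;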
With paravirtualization, hypervisors need to handle the gdt, which up
to this point was only used by very early initialization code.
Hypervisors (lguest being the current case) are commonly modules, so
make it an export.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 arch/x86/kernel/x8664_ksyms_64.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 105712e..f97aed4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -8,6 +8,7 @@
 #include <asm/processor.h>
 #include <asm/uaccess.h>
 #include <asm/pgtable.h>
+#include <asm/desc.h>

 EXPORT_SYMBOL(kernel_thread);

@@ -51,3 +52,8 @@ EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(load_gs_index);

 EXPORT_SYMBOL(_proxy_pda);
+
+#ifdef CONFIG_PARAVIRT
+/* Virtualized guests may want to use it */
+EXPORT_SYMBOL_GPL(cpu_gdt_descr);
+#endif
--
1.4.4.2
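As a usage sketch, a guest module could consume the export like this.
The module is hypothetical, and the descriptor's size/address field
layout is assumed from this tree:

#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/desc.h>

static int __init gdt_demo_init(void)
{
        /*
         * cpu_gdt_descr[0] describes the boot CPU's GDT: a limit and a
         * linear base address, exactly what lgdt consumes.
         */
        printk(KERN_INFO "GDT base %lx, limit %u\n",
               (unsigned long)cpu_gdt_descr[0].address,
               (unsigned int)cpu_gdt_descr[0].size);
        return 0;
}

module_init(gdt_demo_init);
MODULE_LICENSE("GPL");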
We need something here because we can't call the in and out
instructions directly. However, we have to be careful, because no
indirections are allowed in misc_64.c, and paravirt_ops is a kind of
one. So just call it directly there.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 arch/x86/boot/compressed/misc_64.c |    6 +++++
 include/asm-x86/io_64.h            |   37 +++++++++++++++++++++++++++++------
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/x86/boot/compressed/misc_64.c b/arch/x86/boot/compressed/misc_64.c
index 6ea015a..6640a17 100644
--- a/arch/x86/boot/compressed/misc_64.c
+++ b/arch/x86/boot/compressed/misc_64.c
@@ -9,6 +9,12 @@
  * High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
  */

+/*
+ * we have to be careful, because no indirections are allowed here, and
+ * paravirt_ops is a kind of one. As it will only run in baremetal anyway,
+ * we just keep it from happening
+ */
+#undef CONFIG_PARAVIRT
 #define _LINUX_STRING_H_ 1
 #define __LINUX_BITMAP_H 1

diff --git a/include/asm-x86/io_64.h b/include/asm-x86/io_64.h
index a037b07..57fcdd9 100644
--- a/include/asm-x86/io_64.h
+++ b/include/asm-x86/io_64.h
@@ -35,12 +35,24 @@
  *  - Arnaldo Carvalho de Melo <acme@conectiva.com.br>
  */

-#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
+static inline void native_io_delay(void)
+{
+        asm volatile("outb %%al,$0x80" : : : "memory");
+}

-#ifdef REALLY_SLOW_IO
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
+#if defined(CONFIG_PARAVIRT)
+#include <asm/paravirt.h>
 #else
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
+
+static inline void slow_down_io(void)
+{
+        native_io_delay();
+#ifdef REALLY_SLOW_IO
+        native_io_delay();
+        native_io_delay();
+        native_io_delay();
+#endif
+}
 #endif

 /*
@@ -52,9 +64,15 @@ static inline void out##s(unsigned x value, unsigned short port) {
 #define __OUT2(s,s1,s2) \
 __asm__ __volatile__ ("out" #s " %" s1 "0,%" s2 "1"

+#ifndef REALLY_SLOW_IO
+#define REALLY_SLOW_IO
+#define UNSET_REALLY_SLOW_IO
+#endif
+
 #define __OUT(s,s1,x) \
 __OUT1(s,x) __OUT2(s,s1,"w") : : "a" (value), "Nd" (port)); } \
-__OUT1(s##_p,x) __OUT2(s,s1,"w") __FULL_SLOW_DOWN_IO : : "a" (value), "Nd" (port));} \
+__OUT1(s##_p, x) __OUT2(s, s1, "w") : : "a" (value), "Nd" (port)); \
+        slow_down_io(); }

 #define __IN1(s) \
 static inline RETURN_TYPE in##s(unsigned short port) { RETURN_TYPE _v;
@@ -63,8 +81,13 @@ static inline RETURN_TYPE in##s(unsigned short port) { RETURN_TYPE _v;
 __asm__ __volatile__ ("in" #s " %" s2 "1,%" s1 "0"

 #define __IN(s,s1,i...) \
-__IN1(s) __IN2(s,s1,"w") : "=a" (_v) : "Nd" (port) ,##i ); return _v; } \
-__IN1(s##_p) __IN2(s,s1,"w") __FULL_SLOW_DOWN_IO : "=a" (_v) : "Nd" (port) ,##i ); return _v; } \
+__IN1(s) __IN2(s, s1, "w") : "=a" (_v) : "Nd" (port), ##i); return _v; } \
+__IN1(s##_p) __IN2(s, s1, "w") : "=a" (_v) : "Nd" (port), ##i); \
+        slow_down_io(); return _v; }
+
+#ifdef UNSET_REALLY_SLOW_IO
+#undef REALLY_SLOW_IO
+#endif

 #define __INS(s) \
 static inline void ins##s(unsigned short port, void * addr, unsigned long count) \
--
1.4.4.2
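For background on the delay being wrapped: a write to port 0x80 (the
POST diagnostic port) is side-effect free and takes on the order of a
microsecond on ISA-class hardware, which is why it doubles as an I/O
delay; REALLY_SLOW_IO simply fans it out four times. An illustrative
condensation of what slow_down_io() does on the native side:

static inline void io_delay_sketch(int really_slow)
{
        /*
         * 1 write normally, 4 when REALLY_SLOW_IO is defined, matching
         * the #ifdef fan-out in the hunk above.
         */
        int reps = really_slow ? 4 : 1;

        while (reps--)
                asm volatile("outb %%al, $0x80" : : : "memory");
}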
Glauber de Oliveira Costa
2007-Nov-09 14:04 UTC
[PATCH 21/24] native versions for page table entry values
This patch turns the page operations (set and make a page table entry)
into native_ versions. The operations themselves will later be
overridden by paravirt.

It uses unsigned long long for consistency with 32-bit. So we have to
fix fault_64.c to get rid of warnings.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
---
 arch/x86/mm/fault_64.c    |    8 +++---
 include/asm-x86/page_64.h |   56 +++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 55 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/fault_64.c b/arch/x86/mm/fault_64.c
index 161c0d1..86b7307 100644
--- a/arch/x86/mm/fault_64.c
+++ b/arch/x86/mm/fault_64.c
@@ -157,22 +157,22 @@ void dump_pagetable(unsigned long address)
         pgd = __va((unsigned long)pgd & PHYSICAL_PAGE_MASK);
         pgd += pgd_index(address);
         if (bad_address(pgd)) goto bad;
-        printk("PGD %lx ", pgd_val(*pgd));
+        printk("PGD %llx ", pgd_val(*pgd));
         if (!pgd_present(*pgd)) goto ret;

         pud = pud_offset(pgd, address);
         if (bad_address(pud)) goto bad;
-        printk("PUD %lx ", pud_val(*pud));
+        printk("PUD %llx ", pud_val(*pud));
         if (!pud_present(*pud)) goto ret;

         pmd = pmd_offset(pud, address);
         if (bad_address(pmd)) goto bad;
-        printk("PMD %lx ", pmd_val(*pmd));
+        printk("PMD %llx ", pmd_val(*pmd));
         if (!pmd_present(*pmd) || pmd_large(*pmd)) goto ret;

         pte = pte_offset_kernel(pmd, address);
         if (bad_address(pte)) goto bad;
-        printk("PTE %lx", pte_val(*pte));
+        printk("PTE %llx", pte_val(*pte));
 ret:
         printk("\n");
         return;
diff --git a/include/asm-x86/page_64.h b/include/asm-x86/page_64.h
index 6fdc904..b8da60c 100644
--- a/include/asm-x86/page_64.h
+++ b/include/asm-x86/page_64.h
@@ -65,16 +65,62 @@ typedef struct { unsigned long pgprot; } pgprot_t;

 extern unsigned long phys_base;

-#define pte_val(x)      ((x).pte)
-#define pmd_val(x)      ((x).pmd)
-#define pud_val(x)      ((x).pud)
-#define pgd_val(x)      ((x).pgd)
-#define pgprot_val(x)   ((x).pgprot)
+static inline unsigned long long native_pte_val(pte_t pte)
+{
+        return pte.pte;
+}
+
+static inline unsigned long long native_pud_val(pud_t pud)
+{
+        return pud.pud;
+}
+
+
+static inline unsigned long long native_pmd_val(pmd_t pmd)
+{
+        return pmd.pmd;
+}
+
+static inline unsigned long long native_pgd_val(pgd_t pgd)
+{
+        return pgd.pgd;
+}
+
+static inline pte_t native_make_pte(unsigned long long pte)
+{
+        return (pte_t){ pte };
+}
+
+static inline pud_t native_make_pud(unsigned long long pud)
+{
+        return (pud_t){ pud };
+}
+
+static inline pmd_t native_make_pmd(unsigned long long pmd)
+{
+        return (pmd_t){ pmd };
+}
+
+static inline pgd_t native_make_pgd(unsigned long long pgd)
+{
+        return (pgd_t){ pgd };
+}
+
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt.h>
+#else
+#define pte_val(x)      native_pte_val(x)
+#define pmd_val(x)      native_pmd_val(x)
+#define pud_val(x)      native_pud_val(x)
+#define pgd_val(x)      native_pgd_val(x)

 #define __pte(x) ((pte_t) { (x) } )
 #define __pmd(x) ((pmd_t) { (x) } )
 #define __pud(x) ((pud_t) { (x) } )
 #define __pgd(x) ((pgd_t) { (x) } )
+#endif /* CONFIG_PARAVIRT */
+
+#define pgprot_val(x)   ((x).pgprot)
 #define __pgprot(x)     ((pgprot_t) { (x) } )

 #endif /* !__ASSEMBLY__ */
--
1.4.4.2
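As motivation for hooking even the accessors: a hypervisor may keep
page table entries in a different representation than the kernel
expects, so reading a value back can require translation. The example
below is a hypothetical Xen-flavoured backend; machine_to_phys() is an
assumed helper, not something this patch introduces:

static unsigned long long xen_style_pte_val(pte_t pte)
{
        unsigned long long raw = pte.pte;

        /*
         * Live page tables hold machine frame numbers; translate back
         * to the guest's pseudo-physical view before the kernel
         * decodes the entry.
         */
        if (raw & _PAGE_PRESENT)
                raw = machine_to_phys(raw);     /* assumed helper */
        return raw;
}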
Jeremy Fitzhardinge
2007-Nov-09 15:01 UTC
[PATCH 1/24] mm/sparse-vmemmap.c: make sure init_mm is included
Glauber de Oliveira Costa wrote:
> mm/sparse-vmemmap.c uses init_mm in some places. However, it is not
> present in any of the headers currently included in the file.
>
> init_mm is defined as extern in sched.h, so we add that header to the
> include list.
>
> Up to now, this problem was masked by the fact that functions like
> set_pte_at() and pmd_populate_kernel() are usually macros that expand to
> simpler variants that do not use the first parameter at all.
>
> Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> ---
>  mm/sparse-vmemmap.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index d3b718b..22620f6 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -24,6 +24,7 @@
>  #include <linux/module.h>
>  #include <linux/spinlock.h>
>  #include <linux/vmalloc.h>
> +#include <linux/sched.h>
>

This is already in git.

    J
Amit Shah
2007-Nov-12 00:17 UTC
[kvm-devel] [PATCH 0/24] paravirt_ops for unified x86 - that's me again!
On Saturday 10 November 2007 00:12:41 Glauber de Oliveira Costa wrote:
> Hey folks,
>
> Here's a new spin of the pvops64 patch series. We didn't get that many
> comments from the last time, so it should probably be almost ready to
> get in. Heya!
>
> From the last version, the most notable changes are:
>
> * consolidation of system.h, merging Jeremy's comments about ordering
>   concerns
> * consolidation of smp functions that go through smp_ops. They're
>   sharing a bunch of code now.
>
> Other than that, just some issues that arose from the rebase.
>
> Please note that this patch series _does not_ apply over Linus' git
> anymore, but rather over the tglx cleanup series.
>
> The first patch in this series is already on Linus', but not on tglx',
> so I'm sending it again, because you'll need it if you want to compile
> it anyway.
>
> tglx, in the absence of any outstanding NACKs, or any very big call for
> improvements, could you please pull it in your tree?
>
> Have fun,

Glauber, are you planning on consolidating the dma_ops structure for 32-
and 64-bit? 32-bit doesn't currently have a dma_mapping_ops structure,
which makes paravirtualizing DMA access difficult on 32-bit.
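For context on Amit's point, an abridged sketch of the 64-bit
dma_mapping_ops of that era that 32-bit lacked (field list shortened
and from memory; see include/asm-x86_64/dma-mapping.h on that tree for
the authoritative version):

struct dma_mapping_ops {
        void *(*alloc_coherent)(struct device *dev, size_t size,
                                dma_addr_t *dma_handle, gfp_t gfp);
        void (*free_coherent)(struct device *dev, size_t size,
                              void *vaddr, dma_addr_t dma_handle);
        dma_addr_t (*map_single)(struct device *hwdev, void *ptr,
                                 size_t size, int direction);
        void (*unmap_single)(struct device *dev, dma_addr_t addr,
                             size_t size, int direction);
        /* ... map_sg/unmap_sg and the sync hooks elided ... */
};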