This patch series includes a number of bug fixes, previously submitted or newly found, for the mm layer. Patches 1 and 7 are tools patches; the tool maintainers are cc'ed. Everything else is hypervisor x86/mm. Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org> Signed-off-by: Adin Scannell <adin@scannell.ca> tools/libxc/xc_domain.c | 2 + xen/arch/x86/mm/shadow/common.c | 6 +- xen/arch/x86/mm/shadow/multi.c | 2 +- xen/arch/x86/mm/shadow/private.h | 7 + xen/arch/x86/mm/paging.c | 2 +- xen/include/asm-x86/page.h | 1 + xen/include/asm-x86/x86_32/page.h | 11 +++ xen/include/asm-x86/x86_64/page.h | 5 + xen/arch/x86/hvm/hvm.c | 44 +++++++---- xen/arch/x86/hvm/svm/nestedsvm.c | 6 - xen/arch/x86/hvm/vmx/vvmx.c | 17 +--- xen/arch/x86/domctl.c | 30 ++++++++ xen/arch/x86/mm/p2m-ept.c | 1 + xen/arch/x86/mm/p2m-pod.c | 5 - xen/arch/x86/mm/p2m-pt.c | 137 ++++++------------------------- xen/arch/x86/mm/p2m.c | 124 +++++++++++++++++++++++++++++--- xen/include/asm-x86/p2m.h | 11 +- xen/include/public/domctl.h | 12 +++ tools/libxc/xc_domain.c | 22 ++++++ tools/libxc/xenctrl.h | 27 +++++++ xen/arch/x86/mm.c | 3 +- xen/arch/x86/mm/mem_sharing.c | 3 + xen/include/asm-x86/p2m.h | 3 +- 23 files changed, 302 insertions(+), 179 deletions(-)
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 1 of 9] Tools: When passing no bitmap for the shadow log dirty bitmap clean up, we should not get EFAULT
tools/libxc/xc_domain.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) This is due to a stale check for guest_handle_null in the hypervisor, which doesn't necessarily work with the hypercall buffers. Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org> diff -r 050e8684b6c8 -r 47bfc8f8c0be tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c +++ b/tools/libxc/xc_domain.c @@ -430,6 +430,8 @@ int xc_shadow_control(xc_interface *xch, DECLARE_DOMCTL; DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap); + memset(&domctl, 0, sizeof(domctl)); + domctl.cmd = XEN_DOMCTL_shadow_op; domctl.domain = (domid_t)domid; domctl.u.shadow_op.op = sop;
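The root cause is that DECLARE_DOMCTL leaves the domctl struct uninitialized on the caller's stack, so stale bytes in the union can masquerade as a non-null bitmap handle on the hypervisor side. A minimal standalone sketch of the failure mode and the memset fix; the struct and function names here are mock stand-ins, not the real libxc/Xen declarations:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mock of a hypercall argument struct: a union whose unused members
 * must read as zero/NULL for "no bitmap supplied" to be detectable. */
struct mock_domctl {
    uint32_t cmd;
    union {
        struct { void *dirty_bitmap; uint64_t pages; } shadow_op;
        uint8_t pad[128];
    } u;
};

/* Mock hypervisor-side check: a NULL handle means "no bitmap". */
static int bitmap_supplied(const struct mock_domctl *dc)
{
    return dc->u.shadow_op.dirty_bitmap != NULL;
}

/* Caller that zeroes the struct first, as the patch does, so unused
 * union members cannot carry stack garbage into the check above. */
static void prepare_domctl(struct mock_domctl *dc, uint32_t cmd)
{
    memset(dc, 0, sizeof(*dc));
    dc->cmd = cmd;
}
```

An initializer such as `= { 0 }` at the declaration would achieve the same effect; the explicit memset mirrors the patch.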
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 2 of 9] x86/mm: Don't trigger unnecessary shadow scans on p2m entry update
xen/arch/x86/mm/shadow/common.c | 6 ++---- xen/arch/x86/mm/shadow/multi.c | 2 +- xen/arch/x86/mm/shadow/private.h | 7 +++++++ 3 files changed, 10 insertions(+), 5 deletions(-) When updating a p2m entry, the hypervisor scans all shadow pte's to find mappings of that gfn and tear them down. This is avoided if the page count reveals that there are no additional mappings. The current test ignores the PGC_allocated flag and its effect on the page count. Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org> Signed-off-by: Adin Scannell <adin@scannell.ca> diff -r 47bfc8f8c0be -r 1b241f984167 xen/arch/x86/mm/shadow/common.c --- a/xen/arch/x86/mm/shadow/common.c +++ b/xen/arch/x86/mm/shadow/common.c @@ -2464,7 +2464,6 @@ int sh_remove_write_access_from_sl1p(str int sh_remove_all_mappings(struct vcpu *v, mfn_t gmfn) { struct page_info *page = mfn_to_page(gmfn); - int expected_count; /* Dispatch table for getting per-type functions */ static const hash_callback_t callbacks[SH_type_unused] = { @@ -2501,7 +2500,7 @@ int sh_remove_all_mappings(struct vcpu * ; perfc_incr(shadow_mappings); - if ( (page->count_info & PGC_count_mask) == 0 ) + if ( __check_page_no_refs(page) ) return 0; /* Although this is an externally visible function, we do not know @@ -2517,8 +2516,7 @@ int sh_remove_all_mappings(struct vcpu * hash_foreach(v, callback_mask, callbacks, gmfn); /* If that didn't catch the mapping, something is very wrong */ - expected_count = (page->count_info & PGC_allocated) ? 1 : 0; - if ( (page->count_info & PGC_count_mask) != expected_count ) + if ( !__check_page_no_refs(page) ) { /* Don't complain if we're in HVM and there are some extra mappings: * The qemu helper process has an untyped mapping of this dom's RAM diff -r 47bfc8f8c0be -r 1b241f984167 xen/arch/x86/mm/shadow/multi.c --- a/xen/arch/x86/mm/shadow/multi.c +++ b/xen/arch/x86/mm/shadow/multi.c @@ -4591,7 +4591,7 @@ int sh_rm_mappings_from_l1(struct vcpu * { (void) shadow_set_l1e(v, sl1e, shadow_l1e_empty(), p2m_invalid, sl1mfn); - if ( (mfn_to_page(target_mfn)->count_info & PGC_count_mask) == 0 ) + if ( __check_page_no_refs(mfn_to_page(target_mfn)) ) /* This breaks us cleanly out of the FOREACH macro */ done = 1; } diff -r 47bfc8f8c0be -r 1b241f984167 xen/arch/x86/mm/shadow/private.h --- a/xen/arch/x86/mm/shadow/private.h +++ b/xen/arch/x86/mm/shadow/private.h @@ -30,6 +30,7 @@ #include <xen/domain_page.h> #include <asm/x86_emulate.h> #include <asm/hvm/support.h> +#include <asm/atomic.h> #include "../mm-locks.h" @@ -815,6 +816,12 @@ static inline unsigned long vtlb_lookup( } #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB) */ +static inline int __check_page_no_refs(struct page_info *page) +{ + unsigned long __count = read_atomic(&page->count_info); + return ( (__count & PGC_count_mask) == ((__count & PGC_allocated) ? 1 : 0) ); +} #endif /* _XEN_SHADOW_PRIVATE_H */
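The corrected predicate, factored into __check_page_no_refs above, can be exercised in isolation: a page that still has PGC_allocated set holds one reference of its own, so "no other mappings" means a count of exactly one, not zero. A standalone sketch with mock flag values (the real PGC_* constants live in Xen's asm-x86 headers):

```c
#include <assert.h>

/* Mock versions of the page flags; values are illustrative only. */
#define MOCK_PGC_ALLOCATED  (1ul << 31)
#define MOCK_PGC_COUNT_MASK ((1ul << 26) - 1)

/* Sketch of the logic __check_page_no_refs encodes: the expected
 * reference count for a page with no extra mappings is 1 if the
 * PGC_allocated flag is set (the allocation itself holds a ref),
 * and 0 otherwise. */
static int page_has_no_extra_refs(unsigned long count_info)
{
    unsigned long expected = (count_info & MOCK_PGC_ALLOCATED) ? 1 : 0;
    return (count_info & MOCK_PGC_COUNT_MASK) == expected;
}
```

The old test compared the count against a bare zero, so every allocated page looked like it still had a mapping and triggered a full shadow hash walk.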
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 3 of 9] x86/mm: Don't lose track of the log dirty bitmap
xen/arch/x86/mm/paging.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) hap_log_dirty_init unconditionally sets the top of the log dirty bitmap to INVALID_MFN. If there had been a bitmap allocated, it is then leaked, and the host crashes on an ASSERT when the domain is cleaned up. Fixing it here. Signed-off-by: Tim Deegan <tim@xen.org> Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org> diff -r 1b241f984167 -r bea03a7fe212 xen/arch/x86/mm/paging.c --- a/xen/arch/x86/mm/paging.c +++ b/xen/arch/x86/mm/paging.c @@ -595,7 +595,6 @@ void paging_log_dirty_init(struct domain d->arch.paging.log_dirty.enable_log_dirty = enable_log_dirty; d->arch.paging.log_dirty.disable_log_dirty = disable_log_dirty; d->arch.paging.log_dirty.clean_dirty_bitmap = clean_dirty_bitmap; - d->arch.paging.log_dirty.top = _mfn(INVALID_MFN); } /* This function fress log dirty bitmap resources. */ @@ -617,6 +616,7 @@ int paging_domain_init(struct domain *d, mm_lock_init(&d->arch.paging.lock); + d->arch.paging.log_dirty.top = _mfn(INVALID_MFN); /* The order of the *_init calls below is important, as the later * ones may rewrite some common fields. Shadow pagetables are the * default... */
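The pattern behind the fix: initialization code that can run more than once must not clobber a field that may already point at a live allocation; one-time resets belong in the domain-init path that runs exactly once. A deliberately contrived standalone sketch of the leak and the fix, using mock types (a heap pointer stands in for the log-dirty top mfn):

```c
#include <assert.h>
#include <stdlib.h>

/* Mock of the paging state: 'top' plays the role of log_dirty.top,
 * with NULL standing in for INVALID_MFN. */
struct mock_paging { void *top; };

/* Buggy pattern: mode init runs on every (re)enable and wipes 'top',
 * leaking whatever bitmap page it pointed to. */
static void mode_init_buggy(struct mock_paging *p) { p->top = NULL; }

/* Fixed pattern: the field is reset exactly once at domain init,
 * before any bitmap can have been allocated; later mode inits
 * leave it alone. */
static void domain_init_fixed(struct mock_paging *p) { p->top = NULL; }
static void mode_init_fixed(struct mock_paging *p) { (void)p; }
```

In the patch this corresponds to moving the `top = _mfn(INVALID_MFN)` assignment from paging_log_dirty_init into paging_domain_init.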
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 4 of 9] x86: Add conversion from a xen map to an mfn
xen/include/asm-x86/page.h | 1 + xen/include/asm-x86/x86_32/page.h | 11 +++++++++++ xen/include/asm-x86/x86_64/page.h | 5 +++++ 3 files changed, 17 insertions(+), 0 deletions(-) This conversion is a trivial invocation of virt_to_mfn in 64 bits. In 32 bits it uses the linear_map. Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org> diff -r bea03a7fe212 -r 42e6ceb8138d xen/include/asm-x86/page.h --- a/xen/include/asm-x86/page.h +++ b/xen/include/asm-x86/page.h @@ -272,6 +272,7 @@ void copy_page_sse2(void *, const void * #define pfn_to_paddr(pfn) __pfn_to_paddr(pfn) #define paddr_to_pfn(pa) __paddr_to_pfn(pa) #define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa)) +#define xen_map_to_mfn(va) __xen_map_to_mfn(va) #endif /* !defined(__ASSEMBLY__) */ diff -r bea03a7fe212 -r 42e6ceb8138d xen/include/asm-x86/x86_32/page.h --- a/xen/include/asm-x86/x86_32/page.h +++ b/xen/include/asm-x86/x86_32/page.h @@ -71,6 +71,17 @@ static inline void *__maddr_to_virt(unsi return (void *)(ma + DIRECTMAP_VIRT_START); } +static inline unsigned long __xen_map_to_mfn(void *va) +{ + l1_pgentry_t *l1e; + + ASSERT( (((unsigned long) va) >= MAPCACHE_VIRT_START) && + (((unsigned long) va) <= MAPCACHE_VIRT_END) ); + l1e = &__linear_l1_table[ + l1_linear_offset((unsigned long) va)]; + return l1e_get_pfn(*l1e); +} + /* read access (should only be used for debug printk's) */ typedef u64 intpte_t; #define PRIpte "016llx" diff -r bea03a7fe212 -r 42e6ceb8138d xen/include/asm-x86/x86_64/page.h --- a/xen/include/asm-x86/x86_64/page.h +++ b/xen/include/asm-x86/x86_64/page.h @@ -104,6 +104,11 @@ static inline void *__maddr_to_virt(unsi ((ma & ma_top_mask) >> pfn_pdx_hole_shift))); } +static inline unsigned long __xen_map_to_mfn(void *va) +{ + return (__virt_to_maddr((unsigned long) va) >> PAGE_SHIFT); +} + /* read access (should only be used for debug printk's) */ typedef u64 intpte_t; #define PRIpte "016lx"
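On 64 bits the new helper reduces to address arithmetic: a direct-map virtual address minus the direct-map base is a machine address, and shifting by PAGE_SHIFT yields the mfn. A standalone sketch of that arithmetic with illustrative constants (the real Xen base involves pdx compression, and the 32-bit flavour must walk the linear page tables instead):

```c
#include <assert.h>
#include <stdint.h>

#define MOCK_PAGE_SHIFT           12
/* Illustrative direct-map base, not the real Xen layout. */
#define MOCK_DIRECTMAP_VIRT_START 0xffff830000000000ull

/* Simplified 64-bit flavour of __xen_map_to_mfn: subtract the
 * direct-map offset to get a machine address, shift to get the mfn.
 * Any offset within the page is discarded by the shift. */
static uint64_t mock_xen_map_to_mfn(uint64_t va)
{
    uint64_t maddr = va - MOCK_DIRECTMAP_VIRT_START;
    return maddr >> MOCK_PAGE_SHIFT;
}
```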
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 5 of 9] x86/mm: Ensure maps used by nested hvm code cannot be paged out
xen/arch/x86/hvm/hvm.c | 44 ++++++++++++++++++++++++--------------- xen/arch/x86/hvm/svm/nestedsvm.c | 6 ----- xen/arch/x86/hvm/vmx/vvmx.c | 13 ++--------- 3 files changed, 30 insertions(+), 33 deletions(-) The nested hvm code maps pages of the guest hvm. These maps live beyond a hypervisor entry/exit pair, and thus their liveness cannot be ensured with get_gfn/put_gfn critical sections. Ensure their liveness by increasing the page ref count, instead. Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org> diff -r 42e6ceb8138d -r b2097819f2d9 xen/arch/x86/hvm/hvm.c --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -1801,11 +1801,14 @@ int hvm_virtual_to_linear_addr( return 0; } -/* We leave this function holding a lock on the p2m entry */ +/* On non-NULL return, we leave this function holding an additional + * ref on the underlying mfn, if any */ static void *__hvm_map_guest_frame(unsigned long gfn, bool_t writable) { + void *map; unsigned long mfn; p2m_type_t p2mt; + struct page_info *pg; struct domain *d = current->domain; mfn = mfn_x(writable @@ -1828,7 +1831,12 @@ static void *__hvm_map_guest_frame(unsig if ( writable ) paging_mark_dirty(d, mfn); - return map_domain_page(mfn); + pg = mfn_to_page(mfn); + page_get_owner_and_reference(pg); + + map = map_domain_page(mfn); + put_gfn(d, gfn); + return map; } void *hvm_map_guest_frame_rw(unsigned long gfn) @@ -1844,11 +1852,16 @@ void *hvm_map_guest_frame_ro(unsigned lo void hvm_unmap_guest_frame(void *p) { if ( p ) + { + unsigned long mfn = xen_map_to_mfn(p); unmap_domain_page(p); + put_page(mfn_to_page(mfn)); + } } -static void *hvm_map_entry(unsigned long va, unsigned long *gfn) +static void *hvm_map_entry(unsigned long va) { + unsigned long gfn; uint32_t pfec; char *v; @@ -1865,11 +1878,11 @@ static void *hvm_map_entry(unsigned long * treat it as a kernel-mode read (i.e. no access checks). 
*/ pfec = PFEC_page_present; - *gfn = paging_gva_to_gfn(current, va, &pfec); + gfn = paging_gva_to_gfn(current, va, &pfec); if ( (pfec == PFEC_page_paged) || (pfec == PFEC_page_shared) ) goto fail; - v = hvm_map_guest_frame_rw(*gfn); + v = hvm_map_guest_frame_rw(gfn); if ( v == NULL ) goto fail; @@ -1880,11 +1893,9 @@ static void *hvm_map_entry(unsigned long return NULL; } -static void hvm_unmap_entry(void *p, unsigned long gfn) +static void hvm_unmap_entry(void *p) { hvm_unmap_guest_frame(p); - if ( p && (gfn != INVALID_GFN) ) - put_gfn(current->domain, gfn); } static int hvm_load_segment_selector( @@ -1896,7 +1907,6 @@ static int hvm_load_segment_selector( int fault_type = TRAP_invalid_tss; struct cpu_user_regs *regs = guest_cpu_user_regs(); struct vcpu *v = current; - unsigned long pdesc_gfn = INVALID_GFN; if ( regs->eflags & X86_EFLAGS_VM ) { @@ -1930,7 +1940,7 @@ static int hvm_load_segment_selector( if ( ((sel & 0xfff8) + 7) > desctab.limit ) goto fail; - pdesc = hvm_map_entry(desctab.base + (sel & 0xfff8), &pdesc_gfn); + pdesc = hvm_map_entry(desctab.base + (sel & 0xfff8)); if ( pdesc == NULL ) goto hvm_map_fail; @@ -1990,7 +2000,7 @@ static int hvm_load_segment_selector( desc.b |= 0x100; skip_accessed_flag: - hvm_unmap_entry(pdesc, pdesc_gfn); + hvm_unmap_entry(pdesc); segr.base = (((desc.b << 0) & 0xff000000u) | ((desc.b << 16) & 0x00ff0000u) | @@ -2006,7 +2016,7 @@ static int hvm_load_segment_selector( return 0; unmap_and_fail: - hvm_unmap_entry(pdesc, pdesc_gfn); + hvm_unmap_entry(pdesc); fail: hvm_inject_exception(fault_type, sel & 0xfffc, 0); hvm_map_fail: @@ -2021,7 +2031,7 @@ void hvm_task_switch( struct cpu_user_regs *regs = guest_cpu_user_regs(); struct segment_register gdt, tr, prev_tr, segr; struct desc_struct *optss_desc = NULL, *nptss_desc = NULL, tss_desc; - unsigned long eflags, optss_gfn = INVALID_GFN, nptss_gfn = INVALID_GFN; + unsigned long eflags; int exn_raised, rc; struct { u16 back_link,__blh; @@ -2047,11 +2057,11 @@ void 
hvm_task_switch( goto out; } - optss_desc = hvm_map_entry(gdt.base + (prev_tr.sel & 0xfff8), &optss_gfn); + optss_desc = hvm_map_entry(gdt.base + (prev_tr.sel & 0xfff8)); if ( optss_desc == NULL ) goto out; - nptss_desc = hvm_map_entry(gdt.base + (tss_sel & 0xfff8), &nptss_gfn); + nptss_desc = hvm_map_entry(gdt.base + (tss_sel & 0xfff8)); if ( nptss_desc == NULL ) goto out; @@ -2216,8 +2226,8 @@ void hvm_task_switch( } out: - hvm_unmap_entry(optss_desc, optss_gfn); - hvm_unmap_entry(nptss_desc, nptss_gfn); + hvm_unmap_entry(optss_desc); + hvm_unmap_entry(nptss_desc); } #define HVMCOPY_from_guest (0u<<0) diff -r 42e6ceb8138d -r b2097819f2d9 xen/arch/x86/hvm/svm/nestedsvm.c --- a/xen/arch/x86/hvm/svm/nestedsvm.c +++ b/xen/arch/x86/hvm/svm/nestedsvm.c @@ -81,10 +81,6 @@ int nestedsvm_vmcb_map(struct vcpu *v, u if (nv->nv_vvmcx == NULL) return 0; nv->nv_vvmcxaddr = vmcbaddr; - /* put_gfn here even though the map survives beyond this caller. - * The map can likely survive beyond a hypervisor exit, thus we - * need to put the gfn */ - put_gfn(current->domain, vmcbaddr >> PAGE_SHIFT); } return 1; @@ -358,7 +354,6 @@ static int nsvm_vmrun_permissionmap(stru ioport_80 = test_bit(0x80, ns_viomap); ioport_ed = test_bit(0xed, ns_viomap); hvm_unmap_guest_frame(ns_viomap); - put_gfn(current->domain, svm->ns_iomap_pa >> PAGE_SHIFT); svm->ns_iomap = nestedhvm_vcpu_iomap_get(ioport_80, ioport_ed); @@ -889,7 +884,6 @@ nsvm_vmcb_guest_intercepts_ioio(paddr_t enabled = test_bit(port, io_bitmap); hvm_unmap_guest_frame(io_bitmap); - put_gfn(current->domain, gfn); if (!enabled) return NESTEDHVM_VMEXIT_HOST; diff -r 42e6ceb8138d -r b2097819f2d9 xen/arch/x86/hvm/vmx/vvmx.c --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -560,10 +560,7 @@ static void __map_io_bitmap(struct vcpu if (nvmx->iobitmap[index]) hvm_unmap_guest_frame (nvmx->iobitmap[index]); gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, vmcs_reg); - nvmx->iobitmap[index] = hvm_map_guest_frame_ro (gpa >> 
PAGE_SHIFT); - /* See comment in nestedsvm_vmcb_map re putting this gfn and - * liveness of the map it backs */ - put_gfn(current->domain, gpa >> PAGE_SHIFT); + nvmx->iobitmap[index] = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT); } static inline void map_io_bitmap_all(struct vcpu *v) @@ -581,7 +578,7 @@ static void nvmx_purge_vvmcs(struct vcpu __clear_current_vvmcs(v); if ( nvcpu->nv_vvmcxaddr != VMCX_EADDR ) hvm_unmap_guest_frame(nvcpu->nv_vvmcx); - nvcpu->nv_vvmcx == NULL; + nvcpu->nv_vvmcx = NULL; nvcpu->nv_vvmcxaddr = VMCX_EADDR; for (i=0; i<2; i++) { if ( nvmx->iobitmap[i] ) { @@ -1138,12 +1135,9 @@ int nvmx_handle_vmptrld(struct cpu_user_ if ( nvcpu->nv_vvmcxaddr == VMCX_EADDR ) { - nvcpu->nv_vvmcx = hvm_map_guest_frame_rw (gpa >> PAGE_SHIFT); + nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT); nvcpu->nv_vvmcxaddr = gpa; map_io_bitmap_all (v); - /* See comment in nestedsvm_vmcb_map regarding putting this - * gfn and liveness of the map that uses it */ - put_gfn(current->domain, gpa >> PAGE_SHIFT); } vmreturn(regs, VMSUCCEED); @@ -1205,7 +1199,6 @@ int nvmx_handle_vmclear(struct cpu_user_ if ( vvmcs ) __set_vvmcs(vvmcs, NVMX_LAUNCH_STATE, 0); hvm_unmap_guest_frame(vvmcs); - put_gfn(current->domain, gpa >> PAGE_SHIFT); } vmreturn(regs, VMSUCCEED);
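The new lifetime rule in patch 5 can be seen in miniature: mapping a guest frame takes a page reference, so the frame can be neither freed nor paged out while the map lives, and unmapping recovers the page from the mapped address (via xen_map_to_mfn in the real code) and drops the reference. The gfn lock can then be released before returning. A mock sketch of that get/put pairing, not the real Xen calls:

```c
#include <assert.h>

/* Mock page with a reference count. */
struct mock_page { int refcount; };

/* Stand-in for __hvm_map_guest_frame: take a ref so the frame stays
 * live across hypervisor entry/exit, then return the mapping. */
static void *mock_map_guest_frame(struct mock_page *pg)
{
    pg->refcount++;
    return pg;               /* stand-in for map_domain_page() */
}

/* Stand-in for hvm_unmap_guest_frame: recover the page from the
 * mapping and drop the ref; NULL is tolerated, as in the patch. */
static void mock_unmap_guest_frame(void *map)
{
    struct mock_page *pg = map;
    if ( pg )
        pg->refcount--;
}
```

This is why the patch can delete the per-caller gfn bookkeeping (pdesc_gfn, optss_gfn, nptss_gfn) and the put_gfn calls scattered through the nested SVM/VMX code.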
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 6 of 9] x86/mm: Rework stale p2m auditing
xen/arch/x86/domctl.c | 30 +++++++++ xen/arch/x86/mm/p2m-ept.c | 1 + xen/arch/x86/mm/p2m-pod.c | 5 - xen/arch/x86/mm/p2m-pt.c | 137 +++++++------------------------ xen/arch/x86/mm/p2m.c | 124 ++++++++++++++++++++++++++++++++++++--- xen/include/asm-x86/p2m.h | 11 ++- xen/include/public/domctl.h | 12 +++ 7 files changed, 186 insertions(+), 134 deletions(-) The p2m audit code doesn't even compile, let alone work. It also partially supports ept. Make it: - compile - lay groundwork for eventual ept support - move out of the way of all calls and turn it into a domctl. It's obviously not being used by anybody presently. - enable it via said domctl Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org> diff -r b2097819f2d9 -r 6626add0fef6 xen/arch/x86/domctl.c --- a/xen/arch/x86/domctl.c +++ b/xen/arch/x86/domctl.c @@ -1449,6 +1449,36 @@ long arch_do_domctl( break; #endif /* __x86_64__ */ + case XEN_DOMCTL_audit_p2m: + { + struct domain *d; + + ret = -ESRCH; + d = rcu_lock_domain_by_id(domctl->domain); + if ( d != NULL ) + { + ret = -EPERM; + if ( (domctl->domain != DOMID_SELF) && + (d != current->domain) && + (IS_PRIV_FOR(current->domain, d)) ) + { +#if P2M_AUDIT + audit_p2m(d, + &domctl->u.audit_p2m.orphans_debug, + &domctl->u.audit_p2m.orphans_invalid, + &domctl->u.audit_p2m.m2p_bad, + &domctl->u.audit_p2m.p2m_bad); + ret = (copy_to_guest(u_domctl, domctl, 1)) ? 
+ -EFAULT : 0; +#else + ret = -ENOSYS; +#endif /* P2M_AUDIT */ + } + rcu_unlock_domain(d); + } + } + break; + case XEN_DOMCTL_set_access_required: { struct domain *d; diff -r b2097819f2d9 -r 6626add0fef6 xen/arch/x86/mm/p2m-ept.c --- a/xen/arch/x86/mm/p2m-ept.c +++ b/xen/arch/x86/mm/p2m-ept.c @@ -817,6 +817,7 @@ void ept_p2m_init(struct p2m_domain *p2m p2m->set_entry = ept_set_entry; p2m->get_entry = ept_get_entry; p2m->change_entry_type_global = ept_change_entry_type_global; + p2m->audit_p2m = NULL; } static void ept_dump_p2m_table(unsigned char key) diff -r b2097819f2d9 -r 6626add0fef6 xen/arch/x86/mm/p2m-pod.c --- a/xen/arch/x86/mm/p2m-pod.c +++ b/xen/arch/x86/mm/p2m-pod.c @@ -522,7 +522,6 @@ p2m_pod_decrease_reservation(struct doma steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count ); p2m_lock(p2m); - audit_p2m(p2m, 1); if ( unlikely(d->is_dying) ) goto out_unlock; @@ -616,7 +615,6 @@ out_entry_check: } out_unlock: - audit_p2m(p2m, 1); p2m_unlock(p2m); out: @@ -986,7 +984,6 @@ p2m_pod_demand_populate(struct p2m_domai */ set_p2m_entry(p2m, gfn_aligned, _mfn(0), PAGE_ORDER_2M, p2m_populate_on_demand, p2m->default_access); - audit_p2m(p2m, 1); return 0; } @@ -1108,7 +1105,6 @@ guest_physmap_mark_populate_on_demand(st return rc; p2m_lock(p2m); - audit_p2m(p2m, 1); P2M_DEBUG("mark pod gfn=%#lx\n", gfn); @@ -1142,7 +1138,6 @@ guest_physmap_mark_populate_on_demand(st BUG_ON(p2m->pod.entry_count < 0); } - audit_p2m(p2m, 1); p2m_unlock(p2m); out: diff -r b2097819f2d9 -r 6626add0fef6 xen/arch/x86/mm/p2m-pt.c --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -483,7 +483,6 @@ static int p2m_pod_check_and_populate(st /* This is called from the p2m lookups, which can happen with or * without the lock hed. 
*/ p2m_lock_recursive(p2m); - audit_p2m(p2m, 1); /* Check to make sure this is still PoD */ if ( p2m_flags_to_type(l1e_get_flags(*p2m_entry)) != p2m_populate_on_demand ) @@ -494,7 +493,6 @@ static int p2m_pod_check_and_populate(st r = p2m_pod_demand_populate(p2m, gfn, order, q); - audit_p2m(p2m, 1); p2m_unlock(p2m); return r; @@ -975,118 +973,23 @@ static void p2m_change_type_global(struc } -/* Set up the p2m function pointers for pagetable format */ -void p2m_pt_init(struct p2m_domain *p2m) +#if P2M_AUDIT +long p2m_pt_audit_p2m(struct p2m_domain *p2m) { - p2m->set_entry = p2m_set_entry; - p2m->get_entry = p2m_gfn_to_mfn; - p2m->change_entry_type_global = p2m_change_type_global; - p2m->write_p2m_entry = paging_write_p2m_entry; -} - - -#if P2M_AUDIT -/* strict_m2p == 0 allows m2p mappings that don't match the p2m. - * It's intended for add_to_physmap, when the domain has just been allocated - * new mfns that might have stale m2p entries from previous owners */ -void audit_p2m(struct p2m_domain *p2m, int strict_m2p) -{ - struct page_info *page; - struct domain *od; - unsigned long mfn, gfn, m2pfn, lp2mfn = 0; int entry_count = 0; - mfn_t p2mfn; - unsigned long orphans_d = 0, orphans_i = 0, mpbad = 0, pmbad = 0; + unsigned long pmbad = 0; + unsigned long mfn, gfn, m2pfn; int test_linear; - p2m_type_t type; struct domain *d = p2m->domain; - if ( !paging_mode_translate(d) ) - return; - - //P2M_PRINTK("p2m audit starts\n"); + ASSERT(p2m_locked_by_me(p2m)); test_linear = ( (d == current->domain) && !pagetable_is_null(current->arch.monitor_table) ); if ( test_linear ) flush_tlb_local(); - spin_lock(&d->page_alloc_lock); - - /* Audit part one: walk the domain's page allocation list, checking - * the m2p entries. 
*/ - page_list_for_each ( page, &d->page_list ) - { - mfn = mfn_x(page_to_mfn(page)); - - // P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn); - - od = page_get_owner(page); - - if ( od != d ) - { - P2M_PRINTK("wrong owner %#lx -> %p(%u) != %p(%u)\n", - mfn, od, (od?od->domain_id:-1), d, d->domain_id); - continue; - } - - gfn = get_gpfn_from_mfn(mfn); - if ( gfn == INVALID_M2P_ENTRY ) - { - orphans_i++; - //P2M_PRINTK("orphaned guest page: mfn=%#lx has invalid gfn\n", - // mfn); - continue; - } - - if ( gfn == 0x55555555 || gfn == 0x5555555555555555 ) - { - orphans_d++; - //P2M_PRINTK("orphaned guest page: mfn=%#lx has debug gfn\n", - // mfn); - continue; - } - - if ( gfn == SHARED_M2P_ENTRY ) - { - P2M_PRINTK("shared mfn (%lx) on domain page list!\n", - mfn); - continue; - } - - p2mfn = gfn_to_mfn_type_p2m(p2m, gfn, &type, p2m_query); - if ( strict_m2p && mfn_x(p2mfn) != mfn ) - { - mpbad++; - P2M_PRINTK("map mismatch mfn %#lx -> gfn %#lx -> mfn %#lx" - " (-> gfn %#lx)\n", - mfn, gfn, mfn_x(p2mfn), - (mfn_valid(p2mfn) - ? get_gpfn_from_mfn(mfn_x(p2mfn)) - : -1u)); - /* This m2p entry is stale: the domain has another frame in - * this physical slot. No great disaster, but for neatness, - * blow away the m2p entry. */ - set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY); - } - - if ( test_linear && (gfn <= p2m->max_mapped_pfn) ) - { - lp2mfn = mfn_x(gfn_to_mfn_type_p2m(p2m, gfn, &type, p2m_query)); - if ( lp2mfn != mfn_x(p2mfn) ) - { - P2M_PRINTK("linear mismatch gfn %#lx -> mfn %#lx " - "(!= mfn %#lx)\n", gfn, lp2mfn, mfn_x(p2mfn)); - } - } - - // P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx, lp2mfn=%#lx\n", - // mfn, gfn, mfn_x(p2mfn), lp2mfn); - } - - spin_unlock(&d->page_alloc_lock); - - /* Audit part two: walk the domain's p2m table, checking the entries. */ + /* Audit part one: walk the domain's p2m table, checking the entries. 
*/ if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) != 0 ) { l2_pgentry_t *l2e; @@ -1239,17 +1142,23 @@ void audit_p2m(struct p2m_domain *p2m, i entry_count); BUG(); } - - //P2M_PRINTK("p2m audit complete\n"); - //if ( orphans_i | orphans_d | mpbad | pmbad ) - // P2M_PRINTK("p2m audit found %lu orphans (%lu inval %lu debug)\n", - // orphans_i + orphans_d, orphans_i, orphans_d); - if ( mpbad | pmbad ) - { - P2M_PRINTK("p2m audit found %lu odd p2m, %lu bad m2p entries\n", - pmbad, mpbad); - WARN(); - } + + return pmbad; } #endif /* P2M_AUDIT */ +/* Set up the p2m function pointers for pagetable format */ +void p2m_pt_init(struct p2m_domain *p2m) +{ + p2m->set_entry = p2m_set_entry; + p2m->get_entry = p2m_gfn_to_mfn; + p2m->change_entry_type_global = p2m_change_type_global; + p2m->write_p2m_entry = paging_write_p2m_entry; +#if P2M_AUDIT + p2m->audit_p2m = p2m_pt_audit_p2m; +#else + p2m->audit_p2m = NULL; +#endif +} + + diff -r b2097819f2d9 -r 6626add0fef6 xen/arch/x86/mm/p2m.c --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -439,9 +439,7 @@ guest_physmap_remove_page(struct domain { struct p2m_domain *p2m = p2m_get_hostp2m(d); p2m_lock(p2m); - audit_p2m(p2m, 1); p2m_remove_page(p2m, gfn, mfn, page_order); - audit_p2m(p2m, 1); p2m_unlock(p2m); } @@ -482,7 +480,6 @@ guest_physmap_add_entry(struct domain *d return rc; p2m_lock(p2m); - audit_p2m(p2m, 0); P2M_DEBUG("adding gfn=%#lx mfn=%#lx\n", gfn, mfn); @@ -566,7 +563,6 @@ guest_physmap_add_entry(struct domain *d } } - audit_p2m(p2m, 1); p2m_unlock(p2m); return rc; @@ -656,7 +652,6 @@ set_mmio_p2m_entry(struct domain *d, uns P2M_DEBUG("set mmio %lx %lx\n", gfn, mfn_x(mfn)); rc = set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_mmio_direct, p2m->default_access); - audit_p2m(p2m, 1); p2m_unlock(p2m); if ( 0 == rc ) gdprintk(XENLOG_ERR, @@ -688,7 +683,6 @@ clear_mmio_p2m_entry(struct domain *d, u goto out; } rc = set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K, p2m_invalid, p2m->default_access); - 
audit_p2m(p2m, 1); out: p2m_unlock(p2m); @@ -785,7 +779,6 @@ int p2m_mem_paging_nominate(struct domai /* Fix p2m entry */ set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_out, a); - audit_p2m(p2m, 1); ret = 0; out: @@ -852,7 +845,6 @@ int p2m_mem_paging_evict(struct domain * /* Remove mapping from p2m table */ set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K, p2m_ram_paged, a); - audit_p2m(p2m, 1); /* Clear content before returning the page to Xen */ scrub_one_page(page); @@ -946,7 +938,6 @@ void p2m_mem_paging_populate(struct doma req.flags |= MEM_EVENT_FLAG_EVICT_FAIL; set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in_start, a); - audit_p2m(p2m, 1); } p2m_unlock(p2m); @@ -1014,7 +1005,6 @@ int p2m_mem_paging_prep(struct domain *d /* Fix p2m mapping */ set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a); - audit_p2m(p2m, 1); atomic_dec(&d->paged_pages); @@ -1065,7 +1055,6 @@ void p2m_mem_paging_resume(struct domain paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw, a); set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn); - audit_p2m(p2m, 1); } p2m_unlock(p2m); } @@ -1427,6 +1416,119 @@ unsigned long paging_gva_to_gfn(struct v return hostmode->gva_to_gfn(v, hostp2m, va, pfec); } +/*** Audit ***/ + +#if P2M_AUDIT +void audit_p2m(struct domain *d, + uint64_t *orphans_debug, + uint64_t *orphans_invalid, + uint64_t *m2p_bad, + uint64_t *p2m_bad) +{ + struct page_info *page; + struct domain *od; + unsigned long mfn, gfn; + mfn_t p2mfn; + unsigned long orphans_d = 0, orphans_i = 0, mpbad = 0, pmbad = 0; + p2m_access_t p2ma; + p2m_type_t type; + struct p2m_domain *p2m = p2m_get_hostp2m(d); + + if ( !paging_mode_translate(d) ) + goto out_p2m_audit; + + P2M_PRINTK("p2m audit starts\n"); + + p2m_lock(p2m); + + if (p2m->audit_p2m) + pmbad = p2m->audit_p2m(p2m); + + /* Audit part two: walk the domain's page allocation list, checking + * the m2p entries. 
*/ + spin_lock(&d->page_alloc_lock); + page_list_for_each ( page, &d->page_list ) + { + mfn = mfn_x(page_to_mfn(page)); + + P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn); + + od = page_get_owner(page); + + if ( od != d ) + { + P2M_PRINTK("wrong owner %#lx -> %p(%u) != %p(%u)\n", + mfn, od, (od?od->domain_id:-1), d, d->domain_id); + continue; + } + + gfn = get_gpfn_from_mfn(mfn); + if ( gfn == INVALID_M2P_ENTRY ) + { + orphans_i++; + P2M_PRINTK("orphaned guest page: mfn=%#lx has invalid gfn\n", + mfn); + continue; + } + + if ( gfn == 0x55555555 || gfn == 0x5555555555555555 ) + { + orphans_d++; + P2M_PRINTK("orphaned guest page: mfn=%#lx has debug gfn\n", + mfn); + continue; + } + + if ( gfn == SHARED_M2P_ENTRY ) + { + P2M_PRINTK("shared mfn (%lx) on domain page list!\n", + mfn); + continue; + } + + p2mfn = get_gfn_type_access(p2m, gfn, &type, &p2ma, p2m_query, NULL); + if ( mfn_x(p2mfn) != mfn ) + { + mpbad++; + P2M_PRINTK("map mismatch mfn %#lx -> gfn %#lx -> mfn %#lx" + " (-> gfn %#lx)\n", + mfn, gfn, mfn_x(p2mfn), + (mfn_valid(p2mfn) + ? get_gpfn_from_mfn(mfn_x(p2mfn)) + : -1u)); + /* This m2p entry is stale: the domain has another frame in + * this physical slot. No great disaster, but for neatness, + * blow away the m2p entry. 
*/ + set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY); + } + __put_gfn(p2m, gfn); + + P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx, lp2mfn=%#lx\n", + mfn, gfn, mfn_x(p2mfn), lp2mfn); + } + spin_unlock(&d->page_alloc_lock); + + p2m_unlock(p2m); + + P2M_PRINTK("p2m audit complete\n"); + if ( orphans_i | orphans_d | mpbad | pmbad ) + P2M_PRINTK("p2m audit found %lu orphans (%lu inval %lu debug)\n", + orphans_i + orphans_d, orphans_i, orphans_d); + if ( mpbad | pmbad ) + { + P2M_PRINTK("p2m audit found %lu odd p2m, %lu bad m2p entries\n", + pmbad, mpbad); + WARN(); + } + +out_p2m_audit: + *orphans_debug = (uint64_t) orphans_d; + *orphans_invalid = (uint64_t) orphans_i; + *m2p_bad = (uint64_t) mpbad; + *p2m_bad = (uint64_t) pmbad; +} +#endif /* P2M_AUDIT */ + /* * Local variables: * mode: C diff -r b2097819f2d9 -r 6626add0fef6 xen/include/asm-x86/p2m.h --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -245,6 +245,7 @@ struct p2m_domain { unsigned long gfn, l1_pgentry_t *p, mfn_t table_mfn, l1_pgentry_t new, unsigned int level); + long (*audit_p2m)(struct p2m_domain *p2m); /* Default P2M access type for each page in the the domain: new pages, * swapped in pages, cleared pages, and pages that are ambiquously @@ -559,13 +560,15 @@ int set_p2m_entry(struct p2m_domain *p2m extern void p2m_pt_init(struct p2m_domain *p2m); /* Debugging and auditing of the P2M code? 
*/ -#define P2M_AUDIT 0 +#define P2M_AUDIT 1 #define P2M_DEBUGGING 0 #if P2M_AUDIT -extern void audit_p2m(struct p2m_domain *p2m, int strict_m2p); -#else -# define audit_p2m(_p2m, _m2p) do { (void)(_p2m),(_m2p); } while (0) +extern void audit_p2m(struct domain *d, + uint64_t *orphans_debug, + uint64_t *orphans_invalid, + uint64_t *m2p_bad, + uint64_t *p2m_bad); #endif /* P2M_AUDIT */ /* Printouts */ diff -r b2097819f2d9 -r 6626add0fef6 xen/include/public/domctl.h --- a/xen/include/public/domctl.h +++ b/xen/include/public/domctl.h @@ -800,6 +800,16 @@ struct xen_domctl_mem_sharing_op { typedef struct xen_domctl_mem_sharing_op xen_domctl_mem_sharing_op_t; DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_sharing_op_t); +struct xen_domctl_audit_p2m { + /* OUT error counts */ + uint64_t orphans_debug; + uint64_t orphans_invalid; + uint64_t m2p_bad; + uint64_t p2m_bad; +}; +typedef struct xen_domctl_audit_p2m xen_domctl_audit_p2m_t; +DEFINE_XEN_GUEST_HANDLE(xen_domctl_audit_p2m_t); + #if defined(__i386__) || defined(__x86_64__) /* XEN_DOMCTL_setvcpuextstate */ /* XEN_DOMCTL_getvcpuextstate */ @@ -898,6 +908,7 @@ struct xen_domctl { #define XEN_DOMCTL_setvcpuextstate 62 #define XEN_DOMCTL_getvcpuextstate 63 #define XEN_DOMCTL_set_access_required 64 +#define XEN_DOMCTL_audit_p2m 65 #define XEN_DOMCTL_gdbsx_guestmemio 1000 #define XEN_DOMCTL_gdbsx_pausevcpu 1001 #define XEN_DOMCTL_gdbsx_unpausevcpu 1002 @@ -951,6 +962,7 @@ struct xen_domctl { struct xen_domctl_vcpuextstate vcpuextstate; #endif struct xen_domctl_set_access_required access_required; + struct xen_domctl_audit_p2m audit_p2m; struct xen_domctl_gdbsx_memio gdbsx_guest_memio; struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu; struct xen_domctl_gdbsx_domstatus gdbsx_domstatus;
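The m2p half of the reworked audit boils down to a cross-check: for every guest-owned mfn, the m2p table gives a gfn, and the p2m entry for that gfn should map back to the same mfn; mismatches are counted in m2p_bad, and entries with no valid gfn as orphans. A toy array-based sketch of that walk, with illustrative names rather than the real Xen tables:

```c
#include <assert.h>
#include <stddef.h>

#define MOCK_INVALID (~0ul)

/* Toy m2p/p2m tables: the audit checks that p2m[m2p[mfn]] == mfn for
 * every mfn, counting mismatches the way audit_p2m reports m2p_bad,
 * and counting m2p entries with no gfn as orphans_invalid. */
static unsigned long mock_audit_m2p(const unsigned long *m2p,
                                    const unsigned long *p2m,
                                    size_t nr_mfns,
                                    unsigned long *orphans_invalid)
{
    unsigned long m2p_bad = 0;
    *orphans_invalid = 0;
    for ( size_t mfn = 0; mfn < nr_mfns; mfn++ )
    {
        unsigned long gfn = m2p[mfn];
        if ( gfn == MOCK_INVALID )
        {
            (*orphans_invalid)++;
            continue;
        }
        if ( p2m[gfn] != mfn )
            m2p_bad++;   /* stale m2p entry for this frame */
    }
    return m2p_bad;
}
```

The p2m half (walking the page tables themselves) is format-specific, which is why the patch moves it behind the per-implementation p2m->audit_p2m hook.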
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 7 of 9] Tools: Add libxc wrapper for p2m audit domctl
tools/libxc/xc_domain.c | 22 ++++++++++++++++++++++
tools/libxc/xenctrl.h   | 27 +++++++++++++++++++++++++++
2 files changed, 49 insertions(+), 0 deletions(-)

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff -r 6626add0fef6 -r 5286ed662c1e tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1472,6 +1472,28 @@ int xc_domain_debug_control(xc_interface
     return do_domctl(xc, &domctl);
 }
 
+int xc_domain_p2m_audit(xc_interface *xch,
+                        uint32_t domid,
+                        uint64_t *orphans_debug,
+                        uint64_t *orphans_invalid,
+                        uint64_t *m2p_bad,
+                        uint64_t *p2m_bad)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_audit_p2m;
+    domctl.domain = domid;
+    rc = do_domctl(xch, &domctl);
+
+    *orphans_debug = domctl.u.audit_p2m.orphans_debug;
+    *orphans_invalid = domctl.u.audit_p2m.orphans_invalid;
+    *m2p_bad = domctl.u.audit_p2m.m2p_bad;
+    *p2m_bad = domctl.u.audit_p2m.p2m_bad;
+
+    return rc;
+}
+
 int xc_domain_set_access_required(xc_interface *xch,
                                   uint32_t domid,
                                   unsigned int required)

diff -r 6626add0fef6 -r 5286ed662c1e tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -712,6 +712,33 @@ int xc_domain_setdebugging(xc_interface
                            unsigned int enable);
 
 /**
+ * This function audits the (top level) p2m of a domain
+ * and returns the different error counts, if any.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid the domain id whose top level p2m we
+ *       want to audit
+ * @parm orphans_debug count of m2p entries for valid
+ *       domain pages containing a debug value
+ * @parm orphans_invalid count of m2p entries for valid
+ *       domain pages containing an invalid value
+ * @parm m2p_bad count of m2p entries mismatching the
+ *       associated p2m entry for this domain
+ * @parm p2m_bad count of p2m entries for this domain
+ *       mismatching the associated m2p entry
+ * return 0 on success, -1 on failure
+ * errno values on failure include:
+ *          -ENOSYS: not implemented
+ *          -EFAULT: could not copy results back to guest
+ */
+int xc_domain_p2m_audit(xc_interface *xch,
+                        uint32_t domid,
+                        uint64_t *orphans_debug,
+                        uint64_t *orphans_invalid,
+                        uint64_t *m2p_bad,
+                        uint64_t *p2m_bad);
+
+/**
  * This function sets or clears the requirement that an access memory
  * event listener is required on the domain.
  *
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 8 of 9] x86/mm: Fix checks during foreign mapping of paged pages
xen/arch/x86/mm.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

Check that the valid mfn is the one we are mapping, not the
mfn of the page table of the foreign domain.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff -r 5286ed662c1e -r 3489152b3a56 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3572,7 +3572,8 @@ int do_mmu_update(
             rc = -ENOENT;
             break;
         }
-        else if ( p2m_ram_paging_in_start == l1e_p2mt && !mfn_valid(mfn) )
+        else if ( p2m_ram_paging_in_start == l1e_p2mt &&
+                  !mfn_valid(l1emfn) )
         {
             put_gfn(pg_owner, l1egfn);
             rc = -ENOENT;
Andres Lagar-Cavilla
2011-Nov-29 20:21 UTC
[PATCH 9 of 9] x86/mm: Allow pages typed as log dirty to also be shared
xen/arch/x86/mm/mem_sharing.c | 3 +++
xen/include/asm-x86/p2m.h     | 3 ++-
2 files changed, 5 insertions(+), 1 deletions(-)

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff -r 3489152b3a56 -r 4ee6d40edc2c xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -697,6 +697,9 @@ private_page_found:
     /* Update m2p entry */
     set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), gfn);
 
+    /* Now that the gfn<->mfn map is properly established,
+     * marking dirty is feasible */
+    paging_mark_dirty(d, mfn_x(page_to_mfn(page)));
     put_gfn(d, gfn);
     shr_unlock();
     return 0;

diff -r 3489152b3a56 -r 4ee6d40edc2c xen/include/asm-x86/p2m.h
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -163,7 +163,8 @@ typedef enum {
 /* Shared types */
 /* XXX: Sharable types could include p2m_ram_ro too, but we would need to
  * reinit the type correctly after fault */
-#define P2M_SHARABLE_TYPES (p2m_to_mask(p2m_ram_rw))
+#define P2M_SHARABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
+                            | p2m_to_mask(p2m_ram_logdirty) )
 #define P2M_SHARED_TYPES   (p2m_to_mask(p2m_ram_shared))
 
 /* Broken type: the frame backing this pfn has failed in hardware
Jan Beulich
2011-Nov-30 10:06 UTC
Re: [PATCH 4 of 9] x86: Add conversion from a xen map to an mfn
>>> On 29.11.11 at 21:21, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
> xen/include/asm-x86/page.h        |  1 +
> xen/include/asm-x86/x86_32/page.h | 11 +++++++++++
> xen/include/asm-x86/x86_64/page.h |  5 +++++
> 3 files changed, 17 insertions(+), 0 deletions(-)
>
> This conversion is a trivial invocation of virt_to_mfn in 64 bits.
> In 32 bits it uses the linear_map.

This is not a bug fix by itself, and imo belongs into the next patch.

Jan

> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>
> diff -r bea03a7fe212 -r 42e6ceb8138d xen/include/asm-x86/page.h
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -272,6 +272,7 @@ void copy_page_sse2(void *, const void *
>  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
>  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
>  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> +#define xen_map_to_mfn(va)  __xen_map_to_mfn(va)
>
>  #endif /* !defined(__ASSEMBLY__) */
>
> diff -r bea03a7fe212 -r 42e6ceb8138d xen/include/asm-x86/x86_32/page.h
> --- a/xen/include/asm-x86/x86_32/page.h
> +++ b/xen/include/asm-x86/x86_32/page.h
> @@ -71,6 +71,17 @@ static inline void *__maddr_to_virt(unsi
>      return (void *)(ma + DIRECTMAP_VIRT_START);
>  }
>
> +static inline unsigned long __xen_map_to_mfn(void *va)
> +{
> +    l1_pgentry_t *l1e;
> +
> +    ASSERT( (((unsigned long) va) >= MAPCACHE_VIRT_START) &&
> +            (((unsigned long) va) <= MAPCACHE_VIRT_END) );
> +    l1e = &__linear_l1_table[
> +            l1_linear_offset((unsigned long) va)];
> +    return l1e_get_pfn(*l1e);
> +}
> +
>  /* read access (should only be used for debug printk's) */
>  typedef u64 intpte_t;
>  #define PRIpte "016llx"
>
> diff -r bea03a7fe212 -r 42e6ceb8138d xen/include/asm-x86/x86_64/page.h
> --- a/xen/include/asm-x86/x86_64/page.h
> +++ b/xen/include/asm-x86/x86_64/page.h
> @@ -104,6 +104,11 @@ static inline void *__maddr_to_virt(unsi
>      ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
>  }
>
> +static inline unsigned long __xen_map_to_mfn(void *va)
> +{
> +    return (__virt_to_maddr((unsigned long) va) >> PAGE_SHIFT);
> +}
> +
>  /* read access (should only be used for debug printk's) */
>  typedef u64 intpte_t;
>  #define PRIpte "016lx"
Jan Beulich
2011-Nov-30 10:10 UTC
Re: [PATCH 5 of 9] x86/mm: Ensure maps used by nested hvm code cannot be paged out
>>> On 29.11.11 at 21:21, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1828,7 +1831,12 @@ static void *__hvm_map_guest_frame(unsig
>      if ( writable )
>          paging_mark_dirty(d, mfn);
>
> -    return map_domain_page(mfn);
> +    pg = mfn_to_page(mfn);
> +    page_get_owner_and_reference(pg);

This can (at least theoretically) fail, and you must handle failure (or
explain in a comment why not).

Jan

> +
> +    map = map_domain_page(mfn);
> +    put_gfn(d, gfn);
> +    return map;
>  }
>
>  void *hvm_map_guest_frame_rw(unsigned long gfn)
Olaf Hering
2011-Nov-30 12:46 UTC
Re: [PATCH 8 of 9] x86/mm: Fix checks during foreign mapping of paged pages
On Tue, Nov 29, Andres Lagar-Cavilla wrote:

> Check that the valid mfn is the one we are mapping, not the
> mfn of the page table of the foreign domain.
>
> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

Acked-by: Olaf Hering <olaf@aepfle.de>

> diff -r 5286ed662c1e -r 3489152b3a56 xen/arch/x86/mm.c
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3572,7 +3572,8 @@ int do_mmu_update(
>              rc = -ENOENT;
>              break;
>          }
> -        else if ( p2m_ram_paging_in_start == l1e_p2mt && !mfn_valid(mfn) )
> +        else if ( p2m_ram_paging_in_start == l1e_p2mt &&
> +                  !mfn_valid(l1emfn) )
Olaf Hering
2011-Nov-30 13:38 UTC
Re: [PATCH 8 of 9] x86/mm: Fix checks during foreign mapping of paged pages
On Wed, Nov 30, Olaf Hering wrote:

> On Tue, Nov 29, Andres Lagar-Cavilla wrote:
>
> > Check that the valid mfn is the one we are mapping, not the
> > mfn of the page table of the foreign domain.
> >
> > Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>
> Acked-by: Olaf Hering <olaf@aepfle.de>

There are more issues like that in this file, not just the l1e.

Olaf
Ian Jackson
2011-Nov-30 14:39 UTC
Re: [PATCH 1 of 9] Tools: When passing no bitmap for the shadow log dirty bitmap clean up, we should not get EFAULT
Andres Lagar-Cavilla writes ("[PATCH 1 of 9] Tools: When passing no bitmap for the shadow log dirty bitmap clean up, we should not get EFAULT"):

> tools/libxc/xc_domain.c | 2 ++
> 1 files changed, 2 insertions(+), 0 deletions(-)
>
> This is due to a stale check for guest_handle_null in the
> hypervisor, which doesn't necessarily work with the hypercall
> buffers.
>
> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.
Ian Jackson
2011-Nov-30 14:43 UTC
Re: [PATCH 7 of 9] Tools: Add libxc wrapper for p2m audit domctl
Andres Lagar-Cavilla writes ("[PATCH 7 of 9] Tools: Add libxc wrapper for p2m audit domctl"):

> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Assuming the hypervisor changes go in, of course.

Ian.
Andres Lagar-Cavilla
2011-Nov-30 15:02 UTC
Re: [PATCH 8 of 9] x86/mm: Fix checks during foreign mapping of paged pages
I see, l2emfn, etc. Clearly never exercised. Certainly worth another patch.

Andres

> On Wed, Nov 30, Olaf Hering wrote:
>
>> On Tue, Nov 29, Andres Lagar-Cavilla wrote:
>>
>> > Check that the valid mfn is the one we are mapping, not the
>> > mfn of the page table of the foreign domain.
>> >
>> > Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>>
>> Acked-by: Olaf Hering <olaf@aepfle.de>
>
> There are more issues like that in this file, not just the l1e.
>
> Olaf
Olaf Hering
2011-Nov-30 15:08 UTC
Re: [PATCH 8 of 9] x86/mm: Fix checks during foreign mapping of paged pages
On Wed, Nov 30, Andres Lagar-Cavilla wrote:

> I see, l2emfn, etc. Clearly never exercised. Certainly worth another patch.

I'm working on an optimization patch in that area which will cover this
change.

Olaf
Tim Deegan
2011-Dec-01 14:27 UTC
Re: [PATCH 5 of 9] x86/mm: Ensure maps used by nested hvm code cannot be paged out
At 10:10 +0000 on 30 Nov (1322647821), Jan Beulich wrote:
> >>> On 29.11.11 at 21:21, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -1828,7 +1831,12 @@ static void *__hvm_map_guest_frame(unsig
> >      if ( writable )
> >          paging_mark_dirty(d, mfn);
> >
> > -    return map_domain_page(mfn);
> > +    pg = mfn_to_page(mfn);
> > +    page_get_owner_and_reference(pg);
>
> This can (at least theoretically) fail, and you must handle failure (or
> explain in a comment why not).

Also, you should probably be using get_page(pg, d); is there any
reason to skip the ownership check?

Tim.
Andres Lagar-Cavilla
2011-Dec-01 14:29 UTC
Re: [PATCH 5 of 9] x86/mm: Ensure maps used by nested hvm code cannot be paged out
> At 10:10 +0000 on 30 Nov (1322647821), Jan Beulich wrote:
>> >>> On 29.11.11 at 21:21, Andres Lagar-Cavilla
>> <andres@lagarcavilla.org> wrote:
>> > --- a/xen/arch/x86/hvm/hvm.c
>> > +++ b/xen/arch/x86/hvm/hvm.c
>> > @@ -1828,7 +1831,12 @@ static void *__hvm_map_guest_frame(unsig
>> >      if ( writable )
>> >          paging_mark_dirty(d, mfn);
>> >
>> > -    return map_domain_page(mfn);
>> > +    pg = mfn_to_page(mfn);
>> > +    page_get_owner_and_reference(pg);
>>
>> This can (at least theoretically) fail, and you must handle failure (or
>> explain in a comment why not).
>
> Also, you should probably be using get_page(pg, d); is there any
> reason to skip the ownership check?

Shared pages, owner could be dom_cow. Could conditionally get_page if
writable, page_get_owner_and_ref if non writable.

Andres

> Tim.
Tim Deegan
2011-Dec-01 14:36 UTC
Re: [PATCH 5 of 9] x86/mm: Ensure maps used by nested hvm code cannot be paged out
At 06:29 -0800 on 01 Dec (1322720982), Andres Lagar-Cavilla wrote:
> >> This can (at least theoretically) fail, and you must handle failure (or
> >> explain in a comment why not).
> >
> > Also, you should probably be using get_page(pg, d); is there any
> > reason to skip the ownership check?
>
> Shared pages, owner could be dom_cow. Could conditionally get_page if
> writable, page_get_owner_and_ref if non writable.

That's probably best, plus a comment saying why.

Tim.
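[Editorial note: the conditional approach discussed above -- an ownership check via get_page() for writable mappings, a bare reference via page_get_owner_and_reference() for read-only mappings that may hit dom_cow-owned shared pages, with failure handled -- could be sketched roughly as follows. This is an illustrative fragment against Xen-internal APIs only, not the committed fix.]

```c
    /* Sketch: take a reference before mapping, and fail the map if the
     * reference cannot be taken.  Writable mappings must be backed by a
     * page owned by d; read-only mappings may be backed by a shared page
     * owned by dom_cow, so skip the ownership check in that case. */
    pg = mfn_to_page(mfn);
    if ( writable ? !get_page(pg, d)
                  : !page_get_owner_and_reference(pg) )
    {
        put_gfn(d, gfn);
        return NULL;
    }
```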
At 15:21 -0500 on 29 Nov (1322580097), Andres Lagar-Cavilla wrote:
> This patch series includes a number of bug fixes previously
> submitted, or newly found, for the mm layer.
>
> Patches 1 and 7 are tool patches, cc'ed tool maintainers. Everything
> else is hypervisors-x86-mm.

Patch 4 I applied but then reverted -- the 32-bit version doesn't
even compile.
Patch 5 has been commented on separately.
Patch 8 I'll leave until the full patch that fixes all instances.

I've applied the others, with minor modifications to #2 (to use more
consistent names with the rest of shadow code) and #6 (tidying up the
domctl handler, in particular using rcu_lock_remote_target_domain_by_id()
instead of open-coding the domain checks).

Cheers,

Tim.
> At 15:21 -0500 on 29 Nov (1322580097), Andres Lagar-Cavilla wrote:
>> This patch series includes a number of bug fixes previously
>> submitted, or newly found, for the mm layer.
>>
>> Patches 1 and 7 are tool patches, cc'ed tool maintainers. Everything
>> else is hypervisors-x86-mm.
>
> Patch 4 I applied but then reverted -- the 32-bit version doesn't
> even compile.
> Patch 5 has been commented on separately.
> Patch 8 I'll leave until the full patch that fixes all instances.
>
> I've applied the others, with minor modifications to #2 (to use more
> consistent names with the rest of shadow code) and #6 (tidying up the
> domctl handler, in particular using rcu_lock_remote_target_domain_by_id()
> instead of open-coding the domain checks).

Thanks, will take care of broken ones shortly.

Andres

> Cheers,
>
> Tim.
>>> On 29.11.11 at 21:21, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
> @@ -559,13 +560,15 @@ int set_p2m_entry(struct p2m_domain *p2m
>  extern void p2m_pt_init(struct p2m_domain *p2m);
>
>  /* Debugging and auditing of the P2M code? */
> -#define P2M_AUDIT     0
> +#define P2M_AUDIT     1
>  #define P2M_DEBUGGING 0
>
>  #if P2M_AUDIT

Was this change of the default really intended? It uncovered a problem
in 22356:cbb6b4b17024 (the code meanwhile moved elsewhere): In a
32-bit build,

    if ( gfn == 0x55555555 || gfn == 0x5555555555555555 )

causes (at least with some compiler versions, didn't check tonight's
stage tester results yet) a compiler warning (which in turn fails the
build). And a comparison like this is wrong on 64-bit too - see other
(sort of proper; "sort of" because this isn't really appropriate anyway,
especially not without at least using a manifest constant that would
force these to be in sync with the memset()-s that actually establish
these values) examples of using these magic values e.g. in
p2m_alloc_table() or guest_physmap_add_entry().

Jan
Tim Deegan
2011-Dec-02 10:33 UTC
[PATCH] Re: [PATCH 6 of 9] x86/mm: Rework stale p2m auditing
At 08:28 +0000 on 02 Dec (1322814505), Jan Beulich wrote:
> >>> On 29.11.11 at 21:21, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
> > @@ -559,13 +560,15 @@ int set_p2m_entry(struct p2m_domain *p2m
> >  extern void p2m_pt_init(struct p2m_domain *p2m);
> >
> >  /* Debugging and auditing of the P2M code? */
> > -#define P2M_AUDIT     0
> > +#define P2M_AUDIT     1
> >  #define P2M_DEBUGGING 0
> >
> >  #if P2M_AUDIT
>
> Was this change of the default really intended?

I think so - now that the audit code is not on any critical paths
there's no reason not to have it available by default.

> It uncovered a problem
> in 22356:cbb6b4b17024 (the code meanwhile moved elsewhere): In a
> 32-bit build,
>
>     if ( gfn == 0x55555555 || gfn == 0x5555555555555555 )
>
> causes (at least with some compiler versions, didn't check tonight's
> stage tester results yet) a compiler warning

I don't think these debug values are serving any purpose. How about we
just get rid of them?

x86/mm: remove 0x55 debug pattern from M2P table

It's not really any more useful than explicitly setting new M2P entries
to the invalid value.
Signed-off-by: Tim Deegan <tim@xen.org> diff -r 60d4e257d04b tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Dec 02 09:05:26 2011 +0100 +++ b/tools/libxc/xc_domain.c Fri Dec 02 10:19:18 2011 +0000 @@ -1474,8 +1474,7 @@ int xc_domain_debug_control(xc_interface int xc_domain_p2m_audit(xc_interface *xch, uint32_t domid, - uint64_t *orphans_debug, - uint64_t *orphans_invalid, + uint64_t *orphans, uint64_t *m2p_bad, uint64_t *p2m_bad) { @@ -1486,10 +1485,9 @@ int xc_domain_p2m_audit(xc_interface *xc domctl.domain = domid; rc = do_domctl(xch, &domctl); - *orphans_debug = domctl.u.audit_p2m.orphans_debug; - *orphans_invalid = domctl.u.audit_p2m.orphans_invalid; - *m2p_bad = domctl.u.audit_p2m.m2p_bad; - *p2m_bad = domctl.u.audit_p2m.p2m_bad; + *orphans = domctl.u.audit_p2m.orphans; + *m2p_bad = domctl.u.audit_p2m.m2p_bad; + *p2m_bad = domctl.u.audit_p2m.p2m_bad; return rc; } diff -r 60d4e257d04b tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Fri Dec 02 09:05:26 2011 +0100 +++ b/tools/libxc/xenctrl.h Fri Dec 02 10:19:18 2011 +0000 @@ -718,9 +718,7 @@ int xc_domain_setdebugging(xc_interface * @parm xch a handle to an open hypervisor interface * @parm domid the domain id whose top level p2m we * want to audit - * @parm orphans_debug count of m2p entries for valid - * domain pages containing a debug value - * @parm orphans_invalid count of m2p entries for valid + * @parm orphans count of m2p entries for valid * domain pages containing an invalid value * @parm m2p_bad count of m2p entries mismatching the * associated p2m entry for this domain @@ -732,9 +730,8 @@ int xc_domain_setdebugging(xc_interface * -EFAULT: could not copy results back to guest */ int xc_domain_p2m_audit(xc_interface *xch, - uint32_t domid, - uint64_t *orphans_debug, - uint64_t *orphans_invalid, + uint32_t domid, + uint64_t *orphans, uint64_t *m2p_bad, uint64_t *p2m_bad); diff -r 60d4e257d04b xen/arch/x86/domctl.c --- a/xen/arch/x86/domctl.c Fri Dec 02 09:05:26 2011 +0100 +++ 
b/xen/arch/x86/domctl.c Fri Dec 02 10:19:18 2011 +0000 @@ -1459,8 +1459,7 @@ long arch_do_domctl( break; audit_p2m(d, - &domctl->u.audit_p2m.orphans_debug, - &domctl->u.audit_p2m.orphans_invalid, + &domctl->u.audit_p2m.orphans, &domctl->u.audit_p2m.m2p_bad, &domctl->u.audit_p2m.p2m_bad); rcu_unlock_domain(d); diff -r 60d4e257d04b xen/arch/x86/mm/p2m.c --- a/xen/arch/x86/mm/p2m.c Fri Dec 02 09:05:26 2011 +0100 +++ b/xen/arch/x86/mm/p2m.c Fri Dec 02 10:19:18 2011 +0000 @@ -307,13 +307,7 @@ int p2m_alloc_table(struct p2m_domain *p /* Pages should not be shared that early */ ASSERT(gfn != SHARED_M2P_ENTRY); page_count++; - if ( -#ifdef __x86_64__ - (gfn != 0x5555555555555555L) -#else - (gfn != 0x55555555L) -#endif - && gfn != INVALID_M2P_ENTRY + if ( gfn != INVALID_M2P_ENTRY && !set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_rw, p2m->default_access) ) goto error_unlock; } @@ -513,14 +507,7 @@ guest_physmap_add_entry(struct domain *d if ( page_get_owner(mfn_to_page(_mfn(mfn + i))) != d ) continue; ogfn = mfn_to_gfn(d, _mfn(mfn+i)); - if ( -#ifdef __x86_64__ - (ogfn != 0x5555555555555555L) -#else - (ogfn != 0x55555555L) -#endif - && (ogfn != INVALID_M2P_ENTRY) - && (ogfn != gfn + i) ) + if ( (ogfn != INVALID_M2P_ENTRY) && (ogfn != gfn + i) ) { /* This machine frame is already mapped at another physical * address */ @@ -1447,8 +1434,7 @@ unsigned long paging_gva_to_gfn(struct v #if P2M_AUDIT void audit_p2m(struct domain *d, - uint64_t *orphans_debug, - uint64_t *orphans_invalid, + uint64_t *orphans, uint64_t *m2p_bad, uint64_t *p2m_bad) { @@ -1456,7 +1442,7 @@ void audit_p2m(struct domain *d, struct domain *od; unsigned long mfn, gfn; mfn_t p2mfn; - unsigned long orphans_d = 0, orphans_i = 0, mpbad = 0, pmbad = 0; + unsigned long orphans_count = 0, mpbad = 0, pmbad = 0; p2m_access_t p2ma; p2m_type_t type; struct p2m_domain *p2m = p2m_get_hostp2m(d); @@ -1492,20 +1478,12 @@ void audit_p2m(struct domain *d, gfn = get_gpfn_from_mfn(mfn); if ( gfn == INVALID_M2P_ENTRY ) 
{ - orphans_i++; + orphans_count++; P2M_PRINTK("orphaned guest page: mfn=%#lx has invalid gfn\n", mfn); continue; } - if ( gfn == 0x55555555 || gfn == 0x5555555555555555 ) - { - orphans_d++; - P2M_PRINTK("orphaned guest page: mfn=%#lx has debug gfn\n", - mfn); - continue; - } - if ( gfn == SHARED_M2P_ENTRY ) { P2M_PRINTK("shared mfn (%lx) on domain page list!\n", @@ -1538,9 +1516,8 @@ void audit_p2m(struct domain *d, p2m_unlock(p2m); P2M_PRINTK("p2m audit complete\n"); - if ( orphans_i | orphans_d | mpbad | pmbad ) - P2M_PRINTK("p2m audit found %lu orphans (%lu inval %lu debug)\n", - orphans_i + orphans_d, orphans_i, orphans_d); + if ( orphans_count | mpbad | pmbad ) + P2M_PRINTK("p2m audit found %lu orphans\n", orphans); if ( mpbad | pmbad ) { P2M_PRINTK("p2m audit found %lu odd p2m, %lu bad m2p entries\n", @@ -1549,10 +1526,9 @@ void audit_p2m(struct domain *d, } out_p2m_audit: - *orphans_debug = (uint64_t) orphans_d; - *orphans_invalid = (uint64_t) orphans_i; - *m2p_bad = (uint64_t) mpbad; - *p2m_bad = (uint64_t) pmbad; + *orphans = (uint64_t) orphans_count; + *m2p_bad = (uint64_t) mpbad; + *p2m_bad = (uint64_t) pmbad; } #endif /* P2M_AUDIT */ diff -r 60d4e257d04b xen/arch/x86/x86_32/mm.c --- a/xen/arch/x86/x86_32/mm.c Fri Dec 02 09:05:26 2011 +0100 +++ b/xen/arch/x86/x86_32/mm.c Fri Dec 02 10:19:18 2011 +0000 @@ -114,8 +114,8 @@ void __init paging_init(void) l2e_write(&idle_pg_table_l2[l2_linear_offset(RO_MPT_VIRT_START) + i], l2e_from_page( pg, (__PAGE_HYPERVISOR | _PAGE_PSE) & ~_PAGE_RW)); - /* Fill with an obvious debug pattern. */ - memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)), 0x55, + /* Fill with INVALID_M2P_ENTRY. 
*/ + memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)), 0xFF, 1UL << L2_PAGETABLE_SHIFT); } #undef CNT diff -r 60d4e257d04b xen/arch/x86/x86_64/mm.c --- a/xen/arch/x86/x86_64/mm.c Fri Dec 02 09:05:26 2011 +0100 +++ b/xen/arch/x86/x86_64/mm.c Fri Dec 02 10:19:18 2011 +0000 @@ -495,7 +495,8 @@ static int setup_compat_m2p_table(struct PAGE_HYPERVISOR); if ( err ) break; - memset((void *)rwva, 0x55, 1UL << L2_PAGETABLE_SHIFT); + /* Fill with INVALID_M2P_ENTRY. */ + memset((void *)rwva, 0xFF, 1UL << L2_PAGETABLE_SHIFT); /* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */ l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT)); } @@ -569,8 +570,9 @@ static int setup_m2p_table(struct mem_ho PAGE_HYPERVISOR); if ( ret ) goto error; + /* Fill with INVALID_M2P_ENTRY. */ memset((void *)(RDWR_MPT_VIRT_START + i * sizeof(unsigned long)), - 0x55, 1UL << L2_PAGETABLE_SHIFT); + 0xFF, 1UL << L2_PAGETABLE_SHIFT); ASSERT(!(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PSE)); @@ -727,8 +729,9 @@ void __init paging_init(void) page_to_mfn(l1_pg), 1UL << PAGETABLE_ORDER, PAGE_HYPERVISOR); + /* Fill with INVALID_M2P_ENTRY. 
*/ memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)), - 0x55, 1UL << L2_PAGETABLE_SHIFT); + 0xFF, 1UL << L2_PAGETABLE_SHIFT); } if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) ) { diff -r 60d4e257d04b xen/include/asm-x86/p2m.h --- a/xen/include/asm-x86/p2m.h Fri Dec 02 09:05:26 2011 +0100 +++ b/xen/include/asm-x86/p2m.h Fri Dec 02 10:19:18 2011 +0000 @@ -566,10 +566,9 @@ extern void p2m_pt_init(struct p2m_domai #if P2M_AUDIT extern void audit_p2m(struct domain *d, - uint64_t *orphans_debug, - uint64_t *orphans_invalid, - uint64_t *m2p_bad, - uint64_t *p2m_bad); + uint64_t *orphans, + uint64_t *m2p_bad, + uint64_t *p2m_bad); #endif /* P2M_AUDIT */ /* Printouts */ diff -r 60d4e257d04b xen/include/public/domctl.h --- a/xen/include/public/domctl.h Fri Dec 02 09:05:26 2011 +0100 +++ b/xen/include/public/domctl.h Fri Dec 02 10:19:18 2011 +0000 @@ -806,8 +806,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_s struct xen_domctl_audit_p2m { /* OUT error counts */ - uint64_t orphans_debug; - uint64_t orphans_invalid; + uint64_t orphans; uint64_t m2p_bad; uint64_t p2m_bad; }; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Jan Beulich
2011-Dec-02 11:12 UTC
[PATCH] Re: [PATCH 6 of 9] x86/mm: Rework stale p2m auditing
>>> On 02.12.11 at 11:33, Tim Deegan <tim@xen.org> wrote:
> At 08:28 +0000 on 02 Dec (1322814505), Jan Beulich wrote:
>> It uncovered a problem
>> in 22356:cbb6b4b17024 (the code meanwhile moved elsewhere): In a
>> 32-bit build,
>>
>>     if ( gfn == 0x55555555 || gfn == 0x5555555555555555 )
>>
>> causes (at least with some compiler versions, didn't check tonight's
>> stage tester results yet) a compiler warning
>
> I don't think these debug values are serving any purpose. How about we
> just get rid of them?
>
> x86/mm: remove 0x55 debug pattern from M2P table
>
> It's not really any more useful than explicitly setting new M2P entries
> to the invalid value.

That's fine by me - I don't think I ever found a need to distinguish
never touched from invalidated entries.

Jan