Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 0/7][RFC] make xenguest save & restore functions reentrant
The following patchset makes the suspend and restore code reentrant by having
an explicit context to store the current variables across all the
suspend/restore code. This work is necessary for being able to get rid of the
fork of processes during save & restore, and provides a simpler interface for
toolstack developers. It hasn't been properly stress tested yet.

Vincent Hanquez (7):
  add explicit parameter to macros instead of assuming symbol name
    available on the stack or as a global variable.
  p2m_size is unnecessarily passed as a parameter when it's available
    as a global variable.
  move global variables in suspend into a global context
  move the suspend_ctx on the save stack instead of a global one
  alias i/FPP(guest_width) as p2m_index and replace every usage
  move restore global variables into a global context
  pass restore context as an argument instead of a global context

 tools/libxc/xc_core.c           |    2 +-
 tools/libxc/xc_core_x86.c       |   20 ++--
 tools/libxc/xc_domain_restore.c |  331 +++++++++++++++++++-------------------
 tools/libxc/xc_domain_save.c    |  243 ++++++++++++++---------------
 tools/libxc/xc_offline_page.c   |    8 +-
 tools/libxc/xc_resume.c         |   12 +-
 tools/libxc/xg_private.h        |   16 +-
 tools/libxc/xg_save_restore.h   |   22 ++--
 8 files changed, 324 insertions(+), 330 deletions(-)
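As a minimal, self-contained sketch of the problem the series addresses
(hypothetical names, not the libxc code itself): while the save/restore state
lives in file-scope globals, a second operation started in the same process
silently overwrites the state of the first, which is why the current toolstack
forks a helper process per operation. Moving the state into a caller-owned
context removes that restriction:

    #include <stdio.h>

    /* Non-reentrant: one copy of the state per process. */
    static unsigned long p2m_size_global;

    /* Reentrant: one copy of the state per operation. */
    struct save_ctx {
        unsigned long p2m_size;
    };

    static void save_domain(struct save_ctx *ctx, unsigned long size)
    {
        ctx->p2m_size = size;        /* private to this call */
        p2m_size_global = size;      /* shared: a second caller clobbers it */
    }

    int main(void)
    {
        struct save_ctx a, b;
        save_domain(&a, 1024);
        save_domain(&b, 2048);       /* global is now wrong for domain a */
        printf("a=%lu b=%lu global=%lu\n",
               a.p2m_size, b.p2m_size, p2m_size_global);
        return 0;
    }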
Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 1/7] add explicit parameter to macros instead of assuming symbol name available on the stack or as a global variable.
add explicit parameter to macros instead of assuming symbol name available on
the stack or as a global variable.
---
 tools/libxc/xc_core.c           |    2 +-
 tools/libxc/xc_core_x86.c       |   20 +++++-----
 tools/libxc/xc_domain_restore.c |   66 +++++++++++++++---------------
 tools/libxc/xc_domain_save.c    |   84 +++++++++++++++++++-------------------
 tools/libxc/xc_offline_page.c   |    8 ++--
 tools/libxc/xc_resume.c         |   12 +++---
 tools/libxc/xg_private.h        |   16 ++++----
 tools/libxc/xg_save_restore.h   |   22 +++++-----
 8 files changed, 115 insertions(+), 115 deletions(-)

diff --git a/tools/libxc/xc_core.c b/tools/libxc/xc_core.c
index 0c226ac..c52212e 100644
--- a/tools/libxc/xc_core.c
+++ b/tools/libxc/xc_core.c
@@ -899,7 +899,7 @@ out:
     if ( memory_map != NULL )
         free(memory_map);
     if ( p2m != NULL )
-        munmap(p2m, PAGE_SIZE * P2M_FL_ENTRIES);
+        munmap(p2m, PAGE_SIZE * P2M_FL_ENTRIES(p2m_size, guest_width));
     if ( p2m_array != NULL )
         free(p2m_array);
     if ( pfn_array != NULL )
diff --git a/tools/libxc/xc_core_x86.c b/tools/libxc/xc_core_x86.c
index fc2a7a1..2955af5 100644
--- a/tools/libxc/xc_core_x86.c
+++ b/tools/libxc/xc_core_x86.c
@@ -22,7 +22,7 @@
 #include "xc_core.h"
 #include "xc_e820.h"

-#define GET_FIELD(_p, _f) ((guest_width==8) ? ((_p)->x64._f) : ((_p)->x32._f))
+#define GET_FIELD(_gw, _p, _f) ((_gw==8) ? ((_p)->x64._f) : ((_p)->x32._f))

 #ifndef MAX
 #define MAX(_a, _b) ((_a) >= (_b) ? (_a) : (_b))
@@ -101,7 +101,7 @@ xc_core_arch_map_p2m_rw(int xc_handle, unsigned int guest_width, xc_dominfo_t *i
     live_p2m_frame_list_list =
         xc_map_foreign_range(xc_handle, dom, PAGE_SIZE, PROT_READ,
-                             GET_FIELD(live_shinfo, arch.pfn_to_mfn_frame_list_list));
+                             GET_FIELD(guest_width, live_shinfo, arch.pfn_to_mfn_frame_list_list));

     if ( !live_p2m_frame_list_list )
     {
@@ -131,7 +131,7 @@ xc_core_arch_map_p2m_rw(int xc_handle, unsigned int guest_width, xc_dominfo_t *i
     live_p2m_frame_list =
         xc_map_foreign_pages(xc_handle, dom, PROT_READ,
                              p2m_frame_list_list,
-                             P2M_FLL_ENTRIES);
+                             P2M_FLL_ENTRIES(p2m_size, guest_width));

     if ( !live_p2m_frame_list )
     {
@@ -140,26 +140,26 @@ xc_core_arch_map_p2m_rw(int xc_handle, unsigned int guest_width, xc_dominfo_t *i
     }

     /* Get a local copy of the live_P2M_frame_list */
-    if ( !(p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE)) )
+    if ( !(p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE(p2m_size, guest_width))) )
     {
         ERROR("Couldn't allocate p2m_frame_list array");
         goto out;
     }
-    memset(p2m_frame_list, 0, P2M_TOOLS_FL_SIZE);
-    memcpy(p2m_frame_list, live_p2m_frame_list, P2M_GUEST_FL_SIZE);
+    memset(p2m_frame_list, 0, P2M_TOOLS_FL_SIZE(p2m_size, guest_width));
+    memcpy(p2m_frame_list, live_p2m_frame_list, P2M_GUEST_FL_SIZE(p2m_size, guest_width));

     /* Canonicalize guest's unsigned long vs ours */
     if ( guest_width > sizeof(unsigned long) )
-        for ( i = 0; i < P2M_FL_ENTRIES; i++ )
+        for ( i = 0; i < P2M_FL_ENTRIES(p2m_size, guest_width); i++ )
             p2m_frame_list[i] = ((uint64_t *)p2m_frame_list)[i];
     else if ( guest_width < sizeof(unsigned long) )
-        for ( i = P2M_FL_ENTRIES - 1; i >= 0; i-- )
+        for ( i = P2M_FL_ENTRIES(p2m_size, guest_width) - 1; i >= 0; i-- )
             p2m_frame_list[i] = ((uint32_t *)p2m_frame_list)[i];

     *live_p2m = xc_map_foreign_pages(xc_handle, dom,
                                      rw ? (PROT_READ | PROT_WRITE) : PROT_READ,
                                      p2m_frame_list,
-                                     P2M_FL_ENTRIES);
+                                     P2M_FL_ENTRIES(p2m_size, guest_width));

     if ( !*live_p2m )
     {
@@ -178,7 +178,7 @@ out:
         munmap(live_p2m_frame_list_list, PAGE_SIZE);

     if ( live_p2m_frame_list )
-        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
+        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);

     if ( p2m_frame_list_list )
         free(p2m_frame_list_list);
diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 01d7924..e3d2d4a 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -525,7 +525,7 @@ static int uncanonicalize_pagetable(int xc_handle, uint32_t dom,
         if ( !(pte & _PAGE_PRESENT) )
             continue;

-        pfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86;
+        pfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86(guest_width);

         /* Allocate mfn if necessary */
         if ( p2m[pfn] == INVALID_P2M_ENTRY )
@@ -535,7 +535,7 @@ static int uncanonicalize_pagetable(int xc_handle, uint32_t dom,
                 1, &pfn, &force_pfn, superpages) != 0)
                 return 0;
         }
-        pte &= ~MADDR_MASK_X86;
+        pte &= ~MADDR_MASK_X86(guest_width);
         pte |= (uint64_t)p2m[pfn] << PAGE_SHIFT;

         if ( pt_levels == 2 )
@@ -618,7 +618,7 @@ static xen_pfn_t *load_p2m_frame_list(
             tot_bytes -= chunk_bytes;
             chunk_bytes = 0;

-            if ( GET_FIELD(&ctxt, vm_assist)
+            if ( GET_FIELD(guest_width, &ctxt, vm_assist)
                  & (1UL << VMASST_TYPE_pae_extended_cr3) )
                 *pae_extended_cr3 = 1;
         }
@@ -651,7 +651,7 @@ static xen_pfn_t *load_p2m_frame_list(
     /* Now that we know the guest's word-size, can safely allocate
      * the p2m frame list */
-    if ( (p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE)) == NULL )
+    if ( (p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE(p2m_size, guest_width))) == NULL )
     {
         ERROR("Couldn't allocate p2m_frame_list array");
         return NULL;
@@ -660,7 +660,7 @@ static xen_pfn_t *load_p2m_frame_list(
     /* First entry has already been read. */
     p2m_frame_list[0] = p2m_fl_zero;
     if ( read_exact(io_fd, &p2m_frame_list[1],
-                    (P2M_FL_ENTRIES - 1) * sizeof(xen_pfn_t)) )
+                    (P2M_FL_ENTRIES(p2m_size, guest_width) - 1) * sizeof(xen_pfn_t)) )
     {
         ERROR("read p2m_frame_list failed");
         return NULL;
@@ -1787,7 +1787,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
         DPRINTF("read VCPU %d\n", i);

         if ( !new_ctxt_format )
-            SET_FIELD(&ctxt, flags, GET_FIELD(&ctxt, flags) | VGCF_online);
+            SET_FIELD(guest_width, &ctxt, flags, GET_FIELD(guest_width, &ctxt, flags) | VGCF_online);

         if ( i == 0 )
         {
@@ -1795,7 +1795,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
              * Uncanonicalise the suspend-record frame number and poke
              * resume record.
              */
-            pfn = GET_FIELD(&ctxt, user_regs.edx);
+            pfn = GET_FIELD(guest_width, &ctxt, user_regs.edx);
             if ( (pfn >= p2m_size) ||
                  (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) )
             {
@@ -1803,30 +1803,30 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
                 goto out;
             }
             mfn = p2m[pfn];
-            SET_FIELD(&ctxt, user_regs.edx, mfn);
+            SET_FIELD(guest_width, &ctxt, user_regs.edx, mfn);
             start_info = xc_map_foreign_range(
                 xc_handle, dom, PAGE_SIZE, PROT_READ | PROT_WRITE, mfn);
-            SET_FIELD(start_info, nr_pages, p2m_size);
-            SET_FIELD(start_info, shared_info, shared_info_frame<<PAGE_SHIFT);
-            SET_FIELD(start_info, flags, 0);
-            *store_mfn = p2m[GET_FIELD(start_info, store_mfn)];
-            SET_FIELD(start_info, store_mfn, *store_mfn);
-            SET_FIELD(start_info, store_evtchn, store_evtchn);
-            *console_mfn = p2m[GET_FIELD(start_info, console.domU.mfn)];
-            SET_FIELD(start_info, console.domU.mfn, *console_mfn);
-            SET_FIELD(start_info, console.domU.evtchn, console_evtchn);
+            SET_FIELD(guest_width, start_info, nr_pages, p2m_size);
+            SET_FIELD(guest_width, start_info, shared_info, shared_info_frame<<PAGE_SHIFT);
+            SET_FIELD(guest_width, start_info, flags, 0);
+            *store_mfn = p2m[GET_FIELD(guest_width, start_info, store_mfn)];
+            SET_FIELD(guest_width, start_info, store_mfn, *store_mfn);
+            SET_FIELD(guest_width, start_info, store_evtchn, store_evtchn);
+            *console_mfn = p2m[GET_FIELD(guest_width, start_info, console.domU.mfn)];
+            SET_FIELD(guest_width, start_info, console.domU.mfn, *console_mfn);
+            SET_FIELD(guest_width, start_info, console.domU.evtchn, console_evtchn);
             munmap(start_info, PAGE_SIZE);
         }
         /* Uncanonicalise each GDT frame number. */
-        if ( GET_FIELD(&ctxt, gdt_ents) > 8192 )
+        if ( GET_FIELD(guest_width, &ctxt, gdt_ents) > 8192 )
         {
             ERROR("GDT entry count out of range");
             goto out;
         }
-        for ( j = 0; (512*j) < GET_FIELD(&ctxt, gdt_ents); j++ )
+        for ( j = 0; (512*j) < GET_FIELD(guest_width, &ctxt, gdt_ents); j++ )
         {
-            pfn = GET_FIELD(&ctxt, gdt_frames[j]);
+            pfn = GET_FIELD(guest_width, &ctxt, gdt_frames[j]);
             if ( (pfn >= p2m_size) ||
                  (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) )
             {
@@ -1834,10 +1834,10 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
                       j, (unsigned long)pfn);
                 goto out;
             }
-            SET_FIELD(&ctxt, gdt_frames[j], p2m[pfn]);
+            SET_FIELD(guest_width, &ctxt, gdt_frames[j], p2m[pfn]);
         }
         /* Uncanonicalise the page table base pointer. */
-        pfn = UNFOLD_CR3(GET_FIELD(&ctxt, ctrlreg[3]));
+        pfn = UNFOLD_CR3(guest_width, GET_FIELD(guest_width, &ctxt, ctrlreg[3]));

         if ( pfn >= p2m_size )
         {
@@ -1854,12 +1854,12 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
                   (unsigned long)pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT);
             goto out;
         }
-        SET_FIELD(&ctxt, ctrlreg[3], FOLD_CR3(p2m[pfn]));
+        SET_FIELD(guest_width, &ctxt, ctrlreg[3], FOLD_CR3(guest_width, p2m[pfn]));

         /* Guest pagetable (x86/64) stored in otherwise-unused CR1. */
         if ( (pt_levels == 4) && (ctxt.x64.ctrlreg[1] & 1) )
         {
-            pfn = UNFOLD_CR3(ctxt.x64.ctrlreg[1] & ~1);
+            pfn = UNFOLD_CR3(guest_width, ctxt.x64.ctrlreg[1] & ~1);
             if ( pfn >= p2m_size )
             {
                 ERROR("User PT base is bad: pfn=%lu p2m_size=%lu",
@@ -1874,7 +1874,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
                       (unsigned long)pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT);
                 goto out;
             }
-            ctxt.x64.ctrlreg[1] = FOLD_CR3(p2m[pfn]);
+            ctxt.x64.ctrlreg[1] = FOLD_CR3(guest_width, p2m[pfn]);
         }
         domctl.cmd = XEN_DOMCTL_setvcpucontext;
         domctl.domain = (domid_t)dom;
@@ -1910,22 +1910,22 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
         xc_handle, dom, PAGE_SIZE, PROT_WRITE, shared_info_frame);

     /* restore saved vcpu_info and arch specific info */
-    MEMCPY_FIELD(new_shared_info, old_shared_info, vcpu_info);
-    MEMCPY_FIELD(new_shared_info, old_shared_info, arch);
+    MEMCPY_FIELD(guest_width, new_shared_info, old_shared_info, vcpu_info);
+    MEMCPY_FIELD(guest_width, new_shared_info, old_shared_info, arch);

     /* clear any pending events and the selector */
-    MEMSET_ARRAY_FIELD(new_shared_info, evtchn_pending, 0);
+    MEMSET_ARRAY_FIELD(guest_width, new_shared_info, evtchn_pending, 0);
     for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
-        SET_FIELD(new_shared_info, vcpu_info[i].evtchn_pending_sel, 0);
+        SET_FIELD(guest_width, new_shared_info, vcpu_info[i].evtchn_pending_sel, 0);

     /* mask event channels */
-    MEMSET_ARRAY_FIELD(new_shared_info, evtchn_mask, 0xff);
+    MEMSET_ARRAY_FIELD(guest_width, new_shared_info, evtchn_mask, 0xff);

     /* leave wallclock time. set by hypervisor */
     munmap(new_shared_info, PAGE_SIZE);

     /* Uncanonicalise the pfn-to-mfn table frame-number list. */
-    for ( i = 0; i < P2M_FL_ENTRIES; i++ )
+    for ( i = 0; i < P2M_FL_ENTRIES(p2m_size, guest_width); i++ )
     {
         pfn = p2m_frame_list[i];
         if ( (pfn >= p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) )
@@ -1938,7 +1938,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,

     /* Copy the P2M we've constructed to the 'live' P2M */
     if ( !(live_p2m = xc_map_foreign_batch(xc_handle, dom, PROT_WRITE,
-                                           p2m_frame_list, P2M_FL_ENTRIES)) )
+                                           p2m_frame_list, P2M_FL_ENTRIES(p2m_size, guest_width))) )
     {
         ERROR("Couldn't map p2m table");
         goto out;
@@ -1954,7 +1954,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom,
             ((uint32_t *)live_p2m)[i] = p2m[i];
     else
         memcpy(live_p2m, p2m, p2m_size * sizeof(xen_pfn_t));
-    munmap(live_p2m, P2M_FL_ENTRIES * PAGE_SIZE);
+    munmap(live_p2m, P2M_FL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);

     DPRINTF("Domain ready to be built.\n");
     rc = 0;
diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 30c1b6d..697d93c 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -75,7 +75,7 @@ struct outbuf {
  * Returns TRUE if the given machine frame number has a unique mapping
  * in the guest's pseudophysical map.
  */
-#define MFN_IS_IN_PSEUDOPHYS_MAP(_mfn)          \
+#define MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, _mfn) \
     (((_mfn) < (max_mfn)) &&                    \
      ((mfn_to_pfn(_mfn) < (p2m_size)) &&        \
       (pfn_to_mfn(mfn_to_pfn(_mfn)) == (_mfn))))
@@ -462,12 +462,12 @@ static void *map_frame_list_list(int xc_handle, uint32_t dom,
 {
     int count = 100;
     void *p;
-    uint64_t fll = GET_FIELD(shinfo, arch.pfn_to_mfn_frame_list_list);
+    uint64_t fll = GET_FIELD(guest_width, shinfo, arch.pfn_to_mfn_frame_list_list);

     while ( count-- && (fll == 0) )
     {
         usleep(10000);
-        fll = GET_FIELD(shinfo, arch.pfn_to_mfn_frame_list_list);
+        fll = GET_FIELD(guest_width, shinfo, arch.pfn_to_mfn_frame_list_list);
     }

     if ( fll == 0 )
@@ -525,7 +525,7 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
         hstart = (hvirt_start >> L2_PAGETABLE_SHIFT_PAE) & 0x1ff;
         he = ((const uint64_t *) spage)[hstart];

-        if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86) == m2p_mfn0 )
+        if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86(guest_width)) == m2p_mfn0 )
         {
             /* hvirt starts with xen stuff... */
             xen_start = hstart;
@@ -535,7 +535,7 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
             /* old L2s from before hole was shrunk... */
             hstart = (0xf5800000 >> L2_PAGETABLE_SHIFT_PAE) & 0x1ff;
             he = ((const uint64_t *) spage)[hstart];
-            if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86) == m2p_mfn0 )
+            if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86(guest_width)) == m2p_mfn0 )
                 xen_start = hstart;
         }
     }
@@ -565,8 +565,8 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,

         if ( pte & _PAGE_PRESENT )
         {
-            mfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86;
-            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
+            mfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86(guest_width);
+            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, mfn) )
             {
                 /* This will happen if the type info is stale which
                    is quite feasible under live migration */
@@ -582,7 +582,7 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
             else
                 pfn = mfn_to_pfn(mfn);

-            pte &= ~MADDR_MASK_X86;
+            pte &= ~MADDR_MASK_X86(guest_width);
             pte |= (uint64_t)pfn << PAGE_SHIFT;

             /*
@@ -718,7 +718,7 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     live_p2m_frame_list =
         xc_map_foreign_batch(xc_handle, dom, PROT_READ,
                              p2m_frame_list_list,
-                             P2M_FLL_ENTRIES);
+                             P2M_FLL_ENTRIES(p2m_size, guest_width));
     if ( !live_p2m_frame_list )
     {
         ERROR("Couldn't map p2m_frame_list");
@@ -726,20 +726,20 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     }

     /* Get a local copy of the live_P2M_frame_list */
-    if ( !(p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE)) )
+    if ( !(p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE(p2m_size, guest_width))) )
     {
         ERROR("Couldn't allocate p2m_frame_list array");
         goto out;
     }
-    memset(p2m_frame_list, 0, P2M_TOOLS_FL_SIZE);
-    memcpy(p2m_frame_list, live_p2m_frame_list, P2M_GUEST_FL_SIZE);
+    memset(p2m_frame_list, 0, P2M_TOOLS_FL_SIZE(p2m_size, guest_width));
+    memcpy(p2m_frame_list, live_p2m_frame_list, P2M_GUEST_FL_SIZE(p2m_size, guest_width));

     /* Canonicalize guest's unsigned long vs ours */
     if ( guest_width > sizeof(unsigned long) )
-        for ( i = 0; i < P2M_FL_ENTRIES; i++ )
+        for ( i = 0; i < P2M_FL_ENTRIES(p2m_size, guest_width); i++ )
             p2m_frame_list[i] = ((uint64_t *)p2m_frame_list)[i];
     else if ( guest_width < sizeof(unsigned long) )
-        for ( i = P2M_FL_ENTRIES - 1; i >= 0; i-- )
+        for ( i = P2M_FL_ENTRIES(p2m_size, guest_width) - 1; i >= 0; i-- )
             p2m_frame_list[i] = ((uint32_t *)p2m_frame_list)[i];

@@ -750,7 +750,7 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     p2m = xc_map_foreign_batch(xc_handle, dom,
                                PROT_READ,
                                p2m_frame_list,
-                               P2M_FL_ENTRIES);
+                               P2M_FL_ENTRIES(p2m_size, guest_width));
     if ( !p2m )
     {
         ERROR("Couldn't map p2m table");
@@ -759,26 +759,26 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     live_p2m = p2m; /* So that translation macros will work */

     /* Canonicalise the pfn-to-mfn table frame-number list. */
-    for ( i = 0; i < p2m_size; i += FPP )
+    for ( i = 0; i < p2m_size; i += FPP(guest_width) )
     {
-        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(p2m_frame_list[i/FPP]) )
+        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, p2m_frame_list[i/FPP(guest_width)]) )
         {
             ERROR("Frame# in pfn-to-mfn frame list is not in pseudophys");
             ERROR("entry %d: p2m_frame_list[%ld] is 0x%"PRIx64", max 0x%lx",
-                  i, i/FPP, (uint64_t)p2m_frame_list[i/FPP], max_mfn);
-            if ( p2m_frame_list[i/FPP] < max_mfn )
+                  i, i/FPP(guest_width), (uint64_t)p2m_frame_list[i/FPP(guest_width)], max_mfn);
+            if ( p2m_frame_list[i/FPP(guest_width)] < max_mfn )
             {
                 ERROR("m2p[0x%"PRIx64"] = 0x%"PRIx64,
-                      (uint64_t)p2m_frame_list[i/FPP],
-                      (uint64_t)live_m2p[p2m_frame_list[i/FPP]]);
+                      (uint64_t)p2m_frame_list[i/FPP(guest_width)],
+                      (uint64_t)live_m2p[p2m_frame_list[i/FPP(guest_width)]]);
                 ERROR("p2m[0x%"PRIx64"] = 0x%"PRIx64,
-                      (uint64_t)live_m2p[p2m_frame_list[i/FPP]],
-                      (uint64_t)p2m[live_m2p[p2m_frame_list[i/FPP]]]);
+                      (uint64_t)live_m2p[p2m_frame_list[i/FPP(guest_width)]],
+                      (uint64_t)p2m[live_m2p[p2m_frame_list[i/FPP(guest_width)]]]);
             }
             goto out;
         }
-        p2m_frame_list[i/FPP] = mfn_to_pfn(p2m_frame_list[i/FPP]);
+        p2m_frame_list[i/FPP(guest_width)] = mfn_to_pfn(p2m_frame_list[i/FPP(guest_width)]);
     }

     if ( xc_vcpu_getcontext(xc_handle, dom, 0, &ctxt) )
@@ -813,7 +813,7 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     }

     if ( write_exact(io_fd, p2m_frame_list,
-                     P2M_FL_ENTRIES * sizeof(xen_pfn_t)) )
+                     P2M_FL_ENTRIES(p2m_size, guest_width) * sizeof(xen_pfn_t)) )
     {
         PERROR("write: p2m_frame_list");
         goto out;
@@ -824,13 +824,13 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,

 out:
     if ( !success && p2m )
-        munmap(p2m, P2M_FLL_ENTRIES * PAGE_SIZE);
+        munmap(p2m, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);

     if ( live_p2m_frame_list_list )
         munmap(live_p2m_frame_list_list, PAGE_SIZE);

     if ( live_p2m_frame_list )
-        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
+        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);

     if ( p2m_frame_list_list )
         free(p2m_frame_list_list);
@@ -1632,13 +1632,13 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
     }

     /* Canonicalise the suspend-record frame number. */
-    mfn = GET_FIELD(&ctxt, user_regs.edx);
-    if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
+    mfn = GET_FIELD(guest_width, &ctxt, user_regs.edx);
+    if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, mfn) )
     {
         ERROR("Suspend record is not in range of pseudophys map");
         goto out;
     }
-    SET_FIELD(&ctxt, user_regs.edx, mfn_to_pfn(mfn));
+    SET_FIELD(guest_width, &ctxt, user_regs.edx, mfn_to_pfn(mfn));

     for ( i = 0; i <= info.max_vcpu_id; i++ )
     {
@@ -1652,38 +1652,38 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
         }

         /* Canonicalise each GDT frame number. */
-        for ( j = 0; (512*j) < GET_FIELD(&ctxt, gdt_ents); j++ )
+        for ( j = 0; (512*j) < GET_FIELD(guest_width, &ctxt, gdt_ents); j++ )
         {
-            mfn = GET_FIELD(&ctxt, gdt_frames[j]);
-            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
+            mfn = GET_FIELD(guest_width, &ctxt, gdt_frames[j]);
+            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, mfn) )
             {
                 ERROR("GDT frame is not in range of pseudophys map");
                 goto out;
             }
-            SET_FIELD(&ctxt, gdt_frames[j], mfn_to_pfn(mfn));
+            SET_FIELD(guest_width, &ctxt, gdt_frames[j], mfn_to_pfn(mfn));
         }

         /* Canonicalise the page table base pointer. */
-        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(
-                                           GET_FIELD(&ctxt, ctrlreg[3]))) )
+        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, UNFOLD_CR3(guest_width,
+                                           GET_FIELD(guest_width, &ctxt, ctrlreg[3]))) )
         {
             ERROR("PT base is not in range of pseudophys map");
             goto out;
         }
-        SET_FIELD(&ctxt, ctrlreg[3],
-                  FOLD_CR3(mfn_to_pfn(UNFOLD_CR3(GET_FIELD(&ctxt, ctrlreg[3])))));
+        SET_FIELD(guest_width, &ctxt, ctrlreg[3],
+                  FOLD_CR3(guest_width, mfn_to_pfn(UNFOLD_CR3(guest_width, GET_FIELD(guest_width, &ctxt, ctrlreg[3])))));

         /* Guest pagetable (x86/64) stored in otherwise-unused CR1. */
         if ( (pt_levels == 4) && ctxt.x64.ctrlreg[1] )
         {
-            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(ctxt.x64.ctrlreg[1])) )
+            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, UNFOLD_CR3(guest_width, ctxt.x64.ctrlreg[1])) )
             {
                 ERROR("PT base is not in range of pseudophys map");
                 goto out;
             }
             /* Least-significant bit means 'valid PFN'. */
             ctxt.x64.ctrlreg[1] = 1 |
-                FOLD_CR3(mfn_to_pfn(UNFOLD_CR3(ctxt.x64.ctrlreg[1])));
+                FOLD_CR3(guest_width, mfn_to_pfn(UNFOLD_CR3(guest_width, ctxt.x64.ctrlreg[1])));
         }

         if ( write_exact(io_fd, &ctxt, ((guest_width==8)
@@ -1713,7 +1713,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
      * Reset the MFN to be a known-invalid value. See map_frame_list_list().
      */
     memcpy(page, live_shinfo, PAGE_SIZE);
-    SET_FIELD(((shared_info_any_t *)page),
+    SET_FIELD(guest_width, ((shared_info_any_t *)page),
               arch.pfn_to_mfn_frame_list_list, 0);
     if ( write_exact(io_fd, page, PAGE_SIZE) )
     {
@@ -1783,7 +1783,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
         munmap(live_shinfo, PAGE_SIZE);

     if ( live_p2m )
-        munmap(live_p2m, P2M_FLL_ENTRIES * PAGE_SIZE);
+        munmap(live_p2m, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);

     if ( live_m2p )
         munmap(live_m2p, M2P_SIZE(max_mfn));
diff --git a/tools/libxc/xc_offline_page.c b/tools/libxc/xc_offline_page.c
index c386d88..7d25ec0 100644
--- a/tools/libxc/xc_offline_page.c
+++ b/tools/libxc/xc_offline_page.c
@@ -210,7 +210,7 @@ static int close_mem_info(int xc_handle, struct domain_mem_info *minfo)
     if (minfo->pfn_type)
         free(minfo->pfn_type);
     munmap(minfo->m2p_table, M2P_SIZE(minfo->max_mfn));
-    munmap(minfo->p2m_table, P2M_FLL_ENTRIES * PAGE_SIZE);
+    munmap(minfo->p2m_table, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);
     minfo->p2m_table = minfo->m2p_table = NULL;

     return 0;
@@ -307,7 +307,7 @@ failed:
     if (live_shinfo)
         munmap(live_shinfo, PAGE_SIZE);
     munmap(minfo->m2p_table, M2P_SIZE(minfo->max_mfn));
-    munmap(minfo->p2m_table, P2M_FLL_ENTRIES * PAGE_SIZE);
+    munmap(minfo->p2m_table, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);
     minfo->p2m_table = minfo->m2p_table = NULL;

     return -1;
@@ -360,7 +360,7 @@ static int __clear_pte(uint64_t pte, uint64_t *new_pte,
     /* XXX Check for PSE bit here */
     /* Hit one entry */
-    if ( ((pte >> PAGE_SHIFT_X86) & MFN_MASK_X86) == mfn)
+    if ( ((pte >> PAGE_SHIFT_X86) & MFN_MASK_X86(guest_width)) == mfn)
     {
         *new_pte = pte & ~_PAGE_PRESENT;
         if (!backup_ptes(table_mfn, table_offset, backup))
@@ -389,7 +389,7 @@ static int __update_pte(uint64_t pte, uint64_t *new_pte,
     {
         if (pte & _PAGE_PRESENT)
             ERROR("Page present while in backup ptes\n");
-        pte &= ~MFN_MASK_X86;
+        pte &= ~MFN_MASK_X86(guest_width);
         pte |= (new_mfn << PAGE_SHIFT_X86) | _PAGE_PRESENT;
         *new_pte = pte;
         return 1;
diff --git a/tools/libxc/xc_resume.c b/tools/libxc/xc_resume.c
index ad0f137..68e4f43 100644
--- a/tools/libxc/xc_resume.c
+++ b/tools/libxc/xc_resume.c
@@ -61,7 +61,7 @@ static int modify_returncode(int xc_handle, uint32_t domid)
     if ( (rc = xc_vcpu_getcontext(xc_handle, domid, 0, &ctxt)) != 0 )
         return rc;

-    SET_FIELD(&ctxt, user_regs.eax, 1);
+    SET_FIELD(guest_width, &ctxt, user_regs.eax, 1);

     if ( (rc = xc_vcpu_setcontext(xc_handle, domid, 0, &ctxt)) != 0 )
         return rc;
@@ -157,7 +157,7 @@ static int xc_domain_resume_any(int xc_handle, uint32_t domid)
     p2m_frame_list = xc_map_foreign_batch(xc_handle, domid, PROT_READ,
                                           p2m_frame_list_list,
-                                          P2M_FLL_ENTRIES);
+                                          P2M_FLL_ENTRIES(p2m_size, guest_width));
     if ( p2m_frame_list == NULL )
     {
         ERROR("Couldn't map p2m_frame_list");
@@ -170,7 +170,7 @@ static int xc_domain_resume_any(int xc_handle, uint32_t domid)
        from a safety POV anyhow. */
     p2m = xc_map_foreign_batch(xc_handle, domid, PROT_READ,
                                p2m_frame_list,
-                               P2M_FL_ENTRIES);
+                               P2M_FL_ENTRIES(p2m_size, guest_width));
     if ( p2m == NULL )
     {
         ERROR("Couldn't map p2m table");
@@ -189,7 +189,7 @@ static int xc_domain_resume_any(int xc_handle, uint32_t domid)
         goto out;
     }

-    mfn = GET_FIELD(&ctxt, user_regs.edx);
+    mfn = GET_FIELD(guest_width, &ctxt, user_regs.edx);

     start_info = xc_map_foreign_range(xc_handle, domid, PAGE_SIZE,
                                       PROT_READ | PROT_WRITE, mfn);
@@ -218,9 +218,9 @@ static int xc_domain_resume_any(int xc_handle, uint32_t domid)
 out:
     unlock_pages((void *)&ctxt, sizeof ctxt);
     if (p2m)
-        munmap(p2m, P2M_FL_ENTRIES*PAGE_SIZE);
+        munmap(p2m, P2M_FL_ENTRIES(p2m_size, guest_width) *PAGE_SIZE);
     if (p2m_frame_list)
-        munmap(p2m_frame_list, P2M_FLL_ENTRIES*PAGE_SIZE);
+        munmap(p2m_frame_list, P2M_FLL_ENTRIES(p2m_size, guest_width) *PAGE_SIZE);
     if (p2m_frame_list_list)
         munmap(p2m_frame_list_list, PAGE_SIZE);
     if (shinfo)
diff --git a/tools/libxc/xg_private.h b/tools/libxc/xg_private.h
index 1e74509..d86ef46 100644
--- a/tools/libxc/xg_private.h
+++ b/tools/libxc/xg_private.h
@@ -146,23 +146,23 @@ typedef l4_pgentry_64_t l4_pgentry_t;

 /* Number of xen_pfn_t in a page */
-#define FPP             (PAGE_SIZE/(guest_width))
+#define FPP(guest_width) (PAGE_SIZE/(guest_width))

 /* Number of entries in the pfn_to_mfn_frame_list_list */
-#define P2M_FLL_ENTRIES (((p2m_size)+(FPP*FPP)-1)/(FPP*FPP))
+#define P2M_FLL_ENTRIES(p2m_size, gw) (((p2m_size)+(FPP(gw) * FPP(gw))-1)/(FPP(gw) * FPP(gw)))

 /* Number of entries in the pfn_to_mfn_frame_list */
-#define P2M_FL_ENTRIES  (((p2m_size)+FPP-1)/FPP)
+#define P2M_FL_ENTRIES(p2m_size, gw) (((p2m_size)+ FPP(gw) -1)/ FPP(gw))

 /* Size in bytes of the pfn_to_mfn_frame_list */
-#define P2M_GUEST_FL_SIZE ((P2M_FL_ENTRIES) * (guest_width))
-#define P2M_TOOLS_FL_SIZE ((P2M_FL_ENTRIES) *                   \
+#define P2M_GUEST_FL_SIZE(p2m_size, guest_width) ((P2M_FL_ENTRIES(p2m_size, guest_width)) * (guest_width))
+#define P2M_TOOLS_FL_SIZE(p2m_size, guest_width) ((P2M_FL_ENTRIES(p2m_size, guest_width)) * \
                            MAX((sizeof (xen_pfn_t)), guest_width))

 /* Masks for PTE<->PFN conversions */
-#define MADDR_BITS_X86  ((guest_width == 8) ? 52 : 44)
-#define MFN_MASK_X86    ((1ULL << (MADDR_BITS_X86 - PAGE_SHIFT_X86)) - 1)
-#define MADDR_MASK_X86  (MFN_MASK_X86 << PAGE_SHIFT_X86)
+#define MADDR_BITS_X86(guest_width) ((guest_width == 8) ? 52 : 44)
+#define MFN_MASK_X86(gw)   ((1ULL << (MADDR_BITS_X86(gw) - PAGE_SHIFT_X86)) - 1)
+#define MADDR_MASK_X86(gw) (MFN_MASK_X86(gw) << PAGE_SHIFT_X86)

 #define PAEKERN_no           0
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index 5d39982..6f16399 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -112,34 +112,34 @@ static inline int get_platform_info(int xc_handle, uint32_t dom,

 #define is_mapped(pfn_type) (!((pfn_type) & 0x80000000UL))

-#define GET_FIELD(_p, _f) ((guest_width==8) ? ((_p)->x64._f) : ((_p)->x32._f))
+#define GET_FIELD(_gw, _p, _f) (((_gw)==8) ? ((_p)->x64._f) : ((_p)->x32._f))

-#define SET_FIELD(_p, _f, _v) do {              \
-    if (guest_width == 8)                       \
+#define SET_FIELD(_gw, _p, _f, _v) do {         \
+    if ((_gw) == 8)                             \
         (_p)->x64._f = (_v);                    \
     else                                        \
         (_p)->x32._f = (_v);                    \
 } while (0)

-#define UNFOLD_CR3(_c)                                                  \
-  ((uint64_t)((guest_width == 8)                                        \
+#define UNFOLD_CR3(_gw, _c)                                             \
+  ((uint64_t)(((_gw) == 8)                                              \
               ? ((_c) >> 12)                                            \
               : (((uint32_t)(_c) >> 12) | ((uint32_t)(_c) << 20))))

-#define FOLD_CR3(_c)                                                    \
-  ((uint64_t)((guest_width == 8)                                        \
+#define FOLD_CR3(_gw, _c)                                               \
+  ((uint64_t)(((_gw) == 8)                                              \
               ? ((uint64_t)(_c)) << 12                                  \
               : (((uint32_t)(_c) << 12) | ((uint32_t)(_c) >> 20))))

-#define MEMCPY_FIELD(_d, _s, _f) do {                                   \
-    if (guest_width == 8)                                               \
+#define MEMCPY_FIELD(_gw, _d, _s, _f) do {                              \
+    if ((_gw) == 8)                                                     \
         memcpy(&(_d)->x64._f, &(_s)->x64._f,sizeof((_d)->x64._f));      \
     else                                                                \
         memcpy(&(_d)->x32._f, &(_s)->x32._f,sizeof((_d)->x32._f));      \
 } while (0)

-#define MEMSET_ARRAY_FIELD(_p, _f, _v) do {                             \
-    if (guest_width == 8)                                               \
+#define MEMSET_ARRAY_FIELD(_gw, _p, _f, _v) do {                        \
+    if ((_gw) == 8)                                                     \
         memset(&(_p)->x64._f[0], (_v), sizeof((_p)->x64._f));           \
     else                                                                \
         memset(&(_p)->x32._f[0], (_v), sizeof((_p)->x32._f));           \
--
1.6.5.2
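In miniature, the macro change looks like this (a compressed, hypothetical
version of the GET_FIELD rework above, not the Xen headers themselves).
Before, the macro only compiled where a variable named guest_width happened to
be in scope; after, the width is an explicit argument controlled by the
expansion site:

    #include <stdio.h>
    #include <stdint.h>

    /* Before: #define GET_FIELD(_p, _f) ((guest_width==8) ? ...) */
    /* After: the guest word size is passed in explicitly. */
    #define GET_FIELD(_gw, _p, _f) (((_gw) == 8) ? ((_p)->x64._f) : ((_p)->x32._f))

    union rec {
        struct { uint64_t flags; } x64;
        struct { uint32_t flags; } x32;
    };

    int main(void)
    {
        union rec r = { .x64 = { .flags = 42 } };
        unsigned int width = 8;   /* supplied by the caller, not ambient */
        printf("%llu\n", (unsigned long long)GET_FIELD(width, &r, flags));
        return 0;
    }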
Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 2/7] p2m_size is unnecessarily passed as a parameter when it's available as a global variable.
p2m_size is unnecessarily passed as a parameter when it's available as a
global variable. This creates confusion about which of the two is actually
used, so remove the parameter that shadows the global.
---
 tools/libxc/xc_domain_save.c |    8 +++-----
 1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 697d93c..eb5d48d 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -401,7 +401,7 @@ static int print_stats(int xc_handle, uint32_t domid, int pages_sent,
 }


-static int analysis_phase(int xc_handle, uint32_t domid, int p2m_size,
+static int analysis_phase(int xc_handle, uint32_t domid,
                           unsigned long *arr, int runs)
 {
     long long start, now;
@@ -673,7 +673,6 @@ err0:
 static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
                                          int io_fd,
                                          uint32_t dom,
-                                         unsigned long p2m_size,
                                          shared_info_any_t *live_shinfo)
 {
     vcpu_guest_context_any_t ctxt;
@@ -1027,7 +1026,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
         }
     }

-    analysis_phase(xc_handle, dom, p2m_size, to_skip, 0);
+    analysis_phase(xc_handle, dom, to_skip, 0);

     pfn_type = xg_memalign(PAGE_SIZE, ROUNDUP(
                                MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT));
@@ -1066,8 +1065,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
         int err = 0;

         /* Map the P2M table, and write the list of P2M frames */
-        live_p2m = map_and_save_p2m_table(xc_handle, io_fd, dom,
-                                          p2m_size, live_shinfo);
+        live_p2m = map_and_save_p2m_table(xc_handle, io_fd, dom, live_shinfo);
         if ( live_p2m == NULL )
         {
             ERROR("Failed to map/save the p2m frame list");
--
1.6.5.2
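The pitfall being removed is ordinary C shadowing; a tiny self-contained
illustration (hypothetical, not the libxc code): inside the function, every
mention of p2m_size refers to the parameter, so the parameter and the global
can silently diverge:

    #include <stdio.h>

    static unsigned long p2m_size = 1024;       /* file-scope state */

    /* The parameter shadows the global: all uses of p2m_size in this
     * function refer to the argument, not the file-scope variable. */
    static void analysis_phase(int p2m_size)
    {
        printf("inside:  %d\n", p2m_size);      /* the argument */
    }

    int main(void)
    {
        analysis_phase(0);                      /* a stale copy */
        printf("outside: %lu\n", p2m_size);     /* still 1024 */
        return 0;
    }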
Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 3/7] move global variables in suspend into a global context
move global variables in suspend into a global context
---
 tools/libxc/xc_domain_save.c |  220 +++++++++++++++++++++---------------------
 1 files changed, 108 insertions(+), 112 deletions(-)

diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index eb5d48d..97bd4ad 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -30,27 +30,23 @@
 #define DEF_MAX_ITERS   29   /* limit us to 30 times round loop   */
 #define DEF_MAX_FACTOR   3   /* never send more than 3x p2m_size  */

-/* max mfn of the whole machine */
-static unsigned long max_mfn;
-
-/* virtual starting address of the hypervisor */
-static unsigned long hvirt_start;
-
-/* #levels of page tables used by the current guest */
-static unsigned int pt_levels;
-
-/* number of pfns this guest has (i.e. number of entries in the P2M) */
-static unsigned long p2m_size;
-
-/* Live mapping of the table mapping each PFN to its current MFN. */
-static xen_pfn_t *live_p2m = NULL;
+struct suspend_ctx {
+    unsigned long max_mfn; /* max mfn of the whole machine */
+    unsigned int pt_levels; /* #levels of page tables used by the current guest */
+    unsigned long hvirt_start; /* virtual starting address of the hypervisor */
+    unsigned long p2m_size; /* number of pfns this guest has (i.e. number of entries in the P2M) */
+    unsigned int guest_width; /* Address size of the guest */
+    unsigned long m2p_mfn0;
+    xen_pfn_t *live_m2p; /* Live mapping of system MFN to PFN table. */
+    xen_pfn_t *live_p2m; /* Live mapping of the table mapping each PFN to its current MFN. */
+};

-/* Live mapping of system MFN to PFN table. */
-static xen_pfn_t *live_m2p = NULL;
-static unsigned long m2p_mfn0;
+struct suspend_ctx _ctx = {
+    .live_p2m = NULL,
+    .live_m2p = NULL,
+};

-/* Address size of the guest */
-unsigned int guest_width;
+struct suspend_ctx *ctx = &_ctx;

 /* buffer for output */
 struct outbuf {
@@ -63,13 +59,13 @@ struct outbuf {

 /* grep fodder: machine_to_phys */

-#define mfn_to_pfn(_mfn)  (live_m2p[(_mfn)])
+#define mfn_to_pfn(_mfn)  (ctx->live_m2p[(_mfn)])

 #define pfn_to_mfn(_pfn)                                            \
-  ((xen_pfn_t) ((guest_width==8)                                    \
-                ? (((uint64_t *)live_p2m)[(_pfn)])                  \
-                : ((((uint32_t *)live_p2m)[(_pfn)]) == 0xffffffffU  \
-                   ? (-1UL) : (((uint32_t *)live_p2m)[(_pfn)]))))
+  ((xen_pfn_t) ((ctx->guest_width==8)                               \
+                ? (((uint64_t *)ctx->live_p2m)[(_pfn)])             \
+                : ((((uint32_t *)ctx->live_p2m)[(_pfn)]) == 0xffffffffU \
+                   ? (-1UL) : (((uint32_t *)ctx->live_p2m)[(_pfn)]))))

 /*
  * Returns TRUE if the given machine frame number has a unique mapping
@@ -77,7 +73,7 @@ struct outbuf {
  */
 #define MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, _mfn)  \
     (((_mfn) < (max_mfn)) &&                     \
-     ((mfn_to_pfn(_mfn) < (p2m_size)) &&         \
+     ((mfn_to_pfn(_mfn) < (ctx->p2m_size)) &&    \
       (pfn_to_mfn(mfn_to_pfn(_mfn)) == (_mfn))))

 /*
@@ -87,7 +83,7 @@ struct outbuf {
 #define BITS_PER_LONG (sizeof(unsigned long) * 8)
 #define BITS_TO_LONGS(bits) (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
-#define BITMAP_SIZE   (BITS_TO_LONGS(p2m_size) * sizeof(unsigned long))
+#define BITMAP_SIZE   (BITS_TO_LONGS(ctx->p2m_size) * sizeof(unsigned long))

 #define BITMAP_ENTRY(_nr,_bmap) \
    ((volatile unsigned long *)(_bmap))[(_nr)/BITS_PER_LONG]
@@ -415,7 +411,7 @@ static int analysis_phase(int xc_handle, uint32_t domid,
         int i;

         xc_shadow_control(xc_handle, domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
-                          arr, p2m_size, NULL, 0, NULL);
+                          arr, ctx->p2m_size, NULL, 0, NULL);
         DPRINTF("#Flush\n");
         for ( i = 0; i < 40; i++ )
         {
@@ -462,12 +458,12 @@ static void *map_frame_list_list(int xc_handle, uint32_t dom,
 {
     int count = 100;
     void *p;
-    uint64_t fll = GET_FIELD(guest_width, shinfo, arch.pfn_to_mfn_frame_list_list);
+    uint64_t fll = GET_FIELD(ctx->guest_width, shinfo, arch.pfn_to_mfn_frame_list_list);

     while ( count-- && (fll == 0) )
     {
         usleep(10000);
-        fll = GET_FIELD(guest_width, shinfo, arch.pfn_to_mfn_frame_list_list);
+        fll = GET_FIELD(ctx->guest_width, shinfo, arch.pfn_to_mfn_frame_list_list);
     }

     if ( fll == 0 )
@@ -504,12 +500,12 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
     ** reserved hypervisor mappings. This depends on the current
     ** page table type as well as the number of paging levels.
     */
-    xen_start = xen_end = pte_last = PAGE_SIZE / ((pt_levels == 2) ? 4 : 8);
+    xen_start = xen_end = pte_last = PAGE_SIZE / ((ctx->pt_levels == 2) ? 4 : 8);

-    if ( (pt_levels == 2) && (type == XEN_DOMCTL_PFINFO_L2TAB) )
-        xen_start = (hvirt_start >> L2_PAGETABLE_SHIFT);
+    if ( (ctx->pt_levels == 2) && (type == XEN_DOMCTL_PFINFO_L2TAB) )
+        xen_start = (ctx->hvirt_start >> L2_PAGETABLE_SHIFT);

-    if ( (pt_levels == 3) && (type == XEN_DOMCTL_PFINFO_L3TAB) )
+    if ( (ctx->pt_levels == 3) && (type == XEN_DOMCTL_PFINFO_L3TAB) )
         xen_start = L3_PAGETABLE_ENTRIES_PAE;

     /*
@@ -517,30 +513,30 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
     ** We can spot this by looking for the guest's mapping of the m2p.
     ** Guests must ensure that this check will fail for other L2s.
     */
-    if ( (pt_levels == 3) && (type == XEN_DOMCTL_PFINFO_L2TAB) )
+    if ( (ctx->pt_levels == 3) && (type == XEN_DOMCTL_PFINFO_L2TAB) )
     {
         int hstart;
         uint64_t he;

-        hstart = (hvirt_start >> L2_PAGETABLE_SHIFT_PAE) & 0x1ff;
+        hstart = (ctx->hvirt_start >> L2_PAGETABLE_SHIFT_PAE) & 0x1ff;
         he = ((const uint64_t *) spage)[hstart];

-        if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86(guest_width)) == m2p_mfn0 )
+        if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86(ctx->guest_width)) == ctx->m2p_mfn0 )
         {
             /* hvirt starts with xen stuff... */
             xen_start = hstart;
         }
-        else if ( hvirt_start != 0xf5800000 )
+        else if ( ctx->hvirt_start != 0xf5800000 )
         {
             /* old L2s from before hole was shrunk... */
             hstart = (0xf5800000 >> L2_PAGETABLE_SHIFT_PAE) & 0x1ff;
             he = ((const uint64_t *) spage)[hstart];
-            if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86(guest_width)) == m2p_mfn0 )
+            if ( ((he >> PAGE_SHIFT) & MFN_MASK_X86(ctx->guest_width)) == ctx->m2p_mfn0 )
                 xen_start = hstart;
         }
     }

-    if ( (pt_levels == 4) && (type == XEN_DOMCTL_PFINFO_L4TAB) )
+    if ( (ctx->pt_levels == 4) && (type == XEN_DOMCTL_PFINFO_L4TAB) )
     {
         /*
         ** XXX SMH: should compute these from hvirt_start (which we have)
@@ -555,7 +551,7 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
     {
         unsigned long pfn, mfn;

-        if ( pt_levels == 2 )
+        if ( ctx->pt_levels == 2 )
             pte = ((const uint32_t*)spage)[i];
         else
             pte = ((const uint64_t*)spage)[i];
@@ -565,8 +561,8 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,

         if ( pte & _PAGE_PRESENT )
         {
-            mfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86(guest_width);
-            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, mfn) )
+            mfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86(ctx->guest_width);
+            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, mfn) )
             {
                 /* This will happen if the type info is stale which
                    is quite feasible under live migration */
@@ -576,13 +572,13 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
                  * compat m2p, so we quietly zap them.  This doesn't
                  * count as a race, so don't report it. */
                 if ( !(type == XEN_DOMCTL_PFINFO_L2TAB
-                       && sizeof (unsigned long) > guest_width) )
+                       && sizeof (unsigned long) > ctx->guest_width) )
                     race = 1;  /* inform the caller; fatal if !live */
             }
             else
                 pfn = mfn_to_pfn(mfn);

-            pte &= ~MADDR_MASK_X86(guest_width);
+            pte &= ~MADDR_MASK_X86(ctx->guest_width);
             pte |= (uint64_t)pfn << PAGE_SHIFT;

             /*
@@ -590,13 +586,13 @@ static int canonicalize_pagetable(unsigned long type, unsigned long pfn,
              * a 64bit hypervisor. We zap these here to avoid any
              * surprise at restore time...
              */
-            if ( (pt_levels == 3) &&
+            if ( (ctx->pt_levels == 3) &&
                  (type == XEN_DOMCTL_PFINFO_L3TAB) &&
                  (pte & (_PAGE_USER|_PAGE_RW|_PAGE_ACCESSED)) )
                 pte &= ~(_PAGE_USER|_PAGE_RW|_PAGE_ACCESSED);
         }

-        if ( pt_levels == 2 )
+        if ( ctx->pt_levels == 2 )
             ((uint32_t*)dpage)[i] = pte;
         else
             ((uint64_t*)dpage)[i] = pte;
@@ -704,20 +700,20 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     memcpy(p2m_frame_list_list, live_p2m_frame_list_list, PAGE_SIZE);

     /* Canonicalize guest's unsigned long vs ours */
-    if ( guest_width > sizeof(unsigned long) )
+    if ( ctx->guest_width > sizeof(unsigned long) )
         for ( i = 0; i < PAGE_SIZE/sizeof(unsigned long); i++ )
-            if ( i < PAGE_SIZE/guest_width )
+            if ( i < PAGE_SIZE/ctx->guest_width )
                 p2m_frame_list_list[i] = ((uint64_t *)p2m_frame_list_list)[i];
             else
                 p2m_frame_list_list[i] = 0;
-    else if ( guest_width < sizeof(unsigned long) )
+    else if ( ctx->guest_width < sizeof(unsigned long) )
         for ( i = PAGE_SIZE/sizeof(unsigned long) - 1; i >= 0; i-- )
             p2m_frame_list_list[i] = ((uint32_t *)p2m_frame_list_list)[i];

     live_p2m_frame_list =
         xc_map_foreign_batch(xc_handle, dom, PROT_READ,
                              p2m_frame_list_list,
-                             P2M_FLL_ENTRIES(p2m_size, guest_width));
+                             P2M_FLL_ENTRIES(ctx->p2m_size, ctx->guest_width));
     if ( !live_p2m_frame_list )
     {
         ERROR("Couldn't map p2m_frame_list");
@@ -725,20 +721,20 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     }

     /* Get a local copy of the live_P2M_frame_list */
-    if ( !(p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE(p2m_size, guest_width))) )
+    if ( !(p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE(ctx->p2m_size, ctx->guest_width))) )
     {
         ERROR("Couldn't allocate p2m_frame_list array");
         goto out;
     }
-    memset(p2m_frame_list, 0, P2M_TOOLS_FL_SIZE(p2m_size, guest_width));
-    memcpy(p2m_frame_list, live_p2m_frame_list, P2M_GUEST_FL_SIZE(p2m_size, guest_width));
+    memset(p2m_frame_list, 0, P2M_TOOLS_FL_SIZE(ctx->p2m_size, ctx->guest_width));
+    memcpy(p2m_frame_list, live_p2m_frame_list, P2M_GUEST_FL_SIZE(ctx->p2m_size, ctx->guest_width));

     /* Canonicalize guest's unsigned long vs ours */
-    if ( guest_width > sizeof(unsigned long) )
-        for ( i = 0; i < P2M_FL_ENTRIES(p2m_size, guest_width); i++ )
+    if ( ctx->guest_width > sizeof(unsigned long) )
+        for ( i = 0; i < P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width); i++ )
             p2m_frame_list[i] = ((uint64_t *)p2m_frame_list)[i];
-    else if ( guest_width < sizeof(unsigned long) )
-        for ( i = P2M_FL_ENTRIES(p2m_size, guest_width) - 1; i >= 0; i-- )
+    else if ( ctx->guest_width < sizeof(unsigned long) )
+        for ( i = P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width) - 1; i >= 0; i-- )
             p2m_frame_list[i] = ((uint32_t *)p2m_frame_list)[i];

@@ -749,35 +745,35 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     p2m = xc_map_foreign_batch(xc_handle, dom,
                                PROT_READ,
                                p2m_frame_list,
-                               P2M_FL_ENTRIES(p2m_size, guest_width));
+                               P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width));
     if ( !p2m )
     {
         ERROR("Couldn't map p2m table");
         goto out;
     }
-    live_p2m = p2m; /* So that translation macros will work */
+    ctx->live_p2m = p2m; /* So that translation macros will work */

     /* Canonicalise the pfn-to-mfn table frame-number list. */
-    for ( i = 0; i < p2m_size; i += FPP(guest_width) )
+    for ( i = 0; i < ctx->p2m_size; i += FPP(ctx->guest_width) )
     {
-        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, p2m_frame_list[i/FPP(guest_width)]) )
+        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, p2m_frame_list[i/FPP(ctx->guest_width)]) )
         {
             ERROR("Frame# in pfn-to-mfn frame list is not in pseudophys");
             ERROR("entry %d: p2m_frame_list[%ld] is 0x%"PRIx64", max 0x%lx",
-                  i, i/FPP(guest_width), (uint64_t)p2m_frame_list[i/FPP(guest_width)], max_mfn);
-            if ( p2m_frame_list[i/FPP(guest_width)] < max_mfn )
+                  i, i/FPP(ctx->guest_width), (uint64_t)p2m_frame_list[i/FPP(ctx->guest_width)], ctx->max_mfn);
+            if ( p2m_frame_list[i/FPP(ctx->guest_width)] < ctx->max_mfn )
             {
                 ERROR("m2p[0x%"PRIx64"] = 0x%"PRIx64,
-                      (uint64_t)p2m_frame_list[i/FPP(guest_width)],
-                      (uint64_t)live_m2p[p2m_frame_list[i/FPP(guest_width)]]);
+                      (uint64_t)p2m_frame_list[i/FPP(ctx->guest_width)],
+                      (uint64_t)ctx->live_m2p[p2m_frame_list[i/FPP(ctx->guest_width)]]);
                 ERROR("p2m[0x%"PRIx64"] = 0x%"PRIx64,
-                      (uint64_t)live_m2p[p2m_frame_list[i/FPP(guest_width)]],
-                      (uint64_t)p2m[live_m2p[p2m_frame_list[i/FPP(guest_width)]]]);
+                      (uint64_t)ctx->live_m2p[p2m_frame_list[i/FPP(ctx->guest_width)]],
+                      (uint64_t)p2m[ctx->live_m2p[p2m_frame_list[i/FPP(ctx->guest_width)]]]);
             }
             goto out;
         }
-        p2m_frame_list[i/FPP(guest_width)] = mfn_to_pfn(p2m_frame_list[i/FPP(guest_width)]);
+        p2m_frame_list[i/FPP(ctx->guest_width)] = mfn_to_pfn(p2m_frame_list[i/FPP(ctx->guest_width)]);
     }

     if ( xc_vcpu_getcontext(xc_handle, dom, 0, &ctxt) )
@@ -793,7 +789,7 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     */
     {
         unsigned long signature = ~0UL;
-        uint32_t chunk1_sz = ((guest_width==8)
+        uint32_t chunk1_sz = ((ctx->guest_width==8)
                               ? sizeof(ctxt.x64)
                               : sizeof(ctxt.x32));
         uint32_t chunk2_sz = 0;
@@ -812,7 +808,7 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,
     }

     if ( write_exact(io_fd, p2m_frame_list,
-                     P2M_FL_ENTRIES(p2m_size, guest_width) * sizeof(xen_pfn_t)) )
+                     P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width) * sizeof(xen_pfn_t)) )
     {
         PERROR("write: p2m_frame_list");
         goto out;
@@ -823,13 +819,13 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle,

 out:
     if ( !success && p2m )
-        munmap(p2m, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);
+        munmap(p2m, P2M_FLL_ENTRIES(ctx->p2m_size, ctx->guest_width) * PAGE_SIZE);

     if ( live_p2m_frame_list_list )
         munmap(live_p2m_frame_list_list, PAGE_SIZE);

     if ( live_p2m_frame_list )
-        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);
+        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES(ctx->p2m_size, ctx->guest_width) * PAGE_SIZE);

     if ( p2m_frame_list_list )
         free(p2m_frame_list_list);
@@ -908,7 +904,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
     initialize_mbit_rate();

     if ( !get_platform_info(xc_handle, dom,
-                            &max_mfn, &hvirt_start, &pt_levels, &guest_width) )
+                            &ctx->max_mfn, &ctx->hvirt_start, &ctx->pt_levels, &ctx->guest_width) )
     {
         ERROR("Unable to get platform info.");
         return 1;
@@ -935,7 +931,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
     }

     /* Get the size of the P2M table */
-    p2m_size = xc_memory_op(xc_handle, XENMEM_maximum_gpfn, &dom) + 1;
+    ctx->p2m_size = xc_memory_op(xc_handle, XENMEM_maximum_gpfn, &dom) + 1;

     /* Domain is still running at this point */
     if ( live )
@@ -981,7 +977,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
     last_iter = !live;

     /* pretend we sent all the pages last iteration */
-    sent_last_iter = p2m_size;
+    sent_last_iter = ctx->p2m_size;

     /* Setup to_send / to_fix and to_skip bitmaps */
     to_send = xg_memalign(PAGE_SIZE, ROUNDUP(BITMAP_SIZE, PAGE_SHIFT));
@@ -1047,14 +1043,14 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
     }

     /* Setup the mfn_to_pfn table mapping */
-    if ( !(live_m2p = xc_map_m2p(xc_handle, max_mfn, PROT_READ, &m2p_mfn0)) )
+    if ( !(ctx->live_m2p = xc_map_m2p(xc_handle, ctx->max_mfn, PROT_READ, &ctx->m2p_mfn0)) )
     {
         ERROR("Failed to map live M2P table");
         goto out;
     }

     /* Start writing out the saved-domain record. */
-    if ( write_exact(io_fd, &p2m_size, sizeof(unsigned long)) )
+    if ( write_exact(io_fd, &ctx->p2m_size, sizeof(unsigned long)) )
     {
         PERROR("write: p2m_size");
         goto out;
@@ -1065,8 +1061,8 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
         int err = 0;

         /* Map the P2M table, and write the list of P2M frames */
-        live_p2m = map_and_save_p2m_table(xc_handle, io_fd, dom, live_shinfo);
-        if ( live_p2m == NULL )
+        ctx->live_p2m = map_and_save_p2m_table(xc_handle, io_fd, dom, live_shinfo);
+        if ( ctx->live_p2m == NULL )
         {
             ERROR("Failed to map/save the p2m frame list");
             goto out;
@@ -1076,7 +1072,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
          * Quick belt and braces sanity check.
          */

-        for ( i = 0; i < p2m_size; i++ )
+        for ( i = 0; i < ctx->p2m_size; i++ )
         {
             mfn = pfn_to_mfn(i);
             if( (mfn != INVALID_P2M_ENTRY) && (mfn_to_pfn(mfn) != i) )
@@ -1118,9 +1114,9 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,

         DPRINTF("Saving memory pages: iter %d   0%%", iter);

-        while ( N < p2m_size )
+        while ( N < ctx->p2m_size )
         {
-            unsigned int this_pc = (N * 100) / p2m_size;
+            unsigned int this_pc = (N * 100) / ctx->p2m_size;

             if ( (this_pc - prev_pc) >= 5 )
             {
@@ -1134,8 +1130,8 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
                but this is fast enough for the moment. */
             frc = xc_shadow_control(
                 xc_handle, dom, XEN_DOMCTL_SHADOW_OP_PEEK, to_skip,
-                p2m_size, NULL, 0, NULL);
-            if ( frc != p2m_size )
+                ctx->p2m_size, NULL, 0, NULL);
+            if ( frc != ctx->p2m_size )
             {
                 ERROR("Error peeking shadow bitmap");
                 goto out;
@@ -1145,7 +1141,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
             /* load pfn_type[] with the mfn of all the pages we're doing in
                this batch. */
             for  ( batch = 0;
-                   (batch < MAX_BATCH_SIZE) && (N < p2m_size);
+                   (batch < MAX_BATCH_SIZE) && (N < ctx->p2m_size);
                    N++ )
             {
                 int n = N;
@@ -1407,7 +1403,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
             print_stats( xc_handle, dom, sent_this_iter, &stats, 1);

             DPRINTF("Total pages sent= %ld (%.2fx)\n",
-                    total_sent, ((float)total_sent)/p2m_size );
+                    total_sent, ((float)total_sent)/ctx->p2m_size );
             DPRINTF("(of which %ld were fixups)\n", needed_to_fix  );
         }
@@ -1436,7 +1432,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
             if ( ((sent_this_iter > sent_last_iter) && RATE_IS_MAX()) ||
                  (iter >= max_iters) ||
                  (sent_this_iter+skip_this_iter < 50) ||
-                 (total_sent > p2m_size*max_factor) )
+                 (total_sent > ctx->p2m_size*max_factor) )
             {
                 DPRINTF("Start last iteration\n");
                 last_iter = 1;
@@ -1460,7 +1456,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
             if ( xc_shadow_control(xc_handle, dom,
                                    XEN_DOMCTL_SHADOW_OP_CLEAN, to_send,
-                                   p2m_size, NULL, 0, &stats) != p2m_size )
+                                   ctx->p2m_size, NULL, 0, &stats) != ctx->p2m_size )
             {
                 ERROR("Error flushing shadow PT");
                 goto out;
@@ -1593,7 +1589,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
         unsigned int i,j;
         unsigned long pfntab[1024];

-        for ( i = 0, j = 0; i < p2m_size; i++ )
+        for ( i = 0, j = 0; i < ctx->p2m_size; i++ )
         {
             if ( !is_mapped(pfn_to_mfn(i)) )
                 j++;
@@ -1605,13 +1601,13 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
             goto out;
         }

-        for ( i = 0, j = 0; i < p2m_size; )
+        for ( i = 0, j = 0; i < ctx->p2m_size; )
         {
             if ( !is_mapped(pfn_to_mfn(i)) )
                 pfntab[j++] = i;

             i++;
-            if ( (j == 1024) || (i == p2m_size) )
+            if ( (j == 1024) || (i == ctx->p2m_size) )
             {
                 if ( write_exact(io_fd, &pfntab, sizeof(unsigned long)*j) )
                 {
@@ -1630,13 +1626,13 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
     }

     /* Canonicalise the suspend-record frame number. */
-    mfn = GET_FIELD(guest_width, &ctxt, user_regs.edx);
-    if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, mfn) )
+    mfn = GET_FIELD(ctx->guest_width, &ctxt, user_regs.edx);
+    if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, mfn) )
     {
         ERROR("Suspend record is not in range of pseudophys map");
         goto out;
     }
-    SET_FIELD(guest_width, &ctxt, user_regs.edx, mfn_to_pfn(mfn));
+    SET_FIELD(ctx->guest_width, &ctxt, user_regs.edx, mfn_to_pfn(mfn));

     for ( i = 0; i <= info.max_vcpu_id; i++ )
     {
@@ -1650,41 +1646,41 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
         }

         /* Canonicalise each GDT frame number. */
-        for ( j = 0; (512*j) < GET_FIELD(guest_width, &ctxt, gdt_ents); j++ )
+        for ( j = 0; (512*j) < GET_FIELD(ctx->guest_width, &ctxt, gdt_ents); j++ )
         {
-            mfn = GET_FIELD(guest_width, &ctxt, gdt_frames[j]);
-            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, mfn) )
+            mfn = GET_FIELD(ctx->guest_width, &ctxt, gdt_frames[j]);
+            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, mfn) )
             {
                 ERROR("GDT frame is not in range of pseudophys map");
                 goto out;
             }
-            SET_FIELD(guest_width, &ctxt, gdt_frames[j], mfn_to_pfn(mfn));
+            SET_FIELD(ctx->guest_width, &ctxt, gdt_frames[j], mfn_to_pfn(mfn));
         }

         /* Canonicalise the page table base pointer. */
-        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, UNFOLD_CR3(guest_width,
-                                           GET_FIELD(guest_width, &ctxt, ctrlreg[3]))) )
+        if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, UNFOLD_CR3(ctx->guest_width,
+                                           GET_FIELD(ctx->guest_width, &ctxt, ctrlreg[3]))) )
         {
             ERROR("PT base is not in range of pseudophys map");
             goto out;
         }
-        SET_FIELD(guest_width, &ctxt, ctrlreg[3],
-                  FOLD_CR3(guest_width, mfn_to_pfn(UNFOLD_CR3(guest_width, GET_FIELD(guest_width, &ctxt, ctrlreg[3])))));
+        SET_FIELD(ctx->guest_width, &ctxt, ctrlreg[3],
+                  FOLD_CR3(ctx->guest_width, mfn_to_pfn(UNFOLD_CR3(ctx->guest_width, GET_FIELD(ctx->guest_width, &ctxt, ctrlreg[3])))));

         /* Guest pagetable (x86/64) stored in otherwise-unused CR1. */
-        if ( (pt_levels == 4) && ctxt.x64.ctrlreg[1] )
+        if ( (ctx->pt_levels == 4) && ctxt.x64.ctrlreg[1] )
         {
-            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(max_mfn, UNFOLD_CR3(guest_width, ctxt.x64.ctrlreg[1])) )
+            if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, UNFOLD_CR3(ctx->guest_width, ctxt.x64.ctrlreg[1])) )
            {
                 ERROR("PT base is not in range of pseudophys map");
                 goto out;
             }
             /* Least-significant bit means 'valid PFN'. */
             ctxt.x64.ctrlreg[1] = 1 |
-                FOLD_CR3(guest_width, mfn_to_pfn(UNFOLD_CR3(guest_width, ctxt.x64.ctrlreg[1])));
+                FOLD_CR3(ctx->guest_width, mfn_to_pfn(UNFOLD_CR3(ctx->guest_width, ctxt.x64.ctrlreg[1])));
         }

-        if ( write_exact(io_fd, &ctxt, ((guest_width==8)
+        if ( write_exact(io_fd, &ctxt, ((ctx->guest_width==8)
                                         ? sizeof(ctxt.x64)
                                         : sizeof(ctxt.x32))) )
         {
@@ -1711,7 +1707,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
      * Reset the MFN to be a known-invalid value. See map_frame_list_list().
      */
     memcpy(page, live_shinfo, PAGE_SIZE);
-    SET_FIELD(guest_width, ((shared_info_any_t *)page),
+    SET_FIELD(ctx->guest_width, ((shared_info_any_t *)page),
               arch.pfn_to_mfn_frame_list_list, 0);
     if ( write_exact(io_fd, page, PAGE_SIZE) )
     {
@@ -1756,7 +1752,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,

         if ( xc_shadow_control(xc_handle, dom,
                                XEN_DOMCTL_SHADOW_OP_CLEAN, to_send,
-                               p2m_size, NULL, 0, &stats) != p2m_size )
+                               ctx->p2m_size, NULL, 0, &stats) != ctx->p2m_size )
         {
             ERROR("Error flushing shadow PT");
         }
@@ -1780,11 +1776,11 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters,
     if ( live_shinfo )
         munmap(live_shinfo, PAGE_SIZE);

-    if ( live_p2m )
-        munmap(live_p2m, P2M_FLL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE);
+    if ( ctx->live_p2m )
+        munmap(ctx->live_p2m, P2M_FLL_ENTRIES(ctx->p2m_size, ctx->guest_width) * PAGE_SIZE);

-    if ( live_m2p )
-        munmap(live_m2p, M2P_SIZE(max_mfn));
+    if ( ctx->live_m2p )
+        munmap(ctx->live_m2p, M2P_SIZE(ctx->max_mfn));

     free(pfn_type);
     free(pfn_batch);
--
1.6.5.2
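One design note on this step: helper macros such as mfn_to_pfn now expand to
ctx->live_m2p[...], so they in turn assume a pointer named ctx is in scope at
every expansion site — the same implicit-symbol coupling that patch 1 removed
from the field macros, traded here for much shorter call sites. A minimal
sketch of what the macro requires (hypothetical names, not the libxc code):

    #include <stdio.h>

    struct suspend_ctx {
        unsigned long *live_m2p;
    };

    /* Only compiles where a suitable "ctx" is in scope. */
    #define mfn_to_pfn(_mfn) (ctx->live_m2p[(_mfn)])

    int main(void)
    {
        unsigned long m2p[4] = { 9, 8, 7, 6 };
        struct suspend_ctx real = { .live_m2p = m2p };
        struct suspend_ctx *ctx = &real;     /* required by the macro */
        printf("%lu\n", mfn_to_pfn(2));      /* ctx->live_m2p[2] == 7 */
        return 0;
    }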
Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 4/7] move the suspend_ctx on the save stack instead of a global one
move the suspend_ctx on the save stack instead of a global one --- tools/libxc/xc_domain_save.c | 28 +++++++++++++++------------- 1 files changed, 15 insertions(+), 13 deletions(-) diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c index 97bd4ad..341b0e0 100644 --- a/tools/libxc/xc_domain_save.c +++ b/tools/libxc/xc_domain_save.c @@ -41,13 +41,6 @@ struct suspend_ctx { xen_pfn_t *live_p2m; /* Live mapping of the table mapping each PFN to its current MFN. */ }; -struct suspend_ctx _ctx = { - .live_p2m = NULL, - .live_m2p = NULL, -}; - -struct suspend_ctx *ctx = &_ctx; - /* buffer for output */ struct outbuf { void* buf; @@ -398,6 +391,7 @@ static int print_stats(int xc_handle, uint32_t domid, int pages_sent, static int analysis_phase(int xc_handle, uint32_t domid, + struct suspend_ctx *ctx, unsigned long *arr, int runs) { long long start, now; @@ -454,6 +448,7 @@ static int suspend_and_state(int (*suspend)(void*), void* data, ** it to update the MFN to a reasonable value. */ static void *map_frame_list_list(int xc_handle, uint32_t dom, + struct suspend_ctx *ctx, shared_info_any_t *shinfo) { int count = 100; @@ -488,7 +483,8 @@ static void *map_frame_list_list(int xc_handle, uint32_t dom, ** which entries do not require canonicalization (in particular, those ** entries which map the virtual address reserved for the hypervisor). */ -static int canonicalize_pagetable(unsigned long type, unsigned long pfn, +static int canonicalize_pagetable(struct suspend_ctx *ctx, + unsigned long type, unsigned long pfn, const void *spage, void *dpage) { @@ -669,6 +665,7 @@ err0: static xen_pfn_t *map_and_save_p2m_table(int xc_handle, int io_fd, uint32_t dom, + struct suspend_ctx *ctx, shared_info_any_t *live_shinfo) { vcpu_guest_context_any_t ctxt; @@ -686,7 +683,7 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle, int i, success = 0; - live_p2m_frame_list_list = map_frame_list_list(xc_handle, dom, + live_p2m_frame_list_list = map_frame_list_list(xc_handle, dom, ctx, live_shinfo); if ( !live_p2m_frame_list_list ) goto out; @@ -892,9 +889,14 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters, unsigned long mfn; struct outbuf ob; - int completed = 0; + struct suspend_ctx _ctx = { + .live_p2m = NULL, + .live_m2p = NULL, + }; + struct suspend_ctx *ctx = &_ctx; + outbuf_init(&ob, OUTBUF_SIZE); /* If no explicit control parameters given, use defaults */ @@ -1022,7 +1024,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters, } } - analysis_phase(xc_handle, dom, to_skip, 0); + analysis_phase(xc_handle, dom, ctx, to_skip, 0); pfn_type = xg_memalign(PAGE_SIZE, ROUNDUP( MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); @@ -1061,7 +1063,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters, int err = 0; /* Map the P2M table, and write the list of P2M frames */ - ctx->live_p2m = map_and_save_p2m_table(xc_handle, io_fd, dom, live_shinfo); + ctx->live_p2m = map_and_save_p2m_table(xc_handle, io_fd, dom, ctx, live_shinfo); if ( ctx->live_p2m == NULL ) { ERROR("Failed to map/save the p2m frame list"); @@ -1349,7 +1351,7 @@ int xc_domain_save(int xc_handle, int io_fd, uint32_t dom, uint32_t max_iters, { /* We have a pagetable page: need to rewrite it. 
 */
             race =
-                canonicalize_pagetable(pagetype, pfn, spage, page);
+                canonicalize_pagetable(ctx, pagetype, pfn, spage, page);

             if ( race && !live )
             {
-- 
1.6.5.2
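Reduced to a toy example, the shape of patch 4/7 is: the context stops being a file-scope singleton and becomes a stack object owned by the top-level entry point, with helpers taking an explicit pointer. The names below are illustrative, not the real libxc symbols:

    struct suspend_ctx {
        unsigned long p2m_size;
        unsigned int  guest_width;
    };

    /* Helpers now take the context explicitly instead of reaching for a
     * file-scope 'struct suspend_ctx *ctx'. */
    static int helper(struct suspend_ctx *ctx, unsigned long pfn)
    {
        return pfn < ctx->p2m_size;
    }

    int save_entry(void)
    {
        /* Context lives on this stack frame; no shared mutable state. */
        struct suspend_ctx _ctx = { 0 }, *ctx = &_ctx;

        ctx->p2m_size    = 1024;
        ctx->guest_width = sizeof(unsigned long);
        return helper(ctx, 42);
    }

Because each call owns its own _ctx, two saves can proceed in the same process without stepping on shared state, which is the reentrancy the series is after.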
Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 5/7] alias i/FPP(guest_width) as p2m_index and replace every usage
alias i/FPP(guest_width) as p2m_index and replace every usage --- tools/libxc/xc_domain_save.c | 17 +++++++++-------- 1 files changed, 9 insertions(+), 8 deletions(-) diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c index 341b0e0..61cc589 100644 --- a/tools/libxc/xc_domain_save.c +++ b/tools/libxc/xc_domain_save.c @@ -753,24 +753,25 @@ static xen_pfn_t *map_and_save_p2m_table(int xc_handle, /* Canonicalise the pfn-to-mfn table frame-number list. */ for ( i = 0; i < ctx->p2m_size; i += FPP(ctx->guest_width) ) { - if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, p2m_frame_list[i/FPP(ctx->guest_width)]) ) + uint32_t p2m_index = i / FPP(ctx->guest_width); + if ( !MFN_IS_IN_PSEUDOPHYS_MAP(ctx->max_mfn, p2m_frame_list[p2m_index]) ) { ERROR("Frame# in pfn-to-mfn frame list is not in pseudophys"); ERROR("entry %d: p2m_frame_list[%ld] is 0x%"PRIx64", max 0x%lx", - i, i/FPP(ctx->guest_width), (uint64_t)p2m_frame_list[i/FPP(ctx->guest_width)], ctx->max_mfn); - if ( p2m_frame_list[i/FPP(ctx->guest_width)] < ctx->max_mfn ) + i, p2m_index, (uint64_t)p2m_frame_list[p2m_index], ctx->max_mfn); + if ( p2m_frame_list[p2m_index] < ctx->max_mfn ) { ERROR("m2p[0x%"PRIx64"] = 0x%"PRIx64, - (uint64_t)p2m_frame_list[i/FPP(ctx->guest_width)], - (uint64_t)ctx->live_m2p[p2m_frame_list[i/FPP(ctx->guest_width)]]); + (uint64_t)p2m_frame_list[p2m_index], + (uint64_t)ctx->live_m2p[p2m_frame_list[p2m_index]]); ERROR("p2m[0x%"PRIx64"] = 0x%"PRIx64, - (uint64_t)ctx->live_m2p[p2m_frame_list[i/FPP(ctx->guest_width)]], - (uint64_t)p2m[ctx->live_m2p[p2m_frame_list[i/FPP(ctx->guest_width)]]]); + (uint64_t)ctx->live_m2p[p2m_frame_list[p2m_index]], + (uint64_t)p2m[ctx->live_m2p[p2m_frame_list[p2m_index]]]); } goto out; } - p2m_frame_list[i/FPP(ctx->guest_width)] = mfn_to_pfn(p2m_frame_list[i/FPP(ctx->guest_width)]); + p2m_frame_list[p2m_index] = mfn_to_pfn(p2m_frame_list[p2m_index]); } if ( xc_vcpu_getcontext(xc_handle, dom, 0, &ctxt) ) -- 1.6.5.2 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
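Patch 5/7 is a plain common-subexpression hoist: compute i/FPP(guest_width) once per iteration so the error paths stay short and every use of the index is guaranteed to agree. In miniature (FPP here is a stand-in definition, not the real macro):

    #define FPP(gw) ((gw) == 8 ? 512 : 1024)   /* frames-per-page stand-in */

    void walk(unsigned long *frame_list, unsigned long p2m_size,
              unsigned int guest_width)
    {
        for ( unsigned long i = 0; i < p2m_size; i += FPP(guest_width) )
        {
            /* One name for the index instead of repeating the division. */
            unsigned long p2m_index = i / FPP(guest_width);
            frame_list[p2m_index] += 1;
        }
    }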
Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 6/7] move restore global variables into a global context
move restore global variables into a global context --- tools/libxc/xc_domain_restore.c | 277 +++++++++++++++++++-------------------- 1 files changed, 135 insertions(+), 142 deletions(-) diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c index e3d2d4a..6430e91 100644 --- a/tools/libxc/xc_domain_restore.c +++ b/tools/libxc/xc_domain_restore.c @@ -32,32 +32,25 @@ #include <xen/hvm/ioreq.h> #include <xen/hvm/params.h> -/* max mfn of the current host machine */ -static unsigned long max_mfn; - -/* virtual starting address of the hypervisor */ -static unsigned long hvirt_start; - -/* #levels of page tables used by the current guest */ -static unsigned int pt_levels; - -/* number of pfns this guest has (i.e. number of entries in the P2M) */ -static unsigned long p2m_size; - -/* number of ''in use'' pfns in the guest (i.e. #P2M entries with a valid mfn) */ -static unsigned long nr_pfns; - -/* Live mapping of the table mapping each PFN to its current MFN. */ -static xen_pfn_t *live_p2m = NULL; - -/* A table mapping each PFN to its new MFN. */ -static xen_pfn_t *p2m = NULL; - -/* Address size of the guest, in bytes */ -unsigned int guest_width; - -/* If have enough continuous memory for super page allocation */ -static unsigned no_superpage_mem = 0; +struct restore_ctx { + unsigned long max_mfn; /* max mfn of the current host machine */ + unsigned long hvirt_start; /* virtual starting address of the hypervisor */ + unsigned int pt_levels; /* #levels of page tables used by the current guest */ + unsigned long p2m_size; /* number of pfns this guest has (i.e. number of entries in the P2M) */ + unsigned long nr_pfns; /* number of ''in use'' pfns in the guest (i.e. #P2M entries with a valid mfn) */ + xen_pfn_t *live_p2m; /* Live mapping of the table mapping each PFN to its current MFN. */ + xen_pfn_t *p2m; /* A table mapping each PFN to its new MFN. 
*/ + unsigned int guest_width; /* Address size of the guest, in bytes */ + unsigned no_superpage_mem; /* If have enough continuous memory for super page allocation */ +}; + +struct restore_ctx _ctx = { + .live_p2m = NULL, + .p2m = NULL, + .no_superpage_mem = 0, +}; + +struct restore_ctx *ctx = &_ctx; /* ** @@ -84,7 +77,7 @@ static int super_page_populated(unsigned long pfn) pfn &= ~(SUPERPAGE_NR_PFNS - 1); for ( i = pfn; i < pfn + SUPERPAGE_NR_PFNS; i++ ) { - if ( p2m[i] != INVALID_P2M_ENTRY ) + if ( ctx->p2m[i] != INVALID_P2M_ENTRY ) return 1; } return 0; @@ -109,7 +102,7 @@ static int break_super_page(int xc_handle, for ( i = start_pfn; i < start_pfn + SUPERPAGE_NR_PFNS; i++ ) { /* check the 2M page are populated */ - if ( p2m[i] == INVALID_P2M_ENTRY ) { + if ( ctx->p2m[i] == INVALID_P2M_ENTRY ) { DPRINTF("Previous super page was populated wrongly!\n"); return 1; } @@ -158,7 +151,7 @@ static int break_super_page(int xc_handle, start_pfn = next_pfn & ~(SUPERPAGE_NR_PFNS - 1); for ( i = start_pfn; i < start_pfn + SUPERPAGE_NR_PFNS; i++ ) { - p2m[i] = INVALID_P2M_ENTRY; + ctx->p2m[i] = INVALID_P2M_ENTRY; } for ( i = start_pfn; i < start_pfn + tot_pfns; i++ ) @@ -172,7 +165,7 @@ static int break_super_page(int xc_handle, rc = 1; goto out; } - p2m[i] = mfn; + ctx->p2m[i] = mfn; } /* restore contents */ @@ -224,7 +217,7 @@ static int allocate_mfn_list(int xc_handle, sp_pfn = *next_pfn; if ( !superpages || - no_superpage_mem || + ctx->no_superpage_mem || !SUPER_PAGE_TRACKING(sp_pfn) ) goto normal_page; @@ -269,13 +262,13 @@ static int allocate_mfn_list(int xc_handle, { for ( i = pfn; i < pfn + SUPERPAGE_NR_PFNS; i++, mfn++ ) { - p2m[i] = mfn; + ctx->p2m[i] = mfn; } return 0; } DPRINTF("No 2M page available for pfn 0x%lx, fall back to 4K page.\n", pfn); - no_superpage_mem = 1; + ctx->no_superpage_mem = 1; normal_page: if ( !batch_buf ) @@ -291,7 +284,7 @@ normal_page: continue; pfn = mfn = batch_buf[i] & ~XEN_DOMCTL_PFINFO_LTAB_MASK; - if ( p2m[pfn] == INVALID_P2M_ENTRY ) + if ( ctx->p2m[pfn] == INVALID_P2M_ENTRY ) { if (xc_domain_memory_populate_physmap(xc_handle, dom, 1, 0, 0, &mfn) != 0) @@ -301,7 +294,7 @@ normal_page: errno = ENOMEM; return 1; } - p2m[pfn] = mfn; + ctx->p2m[pfn] = mfn; } } @@ -427,7 +420,7 @@ alloc_page: pfn = region_pfn_type[i] & ~XEN_DOMCTL_PFINFO_LTAB_MASK; pagetype = region_pfn_type[i] & XEN_DOMCTL_PFINFO_LTAB_MASK; - if ( pfn > p2m_size ) + if ( pfn > ctx->p2m_size ) { ERROR("pfn out of range"); return 1; @@ -438,7 +431,7 @@ alloc_page: } else { - if (p2m[pfn] == INVALID_P2M_ENTRY) + if (ctx->p2m[pfn] == INVALID_P2M_ENTRY) { DPRINTF("Warning: pfn 0x%lx are not allocated!\n", pfn); /*XXX:allocate this page?*/ @@ -446,7 +439,7 @@ alloc_page: /* setup region_mfn[] for batch map. * For HVM guests, this interface takes PFNs, not MFNs */ - region_mfn[i] = hvm ? pfn : p2m[pfn]; + region_mfn[i] = hvm ? pfn : ctx->p2m[pfn]; } } return 0; @@ -512,11 +505,11 @@ static int uncanonicalize_pagetable(int xc_handle, uint32_t dom, unsigned long pfn; uint64_t pte; - pte_last = PAGE_SIZE / ((pt_levels == 2)? 4 : 8); + pte_last = PAGE_SIZE / ((ctx->pt_levels == 2)? 
4 : 8); for ( i = 0; i < pte_last; i++ ) { - if ( pt_levels == 2 ) + if ( ctx->pt_levels == 2 ) pte = ((uint32_t *)page)[i]; else pte = ((uint64_t *)page)[i]; @@ -525,20 +518,20 @@ static int uncanonicalize_pagetable(int xc_handle, uint32_t dom, if ( !(pte & _PAGE_PRESENT) ) continue; - pfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86(guest_width); + pfn = (pte >> PAGE_SHIFT) & MFN_MASK_X86(ctx->guest_width); /* Allocate mfn if necessary */ - if ( p2m[pfn] == INVALID_P2M_ENTRY ) + if ( ctx->p2m[pfn] == INVALID_P2M_ENTRY ) { unsigned long force_pfn = superpages ? FORCE_SP_MASK : pfn; if (allocate_mfn_list(xc_handle, dom, 1, &pfn, &force_pfn, superpages) != 0) return 0; } - pte &= ~MADDR_MASK_X86(guest_width); - pte |= (uint64_t)p2m[pfn] << PAGE_SHIFT; + pte &= ~MADDR_MASK_X86(ctx->guest_width); + pte |= (uint64_t)ctx->p2m[pfn] << PAGE_SHIFT; - if ( pt_levels == 2 ) + if ( ctx->pt_levels == 2 ) ((uint32_t *)page)[i] = (uint32_t)pte; else ((uint64_t *)page)[i] = (uint64_t)pte; @@ -595,14 +588,14 @@ static xen_pfn_t *load_p2m_frame_list( /* Pick a guest word-size and PT depth from the ctxt size */ if ( chunk_bytes == sizeof (ctxt.x32) ) { - guest_width = 4; - if ( pt_levels > 2 ) - pt_levels = 3; + ctx->guest_width = 4; + if ( ctx->pt_levels > 2 ) + ctx->pt_levels = 3; } else if ( chunk_bytes == sizeof (ctxt.x64) ) { - guest_width = 8; - pt_levels = 4; + ctx->guest_width = 8; + ctx->pt_levels = 4; } else { @@ -618,7 +611,7 @@ static xen_pfn_t *load_p2m_frame_list( tot_bytes -= chunk_bytes; chunk_bytes = 0; - if ( GET_FIELD(guest_width, &ctxt, vm_assist) + if ( GET_FIELD(ctx->guest_width, &ctxt, vm_assist) & (1UL << VMASST_TYPE_pae_extended_cr3) ) *pae_extended_cr3 = 1; } @@ -651,7 +644,7 @@ static xen_pfn_t *load_p2m_frame_list( /* Now that we know the guest''s word-size, can safely allocate * the p2m frame list */ - if ( (p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE(p2m_size, guest_width))) == NULL ) + if ( (p2m_frame_list = malloc(P2M_TOOLS_FL_SIZE(ctx->p2m_size, ctx->guest_width))) == NULL ) { ERROR("Couldn''t allocate p2m_frame_list array"); return NULL; @@ -660,7 +653,7 @@ static xen_pfn_t *load_p2m_frame_list( /* First entry has already been read. */ p2m_frame_list[0] = p2m_fl_zero; if ( read_exact(io_fd, &p2m_frame_list[1], - (P2M_FL_ENTRIES(p2m_size, guest_width) - 1) * sizeof(xen_pfn_t)) ) + (P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width) - 1) * sizeof(xen_pfn_t)) ) { ERROR("read p2m_frame_list failed"); return NULL; @@ -902,7 +895,7 @@ static int buffer_tail_pv(struct tailbuf_pv *buf, int fd, buf->vcpucount++; } // DPRINTF("VCPU count: %d\n", buf->vcpucount); - vcpulen = ((guest_width == 8) ? sizeof(vcpu_guest_context_x86_64_t) + vcpulen = ((ctx->guest_width == 8) ? sizeof(vcpu_guest_context_x86_64_t) : sizeof(vcpu_guest_context_x86_32_t)) * buf->vcpucount; if ( ext_vcpucontext ) vcpulen += 128 * buf->vcpucount; @@ -1202,7 +1195,7 @@ static int apply_batch(int xc_handle, uint32_t dom, xen_pfn_t* region_mfn, ++curpage; - if ( pfn > p2m_size ) + if ( pfn > ctx->p2m_size ) { ERROR("pfn out of range"); return -1; @@ -1210,7 +1203,7 @@ static int apply_batch(int xc_handle, uint32_t dom, xen_pfn_t* region_mfn, pfn_type[pfn] = pagetype; - mfn = p2m[pfn]; + mfn = ctx->p2m[pfn]; /* In verify mode, we use a copy; otherwise we work in place */ page = pagebuf->verify ? (void *)buf : (region_base + i*PAGE_SIZE); @@ -1231,7 +1224,7 @@ static int apply_batch(int xc_handle, uint32_t dom, xen_pfn_t* region_mfn, ** so we may need to update the p2m after the main loop. 
** Hence we defer canonicalization of L1s until then. */ - if ((pt_levels != 3) || + if ((ctx->pt_levels != 3) || pae_extended_cr3 || (pagetype != XEN_DOMCTL_PFINFO_L1TAB)) { @@ -1252,7 +1245,7 @@ static int apply_batch(int xc_handle, uint32_t dom, xen_pfn_t* region_mfn, else if ( pagetype != XEN_DOMCTL_PFINFO_NOTAB ) { ERROR("Bogus page type %lx page table is out of range: " - "i=%d p2m_size=%lu", pagetype, i, p2m_size); + "i=%d p2m_size=%lu", pagetype, i, ctx->p2m_size); return -1; } @@ -1347,21 +1340,21 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, tailbuf.ishvm = hvm; /* For info only */ - nr_pfns = 0; + ctx->nr_pfns = 0; /* Always try to allocate 2M pages for HVM */ if ( hvm ) superpages = 1; - if ( read_exact(io_fd, &p2m_size, sizeof(unsigned long)) ) + if ( read_exact(io_fd, &ctx->p2m_size, sizeof(unsigned long)) ) { ERROR("read: p2m_size"); goto out; } - DPRINTF("xc_domain_restore start: p2m_size = %lx\n", p2m_size); + DPRINTF("xc_domain_restore start: p2m_size = %lx\n", ctx->p2m_size); if ( !get_platform_info(xc_handle, dom, - &max_mfn, &hvirt_start, &pt_levels, &guest_width) ) + &ctx->max_mfn, &ctx->hvirt_start, &ctx->pt_levels, &ctx->guest_width) ) { ERROR("Unable to get platform info."); return 1; @@ -1370,8 +1363,8 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, /* The *current* word size of the guest isn''t very interesting; for now * assume the guest will be the same as we are. We''ll fix that later * if we discover otherwise. */ - guest_width = sizeof(unsigned long); - pt_levels = (guest_width == 8) ? 4 : (pt_levels == 2) ? 2 : 3; + ctx->guest_width = sizeof(unsigned long); + ctx->pt_levels = (ctx->guest_width == 8) ? 4 : (ctx->pt_levels == 2) ? 2 : 3; if ( !hvm ) { @@ -1385,7 +1378,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, memset(&domctl, 0, sizeof(domctl)); domctl.domain = dom; domctl.cmd = XEN_DOMCTL_set_address_size; - domctl.u.address_size.size = guest_width * 8; + domctl.u.address_size.size = ctx->guest_width * 8; frc = do_domctl(xc_handle, &domctl); if ( frc != 0 ) { @@ -1395,13 +1388,13 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, } /* We want zeroed memory so use calloc rather than malloc. 
*/ - p2m = calloc(p2m_size, sizeof(xen_pfn_t)); - pfn_type = calloc(p2m_size, sizeof(unsigned long)); + ctx->p2m = calloc(ctx->p2m_size, sizeof(xen_pfn_t)); + pfn_type = calloc(ctx->p2m_size, sizeof(unsigned long)); region_mfn = xg_memalign(PAGE_SIZE, ROUNDUP( MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); - if ( (p2m == NULL) || (pfn_type == NULL) || + if ( (ctx->p2m == NULL) || (pfn_type == NULL) || (region_mfn == NULL) ) { ERROR("memory alloc failed"); @@ -1429,8 +1422,8 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, shared_info_frame = domctl.u.getdomaininfo.shared_info_frame; /* Mark all PFNs as invalid; we allocate on demand */ - for ( pfn = 0; pfn < p2m_size; pfn++ ) - p2m[pfn] = INVALID_P2M_ENTRY; + for ( pfn = 0; pfn < ctx->p2m_size; pfn++ ) + ctx->p2m[pfn] = INVALID_P2M_ENTRY; mmu = xc_alloc_mmu_updates(xc_handle, dom); if ( mmu == NULL ) @@ -1453,7 +1446,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, { int j, curbatch; - this_pc = (n * 100) / p2m_size; + this_pc = (n * 100) / ctx->p2m_size; if ( (this_pc - prev_pc) >= 5 ) { PPRINTF("\b\b\b\b%3d%%", this_pc); @@ -1565,7 +1558,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, if ( hvm ) goto finish_hvm; - if ( (pt_levels == 3) && !pae_extended_cr3 ) + if ( (ctx->pt_levels == 3) && !pae_extended_cr3 ) { /* ** XXX SMH on PAE we need to ensure PGDs are in MFNs < 4G. This @@ -1582,11 +1575,11 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, int j, k; /* First pass: find all L3TABs current in > 4G mfns and get new mfns */ - for ( i = 0; i < p2m_size; i++ ) + for ( i = 0; i < ctx->p2m_size; i++ ) { if ( ((pfn_type[i] & XEN_DOMCTL_PFINFO_LTABTYPE_MASK) = XEN_DOMCTL_PFINFO_L3TAB) && - (p2m[i] > 0xfffffUL) ) + (ctx->p2m[i] > 0xfffffUL) ) { unsigned long new_mfn; uint64_t l3ptes[4]; @@ -1594,21 +1587,21 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, l3tab = (uint64_t *) xc_map_foreign_range(xc_handle, dom, PAGE_SIZE, - PROT_READ, p2m[i]); + PROT_READ, ctx->p2m[i]); for ( j = 0; j < 4; j++ ) l3ptes[j] = l3tab[j]; munmap(l3tab, PAGE_SIZE); - new_mfn = xc_make_page_below_4G(xc_handle, dom, p2m[i]); + new_mfn = xc_make_page_below_4G(xc_handle, dom, ctx->p2m[i]); if ( !new_mfn ) { ERROR("Couldn''t get a page below 4GB :-("); goto out; } - p2m[i] = new_mfn; + ctx->p2m[i] = new_mfn; if ( xc_add_mmu_update(xc_handle, mmu, (((unsigned long long)new_mfn) << PAGE_SHIFT) | @@ -1620,7 +1613,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, l3tab = (uint64_t *) xc_map_foreign_range(xc_handle, dom, PAGE_SIZE, - PROT_READ | PROT_WRITE, p2m[i]); + PROT_READ | PROT_WRITE, ctx->p2m[i]); for ( j = 0; j < 4; j++ ) l3tab[j] = l3ptes[j]; @@ -1632,16 +1625,16 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, /* Second pass: find all L1TABs and uncanonicalize them */ j = 0; - for ( i = 0; i < p2m_size; i++ ) + for ( i = 0; i < ctx->p2m_size; i++ ) { if ( ((pfn_type[i] & XEN_DOMCTL_PFINFO_LTABTYPE_MASK) = XEN_DOMCTL_PFINFO_L1TAB) ) { - region_mfn[j] = p2m[i]; + region_mfn[j] = ctx->p2m[i]; j++; } - if ( (i == (p2m_size-1)) || (j == MAX_BATCH_SIZE) ) + if ( (i == (ctx->p2m_size-1)) || (j == MAX_BATCH_SIZE) ) { region_base = xc_map_foreign_batch( xc_handle, dom, PROT_READ | PROT_WRITE, region_mfn, j); @@ -1679,7 +1672,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, * will barf when doing the type-checking. 
*/ nr_pins = 0; - for ( i = 0; i < p2m_size; i++ ) + for ( i = 0; i < ctx->p2m_size; i++ ) { if ( (pfn_type[i] & XEN_DOMCTL_PFINFO_LPINTAB) == 0 ) continue; @@ -1706,7 +1699,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, continue; } - pin[nr_pins].arg1.mfn = p2m[i]; + pin[nr_pins].arg1.mfn = ctx->p2m[i]; nr_pins++; /* Batch full? Then flush. */ @@ -1729,7 +1722,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, } DPRINTF("\b\b\b\b100%%\n"); - DPRINTF("Memory reloaded (%ld pages)\n", nr_pfns); + DPRINTF("Memory reloaded (%ld pages)\n", ctx->nr_pfns); /* Get the list of PFNs that are not in the psuedo-phys map */ { @@ -1739,12 +1732,12 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, { unsigned long pfn = tailbuf.u.pv.pfntab[i]; - if ( p2m[pfn] != INVALID_P2M_ENTRY ) + if ( ctx->p2m[pfn] != INVALID_P2M_ENTRY ) { /* pfn is not in physmap now, but was at some point during the save/migration process - need to free it */ - tailbuf.u.pv.pfntab[nr_frees++] = p2m[pfn]; - p2m[pfn] = INVALID_P2M_ENTRY; /* not in pseudo-physical map */ + tailbuf.u.pv.pfntab[nr_frees++] = ctx->p2m[pfn]; + ctx->p2m[pfn] = INVALID_P2M_ENTRY; /* not in pseudo-physical map */ } } @@ -1780,14 +1773,14 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, if ( !(vcpumap & (1ULL << i)) ) continue; - memcpy(&ctxt, vcpup, ((guest_width == 8) ? sizeof(ctxt.x64) + memcpy(&ctxt, vcpup, ((ctx->guest_width == 8) ? sizeof(ctxt.x64) : sizeof(ctxt.x32))); - vcpup += (guest_width == 8) ? sizeof(ctxt.x64) : sizeof(ctxt.x32); + vcpup += (ctx->guest_width == 8) ? sizeof(ctxt.x64) : sizeof(ctxt.x32); DPRINTF("read VCPU %d\n", i); if ( !new_ctxt_format ) - SET_FIELD(guest_width, &ctxt, flags, GET_FIELD(guest_width, &ctxt, flags) | VGCF_online); + SET_FIELD(ctx->guest_width, &ctxt, flags, GET_FIELD(ctx->guest_width, &ctxt, flags) | VGCF_online); if ( i == 0 ) { @@ -1795,86 +1788,86 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, * Uncanonicalise the suspend-record frame number and poke * resume record. 
*/ - pfn = GET_FIELD(guest_width, &ctxt, user_regs.edx); - if ( (pfn >= p2m_size) || + pfn = GET_FIELD(ctx->guest_width, &ctxt, user_regs.edx); + if ( (pfn >= ctx->p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) { ERROR("Suspend record frame number is bad"); goto out; } - mfn = p2m[pfn]; - SET_FIELD(guest_width, &ctxt, user_regs.edx, mfn); + mfn = ctx->p2m[pfn]; + SET_FIELD(ctx->guest_width, &ctxt, user_regs.edx, mfn); start_info = xc_map_foreign_range( xc_handle, dom, PAGE_SIZE, PROT_READ | PROT_WRITE, mfn); - SET_FIELD(guest_width, start_info, nr_pages, p2m_size); - SET_FIELD(guest_width, start_info, shared_info, shared_info_frame<<PAGE_SHIFT); - SET_FIELD(guest_width, start_info, flags, 0); - *store_mfn = p2m[GET_FIELD(guest_width, start_info, store_mfn)]; - SET_FIELD(guest_width, start_info, store_mfn, *store_mfn); - SET_FIELD(guest_width, start_info, store_evtchn, store_evtchn); - *console_mfn = p2m[GET_FIELD(guest_width, start_info, console.domU.mfn)]; - SET_FIELD(guest_width, start_info, console.domU.mfn, *console_mfn); - SET_FIELD(guest_width, start_info, console.domU.evtchn, console_evtchn); + SET_FIELD(ctx->guest_width, start_info, nr_pages, ctx->p2m_size); + SET_FIELD(ctx->guest_width, start_info, shared_info, shared_info_frame<<PAGE_SHIFT); + SET_FIELD(ctx->guest_width, start_info, flags, 0); + *store_mfn = ctx->p2m[GET_FIELD(ctx->guest_width, start_info, store_mfn)]; + SET_FIELD(ctx->guest_width, start_info, store_mfn, *store_mfn); + SET_FIELD(ctx->guest_width, start_info, store_evtchn, store_evtchn); + *console_mfn = ctx->p2m[GET_FIELD(ctx->guest_width, start_info, console.domU.mfn)]; + SET_FIELD(ctx->guest_width, start_info, console.domU.mfn, *console_mfn); + SET_FIELD(ctx->guest_width, start_info, console.domU.evtchn, console_evtchn); munmap(start_info, PAGE_SIZE); } /* Uncanonicalise each GDT frame number. */ - if ( GET_FIELD(guest_width, &ctxt, gdt_ents) > 8192 ) + if ( GET_FIELD(ctx->guest_width, &ctxt, gdt_ents) > 8192 ) { ERROR("GDT entry count out of range"); goto out; } - for ( j = 0; (512*j) < GET_FIELD(guest_width, &ctxt, gdt_ents); j++ ) + for ( j = 0; (512*j) < GET_FIELD(ctx->guest_width, &ctxt, gdt_ents); j++ ) { - pfn = GET_FIELD(guest_width, &ctxt, gdt_frames[j]); - if ( (pfn >= p2m_size) || + pfn = GET_FIELD(ctx->guest_width, &ctxt, gdt_frames[j]); + if ( (pfn >= ctx->p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) { ERROR("GDT frame number %i (0x%lx) is bad", j, (unsigned long)pfn); goto out; } - SET_FIELD(guest_width, &ctxt, gdt_frames[j], p2m[pfn]); + SET_FIELD(ctx->guest_width, &ctxt, gdt_frames[j], ctx->p2m[pfn]); } /* Uncanonicalise the page table base pointer. */ - pfn = UNFOLD_CR3(guest_width, GET_FIELD(guest_width, &ctxt, ctrlreg[3])); + pfn = UNFOLD_CR3(ctx->guest_width, GET_FIELD(ctx->guest_width, &ctxt, ctrlreg[3])); - if ( pfn >= p2m_size ) + if ( pfn >= ctx->p2m_size ) { ERROR("PT base is bad: pfn=%lu p2m_size=%lu type=%08lx", - pfn, p2m_size, pfn_type[pfn]); + pfn, ctx->p2m_size, pfn_type[pfn]); goto out; } if ( (pfn_type[pfn] & XEN_DOMCTL_PFINFO_LTABTYPE_MASK) !- ((unsigned long)pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT) ) + ((unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT) ) { ERROR("PT base is bad. 
pfn=%lu nr=%lu type=%08lx %08lx", - pfn, p2m_size, pfn_type[pfn], - (unsigned long)pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); + pfn, ctx->p2m_size, pfn_type[pfn], + (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); goto out; } - SET_FIELD(guest_width, &ctxt, ctrlreg[3], FOLD_CR3(guest_width, p2m[pfn])); + SET_FIELD(ctx->guest_width, &ctxt, ctrlreg[3], FOLD_CR3(ctx->guest_width, ctx->p2m[pfn])); /* Guest pagetable (x86/64) stored in otherwise-unused CR1. */ - if ( (pt_levels == 4) && (ctxt.x64.ctrlreg[1] & 1) ) + if ( (ctx->pt_levels == 4) && (ctxt.x64.ctrlreg[1] & 1) ) { - pfn = UNFOLD_CR3(guest_width, ctxt.x64.ctrlreg[1] & ~1); - if ( pfn >= p2m_size ) + pfn = UNFOLD_CR3(ctx->guest_width, ctxt.x64.ctrlreg[1] & ~1); + if ( pfn >= ctx->p2m_size ) { ERROR("User PT base is bad: pfn=%lu p2m_size=%lu", - pfn, p2m_size); + pfn, ctx->p2m_size); goto out; } if ( (pfn_type[pfn] & XEN_DOMCTL_PFINFO_LTABTYPE_MASK) !- ((unsigned long)pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT) ) + ((unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT) ) { ERROR("User PT base is bad. pfn=%lu nr=%lu type=%08lx %08lx", - pfn, p2m_size, pfn_type[pfn], - (unsigned long)pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); + pfn, ctx->p2m_size, pfn_type[pfn], + (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); goto out; } - ctxt.x64.ctrlreg[1] = FOLD_CR3(guest_width, p2m[pfn]); + ctxt.x64.ctrlreg[1] = FOLD_CR3(ctx->guest_width, ctx->p2m[pfn]); } domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = (domid_t)dom; @@ -1910,35 +1903,35 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, xc_handle, dom, PAGE_SIZE, PROT_WRITE, shared_info_frame); /* restore saved vcpu_info and arch specific info */ - MEMCPY_FIELD(guest_width, new_shared_info, old_shared_info, vcpu_info); - MEMCPY_FIELD(guest_width, new_shared_info, old_shared_info, arch); + MEMCPY_FIELD(ctx->guest_width, new_shared_info, old_shared_info, vcpu_info); + MEMCPY_FIELD(ctx->guest_width, new_shared_info, old_shared_info, arch); /* clear any pending events and the selector */ - MEMSET_ARRAY_FIELD(guest_width, new_shared_info, evtchn_pending, 0); + MEMSET_ARRAY_FIELD(ctx->guest_width, new_shared_info, evtchn_pending, 0); for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ ) - SET_FIELD(guest_width, new_shared_info, vcpu_info[i].evtchn_pending_sel, 0); + SET_FIELD(ctx->guest_width, new_shared_info, vcpu_info[i].evtchn_pending_sel, 0); /* mask event channels */ - MEMSET_ARRAY_FIELD(guest_width, new_shared_info, evtchn_mask, 0xff); + MEMSET_ARRAY_FIELD(ctx->guest_width, new_shared_info, evtchn_mask, 0xff); /* leave wallclock time. set by hypervisor */ munmap(new_shared_info, PAGE_SIZE); /* Uncanonicalise the pfn-to-mfn table frame-number list. 
*/ - for ( i = 0; i < P2M_FL_ENTRIES(p2m_size, guest_width); i++ ) + for ( i = 0; i < P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width); i++ ) { pfn = p2m_frame_list[i]; - if ( (pfn >= p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) + if ( (pfn >= ctx->p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) { ERROR("PFN-to-MFN frame number %i (%#lx) is bad", i, pfn); goto out; } - p2m_frame_list[i] = p2m[pfn]; + p2m_frame_list[i] = ctx->p2m[pfn]; } /* Copy the P2M we''ve constructed to the ''live'' P2M */ - if ( !(live_p2m = xc_map_foreign_batch(xc_handle, dom, PROT_WRITE, - p2m_frame_list, P2M_FL_ENTRIES(p2m_size, guest_width))) ) + if ( !(ctx->live_p2m = xc_map_foreign_batch(xc_handle, dom, PROT_WRITE, + p2m_frame_list, P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width))) ) { ERROR("Couldn''t map p2m table"); goto out; @@ -1946,15 +1939,15 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, /* If the domain we''re restoring has a different word size to ours, * we need to adjust the live_p2m assignment appropriately */ - if ( guest_width > sizeof (xen_pfn_t) ) - for ( i = p2m_size - 1; i >= 0; i-- ) - ((int64_t *)live_p2m)[i] = (long)p2m[i]; - else if ( guest_width < sizeof (xen_pfn_t) ) - for ( i = 0; i < p2m_size; i++ ) - ((uint32_t *)live_p2m)[i] = p2m[i]; + if ( ctx->guest_width > sizeof (xen_pfn_t) ) + for ( i = ctx->p2m_size - 1; i >= 0; i-- ) + ((int64_t *)ctx->live_p2m)[i] = (long)ctx->p2m[i]; + else if ( ctx->guest_width < sizeof (xen_pfn_t) ) + for ( i = 0; i < ctx->p2m_size; i++ ) + ((uint32_t *)ctx->live_p2m)[i] = ctx->p2m[i]; else - memcpy(live_p2m, p2m, p2m_size * sizeof(xen_pfn_t)); - munmap(live_p2m, P2M_FL_ENTRIES(p2m_size, guest_width) * PAGE_SIZE); + memcpy(ctx->live_p2m, ctx->p2m, ctx->p2m_size * sizeof(xen_pfn_t)); + munmap(ctx->live_p2m, P2M_FL_ENTRIES(ctx->p2m_size, ctx->guest_width) * PAGE_SIZE); DPRINTF("Domain ready to be built.\n"); rc = 0; @@ -2008,7 +2001,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, if ( (rc != 0) && (dom != 0) ) xc_domain_destroy(xc_handle, dom); free(mmu); - free(p2m); + free(ctx->p2m); free(pfn_type); tailbuf_free(&tailbuf); -- 1.6.5.2 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
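Condensed, patch 6/7 replaces a pile of file-scope statics with one named aggregate: the per-variable comments move inline with the fields, and the scattered initializers collapse into a single designated initializer. A trimmed sketch with a reduced field set and illustrative types:

    /* Before: one file-scope static per variable, e.g.
     *   static unsigned long p2m_size;
     *   static xen_pfn_t *p2m = NULL;
     */

    /* After: the same state gathered into a single context object. */
    struct restore_ctx {
        unsigned long  p2m_size;    /* number of entries in the P2M      */
        unsigned long *p2m;         /* table mapping each PFN to new MFN */
        unsigned int   guest_width; /* guest address size, in bytes      */
    };

    static struct restore_ctx _ctx = {
        .p2m = NULL,                /* mirrors the old explicit initializers */
    };
    static struct restore_ctx *ctx = &_ctx;

    /* Every former 'p2m[pfn]' becomes 'ctx->p2m[pfn]'. */
    static unsigned long lookup(unsigned long pfn)
    {
        return (ctx->p2m && pfn < ctx->p2m_size) ? ctx->p2m[pfn] : 0;
    }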
Vincent Hanquez
2009-Nov-13 23:43 UTC
[Xen-devel] [PATCH 7/7] pass restore context as an argument instead of a global context
pass restore context as an argument instead of a global context --- tools/libxc/xc_domain_restore.c | 70 ++++++++++++++++++++------------------ 1 files changed, 37 insertions(+), 33 deletions(-) diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c index 6430e91..70a50e9 100644 --- a/tools/libxc/xc_domain_restore.c +++ b/tools/libxc/xc_domain_restore.c @@ -44,14 +44,6 @@ struct restore_ctx { unsigned no_superpage_mem; /* If have enough continuous memory for super page allocation */ }; -struct restore_ctx _ctx = { - .live_p2m = NULL, - .p2m = NULL, - .no_superpage_mem = 0, -}; - -struct restore_ctx *ctx = &_ctx; - /* ** ** @@ -71,7 +63,7 @@ struct restore_ctx *ctx = &_ctx; #define SUPER_PAGE_TRACKING(pfn) ( (pfn) != INVALID_SUPER_PAGE ) #define SUPER_PAGE_DONE(pfn) ( SUPER_PAGE_START(pfn) ) -static int super_page_populated(unsigned long pfn) +static int super_page_populated(struct restore_ctx *ctx, unsigned long pfn) { int i; pfn &= ~(SUPERPAGE_NR_PFNS - 1); @@ -88,7 +80,7 @@ static int super_page_populated(unsigned long pfn) * some new allocated 4K pages */ static int break_super_page(int xc_handle, - uint32_t dom, + uint32_t dom, struct restore_ctx *ctx, xen_pfn_t next_pfn) { xen_pfn_t *page_array, start_pfn, mfn; @@ -202,6 +194,7 @@ out: */ static int allocate_mfn_list(int xc_handle, uint32_t dom, + struct restore_ctx *ctx, unsigned long nr_extents, xen_pfn_t *batch_buf, xen_pfn_t *next_pfn, @@ -228,7 +221,7 @@ static int allocate_mfn_list(int xc_handle, !SUPER_PAGE_DONE(sp_pfn)) { /* break previously allocated super page*/ - if ( break_super_page(xc_handle, dom, sp_pfn) != 0 ) + if ( break_super_page(xc_handle, dom, ctx, sp_pfn) != 0 ) { ERROR("Break previous super page fail!\n"); return 1; @@ -251,7 +244,7 @@ static int allocate_mfn_list(int xc_handle, goto normal_page; pfn = batch_buf[0] & ~XEN_DOMCTL_PFINFO_LTAB_MASK; - if ( super_page_populated(pfn) ) + if ( super_page_populated(ctx, pfn) ) goto normal_page; pfn &= ~(SUPERPAGE_NR_PFNS - 1); @@ -301,7 +294,7 @@ normal_page: return 0; } -static int allocate_physmem(int xc_handle, uint32_t dom, +static int allocate_physmem(int xc_handle, uint32_t dom, struct restore_ctx *ctx, unsigned long *region_pfn_type, int region_size, unsigned int hvm, xen_pfn_t *region_mfn, int superpages) { @@ -342,7 +335,7 @@ static int allocate_physmem(int xc_handle, uint32_t dom, if ( SUPER_PAGE_START(pfn) ) { /* Start of a 2M extent, populate previsous buf */ - if ( allocate_mfn_list(xc_handle, dom, + if ( allocate_mfn_list(xc_handle, dom, ctx, batch_buf_len, batch_buf, &required_pfn, superpages) != 0 ) { @@ -364,7 +357,7 @@ static int allocate_physmem(int xc_handle, uint32_t dom, else if ( SUPER_PAGE_TRACKING(required_pfn) ) { /* break of a 2M extent, populate previous buf */ - if ( allocate_mfn_list(xc_handle, dom, + if ( allocate_mfn_list(xc_handle, dom, ctx, batch_buf_len, batch_buf, &required_pfn, superpages) != 0 ) { @@ -405,7 +398,7 @@ static int allocate_physmem(int xc_handle, uint32_t dom, alloc_page: if ( batch_buf ) { - if ( allocate_mfn_list(xc_handle, dom, + if ( allocate_mfn_list(xc_handle, dom, ctx, batch_buf_len, batch_buf, &required_pfn, superpages) != 0 ) @@ -498,7 +491,7 @@ static ssize_t read_exact_timed(int fd, void* buf, size_t size) ** This function inverts that operation, replacing the pfn values with ** the (now known) appropriate mfn values. 
*/ -static int uncanonicalize_pagetable(int xc_handle, uint32_t dom, +static int uncanonicalize_pagetable(int xc_handle, uint32_t dom, struct restore_ctx *ctx, unsigned long type, void *page, int superpages) { int i, pte_last; @@ -524,7 +517,7 @@ static int uncanonicalize_pagetable(int xc_handle, uint32_t dom, if ( ctx->p2m[pfn] == INVALID_P2M_ENTRY ) { unsigned long force_pfn = superpages ? FORCE_SP_MASK : pfn; - if (allocate_mfn_list(xc_handle, dom, + if (allocate_mfn_list(xc_handle, dom, ctx, 1, &pfn, &force_pfn, superpages) != 0) return 0; } @@ -542,7 +535,7 @@ static int uncanonicalize_pagetable(int xc_handle, uint32_t dom, /* Load the p2m frame list, plus potential extended info chunk */ -static xen_pfn_t *load_p2m_frame_list( +static xen_pfn_t *load_p2m_frame_list(struct restore_ctx *ctx, int io_fd, int *pae_extended_cr3, int *ext_vcpucontext) { xen_pfn_t *p2m_frame_list; @@ -797,7 +790,8 @@ static int dump_qemu(uint32_t dom, struct tailbuf_hvm *buf) return 0; } -static int buffer_tail_hvm(struct tailbuf_hvm *buf, int fd, +static int buffer_tail_hvm(struct restore_ctx *ctx, + struct tailbuf_hvm *buf, int fd, unsigned int max_vcpu_id, uint64_t vcpumap, int ext_vcpucontext) { @@ -858,7 +852,8 @@ static int buffer_tail_hvm(struct tailbuf_hvm *buf, int fd, return -1; } -static int buffer_tail_pv(struct tailbuf_pv *buf, int fd, +static int buffer_tail_pv(struct restore_ctx *ctx, + struct tailbuf_pv *buf, int fd, unsigned int max_vcpu_id, uint64_t vcpumap, int ext_vcpucontext) { @@ -935,14 +930,15 @@ static int buffer_tail_pv(struct tailbuf_pv *buf, int fd, return -1; } -static int buffer_tail(tailbuf_t *buf, int fd, unsigned int max_vcpu_id, +static int buffer_tail(struct restore_ctx *ctx, + tailbuf_t *buf, int fd, unsigned int max_vcpu_id, uint64_t vcpumap, int ext_vcpucontext) { if ( buf->ishvm ) - return buffer_tail_hvm(&buf->u.hvm, fd, max_vcpu_id, vcpumap, + return buffer_tail_hvm(ctx, &buf->u.hvm, fd, max_vcpu_id, vcpumap, ext_vcpucontext); else - return buffer_tail_pv(&buf->u.pv, fd, max_vcpu_id, vcpumap, + return buffer_tail_pv(ctx, &buf->u.pv, fd, max_vcpu_id, vcpumap, ext_vcpucontext); } @@ -1147,8 +1143,8 @@ static int pagebuf_get(pagebuf_t* buf, int fd, int xch, uint32_t dom) return rc; } -static int apply_batch(int xc_handle, uint32_t dom, xen_pfn_t* region_mfn, - unsigned long* pfn_type, int pae_extended_cr3, +static int apply_batch(int xc_handle, uint32_t dom, struct restore_ctx *ctx, + xen_pfn_t* region_mfn, unsigned long* pfn_type, int pae_extended_cr3, unsigned int hvm, struct xc_mmu* mmu, pagebuf_t* pagebuf, int curbatch, int superpages) { @@ -1167,7 +1163,7 @@ static int apply_batch(int xc_handle, uint32_t dom, xen_pfn_t* region_mfn, if (j > MAX_BATCH_SIZE) j = MAX_BATCH_SIZE; - if (allocate_physmem(xc_handle, dom, &pagebuf->pfn_types[curbatch], + if (allocate_physmem(xc_handle, dom, ctx, &pagebuf->pfn_types[curbatch], j, hvm, region_mfn, superpages) != 0) { ERROR("allocate_physmem() failed\n"); @@ -1228,7 +1224,7 @@ static int apply_batch(int xc_handle, uint32_t dom, xen_pfn_t* region_mfn, pae_extended_cr3 || (pagetype != XEN_DOMCTL_PFINFO_L1TAB)) { - if (!uncanonicalize_pagetable(xc_handle, dom, + if (!uncanonicalize_pagetable(xc_handle, dom, ctx, pagetype, page, superpages)) { /* ** Failing to uncanonicalize a page table can be ok @@ -1335,6 +1331,14 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, tailbuf_t tailbuf, tmptail; void* vcpup; + /* restore context */ + struct restore_ctx _ctx = { + .live_p2m = NULL, + .p2m = NULL, + .no_superpage_mem = 
0, + }; + struct restore_ctx *ctx = &_ctx; + pagebuf_init(&pagebuf); memset(&tailbuf, 0, sizeof(tailbuf)); tailbuf.ishvm = hvm; @@ -1369,7 +1373,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, if ( !hvm ) { /* Load the p2m frame list, plus potential extended info chunk */ - p2m_frame_list = load_p2m_frame_list( + p2m_frame_list = load_p2m_frame_list(ctx, io_fd, &pae_extended_cr3, &ext_vcpucontext); if ( !p2m_frame_list ) goto out; @@ -1483,7 +1487,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, while ( curbatch < j ) { int brc; - brc = apply_batch(xc_handle, dom, region_mfn, pfn_type, + brc = apply_batch(xc_handle, dom, ctx, region_mfn, pfn_type, pae_extended_cr3, hvm, mmu, &pagebuf, curbatch, superpages); if ( brc < 0 ) goto out; @@ -1524,7 +1528,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, if ( !completed ) { int flags = 0; - if ( buffer_tail(&tailbuf, io_fd, max_vcpu_id, vcpumap, + if ( buffer_tail(ctx, &tailbuf, io_fd, max_vcpu_id, vcpumap, ext_vcpucontext) < 0 ) { ERROR ("error buffering image tail"); goto out; @@ -1544,7 +1548,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, } memset(&tmptail, 0, sizeof(tmptail)); tmptail.ishvm = hvm; - if ( buffer_tail(&tmptail, io_fd, max_vcpu_id, vcpumap, + if ( buffer_tail(ctx, &tmptail, io_fd, max_vcpu_id, vcpumap, ext_vcpucontext) < 0 ) { ERROR ("error buffering image tail, finishing"); goto finish; @@ -1647,7 +1651,7 @@ int xc_domain_restore(int xc_handle, int io_fd, uint32_t dom, for ( k = 0; k < j; k++ ) { if ( !uncanonicalize_pagetable( - xc_handle, dom, XEN_DOMCTL_PFINFO_L1TAB, + xc_handle, dom, ctx, XEN_DOMCTL_PFINFO_L1TAB, region_base + k*PAGE_SIZE, superpages) ) { ERROR("failed uncanonicalize pt!"); -- 1.6.5.2 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Keir Fraser
2009-Nov-14 08:25 UTC
Re: [Xen-devel] [PATCH 0/7][RFC] make xenguest save & restore functions reentrant
On 13/11/2009 23:43, "Vincent Hanquez" <vincent.hanquez@eu.citrix.com> wrote:

> The following patchset makes suspend and restore code reentrant by having an
> explicit context to store current variables across all the suspend/restore
> code.
>
> This work is necessary for being able to get rid of the fork of processes
> during save&restore, and provide a simpler interface for toolstack developers.

Rather than making the macros take extra arguments, can you make them refer
to ctx->foo instead (i.e., make it implicit that the structure containing
these ex-globals is called ctx)? It avoids having to change every caller, and
some callers already have macros nested three deep; adding guest_width/max_mfn
all over the place does not help readability.

Also, send as attachments next time. I have problems applying these patches
from inline email for some reason; some chunks don't apply.

 -- Keir
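Spelled out, Keir's alternative keeps the macro arity unchanged and has the macro body capture a variable named ctx from the enclosing scope. A hypothetical GET_FIELD in that style (not the code as committed in patch 1/7):

    /* Expands against whatever 'ctx' names at the call site; every user
     * must therefore have a suitably-typed 'ctx' in scope. */
    #define GET_FIELD(_p, _f) \
        ((ctx->guest_width == 8) ? ((_p)->x64._f) : ((_p)->x32._f))

Call sites stay as short as before the series, at the cost of a hidden dependency on that local name, which is exactly the obstacle Vincent runs into below for files where guest_width lives on the call stack instead.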
Vincent Hanquez
2009-Nov-15 10:08 UTC
Re: [Xen-devel] [PATCH 0/7][RFC] make xenguest save & restore functions reentrant
Keir Fraser wrote:

> Rather than making the macros take extra arguments, can you make them refer
> to ctx->foo instead (i.e., make it implicit that the structure containing
> these ex-globals is called ctx)? It avoids having to change every caller,
> and some callers already have macros nested three deep; adding
> guest_width/max_mfn all over the place does not help readability.

I agree this isn't pretty. Unfortunately, I tried the route of changing the
macro to read from ctx->, but the macro is also used in other files
(xc_core_x86.c and xc_resume.c) which invoke it with guest_width and/or
p2m_size on the call stack.

The only other solution I thought of would be to duplicate the values of the
ex-globals on the stack, like:

    ...
    int guest_width = ctx->guest_width;
    int p2m_size = ctx->p2m_size;
    ...

I decided against it, because it might look odd (the locals don't appear
used) and it also means I need to track every assignment to these variables.
If you prefer, I can change this patchset to do that.

There's also the option of carrying this patchset as-is and prettifying some
of those macro calls as if they were "expensive calls", just like my
patch 5/7 does.

> Also, send as attachments next time. I have problems applying these patches
> from inline email for some reason; some chunks don't apply.

Yep, OK.

 -- Vincent
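For files such as xc_resume.c, the rejected shadow-local workaround he sketches would look roughly like this (hypothetical caller; the locals exist only so a name-capturing macro resolves):

    struct restore_ctx { unsigned int guest_width; unsigned long p2m_size; };

    void some_caller(struct restore_ctx *restore)
    {
        /* Copies made purely to satisfy macros that expect these names;
         * any later write to them would silently diverge from the struct. */
        unsigned int  guest_width = restore->guest_width;
        unsigned long p2m_size    = restore->p2m_size;

        (void)guest_width;
        (void)p2m_size;
    }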
Keir Fraser
2009-Nov-15 10:20 UTC
Re: [Xen-devel] [PATCH 0/7][RFC] make xenguest save & restore functions reentrant
On 15/11/2009 10:08, "Vincent Hanquez" <Vincent.Hanquez@eu.citrix.com> wrote:

> I agree this isn't pretty. Unfortunately, I tried the route of changing the
> macro to read from ctx->, but the macro is also used in other files
> (xc_core_x86.c and xc_resume.c) which invoke it with guest_width and/or
> p2m_size on the call stack.
>
> The only other solution I thought of would be to duplicate the values of
> the ex-globals on the stack, like:
>     ...
>     int guest_width = ctx->guest_width;
>     int p2m_size = ctx->p2m_size;
>     ...
>
> I decided against it, because it might look odd (the locals don't appear
> used) and it also means I need to track every assignment to these variables.

Another option would be for all users of the macros to have a 'xenguest_ctx'
structure, or whatever you call it. So e.g., in xc_resume:

    struct xenguest_ctx _ctx, *ctx = &_ctx;
    ctx->guest_width = ...
    /* Leave unnecessary/meaningless fields for this caller uninitialised. */

What do you think? The ctx struct can't be that big; we can just ignore
fields that make no sense outside save/restore (i.e., kind of split it into
general-purpose and private/application-specific fields); and it does keep
the macro invocations cleaner.

 -- Keir
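One hypothetical shape for that shared structure, split the way Keir describes (general-purpose fields that every macro user fills in, save/restore-private fields that e.g. xc_resume simply leaves alone):

    struct xenguest_ctx {
        /* General-purpose: meaningful to every user of the macros. */
        unsigned int  guest_width;
        unsigned int  pt_levels;
        unsigned long p2m_size;

        /* Save/restore-private: other callers never touch these. */
        void *live_p2m;
        void *live_m2p;
    };

    void resume_style_caller(void)
    {
        /* Private fields deliberately left uninitialised, as suggested. */
        struct xenguest_ctx _ctx, *ctx = &_ctx;

        ctx->guest_width = sizeof(unsigned long);
        (void)ctx;
    }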
Vincent Hanquez
2009-Nov-16 11:39 UTC
Re: [Xen-devel] [PATCH 0/7][RFC] make xenguest save & restore functions reentrant
Keir Fraser wrote:

> Another option would be for all users of the macros to have a 'xenguest_ctx'
> structure, or whatever you call it. So e.g., in xc_resume:
>     struct xenguest_ctx _ctx, *ctx = &_ctx;
>     ctx->guest_width = ...
>     /* Leave unnecessary/meaningless fields for this caller uninitialised. */
>
> What do you think? The ctx struct can't be that big; we can just ignore
> fields that make no sense outside save/restore (i.e., kind of split it into
> general-purpose and private/application-specific fields); and it does keep
> the macro invocations cleaner.

I'll give that a try. I don't think it's going to make this series much
nicer, though.

 -- Vincent