The following dump-core patches change the dump format to ELF, add a
PFN-GMFN table, add HVM domain support, and add experimental IA64
support.

- ELF format
  A program header and a note section are adopted.

- HVM domain support
  To know the memory area to dump, a XENMEM_set_memory_map hypercall
  is added. The existing XENMEM_memory_map hypercall works only on the
  current domain, so a new one is needed, and the HVM domain builder
  tells Xen the domain's memory map.

- IA64 support
  The IA64 support is for review only. It doesn't work because
  Xen/IA64 doesn't support the memory map hypercall.

Subject: [PATCH 1/5] dump-core take 2: XENMEM_set_memory_map hypercall
Subject: [PATCH 2/5] dump-core take 2: libxc: xc_domain memmap functions
Subject: [PATCH 3/5] dump-core take 2: libxc: add xc_domain_translate_gpfn()
Subject: [PATCH 4/5] dump-core take 2: hvm builder: tell memory map
Subject: [PATCH 5/5] dump-core take 2: ELF format and PFN-GMFN table

--
yamahata

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Isaku Yamahata
2007-Jan-18 06:52 UTC
[Xen-devel] [PATCH 1/5] dump-core take 2: XENMEM_set_memory_map hypercall
# HG changeset patch
# User yamahata@valinux.co.jp
# Date 1168929639 -32400
# Node ID 280d35294b8968b262c37df4d01712e0af288451
# Parent  dd0989523d1700825a9feea3895811cec3c41bfa
Implement the XENMEM_set_memory_map hypercall, which dump-core needs
in order to know the area to dump.
PATCHNAME: xenmem_set_memory_map_xen_side

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>

diff -r dd0989523d17 -r 280d35294b89 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c	Wed Jan 17 16:42:34 2007 +0000
+++ b/xen/arch/x86/mm.c	Tue Jan 16 15:40:39 2007 +0900
@@ -3045,10 +3045,12 @@ long arch_memory_op(int op, XEN_GUEST_HA
     }
 
     case XENMEM_set_memory_map:
+    case XENMEM_get_memory_map:
     {
         struct xen_foreign_memory_map fmap;
         struct domain *d;
-        int rc;
+        XEN_GUEST_HANDLE(e820entry_t) buffer;
+        int rc = 0;
 
         if ( copy_from_guest(&fmap, arg, 1) )
             return -EFAULT;
@@ -3066,10 +3068,40 @@ long arch_memory_op(int op, XEN_GUEST_HA
         else if ( (d = find_domain_by_id(fmap.domid)) == NULL )
             return -ESRCH;
 
-        rc = copy_from_guest(&d->arch.e820[0], fmap.map.buffer,
-                             fmap.map.nr_entries) ? -EFAULT : 0;
-        d->arch.nr_e820 = fmap.map.nr_entries;
-
+        LOCK_BIGLOCK(d);
+        switch ( op )
+        {
+        case XENMEM_set_memory_map:
+            rc = copy_from_guest(&d->arch.e820[0], fmap.map.buffer,
+                                 fmap.map.nr_entries) ? -EFAULT : 0;
+            d->arch.nr_e820 = fmap.map.nr_entries;
+            break;
+
+        case XENMEM_get_memory_map:
+            /* Backwards compatibility. */
+            if ( d->arch.nr_e820 == 0 )
+            {
+                rc = -ENOSYS;
+                break;
+            }
+
+            buffer = guest_handle_cast(fmap.map.buffer, e820entry_t);
+            if ( fmap.map.nr_entries < d->arch.nr_e820 + 1 )
+            {
+                rc = -EINVAL;
+                break;
+            }
+
+            fmap.map.nr_entries = d->arch.nr_e820;
+            if ( copy_to_guest(buffer, &d->arch.e820[0],
+                               fmap.map.nr_entries) ||
+                 copy_to_guest(arg, &fmap, 1) )
+            {
+                rc = -EFAULT;
+                break;
+            }
+        }
+        UNLOCK_BIGLOCK(d);
         put_domain(d);
         return rc;
     }
@@ -3079,18 +3111,29 @@ long arch_memory_op(int op, XEN_GUEST_HA
         struct xen_memory_map map;
         struct domain *d = current->domain;
 
+        LOCK_BIGLOCK(d);
         /* Backwards compatibility. */
         if ( d->arch.nr_e820 == 0 )
+        {
+            UNLOCK_BIGLOCK(d);
             return -ENOSYS;
+        }
 
         if ( copy_from_guest(&map, arg, 1) )
+        {
+            UNLOCK_BIGLOCK(d);
             return -EFAULT;
+        }
 
         map.nr_entries = min(map.nr_entries, d->arch.nr_e820);
 
         if ( copy_to_guest(map.buffer, &d->arch.e820[0], map.nr_entries) ||
              copy_to_guest(arg, &map, 1) )
+        {
+            UNLOCK_BIGLOCK(d);
             return -EFAULT;
-
+        }
+
+        UNLOCK_BIGLOCK(d);
         return 0;
     }
 
diff -r dd0989523d17 -r 280d35294b89 xen/include/asm-x86/domain.h
--- a/xen/include/asm-x86/domain.h	Wed Jan 17 16:42:34 2007 +0000
+++ b/xen/include/asm-x86/domain.h	Tue Jan 16 15:40:39 2007 +0900
@@ -116,7 +116,8 @@ struct arch_domain
     unsigned long max_mapped_pfn;
 
     /* Pseudophysical e820 map (XENMEM_memory_map). */
-    struct e820entry e820[3];
+#define MAX_E820 5 /* xc_hvm_build.c sets up 5 e820 map entries */
+    struct e820entry e820[MAX_E820];
     unsigned int nr_e820;
 } __cacheline_aligned;
 
diff -r dd0989523d17 -r 280d35294b89 xen/include/public/memory.h
--- a/xen/include/public/memory.h	Wed Jan 17 16:42:34 2007 +0000
+++ b/xen/include/public/memory.h	Tue Jan 16 15:40:39 2007 +0900
@@ -263,6 +263,11 @@ typedef struct xen_foreign_memory_map xe
 typedef struct xen_foreign_memory_map xen_foreign_memory_map_t;
 DEFINE_XEN_GUEST_HANDLE(xen_foreign_memory_map_t);
 
+/*
+ * Get the pseudo-physical memory map of a domain.
+ */
+#define XENMEM_get_memory_map       14
+
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
 
 /*

--
yamahata
Isaku Yamahata
2007-Jan-18 06:52 UTC
[Xen-devel] [PATCH 2/5] dump-core take 2: libxc: xc_domain memmap functions
# HG changeset patch
# User yamahata@valinux.co.jp
# Date 1169100791 -32400
# Node ID c2db94de4afc030170609d7d9de6daf334b17182
# Parent  280d35294b8968b262c37df4d01712e0af288451
libxc: add xc_domain_set_memmap() and xc_domain_get_memmap(),
corresponding to XENMEM_set_memory_map and XENMEM_get_memory_map.
dump-core needs these functions.
PATCHNAME: libxc_memory_map

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>

diff -r 280d35294b89 -r c2db94de4afc tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Tue Jan 16 15:40:39 2007 +0900
+++ b/tools/libxc/xc_domain.c	Thu Jan 18 15:13:11 2007 +0900
@@ -351,10 +351,93 @@ int xc_domain_set_memmap_limit(int xc_ha
     unlock_pages(&e820, sizeof(e820));
     return rc;
 }
+
+int xc_domain_set_memmap(int xc_handle,
+                         uint32_t domid,
+                         void *buffer,
+                         unsigned int nr_entries)
+{
+    int rc;
+
+    struct xen_foreign_memory_map fmap = {
+        .domid = domid,
+        .map = { .nr_entries = nr_entries }
+    };
+
+    set_xen_guest_handle(fmap.map.buffer, buffer);
+
+    if ( lock_pages(&fmap, sizeof(fmap)) ||
+         lock_pages(buffer, nr_entries * sizeof(struct e820entry)) )
+    {
+        PERROR("Could not lock memory for Xen hypercall");
+        rc = -1;
+        goto out;
+    }
+
+    rc = xc_memory_op(xc_handle, XENMEM_set_memory_map, &fmap);
+
+ out:
+    unlock_pages(&fmap, sizeof(fmap));
+    unlock_pages(buffer, nr_entries * sizeof(struct e820entry));
+    return rc;
+}
+
+int xc_domain_get_memmap(int xc_handle,
+                         uint32_t domid,
+                         void *buffer,
+                         unsigned int *nr_entries)
+{
+    int rc;
+
+    struct xen_foreign_memory_map fmap = {
+        .domid = domid,
+        .map = { .nr_entries = *nr_entries }
+    };
+
+    set_xen_guest_handle(fmap.map.buffer, buffer);
+
+    if ( lock_pages(&fmap, sizeof(fmap)) ||
+         lock_pages(buffer, *nr_entries * sizeof(struct e820entry)) )
+    {
+        PERROR("Could not lock memory for Xen hypercall");
+        rc = -1;
+        goto out;
+    }
+
+    rc = xc_memory_op(xc_handle, XENMEM_get_memory_map, &fmap);
+
+ out:
+    unlock_pages(&fmap, sizeof(fmap));
+    unlock_pages(buffer, *nr_entries * sizeof(struct e820entry));
+
+    *nr_entries = fmap.map.nr_entries;
+    return rc;
+}
 #else
 int xc_domain_set_memmap_limit(int xc_handle,
                                uint32_t domid,
                                unsigned long map_limitkb)
+{
+    PERROR("Function not implemented");
+    errno = ENOSYS;
+    return -1;
+}
+
+int xc_domain_set_memmap(int xc_handle,
+                         uint32_t domid,
+                         void *buffer,
+                         unsigned int nr_entries)
+{
+    PERROR("Function not implemented");
+    errno = ENOSYS;
+    return -1;
+}
+
+int xc_domain_get_memmap(int xc_handle,
+                         uint32_t domid,
+                         void *buffer,
+                         unsigned int *nr_entries)
 {
     PERROR("Function not implemented");
     errno = ENOSYS;
diff -r 280d35294b89 -r c2db94de4afc tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Tue Jan 16 15:40:39 2007 +0900
+++ b/tools/libxc/xenctrl.h	Thu Jan 18 15:13:11 2007 +0900
@@ -423,6 +423,16 @@ int xc_domain_set_memmap_limit(int xc_ha
                                uint32_t domid,
                                unsigned long map_limitkb);
 
+int xc_domain_set_memmap(int xc_handle,
+                         uint32_t domid,
+                         void *buffer,
+                         unsigned int nr_entries);
+
+int xc_domain_get_memmap(int xc_handle,
+                         uint32_t domid,
+                         void *buffer,
+                         unsigned int *nr_entries);
+
 int xc_domain_set_time_offset(int xc_handle,
                               uint32_t domid,
                               int32_t time_offset_seconds);

--
yamahata
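Because `xc_domain_get_memmap()` reports a too-small buffer via `EINVAL`, the intended caller-side pattern is to guess a capacity and double it until the call succeeds (patch 5/5's `memory_map_get()` does this). A stand-alone sketch of that loop, with a hypothetical `fake_get_memmap()` standing in for the real libxc call (which needs a hypervisor):

```c
#include <errno.h>
#include <stdlib.h>

#define NEEDED 11u   /* pretend the domain has 11 e820 entries */

/* Stand-in for xc_domain_get_memmap(): fails with EINVAL until the
 * supplied capacity reaches NEEDED, then reports the real count. */
static int fake_get_memmap(void *buffer, unsigned int *nr_entries)
{
    (void)buffer;
    if (*nr_entries < NEEDED) {
        errno = EINVAL;
        return -1;
    }
    *nr_entries = NEEDED;
    return 0;
}

/* Doubling retry loop: start with a small guess, grow on EINVAL. */
static int get_map_with_retry(unsigned int *final_entries)
{
    unsigned int nr = 5;    /* initial guess, as in the dump-core code */

    for (;;) {
        void *buf = malloc((size_t)nr * 20 /* stand-in entry size */);
        unsigned int n = nr;
        int rc, saved;

        if (buf == NULL)
            return -1;
        rc = fake_get_memmap(buf, &n);
        saved = errno;
        free(buf);
        if (rc == 0) {
            *final_entries = n;
            return 0;
        }
        if (saved != EINVAL)
            return -1;       /* real error: give up */
        nr *= 2;             /* buffer too small: double and retry */
    }
}
```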
Isaku Yamahata
2007-Jan-18 06:52 UTC
[Xen-devel] [PATCH 3/5] dump-core take 2: libxc: add xc_domain_translate_gpfn()
# HG changeset patch
# User yamahata@valinux.co.jp
# Date 1169088584 -32400
# Node ID 9d5b9b6ff32744c912c44cfb9944646224923628
# Parent  c2db94de4afc030170609d7d9de6daf334b17182
libxc: add xc_domain_translate_gpfn() for XENMEM_translate_gpfn_list,
which is used by dump-core IA64 support.
PATCHNAME: dump_core_xc_domain_translate_gpfn

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>

diff -r c2db94de4afc -r 9d5b9b6ff327 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Thu Jan 18 15:13:11 2007 +0900
+++ b/tools/libxc/xc_domain.c	Thu Jan 18 11:49:44 2007 +0900
@@ -556,6 +556,30 @@ int xc_domain_memory_populate_physmap(in
         err = -1;
     }
 
+    return err;
+}
+
+int xc_domain_translate_gpfn(int xc_handle,
+                             uint32_t domid,
+                             unsigned long nr_gpfns,
+                             xen_pfn_t *gpfn_list,
+                             xen_pfn_t *mfn_list)
+{
+    int err;
+    struct xen_translate_gpfn_list translate = {
+        .domid = domid,
+        .nr_gpfns = nr_gpfns
+    };
+    set_xen_guest_handle(translate.gpfn_list, gpfn_list);
+    set_xen_guest_handle(translate.mfn_list, mfn_list);
+    err = xc_memory_op(xc_handle, XENMEM_translate_gpfn_list, &translate);
+    if ( err )
+    {
+        DPRINTF("Failed to translate for dom %d: %ld gpfns\n",
+                domid, nr_gpfns);
+        errno = -err;
+    }
+
     return err;
 }
 
diff -r c2db94de4afc -r 9d5b9b6ff327 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Thu Jan 18 15:13:11 2007 +0900
+++ b/tools/libxc/xc_private.c	Thu Jan 18 11:49:44 2007 +0900
@@ -210,6 +210,7 @@ int xc_memory_op(int xc_handle,
     DECLARE_HYPERCALL;
     struct xen_memory_reservation *reservation = arg;
     struct xen_machphys_mfn_list *xmml = arg;
+    struct xen_translate_gpfn_list *translate = arg;
     xen_pfn_t *extent_start;
     long ret = -EINVAL;
 
@@ -256,6 +257,32 @@ int xc_memory_op(int xc_handle,
         if ( lock_pages(arg, sizeof(struct xen_add_to_physmap)) )
         {
             PERROR("Could not lock");
+            goto out1;
+        }
+        break;
+    case XENMEM_translate_gpfn_list:
+        if ( lock_pages(translate, sizeof(*translate)) != 0 )
+        {
+            PERROR("Could not lock");
+            goto out1;
+        }
+        get_xen_guest_handle(extent_start, translate->gpfn_list);
+        if ( lock_pages(extent_start,
+                        translate->nr_gpfns * sizeof(xen_pfn_t)) )
+        {
+            PERROR("Could not lock");
+            unlock_pages(translate, sizeof(*translate));
+            goto out1;
+        }
+        get_xen_guest_handle(extent_start, translate->mfn_list);
+        if ( lock_pages(extent_start,
+                        translate->nr_gpfns * sizeof(xen_pfn_t)) )
+        {
+            PERROR("Could not lock");
+            unlock_pages(translate, sizeof(*translate));
+            get_xen_guest_handle(extent_start, translate->gpfn_list);
+            unlock_pages(extent_start,
+                         translate->nr_gpfns * sizeof(xen_pfn_t));
             goto out1;
         }
         break;
@@ -282,6 +309,13 @@ int xc_memory_op(int xc_handle,
         break;
     case XENMEM_add_to_physmap:
         unlock_pages(arg, sizeof(struct xen_add_to_physmap));
+        break;
+    case XENMEM_translate_gpfn_list:
+        unlock_pages(translate, sizeof(*translate));
+        get_xen_guest_handle(extent_start, translate->gpfn_list);
+        unlock_pages(extent_start, translate->nr_gpfns * sizeof(xen_pfn_t));
+        get_xen_guest_handle(extent_start, translate->mfn_list);
+        unlock_pages(extent_start, translate->nr_gpfns * sizeof(xen_pfn_t));
         break;
     }
 
diff -r c2db94de4afc -r 9d5b9b6ff327 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Thu Jan 18 15:13:11 2007 +0900
+++ b/tools/libxc/xenctrl.h	Thu Jan 18 11:49:44 2007 +0900
@@ -457,6 +457,12 @@ int xc_domain_memory_populate_physmap(in
                                       unsigned int address_bits,
                                       xen_pfn_t *extent_start);
 
+int xc_domain_translate_gpfn(int xc_handle,
+                             uint32_t domid,
+                             unsigned long nr_gpfns,
+                             xen_pfn_t *gpfn_list,
+                             xen_pfn_t *mfn_list);
+
 int xc_domain_ioport_permission(int xc_handle,
                                 uint32_t domid,
                                 uint32_t first_port,

--
yamahata
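The dump-core code in patch 5/5 calls `xc_domain_translate_gpfn()` with the same array as both `gpfn_list` and `mfn_list`, translating in place. A stand-alone sketch of that contract against a toy p2m table (hypothetical `fake_translate`, not the real hypercall, which resolves GPFNs in the hypervisor); `INVALID_MFN` marks a GPFN with no backing machine frame:

```c
#include <stdint.h>

typedef uint64_t xen_pfn_t;
#define INVALID_MFN (~(xen_pfn_t)0)

/* Translate nr GPFNs through a toy p2m array; out-of-range GPFNs map
 * to INVALID_MFN.  Works when gpfn_list and mfn_list alias, because
 * each slot is read before it is overwritten. */
static void fake_translate(const xen_pfn_t *p2m, unsigned long p2m_size,
                           unsigned long nr, const xen_pfn_t *gpfn_list,
                           xen_pfn_t *mfn_list)
{
    unsigned long i;
    for (i = 0; i < nr; i++)
        mfn_list[i] = (gpfn_list[i] < p2m_size) ? p2m[gpfn_list[i]]
                                                : INVALID_MFN;
}
```

Entries that come back as `INVALID_MFN` are simply skipped by the dumper, which is why this interface copes with ballooned-out pages where `xc_get_pfn_list()` could not.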
Isaku Yamahata
2007-Jan-18 06:53 UTC
[Xen-devel] [PATCH 4/5] dump-core take 2: hvm builder: tell memory map
# HG changeset patch
# User yamahata@valinux.co.jp
# Date 1168929749 -32400
# Node ID dae81535b77157d2bc3c3547088f0ef512c3b5d2
# Parent  9d5b9b6ff32744c912c44cfb9944646224923628
x86 hvm domain builder: tell Xen the memory map so that dump-core
knows the area to dump.
PATCHNAME: x86_hvm_domain_builder_tell_memory_map

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>

diff -r 9d5b9b6ff327 -r dae81535b771 tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c	Thu Jan 18 11:49:44 2007 +0900
+++ b/tools/libxc/xc_hvm_build.c	Tue Jan 16 15:42:29 2007 +0900
@@ -66,12 +66,14 @@ int xc_get_hvm_param(
     return rc;
 }
 
-static void build_e820map(void *e820_page, unsigned long long mem_size)
+static int build_e820map(int xc_handle, uint32_t domid,
+                         void *e820_page, unsigned long long mem_size)
 {
     struct e820entry *e820entry =
         (struct e820entry *)(((unsigned char *)e820_page) + E820_MAP_OFFSET);
     unsigned long long extra_mem_size = 0;
     unsigned char nr_map = 0;
+    struct e820entry *tmp;
 
     /*
      * Physical address space from HVM_BELOW_4G_RAM_END to 4G is reserved
@@ -142,6 +144,17 @@ static void build_e820map(void *e820_pag
     }
 
     *(((unsigned char *)e820_page) + E820_MAP_NR_OFFSET) = nr_map;
+
+    tmp = malloc(nr_map * sizeof(struct e820entry));
+    if ( tmp == NULL )
+    {
+        PERROR("Could not allocate memory.\n");
+        return -1;
+    }
+    memcpy(tmp, &e820entry[0], nr_map * sizeof(e820entry[0]));
+    xc_domain_set_memmap(xc_handle, domid, tmp, nr_map);
+    free(tmp);
+    return 0;
 }
 
 static int setup_guest(int xc_handle,
@@ -219,8 +232,10 @@ static int setup_guest(int xc_handle,
                               E820_MAP_PAGE >> PAGE_SHIFT)) == NULL )
         goto error_out;
     memset(e820_page, 0, PAGE_SIZE);
-    build_e820map(e820_page, v_end);
+    rc = build_e820map(xc_handle, dom, e820_page, v_end);
     munmap(e820_page, PAGE_SIZE);
+    if ( rc != 0 )
+        goto error_out;
 
     /* Map and initialise shared_info page. */
     xatp.domid = dom;

--
yamahata
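The builder's e820 map is what the hypercall from patch 1/5 later hands back to dump-core. A much-simplified, hypothetical sketch of the shape `build_e820map()` produces for guest RAM: one RAM entry below the `HVM_BELOW_4G_RAM_END` boundary (the value here follows the Xen public headers of this era and is an assumption of the sketch), with any remainder relocated above 4G:

```c
#include <stdint.h>

#define HVM_BELOW_4G_RAM_END 0xF0000000ULL  /* assumed boundary value */
#define E820_RAM 1

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Schematic only: the real build_e820map() also emits reserved and
 * I/O entries.  Writes at most 2 RAM entries; returns how many. */
static unsigned int sketch_e820map(struct e820entry *map, uint64_t mem_size)
{
    unsigned int nr = 0;
    uint64_t low = mem_size;

    if (low > HVM_BELOW_4G_RAM_END)
        low = HVM_BELOW_4G_RAM_END;

    map[nr].addr = 0;                   /* low RAM, below the split */
    map[nr].size = low;
    map[nr].type = E820_RAM;
    nr++;

    if (mem_size > low) {
        map[nr].addr = 1ULL << 32;      /* remainder above 4G */
        map[nr].size = mem_size - low;
        map[nr].type = E820_RAM;
        nr++;
    }
    return nr;
}
```

This is also why patch 1/5 sizes `d->arch.e820[]` with `MAX_E820 == 5`: the real builder writes a handful of entries, not just the RAM ones sketched here.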
Isaku Yamahata
2007-Jan-18 06:53 UTC
[Xen-devel] [PATCH 5/5] dump-core take 2: ELF format and PFN-GMFN table
# HG changeset patch # User yamahata@valinux.co.jp # Date 1169101101 -32400 # Node ID 7da70af62b577478389d36069c824c4f2180f95e # Parent dae81535b77157d2bc3c3547088f0ef512c3b5d2 Use the guest''s own p2m table instead of xc_get_pfn_list(), which cannot handle PFNs with no MFN. Dump a zeroed page for PFNs with no MFN. Clearly deprecate xc_get_pfn_list(). Do not include a P2M table with HVM domains. Refuse to dump HVM until we can map its pages with PFNs. Signed-off-by: John Levon <john.levon@sun.com> ELF formatified. added PFN-GMFN table. HVM domain support. experimental IA64 support. NOTE: IA64 support is for only review. It doesn''t work because Xen/IA64 doesn''t support memory map hypercall. TODO: Xen/IA64 memory map hypercall. PATCHNAME: xm_dump_core_elf Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> diff -r dae81535b771 -r 7da70af62b57 tools/libxc/xc_core.c --- a/tools/libxc/xc_core.c Tue Jan 16 15:42:29 2007 +0900 +++ b/tools/libxc/xc_core.c Thu Jan 18 15:18:21 2007 +0900 @@ -1,10 +1,18 @@ +/* + * Elf format, (pfn, gmfn) table, IA64 support. + * Copyright (c) 2007 Isaku Yamahata <yamahata at valinux co jp> + * VA Linux Systems Japan K.K. 
+ * + */ + #include "xg_private.h" +#include "xc_elf.h" +#include "xc_core.h" #include <stdlib.h> #include <unistd.h> /* number of pages to write at a time */ #define DUMP_INCREMENT (4 * 1024) -#define round_pgup(_p) (((_p)+(PAGE_SIZE-1))&PAGE_MASK) static int copy_from_domain_page(int xc_handle, @@ -21,107 +29,718 @@ copy_from_domain_page(int xc_handle, return 0; } +#if defined(__i386__) || defined(__x86_64__) +#define ELF_ARCH_DATA ELFDATA2LSB +#if defined (__i386__) +# define ELF_ARCH_MACHINE EM_386 +#else +# define ELF_ARCH_MACHINE EM_X86_64 +#endif + +static int +map_p2m(int xc_handle, xc_dominfo_t *info, xen_pfn_t **live_p2m, + unsigned long *pfnp) +{ + /* Double and single indirect references to the live P2M table */ + xen_pfn_t *live_p2m_frame_list_list = NULL; + xen_pfn_t *live_p2m_frame_list = NULL; + shared_info_t *live_shinfo = NULL; + uint32_t dom = info->domid; + unsigned long max_pfn = 0; + int ret = -1; + int err; + + /* Map the shared info frame */ + live_shinfo = xc_map_foreign_range(xc_handle, dom, PAGE_SIZE, + PROT_READ, info->shared_info_frame); + + if ( !live_shinfo ) + { + PERROR("Couldn''t map live_shinfo"); + goto out; + } + + max_pfn = live_shinfo->arch.max_pfn; + + if ( max_pfn < info->nr_pages ) + { + ERROR("max_pfn < nr_pages -1 (%lx < %lx", max_pfn, info->nr_pages - 1); + goto out; + } + + live_p2m_frame_list_list + xc_map_foreign_range(xc_handle, dom, PAGE_SIZE, PROT_READ, + live_shinfo->arch.pfn_to_mfn_frame_list_list); + + if ( !live_p2m_frame_list_list ) + { + PERROR("Couldn''t map p2m_frame_list_list (errno %d)", errno); + goto out; + } + + live_p2m_frame_list + xc_map_foreign_batch(xc_handle, dom, PROT_READ, + live_p2m_frame_list_list, + P2M_FLL_ENTRIES); + + if ( !live_p2m_frame_list ) + { + PERROR("Couldn''t map p2m_frame_list"); + goto out; + } + + *live_p2m = xc_map_foreign_batch(xc_handle, dom, PROT_READ, + live_p2m_frame_list, + P2M_FL_ENTRIES); + + if ( !live_p2m ) + { + PERROR("Couldn''t map p2m table"); + goto out; + } + 
+ *pfnp = max_pfn; + + + ret = 0; + +out: + err = errno; + + if ( live_shinfo ) + munmap(live_shinfo, PAGE_SIZE); + + if ( live_p2m_frame_list_list ) + munmap(live_p2m_frame_list_list, PAGE_SIZE); + + if ( live_p2m_frame_list ) + munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE); + + errno = err; + return ret; +} + +#include <xen/hvm/e820.h> +typedef struct e820entry memory_map_entry_t; + +static inline int +memory_map_may_dump(const memory_map_entry_t *entry) +{ + return entry->type == E820_RAM && entry->size > 0; +} + +static inline uint64_t +memory_map_addr(const memory_map_entry_t *entry) +{ + return entry->addr; +} + +static inline uint64_t +memory_map_size(const memory_map_entry_t *entry) +{ + return entry->size; +} + +#elif defined (__ia64__) +#define ELF_ARCH_DATA ELFDATA2LSB +#define ELF_ARCH_MACHINE EM_IA64 + +static int +map_p2m(int xc_handle, xc_dominfo_t *info, xen_pfn_t **live_p2m, + unsigned long *pfnp) +{ + errno = ENOSYS; + reutrn -1; +} + +#include "xc_efi.h" +typedef efi_memory_desc_t memory_map_entry_t; + +static inline int +memory_map_may_dump(const memory_map_entry_t *md) +{ + switch ( md->type ) + { + case EFI_RESERVED_TYPE: + case EFI_LOADER_CODE: + case EFI_LOADER_DATA: + case EFI_BOOT_SERVICES_CODE: + case EFI_BOOT_SERVICES_DATA: + case EFI_RUNTIME_SERVICES_CODE: + case EFI_RUNTIME_SERVICES_DATA: + case EFI_CONVENTIONAL_MEMORY: + case EFI_ACPI_RECLAIM_MEMORY: + case EFI_ACPI_MEMORY_NVS: + case EFI_PAL_CODE: + if ( !(md->attribute & EFI_MEMORY_WB) ) + return 0; + return 1; + + case EFI_MEMORY_MAPPED_IO: + case EFI_MEMORY_MAPPED_IO_PORT_SPACE: + case EFI_UNUSABLE_MEMORY: + return 0; + + default: + break; + } + return 0; +} + +static inline uint64_t +memory_map_addr(const memory_map_entry_t *md) +{ + return md->phys_addr; +} + +static inline uint64_t +memory_map_size(const memory_map_entry_t *md) +{ + return md->num_pages << EFI_PAGE_SHIFT; +} + +#else +# error "unsupported architecture" +#endif + +#ifndef ELF_CORE_EFLAGS +#define 
ELF_CORE_EFLAGS 0 +#endif + +#ifndef INVLAID_MFN +#define INVALID_MFN (~0UL) +#endif + +static int +memory_map_get(int xc_handle, uint32_t domid, + memory_map_entry_t **entries, unsigned int *nr_entries) +{ + memory_map_entry_t *map; + int ret; + + *nr_entries = 5; /* xc_hvm_builder allocates 5 entries */ +again: + ret = -1; + map = malloc(*nr_entries * sizeof(map[0])); + if ( map == NULL ) + { + PERROR("Couldn''t allocate e820 entry: nr_entries = %d", *nr_entries); + return ret; + } + + ret = xc_domain_get_memmap(xc_handle, domid, map, nr_entries); + if ( ret != 0 ) + { + if ( errno == EINVAL ) + { + *nr_entries *= 2; + free(map); + goto again; + } + } + if ( ret == 0 ) + *entries = map; + return ret; +} + +static int +get_phdr(Elf_Phdr **phdr, unsigned int *max_phdr, unsigned int *nr_phdr) +{ + Elf_Phdr *tmp; + + (*nr_phdr)++; + if ( *nr_phdr < *max_phdr ) + return 0; + +#define PHDR_INC 4096 + if ( *max_phdr < PHDR_INC ) + *max_phdr *= 2; + else + *max_phdr += PHDR_INC; + + tmp = realloc(*phdr, *max_phdr * sizeof(Elf_Phdr)); + if ( tmp == NULL ) + return -1; + *phdr = tmp; + return 0; +} + +static void +set_phdr(Elf_Phdr *phdr, unsigned long offset, uint64_t addr, uint64_t size) +{ + memset(phdr, 0, sizeof(*phdr)); + phdr->p_type = PT_LOAD; + phdr->p_flags = PF_X | PF_W | PF_R; + phdr->p_offset = offset; + phdr->p_vaddr = 0; + phdr->p_paddr = addr; + phdr->p_filesz = size; + phdr->p_memsz = size; + phdr->p_align = 0; +} + int xc_domain_dumpcore_via_callback(int xc_handle, uint32_t domid, void *args, dumpcore_rtn_t dump_rtn) { - unsigned long nr_pages; - xen_pfn_t *page_array = NULL; xc_dominfo_t info; - int i, nr_vcpus = 0; + int nr_vcpus = 0; char *dump_mem, *dump_mem_start = NULL; - struct xc_core_header header; vcpu_guest_context_t ctxt[MAX_VIRT_CPUS]; char dummy[PAGE_SIZE]; int dummy_len; - int sts; + int sts = -1; + + unsigned long i; + unsigned long j; + unsigned long nr_pages; + + memory_map_entry_t *memory_map = NULL; + unsigned int nr_memory_map; + 
unsigned int map_idx; + xen_pfn_t pfn; + + int need_p2m_table; /* !XENFEAT_auto_translated_physmap */ + xen_pfn_t *p2m = NULL; + unsigned long max_pfn = 0; + struct p2m *p2m_array = NULL; + + int may_balloon; + unsigned long nr_pfn_array = 0; + xen_pfn_t *pfn_array = NULL; + + Elf_Ehdr ehdr; + unsigned long filesz; + unsigned long offset; + unsigned long fixup; +#define INIT_PHDR 32 + unsigned int max_phdr; + unsigned int nr_phdr; + Elf_Phdr *phdr; + struct xen_note note; + struct xen_core_header_desc core_header; if ( (dump_mem_start = malloc(DUMP_INCREMENT*PAGE_SIZE)) == NULL ) { PERROR("Could not allocate dump_mem"); - goto error_out; + goto out; } if ( xc_domain_getinfo(xc_handle, domid, 1, &info) != 1 ) { PERROR("Could not get info for domain"); - goto error_out; - } + goto out; + } + +#if defined(__i386__) || defined(__x86_64__) + need_p2m_table = 1; + may_balloon = 1; + if ( info.hvm ) + { + need_p2m_table = 0; + may_balloon = 0; + } +#elif defined (__ia64__) + need_p2m_table = 0; + may_balloon = 1; + if ( info.hvm ) + may_balloon = 0; +#else +# error "unsupported archtecture" +#endif if ( domid != info.domid ) { PERROR("Domain %d does not exist", domid); - goto error_out; + goto out; } for ( i = 0; i <= info.max_vcpu_id; i++ ) - if ( xc_vcpu_getcontext(xc_handle, domid, i, &ctxt[nr_vcpus]) == 0) + if ( xc_vcpu_getcontext(xc_handle, domid, i, &ctxt[nr_vcpus]) == 0 ) nr_vcpus++; - + if ( nr_vcpus == 0 ) + { + PERROR("No VCPU context could be grabbed"); + goto out; + } + + /* obtain memory map */ + sts = memory_map_get(xc_handle, domid, &memory_map, &nr_memory_map); + if ( sts != 0 ) + goto out; +#if 0 + for ( map_idx = 0; map_idx < nr_memory_map; map_idx++ ) + DPRINTF("%d: addr %llx size %llx\n", map_idx, + memory_map_addr(&memory_map[map_idx]), + memory_map_size(&memory_map[map_idx])); +#endif + nr_pages = info.nr_pages; - - header.xch_magic = info.hvm ? 
XC_CORE_MAGIC_HVM : XC_CORE_MAGIC; - header.xch_nr_vcpus = nr_vcpus; - header.xch_nr_pages = nr_pages; - header.xch_ctxt_offset = sizeof(struct xc_core_header); - header.xch_index_offset = sizeof(struct xc_core_header) + - sizeof(vcpu_guest_context_t)*nr_vcpus; - dummy_len = (sizeof(struct xc_core_header) + - (sizeof(vcpu_guest_context_t) * nr_vcpus) + - (nr_pages * sizeof(xen_pfn_t))); - header.xch_pages_offset = round_pgup(dummy_len); - - sts = dump_rtn(args, (char *)&header, sizeof(struct xc_core_header)); + if ( need_p2m_table ) + { + /* obtain p2m table */ + p2m_array = malloc(nr_pages * sizeof(struct p2m)); + if ( p2m_array == NULL ) + { + PERROR("Could not allocate p2m array"); + goto out; + } + + sts = map_p2m(xc_handle, &info, &p2m, &max_pfn); + if ( sts != 0 ) + goto out; + } + else + { + unsigned long total_pages = 0; + unsigned long pages; + + max_pfn = 0; + for ( map_idx = 0; map_idx < nr_memory_map; map_idx++ ) + { + + if ( !memory_map_may_dump(&memory_map[map_idx]) ) + continue; + + pages = memory_map_size(&memory_map[map_idx]) >> PAGE_SHIFT; + pfn = (memory_map_addr(&memory_map[map_idx]) >> PAGE_SHIFT) + + pages; + if ( max_pfn < pfn ) + max_pfn = pfn; + total_pages += pages; + } + + if ( may_balloon ) + { + pfn_array = malloc(total_pages * sizeof(pfn_array[0])); + if ( pfn_array == NULL ) + { + PERROR("Could not allocate pfn array"); + goto out; + } + nr_pfn_array = total_pages; + + total_pages = 0; + for ( map_idx = 0; map_idx < nr_memory_map; map_idx++ ) + { + if ( !memory_map_may_dump(&memory_map[map_idx]) ) + continue; + + pages = memory_map_size(&memory_map[map_idx]) >> PAGE_SHIFT; + pfn = memory_map_addr(&memory_map[map_idx]) >> PAGE_SHIFT; + for ( i = 0; i < pages; i++ ) + pfn_array[total_pages + i] = pfn + i; + total_pages += pages; + } + + sts = xc_domain_translate_gpfn(xc_handle, domid, total_pages, + pfn_array, pfn_array); + if ( sts ) + goto out; + } + else if ( nr_pages != total_pages ) + { + PERROR("nr_pages(%ld) != total_pages 
(%ld)", + nr_pages, total_pages); + } + } + + memset(&ehdr, 0, sizeof(ehdr)); + ehdr.e_ident[EI_MAG0] = ELFMAG0; + ehdr.e_ident[EI_MAG1] = ELFMAG1; + ehdr.e_ident[EI_MAG2] = ELFMAG2; + ehdr.e_ident[EI_MAG3] = ELFMAG3; + ehdr.e_ident[EI_CLASS] = ELFCLASS; + ehdr.e_ident[EI_DATA] = ELF_ARCH_DATA; + ehdr.e_ident[EI_VERSION] = EV_CURRENT; + ehdr.e_ident[EI_OSABI] = ELFOSABI_SYSV; + ehdr.e_ident[EI_ABIVERSION] = EV_CURRENT; + + ehdr.e_type = ET_CORE; + ehdr.e_machine = ELF_ARCH_MACHINE; + ehdr.e_version = EV_CURRENT; + ehdr.e_entry = 0; + ehdr.e_phoff = sizeof(ehdr); + ehdr.e_shoff = 0; + ehdr.e_flags = ELF_CORE_EFLAGS; + ehdr.e_ehsize = sizeof(ehdr); + ehdr.e_phentsize = sizeof(Elf_Phdr); + /* ehdr.e_phum isn''t know here yet. fill it later */ + ehdr.e_shentsize = 0; + ehdr.e_shnum = 0; + ehdr.e_shstrndx = 0; + + /* create program header */ + nr_phdr = 0; + max_phdr = INIT_PHDR; + phdr = malloc(max_phdr * sizeof(phdr[0])); + if ( phdr == NULL ) + { + PERROR("Could not allocate memory"); + goto out; + } + /* here the number of program header is unknown. fix up offset later. 
*/ + offset = sizeof(ehdr); + + /* note section */ + filesz = sizeof(struct xen_core_header) + /* core header */ + sizeof(struct xen_note) + sizeof(ctxt[0]) * nr_vcpus; /* vcpu context */ + if ( need_p2m_table ) + filesz += sizeof(struct xen_note_p2m) + sizeof(p2m_array[0]) * nr_pages; /* p2m table */ + + + memset(&phdr[nr_phdr], 0, sizeof(phdr[0])); + phdr[nr_phdr].p_type = PT_NOTE; + phdr[nr_phdr].p_flags = 0; + phdr[nr_phdr].p_offset = offset; + phdr[nr_phdr].p_vaddr = 0; + phdr[nr_phdr].p_paddr = 0; + phdr[nr_phdr].p_filesz = filesz; + phdr[nr_phdr].p_memsz = filesz; + phdr[nr_phdr].p_align = 0; + + offset += filesz; + +#define INVALID_PFN (~0UL) +#define GET_SET_PHDR(offset, addr, size) \ + do { \ + sts = get_phdr(&phdr, &max_phdr, &nr_phdr); \ + if ( sts ) \ + goto out; \ + set_phdr(&phdr[nr_phdr], (offset), (addr), (size)); \ + (offset) += (size); \ + } while (0) +#define SET_PHDR_IF_NECESSARY \ + do { \ + if ( last_pfn != INVALID_PFN && size > 0 ) \ + GET_SET_PHDR(offset, last_pfn << PAGE_SHIFT, size); \ + \ + last_pfn = INVALID_PFN; \ + size = 0; \ + } while (0) + + if ( need_p2m_table ) + { + xen_pfn_t last_pfn = INVALID_PFN; + uint64_t size = 0; + + j = 0; + for ( i = 0; i < max_pfn && j < nr_pages; i++ ) + { + if ( last_pfn + (size >> PAGE_SHIFT) != i ) + SET_PHDR_IF_NECESSARY; + + if ( p2m[i] == INVALID_P2M_ENTRY ) + continue; + + if ( last_pfn == INVALID_PFN ) + last_pfn = i; + size += PAGE_SIZE; + + p2m_array[j].pfn = i; + p2m_array[j].gmfn = p2m[i]; + j++; + } + SET_PHDR_IF_NECESSARY; + + if ( j != nr_pages ) + PERROR("j(%ld) != nr_pages (%ld)", j, nr_pages); + } + else if ( may_balloon ) + { + unsigned long total_pages = 0; + j = 0; + for ( map_idx = 0; map_idx < nr_memory_map; map_idx++ ) + { + unsigned long pages; + xen_pfn_t last_pfn; + uint64_t size; + + if ( !memory_map_may_dump(&memory_map[map_idx]) ) + continue; + + pages = memory_map_size(&memory_map[map_idx]) >> PAGE_SHIFT; + pfn = memory_map_addr(&memory_map[map_idx]) >> PAGE_SHIFT; + 
last_pfn = INVALID_PFN; + size = 0; + + for ( i = 0; i < pages; i++ ) + { + if ( last_pfn + (size >> PAGE_SHIFT) != pfn + i ) + SET_PHDR_IF_NECESSARY; + + if ( pfn_array[total_pages + i] == INVALID_MFN ) + continue; +#ifdef __ia64__ + /* work around until fix ia64 gmfn_to_mfn() */ + if ( pfn_array[total_pages + i] == 0 ) + continue; +#endif + + if ( last_pfn == INVALID_PFN ) + last_pfn = pfn + i; + size += PAGE_SIZE; + + pfn_array[j] = pfn + i; + j++; + } + SET_PHDR_IF_NECESSARY; + + total_pages += pages; + } + if ( j != nr_pages ) + PERROR("j(%ld) != nr_pages (%ld)", j, nr_pages); + } + else + { + for ( map_idx = 0; map_idx < nr_memory_map; map_idx++ ) + { + uint64_t addr; + uint64_t size; + if ( !memory_map_may_dump(&memory_map[map_idx]) ) + continue; + addr = memory_map_addr(&memory_map[map_idx]); + size = memory_map_size(&memory_map[map_idx]); + + GET_SET_PHDR(offset, addr, size); + } + } + + nr_phdr++; + + /* write out elf header */ + ehdr.e_phnum = nr_phdr; + sts = dump_rtn(args, (char*)&ehdr, sizeof(ehdr)); if ( sts != 0 ) - goto error_out; - + goto out; + + fixup = nr_phdr * sizeof(phdr[0]); + /* fix up offset for note section */ + phdr[0].p_offset += fixup; + + dummy_len = ROUNDUP(offset + fixup, PAGE_SHIFT) - (offset + fixup); /* padding length */ + fixup += dummy_len; + /* fix up offset for pages */ + for ( i = 1; i < nr_phdr; i++ ) + phdr[i].p_offset += fixup; + /* write out program header */ + sts = dump_rtn(args, (char*)phdr, nr_phdr * sizeof(phdr[0])); + if ( sts != 0 ) + goto out; + + /* note section */ + memset(¬e, 0, sizeof(note)); + note.namesz = strlen(XEN_NOTES) + 1; + strncpy(note.name, XEN_NOTES, sizeof(note.name)); + + /* note section:xen core header */ + note.descsz = sizeof(core_header); + note.type = NT_XEN_HEADER; + core_header.xch_magic = info.hvm ? 
XC_CORE_MAGIC_HVM : XC_CORE_MAGIC; + core_header.xch_nr_vcpus = nr_vcpus; + core_header.xch_nr_pages = nr_pages; + core_header.xch_page_size = PAGE_SIZE; + sts = dump_rtn(args, (char*)¬e, sizeof(note)); + if ( sts != 0 ) + goto out; + sts = dump_rtn(args, (char*)&core_header, sizeof(core_header)); + if ( sts != 0 ) + goto out; + + /* note section:xen vcpu prstatus */ + note.descsz = sizeof(ctxt[0]) * nr_vcpus; + note.type = NT_XEN_PRSTATUS; + sts = dump_rtn(args, (char*)¬e, sizeof(note)); + if ( sts != 0 ) + goto out; sts = dump_rtn(args, (char *)&ctxt, sizeof(ctxt[0]) * nr_vcpus); if ( sts != 0 ) - goto error_out; - - if ( (page_array = malloc(nr_pages * sizeof(xen_pfn_t))) == NULL ) - { - IPRINTF("Could not allocate memory\n"); - goto error_out; - } - if ( xc_get_pfn_list(xc_handle, domid, page_array, nr_pages) != nr_pages ) - { - IPRINTF("Could not get the page frame list\n"); - goto error_out; - } - sts = dump_rtn(args, (char *)page_array, nr_pages * sizeof(xen_pfn_t)); - if ( sts != 0 ) - goto error_out; - + goto out; + + /* note section:create p2m table */ + if ( need_p2m_table ) + { + note.descsz = sizeof(p2m_array[0]) * nr_pages; + note.type = NT_XEN_P2M; + sts = dump_rtn(args, (char*)¬e, sizeof(note)); + if ( sts != 0 ) + goto out; + sts = dump_rtn(args, (char *)p2m_array, + sizeof(p2m_array[0]) * nr_pages); + if ( sts != 0 ) + goto out; + } + /* Pad the output data to page alignment. 
*/ memset(dummy, 0, PAGE_SIZE); - sts = dump_rtn(args, dummy, header.xch_pages_offset - dummy_len); + sts = dump_rtn(args, dummy, dummy_len); if ( sts != 0 ) - goto error_out; - - for ( dump_mem = dump_mem_start, i = 0; i < nr_pages; i++ ) - { - copy_from_domain_page(xc_handle, domid, page_array[i], dump_mem); - dump_mem += PAGE_SIZE; - if ( ((i + 1) % DUMP_INCREMENT == 0) || ((i + 1) == nr_pages) ) - { - sts = dump_rtn(args, dump_mem_start, dump_mem - dump_mem_start); - if ( sts != 0 ) - goto error_out; - dump_mem = dump_mem_start; - } - } - + goto out; + +#define DUMP_PAGE(gmfn) \ + do { \ + copy_from_domain_page(xc_handle, domid, (gmfn), dump_mem); \ + dump_mem += PAGE_SIZE; \ + if ( ((i + 1) % DUMP_INCREMENT == 0) || ((i + 1) == nr_pages) ) \ + { \ + sts = dump_rtn(args, dump_mem_start, \ + dump_mem - dump_mem_start); \ + if ( sts != 0 ) \ + goto out; \ + dump_mem = dump_mem_start; \ + } \ + } while (0) + + /* dump pages */ + if ( need_p2m_table || may_balloon ) + { + for ( dump_mem = dump_mem_start, i = 0; i < nr_pages; i++ ) + { + xen_pfn_t gmfn; + if ( need_p2m_table ) + gmfn = p2m_array[i].gmfn; + else + gmfn = pfn_array[i]; /* may_balloon */ + + DUMP_PAGE(gmfn); + } + } + else + { + for ( map_idx = 0; map_idx < nr_memory_map; map_idx++ ) + { + if ( !memory_map_may_dump(&memory_map[map_idx]) ) + continue; + + pfn = memory_map_addr(&memory_map[map_idx]) >> PAGE_SHIFT; + nr_pages = memory_map_size(&memory_map[map_idx]) >> PAGE_SHIFT; + DPRINTF("%s:%d pfn %lx nr_pages %lx\n", + __func__, __LINE__, pfn, nr_pages); + + for ( dump_mem = dump_mem_start, i = 0; i < nr_pages; i++ ) + DUMP_PAGE(pfn + i); + } + } + + sts = 0; + +out: + if ( p2m ) + { + if ( info.hvm ) + free( p2m ); + else + munmap(p2m, P2M_SIZE); + } free(dump_mem_start); - free(page_array); - return 0; - - error_out: - free(dump_mem_start); - free(page_array); - return -1; + if ( p2m_array != NULL ) + free(p2m_array); + if ( pfn_array != NULL ) + free(pfn_array); + free(phdr); + return sts; } /* 
  Callback args for writing to a local dump file. */
diff -r dae81535b771 -r 7da70af62b57 tools/libxc/xc_core.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/libxc/xc_core.h	Thu Jan 18 15:18:21 2007 +0900
@@ -0,0 +1,81 @@
+/*
+ * Copyright (c) 2006 Isaku Yamahata <yamahata at valinux co jp>
+ *                    VA Linux Systems Japan K.K.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ */
+
+#ifndef XC_CORE_H
+#define XC_CORE_H
+
+#define XEN_NOTES "XEN CORE"
+
+/* Notes used in xen core */
+#define NT_XEN_NOTEBASE 256 /* large enough that it isn't used by others */
+#define NT_XEN_HEADER   (NT_XEN_NOTEBASE + 0)
+#define NT_XEN_PRSTATUS (NT_XEN_NOTEBASE + 1)
+#define NT_XEN_P2M      (NT_XEN_NOTEBASE + 2)
+
+
+struct xen_note {
+    uint32_t namesz;
+    uint32_t descsz;
+    uint32_t type;
+    char name[12]; /* to hold XEN_NOTES and 64bit aligned.
+                    * 8 <= sizeof(XEN_NOTES) < 12
+                    */
+};
+
+
+struct xen_core_header_desc {
+    uint64_t xch_magic;
+    uint64_t xch_nr_vcpus;
+    uint64_t xch_nr_pages;
+    uint64_t xch_page_size;
+};
+
+struct p2m {
+    xen_pfn_t pfn;
+    xen_pfn_t gmfn;
+};
+
+
+struct xen_core_header {
+    struct xen_note note;
+    struct xen_core_header_desc core_header;
+};
+
+struct xen_note_prstatus {
+    struct xen_note note;
+    vcpu_guest_context_t ctxt[0];
+};
+
+struct xen_note_p2m {
+    struct xen_note note;
+    struct p2m p2m[0];
+};
+
+#endif /* XC_CORE_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff -r dae81535b771 -r 7da70af62b57 tools/libxc/xc_efi.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/libxc/xc_efi.h	Thu Jan 18 15:18:21 2007 +0900
@@ -0,0 +1,68 @@
+#ifndef XC_EFI_H
+#define XC_EFI_H
+
+/* definitions from xen/include/asm-ia64/linux-xen/linux/efi.h */
+
+/*
+ * Extensible Firmware Interface
+ * Based on 'Extensible Firmware Interface Specification' version 0.9, April 30, 1999
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999, 2002-2003 Hewlett-Packard Co.
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *	Stephane Eranian <eranian@hpl.hp.com>
+ */
+
+/*
+ * Memory map descriptor:
+ */
+
+/* Memory types: */
+#define EFI_RESERVED_TYPE                0
+#define EFI_LOADER_CODE                  1
+#define EFI_LOADER_DATA                  2
+#define EFI_BOOT_SERVICES_CODE           3
+#define EFI_BOOT_SERVICES_DATA           4
+#define EFI_RUNTIME_SERVICES_CODE        5
+#define EFI_RUNTIME_SERVICES_DATA        6
+#define EFI_CONVENTIONAL_MEMORY          7
+#define EFI_UNUSABLE_MEMORY              8
+#define EFI_ACPI_RECLAIM_MEMORY          9
+#define EFI_ACPI_MEMORY_NVS             10
+#define EFI_MEMORY_MAPPED_IO            11
+#define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12
+#define EFI_PAL_CODE                    13
+#define EFI_MAX_MEMORY_TYPE             14
+
+/* Attribute values: */
+#define EFI_MEMORY_UC      ((u64)0x0000000000000001ULL) /* uncached */
+#define EFI_MEMORY_WC      ((u64)0x0000000000000002ULL) /* write-coalescing */
+#define EFI_MEMORY_WT      ((u64)0x0000000000000004ULL) /* write-through */
+#define EFI_MEMORY_WB      ((u64)0x0000000000000008ULL) /* write-back */
+#define EFI_MEMORY_WP      ((u64)0x0000000000001000ULL) /* write-protect */
+#define EFI_MEMORY_RP      ((u64)0x0000000000002000ULL) /* read-protect */
+#define EFI_MEMORY_XP      ((u64)0x0000000000004000ULL) /* execute-protect */
+#define EFI_MEMORY_RUNTIME ((u64)0x8000000000000000ULL) /* range requires runtime mapping */
+#define EFI_MEMORY_DESCRIPTOR_VERSION 1
+
+#define EFI_PAGE_SHIFT 12
+
+/*
+ * For current x86 implementations of EFI, there is
+ * additional padding in the mem descriptors.  This is not
+ * the case in ia64.  Need to have this fixed in the f/w.
+ */
+typedef struct {
+    u32 type;
+    u32 pad;
+    u64 phys_addr;
+    u64 virt_addr;
+    u64 num_pages;
+    u64 attribute;
+#if defined (__i386__)
+    u64 pad1;
+#endif
+} efi_memory_desc_t;
+
+#endif /* XC_EFI_H */
diff -r dae81535b771 -r 7da70af62b57 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Tue Jan 16 15:42:29 2007 +0900
+++ b/tools/libxc/xenctrl.h	Thu Jan 18 15:18:21 2007 +0900
@@ -529,6 +529,10 @@ unsigned long xc_translate_foreign_addre
 unsigned long xc_translate_foreign_address(int xc_handle, uint32_t dom,
                                            int vcpu, unsigned long long virt);
 
+/**
+ * DEPRECATED. Avoid using this, as it does not correctly account for PFNs
+ * without a backing MFN.
+ */
 int xc_get_pfn_list(int xc_handle, uint32_t domid, xen_pfn_t *pfn_buf,
                     unsigned long max_pfns);
 
diff -r dae81535b771 -r 7da70af62b57 tools/libxc/xg_private.h
--- a/tools/libxc/xg_private.h	Tue Jan 16 15:42:29 2007 +0900
+++ b/tools/libxc/xg_private.h	Thu Jan 18 15:18:21 2007 +0900
@@ -119,6 +119,25 @@ typedef unsigned long l4_pgentry_t;
   (((_a) >> L4_PAGETABLE_SHIFT) & (L4_PAGETABLE_ENTRIES - 1))
 #endif
 
+#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
+
+/* Size in bytes of the P2M (rounded up to the nearest PAGE_SIZE bytes) */
+#define P2M_SIZE        ROUNDUP((max_pfn * sizeof(xen_pfn_t)), PAGE_SHIFT)
+
+/* Number of xen_pfn_t in a page */
+#define fpp             (PAGE_SIZE/sizeof(xen_pfn_t))
+
+/* Number of entries in the pfn_to_mfn_frame_list_list */
+#define P2M_FLL_ENTRIES (((max_pfn)+(fpp*fpp)-1)/(fpp*fpp))
+
+/* Number of entries in the pfn_to_mfn_frame_list */
+#define P2M_FL_ENTRIES  (((max_pfn)+fpp-1)/fpp)
+
+/* Size in bytes of the pfn_to_mfn_frame_list */
+#define P2M_FL_SIZE     ((P2M_FL_ENTRIES)*sizeof(unsigned long))
+
+#define INVALID_P2M_ENTRY (~0UL)
+
 struct domain_setup_info
 {
     uint64_t v_start;
diff -r dae81535b771 -r 7da70af62b57 tools/libxc/xg_save_restore.h
--- a/tools/libxc/xg_save_restore.h	Tue Jan 16 15:42:29 2007 +0900
+++ b/tools/libxc/xg_save_restore.h	Thu Jan 18 15:18:21 2007
+0900
@@ -82,7 +82,6 @@ static int get_platform_info(int xc_hand
  */
 #define PFN_TO_KB(_pfn) ((_pfn) << (PAGE_SHIFT - 10))
 
-#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 
 /*
@@ -95,25 +94,5 @@ static int get_platform_info(int xc_hand
 #define M2P_SIZE(_m)    ROUNDUP(((_m) * sizeof(xen_pfn_t)), M2P_SHIFT)
 #define M2P_CHUNKS(_m)  (M2P_SIZE((_m)) >> M2P_SHIFT)
 
-/* Size in bytes of the P2M (rounded up to the nearest PAGE_SIZE bytes) */
-#define P2M_SIZE        ROUNDUP((max_pfn * sizeof(xen_pfn_t)), PAGE_SHIFT)
-
-/* Number of xen_pfn_t in a page */
-#define fpp             (PAGE_SIZE/sizeof(xen_pfn_t))
-
-/* Number of entries in the pfn_to_mfn_frame_list */
-#define P2M_FL_ENTRIES  (((max_pfn)+fpp-1)/fpp)
-
-/* Size in bytes of the pfn_to_mfn_frame_list */
-#define P2M_FL_SIZE     ((P2M_FL_ENTRIES)*sizeof(unsigned long))
-
-/* Number of entries in the pfn_to_mfn_frame_list_list */
-#define P2M_FLL_ENTRIES (((max_pfn)+(fpp*fpp)-1)/(fpp*fpp))
-
 /* Returns TRUE if the PFN is currently mapped */
 #define is_mapped(pfn_type) (!((pfn_type) & 0x80000000UL))
-
-#define INVALID_P2M_ENTRY (~0UL)
-
-
-

-- 
yamahata
On 18/1/07 6:52 am, "Isaku Yamahata" <yamahata@valinux.co.jp> wrote:

> Subject: [PATCH 1/5] dump-core take 2: XENMEM_set_memory_map hypercall
> Subject: [PATCH 2/5] dump-core take 2: libxc: xc_domain memmap functions

Should be able to work without these. We need to be able to support
ballooning anyway, so it's not as if every E820_RAM region will necessarily
be entirely populated with memory. What you need is a max_pfn value and then
iterate 0...max_pfn-1 and try to map each page. If the mapping fails then
there is no underlying memory. The tools could give a suitable max_pfn or we
could add a hypercall to get it from Xen.

> Subject: [PATCH 3/5] dump-core take 2: libxc: add xc_domain_tranlate_gpfn()

Why? x86 moved to always mapping HVM memory by GPFN. Can ia64 do the same?

> Subject: [PATCH 4/5] dump-core take 2: hvm builder: tell memory map

Hopefully not needed.

> Subject: [PATCH 5/5] dump-core take 2: elf formatify and added PFN-GMFN table

Shouldn't dump zero pages. Hence we need PFN-GMFN info even for HVM guests
-- absence of a PFN-GMFN pair, or GMFN==INVALID_MFN, could represent a RAM
hole more cheaply than 4kB of zeroes. Otherwise PFN=GMFN.

 -- Keir
On Thu, Jan 18, 2007 at 07:13:17AM +0000, Keir Fraser wrote:
> On 18/1/07 6:52 am, "Isaku Yamahata" <yamahata@valinux.co.jp> wrote:
>
> > Subject: [PATCH 1/5] dump-core take 2: XENMEM_set_memory_map hypercall
> > Subject: [PATCH 2/5] dump-core take 2: libxc: xc_domain memmap functions
>
> Should be able to work without these. We need to be able to support
> ballooning anyway, so it's not as if every E820_RAM region will necessarily
> be entirely populated with memory. What you need is a max_pfn value and then
> iterate 0...max_pfn-1 and try to map each page. If the mapping fails then
> there is no underlying memory. The tools could give a suitable max_pfn or we
> could add a hypercall to get it from Xen.

max_pfn isn't sufficient.
Memory may be sparse on ia64, so iterating over [0, max_pfn - 1]
isn't practical: it would take too long.
The memory map is also necessary to avoid dumping I/O regions of a driver
domain.

> > Subject: [PATCH 3/5] dump-core take 2: libxc: add xc_domain_tranlate_gpfn()
> Why? x86 moved to always mapping HVM memory by GPFN. Can ia64 do the same?

IA64 uses GPFN for both domU and HVM. It is used just to check whether
each GPFN has underlying memory, not to get the MFN.
It could be replaced with trying to map and checking the result; however,
I want to know this _before_ dumping pages, in order to create the program
headers. Is there any cheaper way than trying to map each PFN?

> > Subject: [PATCH 5/5] dump-core take 2: elf formatify and added PFN-GMFN table
> Shouldn't dump zero pages. Hence we need PFN-GMFN info even for HVM guests
> -- absence of PFN-GMFN pair, or GMFN==INVALID_MFN, could represent a RAM
> hole more cheaply than 4kB of zeroes. Otherwise PFN=GMFN.

I'm not sure I understand.
The posted patch doesn't dump a page which doesn't have underlying memory.
By checking the program header's physical address and size
(Elf_Phdr.{p_paddr, p_filesz}), we can know whether a given GPFN is
present or not.
-- 
yamahata
Tian, Kevin
2007-Jan-18 08:54 UTC
RE: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:
> From: Isaku Yamahata
> Sent: January 18, 2007 16:33
>>
>> Should be able to work without these. We need to be able to support
>> ballooning anyway, so it's not as if every E820_RAM region will
>> necessarily be entirely populated with memory. What you need is a
>> max_pfn value and then iterate 0...max_pfn-1 and try to map each page.
>> If the mapping fails then there is no underlying memory. The tools
>> could give a suitable max_pfn or we could add a hypercall to get it
>> from Xen.
>
> max_pfn isn't sufficient.
> Memory may be sparse on ia64 so that iterating on [0, max_pfn - 1]
> isn't practical. It would take too long time.
> Memory map is also necessary to avoid dumping I/O regions of a driver
> domain.

Yeah, the memory map may be sparse on ia64, but only at the physical level.
You can always present a compact pseudo-physical layout to a domain,
regardless of whether the real physical map is sparse. :-)

BTW, is it possible to save the memmap into xenstore, so that multiple
user components can communicate such info directly without Xen's
intervention?

Thanks,
Kevin

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel
Isaku Yamahata
2007-Jan-18 09:25 UTC
Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:
On Thu, Jan 18, 2007 at 04:54:05PM +0800, Tian, Kevin wrote:
> Yeah, memory map may be sparse on ia64, but, only at physical level.
> You can always present a compact pseudo physical layout to a
> domain, despite of sparse or not in real physical. :-)

That's right. Xen/ia64 does so now for paravirtualized domains, except
dom0.
There is an unsolved issue: if much memory (e.g. >4GB) is given to a
driver domain, the domain can't access I/O.
At least the I/O area must be avoided somehow, so a paravirtualized
domain's memory map may become sparse (in the future, when the issue is
solved).

> BTW, is it possible
> to save memmap into xenstore, so that multiple user components can
> communicate such info directly without xen's intervention?

Do you have any usage in mind?
-- 
yamahata
Tian, Kevin
2007-Jan-18 13:26 UTC
RE: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:
> From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
> Sent: January 18, 2007 17:25
>
> On Thu, Jan 18, 2007 at 04:54:05PM +0800, Tian, Kevin wrote:
>> Yeah, memory map may be sparse on ia64, but, only at physical level.
>> You can always present a compact pseudo physical layout to a
>> domain, despite of sparse or not in real physical. :-)
>
> That's right. Xen/ia64 does so now for paravirtualized domain
> except dom0.
> There is an unsolved issue. If much memory (e.g. >4GB) is given
> to a driver domain, the domain can't access I/O.
> At least the I/O area must be avoided somehow,
> thus paravirtualized domain's memory map may become sparse
> (in the future when the issue is solved).

Yes, if I/O regions are very sparse, so is the memory map for a driver
domain.

>> BTW, is it possible
>> to save memmap into xenstore, so that multiple user components can
>> communicate such info directly without xen's intervention?
>
> Do you have any usage in mind?

Cases like your above requirement, like qemu, and even like
save/restore... Anyway, to me there's no need to make Xen aware of the
domain memmap. The domain image builder constructs the memmap based on
its configuration, and then just notifies Xen to allocate pages for the
appropriate regions or to set up mappings for assigned MMIO ranges. If
the builder also saves the memmap to xenstore, you don't need the above
hypercall. Just an alternative...

Thanks,
Kevin
On 18/1/07 08:33, "Isaku Yamahata" <yamahata@valinux.co.jp> wrote:

> max_pfn isn't sufficient.
> Memory may be sparse on ia64 so that iterating on [0, max_pfn - 1]
> isn't practical. It would take too long time.
> Memory map is also necessary to avoid dumping I/O regions of a driver domain.

But *you* make up the memory map. Why can't you make it dense for virtual
machines? If you can't, how about pre-defining where the holes are and
implicitly sharing that knowledge between the builder and save/restore?
They're part of the same toolstack, after all.

And if the memory map's extremely sparse, that would make saving zero pages
for empty PFNs even more sucky. Or do you avoid that for big holes?

 -- Keir
Keir Fraser
2007-Jan-18 14:02 UTC
Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:
On 18/1/07 13:26, "Tian, Kevin" <kevin.tian@intel.com> wrote:

>> Do you have any usage in mind?
>
> Case like your above requirement, case like qemu, and even case
> like save/restore... anyway, to me there's no need to let Xen aware
> of the domain memmap. Domain image builder constructs the memmap
> based on its configuration, and then just notify xen to allocate pages for
> appropriate regions or setup mapping for assigned MMIO ranges. If
> builder also saves memmap to xenstore, you don't need above
> hypercall then. Just an alternative...

This is a better alternative *if* it is actually necessary, which I have not
been convinced of.

 -- Keir
Jes Sorensen
2007-Jan-19 03:18 UTC
Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:
>>>>> "Kevin" == Tian, Kevin <kevin.tian@intel.com> writes:Kevin> Yeah, memory map may be sparse on ia64, but, only at physical Kevin> level. You can always present a compact pseudo physical layout Kevin> to a domain, despite of sparse or not in real physical.:-) BTW, Kevin> is it possible to save memmap into xenstore, so that multiple Kevin> user components can communicate such info directly without Kevin> xen''s intervention? Providing a fake linear memory map like that is totally broken, it means the domU operating system will not be able to benefit from NUMA information and do appropriate scheduling. The domU pages needs to be placed in the metaphysical memory zones that match their physical zone to get this right. We can provide a virtual linear map for special cases, like to support lesser operating systems that can''t handle real computers, but the general case needs to be that pages go into the metaphysical zone that matches their real physical zone. This is applicable to any NUMA system, not just ia64 systems, so with x86_64 becoming mainstream they will need it there too. A linear scan of the pfn list is just wrong, one should never do that. Cheers, Jes _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel