libxc currently locks various on-stack data structures using mlock(2) in
order to try to make them safe for passing to hypercalls (which require
the memory to be mapped). There are several issues with this approach:

1) mlock/munlock do not nest, therefore mlocking multiple pieces of data
   on the stack which happen to share a page causes everything to be
   unlocked by the first munlock, not the last. This is likely to be OK
   for the current uses in libxc taken in isolation, but could impact
   any caller of libxc which uses mlock itself.

2) mlocking only parts of the stack is considered by many to be a
   dubious use of mlock, even if it is, strictly speaking, allowed by
   the relevant specifications.

3) mlock may not provide the semantics required for hypercall-safe
   memory. mlock simply ensures that there can be no major faults (page
   faults requiring I/O to satisfy) but does not necessarily rule out
   minor faults (e.g. due to page migration).

The following introduces an explicit hypercall-safe memory pool API,
which includes support for bouncing user-supplied memory buffers into
suitable memory. This series addresses (1) and (2) but does not directly
address (3), other than by encapsulating the code which acquires
hypercall-safe memory in one place, where it can more easily be fixed.

There is also the slightly separate issue of code which forgets to lock
buffers as necessary; therefore this series overrides the Xen
guest-handle interfaces to improve compile-time checking for correct use
of the memory pool. This scheme works for the pointers contained within
hypercall argument structures but doesn't catch the actual hypercall
arguments themselves. I'm open to suggestions on how to extend it
cleanly to catch those cases.

The bits which touch ia64 are not even compile tested since I do not
have access to a suitable userspace-capable cross compiler.
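To make hazard (1) concrete, a minimal sketch (not part of the series;
names are illustrative) of two independently mlocked stack objects that
happen to share a page:

#include <sys/mman.h>

/* Illustrative only: two small stack objects will typically live in the
 * same 4KiB page. mlock/munlock operate on whole pages and do not nest,
 * so the first munlock drops the lock for both objects. Error handling
 * is elided. */
void hazard(void)
{
    struct { char buf[64]; } a, b;     /* very likely in the same page */

    (void) mlock(&a, sizeof(a));       /* page now locked */
    (void) mlock(&b, sizeof(b));       /* same page: no additional effect */

    (void) munlock(&a, sizeof(a));     /* whole page unlocked... */
    /* ...so 'b' is no longer guaranteed resident, even though its own
     * munlock has not yet been issued. Using 'b' as hypercall memory
     * here would be unsafe. */
    (void) munlock(&b, sizeof(b));
}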
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1287650254 -3600
# Node ID a40c36db2a03279fcb6a0525359d6a95de4e4800
# Parent  0b5d85ea10f8fff3f654c564c0e66900e83e8012
libxc: infrastructure for hypercall safe data buffers.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 0b5d85ea10f8 -r a40c36db2a03 tools/libxc/Makefile
--- a/tools/libxc/Makefile      Tue Oct 19 09:17:18 2010 +0100
+++ b/tools/libxc/Makefile      Thu Oct 21 09:37:34 2010 +0100
@@ -27,6 +27,7 @@ CTRL_SRCS-y += xc_mem_event.c
 CTRL_SRCS-y += xc_mem_event.c
 CTRL_SRCS-y += xc_mem_paging.c
 CTRL_SRCS-y += xc_memshr.c
+CTRL_SRCS-y += xc_hcall_buf.c
 CTRL_SRCS-y += xtl_core.c
 CTRL_SRCS-y += xtl_logger_stdio.c
 CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c
diff -r 0b5d85ea10f8 -r a40c36db2a03 tools/libxc/xc_hcall_buf.c
--- /dev/null   Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/libxc/xc_hcall_buf.c        Thu Oct 21 09:37:34 2010 +0100
@@ -0,0 +1,160 @@
+/*
+ * Copyright (c) 2010, Citrix Systems, Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <inttypes.h>
+#include "xc_private.h"
+#include "xg_private.h"
+
+xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL) = {
+    .hbuf = NULL,
+    .param_shadow = NULL,
+    HYPERCALL_BUFFER_INIT_NO_BOUNCE
+};
+
+void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
+{
+    size_t size = nr_pages * PAGE_SIZE;
+    void *p;
+#if defined(_POSIX_C_SOURCE) && !defined(__sun__)
+    int ret;
+    ret = posix_memalign(&p, PAGE_SIZE, size);
+    if (ret != 0)
+        return NULL;
+#elif defined(__NetBSD__) || defined(__OpenBSD__)
+    p = valloc(size);
+#else
+    p = memalign(PAGE_SIZE, size);
+#endif
+
+    if (!p)
+        return NULL;
+
+#ifndef __sun__
+    if ( mlock(p, size) < 0 )
+    {
+        free(p);
+        return NULL;
+    }
+#endif
+
+    b->hbuf = p;
+
+    memset(p, 0, size);
+    return b->hbuf;
+}
+
+void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
+{
+    if ( b->hbuf == NULL )
+        return;
+
+#ifndef __sun__
+    (void) munlock(b->hbuf, nr_pages * PAGE_SIZE);
+#endif
+
+    free(b->hbuf);
+}
+
+struct allocation_header {
+    int nr_pages;
+};
+
+void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size)
+{
+    size_t actual_size = ROUNDUP(size + sizeof(struct allocation_header), PAGE_SHIFT);
+    int nr_pages = actual_size >> PAGE_SHIFT;
+    struct allocation_header *hdr;
+
+    hdr = xc__hypercall_buffer_alloc_pages(xch, b, nr_pages);
+    if ( hdr == NULL )
+        return NULL;
+
+    b->hbuf = (void *)(hdr+1);
+
+    hdr->nr_pages = nr_pages;
+    return b->hbuf;
+}
+
+void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b)
+{
+    struct allocation_header *hdr;
+
+    if (b->hbuf == NULL)
+        return;
+
+    hdr = b->hbuf;
+    b->hbuf = --hdr;
+
+    xc__hypercall_buffer_free_pages(xch, b, hdr->nr_pages);
+}
+
+int xc__hypercall_bounce_pre(xc_interface *xch, xc_hypercall_buffer_t *b)
+{
+    void *p;
+
+    /*
+     * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE.
+     */
+    if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE )
+        abort();
+
+    /*
+     * No need to bounce a NULL buffer.
+     */
+    if ( b->ubuf == NULL )
+    {
+        b->hbuf = NULL;
+        return 0;
+    }
+
+    p = xc__hypercall_buffer_alloc(xch, b, b->sz);
+    if ( p == NULL )
+        return -1;
+
+    if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_IN || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH )
+        memcpy(b->hbuf, b->ubuf, b->sz);
+
+    return 0;
+}
+
+void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *b)
+{
+    /*
+     * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE.
+     */
+    if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE )
+        abort();
+
+    if ( b->hbuf == NULL )
+        return;
+
+    if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_OUT || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH )
+        memcpy(b->ubuf, b->hbuf, b->sz);
+
+    xc__hypercall_buffer_free(xch, b);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff -r 0b5d85ea10f8 -r a40c36db2a03 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h  Tue Oct 19 09:17:18 2010 +0100
+++ b/tools/libxc/xc_private.h  Thu Oct 21 09:37:34 2010 +0100
@@ -105,6 +105,64 @@ void unlock_pages(xc_interface *xch, voi
 int hcall_buf_prep(xc_interface *xch, void **addr, size_t len);
 void hcall_buf_release(xc_interface *xch, void **addr, size_t len);
+
+/*
+ * HYPERCALL ARGUMENT BUFFERS
+ *
+ * Augment the public hypercall buffer interface with the ability to
+ * bounce between user provided buffers and hypercall safe memory.
+ *
+ * Use xc_hypercall_bounce_pre/post instead of
+ * xc_hypercall_buffer_alloc/free(_pages). The specified user
+ * supplied buffer is automatically copied in/out of the hypercall
+ * safe memory.
+ */
+enum {
+    XC_HYPERCALL_BUFFER_BOUNCE_NONE = 0,
+    XC_HYPERCALL_BUFFER_BOUNCE_IN   = 1,
+    XC_HYPERCALL_BUFFER_BOUNCE_OUT  = 2,
+    XC_HYPERCALL_BUFFER_BOUNCE_BOTH = 3
+};
+
+/*
+ * Declare a named bounce buffer.
+ *
+ * Normally you should use DECLARE_HYPERCALL_BOUNCE (see below).
+ *
+ * This declaration should only be used when the user pointer is
+ * non-trivial, e.g. when it is contained within an existing data
+ * structure.
+ */
+#define DECLARE_NAMED_HYPERCALL_BOUNCE(_name, _ubuf, _sz, _dir) \
+    xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = {  \
+        .hbuf = NULL,                                           \
+        .param_shadow = NULL,                                   \
+        .sz = _sz, .dir = _dir, .ubuf = _ubuf,                  \
+    }
+
+/*
+ * Declare a bounce buffer shadowing the named user data pointer.
+ */
+#define DECLARE_HYPERCALL_BOUNCE(_ubuf, _sz, _dir) DECLARE_NAMED_HYPERCALL_BOUNCE(_ubuf, _ubuf, _sz, _dir)
+
+/*
+ * Set the size of data to bounce. Useful when the size is not known
+ * when the bounce buffer is declared.
+ */
+#define HYPERCALL_BOUNCE_SET_SIZE(_buf, _sz) do { (HYPERCALL_BUFFER(_buf))->sz = _sz; } while (0)
+
+/*
+ * Initialise and free hypercall safe memory. Takes care of any required
+ * copying.
+ */
+int xc__hypercall_bounce_pre(xc_interface *xch, xc_hypercall_buffer_t *bounce);
+#define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
+void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
+#define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
+
+/*
+ * Hypercall interfaces.
+ */
 int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall);
diff -r 0b5d85ea10f8 -r a40c36db2a03 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h     Tue Oct 19 09:17:18 2010 +0100
+++ b/tools/libxc/xenctrl.h     Thu Oct 21 09:37:34 2010 +0100
@@ -147,6 +147,137 @@ enum xc_open_flags {
  * @return 0 on success, -1 otherwise.
  */
 int xc_interface_close(xc_interface *xch);
+
+/*
+ * HYPERCALL SAFE MEMORY BUFFER
+ *
+ * Ensure that memory which is passed to a hypercall has been
+ * specially allocated in order to be safe to access from the
+ * hypervisor.
+ *
+ * Each user data pointer is shadowed by an xc_hypercall_buffer data
+ * structure. You should never define an xc_hypercall_buffer type
+ * directly, instead use the DECLARE_HYPERCALL_BUFFER* macros below.
+ *
+ * The structure should be considered opaque and all access should be
+ * via the macros and helper functions defined below.
+ *
+ * Once the buffer is declared the user is responsible for explicitly
+ * allocating and releasing the memory using
+ * xc_hypercall_buffer_alloc(_pages) and
+ * xc_hypercall_buffer_free(_pages).
+ *
+ * Once the buffer has been allocated the user can initialise the data
+ * via the normal pointer. The xc_hypercall_buffer structure is
+ * transparently referenced by the helper macros (such as
+ * xen_set_guest_handle) in order to check at compile time that the
+ * correct type of memory is being used.
+ */
+struct xc_hypercall_buffer {
+    /* Hypercall safe memory buffer. */
+    void *hbuf;
+
+    /*
+     * Reference to xc_hypercall_buffer passed as argument to the
+     * current function.
+     */
+    struct xc_hypercall_buffer *param_shadow;
+
+    /*
+     * Direction of copy for bounce buffering.
+     */
+    int dir;
+
+    /* Used iff dir != 0. */
+    void *ubuf;
+    size_t sz;
+};
+typedef struct xc_hypercall_buffer xc_hypercall_buffer_t;
+
+/*
+ * Construct the name of the hypercall buffer for a given variable.
+ * For internal use only
+ */
+#define XC__HYPERCALL_BUFFER_NAME(_name) xc__hypercall_buffer_##_name
+
+/*
+ * Returns the hypercall_buffer associated with a variable.
+ */
+#define HYPERCALL_BUFFER(_name)                                  \
+    ({  xc_hypercall_buffer_t _val1;                             \
+        typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = &XC__HYPERCALL_BUFFER_NAME(_name); \
+        (void)(&_val1 == _val2);                                 \
+        (_val2)->param_shadow ? (_val2)->param_shadow : (_val2); \
+    })
+
+#define HYPERCALL_BUFFER_INIT_NO_BOUNCE .dir = 0, .sz = 0, .ubuf = (void *)-1
+
+/*
+ * Defines a hypercall buffer and user pointer with _name of _type.
+ *
+ * The user accesses the data as normal via _name which will be
+ * transparently converted to the hypercall buffer as necessary.
+ */
+#define DECLARE_HYPERCALL_BUFFER(_type, _name)                 \
+    _type *_name = NULL;                                       \
+    xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \
+        .hbuf = NULL,                                          \
+        .param_shadow = NULL,                                  \
+        HYPERCALL_BUFFER_INIT_NO_BOUNCE                        \
+    }
+
+/*
+ * Declare the necessary data structure to allow a hypercall buffer
+ * passed as an argument to a function to be used in the normal way.
+ */
+#define DECLARE_HYPERCALL_BUFFER_ARGUMENT(_name)               \
+    xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \
+        .hbuf = (void *)-1,                                    \
+        .param_shadow = _name,                                 \
+        HYPERCALL_BUFFER_INIT_NO_BOUNCE                        \
+    }
+
+/*
+ * Get the hypercall buffer data pointer in a form suitable for use
+ * directly as a hypercall argument.
+ */
+#define HYPERCALL_BUFFER_AS_ARG(_name)                          \
+    ({  xc_hypercall_buffer_t _val1;                            \
+        typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = HYPERCALL_BUFFER(_name); \
+        (void)(&_val1 == _val2);                                \
+        (unsigned long)(_val2)->hbuf;                           \
+    })
+
+/*
+ * Set a xen_guest_handle in a type safe manner, ensuring that the
+ * data pointer has been correctly allocated.
+ */
+#define xc_set_xen_guest_handle(_hnd, _val)                     \
+    do {                                                        \
+        xc_hypercall_buffer_t _val1;                            \
+        typeof(XC__HYPERCALL_BUFFER_NAME(_val)) *_val2 = HYPERCALL_BUFFER(_val); \
+        (void) (&_val1 == _val2);                               \
+        set_xen_guest_handle_raw(_hnd, (_val2)->hbuf);          \
+    } while (0)
+
+/* Use with xc_set_xen_guest_handle in place of NULL */
+extern xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL);
+
+/*
+ * Allocate and free hypercall buffers with byte granularity.
+ */
+void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size);
+#define xc_hypercall_buffer_alloc(_xch, _name, _size) xc__hypercall_buffer_alloc(_xch, HYPERCALL_BUFFER(_name), _size)
+void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b);
+#define xc_hypercall_buffer_free(_xch, _name) xc__hypercall_buffer_free(_xch, HYPERCALL_BUFFER(_name))
+
+/*
+ * Allocate and free hypercall buffers with page alignment.
+ */
+void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages);
+#define xc_hypercall_buffer_alloc_pages(_xch, _name, _nr) xc__hypercall_buffer_alloc_pages(_xch, HYPERCALL_BUFFER(_name), _nr)
+void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages);
+#define xc_hypercall_buffer_free_pages(_xch, _name, _nr) xc__hypercall_buffer_free_pages(_xch, HYPERCALL_BUFFER(_name), _nr)

 /*
  * DOMAIN DEBUGGING FUNCTIONS
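Before the conversions that follow, the intended calling convention is
worth seeing in one place. A minimal sketch of a caller (the wrapper
function and the particular domctl are hypothetical; only the macros
and functions introduced above, plus libxc's internal do_domctl(), are
assumed):

/* Hypothetical wrapper showing the alloc/use/free convention. */
static int example_clean_log_dirty(xc_interface *xch, domid_t domid,
                                   size_t bitmap_bytes)
{
    DECLARE_DOMCTL;
    DECLARE_HYPERCALL_BUFFER(uint8_t, bitmap); /* pointer + shadow struct */
    int rc;

    bitmap = xc_hypercall_buffer_alloc(xch, bitmap, bitmap_bytes);
    if ( bitmap == NULL )
        return -1;

    memset(bitmap, 0, bitmap_bytes);           /* use via the normal pointer */

    domctl.cmd = XEN_DOMCTL_shadow_op;
    domctl.domain = domid;
    domctl.u.shadow_op.op = XEN_DOMCTL_SHADOW_OP_CLEAN;
    /* Fails to compile unless 'bitmap' was declared as a hypercall buffer: */
    xc_set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, bitmap);

    rc = do_domctl(xch, &domctl);

    xc_hypercall_buffer_free(xch, bitmap);
    return rc;
}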
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 02 of 25] libxc: convert xc_version over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID f262590f9b94d9f6da603082748fe9f560becc7d # Parent a40c36db2a03279fcb6a0525359d6a95de4e4800 libxc: convert xc_version over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r a40c36db2a03 -r f262590f9b94 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_private.c Thu Oct 21 09:37:34 2010 +0100 @@ -569,42 +569,46 @@ int xc_sysctl(xc_interface *xch, struct int xc_version(xc_interface *xch, int cmd, void *arg) { - int rc, argsize = 0; + DECLARE_HYPERCALL_BOUNCE(arg, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT); /* Size unknown until cmd decoded */ + size_t sz = 0; + int rc; switch ( cmd ) { case XENVER_extraversion: - argsize = sizeof(xen_extraversion_t); + sz = sizeof(xen_extraversion_t); break; case XENVER_compile_info: - argsize = sizeof(xen_compile_info_t); + sz = sizeof(xen_compile_info_t); break; case XENVER_capabilities: - argsize = sizeof(xen_capabilities_info_t); + sz = sizeof(xen_capabilities_info_t); break; case XENVER_changeset: - argsize = sizeof(xen_changeset_info_t); + sz = sizeof(xen_changeset_info_t); break; case XENVER_platform_parameters: - argsize = sizeof(xen_platform_parameters_t); + sz = sizeof(xen_platform_parameters_t); break; } - if ( (argsize != 0) && (lock_pages(xch, arg, argsize) != 0) ) + HYPERCALL_BOUNCE_SET_SIZE(arg, sz); + + if ( (sz != 0) && xc_hypercall_bounce_pre(xch, arg) ) { - PERROR("Could not lock memory for version hypercall"); + PERROR("Could not bounce buffer for version hypercall"); return -ENOMEM; } #ifdef VALGRIND - if (argsize != 0) - memset(arg, 0, argsize); + if (sz != 0) + memset(hypercall_bounce_get(bounce), 0, sz); #endif - rc = do_xen_version(xch, cmd, arg); + rc = do_xen_version(xch, cmd, HYPERCALL_BUFFER(arg)); - if ( argsize != 0 ) - unlock_pages(xch, arg, argsize); + if ( sz != 0 ) + xc_hypercall_bounce_post(xch, arg); return rc; } diff -r a40c36db2a03 -r f262590f9b94 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 @@ -166,13 +166,14 @@ void xc__hypercall_bounce_post(xc_interf int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall); -static inline int do_xen_version(xc_interface *xch, int cmd, void *dest) +static inline int do_xen_version(xc_interface *xch, int cmd, xc_hypercall_buffer_t *dest) { DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(dest); hypercall.op = __HYPERVISOR_xen_version; hypercall.arg[0] = (unsigned long) cmd; - hypercall.arg[1] = (unsigned long) dest; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(dest); return do_xen_hypercall(xch, &hypercall); } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
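The shape of this conversion recurs throughout the series: a
caller-supplied pointer is shadowed by a bounce buffer declared against
the parameter name, and the guest handle is set from the bounce rather
than the raw pointer. A hypothetical read-style wrapper following the
xc_version pattern above:

/* Hypothetical wrapper: 'arg' is caller memory of 'sz' bytes which the
 * hypervisor fills, so it bounces in the OUT direction only. */
static int example_version_like_call(xc_interface *xch, int cmd,
                                     void *arg, size_t sz)
{
    DECLARE_HYPERCALL_BOUNCE(arg, sz, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
    int rc;

    if ( xc_hypercall_bounce_pre(xch, arg) )   /* allocate safe memory */
    {
        PERROR("Could not bounce buffer for hypercall");
        return -ENOMEM;
    }

    rc = do_xen_version(xch, cmd, HYPERCALL_BUFFER(arg));

    xc_hypercall_bounce_post(xch, arg);        /* copy back and free */
    return rc;
}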
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 03 of 25] libxc: convert domctl interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID 6766a5b07735888ae5c5fdc16cbb4a915a997f05 # Parent f262590f9b94d9f6da603082748fe9f560becc7d libxc: convert domctl interfaces over to hypercall buffers (defer save/restore and shadow related interfaces til a later patch) Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r f262590f9b94 -r 6766a5b07735 tools/libxc/xc_dom_boot.c --- a/tools/libxc/xc_dom_boot.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_dom_boot.c Thu Oct 21 09:37:34 2010 +0100 @@ -61,9 +61,10 @@ static int setup_hypercall_page(struct x return rc; } -static int launch_vm(xc_interface *xch, domid_t domid, void *ctxt) +static int launch_vm(xc_interface *xch, domid_t domid, xc_hypercall_buffer_t *ctxt) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(ctxt); int rc; xc_dom_printf(xch, "%s: called, ctxt=%p", __FUNCTION__, ctxt); @@ -71,7 +72,7 @@ static int launch_vm(xc_interface *xch, domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = 0; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); if ( rc != 0 ) xc_dom_panic(xch, XC_INTERNAL_ERROR, @@ -202,8 +203,12 @@ int xc_dom_boot_image(struct xc_dom_imag int xc_dom_boot_image(struct xc_dom_image *dom) { DECLARE_DOMCTL; - vcpu_guest_context_any_t ctxt; + DECLARE_HYPERCALL_BUFFER(vcpu_guest_context_any_t, ctxt); int rc; + + ctxt = xc_hypercall_buffer_alloc(dom->xch, ctxt, sizeof(*ctxt)); + if ( ctxt == NULL ) + return -1; DOMPRINTF_CALLED(dom->xch); @@ -260,12 +265,13 @@ int xc_dom_boot_image(struct xc_dom_imag return rc; /* let the vm run */ - memset(&ctxt, 0, sizeof(ctxt)); - if ( (rc = dom->arch_hooks->vcpu(dom, &ctxt)) != 0 ) + memset(ctxt, 0, sizeof(ctxt)); + if ( (rc = dom->arch_hooks->vcpu(dom, ctxt)) != 0 ) return rc; xc_dom_unmap_all(dom); - rc = launch_vm(dom->xch, dom->guest_domid, &ctxt); + rc = launch_vm(dom->xch, dom->guest_domid, HYPERCALL_BUFFER(ctxt)); + xc_hypercall_buffer_free(dom->xch, ctxt); return rc; } diff -r f262590f9b94 -r 6766a5b07735 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 @@ -115,36 +115,31 @@ int xc_vcpu_setaffinity(xc_interface *xc uint64_t *cpumap, int cpusize) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); int ret = -1; - uint8_t *local = malloc(cpusize); - if(local == NULL) + local = xc_hypercall_buffer_alloc(xch, local, cpusize); + if ( local == NULL ) { - PERROR("Could not alloc memory for Xen hypercall"); + PERROR("Could not allocate memory for setvcpuaffinity domctl hypercall"); goto out; } + domctl.cmd = XEN_DOMCTL_setvcpuaffinity; domctl.domain = (domid_t)domid; domctl.u.vcpuaffinity.vcpu = vcpu; bitmap_64_to_byte(local, cpumap, cpusize * 8); - set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; - - if ( lock_pages(xch, local, cpusize) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } ret = do_domctl(xch, &domctl); - unlock_pages(xch, local, cpusize); + xc_hypercall_buffer_free(xch, local); out: - free(local); return ret; } @@ -155,12 +150,13 @@ int xc_vcpu_getaffinity(xc_interface *xc uint64_t *cpumap, int cpusize) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); int ret = -1; - uint8_t * local = 
malloc(cpusize); + local = xc_hypercall_buffer_alloc(xch, local, cpusize); if(local == NULL) { - PERROR("Could not alloc memory for Xen hypercall"); + PERROR("Could not allocate memory for getvcpuaffinity domctl hypercall"); goto out; } @@ -168,22 +164,15 @@ int xc_vcpu_getaffinity(xc_interface *xc domctl.domain = (domid_t)domid; domctl.u.vcpuaffinity.vcpu = vcpu; - - set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; - - if ( lock_pages(xch, local, sizeof(local)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } ret = do_domctl(xch, &domctl); - unlock_pages(xch, local, sizeof (local)); bitmap_byte_to_64(cpumap, local, cpusize * 8); + + xc_hypercall_buffer_free(xch, local); out: - free(local); return ret; } @@ -283,20 +272,19 @@ int xc_domain_hvm_getcontext(xc_interfac { int ret; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, ctxt_buf) ) + return -1; domctl.cmd = XEN_DOMCTL_gethvmcontext; domctl.domain = (domid_t)domid; domctl.u.hvmcontext.size = size; - set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); - - if ( ctxt_buf ) - if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 ) - return ret; + xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); - if ( ctxt_buf ) - unlock_pages(xch, ctxt_buf, size); + xc_hypercall_bounce_post(xch, ctxt_buf); return (ret < 0 ? -1 : domctl.u.hvmcontext.size); } @@ -312,23 +300,21 @@ int xc_domain_hvm_getcontext_partial(xc_ { int ret; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_OUT); - if ( !ctxt_buf ) - return -EINVAL; + if ( !ctxt_buf || xc_hypercall_bounce_pre(xch, ctxt_buf) ) + return -1; domctl.cmd = XEN_DOMCTL_gethvmcontext_partial; domctl.domain = (domid_t) domid; domctl.u.hvmcontext_partial.type = typecode; domctl.u.hvmcontext_partial.instance = instance; - set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); + xc_set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); - if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 ) - return ret; - ret = do_domctl(xch, &domctl); - if ( ctxt_buf ) - unlock_pages(xch, ctxt_buf, size); + if ( ctxt_buf ) + xc_hypercall_bounce_post(xch, ctxt_buf); return ret ? 
-1 : 0; } @@ -341,18 +327,19 @@ int xc_domain_hvm_setcontext(xc_interfac { int ret; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_IN); + + if ( xc_hypercall_bounce_pre(xch, ctxt_buf) ) + return -1; domctl.cmd = XEN_DOMCTL_sethvmcontext; domctl.domain = domid; domctl.u.hvmcontext.size = size; - set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); - - if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 ) - return ret; + xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); - unlock_pages(xch, ctxt_buf, size); + xc_hypercall_bounce_post(xch, ctxt_buf); return ret; } @@ -364,18 +351,19 @@ int xc_vcpu_getcontext(xc_interface *xch { int rc; DECLARE_DOMCTL; - size_t sz = sizeof(vcpu_guest_context_any_t); + DECLARE_HYPERCALL_BOUNCE(ctxt, sizeof(vcpu_guest_context_any_t), XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, ctxt) ) + return -1; domctl.cmd = XEN_DOMCTL_getvcpucontext; domctl.domain = (domid_t)domid; domctl.u.vcpucontext.vcpu = (uint16_t)vcpu; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); - - if ( (rc = lock_pages(xch, ctxt, sz)) != 0 ) - return rc; rc = do_domctl(xch, &domctl); - unlock_pages(xch, ctxt, sz); + + xc_hypercall_bounce_post(xch, ctxt); return rc; } @@ -558,22 +546,24 @@ int xc_domain_get_tsc_info(xc_interface { int rc; DECLARE_DOMCTL; - xen_guest_tsc_info_t info = { 0 }; + DECLARE_HYPERCALL_BUFFER(xen_guest_tsc_info_t, info); + + info = xc_hypercall_buffer_alloc(xch, info, sizeof(*info)); + if ( info == NULL ) + return -ENOMEM; domctl.cmd = XEN_DOMCTL_gettscinfo; domctl.domain = (domid_t)domid; - set_xen_guest_handle(domctl.u.tsc_info.out_info, &info); - if ( (rc = lock_pages(xch, &info, sizeof(info))) != 0 ) - return rc; + xc_set_xen_guest_handle(domctl.u.tsc_info.out_info, info); rc = do_domctl(xch, &domctl); if ( rc == 0 ) { - *tsc_mode = info.tsc_mode; - *elapsed_nsec = info.elapsed_nsec; - *gtsc_khz = info.gtsc_khz; - *incarnation = info.incarnation; + *tsc_mode = info->tsc_mode; + *elapsed_nsec = info->elapsed_nsec; + *gtsc_khz = info->gtsc_khz; + *incarnation = info->incarnation; } - unlock_pages(xch, &info,sizeof(info)); + xc_hypercall_buffer_free(xch, info); return rc; } @@ -957,8 +947,8 @@ int xc_vcpu_setcontext(xc_interface *xch vcpu_guest_context_any_t *ctxt) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt, sizeof(vcpu_guest_context_any_t), XC_HYPERCALL_BUFFER_BOUNCE_IN); int rc; - size_t sz = sizeof(vcpu_guest_context_any_t); if (ctxt == NULL) { @@ -966,16 +956,17 @@ int xc_vcpu_setcontext(xc_interface *xch return -1; } + if ( xc_hypercall_bounce_pre(xch, ctxt) ) + return -1; + domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = vcpu; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); - if ( (rc = lock_pages(xch, ctxt, sz)) != 0 ) - return rc; rc = do_domctl(xch, &domctl); - - unlock_pages(xch, ctxt, sz); + + xc_hypercall_bounce_post(xch, ctxt); return rc; } @@ -1101,6 +1092,13 @@ int xc_get_device_group( { int rc; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(sdev_array, max_sdevs * sizeof(*sdev_array), XC_HYPERCALL_BUFFER_BOUNCE_IN); + + if ( xc_hypercall_bounce_pre(xch, sdev_array) ) + { + PERROR("Could not bounce buffer for xc_get_device_group"); + return -1; + } domctl.cmd = XEN_DOMCTL_get_device_group; domctl.domain = (domid_t)domid; @@ -1108,17 +1106,14 @@ int 
xc_get_device_group( domctl.u.get_device_group.machine_bdf = machine_bdf; domctl.u.get_device_group.max_sdevs = max_sdevs; - set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); + xc_set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); - if ( lock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 ) - { - PERROR("Could not lock memory for xc_get_device_group"); - return -ENOMEM; - } rc = do_domctl(xch, &domctl); - unlock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)); *num_sdevs = domctl.u.get_device_group.num_sdevs; + + xc_hypercall_bounce_post(xch, sdev_array); + return rc; } diff -r f262590f9b94 -r 6766a5b07735 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_private.c Thu Oct 21 09:37:34 2010 +0100 @@ -322,12 +322,18 @@ int xc_get_pfn_type_batch(xc_interface * int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom, unsigned int num, xen_pfn_t *arr) { + int rc; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(arr, sizeof(*arr) * num, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + if ( xc_hypercall_bounce_pre(xch, arr) ) + return -1; domctl.cmd = XEN_DOMCTL_getpageframeinfo3; domctl.domain = (domid_t)dom; domctl.u.getpageframeinfo3.num = num; - set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); - return do_domctl(xch, &domctl); + xc_set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); + rc = do_domctl(xch, &domctl); + xc_hypercall_bounce_post(xch, arr); + return rc; } int xc_mmuext_op( @@ -498,25 +504,27 @@ int xc_get_pfn_list(xc_interface *xch, unsigned long max_pfns) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(pfn_buf, max_pfns * sizeof(*pfn_buf), XC_HYPERCALL_BUFFER_BOUNCE_OUT); int ret; - domctl.cmd = XEN_DOMCTL_getmemlist; - domctl.domain = (domid_t)domid; - domctl.u.getmemlist.max_pfns = max_pfns; - set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); #ifdef VALGRIND memset(pfn_buf, 0, max_pfns * sizeof(*pfn_buf)); #endif - if ( lock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, pfn_buf) ) { - PERROR("xc_get_pfn_list: pfn_buf lock failed"); + PERROR("xc_get_pfn_list: pfn_buf bounce failed"); return -1; } + domctl.cmd = XEN_DOMCTL_getmemlist; + domctl.domain = (domid_t)domid; + domctl.u.getmemlist.max_pfns = max_pfns; + xc_set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); + ret = do_domctl(xch, &domctl); - unlock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)); + xc_hypercall_bounce_post(xch, pfn_buf); return (ret < 0) ? 
-1 : domctl.u.getmemlist.num_pfns; } diff -r f262590f9b94 -r 6766a5b07735 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 @@ -211,17 +211,18 @@ static inline int do_domctl(xc_interface { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(domctl, sizeof(*domctl), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - if ( hcall_buf_prep(xch, (void **)&domctl, sizeof(*domctl)) != 0 ) + domctl->interface_version = XEN_DOMCTL_INTERFACE_VERSION; + + if ( xc_hypercall_bounce_pre(xch, domctl) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce buffer for domctl hypercall"); goto out1; } - domctl->interface_version = XEN_DOMCTL_INTERFACE_VERSION; - hypercall.op = __HYPERVISOR_domctl; - hypercall.arg[0] = (unsigned long)domctl; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(domctl); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { @@ -230,8 +231,7 @@ static inline int do_domctl(xc_interface " rebuild the user-space tool set?\n"); } - hcall_buf_release(xch, (void **)&domctl, sizeof(*domctl)); - + xc_hypercall_bounce_post(xch, domctl); out1: return ret; } diff -r f262590f9b94 -r 6766a5b07735 tools/libxc/xc_resume.c --- a/tools/libxc/xc_resume.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_resume.c Thu Oct 21 09:37:34 2010 +0100 @@ -196,12 +196,6 @@ static int xc_domain_resume_any(xc_inter goto out; } - if ( lock_pages(xch, &ctxt, sizeof(ctxt)) ) - { - ERROR("Unable to lock ctxt"); - goto out; - } - if ( xc_vcpu_getcontext(xch, domid, 0, &ctxt) ) { ERROR("Could not get vcpu context"); @@ -235,7 +229,6 @@ static int xc_domain_resume_any(xc_inter #if defined(__i386__) || defined(__x86_64__) out: - unlock_pages(xch, (void *)&ctxt, sizeof ctxt); if (p2m) munmap(p2m, P2M_FL_ENTRIES*PAGE_SIZE); if (p2m_frame_list) _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
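A pattern introduced here recurs in later patches: when a hypercall
buffer crosses a function boundary, the callee receives an
xc_hypercall_buffer_t * and redeclares the shadow locally so the usual
macros keep working via param_shadow. Restating the launch_vm() idiom
from this patch in minimal form (the function name is hypothetical):

/* Hypothetical callee modelled on launch_vm(): the caller passes
 * HYPERCALL_BUFFER(ctxt); DECLARE_HYPERCALL_BUFFER_ARGUMENT makes the
 * macros resolve to the caller's buffer, not the local declaration. */
static int set_vcpu0_context(xc_interface *xch, domid_t domid,
                             xc_hypercall_buffer_t *ctxt)
{
    DECLARE_DOMCTL;
    DECLARE_HYPERCALL_BUFFER_ARGUMENT(ctxt);

    domctl.cmd = XEN_DOMCTL_setvcpucontext;
    domctl.domain = domid;
    domctl.u.vcpucontext.vcpu = 0;
    xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt);

    return do_domctl(xch, &domctl);
}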
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 04 of 25] libxc: convert shadow domctl interfaces and save/restore over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID 7a0260895b7f4c596f68cfef0fddd4959e116662 # Parent 6766a5b07735888ae5c5fdc16cbb4a915a997f05 libxc: convert shadow domctl interfaces and save/restore over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 6766a5b07735 -r 7a0260895b7f tools/libxc/ia64/xc_ia64_linux_save.c --- a/tools/libxc/ia64/xc_ia64_linux_save.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/ia64/xc_ia64_linux_save.c Thu Oct 21 09:37:34 2010 +0100 @@ -432,9 +432,9 @@ xc_domain_save(xc_interface *xch, int io int last_iter = 0; /* Bitmap of pages to be sent. */ - unsigned long *to_send = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, to_send); /* Bitmap of pages not to be sent (because dirtied). */ - unsigned long *to_skip = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, to_skip); char *mem; @@ -542,8 +542,8 @@ xc_domain_save(xc_interface *xch, int io last_iter = 0; bitmap_size = ((p2m_size + BITS_PER_LONG-1) & ~(BITS_PER_LONG-1)) / 8; - to_send = malloc(bitmap_size); - to_skip = malloc(bitmap_size); + to_send = xc_hypercall_buffer_alloc(xch, to_send, bitmap_size); + to_skip = xc_hypercall_buffer_alloc(xch, to_skip, bitmap_size); if (!to_send || !to_skip) { ERROR("Couldn''t allocate bitmap array"); @@ -552,15 +552,6 @@ xc_domain_save(xc_interface *xch, int io /* Initially all the pages must be sent. */ memset(to_send, 0xff, bitmap_size); - - if (lock_pages(to_send, bitmap_size)) { - ERROR("Unable to lock_pages to_send"); - goto out; - } - if (lock_pages(to_skip, bitmap_size)) { - ERROR("Unable to lock_pages to_skip"); - goto out; - } /* Enable qemu-dm logging dirty pages to xen */ if (hvm && !callbacks->switch_qemu_logdirty(dom, 1, callbacks->data)) { @@ -621,7 +612,7 @@ xc_domain_save(xc_interface *xch, int io if (!last_iter) { if (xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, - to_skip, p2m_size, + HYPERCALL_BUFFER(to_skip), p2m_size, NULL, 0, NULL) != p2m_size) { ERROR("Error peeking shadow bitmap"); goto out; @@ -713,7 +704,7 @@ xc_domain_save(xc_interface *xch, int io /* Pages to be sent are pages which were dirty. 
*/ if (xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_CLEAN, - to_send, p2m_size, + HYPERCALL_BUFFER(to_send), p2m_size, NULL, 0, NULL ) != p2m_size) { ERROR("Error flushing shadow PT"); goto out; @@ -779,7 +770,7 @@ xc_domain_save(xc_interface *xch, int io //print_stats(xch, dom, 0, &stats, 1); if ( xc_shadow_control(xch, dom, - XEN_DOMCTL_SHADOW_OP_CLEAN, to_send, + XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send), p2m_size, NULL, 0, NULL) != p2m_size ) { ERROR("Error flushing shadow PT"); @@ -799,10 +790,8 @@ xc_domain_save(xc_interface *xch, int io } } - unlock_pages(to_send, bitmap_size); - free(to_send); - unlock_pages(to_skip, bitmap_size); - free(to_skip); + xc_hypercall_buffer_free(xch, to_send); + xc_hypercall_buffer_free(xch, to_skip); if (live_shinfo) munmap(live_shinfo, PAGE_SIZE); if (memmap_info) diff -r 6766a5b07735 -r 7a0260895b7f tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 @@ -400,7 +400,7 @@ int xc_shadow_control(xc_interface *xch, int xc_shadow_control(xc_interface *xch, uint32_t domid, unsigned int sop, - unsigned long *dirty_bitmap, + xc_hypercall_buffer_t *dirty_bitmap, unsigned long pages, unsigned long *mb, uint32_t mode, @@ -408,14 +408,17 @@ int xc_shadow_control(xc_interface *xch, { int rc; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap); + domctl.cmd = XEN_DOMCTL_shadow_op; domctl.domain = (domid_t)domid; domctl.u.shadow_op.op = sop; domctl.u.shadow_op.pages = pages; domctl.u.shadow_op.mb = mb ? *mb : 0; domctl.u.shadow_op.mode = mode; - set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, - (uint8_t *)dirty_bitmap); + if (dirty_bitmap != NULL) + xc_set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, + dirty_bitmap); rc = do_domctl(xch, &domctl); diff -r 6766a5b07735 -r 7a0260895b7f tools/libxc/xc_domain_restore.c --- a/tools/libxc/xc_domain_restore.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_domain_restore.c Thu Oct 21 09:37:34 2010 +0100 @@ -1063,7 +1063,7 @@ int xc_domain_restore(xc_interface *xch, shared_info_any_t *new_shared_info; /* A copy of the CPU context of the guest. */ - vcpu_guest_context_any_t ctxt; + DECLARE_HYPERCALL_BUFFER(vcpu_guest_context_any_t, ctxt); /* A table containing the type of each PFN (/not/ MFN!). */ unsigned long *pfn_type = NULL; @@ -1112,6 +1112,15 @@ int xc_domain_restore(xc_interface *xch, if ( superpages ) return 1; + + ctxt = xc_hypercall_buffer_alloc(xch, ctxt, sizeof(*ctxt)); + + if ( ctxt == NULL ) + { + PERROR("Unable to allocate VCPU ctxt buffer"); + return 1; + } + if ( (orig_io_fd_flags = fcntl(io_fd, F_GETFL, 0)) < 0 ) { PERROR("unable to read IO FD flags"); @@ -1539,26 +1548,20 @@ int xc_domain_restore(xc_interface *xch, } } - if ( lock_pages(xch, &ctxt, sizeof(ctxt)) ) - { - PERROR("Unable to lock ctxt"); - return 1; - } - vcpup = tailbuf.u.pv.vcpubuf; for ( i = 0; i <= max_vcpu_id; i++ ) { if ( !(vcpumap & (1ULL << i)) ) continue; - memcpy(&ctxt, vcpup, ((dinfo->guest_width == 8) ? sizeof(ctxt.x64) - : sizeof(ctxt.x32))); - vcpup += (dinfo->guest_width == 8) ? sizeof(ctxt.x64) : sizeof(ctxt.x32); + memcpy(ctxt, vcpup, ((dinfo->guest_width == 8) ? sizeof(ctxt->x64) + : sizeof(ctxt->x32))); + vcpup += (dinfo->guest_width == 8) ? 
sizeof(ctxt->x64) : sizeof(ctxt->x32); DPRINTF("read VCPU %d\n", i); if ( !new_ctxt_format ) - SET_FIELD(&ctxt, flags, GET_FIELD(&ctxt, flags) | VGCF_online); + SET_FIELD(ctxt, flags, GET_FIELD(ctxt, flags) | VGCF_online); if ( i == 0 ) { @@ -1566,7 +1569,7 @@ int xc_domain_restore(xc_interface *xch, * Uncanonicalise the suspend-record frame number and poke * resume record. */ - pfn = GET_FIELD(&ctxt, user_regs.edx); + pfn = GET_FIELD(ctxt, user_regs.edx); if ( (pfn >= dinfo->p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) { @@ -1574,7 +1577,7 @@ int xc_domain_restore(xc_interface *xch, goto out; } mfn = ctx->p2m[pfn]; - SET_FIELD(&ctxt, user_regs.edx, mfn); + SET_FIELD(ctxt, user_regs.edx, mfn); start_info = xc_map_foreign_range( xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE, mfn); SET_FIELD(start_info, nr_pages, dinfo->p2m_size); @@ -1589,15 +1592,15 @@ int xc_domain_restore(xc_interface *xch, munmap(start_info, PAGE_SIZE); } /* Uncanonicalise each GDT frame number. */ - if ( GET_FIELD(&ctxt, gdt_ents) > 8192 ) + if ( GET_FIELD(ctxt, gdt_ents) > 8192 ) { ERROR("GDT entry count out of range"); goto out; } - for ( j = 0; (512*j) < GET_FIELD(&ctxt, gdt_ents); j++ ) + for ( j = 0; (512*j) < GET_FIELD(ctxt, gdt_ents); j++ ) { - pfn = GET_FIELD(&ctxt, gdt_frames[j]); + pfn = GET_FIELD(ctxt, gdt_frames[j]); if ( (pfn >= dinfo->p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) { @@ -1605,10 +1608,10 @@ int xc_domain_restore(xc_interface *xch, j, (unsigned long)pfn); goto out; } - SET_FIELD(&ctxt, gdt_frames[j], ctx->p2m[pfn]); + SET_FIELD(ctxt, gdt_frames[j], ctx->p2m[pfn]); } /* Uncanonicalise the page table base pointer. */ - pfn = UNFOLD_CR3(GET_FIELD(&ctxt, ctrlreg[3])); + pfn = UNFOLD_CR3(GET_FIELD(ctxt, ctrlreg[3])); if ( pfn >= dinfo->p2m_size ) { @@ -1625,12 +1628,12 @@ int xc_domain_restore(xc_interface *xch, (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); goto out; } - SET_FIELD(&ctxt, ctrlreg[3], FOLD_CR3(ctx->p2m[pfn])); + SET_FIELD(ctxt, ctrlreg[3], FOLD_CR3(ctx->p2m[pfn])); /* Guest pagetable (x86/64) stored in otherwise-unused CR1. 
*/ - if ( (ctx->pt_levels == 4) && (ctxt.x64.ctrlreg[1] & 1) ) + if ( (ctx->pt_levels == 4) && (ctxt->x64.ctrlreg[1] & 1) ) { - pfn = UNFOLD_CR3(ctxt.x64.ctrlreg[1] & ~1); + pfn = UNFOLD_CR3(ctxt->x64.ctrlreg[1] & ~1); if ( pfn >= dinfo->p2m_size ) { ERROR("User PT base is bad: pfn=%lu p2m_size=%lu", @@ -1645,12 +1648,12 @@ int xc_domain_restore(xc_interface *xch, (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); goto out; } - ctxt.x64.ctrlreg[1] = FOLD_CR3(ctx->p2m[pfn]); + ctxt->x64.ctrlreg[1] = FOLD_CR3(ctx->p2m[pfn]); } domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = (domid_t)dom; domctl.u.vcpucontext.vcpu = i; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt.c); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); frc = xc_domctl(xch, &domctl); if ( frc != 0 ) { @@ -1791,6 +1794,7 @@ int xc_domain_restore(xc_interface *xch, out: if ( (rc != 0) && (dom != 0) ) xc_domain_destroy(xch, dom); + xc_hypercall_buffer_free(xch, ctxt); free(mmu); free(ctx->p2m); free(pfn_type); diff -r 6766a5b07735 -r 7a0260895b7f tools/libxc/xc_domain_save.c --- a/tools/libxc/xc_domain_save.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_domain_save.c Thu Oct 21 09:37:34 2010 +0100 @@ -411,7 +411,7 @@ static int print_stats(xc_interface *xch static int analysis_phase(xc_interface *xch, uint32_t domid, struct save_ctx *ctx, - unsigned long *arr, int runs) + xc_hypercall_buffer_t *arr, int runs) { long long start, now; xc_shadow_op_stats_t stats; @@ -909,7 +909,9 @@ int xc_domain_save(xc_interface *xch, in - that should be sent this iteration (unless later marked as skip); - to skip this iteration because already dirty; - to fixup by sending at the end if not already resent; */ - unsigned long *to_send = NULL, *to_skip = NULL, *to_fix = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, to_skip); + DECLARE_HYPERCALL_BUFFER(unsigned long, to_send); + unsigned long *to_fix = NULL; xc_shadow_op_stats_t stats; @@ -1038,9 +1040,9 @@ int xc_domain_save(xc_interface *xch, in sent_last_iter = dinfo->p2m_size; /* Setup to_send / to_fix and to_skip bitmaps */ - to_send = xc_memalign(PAGE_SIZE, ROUNDUP(BITMAP_SIZE, PAGE_SHIFT)); + to_send = xc_hypercall_buffer_alloc_pages(xch, to_send, NRPAGES(BITMAP_SIZE)); + to_skip = xc_hypercall_buffer_alloc_pages(xch, to_skip, NRPAGES(BITMAP_SIZE)); to_fix = calloc(1, BITMAP_SIZE); - to_skip = xc_memalign(PAGE_SIZE, ROUNDUP(BITMAP_SIZE, PAGE_SHIFT)); if ( !to_send || !to_fix || !to_skip ) { @@ -1050,20 +1052,7 @@ int xc_domain_save(xc_interface *xch, in memset(to_send, 0xff, BITMAP_SIZE); - if ( lock_pages(xch, to_send, BITMAP_SIZE) ) - { - PERROR("Unable to lock to_send"); - return 1; - } - - /* (to fix is local only) */ - if ( lock_pages(xch, to_skip, BITMAP_SIZE) ) - { - PERROR("Unable to lock to_skip"); - return 1; - } - - if ( hvm ) + if ( hvm ) { /* Need another buffer for HVM context */ hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0); @@ -1080,7 +1069,7 @@ int xc_domain_save(xc_interface *xch, in } } - analysis_phase(xch, dom, ctx, to_skip, 0); + analysis_phase(xch, dom, ctx, HYPERCALL_BUFFER(to_skip), 0); pfn_type = xc_memalign(PAGE_SIZE, ROUNDUP( MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); @@ -1192,7 +1181,7 @@ int xc_domain_save(xc_interface *xch, in /* Slightly wasteful to peek the whole array evey time, but this is fast enough for the moment. 
*/ frc = xc_shadow_control( - xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, to_skip, + xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, HYPERCALL_BUFFER(to_skip), dinfo->p2m_size, NULL, 0, NULL); if ( frc != dinfo->p2m_size ) { @@ -1532,8 +1521,8 @@ int xc_domain_save(xc_interface *xch, in } - if ( xc_shadow_control(xch, dom, - XEN_DOMCTL_SHADOW_OP_CLEAN, to_send, + if ( xc_shadow_control(xch, dom, + XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send), dinfo->p2m_size, NULL, 0, &stats) != dinfo->p2m_size ) { PERROR("Error flushing shadow PT"); @@ -1861,7 +1850,7 @@ int xc_domain_save(xc_interface *xch, in print_stats(xch, dom, 0, &stats, 1); if ( xc_shadow_control(xch, dom, - XEN_DOMCTL_SHADOW_OP_CLEAN, to_send, + XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send), dinfo->p2m_size, NULL, 0, &stats) != dinfo->p2m_size ) { PERROR("Error flushing shadow PT"); @@ -1892,12 +1881,13 @@ int xc_domain_save(xc_interface *xch, in if ( ctx->live_m2p ) munmap(ctx->live_m2p, M2P_SIZE(ctx->max_mfn)); + xc_hypercall_buffer_free_pages(xch, to_send, NRPAGES(BITMAP_SIZE)); + xc_hypercall_buffer_free_pages(xch, to_skip, NRPAGES(BITMAP_SIZE)); + free(pfn_type); free(pfn_batch); free(pfn_err); - free(to_send); free(to_fix); - free(to_skip); DPRINTF("Save exit rc=%d\n",rc); diff -r 6766a5b07735 -r 7a0260895b7f tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xenctrl.h Thu Oct 21 09:37:34 2010 +0100 @@ -598,7 +598,7 @@ int xc_shadow_control(xc_interface *xch, int xc_shadow_control(xc_interface *xch, uint32_t domid, unsigned int sop, - unsigned long *dirty_bitmap, + xc_hypercall_buffer_t *dirty_bitmap, unsigned long pages, unsigned long *mb, uint32_t mode, diff -r 6766a5b07735 -r 7a0260895b7f tools/libxc/xg_private.h --- a/tools/libxc/xg_private.h Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xg_private.h Thu Oct 21 09:37:34 2010 +0100 @@ -157,6 +157,7 @@ typedef l4_pgentry_64_t l4_pgentry_t; #define PAGE_MASK_IA64 (~(PAGE_SIZE_IA64-1)) #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1)) +#define NRPAGES(x) (ROUNDUP(x, PAGE_SHIFT) >> PAGE_SHIFT) /* XXX SMH: following skanky macros rely on variable p2m_size being set */ _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
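For long-lived, page-sized allocations such as the dirty bitmaps above,
the page-granular allocator is used and the buffer handle is passed to
the new xc_shadow_control() signature. A self-contained sketch of the
caller side (hypothetical helper name; NRPAGES() is the macro this
patch adds to xg_private.h):

/* Hypothetical helper mirroring the save-path conversion above. */
static int clean_dirty_bitmap(xc_interface *xch, uint32_t dom,
                              unsigned long p2m_size, size_t bitmap_size)
{
    DECLARE_HYPERCALL_BUFFER(unsigned long, to_send);
    int rc = -1;

    to_send = xc_hypercall_buffer_alloc_pages(xch, to_send,
                                              NRPAGES(bitmap_size));
    if ( to_send == NULL )
        return -1;

    memset(to_send, 0xff, bitmap_size);        /* initially all pages dirty */

    if ( xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_CLEAN,
                           HYPERCALL_BUFFER(to_send), p2m_size,
                           NULL, 0, NULL) == p2m_size )
        rc = 0;

    xc_hypercall_buffer_free_pages(xch, to_send, NRPAGES(bitmap_size));
    return rc;
}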
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 05 of 25] libxc: convert sysctl interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID 71e4092089af29d01192810e4bd4a732c8ed3933 # Parent 7a0260895b7f4c596f68cfef0fddd4959e116662 libxc: convert sysctl interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xc_cpupool.c --- a/tools/libxc/xc_cpupool.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_cpupool.c Thu Oct 21 09:37:34 2010 +0100 @@ -72,8 +72,14 @@ int xc_cpupool_getinfo(xc_interface *xch int err = 0; int p; uint32_t poolid = first_poolid; - uint8_t local[sizeof (info->cpumap)]; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); + + local = xc_hypercall_buffer_alloc(xch, local, sizeof (info->cpumap)); + if ( local == NULL ) { + PERROR("Could not allocate locked memory for xc_cpupool_getinfo"); + return -ENOMEM; + } memset(info, 0, n_max * sizeof(xc_cpupoolinfo_t)); @@ -82,17 +88,10 @@ int xc_cpupool_getinfo(xc_interface *xch sysctl.cmd = XEN_SYSCTL_cpupool_op; sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_INFO; sysctl.u.cpupool_op.cpupool_id = poolid; - set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(info->cpumap) * 8; - if ( (err = lock_pages(xch, local, sizeof(local))) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - break; - } err = do_sysctl_save(xch, &sysctl); - unlock_pages(xch, local, sizeof (local)); - if ( err < 0 ) break; @@ -103,6 +102,8 @@ int xc_cpupool_getinfo(xc_interface *xch poolid = sysctl.u.cpupool_op.cpupool_id + 1; info++; } + + xc_hypercall_buffer_free(xch, local); if ( p == 0 ) return err; @@ -153,27 +154,28 @@ int xc_cpupool_freeinfo(xc_interface *xc uint64_t *cpumap) { int err; - uint8_t local[sizeof (*cpumap)]; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); + + local = xc_hypercall_buffer_alloc(xch, local, sizeof (*cpumap)); + if ( local == NULL ) { + PERROR("Could not allocate locked memory for xc_cpupool_freeinfo"); + return -ENOMEM; + } sysctl.cmd = XEN_SYSCTL_cpupool_op; sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_FREEINFO; - set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(*cpumap) * 8; - if ( (err = lock_pages(xch, local, sizeof(local))) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - return err; - } - err = do_sysctl_save(xch, &sysctl); - unlock_pages(xch, local, sizeof (local)); if (err < 0) return err; bitmap_byte_to_64(cpumap, local, sizeof(local) * 8); + xc_hypercall_buffer_free(xch, local); + return 0; } diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 @@ -245,21 +245,22 @@ int xc_domain_getinfolist(xc_interface * { int ret = 0; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(info, max_domains*sizeof(*info), XC_HYPERCALL_BUFFER_BOUNCE_OUT); - if ( lock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, info) ) return -1; sysctl.cmd = XEN_SYSCTL_getdomaininfolist; sysctl.u.getdomaininfolist.first_domain = first_domain; sysctl.u.getdomaininfolist.max_domains = max_domains; - set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); + xc_set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); if ( 
xc_sysctl(xch, &sysctl) < 0 ) ret = -1; else ret = sysctl.u.getdomaininfolist.num_domains; - unlock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)); + xc_hypercall_bounce_post(xch, info); return ret; } diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_misc.c Thu Oct 21 09:37:34 2010 +0100 @@ -27,11 +27,15 @@ int xc_readconsolering(xc_interface *xch int clear, int incremental, uint32_t *pindex) { int ret; + unsigned int nr_chars = *pnr_chars; DECLARE_SYSCTL; - unsigned int nr_chars = *pnr_chars; + DECLARE_HYPERCALL_BOUNCE(buffer, nr_chars, XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, buffer) ) + return -1; sysctl.cmd = XEN_SYSCTL_readconsole; - set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); + xc_set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); sysctl.u.readconsole.count = nr_chars; sysctl.u.readconsole.clear = clear; sysctl.u.readconsole.incremental = 0; @@ -41,9 +45,6 @@ int xc_readconsolering(xc_interface *xch sysctl.u.readconsole.incremental = incremental; } - if ( (ret = lock_pages(xch, buffer, nr_chars)) != 0 ) - return ret; - if ( (ret = do_sysctl(xch, &sysctl)) == 0 ) { *pnr_chars = sysctl.u.readconsole.count; @@ -51,7 +52,7 @@ int xc_readconsolering(xc_interface *xch *pindex = sysctl.u.readconsole.index; } - unlock_pages(xch, buffer, nr_chars); + xc_hypercall_bounce_post(xch, buffer); return ret; } @@ -60,17 +61,18 @@ int xc_send_debug_keys(xc_interface *xch { int ret, len = strlen(keys); DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(keys, len, XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, keys) ) + return -1; sysctl.cmd = XEN_SYSCTL_debug_keys; - set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); + xc_set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); sysctl.u.debug_keys.nr_keys = len; - - if ( (ret = lock_pages(xch, keys, len)) != 0 ) - return ret; ret = do_sysctl(xch, &sysctl); - unlock_pages(xch, keys, len); + xc_hypercall_bounce_post(xch, keys); return ret; } @@ -173,8 +175,8 @@ int xc_perfc_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_reset; - set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL); - set_xen_guest_handle(sysctl.u.perfc_op.val, NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -188,8 +190,8 @@ int xc_perfc_query_number(xc_interface * sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL); - set_xen_guest_handle(sysctl.u.perfc_op.val, NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -202,15 +204,17 @@ int xc_perfc_query_number(xc_interface * } int xc_perfc_query(xc_interface *xch, - xc_perfc_desc_t *desc, - xc_perfc_val_t *val) + struct xc_hypercall_buffer *desc, + struct xc_hypercall_buffer *val) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(desc); + DECLARE_HYPERCALL_BUFFER_ARGUMENT(val); sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); - set_xen_guest_handle(sysctl.u.perfc_op.val, val); + xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); + xc_set_xen_guest_handle(sysctl.u.perfc_op.val, 
val); return do_sysctl(xch, &sysctl); } @@ -221,7 +225,7 @@ int xc_lockprof_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_reset; - set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL); + xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -234,7 +238,7 @@ int xc_lockprof_query_number(xc_interfac sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; - set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL); + xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -244,17 +248,18 @@ int xc_lockprof_query_number(xc_interfac } int xc_lockprof_query(xc_interface *xch, - uint32_t *n_elems, - uint64_t *time, - xc_lockprof_data_t *data) + uint32_t *n_elems, + uint64_t *time, + struct xc_hypercall_buffer *data) { int rc; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(data); sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; sysctl.u.lockprof_op.max_elem = *n_elems; - set_xen_guest_handle(sysctl.u.lockprof_op.data, data); + xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, data); rc = do_sysctl(xch, &sysctl); @@ -268,20 +273,21 @@ int xc_getcpuinfo(xc_interface *xch, int { int rc; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(info, max_cpus*sizeof(*info), XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, info) ) + return -1; sysctl.cmd = XEN_SYSCTL_getcpuinfo; - sysctl.u.getcpuinfo.max_cpus = max_cpus; - set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); - - if ( (rc = lock_pages(xch, info, max_cpus*sizeof(*info))) != 0 ) - return rc; + sysctl.u.getcpuinfo.max_cpus = max_cpus; + xc_set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); rc = do_sysctl(xch, &sysctl); - unlock_pages(xch, info, max_cpus*sizeof(*info)); + xc_hypercall_bounce_post(xch, info); if ( nr_cpus ) - *nr_cpus = sysctl.u.getcpuinfo.nr_cpus; + *nr_cpus = sysctl.u.getcpuinfo.nr_cpus; return rc; } diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xc_offline_page.c --- a/tools/libxc/xc_offline_page.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_offline_page.c Thu Oct 21 09:37:34 2010 +0100 @@ -66,14 +66,15 @@ int xc_mark_page_online(xc_interface *xc unsigned long end, uint32_t *status) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int ret = -1; if ( !status || (end < start) ) return -EINVAL; - if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1))) + if ( xc_hypercall_bounce_pre(xch, status) ) { - ERROR("Could not lock memory for xc_mark_page_online\n"); + ERROR("Could not bounce memory for xc_mark_page_online\n"); return -EINVAL; } @@ -81,10 +82,10 @@ int xc_mark_page_online(xc_interface *xc sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_online; sysctl.u.page_offline.end = end; - set_xen_guest_handle(sysctl.u.page_offline.status, status); + xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); - unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)); + xc_hypercall_bounce_post(xch, status); return ret; } @@ -93,14 +94,15 @@ int xc_mark_page_offline(xc_interface *x unsigned long end, uint32_t *status) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int ret = -1; if ( !status || (end < start) ) return -EINVAL; - if 
(lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1))) + if ( xc_hypercall_bounce_pre(xch, status) ) { - ERROR("Could not lock memory for xc_mark_page_offline"); + ERROR("Could not bounce memory for xc_mark_page_offline"); return -EINVAL; } @@ -108,10 +110,10 @@ int xc_mark_page_offline(xc_interface *x sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_offline; sysctl.u.page_offline.end = end; - set_xen_guest_handle(sysctl.u.page_offline.status, status); + xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); - unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)); + xc_hypercall_bounce_post(xch, status); return ret; } @@ -120,14 +122,15 @@ int xc_query_page_offline_status(xc_inte unsigned long end, uint32_t *status) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int ret = -1; if ( !status || (end < start) ) return -EINVAL; - if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1))) + if ( xc_hypercall_bounce_pre(xch, status) ) { - ERROR("Could not lock memory for xc_query_page_offline_status\n"); + ERROR("Could not bounce memory for xc_query_page_offline_status\n"); return -EINVAL; } @@ -135,10 +138,10 @@ int xc_query_page_offline_status(xc_inte sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_query_page_offline; sysctl.u.page_offline.end = end; - set_xen_guest_handle(sysctl.u.page_offline.status, status); + xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); - unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)); + xc_hypercall_bounce_post(xch, status); return ret; } diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xc_pm.c --- a/tools/libxc/xc_pm.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_pm.c Thu Oct 21 09:37:34 2010 +0100 @@ -45,6 +45,10 @@ int xc_pm_get_pxstat(xc_interface *xch, int xc_pm_get_pxstat(xc_interface *xch, int cpuid, struct xc_px_stat *pxpt) { DECLARE_SYSCTL; + /* Sizes unknown until xc_pm_get_max_px */ + DECLARE_NAMED_HYPERCALL_BOUNCE(trans, &pxpt->trans_pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_NAMED_HYPERCALL_BOUNCE(pt, &pxpt->pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + int max_px, ret; if ( !pxpt || !(pxpt->trans_pt) || !(pxpt->pt) ) @@ -53,14 +57,15 @@ int xc_pm_get_pxstat(xc_interface *xch, if ( (ret = xc_pm_get_max_px(xch, cpuid, &max_px)) != 0) return ret; - if ( (ret = lock_pages(xch, pxpt->trans_pt, - max_px * max_px * sizeof(uint64_t))) != 0 ) + HYPERCALL_BOUNCE_SET_SIZE(trans, max_px * max_px * sizeof(uint64_t)); + HYPERCALL_BOUNCE_SET_SIZE(pt, max_px * sizeof(struct xc_px_val)); + + if ( xc_hypercall_bounce_pre(xch, trans) ) return ret; - if ( (ret = lock_pages(xch, pxpt->pt, - max_px * sizeof(struct xc_px_val))) != 0 ) + if ( xc_hypercall_bounce_pre(xch, pt) ) { - unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t)); + xc_hypercall_bounce_post(xch, trans); return ret; } @@ -68,15 +73,14 @@ int xc_pm_get_pxstat(xc_interface *xch, sysctl.u.get_pmstat.type = PMSTAT_get_pxstat; sysctl.u.get_pmstat.cpuid = cpuid; sysctl.u.get_pmstat.u.getpx.total = max_px; - set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, pxpt->trans_pt); - set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, - (pm_px_val_t *)pxpt->pt); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, trans); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, pt); ret = xc_sysctl(xch, &sysctl); if ( ret ) { - 
unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t)); - unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val)); + xc_hypercall_bounce_post(xch, trans); + xc_hypercall_bounce_post(xch, pt); return ret; } @@ -85,8 +89,8 @@ int xc_pm_get_pxstat(xc_interface *xch, pxpt->last = sysctl.u.get_pmstat.u.getpx.last; pxpt->cur = sysctl.u.get_pmstat.u.getpx.cur; - unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t)); - unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val)); + xc_hypercall_bounce_post(xch, trans); + xc_hypercall_bounce_post(xch, pt); return ret; } @@ -120,6 +124,8 @@ int xc_pm_get_cxstat(xc_interface *xch, int xc_pm_get_cxstat(xc_interface *xch, int cpuid, struct xc_cx_stat *cxpt) { DECLARE_SYSCTL; + DECLARE_NAMED_HYPERCALL_BOUNCE(triggers, &cxpt->triggers, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_NAMED_HYPERCALL_BOUNCE(residencies, &cxpt->residencies, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int max_cx, ret; if( !cxpt || !(cxpt->triggers) || !(cxpt->residencies) ) @@ -128,22 +134,23 @@ int xc_pm_get_cxstat(xc_interface *xch, if ( (ret = xc_pm_get_max_cx(xch, cpuid, &max_cx)) ) goto unlock_0; - if ( (ret = lock_pages(xch, cxpt, sizeof(struct xc_cx_stat))) ) + HYPERCALL_BOUNCE_SET_SIZE(triggers, max_cx * sizeof(uint64_t)); + HYPERCALL_BOUNCE_SET_SIZE(residencies, max_cx * sizeof(uint64_t)); + + ret = -1; + if ( xc_hypercall_bounce_pre(xch, triggers) ) goto unlock_0; - if ( (ret = lock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t))) ) + if ( xc_hypercall_bounce_pre(xch, residencies) ) goto unlock_1; - if ( (ret = lock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t))) ) - goto unlock_2; sysctl.cmd = XEN_SYSCTL_get_pmstat; sysctl.u.get_pmstat.type = PMSTAT_get_cxstat; sysctl.u.get_pmstat.cpuid = cpuid; - set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, cxpt->triggers); - set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, - cxpt->residencies); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, triggers); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, residencies); if ( (ret = xc_sysctl(xch, &sysctl)) ) - goto unlock_3; + goto unlock_2; cxpt->nr = sysctl.u.get_pmstat.u.getcx.nr; cxpt->last = sysctl.u.get_pmstat.u.getcx.last; @@ -154,12 +161,10 @@ int xc_pm_get_cxstat(xc_interface *xch, cxpt->cc3 = sysctl.u.get_pmstat.u.getcx.cc3; cxpt->cc6 = sysctl.u.get_pmstat.u.getcx.cc6; -unlock_3: - unlock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t)); unlock_2: - unlock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t)); + xc_hypercall_bounce_post(xch, residencies); unlock_1: - unlock_pages(xch, cxpt, sizeof(struct xc_cx_stat)); + xc_hypercall_bounce_post(xch, triggers); unlock_0: return ret; } @@ -186,12 +191,19 @@ int xc_get_cpufreq_para(xc_interface *xc DECLARE_SYSCTL; int ret = 0; struct xen_get_cpufreq_para *sys_para = &sysctl.u.pm_op.u.get_para; + DECLARE_NAMED_HYPERCALL_BOUNCE(affected_cpus, + user_para->affected_cpus, + user_para->cpu_num * sizeof(uint32_t), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_NAMED_HYPERCALL_BOUNCE(scaling_available_frequencies, + user_para->scaling_available_frequencies, + user_para->freq_num * sizeof(uint32_t), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_NAMED_HYPERCALL_BOUNCE(scaling_available_governors, + user_para->scaling_available_governors, + user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + bool has_num = user_para->cpu_num && user_para->freq_num && user_para->gov_num; - - if ( (xch < 0) 
|| !user_para ) - return -EINVAL; if ( has_num ) { @@ -200,22 +212,16 @@ int xc_get_cpufreq_para(xc_interface *xc (!user_para->scaling_available_governors) ) return -EINVAL; - if ( (ret = lock_pages(xch, user_para->affected_cpus, - user_para->cpu_num * sizeof(uint32_t))) ) + if ( xc_hypercall_bounce_pre(xch, affected_cpus) ) goto unlock_1; - if ( (ret = lock_pages(xch, user_para->scaling_available_frequencies, - user_para->freq_num * sizeof(uint32_t))) ) + if ( xc_hypercall_bounce_pre(xch, scaling_available_frequencies) ) goto unlock_2; - if ( (ret = lock_pages(xch, user_para->scaling_available_governors, - user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char))) ) + if ( xc_hypercall_bounce_pre(xch, scaling_available_governors) ) goto unlock_3; - set_xen_guest_handle(sys_para->affected_cpus, - user_para->affected_cpus); - set_xen_guest_handle(sys_para->scaling_available_frequencies, - user_para->scaling_available_frequencies); - set_xen_guest_handle(sys_para->scaling_available_governors, - user_para->scaling_available_governors); + xc_set_xen_guest_handle(sys_para->affected_cpus, affected_cpus); + xc_set_xen_guest_handle(sys_para->scaling_available_frequencies, scaling_available_frequencies); + xc_set_xen_guest_handle(sys_para->scaling_available_governors, scaling_available_governors); } sysctl.cmd = XEN_SYSCTL_pm_op; @@ -250,7 +256,7 @@ int xc_get_cpufreq_para(xc_interface *xc user_para->scaling_min_freq = sys_para->scaling_min_freq; user_para->turbo_enabled = sys_para->turbo_enabled; - memcpy(user_para->scaling_driver, + memcpy(user_para->scaling_driver, sys_para->scaling_driver, CPUFREQ_NAME_LEN); memcpy(user_para->scaling_governor, sys_para->scaling_governor, CPUFREQ_NAME_LEN); @@ -263,14 +269,11 @@ int xc_get_cpufreq_para(xc_interface *xc } unlock_4: - unlock_pages(xch, user_para->scaling_available_governors, - user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char)); + xc_hypercall_bounce_post(xch, scaling_available_governors); unlock_3: - unlock_pages(xch, user_para->scaling_available_frequencies, - user_para->freq_num * sizeof(uint32_t)); + xc_hypercall_bounce_post(xch, scaling_available_frequencies); unlock_2: - unlock_pages(xch, user_para->affected_cpus, - user_para->cpu_num * sizeof(uint32_t)); + xc_hypercall_bounce_post(xch, affected_cpus); unlock_1: return ret; } diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 @@ -240,18 +240,18 @@ static inline int do_sysctl(xc_interface { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(sysctl, sizeof(*sysctl), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - if ( hcall_buf_prep(xch, (void **)&sysctl, sizeof(*sysctl)) != 0 ) + sysctl->interface_version = XEN_SYSCTL_INTERFACE_VERSION; + + if ( xc_hypercall_bounce_pre(xch, sysctl) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce buffer for sysctl hypercall"); goto out1; } - sysctl->interface_version = XEN_SYSCTL_INTERFACE_VERSION; - hypercall.op = __HYPERVISOR_sysctl; - hypercall.arg[0] = (unsigned long)sysctl; - + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(sysctl); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { if ( errno == EACCES ) @@ -259,8 +259,7 @@ static inline int do_sysctl(xc_interface " rebuild the user-space tool set?\n"); } - hcall_buf_release(xch, (void **)&sysctl, sizeof(*sysctl)); - + xc_hypercall_bounce_post(xch, sysctl); out1: return ret; } diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xc_tbuf.c --- 
a/tools/libxc/xc_tbuf.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_tbuf.c Thu Oct 21 09:37:34 2010 +0100 @@ -116,9 +116,15 @@ int xc_tbuf_set_cpu_mask(xc_interface *x int xc_tbuf_set_cpu_mask(xc_interface *xch, uint32_t mask) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, bytemap); int ret = -1; uint64_t mask64 = mask; - uint8_t bytemap[sizeof(mask64)]; + + bytemap = xc_hypercall_buffer_alloc(xch, bytemap, sizeof(mask64)); + if ( bytemap == NULL ) + { + PERROR("Could not allocate memory for xc_tbuf_set_cpu_mask hypercall"); + goto out; + } sysctl.cmd = XEN_SYSCTL_tbuf_op; sysctl.interface_version = XEN_SYSCTL_INTERFACE_VERSION; @@ -126,18 +132,12 @@ int xc_tbuf_set_cpu_mask(xc_interface *x bitmap_64_to_byte(bytemap, &mask64, sizeof (mask64) * 8); - set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); + xc_set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8; - - if ( lock_pages(xch, &bytemap, sizeof(bytemap)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } ret = do_sysctl(xch, &sysctl); - unlock_pages(xch, &bytemap, sizeof(bytemap)); + xc_hypercall_buffer_free(xch, bytemap); out: return ret; diff -r 7a0260895b7f -r 71e4092089af tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xenctrl.h Thu Oct 21 09:37:34 2010 +0100 @@ -1022,21 +1022,18 @@ int xc_perfc_query_number(xc_interface * int xc_perfc_query_number(xc_interface *xch, int *nbr_desc, int *nbr_val); -/* IMPORTANT: The caller is responsible for mlock()'ing the @desc and @val - arrays. */ int xc_perfc_query(xc_interface *xch, - xc_perfc_desc_t *desc, - xc_perfc_val_t *val); + xc_hypercall_buffer_t *desc, + xc_hypercall_buffer_t *val); typedef xen_sysctl_lockprof_data_t xc_lockprof_data_t; int xc_lockprof_reset(xc_interface *xch); int xc_lockprof_query_number(xc_interface *xch, uint32_t *n_elems); -/* IMPORTANT: The caller is responsible for mlock()'ing the @data array. */ int xc_lockprof_query(xc_interface *xch, uint32_t *n_elems, uint64_t *time, - xc_lockprof_data_t *data); + xc_hypercall_buffer_t *data); /** * Memory maps a range within one domain to a local address range.
Mappings diff -r 7a0260895b7f -r 71e4092089af tools/misc/xenlockprof.c --- a/tools/misc/xenlockprof.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/misc/xenlockprof.c Thu Oct 21 09:37:34 2010 +0100 @@ -18,22 +18,6 @@ #include <string.h> #include <inttypes.h> -static int lock_pages(void *addr, size_t len) -{ - int e = 0; -#ifndef __sun__ - e = mlock(addr, len); -#endif - return (e); -} - -static void unlock_pages(void *addr, size_t len) -{ -#ifndef __sun__ - munlock(addr, len); -#endif -} - int main(int argc, char *argv[]) { xc_interface *xc_handle; @@ -41,7 +25,7 @@ int main(int argc, char *argv[]) uint64_t time; double l, b, sl, sb; char name[60]; - xc_lockprof_data_t *data; + DECLARE_HYPERCALL_BUFFER(xc_lockprof_data_t, data); if ( (argc > 2) || ((argc == 2) && (strcmp(argv[1], "-r") != 0)) ) { @@ -78,23 +62,21 @@ int main(int argc, char *argv[]) } n += 32; /* just to be sure */ - data = malloc(sizeof(*data) * n); - if ( (data == NULL) || (lock_pages(data, sizeof(*data) * n) != 0) ) + data = xc_hypercall_buffer_alloc(xc_handle, data, sizeof(*data) * n); + if ( data == NULL ) { - fprintf(stderr, "Could not alloc or lock buffers: %d (%s)\n", + fprintf(stderr, "Could not allocate buffers: %d (%s)\n", errno, strerror(errno)); return 1; } i = n; - if ( xc_lockprof_query(xc_handle, &i, &time, data) != 0 ) + if ( xc_lockprof_query(xc_handle, &i, &time, HYPERCALL_BUFFER(data)) != 0 ) { fprintf(stderr, "Error getting profile records: %d (%s)\n", errno, strerror(errno)); return 1; } - - unlock_pages(data, sizeof(*data) * n); if ( i > n ) { @@ -132,5 +114,7 @@ int main(int argc, char *argv[]) printf("total locked time: %20.9fs\n", sl); printf("total blocked time: %20.9fs\n", sb); + xc_hypercall_buffer_free(xc_handle, data); + return 0; } diff -r 7a0260895b7f -r 71e4092089af tools/misc/xenperf.c --- a/tools/misc/xenperf.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/misc/xenperf.c Thu Oct 21 09:37:34 2010 +0100 @@ -68,28 +68,12 @@ const char *hypercall_name_table[64] }; #undef X -static int lock_pages(void *addr, size_t len) -{ - int e = 0; -#ifndef __sun__ - e = mlock(addr, len); -#endif - return (e); -} - -static void unlock_pages(void *addr, size_t len) -{ -#ifndef __sun__ - munlock(addr, len); -#endif -} - int main(int argc, char *argv[]) { int i, j; xc_interface *xc_handle; - xc_perfc_desc_t *pcd; - xc_perfc_val_t *pcv; + DECLARE_HYPERCALL_BUFFER(xc_perfc_desc_t, pcd); + DECLARE_HYPERCALL_BUFFER(xc_perfc_val_t, pcv); xc_perfc_val_t *val; int num_desc, num_val; unsigned int sum, reset = 0, full = 0, pretty = 0; @@ -154,28 +138,22 @@ int main(int argc, char *argv[]) return 1; } - pcd = malloc(sizeof(*pcd) * num_desc); - pcv = malloc(sizeof(*pcv) * num_val); + pcd = xc_hypercall_buffer_alloc(xc_handle, pcd, sizeof(*pcd) * num_desc); + pcv = xc_hypercall_buffer_alloc(xc_handle, pcv, sizeof(*pcv) * num_val); - if ( pcd == NULL - || lock_pages(pcd, sizeof(*pcd) * num_desc) != 0 - || pcv == NULL - || lock_pages(pcv, sizeof(*pcv) * num_val) != 0) + if ( pcd == NULL || pcv == NULL) { - fprintf(stderr, "Could not alloc or lock buffers: %d (%s)\n", + fprintf(stderr, "Could not allocate buffers: %d (%s)\n", errno, strerror(errno)); exit(-1); } - if ( xc_perfc_query(xc_handle, pcd, pcv) != 0 ) + if ( xc_perfc_query(xc_handle, HYPERCALL_BUFFER(pcd), HYPERCALL_BUFFER(pcv)) != 0 ) { fprintf(stderr, "Error getting perf counter: %d (%s)\n", errno, strerror(errno)); return 1; } - - unlock_pages(pcd, sizeof(*pcd) * num_desc); - unlock_pages(pcv, sizeof(*pcv) * num_val); val = pcv; for ( i = 0; i < num_desc; i++ ) @@ 
-221,5 +199,7 @@ int main(int argc, char *argv[]) val += pcd[i].nr_vals; } + xc_hypercall_buffer_free(xc_handle, pcd); + xc_hypercall_buffer_free(xc_handle, pcv); return 0; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
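The two tool conversions above illustrate the caller-side discipline: declare the buffer with DECLARE_HYPERCALL_BUFFER(), allocate it through libxc instead of malloc() plus mlock(), hand it to library calls via HYPERCALL_BUFFER(), and free it exactly once. A condensed sketch of that flow; the wrapper function dump_lockprof() and its error handling are illustrative only, while the buffer macros and the new xc_lockprof_query() signature are taken from the patches above:

    #include <xenctrl.h>

    int dump_lockprof(xc_interface *xch, uint32_t n)
    {
        uint32_t i = n;
        uint64_t time;
        DECLARE_HYPERCALL_BUFFER(xc_lockprof_data_t, data);

        /* Hypercall-safe allocation replaces malloc() + lock_pages(). */
        data = xc_hypercall_buffer_alloc(xch, data, sizeof(*data) * n);
        if ( data == NULL )
            return -1;

        /* Library entry points now take the buffer, not a raw pointer. */
        if ( xc_lockprof_query(xch, &i, &time, HYPERCALL_BUFFER(data)) != 0 )
        {
            xc_hypercall_buffer_free(xch, data);
            return -1;
        }

        /* ... consume up to i records through the plain data[] pointer ... */

        xc_hypercall_buffer_free(xch, data);
        return 0;
    }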
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 06 of 25] libxc: convert watchdog interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID ff41c37f36487d250e971e54c80cf29dc0d64eac # Parent 71e4092089af29d01192810e4bd4a732c8ed3933 libxc: convert watchdog interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 71e4092089af -r ff41c37f3648 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 @@ -374,24 +374,25 @@ int xc_watchdog(xc_interface *xch, uint32_t timeout) { int ret = -1; - sched_watchdog_t arg; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BUFFER(sched_watchdog_t, arg); + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_watchdog hypercall"); + goto out1; + } hypercall.op = __HYPERVISOR_sched_op; hypercall.arg[0] = (unsigned long)SCHEDOP_watchdog; - hypercall.arg[1] = (unsigned long)&arg; - arg.id = id; - arg.timeout = timeout; - - if ( lock_pages(xch, &arg, sizeof(arg)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out1; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->id = id; + arg->timeout = timeout; ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); out1: return ret; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
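Stripped of diff context, the hunk above is the template used for every small fixed-size hypercall argument in the rest of the series. The following is a condensed restatement of that hunk, not new code; the wrapper name is added only to make the fragment self-contained (it assumes libxc-internal context, i.e. "xc_private.h"):

    int xc_watchdog_sketch(xc_interface *xch, uint32_t id, uint32_t timeout)
    {
        int ret = -1;
        DECLARE_HYPERCALL;
        DECLARE_HYPERCALL_BUFFER(sched_watchdog_t, arg);

        /* Page-aligned, locked memory tracked by the library. */
        arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
        if ( arg == NULL )
            return -1;

        arg->id = id;
        arg->timeout = timeout;

        hypercall.op = __HYPERVISOR_sched_op;
        hypercall.arg[0] = (unsigned long)SCHEDOP_watchdog;
        /* Replaces the old (unsigned long)&arg cast on the stack variable. */
        hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

        ret = do_xen_hypercall(xch, &hypercall);

        xc_hypercall_buffer_free(xch, arg);
        return ret;
    }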
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 07 of 25] libxc: convert acm interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID c02e7347dc20e1c2b23a71910dcf623928dbf4ea # Parent ff41c37f36487d250e971e54c80cf29dc0d64eac libxc: convert acm interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r ff41c37f3648 -r c02e7347dc20 tools/libxc/xc_acm.c --- a/tools/libxc/xc_acm.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_acm.c Thu Oct 21 09:37:34 2010 +0100 @@ -27,12 +27,19 @@ int xc_acm_op(xc_interface *xch, int cmd { int ret; DECLARE_HYPERCALL; - struct xen_acmctl acmctl; + DECLARE_HYPERCALL_BUFFER(struct xen_acmctl, acmctl); + + acmctl = xc_hypercall_buffer_alloc(xch, acmctl, sizeof(*acmctl)); + if ( acmctl == NULL ) + { + PERROR("Could not allocate memory for ACM OP hypercall"); + return -EFAULT; + } switch (cmd) { case ACMOP_setpolicy: { struct acm_setpolicy *setpolicy = (struct acm_setpolicy *)arg; - memcpy(&acmctl.u.setpolicy, + memcpy(&acmctl->u.setpolicy, setpolicy, sizeof(struct acm_setpolicy)); } @@ -40,7 +47,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_getpolicy: { struct acm_getpolicy *getpolicy = (struct acm_getpolicy *)arg; - memcpy(&acmctl.u.getpolicy, + memcpy(&acmctl->u.getpolicy, getpolicy, sizeof(struct acm_getpolicy)); } @@ -48,7 +55,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_dumpstats: { struct acm_dumpstats *dumpstats = (struct acm_dumpstats *)arg; - memcpy(&acmctl.u.dumpstats, + memcpy(&acmctl->u.dumpstats, dumpstats, sizeof(struct acm_dumpstats)); } @@ -56,7 +63,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_getssid: { struct acm_getssid *getssid = (struct acm_getssid *)arg; - memcpy(&acmctl.u.getssid, + memcpy(&acmctl->u.getssid, getssid, sizeof(struct acm_getssid)); } @@ -64,7 +71,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_getdecision: { struct acm_getdecision *getdecision = (struct acm_getdecision *)arg; - memcpy(&acmctl.u.getdecision, + memcpy(&acmctl->u.getdecision, getdecision, sizeof(struct acm_getdecision)); } @@ -72,7 +79,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_chgpolicy: { struct acm_change_policy *change_policy = (struct acm_change_policy *)arg; - memcpy(&acmctl.u.change_policy, + memcpy(&acmctl->u.change_policy, change_policy, sizeof(struct acm_change_policy)); } @@ -80,40 +87,36 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_relabeldoms: { struct acm_relabel_doms *relabel_doms = (struct acm_relabel_doms *)arg; - memcpy(&acmctl.u.relabel_doms, + memcpy(&acmctl->u.relabel_doms, relabel_doms, sizeof(struct acm_relabel_doms)); } break; } - acmctl.cmd = cmd; - acmctl.interface_version = ACM_INTERFACE_VERSION; + acmctl->cmd = cmd; + acmctl->interface_version = ACM_INTERFACE_VERSION; hypercall.op = __HYPERVISOR_xsm_op; - hypercall.arg[0] = (unsigned long)&acmctl; - if ( lock_pages(xch, &acmctl, sizeof(acmctl)) != 0) - { - PERROR("Could not lock memory for Xen hypercall"); - return -EFAULT; - } + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(acmctl); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0) { if ( errno == EACCES ) DPRINTF("acmctl operation failed -- need to" " rebuild the user-space tool set?\n"); } - unlock_pages(xch, &acmctl, sizeof(acmctl)); switch (cmd) { case ACMOP_getdecision: { struct acm_getdecision *getdecision = (struct acm_getdecision *)arg; memcpy(getdecision, - &acmctl.u.getdecision, + &acmctl->u.getdecision, sizeof(struct acm_getdecision)); break; } } + + xc_hypercall_buffer_free(xch, acmctl); return ret; } 
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 08 of 25] libxc: convert evtchn interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID 1900730c5bfccbaddf517139c6c4eb390b75237f # Parent c02e7347dc20e1c2b23a71910dcf623928dbf4ea libxc: convert evtchn interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r c02e7347dc20 -r 1900730c5bfc tools/libxc/xc_evtchn.c --- a/tools/libxc/xc_evtchn.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_evtchn.c Thu Oct 21 09:37:34 2010 +0100 @@ -22,31 +22,30 @@ #include "xc_private.h" - static int do_evtchn_op(xc_interface *xch, int cmd, void *arg, size_t arg_size, int silently_fail) { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(arg, arg_size, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, arg) ) + { + PERROR("do_evtchn_op: bouncing arg failed"); + goto out; + } hypercall.op = __HYPERVISOR_event_channel_op; hypercall.arg[0] = cmd; - hypercall.arg[1] = (unsigned long)arg; - - if ( lock_pages(xch, arg, arg_size) != 0 ) - { - PERROR("do_evtchn_op: arg lock failed"); - goto out; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); if ((ret = do_xen_hypercall(xch, &hypercall)) < 0 && !silently_fail) ERROR("do_evtchn_op: HYPERVISOR_event_channel_op failed: %d", ret); - unlock_pages(xch, arg, arg_size); + xc_hypercall_bounce_post(xch, arg); out: return ret; } - evtchn_port_or_error_t xc_evtchn_alloc_unbound(xc_interface *xch, _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
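Interfaces such as this, which accept a caller-supplied buffer of arbitrary origin, use the bounce variant rather than direct allocation: xc_hypercall_bounce_pre() copies the data into hypercall-safe memory according to the declared direction, and xc_hypercall_bounce_post() copies any results back and releases it. Reduced to its essentials the shape is as follows; do_generic_op() is a hypothetical stand-in for do_evtchn_op() and its siblings, which also carry extra flags such as silently_fail (libxc-internal context assumed):

    static int do_generic_op(xc_interface *xch, int cmd, void *arg, size_t len)
    {
        int ret = -1;
        DECLARE_HYPERCALL;
        /* BOUNCE_BOTH: copy in before the hypercall, copy back after. */
        DECLARE_HYPERCALL_BOUNCE(arg, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

        if ( xc_hypercall_bounce_pre(xch, arg) )
            goto out;

        hypercall.op = __HYPERVISOR_event_channel_op;
        hypercall.arg[0] = cmd;
        hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

        ret = do_xen_hypercall(xch, &hypercall);

        xc_hypercall_bounce_post(xch, arg);
    out:
        return ret;
    }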
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 09 of 25] libxc: convert schedop interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID 584a8eddcea8410026af987a0dbd910852f8f1a9 # Parent 1900730c5bfccbaddf517139c6c4eb390b75237f libxc: convert schedop interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 1900730c5bfc -r 584a8eddcea8 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:34 2010 +0100 @@ -85,24 +85,25 @@ int xc_domain_shutdown(xc_interface *xch int reason) { int ret = -1; - sched_remote_shutdown_t arg; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BUFFER(sched_remote_shutdown_t, arg); + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_domain_shutdown hypercall"); + goto out1; + } hypercall.op = __HYPERVISOR_sched_op; hypercall.arg[0] = (unsigned long)SCHEDOP_remote_shutdown; - hypercall.arg[1] = (unsigned long)&arg; - arg.domain_id = domid; - arg.reason = reason; - - if ( lock_pages(xch, &arg, sizeof(arg)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out1; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->domain_id = domid; + arg->reason = reason; ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); out1: return ret; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 10 of 25] libxc: convert physdevop interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650254 -3600 # Node ID a4430532beb9ddc8c48d80b45591fb25a139db8c # Parent 584a8eddcea8410026af987a0dbd910852f8f1a9 libxc: convert physdevop interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 584a8eddcea8 -r a4430532beb9 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_private.h Thu Oct 21 09:37:34 2010 +0100 @@ -181,18 +181,18 @@ static inline int do_physdev_op(xc_inter static inline int do_physdev_op(xc_interface *xch, int cmd, void *op, size_t len) { int ret = -1; + DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - DECLARE_HYPERCALL; - - if ( hcall_buf_prep(xch, &op, len) != 0 ) + if ( xc_hypercall_bounce_pre(xch, op) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce memory for physdev hypercall"); goto out1; } hypercall.op = __HYPERVISOR_physdev_op; hypercall.arg[0] = (unsigned long) cmd; - hypercall.arg[1] = (unsigned long) op; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(op); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { @@ -201,8 +201,7 @@ static inline int do_physdev_op(xc_inter " rebuild the user-space tool set?\n"); } - hcall_buf_release(xch, &op, len); - + xc_hypercall_bounce_post(xch, op); out1: return ret; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 11 of 25] libxc: convert flask interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 9986007519dce12dd0503f88cc32f415a5f11c3d # Parent a4430532beb9ddc8c48d80b45591fb25a139db8c libxc: convert flask interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r a4430532beb9 -r 9986007519dc tools/libxc/xc_flask.c --- a/tools/libxc/xc_flask.c Thu Oct 21 09:37:34 2010 +0100 +++ b/tools/libxc/xc_flask.c Thu Oct 21 09:37:35 2010 +0100 @@ -40,15 +40,16 @@ int xc_flask_op(xc_interface *xch, flask { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, op) ) + { + PERROR("Could not bounce memory for flask op hypercall"); + goto out; + } hypercall.op = __HYPERVISOR_xsm_op; - hypercall.arg[0] = (unsigned long)op; - - if ( lock_pages(xch, op, sizeof(*op)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { @@ -56,7 +57,7 @@ int xc_flask_op(xc_interface *xch, flask fprintf(stderr, "XSM operation failed!\n"); } - unlock_pages(xch, op, sizeof(*op)); + xc_hypercall_bounce_post(xch, op); out: return ret; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 12 of 25] libxc: convert hvmop interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID a688a8de1cf73954974b9dcc46304e9dcc981068 # Parent 9986007519dce12dd0503f88cc32f415a5f11c3d libxc: convert hvmop interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 9986007519dc -r a688a8de1cf7 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:35 2010 +0100 @@ -1027,38 +1027,42 @@ int xc_set_hvm_param(xc_interface *handl int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value) { DECLARE_HYPERCALL; - xen_hvm_param_t arg; + DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg); int rc; + + arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg)); + if ( arg == NULL ) + return -1; hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_param; - hypercall.arg[1] = (unsigned long)&arg; - arg.domid = dom; - arg.index = param; - arg.value = value; - if ( lock_pages(handle, &arg, sizeof(arg)) != 0 ) - return -1; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->domid = dom; + arg->index = param; + arg->value = value; rc = do_xen_hypercall(handle, &hypercall); - unlock_pages(handle, &arg, sizeof(arg)); + xc_hypercall_buffer_free(handle, arg); return rc; } int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value) { DECLARE_HYPERCALL; - xen_hvm_param_t arg; + DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg); int rc; + + arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg)); + if ( arg == NULL ) + return -1; hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_get_param; - hypercall.arg[1] = (unsigned long)&arg; - arg.domid = dom; - arg.index = param; - if ( lock_pages(handle, &arg, sizeof(arg)) != 0 ) - return -1; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->domid = dom; + arg->index = param; rc = do_xen_hypercall(handle, &hypercall); - unlock_pages(handle, &arg, sizeof(arg)); - *value = arg.value; + *value = arg->value; + xc_hypercall_buffer_free(handle, arg); return rc; } diff -r 9986007519dc -r a688a8de1cf7 tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_misc.c Thu Oct 21 09:37:35 2010 +0100 @@ -299,18 +299,19 @@ int xc_hvm_set_pci_intx_level( unsigned int level) { DECLARE_HYPERCALL; - struct xen_hvm_set_pci_intx_level _arg, *arg = &_arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_intx_level, arg); int rc; - if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 ) + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) { - PERROR("Could not lock memory"); - return rc; + PERROR("Could not allocate memory for xc_hvm_set_pci_intx_level hypercall"); + return -1; } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_pci_intx_level; - hypercall.arg[1] = (unsigned long)arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); arg->domid = dom; arg->domain = domain; @@ -321,7 +322,7 @@ int xc_hvm_set_pci_intx_level( rc = do_xen_hypercall(xch, &hypercall); - hcall_buf_release(xch, (void **)&arg, sizeof(*arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -332,18 +333,19 @@ int xc_hvm_set_isa_irq_level( unsigned int level) { DECLARE_HYPERCALL; - struct xen_hvm_set_isa_irq_level _arg, *arg = &_arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_isa_irq_level, arg); int rc; - if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 ) + arg = 
xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) { - PERROR("Could not lock memory"); - return rc; + PERROR("Could not allocate memory for xc_hvm_set_isa_irq_level hypercall"); + return -1; } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_isa_irq_level; - hypercall.arg[1] = (unsigned long)arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); arg->domid = dom; arg->isa_irq = isa_irq; @@ -351,7 +353,7 @@ int xc_hvm_set_isa_irq_level( rc = do_xen_hypercall(xch, &hypercall); - hcall_buf_release(xch, (void **)&arg, sizeof(*arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -360,26 +362,27 @@ int xc_hvm_set_pci_link_route( xc_interface *xch, domid_t dom, uint8_t link, uint8_t isa_irq) { DECLARE_HYPERCALL; - struct xen_hvm_set_pci_link_route arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_link_route, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_hvm_set_pci_link_route hypercall"); + return -1; + } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_pci_link_route; - hypercall.arg[1] = (unsigned long)&arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); - arg.domid = dom; - arg.link = link; - arg.isa_irq = isa_irq; - - if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + arg->domid = dom; + arg->link = link; + arg->isa_irq = isa_irq; rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -390,28 +393,32 @@ int xc_hvm_track_dirty_vram( unsigned long *dirty_bitmap) { DECLARE_HYPERCALL; - struct xen_hvm_track_dirty_vram arg; + DECLARE_HYPERCALL_BOUNCE(dirty_bitmap, (nr+31) / 32, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_track_dirty_vram, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL || xc_hypercall_bounce_pre(xch, dirty_bitmap) ) + { + PERROR("Could not bounce memory for xc_hvm_track_dirty_vram hypercall"); + rc = -1; + goto out; + } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_track_dirty_vram; - hypercall.arg[1] = (unsigned long)&arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); - arg.domid = dom; - arg.first_pfn = first_pfn; - arg.nr = nr; - set_xen_guest_handle(arg.dirty_bitmap, (uint8_t *)dirty_bitmap); - - if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + arg->domid = dom; + arg->first_pfn = first_pfn; + arg->nr = nr; + xc_set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap); rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); - +out: + xc_hypercall_buffer_free(xch, arg); + xc_hypercall_bounce_post(xch, dirty_bitmap); return rc; } @@ -419,26 +426,27 @@ int xc_hvm_modified_memory( xc_interface *xch, domid_t dom, uint64_t first_pfn, uint64_t nr) { DECLARE_HYPERCALL; - struct xen_hvm_modified_memory arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_modified_memory, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_hvm_modified_memory hypercall"); + return -1; + } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_modified_memory; - hypercall.arg[1] = (unsigned long)&arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); - arg.domid = dom; - arg.first_pfn = first_pfn; - arg.nr = nr; - - if ( (rc = 
lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + arg->domid = dom; + arg->first_pfn = first_pfn; + arg->nr = nr; rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -447,27 +455,28 @@ int xc_hvm_set_mem_type( xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint64_t nr) { DECLARE_HYPERCALL; - struct xen_hvm_set_mem_type arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_hvm_set_mem_type hypercall"); + return -1; + } + + arg->domid = dom; + arg->hvmmem_type = mem_type; + arg->first_pfn = first_pfn; + arg->nr = nr; hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_mem_type; - hypercall.arg[1] = (unsigned long)&arg; - - arg.domid = dom; - arg.hvmmem_type = mem_type; - arg.first_pfn = first_pfn; - arg.nr = nr; - - if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
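The HVMOP_track_dirty_vram hunk above is the first place an allocated argument structure and a bounced user buffer meet: the structure's embedded guest handle must point at the bounce buffer, which is what xc_set_xen_guest_handle() now arranges. A sketch of that combination; bitmap_size stands in for the byte count the patch computes, and, as in the patch, the cleanup calls tolerate partially completed setup:

    static int track_dirty_vram_sketch(xc_interface *xch, domid_t dom,
                                       uint64_t first_pfn, uint64_t nr,
                                       unsigned long *dirty_bitmap,
                                       size_t bitmap_size)
    {
        int rc = -1;
        DECLARE_HYPERCALL;
        DECLARE_HYPERCALL_BUFFER(struct xen_hvm_track_dirty_vram, arg);
        DECLARE_HYPERCALL_BOUNCE(dirty_bitmap, bitmap_size,
                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

        arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
        if ( arg == NULL || xc_hypercall_bounce_pre(xch, dirty_bitmap) )
            goto out;

        arg->domid     = dom;
        arg->first_pfn = first_pfn;
        arg->nr        = nr;
        /* Point the embedded guest handle at the bounce, not the raw pointer. */
        xc_set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap);

        hypercall.op     = __HYPERVISOR_hvm_op;
        hypercall.arg[0] = HVMOP_track_dirty_vram;
        hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

        rc = do_xen_hypercall(xch, &hypercall);
    out:
        xc_hypercall_buffer_free(xch, arg);
        xc_hypercall_bounce_post(xch, dirty_bitmap);
        return rc;
    }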
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 13 of 25] libxc: convert mca interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 2ef7e26cabd8919f5797a22dbd070a4a189063f1 # Parent a688a8de1cf73954974b9dcc46304e9dcc981068 libxc: convert mca interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r a688a8de1cf7 -r 2ef7e26cabd8 tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_misc.c Thu Oct 21 09:37:35 2010 +0100 @@ -153,18 +153,19 @@ int xc_mca_op(xc_interface *xch, struct { int ret = 0; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(mc, sizeof(*mc), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + if ( xc_hypercall_bounce_pre(xch, mc) ) + { + PERROR("Could not bounce xen_mc memory buffer"); + return -1; + } mc->interface_version = XEN_MCA_INTERFACE_VERSION; - if ( lock_pages(xch, mc, sizeof(*mc)) ) - { - PERROR("Could not lock xen_mc memory"); - return -EINVAL; - } hypercall.op = __HYPERVISOR_mca; - hypercall.arg[0] = (unsigned long)mc; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(mc); ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, mc, sizeof(*mc)); + xc_hypercall_bounce_post(xch, mc); return ret; } #endif _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 14 of 25] libxc: convert tmem interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID f9ce4cbcfbc43e34a14493aff2c7605d17d33439 # Parent 2ef7e26cabd8919f5797a22dbd070a4a189063f1 libxc: convert tmem interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 2ef7e26cabd8 -r f9ce4cbcfbc4 tools/libxc/xc_tmem.c --- a/tools/libxc/xc_tmem.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_tmem.c Thu Oct 21 09:37:35 2010 +0100 @@ -25,21 +25,23 @@ static int do_tmem_op(xc_interface *xch, { int ret; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, op) ) + { + PERROR("Could not bounce buffer for tmem op hypercall"); + return -EFAULT; + } hypercall.op = __HYPERVISOR_tmem_op; - hypercall.arg[0] = (unsigned long)op; - if (lock_pages(xch, op, sizeof(*op)) != 0) - { - PERROR("Could not lock memory for Xen hypercall"); - return -EFAULT; - } + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op); if ((ret = do_xen_hypercall(xch, &hypercall)) < 0) { if ( errno == EACCES ) DPRINTF("tmem operation failed -- need to" " rebuild the user-space tool set?\n"); } - unlock_pages(xch, op, sizeof(*op)); + xc_hypercall_bounce_post(xch, op); return ret; } @@ -54,13 +56,13 @@ int xc_tmem_control(xc_interface *xch, void *buf) { tmem_op_t op; + DECLARE_HYPERCALL_BOUNCE(buf, arg1, XC_HYPERCALL_BUFFER_BOUNCE_OUT); int rc; op.cmd = TMEM_CONTROL; op.pool_id = pool_id; op.u.ctrl.subop = subop; op.u.ctrl.cli_id = cli_id; - set_xen_guest_handle(op.u.ctrl.buf,buf); op.u.ctrl.arg1 = arg1; op.u.ctrl.arg2 = arg2; /* use xc_tmem_control_oid if arg3 is required */ @@ -68,25 +70,28 @@ int xc_tmem_control(xc_interface *xch, op.u.ctrl.oid[1] = 0; op.u.ctrl.oid[2] = 0; - if (subop == TMEMC_LIST) { - if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0)) - { - PERROR("Could not lock memory for Xen hypercall"); - return -ENOMEM; - } - } - #ifdef VALGRIND if (arg1 != 0) memset(buf, 0, arg1); #endif + if ( subop == TMEMC_LIST && arg1 != 0 ) + { + if ( buf == NULL ) + return -EINVAL; + if ( xc_hypercall_bounce_pre(xch, buf) ) + { + PERROR("Could not bounce buffer for tmem control hypercall"); + return -ENOMEM; + } + } + + xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + rc = do_tmem_op(xch, &op); - if (subop == TMEMC_LIST) { - if (arg1 != 0) - unlock_pages(xch, buf, arg1); - } + if (subop == TMEMC_LIST && arg1 != 0) + xc_hypercall_bounce_post(xch, buf); return rc; } @@ -101,6 +106,7 @@ int xc_tmem_control_oid(xc_interface *xc void *buf) { tmem_op_t op; + DECLARE_HYPERCALL_BOUNCE(buf, arg1, XC_HYPERCALL_BUFFER_BOUNCE_OUT); int rc; op.cmd = TMEM_CONTROL; @@ -114,25 +120,28 @@ int xc_tmem_control_oid(xc_interface *xc op.u.ctrl.oid[1] = oid.oid[1]; op.u.ctrl.oid[2] = oid.oid[2]; - if (subop == TMEMC_LIST) { - if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0)) - { - PERROR("Could not lock memory for Xen hypercall"); - return -ENOMEM; - } - } - #ifdef VALGRIND if (arg1 != 0) memset(buf, 0, arg1); #endif + if ( subop == TMEMC_LIST && arg1 != 0 ) + { + if ( buf == NULL ) + return -EINVAL; + if ( xc_hypercall_bounce_pre(xch, buf) ) + { + PERROR("Could not bounce buffer for tmem control (OID) hypercall"); + return -ENOMEM; + } + } + + xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + rc = do_tmem_op(xch, &op); - if (subop == TMEMC_LIST) { - if (arg1 != 0) - unlock_pages(xch, buf, arg1); - } + if (subop == TMEMC_LIST && arg1 != 0) + xc_hypercall_bounce_post(xch, buf); return rc; } 
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 15 of 25] libxc: convert gnttab interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 9e1e5016ca8e1ff2314daac8457059dc0a5ef549 # Parent f9ce4cbcfbc43e34a14493aff2c7605d17d33439 libxc: convert gnttab interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r f9ce4cbcfbc4 -r 9e1e5016ca8e tools/libxc/xc_linux.c --- a/tools/libxc/xc_linux.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_linux.c Thu Oct 21 09:37:35 2010 +0100 @@ -612,21 +612,22 @@ int xc_gnttab_op(xc_interface *xch, int { int ret = 0; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, count * op_size, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, op) ) + { + PERROR("Could not bounce buffer for grant table op hypercall"); + goto out1; + } hypercall.op = __HYPERVISOR_grant_table_op; hypercall.arg[0] = cmd; - hypercall.arg[1] = (unsigned long)op; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(op); hypercall.arg[2] = count; - - if ( lock_pages(xch, op, count* op_size) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out1; - } ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, op, count * op_size); + xc_hypercall_bounce_post(xch, op); out1: return ret; @@ -651,7 +652,7 @@ static void *_gnttab_map_table(xc_interf int rc, i; struct gnttab_query_size query; struct gnttab_setup_table setup; - unsigned long *frame_list = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, frame_list); xen_pfn_t *pfn_list = NULL; grant_entry_v1_t *gnt = NULL; @@ -669,26 +670,23 @@ static void *_gnttab_map_table(xc_interf *gnt_num = query.nr_frames * (PAGE_SIZE / sizeof(grant_entry_v1_t) ); - frame_list = malloc(query.nr_frames * sizeof(unsigned long)); - if ( !frame_list || lock_pages(xch, frame_list, - query.nr_frames * sizeof(unsigned long)) ) + frame_list = xc_hypercall_buffer_alloc(xch, frame_list, query.nr_frames * sizeof(unsigned long)); + if ( !frame_list ) { - ERROR("Alloc/lock frame_list in xc_gnttab_map_table\n"); - if ( frame_list ) - free(frame_list); + ERROR("Could not allocate frame_list in xc_gnttab_map_table\n"); return NULL; } pfn_list = malloc(query.nr_frames * sizeof(xen_pfn_t)); if ( !pfn_list ) { - ERROR("Could not lock pfn_list in xc_gnttab_map_table\n"); + ERROR("Could not allocate pfn_list in xc_gnttab_map_table\n"); goto err; } setup.dom = domid; setup.nr_frames = query.nr_frames; - set_xen_guest_handle(setup.frame_list, frame_list); + xc_set_xen_guest_handle(setup.frame_list, frame_list); /* XXX Any race with other setup_table hypercall? */ rc = xc_gnttab_op(xch, GNTTABOP_setup_table, &setup, sizeof(setup), @@ -713,10 +711,7 @@ static void *_gnttab_map_table(xc_interf err: if ( frame_list ) - { - unlock_pages(xch, frame_list, query.nr_frames * sizeof(unsigned long)); - free(frame_list); - } + xc_hypercall_buffer_free(xch, frame_list); if ( pfn_list ) free(pfn_list); diff -r f9ce4cbcfbc4 -r 9e1e5016ca8e tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xenctrl.h Thu Oct 21 09:37:35 2010 +0100 @@ -1290,7 +1290,7 @@ int xc_gnttab_set_max_grants(xc_interfac int xc_gnttab_op(xc_interface *xch, int cmd, void * op, int op_size, int count); -/* Logs iff lock_pages failes, otherwise doesn''t. */ +/* Logs iff hypercall bounce fails, otherwise doesn''t. 
*/ int xc_gnttab_get_version(xc_interface *xch, int domid); /* Never logs */ grant_entry_v1_t *xc_gnttab_map_table_v1(xc_interface *xch, int domid, int *gnt_num); _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:58 UTC
[Xen-devel] [PATCH 16 of 25] libxc: convert memory op interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 38b752b683f3aec13669c1019e6637e3d3aeb434 # Parent 9e1e5016ca8e1ff2314daac8457059dc0a5ef549 libxc: convert memory op interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 9e1e5016ca8e -r 38b752b683f3 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:35 2010 +0100 @@ -468,31 +468,30 @@ int xc_domain_set_memmap_limit(xc_interf unsigned long map_limitkb) { int rc; - struct xen_foreign_memory_map fmap = { .domid = domid, .map = { .nr_entries = 1 } }; + DECLARE_HYPERCALL_BUFFER(struct e820entry, e820); - struct e820entry e820 = { - .addr = 0, - .size = (uint64_t)map_limitkb << 10, - .type = E820_RAM - }; + e820 = xc_hypercall_buffer_alloc(xch, e820, sizeof(*e820)); - set_xen_guest_handle(fmap.map.buffer, &e820); + if ( e820 == NULL ) + { + PERROR("Could not allocate memory for xc_domain_set_memmap_limit hypercall"); + return -1; + } - if ( lock_pages(xch, &e820, sizeof(e820)) ) - { - PERROR("Could not lock memory for Xen hypercall"); - rc = -1; - goto out; - } + e820->addr = 0; + e820->size = (uint64_t)map_limitkb << 10; + e820->type = E820_RAM; + + xc_set_xen_guest_handle(fmap.map.buffer, e820); rc = do_memory_op(xch, XENMEM_set_memory_map, &fmap, sizeof(fmap)); - out: - unlock_pages(xch, &e820, sizeof(e820)); + xc_hypercall_buffer_free(xch, e820); + return rc; } #else @@ -587,6 +586,7 @@ int xc_domain_increase_reservation(xc_in xen_pfn_t *extent_start) { int err; + DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { .nr_extents = nr_extents, .extent_order = extent_order, @@ -595,18 +595,17 @@ int xc_domain_increase_reservation(xc_in }; /* may be NULL */ - if ( extent_start && lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, extent_start) ) { - PERROR("Could not lock memory for XENMEM_increase_reservation hypercall"); + PERROR("Could not bounce memory for XENMEM_increase_reservation hypercall"); return -1; } - set_xen_guest_handle(reservation.extent_start, extent_start); + xc_set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_increase_reservation, &reservation, sizeof(reservation)); - if ( extent_start ) - unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)); + xc_hypercall_bounce_post(xch, extent_start); return err; } @@ -645,18 +644,13 @@ int xc_domain_decrease_reservation(xc_in xen_pfn_t *extent_start) { int err; + DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { .nr_extents = nr_extents, .extent_order = extent_order, .mem_flags = 0, .domid = domid }; - - if ( lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 ) - { - PERROR("Could not lock memory for XENMEM_decrease_reservation hypercall"); - return -1; - } if ( extent_start == NULL ) { @@ -665,11 +659,16 @@ int xc_domain_decrease_reservation(xc_in return -1; } - set_xen_guest_handle(reservation.extent_start, extent_start); + if ( xc_hypercall_bounce_pre(xch, extent_start) ) + { + PERROR("Could not bounce memory for XENMEM_decrease_reservation hypercall"); + return -1; + } + xc_set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_decrease_reservation, 
&reservation, sizeof(reservation)); - unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)); + xc_hypercall_bounce_post(xch, extent_start); return err; } @@ -722,6 +721,7 @@ int xc_domain_populate_physmap(xc_interf xen_pfn_t *extent_start) { int err; + DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { .nr_extents = nr_extents, .extent_order = extent_order, @@ -729,18 +729,16 @@ int xc_domain_populate_physmap(xc_interf .domid = domid }; - if ( lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, extent_start) ) { - PERROR("Could not lock memory for XENMEM_populate_physmap hypercall"); + PERROR("Could not bounce memory for XENMEM_populate_physmap hypercall"); return -1; } - - set_xen_guest_handle(reservation.extent_start, extent_start); + xc_set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation)); - unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)); - + xc_hypercall_bounce_post(xch, extent_start); return err; } @@ -778,8 +776,9 @@ int xc_domain_memory_exchange_pages(xc_i unsigned int out_order, xen_pfn_t *out_extents) { - int rc; - + int rc = -1; + DECLARE_HYPERCALL_BOUNCE(in_extents, nr_in_extents*sizeof(*in_extents), XC_HYPERCALL_BUFFER_BOUNCE_IN); + DECLARE_HYPERCALL_BOUNCE(out_extents, nr_out_extents*sizeof(*out_extents), XC_HYPERCALL_BUFFER_BOUNCE_OUT); struct xen_memory_exchange exchange = { .in = { .nr_extents = nr_in_extents, @@ -792,10 +791,19 @@ int xc_domain_memory_exchange_pages(xc_i .domid = domid } }; - set_xen_guest_handle(exchange.in.extent_start, in_extents); - set_xen_guest_handle(exchange.out.extent_start, out_extents); + + if ( xc_hypercall_bounce_pre(xch, in_extents) || + xc_hypercall_bounce_pre(xch, out_extents)) + goto out; + + xc_set_xen_guest_handle(exchange.in.extent_start, in_extents); + xc_set_xen_guest_handle(exchange.out.extent_start, out_extents); rc = do_memory_op(xch, XENMEM_exchange, &exchange, sizeof(exchange)); + +out: + xc_hypercall_bounce_post(xch, in_extents); + xc_hypercall_bounce_post(xch, out_extents); return rc; } diff -r 9e1e5016ca8e -r 38b752b683f3 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_private.c Thu Oct 21 09:37:35 2010 +0100 @@ -430,23 +430,22 @@ int do_memory_op(xc_interface *xch, int int do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len) { DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(arg, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); long ret = -EINVAL; - hypercall.op = __HYPERVISOR_memory_op; - hypercall.arg[0] = (unsigned long)cmd; - hypercall.arg[1] = (unsigned long)arg; - - if ( len && lock_pages(xch, arg, len) != 0 ) + if ( xc_hypercall_bounce_pre(xch, arg) ) { - PERROR("Could not lock memory for XENMEM hypercall"); + PERROR("Could not bounce memory for XENMEM hypercall"); goto out1; } + hypercall.op = __HYPERVISOR_memory_op; + hypercall.arg[0] = (unsigned long) cmd; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + ret = do_xen_hypercall(xch, &hypercall); - if ( len ) - unlock_pages(xch, arg, len); - + xc_hypercall_bounce_post(xch, arg); out1: return ret; } @@ -476,24 +475,25 @@ int xc_machphys_mfn_list(xc_interface *x xen_pfn_t *extent_start) { int rc; + DECLARE_HYPERCALL_BOUNCE(extent_start, max_extents * sizeof(xen_pfn_t), XC_HYPERCALL_BUFFER_BOUNCE_OUT); struct xen_machphys_mfn_list xmml = 
{ .max_extents = max_extents, }; - if ( lock_pages(xch, extent_start, max_extents * sizeof(xen_pfn_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, extent_start) ) { - PERROR("Could not lock memory for XENMEM_machphys_mfn_list hypercall"); + PERROR("Could not bounce memory for XENMEM_machphys_mfn_list hypercall"); return -1; } - set_xen_guest_handle(xmml.extent_start, extent_start); + xc_set_xen_guest_handle(xmml.extent_start, extent_start); rc = do_memory_op(xch, XENMEM_machphys_mfn_list, &xmml, sizeof(xmml)); if (rc || xmml.nr_extents != max_extents) rc = -1; else rc = 0; - unlock_pages(xch, extent_start, max_extents * sizeof(xen_pfn_t)); + xc_hypercall_bounce_post(xch, extent_start); return rc; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
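Worth noting in the exchange conversion above: the bounce direction encodes which copies are actually needed, so an array the hypervisor only reads is declared BOUNCE_IN and one it only writes BOUNCE_OUT, avoiding the redundant memcpy a blanket BOUNCE_BOTH would incur. A sketch under those assumptions; exchange_sketch() is hypothetical and takes a pre-filled xen_memory_exchange for brevity:

    static int exchange_sketch(xc_interface *xch,
                               xen_pfn_t *in_extents, unsigned long nr_in,
                               xen_pfn_t *out_extents, unsigned long nr_out,
                               struct xen_memory_exchange *exchange)
    {
        int rc = -1;
        DECLARE_HYPERCALL_BOUNCE(in_extents, nr_in * sizeof(*in_extents),
                                 XC_HYPERCALL_BUFFER_BOUNCE_IN);
        DECLARE_HYPERCALL_BOUNCE(out_extents, nr_out * sizeof(*out_extents),
                                 XC_HYPERCALL_BUFFER_BOUNCE_OUT);

        if ( xc_hypercall_bounce_pre(xch, in_extents) ||   /* copies data in */
             xc_hypercall_bounce_pre(xch, out_extents) )   /* allocates only */
            goto out;

        xc_set_xen_guest_handle(exchange->in.extent_start, in_extents);
        xc_set_xen_guest_handle(exchange->out.extent_start, out_extents);

        rc = do_memory_op(xch, XENMEM_exchange, exchange, sizeof(*exchange));
    out:
        xc_hypercall_bounce_post(xch, in_extents);   /* releases, no copy back */
        xc_hypercall_bounce_post(xch, out_extents);  /* copies results out */
        return rc;
    }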
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 17 of 25] libxc: convert mmuext op interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 0d9e118f705231b0ac88b9ae98f996e0e62152c7 # Parent 38b752b683f3aec13669c1019e6637e3d3aeb434 libxc: convert mmuext op interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 38b752b683f3 -r 0d9e118f7052 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_private.c Thu Oct 21 09:37:35 2010 +0100 @@ -343,23 +343,24 @@ int xc_mmuext_op( domid_t dom) { DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, nr_ops*sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); long ret = -EINVAL; - if ( hcall_buf_prep(xch, (void **)&op, nr_ops*sizeof(*op)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, op) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce memory for mmuext op hypercall"); goto out1; } hypercall.op = __HYPERVISOR_mmuext_op; - hypercall.arg[0] = (unsigned long)op; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op); hypercall.arg[1] = (unsigned long)nr_ops; hypercall.arg[2] = (unsigned long)0; hypercall.arg[3] = (unsigned long)dom; ret = do_xen_hypercall(xch, &hypercall); - hcall_buf_release(xch, (void **)&op, nr_ops*sizeof(*op)); + xc_hypercall_bounce_post(xch, op); out1: return ret; @@ -369,22 +370,23 @@ static int flush_mmu_updates(xc_interfac { int err = 0; DECLARE_HYPERCALL; + DECLARE_NAMED_HYPERCALL_BOUNCE(updates, mmu->updates, mmu->idx*sizeof(*mmu->updates), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); if ( mmu->idx == 0 ) return 0; + if ( xc_hypercall_bounce_pre(xch, updates) ) + { + PERROR("flush_mmu_updates: bounce buffer failed"); + err = 1; + goto out; + } + hypercall.op = __HYPERVISOR_mmu_update; - hypercall.arg[0] = (unsigned long)mmu->updates; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(updates); hypercall.arg[1] = (unsigned long)mmu->idx; hypercall.arg[2] = 0; hypercall.arg[3] = mmu->subject; - - if ( lock_pages(xch, mmu->updates, sizeof(mmu->updates)) != 0 ) - { - PERROR("flush_mmu_updates: mmu updates lock_pages failed"); - err = 1; - goto out; - } if ( do_xen_hypercall(xch, &hypercall) < 0 ) { @@ -394,7 +396,7 @@ static int flush_mmu_updates(xc_interfac mmu->idx = 0; - unlock_pages(xch, mmu->updates, sizeof(mmu->updates)); + xc_hypercall_bounce_post(xch, updates); out: return err; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
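Two further macros complete the interface: DECLARE_NAMED_HYPERCALL_BOUNCE, used above, gives the bounce its own local name when the buffer is not a plain variable (here a struct member, mmu->updates), and HYPERCALL_BOUNCE_SET_SIZE, used earlier in xc_pm.c, defers the size until it is known. A sketch combining both on the mmu_update path; the combination is illustrative, since the patch above declares the size inline:

    static int flush_updates_sketch(xc_interface *xch, struct xc_mmu *mmu)
    {
        DECLARE_HYPERCALL;
        /* The buffer is a struct member, so the bounce gets its own name. */
        DECLARE_NAMED_HYPERCALL_BOUNCE(updates, mmu->updates, 0,
                                       XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

        if ( mmu->idx == 0 )
            return 0;

        /* Size unknown at declaration time; set it before bouncing. */
        HYPERCALL_BOUNCE_SET_SIZE(updates, mmu->idx * sizeof(*mmu->updates));

        if ( xc_hypercall_bounce_pre(xch, updates) )
            return -1;

        hypercall.op     = __HYPERVISOR_mmu_update;
        hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(updates);
        hypercall.arg[1] = (unsigned long)mmu->idx;
        hypercall.arg[2] = 0;
        hypercall.arg[3] = mmu->subject;

        if ( do_xen_hypercall(xch, &hypercall) < 0 )
        {
            xc_hypercall_bounce_post(xch, updates);
            return -1;
        }

        mmu->idx = 0;
        xc_hypercall_bounce_post(xch, updates);
        return 0;
    }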
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 18 of 25] libxc: switch page offlining interfaces to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 19f7acc52f243f91f3ba539b0475dafbb0546ba0 # Parent 0d9e118f705231b0ac88b9ae98f996e0e62152c7 libxc: switch page offlining interfaces to hypercall buffers There is no need to lock/bounce minfo->pfn_type in init_mem_info since xc_get_pfn_type_batch() will take care of that for us. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 0d9e118f7052 -r 19f7acc52f24 tools/libxc/xc_offline_page.c --- a/tools/libxc/xc_offline_page.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_offline_page.c Thu Oct 21 09:37:35 2010 +0100 @@ -294,12 +294,6 @@ static int init_mem_info(xc_interface *x minfo->pfn_type[i] = pfn_to_mfn(i, minfo->p2m_table, minfo->guest_width); - if ( lock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) ) - { - ERROR("Unable to lock pfn_type array"); - goto failed; - } - for (i = 0; i < minfo->p2m_size ; i+=1024) { int count = ((dinfo->p2m_size - i ) > 1024 ) ? 1024: (dinfo->p2m_size - i); @@ -307,13 +301,11 @@ static int init_mem_info(xc_interface *x minfo->pfn_type + i)) ) { ERROR("Failed to get pfn_type %x\n", rc); - goto unlock; + goto failed; } } return 0; -unlock: - unlock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)); failed: if (minfo->pfn_type) { _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 19 of 25] libxc: convert ia64 dom0vp interface to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 56cb1fbab19d9e8602244d56976877c13f52f91a # Parent 19f7acc52f243f91f3ba539b0475dafbb0546ba0 libxc: convert ia64 dom0vp interface to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 19f7acc52f24 -r 56cb1fbab19d tools/libxc/ia64/xc_dom_ia64_util.c --- a/tools/libxc/ia64/xc_dom_ia64_util.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/ia64/xc_dom_ia64_util.c Thu Oct 21 09:37:35 2010 +0100 @@ -36,19 +36,21 @@ xen_ia64_fpswa_revision(struct xc_dom_im { int ret; DECLARE_HYPERCALL; - hypercall.op = __HYPERVISOR_ia64_dom0vp_op; - hypercall.arg[0] = IA64_DOM0VP_fpswa_revision; - hypercall.arg[1] = (unsigned long)revision; + DECLARE_HYPERCALL_BOUNCE(revision, sizeof(*revision), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - if (lock_pages(revision, sizeof(*revision)) != 0) { - xc_interface *xch = dom->xch; + if (xc_hypercall_bounce_pre(dom->xch, revision) ) + { PERROR("Could not lock memory for xen fpswa hypercall"); return -1; } + hypercall.op = __HYPERVISOR_ia64_dom0vp_op; + hypercall.arg[0] = IA64_DOM0VP_fpswa_revision; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(revision); + ret = do_xen_hypercall(dom->xch, &hypercall); - - unlock_pages(revision, sizeof(*revision)); + + xc_hypercall_bounce_post(dom->xch, revision); return ret; } diff -r 19f7acc52f24 -r 56cb1fbab19d tools/libxc/ia64/xc_ia64_stubs.c --- a/tools/libxc/ia64/xc_ia64_stubs.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/ia64/xc_ia64_stubs.c Thu Oct 21 09:37:35 2010 +0100 @@ -42,19 +42,24 @@ xc_ia64_get_memmap(xc_interface *xch, uint32_t domid, char *buf, unsigned long bufsize) { privcmd_hypercall_t hypercall; + DECLARE_HYPERCALL_BOUNCE(buf, bufsize, XC_HYPERCALL_BUFFER_BOUNCE_OUT); int ret; + + if ( xc_hypercall_bounce_pre(xch, buf) ) + { + PERROR("xc_ia64_get_memmap: buf bounce failed"); + return -1; + } hypercall.op = __HYPERVISOR_ia64_dom0vp_op; hypercall.arg[0] = IA64_DOM0VP_get_memmap; hypercall.arg[1] = domid; - hypercall.arg[2] = (unsigned long)buf; + hypercall.arg[2] = HYPERCALL_BUFFER_AS_ARG(buf); hypercall.arg[3] = bufsize; hypercall.arg[4] = 0; - if (lock_pages(buf, bufsize) != 0) return -1; ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(buf, bufsize); + xc_hypercall_bounce_post(xch, buf); return ret; } @@ -142,6 +147,7 @@ xc_ia64_map_foreign_p2m(xc_interface *xc struct xen_ia64_memmap_info *memmap_info, unsigned long flags, unsigned long *p2m_size_p) { + DECLARE_HYPERCALL_BOUNCE(memmap_info, sizeof(*memmap_info) + memmap_info->efi_memmap_size, XC_HYPERCALL_BUFFER_BOUNCE_IN); unsigned long gpfn_max; unsigned long p2m_size; void *addr; @@ -157,25 +163,23 @@ xc_ia64_map_foreign_p2m(xc_interface *xc addr = mmap(NULL, p2m_size, PROT_READ, MAP_SHARED, xch->fd, 0); if (addr == MAP_FAILED) return NULL; + if (xc_hypercall_bounce_pre(xch, memmap_info)) { + saved_errno = errno; + munmap(addr, p2m_size); + errno = saved_errno; + return NULL; + } hypercall.op = __HYPERVISOR_ia64_dom0vp_op; hypercall.arg[0] = IA64_DOM0VP_expose_foreign_p2m; hypercall.arg[1] = (unsigned long)addr; hypercall.arg[2] = dom; - hypercall.arg[3] = (unsigned long)memmap_info; + hypercall.arg[3] = HYPERCALL_BUFFER_AS_ARG(memmap_info); hypercall.arg[4] = flags; - if (lock_pages(memmap_info, - sizeof(*memmap_info) + memmap_info->efi_memmap_size) != 0) { - saved_errno = errno; - munmap(addr, p2m_size); - errno = saved_errno; - return NULL; - } ret = do_xen_hypercall(xch, &hypercall); saved_errno = errno; -
unlock_pages(memmap_info, - sizeof(*memmap_info) + memmap_info->efi_memmap_size); + xc_hypercall_bounce_post(xch, memmap_info); if (ret < 0) { munmap(addr, p2m_size); errno = saved_errno; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
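For readers following the conversion pattern: the bounce calls above replace the old lock_pages()/unlock_pages() pairs one-for-one. A minimal sketch of the idiom, using a placeholder op and buffer rather than code from any particular patch, looks like this:

static int example_fetch(xc_interface *xch, uint32_t domid,
                         uint8_t *buf, size_t bufsize)
{
    DECLARE_HYPERCALL;
    /* Declares a hidden xc_hypercall_buffer_t shadowing 'buf'. */
    DECLARE_HYPERCALL_BOUNCE(buf, bufsize, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
    int ret;

    /* Allocate hypercall-safe memory (and copy in, for _IN/_BOTH). */
    if ( xc_hypercall_bounce_pre(xch, buf) )
    {
        PERROR("Could not bounce buffer for example hypercall");
        return -1;
    }

    hypercall.op     = __HYPERVISOR_ia64_dom0vp_op;  /* placeholder op */
    hypercall.arg[0] = domid;
    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(buf); /* not (unsigned long)buf */

    ret = do_xen_hypercall(xch, &hypercall);

    /* Copy out (for _OUT/_BOTH) and free the bounce buffer. */
    xc_hypercall_bounce_post(xch, buf);

    return ret;
}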
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 20 of 25] python acm: use hypercall buffer interface
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 98b20dffcac0b3b26907af7eafe8a807a78284b6 # Parent 56cb1fbab19d9e8602244d56976877c13f52f91a python acm: use hypercall buffer interface. I have a suspicion these routines should be using libxc rather than reimplementing all the hypercalls again, but I don''t have the enthusiasm to fix it. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 56cb1fbab19d -r 98b20dffcac0 tools/python/xen/lowlevel/acm/acm.c --- a/tools/python/xen/lowlevel/acm/acm.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/python/xen/lowlevel/acm/acm.c Thu Oct 21 09:37:35 2010 +0100 @@ -40,22 +40,20 @@ static PyObject *acm_error_obj; static PyObject *acm_error_obj; /* generic shared function */ -static void *__getssid(int domid, uint32_t *buflen) +static void *__getssid(xc_interface *xc_handle, int domid, uint32_t *buflen, xc_hypercall_buffer_t *buffer) { struct acm_getssid getssid; - xc_interface *xc_handle; #define SSID_BUFFER_SIZE 4096 - void *buf = NULL; + void *buf; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(buffer); - if ((xc_handle = xc_interface_open(0,0,0)) == 0) { - goto out1; + if ((buf = xc_hypercall_buffer_alloc(xc_handle, buffer, SSID_BUFFER_SIZE)) == NULL) { + PERROR("acm.policytype: Could not allocate ssid buffer!\n"); + return NULL; } - if ((buf = malloc(SSID_BUFFER_SIZE)) == NULL) { - PERROR("acm.policytype: Could not allocate ssid buffer!\n"); - goto out2; - } + memset(buf, 0, SSID_BUFFER_SIZE); - set_xen_guest_handle(getssid.ssidbuf, buf); + xc_set_xen_guest_handle(getssid.ssidbuf, buffer); getssid.ssidbuf_size = SSID_BUFFER_SIZE; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; @@ -63,16 +61,10 @@ static void *__getssid(int domid, uint32 if (xc_acm_op(xc_handle, ACMOP_getssid, &getssid, sizeof(getssid)) < 0) { if (errno == EACCES) PERROR("ACM operation failed."); - free(buf); buf = NULL; - goto out2; } else { *buflen = SSID_BUFFER_SIZE; - goto out2; } - out2: - xc_interface_close(xc_handle); - out1: return buf; } @@ -81,52 +73,60 @@ static void *__getssid(int domid, uint32 * ssidref for domain 0 (always exists) */ static PyObject *policy(PyObject * self, PyObject * args) { - /* out */ + xc_interface *xc_handle; char *policyreference; PyObject *ret; - void *ssid_buffer; uint32_t buf_len; + DECLARE_HYPERCALL_BUFFER(void, ssid_buffer); if (!PyArg_ParseTuple(args, "", NULL)) { return NULL; } - ssid_buffer = __getssid(0, &buf_len); - if (ssid_buffer == NULL || buf_len < sizeof(struct acm_ssid_buffer)) { - free(ssid_buffer); + if ((xc_handle = xc_interface_open(0,0,0)) == 0) return PyErr_SetFromErrno(acm_error_obj); - } + + ssid_buffer = __getssid(xc_handle, 0, &buf_len, HYPERCALL_BUFFER(ssid_buffer)); + if (ssid_buffer == NULL || buf_len < sizeof(struct acm_ssid_buffer)) + ret = PyErr_SetFromErrno(acm_error_obj); else { struct acm_ssid_buffer *ssid = (struct acm_ssid_buffer *)ssid_buffer; policyreference = (char *)(ssid_buffer + ssid->policy_reference_offset + sizeof (struct acm_policy_reference_buffer)); ret = Py_BuildValue("s", policyreference); - free(ssid_buffer); - return ret; } + + xc_hypercall_buffer_free(xc_handle, ssid_buffer); + xc_interface_close(xc_handle); + return ret; } /* retrieve ssid info for a domain domid*/ static PyObject *getssid(PyObject * self, PyObject * args) { + xc_interface *xc_handle; + /* in */ uint32_t domid; /* out */ char *policytype, *policyreference; uint32_t ssidref; + PyObject *ret; - void *ssid_buffer; + DECLARE_HYPERCALL_BUFFER(void, ssid_buffer); 
uint32_t buf_len; if (!PyArg_ParseTuple(args, "i", &domid)) { return NULL; } - ssid_buffer = __getssid(domid, &buf_len); + if ((xc_handle = xc_interface_open(0,0,0)) == 0) + return PyErr_SetFromErrno(acm_error_obj); + + ssid_buffer = __getssid(xc_handle, domid, &buf_len, HYPERCALL_BUFFER(ssid_buffer)); if (ssid_buffer == NULL) { - return NULL; + ret = NULL; } else if (buf_len < sizeof(struct acm_ssid_buffer)) { - free(ssid_buffer); - return NULL; + ret = NULL; } else { struct acm_ssid_buffer *ssid = (struct acm_ssid_buffer *) ssid_buffer; policytype = ACM_POLICY_NAME(ssid->secondary_policy_code << 4 | @@ -134,12 +134,14 @@ static PyObject *getssid(PyObject * self ssidref = ssid->ssidref; policyreference = (char *)(ssid_buffer + ssid->policy_reference_offset + sizeof (struct acm_policy_reference_buffer)); + ret = Py_BuildValue("{s:s,s:s,s:i}", + "policyreference", policyreference, + "policytype", policytype, + "ssidref", ssidref); } - free(ssid_buffer); - return Py_BuildValue("{s:s,s:s,s:i}", - "policyreference", policyreference, - "policytype", policytype, - "ssidref", ssidref); + xc_hypercall_buffer_free(xc_handle, ssid_buffer); + xc_interface_close(xc_handle); + return ret; } @@ -206,7 +208,6 @@ const char ctrlif_op[] = "Could not open const char ctrlif_op[] = "Could not open control interface."; const char hv_op_err[] = "Error from hypervisor operation."; - static PyObject *chgpolicy(PyObject *self, PyObject *args) { struct acm_change_policy chgpolicy; @@ -215,9 +216,12 @@ static PyObject *chgpolicy(PyObject *sel char *bin_pol = NULL, *del_arr = NULL, *chg_arr = NULL; int bin_pol_len = 0, del_arr_len = 0, chg_arr_len = 0; uint errarray_mbrs = 20 * 2; - uint32_t error_array[errarray_mbrs]; - PyObject *result; + PyObject *result = NULL; uint len; + DECLARE_HYPERCALL_BUFFER(char, bin_pol_buf); + DECLARE_HYPERCALL_BUFFER(char, del_arr_buf); + DECLARE_HYPERCALL_BUFFER(char, chg_arr_buf); + DECLARE_HYPERCALL_BUFFER(uint32_t, error_array); memset(&chgpolicy, 0x0, sizeof(chgpolicy)); @@ -228,24 +232,34 @@ static PyObject *chgpolicy(PyObject *sel return NULL; } - chgpolicy.policy_pushcache_size = bin_pol_len; - chgpolicy.delarray_size = del_arr_len; - chgpolicy.chgarray_size = chg_arr_len; - chgpolicy.errarray_size = sizeof(error_array); - - set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol); - set_xen_guest_handle(chgpolicy.del_array, del_arr); - set_xen_guest_handle(chgpolicy.chg_array, chg_arr); - set_xen_guest_handle(chgpolicy.err_array, error_array); - if ((xc_handle = xc_interface_open(0,0,0)) == 0) { PyErr_SetString(PyExc_IOError, ctrlif_op); return NULL; } + if ( (bin_pol_buf = xc_hypercall_buffer_alloc(xc_handle, bin_pol_buf, bin_pol_len)) == NULL ) + goto out; + if ( (del_arr_buf = xc_hypercall_buffer_alloc(xc_handle, del_arr_buf, del_arr_len)) == NULL ) + goto out; + if ( (chg_arr_buf = xc_hypercall_buffer_alloc(xc_handle, chg_arr_buf, chg_arr_len)) == NULL ) + goto out; + if ( (error_array = xc_hypercall_buffer_alloc(xc_handle, error_array, sizeof(*error_array)*errarray_mbrs)) == NULL ) + goto out; + + memcpy(bin_pol_buf, bin_pol, bin_pol_len); + memcpy(del_arr_buf, del_arr, del_arr_len); + memcpy(chg_arr_buf, chg_arr, chg_arr_len); + + chgpolicy.policy_pushcache_size = bin_pol_len; + chgpolicy.delarray_size = del_arr_len; + chgpolicy.chgarray_size = chg_arr_len; + chgpolicy.errarray_size = sizeof(*error_array)*errarray_mbrs; + xc_set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol_buf); + xc_set_xen_guest_handle(chgpolicy.del_array, del_arr_buf); + 
xc_set_xen_guest_handle(chgpolicy.chg_array, chg_arr_buf); + xc_set_xen_guest_handle(chgpolicy.err_array, error_array); + rc = xc_acm_op(xc_handle, ACMOP_chgpolicy, &chgpolicy, sizeof(chgpolicy)); - - xc_interface_close(xc_handle); /* only pass the filled error codes */ for (len = 0; (len + 1) < errarray_mbrs; len += 2) { @@ -256,6 +270,13 @@ static PyObject *chgpolicy(PyObject *sel } result = Py_BuildValue("is#", rc, error_array, len); + +out: + xc_hypercall_buffer_free(xc_handle, bin_pol_buf); + xc_hypercall_buffer_free(xc_handle, del_arr_buf); + xc_hypercall_buffer_free(xc_handle, chg_arr_buf); + xc_hypercall_buffer_free(xc_handle, error_array); + xc_interface_close(xc_handle); return result; } @@ -265,33 +286,37 @@ static PyObject *getpolicy(PyObject *sel struct acm_getpolicy getpolicy; xc_interface *xc_handle; int rc; - uint8_t pull_buffer[8192]; - PyObject *result; - uint32_t len = sizeof(pull_buffer); - - memset(&getpolicy, 0x0, sizeof(getpolicy)); - set_xen_guest_handle(getpolicy.pullcache, pull_buffer); - getpolicy.pullcache_size = sizeof(pull_buffer); + PyObject *result = NULL; + uint32_t len = 8192; + DECLARE_HYPERCALL_BUFFER(uint8_t, pull_buffer); if ((xc_handle = xc_interface_open(0,0,0)) == 0) { PyErr_SetString(PyExc_IOError, ctrlif_op); return NULL; } + if ((pull_buffer = xc_hypercall_buffer_alloc(xc_handle, pull_buffer, len)) == NULL) + goto out; + + memset(&getpolicy, 0x0, sizeof(getpolicy)); + xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + getpolicy.pullcache_size = len; + rc = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); - - xc_interface_close(xc_handle); if (rc == 0) { struct acm_policy_buffer *header = (struct acm_policy_buffer *)pull_buffer; - if (ntohl(header->len) < sizeof(pull_buffer)) + if (ntohl(header->len) < 8192) len = ntohl(header->len); } else { len = 0; } result = Py_BuildValue("is#", rc, pull_buffer, len); +out: + xc_hypercall_buffer_free(xc_handle, pull_buffer); + xc_interface_close(xc_handle); return result; } @@ -304,8 +329,9 @@ static PyObject *relabel_domains(PyObjec char *relabel_rules = NULL; int rel_rules_len = 0; uint errarray_mbrs = 20 * 2; - uint32_t error_array[errarray_mbrs]; - PyObject *result; + DECLARE_HYPERCALL_BUFFER(uint32_t, error_array); + DECLARE_HYPERCALL_BUFFER(char, relabel_rules_buf); + PyObject *result = NULL; uint len; memset(&reldoms, 0x0, sizeof(reldoms)); @@ -315,21 +341,25 @@ static PyObject *relabel_domains(PyObjec return NULL; } - reldoms.relabel_map_size = rel_rules_len; - reldoms.errarray_size = sizeof(error_array); - - set_xen_guest_handle(reldoms.relabel_map, relabel_rules); - set_xen_guest_handle(reldoms.err_array, error_array); - if ((xc_handle = xc_interface_open(0,0,0)) == 0) { PyErr_SetString(PyExc_IOError, ctrlif_op); return NULL; } + if ((relabel_rules_buf = xc_hypercall_buffer_alloc(xc_handle, relabel_rules_buf, rel_rules_len)) == NULL) + goto out; + if ((error_array = xc_hypercall_buffer_alloc(xc_handle, error_array, sizeof(*error_array)*errarray_mbrs)) == NULL) + goto out; + + memcpy(relabel_rules_buf, relabel_rules, rel_rules_len); + + reldoms.relabel_map_size = rel_rules_len; + reldoms.errarray_size = sizeof(*error_array)*errarray_mbrs; + + xc_set_xen_guest_handle(reldoms.relabel_map, relabel_rules_buf); + xc_set_xen_guest_handle(reldoms.err_array, error_array); + rc = xc_acm_op(xc_handle, ACMOP_relabeldoms, &reldoms, sizeof(reldoms)); - - xc_interface_close(xc_handle); - /* only pass the filled error codes */ for (len = 0; (len + 1) < errarray_mbrs; len += 2) { @@ 
-340,6 +370,11 @@ static PyObject *relabel_domains(PyObjec } result = Py_BuildValue("is#", rc, error_array, len); +out: + xc_hypercall_buffer_free(xc_handle, relabel_rules_buf); + xc_hypercall_buffer_free(xc_handle, error_array); + xc_interface_close(xc_handle); + return result; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
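The acm conversion above is representative of the allocation-based (as opposed to bounce-based) idiom. A condensed sketch, with invented sizes and assuming the semantics shown in this series:

static int example_op(xc_interface *xch)
{
    /* Declares 'buf' plus a hidden, initially unallocated
     * xc_hypercall_buffer_t; 'buf' starts out NULL. */
    DECLARE_HYPERCALL_BUFFER(uint8_t, buf);
    struct acm_getpolicy op;   /* any op carrying a guest handle */
    int rc = -1;

    if ( (buf = xc_hypercall_buffer_alloc(xch, buf, 8192)) == NULL )
        goto out;

    memset(&op, 0, sizeof(op));
    xc_set_xen_guest_handle(op.pullcache, buf); /* type-checked setter */
    op.pullcache_size = 8192;

    rc = xc_acm_op(xch, ACMOP_getpolicy, &op, sizeof(op));

out:
    /* A no-op when 'buf' was never successfully allocated, which is
     * what makes the single cleanup label safe. */
    xc_hypercall_buffer_free(xch, buf);
    return rc;
}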
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 21 of 25] python xc: use hypercall buffer interface
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 9729034ef96a2247fa913470109299b6b1344e34 # Parent 98b20dffcac0b3b26907af7eafe8a807a78284b6 python xc: use hypercall buffer interface. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 98b20dffcac0 -r 9729034ef96a tools/python/xen/lowlevel/xc/xc.c --- a/tools/python/xen/lowlevel/xc/xc.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/python/xen/lowlevel/xc/xc.c Thu Oct 21 09:37:35 2010 +0100 @@ -1206,19 +1206,29 @@ static PyObject *pyxc_topologyinfo(XcObj #define MAX_CPU_INDEX 255 xc_topologyinfo_t tinfo = { 0 }; int i, max_cpu_index; - PyObject *ret_obj; + PyObject *ret_obj = NULL; PyObject *cpu_to_core_obj, *cpu_to_socket_obj, *cpu_to_node_obj; - xc_cpu_to_core_t coremap[MAX_CPU_INDEX + 1]; - xc_cpu_to_socket_t socketmap[MAX_CPU_INDEX + 1]; - xc_cpu_to_node_t nodemap[MAX_CPU_INDEX + 1]; + DECLARE_HYPERCALL_BUFFER(xc_cpu_to_core_t, coremap); + DECLARE_HYPERCALL_BUFFER(xc_cpu_to_socket_t, socketmap); + DECLARE_HYPERCALL_BUFFER(xc_cpu_to_node_t, nodemap); - set_xen_guest_handle(tinfo.cpu_to_core, coremap); - set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); - set_xen_guest_handle(tinfo.cpu_to_node, nodemap); + coremap = xc_hypercall_buffer_alloc(self->xc_handle, coremap, sizeof(*coremap) * (MAX_CPU_INDEX+1)); + if ( coremap == NULL ) + goto out; + socketmap = xc_hypercall_buffer_alloc(self->xc_handle, socketmap, sizeof(*socketmap) * (MAX_CPU_INDEX+1)); + if ( socketmap == NULL ) + goto out; + nodemap = xc_hypercall_buffer_alloc(self->xc_handle, nodemap, sizeof(*nodemap) * (MAX_CPU_INDEX+1)); + if ( nodemap == NULL ) + goto out; + + xc_set_xen_guest_handle(tinfo.cpu_to_core, coremap); + xc_set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); + xc_set_xen_guest_handle(tinfo.cpu_to_node, nodemap); tinfo.max_cpu_index = MAX_CPU_INDEX; if ( xc_topologyinfo(self->xc_handle, &tinfo) != 0 ) - return pyxc_error_to_exception(self->xc_handle); + goto out; max_cpu_index = tinfo.max_cpu_index; if ( max_cpu_index > MAX_CPU_INDEX ) @@ -1271,11 +1281,15 @@ static PyObject *pyxc_topologyinfo(XcObj PyDict_SetItemString(ret_obj, "cpu_to_socket", cpu_to_socket_obj); Py_DECREF(cpu_to_socket_obj); - + PyDict_SetItemString(ret_obj, "cpu_to_node", cpu_to_node_obj); Py_DECREF(cpu_to_node_obj); - - return ret_obj; + +out: + xc_hypercall_buffer_free(self->xc_handle, coremap); + xc_hypercall_buffer_free(self->xc_handle, socketmap); + xc_hypercall_buffer_free(self->xc_handle, nodemap); + return ret_obj ? 
ret_obj : pyxc_error_to_exception(self->xc_handle); #undef MAX_CPU_INDEX } @@ -1285,20 +1299,30 @@ static PyObject *pyxc_numainfo(XcObject xc_numainfo_t ninfo = { 0 }; int i, j, max_node_index; uint64_t free_heap; - PyObject *ret_obj, *node_to_node_dist_list_obj; + PyObject *ret_obj = NULL, *node_to_node_dist_list_obj; PyObject *node_to_memsize_obj, *node_to_memfree_obj; PyObject *node_to_dma32_mem_obj, *node_to_node_dist_obj; - xc_node_to_memsize_t node_memsize[MAX_NODE_INDEX + 1]; - xc_node_to_memfree_t node_memfree[MAX_NODE_INDEX + 1]; - xc_node_to_node_dist_t nodes_dist[(MAX_NODE_INDEX+1) * (MAX_NODE_INDEX+1)]; + DECLARE_HYPERCALL_BUFFER(xc_node_to_memsize_t, node_memsize); + DECLARE_HYPERCALL_BUFFER(xc_node_to_memfree_t, node_memfree); + DECLARE_HYPERCALL_BUFFER(xc_node_to_node_dist_t, nodes_dist); - set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); - set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); - set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); + node_memsize = xc_hypercall_buffer_alloc(self->xc_handle, node_memsize, sizeof(*node_memsize)*(MAX_NODE_INDEX+1)); + if ( node_memsize == NULL ) + goto out; + node_memfree = xc_hypercall_buffer_alloc(self->xc_handle, node_memfree, sizeof(*node_memfree)*(MAX_NODE_INDEX+1)); + if ( node_memfree == NULL ) + goto out; + nodes_dist = xc_hypercall_buffer_alloc(self->xc_handle, nodes_dist, sizeof(*nodes_dist)*(MAX_NODE_INDEX+1)*(MAX_NODE_INDEX+1)); + if ( nodes_dist == NULL ) + goto out; + + xc_set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); + xc_set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); + xc_set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); ninfo.max_node_index = MAX_NODE_INDEX; if ( xc_numainfo(self->xc_handle, &ninfo) != 0 ) - return pyxc_error_to_exception(self->xc_handle); + goto out; max_node_index = ninfo.max_node_index; if ( max_node_index > MAX_NODE_INDEX ) @@ -1363,8 +1387,12 @@ static PyObject *pyxc_numainfo(XcObject PyDict_SetItemString(ret_obj, "node_to_node_dist", node_to_node_dist_list_obj); Py_DECREF(node_to_node_dist_list_obj); - - return ret_obj; + +out: + xc_hypercall_buffer_free(self->xc_handle, node_memsize); + xc_hypercall_buffer_free(self->xc_handle, node_memfree); + xc_hypercall_buffer_free(self->xc_handle, nodes_dist); + return ret_obj ? ret_obj : pyxc_error_to_exception(self->xc_handle); #undef MAX_NODE_INDEX } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 22 of 25] xenpm: use hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 26a8a8cd558e7a9ed92de6bc3605fef97571735f # Parent 9729034ef96a2247fa913470109299b6b1344e34 xenpm: use hypercall buffers. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 9729034ef96a -r 26a8a8cd558e tools/misc/xenpm.c --- a/tools/misc/xenpm.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/misc/xenpm.c Thu Oct 21 09:37:35 2010 +0100 @@ -317,15 +317,25 @@ static void signal_int_handler(int signo int i, j, k, ret; struct timeval tv; int cx_cap = 0, px_cap = 0; - uint32_t cpu_to_core[MAX_NR_CPU]; - uint32_t cpu_to_socket[MAX_NR_CPU]; - uint32_t cpu_to_node[MAX_NR_CPU]; + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_core); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_socket); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_node); xc_topologyinfo_t info = { 0 }; + + cpu_to_core = xc_hypercall_buffer_alloc(xc_handle, cpu_to_core, sizeof(*cpu_to_core) * MAX_NR_CPU); + cpu_to_socket = xc_hypercall_buffer_alloc(xc_handle, cpu_to_socket, sizeof(*cpu_to_socket) * MAX_NR_CPU); + cpu_to_node = xc_hypercall_buffer_alloc(xc_handle, cpu_to_node, sizeof(*cpu_to_node) * MAX_NR_CPU); + + if ( cpu_to_core == NULL || cpu_to_socket == NULL || cpu_to_node == NULL ) + { + fprintf(stderr, "failed to allocate hypercall buffers\n"); + goto out; + } if ( gettimeofday(&tv, NULL) == -1 ) { fprintf(stderr, "failed to get timeofday\n"); - return ; + goto out ; } usec_end = tv.tv_sec * 1000000UL + tv.tv_usec; @@ -385,9 +395,9 @@ static void signal_int_handler(int signo } } - set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU - 1; ret = xc_topologyinfo(xc_handle, &info); @@ -485,6 +495,10 @@ static void signal_int_handler(int signo free(pxstat); free(sum); free(avgfreq); +out: + xc_hypercall_buffer_free(xc_handle, cpu_to_core); + xc_hypercall_buffer_free(xc_handle, cpu_to_socket); + xc_hypercall_buffer_free(xc_handle, cpu_to_node); xc_interface_close(xc_handle); exit(0); } @@ -934,21 +948,31 @@ out: void cpu_topology_func(int argc, char *argv[]) { - uint32_t cpu_to_core[MAX_NR_CPU]; - uint32_t cpu_to_socket[MAX_NR_CPU]; - uint32_t cpu_to_node[MAX_NR_CPU]; + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_core); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_socket); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_node); xc_topologyinfo_t info = { 0 }; int i; - set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + cpu_to_core = xc_hypercall_buffer_alloc(xc_handle, cpu_to_core, sizeof(*cpu_to_core) * MAX_NR_CPU); + cpu_to_socket = xc_hypercall_buffer_alloc(xc_handle, cpu_to_socket, sizeof(*cpu_to_socket) * MAX_NR_CPU); + cpu_to_node = xc_hypercall_buffer_alloc(xc_handle, cpu_to_node, sizeof(*cpu_to_node) * MAX_NR_CPU); + + if ( cpu_to_core == NULL || cpu_to_socket == NULL || cpu_to_node == NULL ) + { + fprintf(stderr, "failed to allocate hypercall buffers\n"); + goto out; + } + + xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU-1; if ( 
xc_topologyinfo(xc_handle, &info) ) { printf("Can not get Xen CPU topology: %d\n", errno); - return; + goto out; } if ( info.max_cpu_index > (MAX_NR_CPU-1) ) @@ -962,6 +986,10 @@ void cpu_topology_func(int argc, char *a printf("CPU%d\t %d\t %d\t %d\n", i, cpu_to_core[i], cpu_to_socket[i], cpu_to_node[i]); } +out: + xc_hypercall_buffer_free(xc_handle, cpu_to_core); + xc_hypercall_buffer_free(xc_handle, cpu_to_socket); + xc_hypercall_buffer_free(xc_handle, cpu_to_node); } void set_sched_smt_func(int argc, char *argv[]) _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 23 of 25] secpol: use hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 970a248771788f44fac6e4139972deb3af40a280 # Parent 26a8a8cd558e7a9ed92de6bc3605fef97571735f secpol: use hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 26a8a8cd558e -r 970a24877178 tools/security/secpol_tool.c --- a/tools/security/secpol_tool.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/security/secpol_tool.c Thu Oct 21 09:37:35 2010 +0100 @@ -242,11 +242,14 @@ int acm_get_ssidref(xc_interface *xc_han uint16_t *ste_ref) { int ret; + DECLARE_HYPERCALL_BUFFER(struct acm_ssid_buffer, ssid); + size_t ssid_buffer_size = 4096; struct acm_getssid getssid; - char buf[4096]; - struct acm_ssid_buffer *ssid = (struct acm_ssid_buffer *)buf; - set_xen_guest_handle(getssid.ssidbuf, buf); - getssid.ssidbuf_size = sizeof(buf); + ssid = xc_hypercall_buffer_alloc(xc_handle, ssid, ssid_buffer_size); + if ( ssid == NULL ) + return 1; + xc_set_xen_guest_handle(getssid.ssidbuf, ssid); + getssid.ssidbuf_size = ssid_buffer_size; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; ret = xc_acm_op(xc_handle, ACMOP_getssid, &getssid, sizeof(getssid)); @@ -254,23 +257,27 @@ int acm_get_ssidref(xc_interface *xc_han *chwall_ref = ssid->ssidref & 0xffff; *ste_ref = ssid->ssidref >> 16; } + xc_hypercall_buffer_free(xc_handle, ssid); return ret; } /******************************* get policy ******************************/ -#define PULL_CACHE_SIZE 8192 -uint8_t pull_buffer[PULL_CACHE_SIZE]; - int acm_domain_getpolicy(xc_interface *xc_handle) { + DECLARE_HYPERCALL_BUFFER(uint8_t, pull_buffer); + size_t pull_cache_size = 8192; struct acm_getpolicy getpolicy; int ret; uint16_t chwall_ref, ste_ref; - memset(pull_buffer, 0x00, sizeof(pull_buffer)); - set_xen_guest_handle(getpolicy.pullcache, pull_buffer); - getpolicy.pullcache_size = sizeof(pull_buffer); + pull_buffer = xc_hypercall_buffer_alloc(xc_handle, pull_buffer, pull_cache_size); + if ( pull_buffer == NULL ) + return -1; + + memset(pull_buffer, 0x00, pull_cache_size); + xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + getpolicy.pullcache_size = pull_cache_size; ret = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); if (ret >= 0) { ret = acm_get_ssidref(xc_handle, 0, &chwall_ref, &ste_ref); @@ -284,8 +291,10 @@ int acm_domain_getpolicy(xc_interface *x } /* dump policy */ - acm_dump_policy_buffer(pull_buffer, sizeof(pull_buffer), + acm_dump_policy_buffer(pull_buffer, pull_cache_size, chwall_ref, ste_ref); + + xc_hypercall_buffer_free(xc_handle, pull_buffer); return ret; } @@ -293,11 +302,14 @@ int acm_domain_getpolicy(xc_interface *x /************************ dump binary policy ******************************/ static int load_file(const char *filename, - uint8_t **buffer, off_t *len) + uint8_t **buffer, off_t *len, + xc_interface *xc_handle, + xc_hypercall_buffer_t *hcall) { struct stat mystat; int ret = 0; int fd; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(hcall); if ((ret = stat(filename, &mystat)) != 0) { printf("File %s not found.\n", filename); @@ -307,9 +319,16 @@ static int load_file(const char *filenam *len = mystat.st_size; - if ((*buffer = malloc(*len)) == NULL) { - ret = -ENOMEM; - goto out; + if ( hcall == NULL ) { + if ((*buffer = malloc(*len)) == NULL) { + ret = -ENOMEM; + goto out; + } + } else { + if ((*buffer = xc_hypercall_buffer_alloc(xc_handle, hcall, *len)) == NULL) { + ret = -ENOMEM; + goto out; + } } if ((fd = open(filename, O_RDONLY)) <= 0) { @@ -322,7 +341,10 @@ static int 
load_file(const char *filenam return 0; free_out: - free(*buffer); + if ( hcall == NULL ) + free(*buffer); + else + xc_hypercall_buffer_free(xc_handle, hcall); *buffer = NULL; *len = 0; out: @@ -339,7 +361,7 @@ static int acm_domain_dumppolicy(const c chwall_ssidref = (ssidref ) & 0xffff; ste_ssidref = (ssidref >> 16) & 0xffff; - if ((ret = load_file(filename, &buffer, &len)) == 0) { + if ((ret = load_file(filename, &buffer, &len, NULL, NULL)) == 0) { acm_dump_policy_buffer(buffer, len, chwall_ssidref, ste_ssidref); free(buffer); } @@ -353,11 +375,11 @@ int acm_domain_loadpolicy(xc_interface * { int ret; off_t len; - uint8_t *buffer; + DECLARE_HYPERCALL_BUFFER(uint8_t, buffer); uint16_t chwall_ssidref, ste_ssidref; struct acm_setpolicy setpolicy; - ret = load_file(filename, &buffer, &len); + ret = load_file(filename, &buffer, &len, xc_handle, HYPERCALL_BUFFER(buffer)); if (ret != 0) goto out; @@ -367,7 +389,7 @@ int acm_domain_loadpolicy(xc_interface * /* dump it and then push it down into xen/acm */ acm_dump_policy_buffer(buffer, len, chwall_ssidref, ste_ssidref); - set_xen_guest_handle(setpolicy.pushcache, buffer); + xc_set_xen_guest_handle(setpolicy.pushcache, buffer); setpolicy.pushcache_size = len; ret = xc_acm_op(xc_handle, ACMOP_setpolicy, &setpolicy, sizeof(setpolicy)); @@ -378,7 +400,7 @@ int acm_domain_loadpolicy(xc_interface * } free_out: - free(buffer); + xc_hypercall_buffer_free(xc_handle, buffer); out: return ret; } @@ -402,22 +424,27 @@ void dump_ste_stats(struct acm_ste_stats ntohl(ste_stats->gt_cachehit_count)); } -#define PULL_STATS_SIZE 8192 int acm_domain_dumpstats(xc_interface *xc_handle) { - uint8_t stats_buffer[PULL_STATS_SIZE]; + DECLARE_HYPERCALL_BUFFER(uint8_t, stats_buffer); + size_t pull_stats_size = 8192; struct acm_dumpstats dumpstats; int ret; struct acm_stats_buffer *stats; - memset(stats_buffer, 0x00, sizeof(stats_buffer)); - set_xen_guest_handle(dumpstats.pullcache, stats_buffer); - dumpstats.pullcache_size = sizeof(stats_buffer); + stats_buffer = xc_hypercall_buffer_alloc(xc_handle, stats_buffer, pull_stats_size); + if ( stats_buffer == NULL ) + return -1; + + memset(stats_buffer, 0x00, pull_stats_size); + xc_set_xen_guest_handle(dumpstats.pullcache, stats_buffer); + dumpstats.pullcache_size = pull_stats_size; ret = xc_acm_op(xc_handle, ACMOP_dumpstats, &dumpstats, sizeof(dumpstats)); if (ret < 0) { printf ("ERROR dumping policy stats. Try ''xm dmesg'' to see details.\n"); + xc_hypercall_buffer_free(xc_handle, stats_buffer); return ret; } stats = (struct acm_stats_buffer *) stats_buffer; @@ -464,6 +491,7 @@ int acm_domain_dumpstats(xc_interface *x default: printf("UNKNOWN SECONDARY POLICY ERROR!\n"); } + xc_hypercall_buffer_free(xc_handle, stats_buffer); return ret; } @@ -472,7 +500,8 @@ int main(int argc, char **argv) int main(int argc, char **argv) { - xc_interface *xc_handle, ret = 0; + xc_interface *xc_handle; + int ret = 0; if (argc < 2) usage(argv[0]); _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
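The load_file() change above shows how a helper can take an optional caller-owned hypercall buffer: DECLARE_HYPERCALL_BUFFER_ARGUMENT() re-establishes the naming which the alloc/free macros expect for an xc_hypercall_buffer_t passed by pointer. Boiled down, with invented names and error handling elided:

/* Fill '*buffer' with 'len' bytes; use hypercall-safe memory only when
 * the caller supplies a buffer to manage. */
static int example_alloc(xc_interface *xch, uint8_t **buffer, size_t len,
                         xc_hypercall_buffer_t *hcall)
{
    DECLARE_HYPERCALL_BUFFER_ARGUMENT(hcall);

    if ( hcall == NULL )
        *buffer = malloc(len); /* plain memory, e.g. for local dumping */
    else
        *buffer = xc_hypercall_buffer_alloc(xch, hcall, len);

    return (*buffer == NULL) ? -ENOMEM : 0;
}

/* Caller needing hypercall-safe memory:
 *     DECLARE_HYPERCALL_BUFFER(uint8_t, buf);
 *     example_alloc(xch, &buf, len, HYPERCALL_BUFFER(buf));
 * Caller not needing it:
 *     example_alloc(xch, &plainbuf, len, NULL);
 */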
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 24 of 25] libxc: do not align/lock buffers which do not need it
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID 42caa87197dfe69901d4d20c6432b5914b62ac07 # Parent 970a248771788f44fac6e4139972deb3af40a280 libxc: do not align/lock buffers which do not need it On restore: region_mfn is passed to xc_map_foreign_range and xc_map_foreign_bulk. In both cases the buffer is accessed from the ioctl handler in the kernel and not from any hypercall. Therefore normal copy_{to,from}_user handling in the kernel will cope with any faulting access. p2m_batch is passed to xc_domain_memory_populate_physmap which takes care of bouncing the buffer already. On save: pfn_type is passed to xc_map_foreign_bulk which does not need locking as per region_mfn above. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 970a24877178 -r 42caa87197df tools/libxc/xc_domain_restore.c --- a/tools/libxc/xc_domain_restore.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_domain_restore.c Thu Oct 21 09:37:35 2010 +0100 @@ -1172,10 +1172,8 @@ int xc_domain_restore(xc_interface *xch, ctx->p2m = calloc(dinfo->p2m_size, sizeof(xen_pfn_t)); pfn_type = calloc(dinfo->p2m_size, sizeof(unsigned long)); - region_mfn = xc_memalign(PAGE_SIZE, ROUNDUP( - MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); - ctx->p2m_batch = xc_memalign( - PAGE_SIZE, ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); + region_mfn = malloc(ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); + ctx->p2m_batch = malloc(ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); if ( (ctx->p2m == NULL) || (pfn_type == NULL) || (region_mfn == NULL) || (ctx->p2m_batch == NULL) ) @@ -1189,18 +1187,6 @@ int xc_domain_restore(xc_interface *xch, ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); memset(ctx->p2m_batch, 0, ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); - - if ( lock_pages(xch, region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) ) - { - PERROR("Could not lock region_mfn"); - goto out; - } - - if ( lock_pages(xch, ctx->p2m_batch, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) ) - { - ERROR("Could not lock p2m_batch"); - goto out; - } /* Get the domain''s shared-info frame. */ domctl.cmd = XEN_DOMCTL_getdomaininfo; diff -r 970a24877178 -r 42caa87197df tools/libxc/xc_domain_save.c --- a/tools/libxc/xc_domain_save.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_domain_save.c Thu Oct 21 09:37:35 2010 +0100 @@ -1071,8 +1071,7 @@ int xc_domain_save(xc_interface *xch, in analysis_phase(xch, dom, ctx, HYPERCALL_BUFFER(to_skip), 0); - pfn_type = xc_memalign(PAGE_SIZE, ROUNDUP( - MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); + pfn_type = malloc(ROUNDUP(MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); pfn_batch = calloc(MAX_BATCH_SIZE, sizeof(*pfn_batch)); pfn_err = malloc(MAX_BATCH_SIZE * sizeof(*pfn_err)); if ( (pfn_type == NULL) || (pfn_batch == NULL) || (pfn_err == NULL) ) @@ -1083,12 +1082,6 @@ int xc_domain_save(xc_interface *xch, in } memset(pfn_type, 0, ROUNDUP(MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); - - if ( lock_pages(xch, pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) ) - { - PERROR("Unable to lock pfn_type array"); - goto out; - } /* Setup the mfn_to_pfn table mapping */ if ( !(ctx->live_m2p = xc_map_m2p(xch, ctx->max_mfn, PROT_READ, &ctx->m2p_mfn0)) ) _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
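To make the distinction concrete, the deciding question is which side of the privcmd interface dereferences the buffer. Two illustrative fragments, not code from the patch:

/* (a) Consumed by the kernel's ioctl handler via copy_from_user(),
 *     which can take and recover from faults: plain heap memory. */
xen_pfn_t *region_mfn = malloc(MAX_BATCH_SIZE * sizeof(*region_mfn));

/* (b) Dereferenced by the hypervisor while the hypercall is in
 *     flight, where a fault cannot be serviced: must come from the
 *     hypercall buffer pool. */
DECLARE_HYPERCALL_BUFFER(xen_pfn_t, pfn_batch);
pfn_batch = xc_hypercall_buffer_alloc(xch, pfn_batch,
                                      MAX_BATCH_SIZE * sizeof(*pfn_batch));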
Ian Campbell
2010-Oct-21 10:59 UTC
[Xen-devel] [PATCH 25 of 25] libxc: finalise transition to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287650255 -3600 # Node ID a4e1a2dbc215e601acf69b212f82cea767f3330c # Parent 42caa87197dfe69901d4d20c6432b5914b62ac07 libxc: finalise transition to hypercall buffers. Rename xc_set_xen_guest_handle to set_xen_guest_handle[0] and remove now unused functions: - xc_memalign - lock_pages - unlock_pages - hcall_buf_prep - hcall_buf_release [0] sed -i -e ''s/xc_set_xen_guest_handle/set_xen_guest_handle/g'' \ tools/libxc/*.[ch] \ tools/python/xen/lowlevel/xc/xc.c \ tools/python/xen/lowlevel/acm/acm.c \ tools/libxc/ia64/xc_ia64_stubs.c \ tools/security/secpol_tool.c \ tools/misc/xenpm.c Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_cpupool.c --- a/tools/libxc/xc_cpupool.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_cpupool.c Thu Oct 21 09:37:35 2010 +0100 @@ -88,7 +88,7 @@ int xc_cpupool_getinfo(xc_interface *xch sysctl.cmd = XEN_SYSCTL_cpupool_op; sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_INFO; sysctl.u.cpupool_op.cpupool_id = poolid; - xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(info->cpumap) * 8; err = do_sysctl_save(xch, &sysctl); @@ -165,7 +165,7 @@ int xc_cpupool_freeinfo(xc_interface *xc sysctl.cmd = XEN_SYSCTL_cpupool_op; sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_FREEINFO; - xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(*cpumap) * 8; err = do_sysctl_save(xch, &sysctl); diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_dom_boot.c --- a/tools/libxc/xc_dom_boot.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_dom_boot.c Thu Oct 21 09:37:35 2010 +0100 @@ -72,7 +72,7 @@ static int launch_vm(xc_interface *xch, domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = 0; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); if ( rc != 0 ) xc_dom_panic(xch, XC_INTERNAL_ERROR, diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_domain.c Thu Oct 21 09:37:35 2010 +0100 @@ -132,7 +132,7 @@ int xc_vcpu_setaffinity(xc_interface *xc bitmap_64_to_byte(local, cpumap, cpusize * 8); - xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; @@ -165,7 +165,7 @@ int xc_vcpu_getaffinity(xc_interface *xc domctl.domain = (domid_t)domid; domctl.u.vcpuaffinity.vcpu = vcpu; - xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; ret = do_domctl(xch, &domctl); @@ -254,7 +254,7 @@ int xc_domain_getinfolist(xc_interface * sysctl.cmd = XEN_SYSCTL_getdomaininfolist; sysctl.u.getdomaininfolist.first_domain = first_domain; sysctl.u.getdomaininfolist.max_domains = max_domains; - xc_set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); + set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); if ( xc_sysctl(xch, &sysctl) < 0 ) ret = -1; @@ -282,7 +282,7 @@ int xc_domain_hvm_getcontext(xc_interfac domctl.cmd = 
XEN_DOMCTL_gethvmcontext; domctl.domain = (domid_t)domid; domctl.u.hvmcontext.size = size; - xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); + set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); @@ -311,7 +311,7 @@ int xc_domain_hvm_getcontext_partial(xc_ domctl.domain = (domid_t) domid; domctl.u.hvmcontext_partial.type = typecode; domctl.u.hvmcontext_partial.instance = instance; - xc_set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); + set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); @@ -337,7 +337,7 @@ int xc_domain_hvm_setcontext(xc_interfac domctl.cmd = XEN_DOMCTL_sethvmcontext; domctl.domain = domid; domctl.u.hvmcontext.size = size; - xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); + set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); @@ -361,7 +361,7 @@ int xc_vcpu_getcontext(xc_interface *xch domctl.cmd = XEN_DOMCTL_getvcpucontext; domctl.domain = (domid_t)domid; domctl.u.vcpucontext.vcpu = (uint16_t)vcpu; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); @@ -420,7 +420,7 @@ int xc_shadow_control(xc_interface *xch, domctl.u.shadow_op.mb = mb ? *mb : 0; domctl.u.shadow_op.mode = mode; if (dirty_bitmap != NULL) - xc_set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, + set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, dirty_bitmap); rc = do_domctl(xch, &domctl); @@ -486,7 +486,7 @@ int xc_domain_set_memmap_limit(xc_interf e820->size = (uint64_t)map_limitkb << 10; e820->type = E820_RAM; - xc_set_xen_guest_handle(fmap.map.buffer, e820); + set_xen_guest_handle(fmap.map.buffer, e820); rc = do_memory_op(xch, XENMEM_set_memory_map, &fmap, sizeof(fmap)); @@ -559,7 +559,7 @@ int xc_domain_get_tsc_info(xc_interface domctl.cmd = XEN_DOMCTL_gettscinfo; domctl.domain = (domid_t)domid; - xc_set_xen_guest_handle(domctl.u.tsc_info.out_info, info); + set_xen_guest_handle(domctl.u.tsc_info.out_info, info); rc = do_domctl(xch, &domctl); if ( rc == 0 ) { @@ -601,7 +601,7 @@ int xc_domain_increase_reservation(xc_in return -1; } - xc_set_xen_guest_handle(reservation.extent_start, extent_start); + set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_increase_reservation, &reservation, sizeof(reservation)); @@ -664,7 +664,7 @@ int xc_domain_decrease_reservation(xc_in PERROR("Could not bounce memory for XENMEM_decrease_reservation hypercall"); return -1; } - xc_set_xen_guest_handle(reservation.extent_start, extent_start); + set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_decrease_reservation, &reservation, sizeof(reservation)); @@ -734,7 +734,7 @@ int xc_domain_populate_physmap(xc_interf PERROR("Could not bounce memory for XENMEM_populate_physmap hypercall"); return -1; } - xc_set_xen_guest_handle(reservation.extent_start, extent_start); + set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation)); @@ -796,8 +796,8 @@ int xc_domain_memory_exchange_pages(xc_i xc_hypercall_bounce_pre(xch, out_extents)) goto out; - xc_set_xen_guest_handle(exchange.in.extent_start, in_extents); - xc_set_xen_guest_handle(exchange.out.extent_start, out_extents); + set_xen_guest_handle(exchange.in.extent_start, in_extents); + set_xen_guest_handle(exchange.out.extent_start, out_extents); rc 
= do_memory_op(xch, XENMEM_exchange, &exchange, sizeof(exchange)); @@ -976,7 +976,7 @@ int xc_vcpu_setcontext(xc_interface *xch domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = vcpu; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); @@ -1124,7 +1124,7 @@ int xc_get_device_group( domctl.u.get_device_group.machine_bdf = machine_bdf; domctl.u.get_device_group.max_sdevs = max_sdevs; - xc_set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); + set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); rc = do_domctl(xch, &domctl); diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_domain_restore.c --- a/tools/libxc/xc_domain_restore.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_domain_restore.c Thu Oct 21 09:37:35 2010 +0100 @@ -1639,7 +1639,7 @@ int xc_domain_restore(xc_interface *xch, domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = (domid_t)dom; domctl.u.vcpucontext.vcpu = i; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); frc = xc_domctl(xch, &domctl); if ( frc != 0 ) { diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_linux.c --- a/tools/libxc/xc_linux.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_linux.c Thu Oct 21 09:37:35 2010 +0100 @@ -686,7 +686,7 @@ static void *_gnttab_map_table(xc_interf setup.dom = domid; setup.nr_frames = query.nr_frames; - xc_set_xen_guest_handle(setup.frame_list, frame_list); + set_xen_guest_handle(setup.frame_list, frame_list); /* XXX Any race with other setup_table hypercall? */ rc = xc_gnttab_op(xch, GNTTABOP_setup_table, &setup, sizeof(setup), diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_misc.c Thu Oct 21 09:37:35 2010 +0100 @@ -35,7 +35,7 @@ int xc_readconsolering(xc_interface *xch return -1; sysctl.cmd = XEN_SYSCTL_readconsole; - xc_set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); + set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); sysctl.u.readconsole.count = nr_chars; sysctl.u.readconsole.clear = clear; sysctl.u.readconsole.incremental = 0; @@ -67,7 +67,7 @@ int xc_send_debug_keys(xc_interface *xch return -1; sysctl.cmd = XEN_SYSCTL_debug_keys; - xc_set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); + set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); sysctl.u.debug_keys.nr_keys = len; ret = do_sysctl(xch, &sysctl); @@ -176,8 +176,8 @@ int xc_perfc_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_reset; - xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); - xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -191,8 +191,8 @@ int xc_perfc_query_number(xc_interface * sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); - xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -214,8 +214,8 @@ int xc_perfc_query(xc_interface 
*xch, sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); - xc_set_xen_guest_handle(sysctl.u.perfc_op.val, val); + set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); + set_xen_guest_handle(sysctl.u.perfc_op.val, val); return do_sysctl(xch, &sysctl); } @@ -226,7 +226,7 @@ int xc_lockprof_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_reset; - xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -239,7 +239,7 @@ int xc_lockprof_query_number(xc_interfac sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; - xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -260,7 +260,7 @@ int xc_lockprof_query(xc_interface *xch, sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; sysctl.u.lockprof_op.max_elem = *n_elems; - xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, data); + set_xen_guest_handle(sysctl.u.lockprof_op.data, data); rc = do_sysctl(xch, &sysctl); @@ -281,7 +281,7 @@ int xc_getcpuinfo(xc_interface *xch, int sysctl.cmd = XEN_SYSCTL_getcpuinfo; sysctl.u.getcpuinfo.max_cpus = max_cpus; - xc_set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); + set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); rc = do_sysctl(xch, &sysctl); @@ -413,7 +413,7 @@ int xc_hvm_track_dirty_vram( arg->domid = dom; arg->first_pfn = first_pfn; arg->nr = nr; - xc_set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap); + set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap); rc = do_xen_hypercall(xch, &hypercall); diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_offline_page.c --- a/tools/libxc/xc_offline_page.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_offline_page.c Thu Oct 21 09:37:35 2010 +0100 @@ -82,7 +82,7 @@ int xc_mark_page_online(xc_interface *xc sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_online; sysctl.u.page_offline.end = end; - xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); + set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); xc_hypercall_bounce_post(xch, status); @@ -110,7 +110,7 @@ int xc_mark_page_offline(xc_interface *x sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_offline; sysctl.u.page_offline.end = end; - xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); + set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); xc_hypercall_bounce_post(xch, status); @@ -138,7 +138,7 @@ int xc_query_page_offline_status(xc_inte sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_query_page_offline; sysctl.u.page_offline.end = end; - xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); + set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); xc_hypercall_bounce_post(xch, status); diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_pm.c --- a/tools/libxc/xc_pm.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_pm.c Thu Oct 21 09:37:35 2010 +0100 @@ -73,8 +73,8 @@ int xc_pm_get_pxstat(xc_interface *xch, sysctl.u.get_pmstat.type = PMSTAT_get_pxstat; sysctl.u.get_pmstat.cpuid = cpuid; 
sysctl.u.get_pmstat.u.getpx.total = max_px; - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, trans); - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, pt); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, trans); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, pt); ret = xc_sysctl(xch, &sysctl); if ( ret ) @@ -146,8 +146,8 @@ int xc_pm_get_cxstat(xc_interface *xch, sysctl.cmd = XEN_SYSCTL_get_pmstat; sysctl.u.get_pmstat.type = PMSTAT_get_cxstat; sysctl.u.get_pmstat.cpuid = cpuid; - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, triggers); - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, residencies); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, triggers); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, residencies); if ( (ret = xc_sysctl(xch, &sysctl)) ) goto unlock_2; @@ -219,9 +219,9 @@ int xc_get_cpufreq_para(xc_interface *xc if ( xc_hypercall_bounce_pre(xch, scaling_available_governors) ) goto unlock_3; - xc_set_xen_guest_handle(sys_para->affected_cpus, affected_cpus); - xc_set_xen_guest_handle(sys_para->scaling_available_frequencies, scaling_available_frequencies); - xc_set_xen_guest_handle(sys_para->scaling_available_governors, scaling_available_governors); + set_xen_guest_handle(sys_para->affected_cpus, affected_cpus); + set_xen_guest_handle(sys_para->scaling_available_frequencies, scaling_available_frequencies); + set_xen_guest_handle(sys_para->scaling_available_governors, scaling_available_governors); } sysctl.cmd = XEN_SYSCTL_pm_op; diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_private.c Thu Oct 21 09:37:35 2010 +0100 @@ -71,8 +71,6 @@ xc_interface *xc_interface_open(xentooll return 0; } -static void xc_clean_hcall_buf(xc_interface *xch); - int xc_interface_close(xc_interface *xch) { int rc = 0; @@ -84,8 +82,6 @@ int xc_interface_close(xc_interface *xch rc = xc_interface_close_core(xch, xch->fd); if (rc) PERROR("Could not close hypervisor interface"); } - - xc_clean_hcall_buf(xch); free(xch); return rc; @@ -191,133 +187,6 @@ void xc_report_progress_step(xc_interfac done, total); } -#ifdef __sun__ - -int lock_pages(xc_interface *xch, void *addr, size_t len) { return 0; } -void unlock_pages(xc_interface *xch, void *addr, size_t len) { } - -int hcall_buf_prep(xc_interface *xch, void **addr, size_t len) { return 0; } -void hcall_buf_release(xc_interface *xch, void **addr, size_t len) { } - -static void xc_clean_hcall_buf(xc_interface *xch) { } - -#else /* !__sun__ */ - -int lock_pages(xc_interface *xch, void *addr, size_t len) -{ - int e; - void *laddr = (void *)((unsigned long)addr & PAGE_MASK); - size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) + - PAGE_SIZE - 1) & PAGE_MASK; - e = mlock(laddr, llen); - return e; -} - -void unlock_pages(xc_interface *xch, void *addr, size_t len) -{ - void *laddr = (void *)((unsigned long)addr & PAGE_MASK); - size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) + - PAGE_SIZE - 1) & PAGE_MASK; - int saved_errno = errno; - (void)munlock(laddr, llen); - errno = saved_errno; -} - -static pthread_key_t hcall_buf_pkey; -static pthread_once_t hcall_buf_pkey_once = PTHREAD_ONCE_INIT; -struct hcall_buf { - xc_interface *xch; - void *buf; - void *oldbuf; -}; - -static void _xc_clean_hcall_buf(void *m) -{ - struct hcall_buf *hcall_buf = m; - - if ( hcall_buf ) - { - if ( hcall_buf->buf ) - { - unlock_pages(hcall_buf->xch, 
hcall_buf->buf, PAGE_SIZE); - free(hcall_buf->buf); - } - - free(hcall_buf); - } - - pthread_setspecific(hcall_buf_pkey, NULL); -} - -static void _xc_init_hcall_buf(void) -{ - pthread_key_create(&hcall_buf_pkey, _xc_clean_hcall_buf); -} - -static void xc_clean_hcall_buf(xc_interface *xch) -{ - pthread_once(&hcall_buf_pkey_once, _xc_init_hcall_buf); - - _xc_clean_hcall_buf(pthread_getspecific(hcall_buf_pkey)); -} - -int hcall_buf_prep(xc_interface *xch, void **addr, size_t len) -{ - struct hcall_buf *hcall_buf; - - pthread_once(&hcall_buf_pkey_once, _xc_init_hcall_buf); - - hcall_buf = pthread_getspecific(hcall_buf_pkey); - if ( !hcall_buf ) - { - hcall_buf = calloc(1, sizeof(*hcall_buf)); - if ( !hcall_buf ) - goto out; - hcall_buf->xch = xch; - pthread_setspecific(hcall_buf_pkey, hcall_buf); - } - - if ( !hcall_buf->buf ) - { - hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE); - if ( !hcall_buf->buf || lock_pages(xch, hcall_buf->buf, PAGE_SIZE) ) - { - free(hcall_buf->buf); - hcall_buf->buf = NULL; - goto out; - } - } - - if ( (len < PAGE_SIZE) && !hcall_buf->oldbuf ) - { - memcpy(hcall_buf->buf, *addr, len); - hcall_buf->oldbuf = *addr; - *addr = hcall_buf->buf; - return 0; - } - - out: - return lock_pages(xch, *addr, len); -} - -void hcall_buf_release(xc_interface *xch, void **addr, size_t len) -{ - struct hcall_buf *hcall_buf = pthread_getspecific(hcall_buf_pkey); - - if ( hcall_buf && (hcall_buf->buf == *addr) ) - { - memcpy(hcall_buf->oldbuf, *addr, len); - *addr = hcall_buf->oldbuf; - hcall_buf->oldbuf = NULL; - } - else - { - unlock_pages(xch, *addr, len); - } -} - -#endif - /* NB: arr must be locked */ int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom, unsigned int num, xen_pfn_t *arr) @@ -330,7 +199,7 @@ int xc_get_pfn_type_batch(xc_interface * domctl.cmd = XEN_DOMCTL_getpageframeinfo3; domctl.domain = (domid_t)dom; domctl.u.getpageframeinfo3.num = num; - xc_set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); + set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); rc = do_domctl(xch, &domctl); xc_hypercall_bounce_post(xch, arr); return rc; @@ -488,7 +357,7 @@ int xc_machphys_mfn_list(xc_interface *x return -1; } - xc_set_xen_guest_handle(xmml.extent_start, extent_start); + set_xen_guest_handle(xmml.extent_start, extent_start); rc = do_memory_op(xch, XENMEM_machphys_mfn_list, &xmml, sizeof(xmml)); if (rc || xmml.nr_extents != max_extents) rc = -1; @@ -522,7 +391,7 @@ int xc_get_pfn_list(xc_interface *xch, domctl.cmd = XEN_DOMCTL_getmemlist; domctl.domain = (domid_t)domid; domctl.u.getmemlist.max_pfns = max_pfns; - xc_set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); + set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); ret = do_domctl(xch, &domctl); @@ -782,22 +651,6 @@ int xc_ffs64(uint64_t x) return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0; } -void *xc_memalign(size_t alignment, size_t size) -{ -#if defined(_POSIX_C_SOURCE) && !defined(__sun__) - int ret; - void *ptr; - ret = posix_memalign(&ptr, alignment, size); - if (ret != 0) - return NULL; - return ptr; -#elif defined(__NetBSD__) || defined(__OpenBSD__) - return valloc(size); -#else - return memalign(alignment, size); -#endif -} - /* * Local variables: * mode: C diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_private.h Thu Oct 21 09:37:35 2010 +0100 @@ -97,14 +97,6 @@ void xc_report_progress_step(xc_interfac #define ERROR(_m, _a...) 
xc_report_error(xch,XC_INTERNAL_ERROR,_m , ## _a ) #define PERROR(_m, _a...) xc_report_error(xch,XC_INTERNAL_ERROR,_m \ " (%d = %s)", ## _a , errno, safe_strerror(errno)) - -void *xc_memalign(size_t alignment, size_t size); - -int lock_pages(xc_interface *xch, void *addr, size_t len); -void unlock_pages(xc_interface *xch, void *addr, size_t len); - -int hcall_buf_prep(xc_interface *xch, void **addr, size_t len); -void hcall_buf_release(xc_interface *xch, void **addr, size_t len); /* * HYPERCALL ARGUMENT BUFFERS diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_tbuf.c --- a/tools/libxc/xc_tbuf.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_tbuf.c Thu Oct 21 09:37:35 2010 +0100 @@ -132,7 +132,7 @@ int xc_tbuf_set_cpu_mask(xc_interface *x bitmap_64_to_byte(bytemap, &mask64, sizeof (mask64) * 8); - xc_set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); + set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8; ret = do_sysctl(xch, &sysctl); diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xc_tmem.c --- a/tools/libxc/xc_tmem.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xc_tmem.c Thu Oct 21 09:37:35 2010 +0100 @@ -86,7 +86,7 @@ int xc_tmem_control(xc_interface *xch, } } - xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + set_xen_guest_handle(op.u.ctrl.buf, buf); rc = do_tmem_op(xch, &op); @@ -136,7 +136,7 @@ int xc_tmem_control_oid(xc_interface *xc } } - xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + set_xen_guest_handle(op.u.ctrl.buf, buf); rc = do_tmem_op(xch, &op); diff -r 42caa87197df -r a4e1a2dbc215 tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/libxc/xenctrl.h Thu Oct 21 09:37:35 2010 +0100 @@ -252,7 +252,8 @@ typedef struct xc_hypercall_buffer xc_hy * Set a xen_guest_handle in a type safe manner, ensuring that the * data pointer has been correctly allocated. 
*/ -#define xc_set_xen_guest_handle(_hnd, _val) \ +#undef set_xen_guest_handle +#define set_xen_guest_handle(_hnd, _val) \ do { \ xc_hypercall_buffer_t _val1; \ typeof(XC__HYPERCALL_BUFFER_NAME(_val)) *_val2 = HYPERCALL_BUFFER(_val); \ @@ -260,7 +261,7 @@ typedef struct xc_hypercall_buffer xc_hy set_xen_guest_handle_raw(_hnd, (_val2)->hbuf); \ } while (0) -/* Use with xc_set_xen_guest_handle in place of NULL */ +/* Use with set_xen_guest_handle in place of NULL */ extern xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL); /* diff -r 42caa87197df -r a4e1a2dbc215 tools/misc/xenpm.c --- a/tools/misc/xenpm.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/misc/xenpm.c Thu Oct 21 09:37:35 2010 +0100 @@ -395,9 +395,9 @@ static void signal_int_handler(int signo } } - xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU - 1; ret = xc_topologyinfo(xc_handle, &info); @@ -964,9 +964,9 @@ void cpu_topology_func(int argc, char *a goto out; } - xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU-1; if ( xc_topologyinfo(xc_handle, &info) ) diff -r 42caa87197df -r a4e1a2dbc215 tools/python/xen/lowlevel/acm/acm.c --- a/tools/python/xen/lowlevel/acm/acm.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/python/xen/lowlevel/acm/acm.c Thu Oct 21 09:37:35 2010 +0100 @@ -53,7 +53,7 @@ static void *__getssid(xc_interface *xc_ } memset(buf, 0, SSID_BUFFER_SIZE); - xc_set_xen_guest_handle(getssid.ssidbuf, buffer); + set_xen_guest_handle(getssid.ssidbuf, buffer); getssid.ssidbuf_size = SSID_BUFFER_SIZE; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; @@ -254,10 +254,10 @@ static PyObject *chgpolicy(PyObject *sel chgpolicy.delarray_size = del_arr_len; chgpolicy.chgarray_size = chg_arr_len; chgpolicy.errarray_size = sizeof(*error_array)*errarray_mbrs; - xc_set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol_buf); - xc_set_xen_guest_handle(chgpolicy.del_array, del_arr_buf); - xc_set_xen_guest_handle(chgpolicy.chg_array, chg_arr_buf); - xc_set_xen_guest_handle(chgpolicy.err_array, error_array); + set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol_buf); + set_xen_guest_handle(chgpolicy.del_array, del_arr_buf); + set_xen_guest_handle(chgpolicy.chg_array, chg_arr_buf); + set_xen_guest_handle(chgpolicy.err_array, error_array); rc = xc_acm_op(xc_handle, ACMOP_chgpolicy, &chgpolicy, sizeof(chgpolicy)); @@ -299,7 +299,7 @@ static PyObject *getpolicy(PyObject *sel goto out; memset(&getpolicy, 0x0, sizeof(getpolicy)); - xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + set_xen_guest_handle(getpolicy.pullcache, pull_buffer); getpolicy.pullcache_size = sizeof(pull_buffer); rc = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); @@ -356,8 +356,8 @@ static PyObject *relabel_domains(PyObjec reldoms.relabel_map_size = rel_rules_len; reldoms.errarray_size = sizeof(error_array); - xc_set_xen_guest_handle(reldoms.relabel_map, 
relabel_rules_buf); - xc_set_xen_guest_handle(reldoms.err_array, error_array); + set_xen_guest_handle(reldoms.relabel_map, relabel_rules_buf); + set_xen_guest_handle(reldoms.err_array, error_array); rc = xc_acm_op(xc_handle, ACMOP_relabeldoms, &reldoms, sizeof(reldoms)); diff -r 42caa87197df -r a4e1a2dbc215 tools/python/xen/lowlevel/xc/xc.c --- a/tools/python/xen/lowlevel/xc/xc.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/python/xen/lowlevel/xc/xc.c Thu Oct 21 09:37:35 2010 +0100 @@ -1222,9 +1222,9 @@ static PyObject *pyxc_topologyinfo(XcObj if ( nodemap == NULL ) goto out; - xc_set_xen_guest_handle(tinfo.cpu_to_core, coremap); - xc_set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); - xc_set_xen_guest_handle(tinfo.cpu_to_node, nodemap); + set_xen_guest_handle(tinfo.cpu_to_core, coremap); + set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); + set_xen_guest_handle(tinfo.cpu_to_node, nodemap); tinfo.max_cpu_index = MAX_CPU_INDEX; if ( xc_topologyinfo(self->xc_handle, &tinfo) != 0 ) @@ -1316,9 +1316,9 @@ static PyObject *pyxc_numainfo(XcObject if ( nodes_dist == NULL ) goto out; - xc_set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); - xc_set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); - xc_set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); + set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); + set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); + set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); ninfo.max_node_index = MAX_NODE_INDEX; if ( xc_numainfo(self->xc_handle, &ninfo) != 0 ) diff -r 42caa87197df -r a4e1a2dbc215 tools/security/secpol_tool.c --- a/tools/security/secpol_tool.c Thu Oct 21 09:37:35 2010 +0100 +++ b/tools/security/secpol_tool.c Thu Oct 21 09:37:35 2010 +0100 @@ -248,7 +248,7 @@ int acm_get_ssidref(xc_interface *xc_han ssid = xc_hypercall_buffer_alloc(xc_handle, ssid, ssid_buffer_size); if ( ssid == NULL ) return 1; - xc_set_xen_guest_handle(getssid.ssidbuf, ssid); + set_xen_guest_handle(getssid.ssidbuf, ssid); getssid.ssidbuf_size = ssid_buffer_size; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; @@ -276,7 +276,7 @@ int acm_domain_getpolicy(xc_interface *x return -1; memset(pull_buffer, 0x00, pull_cache_size); - xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + set_xen_guest_handle(getpolicy.pullcache, pull_buffer); getpolicy.pullcache_size = pull_cache_size; ret = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); if (ret >= 0) { @@ -389,7 +389,7 @@ int acm_domain_loadpolicy(xc_interface * /* dump it and then push it down into xen/acm */ acm_dump_policy_buffer(buffer, len, chwall_ssidref, ste_ssidref); - xc_set_xen_guest_handle(setpolicy.pushcache, buffer); + set_xen_guest_handle(setpolicy.pushcache, buffer); setpolicy.pushcache_size = len; ret = xc_acm_op(xc_handle, ACMOP_setpolicy, &setpolicy, sizeof(setpolicy)); @@ -437,7 +437,7 @@ int acm_domain_dumpstats(xc_interface *x return -1; memset(stats_buffer, 0x00, pull_stats_size); - xc_set_xen_guest_handle(dumpstats.pullcache, stats_buffer); + set_xen_guest_handle(dumpstats.pullcache, stats_buffer); dumpstats.pullcache_size = pull_stats_size; ret = xc_acm_op(xc_handle, ACMOP_dumpstats, &dumpstats, sizeof(dumpstats)); _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
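A note on how the override in the patch above achieves its compile-time checking: set_xen_guest_handle pastes the variable name into XC__HYPERCALL_BUFFER_NAME(_val), so the macro only compiles when a shadow xc_hypercall_buffer_t exists for that exact name. A minimal sketch of the effect (the sysctl field and the size sz are stand-ins for illustration, not part of the patch):

    void good(xc_interface *xch, size_t sz)
    {
        DECLARE_SYSCTL;
        DECLARE_HYPERCALL_BUFFER(uint8_t, bytemap); /* declares bytemap + its shadow */

        bytemap = xc_hypercall_buffer_alloc(xch, bytemap, sz);
        set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); /* compiles */
        /* ... */
        xc_hypercall_buffer_free(xch, bytemap);
    }

    void bad(xc_interface *xch, size_t sz)
    {
        DECLARE_SYSCTL;
        uint8_t *raw = malloc(sz);
        /* Does not compile: there is no xc__hypercall_buffer_raw shadow in
         * scope, so arbitrary memory can no longer slip through unnoticed. */
        set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, raw);
    }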
Ian Campbell
2010-Oct-22 12:06 UTC
[Xen-devel] Re: [PATCH 00 of 25] libxc: Hypercall buffers
On Thu, 2010-10-21 at 11:58 +0100, Ian Campbell wrote:
> This series addresses (1) and (2) but does not directly address (3)
> other than by encapsulating the code which acquires hypercall safe
> memory into one place where it can be more easily fixed.

WRT solving (3), the approach I am considering is to implement a new misc device (e.g. /dev/xen/hypercall). The device would support mmap, providing suitably locked etc. memory for use as hypercall arguments, and would also support the existing IOCTL_PRIVCMD_HYPERCALL (deprecating that ioctl on /proc/xen/privcmd). There are a couple of reasons for a new device instead of extending the existing privcmd: firstly, I think it's generally a more upstream-friendly/acceptable interface; secondly, privcmd already implements mmap as part 1 of the 2-part IOCTL_PRIVCMD_MMAP scheme, which makes retrofitting the desired new behaviour in a forwards/backwards compatible way a bit difficult.

It might also be nice to migrate IOCTL_PRIVCMD_MMAP* (or a single generic interface subsuming them) over to /dev/xen as well, either as part of this new device or as a new /dev/xen/m(achine)mem or similar. This would allow deprecation of /proc/xen/privcmd entirely.

Opinions?

Ian.

_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
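For concreteness, a userspace allocation through such a device might look like the sketch below. Everything in it is hypothetical: the device node, its path and its mmap semantics exist only as the proposal above, nothing is implemented yet.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *get_hcall_page(int *fdp)
    {
        int fd = open("/dev/xen/hypercall", O_RDWR);
        void *p;

        if (fd < 0)
            return NULL;
        /* The kernel would hand back memory guaranteed safe to pass to a
         * hypercall (mapped, and immune to minor faults such as those
         * caused by page migration). */
        p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            close(fd);
            return NULL;
        }
        *fdp = fd;
        return p;
    }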
libxc currently locks various data structures present on the stack using mlock(2) in order to try and make them safe for passing to hypercalls (which requires the memory to be mapped).

There are several issues with this approach:

1) mlock/munlock do not nest, therefore mlocking multiple pieces of data on the stack which happen to share a page causes everything to be unlocked on the first munlock, not the last. This is likely to be OK for the uses in libxc taken in isolation today, but could impact any caller of libxc which uses mlock itself.

2) mlocking only parts of the stack is considered by many to be a dubious, if strictly speaking allowed by the relevant specifications, use of mlock.

3) mlock may not provide the semantics required for hypercall-safe memory. mlock simply ensures that there can be no major faults (page faults requiring I/O to satisfy) but does not necessarily rule out minor faults (e.g. due to page migration).

The following introduces an explicit hypercall-safe memory pool API which includes support for bouncing user-supplied memory buffers into suitable memory.

This series addresses (1) and (2) but does not directly address (3) other than by encapsulating the code which acquires hypercall safe memory in one place where it can be more easily fixed.

There is also the slightly separate issue of code which forgets to lock buffers as necessary, and therefore this series overrides the Xen guest-handle interfaces to attempt to improve compile-time checking for the correct use of the memory pool. This scheme works for the pointers contained within hypercall argument structures but doesn't catch the actual hypercall arguments themselves. I'm open to suggestions on how to extend it cleanly to catch those cases.

The bits which touch ia64 are not even compile tested since I do not have access to a suitable userspace-capable cross compiler.

Changes since last time:

- rebased on top of recent cpupool changes, conflicts in xc_cpupool_getinfo and xc_cpupool_freeinfo.

_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
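A minimal illustration of issue (1), assuming the two arrays happen to land in the same stack page (illustrative only, not libxc code):

    #include <sys/mman.h>

    void nesting_hazard(void)
    {
        char a[64], b[64];        /* may well share one stack page */

        mlock(a, sizeof(a));      /* page locked */
        mlock(b, sizeof(b));      /* locked "again": no nesting count is kept */
        munlock(a, sizeof(a));    /* unlocks the whole page ... */
        /* ... so b is already unlocked here, whatever its owner believes */
        munlock(b, sizeof(b));
    }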
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287742257 -3600 # Node ID 38e25ffde90ec62f659f08996a828ef24f0ee8fb # Parent 1f5676c9f1266d49a5fd1d8fdd84e60d7fe357a6 libxc: infrastructure for hypercall safe data buffers. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 1f5676c9f126 -r 38e25ffde90e tools/libxc/Makefile --- a/tools/libxc/Makefile Fri Oct 22 10:27:31 2010 +0100 +++ b/tools/libxc/Makefile Fri Oct 22 11:10:57 2010 +0100 @@ -27,6 +27,7 @@ CTRL_SRCS-y += xc_mem_event.c CTRL_SRCS-y += xc_mem_event.c CTRL_SRCS-y += xc_mem_paging.c CTRL_SRCS-y += xc_memshr.c +CTRL_SRCS-y += xc_hcall_buf.c CTRL_SRCS-y += xtl_core.c CTRL_SRCS-y += xtl_logger_stdio.c CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c diff -r 1f5676c9f126 -r 38e25ffde90e tools/libxc/xc_hcall_buf.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tools/libxc/xc_hcall_buf.c Fri Oct 22 11:10:57 2010 +0100 @@ -0,0 +1,160 @@ +/* + * Copyright (c) 2010, Citrix Systems, Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; + * version 2.1 of the License. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include <inttypes.h> +#include "xc_private.h" +#include "xg_private.h" + +xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL) = { + .hbuf = NULL, + .param_shadow = NULL, + HYPERCALL_BUFFER_INIT_NO_BOUNCE +}; + +void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages) +{ + size_t size = nr_pages * PAGE_SIZE; + void *p; +#if defined(_POSIX_C_SOURCE) && !defined(__sun__) + int ret; + ret = posix_memalign(&p, PAGE_SIZE, size); + if (ret != 0) + return NULL; +#elif defined(__NetBSD__) || defined(__OpenBSD__) + p = valloc(size); +#else + p = memalign(PAGE_SIZE, size); +#endif + + if (!p) + return NULL; + +#ifndef __sun__ + if ( mlock(p, size) < 0 ) + { + free(p); + return NULL; + } +#endif + + b->hbuf = p; + + memset(p, 0, size); + return b->hbuf; +} + +void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages) +{ + if ( b->hbuf == NULL ) + return; + +#ifndef __sun__ + (void) munlock(b->hbuf, nr_pages * PAGE_SIZE); +#endif + + free(b->hbuf); +} + +struct allocation_header { + int nr_pages; +}; + +void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size) +{ + size_t actual_size = ROUNDUP(size + sizeof(struct allocation_header), PAGE_SHIFT); + int nr_pages = actual_size >> PAGE_SHIFT; + struct allocation_header *hdr; + + hdr = xc__hypercall_buffer_alloc_pages(xch, b, nr_pages); + if ( hdr == NULL ) + return NULL; + + b->hbuf = (void *)(hdr+1); + + hdr->nr_pages = nr_pages; + return b->hbuf; +} + +void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b) +{ + struct allocation_header *hdr; + + if (b->hbuf == NULL) + return; + + hdr = b->hbuf; + b->hbuf = --hdr; + + xc__hypercall_buffer_free_pages(xch, b, hdr->nr_pages); +} + +int xc__hypercall_bounce_pre(xc_interface *xch, 
xc_hypercall_buffer_t *b) +{ + void *p; + + /* + * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE. + */ + if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE ) + abort(); + + /* + * Don't need to bounce a NULL buffer. + */ + if ( b->ubuf == NULL ) + { + b->hbuf = NULL; + return 0; + } + + p = xc__hypercall_buffer_alloc(xch, b, b->sz); + if ( p == NULL ) + return -1; + + if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_IN || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH ) + memcpy(b->hbuf, b->ubuf, b->sz); + + return 0; +} + +void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *b) +{ + /* + * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE. + */ + if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE ) + abort(); + + if ( b->hbuf == NULL ) + return; + + if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_OUT || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH ) + memcpy(b->ubuf, b->hbuf, b->sz); + + xc__hypercall_buffer_free(xch, b); +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff -r 1f5676c9f126 -r 38e25ffde90e tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Fri Oct 22 10:27:31 2010 +0100 +++ b/tools/libxc/xc_private.h Fri Oct 22 11:10:57 2010 +0100 @@ -105,6 +105,64 @@ void unlock_pages(xc_interface *xch, voi int hcall_buf_prep(xc_interface *xch, void **addr, size_t len); void hcall_buf_release(xc_interface *xch, void **addr, size_t len); + +/* + * HYPERCALL ARGUMENT BUFFERS + * + * Augment the public hypercall buffer interface with the ability to + * bounce between user provided buffers and hypercall safe memory. + * + * Use xc_hypercall_bounce_pre/post instead of + * xc_hypercall_buffer_alloc/free(_pages). The specified user + * supplied buffer is automatically copied in/out of the hypercall + * safe memory. + */ +enum { + XC_HYPERCALL_BUFFER_BOUNCE_NONE = 0, + XC_HYPERCALL_BUFFER_BOUNCE_IN = 1, + XC_HYPERCALL_BUFFER_BOUNCE_OUT = 2, + XC_HYPERCALL_BUFFER_BOUNCE_BOTH = 3 +}; + +/* + * Declare a named bounce buffer. + * + * Normally you should use DECLARE_HYPERCALL_BOUNCE (see below). + * + * This declaration should only be used when the user pointer is + * non-trivial, e.g. when it is contained within an existing data + * structure. + */ +#define DECLARE_NAMED_HYPERCALL_BOUNCE(_name, _ubuf, _sz, _dir) \ + xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \ + .hbuf = NULL, \ + .param_shadow = NULL, \ + .sz = _sz, .dir = _dir, .ubuf = _ubuf, \ + } + +/* + * Declare a bounce buffer shadowing the named user data pointer. + */ +#define DECLARE_HYPERCALL_BOUNCE(_ubuf, _sz, _dir) DECLARE_NAMED_HYPERCALL_BOUNCE(_ubuf, _ubuf, _sz, _dir) + +/* + * Set the size of data to bounce. Useful when the size is not known + * when the bounce buffer is declared. + */ +#define HYPERCALL_BOUNCE_SET_SIZE(_buf, _sz) do { (HYPERCALL_BUFFER(_buf))->sz = _sz; } while (0) + +/* + * Initialise and free hypercall safe memory. Takes care of any required + * copying. + */ +int xc__hypercall_bounce_pre(xc_interface *xch, xc_hypercall_buffer_t *bounce); +#define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name)) +void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce); +#define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name)) + +/* + * Hypercall interfaces.
+ */ int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall); diff -r 1f5676c9f126 -r 38e25ffde90e tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Fri Oct 22 10:27:31 2010 +0100 +++ b/tools/libxc/xenctrl.h Fri Oct 22 11:10:57 2010 +0100 @@ -147,6 +147,137 @@ enum xc_open_flags { * @return 0 on success, -1 otherwise. */ int xc_interface_close(xc_interface *xch); + +/* + * HYPERCALL SAFE MEMORY BUFFER + * + * Ensure that memory which is passed to a hypercall has been + * specially allocated in order to be safe to access from the + * hypervisor. + * + * Each user data pointer is shadowed by an xc_hypercall_buffer data + * structure. You should never define an xc_hypercall_buffer type + * directly, instead use the DECLARE_HYPERCALL_BUFFER* macros below. + * + * The structure should be considered opaque and all access should be + * via the macros and helper functions defined below. + * + * Once the buffer is declared the user is responsible for explicitly + * allocating and releasing the memory using + * xc_hypercall_buffer_alloc(_pages) and + * xc_hypercall_buffer_free(_pages). + * + * Once the buffer has been allocated the user can initialise the data + * via the normal pointer. The xc_hypercall_buffer structure is + * transparently referenced by the helper macros (such as + * xc_set_xen_guest_handle) in order to check at compile time that the + * correct type of memory is being used. + */ +struct xc_hypercall_buffer { + /* Hypercall safe memory buffer. */ + void *hbuf; + + /* + * Reference to xc_hypercall_buffer passed as argument to the + * current function. + */ + struct xc_hypercall_buffer *param_shadow; + + /* + * Direction of copy for bounce buffering. + */ + int dir; + + /* Used iff dir != 0. */ + void *ubuf; + size_t sz; +}; +typedef struct xc_hypercall_buffer xc_hypercall_buffer_t; + +/* + * Construct the name of the hypercall buffer for a given variable. + * For internal use only + */ +#define XC__HYPERCALL_BUFFER_NAME(_name) xc__hypercall_buffer_##_name + +/* + * Returns the hypercall_buffer associated with a variable. + */ +#define HYPERCALL_BUFFER(_name) \ + ({ xc_hypercall_buffer_t _val1; \ + typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = &XC__HYPERCALL_BUFFER_NAME(_name); \ + (void)(&_val1 == _val2); \ + (_val2)->param_shadow ? (_val2)->param_shadow : (_val2); \ + }) + +#define HYPERCALL_BUFFER_INIT_NO_BOUNCE .dir = 0, .sz = 0, .ubuf = (void *)-1 + +/* + * Defines a hypercall buffer and user pointer with _name of _type. + * + * The user accesses the data as normal via _name which will be + * transparently converted to the hypercall buffer as necessary. + */ +#define DECLARE_HYPERCALL_BUFFER(_type, _name) \ + _type *_name = NULL; \ + xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \ + .hbuf = NULL, \ + .param_shadow = NULL, \ + HYPERCALL_BUFFER_INIT_NO_BOUNCE \ + } + +/* + * Declare the necessary data structure to allow a hypercall buffer + * passed as an argument to a function to be used in the normal way. + */ +#define DECLARE_HYPERCALL_BUFFER_ARGUMENT(_name) \ + xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \ + .hbuf = (void *)-1, \ + .param_shadow = _name, \ + HYPERCALL_BUFFER_INIT_NO_BOUNCE \ + } + +/* + * Get the hypercall buffer data pointer in a form suitable for use + * directly as a hypercall argument.
+ */ +#define HYPERCALL_BUFFER_AS_ARG(_name) \ + ({ xc_hypercall_buffer_t _val1; \ + typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = HYPERCALL_BUFFER(_name); \ + (void)(&_val1 == _val2); \ + (unsigned long)(_val2)->hbuf; \ + }) + +/* + * Set a xen_guest_handle in a type safe manner, ensuring that the + * data pointer has been correctly allocated. + */ +#define xc_set_xen_guest_handle(_hnd, _val) \ + do { \ + xc_hypercall_buffer_t _val1; \ + typeof(XC__HYPERCALL_BUFFER_NAME(_val)) *_val2 = HYPERCALL_BUFFER(_val); \ + (void) (&_val1 == _val2); \ + set_xen_guest_handle_raw(_hnd, (_val2)->hbuf); \ + } while (0) + +/* Use with xc_set_xen_guest_handle in place of NULL */ +extern xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL); + +/* + * Allocate and free hypercall buffers with byte granularity. + */ +void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size); +#define xc_hypercall_buffer_alloc(_xch, _name, _size) xc__hypercall_buffer_alloc(_xch, HYPERCALL_BUFFER(_name), _size) +void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b); +#define xc_hypercall_buffer_free(_xch, _name) xc__hypercall_buffer_free(_xch, HYPERCALL_BUFFER(_name)) + +/* + * Allocate and free hypercall buffers with page alignment. + */ +void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages); +#define xc_hypercall_buffer_alloc_pages(_xch, _name, _nr) xc__hypercall_buffer_alloc_pages(_xch, HYPERCALL_BUFFER(_name), _nr) +void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages); +#define xc_hypercall_buffer_free_pages(_xch, _name, _nr) xc__hypercall_buffer_free_pages(_xch, HYPERCALL_BUFFER(_name), _nr) /* * DOMAIN DEBUGGING FUNCTIONS _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
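To make the intended calling convention concrete, a user of the plain (non-bouncing) interface above looks roughly as follows. This is only a sketch: the domctl chosen and the buffer size are arbitrary examples, not part of the patch.

    int example(xc_interface *xch, domid_t domid)
    {
        int rc;
        DECLARE_DOMCTL;
        DECLARE_HYPERCALL_BUFFER(uint8_t, buf);

        buf = xc_hypercall_buffer_alloc(xch, buf, 1024);
        if ( buf == NULL )
            return -1;

        /* Initialise the data through the ordinary pointer ... */
        memset(buf, 0, 1024);

        /* ... but hand it to Xen via the type-checked macro. */
        domctl.cmd = XEN_DOMCTL_sethvmcontext;
        domctl.domain = domid;
        domctl.u.hvmcontext.size = 1024;
        xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, buf);
        rc = do_domctl(xch, &domctl);

        xc_hypercall_buffer_free(xch, buf);
        return rc;
    }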
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 02 of 25] libxc: convert xc_version over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 5ebd5f2c9cea0c0b43fabe61545beeb8f3ddc908 # Parent 38e25ffde90ec62f659f08996a828ef24f0ee8fb libxc: convert xc_version over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 38e25ffde90e -r 5ebd5f2c9cea tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Fri Oct 22 11:10:57 2010 +0100 +++ b/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 @@ -569,42 +569,46 @@ int xc_sysctl(xc_interface *xch, struct int xc_version(xc_interface *xch, int cmd, void *arg) { - int rc, argsize = 0; + DECLARE_HYPERCALL_BOUNCE(arg, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT); /* Size unknown until cmd decoded */ + size_t sz = 0; + int rc; switch ( cmd ) { case XENVER_extraversion: - argsize = sizeof(xen_extraversion_t); + sz = sizeof(xen_extraversion_t); break; case XENVER_compile_info: - argsize = sizeof(xen_compile_info_t); + sz = sizeof(xen_compile_info_t); break; case XENVER_capabilities: - argsize = sizeof(xen_capabilities_info_t); + sz = sizeof(xen_capabilities_info_t); break; case XENVER_changeset: - argsize = sizeof(xen_changeset_info_t); + sz = sizeof(xen_changeset_info_t); break; case XENVER_platform_parameters: - argsize = sizeof(xen_platform_parameters_t); + sz = sizeof(xen_platform_parameters_t); break; } - if ( (argsize != 0) && (lock_pages(xch, arg, argsize) != 0) ) + HYPERCALL_BOUNCE_SET_SIZE(arg, sz); + + if ( (sz != 0) && xc_hypercall_bounce_pre(xch, arg) ) { - PERROR("Could not lock memory for version hypercall"); + PERROR("Could not bounce buffer for version hypercall"); return -ENOMEM; } #ifdef VALGRIND - if (argsize != 0) - memset(arg, 0, argsize); + if (sz != 0) + memset(hypercall_bounce_get(bounce), 0, sz); #endif - rc = do_xen_version(xch, cmd, arg); + rc = do_xen_version(xch, cmd, HYPERCALL_BUFFER(arg)); - if ( argsize != 0 ) - unlock_pages(xch, arg, argsize); + if ( sz != 0 ) + xc_hypercall_bounce_post(xch, arg); return rc; } diff -r 38e25ffde90e -r 5ebd5f2c9cea tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Fri Oct 22 11:10:57 2010 +0100 +++ b/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 @@ -166,13 +166,14 @@ void xc__hypercall_bounce_post(xc_interf int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall); -static inline int do_xen_version(xc_interface *xch, int cmd, void *dest) +static inline int do_xen_version(xc_interface *xch, int cmd, xc_hypercall_buffer_t *dest) { DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(dest); hypercall.op = __HYPERVISOR_xen_version; hypercall.arg[0] = (unsigned long) cmd; - hypercall.arg[1] = (unsigned long) dest; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(dest); return do_xen_hypercall(xch, &hypercall); } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
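The shape of this conversion generalises to the other wrappers in the series: the public entry point declares a bounce over the caller's pointer and hands HYPERCALL_BUFFER(arg) down, while the internal helper re-materialises the shadow with DECLARE_HYPERCALL_BUFFER_ARGUMENT. In outline (a sketch mirroring xc_version/do_xen_version above; the names do_op and op are invented):

    /* Internal helper: receives the shadow structure, not a raw pointer. */
    static int do_op(xc_interface *xch, int cmd, xc_hypercall_buffer_t *dest)
    {
        DECLARE_HYPERCALL;
        DECLARE_HYPERCALL_BUFFER_ARGUMENT(dest);

        hypercall.op     = __HYPERVISOR_xen_version; /* example op */
        hypercall.arg[0] = (unsigned long)cmd;
        hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(dest);
        return do_xen_hypercall(xch, &hypercall);
    }

    /* Public wrapper: bounce the user's buffer out around the call. */
    int op(xc_interface *xch, int cmd, void *arg, size_t sz)
    {
        DECLARE_HYPERCALL_BOUNCE(arg, sz, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
        int rc;

        if ( xc_hypercall_bounce_pre(xch, arg) )
            return -1;
        rc = do_op(xch, cmd, HYPERCALL_BUFFER(arg));
        xc_hypercall_bounce_post(xch, arg);
        return rc;
    }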
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 03 of 25] libxc: convert domctl interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 76289f9bffff7f966e99ba6597321e2d655cc643 # Parent 5ebd5f2c9cea0c0b43fabe61545beeb8f3ddc908 libxc: convert domctl interfaces over to hypercall buffers (defer save/restore and shadow related interfaces til a later patch) Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 5ebd5f2c9cea -r 76289f9bffff tools/libxc/xc_dom_boot.c --- a/tools/libxc/xc_dom_boot.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_dom_boot.c Fri Oct 22 15:14:51 2010 +0100 @@ -61,9 +61,10 @@ static int setup_hypercall_page(struct x return rc; } -static int launch_vm(xc_interface *xch, domid_t domid, void *ctxt) +static int launch_vm(xc_interface *xch, domid_t domid, xc_hypercall_buffer_t *ctxt) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(ctxt); int rc; xc_dom_printf(xch, "%s: called, ctxt=%p", __FUNCTION__, ctxt); @@ -71,7 +72,7 @@ static int launch_vm(xc_interface *xch, domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = 0; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); if ( rc != 0 ) xc_dom_panic(xch, XC_INTERNAL_ERROR, @@ -202,8 +203,12 @@ int xc_dom_boot_image(struct xc_dom_imag int xc_dom_boot_image(struct xc_dom_image *dom) { DECLARE_DOMCTL; - vcpu_guest_context_any_t ctxt; + DECLARE_HYPERCALL_BUFFER(vcpu_guest_context_any_t, ctxt); int rc; + + ctxt = xc_hypercall_buffer_alloc(dom->xch, ctxt, sizeof(*ctxt)); + if ( ctxt == NULL ) + return -1; DOMPRINTF_CALLED(dom->xch); @@ -260,12 +265,13 @@ int xc_dom_boot_image(struct xc_dom_imag return rc; /* let the vm run */ - memset(&ctxt, 0, sizeof(ctxt)); - if ( (rc = dom->arch_hooks->vcpu(dom, &ctxt)) != 0 ) + memset(ctxt, 0, sizeof(*ctxt)); + if ( (rc = dom->arch_hooks->vcpu(dom, ctxt)) != 0 ) return rc; xc_dom_unmap_all(dom); - rc = launch_vm(dom->xch, dom->guest_domid, &ctxt); + rc = launch_vm(dom->xch, dom->guest_domid, HYPERCALL_BUFFER(ctxt)); + xc_hypercall_buffer_free(dom->xch, ctxt); return rc; } diff -r 5ebd5f2c9cea -r 76289f9bffff tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -115,36 +115,31 @@ int xc_vcpu_setaffinity(xc_interface *xc uint64_t *cpumap, int cpusize) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); int ret = -1; - uint8_t *local = malloc(cpusize); - if(local == NULL) + local = xc_hypercall_buffer_alloc(xch, local, cpusize); + if ( local == NULL ) { - PERROR("Could not alloc memory for Xen hypercall"); + PERROR("Could not allocate memory for setvcpuaffinity domctl hypercall"); goto out; } + domctl.cmd = XEN_DOMCTL_setvcpuaffinity; domctl.domain = (domid_t)domid; domctl.u.vcpuaffinity.vcpu = vcpu; bitmap_64_to_byte(local, cpumap, cpusize * 8); - set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; - - if ( lock_pages(xch, local, cpusize) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } ret = do_domctl(xch, &domctl); - unlock_pages(xch, local, cpusize); + xc_hypercall_buffer_free(xch, local); out: - free(local); return ret; } @@ -155,12 +150,13 @@ int xc_vcpu_getaffinity(xc_interface *xc uint64_t *cpumap, int cpusize) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); int ret = -1; - uint8_t * local =
malloc(cpusize); + local = xc_hypercall_buffer_alloc(xch, local, cpusize); if(local == NULL) { - PERROR("Could not alloc memory for Xen hypercall"); + PERROR("Could not allocate memory for getvcpuaffinity domctl hypercall"); goto out; } @@ -168,22 +164,15 @@ int xc_vcpu_getaffinity(xc_interface *xc domctl.domain = (domid_t)domid; domctl.u.vcpuaffinity.vcpu = vcpu; - - set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; - - if ( lock_pages(xch, local, sizeof(local)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } ret = do_domctl(xch, &domctl); - unlock_pages(xch, local, sizeof (local)); bitmap_byte_to_64(cpumap, local, cpusize * 8); + + xc_hypercall_buffer_free(xch, local); out: - free(local); return ret; } @@ -283,20 +272,19 @@ int xc_domain_hvm_getcontext(xc_interfac { int ret; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, ctxt_buf) ) + return -1; domctl.cmd = XEN_DOMCTL_gethvmcontext; domctl.domain = (domid_t)domid; domctl.u.hvmcontext.size = size; - set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); - - if ( ctxt_buf ) - if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 ) - return ret; + xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); - if ( ctxt_buf ) - unlock_pages(xch, ctxt_buf, size); + xc_hypercall_bounce_post(xch, ctxt_buf); return (ret < 0 ? -1 : domctl.u.hvmcontext.size); } @@ -312,23 +300,21 @@ int xc_domain_hvm_getcontext_partial(xc_ { int ret; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_OUT); - if ( !ctxt_buf ) - return -EINVAL; + if ( !ctxt_buf || xc_hypercall_bounce_pre(xch, ctxt_buf) ) + return -1; domctl.cmd = XEN_DOMCTL_gethvmcontext_partial; domctl.domain = (domid_t) domid; domctl.u.hvmcontext_partial.type = typecode; domctl.u.hvmcontext_partial.instance = instance; - set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); + xc_set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); - if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 ) - return ret; - ret = do_domctl(xch, &domctl); - if ( ctxt_buf ) - unlock_pages(xch, ctxt_buf, size); + if ( ctxt_buf ) + xc_hypercall_bounce_post(xch, ctxt_buf); return ret ? 
-1 : 0; } @@ -341,18 +327,19 @@ int xc_domain_hvm_setcontext(xc_interfac { int ret; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_IN); + + if ( xc_hypercall_bounce_pre(xch, ctxt_buf) ) + return -1; domctl.cmd = XEN_DOMCTL_sethvmcontext; domctl.domain = domid; domctl.u.hvmcontext.size = size; - set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); - - if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 ) - return ret; + xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); - unlock_pages(xch, ctxt_buf, size); + xc_hypercall_bounce_post(xch, ctxt_buf); return ret; } @@ -364,18 +351,19 @@ int xc_vcpu_getcontext(xc_interface *xch { int rc; DECLARE_DOMCTL; - size_t sz = sizeof(vcpu_guest_context_any_t); + DECLARE_HYPERCALL_BOUNCE(ctxt, sizeof(vcpu_guest_context_any_t), XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, ctxt) ) + return -1; domctl.cmd = XEN_DOMCTL_getvcpucontext; domctl.domain = (domid_t)domid; domctl.u.vcpucontext.vcpu = (uint16_t)vcpu; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); - - if ( (rc = lock_pages(xch, ctxt, sz)) != 0 ) - return rc; rc = do_domctl(xch, &domctl); - unlock_pages(xch, ctxt, sz); + + xc_hypercall_bounce_post(xch, ctxt); return rc; } @@ -558,22 +546,24 @@ int xc_domain_get_tsc_info(xc_interface { int rc; DECLARE_DOMCTL; - xen_guest_tsc_info_t info = { 0 }; + DECLARE_HYPERCALL_BUFFER(xen_guest_tsc_info_t, info); + + info = xc_hypercall_buffer_alloc(xch, info, sizeof(*info)); + if ( info == NULL ) + return -ENOMEM; domctl.cmd = XEN_DOMCTL_gettscinfo; domctl.domain = (domid_t)domid; - set_xen_guest_handle(domctl.u.tsc_info.out_info, &info); - if ( (rc = lock_pages(xch, &info, sizeof(info))) != 0 ) - return rc; + xc_set_xen_guest_handle(domctl.u.tsc_info.out_info, info); rc = do_domctl(xch, &domctl); if ( rc == 0 ) { - *tsc_mode = info.tsc_mode; - *elapsed_nsec = info.elapsed_nsec; - *gtsc_khz = info.gtsc_khz; - *incarnation = info.incarnation; + *tsc_mode = info->tsc_mode; + *elapsed_nsec = info->elapsed_nsec; + *gtsc_khz = info->gtsc_khz; + *incarnation = info->incarnation; } - unlock_pages(xch, &info,sizeof(info)); + xc_hypercall_buffer_free(xch, info); return rc; } @@ -957,8 +947,8 @@ int xc_vcpu_setcontext(xc_interface *xch vcpu_guest_context_any_t *ctxt) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(ctxt, sizeof(vcpu_guest_context_any_t), XC_HYPERCALL_BUFFER_BOUNCE_IN); int rc; - size_t sz = sizeof(vcpu_guest_context_any_t); if (ctxt == NULL) { @@ -966,16 +956,17 @@ int xc_vcpu_setcontext(xc_interface *xch return -1; } + if ( xc_hypercall_bounce_pre(xch, ctxt) ) + return -1; + domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = vcpu; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); - if ( (rc = lock_pages(xch, ctxt, sz)) != 0 ) - return rc; rc = do_domctl(xch, &domctl); - - unlock_pages(xch, ctxt, sz); + + xc_hypercall_bounce_post(xch, ctxt); return rc; } @@ -1101,6 +1092,13 @@ int xc_get_device_group( { int rc; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(sdev_array, max_sdevs * sizeof(*sdev_array), XC_HYPERCALL_BUFFER_BOUNCE_IN); + + if ( xc_hypercall_bounce_pre(xch, sdev_array) ) + { + PERROR("Could not bounce buffer for xc_get_device_group"); + return -1; + } domctl.cmd = XEN_DOMCTL_get_device_group; domctl.domain = (domid_t)domid; @@ -1108,17 +1106,14 @@ int 
xc_get_device_group( domctl.u.get_device_group.machine_bdf = machine_bdf; domctl.u.get_device_group.max_sdevs = max_sdevs; - set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); + xc_set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); - if ( lock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 ) - { - PERROR("Could not lock memory for xc_get_device_group"); - return -ENOMEM; - } rc = do_domctl(xch, &domctl); - unlock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)); *num_sdevs = domctl.u.get_device_group.num_sdevs; + + xc_hypercall_bounce_post(xch, sdev_array); + return rc; } diff -r 5ebd5f2c9cea -r 76289f9bffff tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 @@ -322,12 +322,18 @@ int xc_get_pfn_type_batch(xc_interface * int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom, unsigned int num, xen_pfn_t *arr) { + int rc; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(arr, sizeof(*arr) * num, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + if ( xc_hypercall_bounce_pre(xch, arr) ) + return -1; domctl.cmd = XEN_DOMCTL_getpageframeinfo3; domctl.domain = (domid_t)dom; domctl.u.getpageframeinfo3.num = num; - set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); - return do_domctl(xch, &domctl); + xc_set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); + rc = do_domctl(xch, &domctl); + xc_hypercall_bounce_post(xch, arr); + return rc; } int xc_mmuext_op( @@ -498,25 +504,27 @@ int xc_get_pfn_list(xc_interface *xch, unsigned long max_pfns) { DECLARE_DOMCTL; + DECLARE_HYPERCALL_BOUNCE(pfn_buf, max_pfns * sizeof(*pfn_buf), XC_HYPERCALL_BUFFER_BOUNCE_OUT); int ret; - domctl.cmd = XEN_DOMCTL_getmemlist; - domctl.domain = (domid_t)domid; - domctl.u.getmemlist.max_pfns = max_pfns; - set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); #ifdef VALGRIND memset(pfn_buf, 0, max_pfns * sizeof(*pfn_buf)); #endif - if ( lock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, pfn_buf) ) { - PERROR("xc_get_pfn_list: pfn_buf lock failed"); + PERROR("xc_get_pfn_list: pfn_buf bounce failed"); return -1; } + domctl.cmd = XEN_DOMCTL_getmemlist; + domctl.domain = (domid_t)domid; + domctl.u.getmemlist.max_pfns = max_pfns; + xc_set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); + ret = do_domctl(xch, &domctl); - unlock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)); + xc_hypercall_bounce_post(xch, pfn_buf); return (ret < 0) ? 
-1 : domctl.u.getmemlist.num_pfns; } diff -r 5ebd5f2c9cea -r 76289f9bffff tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 @@ -211,17 +211,18 @@ static inline int do_domctl(xc_interface { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(domctl, sizeof(*domctl), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - if ( hcall_buf_prep(xch, (void **)&domctl, sizeof(*domctl)) != 0 ) + domctl->interface_version = XEN_DOMCTL_INTERFACE_VERSION; + + if ( xc_hypercall_bounce_pre(xch, domctl) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce buffer for domctl hypercall"); goto out1; } - domctl->interface_version = XEN_DOMCTL_INTERFACE_VERSION; - hypercall.op = __HYPERVISOR_domctl; - hypercall.arg[0] = (unsigned long)domctl; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(domctl); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { @@ -230,8 +231,7 @@ static inline int do_domctl(xc_interface " rebuild the user-space tool set?\n"); } - hcall_buf_release(xch, (void **)&domctl, sizeof(*domctl)); - + xc_hypercall_bounce_post(xch, domctl); out1: return ret; } diff -r 5ebd5f2c9cea -r 76289f9bffff tools/libxc/xc_resume.c --- a/tools/libxc/xc_resume.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_resume.c Fri Oct 22 15:14:51 2010 +0100 @@ -196,12 +196,6 @@ static int xc_domain_resume_any(xc_inter goto out; } - if ( lock_pages(xch, &ctxt, sizeof(ctxt)) ) - { - ERROR("Unable to lock ctxt"); - goto out; - } - if ( xc_vcpu_getcontext(xch, domid, 0, &ctxt) ) { ERROR("Could not get vcpu context"); @@ -235,7 +229,6 @@ static int xc_domain_resume_any(xc_inter #if defined(__i386__) || defined(__x86_64__) out: - unlock_pages(xch, (void *)&ctxt, sizeof ctxt); if (p2m) munmap(p2m, P2M_FL_ENTRIES*PAGE_SIZE); if (p2m_frame_list) _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
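A note on picking the bounce direction in these conversions: it simply follows who writes the data. Xen only reading means BOUNCE_IN, Xen only filling the buffer means BOUNCE_OUT, and in-place updates need BOUNCE_BOTH. Sketched on the xc_get_pfn_type_batch conversion above (a condensed restatement, not new interface):

    static int pfn_type_batch_sketch(xc_interface *xch, uint32_t dom,
                                     unsigned int num, xen_pfn_t *arr)
    {
        int rc;
        DECLARE_DOMCTL;
        /* Xen reads the PFNs and overwrites them with type information in
         * place, hence BOUNCE_BOTH: copy in before, copy back out after. */
        DECLARE_HYPERCALL_BOUNCE(arr, num * sizeof(*arr),
                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

        if ( xc_hypercall_bounce_pre(xch, arr) )
            return -1;
        domctl.cmd = XEN_DOMCTL_getpageframeinfo3;
        domctl.domain = (domid_t)dom;
        domctl.u.getpageframeinfo3.num = num;
        xc_set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr);
        rc = do_domctl(xch, &domctl);
        xc_hypercall_bounce_post(xch, arr);
        return rc;
    }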
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 04 of 25] libxc: convert shadow domctl interfaces and save/restore over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 889ad17d10f98e3e2aed45bb04d2903328e97479 # Parent 76289f9bffff7f966e99ba6597321e2d655cc643 libxc: convert shadow domctl interfaces and save/restore over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 76289f9bffff -r 889ad17d10f9 tools/libxc/ia64/xc_ia64_linux_save.c --- a/tools/libxc/ia64/xc_ia64_linux_save.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/ia64/xc_ia64_linux_save.c Fri Oct 22 15:14:51 2010 +0100 @@ -432,9 +432,9 @@ xc_domain_save(xc_interface *xch, int io int last_iter = 0; /* Bitmap of pages to be sent. */ - unsigned long *to_send = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, to_send); /* Bitmap of pages not to be sent (because dirtied). */ - unsigned long *to_skip = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, to_skip); char *mem; @@ -542,8 +542,8 @@ xc_domain_save(xc_interface *xch, int io last_iter = 0; bitmap_size = ((p2m_size + BITS_PER_LONG-1) & ~(BITS_PER_LONG-1)) / 8; - to_send = malloc(bitmap_size); - to_skip = malloc(bitmap_size); + to_send = xc_hypercall_buffer_alloc(xch, to_send, bitmap_size); + to_skip = xc_hypercall_buffer_alloc(xch, to_skip, bitmap_size); if (!to_send || !to_skip) { ERROR("Couldn't allocate bitmap array"); @@ -552,15 +552,6 @@ xc_domain_save(xc_interface *xch, int io /* Initially all the pages must be sent. */ memset(to_send, 0xff, bitmap_size); - - if (lock_pages(to_send, bitmap_size)) { - ERROR("Unable to lock_pages to_send"); - goto out; - } - if (lock_pages(to_skip, bitmap_size)) { - ERROR("Unable to lock_pages to_skip"); - goto out; - } /* Enable qemu-dm logging dirty pages to xen */ if (hvm && !callbacks->switch_qemu_logdirty(dom, 1, callbacks->data)) { @@ -621,7 +612,7 @@ xc_domain_save(xc_interface *xch, int io if (!last_iter) { if (xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, - to_skip, p2m_size, + HYPERCALL_BUFFER(to_skip), p2m_size, NULL, 0, NULL) != p2m_size) { ERROR("Error peeking shadow bitmap"); goto out; @@ -713,7 +704,7 @@ xc_domain_save(xc_interface *xch, int io /* Pages to be sent are pages which were dirty.
*/ if (xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_CLEAN, - to_send, p2m_size, + HYPERCALL_BUFFER(to_send), p2m_size, NULL, 0, NULL ) != p2m_size) { ERROR("Error flushing shadow PT"); goto out; @@ -779,7 +770,7 @@ xc_domain_save(xc_interface *xch, int io //print_stats(xch, dom, 0, &stats, 1); if ( xc_shadow_control(xch, dom, - XEN_DOMCTL_SHADOW_OP_CLEAN, to_send, + XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send), p2m_size, NULL, 0, NULL) != p2m_size ) { ERROR("Error flushing shadow PT"); @@ -799,10 +790,8 @@ xc_domain_save(xc_interface *xch, int io } } - unlock_pages(to_send, bitmap_size); - free(to_send); - unlock_pages(to_skip, bitmap_size); - free(to_skip); + xc_hypercall_buffer_free(xch, to_send); + xc_hypercall_buffer_free(xch, to_skip); if (live_shinfo) munmap(live_shinfo, PAGE_SIZE); if (memmap_info) diff -r 76289f9bffff -r 889ad17d10f9 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -400,7 +400,7 @@ int xc_shadow_control(xc_interface *xch, int xc_shadow_control(xc_interface *xch, uint32_t domid, unsigned int sop, - unsigned long *dirty_bitmap, + xc_hypercall_buffer_t *dirty_bitmap, unsigned long pages, unsigned long *mb, uint32_t mode, @@ -408,14 +408,17 @@ int xc_shadow_control(xc_interface *xch, { int rc; DECLARE_DOMCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap); + domctl.cmd = XEN_DOMCTL_shadow_op; domctl.domain = (domid_t)domid; domctl.u.shadow_op.op = sop; domctl.u.shadow_op.pages = pages; domctl.u.shadow_op.mb = mb ? *mb : 0; domctl.u.shadow_op.mode = mode; - set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, - (uint8_t *)dirty_bitmap); + if (dirty_bitmap != NULL) + xc_set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, + dirty_bitmap); rc = do_domctl(xch, &domctl); diff -r 76289f9bffff -r 889ad17d10f9 tools/libxc/xc_domain_restore.c --- a/tools/libxc/xc_domain_restore.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain_restore.c Fri Oct 22 15:14:51 2010 +0100 @@ -1063,7 +1063,7 @@ int xc_domain_restore(xc_interface *xch, shared_info_any_t *new_shared_info; /* A copy of the CPU context of the guest. */ - vcpu_guest_context_any_t ctxt; + DECLARE_HYPERCALL_BUFFER(vcpu_guest_context_any_t, ctxt); /* A table containing the type of each PFN (/not/ MFN!). */ unsigned long *pfn_type = NULL; @@ -1112,6 +1112,15 @@ int xc_domain_restore(xc_interface *xch, if ( superpages ) return 1; + + ctxt = xc_hypercall_buffer_alloc(xch, ctxt, sizeof(*ctxt)); + + if ( ctxt == NULL ) + { + PERROR("Unable to allocate VCPU ctxt buffer"); + return 1; + } + if ( (orig_io_fd_flags = fcntl(io_fd, F_GETFL, 0)) < 0 ) { PERROR("unable to read IO FD flags"); @@ -1539,26 +1548,20 @@ int xc_domain_restore(xc_interface *xch, } } - if ( lock_pages(xch, &ctxt, sizeof(ctxt)) ) - { - PERROR("Unable to lock ctxt"); - return 1; - } - vcpup = tailbuf.u.pv.vcpubuf; for ( i = 0; i <= max_vcpu_id; i++ ) { if ( !(vcpumap & (1ULL << i)) ) continue; - memcpy(&ctxt, vcpup, ((dinfo->guest_width == 8) ? sizeof(ctxt.x64) - : sizeof(ctxt.x32))); - vcpup += (dinfo->guest_width == 8) ? sizeof(ctxt.x64) : sizeof(ctxt.x32); + memcpy(ctxt, vcpup, ((dinfo->guest_width == 8) ? sizeof(ctxt->x64) + : sizeof(ctxt->x32))); + vcpup += (dinfo->guest_width == 8) ? 
sizeof(ctxt->x64) : sizeof(ctxt->x32); DPRINTF("read VCPU %d\n", i); if ( !new_ctxt_format ) - SET_FIELD(&ctxt, flags, GET_FIELD(&ctxt, flags) | VGCF_online); + SET_FIELD(ctxt, flags, GET_FIELD(ctxt, flags) | VGCF_online); if ( i == 0 ) { @@ -1566,7 +1569,7 @@ int xc_domain_restore(xc_interface *xch, * Uncanonicalise the suspend-record frame number and poke * resume record. */ - pfn = GET_FIELD(&ctxt, user_regs.edx); + pfn = GET_FIELD(ctxt, user_regs.edx); if ( (pfn >= dinfo->p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) { @@ -1574,7 +1577,7 @@ int xc_domain_restore(xc_interface *xch, goto out; } mfn = ctx->p2m[pfn]; - SET_FIELD(&ctxt, user_regs.edx, mfn); + SET_FIELD(ctxt, user_regs.edx, mfn); start_info = xc_map_foreign_range( xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE, mfn); SET_FIELD(start_info, nr_pages, dinfo->p2m_size); @@ -1589,15 +1592,15 @@ int xc_domain_restore(xc_interface *xch, munmap(start_info, PAGE_SIZE); } /* Uncanonicalise each GDT frame number. */ - if ( GET_FIELD(&ctxt, gdt_ents) > 8192 ) + if ( GET_FIELD(ctxt, gdt_ents) > 8192 ) { ERROR("GDT entry count out of range"); goto out; } - for ( j = 0; (512*j) < GET_FIELD(&ctxt, gdt_ents); j++ ) + for ( j = 0; (512*j) < GET_FIELD(ctxt, gdt_ents); j++ ) { - pfn = GET_FIELD(&ctxt, gdt_frames[j]); + pfn = GET_FIELD(ctxt, gdt_frames[j]); if ( (pfn >= dinfo->p2m_size) || (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) ) { @@ -1605,10 +1608,10 @@ int xc_domain_restore(xc_interface *xch, j, (unsigned long)pfn); goto out; } - SET_FIELD(&ctxt, gdt_frames[j], ctx->p2m[pfn]); + SET_FIELD(ctxt, gdt_frames[j], ctx->p2m[pfn]); } /* Uncanonicalise the page table base pointer. */ - pfn = UNFOLD_CR3(GET_FIELD(&ctxt, ctrlreg[3])); + pfn = UNFOLD_CR3(GET_FIELD(ctxt, ctrlreg[3])); if ( pfn >= dinfo->p2m_size ) { @@ -1625,12 +1628,12 @@ int xc_domain_restore(xc_interface *xch, (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); goto out; } - SET_FIELD(&ctxt, ctrlreg[3], FOLD_CR3(ctx->p2m[pfn])); + SET_FIELD(ctxt, ctrlreg[3], FOLD_CR3(ctx->p2m[pfn])); /* Guest pagetable (x86/64) stored in otherwise-unused CR1. 
*/ - if ( (ctx->pt_levels == 4) && (ctxt.x64.ctrlreg[1] & 1) ) + if ( (ctx->pt_levels == 4) && (ctxt->x64.ctrlreg[1] & 1) ) { - pfn = UNFOLD_CR3(ctxt.x64.ctrlreg[1] & ~1); + pfn = UNFOLD_CR3(ctxt->x64.ctrlreg[1] & ~1); if ( pfn >= dinfo->p2m_size ) { ERROR("User PT base is bad: pfn=%lu p2m_size=%lu", @@ -1645,12 +1648,12 @@ int xc_domain_restore(xc_interface *xch, (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT); goto out; } - ctxt.x64.ctrlreg[1] = FOLD_CR3(ctx->p2m[pfn]); + ctxt->x64.ctrlreg[1] = FOLD_CR3(ctx->p2m[pfn]); } domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = (domid_t)dom; domctl.u.vcpucontext.vcpu = i; - set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt.c); + xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); frc = xc_domctl(xch, &domctl); if ( frc != 0 ) { @@ -1791,6 +1794,7 @@ int xc_domain_restore(xc_interface *xch, out: if ( (rc != 0) && (dom != 0) ) xc_domain_destroy(xch, dom); + xc_hypercall_buffer_free(xch, ctxt); free(mmu); free(ctx->p2m); free(pfn_type); diff -r 76289f9bffff -r 889ad17d10f9 tools/libxc/xc_domain_save.c --- a/tools/libxc/xc_domain_save.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain_save.c Fri Oct 22 15:14:51 2010 +0100 @@ -411,7 +411,7 @@ static int print_stats(xc_interface *xch static int analysis_phase(xc_interface *xch, uint32_t domid, struct save_ctx *ctx, - unsigned long *arr, int runs) + xc_hypercall_buffer_t *arr, int runs) { long long start, now; xc_shadow_op_stats_t stats; @@ -909,7 +909,9 @@ int xc_domain_save(xc_interface *xch, in - that should be sent this iteration (unless later marked as skip); - to skip this iteration because already dirty; - to fixup by sending at the end if not already resent; */ - unsigned long *to_send = NULL, *to_skip = NULL, *to_fix = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, to_skip); + DECLARE_HYPERCALL_BUFFER(unsigned long, to_send); + unsigned long *to_fix = NULL; xc_shadow_op_stats_t stats; @@ -1038,9 +1040,9 @@ int xc_domain_save(xc_interface *xch, in sent_last_iter = dinfo->p2m_size; /* Setup to_send / to_fix and to_skip bitmaps */ - to_send = xc_memalign(PAGE_SIZE, ROUNDUP(BITMAP_SIZE, PAGE_SHIFT)); + to_send = xc_hypercall_buffer_alloc_pages(xch, to_send, NRPAGES(BITMAP_SIZE)); + to_skip = xc_hypercall_buffer_alloc_pages(xch, to_skip, NRPAGES(BITMAP_SIZE)); to_fix = calloc(1, BITMAP_SIZE); - to_skip = xc_memalign(PAGE_SIZE, ROUNDUP(BITMAP_SIZE, PAGE_SHIFT)); if ( !to_send || !to_fix || !to_skip ) { @@ -1050,20 +1052,7 @@ int xc_domain_save(xc_interface *xch, in memset(to_send, 0xff, BITMAP_SIZE); - if ( lock_pages(xch, to_send, BITMAP_SIZE) ) - { - PERROR("Unable to lock to_send"); - return 1; - } - - /* (to fix is local only) */ - if ( lock_pages(xch, to_skip, BITMAP_SIZE) ) - { - PERROR("Unable to lock to_skip"); - return 1; - } - - if ( hvm ) + if ( hvm ) { /* Need another buffer for HVM context */ hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0); @@ -1080,7 +1069,7 @@ int xc_domain_save(xc_interface *xch, in } } - analysis_phase(xch, dom, ctx, to_skip, 0); + analysis_phase(xch, dom, ctx, HYPERCALL_BUFFER(to_skip), 0); pfn_type = xc_memalign(PAGE_SIZE, ROUNDUP( MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); @@ -1192,7 +1181,7 @@ int xc_domain_save(xc_interface *xch, in /* Slightly wasteful to peek the whole array evey time, but this is fast enough for the moment. 
*/ frc = xc_shadow_control( - xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, to_skip, + xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, HYPERCALL_BUFFER(to_skip), dinfo->p2m_size, NULL, 0, NULL); if ( frc != dinfo->p2m_size ) { @@ -1532,8 +1521,8 @@ int xc_domain_save(xc_interface *xch, in } - if ( xc_shadow_control(xch, dom, - XEN_DOMCTL_SHADOW_OP_CLEAN, to_send, + if ( xc_shadow_control(xch, dom, + XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send), dinfo->p2m_size, NULL, 0, &stats) != dinfo->p2m_size ) { PERROR("Error flushing shadow PT"); @@ -1861,7 +1850,7 @@ int xc_domain_save(xc_interface *xch, in print_stats(xch, dom, 0, &stats, 1); if ( xc_shadow_control(xch, dom, - XEN_DOMCTL_SHADOW_OP_CLEAN, to_send, + XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send), dinfo->p2m_size, NULL, 0, &stats) != dinfo->p2m_size ) { PERROR("Error flushing shadow PT"); @@ -1892,12 +1881,13 @@ int xc_domain_save(xc_interface *xch, in if ( ctx->live_m2p ) munmap(ctx->live_m2p, M2P_SIZE(ctx->max_mfn)); + xc_hypercall_buffer_free_pages(xch, to_send, NRPAGES(BITMAP_SIZE)); + xc_hypercall_buffer_free_pages(xch, to_skip, NRPAGES(BITMAP_SIZE)); + free(pfn_type); free(pfn_batch); free(pfn_err); - free(to_send); free(to_fix); - free(to_skip); DPRINTF("Save exit rc=%d\n",rc); diff -r 76289f9bffff -r 889ad17d10f9 tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 @@ -601,7 +601,7 @@ int xc_shadow_control(xc_interface *xch, int xc_shadow_control(xc_interface *xch, uint32_t domid, unsigned int sop, - unsigned long *dirty_bitmap, + xc_hypercall_buffer_t *dirty_bitmap, unsigned long pages, unsigned long *mb, uint32_t mode, diff -r 76289f9bffff -r 889ad17d10f9 tools/libxc/xg_private.h --- a/tools/libxc/xg_private.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xg_private.h Fri Oct 22 15:14:51 2010 +0100 @@ -157,6 +157,7 @@ typedef l4_pgentry_64_t l4_pgentry_t; #define PAGE_MASK_IA64 (~(PAGE_SIZE_IA64-1)) #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1)) +#define NRPAGES(x) (ROUNDUP(x, PAGE_SHIFT) >> PAGE_SHIFT) /* XXX SMH: following skanky macros rely on variable p2m_size being set */ _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
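With xc_shadow_control now taking an xc_hypercall_buffer_t * for the dirty bitmap, the caller owns the allocation explicitly. In outline (a sketch condensed from the converted save path above; dom, bitmap_size and p2m_size are assumed to come from the surrounding code):

    DECLARE_HYPERCALL_BUFFER(unsigned long, to_send);

    to_send = xc_hypercall_buffer_alloc_pages(xch, to_send, NRPAGES(bitmap_size));
    if ( to_send == NULL )
        return -1;

    memset(to_send, 0xff, bitmap_size);   /* initially every page must be sent */

    if ( xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_CLEAN,
                           HYPERCALL_BUFFER(to_send), /* not the raw pointer */
                           p2m_size, NULL, 0, NULL) != p2m_size )
        PERROR("Error flushing shadow PT");

    xc_hypercall_buffer_free_pages(xch, to_send, NRPAGES(bitmap_size));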
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 05 of 25] libxc: convert sysctl interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID a535e89658c09f6a491213e0f2373de775fbabb1 # Parent 889ad17d10f98e3e2aed45bb04d2903328e97479 libxc: convert sysctl interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xc_cpupool.c --- a/tools/libxc/xc_cpupool.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_cpupool.c Fri Oct 22 15:14:51 2010 +0100 @@ -73,12 +73,12 @@ xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_ uint32_t poolid) { int err = 0; - xc_cpupoolinfo_t *info; - uint8_t *local; + xc_cpupoolinfo_t *info = NULL; int local_size; int cpumap_size; int size; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); local_size = get_cpumap_size(xch); if (!local_size) @@ -86,42 +86,42 @@ xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_ PERROR("Could not get number of cpus"); return NULL; } - local = alloca(local_size); + + local = xc_hypercall_buffer_alloc(xch, local, local_size); + if ( local == NULL ) { + PERROR("Could not allocate locked memory for xc_cpupool_getinfo"); + return NULL; + } + cpumap_size = (local_size + sizeof(*info->cpumap) - 1) / sizeof(*info->cpumap); size = sizeof(xc_cpupoolinfo_t) + cpumap_size * sizeof(*info->cpumap); + + sysctl.cmd = XEN_SYSCTL_cpupool_op; + sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_INFO; + sysctl.u.cpupool_op.cpupool_id = poolid; + xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + sysctl.u.cpupool_op.cpumap.nr_cpus = local_size * 8; + + err = do_sysctl_save(xch, &sysctl); + + if ( err < 0 ) + goto out; + info = malloc(size); if ( !info ) - return NULL; + goto out; memset(info, 0, size); info->cpumap_size = local_size * 8; info->cpumap = (uint64_t *)(info + 1); - sysctl.cmd = XEN_SYSCTL_cpupool_op; - sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_INFO; - sysctl.u.cpupool_op.cpupool_id = poolid; - set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); - sysctl.u.cpupool_op.cpumap.nr_cpus = local_size * 8; - - if ( (err = lock_pages(xch, local, local_size)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - free(info); - return NULL; - } - err = do_sysctl_save(xch, &sysctl); - unlock_pages(xch, local, local_size); - - if ( err < 0 ) - { - free(info); - return NULL; - } - info->cpupool_id = sysctl.u.cpupool_op.cpupool_id; info->sched_id = sysctl.u.cpupool_op.sched_id; info->n_dom = sysctl.u.cpupool_op.n_dom; bitmap_byte_to_64(info->cpumap, local, local_size * 8); + +out: + xc_hypercall_buffer_free(xch, local); return info; } @@ -168,38 +168,38 @@ uint64_t * xc_cpupool_freeinfo(xc_interf uint64_t * xc_cpupool_freeinfo(xc_interface *xch, int *cpusize) { - int err; - uint8_t *local; - uint64_t *cpumap; + int err = -1; + uint64_t *cpumap = NULL; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, local); *cpusize = get_cpumap_size(xch); if (*cpusize == 0) return NULL; - local = alloca(*cpusize); - cpumap = calloc((*cpusize + sizeof(*cpumap) - 1) / sizeof(*cpumap), sizeof(*cpumap)); - if (cpumap == NULL) + + local = xc_hypercall_buffer_alloc(xch, local, *cpusize); + if ( local == NULL ) { + PERROR("Could not allocate locked memory for xc_cpupool_freeinfo"); return NULL; + } sysctl.cmd = XEN_SYSCTL_cpupool_op; sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_FREEINFO; - set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); sysctl.u.cpupool_op.cpumap.nr_cpus = *cpusize * 8; - if ( (err = lock_pages(xch, 
local, *cpusize)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - free(cpumap); - return NULL; - } + err = do_sysctl_save(xch, &sysctl); - err = do_sysctl_save(xch, &sysctl); - unlock_pages(xch, local, *cpusize); + if ( err < 0 ) + goto out; + + cpumap = calloc((*cpusize + sizeof(*cpumap) - 1) / sizeof(*cpumap), sizeof(*cpumap)); + if (cpumap == NULL) + goto out; + bitmap_byte_to_64(cpumap, local, *cpusize * 8); - if (err >= 0) - return cpumap; - - free(cpumap); - return NULL; +out: + xc_hypercall_buffer_free(xch, local); + return cpumap; } diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -245,21 +245,22 @@ int xc_domain_getinfolist(xc_interface * { int ret = 0; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(info, max_domains*sizeof(*info), XC_HYPERCALL_BUFFER_BOUNCE_OUT); - if ( lock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, info) ) return -1; sysctl.cmd = XEN_SYSCTL_getdomaininfolist; sysctl.u.getdomaininfolist.first_domain = first_domain; sysctl.u.getdomaininfolist.max_domains = max_domains; - set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); + xc_set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); if ( xc_sysctl(xch, &sysctl) < 0 ) ret = -1; else ret = sysctl.u.getdomaininfolist.num_domains; - unlock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)); + xc_hypercall_bounce_post(xch, info); return ret; } diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 @@ -41,11 +41,15 @@ int xc_readconsolering(xc_interface *xch int clear, int incremental, uint32_t *pindex) { int ret; + unsigned int nr_chars = *pnr_chars; DECLARE_SYSCTL; - unsigned int nr_chars = *pnr_chars; + DECLARE_HYPERCALL_BOUNCE(buffer, nr_chars, XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, buffer) ) + return -1; sysctl.cmd = XEN_SYSCTL_readconsole; - set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); + xc_set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); sysctl.u.readconsole.count = nr_chars; sysctl.u.readconsole.clear = clear; sysctl.u.readconsole.incremental = 0; @@ -55,9 +59,6 @@ int xc_readconsolering(xc_interface *xch sysctl.u.readconsole.incremental = incremental; } - if ( (ret = lock_pages(xch, buffer, nr_chars)) != 0 ) - return ret; - if ( (ret = do_sysctl(xch, &sysctl)) == 0 ) { *pnr_chars = sysctl.u.readconsole.count; @@ -65,7 +66,7 @@ int xc_readconsolering(xc_interface *xch *pindex = sysctl.u.readconsole.index; } - unlock_pages(xch, buffer, nr_chars); + xc_hypercall_bounce_post(xch, buffer); return ret; } @@ -74,17 +75,18 @@ int xc_send_debug_keys(xc_interface *xch { int ret, len = strlen(keys); DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(keys, len, XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, keys) ) + return -1; sysctl.cmd = XEN_SYSCTL_debug_keys; - set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); + xc_set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); sysctl.u.debug_keys.nr_keys = len; - - if ( (ret = lock_pages(xch, keys, len)) != 0 ) - return ret; ret = do_sysctl(xch, &sysctl); - unlock_pages(xch, keys, len); + xc_hypercall_bounce_post(xch, keys); return ret; } @@ -187,8 +189,8 @@ int xc_perfc_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = 
XEN_SYSCTL_PERFCOP_reset; - set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL); - set_xen_guest_handle(sysctl.u.perfc_op.val, NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -202,8 +204,8 @@ int xc_perfc_query_number(xc_interface * sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL); - set_xen_guest_handle(sysctl.u.perfc_op.val, NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -216,15 +218,17 @@ int xc_perfc_query_number(xc_interface * } int xc_perfc_query(xc_interface *xch, - xc_perfc_desc_t *desc, - xc_perfc_val_t *val) + struct xc_hypercall_buffer *desc, + struct xc_hypercall_buffer *val) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(desc); + DECLARE_HYPERCALL_BUFFER_ARGUMENT(val); sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); - set_xen_guest_handle(sysctl.u.perfc_op.val, val); + xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); + xc_set_xen_guest_handle(sysctl.u.perfc_op.val, val); return do_sysctl(xch, &sysctl); } @@ -235,7 +239,7 @@ int xc_lockprof_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_reset; - set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL); + xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -248,7 +252,7 @@ int xc_lockprof_query_number(xc_interfac sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; - set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL); + xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -258,17 +262,18 @@ int xc_lockprof_query_number(xc_interfac } int xc_lockprof_query(xc_interface *xch, - uint32_t *n_elems, - uint64_t *time, - xc_lockprof_data_t *data) + uint32_t *n_elems, + uint64_t *time, + struct xc_hypercall_buffer *data) { int rc; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(data); sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; sysctl.u.lockprof_op.max_elem = *n_elems; - set_xen_guest_handle(sysctl.u.lockprof_op.data, data); + xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, data); rc = do_sysctl(xch, &sysctl); @@ -282,20 +287,21 @@ int xc_getcpuinfo(xc_interface *xch, int { int rc; DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(info, max_cpus*sizeof(*info), XC_HYPERCALL_BUFFER_BOUNCE_OUT); + + if ( xc_hypercall_bounce_pre(xch, info) ) + return -1; sysctl.cmd = XEN_SYSCTL_getcpuinfo; - sysctl.u.getcpuinfo.max_cpus = max_cpus; - set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); - - if ( (rc = lock_pages(xch, info, max_cpus*sizeof(*info))) != 0 ) - return rc; + sysctl.u.getcpuinfo.max_cpus = max_cpus; + xc_set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); rc = do_sysctl(xch, &sysctl); - unlock_pages(xch, info, max_cpus*sizeof(*info)); + xc_hypercall_bounce_post(xch, info); if ( nr_cpus ) - *nr_cpus = sysctl.u.getcpuinfo.nr_cpus; + *nr_cpus = sysctl.u.getcpuinfo.nr_cpus; return rc; } diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xc_offline_page.c --- a/tools/libxc/xc_offline_page.c Fri Oct 22 15:14:51 2010 
+0100 +++ b/tools/libxc/xc_offline_page.c Fri Oct 22 15:14:51 2010 +0100 @@ -66,14 +66,15 @@ int xc_mark_page_online(xc_interface *xc unsigned long end, uint32_t *status) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int ret = -1; if ( !status || (end < start) ) return -EINVAL; - if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1))) + if ( xc_hypercall_bounce_pre(xch, status) ) { - ERROR("Could not lock memory for xc_mark_page_online\n"); + ERROR("Could not bounce memory for xc_mark_page_online\n"); return -EINVAL; } @@ -81,10 +82,10 @@ int xc_mark_page_online(xc_interface *xc sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_online; sysctl.u.page_offline.end = end; - set_xen_guest_handle(sysctl.u.page_offline.status, status); + xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); - unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)); + xc_hypercall_bounce_post(xch, status); return ret; } @@ -93,14 +94,15 @@ int xc_mark_page_offline(xc_interface *x unsigned long end, uint32_t *status) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int ret = -1; if ( !status || (end < start) ) return -EINVAL; - if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1))) + if ( xc_hypercall_bounce_pre(xch, status) ) { - ERROR("Could not lock memory for xc_mark_page_offline"); + ERROR("Could not bounce memory for xc_mark_page_offline"); return -EINVAL; } @@ -108,10 +110,10 @@ int xc_mark_page_offline(xc_interface *x sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_offline; sysctl.u.page_offline.end = end; - set_xen_guest_handle(sysctl.u.page_offline.status, status); + xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); - unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)); + xc_hypercall_bounce_post(xch, status); return ret; } @@ -120,14 +122,15 @@ int xc_query_page_offline_status(xc_inte unsigned long end, uint32_t *status) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int ret = -1; if ( !status || (end < start) ) return -EINVAL; - if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1))) + if ( xc_hypercall_bounce_pre(xch, status) ) { - ERROR("Could not lock memory for xc_query_page_offline_status\n"); + ERROR("Could not bounce memory for xc_query_page_offline_status\n"); return -EINVAL; } @@ -135,10 +138,10 @@ int xc_query_page_offline_status(xc_inte sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_query_page_offline; sysctl.u.page_offline.end = end; - set_xen_guest_handle(sysctl.u.page_offline.status, status); + xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); - unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)); + xc_hypercall_bounce_post(xch, status); return ret; } diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xc_pm.c --- a/tools/libxc/xc_pm.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_pm.c Fri Oct 22 15:14:51 2010 +0100 @@ -45,6 +45,10 @@ int xc_pm_get_pxstat(xc_interface *xch, int xc_pm_get_pxstat(xc_interface *xch, int cpuid, struct xc_px_stat *pxpt) { DECLARE_SYSCTL; + /* Sizes unknown until xc_pm_get_max_px */ + DECLARE_NAMED_HYPERCALL_BOUNCE(trans, &pxpt->trans_pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + 
DECLARE_NAMED_HYPERCALL_BOUNCE(pt, &pxpt->pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + int max_px, ret; if ( !pxpt || !(pxpt->trans_pt) || !(pxpt->pt) ) @@ -53,14 +57,15 @@ int xc_pm_get_pxstat(xc_interface *xch, if ( (ret = xc_pm_get_max_px(xch, cpuid, &max_px)) != 0) return ret; - if ( (ret = lock_pages(xch, pxpt->trans_pt, - max_px * max_px * sizeof(uint64_t))) != 0 ) + HYPERCALL_BOUNCE_SET_SIZE(trans, max_px * max_px * sizeof(uint64_t)); + HYPERCALL_BOUNCE_SET_SIZE(pt, max_px * sizeof(struct xc_px_val)); + + if ( xc_hypercall_bounce_pre(xch, trans) ) return ret; - if ( (ret = lock_pages(xch, pxpt->pt, - max_px * sizeof(struct xc_px_val))) != 0 ) + if ( xc_hypercall_bounce_pre(xch, pt) ) { - unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t)); + xc_hypercall_bounce_post(xch, trans); return ret; } @@ -68,15 +73,14 @@ int xc_pm_get_pxstat(xc_interface *xch, sysctl.u.get_pmstat.type = PMSTAT_get_pxstat; sysctl.u.get_pmstat.cpuid = cpuid; sysctl.u.get_pmstat.u.getpx.total = max_px; - set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, pxpt->trans_pt); - set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, - (pm_px_val_t *)pxpt->pt); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, trans); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, pt); ret = xc_sysctl(xch, &sysctl); if ( ret ) { - unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t)); - unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val)); + xc_hypercall_bounce_post(xch, trans); + xc_hypercall_bounce_post(xch, pt); return ret; } @@ -85,8 +89,8 @@ int xc_pm_get_pxstat(xc_interface *xch, pxpt->last = sysctl.u.get_pmstat.u.getpx.last; pxpt->cur = sysctl.u.get_pmstat.u.getpx.cur; - unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t)); - unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val)); + xc_hypercall_bounce_post(xch, trans); + xc_hypercall_bounce_post(xch, pt); return ret; } @@ -120,6 +124,8 @@ int xc_pm_get_cxstat(xc_interface *xch, int xc_pm_get_cxstat(xc_interface *xch, int cpuid, struct xc_cx_stat *cxpt) { DECLARE_SYSCTL; + DECLARE_NAMED_HYPERCALL_BOUNCE(triggers, &cxpt->triggers, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_NAMED_HYPERCALL_BOUNCE(residencies, &cxpt->residencies, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); int max_cx, ret; if( !cxpt || !(cxpt->triggers) || !(cxpt->residencies) ) @@ -128,22 +134,23 @@ int xc_pm_get_cxstat(xc_interface *xch, if ( (ret = xc_pm_get_max_cx(xch, cpuid, &max_cx)) ) goto unlock_0; - if ( (ret = lock_pages(xch, cxpt, sizeof(struct xc_cx_stat))) ) + HYPERCALL_BOUNCE_SET_SIZE(triggers, max_cx * sizeof(uint64_t)); + HYPERCALL_BOUNCE_SET_SIZE(residencies, max_cx * sizeof(uint64_t)); + + ret = -1; + if ( xc_hypercall_bounce_pre(xch, triggers) ) goto unlock_0; - if ( (ret = lock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t))) ) + if ( xc_hypercall_bounce_pre(xch, residencies) ) goto unlock_1; - if ( (ret = lock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t))) ) - goto unlock_2; sysctl.cmd = XEN_SYSCTL_get_pmstat; sysctl.u.get_pmstat.type = PMSTAT_get_cxstat; sysctl.u.get_pmstat.cpuid = cpuid; - set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, cxpt->triggers); - set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, - cxpt->residencies); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, triggers); + xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, residencies); if ( (ret = xc_sysctl(xch, &sysctl)) ) - goto unlock_3; + goto unlock_2; cxpt->nr 
= sysctl.u.get_pmstat.u.getcx.nr; cxpt->last = sysctl.u.get_pmstat.u.getcx.last; @@ -154,12 +161,10 @@ int xc_pm_get_cxstat(xc_interface *xch, cxpt->cc3 = sysctl.u.get_pmstat.u.getcx.cc3; cxpt->cc6 = sysctl.u.get_pmstat.u.getcx.cc6; -unlock_3: - unlock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t)); unlock_2: - unlock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t)); + xc_hypercall_bounce_post(xch, residencies); unlock_1: - unlock_pages(xch, cxpt, sizeof(struct xc_cx_stat)); + xc_hypercall_bounce_post(xch, triggers); unlock_0: return ret; } @@ -186,12 +191,19 @@ int xc_get_cpufreq_para(xc_interface *xc DECLARE_SYSCTL; int ret = 0; struct xen_get_cpufreq_para *sys_para = &sysctl.u.pm_op.u.get_para; + DECLARE_NAMED_HYPERCALL_BOUNCE(affected_cpus, + user_para->affected_cpus, + user_para->cpu_num * sizeof(uint32_t), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_NAMED_HYPERCALL_BOUNCE(scaling_available_frequencies, + user_para->scaling_available_frequencies, + user_para->freq_num * sizeof(uint32_t), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_NAMED_HYPERCALL_BOUNCE(scaling_available_governors, + user_para->scaling_available_governors, + user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + bool has_num = user_para->cpu_num && user_para->freq_num && user_para->gov_num; - - if ( (xch < 0) || !user_para ) - return -EINVAL; if ( has_num ) { @@ -200,22 +212,16 @@ int xc_get_cpufreq_para(xc_interface *xc (!user_para->scaling_available_governors) ) return -EINVAL; - if ( (ret = lock_pages(xch, user_para->affected_cpus, - user_para->cpu_num * sizeof(uint32_t))) ) + if ( xc_hypercall_bounce_pre(xch, affected_cpus) ) goto unlock_1; - if ( (ret = lock_pages(xch, user_para->scaling_available_frequencies, - user_para->freq_num * sizeof(uint32_t))) ) + if ( xc_hypercall_bounce_pre(xch, scaling_available_frequencies) ) goto unlock_2; - if ( (ret = lock_pages(xch, user_para->scaling_available_governors, - user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char))) ) + if ( xc_hypercall_bounce_pre(xch, scaling_available_governors) ) goto unlock_3; - set_xen_guest_handle(sys_para->affected_cpus, - user_para->affected_cpus); - set_xen_guest_handle(sys_para->scaling_available_frequencies, - user_para->scaling_available_frequencies); - set_xen_guest_handle(sys_para->scaling_available_governors, - user_para->scaling_available_governors); + xc_set_xen_guest_handle(sys_para->affected_cpus, affected_cpus); + xc_set_xen_guest_handle(sys_para->scaling_available_frequencies, scaling_available_frequencies); + xc_set_xen_guest_handle(sys_para->scaling_available_governors, scaling_available_governors); } sysctl.cmd = XEN_SYSCTL_pm_op; @@ -250,7 +256,7 @@ int xc_get_cpufreq_para(xc_interface *xc user_para->scaling_min_freq = sys_para->scaling_min_freq; user_para->turbo_enabled = sys_para->turbo_enabled; - memcpy(user_para->scaling_driver, + memcpy(user_para->scaling_driver, sys_para->scaling_driver, CPUFREQ_NAME_LEN); memcpy(user_para->scaling_governor, sys_para->scaling_governor, CPUFREQ_NAME_LEN); @@ -263,14 +269,11 @@ int xc_get_cpufreq_para(xc_interface *xc } unlock_4: - unlock_pages(xch, user_para->scaling_available_governors, - user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char)); + xc_hypercall_bounce_post(xch, scaling_available_governors); unlock_3: - unlock_pages(xch, user_para->scaling_available_frequencies, - user_para->freq_num * sizeof(uint32_t)); + xc_hypercall_bounce_post(xch, scaling_available_frequencies); unlock_2: - unlock_pages(xch, user_para->affected_cpus, - 
user_para->cpu_num * sizeof(uint32_t)); + xc_hypercall_bounce_post(xch, affected_cpus); unlock_1: return ret; } diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 @@ -240,18 +240,18 @@ static inline int do_sysctl(xc_interface { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(sysctl, sizeof(*sysctl), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - if ( hcall_buf_prep(xch, (void **)&sysctl, sizeof(*sysctl)) != 0 ) + sysctl->interface_version = XEN_SYSCTL_INTERFACE_VERSION; + + if ( xc_hypercall_bounce_pre(xch, sysctl) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce buffer for sysctl hypercall"); goto out1; } - sysctl->interface_version = XEN_SYSCTL_INTERFACE_VERSION; - hypercall.op = __HYPERVISOR_sysctl; - hypercall.arg[0] = (unsigned long)sysctl; - + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(sysctl); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { if ( errno == EACCES ) @@ -259,8 +259,7 @@ static inline int do_sysctl(xc_interface " rebuild the user-space tool set?\n"); } - hcall_buf_release(xch, (void **)&sysctl, sizeof(*sysctl)); - + xc_hypercall_bounce_post(xch, sysctl); out1: return ret; } diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xc_tbuf.c --- a/tools/libxc/xc_tbuf.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_tbuf.c Fri Oct 22 15:14:51 2010 +0100 @@ -116,9 +116,16 @@ int xc_tbuf_set_cpu_mask(xc_interface *x int xc_tbuf_set_cpu_mask(xc_interface *xch, uint32_t mask) { DECLARE_SYSCTL; + DECLARE_HYPERCALL_BUFFER(uint8_t, bytemap); int ret = -1; uint64_t mask64 = mask; - uint8_t bytemap[sizeof(mask64)]; + + bytemap = xc_hypercall_buffer_alloc(xch, bytemap, sizeof(mask64)); + if ( bytemap == NULL ) + { + PERROR("Could not allocate memory for xc_tbuf_set_cpu_mask hypercall"); + goto out; + } sysctl.cmd = XEN_SYSCTL_tbuf_op; sysctl.interface_version = XEN_SYSCTL_INTERFACE_VERSION; @@ -126,18 +133,12 @@ int xc_tbuf_set_cpu_mask(xc_interface *x bitmap_64_to_byte(bytemap, &mask64, sizeof (mask64) * 8); - set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); + xc_set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); - sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8; + sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(mask64) * 8; - - if ( lock_pages(xch, &bytemap, sizeof(bytemap)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } ret = do_sysctl(xch, &sysctl); - unlock_pages(xch, &bytemap, sizeof(bytemap)); + xc_hypercall_buffer_free(xch, bytemap); out: return ret; diff -r 889ad17d10f9 -r a535e89658c0 tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 @@ -1022,21 +1022,18 @@ int xc_perfc_query_number(xc_interface * int xc_perfc_query_number(xc_interface *xch, int *nbr_desc, int *nbr_val); -/* IMPORTANT: The caller is responsible for mlock()''ing the @desc and @val - arrays. */ int xc_perfc_query(xc_interface *xch, - xc_perfc_desc_t *desc, - xc_perfc_val_t *val); + xc_hypercall_buffer_t *desc, + xc_hypercall_buffer_t *val); typedef xen_sysctl_lockprof_data_t xc_lockprof_data_t; int xc_lockprof_reset(xc_interface *xch); int xc_lockprof_query_number(xc_interface *xch, uint32_t *n_elems); -/* IMPORTANT: The caller is responsible for mlock()''ing the @data array.
*/ int xc_lockprof_query(xc_interface *xch, uint32_t *n_elems, uint64_t *time, - xc_lockprof_data_t *data); + xc_hypercall_buffer_t *data); /** * Memory maps a range within one domain to a local address range. Mappings diff -r 889ad17d10f9 -r a535e89658c0 tools/misc/xenlockprof.c --- a/tools/misc/xenlockprof.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/misc/xenlockprof.c Fri Oct 22 15:14:51 2010 +0100 @@ -18,22 +18,6 @@ #include <string.h> #include <inttypes.h> -static int lock_pages(void *addr, size_t len) -{ - int e = 0; -#ifndef __sun__ - e = mlock(addr, len); -#endif - return (e); -} - -static void unlock_pages(void *addr, size_t len) -{ -#ifndef __sun__ - munlock(addr, len); -#endif -} - int main(int argc, char *argv[]) { xc_interface *xc_handle; @@ -41,7 +25,7 @@ int main(int argc, char *argv[]) uint64_t time; double l, b, sl, sb; char name[60]; - xc_lockprof_data_t *data; + DECLARE_HYPERCALL_BUFFER(xc_lockprof_data_t, data); if ( (argc > 2) || ((argc == 2) && (strcmp(argv[1], "-r") != 0)) ) { @@ -78,23 +62,21 @@ int main(int argc, char *argv[]) } n += 32; /* just to be sure */ - data = malloc(sizeof(*data) * n); - if ( (data == NULL) || (lock_pages(data, sizeof(*data) * n) != 0) ) + data = xc_hypercall_buffer_alloc(xc_handle, data, sizeof(*data) * n); + if ( data == NULL ) { - fprintf(stderr, "Could not alloc or lock buffers: %d (%s)\n", + fprintf(stderr, "Could not allocate buffers: %d (%s)\n", errno, strerror(errno)); return 1; } i = n; - if ( xc_lockprof_query(xc_handle, &i, &time, data) != 0 ) + if ( xc_lockprof_query(xc_handle, &i, &time, HYPERCALL_BUFFER(data)) != 0 ) { fprintf(stderr, "Error getting profile records: %d (%s)\n", errno, strerror(errno)); return 1; } - - unlock_pages(data, sizeof(*data) * n); if ( i > n ) { @@ -132,5 +114,7 @@ int main(int argc, char *argv[]) printf("total locked time: %20.9fs\n", sl); printf("total blocked time: %20.9fs\n", sb); + xc_hypercall_buffer_free(xc_handle, data); + return 0; } diff -r 889ad17d10f9 -r a535e89658c0 tools/misc/xenperf.c --- a/tools/misc/xenperf.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/misc/xenperf.c Fri Oct 22 15:14:51 2010 +0100 @@ -68,28 +68,12 @@ const char *hypercall_name_table[64] }; #undef X -static int lock_pages(void *addr, size_t len) -{ - int e = 0; -#ifndef __sun__ - e = mlock(addr, len); -#endif - return (e); -} - -static void unlock_pages(void *addr, size_t len) -{ -#ifndef __sun__ - munlock(addr, len); -#endif -} - int main(int argc, char *argv[]) { int i, j; xc_interface *xc_handle; - xc_perfc_desc_t *pcd; - xc_perfc_val_t *pcv; + DECLARE_HYPERCALL_BUFFER(xc_perfc_desc_t, pcd); + DECLARE_HYPERCALL_BUFFER(xc_perfc_val_t, pcv); xc_perfc_val_t *val; int num_desc, num_val; unsigned int sum, reset = 0, full = 0, pretty = 0; @@ -154,28 +138,22 @@ int main(int argc, char *argv[]) return 1; } - pcd = malloc(sizeof(*pcd) * num_desc); - pcv = malloc(sizeof(*pcv) * num_val); + pcd = xc_hypercall_buffer_alloc(xc_handle, pcd, sizeof(*pcd) * num_desc); + pcv = xc_hypercall_buffer_alloc(xc_handle, pcv, sizeof(*pcv) * num_val); - if ( pcd == NULL - || lock_pages(pcd, sizeof(*pcd) * num_desc) != 0 - || pcv == NULL - || lock_pages(pcv, sizeof(*pcv) * num_val) != 0) + if ( pcd == NULL || pcv == NULL) { - fprintf(stderr, "Could not alloc or lock buffers: %d (%s)\n", + fprintf(stderr, "Could not allocate buffers: %d (%s)\n", errno, strerror(errno)); exit(-1); } - if ( xc_perfc_query(xc_handle, pcd, pcv) != 0 ) + if ( xc_perfc_query(xc_handle, HYPERCALL_BUFFER(pcd), HYPERCALL_BUFFER(pcv)) != 0 ) { fprintf(stderr, "Error 
getting perf counter: %d (%s)\n", errno, strerror(errno)); return 1; } - - unlock_pages(pcd, sizeof(*pcd) * num_desc); - unlock_pages(pcv, sizeof(*pcv) * num_val); val = pcv; for ( i = 0; i < num_desc; i++ ) @@ -221,5 +199,7 @@ int main(int argc, char *argv[]) val += pcd[i].nr_vals; } + xc_hypercall_buffer_free(xc_handle, pcd); + xc_hypercall_buffer_free(xc_handle, pcv); return 0; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
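For readers following the conversions patch by patch: the sysctl changes above distil to one recurring shape for caller-supplied buffers, where the old lock_pages()/unlock_pages() pair becomes a bounce declaration plus pre/post calls. A hypothetical wrapper shows the shape (a sketch only, error paths and read-back trimmed; example_console_read is not part of the series):

    #include "xc_private.h"

    /* Read up to len bytes of console output into the caller's buffer. */
    static int example_console_read(xc_interface *xch, char *buffer,
                                    unsigned int len, int clear)
    {
        DECLARE_SYSCTL;
        /* BOUNCE_OUT: data flows hypervisor -> caller, so bounce_pre()
         * only allocates and bounce_post() copies back and frees. */
        DECLARE_HYPERCALL_BOUNCE(buffer, len, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
        int ret;

        if ( xc_hypercall_bounce_pre(xch, buffer) )
            return -1;

        sysctl.cmd = XEN_SYSCTL_readconsole;
        xc_set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer);
        sysctl.u.readconsole.count = len;
        sysctl.u.readconsole.clear = clear;
        sysctl.u.readconsole.incremental = 0;

        ret = do_sysctl(xch, &sysctl);

        xc_hypercall_bounce_post(xch, buffer);
        return ret;
    }

When the buffer size is only known later (as in xc_pm_get_pxstat above), the bounce is declared with size 0 and HYPERCALL_BOUNCE_SET_SIZE() fills it in before xc_hypercall_bounce_pre() runs.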
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 06 of 25] libxc: convert watchdog interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID ae3069299aaee8067deb74fe678029c29eb15717 # Parent a535e89658c09f6a491213e0f2373de775fbabb1 libxc: convert watchdog interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r a535e89658c0 -r ae3069299aae tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -374,24 +374,25 @@ int xc_watchdog(xc_interface *xch, uint32_t timeout) { int ret = -1; - sched_watchdog_t arg; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BUFFER(sched_watchdog_t, arg); + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_watchdog hypercall"); + goto out1; + } hypercall.op = __HYPERVISOR_sched_op; hypercall.arg[0] = (unsigned long)SCHEDOP_watchdog; - hypercall.arg[1] = (unsigned long)&arg; - arg.id = id; - arg.timeout = timeout; - - if ( lock_pages(xch, &arg, sizeof(arg)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out1; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->id = id; + arg->timeout = timeout; ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); out1: return ret; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
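The watchdog conversion shows the other recurring shape: hypercall argument structs that libxc builds itself are no longer locked or bounced at all, but allocated straight out of hypercall-safe memory with xc_hypercall_buffer_alloc() and passed via HYPERCALL_BUFFER_AS_ARG(), which saves the copy a bounce would make. Callers are unaffected; assuming the usual SCHEDOP_watchdog semantics (id 0 allocates a new timer and returns its id, timeout 0 cancels), usage stays as simple as (values invented):

    /* Arm a 30 second watchdog, then cancel it again. */
    int id = xc_watchdog(xch, 0, 30);
    if ( id > 0 )
        xc_watchdog(xch, id, 0);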
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 07 of 25] libxc: convert acm interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 64abd5a41eed626a677ec79ff9728ddd68b824d9 # Parent ae3069299aaee8067deb74fe678029c29eb15717 libxc: convert acm interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r ae3069299aae -r 64abd5a41eed tools/libxc/xc_acm.c --- a/tools/libxc/xc_acm.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_acm.c Fri Oct 22 15:14:51 2010 +0100 @@ -27,12 +27,19 @@ int xc_acm_op(xc_interface *xch, int cmd { int ret; DECLARE_HYPERCALL; - struct xen_acmctl acmctl; + DECLARE_HYPERCALL_BUFFER(struct xen_acmctl, acmctl); + + acmctl = xc_hypercall_buffer_alloc(xch, acmctl, sizeof(*acmctl)); + if ( acmctl == NULL ) + { + PERROR("Could not allocate memory for ACM OP hypercall"); + return -EFAULT; + } switch (cmd) { case ACMOP_setpolicy: { struct acm_setpolicy *setpolicy = (struct acm_setpolicy *)arg; - memcpy(&acmctl.u.setpolicy, + memcpy(&acmctl->u.setpolicy, setpolicy, sizeof(struct acm_setpolicy)); } @@ -40,7 +47,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_getpolicy: { struct acm_getpolicy *getpolicy = (struct acm_getpolicy *)arg; - memcpy(&acmctl.u.getpolicy, + memcpy(&acmctl->u.getpolicy, getpolicy, sizeof(struct acm_getpolicy)); } @@ -48,7 +55,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_dumpstats: { struct acm_dumpstats *dumpstats = (struct acm_dumpstats *)arg; - memcpy(&acmctl.u.dumpstats, + memcpy(&acmctl->u.dumpstats, dumpstats, sizeof(struct acm_dumpstats)); } @@ -56,7 +63,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_getssid: { struct acm_getssid *getssid = (struct acm_getssid *)arg; - memcpy(&acmctl.u.getssid, + memcpy(&acmctl->u.getssid, getssid, sizeof(struct acm_getssid)); } @@ -64,7 +71,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_getdecision: { struct acm_getdecision *getdecision = (struct acm_getdecision *)arg; - memcpy(&acmctl.u.getdecision, + memcpy(&acmctl->u.getdecision, getdecision, sizeof(struct acm_getdecision)); } @@ -72,7 +79,7 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_chgpolicy: { struct acm_change_policy *change_policy = (struct acm_change_policy *)arg; - memcpy(&acmctl.u.change_policy, + memcpy(&acmctl->u.change_policy, change_policy, sizeof(struct acm_change_policy)); } @@ -80,40 +87,36 @@ int xc_acm_op(xc_interface *xch, int cmd case ACMOP_relabeldoms: { struct acm_relabel_doms *relabel_doms = (struct acm_relabel_doms *)arg; - memcpy(&acmctl.u.relabel_doms, + memcpy(&acmctl->u.relabel_doms, relabel_doms, sizeof(struct acm_relabel_doms)); } break; } - acmctl.cmd = cmd; - acmctl.interface_version = ACM_INTERFACE_VERSION; + acmctl->cmd = cmd; + acmctl->interface_version = ACM_INTERFACE_VERSION; hypercall.op = __HYPERVISOR_xsm_op; - hypercall.arg[0] = (unsigned long)&acmctl; - if ( lock_pages(xch, &acmctl, sizeof(acmctl)) != 0) - { - PERROR("Could not lock memory for Xen hypercall"); - return -EFAULT; - } + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(acmctl); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0) { if ( errno == EACCES ) DPRINTF("acmctl operation failed -- need to" " rebuild the user-space tool set?\n"); } - unlock_pages(xch, &acmctl, sizeof(acmctl)); switch (cmd) { case ACMOP_getdecision: { struct acm_getdecision *getdecision = (struct acm_getdecision *)arg; memcpy(getdecision, - &acmctl.u.getdecision, + &acmctl->u.getdecision, sizeof(struct acm_getdecision)); break; } } + + xc_hypercall_buffer_free(xch, acmctl); return ret; } 
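Since xc_acm_op() takes a void *arg whose size depends on cmd, the patch keeps explicit memcpy()s in and out of an allocated xen_acmctl rather than bouncing the caller's pointer; the copy pair plays exactly the role a BOTH bounce plays elsewhere in the series. Had the caller supplied the full xen_acmctl, the helpers could do it in one step (hypothetical sketch, not part of the patch):

    static int example_acm_ctl(xc_interface *xch, struct xen_acmctl *acmctl)
    {
        DECLARE_HYPERCALL;
        DECLARE_HYPERCALL_BOUNCE(acmctl, sizeof(*acmctl),
                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
        int ret;

        if ( xc_hypercall_bounce_pre(xch, acmctl) )
            return -EFAULT;

        hypercall.op = __HYPERVISOR_xsm_op;
        hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(acmctl);
        ret = do_xen_hypercall(xch, &hypercall);

        xc_hypercall_bounce_post(xch, acmctl);
        return ret;
    }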
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 08 of 25] libxc: convert evtchn interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 0ddcbfd4824fe9c44c4f98f9a4e9cf02e869c290 # Parent 64abd5a41eed626a677ec79ff9728ddd68b824d9 libxc: convert evtchn interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 64abd5a41eed -r 0ddcbfd4824f tools/libxc/xc_evtchn.c --- a/tools/libxc/xc_evtchn.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_evtchn.c Fri Oct 22 15:14:51 2010 +0100 @@ -22,31 +22,30 @@ #include "xc_private.h" - static int do_evtchn_op(xc_interface *xch, int cmd, void *arg, size_t arg_size, int silently_fail) { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(arg, arg_size, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, arg) ) + { + PERROR("do_evtchn_op: bouncing arg failed"); + goto out; + } hypercall.op = __HYPERVISOR_event_channel_op; hypercall.arg[0] = cmd; - hypercall.arg[1] = (unsigned long)arg; - - if ( lock_pages(xch, arg, arg_size) != 0 ) - { - PERROR("do_evtchn_op: arg lock failed"); - goto out; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); if ((ret = do_xen_hypercall(xch, &hypercall)) < 0 && !silently_fail) ERROR("do_evtchn_op: HYPERVISOR_event_channel_op failed: %d", ret); - unlock_pages(xch, arg, arg_size); + xc_hypercall_bounce_post(xch, arg); out: return ret; } - evtchn_port_or_error_t xc_evtchn_alloc_unbound(xc_interface *xch, _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
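do_evtchn_op() bounces in both directions because several event-channel operations return results through the same structure they take their inputs from. A caller in this file follows the pattern of (sketch modelled on the existing xc_evtchn_alloc_unbound, simplified; the port is only valid because the BOTH bounce copies the structure back after the hypercall):

    static evtchn_port_or_error_t example_alloc_unbound(xc_interface *xch,
                                                        uint32_t dom,
                                                        uint32_t remote_dom)
    {
        struct evtchn_alloc_unbound arg = {
            .dom = (domid_t)dom,
            .remote_dom = (domid_t)remote_dom,
        };
        int rc = do_evtchn_op(xch, EVTCHNOP_alloc_unbound,
                              &arg, sizeof(arg), 0);
        return rc == 0 ? arg.port : rc;
    }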
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 09 of 25] libxc: convert schedop interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 13eb390743e77d302151bb2c53a333626997194c # Parent 0ddcbfd4824fe9c44c4f98f9a4e9cf02e869c290 libxc: convert schedop interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 0ddcbfd4824f -r 13eb390743e7 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -85,24 +85,25 @@ int xc_domain_shutdown(xc_interface *xch int reason) { int ret = -1; - sched_remote_shutdown_t arg; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BUFFER(sched_remote_shutdown_t, arg); + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_domain_shutdown hypercall"); + goto out1; + } hypercall.op = __HYPERVISOR_sched_op; hypercall.arg[0] = (unsigned long)SCHEDOP_remote_shutdown; - hypercall.arg[1] = (unsigned long)&arg; - arg.domain_id = domid; - arg.reason = reason; - - if ( lock_pages(xch, &arg, sizeof(arg)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out1; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->domain_id = domid; + arg->reason = reason; ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); out1: return ret; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 10 of 25] libxc: convert physdevop interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID d6d85046e71462902f8ac32a9a5bf90cd2d1e14a # Parent 13eb390743e77d302151bb2c53a333626997194c libxc: convert physdevop interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 13eb390743e7 -r d6d85046e714 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 @@ -181,18 +181,18 @@ static inline int do_physdev_op(xc_inter static inline int do_physdev_op(xc_interface *xch, int cmd, void *op, size_t len) { int ret = -1; + DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - DECLARE_HYPERCALL; - - if ( hcall_buf_prep(xch, &op, len) != 0 ) + if ( xc_hypercall_bounce_pre(xch, op) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce memory for physdev hypercall"); goto out1; } hypercall.op = __HYPERVISOR_physdev_op; hypercall.arg[0] = (unsigned long) cmd; - hypercall.arg[1] = (unsigned long) op; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(op); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { @@ -201,8 +201,7 @@ static inline int do_physdev_op(xc_inter " rebuild the user-space tool set?\n"); } - hcall_buf_release(xch, &op, len); - + xc_hypercall_bounce_post(xch, op); out1: return ret; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 11 of 25] libxc: convert flask interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID f30814a875717258b94df571cfacce949ca37a50 # Parent d6d85046e71462902f8ac32a9a5bf90cd2d1e14a libxc: convert flask interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r d6d85046e714 -r f30814a87571 tools/libxc/xc_flask.c --- a/tools/libxc/xc_flask.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_flask.c Fri Oct 22 15:14:51 2010 +0100 @@ -40,15 +40,16 @@ int xc_flask_op(xc_interface *xch, flask { int ret = -1; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, op) ) + { + PERROR("Could not bounce memory for flask op hypercall"); + goto out; + } hypercall.op = __HYPERVISOR_xsm_op; - hypercall.arg[0] = (unsigned long)op; - - if ( lock_pages(xch, op, sizeof(*op)) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out; - } + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op); if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 ) { @@ -56,7 +57,7 @@ int xc_flask_op(xc_interface *xch, flask fprintf(stderr, "XSM operation failed!\n"); } - unlock_pages(xch, op, sizeof(*op)); + xc_hypercall_bounce_post(xch, op); out: return ret; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 12 of 25] libxc: convert hvmop interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 36df3e4c37de798250bd45b24f4d13197b627aac # Parent f30814a875717258b94df571cfacce949ca37a50 libxc: convert hvmop interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r f30814a87571 -r 36df3e4c37de tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -1027,38 +1027,42 @@ int xc_set_hvm_param(xc_interface *handl int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value) { DECLARE_HYPERCALL; - xen_hvm_param_t arg; + DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg); int rc; + + arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg)); + if ( arg == NULL ) + return -1; hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_param; - hypercall.arg[1] = (unsigned long)&arg; - arg.domid = dom; - arg.index = param; - arg.value = value; - if ( lock_pages(handle, &arg, sizeof(arg)) != 0 ) - return -1; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->domid = dom; + arg->index = param; + arg->value = value; rc = do_xen_hypercall(handle, &hypercall); - unlock_pages(handle, &arg, sizeof(arg)); + xc_hypercall_buffer_free(handle, arg); return rc; } int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value) { DECLARE_HYPERCALL; - xen_hvm_param_t arg; + DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg); int rc; + + arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg)); + if ( arg == NULL ) + return -1; hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_get_param; - hypercall.arg[1] = (unsigned long)&arg; - arg.domid = dom; - arg.index = param; - if ( lock_pages(handle, &arg, sizeof(arg)) != 0 ) - return -1; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + arg->domid = dom; + arg->index = param; rc = do_xen_hypercall(handle, &hypercall); - unlock_pages(handle, &arg, sizeof(arg)); - *value = arg.value; + *value = arg->value; + xc_hypercall_buffer_free(handle, arg); return rc; } diff -r f30814a87571 -r 36df3e4c37de tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 @@ -313,18 +313,19 @@ int xc_hvm_set_pci_intx_level( unsigned int level) { DECLARE_HYPERCALL; - struct xen_hvm_set_pci_intx_level _arg, *arg = &_arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_intx_level, arg); int rc; - if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 ) + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) { - PERROR("Could not lock memory"); - return rc; + PERROR("Could not allocate memory for xc_hvm_set_pci_intx_level hypercall"); + return -1; } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_pci_intx_level; - hypercall.arg[1] = (unsigned long)arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); arg->domid = dom; arg->domain = domain; @@ -335,7 +336,7 @@ int xc_hvm_set_pci_intx_level( rc = do_xen_hypercall(xch, &hypercall); - hcall_buf_release(xch, (void **)&arg, sizeof(*arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -346,18 +347,19 @@ int xc_hvm_set_isa_irq_level( unsigned int level) { DECLARE_HYPERCALL; - struct xen_hvm_set_isa_irq_level _arg, *arg = &_arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_isa_irq_level, arg); int rc; - if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 ) + arg = 
xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) { - PERROR("Could not lock memory"); - return rc; + PERROR("Could not allocate memory for xc_hvm_set_isa_irq_level hypercall"); + return -1; } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_isa_irq_level; - hypercall.arg[1] = (unsigned long)arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); arg->domid = dom; arg->isa_irq = isa_irq; @@ -365,7 +367,7 @@ int xc_hvm_set_isa_irq_level( rc = do_xen_hypercall(xch, &hypercall); - hcall_buf_release(xch, (void **)&arg, sizeof(*arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -374,26 +376,27 @@ int xc_hvm_set_pci_link_route( xc_interface *xch, domid_t dom, uint8_t link, uint8_t isa_irq) { DECLARE_HYPERCALL; - struct xen_hvm_set_pci_link_route arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_link_route, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_hvm_set_pci_link_route hypercall"); + return -1; + } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_pci_link_route; - hypercall.arg[1] = (unsigned long)&arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); - arg.domid = dom; - arg.link = link; - arg.isa_irq = isa_irq; - - if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + arg->domid = dom; + arg->link = link; + arg->isa_irq = isa_irq; rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -404,28 +407,32 @@ int xc_hvm_track_dirty_vram( unsigned long *dirty_bitmap) { DECLARE_HYPERCALL; - struct xen_hvm_track_dirty_vram arg; + DECLARE_HYPERCALL_BOUNCE(dirty_bitmap, (nr+7) / 8, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_track_dirty_vram, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL || xc_hypercall_bounce_pre(xch, dirty_bitmap) ) + { + PERROR("Could not bounce memory for xc_hvm_track_dirty_vram hypercall"); + rc = -1; + goto out; + } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_track_dirty_vram; - hypercall.arg[1] = (unsigned long)&arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); - arg.domid = dom; - arg.first_pfn = first_pfn; - arg.nr = nr; - set_xen_guest_handle(arg.dirty_bitmap, (uint8_t *)dirty_bitmap); - - if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + arg->domid = dom; + arg->first_pfn = first_pfn; + arg->nr = nr; + xc_set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap); rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); - +out: + xc_hypercall_buffer_free(xch, arg); + xc_hypercall_bounce_post(xch, dirty_bitmap); return rc; } @@ -433,26 +440,27 @@ int xc_hvm_modified_memory( xc_interface *xch, domid_t dom, uint64_t first_pfn, uint64_t nr) { DECLARE_HYPERCALL; - struct xen_hvm_modified_memory arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_modified_memory, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_hvm_modified_memory hypercall"); + return -1; + } hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_modified_memory; - hypercall.arg[1] = (unsigned long)&arg; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); - arg.domid = dom; - arg.first_pfn = first_pfn; - arg.nr = nr; - - if ( (rc =
lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + arg->domid = dom; + arg->first_pfn = first_pfn; + arg->nr = nr; rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } @@ -461,27 +469,28 @@ int xc_hvm_set_mem_type( xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint64_t nr) { DECLARE_HYPERCALL; - struct xen_hvm_set_mem_type arg; + DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg); int rc; + + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg)); + if ( arg == NULL ) + { + PERROR("Could not allocate memory for xc_hvm_set_mem_type hypercall"); + return -1; + } + + arg->domid = dom; + arg->hvmmem_type = mem_type; + arg->first_pfn = first_pfn; + arg->nr = nr; hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_set_mem_type; - hypercall.arg[1] = (unsigned long)&arg; - - arg.domid = dom; - arg.hvmmem_type = mem_type; - arg.first_pfn = first_pfn; - arg.nr = nr; - - if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 ) - { - PERROR("Could not lock memory"); - return rc; - } + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); rc = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, &arg, sizeof(arg)); + xc_hypercall_buffer_free(xch, arg); return rc; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
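xc_hvm_track_dirty_vram() is the first conversion to combine both mechanisms in one call: the argument struct is an allocated hypercall buffer while the caller's bitmap is bounced, and a single out: label frees both. Note this implies xc_hypercall_buffer_free() and xc_hypercall_bounce_post() must be safe to call on a buffer that was never allocated or bounced, since the error path reaches out: with at most one of the two prepared. The caller side is unchanged; for example (usage sketch, page count invented):

    #define VRAM_PAGES 1024
    /* One bit per page tracked. */
    unsigned long dirty[VRAM_PAGES / (8 * sizeof(unsigned long))];

    if ( xc_hvm_track_dirty_vram(xch, domid, first_pfn, VRAM_PAGES, dirty) )
        PERROR("track_dirty_vram failed");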
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 13 of 25] libxc: convert mca interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 441891ba02392f7ac7ca215569eab45023b8c9cc # Parent 36df3e4c37de798250bd45b24f4d13197b627aac libxc: convert mca interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 36df3e4c37de -r 441891ba0239 tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 @@ -167,18 +167,19 @@ int xc_mca_op(xc_interface *xch, struct { int ret = 0; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(mc, sizeof(*mc), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + if ( xc_hypercall_bounce_pre(xch, mc) ) + { + PERROR("Could not bounce xen_mc memory buffer"); + return -1; + } mc->interface_version = XEN_MCA_INTERFACE_VERSION; - if ( lock_pages(xch, mc, sizeof(*mc)) ) - { - PERROR("Could not lock xen_mc memory"); - return -EINVAL; - } hypercall.op = __HYPERVISOR_mca; - hypercall.arg[0] = (unsigned long)mc; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(mc); ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, mc, sizeof(*mc)); + xc_hypercall_bounce_post(xch, mc); return ret; } #endif _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 14 of 25] libxc: convert tmem interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID f9d7420fae6d3f4a324cd783ab56ea5a158cf664 # Parent 441891ba02392f7ac7ca215569eab45023b8c9cc libxc: convert tmem interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 441891ba0239 -r f9d7420fae6d tools/libxc/xc_tmem.c --- a/tools/libxc/xc_tmem.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_tmem.c Fri Oct 22 15:14:51 2010 +0100 @@ -25,21 +25,23 @@ static int do_tmem_op(xc_interface *xch, { int ret; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, op) ) + { + PERROR("Could not bounce buffer for tmem op hypercall"); + return -EFAULT; + } hypercall.op = __HYPERVISOR_tmem_op; - hypercall.arg[0] = (unsigned long)op; - if (lock_pages(xch, op, sizeof(*op)) != 0) - { - PERROR("Could not lock memory for Xen hypercall"); - return -EFAULT; - } + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op); if ((ret = do_xen_hypercall(xch, &hypercall)) < 0) { if ( errno == EACCES ) DPRINTF("tmem operation failed -- need to" " rebuild the user-space tool set?\n"); } - unlock_pages(xch, op, sizeof(*op)); + xc_hypercall_bounce_post(xch, op); return ret; } @@ -54,13 +56,13 @@ int xc_tmem_control(xc_interface *xch, void *buf) { tmem_op_t op; + DECLARE_HYPERCALL_BOUNCE(buf, arg1, XC_HYPERCALL_BUFFER_BOUNCE_OUT); int rc; op.cmd = TMEM_CONTROL; op.pool_id = pool_id; op.u.ctrl.subop = subop; op.u.ctrl.cli_id = cli_id; - set_xen_guest_handle(op.u.ctrl.buf,buf); op.u.ctrl.arg1 = arg1; op.u.ctrl.arg2 = arg2; /* use xc_tmem_control_oid if arg3 is required */ @@ -68,25 +70,28 @@ int xc_tmem_control(xc_interface *xch, op.u.ctrl.oid[1] = 0; op.u.ctrl.oid[2] = 0; - if (subop == TMEMC_LIST) { - if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0)) - { - PERROR("Could not lock memory for Xen hypercall"); - return -ENOMEM; - } - } - #ifdef VALGRIND if (arg1 != 0) memset(buf, 0, arg1); #endif + if ( subop == TMEMC_LIST && arg1 != 0 ) + { + if ( buf == NULL ) + return -EINVAL; + if ( xc_hypercall_bounce_pre(xch, buf) ) + { + PERROR("Could not bounce buffer for tmem control hypercall"); + return -ENOMEM; + } + } + + xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + rc = do_tmem_op(xch, &op); - if (subop == TMEMC_LIST) { - if (arg1 != 0) - unlock_pages(xch, buf, arg1); - } + if (subop == TMEMC_LIST && arg1 != 0) + xc_hypercall_bounce_post(xch, buf); return rc; } @@ -101,6 +106,7 @@ int xc_tmem_control_oid(xc_interface *xc void *buf) { tmem_op_t op; + DECLARE_HYPERCALL_BOUNCE(buf, arg1, XC_HYPERCALL_BUFFER_BOUNCE_OUT); int rc; op.cmd = TMEM_CONTROL; @@ -114,25 +120,28 @@ int xc_tmem_control_oid(xc_interface *xc op.u.ctrl.oid[1] = oid.oid[1]; op.u.ctrl.oid[2] = oid.oid[2]; - if (subop == TMEMC_LIST) { - if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0)) - { - PERROR("Could not lock memory for Xen hypercall"); - return -ENOMEM; - } - } - #ifdef VALGRIND if (arg1 != 0) memset(buf, 0, arg1); #endif + if ( subop == TMEMC_LIST && arg1 != 0 ) + { + if ( buf == NULL ) + return -EINVAL; + if ( xc_hypercall_bounce_pre(xch, buf) ) + { + PERROR("Could not bounce buffer for tmem control (OID) hypercall"); + return -ENOMEM; + } + } + + xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + rc = do_tmem_op(xch, &op); - if (subop == TMEMC_LIST) { - if (arg1 != 0) - unlock_pages(xch, buf, arg1); - } + if (subop == TMEMC_LIST && arg1 != 0) + xc_hypercall_bounce_post(xch, buf); return rc; } 
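One ordering detail in the tmem conversion is worth calling out: xc_set_xen_guest_handle() now runs after xc_hypercall_bounce_pre(), because the handle must record the address of the bounce copy rather than of the caller's buffer. A list-only wrapper distils the pattern (hypothetical sketch; example_tmem_list is not part of the patch):

    static int example_tmem_list(xc_interface *xch, uint32_t cli_id,
                                 void *buf, uint32_t len)
    {
        tmem_op_t op;
        DECLARE_HYPERCALL_BOUNCE(buf, len, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
        int rc;

        memset(&op, 0, sizeof(op));
        op.cmd = TMEM_CONTROL;
        op.u.ctrl.subop = TMEMC_LIST;
        op.u.ctrl.cli_id = cli_id;
        op.u.ctrl.arg1 = len;

        if ( xc_hypercall_bounce_pre(xch, buf) )
            return -ENOMEM;
        xc_set_xen_guest_handle(op.u.ctrl.buf, buf); /* after bounce_pre() */

        rc = do_tmem_op(xch, &op);
        xc_hypercall_bounce_post(xch, buf);
        return rc;
    }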
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 15 of 25] libxc: convert gnttab interfaces over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID ba4bc1c93fee7072c32d8a0a1aef61d6fb50e757 # Parent f9d7420fae6d3f4a324cd783ab56ea5a158cf664 libxc: convert gnttab interfaces over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r f9d7420fae6d -r ba4bc1c93fee tools/libxc/xc_linux.c --- a/tools/libxc/xc_linux.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_linux.c Fri Oct 22 15:14:51 2010 +0100 @@ -612,21 +612,22 @@ int xc_gnttab_op(xc_interface *xch, int { int ret = 0; DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, count * op_size, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + + if ( xc_hypercall_bounce_pre(xch, op) ) + { + PERROR("Could not bounce buffer for grant table op hypercall"); + goto out1; + } hypercall.op = __HYPERVISOR_grant_table_op; hypercall.arg[0] = cmd; - hypercall.arg[1] = (unsigned long)op; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(op); hypercall.arg[2] = count; - - if ( lock_pages(xch, op, count* op_size) != 0 ) - { - PERROR("Could not lock memory for Xen hypercall"); - goto out1; - } ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(xch, op, count * op_size); + xc_hypercall_bounce_post(xch, op); out1: return ret; @@ -651,7 +652,7 @@ static void *_gnttab_map_table(xc_interf int rc, i; struct gnttab_query_size query; struct gnttab_setup_table setup; - unsigned long *frame_list = NULL; + DECLARE_HYPERCALL_BUFFER(unsigned long, frame_list); xen_pfn_t *pfn_list = NULL; grant_entry_v1_t *gnt = NULL; @@ -669,26 +670,23 @@ static void *_gnttab_map_table(xc_interf *gnt_num = query.nr_frames * (PAGE_SIZE / sizeof(grant_entry_v1_t) ); - frame_list = malloc(query.nr_frames * sizeof(unsigned long)); - if ( !frame_list || lock_pages(xch, frame_list, - query.nr_frames * sizeof(unsigned long)) ) + frame_list = xc_hypercall_buffer_alloc(xch, frame_list, query.nr_frames * sizeof(unsigned long)); + if ( !frame_list ) { - ERROR("Alloc/lock frame_list in xc_gnttab_map_table\n"); - if ( frame_list ) - free(frame_list); + ERROR("Could not allocate frame_list in xc_gnttab_map_table\n"); return NULL; } pfn_list = malloc(query.nr_frames * sizeof(xen_pfn_t)); if ( !pfn_list ) { - ERROR("Could not lock pfn_list in xc_gnttab_map_table\n"); + ERROR("Could not allocate pfn_list in xc_gnttab_map_table\n"); goto err; } setup.dom = domid; setup.nr_frames = query.nr_frames; - set_xen_guest_handle(setup.frame_list, frame_list); + xc_set_xen_guest_handle(setup.frame_list, frame_list); /* XXX Any race with other setup_table hypercall? */ rc = xc_gnttab_op(xch, GNTTABOP_setup_table, &setup, sizeof(setup), @@ -713,10 +711,7 @@ static void *_gnttab_map_table(xc_interf err: if ( frame_list ) - { - unlock_pages(xch, frame_list, query.nr_frames * sizeof(unsigned long)); - free(frame_list); - } + xc_hypercall_buffer_free(xch, frame_list); if ( pfn_list ) free(pfn_list); diff -r f9d7420fae6d -r ba4bc1c93fee tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 @@ -1290,7 +1290,7 @@ int xc_gnttab_set_max_grants(xc_interfac int xc_gnttab_op(xc_interface *xch, int cmd, void * op, int op_size, int count); -/* Logs iff lock_pages failes, otherwise doesn''t. */ +/* Logs iff hypercall bounce fails, otherwise doesn''t. 
*/ int xc_gnttab_get_version(xc_interface *xch, int domid); /* Never logs */ grant_entry_v1_t *xc_gnttab_map_table_v1(xc_interface *xch, int domid, int *gnt_num); _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
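_gnttab_map_table() layers the two mechanisms: frame_list is a real hypercall buffer whose (already hypercall-safe) address is stored into setup with xc_set_xen_guest_handle(), and setup itself is then bounced by xc_gnttab_op(). The embedded handle survives the bounce copy because it holds an address, not a local name. In outline (sketch distilled from the diff; error handling and the setup.status check trimmed):

    static int example_setup_table(xc_interface *xch, int domid, int nr_frames)
    {
        struct gnttab_setup_table setup;
        DECLARE_HYPERCALL_BUFFER(unsigned long, frame_list);
        int rc;

        frame_list = xc_hypercall_buffer_alloc(xch, frame_list,
                                               nr_frames * sizeof(*frame_list));
        if ( frame_list == NULL )
            return -1;

        setup.dom = domid;
        setup.nr_frames = nr_frames;
        xc_set_xen_guest_handle(setup.frame_list, frame_list);

        /* xc_gnttab_op() bounces setup; the handle inside it already
         * points at the hypercall-safe frame_list. */
        rc = xc_gnttab_op(xch, GNTTABOP_setup_table, &setup, sizeof(setup), 1);

        xc_hypercall_buffer_free(xch, frame_list);
        return rc;
    }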
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 16 of 25] libxc: convert memory op interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 63c5a929ae7ca0c82406e0cd33f95c82f219d59f # Parent ba4bc1c93fee7072c32d8a0a1aef61d6fb50e757 libxc: convert memory op interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r ba4bc1c93fee -r 63c5a929ae7c tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -468,31 +468,30 @@ int xc_domain_set_memmap_limit(xc_interf unsigned long map_limitkb) { int rc; - struct xen_foreign_memory_map fmap = { .domid = domid, .map = { .nr_entries = 1 } }; + DECLARE_HYPERCALL_BUFFER(struct e820entry, e820); - struct e820entry e820 = { - .addr = 0, - .size = (uint64_t)map_limitkb << 10, - .type = E820_RAM - }; + e820 = xc_hypercall_buffer_alloc(xch, e820, sizeof(*e820)); - set_xen_guest_handle(fmap.map.buffer, &e820); + if ( e820 == NULL ) + { + PERROR("Could not allocate memory for xc_domain_set_memmap_limit hypercall"); + return -1; + } - if ( lock_pages(xch, &e820, sizeof(e820)) ) - { - PERROR("Could not lock memory for Xen hypercall"); - rc = -1; - goto out; - } + e820->addr = 0; + e820->size = (uint64_t)map_limitkb << 10; + e820->type = E820_RAM; + + xc_set_xen_guest_handle(fmap.map.buffer, e820); rc = do_memory_op(xch, XENMEM_set_memory_map, &fmap, sizeof(fmap)); - out: - unlock_pages(xch, &e820, sizeof(e820)); + xc_hypercall_buffer_free(xch, e820); + return rc; } #else @@ -587,6 +586,7 @@ int xc_domain_increase_reservation(xc_in xen_pfn_t *extent_start) { int err; + DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { .nr_extents = nr_extents, .extent_order = extent_order, @@ -595,18 +595,17 @@ int xc_domain_increase_reservation(xc_in }; /* may be NULL */ - if ( extent_start && lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, extent_start) ) { - PERROR("Could not lock memory for XENMEM_increase_reservation hypercall"); + PERROR("Could not bounce memory for XENMEM_increase_reservation hypercall"); return -1; } - set_xen_guest_handle(reservation.extent_start, extent_start); + xc_set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_increase_reservation, &reservation, sizeof(reservation)); - if ( extent_start ) - unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)); + xc_hypercall_bounce_post(xch, extent_start); return err; } @@ -645,18 +644,13 @@ int xc_domain_decrease_reservation(xc_in xen_pfn_t *extent_start) { int err; + DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { .nr_extents = nr_extents, .extent_order = extent_order, .mem_flags = 0, .domid = domid }; - - if ( lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 ) - { - PERROR("Could not lock memory for XENMEM_decrease_reservation hypercall"); - return -1; - } if ( extent_start == NULL ) { @@ -665,11 +659,16 @@ int xc_domain_decrease_reservation(xc_in return -1; } - set_xen_guest_handle(reservation.extent_start, extent_start); + if ( xc_hypercall_bounce_pre(xch, extent_start) ) + { + PERROR("Could not bounce memory for XENMEM_decrease_reservation hypercall"); + return -1; + } + xc_set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_decrease_reservation, 
&reservation, sizeof(reservation)); - unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)); + xc_hypercall_bounce_post(xch, extent_start); return err; } @@ -722,6 +721,7 @@ int xc_domain_populate_physmap(xc_interf xen_pfn_t *extent_start) { int err; + DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { .nr_extents = nr_extents, .extent_order = extent_order, @@ -729,18 +729,16 @@ int xc_domain_populate_physmap(xc_interf .domid = domid }; - if ( lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, extent_start) ) { - PERROR("Could not lock memory for XENMEM_populate_physmap hypercall"); + PERROR("Could not bounce memory for XENMEM_populate_physmap hypercall"); return -1; } - - set_xen_guest_handle(reservation.extent_start, extent_start); + xc_set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation)); - unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)); - + xc_hypercall_bounce_post(xch, extent_start); return err; } @@ -778,8 +776,9 @@ int xc_domain_memory_exchange_pages(xc_i unsigned int out_order, xen_pfn_t *out_extents) { - int rc; - + int rc = -1; + DECLARE_HYPERCALL_BOUNCE(in_extents, nr_in_extents*sizeof(*in_extents), XC_HYPERCALL_BUFFER_BOUNCE_IN); + DECLARE_HYPERCALL_BOUNCE(out_extents, nr_out_extents*sizeof(*out_extents), XC_HYPERCALL_BUFFER_BOUNCE_OUT); struct xen_memory_exchange exchange = { .in = { .nr_extents = nr_in_extents, @@ -792,10 +791,19 @@ int xc_domain_memory_exchange_pages(xc_i .domid = domid } }; - set_xen_guest_handle(exchange.in.extent_start, in_extents); - set_xen_guest_handle(exchange.out.extent_start, out_extents); + + if ( xc_hypercall_bounce_pre(xch, in_extents) || + xc_hypercall_bounce_pre(xch, out_extents)) + goto out; + + xc_set_xen_guest_handle(exchange.in.extent_start, in_extents); + xc_set_xen_guest_handle(exchange.out.extent_start, out_extents); rc = do_memory_op(xch, XENMEM_exchange, &exchange, sizeof(exchange)); + +out: + xc_hypercall_bounce_post(xch, in_extents); + xc_hypercall_bounce_post(xch, out_extents); return rc; } diff -r ba4bc1c93fee -r 63c5a929ae7c tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 @@ -430,23 +430,22 @@ int do_memory_op(xc_interface *xch, int int do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len) { DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(arg, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH); long ret = -EINVAL; - hypercall.op = __HYPERVISOR_memory_op; - hypercall.arg[0] = (unsigned long)cmd; - hypercall.arg[1] = (unsigned long)arg; - - if ( len && lock_pages(xch, arg, len) != 0 ) + if ( xc_hypercall_bounce_pre(xch, arg) ) { - PERROR("Could not lock memory for XENMEM hypercall"); + PERROR("Could not bounce memory for XENMEM hypercall"); goto out1; } + hypercall.op = __HYPERVISOR_memory_op; + hypercall.arg[0] = (unsigned long) cmd; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg); + ret = do_xen_hypercall(xch, &hypercall); - if ( len ) - unlock_pages(xch, arg, len); - + xc_hypercall_bounce_post(xch, arg); out1: return ret; } @@ -476,24 +475,25 @@ int xc_machphys_mfn_list(xc_interface *x xen_pfn_t *extent_start) { int rc; + DECLARE_HYPERCALL_BOUNCE(extent_start, max_extents * sizeof(xen_pfn_t), XC_HYPERCALL_BUFFER_BOUNCE_OUT); struct xen_machphys_mfn_list xmml = 
{ .max_extents = max_extents, }; - if ( lock_pages(xch, extent_start, max_extents * sizeof(xen_pfn_t)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, extent_start) ) { - PERROR("Could not lock memory for XENMEM_machphys_mfn_list hypercall"); + PERROR("Could not bounce memory for XENMEM_machphys_mfn_list hypercall"); return -1; } - set_xen_guest_handle(xmml.extent_start, extent_start); + xc_set_xen_guest_handle(xmml.extent_start, extent_start); rc = do_memory_op(xch, XENMEM_machphys_mfn_list, &xmml, sizeof(xmml)); if (rc || xmml.nr_extents != max_extents) rc = -1; else rc = 0; - unlock_pages(xch, extent_start, max_extents * sizeof(xen_pfn_t)); + xc_hypercall_bounce_post(xch, extent_start); return rc; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
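The diffs above show the conversion piecewise, so the complete shape a converted function ends up with is worth spelling out once. A minimal sketch, assuming only the macros introduced earlier in this series; the function name, hypercall number and error text are illustrative, not real libxc entry points:

static long xc_example_op(xc_interface *xch, void *arg, size_t len)
{
    DECLARE_HYPERCALL;
    /* Declares a shadow xc_hypercall_buffer_t keyed on the name "arg";
     * the size and bounce direction are fixed here, not at pre/post time. */
    DECLARE_HYPERCALL_BOUNCE(arg, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
    long ret = -EINVAL;

    /* Allocate hypercall-safe memory and copy the caller's data in. */
    if ( xc_hypercall_bounce_pre(xch, arg) )
    {
        PERROR("Could not bounce memory for example hypercall");
        goto out;
    }

    hypercall.op     = __HYPERVISOR_memory_op;       /* illustrative */
    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(arg); /* the safe copy, not
                                                        the caller's pointer */

    ret = do_xen_hypercall(xch, &hypercall);

    /* Copy any results back to the caller and free the safe copy. */
    xc_hypercall_bounce_post(xch, arg);
 out:
    return ret;
}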
Ian Campbell
2010-Oct-22 14:15 UTC
[Xen-devel] [PATCH 17 of 25] libxc: convert mmuext op interface over to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 571e4fd050f7ae7cad9a94c94f9be79acf55710d # Parent 63c5a929ae7ca0c82406e0cd33f95c82f219d59f libxc: convert mmuext op interface over to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 63c5a929ae7c -r 571e4fd050f7 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 @@ -343,23 +343,24 @@ int xc_mmuext_op( domid_t dom) { DECLARE_HYPERCALL; + DECLARE_HYPERCALL_BOUNCE(op, nr_ops*sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); long ret = -EINVAL; - if ( hcall_buf_prep(xch, (void **)&op, nr_ops*sizeof(*op)) != 0 ) + if ( xc_hypercall_bounce_pre(xch, op) ) { - PERROR("Could not lock memory for Xen hypercall"); + PERROR("Could not bounce memory for mmuext op hypercall"); goto out1; } hypercall.op = __HYPERVISOR_mmuext_op; - hypercall.arg[0] = (unsigned long)op; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op); hypercall.arg[1] = (unsigned long)nr_ops; hypercall.arg[2] = (unsigned long)0; hypercall.arg[3] = (unsigned long)dom; ret = do_xen_hypercall(xch, &hypercall); - hcall_buf_release(xch, (void **)&op, nr_ops*sizeof(*op)); + xc_hypercall_bounce_post(xch, op); out1: return ret; @@ -369,22 +370,23 @@ static int flush_mmu_updates(xc_interfac { int err = 0; DECLARE_HYPERCALL; + DECLARE_NAMED_HYPERCALL_BOUNCE(updates, mmu->updates, mmu->idx*sizeof(*mmu->updates), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); if ( mmu->idx == 0 ) return 0; + if ( xc_hypercall_bounce_pre(xch, updates) ) + { + PERROR("flush_mmu_updates: bounce buffer failed"); + err = 1; + goto out; + } + hypercall.op = __HYPERVISOR_mmu_update; - hypercall.arg[0] = (unsigned long)mmu->updates; + hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(updates); hypercall.arg[1] = (unsigned long)mmu->idx; hypercall.arg[2] = 0; hypercall.arg[3] = mmu->subject; - - if ( lock_pages(xch, mmu->updates, sizeof(mmu->updates)) != 0 ) - { - PERROR("flush_mmu_updates: mmu updates lock_pages failed"); - err = 1; - goto out; - } if ( do_xen_hypercall(xch, &hypercall) < 0 ) { @@ -394,7 +396,7 @@ static int flush_mmu_updates(xc_interfac mmu->idx = 0; - unlock_pages(xch, mmu->updates, sizeof(mmu->updates)); + xc_hypercall_bounce_post(xch, updates); out: return err; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
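flush_mmu_updates() above needs the named variant of the bounce macro because the buffer to bounce lives inside another structure rather than in a like-named local variable. A hedged sketch of that idiom; struct batch and its fields are illustrative stand-ins, not libxc types:

struct batch {                      /* illustrative stand-in for xc_mmu */
    mmu_update_t entries[64];
    unsigned int idx;
};

static int submit_batch(xc_interface *xch, struct batch *b, domid_t dom)
{
    int err = 0;
    DECLARE_HYPERCALL;
    /* Binds b->entries to the fresh local name "entries"; the
     * pre/post/AS_ARG macros then key on that name, since a struct
     * member expression cannot itself serve as the identifier. */
    DECLARE_NAMED_HYPERCALL_BOUNCE(entries, b->entries,
                                   b->idx * sizeof(*b->entries),
                                   XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

    if ( b->idx == 0 )
        return 0;

    if ( xc_hypercall_bounce_pre(xch, entries) )
        return 1;

    hypercall.op     = __HYPERVISOR_mmu_update;
    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(entries);
    hypercall.arg[1] = b->idx;
    hypercall.arg[2] = 0;
    hypercall.arg[3] = dom;

    if ( do_xen_hypercall(xch, &hypercall) < 0 )
        err = 1;

    b->idx = 0;
    xc_hypercall_bounce_post(xch, entries);
    return err;
}

A side benefit visible in the diff: only the mmu->idx entries actually in use are bounced, where the old code passed sizeof(mmu->updates), i.e. the whole array, to lock_pages().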
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 18 of 25] libxc: switch page offlining interfaces to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 6624f76887ab0672598d99d4ab5a37815d5b4aa3 # Parent 571e4fd050f7ae7cad9a94c94f9be79acf55710d libxc: switch page offlining interfaces to hypercall buffers There is no need to lock/bounce minfo->pfn_type in init_mem_info since xc_get_pfn_type_batch() will take care of that for us. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 571e4fd050f7 -r 6624f76887ab tools/libxc/xc_offline_page.c --- a/tools/libxc/xc_offline_page.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_offline_page.c Fri Oct 22 15:14:51 2010 +0100 @@ -294,12 +294,6 @@ static int init_mem_info(xc_interface *x minfo->pfn_type[i] = pfn_to_mfn(i, minfo->p2m_table, minfo->guest_width); - if ( lock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) ) - { - ERROR("Unable to lock pfn_type array"); - goto failed; - } - for (i = 0; i < minfo->p2m_size ; i+=1024) { int count = ((dinfo->p2m_size - i ) > 1024 ) ? 1024: (dinfo->p2m_size - i); @@ -307,13 +301,11 @@ static int init_mem_info(xc_interface *x minfo->pfn_type + i)) ) { ERROR("Failed to get pfn_type %x\n", rc); - goto unlock; + goto failed; } } return 0; -unlock: - unlock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)); failed: if (minfo->pfn_type) { _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
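To make the rule in the commit message concrete: once the function that actually issues the hypercall bounces its own arguments, the caller passes plain heap memory and has no locking or bouncing to undo on any path. A sketch of what the caller side reduces to (names follow init_mem_info(), but this is not the literal patched code):

static int fill_pfn_types(xc_interface *xch, uint32_t domid,
                          unsigned long p2m_size, xen_pfn_t *pfn_type)
{
    unsigned long i;

    /* pfn_type is ordinary calloc()ed memory: xc_get_pfn_type_batch()
     * declares and performs its own bounce internally, so bouncing it
     * here as well would only add a redundant pair of copies. */
    for ( i = 0; i < p2m_size; i += 1024 )
    {
        int count = (p2m_size - i) > 1024 ? 1024 : (p2m_size - i);
        if ( xc_get_pfn_type_batch(xch, domid, count, pfn_type + i) )
            return -1;      /* nothing to unlock on the error path */
    }
    return 0;
}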
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 19 of 25] libxc: convert ia64 dom0vp interface to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID cfab497ee5a0c8ea0ed6b136c4d10f21c921eac0 # Parent 6624f76887ab0672598d99d4ab5a37815d5b4aa3 libxc: convert ia64 dom0vp interface to hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 6624f76887ab -r cfab497ee5a0 tools/libxc/ia64/xc_dom_ia64_util.c --- a/tools/libxc/ia64/xc_dom_ia64_util.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/ia64/xc_dom_ia64_util.c Fri Oct 22 15:14:51 2010 +0100 @@ -36,19 +36,21 @@ xen_ia64_fpswa_revision(struct xc_dom_im { int ret; DECLARE_HYPERCALL; - hypercall.op = __HYPERVISOR_ia64_dom0vp_op; - hypercall.arg[0] = IA64_DOM0VP_fpswa_revision; - hypercall.arg[1] = (unsigned long)revision; + DECLARE_HYPERCALL_BOUNCE(revision, sizeof(*revision), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); - if (lock_pages(revision, sizeof(*revision)) != 0) { - xc_interface *xch = dom->xch; + if (xc_hypercall_bounce_pre(dom->xch, revision) ) + { PERROR("Could not lock memory for xen fpswa hypercall"); return -1; } + hypercall.op = __HYPERVISOR_ia64_dom0vp_op; + hypercall.arg[0] = IA64_DOM0VP_fpswa_revision; + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(revision); + ret = do_xen_hypercall(dom->xch, &hypercall); - - unlock_pages(revision, sizeof(*revision)); + + xc_hypercall_bounce_post(dom->xch, revision); return ret; } diff -r 6624f76887ab -r cfab497ee5a0 tools/libxc/ia64/xc_ia64_stubs.c --- a/tools/libxc/ia64/xc_ia64_stubs.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/ia64/xc_ia64_stubs.c Fri Oct 22 15:14:51 2010 +0100 @@ -42,19 +42,24 @@ xc_ia64_get_memmap(xc_interface *xch, uint32_t domid, char *buf, unsigned long bufsize) { privcmd_hypercall_t hypercall; + DECLARE_HYPERCALL_BOUNCE(buf, bufsize, XC_HYPERCALL_BUFFER_BOUNCE_OUT); int ret; + + if ( xc_hypercall_bounce_pre(xch, buf) ) + { + PERROR("xc_ia64_get_memmap: buf bounce failed"); + return -1; + } hypercall.op = __HYPERVISOR_ia64_dom0vp_op; hypercall.arg[0] = IA64_DOM0VP_get_memmap; hypercall.arg[1] = domid; - hypercall.arg[2] = (unsigned long)buf; + hypercall.arg[2] = HYPERCALL_BUFFER_AS_ARG(buf); hypercall.arg[3] = bufsize; hypercall.arg[4] = 0; - if (lock_pages(buf, bufsize) != 0) - return -1; ret = do_xen_hypercall(xch, &hypercall); - unlock_pages(buf, bufsize); + xc_hypercall_bounce_post(xch, buf); return ret; } @@ -142,6 +147,7 @@ xc_ia64_map_foreign_p2m(xc_interface *xc struct xen_ia64_memmap_info *memmap_info, unsigned long flags, unsigned long *p2m_size_p) { + DECLARE_HYPERCALL_BOUNCE(memmap_info, sizeof(*memmap_info) + memmap_info->efi_memmap_size, XC_HYPERCALL_BUFFER_BOUNCE_IN); unsigned long gpfn_max; unsigned long p2m_size; void *addr; @@ -157,25 +163,23 @@ xc_ia64_map_foreign_p2m(xc_interface *xc addr = mmap(NULL, p2m_size, PROT_READ, MAP_SHARED, xch->fd, 0); if (addr == MAP_FAILED) return NULL; + if (xc_hypercall_bounce_pre(xch, memmap_info)) { + saved_errno = errno; + munmap(addr, p2m_size); + errno = saved_errno; + return NULL; + } hypercall.op = __HYPERVISOR_ia64_dom0vp_op; hypercall.arg[0] = IA64_DOM0VP_expose_foreign_p2m; hypercall.arg[1] = (unsigned long)addr; hypercall.arg[2] = dom; - hypercall.arg[3] = (unsigned long)memmap_info; + hypercall.arg[3] = HYPERCALL_BUFFER_AS_ARG(memmap_info); hypercall.arg[4] = flags; - if (lock_pages(memmap_info, - sizeof(*memmap_info) + memmap_info->efi_memmap_size) != 0) { - saved_errno = errno; - munmap(addr, p2m_size); - errno = saved_errno; - return NULL; - } ret = do_xen_hypercall(xch, &hypercall); saved_errno = errno; - 
unlock_pages(memmap_info, - sizeof(*memmap_info) + memmap_info->efi_memmap_size); + xc_hypercall_bounce_post(xch, memmap_info); if (ret < 0) { munmap(addr, p2m_size); errno = saved_errno; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
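Incidentally, this patch exercises all three bounce directions in one file: revision is BOUNCE_BOTH, buf is BOUNCE_OUT and memmap_info is BOUNCE_IN. My reading of the direction semantics, as a sketch rather than normative documentation:

static int directions(xc_interface *xch, int *in, int *out, int *rw)
{
    /* IN:   pre() copies caller memory into the safe buffer;
     *       post() only frees it.
     * OUT:  pre() only allocates; post() copies the result back.
     * BOTH: both copies are performed. */
    DECLARE_HYPERCALL_BOUNCE(in,  sizeof(*in),  XC_HYPERCALL_BUFFER_BOUNCE_IN);
    DECLARE_HYPERCALL_BOUNCE(out, sizeof(*out), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
    DECLARE_HYPERCALL_BOUNCE(rw,  sizeof(*rw),  XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
    int rc = -1;

    if ( xc_hypercall_bounce_pre(xch, in) ||
         xc_hypercall_bounce_pre(xch, out) ||
         xc_hypercall_bounce_pre(xch, rw) )
        goto done;

    /* ... issue the hypercall, passing HYPERCALL_BUFFER_AS_ARG(in)
     * and friends ... */
    rc = 0;

done:
    /* Posting a buffer whose pre() failed (or never ran) is safe;
     * xc_domain_memory_exchange_pages() earlier in the series relies
     * on exactly this on its error path. */
    xc_hypercall_bounce_post(xch, in);
    xc_hypercall_bounce_post(xch, out);
    xc_hypercall_bounce_post(xch, rw);
    return rc;
}

Picking the narrowest direction avoids a pointless copy: memmap_info is read-only to Xen, hence BOUNCE_IN, while buf is pure output, hence BOUNCE_OUT.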
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 20 of 25] python acm: use hypercall buffer interface
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 84dc04924708cd8e7fff48312a436fbbd0c79456 # Parent cfab497ee5a0c8ea0ed6b136c4d10f21c921eac0 python acm: use hypercall buffer interface. I have a suspicion these routines should be using libxc rather than reimplementing all the hypercalls again, but I don't have the enthusiasm to fix it. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r cfab497ee5a0 -r 84dc04924708 tools/python/xen/lowlevel/acm/acm.c --- a/tools/python/xen/lowlevel/acm/acm.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/python/xen/lowlevel/acm/acm.c Fri Oct 22 15:14:51 2010 +0100 @@ -40,22 +40,20 @@ static PyObject *acm_error_obj; static PyObject *acm_error_obj; /* generic shared function */ -static void *__getssid(int domid, uint32_t *buflen) +static void *__getssid(xc_interface *xc_handle, int domid, uint32_t *buflen, xc_hypercall_buffer_t *buffer) { struct acm_getssid getssid; - xc_interface *xc_handle; #define SSID_BUFFER_SIZE 4096 - void *buf = NULL; + void *buf; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(buffer); - if ((xc_handle = xc_interface_open(0,0,0)) == 0) { - goto out1; + if ((buf = xc_hypercall_buffer_alloc(xc_handle, buffer, SSID_BUFFER_SIZE)) == NULL) { + PERROR("acm.policytype: Could not allocate ssid buffer!\n"); + return NULL; } - if ((buf = malloc(SSID_BUFFER_SIZE)) == NULL) { - PERROR("acm.policytype: Could not allocate ssid buffer!\n"); - goto out2; - } + memset(buf, 0, SSID_BUFFER_SIZE); - set_xen_guest_handle(getssid.ssidbuf, buf); + xc_set_xen_guest_handle(getssid.ssidbuf, buffer); getssid.ssidbuf_size = SSID_BUFFER_SIZE; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; @@ -63,16 +61,10 @@ static void *__getssid(int domid, uint32 if (xc_acm_op(xc_handle, ACMOP_getssid, &getssid, sizeof(getssid)) < 0) { if (errno == EACCES) PERROR("ACM operation failed."); - free(buf); buf = NULL; - goto out2; } else { *buflen = SSID_BUFFER_SIZE; - goto out2; } - out2: - xc_interface_close(xc_handle); - out1: return buf; } @@ -81,52 +73,60 @@ static void *__getssid(int domid, uint32 * ssidref for domain 0 (always exists) */ static PyObject *policy(PyObject * self, PyObject * args) { - /* out */ + xc_interface *xc_handle; char *policyreference; PyObject *ret; - void *ssid_buffer; uint32_t buf_len; + DECLARE_HYPERCALL_BUFFER(void, ssid_buffer); if (!PyArg_ParseTuple(args, "", NULL)) { return NULL; } - ssid_buffer = __getssid(0, &buf_len); - if (ssid_buffer == NULL || buf_len < sizeof(struct acm_ssid_buffer)) { - free(ssid_buffer); + if ((xc_handle = xc_interface_open(0,0,0)) == 0) return PyErr_SetFromErrno(acm_error_obj); - } + + ssid_buffer = __getssid(xc_handle, 0, &buf_len, HYPERCALL_BUFFER(ssid_buffer)); + if (ssid_buffer == NULL || buf_len < sizeof(struct acm_ssid_buffer)) + ret = PyErr_SetFromErrno(acm_error_obj); else { struct acm_ssid_buffer *ssid = (struct acm_ssid_buffer *)ssid_buffer; policyreference = (char *)(ssid_buffer + ssid->policy_reference_offset + sizeof (struct acm_policy_reference_buffer)); ret = Py_BuildValue("s", policyreference); - free(ssid_buffer); - return ret; } + + xc_hypercall_buffer_free(xc_handle, ssid_buffer); + xc_interface_close(xc_handle); + return ret; } /* retrieve ssid info for a domain domid*/ static PyObject *getssid(PyObject * self, PyObject * args) { + xc_interface *xc_handle; + /* in */ uint32_t domid; /* out */ char *policytype, *policyreference; uint32_t ssidref; + PyObject *ret; - void *ssid_buffer; + DECLARE_HYPERCALL_BUFFER(void, ssid_buffer); 
uint32_t buf_len; if (!PyArg_ParseTuple(args, "i", &domid)) { return NULL; } - ssid_buffer = __getssid(domid, &buf_len); + if ((xc_handle = xc_interface_open(0,0,0)) == 0) + return PyErr_SetFromErrno(acm_error_obj); + + ssid_buffer = __getssid(xc_handle, domid, &buf_len, HYPERCALL_BUFFER(ssid_buffer)); if (ssid_buffer == NULL) { - return NULL; + ret = NULL; } else if (buf_len < sizeof(struct acm_ssid_buffer)) { - free(ssid_buffer); - return NULL; + ret = NULL; } else { struct acm_ssid_buffer *ssid = (struct acm_ssid_buffer *) ssid_buffer; policytype = ACM_POLICY_NAME(ssid->secondary_policy_code << 4 | @@ -134,12 +134,14 @@ static PyObject *getssid(PyObject * self ssidref = ssid->ssidref; policyreference = (char *)(ssid_buffer + ssid->policy_reference_offset + sizeof (struct acm_policy_reference_buffer)); + ret = Py_BuildValue("{s:s,s:s,s:i}", + "policyreference", policyreference, + "policytype", policytype, + "ssidref", ssidref); } - free(ssid_buffer); - return Py_BuildValue("{s:s,s:s,s:i}", - "policyreference", policyreference, - "policytype", policytype, - "ssidref", ssidref); + xc_hypercall_buffer_free(xc_handle, ssid_buffer); + xc_interface_close(xc_handle); + return ret; } @@ -206,7 +208,6 @@ const char ctrlif_op[] = "Could not open const char ctrlif_op[] = "Could not open control interface."; const char hv_op_err[] = "Error from hypervisor operation."; - static PyObject *chgpolicy(PyObject *self, PyObject *args) { struct acm_change_policy chgpolicy; @@ -215,9 +216,12 @@ static PyObject *chgpolicy(PyObject *sel char *bin_pol = NULL, *del_arr = NULL, *chg_arr = NULL; int bin_pol_len = 0, del_arr_len = 0, chg_arr_len = 0; uint errarray_mbrs = 20 * 2; - uint32_t error_array[errarray_mbrs]; - PyObject *result; + PyObject *result = NULL; uint len; + DECLARE_HYPERCALL_BUFFER(char, bin_pol_buf); + DECLARE_HYPERCALL_BUFFER(char, del_arr_buf); + DECLARE_HYPERCALL_BUFFER(char, chg_arr_buf); + DECLARE_HYPERCALL_BUFFER(uint32_t, error_array); memset(&chgpolicy, 0x0, sizeof(chgpolicy)); @@ -228,24 +232,34 @@ static PyObject *chgpolicy(PyObject *sel return NULL; } - chgpolicy.policy_pushcache_size = bin_pol_len; - chgpolicy.delarray_size = del_arr_len; - chgpolicy.chgarray_size = chg_arr_len; - chgpolicy.errarray_size = sizeof(error_array); - - set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol); - set_xen_guest_handle(chgpolicy.del_array, del_arr); - set_xen_guest_handle(chgpolicy.chg_array, chg_arr); - set_xen_guest_handle(chgpolicy.err_array, error_array); - if ((xc_handle = xc_interface_open(0,0,0)) == 0) { PyErr_SetString(PyExc_IOError, ctrlif_op); return NULL; } + if ( (bin_pol_buf = xc_hypercall_buffer_alloc(xc_handle, bin_pol_buf, bin_pol_len)) == NULL ) + goto out; + if ( (del_arr_buf = xc_hypercall_buffer_alloc(xc_handle, del_arr_buf, del_arr_len)) == NULL ) + goto out; + if ( (chg_arr_buf = xc_hypercall_buffer_alloc(xc_handle, chg_arr_buf, chg_arr_len)) == NULL ) + goto out; + if ( (error_array = xc_hypercall_buffer_alloc(xc_handle, error_array, sizeof(*error_array)*errarray_mbrs)) == NULL ) + goto out; + + memcpy(bin_pol_buf, bin_pol, bin_pol_len); + memcpy(del_arr_buf, del_arr, del_arr_len); + memcpy(chg_arr_buf, chg_arr, chg_arr_len); + + chgpolicy.policy_pushcache_size = bin_pol_len; + chgpolicy.delarray_size = del_arr_len; + chgpolicy.chgarray_size = chg_arr_len; + chgpolicy.errarray_size = sizeof(*error_array)*errarray_mbrs; + xc_set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol_buf); + xc_set_xen_guest_handle(chgpolicy.del_array, del_arr_buf); + 
xc_set_xen_guest_handle(chgpolicy.chg_array, chg_arr_buf); + xc_set_xen_guest_handle(chgpolicy.err_array, error_array); + rc = xc_acm_op(xc_handle, ACMOP_chgpolicy, &chgpolicy, sizeof(chgpolicy)); - - xc_interface_close(xc_handle); /* only pass the filled error codes */ for (len = 0; (len + 1) < errarray_mbrs; len += 2) { @@ -256,6 +270,13 @@ static PyObject *chgpolicy(PyObject *sel } result = Py_BuildValue("is#", rc, error_array, len); + +out: + xc_hypercall_buffer_free(xc_handle, bin_pol_buf); + xc_hypercall_buffer_free(xc_handle, del_arr_buf); + xc_hypercall_buffer_free(xc_handle, chg_arr_buf); + xc_hypercall_buffer_free(xc_handle, error_array); + xc_interface_close(xc_handle); return result; } @@ -265,33 +286,37 @@ static PyObject *getpolicy(PyObject *sel struct acm_getpolicy getpolicy; xc_interface *xc_handle; int rc; - uint8_t pull_buffer[8192]; - PyObject *result; - uint32_t len = sizeof(pull_buffer); - - memset(&getpolicy, 0x0, sizeof(getpolicy)); - set_xen_guest_handle(getpolicy.pullcache, pull_buffer); - getpolicy.pullcache_size = sizeof(pull_buffer); + PyObject *result = NULL; + uint32_t len = 8192; + DECLARE_HYPERCALL_BUFFER(uint8_t, pull_buffer); if ((xc_handle = xc_interface_open(0,0,0)) == 0) { PyErr_SetString(PyExc_IOError, ctrlif_op); return NULL; } + if ((pull_buffer = xc_hypercall_buffer_alloc(xc_handle, pull_buffer, len)) == NULL) + goto out; + + memset(&getpolicy, 0x0, sizeof(getpolicy)); + xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + getpolicy.pullcache_size = len; + rc = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); - - xc_interface_close(xc_handle); if (rc == 0) { struct acm_policy_buffer *header = (struct acm_policy_buffer *)pull_buffer; - if (ntohl(header->len) < sizeof(pull_buffer)) + if (ntohl(header->len) < 8192) len = ntohl(header->len); } else { len = 0; } result = Py_BuildValue("is#", rc, pull_buffer, len); +out: + xc_hypercall_buffer_free(xc_handle, pull_buffer); + xc_interface_close(xc_handle); return result; } @@ -304,8 +329,9 @@ static PyObject *relabel_domains(PyObjec char *relabel_rules = NULL; int rel_rules_len = 0; uint errarray_mbrs = 20 * 2; - uint32_t error_array[errarray_mbrs]; - PyObject *result; + DECLARE_HYPERCALL_BUFFER(uint32_t, error_array); + DECLARE_HYPERCALL_BUFFER(char, relabel_rules_buf); + PyObject *result = NULL; uint len; memset(&reldoms, 0x0, sizeof(reldoms)); @@ -315,21 +341,25 @@ static PyObject *relabel_domains(PyObjec return NULL; } - reldoms.relabel_map_size = rel_rules_len; - reldoms.errarray_size = sizeof(error_array); - - set_xen_guest_handle(reldoms.relabel_map, relabel_rules); - set_xen_guest_handle(reldoms.err_array, error_array); - if ((xc_handle = xc_interface_open(0,0,0)) == 0) { PyErr_SetString(PyExc_IOError, ctrlif_op); return NULL; } + if ((relabel_rules_buf = xc_hypercall_buffer_alloc(xc_handle, relabel_rules_buf, rel_rules_len)) == NULL) + goto out; + if ((error_array = xc_hypercall_buffer_alloc(xc_handle, error_array, sizeof(*error_array)*errarray_mbrs)) == NULL) + goto out; + + memcpy(relabel_rules_buf, relabel_rules, rel_rules_len); + + reldoms.relabel_map_size = rel_rules_len; + reldoms.errarray_size = sizeof(*error_array)*errarray_mbrs; + + xc_set_xen_guest_handle(reldoms.relabel_map, relabel_rules_buf); + xc_set_xen_guest_handle(reldoms.err_array, error_array); + rc = xc_acm_op(xc_handle, ACMOP_relabeldoms, &reldoms, sizeof(reldoms)); - - xc_interface_close(xc_handle); - /* only pass the filled error codes */ for (len = 0; (len + 1) < errarray_mbrs; len += 2) { @@ 
-340,6 +370,11 @@ static PyObject *relabel_domains(PyObjec } result = Py_BuildValue("is#", rc, error_array, len); +out: + xc_hypercall_buffer_free(xc_handle, relabel_rules_buf); + xc_hypercall_buffer_free(xc_handle, error_array); + xc_interface_close(xc_handle); + return result; } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
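Two further idioms of the buffer API come together in this patch and recur in the remaining ones: allocating a buffer directly in hypercall-safe memory instead of bouncing, and handing such a buffer to a helper function. A minimal sketch; get_blob()/use_blob() and the 4096-byte size are illustrative:

static void *get_blob(xc_interface *xch, uint32_t *len,
                      xc_hypercall_buffer_t *buffer)
{
    void *buf;
    /* Rebind the caller's buffer so the macros below can key on the
     * plain name "buffer" inside this function. */
    DECLARE_HYPERCALL_BUFFER_ARGUMENT(buffer);

    if ( (buf = xc_hypercall_buffer_alloc(xch, buffer, 4096)) == NULL )
        return NULL;
    memset(buf, 0, 4096);

    /* ... xc_set_xen_guest_handle(op.handle, buffer); issue the op ... */

    *len = 4096;
    return buf;
}

static int use_blob(xc_interface *xch)
{
    uint32_t len;
    DECLARE_HYPERCALL_BUFFER(void, blob);  /* pointer starts out NULL */

    /* HYPERCALL_BUFFER(blob) passes the underlying tracking structure,
     * not just the raw pointer, down to the callee. */
    blob = get_blob(xch, &len, HYPERCALL_BUFFER(blob));

    /* ... on success, consume len bytes at blob ... */

    xc_hypercall_buffer_free(xch, blob);   /* appears to be a no-op if the
                                              allocation failed; the error
                                              paths in this patch rely on
                                              that */
    return blob ? 0 : -1;
}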
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 21 of 25] python xc: use hypercall buffer interface
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 5daeac9c13b0d728d93e9e30b53bb70bd5e81ee2 # Parent 84dc04924708cd8e7fff48312a436fbbd0c79456 python xc: use hypercall buffer interface. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 84dc04924708 -r 5daeac9c13b0 tools/python/xen/lowlevel/xc/xc.c --- a/tools/python/xen/lowlevel/xc/xc.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/python/xen/lowlevel/xc/xc.c Fri Oct 22 15:14:51 2010 +0100 @@ -1203,19 +1203,29 @@ static PyObject *pyxc_topologyinfo(XcObj #define MAX_CPU_INDEX 255 xc_topologyinfo_t tinfo = { 0 }; int i, max_cpu_index; - PyObject *ret_obj; + PyObject *ret_obj = NULL; PyObject *cpu_to_core_obj, *cpu_to_socket_obj, *cpu_to_node_obj; - xc_cpu_to_core_t coremap[MAX_CPU_INDEX + 1]; - xc_cpu_to_socket_t socketmap[MAX_CPU_INDEX + 1]; - xc_cpu_to_node_t nodemap[MAX_CPU_INDEX + 1]; + DECLARE_HYPERCALL_BUFFER(xc_cpu_to_core_t, coremap); + DECLARE_HYPERCALL_BUFFER(xc_cpu_to_socket_t, socketmap); + DECLARE_HYPERCALL_BUFFER(xc_cpu_to_node_t, nodemap); - set_xen_guest_handle(tinfo.cpu_to_core, coremap); - set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); - set_xen_guest_handle(tinfo.cpu_to_node, nodemap); + coremap = xc_hypercall_buffer_alloc(self->xc_handle, coremap, sizeof(*coremap) * (MAX_CPU_INDEX+1)); + if ( coremap == NULL ) + goto out; + socketmap = xc_hypercall_buffer_alloc(self->xc_handle, socketmap, sizeof(*socketmap) * (MAX_CPU_INDEX+1)); + if ( socketmap == NULL ) + goto out; + nodemap = xc_hypercall_buffer_alloc(self->xc_handle, nodemap, sizeof(*nodemap) * (MAX_CPU_INDEX+1)); + if ( nodemap == NULL ) + goto out; + + xc_set_xen_guest_handle(tinfo.cpu_to_core, coremap); + xc_set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); + xc_set_xen_guest_handle(tinfo.cpu_to_node, nodemap); tinfo.max_cpu_index = MAX_CPU_INDEX; if ( xc_topologyinfo(self->xc_handle, &tinfo) != 0 ) - return pyxc_error_to_exception(self->xc_handle); + goto out; max_cpu_index = tinfo.max_cpu_index; if ( max_cpu_index > MAX_CPU_INDEX ) @@ -1268,11 +1278,15 @@ static PyObject *pyxc_topologyinfo(XcObj PyDict_SetItemString(ret_obj, "cpu_to_socket", cpu_to_socket_obj); Py_DECREF(cpu_to_socket_obj); - + PyDict_SetItemString(ret_obj, "cpu_to_node", cpu_to_node_obj); Py_DECREF(cpu_to_node_obj); - - return ret_obj; + +out: + xc_hypercall_buffer_free(self->xc_handle, coremap); + xc_hypercall_buffer_free(self->xc_handle, socketmap); + xc_hypercall_buffer_free(self->xc_handle, nodemap); + return ret_obj ? 
ret_obj : pyxc_error_to_exception(self->xc_handle); #undef MAX_CPU_INDEX } @@ -1282,20 +1296,30 @@ static PyObject *pyxc_numainfo(XcObject xc_numainfo_t ninfo = { 0 }; int i, j, max_node_index; uint64_t free_heap; - PyObject *ret_obj, *node_to_node_dist_list_obj; + PyObject *ret_obj = NULL, *node_to_node_dist_list_obj; PyObject *node_to_memsize_obj, *node_to_memfree_obj; PyObject *node_to_dma32_mem_obj, *node_to_node_dist_obj; - xc_node_to_memsize_t node_memsize[MAX_NODE_INDEX + 1]; - xc_node_to_memfree_t node_memfree[MAX_NODE_INDEX + 1]; - xc_node_to_node_dist_t nodes_dist[(MAX_NODE_INDEX+1) * (MAX_NODE_INDEX+1)]; + DECLARE_HYPERCALL_BUFFER(xc_node_to_memsize_t, node_memsize); + DECLARE_HYPERCALL_BUFFER(xc_node_to_memfree_t, node_memfree); + DECLARE_HYPERCALL_BUFFER(xc_node_to_node_dist_t, nodes_dist); - set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); - set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); - set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); + node_memsize = xc_hypercall_buffer_alloc(self->xc_handle, node_memsize, sizeof(*node_memsize)*(MAX_NODE_INDEX+1)); + if ( node_memsize == NULL ) + goto out; + node_memfree = xc_hypercall_buffer_alloc(self->xc_handle, node_memfree, sizeof(*node_memfree)*(MAX_NODE_INDEX+1)); + if ( node_memfree == NULL ) + goto out; + nodes_dist = xc_hypercall_buffer_alloc(self->xc_handle, nodes_dist, sizeof(*nodes_dist)*(MAX_NODE_INDEX+1)*(MAX_NODE_INDEX+1)); + if ( nodes_dist == NULL ) + goto out; + + xc_set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); + xc_set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); + xc_set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); ninfo.max_node_index = MAX_NODE_INDEX; if ( xc_numainfo(self->xc_handle, &ninfo) != 0 ) - return pyxc_error_to_exception(self->xc_handle); + goto out; max_node_index = ninfo.max_node_index; if ( max_node_index > MAX_NODE_INDEX ) @@ -1360,8 +1384,12 @@ static PyObject *pyxc_numainfo(XcObject PyDict_SetItemString(ret_obj, "node_to_node_dist", node_to_node_dist_list_obj); Py_DECREF(node_to_node_dist_list_obj); - - return ret_obj; + +out: + xc_hypercall_buffer_free(self->xc_handle, node_memsize); + xc_hypercall_buffer_free(self->xc_handle, node_memfree); + xc_hypercall_buffer_free(self->xc_handle, nodes_dist); + return ret_obj ? ret_obj : pyxc_error_to_exception(self->xc_handle); #undef MAX_NODE_INDEX } _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 22 of 25] xenpm: use hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID de7310bdfe91d9356d5c6fca92daa93f07ab4fb5 # Parent 5daeac9c13b0d728d93e9e30b53bb70bd5e81ee2 xenpm: use hypercall buffers. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 5daeac9c13b0 -r de7310bdfe91 tools/misc/xenpm.c --- a/tools/misc/xenpm.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/misc/xenpm.c Fri Oct 22 15:14:51 2010 +0100 @@ -317,15 +317,25 @@ static void signal_int_handler(int signo int i, j, k, ret; struct timeval tv; int cx_cap = 0, px_cap = 0; - uint32_t cpu_to_core[MAX_NR_CPU]; - uint32_t cpu_to_socket[MAX_NR_CPU]; - uint32_t cpu_to_node[MAX_NR_CPU]; + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_core); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_socket); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_node); xc_topologyinfo_t info = { 0 }; + + cpu_to_core = xc_hypercall_buffer_alloc(xc_handle, cpu_to_core, sizeof(*cpu_to_core) * MAX_NR_CPU); + cpu_to_socket = xc_hypercall_buffer_alloc(xc_handle, cpu_to_socket, sizeof(*cpu_to_socket) * MAX_NR_CPU); + cpu_to_node = xc_hypercall_buffer_alloc(xc_handle, cpu_to_node, sizeof(*cpu_to_node) * MAX_NR_CPU); + + if ( cpu_to_core == NULL || cpu_to_socket == NULL || cpu_to_node == NULL ) + { + fprintf(stderr, "failed to allocate hypercall buffers\n"); + goto out; + } if ( gettimeofday(&tv, NULL) == -1 ) { fprintf(stderr, "failed to get timeofday\n"); - return ; + goto out ; } usec_end = tv.tv_sec * 1000000UL + tv.tv_usec; @@ -385,9 +395,9 @@ static void signal_int_handler(int signo } } - set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU - 1; ret = xc_topologyinfo(xc_handle, &info); @@ -485,6 +495,10 @@ static void signal_int_handler(int signo free(pxstat); free(sum); free(avgfreq); +out: + xc_hypercall_buffer_free(xc_handle, cpu_to_core); + xc_hypercall_buffer_free(xc_handle, cpu_to_socket); + xc_hypercall_buffer_free(xc_handle, cpu_to_node); xc_interface_close(xc_handle); exit(0); } @@ -934,21 +948,31 @@ out: void cpu_topology_func(int argc, char *argv[]) { - uint32_t cpu_to_core[MAX_NR_CPU]; - uint32_t cpu_to_socket[MAX_NR_CPU]; - uint32_t cpu_to_node[MAX_NR_CPU]; + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_core); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_socket); + DECLARE_HYPERCALL_BUFFER(uint32_t, cpu_to_node); xc_topologyinfo_t info = { 0 }; int i; - set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + cpu_to_core = xc_hypercall_buffer_alloc(xc_handle, cpu_to_core, sizeof(*cpu_to_core) * MAX_NR_CPU); + cpu_to_socket = xc_hypercall_buffer_alloc(xc_handle, cpu_to_socket, sizeof(*cpu_to_socket) * MAX_NR_CPU); + cpu_to_node = xc_hypercall_buffer_alloc(xc_handle, cpu_to_node, sizeof(*cpu_to_node) * MAX_NR_CPU); + + if ( cpu_to_core == NULL || cpu_to_socket == NULL || cpu_to_node == NULL ) + { + fprintf(stderr, "failed to allocate hypercall buffers\n"); + goto out; + } + + xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU-1; if ( 
xc_topologyinfo(xc_handle, &info) ) { printf("Can not get Xen CPU topology: %d\n", errno); - return; + goto out; } if ( info.max_cpu_index > (MAX_NR_CPU-1) ) @@ -962,6 +986,10 @@ void cpu_topology_func(int argc, char *a printf("CPU%d\t %d\t %d\t %d\n", i, cpu_to_core[i], cpu_to_socket[i], cpu_to_node[i]); } +out: + xc_hypercall_buffer_free(xc_handle, cpu_to_core); + xc_hypercall_buffer_free(xc_handle, cpu_to_socket); + xc_hypercall_buffer_free(xc_handle, cpu_to_node); } void set_sched_smt_func(int argc, char *argv[]) _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 23 of 25] secpol: use hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 02dca31076126f4a0334881eb9a9980fd188cf25 # Parent de7310bdfe91d9356d5c6fca92daa93f07ab4fb5 secpol: use hypercall buffers Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r de7310bdfe91 -r 02dca3107612 tools/security/secpol_tool.c --- a/tools/security/secpol_tool.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/security/secpol_tool.c Fri Oct 22 15:14:51 2010 +0100 @@ -242,11 +242,14 @@ int acm_get_ssidref(xc_interface *xc_han uint16_t *ste_ref) { int ret; + DECLARE_HYPERCALL_BUFFER(struct acm_ssid_buffer, ssid); + size_t ssid_buffer_size = 4096; struct acm_getssid getssid; - char buf[4096]; - struct acm_ssid_buffer *ssid = (struct acm_ssid_buffer *)buf; - set_xen_guest_handle(getssid.ssidbuf, buf); - getssid.ssidbuf_size = sizeof(buf); + ssid = xc_hypercall_buffer_alloc(xc_handle, ssid, ssid_buffer_size); + if ( ssid == NULL ) + return 1; + xc_set_xen_guest_handle(getssid.ssidbuf, ssid); + getssid.ssidbuf_size = ssid_buffer_size; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; ret = xc_acm_op(xc_handle, ACMOP_getssid, &getssid, sizeof(getssid)); @@ -254,23 +257,27 @@ int acm_get_ssidref(xc_interface *xc_han *chwall_ref = ssid->ssidref & 0xffff; *ste_ref = ssid->ssidref >> 16; } + xc_hypercall_buffer_free(xc_handle, ssid); return ret; } /******************************* get policy ******************************/ -#define PULL_CACHE_SIZE 8192 -uint8_t pull_buffer[PULL_CACHE_SIZE]; - int acm_domain_getpolicy(xc_interface *xc_handle) { + DECLARE_HYPERCALL_BUFFER(uint8_t, pull_buffer); + size_t pull_cache_size = 8192; struct acm_getpolicy getpolicy; int ret; uint16_t chwall_ref, ste_ref; - memset(pull_buffer, 0x00, sizeof(pull_buffer)); - set_xen_guest_handle(getpolicy.pullcache, pull_buffer); - getpolicy.pullcache_size = sizeof(pull_buffer); + pull_buffer = xc_hypercall_buffer_alloc(xc_handle, pull_buffer, pull_cache_size); + if ( pull_buffer == NULL ) + return -1; + + memset(pull_buffer, 0x00, pull_cache_size); + xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + getpolicy.pullcache_size = pull_cache_size; ret = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); if (ret >= 0) { ret = acm_get_ssidref(xc_handle, 0, &chwall_ref, &ste_ref); @@ -284,8 +291,10 @@ int acm_domain_getpolicy(xc_interface *x } /* dump policy */ - acm_dump_policy_buffer(pull_buffer, sizeof(pull_buffer), + acm_dump_policy_buffer(pull_buffer, pull_cache_size, chwall_ref, ste_ref); + + xc_hypercall_buffer_free(xc_handle, pull_buffer); return ret; } @@ -293,11 +302,14 @@ int acm_domain_getpolicy(xc_interface *x /************************ dump binary policy ******************************/ static int load_file(const char *filename, - uint8_t **buffer, off_t *len) + uint8_t **buffer, off_t *len, + xc_interface *xc_handle, + xc_hypercall_buffer_t *hcall) { struct stat mystat; int ret = 0; int fd; + DECLARE_HYPERCALL_BUFFER_ARGUMENT(hcall); if ((ret = stat(filename, &mystat)) != 0) { printf("File %s not found.\n", filename); @@ -307,9 +319,16 @@ static int load_file(const char *filenam *len = mystat.st_size; - if ((*buffer = malloc(*len)) == NULL) { - ret = -ENOMEM; - goto out; + if ( hcall == NULL ) { + if ((*buffer = malloc(*len)) == NULL) { + ret = -ENOMEM; + goto out; + } + } else { + if ((*buffer = xc_hypercall_buffer_alloc(xc_handle, hcall, *len)) == NULL) { + ret = -ENOMEM; + goto out; + } } if ((fd = open(filename, O_RDONLY)) <= 0) { @@ -322,7 +341,10 @@ static int 
load_file(const char *filenam return 0; free_out: - free(*buffer); + if ( hcall == NULL ) + free(*buffer); + else + xc_hypercall_buffer_free(xc_handle, hcall); *buffer = NULL; *len = 0; out: @@ -339,7 +361,7 @@ static int acm_domain_dumppolicy(const c chwall_ssidref = (ssidref ) & 0xffff; ste_ssidref = (ssidref >> 16) & 0xffff; - if ((ret = load_file(filename, &buffer, &len)) == 0) { + if ((ret = load_file(filename, &buffer, &len, NULL, NULL)) == 0) { acm_dump_policy_buffer(buffer, len, chwall_ssidref, ste_ssidref); free(buffer); } @@ -353,11 +375,11 @@ int acm_domain_loadpolicy(xc_interface * { int ret; off_t len; - uint8_t *buffer; + DECLARE_HYPERCALL_BUFFER(uint8_t, buffer); uint16_t chwall_ssidref, ste_ssidref; struct acm_setpolicy setpolicy; - ret = load_file(filename, &buffer, &len); + ret = load_file(filename, &buffer, &len, xc_handle, HYPERCALL_BUFFER(buffer)); if (ret != 0) goto out; @@ -367,7 +389,7 @@ int acm_domain_loadpolicy(xc_interface * /* dump it and then push it down into xen/acm */ acm_dump_policy_buffer(buffer, len, chwall_ssidref, ste_ssidref); - set_xen_guest_handle(setpolicy.pushcache, buffer); + xc_set_xen_guest_handle(setpolicy.pushcache, buffer); setpolicy.pushcache_size = len; ret = xc_acm_op(xc_handle, ACMOP_setpolicy, &setpolicy, sizeof(setpolicy)); @@ -378,7 +400,7 @@ int acm_domain_loadpolicy(xc_interface * } free_out: - free(buffer); + xc_hypercall_buffer_free(xc_handle, buffer); out: return ret; } @@ -402,22 +424,27 @@ void dump_ste_stats(struct acm_ste_stats ntohl(ste_stats->gt_cachehit_count)); } -#define PULL_STATS_SIZE 8192 int acm_domain_dumpstats(xc_interface *xc_handle) { - uint8_t stats_buffer[PULL_STATS_SIZE]; + DECLARE_HYPERCALL_BUFFER(uint8_t, stats_buffer); + size_t pull_stats_size = 8192; struct acm_dumpstats dumpstats; int ret; struct acm_stats_buffer *stats; - memset(stats_buffer, 0x00, sizeof(stats_buffer)); - set_xen_guest_handle(dumpstats.pullcache, stats_buffer); - dumpstats.pullcache_size = sizeof(stats_buffer); + stats_buffer = xc_hypercall_buffer_alloc(xc_handle, stats_buffer, pull_stats_size); + if ( stats_buffer == NULL ) + return -1; + + memset(stats_buffer, 0x00, pull_stats_size); + xc_set_xen_guest_handle(dumpstats.pullcache, stats_buffer); + dumpstats.pullcache_size = pull_stats_size; ret = xc_acm_op(xc_handle, ACMOP_dumpstats, &dumpstats, sizeof(dumpstats)); if (ret < 0) { printf ("ERROR dumping policy stats. Try 'xm dmesg' to see details.\n"); + xc_hypercall_buffer_free(xc_handle, stats_buffer); return ret; } stats = (struct acm_stats_buffer *) stats_buffer; @@ -464,6 +491,7 @@ int acm_domain_dumpstats(xc_interface *x default: printf("UNKNOWN SECONDARY POLICY ERROR!\n"); } + xc_hypercall_buffer_free(xc_handle, stats_buffer); return ret; } @@ -472,7 +500,8 @@ int main(int argc, char **argv) int main(int argc, char **argv) { - xc_interface *xc_handle, ret = 0; + xc_interface *xc_handle; + int ret = 0; if (argc < 2) usage(argv[0]); _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 24 of 25] libxc: do not align/lock buffers which do not need it
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID 5d4e169f1ef0cc1ec40855f42443752b2c29093c # Parent 02dca31076126f4a0334881eb9a9980fd188cf25 libxc: do not align/lock buffers which do not need it On restore: region_mfn is passed to xc_map_foreign_range and xc_map_foreign_bulk. In both cases the buffer is accessed from the ioctl handler in the kernel and not from any hypercall. Therefore normal copy_{to,from}_user handling in the kernel will cope with any faulting access. p2m_batch is passed to xc_domain_memory_populate_physmap which takes care of bouncing the buffer already. On save: pfn_type is passed to xc_map_foreign_bulk which does not need locking as per region_mfn above. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 02dca3107612 -r 5d4e169f1ef0 tools/libxc/xc_domain_restore.c --- a/tools/libxc/xc_domain_restore.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain_restore.c Fri Oct 22 15:14:51 2010 +0100 @@ -1172,10 +1172,8 @@ int xc_domain_restore(xc_interface *xch, ctx->p2m = calloc(dinfo->p2m_size, sizeof(xen_pfn_t)); pfn_type = calloc(dinfo->p2m_size, sizeof(unsigned long)); - region_mfn = xc_memalign(PAGE_SIZE, ROUNDUP( - MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); - ctx->p2m_batch = xc_memalign( - PAGE_SIZE, ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); + region_mfn = malloc(ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); + ctx->p2m_batch = malloc(ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); if ( (ctx->p2m == NULL) || (pfn_type == NULL) || (region_mfn == NULL) || (ctx->p2m_batch == NULL) ) @@ -1189,18 +1187,6 @@ int xc_domain_restore(xc_interface *xch, ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); memset(ctx->p2m_batch, 0, ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); - - if ( lock_pages(xch, region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) ) - { - PERROR("Could not lock region_mfn"); - goto out; - } - - if ( lock_pages(xch, ctx->p2m_batch, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) ) - { - ERROR("Could not lock p2m_batch"); - goto out; - } /* Get the domain''s shared-info frame. */ domctl.cmd = XEN_DOMCTL_getdomaininfo; diff -r 02dca3107612 -r 5d4e169f1ef0 tools/libxc/xc_domain_save.c --- a/tools/libxc/xc_domain_save.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain_save.c Fri Oct 22 15:14:51 2010 +0100 @@ -1071,8 +1071,7 @@ int xc_domain_save(xc_interface *xch, in analysis_phase(xch, dom, ctx, HYPERCALL_BUFFER(to_skip), 0); - pfn_type = xc_memalign(PAGE_SIZE, ROUNDUP( - MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); + pfn_type = malloc(ROUNDUP(MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); pfn_batch = calloc(MAX_BATCH_SIZE, sizeof(*pfn_batch)); pfn_err = malloc(MAX_BATCH_SIZE * sizeof(*pfn_err)); if ( (pfn_type == NULL) || (pfn_batch == NULL) || (pfn_err == NULL) ) @@ -1083,12 +1082,6 @@ int xc_domain_save(xc_interface *xch, in } memset(pfn_type, 0, ROUNDUP(MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT)); - - if ( lock_pages(xch, pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) ) - { - PERROR("Unable to lock pfn_type array"); - goto out; - } /* Setup the mfn_to_pfn table mapping */ if ( !(ctx->live_m2p = xc_map_m2p(xch, ctx->max_mfn, PROT_READ, &ctx->m2p_mfn0)) ) _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
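The distinction drawn above is the test to apply when deciding whether a buffer needs the hypercall buffer API at all. A hedged sketch (the xc_map_foreign_range() signature is as used elsewhere in the tree at this point in the series):

static void *map_one_page(xc_interface *xch, uint32_t domid,
                          unsigned long mfn)
{
    /* This request travels through the privcmd ioctl, not through a
     * hypercall argument, so the kernel's copy_{to,from}_user absorbs
     * any fault: plain heap or stack pointers are fine. Only memory
     * the hypervisor dereferences itself (guest handles, or addresses
     * placed in hypercall.arg[]) still needs DECLARE_HYPERCALL_BOUNCE
     * or xc_hypercall_buffer_alloc(). */
    return xc_map_foreign_range(xch, domid, PAGE_SIZE, PROT_READ, mfn);
}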
Ian Campbell
2010-Oct-22 14:16 UTC
[Xen-devel] [PATCH 25 of 25] libxc: finalise transition to hypercall buffers
# HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1287756891 -3600 # Node ID d3f52cd04d85a3d3e1a69b658e736722b14e243c # Parent 5d4e169f1ef0cc1ec40855f42443752b2c29093c libxc: finalise transition to hypercall buffers. Rename xc_set_xen_guest_handle to set_xen_guest_handle[0] and remove now unused functions: - xc_memalign - lock_pages - unlock_pages - hcall_buf_prep - hcall_buf_release [0] sed -i -e 's/xc_set_xen_guest_handle/set_xen_guest_handle/g' \ tools/libxc/*.[ch] \ tools/python/xen/lowlevel/xc/xc.c \ tools/python/xen/lowlevel/acm/acm.c \ tools/libxc/ia64/xc_ia64_stubs.c \ tools/security/secpol_tool.c \ tools/misc/xenpm.c Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_cpupool.c --- a/tools/libxc/xc_cpupool.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_cpupool.c Fri Oct 22 15:14:51 2010 +0100 @@ -99,7 +99,7 @@ xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_ sysctl.cmd = XEN_SYSCTL_cpupool_op; sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_INFO; sysctl.u.cpupool_op.cpupool_id = poolid; - xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); sysctl.u.cpupool_op.cpumap.nr_cpus = local_size * 8; err = do_sysctl_save(xch, &sysctl); @@ -185,7 +185,7 @@ uint64_t * xc_cpupool_freeinfo(xc_interf sysctl.cmd = XEN_SYSCTL_cpupool_op; sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_FREEINFO; - xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); + set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local); sysctl.u.cpupool_op.cpumap.nr_cpus = *cpusize * 8; err = do_sysctl_save(xch, &sysctl); diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_dom_boot.c --- a/tools/libxc/xc_dom_boot.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_dom_boot.c Fri Oct 22 15:14:51 2010 +0100 @@ -72,7 +72,7 @@ static int launch_vm(xc_interface *xch, domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = 0; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); if ( rc != 0 ) xc_dom_panic(xch, XC_INTERNAL_ERROR, diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_domain.c --- a/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain.c Fri Oct 22 15:14:51 2010 +0100 @@ -132,7 +132,7 @@ int xc_vcpu_setaffinity(xc_interface *xc bitmap_64_to_byte(local, cpumap, cpusize * 8); - xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; @@ -165,7 +165,7 @@ int xc_vcpu_getaffinity(xc_interface *xc domctl.domain = (domid_t)domid; domctl.u.vcpuaffinity.vcpu = vcpu; - xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); + set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local); domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8; ret = do_domctl(xch, &domctl); @@ -254,7 +254,7 @@ int xc_domain_getinfolist(xc_interface * sysctl.cmd = XEN_SYSCTL_getdomaininfolist; sysctl.u.getdomaininfolist.first_domain = first_domain; sysctl.u.getdomaininfolist.max_domains = max_domains; - xc_set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); + set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info); if ( xc_sysctl(xch, &sysctl) < 0 ) ret = -1; @@ -282,7 +282,7 @@ int xc_domain_hvm_getcontext(xc_interfac domctl.cmd = XEN_DOMCTL_gethvmcontext; 
domctl.domain = (domid_t)domid; domctl.u.hvmcontext.size = size; - xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); + set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); @@ -311,7 +311,7 @@ int xc_domain_hvm_getcontext_partial(xc_ domctl.domain = (domid_t) domid; domctl.u.hvmcontext_partial.type = typecode; domctl.u.hvmcontext_partial.instance = instance; - xc_set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); + set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); @@ -337,7 +337,7 @@ int xc_domain_hvm_setcontext(xc_interfac domctl.cmd = XEN_DOMCTL_sethvmcontext; domctl.domain = domid; domctl.u.hvmcontext.size = size; - xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); + set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf); ret = do_domctl(xch, &domctl); @@ -361,7 +361,7 @@ int xc_vcpu_getcontext(xc_interface *xch domctl.cmd = XEN_DOMCTL_getvcpucontext; domctl.domain = (domid_t)domid; domctl.u.vcpucontext.vcpu = (uint16_t)vcpu; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); @@ -420,7 +420,7 @@ int xc_shadow_control(xc_interface *xch, domctl.u.shadow_op.mb = mb ? *mb : 0; domctl.u.shadow_op.mode = mode; if (dirty_bitmap != NULL) - xc_set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, + set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, dirty_bitmap); rc = do_domctl(xch, &domctl); @@ -486,7 +486,7 @@ int xc_domain_set_memmap_limit(xc_interf e820->size = (uint64_t)map_limitkb << 10; e820->type = E820_RAM; - xc_set_xen_guest_handle(fmap.map.buffer, e820); + set_xen_guest_handle(fmap.map.buffer, e820); rc = do_memory_op(xch, XENMEM_set_memory_map, &fmap, sizeof(fmap)); @@ -559,7 +559,7 @@ int xc_domain_get_tsc_info(xc_interface domctl.cmd = XEN_DOMCTL_gettscinfo; domctl.domain = (domid_t)domid; - xc_set_xen_guest_handle(domctl.u.tsc_info.out_info, info); + set_xen_guest_handle(domctl.u.tsc_info.out_info, info); rc = do_domctl(xch, &domctl); if ( rc == 0 ) { @@ -601,7 +601,7 @@ int xc_domain_increase_reservation(xc_in return -1; } - xc_set_xen_guest_handle(reservation.extent_start, extent_start); + set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_increase_reservation, &reservation, sizeof(reservation)); @@ -664,7 +664,7 @@ int xc_domain_decrease_reservation(xc_in PERROR("Could not bounce memory for XENMEM_decrease_reservation hypercall"); return -1; } - xc_set_xen_guest_handle(reservation.extent_start, extent_start); + set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_decrease_reservation, &reservation, sizeof(reservation)); @@ -734,7 +734,7 @@ int xc_domain_populate_physmap(xc_interf PERROR("Could not bounce memory for XENMEM_populate_physmap hypercall"); return -1; } - xc_set_xen_guest_handle(reservation.extent_start, extent_start); + set_xen_guest_handle(reservation.extent_start, extent_start); err = do_memory_op(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation)); @@ -796,8 +796,8 @@ int xc_domain_memory_exchange_pages(xc_i xc_hypercall_bounce_pre(xch, out_extents)) goto out; - xc_set_xen_guest_handle(exchange.in.extent_start, in_extents); - xc_set_xen_guest_handle(exchange.out.extent_start, out_extents); + set_xen_guest_handle(exchange.in.extent_start, in_extents); + set_xen_guest_handle(exchange.out.extent_start, out_extents); rc = do_memory_op(xch, 
XENMEM_exchange, &exchange, sizeof(exchange)); @@ -976,7 +976,7 @@ int xc_vcpu_setcontext(xc_interface *xch domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = domid; domctl.u.vcpucontext.vcpu = vcpu; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); rc = do_domctl(xch, &domctl); @@ -1124,7 +1124,7 @@ int xc_get_device_group( domctl.u.get_device_group.machine_bdf = machine_bdf; domctl.u.get_device_group.max_sdevs = max_sdevs; - xc_set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); + set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array); rc = do_domctl(xch, &domctl); diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_domain_restore.c --- a/tools/libxc/xc_domain_restore.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_domain_restore.c Fri Oct 22 15:14:51 2010 +0100 @@ -1639,7 +1639,7 @@ int xc_domain_restore(xc_interface *xch, domctl.cmd = XEN_DOMCTL_setvcpucontext; domctl.domain = (domid_t)dom; domctl.u.vcpucontext.vcpu = i; - xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); + set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt); frc = xc_domctl(xch, &domctl); if ( frc != 0 ) { diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_linux.c --- a/tools/libxc/xc_linux.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_linux.c Fri Oct 22 15:14:51 2010 +0100 @@ -686,7 +686,7 @@ static void *_gnttab_map_table(xc_interf setup.dom = domid; setup.nr_frames = query.nr_frames; - xc_set_xen_guest_handle(setup.frame_list, frame_list); + set_xen_guest_handle(setup.frame_list, frame_list); /* XXX Any race with other setup_table hypercall? */ rc = xc_gnttab_op(xch, GNTTABOP_setup_table, &setup, sizeof(setup), diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_misc.c --- a/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_misc.c Fri Oct 22 15:14:51 2010 +0100 @@ -49,7 +49,7 @@ int xc_readconsolering(xc_interface *xch return -1; sysctl.cmd = XEN_SYSCTL_readconsole; - xc_set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); + set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer); sysctl.u.readconsole.count = nr_chars; sysctl.u.readconsole.clear = clear; sysctl.u.readconsole.incremental = 0; @@ -81,7 +81,7 @@ int xc_send_debug_keys(xc_interface *xch return -1; sysctl.cmd = XEN_SYSCTL_debug_keys; - xc_set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); + set_xen_guest_handle(sysctl.u.debug_keys.keys, keys); sysctl.u.debug_keys.nr_keys = len; ret = do_sysctl(xch, &sysctl); @@ -190,8 +190,8 @@ int xc_perfc_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_reset; - xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); - xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -205,8 +205,8 @@ int xc_perfc_query_number(xc_interface * sysctl.cmd = XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); - xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -228,8 +228,8 @@ int xc_perfc_query(xc_interface *xch, sysctl.cmd = 
XEN_SYSCTL_perfc_op; sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query; - xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); - xc_set_xen_guest_handle(sysctl.u.perfc_op.val, val); + set_xen_guest_handle(sysctl.u.perfc_op.desc, desc); + set_xen_guest_handle(sysctl.u.perfc_op.val, val); return do_sysctl(xch, &sysctl); } @@ -240,7 +240,7 @@ int xc_lockprof_reset(xc_interface *xch) sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_reset; - xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); return do_sysctl(xch, &sysctl); } @@ -253,7 +253,7 @@ int xc_lockprof_query_number(xc_interfac sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; - xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); + set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL); rc = do_sysctl(xch, &sysctl); @@ -274,7 +274,7 @@ int xc_lockprof_query(xc_interface *xch, sysctl.cmd = XEN_SYSCTL_lockprof_op; sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query; sysctl.u.lockprof_op.max_elem = *n_elems; - xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, data); + set_xen_guest_handle(sysctl.u.lockprof_op.data, data); rc = do_sysctl(xch, &sysctl); @@ -295,7 +295,7 @@ int xc_getcpuinfo(xc_interface *xch, int sysctl.cmd = XEN_SYSCTL_getcpuinfo; sysctl.u.getcpuinfo.max_cpus = max_cpus; - xc_set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); + set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); rc = do_sysctl(xch, &sysctl); @@ -427,7 +427,7 @@ int xc_hvm_track_dirty_vram( arg->domid = dom; arg->first_pfn = first_pfn; arg->nr = nr; - xc_set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap); + set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap); rc = do_xen_hypercall(xch, &hypercall); diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_offline_page.c --- a/tools/libxc/xc_offline_page.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_offline_page.c Fri Oct 22 15:14:51 2010 +0100 @@ -82,7 +82,7 @@ int xc_mark_page_online(xc_interface *xc sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_online; sysctl.u.page_offline.end = end; - xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); + set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); xc_hypercall_bounce_post(xch, status); @@ -110,7 +110,7 @@ int xc_mark_page_offline(xc_interface *x sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_page_offline; sysctl.u.page_offline.end = end; - xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); + set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); xc_hypercall_bounce_post(xch, status); @@ -138,7 +138,7 @@ int xc_query_page_offline_status(xc_inte sysctl.u.page_offline.start = start; sysctl.u.page_offline.cmd = sysctl_query_page_offline; sysctl.u.page_offline.end = end; - xc_set_xen_guest_handle(sysctl.u.page_offline.status, status); + set_xen_guest_handle(sysctl.u.page_offline.status, status); ret = xc_sysctl(xch, &sysctl); xc_hypercall_bounce_post(xch, status); diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_pm.c --- a/tools/libxc/xc_pm.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_pm.c Fri Oct 22 15:14:51 2010 +0100 @@ -73,8 +73,8 @@ int xc_pm_get_pxstat(xc_interface *xch, sysctl.u.get_pmstat.type = PMSTAT_get_pxstat; sysctl.u.get_pmstat.cpuid = cpuid; 
sysctl.u.get_pmstat.u.getpx.total = max_px; - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, trans); - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, pt); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, trans); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, pt); ret = xc_sysctl(xch, &sysctl); if ( ret ) @@ -146,8 +146,8 @@ int xc_pm_get_cxstat(xc_interface *xch, sysctl.cmd = XEN_SYSCTL_get_pmstat; sysctl.u.get_pmstat.type = PMSTAT_get_cxstat; sysctl.u.get_pmstat.cpuid = cpuid; - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, triggers); - xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, residencies); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, triggers); + set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, residencies); if ( (ret = xc_sysctl(xch, &sysctl)) ) goto unlock_2; @@ -219,9 +219,9 @@ int xc_get_cpufreq_para(xc_interface *xc if ( xc_hypercall_bounce_pre(xch, scaling_available_governors) ) goto unlock_3; - xc_set_xen_guest_handle(sys_para->affected_cpus, affected_cpus); - xc_set_xen_guest_handle(sys_para->scaling_available_frequencies, scaling_available_frequencies); - xc_set_xen_guest_handle(sys_para->scaling_available_governors, scaling_available_governors); + set_xen_guest_handle(sys_para->affected_cpus, affected_cpus); + set_xen_guest_handle(sys_para->scaling_available_frequencies, scaling_available_frequencies); + set_xen_guest_handle(sys_para->scaling_available_governors, scaling_available_governors); } sysctl.cmd = XEN_SYSCTL_pm_op; diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.c Fri Oct 22 15:14:51 2010 +0100 @@ -71,8 +71,6 @@ xc_interface *xc_interface_open(xentooll return 0; } -static void xc_clean_hcall_buf(xc_interface *xch); - int xc_interface_close(xc_interface *xch) { int rc = 0; @@ -84,8 +82,6 @@ int xc_interface_close(xc_interface *xch rc = xc_interface_close_core(xch, xch->fd); if (rc) PERROR("Could not close hypervisor interface"); } - - xc_clean_hcall_buf(xch); free(xch); return rc; @@ -191,133 +187,6 @@ void xc_report_progress_step(xc_interfac done, total); } -#ifdef __sun__ - -int lock_pages(xc_interface *xch, void *addr, size_t len) { return 0; } -void unlock_pages(xc_interface *xch, void *addr, size_t len) { } - -int hcall_buf_prep(xc_interface *xch, void **addr, size_t len) { return 0; } -void hcall_buf_release(xc_interface *xch, void **addr, size_t len) { } - -static void xc_clean_hcall_buf(xc_interface *xch) { } - -#else /* !__sun__ */ - -int lock_pages(xc_interface *xch, void *addr, size_t len) -{ - int e; - void *laddr = (void *)((unsigned long)addr & PAGE_MASK); - size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) + - PAGE_SIZE - 1) & PAGE_MASK; - e = mlock(laddr, llen); - return e; -} - -void unlock_pages(xc_interface *xch, void *addr, size_t len) -{ - void *laddr = (void *)((unsigned long)addr & PAGE_MASK); - size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) + - PAGE_SIZE - 1) & PAGE_MASK; - int saved_errno = errno; - (void)munlock(laddr, llen); - errno = saved_errno; -} - -static pthread_key_t hcall_buf_pkey; -static pthread_once_t hcall_buf_pkey_once = PTHREAD_ONCE_INIT; -struct hcall_buf { - xc_interface *xch; - void *buf; - void *oldbuf; -}; - -static void _xc_clean_hcall_buf(void *m) -{ - struct hcall_buf *hcall_buf = m; - - if ( hcall_buf ) - { - if ( hcall_buf->buf ) - { - unlock_pages(hcall_buf->xch, 
hcall_buf->buf, PAGE_SIZE); - free(hcall_buf->buf); - } - - free(hcall_buf); - } - - pthread_setspecific(hcall_buf_pkey, NULL); -} - -static void _xc_init_hcall_buf(void) -{ - pthread_key_create(&hcall_buf_pkey, _xc_clean_hcall_buf); -} - -static void xc_clean_hcall_buf(xc_interface *xch) -{ - pthread_once(&hcall_buf_pkey_once, _xc_init_hcall_buf); - - _xc_clean_hcall_buf(pthread_getspecific(hcall_buf_pkey)); -} - -int hcall_buf_prep(xc_interface *xch, void **addr, size_t len) -{ - struct hcall_buf *hcall_buf; - - pthread_once(&hcall_buf_pkey_once, _xc_init_hcall_buf); - - hcall_buf = pthread_getspecific(hcall_buf_pkey); - if ( !hcall_buf ) - { - hcall_buf = calloc(1, sizeof(*hcall_buf)); - if ( !hcall_buf ) - goto out; - hcall_buf->xch = xch; - pthread_setspecific(hcall_buf_pkey, hcall_buf); - } - - if ( !hcall_buf->buf ) - { - hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE); - if ( !hcall_buf->buf || lock_pages(xch, hcall_buf->buf, PAGE_SIZE) ) - { - free(hcall_buf->buf); - hcall_buf->buf = NULL; - goto out; - } - } - - if ( (len < PAGE_SIZE) && !hcall_buf->oldbuf ) - { - memcpy(hcall_buf->buf, *addr, len); - hcall_buf->oldbuf = *addr; - *addr = hcall_buf->buf; - return 0; - } - - out: - return lock_pages(xch, *addr, len); -} - -void hcall_buf_release(xc_interface *xch, void **addr, size_t len) -{ - struct hcall_buf *hcall_buf = pthread_getspecific(hcall_buf_pkey); - - if ( hcall_buf && (hcall_buf->buf == *addr) ) - { - memcpy(hcall_buf->oldbuf, *addr, len); - *addr = hcall_buf->oldbuf; - hcall_buf->oldbuf = NULL; - } - else - { - unlock_pages(xch, *addr, len); - } -} - -#endif - /* NB: arr must be locked */ int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom, unsigned int num, xen_pfn_t *arr) @@ -330,7 +199,7 @@ int xc_get_pfn_type_batch(xc_interface * domctl.cmd = XEN_DOMCTL_getpageframeinfo3; domctl.domain = (domid_t)dom; domctl.u.getpageframeinfo3.num = num; - xc_set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); + set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr); rc = do_domctl(xch, &domctl); xc_hypercall_bounce_post(xch, arr); return rc; @@ -488,7 +357,7 @@ int xc_machphys_mfn_list(xc_interface *x return -1; } - xc_set_xen_guest_handle(xmml.extent_start, extent_start); + set_xen_guest_handle(xmml.extent_start, extent_start); rc = do_memory_op(xch, XENMEM_machphys_mfn_list, &xmml, sizeof(xmml)); if (rc || xmml.nr_extents != max_extents) rc = -1; @@ -522,7 +391,7 @@ int xc_get_pfn_list(xc_interface *xch, domctl.cmd = XEN_DOMCTL_getmemlist; domctl.domain = (domid_t)domid; domctl.u.getmemlist.max_pfns = max_pfns; - xc_set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); + set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf); ret = do_domctl(xch, &domctl); @@ -782,22 +651,6 @@ int xc_ffs64(uint64_t x) return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0; } -void *xc_memalign(size_t alignment, size_t size) -{ -#if defined(_POSIX_C_SOURCE) && !defined(__sun__) - int ret; - void *ptr; - ret = posix_memalign(&ptr, alignment, size); - if (ret != 0) - return NULL; - return ptr; -#elif defined(__NetBSD__) || defined(__OpenBSD__) - return valloc(size); -#else - return memalign(alignment, size); -#endif -} - /* * Local variables: * mode: C diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_private.h Fri Oct 22 15:14:51 2010 +0100 @@ -97,14 +97,6 @@ void xc_report_progress_step(xc_interfac #define ERROR(_m, _a...) 
xc_report_error(xch,XC_INTERNAL_ERROR,_m , ## _a ) #define PERROR(_m, _a...) xc_report_error(xch,XC_INTERNAL_ERROR,_m \ " (%d = %s)", ## _a , errno, safe_strerror(errno)) - -void *xc_memalign(size_t alignment, size_t size); - -int lock_pages(xc_interface *xch, void *addr, size_t len); -void unlock_pages(xc_interface *xch, void *addr, size_t len); - -int hcall_buf_prep(xc_interface *xch, void **addr, size_t len); -void hcall_buf_release(xc_interface *xch, void **addr, size_t len); /* * HYPERCALL ARGUMENT BUFFERS diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_tbuf.c --- a/tools/libxc/xc_tbuf.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_tbuf.c Fri Oct 22 15:14:51 2010 +0100 @@ -132,7 +132,7 @@ int xc_tbuf_set_cpu_mask(xc_interface *x bitmap_64_to_byte(bytemap, &mask64, sizeof (mask64) * 8); - xc_set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); + set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap); sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8; ret = do_sysctl(xch, &sysctl); diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xc_tmem.c --- a/tools/libxc/xc_tmem.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xc_tmem.c Fri Oct 22 15:14:51 2010 +0100 @@ -86,7 +86,7 @@ int xc_tmem_control(xc_interface *xch, } } - xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + set_xen_guest_handle(op.u.ctrl.buf, buf); rc = do_tmem_op(xch, &op); @@ -136,7 +136,7 @@ int xc_tmem_control_oid(xc_interface *xc } } - xc_set_xen_guest_handle(op.u.ctrl.buf, buf); + set_xen_guest_handle(op.u.ctrl.buf, buf); rc = do_tmem_op(xch, &op); diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/libxc/xenctrl.h Fri Oct 22 15:14:51 2010 +0100 @@ -252,7 +252,8 @@ typedef struct xc_hypercall_buffer xc_hy * Set a xen_guest_handle in a type safe manner, ensuring that the * data pointer has been correctly allocated. 
*/ -#define xc_set_xen_guest_handle(_hnd, _val) \ +#undef set_xen_guest_handle +#define set_xen_guest_handle(_hnd, _val) \ do { \ xc_hypercall_buffer_t _val1; \ typeof(XC__HYPERCALL_BUFFER_NAME(_val)) *_val2 = HYPERCALL_BUFFER(_val); \ @@ -260,7 +261,7 @@ typedef struct xc_hypercall_buffer xc_hy set_xen_guest_handle_raw(_hnd, (_val2)->hbuf); \ } while (0) -/* Use with xc_set_xen_guest_handle in place of NULL */ +/* Use with set_xen_guest_handle in place of NULL */ extern xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL); /* diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/misc/xenpm.c --- a/tools/misc/xenpm.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/misc/xenpm.c Fri Oct 22 15:14:51 2010 +0100 @@ -395,9 +395,9 @@ static void signal_int_handler(int signo } } - xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU - 1; ret = xc_topologyinfo(xc_handle, &info); @@ -964,9 +964,9 @@ void cpu_topology_func(int argc, char *a goto out; } - xc_set_xen_guest_handle(info.cpu_to_core, cpu_to_core); - xc_set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); - xc_set_xen_guest_handle(info.cpu_to_node, cpu_to_node); + set_xen_guest_handle(info.cpu_to_core, cpu_to_core); + set_xen_guest_handle(info.cpu_to_socket, cpu_to_socket); + set_xen_guest_handle(info.cpu_to_node, cpu_to_node); info.max_cpu_index = MAX_NR_CPU-1; if ( xc_topologyinfo(xc_handle, &info) ) diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/python/xen/lowlevel/acm/acm.c --- a/tools/python/xen/lowlevel/acm/acm.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/python/xen/lowlevel/acm/acm.c Fri Oct 22 15:14:51 2010 +0100 @@ -53,7 +53,7 @@ static void *__getssid(xc_interface *xc_ } memset(buf, 0, SSID_BUFFER_SIZE); - xc_set_xen_guest_handle(getssid.ssidbuf, buffer); + set_xen_guest_handle(getssid.ssidbuf, buffer); getssid.ssidbuf_size = SSID_BUFFER_SIZE; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; @@ -254,10 +254,10 @@ static PyObject *chgpolicy(PyObject *sel chgpolicy.delarray_size = del_arr_len; chgpolicy.chgarray_size = chg_arr_len; chgpolicy.errarray_size = sizeof(*error_array)*errarray_mbrs; - xc_set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol_buf); - xc_set_xen_guest_handle(chgpolicy.del_array, del_arr_buf); - xc_set_xen_guest_handle(chgpolicy.chg_array, chg_arr_buf); - xc_set_xen_guest_handle(chgpolicy.err_array, error_array); + set_xen_guest_handle(chgpolicy.policy_pushcache, bin_pol_buf); + set_xen_guest_handle(chgpolicy.del_array, del_arr_buf); + set_xen_guest_handle(chgpolicy.chg_array, chg_arr_buf); + set_xen_guest_handle(chgpolicy.err_array, error_array); rc = xc_acm_op(xc_handle, ACMOP_chgpolicy, &chgpolicy, sizeof(chgpolicy)); @@ -299,7 +299,7 @@ static PyObject *getpolicy(PyObject *sel goto out; memset(&getpolicy, 0x0, sizeof(getpolicy)); - xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + set_xen_guest_handle(getpolicy.pullcache, pull_buffer); getpolicy.pullcache_size = sizeof(pull_buffer); rc = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); @@ -356,8 +356,8 @@ static PyObject *relabel_domains(PyObjec reldoms.relabel_map_size = rel_rules_len; reldoms.errarray_size = sizeof(error_array); - xc_set_xen_guest_handle(reldoms.relabel_map, 
relabel_rules_buf); - xc_set_xen_guest_handle(reldoms.err_array, error_array); + set_xen_guest_handle(reldoms.relabel_map, relabel_rules_buf); + set_xen_guest_handle(reldoms.err_array, error_array); rc = xc_acm_op(xc_handle, ACMOP_relabeldoms, &reldoms, sizeof(reldoms)); diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/python/xen/lowlevel/xc/xc.c --- a/tools/python/xen/lowlevel/xc/xc.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/python/xen/lowlevel/xc/xc.c Fri Oct 22 15:14:51 2010 +0100 @@ -1219,9 +1219,9 @@ static PyObject *pyxc_topologyinfo(XcObj if ( nodemap == NULL ) goto out; - xc_set_xen_guest_handle(tinfo.cpu_to_core, coremap); - xc_set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); - xc_set_xen_guest_handle(tinfo.cpu_to_node, nodemap); + set_xen_guest_handle(tinfo.cpu_to_core, coremap); + set_xen_guest_handle(tinfo.cpu_to_socket, socketmap); + set_xen_guest_handle(tinfo.cpu_to_node, nodemap); tinfo.max_cpu_index = MAX_CPU_INDEX; if ( xc_topologyinfo(self->xc_handle, &tinfo) != 0 ) @@ -1313,9 +1313,9 @@ static PyObject *pyxc_numainfo(XcObject if ( nodes_dist == NULL ) goto out; - xc_set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); - xc_set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); - xc_set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); + set_xen_guest_handle(ninfo.node_to_memsize, node_memsize); + set_xen_guest_handle(ninfo.node_to_memfree, node_memfree); + set_xen_guest_handle(ninfo.node_to_node_distance, nodes_dist); ninfo.max_node_index = MAX_NODE_INDEX; if ( xc_numainfo(self->xc_handle, &ninfo) != 0 ) diff -r 5d4e169f1ef0 -r d3f52cd04d85 tools/security/secpol_tool.c --- a/tools/security/secpol_tool.c Fri Oct 22 15:14:51 2010 +0100 +++ b/tools/security/secpol_tool.c Fri Oct 22 15:14:51 2010 +0100 @@ -248,7 +248,7 @@ int acm_get_ssidref(xc_interface *xc_han ssid = xc_hypercall_buffer_alloc(xc_handle, ssid, ssid_buffer_size); if ( ssid == NULL ) return 1; - xc_set_xen_guest_handle(getssid.ssidbuf, ssid); + set_xen_guest_handle(getssid.ssidbuf, ssid); getssid.ssidbuf_size = ssid_buffer_size; getssid.get_ssid_by = ACM_GETBY_domainid; getssid.id.domainid = domid; @@ -276,7 +276,7 @@ int acm_domain_getpolicy(xc_interface *x return -1; memset(pull_buffer, 0x00, pull_cache_size); - xc_set_xen_guest_handle(getpolicy.pullcache, pull_buffer); + set_xen_guest_handle(getpolicy.pullcache, pull_buffer); getpolicy.pullcache_size = pull_cache_size; ret = xc_acm_op(xc_handle, ACMOP_getpolicy, &getpolicy, sizeof(getpolicy)); if (ret >= 0) { @@ -389,7 +389,7 @@ int acm_domain_loadpolicy(xc_interface * /* dump it and then push it down into xen/acm */ acm_dump_policy_buffer(buffer, len, chwall_ssidref, ste_ssidref); - xc_set_xen_guest_handle(setpolicy.pushcache, buffer); + set_xen_guest_handle(setpolicy.pushcache, buffer); setpolicy.pushcache_size = len; ret = xc_acm_op(xc_handle, ACMOP_setpolicy, &setpolicy, sizeof(setpolicy)); @@ -437,7 +437,7 @@ int acm_domain_dumpstats(xc_interface *x return -1; memset(stats_buffer, 0x00, pull_stats_size); - xc_set_xen_guest_handle(dumpstats.pullcache, stats_buffer); + set_xen_guest_handle(dumpstats.pullcache, stats_buffer); dumpstats.pullcache_size = pull_stats_size; ret = xc_acm_op(xc_handle, ACMOP_dumpstats, &dumpstats, sizeof(dumpstats)); _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
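To see what the compile-time check buys: each DECLARE_HYPERCALL_BUFFER also declares a shadow xc_hypercall_buffer_t, and the overridden set_xen_guest_handle both names that shadow and compares pointer types, so only properly declared buffers get through. A minimal sketch (illustrative only, not a function from the series; it borrows the getmemlist fields from the xc_get_pfn_list hunk above):

/* Sketch: how the overridden set_xen_guest_handle rejects plain
 * pointers at compile time. Illustrative, built from the macros in
 * this series; not real libxc code. */
static int example(xc_interface *xch, uint32_t domid)
{
    DECLARE_DOMCTL;
    DECLARE_HYPERCALL_BUFFER(xen_pfn_t, pfns); /* also declares the shadow
                                                  xc__hypercall_buffer_pfns */
    xen_pfn_t raw[16];
    int ret;

    if ( xc_hypercall_buffer_alloc(xch, pfns, 16 * sizeof(*pfns)) == NULL )
        return -1;

    domctl.cmd = XEN_DOMCTL_getmemlist;
    domctl.domain = (domid_t)domid;
    domctl.u.getmemlist.max_pfns = 16;

    /* Compiles: "pfns" has a shadow xc_hypercall_buffer_t of the right type. */
    set_xen_guest_handle(domctl.u.getmemlist.buffer, pfns);

    /* Would not compile: there is no xc__hypercall_buffer_raw, so the
     * macro's expansion names an undeclared identifier. */
    /* set_xen_guest_handle(domctl.u.getmemlist.buffer, raw); */
    (void)raw;

    ret = do_domctl(xch, &domctl);
    xc_hypercall_buffer_free(xch, pfns);
    return ret;
}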
Ian Campbell
2010-Oct-25 16:04 UTC
[Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
The following patch is needed before this one to fix a build error in xc_hcall_buf.c when building with stub domains.

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1288022053 -3600
# Node ID 53b519f53bc1471610ab1423e3c70288a6c867b5
# Parent 3b5c6d7181fecdf6c1043a35047632ddf9950343
minios: add parentheses to mlock/munlock arguments.

Fixes warning/build error with non-trivial arguments.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 3b5c6d7181fe -r 53b519f53bc1 extras/mini-os/include/posix/sys/mman.h
--- a/extras/mini-os/include/posix/sys/mman.h	Mon Oct 25 14:56:39 2010 +0100
+++ b/extras/mini-os/include/posix/sys/mman.h	Mon Oct 25 16:54:13 2010 +0100
@@ -16,7 +16,7 @@
 void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset) asm("mmap64");
 int munmap(void *start, size_t length);
 
-#define munlock(addr, len) ((void)addr, (void)len, 0)
-#define mlock(addr, len) ((void)addr, (void)len, 0)
+#define munlock(addr, len) ((void)(addr), (void)(len), 0)
+#define mlock(addr, len) ((void)(addr), (void)(len), 0)
 
 #endif /* _POSIX_SYS_MMAN_H */
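The reason the parentheses matter: the stub macros evaluate their arguments inside a comma expression, so a compound argument can split across the cast. A minimal sketch of the failure mode (demo names are hypothetical; the macro bodies are the ones from the diff above):

/* Old vs. new stub definitions, as in the diff above. */
#define munlock_old(addr, len) ((void)addr, (void)len, 0)
#define munlock_new(addr, len) ((void)(addr), (void)(len), 0)

#define DEMO_PAGE_SIZE 4096

static int demo(void *hbuf, int nr_pages)
{
    /* munlock_old(hbuf, nr_pages * DEMO_PAGE_SIZE) expands to
     *   ((void)hbuf, (void)nr_pages * DEMO_PAGE_SIZE, 0)
     * where the cast binds to nr_pages alone, leaving an invalid
     * "void * int" multiplication -- exactly the build error hit by
     * the munlock(b->hbuf, nr_pages * PAGE_SIZE) call in xc_hcall_buf.c. */
    return munlock_new(hbuf, nr_pages * DEMO_PAGE_SIZE);
}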
Ian Campbell
2010-Oct-25 16:05 UTC
[Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
Resending with updates #includes in xc_hcall_buf.c to fix stubdomain build. # HG changeset patch # User Ian Campbell <ian.campbell@citrix.com> # Date 1288022054 -3600 # Node ID 56ea205819916669ff0f78414a461d12c35606bc # Parent 53b519f53bc1471610ab1423e3c70288a6c867b5 libxc: infrastructure for hypercall safe data buffers. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> diff -r 53b519f53bc1 -r 56ea20581991 tools/libxc/Makefile --- a/tools/libxc/Makefile Mon Oct 25 16:54:13 2010 +0100 +++ b/tools/libxc/Makefile Mon Oct 25 16:54:14 2010 +0100 @@ -27,6 +27,7 @@ CTRL_SRCS-y += xc_mem_event.c CTRL_SRCS-y += xc_mem_event.c CTRL_SRCS-y += xc_mem_paging.c CTRL_SRCS-y += xc_memshr.c +CTRL_SRCS-y += xc_hcall_buf.c CTRL_SRCS-y += xtl_core.c CTRL_SRCS-y += xtl_logger_stdio.c CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c diff -r 53b519f53bc1 -r 56ea20581991 tools/libxc/xc_hcall_buf.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tools/libxc/xc_hcall_buf.c Mon Oct 25 16:54:14 2010 +0100 @@ -0,0 +1,162 @@ +/* + * Copyright (c) 2010, Citrix Systems, Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; + * version 2.1 of the License. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include <stdlib.h> +#include <malloc.h> + +#include "xc_private.h" +#include "xg_private.h" + +xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL) = { + .hbuf = NULL, + .param_shadow = NULL, + HYPERCALL_BUFFER_INIT_NO_BOUNCE +}; + +void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages) +{ + size_t size = nr_pages * PAGE_SIZE; + void *p; +#if defined(_POSIX_C_SOURCE) && !defined(__sun__) + int ret; + ret = posix_memalign(&p, PAGE_SIZE, size); + if (ret != 0) + return NULL; +#elif defined(__NetBSD__) || defined(__OpenBSD__) + p = valloc(size); +#else + p = memalign(PAGE_SIZE, size); +#endif + + if (!p) + return NULL; + +#ifndef __sun__ + if ( mlock(p, size) < 0 ) + { + free(p); + return NULL; + } +#endif + + b->hbuf = p; + + memset(p, 0, size); + return b->hbuf; +} + +void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages) +{ + if ( b->hbuf == NULL ) + return; + +#ifndef __sun__ + (void) munlock(b->hbuf, nr_pages * PAGE_SIZE); +#endif + + free(b->hbuf); +} + +struct allocation_header { + int nr_pages; +}; + +void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size) +{ + size_t actual_size = ROUNDUP(size + sizeof(struct allocation_header), PAGE_SHIFT); + int nr_pages = actual_size >> PAGE_SHIFT; + struct allocation_header *hdr; + + hdr = xc__hypercall_buffer_alloc_pages(xch, b, nr_pages); + if ( hdr == NULL ) + return NULL; + + b->hbuf = (void *)(hdr+1); + + hdr->nr_pages = nr_pages; + return b->hbuf; +} + +void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b) +{ + struct allocation_header *hdr; + + if (b->hbuf == NULL) + return; + + hdr = b->hbuf; + b->hbuf = --hdr; + + 
xc__hypercall_buffer_free_pages(xch, b, hdr->nr_pages); +} + +int xc__hypercall_bounce_pre(xc_interface *xch, xc_hypercall_buffer_t *b) +{ + void *p; + + /* + * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE. + */ + if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE ) + abort(); + + /* + * Do need to bounce a NULL buffer. + */ + if ( b->ubuf == NULL ) + { + b->hbuf = NULL; + return 0; + } + + p = xc__hypercall_buffer_alloc(xch, b, b->sz); + if ( p == NULL ) + return -1; + + if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_IN || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH ) + memcpy(b->hbuf, b->ubuf, b->sz); + + return 0; +} + +void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *b) +{ + /* + * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE. + */ + if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE ) + abort(); + + if ( b->hbuf == NULL ) + return; + + if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_OUT || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH ) + memcpy(b->ubuf, b->hbuf, b->sz); + + xc__hypercall_buffer_free(xch, b); +} + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff -r 53b519f53bc1 -r 56ea20581991 tools/libxc/xc_private.c --- a/tools/libxc/xc_private.c Mon Oct 25 16:54:13 2010 +0100 +++ b/tools/libxc/xc_private.c Mon Oct 25 16:54:14 2010 +0100 @@ -18,13 +18,11 @@ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA */ -#include <inttypes.h> #include "xc_private.h" #include "xg_private.h" #include "xc_dom.h" #include <stdarg.h> #include <stdlib.h> -#include <malloc.h> #include <unistd.h> #include <pthread.h> #include <assert.h> diff -r 53b519f53bc1 -r 56ea20581991 tools/libxc/xc_private.h --- a/tools/libxc/xc_private.h Mon Oct 25 16:54:13 2010 +0100 +++ b/tools/libxc/xc_private.h Mon Oct 25 16:54:14 2010 +0100 @@ -105,6 +105,64 @@ void unlock_pages(xc_interface *xch, voi int hcall_buf_prep(xc_interface *xch, void **addr, size_t len); void hcall_buf_release(xc_interface *xch, void **addr, size_t len); + +/* + * HYPERCALL ARGUMENT BUFFERS + * + * Augment the public hypercall buffer interface with the ability to + * bounce between user provided buffers and hypercall safe memory. + * + * Use xc_hypercall_bounce_pre/post instead of + * xc_hypercall_buffer_alloc/free(_pages). The specified user + * supplied buffer is automatically copied in/out of the hypercall + * safe memory. + */ +enum { + XC_HYPERCALL_BUFFER_BOUNCE_NONE = 0, + XC_HYPERCALL_BUFFER_BOUNCE_IN = 1, + XC_HYPERCALL_BUFFER_BOUNCE_OUT = 2, + XC_HYPERCALL_BUFFER_BOUNCE_BOTH = 3 +}; + +/* + * Declare a named bounce buffer. + * + * Normally you should use DECLARE_HYPERCALL_BOUNCE (see below). + * + * This declaration should only be used when the user pointer is + * non-trivial, e.g. when it is contained within an existing data + * structure. + */ +#define DECLARE_NAMED_HYPERCALL_BOUNCE(_name, _ubuf, _sz, _dir) \ + xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \ + .hbuf = NULL, \ + .param_shadow = NULL, \ + .sz = _sz, .dir = _dir, .ubuf = _ubuf, \ + } + +/* + * Declare a bounce buffer shadowing the named user data pointer. + */ +#define DECLARE_HYPERCALL_BOUNCE(_ubuf, _sz, _dir) DECLARE_NAMED_HYPERCALL_BOUNCE(_ubuf, _ubuf, _sz, _dir) + +/* + * Set the size of data to bounce. Useful when the size is not known + * when the bounce buffer is declared. 
+ */ +#define HYPERCALL_BOUNCE_SET_SIZE(_buf, _sz) do { (HYPERCALL_BUFFER(_buf))->sz = _sz; } while (0) + +/* + * Initialise and free hypercall safe memory. Takes care of any required + * copying. + */ +int xc__hypercall_bounce_pre(xc_interface *xch, xc_hypercall_buffer_t *bounce); +#define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name)) +void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce); +#define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name)) + +/* + * Hypercall interfaces. + */ int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall); diff -r 53b519f53bc1 -r 56ea20581991 tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Mon Oct 25 16:54:13 2010 +0100 +++ b/tools/libxc/xenctrl.h Mon Oct 25 16:54:14 2010 +0100 @@ -147,6 +147,137 @@ enum xc_open_flags { * @return 0 on success, -1 otherwise. */ int xc_interface_close(xc_interface *xch); + +/* + * HYPERCALL SAFE MEMORY BUFFER + * + * Ensure that memory which is passed to a hypercall has been + * specially allocated in order to be safe to access from the + * hypervisor. + * + * Each user data pointer is shadowed by an xc_hypercall_buffer data + * structure. You should never define an xc_hypercall_buffer type + * directly, instead use the DECLARE_HYPERCALL_BUFFER* macros below. + * + * The strucuture should be considered opaque and all access should be + * via the macros and helper functions defined below. + * + * Once the buffer is declared the user is responsible for explicitly + * allocating and releasing the memory using + * xc_hypercall_buffer_alloc(_pages) and + * xc_hypercall_buffer_free(_pages). + * + * Once the buffer has been allocated the user can initialise the data + * via the normal pointer. The xc_hypercall_buffer structure is + * transparently referenced by the helper macros (such as + * xen_set_guest_handle) in order to check at compile time that the + * correct type of memory is being used. + */ +struct xc_hypercall_buffer { + /* Hypercall safe memory buffer. */ + void *hbuf; + + /* + * Reference to xc_hypercall_buffer passed as argument to the + * current function. + */ + struct xc_hypercall_buffer *param_shadow; + + /* + * Direction of copy for bounce buffering. + */ + int dir; + + /* Used iff dir != 0. */ + void *ubuf; + size_t sz; +}; +typedef struct xc_hypercall_buffer xc_hypercall_buffer_t; + +/* + * Construct the name of the hypercall buffer for a given variable. + * For internal use only + */ +#define XC__HYPERCALL_BUFFER_NAME(_name) xc__hypercall_buffer_##_name + +/* + * Returns the hypercall_buffer associated with a variable. + */ +#define HYPERCALL_BUFFER(_name) \ + ({ xc_hypercall_buffer_t _val1; \ + typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = &XC__HYPERCALL_BUFFER_NAME(_name); \ + (void)(&_val1 == _val2); \ + (_val2)->param_shadow ? (_val2)->param_shadow : (_val2); \ + }) + +#define HYPERCALL_BUFFER_INIT_NO_BOUNCE .dir = 0, .sz = 0, .ubuf = (void *)-1 + +/* + * Defines a hypercall buffer and user pointer with _name of _type. + * + * The user accesses the data as normal via _name which will be + * transparently converted to the hypercall buffer as necessary. 
+ */ +#define DECLARE_HYPERCALL_BUFFER(_type, _name) \ + _type *_name = NULL; \ + xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \ + .hbuf = NULL, \ + .param_shadow = NULL, \ + HYPERCALL_BUFFER_INIT_NO_BOUNCE \ + } + +/* + * Declare the necessary data structure to allow a hypercall buffer + * passed as an argument to a function to be used in the normal way. + */ +#define DECLARE_HYPERCALL_BUFFER_ARGUMENT(_name) \ + xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \ + .hbuf = (void *)-1, \ + .param_shadow = _name, \ + HYPERCALL_BUFFER_INIT_NO_BOUNCE \ + } + +/* + * Get the hypercall buffer data pointer in a form suitable for use + * directly as a hypercall argument. + */ +#define HYPERCALL_BUFFER_AS_ARG(_name) \ + ({ xc_hypercall_buffer_t _val1; \ + typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = HYPERCALL_BUFFER(_name); \ + (void)(&_val1 == _val2); \ + (unsigned long)(_val2)->hbuf; \ + }) + +/* + * Set a xen_guest_handle in a type safe manner, ensuring that the + * data pointer has been correctly allocated. + */ +#define xc_set_xen_guest_handle(_hnd, _val) \ + do { \ + xc_hypercall_buffer_t _val1; \ + typeof(XC__HYPERCALL_BUFFER_NAME(_val)) *_val2 = HYPERCALL_BUFFER(_val); \ + (void) (&_val1 == _val2); \ + set_xen_guest_handle_raw(_hnd, (_val2)->hbuf); \ + } while (0) + +/* Use with xc_set_xen_guest_handle in place of NULL */ +extern xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL); + +/* + * Allocate and free hypercall buffers with byte granularity. + */ +void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size); +#define xc_hypercall_buffer_alloc(_xch, _name, _size) xc__hypercall_buffer_alloc(_xch, HYPERCALL_BUFFER(_name), _size) +void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b); +#define xc_hypercall_buffer_free(_xch, _name) xc__hypercall_buffer_free(_xch, HYPERCALL_BUFFER(_name)) + +/* + * Allocate and free hypercall buffers with page alignment. + */ +void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages); +#define xc_hypercall_buffer_alloc_pages(_xch, _name, _nr) xc__hypercall_buffer_alloc_pages(_xch, HYPERCALL_BUFFER(_name), _nr) +void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages); +#define xc_hypercall_buffer_free_pages(_xch, _name, _nr) xc__hypercall_buffer_free_pages(_xch, HYPERCALL_BUFFER(_name), _nr) /* * DOMAIN DEBUGGING FUNCTIONS _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
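Putting the pieces together, typical use of the bounce interface looks roughly like the following sketch, modelled on the page-offline call sites converted later in the series (field names follow xc_offline_page.c as quoted in the thread; the sysctl command constant is assumed from context and error reporting is trimmed):

/* Sketch: bouncing a user-supplied status array through hypercall-safe
 * memory with the interfaces from this patch. Illustrative only. */
static int example_query(xc_interface *xch, unsigned long start,
                         unsigned long end, uint32_t *status)
{
    int ret;
    DECLARE_SYSCTL;
    DECLARE_HYPERCALL_BOUNCE(status, sizeof(*status) * (end - start + 1),
                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

    /* Allocates safe memory and copies the user data in. */
    if ( xc_hypercall_bounce_pre(xch, status) )
        return -1;

    sysctl.cmd = XEN_SYSCTL_page_offline_op;
    sysctl.u.page_offline.start = start;
    sysctl.u.page_offline.end = end;
    sysctl.u.page_offline.cmd = sysctl_query_page_offline;
    xc_set_xen_guest_handle(sysctl.u.page_offline.status, status);

    ret = xc_sysctl(xch, &sysctl);

    /* Copies results back to the user buffer and frees the safe memory. */
    xc_hypercall_bounce_post(xch, status);
    return ret;
}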
Ian Jackson
2010-Oct-26 11:23 UTC
[Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
Ian Campbell writes ("[Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers"):
> The following patch is needed before this one to fix a build error in
> xc_hcall_buf.c when building with stub domains.

I've committed this, and the following 25-patch series (with the
revised version of 01/25).

Thanks,
Ian.
Olaf Hering
2010-Oct-26 15:17 UTC
Re: [Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
On Tue, Oct 26, Ian Jackson wrote:

> Ian Campbell writes ("[Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers"):
> > The following patch is needed before this one to fix a build error in
> > xc_hcall_buf.c when building with stub domains.
>
> I've committed this, and the following 25-patch series (with the
> revised version of 01/25).

Does that actually work for anyone?
Rev 22285 worked, rev 22313 broke.

...
[2010-10-26 17:09:11 4743] INFO (SrvDaemon:332) Xend Daemon started
[2010-10-26 17:09:11 4743] ERROR (SrvDaemon:349) Exception starting xend ((14, 'Bad address'))
Traceback (most recent call last):
  File "/usr/lib64/python2.6/site-packages/xen/xend/server/SrvDaemon.py", line 335, in run
    xinfo = xc.xeninfo()
Error: (14, 'Bad address')
[2010-10-26 17:09:11 4742] INFO (SrvDaemon:220) Xend exited with status 1.
...

Does this series require dom0 kernel changes by any chance?

Olaf
Ian Campbell
2010-Oct-26 15:24 UTC
Re: [Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
On Tue, 2010-10-26 at 16:17 +0100, Olaf Hering wrote:
> On Tue, Oct 26, Ian Jackson wrote:
>
> > Ian Campbell writes ("[Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers"):
> > > The following patch is needed before this one to fix a build error in
> > > xc_hcall_buf.c when building with stub domains.
> >
> > I've committed this, and the following 25-patch series (with the
> > revised version of 01/25).
>
> Does that actually work for anyone?
> Rev 22285 worked, rev 22313 broke.
>
> ...
> [2010-10-26 17:09:11 4743] INFO (SrvDaemon:332) Xend Daemon started
> [2010-10-26 17:09:11 4743] ERROR (SrvDaemon:349) Exception starting xend ((14, 'Bad address'))
> Traceback (most recent call last):
>   File "/usr/lib64/python2.6/site-packages/xen/xend/server/SrvDaemon.py", line 335, in run
>     xinfo = xc.xeninfo()
> Error: (14, 'Bad address')
> [2010-10-26 17:09:11 4742] INFO (SrvDaemon:220) Xend exited with status 1.
> ...

hmm, I tested xend but trying it now myself I see that it is broken.
I'll sort it out ASAP.

> Does this series require dom0 kernel changes by any chance?

it shouldn't.

Ian.
Olaf Hering
2010-Oct-26 15:37 UTC
Re: [Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers
Ian, is the usage like shown below ok, adding an offset +start to the initial buffer? Olaf --- xen-unstable.hg-4.1.22313.orig/tools/libxc/xc_domain.c +++ xen-unstable.hg-4.1.22313/tools/libxc/xc_domain.c @@ -572,6 +572,55 @@ int xc_domain_get_tsc_info(xc_interface return rc; } +static int do_xenmem_op_retry(xc_interface *xch, int cmd, struct xen_memory_reservation *reservation, size_t len, unsigned long nr_extents, xen_pfn_t *extent_start) +{ + xen_pfn_t *es = extent_start; + DECLARE_HYPERCALL_BOUNCE(es, nr_extents * sizeof(*es), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); + int err = 0; + unsigned long count = nr_extents; + unsigned long delay = 0; + unsigned long start = 0; + + fprintf(stderr, "%s: %d count %lx\n",__func__,cmd,count); + + if ( xc_hypercall_bounce_pre(xch, es) ) + { + PERROR("Could not bounce memory for XENMEM_* hypercall"); + return -1; + } + + while ( start < nr_extents ) + { + es = extent_start + start; + set_xen_guest_handle(reservation->extent_start, es); + reservation->nr_extents = count; + + err = do_memory_op(xch, cmd, reservation, len); + if ( err == count ) + break; + + if ( err > count || err < 0 ) + break; + + if ( delay > 1000 * 1000) + { + err = start; + break; + } + + if ( err ) + delay = 0; + + start += err; + count -= err; + usleep(delay); + delay += 666; /* 1500 iterations, 12 seconds */ + } + fprintf(stderr, "%s: %d err %x count %lx start %lx delay %lu/%lu\n",__func__,cmd,err,count,start,delay,delay/666); + + xc_hypercall_bounce_post(xch, es); + return err; +} int xc_domain_maximum_gpfn(xc_interface *xch, domid_t domid) { @@ -643,10 +692,7 @@ int xc_domain_decrease_reservation(xc_in unsigned int extent_order, xen_pfn_t *extent_start) { - int err; - DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { - .nr_extents = nr_extents, .extent_order = extent_order, .mem_flags = 0, .domid = domid @@ -659,18 +705,7 @@ int xc_domain_decrease_reservation(xc_in return -1; } - if ( xc_hypercall_bounce_pre(xch, extent_start) ) - { - PERROR("Could not bounce memory for XENMEM_decrease_reservation hypercall"); - return -1; - } - set_xen_guest_handle(reservation.extent_start, extent_start); - - err = do_memory_op(xch, XENMEM_decrease_reservation, &reservation, sizeof(reservation)); - - xc_hypercall_bounce_post(xch, extent_start); - - return err; + return do_xenmem_op_retry(xch, XENMEM_decrease_reservation, &reservation, sizeof(reservation), nr_extents, extent_start); } int xc_domain_decrease_reservation_exact(xc_interface *xch, @@ -704,13 +739,20 @@ int xc_domain_add_to_physmap(xc_interfac unsigned long idx, xen_pfn_t gpfn) { + uint8_t delay = 0; + int rc; struct xen_add_to_physmap xatp = { .domid = domid, .space = space, .idx = idx, .gpfn = gpfn, }; - return do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp)); + do { + rc = do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp)); + if ( rc < 0 && errno == ENOENT ) + usleep(1000); + } while ( rc < 0 && errno == ENOENT && ++delay ); + return rc; } int xc_domain_populate_physmap(xc_interface *xch, @@ -720,26 +762,13 @@ int xc_domain_populate_physmap(xc_interf unsigned int mem_flags, xen_pfn_t *extent_start) { - int err; - DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); struct xen_memory_reservation reservation = { - .nr_extents = nr_extents, .extent_order = extent_order, .mem_flags = mem_flags, .domid = domid }; - if ( xc_hypercall_bounce_pre(xch, 
extent_start) ) - { - PERROR("Could not bounce memory for XENMEM_populate_physmap hypercall"); - return -1; - } - set_xen_guest_handle(reservation.extent_start, extent_start); - - err = do_memory_op(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation)); - - xc_hypercall_bounce_post(xch, extent_start); - return err; + return do_xenmem_op_retry(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation), nr_extents, extent_start); } int xc_domain_populate_physmap_exact(xc_interface *xch, @@ -799,6 +828,7 @@ int xc_domain_memory_exchange_pages(xc_i set_xen_guest_handle(exchange.in.extent_start, in_extents); set_xen_guest_handle(exchange.out.extent_start, out_extents); + /* FIXME use do_xenmem_op_retry or some retry loop??? */ rc = do_memory_op(xch, XENMEM_exchange, &exchange, sizeof(exchange)); out: _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-26 16:25 UTC
Re: [Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers
On Tue, 2010-10-26 at 16:37 +0100, Olaf Hering wrote:> Ian, > > is the usage like shown below ok, adding an offset +start to the > initial buffer?Unfortunately I think you need to extend the infrastructure to make it work. es is a pointer to the unbounced buffer so you can''t just take offsets from it etc. I think you need to add a set_xen_guest_handle_offset used as set_xen_guest_handle_offset(reservation->extent_start, es, start) or whatever. (I think this actually makes es useless so you can bounce extent_start directly and use extent_start directly). Ian.> Olaf > > --- xen-unstable.hg-4.1.22313.orig/tools/libxc/xc_domain.c > +++ xen-unstable.hg-4.1.22313/tools/libxc/xc_domain.c > @@ -572,6 +572,55 @@ int xc_domain_get_tsc_info(xc_interface > return rc; > } > > +static int do_xenmem_op_retry(xc_interface *xch, int cmd, struct xen_memory_reservation *reservation, size_t len, unsigned long nr_extents, xen_pfn_t *extent_start) > +{ > + xen_pfn_t *es = extent_start; > + DECLARE_HYPERCALL_BOUNCE(es, nr_extents * sizeof(*es), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); > + int err = 0; > + unsigned long count = nr_extents; > + unsigned long delay = 0; > + unsigned long start = 0; > + > + fprintf(stderr, "%s: %d count %lx\n",__func__,cmd,count); > + > + if ( xc_hypercall_bounce_pre(xch, es) ) > + { > + PERROR("Could not bounce memory for XENMEM_* hypercall"); > + return -1; > + } > + > + while ( start < nr_extents ) > + { > + es = extent_start + start; > + set_xen_guest_handle(reservation->extent_start, es); > + reservation->nr_extents = count; > + > + err = do_memory_op(xch, cmd, reservation, len); > + if ( err == count ) > + break; > + > + if ( err > count || err < 0 ) > + break; > + > + if ( delay > 1000 * 1000) > + { > + err = start; > + break; > + } > + > + if ( err ) > + delay = 0; > + > + start += err; > + count -= err; > + usleep(delay); > + delay += 666; /* 1500 iterations, 12 seconds */ > + } > + fprintf(stderr, "%s: %d err %x count %lx start %lx delay %lu/%lu\n",__func__,cmd,err,count,start,delay,delay/666); > + > + xc_hypercall_bounce_post(xch, es); > + return err; > +} > > int xc_domain_maximum_gpfn(xc_interface *xch, domid_t domid) > { > @@ -643,10 +692,7 @@ int xc_domain_decrease_reservation(xc_in > unsigned int extent_order, > xen_pfn_t *extent_start) > { > - int err; > - DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); > struct xen_memory_reservation reservation = { > - .nr_extents = nr_extents, > .extent_order = extent_order, > .mem_flags = 0, > .domid = domid > @@ -659,18 +705,7 @@ int xc_domain_decrease_reservation(xc_in > return -1; > } > > - if ( xc_hypercall_bounce_pre(xch, extent_start) ) > - { > - PERROR("Could not bounce memory for XENMEM_decrease_reservation hypercall"); > - return -1; > - } > - set_xen_guest_handle(reservation.extent_start, extent_start); > - > - err = do_memory_op(xch, XENMEM_decrease_reservation, &reservation, sizeof(reservation)); > - > - xc_hypercall_bounce_post(xch, extent_start); > - > - return err; > + return do_xenmem_op_retry(xch, XENMEM_decrease_reservation, &reservation, sizeof(reservation), nr_extents, extent_start); > } > > int xc_domain_decrease_reservation_exact(xc_interface *xch, > @@ -704,13 +739,20 @@ int xc_domain_add_to_physmap(xc_interfac > unsigned long idx, > xen_pfn_t gpfn) > { > + uint8_t delay = 0; > + int rc; > struct xen_add_to_physmap xatp = { > .domid = domid, > .space = space, > .idx = idx, > .gpfn = gpfn, > }; > - return do_memory_op(xch, XENMEM_add_to_physmap, &xatp, 
sizeof(xatp)); > + do { > + rc = do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp)); > + if ( rc < 0 && errno == ENOENT ) > + usleep(1000); > + } while ( rc < 0 && errno == ENOENT && ++delay ); > + return rc; > } > > int xc_domain_populate_physmap(xc_interface *xch, > @@ -720,26 +762,13 @@ int xc_domain_populate_physmap(xc_interf > unsigned int mem_flags, > xen_pfn_t *extent_start) > { > - int err; > - DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH); > struct xen_memory_reservation reservation = { > - .nr_extents = nr_extents, > .extent_order = extent_order, > .mem_flags = mem_flags, > .domid = domid > }; > > - if ( xc_hypercall_bounce_pre(xch, extent_start) ) > - { > - PERROR("Could not bounce memory for XENMEM_populate_physmap hypercall"); > - return -1; > - } > - set_xen_guest_handle(reservation.extent_start, extent_start); > - > - err = do_memory_op(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation)); > - > - xc_hypercall_bounce_post(xch, extent_start); > - return err; > + return do_xenmem_op_retry(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation), nr_extents, extent_start); > } > > int xc_domain_populate_physmap_exact(xc_interface *xch, > @@ -799,6 +828,7 @@ int xc_domain_memory_exchange_pages(xc_i > set_xen_guest_handle(exchange.in.extent_start, in_extents); > set_xen_guest_handle(exchange.out.extent_start, out_extents); > > + /* FIXME use do_xenmem_op_retry or some retry loop??? */ > rc = do_memory_op(xch, XENMEM_exchange, &exchange, sizeof(exchange)); > > out:_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
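For reference, a set_xen_guest_handle_offset along those lines might look something like the following sketch (it reuses the series' macro conventions; this is not committed code):

/* Sketch only: like set_xen_guest_handle, but aims the handle _off
 * elements into the (bounced) buffer instead of at its start. */
#define set_xen_guest_handle_offset(_hnd, _val, _off)                   \
    do {                                                                \
        xc_hypercall_buffer_t _val1;                                    \
        typeof(XC__HYPERCALL_BUFFER_NAME(_val)) *_val2 =                \
            HYPERCALL_BUFFER(_val);                                     \
        (void) (&_val1 == _val2); /* compile-time type check */        \
        set_xen_guest_handle_raw(_hnd,                                  \
            (void *)((char *)(_val2)->hbuf +                            \
                     (_off) * sizeof(*(_val))));                        \
    } while (0)

In the retry loop the es arithmetic would then become set_xen_guest_handle_offset(reservation->extent_start, extent_start, start), leaving the bounce buffer itself untouched.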
Ian Campbell
2010-Oct-26 16:38 UTC
Re: [Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
On Tue, 2010-10-26 at 16:24 +0100, Ian Campbell wrote:
> On Tue, 2010-10-26 at 16:17 +0100, Olaf Hering wrote:
> > On Tue, Oct 26, Ian Jackson wrote:
> >
> > > Ian Campbell writes ("[Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers"):
> > > > The following patch is needed before this one to fix a build error in
> > > > xc_hcall_buf.c when building with stub domains.
> > >
> > > I've committed this, and the following 25-patch series (with the
> > > revised version of 01/25).
> >
> > Does that actually work for anyone?
> > Rev 22285 worked, rev 22313 broke.
> >
> > ...
> > [2010-10-26 17:09:11 4743] INFO (SrvDaemon:332) Xend Daemon started
> > [2010-10-26 17:09:11 4743] ERROR (SrvDaemon:349) Exception starting xend ((14, 'Bad address'))
> > Traceback (most recent call last):
> >   File "/usr/lib64/python2.6/site-packages/xen/xend/server/SrvDaemon.py", line 335, in run
> >     xinfo = xc.xeninfo()
> > Error: (14, 'Bad address')
> > [2010-10-26 17:09:11 4742] INFO (SrvDaemon:220) Xend exited with status 1.
> > ...
>
> hmm, I tested xend but trying it now myself I see that it is broken.
> I'll sort it out ASAP.

ASAP won't be today now unfortunately.

Reverting 9fad5e5e2fc1 followed by ca4a781c8ae8 fixes the issue for me,
with ca4a781c8ae8 more than likely being the actual culprit. I'm sure I
fixed something like this once before, must have rebroken it in a
rebase or something.

Ian.

> > Does this series require dom0 kernel changes by any chance?
>
> it shouldn't.
>
> Ian.
Ian Campbell
2010-Oct-26 18:47 UTC
Re: [Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
On Tue, 2010-10-26 at 17:38 +0100, Ian Campbell wrote:
> ASAP won't be today now unfortunately.

Actually, maybe it will:

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1288118755 -3600
# Node ID 630441a717dacd0764836386b63a5db8cd5d11dd
# Parent cd193fa265b88bf4ff891f03c9be0e12415e6778
libxc: fix xc_version by handling all known command types.

xend was crashing since 22289:ca4a781c8ae8 due to missing handling of
XENVER_commandline.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r cd193fa265b8 -r 630441a717da tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Tue Oct 26 12:22:52 2010 +0100
+++ b/tools/libxc/xc_private.c	Tue Oct 26 19:45:55 2010 +0100
@@ -447,11 +447,14 @@
 int xc_version(xc_interface *xch, int cmd, void *arg)
 {
     DECLARE_HYPERCALL_BOUNCE(arg, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT); /* Size unknown until cmd decoded */
-    size_t sz = 0;
+    size_t sz;
     int rc;
 
     switch ( cmd )
     {
+    case XENVER_version:
+        sz = 0;
+        break;
     case XENVER_extraversion:
         sz = sizeof(xen_extraversion_t);
         break;
@@ -467,6 +470,21 @@
     case XENVER_platform_parameters:
         sz = sizeof(xen_platform_parameters_t);
         break;
+    case XENVER_get_features:
+        sz = sizeof(xen_feature_info_t);
+        break;
+    case XENVER_pagesize:
+        sz = 0;
+        break;
+    case XENVER_guest_handle:
+        sz = sizeof(xen_domain_handle_t);
+        break;
+    case XENVER_commandline:
+        sz = sizeof(xen_commandline_t);
+        break;
+    default:
+        ERROR("xc_version: unknown command %d\n", cmd);
+        return -EINVAL;
     }
 
     HYPERCALL_BOUNCE_SET_SIZE(arg, sz);
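For context, the xend failure boils down to a call like the following (an illustrative sketch: before the fix, XENVER_commandline fell through with no bounce size set, so nothing sensible came back; xen_commandline_t is the fixed-size char array from xen/version.h):

/* Sketch of the path xend exercises via xc.xeninfo(). */
static void print_xen_cmdline(xc_interface *xch)
{
    xen_commandline_t cmdline;

    if ( xc_version(xch, XENVER_commandline, &cmdline) < 0 )
    {
        fprintf(stderr, "xc_version(XENVER_commandline) failed\n");
        return;
    }
    printf("Xen command line: %s\n", cmdline);
}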
Olaf Hering
2010-Oct-27 06:30 UTC
Re: [Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
On Tue, Oct 26, Ian Campbell wrote:
> On Tue, 2010-10-26 at 17:38 +0100, Ian Campbell wrote:
> > ASAP won't be today now unfortunately.
>
> Actually, maybe it will:
>
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1288118755 -3600
> # Node ID 630441a717dacd0764836386b63a5db8cd5d11dd
> # Parent cd193fa265b88bf4ff891f03c9be0e12415e6778
> libxc: fix xc_version by handling all known command types.
>
> xend was crashing since 22289:ca4a781c8ae8 due to missing handling of
> XENVER_commandline.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Tested-by: Olaf Hering <olaf@aepfle.de>

Thanks, that change helps.

Olaf
Ian Jackson
2010-Oct-27 11:25 UTC
Re: [Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers
Ian Campbell writes ("Re: [Xen-devel] Re: [PATCH 01 of 25] libxc: infrastructure for hypercall safe data buffers"):> libxc: fix xc_version by handling all known command types. > > xend was crashing since 22289:ca4a781c8ae8 due to missing handling of > XENVER_commandline.Applied, thanks. Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Olaf Hering
2010-Oct-27 14:53 UTC
Re: [Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers
On Tue, Oct 26, Ian Campbell wrote:
> On Tue, 2010-10-26 at 16:37 +0100, Olaf Hering wrote:
> > Ian,
> >
> > is the usage like shown below ok, adding an offset +start to the
> > initial buffer?
>
> Unfortunately I think you need to extend the infrastructure to make it
> work.

This should work as well, using existing code:

err = do_memory_op(xch, cmd | (start << MEMOP_EXTENT_SHIFT), reservation, len);

Olaf
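In other words, the XENMEM_* commands carry a start extent in the upper bits of the command word, so a retry can resume without touching the guest handle at all. Reworked on top of the earlier draft, the loop might look roughly like this (a sketch only; the treatment of the return value as a cumulative extent count is an assumption about the hypervisor's continuation convention and should be checked):

/* Sketch: retry loop using the start-extent encoding instead of
 * pointer offsets into the bounce buffer. The caller is assumed to
 * have already bounced extent_start and set the handle in
 * reservation->extent_start. */
static int xenmem_op_retry(xc_interface *xch, int cmd,
                           struct xen_memory_reservation *reservation,
                           size_t len, unsigned long nr_extents)
{
    unsigned long start = 0, delay = 0;
    int err = 0;

    while ( start < nr_extents )
    {
        reservation->nr_extents = nr_extents;

        /* The hypervisor resumes at extent "start"; the handle keeps
         * pointing at the beginning of the array. */
        err = do_memory_op(xch, cmd | (start << MEMOP_EXTENT_SHIFT),
                           reservation, len);
        if ( err < 0 || (unsigned long)err >= nr_extents )
            break;
        if ( delay > 1000 * 1000 )
            break;

        start = err;       /* assumed: cumulative number of extents done */
        usleep(delay);
        delay += 666;
    }
    return err;
}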
Ian Campbell
2010-Oct-27 15:45 UTC
Re: [Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers
On Wed, 2010-10-27 at 15:53 +0100, Olaf Hering wrote:
> On Tue, Oct 26, Ian Campbell wrote:
>
> > On Tue, 2010-10-26 at 16:37 +0100, Olaf Hering wrote:
> > > Ian,
> > >
> > > is the usage like shown below ok, adding an offset +start to the
> > > initial buffer?
> >
> > Unfortunately I think you need to extend the infrastructure to make it
> > work.
>
> This should work as well, using existing code:
>
> err = do_memory_op(xch, cmd | (start << MEMOP_EXTENT_SHIFT), reservation, len);

That would be preferable then, I think (without having seen the entire
patch).

Ian.