Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 00 of 18] libxc: preparation for hypercall buffers + random cleanups
The following contains some cleanups in preparation for the hypercall
buffer patch series, plus some other bits and bobs which I happened to
notice while preparing that series.
The bulk of the series adds (and consistently uses) a specific libxc
function for each XENMEM_* operation. This allows the memory management
in xc_memory_op to be greatly simplified by converting it into a purely
internal function.
Part of this involved adding variants of
xc_domain_memory_{increase_reservation,decrease_reservation,populate_physmap}
which return the actual number of successful operations instead of
swallowing partial success and converting it into failure.
Rather than introduce a difficult-to-detect API change by redefining the
meaning of the integer return value of these functions, I have instead
introduced new names for them in the form of
xc_domain_{increase_reservation,decrease_reservation,populate_physmap}. In
each case I have also added an xc_domain_*_exact variant which
maintains the semantics of the old xc_domain_memory_* functions.
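As an illustration of the intended usage, here is a minimal sketch from
inside a hypothetical caller (the variables and the handle_partial()
helper are invented for this example and are not part of the series):

    /* Sketch only: xch, domid, nr and pfns are assumed to have been set
     * up by the caller; handle_partial() is a made-up placeholder. */
    int done = xc_domain_populate_physmap(xch, domid, nr, 0, 0, pfns);
    if ( done < 0 )
        return -1;               /* the hypercall itself failed */
    else if ( done < nr )
        handle_partial(done);    /* only the first 'done' extents were populated */

    /* Callers wanting the old all-or-nothing behaviour use the _exact
     * variant, which returns 0 on full success and -1 (with errno set)
     * otherwise. */
    if ( xc_domain_populate_physmap_exact(xch, domid, nr, 0, 0, pfns) != 0 )
        return -1;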
For consistency xc_domain_memory_{set,get}_pod_target have now become
xc_domain_{set,get}_pod_target.
The bits which touch ia64 are not even compile-tested since I do not
have access to a suitable userspace-capable cross compiler. However,
they are relatively straightforward substitutions.
One patch in the series (#11/18) adds a "# XXX update" suffix to
QEMU_TAG. Rather than applying that patch directly, QEMU_TAG should be
updated at this point to include the qemu series posted alongside this one.
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 01 of 18] libxc: flask: use (un)lock pages rather than open coding m(un)lock
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892401 -3600
# Node ID 73a05c8f7c3ec924c7a334a8840b54fcba31c3c1
# Parent b5ed73f6f9b57d90dd3816f20594977e240497c1
libxc: flask: use (un)lock pages rather than open coding m(un)lock.
This allows us to do away with safe_munlock by merging its behaviour into unlock_pages.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r b5ed73f6f9b5 -r 73a05c8f7c3e tools/libxc/xc_flask.c
--- a/tools/libxc/xc_flask.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_flask.c Tue Oct 12 15:06:41 2010 +0100
@@ -44,7 +44,7 @@ int xc_flask_op(xc_interface *xch, flask
hypercall.op = __HYPERVISOR_xsm_op;
hypercall.arg[0] = (unsigned long)op;
- if ( mlock(op, sizeof(*op)) != 0 )
+ if ( lock_pages(op, sizeof(*op)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out;
@@ -56,7 +56,7 @@ int xc_flask_op(xc_interface *xch, flask
fprintf(stderr, "XSM operation failed!\n");
}
- safe_munlock(op, sizeof(*op));
+ unlock_pages(op, sizeof(*op));
out:
return ret;
diff -r b5ed73f6f9b5 -r 73a05c8f7c3e tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:41 2010 +0100
@@ -218,7 +218,9 @@ void unlock_pages(void *addr, size_t len
void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) +
PAGE_SIZE - 1) & PAGE_MASK;
- safe_munlock(laddr, llen);
+ int saved_errno = errno;
+ (void)munlock(laddr, llen);
+ errno = saved_errno;
}
static pthread_key_t hcall_buf_pkey;
diff -r b5ed73f6f9b5 -r 73a05c8f7c3e tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_private.h Tue Oct 12 15:06:41 2010 +0100
@@ -105,13 +105,6 @@ void unlock_pages(void *addr, size_t len
int hcall_buf_prep(void **addr, size_t len);
void hcall_buf_release(void **addr, size_t len);
-
-static inline void safe_munlock(const void *addr, size_t len)
-{
- int saved_errno = errno;
- (void)munlock(addr, len);
- errno = saved_errno;
-}
int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall);
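For context, the reason unlock_pages can simply absorb safe_munlock is
that callers rely on errno from a failed hypercall surviving the unlock
on the error path. A simplified caller pattern (not taken from the
patch; names assumed from the surrounding xc_flask_op code) looks like:

    if ( lock_pages(op, sizeof(*op)) != 0 )
        return -1;                       /* errno set by mlock() */
    ret = do_xen_hypercall(xch, &hypercall);
    unlock_pages(op, sizeof(*op));       /* must not clobber errno from the
                                          * hypercall above, hence the
                                          * save/restore around munlock() */
    return ret;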
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 02 of 18] libxc: pass an xc_interface handle to page locking functions
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892401 -3600
# Node ID 29a5439889c36e72df0f0828aee8f2b002a545b9
# Parent 73a05c8f7c3ec924c7a334a8840b54fcba31c3c1
libxc: pass an xc_interface handle to page locking functions
The handle is not actually used here yet, but passing it is useful to
confirm that a handle reaches each location where it will be required
once we switch to hypercall buffers.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_acm.c
--- a/tools/libxc/xc_acm.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_acm.c Tue Oct 12 15:06:41 2010 +0100
@@ -92,7 +92,7 @@ int xc_acm_op(xc_interface *xch, int cmd
hypercall.op = __HYPERVISOR_xsm_op;
hypercall.arg[0] = (unsigned long)&acmctl;
- if ( lock_pages(&acmctl, sizeof(acmctl)) != 0)
+ if ( lock_pages(xch, &acmctl, sizeof(acmctl)) != 0)
{
PERROR("Could not lock memory for Xen hypercall");
return -EFAULT;
@@ -103,7 +103,7 @@ int xc_acm_op(xc_interface *xch, int cmd
DPRINTF("acmctl operation failed -- need to"
" rebuild the user-space tool set?\n");
}
- unlock_pages(&acmctl, sizeof(acmctl));
+ unlock_pages(xch, &acmctl, sizeof(acmctl));
switch (cmd) {
case ACMOP_getdecision: {
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_cpupool.c
--- a/tools/libxc/xc_cpupool.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_cpupool.c Tue Oct 12 15:06:41 2010 +0100
@@ -85,13 +85,13 @@ int xc_cpupool_getinfo(xc_interface *xch
set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(info->cpumap) * 8;
- if ( (err = lock_pages(local, sizeof(local))) != 0 )
+ if ( (err = lock_pages(xch, local, sizeof(local))) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
break;
}
err = do_sysctl_save(xch, &sysctl);
- unlock_pages(local, sizeof (local));
+ unlock_pages(xch, local, sizeof (local));
if ( err < 0 )
break;
@@ -161,14 +161,14 @@ int xc_cpupool_freeinfo(xc_interface *xc
set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(*cpumap) * 8;
- if ( (err = lock_pages(local, sizeof(local))) != 0 )
+ if ( (err = lock_pages(xch, local, sizeof(local))) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
return err;
}
err = do_sysctl_save(xch, &sysctl);
- unlock_pages(local, sizeof (local));
+ unlock_pages(xch, local, sizeof (local));
if (err < 0)
return err;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_domain.c Tue Oct 12 15:06:41 2010 +0100
@@ -94,7 +94,7 @@ int xc_domain_shutdown(xc_interface *xch
arg.domain_id = domid;
arg.reason = reason;
- if ( lock_pages(&arg, sizeof(arg)) != 0 )
+ if ( lock_pages(xch, &arg, sizeof(arg)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out1;
@@ -102,7 +102,7 @@ int xc_domain_shutdown(xc_interface *xch
ret = do_xen_hypercall(xch, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(xch, &arg, sizeof(arg));
out1:
return ret;
@@ -133,7 +133,7 @@ int xc_vcpu_setaffinity(xc_interface *xc
domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
- if ( lock_pages(local, cpusize) != 0 )
+ if ( lock_pages(xch, local, cpusize) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out;
@@ -141,7 +141,7 @@ int xc_vcpu_setaffinity(xc_interface *xc
ret = do_domctl(xch, &domctl);
- unlock_pages(local, cpusize);
+ unlock_pages(xch, local, cpusize);
out:
free(local);
@@ -172,7 +172,7 @@ int xc_vcpu_getaffinity(xc_interface *xc
set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
- if ( lock_pages(local, sizeof(local)) != 0 )
+ if ( lock_pages(xch, local, sizeof(local)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out;
@@ -180,7 +180,7 @@ int xc_vcpu_getaffinity(xc_interface *xc
ret = do_domctl(xch, &domctl);
- unlock_pages(local, sizeof (local));
+ unlock_pages(xch, local, sizeof (local));
bitmap_byte_to_64(cpumap, local, cpusize * 8);
out:
free(local);
@@ -257,7 +257,7 @@ int xc_domain_getinfolist(xc_interface *
int ret = 0;
DECLARE_SYSCTL;
- if ( lock_pages(info, max_domains*sizeof(xc_domaininfo_t)) != 0 )
+ if ( lock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)) != 0 )
return -1;
sysctl.cmd = XEN_SYSCTL_getdomaininfolist;
@@ -270,7 +270,7 @@ int xc_domain_getinfolist(xc_interface *
else
ret = sysctl.u.getdomaininfolist.num_domains;
- unlock_pages(info, max_domains*sizeof(xc_domaininfo_t));
+ unlock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t));
return ret;
}
@@ -290,13 +290,13 @@ int xc_domain_hvm_getcontext(xc_interfac
set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
if ( ctxt_buf )
- if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
+ if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
return ret;
ret = do_domctl(xch, &domctl);
if ( ctxt_buf )
- unlock_pages(ctxt_buf, size);
+ unlock_pages(xch, ctxt_buf, size);
return (ret < 0 ? -1 : domctl.u.hvmcontext.size);
}
@@ -322,13 +322,13 @@ int xc_domain_hvm_getcontext_partial(xc_
domctl.u.hvmcontext_partial.instance = instance;
set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf);
- if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
+ if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
return ret;
ret = do_domctl(xch, &domctl);
if ( ctxt_buf )
- unlock_pages(ctxt_buf, size);
+ unlock_pages(xch, ctxt_buf, size);
return ret ? -1 : 0;
}
@@ -347,12 +347,12 @@ int xc_domain_hvm_setcontext(xc_interfac
domctl.u.hvmcontext.size = size;
set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
- if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
+ if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
return ret;
ret = do_domctl(xch, &domctl);
- unlock_pages(ctxt_buf, size);
+ unlock_pages(xch, ctxt_buf, size);
return ret;
}
@@ -372,10 +372,10 @@ int xc_vcpu_getcontext(xc_interface *xch
set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
- if ( (rc = lock_pages(ctxt, sz)) != 0 )
+ if ( (rc = lock_pages(xch, ctxt, sz)) != 0 )
return rc;
rc = do_domctl(xch, &domctl);
- unlock_pages(ctxt, sz);
+ unlock_pages(xch, ctxt, sz);
return rc;
}
@@ -394,7 +394,7 @@ int xc_watchdog(xc_interface *xch,
arg.id = id;
arg.timeout = timeout;
- if ( lock_pages(&arg, sizeof(arg)) != 0 )
+ if ( lock_pages(xch, &arg, sizeof(arg)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out1;
@@ -402,7 +402,7 @@ int xc_watchdog(xc_interface *xch,
ret = do_xen_hypercall(xch, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(xch, &arg, sizeof(arg));
out1:
return ret;
@@ -488,7 +488,7 @@ int xc_domain_set_memmap_limit(xc_interf
set_xen_guest_handle(fmap.map.buffer, &e820);
- if ( lock_pages(&fmap, sizeof(fmap)) || lock_pages(&e820, sizeof(e820)) )
+ if ( lock_pages(xch, &fmap, sizeof(fmap)) || lock_pages(xch, &e820, sizeof(e820)) )
{
PERROR("Could not lock memory for Xen hypercall");
rc = -1;
@@ -498,8 +498,8 @@ int xc_domain_set_memmap_limit(xc_interf
rc = xc_memory_op(xch, XENMEM_set_memory_map, &fmap);
out:
- unlock_pages(&fmap, sizeof(fmap));
- unlock_pages(&e820, sizeof(e820));
+ unlock_pages(xch, &fmap, sizeof(fmap));
+ unlock_pages(xch, &e820, sizeof(e820));
return rc;
}
#else
@@ -564,7 +564,7 @@ int xc_domain_get_tsc_info(xc_interface
domctl.cmd = XEN_DOMCTL_gettscinfo;
domctl.domain = (domid_t)domid;
set_xen_guest_handle(domctl.u.tsc_info.out_info, &info);
- if ( (rc = lock_pages(&info, sizeof(info))) != 0 )
+ if ( (rc = lock_pages(xch, &info, sizeof(info))) != 0 )
return rc;
rc = do_domctl(xch, &domctl);
if ( rc == 0 )
@@ -574,7 +574,7 @@ int xc_domain_get_tsc_info(xc_interface
*gtsc_khz = info.gtsc_khz;
*incarnation = info.incarnation;
}
- unlock_pages(&info,sizeof(info));
+ unlock_pages(xch, &info,sizeof(info));
return rc;
}
@@ -849,11 +849,11 @@ int xc_vcpu_setcontext(xc_interface *xch
domctl.u.vcpucontext.vcpu = vcpu;
set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
- if ( (rc = lock_pages(ctxt, sz)) != 0 )
+ if ( (rc = lock_pages(xch, ctxt, sz)) != 0 )
return rc;
rc = do_domctl(xch, &domctl);
- unlock_pages(ctxt, sz);
+ unlock_pages(xch, ctxt, sz);
return rc;
}
@@ -917,10 +917,10 @@ int xc_set_hvm_param(xc_interface *handl
arg.domid = dom;
arg.index = param;
arg.value = value;
- if ( lock_pages(&arg, sizeof(arg)) != 0 )
+ if ( lock_pages(handle, &arg, sizeof(arg)) != 0 )
return -1;
rc = do_xen_hypercall(handle, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(handle, &arg, sizeof(arg));
return rc;
}
@@ -935,10 +935,10 @@ int xc_get_hvm_param(xc_interface *handl
hypercall.arg[1] = (unsigned long)&arg;
arg.domid = dom;
arg.index = param;
- if ( lock_pages(&arg, sizeof(arg)) != 0 )
+ if ( lock_pages(handle, &arg, sizeof(arg)) != 0 )
return -1;
rc = do_xen_hypercall(handle, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(handle, &arg, sizeof(arg));
*value = arg.value;
return rc;
}
@@ -988,13 +988,13 @@ int xc_get_device_group(
set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array);
- if ( lock_pages(sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 )
+ if ( lock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 )
{
PERROR("Could not lock memory for xc_get_device_group");
return -ENOMEM;
}
rc = do_domctl(xch, &domctl);
- unlock_pages(sdev_array, max_sdevs * sizeof(*sdev_array));
+ unlock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array));
*num_sdevs = domctl.u.get_device_group.num_sdevs;
return rc;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c Tue Oct 12 15:06:41 2010 +0100
@@ -1181,13 +1181,13 @@ int xc_domain_restore(xc_interface *xch,
memset(ctx->p2m_batch, 0,
ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT));
- if ( lock_pages(region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
+ if ( lock_pages(xch, region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
{
PERROR("Could not lock region_mfn");
goto out;
}
- if ( lock_pages(ctx->p2m_batch, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
+ if ( lock_pages(xch, ctx->p2m_batch, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
{
ERROR("Could not lock p2m_batch");
goto out;
@@ -1547,7 +1547,7 @@ int xc_domain_restore(xc_interface *xch,
}
}
- if ( lock_pages(&ctxt, sizeof(ctxt)) )
+ if ( lock_pages(xch, &ctxt, sizeof(ctxt)) )
{
PERROR("Unable to lock ctxt");
return 1;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_domain_save.c Tue Oct 12 15:06:41 2010 +0100
@@ -1046,14 +1046,14 @@ int xc_domain_save(xc_interface *xch, in
memset(to_send, 0xff, BITMAP_SIZE);
- if ( lock_pages(to_send, BITMAP_SIZE) )
+ if ( lock_pages(xch, to_send, BITMAP_SIZE) )
{
PERROR("Unable to lock to_send");
return 1;
}
/* (to fix is local only) */
- if ( lock_pages(to_skip, BITMAP_SIZE) )
+ if ( lock_pages(xch, to_skip, BITMAP_SIZE) )
{
PERROR("Unable to lock to_skip");
return 1;
@@ -1091,7 +1091,7 @@ int xc_domain_save(xc_interface *xch, in
memset(pfn_type, 0,
ROUNDUP(MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT));
- if ( lock_pages(pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) )
+ if ( lock_pages(xch, pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) )
{
PERROR("Unable to lock pfn_type array");
goto out;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_evtchn.c
--- a/tools/libxc/xc_evtchn.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_evtchn.c Tue Oct 12 15:06:41 2010 +0100
@@ -33,7 +33,7 @@ static int do_evtchn_op(xc_interface *xc
hypercall.arg[0] = cmd;
hypercall.arg[1] = (unsigned long)arg;
- if ( lock_pages(arg, arg_size) != 0 )
+ if ( lock_pages(xch, arg, arg_size) != 0 )
{
PERROR("do_evtchn_op: arg lock failed");
goto out;
@@ -42,7 +42,7 @@ static int do_evtchn_op(xc_interface *xc
if ((ret = do_xen_hypercall(xch, &hypercall)) < 0 &&
!silently_fail)
ERROR("do_evtchn_op: HYPERVISOR_event_channel_op failed: %d",
ret);
- unlock_pages(arg, arg_size);
+ unlock_pages(xch, arg, arg_size);
out:
return ret;
}
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_flask.c
--- a/tools/libxc/xc_flask.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_flask.c Tue Oct 12 15:06:41 2010 +0100
@@ -44,7 +44,7 @@ int xc_flask_op(xc_interface *xch, flask
hypercall.op = __HYPERVISOR_xsm_op;
hypercall.arg[0] = (unsigned long)op;
- if ( lock_pages(op, sizeof(*op)) != 0 )
+ if ( lock_pages(xch, op, sizeof(*op)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out;
@@ -56,7 +56,7 @@ int xc_flask_op(xc_interface *xch, flask
fprintf(stderr, "XSM operation failed!\n");
}
- unlock_pages(op, sizeof(*op));
+ unlock_pages(xch, op, sizeof(*op));
out:
return ret;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_linux.c
--- a/tools/libxc/xc_linux.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_linux.c Tue Oct 12 15:06:41 2010 +0100
@@ -618,7 +618,7 @@ int xc_gnttab_op(xc_interface *xch, int
hypercall.arg[1] = (unsigned long)op;
hypercall.arg[2] = count;
- if ( lock_pages(op, count* op_size) != 0 )
+ if ( lock_pages(xch, op, count* op_size) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out1;
@@ -626,7 +626,7 @@ int xc_gnttab_op(xc_interface *xch, int
ret = do_xen_hypercall(xch, &hypercall);
- unlock_pages(op, count * op_size);
+ unlock_pages(xch, op, count * op_size);
out1:
return ret;
@@ -670,7 +670,7 @@ static void *_gnttab_map_table(xc_interf
*gnt_num = query.nr_frames * (PAGE_SIZE / sizeof(grant_entry_v1_t) );
frame_list = malloc(query.nr_frames * sizeof(unsigned long));
- if ( !frame_list || lock_pages(frame_list,
+ if ( !frame_list || lock_pages(xch, frame_list,
query.nr_frames * sizeof(unsigned long)) )
{
ERROR("Alloc/lock frame_list in xc_gnttab_map_table\n");
@@ -714,7 +714,7 @@ err:
err:
if ( frame_list )
{
- unlock_pages(frame_list, query.nr_frames * sizeof(unsigned long));
+ unlock_pages(xch, frame_list, query.nr_frames * sizeof(unsigned long));
free(frame_list);
}
if ( pfn_list )
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_misc.c Tue Oct 12 15:06:41 2010 +0100
@@ -42,7 +42,7 @@ int xc_readconsolering(xc_interface *xch
sysctl.u.readconsole.incremental = incremental;
}
- if ( (ret = lock_pages(buffer, nr_chars)) != 0 )
+ if ( (ret = lock_pages(xch, buffer, nr_chars)) != 0 )
return ret;
if ( (ret = do_sysctl(xch, &sysctl)) == 0 )
@@ -52,7 +52,7 @@ int xc_readconsolering(xc_interface *xch
*pindex = sysctl.u.readconsole.index;
}
- unlock_pages(buffer, nr_chars);
+ unlock_pages(xch, buffer, nr_chars);
return ret;
}
@@ -66,12 +66,12 @@ int xc_send_debug_keys(xc_interface *xch
set_xen_guest_handle(sysctl.u.debug_keys.keys, keys);
sysctl.u.debug_keys.nr_keys = len;
- if ( (ret = lock_pages(keys, len)) != 0 )
+ if ( (ret = lock_pages(xch, keys, len)) != 0 )
return ret;
ret = do_sysctl(xch, &sysctl);
- unlock_pages(keys, len);
+ unlock_pages(xch, keys, len);
return ret;
}
@@ -154,7 +154,7 @@ int xc_mca_op(xc_interface *xch, struct
DECLARE_HYPERCALL;
mc->interface_version = XEN_MCA_INTERFACE_VERSION;
- if ( lock_pages(mc, sizeof(mc)) )
+ if ( lock_pages(xch, mc, sizeof(mc)) )
{
PERROR("Could not lock xen_mc memory");
return -EINVAL;
@@ -163,7 +163,7 @@ int xc_mca_op(xc_interface *xch, struct
hypercall.op = __HYPERVISOR_mca;
hypercall.arg[0] = (unsigned long)mc;
ret = do_xen_hypercall(xch, &hypercall);
- unlock_pages(mc, sizeof(mc));
+ unlock_pages(xch, mc, sizeof(mc));
return ret;
}
#endif
@@ -227,12 +227,12 @@ int xc_getcpuinfo(xc_interface *xch, int
sysctl.u.getcpuinfo.max_cpus = max_cpus;
set_xen_guest_handle(sysctl.u.getcpuinfo.info, info);
- if ( (rc = lock_pages(info, max_cpus*sizeof(*info))) != 0 )
+ if ( (rc = lock_pages(xch, info, max_cpus*sizeof(*info))) != 0 )
return rc;
rc = do_sysctl(xch, &sysctl);
- unlock_pages(info, max_cpus*sizeof(*info));
+ unlock_pages(xch, info, max_cpus*sizeof(*info));
if ( nr_cpus )
*nr_cpus = sysctl.u.getcpuinfo.nr_cpus;
@@ -250,7 +250,7 @@ int xc_hvm_set_pci_intx_level(
struct xen_hvm_set_pci_intx_level _arg, *arg = &_arg;
int rc;
- if ( (rc = hcall_buf_prep((void **)&arg, sizeof(*arg))) != 0 )
+ if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
{
PERROR("Could not lock memory");
return rc;
@@ -269,7 +269,7 @@ int xc_hvm_set_pci_intx_level(
rc = do_xen_hypercall(xch, &hypercall);
- hcall_buf_release((void **)&arg, sizeof(*arg));
+ hcall_buf_release(xch, (void **)&arg, sizeof(*arg));
return rc;
}
@@ -283,7 +283,7 @@ int xc_hvm_set_isa_irq_level(
struct xen_hvm_set_isa_irq_level _arg, *arg = &_arg;
int rc;
- if ( (rc = hcall_buf_prep((void **)&arg, sizeof(*arg))) != 0 )
+ if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
{
PERROR("Could not lock memory");
return rc;
@@ -299,7 +299,7 @@ int xc_hvm_set_isa_irq_level(
rc = do_xen_hypercall(xch, &hypercall);
- hcall_buf_release((void **)&arg, sizeof(*arg));
+ hcall_buf_release(xch, (void **)&arg, sizeof(*arg));
return rc;
}
@@ -319,7 +319,7 @@ int xc_hvm_set_pci_link_route(
arg.link = link;
arg.isa_irq = isa_irq;
- if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+ if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
{
PERROR("Could not lock memory");
return rc;
@@ -327,7 +327,7 @@ int xc_hvm_set_pci_link_route(
rc = do_xen_hypercall(xch, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(xch, &arg, sizeof(arg));
return rc;
}
@@ -350,7 +350,7 @@ int xc_hvm_track_dirty_vram(
arg.nr = nr;
set_xen_guest_handle(arg.dirty_bitmap, (uint8_t *)dirty_bitmap);
- if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+ if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
{
PERROR("Could not lock memory");
return rc;
@@ -358,7 +358,7 @@ int xc_hvm_track_dirty_vram(
rc = do_xen_hypercall(xch, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(xch, &arg, sizeof(arg));
return rc;
}
@@ -378,7 +378,7 @@ int xc_hvm_modified_memory(
arg.first_pfn = first_pfn;
arg.nr = nr;
- if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+ if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
{
PERROR("Could not lock memory");
return rc;
@@ -386,7 +386,7 @@ int xc_hvm_modified_memory(
rc = do_xen_hypercall(xch, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(xch, &arg, sizeof(arg));
return rc;
}
@@ -407,7 +407,7 @@ int xc_hvm_set_mem_type(
arg.first_pfn = first_pfn;
arg.nr = nr;
- if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+ if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
{
PERROR("Could not lock memory");
return rc;
@@ -415,7 +415,7 @@ int xc_hvm_set_mem_type(
rc = do_xen_hypercall(xch, &hypercall);
- unlock_pages(&arg, sizeof(arg));
+ unlock_pages(xch, &arg, sizeof(arg));
return rc;
}
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_offline_page.c
--- a/tools/libxc/xc_offline_page.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_offline_page.c Tue Oct 12 15:06:41 2010 +0100
@@ -71,7 +71,7 @@ int xc_mark_page_online(xc_interface *xc
if ( !status || (end < start) )
return -EINVAL;
- if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
+ if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
{
ERROR("Could not lock memory for xc_mark_page_online\n");
return -EINVAL;
@@ -84,7 +84,7 @@ int xc_mark_page_online(xc_interface *xc
set_xen_guest_handle(sysctl.u.page_offline.status, status);
ret = xc_sysctl(xch, &sysctl);
- unlock_pages(status, sizeof(uint32_t)*(end - start + 1));
+ unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
return ret;
}
@@ -98,7 +98,7 @@ int xc_mark_page_offline(xc_interface *x
if ( !status || (end < start) )
return -EINVAL;
- if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
+ if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
{
ERROR("Could not lock memory for xc_mark_page_offline");
return -EINVAL;
@@ -111,7 +111,7 @@ int xc_mark_page_offline(xc_interface *x
set_xen_guest_handle(sysctl.u.page_offline.status, status);
ret = xc_sysctl(xch, &sysctl);
- unlock_pages(status, sizeof(uint32_t)*(end - start + 1));
+ unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
return ret;
}
@@ -125,7 +125,7 @@ int xc_query_page_offline_status(xc_inte
if ( !status || (end < start) )
return -EINVAL;
- if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
+ if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
{
ERROR("Could not lock memory for
xc_query_page_offline_status\n");
return -EINVAL;
@@ -138,7 +138,7 @@ int xc_query_page_offline_status(xc_inte
set_xen_guest_handle(sysctl.u.page_offline.status, status);
ret = xc_sysctl(xch, &sysctl);
- unlock_pages(status, sizeof(uint32_t)*(end - start + 1));
+ unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
return ret;
}
@@ -291,7 +291,7 @@ static int init_mem_info(xc_interface *x
minfo->pfn_type[i] = pfn_to_mfn(i, minfo->p2m_table,
minfo->guest_width);
- if ( lock_pages(minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) )
+ if ( lock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) )
{
ERROR("Unable to lock pfn_type array");
goto failed;
@@ -310,7 +310,7 @@ static int init_mem_info(xc_interface *x
return 0;
unlock:
- unlock_pages(minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type));
+ unlock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type));
failed:
if (minfo->pfn_type)
{
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_pm.c
--- a/tools/libxc/xc_pm.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_pm.c Tue Oct 12 15:06:41 2010 +0100
@@ -53,14 +53,14 @@ int xc_pm_get_pxstat(xc_interface *xch,
if ( (ret = xc_pm_get_max_px(xch, cpuid, &max_px)) != 0)
return ret;
- if ( (ret = lock_pages(pxpt->trans_pt,
+ if ( (ret = lock_pages(xch, pxpt->trans_pt,
max_px * max_px * sizeof(uint64_t))) != 0 )
return ret;
- if ( (ret = lock_pages(pxpt->pt,
+ if ( (ret = lock_pages(xch, pxpt->pt,
max_px * sizeof(struct xc_px_val))) != 0 )
{
- unlock_pages(pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
+ unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
return ret;
}
@@ -75,8 +75,8 @@ int xc_pm_get_pxstat(xc_interface *xch,
ret = xc_sysctl(xch, &sysctl);
if ( ret )
{
- unlock_pages(pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
- unlock_pages(pxpt->pt, max_px * sizeof(struct xc_px_val));
+ unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
+ unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val));
return ret;
}
@@ -85,8 +85,8 @@ int xc_pm_get_pxstat(xc_interface *xch,
pxpt->last = sysctl.u.get_pmstat.u.getpx.last;
pxpt->cur = sysctl.u.get_pmstat.u.getpx.cur;
- unlock_pages(pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
- unlock_pages(pxpt->pt, max_px * sizeof(struct xc_px_val));
+ unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
+ unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val));
return ret;
}
@@ -128,11 +128,11 @@ int xc_pm_get_cxstat(xc_interface *xch,
if ( (ret = xc_pm_get_max_cx(xch, cpuid, &max_cx)) )
goto unlock_0;
- if ( (ret = lock_pages(cxpt, sizeof(struct xc_cx_stat))) )
+ if ( (ret = lock_pages(xch, cxpt, sizeof(struct xc_cx_stat))) )
goto unlock_0;
- if ( (ret = lock_pages(cxpt->triggers, max_cx * sizeof(uint64_t))) )
+ if ( (ret = lock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t))) )
goto unlock_1;
- if ( (ret = lock_pages(cxpt->residencies, max_cx * sizeof(uint64_t))) )
+ if ( (ret = lock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t))) )
goto unlock_2;
sysctl.cmd = XEN_SYSCTL_get_pmstat;
@@ -155,11 +155,11 @@ int xc_pm_get_cxstat(xc_interface *xch,
cxpt->cc6 = sysctl.u.get_pmstat.u.getcx.cc6;
unlock_3:
- unlock_pages(cxpt->residencies, max_cx * sizeof(uint64_t));
+ unlock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t));
unlock_2:
- unlock_pages(cxpt->triggers, max_cx * sizeof(uint64_t));
+ unlock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t));
unlock_1:
- unlock_pages(cxpt, sizeof(struct xc_cx_stat));
+ unlock_pages(xch, cxpt, sizeof(struct xc_cx_stat));
unlock_0:
return ret;
}
@@ -200,13 +200,13 @@ int xc_get_cpufreq_para(xc_interface *xc
(!user_para->scaling_available_governors) )
return -EINVAL;
- if ( (ret = lock_pages(user_para->affected_cpus,
+ if ( (ret = lock_pages(xch, user_para->affected_cpus,
user_para->cpu_num * sizeof(uint32_t))) )
goto unlock_1;
- if ( (ret = lock_pages(user_para->scaling_available_frequencies,
+ if ( (ret = lock_pages(xch, user_para->scaling_available_frequencies,
user_para->freq_num * sizeof(uint32_t))) )
goto unlock_2;
- if ( (ret = lock_pages(user_para->scaling_available_governors,
+ if ( (ret = lock_pages(xch, user_para->scaling_available_governors,
user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char))) )
goto unlock_3;
@@ -263,13 +263,13 @@ int xc_get_cpufreq_para(xc_interface *xc
}
unlock_4:
- unlock_pages(user_para->scaling_available_governors,
+ unlock_pages(xch, user_para->scaling_available_governors,
user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char));
unlock_3:
- unlock_pages(user_para->scaling_available_frequencies,
+ unlock_pages(xch, user_para->scaling_available_frequencies,
user_para->freq_num * sizeof(uint32_t));
unlock_2:
- unlock_pages(user_para->affected_cpus,
+ unlock_pages(xch, user_para->affected_cpus,
user_para->cpu_num * sizeof(uint32_t));
unlock_1:
return ret;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:41 2010 +0100
@@ -71,7 +71,7 @@ xc_interface *xc_interface_open(xentooll
return 0;
}
-static void xc_clean_hcall_buf(void);
+static void xc_clean_hcall_buf(xc_interface *xch);
int xc_interface_close(xc_interface *xch)
{
@@ -85,7 +85,7 @@ int xc_interface_close(xc_interface *xch
if (rc) PERROR("Could not close hypervisor interface");
}
- xc_clean_hcall_buf();
+ xc_clean_hcall_buf(xch);
free(xch);
return rc;
@@ -193,17 +193,17 @@ void xc_report_progress_step(xc_interfac
#ifdef __sun__
-int lock_pages(void *addr, size_t len) { return 0; }
-void unlock_pages(void *addr, size_t len) { }
+int lock_pages(xc_interface *xch, void *addr, size_t len) { return 0; }
+void unlock_pages(xc_interface *xch, void *addr, size_t len) { }
-int hcall_buf_prep(void **addr, size_t len) { return 0; }
-void hcall_buf_release(void **addr, size_t len) { }
+int hcall_buf_prep(xc_interface *xch, void **addr, size_t len) { return 0; }
+void hcall_buf_release(xc_interface *xch, void **addr, size_t len) { }
-static void xc_clean_hcall_buf(void) { }
+static void xc_clean_hcall_buf(xc_interface *xch) { }
#else /* !__sun__ */
-int lock_pages(void *addr, size_t len)
+int lock_pages(xc_interface *xch, void *addr, size_t len)
{
int e;
void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
@@ -213,7 +213,7 @@ int lock_pages(void *addr, size_t len)
return e;
}
-void unlock_pages(void *addr, size_t len)
+void unlock_pages(xc_interface *xch, void *addr, size_t len)
{
void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) +
@@ -226,6 +226,7 @@ static pthread_key_t hcall_buf_pkey;
static pthread_key_t hcall_buf_pkey;
static pthread_once_t hcall_buf_pkey_once = PTHREAD_ONCE_INIT;
struct hcall_buf {
+ xc_interface *xch;
void *buf;
void *oldbuf;
};
@@ -238,7 +239,7 @@ static void _xc_clean_hcall_buf(void *m)
{
if ( hcall_buf->buf )
{
- unlock_pages(hcall_buf->buf, PAGE_SIZE);
+ unlock_pages(hcall_buf->xch, hcall_buf->buf, PAGE_SIZE);
free(hcall_buf->buf);
}
@@ -253,14 +254,14 @@ static void _xc_init_hcall_buf(void)
pthread_key_create(&hcall_buf_pkey, _xc_clean_hcall_buf);
}
-static void xc_clean_hcall_buf(void)
+static void xc_clean_hcall_buf(xc_interface *xch)
{
pthread_once(&hcall_buf_pkey_once, _xc_init_hcall_buf);
_xc_clean_hcall_buf(pthread_getspecific(hcall_buf_pkey));
}
-int hcall_buf_prep(void **addr, size_t len)
+int hcall_buf_prep(xc_interface *xch, void **addr, size_t len)
{
struct hcall_buf *hcall_buf;
@@ -272,13 +273,14 @@ int hcall_buf_prep(void **addr, size_t l
hcall_buf = calloc(1, sizeof(*hcall_buf));
if ( !hcall_buf )
goto out;
+ hcall_buf->xch = xch;
pthread_setspecific(hcall_buf_pkey, hcall_buf);
}
if ( !hcall_buf->buf )
{
hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE);
- if ( !hcall_buf->buf || lock_pages(hcall_buf->buf, PAGE_SIZE) )
+ if ( !hcall_buf->buf || lock_pages(xch, hcall_buf->buf, PAGE_SIZE) )
{
free(hcall_buf->buf);
hcall_buf->buf = NULL;
@@ -295,10 +297,10 @@ int hcall_buf_prep(void **addr, size_t l
}
out:
- return lock_pages(*addr, len);
+ return lock_pages(xch, *addr, len);
}
-void hcall_buf_release(void **addr, size_t len)
+void hcall_buf_release(xc_interface *xch, void **addr, size_t len)
{
struct hcall_buf *hcall_buf = pthread_getspecific(hcall_buf_pkey);
@@ -310,7 +312,7 @@ void hcall_buf_release(void **addr, size
}
else
{
- unlock_pages(*addr, len);
+ unlock_pages(xch, *addr, len);
}
}
@@ -337,7 +339,7 @@ int xc_mmuext_op(
DECLARE_HYPERCALL;
long ret = -EINVAL;
- if ( hcall_buf_prep((void **)&op, nr_ops*sizeof(*op)) != 0 )
+ if ( hcall_buf_prep(xch, (void **)&op, nr_ops*sizeof(*op)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out1;
@@ -351,7 +353,7 @@ int xc_mmuext_op(
ret = do_xen_hypercall(xch, &hypercall);
- hcall_buf_release((void **)&op, nr_ops*sizeof(*op));
+ hcall_buf_release(xch, (void **)&op, nr_ops*sizeof(*op));
out1:
return ret;
@@ -371,7 +373,7 @@ static int flush_mmu_updates(xc_interfac
hypercall.arg[2] = 0;
hypercall.arg[3] = mmu->subject;
- if ( lock_pages(mmu->updates, sizeof(mmu->updates)) != 0 )
+ if ( lock_pages(xch, mmu->updates, sizeof(mmu->updates)) != 0 )
{
PERROR("flush_mmu_updates: mmu updates lock_pages failed");
err = 1;
@@ -386,7 +388,7 @@ static int flush_mmu_updates(xc_interfac
mmu->idx = 0;
- unlock_pages(mmu->updates, sizeof(mmu->updates));
+ unlock_pages(xch, mmu->updates, sizeof(mmu->updates));
out:
return err;
@@ -438,38 +440,38 @@ int xc_memory_op(xc_interface *xch,
case XENMEM_increase_reservation:
case XENMEM_decrease_reservation:
case XENMEM_populate_physmap:
- if ( lock_pages(reservation, sizeof(*reservation)) != 0 )
+ if ( lock_pages(xch, reservation, sizeof(*reservation)) != 0 )
{
PERROR("Could not lock");
goto out1;
}
get_xen_guest_handle(extent_start, reservation->extent_start);
if ( (extent_start != NULL) &&
- (lock_pages(extent_start,
+ (lock_pages(xch, extent_start,
reservation->nr_extents * sizeof(xen_pfn_t)) != 0) )
{
PERROR("Could not lock");
- unlock_pages(reservation, sizeof(*reservation));
+ unlock_pages(xch, reservation, sizeof(*reservation));
goto out1;
}
break;
case XENMEM_machphys_mfn_list:
- if ( lock_pages(xmml, sizeof(*xmml)) != 0 )
+ if ( lock_pages(xch, xmml, sizeof(*xmml)) != 0 )
{
PERROR("Could not lock");
goto out1;
}
get_xen_guest_handle(extent_start, xmml->extent_start);
- if ( lock_pages(extent_start,
+ if ( lock_pages(xch, extent_start,
xmml->max_extents * sizeof(xen_pfn_t)) != 0 )
{
PERROR("Could not lock");
- unlock_pages(xmml, sizeof(*xmml));
+ unlock_pages(xch, xmml, sizeof(*xmml));
goto out1;
}
break;
case XENMEM_add_to_physmap:
- if ( lock_pages(arg, sizeof(struct xen_add_to_physmap)) )
+ if ( lock_pages(xch, arg, sizeof(struct xen_add_to_physmap)) )
{
PERROR("Could not lock");
goto out1;
@@ -478,7 +480,7 @@ int xc_memory_op(xc_interface *xch,
case XENMEM_current_reservation:
case XENMEM_maximum_reservation:
case XENMEM_maximum_gpfn:
- if ( lock_pages(arg, sizeof(domid_t)) )
+ if ( lock_pages(xch, arg, sizeof(domid_t)) )
{
PERROR("Could not lock");
goto out1;
@@ -486,7 +488,7 @@ int xc_memory_op(xc_interface *xch,
break;
case XENMEM_set_pod_target:
case XENMEM_get_pod_target:
- if ( lock_pages(arg, sizeof(struct xen_pod_target)) )
+ if ( lock_pages(xch, arg, sizeof(struct xen_pod_target)) )
{
PERROR("Could not lock");
goto out1;
@@ -501,29 +503,29 @@ int xc_memory_op(xc_interface *xch,
case XENMEM_increase_reservation:
case XENMEM_decrease_reservation:
case XENMEM_populate_physmap:
- unlock_pages(reservation, sizeof(*reservation));
+ unlock_pages(xch, reservation, sizeof(*reservation));
get_xen_guest_handle(extent_start, reservation->extent_start);
if ( extent_start != NULL )
- unlock_pages(extent_start,
+ unlock_pages(xch, extent_start,
reservation->nr_extents * sizeof(xen_pfn_t));
break;
case XENMEM_machphys_mfn_list:
- unlock_pages(xmml, sizeof(*xmml));
+ unlock_pages(xch, xmml, sizeof(*xmml));
get_xen_guest_handle(extent_start, xmml->extent_start);
- unlock_pages(extent_start,
+ unlock_pages(xch, extent_start,
xmml->max_extents * sizeof(xen_pfn_t));
break;
case XENMEM_add_to_physmap:
- unlock_pages(arg, sizeof(struct xen_add_to_physmap));
+ unlock_pages(xch, arg, sizeof(struct xen_add_to_physmap));
break;
case XENMEM_current_reservation:
case XENMEM_maximum_reservation:
case XENMEM_maximum_gpfn:
- unlock_pages(arg, sizeof(domid_t));
+ unlock_pages(xch, arg, sizeof(domid_t));
break;
case XENMEM_set_pod_target:
case XENMEM_get_pod_target:
- unlock_pages(arg, sizeof(struct xen_pod_target));
+ unlock_pages(xch, arg, sizeof(struct xen_pod_target));
break;
}
@@ -565,7 +567,7 @@ int xc_get_pfn_list(xc_interface *xch,
memset(pfn_buf, 0, max_pfns * sizeof(*pfn_buf));
#endif
- if ( lock_pages(pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 )
+ if ( lock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 )
{
PERROR("xc_get_pfn_list: pfn_buf lock failed");
return -1;
@@ -573,7 +575,7 @@ int xc_get_pfn_list(xc_interface *xch,
ret = do_domctl(xch, &domctl);
- unlock_pages(pfn_buf, max_pfns * sizeof(*pfn_buf));
+ unlock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf));
return (ret < 0) ? -1 : domctl.u.getmemlist.num_pfns;
}
@@ -648,7 +650,7 @@ int xc_version(xc_interface *xch, int cm
break;
}
- if ( (argsize != 0) && (lock_pages(arg, argsize) != 0) )
+ if ( (argsize != 0) && (lock_pages(xch, arg, argsize) != 0) )
{
PERROR("Could not lock memory for version hypercall");
return -ENOMEM;
@@ -662,7 +664,7 @@ int xc_version(xc_interface *xch, int cm
rc = do_xen_version(xch, cmd, arg);
if ( argsize != 0 )
- unlock_pages(arg, argsize);
+ unlock_pages(xch, arg, argsize);
return rc;
}
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_private.h Tue Oct 12 15:06:41 2010 +0100
@@ -100,11 +100,11 @@ void xc_report_progress_step(xc_interfac
void *xc_memalign(size_t alignment, size_t size);
-int lock_pages(void *addr, size_t len);
-void unlock_pages(void *addr, size_t len);
+int lock_pages(xc_interface *xch, void *addr, size_t len);
+void unlock_pages(xc_interface *xch, void *addr, size_t len);
-int hcall_buf_prep(void **addr, size_t len);
-void hcall_buf_release(void **addr, size_t len);
+int hcall_buf_prep(xc_interface *xch, void **addr, size_t len);
+void hcall_buf_release(xc_interface *xch, void **addr, size_t len);
int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall);
@@ -125,7 +125,7 @@ static inline int do_physdev_op(xc_inter
DECLARE_HYPERCALL;
- if ( hcall_buf_prep(&op, len) != 0 )
+ if ( hcall_buf_prep(xch, &op, len) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out1;
@@ -142,7 +142,7 @@ static inline int do_physdev_op(xc_inter
" rebuild the user-space tool set?\n");
}
- hcall_buf_release(&op, len);
+ hcall_buf_release(xch, &op, len);
out1:
return ret;
@@ -153,7 +153,7 @@ static inline int do_domctl(xc_interface
int ret = -1;
DECLARE_HYPERCALL;
- if ( hcall_buf_prep((void **)&domctl, sizeof(*domctl)) != 0 )
+ if ( hcall_buf_prep(xch, (void **)&domctl, sizeof(*domctl)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out1;
@@ -171,7 +171,7 @@ static inline int do_domctl(xc_interface
" rebuild the user-space tool set?\n");
}
- hcall_buf_release((void **)&domctl, sizeof(*domctl));
+ hcall_buf_release(xch, (void **)&domctl, sizeof(*domctl));
out1:
return ret;
@@ -182,7 +182,7 @@ static inline int do_sysctl(xc_interface
int ret = -1;
DECLARE_HYPERCALL;
- if ( hcall_buf_prep((void **)&sysctl, sizeof(*sysctl)) != 0 )
+ if ( hcall_buf_prep(xch, (void **)&sysctl, sizeof(*sysctl)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out1;
@@ -200,7 +200,7 @@ static inline int do_sysctl(xc_interface
" rebuild the user-space tool set?\n");
}
- hcall_buf_release((void **)&sysctl, sizeof(*sysctl));
+ hcall_buf_release(xch, (void **)&sysctl, sizeof(*sysctl));
out1:
return ret;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_resume.c
--- a/tools/libxc/xc_resume.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_resume.c Tue Oct 12 15:06:41 2010 +0100
@@ -196,7 +196,7 @@ static int xc_domain_resume_any(xc_inter
goto out;
}
- if ( lock_pages(&ctxt, sizeof(ctxt)) )
+ if ( lock_pages(xch, &ctxt, sizeof(ctxt)) )
{
ERROR("Unable to lock ctxt");
goto out;
@@ -235,7 +235,7 @@ static int xc_domain_resume_any(xc_inter
#if defined(__i386__) || defined(__x86_64__)
out:
- unlock_pages((void *)&ctxt, sizeof ctxt);
+ unlock_pages(xch, (void *)&ctxt, sizeof ctxt);
if (p2m)
munmap(p2m, P2M_FL_ENTRIES*PAGE_SIZE);
if (p2m_frame_list)
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_tbuf.c
--- a/tools/libxc/xc_tbuf.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_tbuf.c Tue Oct 12 15:06:41 2010 +0100
@@ -129,7 +129,7 @@ int xc_tbuf_set_cpu_mask(xc_interface *x
set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap);
sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8;
- if ( lock_pages(&bytemap, sizeof(bytemap)) != 0 )
+ if ( lock_pages(xch, &bytemap, sizeof(bytemap)) != 0 )
{
PERROR("Could not lock memory for Xen hypercall");
goto out;
@@ -137,7 +137,7 @@ int xc_tbuf_set_cpu_mask(xc_interface *x
ret = do_sysctl(xch, &sysctl);
- unlock_pages(&bytemap, sizeof(bytemap));
+ unlock_pages(xch, &bytemap, sizeof(bytemap));
out:
return ret;
diff -r 73a05c8f7c3e -r 29a5439889c3 tools/libxc/xc_tmem.c
--- a/tools/libxc/xc_tmem.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_tmem.c Tue Oct 12 15:06:41 2010 +0100
@@ -28,7 +28,7 @@ static int do_tmem_op(xc_interface *xch,
hypercall.op = __HYPERVISOR_tmem_op;
hypercall.arg[0] = (unsigned long)op;
- if (lock_pages(op, sizeof(*op)) != 0)
+ if (lock_pages(xch, op, sizeof(*op)) != 0)
{
PERROR("Could not lock memory for Xen hypercall");
return -EFAULT;
@@ -39,7 +39,7 @@ static int do_tmem_op(xc_interface *xch,
DPRINTF("tmem operation failed -- need to"
" rebuild the user-space tool set?\n");
}
- unlock_pages(op, sizeof(*op));
+ unlock_pages(xch, op, sizeof(*op));
return ret;
}
@@ -69,7 +69,7 @@ int xc_tmem_control(xc_interface *xch,
op.u.ctrl.oid[2] = 0;
if (subop == TMEMC_LIST) {
- if ((arg1 != 0) && (lock_pages(buf, arg1) != 0))
+ if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0))
{
PERROR("Could not lock memory for Xen hypercall");
return -ENOMEM;
@@ -85,7 +85,7 @@ int xc_tmem_control(xc_interface *xch,
if (subop == TMEMC_LIST) {
if (arg1 != 0)
- unlock_pages(buf, arg1);
+ unlock_pages(xch, buf, arg1);
}
return rc;
@@ -115,7 +115,7 @@ int xc_tmem_control_oid(xc_interface *xc
op.u.ctrl.oid[2] = oid.oid[2];
if (subop == TMEMC_LIST) {
- if ((arg1 != 0) && (lock_pages(buf, arg1) != 0))
+ if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0))
{
PERROR("Could not lock memory for Xen hypercall");
return -ENOMEM;
@@ -131,7 +131,7 @@ int xc_tmem_control_oid(xc_interface *xc
if (subop == TMEMC_LIST) {
if (arg1 != 0)
- unlock_pages(buf, arg1);
+ unlock_pages(xch, buf, arg1);
}
return rc;
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 03 of 18] libxc: remove unnecessary double indirection from xc_readconsolering
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892401 -3600
# Node ID a577eeeb43690b9df6be4789ced815e0c8e4cf13
# Parent 29a5439889c36e72df0f0828aee8f2b002a545b9
libxc: remove unnecessary double indirection from xc_readconsolering
The double indirection has been unnecessary since 9867:ec61a8c25429;
there is no longer any possibility of the buffer being reallocated.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r 29a5439889c3 -r a577eeeb4369 tools/console/daemon/io.c
--- a/tools/console/daemon/io.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/console/daemon/io.c Tue Oct 12 15:06:41 2010 +0100
@@ -887,7 +887,7 @@ static void handle_hv_logs(void)
if ((port = xc_evtchn_pending(xce_handle)) == -1)
return;
- if (xc_readconsolering(xch, &bufptr, &size, 0, 1, &index) == 0 && size > 0) {
+ if (xc_readconsolering(xch, bufptr, &size, 0, 1, &index) == 0 && size > 0) {
int logret;
if (log_time_hv)
logret = write_with_timestamp(log_hv_fd, buffer, size,
diff -r 29a5439889c3 -r a577eeeb4369 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_misc.c Tue Oct 12 15:06:41 2010 +0100
@@ -22,13 +22,12 @@
#include <xen/hvm/hvm_op.h>
int xc_readconsolering(xc_interface *xch,
- char **pbuffer,
+ char *buffer,
unsigned int *pnr_chars,
int clear, int incremental, uint32_t *pindex)
{
int ret;
DECLARE_SYSCTL;
- char *buffer = *pbuffer;
unsigned int nr_chars = *pnr_chars;
sysctl.cmd = XEN_SYSCTL_readconsole;
diff -r 29a5439889c3 -r a577eeeb4369 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:41 2010 +0100
@@ -729,7 +729,7 @@ int xc_physdev_pci_access_modify(xc_inte
int enable);
int xc_readconsolering(xc_interface *xch,
- char **pbuffer,
+ char *buffer,
unsigned int *pnr_chars,
int clear, int incremental, uint32_t *pindex);
diff -r 29a5439889c3 -r a577eeeb4369 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxl/libxl.c Tue Oct 12 15:06:41 2010 +0100
@@ -3464,7 +3464,7 @@ int libxl_xen_console_read_line(libxl_ct
int ret;
memset(cr->buffer, 0, cr->size);
- ret = xc_readconsolering(ctx->xch, &cr->buffer, &cr->count,
+ ret = xc_readconsolering(ctx->xch, cr->buffer, &cr->count,
cr->clear, cr->incremental,
&cr->index);
if (ret < 0) {
LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "reading console ring buffer");
diff -r 29a5439889c3 -r a577eeeb4369 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/python/xen/lowlevel/xc/xc.c Tue Oct 12 15:06:41 2010 +0100
@@ -1116,7 +1116,7 @@ static PyObject *pyxc_readconsolering(Xc
!str )
return NULL;
- ret = xc_readconsolering(self->xc_handle, &str, &count, clear,
+ ret = xc_readconsolering(self->xc_handle, str, &count, clear,
incremental, &index);
if ( ret < 0 )
return pyxc_error_to_exception(self->xc_handle);
@@ -1133,7 +1133,7 @@ static PyObject *pyxc_readconsolering(Xc
str = ptr + count;
count = size - count;
- ret = xc_readconsolering(self->xc_handle, &str, &count, clear,
+ ret = xc_readconsolering(self->xc_handle, str, &count, clear,
1, &index);
if ( ret < 0 )
break;
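For callers the conversion is mechanical; a hedged usage sketch of the
new signature (buffer size and variable names invented for the example,
xch assumed to come from xc_interface_open()):

    char buf[16384];
    unsigned int nr = sizeof(buf);
    uint32_t index = 0;

    /* Previously the buffer was passed as &buf even though it could no
     * longer be reallocated; now it is passed directly. */
    if ( xc_readconsolering(xch, buf, &nr, 0 /* clear */,
                            1 /* incremental */, &index) == 0 && nr > 0 )
        fwrite(buf, 1, nr, stdout);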
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 04 of 18] libxc: use correct size of struct xen_mc
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892401 -3600
# Node ID 15c4f1cde006e6d8309eff86a99b609c4c1f090a
# Parent a577eeeb43690b9df6be4789ced815e0c8e4cf13
libxc: use correct size of struct xen_mc
We want the size of the struct, not of the pointer (although the rounding
up to page size in lock_pages probably saves us).
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r a577eeeb4369 -r 15c4f1cde006 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_misc.c Tue Oct 12 15:06:41 2010 +0100
@@ -153,7 +153,7 @@ int xc_mca_op(xc_interface *xch, struct
DECLARE_HYPERCALL;
mc->interface_version = XEN_MCA_INTERFACE_VERSION;
- if ( lock_pages(xch, mc, sizeof(mc)) )
+ if ( lock_pages(xch, mc, sizeof(*mc)) )
{
PERROR("Could not lock xen_mc memory");
return -EINVAL;
@@ -162,7 +162,7 @@ int xc_mca_op(xc_interface *xch, struct
hypercall.op = __HYPERVISOR_mca;
hypercall.arg[0] = (unsigned long)mc;
ret = do_xen_hypercall(xch, &hypercall);
- unlock_pages(xch, mc, sizeof(mc));
+ unlock_pages(xch, mc, sizeof(*mc));
return ret;
}
#endif
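As a reminder of the bug class being fixed: with a pointer parameter,
sizeof(mc) yields the size of the pointer rather than of the structure
it points to. A minimal illustration (not libxc code; the function is
hypothetical):

    static void illustrate(struct xen_mc *mc)
    {
        size_t pointer_size = sizeof(mc);   /* 4 or 8 bytes: the pointer   */
        size_t struct_size  = sizeof(*mc);  /* sizeof(struct xen_mc): what
                                             * lock_pages() actually needs */
        (void)pointer_size; (void)struct_size;
    }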
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 05 of 18] libxc: add wrappers for XENMEM {increase, decrease}_reservation and populate_physmap
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 6834151bfad74e84e201062d4e8f3ae58155cd43
# Parent 15c4f1cde006e6d8309eff86a99b609c4c1f090a
libxc: add wrappers for XENMEM {increase,decrease}_reservation and
populate_physmap
Currently the wrappers for these hypercalls swallow partial success
and return failure to the caller.
In order to use these functions more widely, instead of open-coding
uses of XENMEM_* and xc_memory_op, add variants which return the actual
hypercall result.
Therefore add the following functions:
xc_domain_increase_reservation
xc_domain_decrease_reservation
xc_domain_populate_physmap
and implement the existing semantics using these new functions as
xc_domain_increase_reservation_exact
xc_domain_decrease_reservation_exact
xc_domain_populate_physmap_exact
replacing the existing xc_domain_memory_* functions.
Use these new functions to replace all open coded uses of
XENMEM_increase_reservation, XENMEM_decrease_reservation and
XENMEM_populate_physmap.
Also rename xc_domain_memory_*_pod_target to xc_domain_*_pod_target
for consistency.
Temporarily add a compatibility macro for
xc_domain_memory_populate_physmap to allow time for qemu to catch up.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/ia64/xc_ia64_hvm_build.c
--- a/tools/libxc/ia64/xc_ia64_hvm_build.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_hvm_build.c Tue Oct 12 15:06:42 2010 +0100
@@ -903,7 +903,7 @@ xc_ia64_setup_shared_info(xc_interface *
* In this function, we will allocate memory and build P2M/M2P table for VTI
* guest. Frist, a pfn list will be initialized discontiguous, normal memory
* begins with 0, GFW memory and other five pages at their place defined in
- * xen/include/public/arch-ia64.h xc_domain_memory_populate_physmap() called
+ * xen/include/public/arch-ia64.h xc_domain_populate_physmap_exact() called
* five times, to set parameter ''extent_order'' to different value, this is
* convenient to allocate discontiguous memory with different size.
*/
@@ -966,7 +966,7 @@ setup_guest(xc_interface *xch, uint32_t
pfn++)
pfn_list[i++] = pfn;
- rc = xc_domain_memory_populate_physmap(xch, dom, nr_pages, 0, 0,
+ rc = xc_domain_populate_physmap_exact(xch, dom, nr_pages, 0, 0,
&pfn_list[0]);
if (rc != 0) {
PERROR("Could not allocate normal memory for Vti guest.");
@@ -979,7 +979,7 @@ setup_guest(xc_interface *xch, uint32_t
for (i = 0; i < GFW_PAGES; i++)
pfn_list[i] = (GFW_START >> PAGE_SHIFT) + i;
- rc = xc_domain_memory_populate_physmap(xch, dom, GFW_PAGES,
+ rc = xc_domain_populate_physmap_exact(xch, dom, GFW_PAGES,
0, 0, &pfn_list[0]);
if (rc != 0) {
PERROR("Could not allocate GFW memory for Vti guest.");
@@ -995,7 +995,7 @@ setup_guest(xc_interface *xch, uint32_t
pfn_list[nr_special_pages] = memmap_info_pfn;
nr_special_pages++;
- rc = xc_domain_memory_populate_physmap(xch, dom, nr_special_pages,
+ rc = xc_domain_populate_physmap_exact(xch, dom, nr_special_pages,
0, 0, &pfn_list[0]);
if (rc != 0) {
PERROR("Could not allocate IO page or store page or buffer io
page.");
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/ia64/xc_ia64_linux_restore.c
--- a/tools/libxc/ia64/xc_ia64_linux_restore.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_linux_restore.c Tue Oct 12 15:06:42 2010 +0100
@@ -49,7 +49,7 @@ populate_page_if_necessary(xc_interface
if (xc_ia64_p2m_present(p2m_table, gmfn))
return 0;
- return xc_domain_memory_populate_physmap(xch, dom, 1, 0, 0, &gmfn);
+ return xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &gmfn);
}
static int
@@ -112,7 +112,7 @@ xc_ia64_recv_unallocated_list(xc_interfa
}
}
if (nr_frees > 0) {
- if (xc_domain_memory_decrease_reservation(xch, dom, nr_frees,
+ if (xc_domain_decrease_reservation_exact(xch, dom, nr_frees,
0, pfntab) < 0) {
PERROR("Could not decrease reservation");
goto out;
@@ -546,7 +546,7 @@ xc_ia64_hvm_domain_setup(xc_interface *x
};
unsigned long nr_pages = sizeof(pfn_list) / sizeof(pfn_list[0]);
- rc = xc_domain_memory_populate_physmap(xch, dom, nr_pages,
+ rc = xc_domain_populate_physmap_exact(xch, dom, nr_pages,
0, 0, &pfn_list[0]);
if (rc != 0)
PERROR("Could not allocate IO page or buffer io page.");
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/xc_dom_ia64.c
--- a/tools/libxc/xc_dom_ia64.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_dom_ia64.c Tue Oct 12 15:06:42 2010 +0100
@@ -186,7 +186,7 @@ int arch_setup_meminit(struct xc_dom_ima
dom->p2m_host[pfn] = start + pfn;
/* allocate guest memory */
- rc = xc_domain_memory_populate_physmap(dom->xch, dom->guest_domid,
+ rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
nbr, 0, 0,
dom->p2m_host);
return rc;
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/xc_dom_x86.c
--- a/tools/libxc/xc_dom_x86.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_dom_x86.c Tue Oct 12 15:06:42 2010 +0100
@@ -733,7 +733,7 @@ int arch_setup_meminit(struct xc_dom_ima
DOMPRINTF("Populating memory with %d superpages", count);
for ( pfn = 0; pfn < count; pfn++ )
extents[pfn] = pfn << SUPERPAGE_PFN_SHIFT;
- rc = xc_domain_memory_populate_physmap(dom->xch, dom->guest_domid,
+ rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
count, SUPERPAGE_PFN_SHIFT, 0,
extents);
if ( rc )
@@ -762,7 +762,7 @@ int arch_setup_meminit(struct xc_dom_ima
allocsz = dom->total_pages - i;
if ( allocsz > 1024*1024 )
allocsz = 1024*1024;
- rc = xc_domain_memory_populate_physmap(
+ rc = xc_domain_populate_physmap_exact(
dom->xch, dom->guest_domid, allocsz,
0, 0, &dom->p2m_host[i]);
}
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
@@ -579,12 +579,12 @@ int xc_domain_get_tsc_info(xc_interface
}
-int xc_domain_memory_increase_reservation(xc_interface *xch,
- uint32_t domid,
- unsigned long nr_extents,
- unsigned int extent_order,
- unsigned int mem_flags,
- xen_pfn_t *extent_start)
+int xc_domain_increase_reservation(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start)
{
int err;
struct xen_memory_reservation reservation = {
@@ -598,6 +598,22 @@ int xc_domain_memory_increase_reservatio
set_xen_guest_handle(reservation.extent_start, extent_start);
err = xc_memory_op(xch, XENMEM_increase_reservation, &reservation);
+
+ return err;
+}
+
+int xc_domain_increase_reservation_exact(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start)
+{
+ int err;
+
+ err = xc_domain_increase_reservation(xch, domid, nr_extents,
+ extent_order, mem_flags, extent_start);
+
if ( err == nr_extents )
return 0;
@@ -613,11 +629,11 @@ int xc_domain_memory_increase_reservatio
return err;
}
-int xc_domain_memory_decrease_reservation(xc_interface *xch,
- uint32_t domid,
- unsigned long nr_extents,
- unsigned int extent_order,
- xen_pfn_t *extent_start)
+int xc_domain_decrease_reservation(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ xen_pfn_t *extent_start)
{
int err;
struct xen_memory_reservation reservation = {
@@ -637,6 +653,21 @@ int xc_domain_memory_decrease_reservatio
}
err = xc_memory_op(xch, XENMEM_decrease_reservation, &reservation);
+
+ return err;
+}
+
+int xc_domain_decrease_reservation_exact(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ xen_pfn_t *extent_start)
+{
+ int err;
+
+ err = xc_domain_decrease_reservation(xch, domid, nr_extents,
+ extent_order, extent_start);
+
if ( err == nr_extents )
return 0;
@@ -651,12 +682,12 @@ int xc_domain_memory_decrease_reservatio
return err;
}
-int xc_domain_memory_populate_physmap(xc_interface *xch,
- uint32_t domid,
- unsigned long nr_extents,
- unsigned int extent_order,
- unsigned int mem_flags,
- xen_pfn_t *extent_start)
+int xc_domain_populate_physmap(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start)
{
int err;
struct xen_memory_reservation reservation = {
@@ -668,6 +699,21 @@ int xc_domain_memory_populate_physmap(xc
set_xen_guest_handle(reservation.extent_start, extent_start);
err = xc_memory_op(xch, XENMEM_populate_physmap, &reservation);
+
+ return err;
+}
+
+int xc_domain_populate_physmap_exact(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start)
+{
+ int err;
+
+ err = xc_domain_populate_physmap(xch, domid, nr_extents,
+ extent_order, mem_flags, extent_start);
if ( err == nr_extents )
return 0;
@@ -682,13 +728,13 @@ int xc_domain_memory_populate_physmap(xc
return err;
}
-static int xc_domain_memory_pod_target(xc_interface *xch,
- int op,
- uint32_t domid,
- uint64_t target_pages,
- uint64_t *tot_pages,
- uint64_t *pod_cache_pages,
- uint64_t *pod_entries)
+static int xc_domain_pod_target(xc_interface *xch,
+ int op,
+ uint32_t domid,
+ uint64_t target_pages,
+ uint64_t *tot_pages,
+ uint64_t *pod_cache_pages,
+ uint64_t *pod_entries)
{
int err;
@@ -701,7 +747,7 @@ static int xc_domain_memory_pod_target(x
if ( err < 0 )
{
- DPRINTF("Failed %s_memory_target dom %d\n",
+ DPRINTF("Failed %s_pod_target dom %d\n",
(op==XENMEM_set_pod_target)?"set":"get",
domid);
errno = -err;
@@ -719,37 +765,37 @@ static int xc_domain_memory_pod_target(x
return err;
}
-
-int xc_domain_memory_set_pod_target(xc_interface *xch,
- uint32_t domid,
- uint64_t target_pages,
- uint64_t *tot_pages,
- uint64_t *pod_cache_pages,
- uint64_t *pod_entries)
+
+int xc_domain_set_pod_target(xc_interface *xch,
+ uint32_t domid,
+ uint64_t target_pages,
+ uint64_t *tot_pages,
+ uint64_t *pod_cache_pages,
+ uint64_t *pod_entries)
{
- return xc_domain_memory_pod_target(xch,
- XENMEM_set_pod_target,
- domid,
- target_pages,
- tot_pages,
- pod_cache_pages,
- pod_entries);
+ return xc_domain_pod_target(xch,
+ XENMEM_set_pod_target,
+ domid,
+ target_pages,
+ tot_pages,
+ pod_cache_pages,
+ pod_entries);
}
-int xc_domain_memory_get_pod_target(xc_interface *xch,
- uint32_t domid,
- uint64_t *tot_pages,
- uint64_t *pod_cache_pages,
- uint64_t *pod_entries)
+int xc_domain_get_pod_target(xc_interface *xch,
+ uint32_t domid,
+ uint64_t *tot_pages,
+ uint64_t *pod_cache_pages,
+ uint64_t *pod_entries)
{
- return xc_domain_memory_pod_target(xch,
- XENMEM_get_pod_target,
- domid,
- -1,
- tot_pages,
- pod_cache_pages,
- pod_entries);
+ return xc_domain_pod_target(xch,
+ XENMEM_get_pod_target,
+ domid,
+ -1,
+ tot_pages,
+ pod_cache_pages,
+ pod_entries);
}
int xc_domain_max_vcpus(xc_interface *xch, uint32_t domid, unsigned int max)
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c Tue Oct 12 15:06:42 2010 +0100
@@ -147,7 +147,7 @@ static int uncanonicalize_pagetable(
/* Allocate the requisite number of mfns. */
if ( nr_mfns &&
- (xc_domain_memory_populate_physmap(xch, dom, nr_mfns, 0, 0,
+ (xc_domain_populate_physmap_exact(xch, dom, nr_mfns, 0, 0,
ctx->p2m_batch) != 0) )
{
ERROR("Failed to allocate memory for batch.!\n");
@@ -888,7 +888,7 @@ static int apply_batch(xc_interface *xch
/* Now allocate a bunch of mfns for this batch */
if ( nr_mfns &&
- (xc_domain_memory_populate_physmap(xch, dom, nr_mfns, 0,
+ (xc_domain_populate_physmap_exact(xch, dom, nr_mfns, 0,
0, ctx->p2m_batch) != 0) )
{
ERROR("Failed to allocate memory for batch.!\n");
@@ -1529,15 +1529,7 @@ int xc_domain_restore(xc_interface *xch,
if ( nr_frees > 0 )
{
- struct xen_memory_reservation reservation = {
- .nr_extents = nr_frees,
- .extent_order = 0,
- .domid = dom
- };
- set_xen_guest_handle(reservation.extent_start, tailbuf.u.pv.pfntab);
-
- if ( (frc = xc_memory_op(xch, XENMEM_decrease_reservation,
- &reservation)) != nr_frees )
+ if ( (frc = xc_domain_decrease_reservation(xch, dom, nr_frees, 0,
tailbuf.u.pv.pfntab)) != nr_frees )
{
PERROR("Could not decrease reservation : %d", frc);
goto out;
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_hvm_build.c Tue Oct 12 15:06:42 2010 +0100
@@ -203,7 +203,7 @@ static int setup_guest(xc_interface *xch
* Under 2MB mode, we allocate pages in batches of no more than 8MB to
* ensure that we can be preempted and hence dom0 remains responsive.
*/
- rc = xc_domain_memory_populate_physmap(
+ rc = xc_domain_populate_physmap_exact(
xch, dom, 0xa0, 0, 0, &page_array[0x00]);
cur_pages = 0xc0;
stat_normal_pages = 0xc0;
@@ -233,20 +233,16 @@ static int setup_guest(xc_interface *xch
SUPERPAGE_1GB_NR_PFNS << PAGE_SHIFT) )
{
long done;
- xen_pfn_t sp_extents[count >> SUPERPAGE_1GB_SHIFT];
- struct xen_memory_reservation sp_req = {
- .nr_extents = count >> SUPERPAGE_1GB_SHIFT,
- .extent_order = SUPERPAGE_1GB_SHIFT,
- .domid = dom
- };
+ unsigned long nr_extents = count >> SUPERPAGE_1GB_SHIFT;
+ xen_pfn_t sp_extents[nr_extents];
- if ( pod_mode )
- sp_req.mem_flags = XENMEMF_populate_on_demand;
+ for ( i = 0; i < nr_extents; i++ )
+ sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
- set_xen_guest_handle(sp_req.extent_start, sp_extents);
- for ( i = 0; i < sp_req.nr_extents; i++ )
- sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
- done = xc_memory_op(xch, XENMEM_populate_physmap, &sp_req);
+ done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_1GB_SHIFT,
+ pod_mode ? XENMEMF_populate_on_demand : 0,
+ sp_extents);
+
if ( done > 0 )
{
stat_1gb_pages += done;
@@ -275,20 +271,16 @@ static int setup_guest(xc_interface *xch
if ( ((count | cur_pages) & (SUPERPAGE_2MB_NR_PFNS - 1)) == 0 )
{
long done;
- xen_pfn_t sp_extents[count >> SUPERPAGE_2MB_SHIFT];
- struct xen_memory_reservation sp_req = {
- .nr_extents = count >> SUPERPAGE_2MB_SHIFT,
- .extent_order = SUPERPAGE_2MB_SHIFT,
- .domid = dom
- };
+ unsigned long nr_extents = count >> SUPERPAGE_2MB_SHIFT;
+ xen_pfn_t sp_extents[nr_extents];
- if ( pod_mode )
- sp_req.mem_flags = XENMEMF_populate_on_demand;
+ for ( i = 0; i < nr_extents; i++ )
+ sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_2MB_SHIFT)];
- set_xen_guest_handle(sp_req.extent_start, sp_extents);
- for ( i = 0; i < sp_req.nr_extents; i++ )
- sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_2MB_SHIFT)];
- done = xc_memory_op(xch, XENMEM_populate_physmap, &sp_req);
+ done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_2MB_SHIFT,
+ pod_mode ? XENMEMF_populate_on_demand : 0,
+ sp_extents);
+
if ( done > 0 )
{
stat_2mb_pages += done;
@@ -302,7 +294,7 @@ static int setup_guest(xc_interface *xch
/* Fall back to 4kB extents. */
if ( count != 0 )
{
- rc = xc_domain_memory_populate_physmap(
+ rc = xc_domain_populate_physmap_exact(
xch, dom, count, 0, 0, &page_array[cur_pages]);
cur_pages += count;
stat_normal_pages += count;
@@ -313,10 +305,8 @@ static int setup_guest(xc_interface *xch
* adjust the PoD cache size so that domain tot_pages will be
* target_pages - 0x20 after this call. */
if ( pod_mode )
- rc = xc_domain_memory_set_pod_target(xch,
- dom,
- target_pages - 0x20,
- NULL, NULL, NULL);
+ rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+ NULL, NULL, NULL);
if ( rc != 0 )
{
@@ -344,7 +334,7 @@ static int setup_guest(xc_interface *xch
for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
{
xen_pfn_t pfn = special_pfn(i);
- rc = xc_domain_memory_populate_physmap(xch, dom, 1, 0, 0, &pfn);
+ rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
if ( rc != 0 )
{
PERROR("Could not allocate %d''th special page.",
i);
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
@@ -675,14 +675,14 @@ unsigned long xc_make_page_below_4G(
xen_pfn_t old_mfn = mfn;
xen_pfn_t new_mfn;
- if ( xc_domain_memory_decrease_reservation(
+ if ( xc_domain_decrease_reservation_exact(
xch, domid, 1, 0, &old_mfn) != 0 )
{
DPRINTF("xc_make_page_below_4G decrease failed.
mfn=%lx\n",mfn);
return 0;
}
- if ( xc_domain_memory_increase_reservation(
+ if ( xc_domain_increase_reservation_exact(
xch, domid, 1, 0, XENMEMF_address_bits(32), &new_mfn) != 0 )
{
DPRINTF("xc_make_page_below_4G increase failed.
mfn=%lx\n",mfn);
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -785,38 +785,62 @@ int xc_domain_get_tsc_info(xc_interface
int xc_domain_disable_migrate(xc_interface *xch, uint32_t domid);
-int xc_domain_memory_increase_reservation(xc_interface *xch,
- uint32_t domid,
- unsigned long nr_extents,
- unsigned int extent_order,
- unsigned int mem_flags,
- xen_pfn_t *extent_start);
+int xc_domain_increase_reservation(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start);
-int xc_domain_memory_decrease_reservation(xc_interface *xch,
- uint32_t domid,
- unsigned long nr_extents,
- unsigned int extent_order,
- xen_pfn_t *extent_start);
+int xc_domain_increase_reservation_exact(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start);
-int xc_domain_memory_populate_physmap(xc_interface *xch,
- uint32_t domid,
- unsigned long nr_extents,
- unsigned int extent_order,
- unsigned int mem_flags,
- xen_pfn_t *extent_start);
+int xc_domain_decrease_reservation(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ xen_pfn_t *extent_start);
-int xc_domain_memory_set_pod_target(xc_interface *xch,
- uint32_t domid,
- uint64_t target_pages,
- uint64_t *tot_pages,
- uint64_t *pod_cache_pages,
- uint64_t *pod_entries);
+int xc_domain_decrease_reservation_exact(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ xen_pfn_t *extent_start);
-int xc_domain_memory_get_pod_target(xc_interface *xch,
- uint32_t domid,
- uint64_t *tot_pages,
- uint64_t *pod_cache_pages,
- uint64_t *pod_entries);
+int xc_domain_populate_physmap(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start);
+
+int xc_domain_populate_physmap_exact(xc_interface *xch,
+ uint32_t domid,
+ unsigned long nr_extents,
+ unsigned int extent_order,
+ unsigned int mem_flags,
+ xen_pfn_t *extent_start);
+
+/* Temporary for compatibility */
+#define xc_domain_memory_populate_physmap(x, d, nr, eo, mf, es) \
+ xc_domain_populate_physmap_exact(x, d, nr, eo, mf, es)
+
+int xc_domain_set_pod_target(xc_interface *xch,
+ uint32_t domid,
+ uint64_t target_pages,
+ uint64_t *tot_pages,
+ uint64_t *pod_cache_pages,
+ uint64_t *pod_entries);
+
+int xc_domain_get_pod_target(xc_interface *xch,
+ uint32_t domid,
+ uint64_t *tot_pages,
+ uint64_t *pod_cache_pages,
+ uint64_t *pod_entries);
int xc_domain_ioport_permission(xc_interface *xch,
uint32_t domid,
diff -r 15c4f1cde006 -r 6834151bfad7 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/libxl/libxl.c Tue Oct 12 15:06:42 2010 +0100
@@ -2948,11 +2948,11 @@ retry_transaction:
}
new_target_memkb -= videoram;
- rc = xc_domain_memory_set_pod_target(ctx->xch, domid,
+ rc = xc_domain_set_pod_target(ctx->xch, domid,
new_target_memkb / 4, NULL, NULL, NULL);
if (rc != 0) {
LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
- "xc_domain_memory_set_pod_target domid=%d, memkb=%d "
+ "xc_domain_set_pod_target domid=%d, memkb=%d "
"failed rc=%d\n", domid, new_target_memkb / 4,
rc);
abort = 1;
diff -r 15c4f1cde006 -r 6834151bfad7 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c Tue Oct 12 15:06:41 2010 +0100
+++ b/tools/python/xen/lowlevel/xc/xc.c Tue Oct 12 15:06:42 2010 +0100
@@ -1635,8 +1635,8 @@ static PyObject *pyxc_domain_set_target_
mem_pages = mem_kb / 4;
- if (xc_domain_memory_set_pod_target(self->xc_handle, dom, mem_pages,
- NULL, NULL, NULL) != 0)
+ if (xc_domain_set_pod_target(self->xc_handle, dom, mem_pages,
+ NULL, NULL, NULL) != 0)
return pyxc_error_to_exception(self->xc_handle);
Py_INCREF(zero);
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
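(Illustrative aside, not part of the posted series: a minimal sketch of how callers might use the renamed allocation functions above, assuming an already-open xc_interface handle and a caller-provided pfn array; the wrapper names here are hypothetical.)

#include <xenctrl.h>

/* All-or-nothing: returns 0 only if every extent was populated. */
static int populate_exact(xc_interface *xch, uint32_t domid,
                          unsigned long count, xen_pfn_t *pfns)
{
    return xc_domain_populate_physmap_exact(xch, domid, count, 0, 0, pfns);
}

/* Best-effort: returns the number of extents actually populated, so the
 * caller can cope with partial success (as the HVM builder does for
 * superpages). */
static int populate_partial(xc_interface *xch, uint32_t domid,
                            unsigned long count, xen_pfn_t *pfns)
{
    return xc_domain_populate_physmap(xch, domid, count, 0, 0, pfns);
}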
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 06 of 18] libxc: add xc_domain_memory_exchange_pages to wrap XENMEM_exchange
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID d284f5cbda808a8ac816829bdd67c8a9f692c8e4
# Parent 6834151bfad74e84e201062d4e8f3ae58155cd43
libxc: add xc_domain_memory_exchange_pages to wrap XENMEM_exchange
Generalised from exchange_page in xc_offline_page.c
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
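(Illustrative aside, not part of the posted patch: exchanging a single 4kB page via the new wrapper, mirroring the xc_offline_page.c caller in the diff below; "exchange_one_page" is a hypothetical name.)

#include <xenctrl.h>

static int exchange_one_page(xc_interface *xch, int domid,
                             xen_pfn_t mfn, xen_pfn_t *new_mfn)
{
    /* One order-0 extent in, one order-0 extent out, same domain. */
    return xc_domain_memory_exchange_pages(xch, domid,
                                           1, 0, &mfn,
                                           1, 0, new_mfn);
}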
diff -r 6834151bfad7 -r d284f5cbda80 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
@@ -726,6 +726,37 @@ int xc_domain_populate_physmap_exact(xc_
}
return err;
+}
+
+int xc_domain_memory_exchange_pages(xc_interface *xch,
+ int domid,
+ unsigned long nr_in_extents,
+ unsigned int in_order,
+ xen_pfn_t *in_extents,
+ unsigned long nr_out_extents,
+ unsigned int out_order,
+ xen_pfn_t *out_extents)
+{
+ int rc;
+
+ struct xen_memory_exchange exchange = {
+ .in = {
+ .nr_extents = nr_in_extents,
+ .extent_order = in_order,
+ .domid = domid
+ },
+ .out = {
+ .nr_extents = nr_out_extents,
+ .extent_order = out_order,
+ .domid = domid
+ }
+ };
+ set_xen_guest_handle(exchange.in.extent_start, in_extents);
+ set_xen_guest_handle(exchange.out.extent_start, out_extents);
+
+ rc = xc_memory_op(xch, XENMEM_exchange, &exchange);
+
+ return rc;
}
static int xc_domain_pod_target(xc_interface *xch,
diff -r 6834151bfad7 -r d284f5cbda80 tools/libxc/xc_offline_page.c
--- a/tools/libxc/xc_offline_page.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_offline_page.c Tue Oct 12 15:06:42 2010 +0100
@@ -512,35 +512,6 @@ static int clear_pte(xc_interface *xch,
__clear_pte, mfn);
}
-static int exchange_page(xc_interface *xch, xen_pfn_t mfn,
- xen_pfn_t *new_mfn, int domid)
-{
- int rc;
- xen_pfn_t out_mfn;
-
- struct xen_memory_exchange exchange = {
- .in = {
- .nr_extents = 1,
- .extent_order = 0,
- .domid = domid
- },
- .out = {
- .nr_extents = 1,
- .extent_order = 0,
- .domid = domid
- }
- };
- set_xen_guest_handle(exchange.in.extent_start, &mfn);
- set_xen_guest_handle(exchange.out.extent_start, &out_mfn);
-
- rc = xc_memory_op(xch, XENMEM_exchange, &exchange);
-
- if (!rc)
- *new_mfn = out_mfn;
-
- return rc;
-}
-
/*
* Check if a page can be exchanged successfully
*/
@@ -704,7 +675,9 @@ int xc_exchange_page(xc_interface *xch,
goto failed;
}
- rc = exchange_page(xch, mfn, &new_mfn, domid);
+ rc = xc_domain_memory_exchange_pages(xch, domid,
+ 1, 0, &mfn,
+ 1, 0, &new_mfn);
if (rc)
{
diff -r 6834151bfad7 -r d284f5cbda80 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -828,6 +828,15 @@ int xc_domain_populate_physmap_exact(xc_
/* Temporary for compatibility */
#define xc_domain_memory_populate_physmap(x, d, nr, eo, mf, es) \
xc_domain_populate_physmap_exact(x, d, nr, eo, mf, es)
+
+int xc_domain_memory_exchange_pages(xc_interface *xch,
+ int domid,
+ unsigned long nr_in_extents,
+ unsigned int in_order,
+ xen_pfn_t *in_extents,
+ unsigned long nr_out_extents,
+ unsigned int out_order,
+ xen_pfn_t *out_extents);
int xc_domain_set_pod_target(xc_interface *xch,
uint32_t domid,
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 07 of 18] libxc: add xc_domain_add_to_physmap to wrap XENMEM_add_to_physmap
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 901ec3e53b42d599fe8d8e148797cfc729774702
# Parent d284f5cbda808a8ac816829bdd67c8a9f692c8e4
libxc: add xc_domain_add_to_physmap to wrap XENMEM_add_to_physmap
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
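(Illustrative aside, not part of the posted patch: mapping the shared info frame with the new wrapper, assuming an open handle and a chosen guest pfn; "map_shared_info" is a hypothetical name.)

#include <xenctrl.h>

static int map_shared_info(xc_interface *xch, uint32_t domid, xen_pfn_t gpfn)
{
    /* space = XENMAPSPACE_shared_info, idx = 0, placed at gpfn. */
    return xc_domain_add_to_physmap(xch, domid,
                                    XENMAPSPACE_shared_info, 0, gpfn);
}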
diff -r d284f5cbda80 -r 901ec3e53b42 tools/libxc/xc_dom_x86.c
--- a/tools/libxc/xc_dom_x86.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_dom_x86.c Tue Oct 12 15:06:42 2010 +0100
@@ -815,31 +815,26 @@ int arch_setup_bootlate(struct xc_dom_im
else
{
/* paravirtualized guest with auto-translation */
- struct xen_add_to_physmap xatp;
int i;
/* Map shared info frame into guest physmap. */
- xatp.domid = dom->guest_domid;
- xatp.space = XENMAPSPACE_shared_info;
- xatp.idx = 0;
- xatp.gpfn = dom->shared_info_pfn;
- rc = xc_memory_op(dom->xch, XENMEM_add_to_physmap, &xatp);
+ rc = xc_domain_add_to_physmap(dom->xch, dom->guest_domid,
+ XENMAPSPACE_shared_info,
+ 0, dom->shared_info_pfn);
if ( rc != 0 )
{
xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, "%s:
mapping"
" shared_info failed (pfn=0x%" PRIpfn
", rc=%d)",
- __FUNCTION__, xatp.gpfn, rc);
+ __FUNCTION__, dom->shared_info_pfn, rc);
return rc;
}
/* Map grant table frames into guest physmap. */
for ( i = 0; ; i++ )
{
- xatp.domid = dom->guest_domid;
- xatp.space = XENMAPSPACE_grant_table;
- xatp.idx = i;
- xatp.gpfn = dom->total_pages + i;
- rc = xc_memory_op(dom->xch, XENMEM_add_to_physmap, &xatp);
+ rc = xc_domain_add_to_physmap(dom->xch, dom->guest_domid,
+ XENMAPSPACE_grant_table,
+ i, dom->total_pages + i);
if ( rc != 0 )
{
if ( (i > 0) && (errno == EINVAL) )
@@ -849,7 +844,7 @@ int arch_setup_bootlate(struct xc_dom_im
}
xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
"%s: mapping grant tables failed "
"(pfn=0x%"
- PRIpfn ", rc=%d)", __FUNCTION__,
xatp.gpfn, rc);
+ PRIpfn ", rc=%d)", __FUNCTION__,
dom->total_pages + i, rc);
return rc;
}
}
diff -r d284f5cbda80 -r 901ec3e53b42 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
@@ -680,6 +680,21 @@ int xc_domain_decrease_reservation_exact
}
return err;
+}
+
+int xc_domain_add_to_physmap(xc_interface *xch,
+ uint32_t domid,
+ unsigned int space,
+ unsigned long idx,
+ xen_pfn_t gpfn)
+{
+ struct xen_add_to_physmap xatp = {
+ .domid = domid,
+ .space = space,
+ .idx = idx,
+ .gpfn = gpfn,
+ };
+ return xc_memory_op(xch, XENMEM_add_to_physmap, &xatp);
}
int xc_domain_populate_physmap(xc_interface *xch,
diff -r d284f5cbda80 -r 901ec3e53b42 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -810,6 +810,12 @@ int xc_domain_decrease_reservation_exact
unsigned long nr_extents,
unsigned int extent_order,
xen_pfn_t *extent_start);
+
+int xc_domain_add_to_physmap(xc_interface *xch,
+ uint32_t domid,
+ unsigned int space,
+ unsigned long idx,
+ xen_pfn_t gpfn);
int xc_domain_populate_physmap(xc_interface *xch,
uint32_t domid,
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 08 of 18] libxc: add xc_domain_maximum_gpfn to wrap XENMEM_maximum_gpfn
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 9c4485d27ea16109765386fc582e00156bf7676a
# Parent 901ec3e53b42d599fe8d8e148797cfc729774702
libxc: add xc_domain_maximum_gpfn to wrap XENMEM_maximum_gpfn
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
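(Illustrative aside, not part of the posted patch: the common "p2m size" calculation used by the save code, rewritten with the new wrapper.)

#include <xenctrl.h>

static long guest_p2m_size(xc_interface *xch, domid_t domid)
{
    int max_gpfn = xc_domain_maximum_gpfn(xch, domid);

    /* The P2M covers pfns 0 .. max_gpfn inclusive, hence the +1. */
    return (max_gpfn < 0) ? -1 : (long)max_gpfn + 1;
}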
diff -r 901ec3e53b42 -r 9c4485d27ea1 tools/libxc/ia64/xc_ia64_linux_save.c
--- a/tools/libxc/ia64/xc_ia64_linux_save.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_linux_save.c Tue Oct 12 15:06:42 2010 +0100
@@ -487,7 +487,7 @@ xc_domain_save(xc_interface *xch, int io
goto out;
}
- p2m_size = xc_memory_op(xch, XENMEM_maximum_gpfn, &dom) + 1;
+ p2m_size = xc_domain_maximum_gpfn(xch, dom) + 1;
/* This is expected by xm restore. */
if (write_exact(io_fd, &p2m_size, sizeof(unsigned long))) {
diff -r 901ec3e53b42 -r 9c4485d27ea1 tools/libxc/ia64/xc_ia64_stubs.c
--- a/tools/libxc/ia64/xc_ia64_stubs.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_stubs.c Tue Oct 12 15:06:42 2010 +0100
@@ -114,7 +114,7 @@ xc_ia64_copy_memmap(xc_interface *xch, u
int ret;
- gpfn_max_prev = xc_memory_op(xch, XENMEM_maximum_gpfn, &domid);
+ gpfn_max_prev = xc_domain_maximum_gpfn(xch, domid);
if (gpfn_max_prev < 0)
return -1;
@@ -143,7 +143,7 @@ xc_ia64_copy_memmap(xc_interface *xch, u
goto again;
}
- gpfn_max_post = xc_memory_op(xch, XENMEM_maximum_gpfn, &domid);
+ gpfn_max_post = xc_domain_maximum_gpfn(xch, domid);
if (gpfn_max_prev < 0) {
free(memmap_info);
return -1;
@@ -190,7 +190,7 @@ xc_ia64_map_foreign_p2m(xc_interface *xc
int ret;
int saved_errno;
- gpfn_max = xc_memory_op(xch, XENMEM_maximum_gpfn, &dom);
+ gpfn_max = xc_domain_maximum_gpfn(xch, dom);
if (gpfn_max < 0)
return NULL;
p2m_size
diff -r 901ec3e53b42 -r 9c4485d27ea1 tools/libxc/xc_core_x86.c
--- a/tools/libxc/xc_core_x86.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_core_x86.c Tue Oct 12 15:06:42 2010 +0100
@@ -42,7 +42,7 @@ xc_core_arch_gpfn_may_present(struct xc_
static int nr_gpfns(xc_interface *xch, domid_t domid)
{
- return xc_memory_op(xch, XENMEM_maximum_gpfn, &domid) + 1;
+ return xc_domain_maximum_gpfn(xch, domid) + 1;
}
int
diff -r 901ec3e53b42 -r 9c4485d27ea1 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
@@ -578,6 +578,11 @@ int xc_domain_get_tsc_info(xc_interface
return rc;
}
+
+int xc_domain_maximum_gpfn(xc_interface *xch, domid_t domid)
+{
+ return xc_memory_op(xch, XENMEM_maximum_gpfn, &domid);
+}
int xc_domain_increase_reservation(xc_interface *xch,
uint32_t domid,
diff -r 901ec3e53b42 -r 9c4485d27ea1 tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_domain_save.c Tue Oct 12 15:06:42 2010 +0100
@@ -979,7 +979,7 @@ int xc_domain_save(xc_interface *xch, in
}
/* Get the size of the P2M table */
- dinfo->p2m_size = xc_memory_op(xch, XENMEM_maximum_gpfn, &dom) + 1;
+ dinfo->p2m_size = xc_domain_maximum_gpfn(xch, dom) + 1;
if ( dinfo->p2m_size > ~XEN_DOMCTL_PFINFO_LTAB_MASK )
{
diff -r 901ec3e53b42 -r 9c4485d27ea1 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -784,6 +784,8 @@ int xc_domain_get_tsc_info(xc_interface
uint32_t *incarnation);
int xc_domain_disable_migrate(xc_interface *xch, uint32_t domid);
+
+int xc_domain_maximum_gpfn(xc_interface *xch, domid_t domid);
int xc_domain_increase_reservation(xc_interface *xch,
uint32_t domid,
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 09 of 18] libxc: add xc_machphys_mfn_list to wrap XENMEM_machphys_mfn_list
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID e2e86e7d7af71f12956af780bd23cc53134920e5
# Parent 9c4485d27ea16109765386fc582e00156bf7676a
libxc: add xc_machphys_mfn_list to wrap XENMEM_machphys_mfn_list
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
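(Illustrative aside, not part of the posted patch: fetching the M2P frame list with the new wrapper; the caller sizes and frees the buffer, as xc_map_m2p does in the diff below.)

#include <stdlib.h>
#include <xenctrl.h>

static xen_pfn_t *fetch_m2p_frames(xc_interface *xch, unsigned long chunks)
{
    xen_pfn_t *frames = calloc(chunks, sizeof(*frames));

    if ( frames == NULL )
        return NULL;
    /* Returns 0 only if exactly 'chunks' extents were filled in. */
    if ( xc_machphys_mfn_list(xch, chunks, frames) )
    {
        free(frames);
        return NULL;
    }
    return frames;
}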
diff -r 9c4485d27ea1 -r e2e86e7d7af7 tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_domain_save.c Tue Oct 12 15:06:42 2010 +0100
@@ -623,7 +623,6 @@ xen_pfn_t *xc_map_m2p(xc_interface *xch,
int prot,
unsigned long *mfn0)
{
- struct xen_machphys_mfn_list xmml;
privcmd_mmap_entry_t *entries;
unsigned long m2p_chunks, m2p_size;
xen_pfn_t *m2p;
@@ -634,18 +633,14 @@ xen_pfn_t *xc_map_m2p(xc_interface *xch,
m2p_size = M2P_SIZE(max_mfn);
m2p_chunks = M2P_CHUNKS(max_mfn);
- xmml.max_extents = m2p_chunks;
-
extent_start = calloc(m2p_chunks, sizeof(xen_pfn_t));
if ( !extent_start )
{
ERROR("failed to allocate space for m2p mfns");
goto err0;
}
- set_xen_guest_handle(xmml.extent_start, extent_start);
- if ( xc_memory_op(xch, XENMEM_machphys_mfn_list, &xmml) ||
- (xmml.nr_extents != m2p_chunks) )
+ if ( xc_machphys_mfn_list(xch, m2p_chunks, extent_start) )
{
PERROR("xc_get_m2p_mfns");
goto err1;
diff -r 9c4485d27ea1 -r e2e86e7d7af7 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
@@ -549,6 +549,20 @@ long long xc_domain_get_cpu_usage( xc_in
return domctl.u.getvcpuinfo.cpu_time;
}
+int xc_machphys_mfn_list(xc_interface *xch,
+ unsigned long max_extents,
+ xen_pfn_t *extent_start)
+{
+ int rc;
+ struct xen_machphys_mfn_list xmml = {
+ .max_extents = max_extents,
+ };
+ set_xen_guest_handle(xmml.extent_start, extent_start);
+ rc = xc_memory_op(xch, XENMEM_machphys_mfn_list, &xmml);
+ if (rc || xmml.nr_extents != max_extents)
+ return -1;
+ return 0;
+}
#ifndef __ia64__
int xc_get_pfn_list(xc_interface *xch,
diff -r 9c4485d27ea1 -r e2e86e7d7af7 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -752,6 +752,10 @@ int xc_numainfo(xc_interface *xch, xc_nu
int xc_sched_id(xc_interface *xch,
int *sched_id);
+
+int xc_machphys_mfn_list(xc_interface *xch,
+ unsigned long max_extents,
+ xen_pfn_t *extent_start);
typedef xen_sysctl_cpuinfo_t xc_cpuinfo_t;
int xc_getcpuinfo(xc_interface *xch, int max_cpus,
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 10 of 18] libxc: add xc_maximum_ram_page to wrap XENMEM_maximum_ram_page
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 4a56557e18e05fdba2f8bb9f477fb33760c3814b
# Parent e2e86e7d7af71f12956af780bd23cc53134920e5
libxc: add xc_maximum_ram_page to wrap XENMEM_maximum_ram_page
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r e2e86e7d7af7 -r 4a56557e18e0 tools/libxc/xc_offline_page.c
--- a/tools/libxc/xc_offline_page.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_offline_page.c Tue Oct 12 15:06:42 2010 +0100
@@ -271,7 +271,7 @@ static int init_mem_info(xc_interface *x
dinfo->p2m_size = minfo->p2m_size;
- minfo->max_mfn = xc_memory_op(xch, XENMEM_maximum_ram_page, NULL);
+ minfo->max_mfn = xc_maximum_ram_page(xch);
if ( !(minfo->m2p_table = xc_map_m2p(xch, minfo->max_mfn,
PROT_READ, NULL)) )
{
diff -r e2e86e7d7af7 -r 4a56557e18e0 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
@@ -533,6 +533,10 @@ int xc_memory_op(xc_interface *xch,
return ret;
}
+long xc_maximum_ram_page(xc_interface *xch)
+{
+ return xc_memory_op(xch, XENMEM_maximum_ram_page, NULL);
+}
long long xc_domain_get_cpu_usage( xc_interface *xch, domid_t domid, int vcpu )
{
diff -r e2e86e7d7af7 -r 4a56557e18e0 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -982,6 +982,9 @@ int xc_mmuext_op(xc_interface *xch, stru
int xc_mmuext_op(xc_interface *xch, struct mmuext_op *op, unsigned int nr_ops,
domid_t dom);
+/* System wide memory properties */
+long xc_maximum_ram_page(xc_interface *xch);
+
int xc_memory_op(xc_interface *xch, int cmd, void *arg);
diff -r e2e86e7d7af7 -r 4a56557e18e0 tools/libxc/xg_save_restore.h
--- a/tools/libxc/xg_save_restore.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xg_save_restore.h Tue Oct 12 15:06:42 2010 +0100
@@ -179,7 +179,7 @@ static inline int get_platform_info(xc_i
if (xc_version(xch, XENVER_capabilities, &xen_caps) != 0)
return 0;
- *max_mfn = xc_memory_op(xch, XENMEM_maximum_ram_page, NULL);
+ *max_mfn = xc_maximum_ram_page(xch);
*hvirt_start = xen_params.virt_start;
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 11 of 18] libxc: update QEMU_TAG and remove compatibility macro
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID ec389a7aa0d6a4215d95fe3ed167ed1049bb0dc9
# Parent 4a56557e18e05fdba2f8bb9f477fb33760c3814b
libxc: update QEMU_TAG and remove compatibility macro
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r 4a56557e18e0 -r ec389a7aa0d6 Config.mk
--- a/Config.mk Tue Oct 12 15:06:42 2010 +0100
+++ b/Config.mk Tue Oct 12 15:06:42 2010 +0100
@@ -185,7 +185,7 @@ endif
# CONFIG_QEMU ?= ../qemu-xen.git
CONFIG_QEMU ?= $(QEMU_REMOTE)
-QEMU_TAG ?= f95d202ed6444dacb15fbea4dee185eb0e048d9a
+QEMU_TAG ?= f95d202ed6444dacb15fbea4dee185eb0e048d9a # XXX update
# Tue Sep 14 17:31:43 2010 +0100
# ioemu: fix VNC altgr-insert behavior
diff -r 4a56557e18e0 -r ec389a7aa0d6 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -836,10 +836,6 @@ int xc_domain_populate_physmap_exact(xc_
unsigned int extent_order,
unsigned int mem_flags,
xen_pfn_t *extent_start);
-
-/* Temporary for compatibility */
-#define xc_domain_memory_populate_physmap(x, d, nr, eo, mf, es) \
- xc_domain_populate_physmap_exact(x, d, nr, eo, mf, es)
int xc_domain_memory_exchange_pages(xc_interface *xch,
int domid,
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 12 of 18] libxc: make xc_memory_op library private
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 400adff91720efab6413ad73bba8329c715f58ba
# Parent ec389a7aa0d6a4215d95fe3ed167ed1049bb0dc9
libxc: make xc_memory_op library private
Now that all XENMEM_* callers go via an op-specific function, make
xc_memory_op private to libxc (and rename it to do_memory_op for
consistency with other private functions).
Also change the interface to take a size parameter so that
do_memory_op knows how much memory to lock for the top-level argument,
removing some of the introspection.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
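(Illustrative aside, not part of the posted patch: after this change a libxc-internal wrapper passes the size of its top-level argument, e.g. the PoD target call; condensed from the xc_domain.c hunk below, with a hypothetical function name.)

#include "xc_private.h"

static int set_pod_target_example(xc_interface *xch, uint32_t domid,
                                  uint64_t target_pages)
{
    struct xen_pod_target pod_target = {
        .domid = domid,
        .target_pages = target_pages,
    };

    /* do_memory_op locks/unlocks 'pod_target' itself using the length. */
    return do_memory_op(xch, XENMEM_set_pod_target, &pod_target,
                        sizeof(pod_target));
}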
diff -r ec389a7aa0d6 -r 400adff91720 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
@@ -488,17 +488,16 @@ int xc_domain_set_memmap_limit(xc_interf
set_xen_guest_handle(fmap.map.buffer, &e820);
- if ( lock_pages(xch, &fmap, sizeof(fmap)) || lock_pages(xch, &e820,
sizeof(e820)) )
+ if ( lock_pages(xch, &e820, sizeof(e820)) )
{
PERROR("Could not lock memory for Xen hypercall");
rc = -1;
goto out;
}
- rc = xc_memory_op(xch, XENMEM_set_memory_map, &fmap);
+ rc = do_memory_op(xch, XENMEM_set_memory_map, &fmap, sizeof(fmap));
out:
- unlock_pages(xch, &fmap, sizeof(fmap));
unlock_pages(xch, &e820, sizeof(e820));
return rc;
}
@@ -581,7 +580,7 @@ int xc_domain_get_tsc_info(xc_interface
int xc_domain_maximum_gpfn(xc_interface *xch, domid_t domid)
{
- return xc_memory_op(xch, XENMEM_maximum_gpfn, &domid);
+ return do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));
}
int xc_domain_increase_reservation(xc_interface *xch,
@@ -602,7 +601,7 @@ int xc_domain_increase_reservation(xc_in
/* may be NULL */
set_xen_guest_handle(reservation.extent_start, extent_start);
- err = xc_memory_op(xch, XENMEM_increase_reservation, &reservation);
+ err = do_memory_op(xch, XENMEM_increase_reservation, &reservation,
sizeof(reservation));
return err;
}
@@ -657,7 +656,7 @@ int xc_domain_decrease_reservation(xc_in
return -1;
}
- err = xc_memory_op(xch, XENMEM_decrease_reservation, &reservation);
+ err = do_memory_op(xch, XENMEM_decrease_reservation, &reservation,
sizeof(reservation));
return err;
}
@@ -699,7 +698,7 @@ int xc_domain_add_to_physmap(xc_interfac
.idx = idx,
.gpfn = gpfn,
};
- return xc_memory_op(xch, XENMEM_add_to_physmap, &xatp);
+ return do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp));
}
int xc_domain_populate_physmap(xc_interface *xch,
@@ -718,7 +717,7 @@ int xc_domain_populate_physmap(xc_interf
};
set_xen_guest_handle(reservation.extent_start, extent_start);
- err = xc_memory_op(xch, XENMEM_populate_physmap, &reservation);
+ err = do_memory_op(xch, XENMEM_populate_physmap, &reservation,
sizeof(reservation));
return err;
}
@@ -774,7 +773,7 @@ int xc_domain_memory_exchange_pages(xc_i
set_xen_guest_handle(exchange.in.extent_start, in_extents);
set_xen_guest_handle(exchange.out.extent_start, out_extents);
- rc = xc_memory_op(xch, XENMEM_exchange, &exchange);
+ rc = do_memory_op(xch, XENMEM_exchange, &exchange, sizeof(exchange));
return rc;
}
@@ -794,7 +793,7 @@ static int xc_domain_pod_target(xc_inter
.target_pages = target_pages
};
- err = xc_memory_op(xch, op, &pod_target);
+ err = do_memory_op(xch, op, &pod_target, sizeof(pod_target));
if ( err < 0 )
{
diff -r ec389a7aa0d6 -r 400adff91720 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
@@ -421,9 +421,7 @@ int xc_flush_mmu_updates(xc_interface *x
return flush_mmu_updates(xch, mmu);
}
-int xc_memory_op(xc_interface *xch,
- int cmd,
- void *arg)
+int do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len)
{
DECLARE_HYPERCALL;
struct xen_memory_reservation *reservation = arg;
@@ -435,16 +433,17 @@ int xc_memory_op(xc_interface *xch,
hypercall.arg[0] = (unsigned long)cmd;
hypercall.arg[1] = (unsigned long)arg;
+ if ( len && lock_pages(xch, arg, len) != 0 )
+ {
+ PERROR("Could not lock memory for XENMEM hypercall");
+ goto out1;
+ }
+
switch ( cmd )
{
case XENMEM_increase_reservation:
case XENMEM_decrease_reservation:
case XENMEM_populate_physmap:
- if ( lock_pages(xch, reservation, sizeof(*reservation)) != 0 )
- {
- PERROR("Could not lock");
- goto out1;
- }
get_xen_guest_handle(extent_start, reservation->extent_start);
if ( (extent_start != NULL) &&
(lock_pages(xch, extent_start,
@@ -456,11 +455,6 @@ int xc_memory_op(xc_interface *xch,
}
break;
case XENMEM_machphys_mfn_list:
- if ( lock_pages(xch, xmml, sizeof(*xmml)) != 0 )
- {
- PERROR("Could not lock");
- goto out1;
- }
get_xen_guest_handle(extent_start, xmml->extent_start);
if ( lock_pages(xch, extent_start,
xmml->max_extents * sizeof(xen_pfn_t)) != 0 )
@@ -471,61 +465,40 @@ int xc_memory_op(xc_interface *xch,
}
break;
case XENMEM_add_to_physmap:
- if ( lock_pages(xch, arg, sizeof(struct xen_add_to_physmap)) )
- {
- PERROR("Could not lock");
- goto out1;
- }
- break;
case XENMEM_current_reservation:
case XENMEM_maximum_reservation:
case XENMEM_maximum_gpfn:
- if ( lock_pages(xch, arg, sizeof(domid_t)) )
- {
- PERROR("Could not lock");
- goto out1;
- }
- break;
case XENMEM_set_pod_target:
case XENMEM_get_pod_target:
- if ( lock_pages(xch, arg, sizeof(struct xen_pod_target)) )
- {
- PERROR("Could not lock");
- goto out1;
- }
break;
}
ret = do_xen_hypercall(xch, &hypercall);
+
+ if ( len )
+ unlock_pages(xch, arg, len);
switch ( cmd )
{
case XENMEM_increase_reservation:
case XENMEM_decrease_reservation:
case XENMEM_populate_physmap:
- unlock_pages(xch, reservation, sizeof(*reservation));
get_xen_guest_handle(extent_start, reservation->extent_start);
if ( extent_start != NULL )
unlock_pages(xch, extent_start,
reservation->nr_extents * sizeof(xen_pfn_t));
break;
case XENMEM_machphys_mfn_list:
- unlock_pages(xch, xmml, sizeof(*xmml));
get_xen_guest_handle(extent_start, xmml->extent_start);
unlock_pages(xch, extent_start,
xmml->max_extents * sizeof(xen_pfn_t));
break;
case XENMEM_add_to_physmap:
- unlock_pages(xch, arg, sizeof(struct xen_add_to_physmap));
- break;
case XENMEM_current_reservation:
case XENMEM_maximum_reservation:
case XENMEM_maximum_gpfn:
- unlock_pages(xch, arg, sizeof(domid_t));
- break;
case XENMEM_set_pod_target:
case XENMEM_get_pod_target:
- unlock_pages(xch, arg, sizeof(struct xen_pod_target));
break;
}
@@ -535,7 +508,7 @@ int xc_memory_op(xc_interface *xch,
long xc_maximum_ram_page(xc_interface *xch)
{
- return xc_memory_op(xch, XENMEM_maximum_ram_page, NULL);
+ return do_memory_op(xch, XENMEM_maximum_ram_page, NULL, 0);
}
long long xc_domain_get_cpu_usage( xc_interface *xch, domid_t domid, int vcpu )
@@ -562,7 +535,7 @@ int xc_machphys_mfn_list(xc_interface *x
.max_extents = max_extents,
};
set_xen_guest_handle(xmml.extent_start, extent_start);
- rc = xc_memory_op(xch, XENMEM_machphys_mfn_list, &xmml);
+ rc = do_memory_op(xch, XENMEM_machphys_mfn_list, &xmml, sizeof(xmml));
if (rc || xmml.nr_extents != max_extents)
return -1;
return 0;
diff -r ec389a7aa0d6 -r 400adff91720 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_private.h Tue Oct 12 15:06:42 2010 +0100
@@ -206,6 +206,8 @@ static inline int do_sysctl(xc_interface
return ret;
}
+int do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len);
+
int xc_interface_open_core(xc_interface *xch); /* returns fd, logs errors */
int xc_interface_close_core(xc_interface *xch, int fd); /* no logging */
diff -r ec389a7aa0d6 -r 400adff91720 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -981,9 +981,6 @@ int xc_mmuext_op(xc_interface *xch, stru
/* System wide memory properties */
long xc_maximum_ram_page(xc_interface *xch);
-int xc_memory_op(xc_interface *xch, int cmd, void *arg);
-
-
/* Get current total pages allocated to a domain. */
long xc_get_tot_pages(xc_interface *xch, uint32_t domid);
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 13 of 18] libxc: make do_memory_op's callers responsible for locking indirect buffers
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 91597ec2218db759eef6916dec73ea42560c1504
# Parent 400adff91720efab6413ad73bba8329c715f58ba
libxc: make do_memory_op's callers responsible for locking indirect buffers
Push responsibility for locking buffers referred to by the memory_op
argument up into the callers (which are now all internal to libxc).
This removes the last of the introspection from do_memory_op and
generally makes the transition to hypercall buffers smoother.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
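(Illustrative aside, not part of the posted patch: the resulting pattern in an op-specific wrapper — lock the indirect extent buffer, issue the call, unlock — condensed from the xc_domain_populate_physmap hunk below, with a hypothetical function name.)

#include "xc_private.h"

static int populate_locked_example(xc_interface *xch, uint32_t domid,
                                   unsigned long nr_extents,
                                   xen_pfn_t *extent_start)
{
    int err;
    struct xen_memory_reservation reservation = {
        .nr_extents   = nr_extents,
        .extent_order = 0,
        .domid        = domid
    };

    /* The caller locks the buffer the hypercall argument points at... */
    if ( lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 )
    {
        PERROR("Could not lock memory for XENMEM_populate_physmap hypercall");
        return -1;
    }
    set_xen_guest_handle(reservation.extent_start, extent_start);

    /* ...while do_memory_op locks only the top-level argument itself. */
    err = do_memory_op(xch, XENMEM_populate_physmap, &reservation,
                       sizeof(reservation));

    unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t));
    return err;
}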
diff -r 400adff91720 -r 91597ec2218d tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_domain.c Tue Oct 12 15:06:42 2010 +0100
@@ -599,9 +599,18 @@ int xc_domain_increase_reservation(xc_in
};
/* may be NULL */
+ if ( extent_start && lock_pages(xch, extent_start, nr_extents *
sizeof(xen_pfn_t)) != 0 )
+ {
+ PERROR("Could not lock memory for XENMEM_increase_reservation
hypercall");
+ return -1;
+ }
+
set_xen_guest_handle(reservation.extent_start, extent_start);
err = do_memory_op(xch, XENMEM_increase_reservation, &reservation,
sizeof(reservation));
+
+ if ( extent_start )
+ unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t));
return err;
}
@@ -647,7 +656,11 @@ int xc_domain_decrease_reservation(xc_in
.domid = domid
};
- set_xen_guest_handle(reservation.extent_start, extent_start);
+ if ( lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 )
+ {
+ PERROR("Could not lock memory for XENMEM_decrease_reservation
hypercall");
+ return -1;
+ }
if ( extent_start == NULL )
{
@@ -656,7 +669,11 @@ int xc_domain_decrease_reservation(xc_in
return -1;
}
+ set_xen_guest_handle(reservation.extent_start, extent_start);
+
err = do_memory_op(xch, XENMEM_decrease_reservation, &reservation,
sizeof(reservation));
+
+ unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t));
return err;
}
@@ -715,9 +732,18 @@ int xc_domain_populate_physmap(xc_interf
.mem_flags = mem_flags,
.domid = domid
};
+
+ if ( lock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t)) != 0 )
+ {
+ PERROR("Could not lock memory for XENMEM_populate_physmap
hypercall");
+ return -1;
+ }
+
set_xen_guest_handle(reservation.extent_start, extent_start);
err = do_memory_op(xch, XENMEM_populate_physmap, &reservation,
sizeof(reservation));
+
+ unlock_pages(xch, extent_start, nr_extents * sizeof(xen_pfn_t));
return err;
}
diff -r 400adff91720 -r 91597ec2218d tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
@@ -424,9 +424,6 @@ int do_memory_op(xc_interface *xch, int
int do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len)
{
DECLARE_HYPERCALL;
- struct xen_memory_reservation *reservation = arg;
- struct xen_machphys_mfn_list *xmml = arg;
- xen_pfn_t *extent_start;
long ret = -EINVAL;
hypercall.op = __HYPERVISOR_memory_op;
@@ -439,68 +436,10 @@ int do_memory_op(xc_interface *xch, int
goto out1;
}
- switch ( cmd )
- {
- case XENMEM_increase_reservation:
- case XENMEM_decrease_reservation:
- case XENMEM_populate_physmap:
- get_xen_guest_handle(extent_start, reservation->extent_start);
- if ( (extent_start != NULL) &&
- (lock_pages(xch, extent_start,
- reservation->nr_extents * sizeof(xen_pfn_t)) != 0) )
- {
- PERROR("Could not lock");
- unlock_pages(xch, reservation, sizeof(*reservation));
- goto out1;
- }
- break;
- case XENMEM_machphys_mfn_list:
- get_xen_guest_handle(extent_start, xmml->extent_start);
- if ( lock_pages(xch, extent_start,
- xmml->max_extents * sizeof(xen_pfn_t)) != 0 )
- {
- PERROR("Could not lock");
- unlock_pages(xch, xmml, sizeof(*xmml));
- goto out1;
- }
- break;
- case XENMEM_add_to_physmap:
- case XENMEM_current_reservation:
- case XENMEM_maximum_reservation:
- case XENMEM_maximum_gpfn:
- case XENMEM_set_pod_target:
- case XENMEM_get_pod_target:
- break;
- }
-
ret = do_xen_hypercall(xch, &hypercall);
if ( len )
unlock_pages(xch, arg, len);
-
- switch ( cmd )
- {
- case XENMEM_increase_reservation:
- case XENMEM_decrease_reservation:
- case XENMEM_populate_physmap:
- get_xen_guest_handle(extent_start, reservation->extent_start);
- if ( extent_start != NULL )
- unlock_pages(xch, extent_start,
- reservation->nr_extents * sizeof(xen_pfn_t));
- break;
- case XENMEM_machphys_mfn_list:
- get_xen_guest_handle(extent_start, xmml->extent_start);
- unlock_pages(xch, extent_start,
- xmml->max_extents * sizeof(xen_pfn_t));
- break;
- case XENMEM_add_to_physmap:
- case XENMEM_current_reservation:
- case XENMEM_maximum_reservation:
- case XENMEM_maximum_gpfn:
- case XENMEM_set_pod_target:
- case XENMEM_get_pod_target:
- break;
- }
out1:
return ret;
@@ -534,11 +473,23 @@ int xc_machphys_mfn_list(xc_interface *x
struct xen_machphys_mfn_list xmml = {
.max_extents = max_extents,
};
+
+ if ( lock_pages(xch, extent_start, max_extents * sizeof(xen_pfn_t)) != 0 )
+ {
+ PERROR("Could not lock memory for XENMEM_machphys_mfn_list
hypercall");
+ return -1;
+ }
+
set_xen_guest_handle(xmml.extent_start, extent_start);
rc = do_memory_op(xch, XENMEM_machphys_mfn_list, &xmml, sizeof(xmml));
if (rc || xmml.nr_extents != max_extents)
- return -1;
- return 0;
+ rc = -1;
+ else
+ rc = 0;
+
+ unlock_pages(xch, extent_start, max_extents * sizeof(xen_pfn_t));
+
+ return rc;
}
#ifndef __ia64__
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 14 of 18] libxc: simplify performance counters API
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 5e7e8cc9642550550c576808f237f02519b2669d
# Parent 91597ec2218db759eef6916dec73ea42560c1504
libxc: simplify performance counters API
Current function has heavily overloaded semantics for the various
arguments. Separate out into more specific functions.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
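(Illustrative aside, not part of the posted patch: the query sequence as used by xenperf.c below — ask for the counts, allocate, then fetch; a hypothetical helper name.)

#include <stdlib.h>
#include <xenctrl.h>

static int dump_perf_counters(xc_interface *xch)
{
    int num_desc = 0, num_val = 0;
    xc_perfc_desc_t *pcd;
    xc_perfc_val_t *pcv;

    if ( xc_perfc_query_number(xch, &num_desc, &num_val) != 0 )
        return -1;

    pcd = malloc(num_desc * sizeof(*pcd));
    pcv = malloc(num_val * sizeof(*pcv));
    if ( pcd == NULL || pcv == NULL )
    {
        free(pcd);
        free(pcv);
        return -1;
    }

    /* The caller is still responsible for mlock()'ing pcd and pcv. */
    if ( xc_perfc_query(xch, pcd, pcv) != 0 )
    {
        free(pcd);
        free(pcv);
        return -1;
    }

    /* ... print pcd/pcv as xenperf does ... */
    free(pcd);
    free(pcv);
    return 0;
}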
diff -r 91597ec2218d -r 5e7e8cc96425 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_misc.c Tue Oct 12 15:06:42 2010 +0100
@@ -167,20 +167,29 @@ int xc_mca_op(xc_interface *xch, struct
}
#endif
-int xc_perfc_control(xc_interface *xch,
- uint32_t opcode,
- xc_perfc_desc_t *desc,
- xc_perfc_val_t *val,
- int *nbr_desc,
- int *nbr_val)
+int xc_perfc_reset(xc_interface *xch)
+{
+ DECLARE_SYSCTL;
+
+ sysctl.cmd = XEN_SYSCTL_perfc_op;
+ sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_reset;
+ set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL);
+ set_xen_guest_handle(sysctl.u.perfc_op.val, NULL);
+
+ return do_sysctl(xch, &sysctl);
+}
+
+int xc_perfc_query_number(xc_interface *xch,
+ int *nbr_desc,
+ int *nbr_val)
{
int rc;
DECLARE_SYSCTL;
sysctl.cmd = XEN_SYSCTL_perfc_op;
- sysctl.u.perfc_op.cmd = opcode;
- set_xen_guest_handle(sysctl.u.perfc_op.desc, desc);
- set_xen_guest_handle(sysctl.u.perfc_op.val, val);
+ sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query;
+ set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL);
+ set_xen_guest_handle(sysctl.u.perfc_op.val, NULL);
rc = do_sysctl(xch, &sysctl);
@@ -190,6 +199,20 @@ int xc_perfc_control(xc_interface *xch,
*nbr_val = sysctl.u.perfc_op.nr_vals;
return rc;
+}
+
+int xc_perfc_query(xc_interface *xch,
+ xc_perfc_desc_t *desc,
+ xc_perfc_val_t *val)
+{
+ DECLARE_SYSCTL;
+
+ sysctl.cmd = XEN_SYSCTL_perfc_op;
+ sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query;
+ set_xen_guest_handle(sysctl.u.perfc_op.desc, desc);
+ set_xen_guest_handle(sysctl.u.perfc_op.val, val);
+
+ return do_sysctl(xch, &sysctl);
}
int xc_lockprof_control(xc_interface *xch,
diff -r 91597ec2218d -r 5e7e8cc96425 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -887,14 +887,15 @@ unsigned long xc_make_page_below_4G(xc_i
typedef xen_sysctl_perfc_desc_t xc_perfc_desc_t;
typedef xen_sysctl_perfc_val_t xc_perfc_val_t;
+int xc_perfc_reset(xc_interface *xch);
+int xc_perfc_query_number(xc_interface *xch,
+ int *nbr_desc,
+ int *nbr_val);
/* IMPORTANT: The caller is responsible for mlock()'ing the @desc and @val
arrays. */
-int xc_perfc_control(xc_interface *xch,
- uint32_t op,
- xc_perfc_desc_t *desc,
- xc_perfc_val_t *val,
- int *nbr_desc,
- int *nbr_val);
+int xc_perfc_query(xc_interface *xch,
+ xc_perfc_desc_t *desc,
+ xc_perfc_val_t *val);
typedef xen_sysctl_lockprof_data_t xc_lockprof_data_t;
/* IMPORTANT: The caller is responsible for mlock()'ing the @data
array. */
diff -r 91597ec2218d -r 5e7e8cc96425 tools/misc/xenperf.c
--- a/tools/misc/xenperf.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/misc/xenperf.c Tue Oct 12 15:06:42 2010 +0100
@@ -137,8 +137,7 @@ int main(int argc, char *argv[])
if ( reset )
{
- if ( xc_perfc_control(xc_handle, XEN_SYSCTL_PERFCOP_reset,
- NULL, NULL, NULL, NULL) != 0 )
+ if ( xc_perfc_reset(xc_handle) != 0 )
{
fprintf(stderr, "Error reseting performance counters: %d
(%s)\n",
errno, strerror(errno));
@@ -148,8 +147,7 @@ int main(int argc, char *argv[])
return 0;
}
- if ( xc_perfc_control(xc_handle, XEN_SYSCTL_PERFCOP_query,
- NULL, NULL, &num_desc, &num_val) != 0 )
+ if ( xc_perfc_query_number(xc_handle, &num_desc, &num_val) != 0 )
{
fprintf(stderr, "Error getting number of perf counters: %d
(%s)\n",
errno, strerror(errno));
@@ -169,8 +167,7 @@ int main(int argc, char *argv[])
exit(-1);
}
- if ( xc_perfc_control(xc_handle, XEN_SYSCTL_PERFCOP_query,
- pcd, pcv, NULL, NULL) != 0 )
+ if ( xc_perfc_query(xc_handle, pcd, pcv) != 0 )
{
fprintf(stderr, "Error getting perf counter: %d (%s)\n",
errno, strerror(errno));
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 15 of 18] libxc: simplify lock profiling API
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID af3e98227d919192f9cce637343c6163ecb23daa
# Parent 5e7e8cc9642550550c576808f237f02519b2669d
libxc: simplify lock profiling API
Current function has heavily overloaded semantics for the various
arguments. Separate out into more specific functions.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
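(Illustrative aside, not part of the posted patch: the query sequence as used by xenlockprof.c below, with a hypothetical helper name.)

#include <stdlib.h>
#include <xenctrl.h>

static int dump_lock_profile(xc_interface *xch)
{
    uint32_t n = 0, i;
    uint64_t time;
    xc_lockprof_data_t *data;

    if ( xc_lockprof_query_number(xch, &n) != 0 )
        return -1;

    data = malloc(n * sizeof(*data));
    if ( data == NULL )
        return -1;

    /* The caller is still responsible for mlock()'ing the data array. */
    i = n;
    if ( xc_lockprof_query(xch, &i, &time, data) != 0 )
    {
        free(data);
        return -1;
    }

    /* ... 'i' now holds the number of records returned ... */
    free(data);
    return 0;
}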
diff -r 5e7e8cc96425 -r af3e98227d91 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_misc.c Tue Oct 12 15:06:42 2010 +0100
@@ -215,8 +215,35 @@ int xc_perfc_query(xc_interface *xch,
return do_sysctl(xch, &sysctl);
}
-int xc_lockprof_control(xc_interface *xch,
- uint32_t opcode,
+int xc_lockprof_reset(xc_interface *xch)
+{
+ DECLARE_SYSCTL;
+
+ sysctl.cmd = XEN_SYSCTL_lockprof_op;
+ sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_reset;
+ set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL);
+
+ return do_sysctl(xch, &sysctl);
+}
+
+int xc_lockprof_query_number(xc_interface *xch,
+ uint32_t *n_elems)
+{
+ int rc;
+ DECLARE_SYSCTL;
+
+ sysctl.cmd = XEN_SYSCTL_lockprof_op;
+ sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query;
+ set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL);
+
+ rc = do_sysctl(xch, &sysctl);
+
+ *n_elems = sysctl.u.lockprof_op.nr_elem;
+
+ return rc;
+}
+
+int xc_lockprof_query(xc_interface *xch,
uint32_t *n_elems,
uint64_t *time,
xc_lockprof_data_t *data)
@@ -225,16 +252,13 @@ int xc_lockprof_control(xc_interface *xc
DECLARE_SYSCTL;
sysctl.cmd = XEN_SYSCTL_lockprof_op;
- sysctl.u.lockprof_op.cmd = opcode;
- sysctl.u.lockprof_op.max_elem = n_elems ? *n_elems : 0;
+ sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query;
+ sysctl.u.lockprof_op.max_elem = *n_elems;
set_xen_guest_handle(sysctl.u.lockprof_op.data, data);
rc = do_sysctl(xch, &sysctl);
- if (n_elems)
- *n_elems = sysctl.u.lockprof_op.nr_elem;
- if (time)
- *time = sysctl.u.lockprof_op.time;
+ *n_elems = sysctl.u.lockprof_op.nr_elem;
return rc;
}
diff -r 5e7e8cc96425 -r af3e98227d91 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -898,12 +898,14 @@ int xc_perfc_query(xc_interface *xch,
xc_perfc_val_t *val);
typedef xen_sysctl_lockprof_data_t xc_lockprof_data_t;
+int xc_lockprof_reset(xc_interface *xch);
+int xc_lockprof_query_number(xc_interface *xch,
+ uint32_t *n_elems);
/* IMPORTANT: The caller is responsible for mlock()'ing the @data
array. */
-int xc_lockprof_control(xc_interface *xch,
- uint32_t opcode,
- uint32_t *n_elems,
- uint64_t *time,
- xc_lockprof_data_t *data);
+int xc_lockprof_query(xc_interface *xch,
+ uint32_t *n_elems,
+ uint64_t *time,
+ xc_lockprof_data_t *data);
/**
* Memory maps a range within one domain to a local address range. Mappings
diff -r 5e7e8cc96425 -r af3e98227d91 tools/misc/xenlockprof.c
--- a/tools/misc/xenlockprof.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/misc/xenlockprof.c Tue Oct 12 15:06:42 2010 +0100
@@ -60,8 +60,7 @@ int main(int argc, char *argv[])
if ( argc > 1 )
{
- if ( xc_lockprof_control(xc_handle, XEN_SYSCTL_LOCKPROF_reset, NULL,
- NULL, NULL) != 0 )
+ if ( xc_lockprof_reset(xc_handle) != 0 )
{
fprintf(stderr, "Error reseting profile data: %d (%s)\n",
errno, strerror(errno));
@@ -71,8 +70,7 @@ int main(int argc, char *argv[])
}
n = 0;
- if ( xc_lockprof_control(xc_handle, XEN_SYSCTL_LOCKPROF_query, &n,
- NULL, NULL) != 0 )
+ if ( xc_lockprof_query_number(xc_handle, &n) != 0 )
{
fprintf(stderr, "Error getting number of profile records: %d
(%s)\n",
errno, strerror(errno));
@@ -89,8 +87,7 @@ int main(int argc, char *argv[])
}
i = n;
- if ( xc_lockprof_control(xc_handle, XEN_SYSCTL_LOCKPROF_query, &i,
- &time, data) != 0 )
+ if ( xc_lockprof_query(xc_handle, &i, &time, data) != 0 )
{
fprintf(stderr, "Error getting profile records: %d (%s)\n",
errno, strerror(errno));
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 16 of 18] libxc: drop xc_get_max_pages
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID d1501c6dca3f879287359cf9de877b86c32d2e95
# Parent af3e98227d919192f9cce637343c6163ecb23daa
libxc: drop xc_get_max_pages
The function isn't really ia64 specific but since the result isn't
actually used in the only caller and the same info is available via
xc_domain_getinfo simply drop the function.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r af3e98227d91 -r d1501c6dca3f tools/libxc/ia64/xc_ia64_hvm_build.c
--- a/tools/libxc/ia64/xc_ia64_hvm_build.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_hvm_build.c Tue Oct 12 15:06:42 2010 +0100
@@ -1078,13 +1078,6 @@ xc_hvm_build(xc_interface *xch, uint32_t
vcpu_guest_context_t *ctxt = &st_ctxt_any.c;
char *image = NULL;
unsigned long image_size;
- unsigned long nr_pages;
-
- nr_pages = xc_get_max_pages(xch, domid);
- if (nr_pages < 0) {
- PERROR("Could not find total pages for domain");
- goto error_out;
- }
image = xc_read_image(xch, image_name, &image_size);
if (image == NULL) {
diff -r af3e98227d91 -r d1501c6dca3f tools/libxc/ia64/xc_ia64_stubs.c
--- a/tools/libxc/ia64/xc_ia64_stubs.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_stubs.c Tue Oct 12 15:06:42 2010 +0100
@@ -64,16 +64,6 @@ xc_get_pfn_list(xc_interface *xch, uint3
{
return xc_ia64_get_pfn_list(xch, domid, (xen_pfn_t *)pfn_buf,
0, max_pfns);
-}
-
-long
-xc_get_max_pages(xc_interface *xch, uint32_t domid)
-{
- struct xen_domctl domctl;
- domctl.cmd = XEN_DOMCTL_getdomaininfo;
- domctl.domain = (domid_t)domid;
- return ((do_domctl(xch, &domctl) < 0)
- ? -1 : domctl.u.getdomaininfo.max_pages);
}
/* It is possible to get memmap_info and memmap by
diff -r af3e98227d91 -r d1501c6dca3f tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xenctrl.h Tue Oct 12 15:06:42 2010 +0100
@@ -976,8 +976,6 @@ int xc_clear_domain_page(xc_interface *x
int xc_clear_domain_page(xc_interface *xch, uint32_t domid,
unsigned long dst_pfn);
-long xc_get_max_pages(xc_interface *xch, uint32_t domid);
-
int xc_mmuext_op(xc_interface *xch, struct mmuext_op *op, unsigned int nr_ops,
domid_t dom);
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 17 of 18] libxc: do not lock VCPU context in xc_ia64_pv_recv_vcpu_context
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 21f32c40fc2da4842ab8e93e52149a2baf7b25b0
# Parent d1501c6dca3f879287359cf9de877b86c32d2e95
libxc: do not lock VCPU context in xc_ia64_pv_recv_vcpu_context
xc_ia64_pv_recv_vcpu_context does not need to lock the ctxt buffer
since it calls xc_ia64_recv_vcpu_context which calls
xc_vcpu_setcontext which takes care of any necessary bouncing.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r d1501c6dca3f -r 21f32c40fc2d tools/libxc/ia64/xc_ia64_linux_restore.c
--- a/tools/libxc/ia64/xc_ia64_linux_restore.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_linux_restore.c Tue Oct 12 15:06:42 2010 +0100
@@ -246,12 +246,6 @@ xc_ia64_pv_recv_vcpu_context(xc_interfac
vcpu_guest_context_any_t ctxt_any;
vcpu_guest_context_t *ctxt = &ctxt_any.c;
- if (lock_pages(&ctxt_any, sizeof(ctxt_any))) {
- /* needed for build domctl, but might as well do early */
- ERROR("Unable to lock_pages ctxt");
- return -1;
- }
-
if (xc_ia64_recv_vcpu_context(xch, io_fd, dom, vcpu, &ctxt_any))
goto out;
@@ -264,7 +258,6 @@ xc_ia64_pv_recv_vcpu_context(xc_interfac
rc = 0;
out:
- unlock_pages(&ctxt, sizeof(ctxt));
return rc;
}
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-12 14:16 UTC
[Xen-devel] [PATCH 18 of 18] libxc: use generic xc_get_pfn_list on ia64
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1286892402 -3600
# Node ID 0daee22f925bd15ab5b9cd3807b0693e0055f176
# Parent 21f32c40fc2da4842ab8e93e52149a2baf7b25b0
libxc: use generic xc_get_pfn_list on ia64
The ia64-specific xc_get_pfn_list doesn't seem any different to the
generic xc_get_pfn_list once the call to xc_ia64_get_pfn_list is
expanded, so remove it and just use the generic one.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r 21f32c40fc2d -r 0daee22f925b tools/libxc/ia64/xc_ia64_stubs.c
--- a/tools/libxc/ia64/xc_ia64_stubs.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_stubs.c Tue Oct 12 15:06:42 2010 +0100
@@ -34,37 +34,6 @@ xc_ia64_fpsr_default(void)
return FPSR_DEFAULT;
}
-static int
-xc_ia64_get_pfn_list(xc_interface *xch, uint32_t domid, xen_pfn_t *pfn_buf,
- unsigned int start_page, unsigned int nr_pages)
-{
- DECLARE_DOMCTL;
- int ret;
-
- domctl.cmd = XEN_DOMCTL_getmemlist;
- domctl.domain = (domid_t)domid;
- domctl.u.getmemlist.max_pfns = nr_pages;
- domctl.u.getmemlist.start_pfn = start_page;
- domctl.u.getmemlist.num_pfns = 0;
- set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf);
-
- if (lock_pages(pfn_buf, nr_pages * sizeof(xen_pfn_t)) != 0) {
- PERROR("Could not lock pfn list buffer");
- return -1;
- }
- ret = do_domctl(xch, &domctl);
- unlock_pages(pfn_buf, nr_pages * sizeof(xen_pfn_t));
-
- return ret < 0 ? -1 : nr_pages;
-}
-
-int
-xc_get_pfn_list(xc_interface *xch, uint32_t domid, uint64_t *pfn_buf,
- unsigned long max_pfns)
-{
- return xc_ia64_get_pfn_list(xch, domid, (xen_pfn_t *)pfn_buf,
- 0, max_pfns);
-}
/* It is possible to get memmap_info and memmap by
foreign domain page mapping. But it's racy. Use hypercall to avoid
race. */
diff -r 21f32c40fc2d -r 0daee22f925b tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
+++ b/tools/libxc/xc_private.c Tue Oct 12 15:06:42 2010 +0100
@@ -492,7 +492,6 @@ int xc_machphys_mfn_list(xc_interface *x
return rc;
}
-#ifndef __ia64__
int xc_get_pfn_list(xc_interface *xch,
uint32_t domid,
uint64_t *pfn_buf,
@@ -521,7 +520,6 @@ int xc_get_pfn_list(xc_interface *xch,
return (ret < 0) ? -1 : domctl.u.getmemlist.num_pfns;
}
-#endif
long xc_get_tot_pages(xc_interface *xch, uint32_t domid)
{
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Jackson
2010-Oct-18 16:43 UTC
Re: [Xen-devel] [PATCH 00 of 18] libxc: preparation for hypercall buffers + random cleanups
Ian Campbell writes ("[Xen-devel] [PATCH 00 of 18] libxc: preparation for
hypercall buffers + random cleanups"):> The following contains some clean ups in preparation for the hypercall
> buffer patch series, plus some other bits a bobs which I happened to
> notice while preparing that series.
Thanks. I have applied all 18 and the two related qemu patches (in
what I think is the right order; I fiddled with the QEMU_TAG update
slightly).
I'm going to hold off on other patches today to give your series a
dedicated run of the nightly tests.
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Oct-19 08:25 UTC
Re: [Xen-devel] [PATCH 00 of 18] libxc: preparation for hypercall buffers + random cleanups
On Mon, 2010-10-18 at 17:43 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[Xen-devel] [PATCH 00 of 18] libxc: preparation for hypercall buffers + random cleanups"):
> > The following contains some clean ups in preparation for the hypercall
> > buffer patch series, plus some other bits a bobs which I happened to
> > notice while preparing that series.
>
> Thanks. I have applied all 18 and the two related qemu patches (in
> what I think is the right order; I fiddled with the QEMU_TAG update
> slightly).

Thanks, all looks correct to me. The compatibility notes which you added
to the commit messages are a very good idea.

> I'm going to hold off on other patches today to give your series a
> dedicated run of the nightly tests.

Good idea.

Thanks.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel