Here are some small changes for xenpaging.

The first change is a bugfix in the drop-page path. The following patches
are small cleanups; they could be applied now. The last one implements the
configuration interface for xenpaging: start it automatically, optionally
specify the directory to store the paging files, enable debug output, and
set the number of paged-in pages to keep in memory. If that's OK, I will
start working on xl support.


Olaf
Olaf Hering
2011-Mar-31 17:36 UTC
[Xen-devel] [PATCH 1 of 7] xenpaging: correct dropping pages to avoid full ring buffer
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301386985 -7200
# Node ID cc831886cb6a2ee356e132e331741dff2257fca3
# Parent 8ec7808f9c232e1aa6dcf9a51a8a8925444122a1
xenpaging: correct dropping pages to avoid full ring buffer
Doing a one-way channel from Xen to xenpaging is not possible with the
current ring buffer implementation. xenpaging uses the mem_event ring
buffer, which expects request/response pairs to make progress. The
previous patch, which tried to establish a one-way communication from
Xen to xenpaging, stalled the guest once the buffer was filled up with
requests. Correct page-dropping by taking the slow path and letting
p2m_mem_paging_resume() consume the response from xenpaging. This makes
room for yet another request/response pair and avoids hanging guests.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
diff -r 8ec7808f9c23 -r cc831886cb6a tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Thu Mar 24 11:09:02 2011 +0000
+++ b/tools/xenpaging/xenpaging.c Tue Mar 29 10:23:05 2011 +0200
@@ -653,19 +653,19 @@
ERROR("Error populating page");
goto out;
}
+ }
- /* Prepare the response */
- rsp.gfn = req.gfn;
- rsp.p2mt = req.p2mt;
- rsp.vcpu_id = req.vcpu_id;
- rsp.flags = req.flags;
+ /* Prepare the response */
+ rsp.gfn = req.gfn;
+ rsp.p2mt = req.p2mt;
+ rsp.vcpu_id = req.vcpu_id;
+ rsp.flags = req.flags;
- rc = xenpaging_resume_page(paging, &rsp, 1);
- if ( rc != 0 )
- {
- ERROR("Error resuming page");
- goto out;
- }
+ rc = xenpaging_resume_page(paging, &rsp, 1);
+ if ( rc != 0 )
+ {
+ ERROR("Error resuming page");
+ goto out;
}
/* Evict a new page to replace the one we just paged in */
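To make the request/response pairing explicit, here is a rough sketch of the
event-handling shape this patch restores. This is not the actual
tools/xenpaging/xenpaging.c code; error handling and the page-in/page-drop
details are omitted, and only the names visible in this series (get_request,
xenpaging_resume_page, the mem_event types and the paging handle) are reused.

    /* Simplified sketch, assuming the xenpaging.c context: every request
     * consumed from the mem_event ring is answered with a response, so the
     * ring can never fill up with unanswered requests, even for dropped
     * pages. */
    while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
    {
        mem_event_request_t req;
        mem_event_response_t rsp;

        get_request(&paging->mem_event, &req);

        /* ... page the gfn back in, or notice it was dropped ... */

        /* Always answer, so p2m_mem_paging_resume() consumes the response
         * and frees a slot in the ring. */
        rsp.gfn     = req.gfn;
        rsp.p2mt    = req.p2mt;
        rsp.vcpu_id = req.vcpu_id;
        rsp.flags   = req.flags;

        if ( xenpaging_resume_page(paging, &rsp, 1) != 0 )
            ERROR("Error resuming page");
    }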
Olaf Hering
2011-Mar-31 17:36 UTC
[Xen-devel] [PATCH 2 of 7] xenpaging: do not bounce p2mt to xenpaging
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301591517 -7200
# Node ID a811d86a48f400cd541500e0e6ae765fdcd02ef9
# Parent cc831886cb6a2ee356e132e331741dff2257fca3
xenpaging: do not bounce p2mt to xenpaging
Do not bounce p2mt to xenpaging because p2m_mem_paging_populate and
p2m_mem_paging_resume don't make use of p2mt. Only pages of type
p2m_ram_rw will be paged-out, and during page-in this type has to be
restored.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
diff -r cc831886cb6a -r a811d86a48f4 tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Tue Mar 29 10:23:05 2011 +0200
+++ b/tools/xenpaging/xenpaging.c Thu Mar 31 19:11:57 2011 +0200
@@ -657,7 +657,6 @@
/* Prepare the response */
rsp.gfn = req.gfn;
- rsp.p2mt = req.p2mt;
rsp.vcpu_id = req.vcpu_id;
rsp.flags = req.flags;
@@ -674,10 +673,8 @@
else
{
DPRINTF("page already populated (domain = %d; vcpu =
%d;"
- " p2mt = %x;"
" gfn = %"PRIx64"; paused = %d)\n",
paging->mem_event.domain_id, req.vcpu_id,
- req.p2mt,
req.gfn, req.flags & MEM_EVENT_FLAG_VCPU_PAUSED);
/* Tell Xen to resume the vcpu */
@@ -686,7 +683,6 @@
{
/* Prepare the response */
rsp.gfn = req.gfn;
- rsp.p2mt = req.p2mt;
rsp.vcpu_id = req.vcpu_id;
rsp.flags = req.flags;
diff -r cc831886cb6a -r a811d86a48f4 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c Tue Mar 29 10:23:05 2011 +0200
+++ b/xen/arch/x86/mm/p2m.c Thu Mar 31 19:11:57 2011 +0200
@@ -2903,7 +2903,6 @@
/* Send request to pager */
req.gfn = gfn;
- req.p2mt = p2mt;
req.vcpu_id = v->vcpu_id;
mem_event_put_request(d, &req);
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301591570 -7200
# Node ID cd35892de8ff2388aa46e3768393f217a3c63521
# Parent a811d86a48f400cd541500e0e6ae765fdcd02ef9
xenpaging: remove srand call
The policy now uses a linear algorithm instead of picking random gfn
numbers. Remove the call to srand().
Signed-off-by: Olaf Hering <olaf@aepfle.de>
diff -r a811d86a48f4 -r cd35892de8ff tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Thu Mar 31 19:11:57 2011 +0200
+++ b/tools/xenpaging/xenpaging.c Thu Mar 31 19:12:50 2011 +0200
@@ -544,9 +544,6 @@
domain_id = atoi(argv[1]);
num_pages = atoi(argv[2]);
- /* Seed random-number generator */
- srand(time(NULL));
-
/* Initialise domain paging */
paging = xenpaging_init(domain_id);
if ( paging == NULL )
Olaf Hering
2011-Mar-31 17:36 UTC
[Xen-devel] [PATCH 4 of 7] xenpaging: remove return values from functions that can not fail
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301591599 -7200
# Node ID 8825c216096a80e5590ce075da273eeb06c1e7aa
# Parent cd35892de8ff2388aa46e3768393f217a3c63521
xenpaging: remove return values from functions that can not fail
get_request() and put_response() cannot fail, so remove their return values
and update the calling functions.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
diff -r cd35892de8ff -r 8825c216096a tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Thu Mar 31 19:12:50 2011 +0200
+++ b/tools/xenpaging/xenpaging.c Thu Mar 31 19:13:19 2011 +0200
@@ -297,7 +297,7 @@
return -1;
}
-static int get_request(mem_event_t *mem_event, mem_event_request_t *req)
+static void get_request(mem_event_t *mem_event, mem_event_request_t *req)
{
mem_event_back_ring_t *back_ring;
RING_IDX req_cons;
@@ -316,11 +316,9 @@
back_ring->sring->req_event = req_cons + 1;
mem_event_ring_unlock(mem_event);
-
- return 0;
}
-static int put_response(mem_event_t *mem_event, mem_event_response_t *rsp)
+static void put_response(mem_event_t *mem_event, mem_event_response_t *rsp)
{
mem_event_back_ring_t *back_ring;
RING_IDX rsp_prod;
@@ -339,8 +337,6 @@
RING_PUSH_RESPONSES(back_ring);
mem_event_ring_unlock(mem_event);
-
- return 0;
}
static int xenpaging_evict_page(xenpaging_t *paging,
@@ -400,9 +396,7 @@
int ret;
/* Put the page info on the ring */
- ret = put_response(&paging->mem_event, rsp);
- if ( ret != 0 )
- goto out;
+ put_response(&paging->mem_event, rsp);
/* Notify policy of page being paged in */
if ( notify_policy )
@@ -612,12 +606,7 @@
while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
{
- rc = get_request(&paging->mem_event, &req);
- if ( rc != 0 )
- {
- ERROR("Error getting request");
- goto out;
- }
+ get_request(&paging->mem_event, &req);
/* Check if the page has already been paged in */
if ( test_and_clear_bit(req.gfn, paging->bitmap) )
Olaf Hering
2011-Mar-31 17:36 UTC
[Xen-devel] [PATCH 5 of 7] xenpaging: catch xc_mem_paging_resume errors
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301591618 -7200
# Node ID bbf495e57371ae102d4e2fbb8ce1a5c54a7357c4
# Parent 8825c216096a80e5590ce075da273eeb06c1e7aa
xenpaging: catch xc_mem_paging_resume errors
In the unlikely event that xc_mem_paging_resume() fails, do not overwrite the
error with the return value from xc_evtchn_notify().
Signed-off-by: Olaf Hering <olaf@aepfle.de>
diff -r 8825c216096a -r bbf495e57371 tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Thu Mar 31 19:13:19 2011 +0200
+++ b/tools/xenpaging/xenpaging.c Thu Mar 31 19:13:38 2011 +0200
@@ -405,8 +405,9 @@
/* Tell Xen page is ready */
ret = xc_mem_paging_resume(paging->xc_handle,
paging->mem_event.domain_id,
rsp->gfn);
- ret = xc_evtchn_notify(paging->mem_event.xce_handle,
- paging->mem_event.port);
+ if ( ret == 0 )
+ ret = xc_evtchn_notify(paging->mem_event.xce_handle,
+ paging->mem_event.port);
out:
return ret;
Olaf Hering
2011-Mar-31 17:36 UTC
[Xen-devel] [PATCH 6 of 7] xenpaging: pass integer to xenpaging_populate_page
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301591641 -7200
# Node ID 1d040925ea0dc5c01f7cf2c188ab01da48028f92
# Parent bbf495e57371ae102d4e2fbb8ce1a5c54a7357c4
xenpaging: pass integer to xenpaging_populate_page
Pass gfn as integer to xenpaging_populate_page(). xc_map_foreign_pages()
takes a pointer to a list of gfns, but it's a const pointer, so writing
the value back to the caller is not needed.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
diff -r bbf495e57371 -r 1d040925ea0d tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Thu Mar 31 19:13:38 2011 +0200
+++ b/tools/xenpaging/xenpaging.c Thu Mar 31 19:14:01 2011 +0200
@@ -413,28 +413,24 @@
return ret;
}
-static int xenpaging_populate_page(xenpaging_t *paging,
- uint64_t *gfn, int fd, int i)
+static int xenpaging_populate_page(xenpaging_t *paging, xen_pfn_t gfn, int fd, int i)
{
xc_interface *xch = paging->xc_handle;
- unsigned long _gfn;
void *page;
int ret;
unsigned char oom = 0;
- _gfn = *gfn;
- DPRINTF("populate_page < gfn %lx pageslot %d\n", _gfn, i);
+ DPRINTF("populate_page < gfn %"PRI_xen_pfn" pageslot
%d\n", gfn, i);
do
{
/* Tell Xen to allocate a page for the domain */
- ret = xc_mem_paging_prep(xch, paging->mem_event.domain_id,
- _gfn);
+ ret = xc_mem_paging_prep(xch, paging->mem_event.domain_id, gfn);
if ( ret != 0 )
{
if ( errno == ENOMEM )
{
if ( oom++ == 0 )
- DPRINTF("ENOMEM while preparing gfn %lx\n",
_gfn);
+ DPRINTF("ENOMEM while preparing gfn
%"PRI_xen_pfn"\n", gfn);
sleep(1);
continue;
}
@@ -447,8 +443,7 @@
/* Map page */
ret = -EFAULT;
page = xc_map_foreign_pages(xch, paging->mem_event.domain_id,
- PROT_READ | PROT_WRITE, &_gfn, 1);
- *gfn = _gfn;
+ PROT_READ | PROT_WRITE, &gfn, 1);
if ( page == NULL )
{
ERROR("Error mapping page: page is null");
@@ -634,7 +629,7 @@
else
{
/* Populate the page */
- rc = xenpaging_populate_page(paging, &req.gfn, fd, i);
+ rc = xenpaging_populate_page(paging, req.gfn, fd, i);
if ( rc != 0 )
{
ERROR("Error populating page");
Olaf Hering
2011-Mar-31 17:36 UTC
[Xen-devel] [PATCH 7 of 7] xenpaging: start xenpaging via config option
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301591717 -7200
# Node ID 93889c2c6aad3d8ab9c02da4197e9645a9a1aae2
# Parent 1d040925ea0dc5c01f7cf2c188ab01da48028f92
xenpaging: start xenpaging via config option
Start xenpaging via config option.
TODO: add libxl support
TODO: parse config values like 42K, 42M, 42G, 42%
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v4:
add config option for pagefile directory
add config option to enable debug
add config option to set policy mru_size
fail if chdir fails
force self.xenpaging* variables to be strings because an xm new may turn some
of them into type int, and later os.execve fails with a TypeError
v3:
decouple createXenPaging/destroyXenPaging from _createDevices/_removeDevices
init xenpaging variable to 0 if xenpaging is not in config file to
avoid string None coming from sxp file
v2:
unlink logfile instead of truncating it; this allows hardlinking it for
further inspection
diff -r 1d040925ea0d -r 93889c2c6aad tools/examples/xmexample.hvm
--- a/tools/examples/xmexample.hvm Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/examples/xmexample.hvm Thu Mar 31 19:15:17 2011 +0200
@@ -127,6 +127,18 @@
# Device Model to be used
device_model = 'qemu-dm'
+# number of guest pages to page-out, or -1 for entire guest memory range
+xenpaging=42
+
+# directory to store guest page file
+#xenpaging_workdir="/var/lib/xen/xenpaging"
+
+# enable debug output in pager
+#xenpaging_debug=0
+
+# number of paged-in pages to keep in memory
+#xenpaging_policy_mru_size=1024
+
#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
diff -r 1d040925ea0d -r 93889c2c6aad tools/python/README.XendConfig
--- a/tools/python/README.XendConfig Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/python/README.XendConfig Thu Mar 31 19:15:17 2011 +0200
@@ -120,6 +120,10 @@
image.vncdisplay
image.vncunused
image.hvm.device_model
+ image.hvm.xenpaging
+ image.hvm.xenpaging_workdir
+ image.hvm.xenpaging_debug
+ image.hvm.xenpaging_policy_mru_size
image.hvm.display
image.hvm.xauthority
image.hvm.vncconsole
diff -r 1d040925ea0d -r 93889c2c6aad tools/python/README.sxpcfg
--- a/tools/python/README.sxpcfg Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/python/README.sxpcfg Thu Mar 31 19:15:17 2011 +0200
@@ -51,6 +51,10 @@
- vncunused
(HVM)
- device_model
+ - xenpaging
+ - xenpaging_workdir
+ - xenpaging_debug
+ - xenpaging_policy_mru_size
- display
- xauthority
- vncconsole
diff -r 1d040925ea0d -r 93889c2c6aad tools/python/xen/xend/XendConfig.py
--- a/tools/python/xen/xend/XendConfig.py Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/python/xen/xend/XendConfig.py Thu Mar 31 19:15:17 2011 +0200
@@ -147,6 +147,10 @@
     'apic': int,
     'boot': str,
     'device_model': str,
+    'xenpaging': str,
+    'xenpaging_workdir': str,
+    'xenpaging_debug': str,
+    'xenpaging_policy_mru_size': str,
     'loader': str,
     'display' : str,
     'fda': str,
@@ -512,6 +516,14 @@
             self['platform']['nomigrate'] = 0
 
         if self.is_hvm():
+            if 'xenpaging' not in self['platform']:
+                self['platform']['xenpaging'] = "0"
+            if 'xenpaging_workdir' not in self['platform']:
+                self['platform']['xenpaging_workdir'] = "/var/lib/xen/xenpaging"
+            if 'xenpaging_debug' not in self['platform']:
+                self['platform']['xenpaging_debug'] = "0"
+            if 'xenpaging_policy_mru_size' not in self['platform']:
+                self['platform']['xenpaging_policy_mru_size'] = "0"
             if 'timer_mode' not in self['platform']:
                 self['platform']['timer_mode'] = 1
             if 'viridian' not in self['platform']:
diff -r 1d040925ea0d -r 93889c2c6aad tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/python/xen/xend/XendDomainInfo.py Thu Mar 31 19:15:17 2011 +0200
@@ -2246,6 +2246,8 @@
                  self.info['name_label'], self.domid, self.info['uuid'],
                  new_name, new_uuid)
         self._unwatchVm()
+        if self.image:
+            self.image.destroyXenPaging()
         self._releaseDevices()
         # Remove existing vm node in xenstore
         self._removeVm()
@@ -2913,6 +2915,9 @@
 
         self._createDevices()
 
+        if self.image:
+            self.image.createXenPaging()
+
         self.image.cleanupTmpImages()
 
         self.info['start_time'] = time.time()
@@ -2937,6 +2942,8 @@
             self.refresh_shutdown_lock.acquire()
             try:
                 self.unwatchShutdown()
+                if self.image:
+                    self.image.destroyXenPaging()
                 self._releaseDevices()
                 bootloader_tidy(self)
@@ -3016,6 +3023,7 @@
             self.image = image.create(self, self.info)
             if self.image:
                 self.image.createDeviceModel(True)
+                self.image.createXenPaging()
             self._storeDomDetails()
         self._registerWatches()
         self.refreshShutdown()
@@ -3151,6 +3159,8 @@
             # could also fetch a parsed note from xenstore
             fast = self.info.get_notes().get('SUSPEND_CANCEL') and 1 or 0
             if not fast:
+                if self.image:
+                    self.image.destroyXenPaging()
                 self._releaseDevices()
                 self.testDeviceComplete()
                 self.testvifsComplete()
@@ -3166,6 +3176,8 @@
                 self._storeDomDetails()
 
             self._createDevices()
+            if self.image:
+                self.image.createXenPaging()
             log.debug("XendDomainInfo.resumeDomain: devices created")
 
             xc.domain_resume(self.domid, fast)
diff -r 1d040925ea0d -r 93889c2c6aad tools/python/xen/xend/image.py
--- a/tools/python/xen/xend/image.py Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/python/xen/xend/image.py Thu Mar 31 19:15:17 2011 +0200
@@ -122,6 +122,11 @@
             self.vm.permissionsVm("image/cmdline", { 'dom': self.vm.getDomid(), 'read': True } )
 
         self.device_model = vmConfig['platform'].get('device_model')
+        self.xenpaging = str(vmConfig['platform'].get('xenpaging'))
+        self.xenpaging_workdir = str(vmConfig['platform'].get('xenpaging_workdir'))
+        self.xenpaging_debug = str(vmConfig['platform'].get('xenpaging_debug'))
+        self.xenpaging_policy_mru_size = str(vmConfig['platform'].get('xenpaging_policy_mru_size'))
+        self.xenpaging_pid = None
 
         self.display = vmConfig['platform'].get('display')
         self.xauthority = vmConfig['platform'].get('xauthority')
@@ -392,6 +397,88 @@
             sentinel_fifos_inuse[sentinel_path_fifo] = 1
             self.sentinel_path_fifo = sentinel_path_fifo
 
+    def createXenPaging(self):
+        if not self.vm.info.is_hvm():
+            return
+        if self.xenpaging == "0":
+            return
+        if self.xenpaging_pid:
+            return
+        xenpaging_bin = auxbin.pathTo("xenpaging")
+        args = [xenpaging_bin]
+        args = args + ([ "%d" % self.vm.getDomid()])
+        args = args + ([ "%s" % self.xenpaging])
+        env = dict(os.environ)
+        if not self.xenpaging_debug == "0":
+            env['XENPAGING_DEBUG'] = self.xenpaging_debug
+        if not self.xenpaging_policy_mru_size == "0":
+            env['XENPAGING_POLICY_MRU_SIZE'] = self.xenpaging_policy_mru_size
+        self.xenpaging_logfile = "/var/log/xen/xenpaging-%s.log" % str(self.vm.info['name_label'])
+        logfile_mode = os.O_WRONLY|os.O_CREAT|os.O_APPEND|os.O_TRUNC
+        null = os.open("/dev/null", os.O_RDONLY)
+        try:
+            os.unlink(self.xenpaging_logfile)
+        except:
+            pass
+        logfd = os.open(self.xenpaging_logfile, logfile_mode, 0644)
+        sys.stderr.flush()
+        contract = osdep.prefork("%s:%d" % (self.vm.getName(), self.vm.getDomid()))
+        xenpaging_pid = os.fork()
+        if xenpaging_pid == 0: #child
+            try:
+                osdep.postfork(contract)
+                os.dup2(null, 0)
+                os.dup2(logfd, 1)
+                os.dup2(logfd, 2)
+                os.chdir(self.xenpaging_workdir)
+                try:
+                    log.info("starting %s" % args)
+                    os.execve(xenpaging_bin, args, env)
+                except Exception, e:
+                    log.warn('failed to execute xenpaging: %s' % utils.exception_string(e))
+                    os._exit(126)
+            except:
+                log.warn("starting xenpaging in %s failed" % self.xenpaging_workdir)
+                os._exit(127)
+        else:
+            osdep.postfork(contract, abandon=True)
+            self.xenpaging_pid = xenpaging_pid
+            os.close(null)
+            os.close(logfd)
+
+    def destroyXenPaging(self):
+        if self.xenpaging == "0":
+            return
+        if self.xenpaging_pid:
+            try:
+                os.kill(self.xenpaging_pid, signal.SIGHUP)
+            except OSError, exn:
+                log.exception(exn)
+            for i in xrange(100):
+                try:
+                    (p, rv) = os.waitpid(self.xenpaging_pid, os.WNOHANG)
+                    if p == self.xenpaging_pid:
+                        break
+                except OSError:
+                    # This is expected if Xend has been restarted within
+                    # the life of this domain.  In this case, we can kill
+                    # the process, but we can't wait for it because it's
+                    # not our child. We continue this loop, and after it is
+                    # terminated make really sure the process is going away
+                    # (SIGKILL).
+                    pass
+                time.sleep(0.1)
+            else:
+                log.warning("xenpaging %d took more than 10s "
+                            "to terminate: sending SIGKILL" % self.xenpaging_pid)
+                try:
+                    os.kill(self.xenpaging_pid, signal.SIGKILL)
+                    os.waitpid(self.xenpaging_pid, 0)
+                except OSError:
+                    # This happens if the process doesn't exist.
+                    pass
+        self.xenpaging_pid = None
+
     def createDeviceModel(self, restore = False):
         if self.device_model is None:
             return
diff -r 1d040925ea0d -r 93889c2c6aad tools/python/xen/xm/create.py
--- a/tools/python/xen/xm/create.py Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/python/xen/xm/create.py Thu Mar 31 19:15:17 2011 +0200
@@ -491,6 +491,22 @@
           fn=set_value, default=None,
           use="Set the path of the root NFS directory.")
 
+gopts.var('xenpaging', val='NUM',
+          fn=set_value, default='0',
+          use="Number of pages to swap.")
+
+gopts.var('xenpaging_workdir', val='PATH',
+          fn=set_value, default='/var/lib/xen/xenpaging',
+          use="Directory to store the guest page file in.")
+
+gopts.var('xenpaging_debug', val='NUM',
+          fn=set_value, default='0',
+          use="Enable debug output in the pager.")
+
+gopts.var('xenpaging_policy_mru_size', val='NUM',
+          fn=set_value, default='0',
+          use="Number of paged-in pages to keep in memory.")
+
 gopts.var('device_model', val='FILE',
           fn=set_value, default=None,
           use="Path to device model program.")
@@ -1076,6 +1092,10 @@
     args = [ 'acpi', 'apic',
              'boot',
              'cpuid', 'cpuid_check',
+             'xenpaging',
+             'xenpaging_workdir',
+             'xenpaging_debug',
+             'xenpaging_policy_mru_size',
              'device_model', 'display',
              'fda', 'fdb',
              'gfx_passthru', 'guest_os_type',
diff -r 1d040925ea0d -r 93889c2c6aad tools/python/xen/xm/xenapi_create.py
--- a/tools/python/xen/xm/xenapi_create.py Thu Mar 31 19:14:01 2011 +0200
+++ b/tools/python/xen/xm/xenapi_create.py Thu Mar 31 19:15:17 2011 +0200
@@ -1085,6 +1085,10 @@
             'acpi',
             'apic',
             'boot',
+            'xenpaging',
+            'xenpaging_workdir',
+            'xenpaging_debug',
+            'xenpaging_policy_mru_size',
             'device_model',
             'loader',
             'fda',
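As a side note on the second TODO above (parsing config values like 42K, 42M,
42G or 42%): one purely illustrative way to turn such a suffixed string into a
page count on the xenpaging side could look like the sketch below. This is not
part of the series; the helper name, the 4 KiB page-size assumption and the
max_pages parameter (the guest's total page count, used only for the "%" form)
are mine.

    /* Hypothetical helper, not part of this series: convert "42", "42K",
     * "42M", "42G" or "42%" into a number of 4 KiB pages. */
    #include <stdlib.h>

    static long long parse_num_pages(const char *arg, long long max_pages)
    {
        char *end;
        long long val = strtoll(arg, &end, 10);

        switch ( *end )
        {
        case '\0':          return val;                         /* plain page count */
        case 'K': case 'k': return val * 1024 / 4096;
        case 'M': case 'm': return val * 1024 * 1024 / 4096;
        case 'G': case 'g': return val * 1024 * 1024 * 1024 / 4096;
        case '%':           return max_pages * val / 100;
        default:            return -1;                          /* unknown suffix */
        }
    }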
Patrick Colp
2011-Mar-31 17:48 UTC
Re: [Xen-devel] [PATCH 3 of 7] xenpaging: remove srand call
On 31 March 2011 10:36, Olaf Hering <olaf@aepfle.de> wrote:
> The policy now uses a linear algorithm instead of picking random gfn
> numbers. Remove the call to srand().

Is a linear algorithm better than random?


Patrick
Olaf Hering
2011-Mar-31 18:17 UTC
Re: [Xen-devel] [PATCH 3 of 7] xenpaging: remove srand call
On Thu, Mar 31, Patrick Colp wrote:
> Is a linear algorithm better than random?

The current linear policy can detect when no more pages can be
nominated. The previous random policy would instead just try forever
with random numbers and never find an end.


Olaf
Patrick Colp
2011-Mar-31 18:37 UTC
Re: [Xen-devel] [PATCH 3 of 7] xenpaging: remove srand call
On 31 March 2011 11:17, Olaf Hering <olaf@aepfle.de> wrote:
> The current linear policy can detect when no more pages can be
> nominated. The previous random policy would instead just try forever
> with random numbers and never find an end.

Yeah, I saw that. Is it actually possible to run out of pages to
nominate? I would think the only way this would happen is if you
specified that 100% of the guest memory is paged out. If it is
possible, then would it maybe be better to add a check to the random
policy to detect when it's tried all the pages? Of course, if linear
performs just as well (or poorly) as random, then there's no point
changing it from what it is now.


Patrick
Olaf Hering
2011-Apr-01 08:20 UTC
Re: [Xen-devel] [PATCH 3 of 7] xenpaging: remove srand call
On Thu, Mar 31, Patrick Colp wrote:
> Yeah, I saw that. Is it actually possible to run out of pages to
> nominate? I would think the only way this would happen is if you
> specified that 100% of the guest memory is paged out.

There is a wrap check in policy_choose_victim(). If 100% of the pages
should be swapped, nominate fails for a few and 100% can't be reached.
I think that's not easy to detect from within policy_choose_victim().
I haven't done any performance analysis of the policy, nor in general.
The performance with a linear approach is eventually better because the
loop does not need to wait for a random gfn number that's still free.
The bottleneck is likely the IO and the stopped vcpus, not testing an
array of bits.


Olaf
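To make the wrap check concrete, a simplified, hypothetical sketch of a linear
victim scan is shown below. It is not the actual policy_choose_victim() code
(the real xenpaging policy differs in detail); the helper name, the cursor
parameter and the use of test_bit() over a "gfn in use" bitmap are illustrative
assumptions.

    /* Sketch: scan linearly from where the last scan stopped; if the cursor
     * comes back around to the starting point without finding a usable gfn,
     * report exhaustion instead of retrying forever. */
    static long linear_choose_victim(unsigned long *inuse_bitmap,
                                     unsigned long max_pages,
                                     unsigned long *cursor)
    {
        unsigned long start = *cursor;

        do
        {
            unsigned long gfn = *cursor;

            *cursor = (*cursor + 1) % max_pages;   /* advance linearly */
            if ( !test_bit(gfn, inuse_bitmap) )    /* candidate found */
                return gfn;
        } while ( *cursor != start );              /* wrapped: nothing left */

        return -1;
    }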
On 31/03/2011 18:36, "Olaf Hering" <olaf@aepfle.de> wrote:> > Here are some small changes for xenpaging. > The first change is a bugfix in the drop page path. The following > patches are small cleanups. The could be applied now.All of these are toolstack patches, so I''ll leave it to a toolstack maintainer to Ack and apply. -- Keir> The last one implements the configuration interface for xenpaging to > start it automatically, optionally specify the directory to store the > paging files, enable debug and the amount of pages to keep in memory. > If thats ok, I will start working on xl support. > > > Olaf > > _______________________________________________ > Xen-devel mailing list > Xen-devel@lists.xensource.com > http://lists.xensource.com/xen-devel_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Patrick Colp
2011-Apr-02 19:29 UTC
Re: [Xen-devel] [PATCH 3 of 7] xenpaging: remove srand call
On 1 April 2011 01:20, Olaf Hering <olaf@aepfle.de> wrote:
> There is a wrap check in policy_choose_victim(). If 100% of the pages
> should be swapped, nominate fails for a few and 100% can't be reached.

The main thing you want to reduce is the number of misses in the guest,
though, rather than worrying too much about what the page-in code itself
is doing. I don't really think it'll make much of a difference which way
it's done (though it would be curious to know what it is). The way you've
done it has the wrap check, though, which is great.


Patrick