anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 00/14] xen device model support
From: Anthony PERARD <anthony.perard@citrix.com>

Hi all,

this is the fourth version of the patch series that adds Xen device model
support to QEMU. This is the list of changes we made on top of the last
version:

  - we addressed the code style change requests;
  - we have split the mapcache into two files, xen-mapcache.c and
    xen-mapcache-stub.c, with a check in configure to use one or the other
    (see the sketch after this message);
  - we have fixed the compilation issues with the user-only targets and
    with older Xen releases (3.3.0, 3.4.0 and 4.0.1); this comes with more
    checks in the configure script;
  - we have replaced -enable-xen with a more generic option: we introduce
    -accel (used for both kvm and xen), so for example QEMU is now started
    with "-accel xen" instead of "-enable-xen".

For the next version, we still have to remove the Xen-specific ACPI
implementation.

Anthony PERARD (14):
  xen: Replace some tab-indents with spaces (clean-up).
  xen: Support new libxc calls from xen unstable.
  xen: Add xen_machine_fv
  Introduce -accel command option.
  xen: Add xen in -accel option.
  xen: Add the Xen platform pci device
  piix_pci: Introduces Xen specific call for irq.
  xen: add a 8259 Interrupt Controller
  xen: Introduce the Xen mapcache
  Introduce qemu_ram_ptr_unlock.
  vl.c: Introduce getter for shutdown_requested and reset_requested.
  xen: Initialize event channels and io rings
  xen: Set running state in xenstore.
  xen: Add a Xen specific ACPI Implementation to target-xen

 Makefile.target      |   13 ++
 configure            |   70 +++++++-
 cpu-common.h         |    1 +
 exec.c               |   71 ++++++-
 hw/hw.h              |    3 +
 hw/pci_ids.h         |    2 +
 hw/piix_pci.c        |   28 +++-
 hw/xen.h             |   26 +++
 hw/xen_acpi_piix4.c  |  411 +++++++++++++++++++++++++++++++++++++
 hw/xen_backend.c     |  314 +++++++++++++++---------------
 hw/xen_backend.h     |    2 +-
 hw/xen_common.h      |   51 ++++--
 hw/xen_disk.c        |  414 +++++++++++++++++++------------------
 hw/xen_domainbuild.c |    2 +-
 hw/xen_machine_fv.c  |  156 ++++++++++++++
 hw/xen_nic.c         |  230 +++++++++++-----------
 hw/xen_platform.c    |  431 +++++++++++++++++++++++++++++++++++++++
 hw/xen_platform.h    |    8 +
 qemu-options.hx      |   10 +
 sysemu.h             |    2 +
 vl.c                 |   98 ++++++++-
 xen-all.c            |  546 ++++++++++++++++++++++++++++++++++++++++++++++++++
 xen-mapcache-stub.c  |   33 +++
 xen-mapcache.c       |  335 +++++++++++++++++++++++++++++++
 xen-mapcache.h       |   14 ++
 xen-stub.c           |   34 +++
 26 files changed, 2788 insertions(+), 517 deletions(-)
 create mode 100644 hw/xen_acpi_piix4.c
 create mode 100644 hw/xen_machine_fv.c
 create mode 100644 hw/xen_platform.c
 create mode 100644 hw/xen_platform.h
 create mode 100644 xen-all.c
 create mode 100644 xen-mapcache-stub.c
 create mode 100644 xen-mapcache.c
 create mode 100644 xen-mapcache.h
 create mode 100644 xen-stub.c

--
Anthony PERARD
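To make the mapcache split mentioned above concrete: configure links exactly
one of xen-mapcache.c or xen-mapcache-stub.c into a target, both providing
the same symbols, so the rest of QEMU can call the mapcache unconditionally
with no #ifdef at the call sites. Below is a minimal sketch of what the stub
side of such a split looks like; the prototype is a made-up stand-in, not
necessarily the series' actual entry point:

    /* xen-mapcache-stub.c -- illustrative sketch only, not the series' code */
    #include <stdint.h>
    #include <stddef.h>

    /* hypothetical mapcache entry point; the real name/signature may differ */
    uint8_t *qemu_map_cache(uint64_t phys_addr, uint64_t size, uint8_t lock)
    {
        /* without Xen there is nothing to map; the stub only satisfies
         * the linker for non-Xen builds */
        (void)phys_addr;
        (void)size;
        (void)lock;
        return NULL;
    }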
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 01/14] xen: Replace some tab-indents with spaces (clean-up).
From: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> --- hw/xen_backend.c | 308 ++++++++++++++++++++-------------------- hw/xen_disk.c | 412 +++++++++++++++++++++++++++--------------------------- hw/xen_nic.c | 222 +++++++++++++++--------------- 3 files changed, 471 insertions(+), 471 deletions(-) diff --git a/hw/xen_backend.c b/hw/xen_backend.c index a2e408f..860b038 100644 --- a/hw/xen_backend.c +++ b/hw/xen_backend.c @@ -59,7 +59,7 @@ int xenstore_write_str(const char *base, const char *node, const char *val) snprintf(abspath, sizeof(abspath), "%s/%s", base, node); if (!xs_write(xenstore, 0, abspath, val, strlen(val))) - return -1; + return -1; return 0; } @@ -95,7 +95,7 @@ int xenstore_read_int(const char *base, const char *node, int *ival) val = xenstore_read_str(base, node); if (val && 1 == sscanf(val, "%d", ival)) - rc = 0; + rc = 0; qemu_free(val); return rc; } @@ -134,16 +134,16 @@ int xenstore_read_fe_int(struct XenDevice *xendev, const char *node, int *ival) const char *xenbus_strstate(enum xenbus_state state) { - static const char *const name[] = { - [ XenbusStateUnknown ] = "Unknown", - [ XenbusStateInitialising ] = "Initialising", - [ XenbusStateInitWait ] = "InitWait", - [ XenbusStateInitialised ] = "Initialised", - [ XenbusStateConnected ] = "Connected", - [ XenbusStateClosing ] = "Closing", - [ XenbusStateClosed ] = "Closed", - }; - return (state < ARRAY_SIZE(name)) ? name[state] : "INVALID"; + static const char *const name[] = { + [ XenbusStateUnknown ] = "Unknown", + [ XenbusStateInitialising ] = "Initialising", + [ XenbusStateInitWait ] = "InitWait", + [ XenbusStateInitialised ] = "Initialised", + [ XenbusStateConnected ] = "Connected", + [ XenbusStateClosing ] = "Closing", + [ XenbusStateClosed ] = "Closed", + }; + return (state < ARRAY_SIZE(name)) ? 
name[state] : "INVALID"; } int xen_be_set_state(struct XenDevice *xendev, enum xenbus_state state) @@ -152,9 +152,9 @@ int xen_be_set_state(struct XenDevice *xendev, enum xenbus_state state) rc = xenstore_write_be_int(xendev, "state", state); if (rc < 0) - return rc; + return rc; xen_be_printf(xendev, 1, "backend state: %s -> %s\n", - xenbus_strstate(xendev->be_state), xenbus_strstate(state)); + xenbus_strstate(xendev->be_state), xenbus_strstate(state)); xendev->be_state = state; return 0; } @@ -166,13 +166,13 @@ struct XenDevice *xen_be_find_xendev(const char *type, int dom, int dev) struct XenDevice *xendev; QTAILQ_FOREACH(xendev, &xendevs, next) { - if (xendev->dom != dom) - continue; - if (xendev->dev != dev) - continue; - if (strcmp(xendev->type, type) != 0) - continue; - return xendev; + if (xendev->dom != dom) + continue; + if (xendev->dev != dev) + continue; + if (strcmp(xendev->type, type) != 0) + continue; + return xendev; } return NULL; } @@ -188,7 +188,7 @@ static struct XenDevice *xen_be_get_xendev(const char *type, int dom, int dev, xendev = xen_be_find_xendev(type, dom, dev); if (xendev) - return xendev; + return xendev; /* init new xendev */ xendev = qemu_mallocz(ops->size); @@ -199,9 +199,9 @@ static struct XenDevice *xen_be_get_xendev(const char *type, int dom, int dev, dom0 = xs_get_domain_path(xenstore, 0); snprintf(xendev->be, sizeof(xendev->be), "%s/backend/%s/%d/%d", - dom0, xendev->type, xendev->dom, xendev->dev); + dom0, xendev->type, xendev->dom, xendev->dev); snprintf(xendev->name, sizeof(xendev->name), "%s-%d", - xendev->type, xendev->dev); + xendev->type, xendev->dev); free(dom0); xendev->debug = debug; @@ -209,28 +209,28 @@ static struct XenDevice *xen_be_get_xendev(const char *type, int dom, int dev, xendev->evtchndev = xc_evtchn_open(); if (xendev->evtchndev < 0) { - xen_be_printf(NULL, 0, "can''t open evtchn device\n"); - qemu_free(xendev); - return NULL; + xen_be_printf(NULL, 0, "can''t open evtchn device\n"); + qemu_free(xendev); + return NULL; } fcntl(xc_evtchn_fd(xendev->evtchndev), F_SETFD, FD_CLOEXEC); if (ops->flags & DEVOPS_FLAG_NEED_GNTDEV) { - xendev->gnttabdev = xc_gnttab_open(); - if (xendev->gnttabdev < 0) { - xen_be_printf(NULL, 0, "can''t open gnttab device\n"); - xc_evtchn_close(xendev->evtchndev); - qemu_free(xendev); - return NULL; - } + xendev->gnttabdev = xc_gnttab_open(); + if (xendev->gnttabdev < 0) { + xen_be_printf(NULL, 0, "can''t open gnttab device\n"); + xc_evtchn_close(xendev->evtchndev); + qemu_free(xendev); + return NULL; + } } else { - xendev->gnttabdev = -1; + xendev->gnttabdev = -1; } QTAILQ_INSERT_TAIL(&xendevs, xendev, next); if (xendev->ops->alloc) - xendev->ops->alloc(xendev); + xendev->ops->alloc(xendev); return xendev; } @@ -251,28 +251,28 @@ static struct XenDevice *xen_be_del_xendev(int dom, int dev) xendev = xnext; xnext = xendev->next.tqe_next; - if (xendev->dom != dom) - continue; - if (xendev->dev != dev && dev != -1) - continue; + if (xendev->dom != dom) + continue; + if (xendev->dev != dev && dev != -1) + continue; - if (xendev->ops->free) - xendev->ops->free(xendev); + if (xendev->ops->free) + xendev->ops->free(xendev); - if (xendev->fe) { - char token[XEN_BUFSIZE]; - snprintf(token, sizeof(token), "fe:%p", xendev); - xs_unwatch(xenstore, xendev->fe, token); - qemu_free(xendev->fe); - } + if (xendev->fe) { + char token[XEN_BUFSIZE]; + snprintf(token, sizeof(token), "fe:%p", xendev); + xs_unwatch(xenstore, xendev->fe, token); + qemu_free(xendev->fe); + } - if (xendev->evtchndev >= 0) - 
xc_evtchn_close(xendev->evtchndev); - if (xendev->gnttabdev >= 0) - xc_gnttab_close(xendev->gnttabdev); + if (xendev->evtchndev >= 0) + xc_evtchn_close(xendev->evtchndev); + if (xendev->gnttabdev >= 0) + xc_gnttab_close(xendev->gnttabdev); - QTAILQ_REMOVE(&xendevs, xendev, next); - qemu_free(xendev); + QTAILQ_REMOVE(&xendevs, xendev, next); + qemu_free(xendev); } return NULL; } @@ -285,14 +285,14 @@ static struct XenDevice *xen_be_del_xendev(int dom, int dev) static void xen_be_backend_changed(struct XenDevice *xendev, const char *node) { if (node == NULL || strcmp(node, "online") == 0) { - if (xenstore_read_be_int(xendev, "online", &xendev->online) == -1) - xendev->online = 0; + if (xenstore_read_be_int(xendev, "online", &xendev->online) == -1) + xendev->online = 0; } if (node) { - xen_be_printf(xendev, 2, "backend update: %s\n", node); - if (xendev->ops->backend_changed) - xendev->ops->backend_changed(xendev, node); + xen_be_printf(xendev, 2, "backend update: %s\n", node); + if (xendev->ops->backend_changed) + xendev->ops->backend_changed(xendev, node); } } @@ -301,25 +301,25 @@ static void xen_be_frontend_changed(struct XenDevice *xendev, const char *node) int fe_state; if (node == NULL || strcmp(node, "state") == 0) { - if (xenstore_read_fe_int(xendev, "state", &fe_state) == -1) - fe_state = XenbusStateUnknown; - if (xendev->fe_state != fe_state) - xen_be_printf(xendev, 1, "frontend state: %s -> %s\n", - xenbus_strstate(xendev->fe_state), - xenbus_strstate(fe_state)); - xendev->fe_state = fe_state; + if (xenstore_read_fe_int(xendev, "state", &fe_state) == -1) + fe_state = XenbusStateUnknown; + if (xendev->fe_state != fe_state) + xen_be_printf(xendev, 1, "frontend state: %s -> %s\n", + xenbus_strstate(xendev->fe_state), + xenbus_strstate(fe_state)); + xendev->fe_state = fe_state; } if (node == NULL || strcmp(node, "protocol") == 0) { - qemu_free(xendev->protocol); - xendev->protocol = xenstore_read_fe_str(xendev, "protocol"); - if (xendev->protocol) - xen_be_printf(xendev, 1, "frontend protocol: %s\n", xendev->protocol); + qemu_free(xendev->protocol); + xendev->protocol = xenstore_read_fe_str(xendev, "protocol"); + if (xendev->protocol) + xen_be_printf(xendev, 1, "frontend protocol: %s\n", xendev->protocol); } if (node) { - xen_be_printf(xendev, 2, "frontend update: %s\n", node); - if (xendev->ops->frontend_changed) - xendev->ops->frontend_changed(xendev, node); + xen_be_printf(xendev, 2, "frontend update: %s\n", node); + if (xendev->ops->frontend_changed) + xendev->ops->frontend_changed(xendev, node); } } @@ -340,28 +340,28 @@ static int xen_be_try_setup(struct XenDevice *xendev) int be_state; if (xenstore_read_be_int(xendev, "state", &be_state) == -1) { - xen_be_printf(xendev, 0, "reading backend state failed\n"); - return -1; + xen_be_printf(xendev, 0, "reading backend state failed\n"); + return -1; } if (be_state != XenbusStateInitialising) { - xen_be_printf(xendev, 0, "initial backend state is wrong (%s)\n", - xenbus_strstate(be_state)); - return -1; + xen_be_printf(xendev, 0, "initial backend state is wrong (%s)\n", + xenbus_strstate(be_state)); + return -1; } xendev->fe = xenstore_read_be_str(xendev, "frontend"); if (xendev->fe == NULL) { - xen_be_printf(xendev, 0, "reading frontend path failed\n"); - return -1; + xen_be_printf(xendev, 0, "reading frontend path failed\n"); + return -1; } /* setup frontend watch */ snprintf(token, sizeof(token), "fe:%p", xendev); if (!xs_watch(xenstore, xendev->fe, token)) { - xen_be_printf(xendev, 0, "watching frontend path (%s) failed\n", - 
xendev->fe); - return -1; + xen_be_printf(xendev, 0, "watching frontend path (%s) failed\n", + xendev->fe); + return -1; } xen_be_set_state(xendev, XenbusStateInitialising); @@ -383,15 +383,15 @@ static int xen_be_try_init(struct XenDevice *xendev) int rc = 0; if (!xendev->online) { - xen_be_printf(xendev, 1, "not online\n"); - return -1; + xen_be_printf(xendev, 1, "not online\n"); + return -1; } if (xendev->ops->init) - rc = xendev->ops->init(xendev); + rc = xendev->ops->init(xendev); if (rc != 0) { - xen_be_printf(xendev, 1, "init() failed\n"); - return rc; + xen_be_printf(xendev, 1, "init() failed\n"); + return rc; } xenstore_write_be_str(xendev, "hotplug-status", "connected"); @@ -411,20 +411,20 @@ static int xen_be_try_connect(struct XenDevice *xendev) int rc = 0; if (xendev->fe_state != XenbusStateInitialised && - xendev->fe_state != XenbusStateConnected) { - if (xendev->ops->flags & DEVOPS_FLAG_IGNORE_STATE) { - xen_be_printf(xendev, 2, "frontend not ready, ignoring\n"); - } else { - xen_be_printf(xendev, 2, "frontend not ready (yet)\n"); - return -1; - } + xendev->fe_state != XenbusStateConnected) { + if (xendev->ops->flags & DEVOPS_FLAG_IGNORE_STATE) { + xen_be_printf(xendev, 2, "frontend not ready, ignoring\n"); + } else { + xen_be_printf(xendev, 2, "frontend not ready (yet)\n"); + return -1; + } } if (xendev->ops->connect) - rc = xendev->ops->connect(xendev); + rc = xendev->ops->connect(xendev); if (rc != 0) { - xen_be_printf(xendev, 0, "connect() failed\n"); - return rc; + xen_be_printf(xendev, 0, "connect() failed\n"); + return rc; } xen_be_set_state(xendev, XenbusStateConnected); @@ -441,7 +441,7 @@ static void xen_be_disconnect(struct XenDevice *xendev, enum xenbus_state state) if (xendev->be_state != XenbusStateClosing && xendev->be_state != XenbusStateClosed && xendev->ops->disconnect) - xendev->ops->disconnect(xendev); + xendev->ops->disconnect(xendev); if (xendev->be_state != state) xen_be_set_state(xendev, state); } @@ -468,31 +468,31 @@ void xen_be_check_state(struct XenDevice *xendev) /* frontend may request shutdown from almost anywhere */ if (xendev->fe_state == XenbusStateClosing || - xendev->fe_state == XenbusStateClosed) { - xen_be_disconnect(xendev, xendev->fe_state); - return; + xendev->fe_state == XenbusStateClosed) { + xen_be_disconnect(xendev, xendev->fe_state); + return; } /* check for possible backend state transitions */ for (;;) { - switch (xendev->be_state) { - case XenbusStateUnknown: - rc = xen_be_try_setup(xendev); - break; - case XenbusStateInitialising: - rc = xen_be_try_init(xendev); - break; - case XenbusStateInitWait: - rc = xen_be_try_connect(xendev); - break; + switch (xendev->be_state) { + case XenbusStateUnknown: + rc = xen_be_try_setup(xendev); + break; + case XenbusStateInitialising: + rc = xen_be_try_init(xendev); + break; + case XenbusStateInitWait: + rc = xen_be_try_connect(xendev); + break; case XenbusStateClosed: rc = xen_be_try_reset(xendev); break; - default: - rc = -1; - } - if (rc != 0) - break; + default: + rc = -1; + } + if (rc != 0) + break; } } @@ -511,26 +511,26 @@ static int xenstore_scan(const char *type, int dom, struct XenDevOps *ops) snprintf(path, sizeof(path), "%s/backend/%s/%d", dom0, type, dom); free(dom0); if (!xs_watch(xenstore, path, token)) { - xen_be_printf(NULL, 0, "xen be: watching backend path (%s) failed\n", path); - return -1; + xen_be_printf(NULL, 0, "xen be: watching backend path (%s) failed\n", path); + return -1; } /* look for backends */ dev = xs_directory(xenstore, 0, path, &cdev); if (!dev) - return 
0; + return 0; for (j = 0; j < cdev; j++) { - xendev = xen_be_get_xendev(type, dom, atoi(dev[j]), ops); - if (xendev == NULL) - continue; - xen_be_check_state(xendev); + xendev = xen_be_get_xendev(type, dom, atoi(dev[j]), ops); + if (xendev == NULL) + continue; + xen_be_check_state(xendev); } free(dev); return 0; } static void xenstore_update_be(char *watch, char *type, int dom, - struct XenDevOps *ops) + struct XenDevOps *ops) { struct XenDevice *xendev; char path[XEN_BUFSIZE], *dom0; @@ -540,24 +540,24 @@ static void xenstore_update_be(char *watch, char *type, int dom, len = snprintf(path, sizeof(path), "%s/backend/%s/%d", dom0, type, dom); free(dom0); if (strncmp(path, watch, len) != 0) - return; + return; if (sscanf(watch+len, "/%u/%255s", &dev, path) != 2) { - strcpy(path, ""); - if (sscanf(watch+len, "/%u", &dev) != 1) - dev = -1; + strcpy(path, ""); + if (sscanf(watch+len, "/%u", &dev) != 1) + dev = -1; } if (dev == -1) - return; + return; if (0) { - /* FIXME: detect devices being deleted from xenstore ... */ - xen_be_del_xendev(dom, dev); + /* FIXME: detect devices being deleted from xenstore ... */ + xen_be_del_xendev(dom, dev); } xendev = xen_be_get_xendev(type, dom, dev, ops); if (xendev != NULL) { - xen_be_backend_changed(xendev, path); - xen_be_check_state(xendev); + xen_be_backend_changed(xendev, path); + xen_be_check_state(xendev); } } @@ -568,9 +568,9 @@ static void xenstore_update_fe(char *watch, struct XenDevice *xendev) len = strlen(xendev->fe); if (strncmp(xendev->fe, watch, len) != 0) - return; + return; if (watch[len] != ''/'') - return; + return; node = watch + len + 1; xen_be_frontend_changed(xendev, node); @@ -585,13 +585,13 @@ static void xenstore_update(void *unused) vec = xs_read_watch(xenstore, &count); if (vec == NULL) - goto cleanup; + goto cleanup; if (sscanf(vec[XS_WATCH_TOKEN], "be:%" PRIxPTR ":%d:%" PRIxPTR, &type, &dom, &ops) == 3) - xenstore_update_be(vec[XS_WATCH_PATH], (void*)type, dom, (void*)ops); + xenstore_update_be(vec[XS_WATCH_PATH], (void*)type, dom, (void*)ops); if (sscanf(vec[XS_WATCH_TOKEN], "fe:%" PRIxPTR, &ptr) == 1) - xenstore_update_fe(vec[XS_WATCH_PATH], (void*)ptr); + xenstore_update_fe(vec[XS_WATCH_PATH], (void*)ptr); cleanup: free(vec); @@ -604,14 +604,14 @@ static void xen_be_evtchn_event(void *opaque) port = xc_evtchn_pending(xendev->evtchndev); if (port != xendev->local_port) { - xen_be_printf(xendev, 0, "xc_evtchn_pending returned %d (expected %d)\n", - port, xendev->local_port); - return; + xen_be_printf(xendev, 0, "xc_evtchn_pending returned %d (expected %d)\n", + port, xendev->local_port); + return; } xc_evtchn_unmask(xendev->evtchndev, port); if (xendev->ops->event) - xendev->ops->event(xendev); + xendev->ops->event(xendev); } /* -------------------------------------------------------------------- */ @@ -620,17 +620,17 @@ int xen_be_init(void) { xenstore = xs_daemon_open(); if (!xenstore) { - xen_be_printf(NULL, 0, "can''t connect to xenstored\n"); - return -1; + xen_be_printf(NULL, 0, "can''t connect to xenstored\n"); + return -1; } if (qemu_set_fd_handler(xs_fileno(xenstore), xenstore_update, NULL, NULL) < 0) - goto err; + goto err; xen_xc = xc_interface_open(); if (xen_xc == -1) { - xen_be_printf(NULL, 0, "can''t open xen interface\n"); - goto err; + xen_be_printf(NULL, 0, "can''t open xen interface\n"); + goto err; } return 0; @@ -650,23 +650,23 @@ int xen_be_register(const char *type, struct XenDevOps *ops) int xen_be_bind_evtchn(struct XenDevice *xendev) { if (xendev->local_port != -1) - return 0; + return 0; 
xendev->local_port = xc_evtchn_bind_interdomain - (xendev->evtchndev, xendev->dom, xendev->remote_port); + (xendev->evtchndev, xendev->dom, xendev->remote_port); if (xendev->local_port == -1) { - xen_be_printf(xendev, 0, "xc_evtchn_bind_interdomain failed\n"); - return -1; + xen_be_printf(xendev, 0, "xc_evtchn_bind_interdomain failed\n"); + return -1; } xen_be_printf(xendev, 2, "bind evtchn port %d\n", xendev->local_port); qemu_set_fd_handler(xc_evtchn_fd(xendev->evtchndev), - xen_be_evtchn_event, NULL, xendev); + xen_be_evtchn_event, NULL, xendev); return 0; } void xen_be_unbind_evtchn(struct XenDevice *xendev) { if (xendev->local_port == -1) - return; + return; qemu_set_fd_handler(xc_evtchn_fd(xendev->evtchndev), NULL, NULL, NULL); xc_evtchn_unbind(xendev->evtchndev, xendev->local_port); xen_be_printf(xendev, 2, "unbind evtchn port %d\n", xendev->local_port); diff --git a/hw/xen_disk.c b/hw/xen_disk.c index 134ac33..47280ee 100644 --- a/hw/xen_disk.c +++ b/hw/xen_disk.c @@ -120,17 +120,17 @@ static struct ioreq *ioreq_start(struct XenBlkDev *blkdev) struct ioreq *ioreq = NULL; if (QLIST_EMPTY(&blkdev->freelist)) { - if (blkdev->requests_total >= max_requests) - goto out; - /* allocate new struct */ - ioreq = qemu_mallocz(sizeof(*ioreq)); - ioreq->blkdev = blkdev; - blkdev->requests_total++; + if (blkdev->requests_total >= max_requests) + goto out; + /* allocate new struct */ + ioreq = qemu_mallocz(sizeof(*ioreq)); + ioreq->blkdev = blkdev; + blkdev->requests_total++; qemu_iovec_init(&ioreq->v, BLKIF_MAX_SEGMENTS_PER_REQUEST); } else { - /* get one from freelist */ - ioreq = QLIST_FIRST(&blkdev->freelist); - QLIST_REMOVE(ioreq, list); + /* get one from freelist */ + ioreq = QLIST_FIRST(&blkdev->freelist); + QLIST_REMOVE(ioreq, list); qemu_iovec_reset(&ioreq->v); } QLIST_INSERT_HEAD(&blkdev->inflight, ioreq, list); @@ -173,26 +173,26 @@ static int ioreq_parse(struct ioreq *ioreq) int i; xen_be_printf(&blkdev->xendev, 3, - "op %d, nr %d, handle %d, id %" PRId64 ", sector %" PRId64 "\n", - ioreq->req.operation, ioreq->req.nr_segments, - ioreq->req.handle, ioreq->req.id, ioreq->req.sector_number); + "op %d, nr %d, handle %d, id %" PRId64 ", sector %" PRId64 "\n", + ioreq->req.operation, ioreq->req.nr_segments, + ioreq->req.handle, ioreq->req.id, ioreq->req.sector_number); switch (ioreq->req.operation) { case BLKIF_OP_READ: - ioreq->prot = PROT_WRITE; /* to memory */ - break; + ioreq->prot = PROT_WRITE; /* to memory */ + break; case BLKIF_OP_WRITE_BARRIER: - if (!syncwrite) - ioreq->presync = ioreq->postsync = 1; - /* fall through */ + if (!syncwrite) + ioreq->presync = ioreq->postsync = 1; + /* fall through */ case BLKIF_OP_WRITE: - ioreq->prot = PROT_READ; /* from memory */ - if (syncwrite) - ioreq->postsync = 1; - break; + ioreq->prot = PROT_READ; /* from memory */ + if (syncwrite) + ioreq->postsync = 1; + break; default: - xen_be_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n", - ioreq->req.operation); - goto err; + xen_be_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n", + ioreq->req.operation); + goto err; }; if (ioreq->req.operation != BLKIF_OP_READ && blkdev->mode[0] != ''w'') { @@ -202,29 +202,29 @@ static int ioreq_parse(struct ioreq *ioreq) ioreq->start = ioreq->req.sector_number * blkdev->file_blk; for (i = 0; i < ioreq->req.nr_segments; i++) { - if (i == BLKIF_MAX_SEGMENTS_PER_REQUEST) { - xen_be_printf(&blkdev->xendev, 0, "error: nr_segments too big\n"); - goto err; - } - if (ioreq->req.seg[i].first_sect > ioreq->req.seg[i].last_sect) { - 
xen_be_printf(&blkdev->xendev, 0, "error: first > last sector\n"); - goto err; - } - if (ioreq->req.seg[i].last_sect * BLOCK_SIZE >= XC_PAGE_SIZE) { - xen_be_printf(&blkdev->xendev, 0, "error: page crossing\n"); - goto err; - } - - ioreq->domids[i] = blkdev->xendev.dom; - ioreq->refs[i] = ioreq->req.seg[i].gref; - - mem = ioreq->req.seg[i].first_sect * blkdev->file_blk; - len = (ioreq->req.seg[i].last_sect - ioreq->req.seg[i].first_sect + 1) * blkdev->file_blk; + if (i == BLKIF_MAX_SEGMENTS_PER_REQUEST) { + xen_be_printf(&blkdev->xendev, 0, "error: nr_segments too big\n"); + goto err; + } + if (ioreq->req.seg[i].first_sect > ioreq->req.seg[i].last_sect) { + xen_be_printf(&blkdev->xendev, 0, "error: first > last sector\n"); + goto err; + } + if (ioreq->req.seg[i].last_sect * BLOCK_SIZE >= XC_PAGE_SIZE) { + xen_be_printf(&blkdev->xendev, 0, "error: page crossing\n"); + goto err; + } + + ioreq->domids[i] = blkdev->xendev.dom; + ioreq->refs[i] = ioreq->req.seg[i].gref; + + mem = ioreq->req.seg[i].first_sect * blkdev->file_blk; + len = (ioreq->req.seg[i].last_sect - ioreq->req.seg[i].first_sect + 1) * blkdev->file_blk; qemu_iovec_add(&ioreq->v, (void*)mem, len); } if (ioreq->start + ioreq->v.size > blkdev->file_size) { - xen_be_printf(&blkdev->xendev, 0, "error: access beyond end of file\n"); - goto err; + xen_be_printf(&blkdev->xendev, 0, "error: access beyond end of file\n"); + goto err; } return 0; @@ -241,23 +241,23 @@ static void ioreq_unmap(struct ioreq *ioreq) if (ioreq->v.niov == 0) return; if (batch_maps) { - if (!ioreq->pages) - return; - if (xc_gnttab_munmap(gnt, ioreq->pages, ioreq->v.niov) != 0) - xen_be_printf(&ioreq->blkdev->xendev, 0, "xc_gnttab_munmap failed: %s\n", - strerror(errno)); - ioreq->blkdev->cnt_map -= ioreq->v.niov; - ioreq->pages = NULL; + if (!ioreq->pages) + return; + if (xc_gnttab_munmap(gnt, ioreq->pages, ioreq->v.niov) != 0) + xen_be_printf(&ioreq->blkdev->xendev, 0, "xc_gnttab_munmap failed: %s\n", + strerror(errno)); + ioreq->blkdev->cnt_map -= ioreq->v.niov; + ioreq->pages = NULL; } else { - for (i = 0; i < ioreq->v.niov; i++) { - if (!ioreq->page[i]) - continue; - if (xc_gnttab_munmap(gnt, ioreq->page[i], 1) != 0) - xen_be_printf(&ioreq->blkdev->xendev, 0, "xc_gnttab_munmap failed: %s\n", - strerror(errno)); - ioreq->blkdev->cnt_map--; - ioreq->page[i] = NULL; - } + for (i = 0; i < ioreq->v.niov; i++) { + if (!ioreq->page[i]) + continue; + if (xc_gnttab_munmap(gnt, ioreq->page[i], 1) != 0) + xen_be_printf(&ioreq->blkdev->xendev, 0, "xc_gnttab_munmap failed: %s\n", + strerror(errno)); + ioreq->blkdev->cnt_map--; + ioreq->page[i] = NULL; + } } } @@ -269,32 +269,32 @@ static int ioreq_map(struct ioreq *ioreq) if (ioreq->v.niov == 0) return 0; if (batch_maps) { - ioreq->pages = xc_gnttab_map_grant_refs - (gnt, ioreq->v.niov, ioreq->domids, ioreq->refs, ioreq->prot); - if (ioreq->pages == NULL) { - xen_be_printf(&ioreq->blkdev->xendev, 0, - "can''t map %d grant refs (%s, %d maps)\n", - ioreq->v.niov, strerror(errno), ioreq->blkdev->cnt_map); - return -1; - } - for (i = 0; i < ioreq->v.niov; i++) - ioreq->v.iov[i].iov_base = ioreq->pages + i * XC_PAGE_SIZE + - (uintptr_t)ioreq->v.iov[i].iov_base; - ioreq->blkdev->cnt_map += ioreq->v.niov; + ioreq->pages = xc_gnttab_map_grant_refs + (gnt, ioreq->v.niov, ioreq->domids, ioreq->refs, ioreq->prot); + if (ioreq->pages == NULL) { + xen_be_printf(&ioreq->blkdev->xendev, 0, + "can''t map %d grant refs (%s, %d maps)\n", + ioreq->v.niov, strerror(errno), ioreq->blkdev->cnt_map); + return -1; + } + for (i = 0; i < 
ioreq->v.niov; i++) + ioreq->v.iov[i].iov_base = ioreq->pages + i * XC_PAGE_SIZE + + (uintptr_t)ioreq->v.iov[i].iov_base; + ioreq->blkdev->cnt_map += ioreq->v.niov; } else { - for (i = 0; i < ioreq->v.niov; i++) { - ioreq->page[i] = xc_gnttab_map_grant_ref - (gnt, ioreq->domids[i], ioreq->refs[i], ioreq->prot); - if (ioreq->page[i] == NULL) { - xen_be_printf(&ioreq->blkdev->xendev, 0, - "can''t map grant ref %d (%s, %d maps)\n", - ioreq->refs[i], strerror(errno), ioreq->blkdev->cnt_map); - ioreq_unmap(ioreq); - return -1; - } - ioreq->v.iov[i].iov_base = ioreq->page[i] + (uintptr_t)ioreq->v.iov[i].iov_base; - ioreq->blkdev->cnt_map++; - } + for (i = 0; i < ioreq->v.niov; i++) { + ioreq->page[i] = xc_gnttab_map_grant_ref + (gnt, ioreq->domids[i], ioreq->refs[i], ioreq->prot); + if (ioreq->page[i] == NULL) { + xen_be_printf(&ioreq->blkdev->xendev, 0, + "can''t map grant ref %d (%s, %d maps)\n", + ioreq->refs[i], strerror(errno), ioreq->blkdev->cnt_map); + ioreq_unmap(ioreq); + return -1; + } + ioreq->v.iov[i].iov_base = ioreq->page[i] + (uintptr_t)ioreq->v.iov[i].iov_base; + ioreq->blkdev->cnt_map++; + } } return 0; } @@ -306,51 +306,51 @@ static int ioreq_runio_qemu_sync(struct ioreq *ioreq) off_t pos; if (ioreq_map(ioreq) == -1) - goto err; + goto err; if (ioreq->presync) - bdrv_flush(blkdev->bs); + bdrv_flush(blkdev->bs); switch (ioreq->req.operation) { case BLKIF_OP_READ: - pos = ioreq->start; - for (i = 0; i < ioreq->v.niov; i++) { - rc = bdrv_read(blkdev->bs, pos / BLOCK_SIZE, - ioreq->v.iov[i].iov_base, - ioreq->v.iov[i].iov_len / BLOCK_SIZE); - if (rc != 0) { - xen_be_printf(&blkdev->xendev, 0, "rd I/O error (%p, len %zd)\n", - ioreq->v.iov[i].iov_base, - ioreq->v.iov[i].iov_len); - goto err; - } - len += ioreq->v.iov[i].iov_len; - pos += ioreq->v.iov[i].iov_len; - } - break; + pos = ioreq->start; + for (i = 0; i < ioreq->v.niov; i++) { + rc = bdrv_read(blkdev->bs, pos / BLOCK_SIZE, + ioreq->v.iov[i].iov_base, + ioreq->v.iov[i].iov_len / BLOCK_SIZE); + if (rc != 0) { + xen_be_printf(&blkdev->xendev, 0, "rd I/O error (%p, len %zd)\n", + ioreq->v.iov[i].iov_base, + ioreq->v.iov[i].iov_len); + goto err; + } + len += ioreq->v.iov[i].iov_len; + pos += ioreq->v.iov[i].iov_len; + } + break; case BLKIF_OP_WRITE: case BLKIF_OP_WRITE_BARRIER: - pos = ioreq->start; - for (i = 0; i < ioreq->v.niov; i++) { - rc = bdrv_write(blkdev->bs, pos / BLOCK_SIZE, - ioreq->v.iov[i].iov_base, - ioreq->v.iov[i].iov_len / BLOCK_SIZE); - if (rc != 0) { - xen_be_printf(&blkdev->xendev, 0, "wr I/O error (%p, len %zd)\n", - ioreq->v.iov[i].iov_base, - ioreq->v.iov[i].iov_len); - goto err; - } - len += ioreq->v.iov[i].iov_len; - pos += ioreq->v.iov[i].iov_len; - } - break; + pos = ioreq->start; + for (i = 0; i < ioreq->v.niov; i++) { + rc = bdrv_write(blkdev->bs, pos / BLOCK_SIZE, + ioreq->v.iov[i].iov_base, + ioreq->v.iov[i].iov_len / BLOCK_SIZE); + if (rc != 0) { + xen_be_printf(&blkdev->xendev, 0, "wr I/O error (%p, len %zd)\n", + ioreq->v.iov[i].iov_base, + ioreq->v.iov[i].iov_len); + goto err; + } + len += ioreq->v.iov[i].iov_len; + pos += ioreq->v.iov[i].iov_len; + } + break; default: - /* unknown operation (shouldn''t happen -- parse catches this) */ - goto err; + /* unknown operation (shouldn''t happen -- parse catches this) */ + goto err; } if (ioreq->postsync) - bdrv_flush(blkdev->bs); + bdrv_flush(blkdev->bs); ioreq->status = BLKIF_RSP_OKAY; ioreq_unmap(ioreq); @@ -387,11 +387,11 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq) struct XenBlkDev *blkdev = ioreq->blkdev; if (ioreq_map(ioreq) == -1) 
- goto err; + goto err; ioreq->aio_inflight++; if (ioreq->presync) - bdrv_flush(blkdev->bs); /* FIXME: aio_flush() ??? */ + bdrv_flush(blkdev->bs); /* FIXME: aio_flush() ??? */ switch (ioreq->req.operation) { case BLKIF_OP_READ: @@ -399,21 +399,21 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq) bdrv_aio_readv(blkdev->bs, ioreq->start / BLOCK_SIZE, &ioreq->v, ioreq->v.size / BLOCK_SIZE, qemu_aio_complete, ioreq); - break; + break; case BLKIF_OP_WRITE: case BLKIF_OP_WRITE_BARRIER: ioreq->aio_inflight++; bdrv_aio_writev(blkdev->bs, ioreq->start / BLOCK_SIZE, &ioreq->v, ioreq->v.size / BLOCK_SIZE, qemu_aio_complete, ioreq); - break; + break; default: - /* unknown operation (shouldn''t happen -- parse catches this) */ - goto err; + /* unknown operation (shouldn''t happen -- parse catches this) */ + goto err; } if (ioreq->postsync) - bdrv_flush(blkdev->bs); /* FIXME: aio_flush() ??? */ + bdrv_flush(blkdev->bs); /* FIXME: aio_flush() ??? */ qemu_aio_complete(ioreq, 0); return 0; @@ -438,36 +438,36 @@ static int blk_send_response_one(struct ioreq *ioreq) /* Place on the response ring for the relevant domain. */ switch (blkdev->protocol) { case BLKIF_PROTOCOL_NATIVE: - dst = RING_GET_RESPONSE(&blkdev->rings.native, blkdev->rings.native.rsp_prod_pvt); - break; + dst = RING_GET_RESPONSE(&blkdev->rings.native, blkdev->rings.native.rsp_prod_pvt); + break; case BLKIF_PROTOCOL_X86_32: dst = RING_GET_RESPONSE(&blkdev->rings.x86_32_part, blkdev->rings.x86_32_part.rsp_prod_pvt); - break; + break; case BLKIF_PROTOCOL_X86_64: dst = RING_GET_RESPONSE(&blkdev->rings.x86_64_part, blkdev->rings.x86_64_part.rsp_prod_pvt); - break; + break; default: - dst = NULL; + dst = NULL; } memcpy(dst, &resp, sizeof(resp)); blkdev->rings.common.rsp_prod_pvt++; RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blkdev->rings.common, send_notify); if (blkdev->rings.common.rsp_prod_pvt == blkdev->rings.common.req_cons) { - /* - * Tail check for pending requests. Allows frontend to avoid - * notifications if requests are already in flight (lower - * overheads and promotes batching). - */ - RING_FINAL_CHECK_FOR_REQUESTS(&blkdev->rings.common, have_requests); + /* + * Tail check for pending requests. Allows frontend to avoid + * notifications if requests are already in flight (lower + * overheads and promotes batching). 
+ */ + RING_FINAL_CHECK_FOR_REQUESTS(&blkdev->rings.common, have_requests); } else if (RING_HAS_UNCONSUMED_REQUESTS(&blkdev->rings.common)) { - have_requests = 1; + have_requests = 1; } if (have_requests) - blkdev->more_work++; + blkdev->more_work++; return send_notify; } @@ -479,28 +479,28 @@ static void blk_send_response_all(struct XenBlkDev *blkdev) while (!QLIST_EMPTY(&blkdev->finished)) { ioreq = QLIST_FIRST(&blkdev->finished); - send_notify += blk_send_response_one(ioreq); - ioreq_release(ioreq); + send_notify += blk_send_response_one(ioreq); + ioreq_release(ioreq); } if (send_notify) - xen_be_send_notify(&blkdev->xendev); + xen_be_send_notify(&blkdev->xendev); } static int blk_get_request(struct XenBlkDev *blkdev, struct ioreq *ioreq, RING_IDX rc) { switch (blkdev->protocol) { case BLKIF_PROTOCOL_NATIVE: - memcpy(&ioreq->req, RING_GET_REQUEST(&blkdev->rings.native, rc), - sizeof(ioreq->req)); - break; + memcpy(&ioreq->req, RING_GET_REQUEST(&blkdev->rings.native, rc), + sizeof(ioreq->req)); + break; case BLKIF_PROTOCOL_X86_32: blkif_get_x86_32_req(&ioreq->req, RING_GET_REQUEST(&blkdev->rings.x86_32_part, rc)); - break; + break; case BLKIF_PROTOCOL_X86_64: blkif_get_x86_64_req(&ioreq->req, RING_GET_REQUEST(&blkdev->rings.x86_64_part, rc)); - break; + break; } return 0; } @@ -581,44 +581,44 @@ static int blk_init(struct XenDevice *xendev) /* read xenstore entries */ if (blkdev->params == NULL) { - blkdev->params = xenstore_read_be_str(&blkdev->xendev, "params"); + blkdev->params = xenstore_read_be_str(&blkdev->xendev, "params"); h = strchr(blkdev->params, '':''); - if (h != NULL) { - blkdev->fileproto = blkdev->params; - blkdev->filename = h+1; - *h = 0; - } else { - blkdev->fileproto = "<unset>"; - blkdev->filename = blkdev->params; - } + if (h != NULL) { + blkdev->fileproto = blkdev->params; + blkdev->filename = h+1; + *h = 0; + } else { + blkdev->fileproto = "<unset>"; + blkdev->filename = blkdev->params; + } } if (blkdev->mode == NULL) - blkdev->mode = xenstore_read_be_str(&blkdev->xendev, "mode"); + blkdev->mode = xenstore_read_be_str(&blkdev->xendev, "mode"); if (blkdev->type == NULL) - blkdev->type = xenstore_read_be_str(&blkdev->xendev, "type"); + blkdev->type = xenstore_read_be_str(&blkdev->xendev, "type"); if (blkdev->dev == NULL) - blkdev->dev = xenstore_read_be_str(&blkdev->xendev, "dev"); + blkdev->dev = xenstore_read_be_str(&blkdev->xendev, "dev"); if (blkdev->devtype == NULL) - blkdev->devtype = xenstore_read_be_str(&blkdev->xendev, "device-type"); + blkdev->devtype = xenstore_read_be_str(&blkdev->xendev, "device-type"); /* do we have all we need? */ if (blkdev->params == NULL || - blkdev->mode == NULL || - blkdev->type == NULL || - blkdev->dev == NULL) - return -1; + blkdev->mode == NULL || + blkdev->type == NULL || + blkdev->dev == NULL) + return -1; /* read-only ? */ if (strcmp(blkdev->mode, "w") == 0) { - qflags = BDRV_O_RDWR; + qflags = BDRV_O_RDWR; } else { - qflags = 0; - info |= VDISK_READONLY; + qflags = 0; + info |= VDISK_READONLY; } /* cdrom ? 
*/ if (blkdev->devtype && !strcmp(blkdev->devtype, "cdrom")) - info |= VDISK_CDROM; + info |= VDISK_CDROM; /* init qemu block driver */ index = (blkdev->xendev.dev - 202 * 256) / 16; @@ -626,21 +626,21 @@ static int blk_init(struct XenDevice *xendev) if (!blkdev->dinfo) { /* setup via xenbus -> create new block driver instance */ xen_be_printf(&blkdev->xendev, 2, "create new bdrv (xenbus setup)\n"); - blkdev->bs = bdrv_new(blkdev->dev); - if (blkdev->bs) { - if (bdrv_open(blkdev->bs, blkdev->filename, qflags, + blkdev->bs = bdrv_new(blkdev->dev); + if (blkdev->bs) { + if (bdrv_open(blkdev->bs, blkdev->filename, qflags, bdrv_find_whitelisted_format(blkdev->fileproto)) != 0) { - bdrv_delete(blkdev->bs); - blkdev->bs = NULL; - } - } - if (!blkdev->bs) - return -1; + bdrv_delete(blkdev->bs); + blkdev->bs = NULL; + } + } + if (!blkdev->bs) + return -1; } else { /* setup via qemu cmdline -> already setup for us */ xen_be_printf(&blkdev->xendev, 2, "get configured bdrv (cmdline setup)\n"); - blkdev->bs = blkdev->dinfo->bdrv; + blkdev->bs = blkdev->dinfo->bdrv; } blkdev->file_blk = BLOCK_SIZE; blkdev->file_size = bdrv_getlength(blkdev->bs); @@ -648,21 +648,21 @@ static int blk_init(struct XenDevice *xendev) xen_be_printf(&blkdev->xendev, 1, "bdrv_getlength: %d (%s) | drv %s\n", (int)blkdev->file_size, strerror(-blkdev->file_size), blkdev->bs->drv ? blkdev->bs->drv->format_name : "-"); - blkdev->file_size = 0; + blkdev->file_size = 0; } have_barriers = blkdev->bs->drv && blkdev->bs->drv->bdrv_flush ? 1 : 0; xen_be_printf(xendev, 1, "type \"%s\", fileproto \"%s\", filename \"%s\"," - " size %" PRId64 " (%" PRId64 " MB)\n", - blkdev->type, blkdev->fileproto, blkdev->filename, - blkdev->file_size, blkdev->file_size >> 20); + " size %" PRId64 " (%" PRId64 " MB)\n", + blkdev->type, blkdev->fileproto, blkdev->filename, + blkdev->file_size, blkdev->file_size >> 20); /* fill info */ xenstore_write_be_int(&blkdev->xendev, "feature-barrier", have_barriers); xenstore_write_be_int(&blkdev->xendev, "info", info); xenstore_write_be_int(&blkdev->xendev, "sector-size", blkdev->file_blk); xenstore_write_be_int(&blkdev->xendev, "sectors", - blkdev->file_size / blkdev->file_blk); + blkdev->file_size / blkdev->file_blk); return 0; } @@ -671,10 +671,10 @@ static int blk_connect(struct XenDevice *xendev) struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev); if (xenstore_read_fe_int(&blkdev->xendev, "ring-ref", &blkdev->ring_ref) == -1) - return -1; + return -1; if (xenstore_read_fe_int(&blkdev->xendev, "event-channel", &blkdev->xendev.remote_port) == -1) - return -1; + return -1; blkdev->protocol = BLKIF_PROTOCOL_NATIVE; if (blkdev->xendev.protocol) { @@ -685,42 +685,42 @@ static int blk_connect(struct XenDevice *xendev) } blkdev->sring = xc_gnttab_map_grant_ref(blkdev->xendev.gnttabdev, - blkdev->xendev.dom, - blkdev->ring_ref, - PROT_READ | PROT_WRITE); + blkdev->xendev.dom, + blkdev->ring_ref, + PROT_READ | PROT_WRITE); if (!blkdev->sring) - return -1; + return -1; blkdev->cnt_map++; switch (blkdev->protocol) { case BLKIF_PROTOCOL_NATIVE: { - blkif_sring_t *sring_native = blkdev->sring; - BACK_RING_INIT(&blkdev->rings.native, sring_native, XC_PAGE_SIZE); - break; + blkif_sring_t *sring_native = blkdev->sring; + BACK_RING_INIT(&blkdev->rings.native, sring_native, XC_PAGE_SIZE); + break; } case BLKIF_PROTOCOL_X86_32: { - blkif_x86_32_sring_t *sring_x86_32 = blkdev->sring; + blkif_x86_32_sring_t *sring_x86_32 = blkdev->sring; BACK_RING_INIT(&blkdev->rings.x86_32_part, sring_x86_32, XC_PAGE_SIZE); - 
break; + break; } case BLKIF_PROTOCOL_X86_64: { - blkif_x86_64_sring_t *sring_x86_64 = blkdev->sring; + blkif_x86_64_sring_t *sring_x86_64 = blkdev->sring; BACK_RING_INIT(&blkdev->rings.x86_64_part, sring_x86_64, XC_PAGE_SIZE); - break; + break; } } xen_be_bind_evtchn(&blkdev->xendev); xen_be_printf(&blkdev->xendev, 1, "ok: proto %s, ring-ref %d, " - "remote port %d, local port %d\n", - blkdev->xendev.protocol, blkdev->ring_ref, - blkdev->xendev.remote_port, blkdev->xendev.local_port); + "remote port %d, local port %d\n", + blkdev->xendev.protocol, blkdev->ring_ref, + blkdev->xendev.remote_port, blkdev->xendev.local_port); return 0; } @@ -734,14 +734,14 @@ static void blk_disconnect(struct XenDevice *xendev) bdrv_close(blkdev->bs); bdrv_delete(blkdev->bs); } - blkdev->bs = NULL; + blkdev->bs = NULL; } xen_be_unbind_evtchn(&blkdev->xendev); if (blkdev->sring) { - xc_gnttab_munmap(blkdev->xendev.gnttabdev, blkdev->sring, 1); - blkdev->cnt_map--; - blkdev->sring = NULL; + xc_gnttab_munmap(blkdev->xendev.gnttabdev, blkdev->sring, 1); + blkdev->cnt_map--; + blkdev->sring = NULL; } } @@ -751,10 +751,10 @@ static int blk_free(struct XenDevice *xendev) struct ioreq *ioreq; while (!QLIST_EMPTY(&blkdev->freelist)) { - ioreq = QLIST_FIRST(&blkdev->freelist); + ioreq = QLIST_FIRST(&blkdev->freelist); QLIST_REMOVE(ioreq, list); qemu_iovec_destroy(&ioreq->v); - qemu_free(ioreq); + qemu_free(ioreq); } qemu_free(blkdev->params); diff --git a/hw/xen_nic.c b/hw/xen_nic.c index 08055b8..8fcf856 100644 --- a/hw/xen_nic.c +++ b/hw/xen_nic.c @@ -75,19 +75,19 @@ static void net_tx_response(struct XenNetDev *netdev, netif_tx_request_t *txp, i #if 0 if (txp->flags & NETTXF_extra_info) - RING_GET_RESPONSE(&netdev->tx_ring, ++i)->status = NETIF_RSP_NULL; + RING_GET_RESPONSE(&netdev->tx_ring, ++i)->status = NETIF_RSP_NULL; #endif netdev->tx_ring.rsp_prod_pvt = ++i; RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&netdev->tx_ring, notify); if (notify) - xen_be_send_notify(&netdev->xendev); + xen_be_send_notify(&netdev->xendev); if (i == netdev->tx_ring.req_cons) { - int more_to_do; - RING_FINAL_CHECK_FOR_REQUESTS(&netdev->tx_ring, more_to_do); - if (more_to_do) - netdev->tx_work++; + int more_to_do; + RING_FINAL_CHECK_FOR_REQUESTS(&netdev->tx_ring, more_to_do); + if (more_to_do) + netdev->tx_work++; } } @@ -101,10 +101,10 @@ static void net_tx_error(struct XenNetDev *netdev, netif_tx_request_t *txp, RING RING_IDX cons = netdev->tx_ring.req_cons; do { - make_tx_response(netif, txp, NETIF_RSP_ERROR); - if (cons >= end) - break; - txp = RING_GET_REQUEST(&netdev->tx_ring, cons++); + make_tx_response(netif, txp, NETIF_RSP_ERROR); + if (cons >= end) + break; + txp = RING_GET_REQUEST(&netdev->tx_ring, cons++); } while (1); netdev->tx_ring.req_cons = cons; netif_schedule_work(netif); @@ -122,75 +122,75 @@ static void net_tx_packets(struct XenNetDev *netdev) void *tmpbuf = NULL; for (;;) { - rc = netdev->tx_ring.req_cons; - rp = netdev->tx_ring.sring->req_prod; - xen_rmb(); /* Ensure we see queued requests up to ''rp''. */ + rc = netdev->tx_ring.req_cons; + rp = netdev->tx_ring.sring->req_prod; + xen_rmb(); /* Ensure we see queued requests up to ''rp''. 
*/ - while ((rc != rp)) { - if (RING_REQUEST_CONS_OVERFLOW(&netdev->tx_ring, rc)) - break; - memcpy(&txreq, RING_GET_REQUEST(&netdev->tx_ring, rc), sizeof(txreq)); - netdev->tx_ring.req_cons = ++rc; + while ((rc != rp)) { + if (RING_REQUEST_CONS_OVERFLOW(&netdev->tx_ring, rc)) + break; + memcpy(&txreq, RING_GET_REQUEST(&netdev->tx_ring, rc), sizeof(txreq)); + netdev->tx_ring.req_cons = ++rc; #if 1 - /* should not happen in theory, we don''t announce the * - * feature-{sg,gso,whatelse} flags in xenstore (yet?) */ - if (txreq.flags & NETTXF_extra_info) { - xen_be_printf(&netdev->xendev, 0, "FIXME: extra info flag\n"); - net_tx_error(netdev, &txreq, rc); - continue; - } - if (txreq.flags & NETTXF_more_data) { - xen_be_printf(&netdev->xendev, 0, "FIXME: more data flag\n"); - net_tx_error(netdev, &txreq, rc); - continue; - } + /* should not happen in theory, we don''t announce the * + * feature-{sg,gso,whatelse} flags in xenstore (yet?) */ + if (txreq.flags & NETTXF_extra_info) { + xen_be_printf(&netdev->xendev, 0, "FIXME: extra info flag\n"); + net_tx_error(netdev, &txreq, rc); + continue; + } + if (txreq.flags & NETTXF_more_data) { + xen_be_printf(&netdev->xendev, 0, "FIXME: more data flag\n"); + net_tx_error(netdev, &txreq, rc); + continue; + } #endif - if (txreq.size < 14) { - xen_be_printf(&netdev->xendev, 0, "bad packet size: %d\n", txreq.size); - net_tx_error(netdev, &txreq, rc); - continue; - } - - if ((txreq.offset + txreq.size) > XC_PAGE_SIZE) { - xen_be_printf(&netdev->xendev, 0, "error: page crossing\n"); - net_tx_error(netdev, &txreq, rc); - continue; - } - - xen_be_printf(&netdev->xendev, 3, "tx packet ref %d, off %d, len %d, flags 0x%x%s%s%s%s\n", - txreq.gref, txreq.offset, txreq.size, txreq.flags, - (txreq.flags & NETTXF_csum_blank) ? " csum_blank" : "", - (txreq.flags & NETTXF_data_validated) ? " data_validated" : "", - (txreq.flags & NETTXF_more_data) ? " more_data" : "", - (txreq.flags & NETTXF_extra_info) ? " extra_info" : ""); - - page = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, - netdev->xendev.dom, - txreq.gref, PROT_READ); - if (page == NULL) { - xen_be_printf(&netdev->xendev, 0, "error: tx gref dereference failed (%d)\n", + if (txreq.size < 14) { + xen_be_printf(&netdev->xendev, 0, "bad packet size: %d\n", txreq.size); + net_tx_error(netdev, &txreq, rc); + continue; + } + + if ((txreq.offset + txreq.size) > XC_PAGE_SIZE) { + xen_be_printf(&netdev->xendev, 0, "error: page crossing\n"); + net_tx_error(netdev, &txreq, rc); + continue; + } + + xen_be_printf(&netdev->xendev, 3, "tx packet ref %d, off %d, len %d, flags 0x%x%s%s%s%s\n", + txreq.gref, txreq.offset, txreq.size, txreq.flags, + (txreq.flags & NETTXF_csum_blank) ? " csum_blank" : "", + (txreq.flags & NETTXF_data_validated) ? " data_validated" : "", + (txreq.flags & NETTXF_more_data) ? " more_data" : "", + (txreq.flags & NETTXF_extra_info) ? 
" extra_info" : ""); + + page = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, + netdev->xendev.dom, + txreq.gref, PROT_READ); + if (page == NULL) { + xen_be_printf(&netdev->xendev, 0, "error: tx gref dereference failed (%d)\n", txreq.gref); - net_tx_error(netdev, &txreq, rc); - continue; - } - if (txreq.flags & NETTXF_csum_blank) { + net_tx_error(netdev, &txreq, rc); + continue; + } + if (txreq.flags & NETTXF_csum_blank) { /* have read-only mapping -> can''t fill checksum in-place */ if (!tmpbuf) tmpbuf = qemu_malloc(XC_PAGE_SIZE); memcpy(tmpbuf, page + txreq.offset, txreq.size); - net_checksum_calculate(tmpbuf, txreq.size); + net_checksum_calculate(tmpbuf, txreq.size); qemu_send_packet(&netdev->nic->nc, tmpbuf, txreq.size); } else { qemu_send_packet(&netdev->nic->nc, page + txreq.offset, txreq.size); } - xc_gnttab_munmap(netdev->xendev.gnttabdev, page, 1); - net_tx_response(netdev, &txreq, NETIF_RSP_OKAY); - } - if (!netdev->tx_work) - break; - netdev->tx_work = 0; + xc_gnttab_munmap(netdev->xendev.gnttabdev, page, 1); + net_tx_response(netdev, &txreq, NETIF_RSP_OKAY); + } + if (!netdev->tx_work) + break; + netdev->tx_work = 0; } qemu_free(tmpbuf); } @@ -198,9 +198,9 @@ static void net_tx_packets(struct XenNetDev *netdev) /* ------------------------------------------------------------- */ static void net_rx_response(struct XenNetDev *netdev, - netif_rx_request_t *req, int8_t st, - uint16_t offset, uint16_t size, - uint16_t flags) + netif_rx_request_t *req, int8_t st, + uint16_t offset, uint16_t size, + uint16_t flags) { RING_IDX i = netdev->rx_ring.rsp_prod_pvt; netif_rx_response_t *resp; @@ -212,15 +212,15 @@ static void net_rx_response(struct XenNetDev *netdev, resp->id = req->id; resp->status = (int16_t)size; if (st < 0) - resp->status = (int16_t)st; + resp->status = (int16_t)st; xen_be_printf(&netdev->xendev, 3, "rx response: idx %d, status %d, flags 0x%x\n", - i, resp->status, resp->flags); + i, resp->status, resp->flags); netdev->rx_ring.rsp_prod_pvt = ++i; RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&netdev->rx_ring, notify); if (notify) - xen_be_send_notify(&netdev->xendev); + xen_be_send_notify(&netdev->xendev); } #define NET_IP_ALIGN 2 @@ -231,16 +231,16 @@ static int net_rx_ok(VLANClientState *nc) RING_IDX rc, rp; if (netdev->xendev.be_state != XenbusStateConnected) - return 0; + return 0; rc = netdev->rx_ring.req_cons; rp = netdev->rx_ring.sring->req_prod; xen_rmb(); if (rc == rp || RING_REQUEST_CONS_OVERFLOW(&netdev->rx_ring, rc)) { - xen_be_printf(&netdev->xendev, 2, "%s: no rx buffers (%d/%d)\n", - __FUNCTION__, rc, rp); - return 0; + xen_be_printf(&netdev->xendev, 2, "%s: no rx buffers (%d/%d)\n", + __FUNCTION__, rc, rp); + return 0; } return 1; } @@ -253,33 +253,33 @@ static ssize_t net_rx_packet(VLANClientState *nc, const uint8_t *buf, size_t siz void *page; if (netdev->xendev.be_state != XenbusStateConnected) - return -1; + return -1; rc = netdev->rx_ring.req_cons; rp = netdev->rx_ring.sring->req_prod; xen_rmb(); /* Ensure we see queued requests up to ''rp''. 
*/ if (rc == rp || RING_REQUEST_CONS_OVERFLOW(&netdev->rx_ring, rc)) { - xen_be_printf(&netdev->xendev, 2, "no buffer, drop packet\n"); - return -1; + xen_be_printf(&netdev->xendev, 2, "no buffer, drop packet\n"); + return -1; } if (size > XC_PAGE_SIZE - NET_IP_ALIGN) { - xen_be_printf(&netdev->xendev, 0, "packet too big (%lu > %ld)", - (unsigned long)size, XC_PAGE_SIZE - NET_IP_ALIGN); - return -1; + xen_be_printf(&netdev->xendev, 0, "packet too big (%lu > %ld)", + (unsigned long)size, XC_PAGE_SIZE - NET_IP_ALIGN); + return -1; } memcpy(&rxreq, RING_GET_REQUEST(&netdev->rx_ring, rc), sizeof(rxreq)); netdev->rx_ring.req_cons = ++rc; page = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, - netdev->xendev.dom, - rxreq.gref, PROT_WRITE); + netdev->xendev.dom, + rxreq.gref, PROT_WRITE); if (page == NULL) { - xen_be_printf(&netdev->xendev, 0, "error: rx gref dereference failed (%d)\n", + xen_be_printf(&netdev->xendev, 0, "error: rx gref dereference failed (%d)\n", rxreq.gref); - net_rx_response(netdev, &rxreq, NETIF_RSP_ERROR, 0, 0, 0); - return -1; + net_rx_response(netdev, &rxreq, NETIF_RSP_ERROR, 0, 0, 0); + return -1; } memcpy(page + NET_IP_ALIGN, buf, size); xc_gnttab_munmap(netdev->xendev.gnttabdev, page, 1); @@ -303,11 +303,11 @@ static int net_init(struct XenDevice *xendev) /* read xenstore entries */ if (netdev->mac == NULL) - netdev->mac = xenstore_read_be_str(&netdev->xendev, "mac"); + netdev->mac = xenstore_read_be_str(&netdev->xendev, "mac"); /* do we have all we need? */ if (netdev->mac == NULL) - return -1; + return -1; if (net_parse_macaddr(netdev->conf.macaddr.a, netdev->mac) < 0) return -1; @@ -334,41 +334,41 @@ static int net_connect(struct XenDevice *xendev) int rx_copy; if (xenstore_read_fe_int(&netdev->xendev, "tx-ring-ref", - &netdev->tx_ring_ref) == -1) - return -1; + &netdev->tx_ring_ref) == -1) + return -1; if (xenstore_read_fe_int(&netdev->xendev, "rx-ring-ref", - &netdev->rx_ring_ref) == -1) - return 1; + &netdev->rx_ring_ref) == -1) + return 1; if (xenstore_read_fe_int(&netdev->xendev, "event-channel", - &netdev->xendev.remote_port) == -1) - return -1; + &netdev->xendev.remote_port) == -1) + return -1; if (xenstore_read_fe_int(&netdev->xendev, "request-rx-copy", &rx_copy) == -1) - rx_copy = 0; + rx_copy = 0; if (rx_copy == 0) { - xen_be_printf(&netdev->xendev, 0, "frontend doesn''t support rx-copy.\n"); - return -1; + xen_be_printf(&netdev->xendev, 0, "frontend doesn''t support rx-copy.\n"); + return -1; } netdev->txs = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, - netdev->xendev.dom, - netdev->tx_ring_ref, - PROT_READ | PROT_WRITE); + netdev->xendev.dom, + netdev->tx_ring_ref, + PROT_READ | PROT_WRITE); netdev->rxs = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, - netdev->xendev.dom, - netdev->rx_ring_ref, - PROT_READ | PROT_WRITE); + netdev->xendev.dom, + netdev->rx_ring_ref, + PROT_READ | PROT_WRITE); if (!netdev->txs || !netdev->rxs) - return -1; + return -1; BACK_RING_INIT(&netdev->tx_ring, netdev->txs, XC_PAGE_SIZE); BACK_RING_INIT(&netdev->rx_ring, netdev->rxs, XC_PAGE_SIZE); xen_be_bind_evtchn(&netdev->xendev); xen_be_printf(&netdev->xendev, 1, "ok: tx-ring-ref %d, rx-ring-ref %d, " - "remote port %d, local port %d\n", - netdev->tx_ring_ref, netdev->rx_ring_ref, - netdev->xendev.remote_port, netdev->xendev.local_port); + "remote port %d, local port %d\n", + netdev->tx_ring_ref, netdev->rx_ring_ref, + netdev->xendev.remote_port, netdev->xendev.local_port); net_tx_packets(netdev); return 0; @@ -381,12 +381,12 @@ static void net_disconnect(struct 
XenDevice *xendev) xen_be_unbind_evtchn(&netdev->xendev); if (netdev->txs) { - xc_gnttab_munmap(netdev->xendev.gnttabdev, netdev->txs, 1); - netdev->txs = NULL; + xc_gnttab_munmap(netdev->xendev.gnttabdev, netdev->txs, 1); + netdev->txs = NULL; } if (netdev->rxs) { - xc_gnttab_munmap(netdev->xendev.gnttabdev, netdev->rxs, 1); - netdev->rxs = NULL; + xc_gnttab_munmap(netdev->xendev.gnttabdev, netdev->rxs, 1); + netdev->rxs = NULL; } if (netdev->nic) { qemu_del_vlan_client(&netdev->nic->nc); -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 02/14] xen: Support new libxc calls from xen unstable.
From: Anthony PERARD <anthony.perard@citrix.com> Update the libxenctrl calls in Qemu to use the new interface, otherwise Qemu wouldn''t be able to build against new versions of the library. We also check libxenctrl version in configure, from Xen 3.3.0 to Xen unstable. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> --- configure | 67 ++++++++++++++++++++++++++++++++++++++++++++++++- hw/xen_backend.c | 10 +++--- hw/xen_backend.h | 2 +- hw/xen_common.h | 38 +++++++++++++++++----------- hw/xen_disk.c | 12 ++++---- hw/xen_domainbuild.c | 2 +- hw/xen_nic.c | 16 ++++++------ 7 files changed, 109 insertions(+), 38 deletions(-) diff --git a/configure b/configure index 4061cb7..18c3fa0 100755 --- a/configure +++ b/configure @@ -274,6 +274,7 @@ vnc_jpeg="" vnc_png="" vnc_thread="no" xen="" +xen_ctrl_version="" linux_aio="" attr="" vhost_net="" @@ -1110,20 +1111,81 @@ fi if test "$xen" != "no" ; then xen_libs="-lxenstore -lxenctrl -lxenguest" + + # Xen unstable cat > $TMPC <<EOF #include <xenctrl.h> #include <xs.h> -int main(void) { xs_daemon_open(); xc_interface_open(); return 0; } +#include <stdint.h> +#include <xen/hvm/hvm_info_table.h> +#if !defined(HVM_MAX_VCPUS) +# error HVM_MAX_VCPUS not defined +#endif +int main(void) { + xc_interface *xc; + xs_daemon_open(); + xc_interface_open(0, 0, 0); + xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0); + xc_gnttab_open(xc); + return 0; +} EOF if compile_prog "" "$xen_libs" ; then + xen_ctrl_version=410 xen=yes - libs_softmmu="$xen_libs $libs_softmmu" + + # Xen 4.0.0 + elif ( + cat > $TMPC <<EOF +#include <xenctrl.h> +#include <xs.h> +#include <stdint.h> +#include <xen/hvm/hvm_info_table.h> +#if !defined(HVM_MAX_VCPUS) +# error HVM_MAX_VCPUS not defined +#endif +int main(void) { + xs_daemon_open(); + xc_interface_open(); + xc_gnttab_open(); + xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0); + return 0; +} +EOF + compile_prog "" "$xen_libs" + ) ; then + xen_ctrl_version=400 + xen=yes + + # Xen 3.3.0, 3.4.0 + elif ( + cat > $TMPC <<EOF +#include <xenctrl.h> +#include <xs.h> +int main(void) { + xs_daemon_open(); + xc_interface_open(); + xc_gnttab_open(); + xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0); + return 0; +} +EOF + compile_prog "" "$xen_libs" + ) ; then + xen_ctrl_version=330 + xen=yes + + # Xen not found or unsupported else if test "$xen" = "yes" ; then feature_not_found "xen" fi xen=no fi + + if test "$xen" = "yes"; then + libs_softmmu="$xen_libs $libs_softmmu" + fi fi ########################################## @@ -2429,6 +2491,7 @@ if test "$bluez" = "yes" ; then fi if test "$xen" = "yes" ; then echo "CONFIG_XEN=y" >> $config_host_mak + echo "CONFIG_XEN_CTRL_INTERFACE_VERSION=$xen_ctrl_version" >> $config_host_mak fi if test "$io_thread" = "yes" ; then echo "CONFIG_IOTHREAD=y" >> $config_host_mak diff --git a/hw/xen_backend.c b/hw/xen_backend.c index 860b038..3e99751 100644 --- a/hw/xen_backend.c +++ b/hw/xen_backend.c @@ -43,7 +43,7 @@ /* ------------------------------------------------------------- */ /* public */ -int xen_xc; +qemu_xc_interface xen_xc = XC_HANDLER_INITIAL_VALUE; struct xs_handle *xenstore = NULL; const char *xen_protocol; @@ -216,7 +216,7 @@ static struct XenDevice *xen_be_get_xendev(const char *type, int dom, int dev, fcntl(xc_evtchn_fd(xendev->evtchndev), F_SETFD, FD_CLOEXEC); if (ops->flags & DEVOPS_FLAG_NEED_GNTDEV) { - xendev->gnttabdev = xc_gnttab_open(); + xendev->gnttabdev = xc_gnttab_open(xen_xc); if (xendev->gnttabdev < 0) { 
xen_be_printf(NULL, 0, "can''t open gnttab device\n"); xc_evtchn_close(xendev->evtchndev); @@ -269,7 +269,7 @@ static struct XenDevice *xen_be_del_xendev(int dom, int dev) if (xendev->evtchndev >= 0) xc_evtchn_close(xendev->evtchndev); if (xendev->gnttabdev >= 0) - xc_gnttab_close(xendev->gnttabdev); + xc_gnttab_close(xen_xc, xendev->gnttabdev); QTAILQ_REMOVE(&xendevs, xendev, next); qemu_free(xendev); @@ -627,8 +627,8 @@ int xen_be_init(void) if (qemu_set_fd_handler(xs_fileno(xenstore), xenstore_update, NULL, NULL) < 0) goto err; - xen_xc = xc_interface_open(); - if (xen_xc == -1) { + xen_xc = xc_interface_open(NULL, NULL, 0); + if (xen_xc == XC_HANDLER_INITIAL_VALUE) { xen_be_printf(NULL, 0, "can''t open xen interface\n"); goto err; } diff --git a/hw/xen_backend.h b/hw/xen_backend.h index 292126d..1f23cde 100644 --- a/hw/xen_backend.h +++ b/hw/xen_backend.h @@ -55,7 +55,7 @@ struct XenDevice { /* ------------------------------------------------------------- */ /* variables */ -extern int xen_xc; +extern qemu_xc_interface xen_xc; extern struct xs_handle *xenstore; extern const char *xen_protocol; diff --git a/hw/xen_common.h b/hw/xen_common.h index 8a55b44..9f75e52 100644 --- a/hw/xen_common.h +++ b/hw/xen_common.h @@ -1,6 +1,8 @@ #ifndef QEMU_HW_XEN_COMMON_H #define QEMU_HW_XEN_COMMON_H 1 +#include "config-host.h" + #include <stddef.h> #include <inttypes.h> @@ -13,22 +15,28 @@ #include "qemu-queue.h" /* - * tweaks needed to build with different xen versions - * 0x00030205 -> 3.1.0 - * 0x00030207 -> 3.2.0 - * 0x00030208 -> unstable + * We don''t support Xen prior to 3.3.0. */ -#include <xen/xen-compat.h> -#if __XEN_LATEST_INTERFACE_VERSION__ < 0x00030205 -# define evtchn_port_or_error_t int -#endif -#if __XEN_LATEST_INTERFACE_VERSION__ < 0x00030207 -# define xc_map_foreign_pages xc_map_foreign_batch -#endif -#if __XEN_LATEST_INTERFACE_VERSION__ < 0x00030208 -# define xen_mb() mb() -# define xen_rmb() rmb() -# define xen_wmb() wmb() + +/* Xen unstable */ +#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 410 +typedef int qemu_xc_interface; +# define XC_HANDLER_INITIAL_VALUE -1 +# define xc_fd(xen_xc) xen_xc +# define xc_interface_open(l, dl, f) xc_interface_open() +# define xc_gnttab_open(xc) xc_gnttab_open() +# define xc_gnttab_map_grant_ref(xc, gnt, domid, ref, flags) \ + xc_gnttab_map_grant_ref(gnt, domid, ref, flags) +# define xc_gnttab_map_grant_refs(xc, gnt, count, domids, refs, flags) \ + xc_gnttab_map_grant_refs(gnt, count, domids, refs, flags) +# define xc_gnttab_munmap(xc, gnt, pages, niov) xc_gnttab_munmap(gnt, pages, niov) +# define xc_gnttab_close(xc, dev) xc_gnttab_close(dev) +#else +typedef xc_interface *qemu_xc_interface; +# define XC_HANDLER_INITIAL_VALUE NULL +/* FIXME The fd of xen_xc is now xen_xc->fd */ +/* fd is the first field, so this works */ +# define xc_fd(xen_xc) (*(int*)xen_xc) #endif #endif /* QEMU_HW_XEN_COMMON_H */ diff --git a/hw/xen_disk.c b/hw/xen_disk.c index 47280ee..ec1365c 100644 --- a/hw/xen_disk.c +++ b/hw/xen_disk.c @@ -243,7 +243,7 @@ static void ioreq_unmap(struct ioreq *ioreq) if (batch_maps) { if (!ioreq->pages) return; - if (xc_gnttab_munmap(gnt, ioreq->pages, ioreq->v.niov) != 0) + if (xc_gnttab_munmap(xen_xc, gnt, ioreq->pages, ioreq->v.niov) != 0) xen_be_printf(&ioreq->blkdev->xendev, 0, "xc_gnttab_munmap failed: %s\n", strerror(errno)); ioreq->blkdev->cnt_map -= ioreq->v.niov; @@ -252,7 +252,7 @@ static void ioreq_unmap(struct ioreq *ioreq) for (i = 0; i < ioreq->v.niov; i++) { if (!ioreq->page[i]) continue; - if (xc_gnttab_munmap(gnt, 
ioreq->page[i], 1) != 0) + if (xc_gnttab_munmap(xen_xc, gnt, ioreq->page[i], 1) != 0) xen_be_printf(&ioreq->blkdev->xendev, 0, "xc_gnttab_munmap failed: %s\n", strerror(errno)); ioreq->blkdev->cnt_map--; @@ -270,7 +270,7 @@ static int ioreq_map(struct ioreq *ioreq) return 0; if (batch_maps) { ioreq->pages = xc_gnttab_map_grant_refs - (gnt, ioreq->v.niov, ioreq->domids, ioreq->refs, ioreq->prot); + (xen_xc, gnt, ioreq->v.niov, ioreq->domids, ioreq->refs, ioreq->prot); if (ioreq->pages == NULL) { xen_be_printf(&ioreq->blkdev->xendev, 0, "can''t map %d grant refs (%s, %d maps)\n", @@ -284,7 +284,7 @@ static int ioreq_map(struct ioreq *ioreq) } else { for (i = 0; i < ioreq->v.niov; i++) { ioreq->page[i] = xc_gnttab_map_grant_ref - (gnt, ioreq->domids[i], ioreq->refs[i], ioreq->prot); + (xen_xc, gnt, ioreq->domids[i], ioreq->refs[i], ioreq->prot); if (ioreq->page[i] == NULL) { xen_be_printf(&ioreq->blkdev->xendev, 0, "can''t map grant ref %d (%s, %d maps)\n", @@ -684,7 +684,7 @@ static int blk_connect(struct XenDevice *xendev) blkdev->protocol = BLKIF_PROTOCOL_X86_64; } - blkdev->sring = xc_gnttab_map_grant_ref(blkdev->xendev.gnttabdev, + blkdev->sring = xc_gnttab_map_grant_ref(xen_xc, blkdev->xendev.gnttabdev, blkdev->xendev.dom, blkdev->ring_ref, PROT_READ | PROT_WRITE); @@ -739,7 +739,7 @@ static void blk_disconnect(struct XenDevice *xendev) xen_be_unbind_evtchn(&blkdev->xendev); if (blkdev->sring) { - xc_gnttab_munmap(blkdev->xendev.gnttabdev, blkdev->sring, 1); + xc_gnttab_munmap(xen_xc, blkdev->xendev.gnttabdev, blkdev->sring, 1); blkdev->cnt_map--; blkdev->sring = NULL; } diff --git a/hw/xen_domainbuild.c b/hw/xen_domainbuild.c index 7f1fd66..232a456 100644 --- a/hw/xen_domainbuild.c +++ b/hw/xen_domainbuild.c @@ -176,7 +176,7 @@ static int xen_domain_watcher(void) for (i = 3; i < n; i++) { if (i == fd[0]) continue; - if (i == xen_xc) + if (i == xc_fd(xen_xc)) continue; close(i); } diff --git a/hw/xen_nic.c b/hw/xen_nic.c index 8fcf856..2575541 100644 --- a/hw/xen_nic.c +++ b/hw/xen_nic.c @@ -166,7 +166,7 @@ static void net_tx_packets(struct XenNetDev *netdev) (txreq.flags & NETTXF_more_data) ? " more_data" : "", (txreq.flags & NETTXF_extra_info) ? 
" extra_info" : ""); - page = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, + page = xc_gnttab_map_grant_ref(xen_xc, netdev->xendev.gnttabdev, netdev->xendev.dom, txreq.gref, PROT_READ); if (page == NULL) { @@ -185,7 +185,7 @@ static void net_tx_packets(struct XenNetDev *netdev) } else { qemu_send_packet(&netdev->nic->nc, page + txreq.offset, txreq.size); } - xc_gnttab_munmap(netdev->xendev.gnttabdev, page, 1); + xc_gnttab_munmap(xen_xc, netdev->xendev.gnttabdev, page, 1); net_tx_response(netdev, &txreq, NETIF_RSP_OKAY); } if (!netdev->tx_work) @@ -272,7 +272,7 @@ static ssize_t net_rx_packet(VLANClientState *nc, const uint8_t *buf, size_t siz memcpy(&rxreq, RING_GET_REQUEST(&netdev->rx_ring, rc), sizeof(rxreq)); netdev->rx_ring.req_cons = ++rc; - page = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, + page = xc_gnttab_map_grant_ref(xen_xc, netdev->xendev.gnttabdev, netdev->xendev.dom, rxreq.gref, PROT_WRITE); if (page == NULL) { @@ -282,7 +282,7 @@ static ssize_t net_rx_packet(VLANClientState *nc, const uint8_t *buf, size_t siz return -1; } memcpy(page + NET_IP_ALIGN, buf, size); - xc_gnttab_munmap(netdev->xendev.gnttabdev, page, 1); + xc_gnttab_munmap(xen_xc, netdev->xendev.gnttabdev, page, 1); net_rx_response(netdev, &rxreq, NETIF_RSP_OKAY, NET_IP_ALIGN, size, 0); return size; @@ -350,11 +350,11 @@ static int net_connect(struct XenDevice *xendev) return -1; } - netdev->txs = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, + netdev->txs = xc_gnttab_map_grant_ref(xen_xc, netdev->xendev.gnttabdev, netdev->xendev.dom, netdev->tx_ring_ref, PROT_READ | PROT_WRITE); - netdev->rxs = xc_gnttab_map_grant_ref(netdev->xendev.gnttabdev, + netdev->rxs = xc_gnttab_map_grant_ref(xen_xc, netdev->xendev.gnttabdev, netdev->xendev.dom, netdev->rx_ring_ref, PROT_READ | PROT_WRITE); @@ -381,11 +381,11 @@ static void net_disconnect(struct XenDevice *xendev) xen_be_unbind_evtchn(&netdev->xendev); if (netdev->txs) { - xc_gnttab_munmap(netdev->xendev.gnttabdev, netdev->txs, 1); + xc_gnttab_munmap(xen_xc, netdev->xendev.gnttabdev, netdev->txs, 1); netdev->txs = NULL; } if (netdev->rxs) { - xc_gnttab_munmap(netdev->xendev.gnttabdev, netdev->rxs, 1); + xc_gnttab_munmap(xen_xc, netdev->xendev.gnttabdev, netdev->rxs, 1); netdev->rxs = NULL; } if (netdev->nic) { -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 03/14] xen: Add xen_machine_fv
From: Anthony PERARD <anthony.perard@citrix.com> Add the Xen FV (Fully Virtualized) machine to Qemu; this is groundwork to add Xen device model support in Qemu. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> --- Makefile.target | 3 + hw/xen_common.h | 5 ++ hw/xen_machine_fv.c | 158 +++++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 166 insertions(+), 0 deletions(-) create mode 100644 hw/xen_machine_fv.c diff --git a/Makefile.target b/Makefile.target index a4e80b1..7adbc20 100644 --- a/Makefile.target +++ b/Makefile.target @@ -183,6 +183,9 @@ QEMU_CFLAGS += $(VNC_PNG_CFLAGS) # xen backend driver support obj-$(CONFIG_XEN) += xen_machine_pv.o xen_domainbuild.o +# xen full virtualized machine +obj-i386-$(CONFIG_XEN) += xen_machine_fv.o + # USB layer obj-$(CONFIG_USB_OHCI) += usb-ohci.o diff --git a/hw/xen_common.h b/hw/xen_common.h index 9f75e52..4c0f97d 100644 --- a/hw/xen_common.h +++ b/hw/xen_common.h @@ -18,6 +18,11 @@ * We don''t support Xen prior to 3.3.0. */ +/* Before Xen 4.0.0 */ +#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 400 +# define HVM_MAX_VCPUS 32 +#endif + /* Xen unstable */ #if CONFIG_XEN_CTRL_INTERFACE_VERSION < 410 typedef int qemu_xc_interface; diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c new file mode 100644 index 0000000..260cda3 --- /dev/null +++ b/hw/xen_machine_fv.c @@ -0,0 +1,158 @@ +/* + * QEMU Xen FV Machine + * + * Copyright (c) 2003-2007 Fabrice Bellard + * Copyright (c) 2007 Red Hat + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ + +#include "hw.h" +#include "pc.h" +#include "pci.h" +#include "usb-uhci.h" +#include "net.h" +#include "boards.h" +#include "ide.h" +#include "sysemu.h" +#include "blockdev.h" + +#include "xen_common.h" +#include "xen/hvm/hvm_info_table.h" + +#define MAX_IDE_BUS 2 + +static void xen_init_fv(ram_addr_t ram_size, + const char *boot_device, + const char *kernel_filename, + const char *kernel_cmdline, + const char *initrd_filename, + const char *cpu_model) +{ + int i; + ram_addr_t below_4g_mem_size, above_4g_mem_size = 0; + PCIBus *pci_bus; + PCII440FXState *i440fx_state; + int piix3_devfn = -1; + qemu_irq *cpu_irq; + qemu_irq *isa_irq; + qemu_irq *i8259; + qemu_irq *cmos_s3; + qemu_irq *smi_irq; + IsaIrqState *isa_irq_state; + DriveInfo *hd[MAX_IDE_BUS * MAX_IDE_DEVS]; + FDCtrl *floppy_controller; + BusState *idebus[MAX_IDE_BUS]; + ISADevice *rtc_state; + + CPUState *env; + + /* Initialize a dummy CPU */ + if (cpu_model == NULL) { +#ifdef TARGET_X86_64 + cpu_model = "qemu64"; +#else + cpu_model = "qemu32"; +#endif + } + env = cpu_init(cpu_model); + env->halted = 1; + + cpu_irq = pc_allocate_cpu_irq(); + i8259 = i8259_init(cpu_irq[0]); + isa_irq_state = qemu_mallocz(sizeof (*isa_irq_state)); + isa_irq_state->i8259 = i8259; + + isa_irq = qemu_allocate_irqs(isa_irq_handler, isa_irq_state, 24); + + pci_bus = i440fx_init(&i440fx_state, &piix3_devfn, isa_irq, ram_size); + isa_bus_irqs(isa_irq); + + pc_register_ferr_irq(isa_reserve_irq(13)); + + pc_vga_init(pci_bus); + + /* init basic PC hardware */ + pc_basic_device_init(isa_irq, &floppy_controller, &rtc_state); + + for (i = 0; i < nb_nics; i++) { + NICInfo *nd = &nd_table[i]; + + if (nd->model && strcmp(nd->model, "ne2k_isa") == 0) + pc_init_ne2k_isa(nd); + else + pci_nic_init_nofail(nd, "e1000", NULL); + } + + if (drive_get_max_bus(IF_IDE) >= MAX_IDE_BUS) { + fprintf(stderr, "qemu: too many IDE bus\n"); + exit(1); + } + + for (i = 0; i < MAX_IDE_BUS * MAX_IDE_DEVS; i++) { + hd[i] = drive_get(IF_IDE, i / MAX_IDE_DEVS, i % MAX_IDE_DEVS); + } + + PCIDevice *dev = pci_piix3_ide_init(pci_bus, hd, piix3_devfn + 1); + idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0"); + idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1"); + + pc_audio_init(pci_bus, isa_irq); + + if (ram_size >= 0xe0000000 ) { + above_4g_mem_size = ram_size - 0xe0000000; + below_4g_mem_size = 0xe0000000; + } else { + below_4g_mem_size = ram_size; + } + pc_cmos_init(below_4g_mem_size, above_4g_mem_size, boot_device, + idebus[0], idebus[1], floppy_controller, rtc_state); + + if (usb_enabled) { + usb_uhci_piix3_init(pci_bus, piix3_devfn + 2); + } + + if (acpi_enabled) { + cmos_s3 = qemu_allocate_irqs(pc_cmos_set_s3_resume, rtc_state, 1); + smi_irq = qemu_allocate_irqs(pc_acpi_smi_interrupt, first_cpu, 1); + piix4_pm_init(pci_bus, piix3_devfn + 3, 0xb100, + isa_reserve_irq(9), *cmos_s3, *smi_irq, + 0); + } + + if (i440fx_state) { + i440fx_init_memory_mappings(i440fx_state); + } + + pc_pci_device_init(pci_bus); +} + +static QEMUMachine xenfv_machine = { + .name = "xenfv", + .desc = "Xen Fully-virtualized PC", + .init = xen_init_fv, + .max_cpus = HVM_MAX_VCPUS, +}; + +static void xenfv_machine_init(void) +{ + qemu_register_machine(&xenfv_machine); +} + +machine_init(xenfv_machine_init); -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
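For concreteness, the below/above-4G split in xen_init_fv() works out as in this note (my worked numbers, not part of the patch):

    /* ram_size = 4 GB: below_4g_mem_size = 0xe0000000 (3.5 GB),
     *                  above_4g_mem_size = 0x20000000 (512 MB)
     * ram_size = 2 GB: below_4g_mem_size = 0x80000000, above_4g_mem_size = 0
     * The 0xe0000000-0xffffffff window stays free for PCI BARs and firmware. */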
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 04/14] Introduce -accel command option.
From: Anthony PERARD <anthony.perard@citrix.com>

This option gives the ability to choose an "accelerator", such as kvm,
xen, or the default one, tcg. More than one accelerator can be
specified, separated by commas; QEMU tries each one in turn and uses
the first that works. So

    -accel xen,kvm,tcg

tries Xen support first, then KVM, and finally tcg if neither of the
others works.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 qemu-options.hx |   10 ++++++
 vl.c            |   86 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 85 insertions(+), 11 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index a0b5ae9..53c4d35 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1904,6 +1904,16 @@ Enable KVM full virtualization support. This option is only available
 if KVM support is enabled when compiling.
 ETEXI
 
+DEF("accel", HAS_ARG, QEMU_OPTION_accel, \
+    "-accel accel    use an accelerator (kvm,xen,tcg), default is tcg\n", QEMU_ARCH_ALL)
+STEXI
+@item -accel @var{accel}[,@var{accel}[,...]]
+@findex -accel
+This is used to enable an accelerator, from kvm, xen, tcg.
+By default, only tcg is used. If more than one accelerator is
+specified, the next one is used when the previous one fails.
+ETEXI
+
 DEF("xen-domid", HAS_ARG, QEMU_OPTION_xen_domid,
     "-xen-domid id   specify xen guest domain id\n", QEMU_ARCH_ALL)
 DEF("xen-create", 0, QEMU_OPTION_xen_create,
diff --git a/vl.c b/vl.c
index 3f45aa9..797f04d 100644
--- a/vl.c
+++ b/vl.c
@@ -1747,6 +1747,74 @@ static int debugcon_parse(const char *devname)
     return 0;
 }
 
+static struct {
+    const char *opt_name;
+    const char *name;
+    int (*available)(void);
+    int (*init)(int smp_cpus);
+    int *allowed;
+} accel_list[] = {
+    { "tcg", "tcg", NULL, NULL, NULL },
+    { "kvm", "KVM", kvm_available, kvm_init, &kvm_allowed },
+};
+
+static int accel_parse_init(const char *opts)
+{
+    const char *p = opts;
+    char buf[10];
+    int i, ret;
+    bool accel_initalised = 0;
+    bool init_failed = 0;
+
+    while (!accel_initalised && *p != '\0') {
+        if (*p == ',') {
+            p++;
+        }
+        p = get_opt_name(buf, sizeof (buf), p, ',');
+        for (i = 0; i < ARRAY_SIZE(accel_list); i++) {
+            if (strcmp(accel_list[i].opt_name, buf) == 0) {
+                if (accel_list[i].init) {
+                    ret = accel_list[i].init(smp_cpus);
+                } else {
+                    ret = 0;
+                }
+                if (ret < 0) {
+                    init_failed = 1;
+                    if (!accel_list[i].available()) {
+                        printf("%s not supported for this target\n",
+                               accel_list[i].name);
+                    } else {
+                        fprintf(stderr, "failed to initialize %s: %s\n",
+                                accel_list[i].name,
+                                strerror(-ret));
+                    }
+                } else {
+                    accel_initalised = 1;
+                    if (accel_list[i].allowed) {
+                        *(accel_list[i].allowed) = 1;
+                    }
+                }
+                break;
+            }
+        }
+        if (i == ARRAY_SIZE(accel_list)) {
+            fprintf(stderr, "\"%s\" accelerator does not exist.\n", buf);
+            exit(1);
+        }
+    }
+
+    if (!accel_initalised) {
+        fprintf(stderr, "No accelerator found!\n");
+        exit(1);
+    }
+
+    if (init_failed) {
+        fprintf(stderr, "Falling back to %s accelerator.\n", accel_list[i].name);
+    }
+
+    return !accel_initalised;
+}
+
 void qemu_add_exit_notifier(Notifier *notify)
 {
     notifier_list_add(&exit_notifiers, notify);
@@ -1826,6 +1894,7 @@ int main(int argc, char **argv, char **envp)
     const char *incoming = NULL;
     int show_vnc_port = 0;
     int defconfig = 1;
+    const char *accel_list_opts = "tcg";
 
 #ifdef CONFIG_SIMPLE_TRACE
     const char *trace_file = NULL;
@@ -2446,7 +2515,10 @@ int main(int argc, char **argv, char **envp)
                 do_smbios_option(optarg);
                 break;
             case QEMU_OPTION_enable_kvm:
-                kvm_allowed = 1;
+                accel_list_opts = "kvm";
+                break;
+            case QEMU_OPTION_accel:
+                accel_list_opts = optarg;
                 break;
             case QEMU_OPTION_usb:
                 usb_enabled = 1;
                 break;
@@ -2744,16 +2816,8 @@ int main(int argc, char **argv, char **envp)
         exit(1);
     }
 
-    if (kvm_allowed) {
-        int ret = kvm_init(smp_cpus);
-        if (ret < 0) {
-            if (!kvm_available()) {
-                printf("KVM not supported for this target\n");
-            } else {
-                fprintf(stderr, "failed to initialize KVM: %s\n", strerror(-ret));
-            }
-            exit(1);
-        }
+    if (accel_list_opts) {
+        accel_parse_init(accel_list_opts);
     }
 
     if (qemu_init_main_loop()) {
-- 
1.6.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
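As a usage illustration (my example, not from the patch): with only this patch applied, the recognised accelerators are kvm and tcg, so

    qemu -accel kvm,tcg -m 1024 disk.img

asks for KVM first; if kvm_init() fails, accel_parse_init() falls through to tcg, whose init hook is NULL and therefore always succeeds.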
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 05/14] xen: Add xen in -accel option.
From: Anthony PERARD <anthony.perard@citrix.com>

This comes with the initialisation of Xen.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 Makefile.target |    5 +++++
 hw/xen.h        |   10 ++++++++++
 vl.c            |    2 ++
 xen-all.c       |   25 +++++++++++++++++++++++++
 xen-stub.c      |   17 +++++++++++++++++
 5 files changed, 59 insertions(+), 0 deletions(-)
 create mode 100644 xen-all.c
 create mode 100644 xen-stub.c

diff --git a/Makefile.target b/Makefile.target
index 7adbc20..7a5eb71 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -2,6 +2,7 @@ GENERATED_HEADERS = config-target.h
 
 CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
+CONFIG_NO_XEN = $(if $(subst n,,$(CONFIG_XEN)),n,y)
 
 include ../config-host.mak
 include config-devices.mak
@@ -183,6 +184,10 @@ QEMU_CFLAGS += $(VNC_PNG_CFLAGS)
 # xen backend driver support
 obj-$(CONFIG_XEN) += xen_machine_pv.o xen_domainbuild.o
 
+# xen support
+obj-$(CONFIG_XEN) += xen-all.o
+obj-$(CONFIG_NO_XEN) += xen-stub.o
+
 # xen full virtualized machine
 obj-i386-$(CONFIG_XEN) += xen_machine_fv.o
 
diff --git a/hw/xen.h b/hw/xen.h
index 780dcf7..14bbb6e 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -18,4 +18,14 @@ enum xen_mode {
 extern uint32_t xen_domid;
 extern enum xen_mode xen_mode;
 
+extern int xen_allowed;
+
+#if defined CONFIG_XEN
+#define xen_enabled() (xen_allowed)
+#else
+#define xen_enabled() (0)
+#endif
+
+int xen_init(int smp_cpus);
+
 #endif /* QEMU_HW_XEN_H */
diff --git a/vl.c b/vl.c
index 797f04d..c0c9d32 100644
--- a/vl.c
+++ b/vl.c
@@ -243,6 +243,7 @@ static NotifierList exit_notifiers
     NOTIFIER_LIST_INITIALIZER(exit_notifiers);
 
 int kvm_allowed = 0;
+int xen_allowed = 0;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
 
@@ -1755,6 +1756,7 @@ static struct {
     int *allowed;
 } accel_list[] = {
     { "tcg", "tcg", NULL, NULL, NULL },
+    { "xen", "Xen", xen_available, xen_init, &xen_allowed },
     { "kvm", "KVM", kvm_available, kvm_init, &kvm_allowed },
 };
 
diff --git a/xen-all.c b/xen-all.c
new file mode 100644
index 0000000..42c93ba
--- /dev/null
+++ b/xen-all.c
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2010       Citrix Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#include "config.h"
+
+#include "hw/xen_common.h"
+#include "hw/xen_backend.h"
+
+/* Initialise Xen */
+
+int xen_init(int smp_cpus)
+{
+    xen_xc = xc_interface_open(NULL, NULL, 0);
+    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
+        xen_be_printf(NULL, 0, "can't open xen interface\n");
+        return -1;
+    }
+
+    return 0;
+}
diff --git a/xen-stub.c b/xen-stub.c
new file mode 100644
index 0000000..0fa9c51
--- /dev/null
+++ b/xen-stub.c
@@ -0,0 +1,17 @@
+/*
+ * Copyright (C) 2010       Citrix Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#include "config.h"
+
+#include "qemu-common.h"
+#include "hw/xen.h"
+
+int xen_init(int smp_cpus)
+{
+    return -ENOSYS;
+}
-- 
1.6.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
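After this patch, the option from the previous patch accepts xen as well (usage illustration, my example):

    qemu -accel xen,tcg ...

attempts xen_init() first; on a binary built without CONFIG_XEN, the xen-stub.c version returns -ENOSYS and accel_parse_init() reports the failure before falling back to tcg.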
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 06/14] xen: Add the Xen platform pci device
From: Anthony PERARD <anthony.perard@citrix.com> Introduce a new emulated PCI device, specific to fully virtualized Xen guests. The device is necessary for PV on HVM drivers to work. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> --- Makefile.target | 1 + hw/hw.h | 3 + hw/pci_ids.h | 2 + hw/xen_machine_fv.c | 3 + hw/xen_platform.c | 431 +++++++++++++++++++++++++++++++++++++++++++++++++++ hw/xen_platform.h | 8 + 6 files changed, 448 insertions(+), 0 deletions(-) create mode 100644 hw/xen_platform.c create mode 100644 hw/xen_platform.h diff --git a/Makefile.target b/Makefile.target index 7a5eb71..9353627 100644 --- a/Makefile.target +++ b/Makefile.target @@ -190,6 +190,7 @@ obj-$(CONFIG_NO_XEN) += xen-stub.o # xen full virtualized machine obj-i386-$(CONFIG_XEN) += xen_machine_fv.o +obj-i386-$(CONFIG_XEN) += xen_platform.o # USB layer obj-$(CONFIG_USB_OHCI) += usb-ohci.o diff --git a/hw/hw.h b/hw/hw.h index 4405092..67f3369 100644 --- a/hw/hw.h +++ b/hw/hw.h @@ -653,6 +653,9 @@ extern const VMStateDescription vmstate_i2c_slave; #define VMSTATE_INT32_LE(_f, _s) \ VMSTATE_SINGLE(_f, _s, 0, vmstate_info_int32_le, int32_t) +#define VMSTATE_UINT8_TEST(_f, _s, _t) \ + VMSTATE_SINGLE_TEST(_f, _s, _t, 0, vmstate_info_uint8, uint8_t) + #define VMSTATE_UINT16_TEST(_f, _s, _t) \ VMSTATE_SINGLE_TEST(_f, _s, _t, 0, vmstate_info_uint16, uint16_t) diff --git a/hw/pci_ids.h b/hw/pci_ids.h index 39e9f1d..1f2e0dd 100644 --- a/hw/pci_ids.h +++ b/hw/pci_ids.h @@ -105,3 +105,5 @@ #define PCI_DEVICE_ID_INTEL_82371AB 0x7111 #define PCI_DEVICE_ID_INTEL_82371AB_2 0x7112 #define PCI_DEVICE_ID_INTEL_82371AB_3 0x7113 + +#define PCI_VENDOR_ID_XENSOURCE 0x5853 diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c index 260cda3..39ee7c7 100644 --- a/hw/xen_machine_fv.c +++ b/hw/xen_machine_fv.c @@ -35,6 +35,7 @@ #include "xen_common.h" #include "xen/hvm/hvm_info_table.h" +#include "xen_platform.h" #define MAX_IDE_BUS 2 @@ -88,6 +89,8 @@ static void xen_init_fv(ram_addr_t ram_size, pc_vga_init(pci_bus); + pci_xen_platform_init(pci_bus); + /* init basic PC hardware */ pc_basic_device_init(isa_irq, &floppy_controller, &rtc_state); diff --git a/hw/xen_platform.c b/hw/xen_platform.c new file mode 100644 index 0000000..7551c81 --- /dev/null +++ b/hw/xen_platform.c @@ -0,0 +1,431 @@ +/* + * XEN platform pci device, formerly known as the event channel device + * + * Copyright (c) 2003-2004 Intel Corp. + * Copyright (c) 2006 XenSource + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ + +#include "hw.h" +#include "pc.h" +#include "pci.h" +#include "irq.h" +#include "xen_common.h" +#include "net.h" +#include "xen_platform.h" +#include "xen_backend.h" +#include "qemu-log.h" +#include "rwhandler.h" + +#include <assert.h> +#include <xenguest.h> + +//#define DEBUG_PLATFORM + +#ifdef DEBUG_PLATFORM +#define DPRINTF(fmt, ...) do { \ + fprintf(stderr, "xen_platform: " fmt, ## __VA_ARGS__); \ +} while (0) +#else +#define DPRINTF(fmt, ...) do { } while (0) +#endif + +#define PFFLAG_ROM_LOCK 1 /* Sets whether ROM memory area is RW or RO */ + +typedef struct PCIXenPlatformState { + PCIDevice pci_dev; + uint8_t flags; /* used only for version_id == 2 */ + int drivers_blacklisted; + uint16_t driver_product_version; + + /* Log from guest drivers */ + int throttling_disabled; + char log_buffer[4096]; + int log_buffer_off; +} PCIXenPlatformState; + +#define XEN_PLATFORM_IOPORT 0x10 + +/* We throttle access to dom0 syslog, to avoid DOS attacks. This is + modelled as a token bucket, with one token for every byte of log. + The bucket size is 128KB (->1024 lines of 128 bytes each) and + refills at 256B/s. It starts full. The guest is blocked if no + tokens are available when it tries to generate a log message. */ +#define BUCKET_MAX_SIZE (128*1024) +#define BUCKET_FILL_RATE 256 + +static void throttle(PCIXenPlatformState *s, unsigned count) +{ + static unsigned available; + static int64_t last_refill; + static int started; + static int warned; + + int64_t waiting_for, now; + int64_t delay; + + if (s->throttling_disabled) { + return; + } + + if (!started) { + last_refill = qemu_get_clock_ns(rt_clock); + available = BUCKET_MAX_SIZE; + started = 1; + } + + if (count > BUCKET_MAX_SIZE) { + DPRINTF("tried to get %u tokens, but bucket size is %u\n", + BUCKET_MAX_SIZE, count); + exit(1); + } + + if (available < count) { + /* The bucket is empty. Refill it */ + + /* When will it be full enough to handle this request? */ + delay = muldiv64(count - available, 1000000000, BUCKET_FILL_RATE); + + waiting_for = last_refill + delay; + + /* How long do we have to wait? (might be negative) */ + waiting_for = waiting_for - qemu_get_clock_ns(rt_clock); + + /* Wait for it. */ + if (waiting_for > 0) { + struct timespec ts; + if (!warned) { + DPRINTF("throttling guest access to syslog"); + warned = 1; + } + ts.tv_sec = waiting_for / 1000000000; + ts.tv_nsec = waiting_for % 1000000000; + while (nanosleep(&ts, &ts) < 0 && errno == EINTR) { + } + } + + /* Refill */ + now = qemu_get_clock_ns(rt_clock); + available += muldiv64(now - last_refill, + BUCKET_FILL_RATE, + 1000000000); + if (available > BUCKET_MAX_SIZE) { + available = BUCKET_MAX_SIZE; + } + last_refill = now; + } + + assert(available >= count); + + available -= count; +} + +/* Xen Platform, Fixed IOPort */ + +static void platform_fixed_ioport_writew(void *opaque, uint32_t addr, uint32_t val) +{ + PCIXenPlatformState *s = opaque; + + switch (addr - XEN_PLATFORM_IOPORT) { + case 0: + /* TODO: */ + /* Unplug devices. Value is a bitmask of which devices to + unplug, with bit 0 the IDE devices, bit 1 the network + devices, and bit 2 the non-primary-master IDE devices. 
*/ + break; + case 2: + switch (val) { + case 1: + DPRINTF("Citrix Windows PV drivers loaded in guest\n"); + break; + case 0: + DPRINTF("Guest claimed to be running PV product 0?\n"); + break; + default: + DPRINTF("Unknown PV product %d loaded in guest\n", val); + break; + } + s->driver_product_version = val; + break; + } +} + +static void platform_fixed_ioport_writel(void *opaque, uint32_t addr, + uint32_t val) +{ + switch (addr - XEN_PLATFORM_IOPORT) { + case 0: + /* PV driver version */ + break; + } +} + +static void platform_fixed_ioport_writeb(void *opaque, uint32_t addr, uint32_t val) +{ + PCIXenPlatformState *s = opaque; + + switch (addr - XEN_PLATFORM_IOPORT) { + case 0: /* Platform flags */ { + hvmmem_type_t mem_type = (val & PFFLAG_ROM_LOCK) ? + HVMMEM_ram_ro : HVMMEM_ram_rw; + if (xc_hvm_set_mem_type(xen_xc, xen_domid, mem_type, 0xc0, 0x40)) { + DPRINTF("unable to change ro/rw state of ROM memory area!\n"); + } else { + s->flags = val & PFFLAG_ROM_LOCK; + DPRINTF("changed ro/rw state of ROM memory area. now is %s state.\n", + (mem_type == HVMMEM_ram_ro ? "ro":"rw")); + } + break; + } + case 2: + /* Send bytes to syslog */ + if (val == ''\n'' || s->log_buffer_off == sizeof(s->log_buffer) - 1) { + /* Flush buffer */ + s->log_buffer[s->log_buffer_off] = 0; + throttle(s, s->log_buffer_off); + DPRINTF("%s\n", s->log_buffer); + s->log_buffer_off = 0; + break; + } + s->log_buffer[s->log_buffer_off++] = val; + break; + } +} + +static uint32_t platform_fixed_ioport_readw(void *opaque, uint32_t addr) +{ + PCIXenPlatformState *s = opaque; + + switch (addr - XEN_PLATFORM_IOPORT) { + case 0: + if (s->drivers_blacklisted) { + /* The drivers will recognise this magic number and refuse + * to do anything. */ + return 0xd249; + } else { + /* Magic value so that you can identify the interface. 
*/ + return 0x49d2; + } + default: + return 0xffff; + } +} + +static uint32_t platform_fixed_ioport_readb(void *opaque, uint32_t addr) +{ + PCIXenPlatformState *s = opaque; + + switch (addr - XEN_PLATFORM_IOPORT) { + case 0: + /* Platform flags */ + return s->flags; + case 2: + /* Version number */ + return 1; + default: + return 0xff; + } +} + +static void platform_fixed_ioport_reset(void *opaque) +{ + PCIXenPlatformState *s = opaque; + + platform_fixed_ioport_writeb(s, XEN_PLATFORM_IOPORT, 0); +} + +static void platform_fixed_ioport_init(PCIXenPlatformState* s) +{ + register_ioport_write(XEN_PLATFORM_IOPORT, 16, 4, platform_fixed_ioport_writel, s); + register_ioport_write(XEN_PLATFORM_IOPORT, 16, 2, platform_fixed_ioport_writew, s); + register_ioport_write(XEN_PLATFORM_IOPORT, 16, 1, platform_fixed_ioport_writeb, s); + register_ioport_read(XEN_PLATFORM_IOPORT, 16, 2, platform_fixed_ioport_readw, s); + register_ioport_read(XEN_PLATFORM_IOPORT, 16, 1, platform_fixed_ioport_readb, s); +} + +/* Xen Platform PCI Device */ + +static uint32_t xen_platform_ioport_readb(void *opaque, uint32_t addr) +{ + addr &= 0xff; + + if (addr == 0) { + return platform_fixed_ioport_readb(opaque, XEN_PLATFORM_IOPORT); + } else { + return ~0u; + } +} + +static void xen_platform_ioport_writeb(void *opaque, uint32_t addr, uint32_t val) +{ + PCIXenPlatformState *s = opaque; + + addr &= 0xff; + val &= 0xff; + + switch (addr) { + case 0: /* Platform flags */ + platform_fixed_ioport_writeb(opaque, XEN_PLATFORM_IOPORT, val); + break; + case 8: + { + if (val == ''\n'' || s->log_buffer_off == sizeof(s->log_buffer) - 1) { + /* Flush buffer */ + s->log_buffer[s->log_buffer_off] = 0; + throttle(s, s->log_buffer_off); + DPRINTF("%s\n", s->log_buffer); + s->log_buffer_off = 0; + break; + } + s->log_buffer[s->log_buffer_off++] = val; + } + break; + default: + break; + } +} + +static void platform_ioport_map(PCIDevice *pci_dev, int region_num, pcibus_t addr, pcibus_t size, int type) +{ + PCIXenPlatformState *d = DO_UPCAST(PCIXenPlatformState, pci_dev, pci_dev); + + register_ioport_write(addr, size, 1, xen_platform_ioport_writeb, d); + register_ioport_read(addr, size, 1, xen_platform_ioport_readb, d); +} + +static uint32_t platform_mmio_read(ReadWriteHandler *handler, pcibus_t addr, int len) +{ + DPRINTF("Warning: attempted read from physical address " + "0x" TARGET_FMT_plx " in xen platform mmio space\n", addr); + + return 0; +} + +static void platform_mmio_write(ReadWriteHandler *handler, pcibus_t addr, + uint32_t val, int len) +{ + DPRINTF("Warning: attempted write of 0x%x to physical " + "address 0x" TARGET_FMT_plx " in xen platform mmio space\n", + val, addr); +} + +static ReadWriteHandler platform_mmio_handler = { + .read = &platform_mmio_read, + .write = &platform_mmio_write, +}; + +static void platform_mmio_map(PCIDevice *d, int region_num, + pcibus_t addr, pcibus_t size, int type) +{ + int mmio_io_addr; + + mmio_io_addr = cpu_register_io_memory_simple(&platform_mmio_handler); + + cpu_register_physical_memory(addr, size, mmio_io_addr); +} + +static int xen_platform_post_load(void *opaque, int version_id) +{ + PCIXenPlatformState *s = opaque; + + platform_fixed_ioport_writeb(s, XEN_PLATFORM_IOPORT, s->flags); + + return 0; +} + +static const VMStateDescription vmstate_xen_platform = { + .name = "platform", + .version_id = 4, + .minimum_version_id = 4, + .minimum_version_id_old = 4, + .post_load = xen_platform_post_load, + .fields = (VMStateField []) { + VMSTATE_PCI_DEVICE(pci_dev, PCIXenPlatformState), + 
VMSTATE_UINT8(flags, PCIXenPlatformState), + VMSTATE_END_OF_LIST() + } +}; + +static int xen_platform_initfn(PCIDevice *dev) +{ + PCIXenPlatformState *d = DO_UPCAST(PCIXenPlatformState, pci_dev, dev); + uint8_t *pci_conf; + + pci_conf = d->pci_dev.config; + + pci_config_set_vendor_id(pci_conf, PCI_VENDOR_ID_XENSOURCE); + pci_config_set_device_id(pci_conf, 0x0001); + pci_set_word(pci_conf + PCI_COMMAND, PCI_COMMAND_IO | PCI_COMMAND_MEMORY); + + pci_config_set_revision(pci_conf, 1); + pci_config_set_prog_interface(pci_conf, 0); + + pci_config_set_class(pci_conf, PCI_CLASS_OTHERS << 8 | 0x80); + + pci_conf[PCI_INTERRUPT_PIN] = 1; + + /* Microsoft WHQL requires non-zero subsystem IDs. */ + /* http://www.pcisig.com/reflector/msg02205.html. */ + pci_set_word(pci_conf + PCI_SUBSYSTEM_VENDOR_ID, pci_conf[PCI_VENDOR_ID]); + pci_set_word(pci_conf + PCI_SUBSYSTEM_ID, 0x0001); + + pci_register_bar(&d->pci_dev, 0, 0x100, + PCI_BASE_ADDRESS_SPACE_IO, platform_ioport_map); + + /* reserve 16MB mmio address for share memory*/ + pci_register_bar(&d->pci_dev, 1, 0x1000000, + PCI_BASE_ADDRESS_MEM_PREFETCH, platform_mmio_map); + + platform_fixed_ioport_init(d); + + return 0; +} + +static void platform_reset(DeviceState *dev) +{ + PCIXenPlatformState *s = DO_UPCAST(PCIXenPlatformState, pci_dev.qdev, dev); + + platform_fixed_ioport_reset(s); +} + +void pci_xen_platform_init(PCIBus *bus) +{ + PCIDevice *dev; + + dev = pci_create(bus, -1, "xen-platform"); + + qdev_init_nofail(&dev->qdev); +} + +static PCIDeviceInfo xen_platform_info = { + .init = xen_platform_initfn, + .qdev.name = "xen-platform", + .qdev.desc = "XEN platform pci device", + .qdev.size = sizeof(PCIXenPlatformState), + .qdev.vmsd = &vmstate_xen_platform, + .qdev.reset = platform_reset, +}; + +static void xen_platform_register(void) +{ + pci_qdev_register(&xen_platform_info); +} + +device_init(xen_platform_register); diff --git a/hw/xen_platform.h b/hw/xen_platform.h new file mode 100644 index 0000000..574eecd --- /dev/null +++ b/hw/xen_platform.h @@ -0,0 +1,8 @@ +#ifndef XEN_PLATFORM_H +#define XEN_PLATFORM_H + +#include "hw/pci.h" + +void pci_xen_platform_init(PCIBus *bus); + +#endif -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
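Guest PV drivers can detect the device through the fixed I/O port: a 16-bit read of port 0x10 returns the magic 0x49d2 (or 0xd249 when the drivers are blacklisted), and a byte read at offset 2 returns the interface version. A hypothetical guest-side probe, for illustration only (inw/inb/outw are the usual x86 port primitives, not part of this patch):

    uint16_t magic = inw(0x10);      /* XEN_PLATFORM_IOPORT */
    if (magic == 0x49d2) {
        uint8_t version = inb(0x12); /* fixed ioport offset 2: version, 1 here */
        outw(0x12, 1);               /* word write at offset 2: PV product number */
    }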
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 07/14] piix_pci: Introduces Xen specific call for irq.
From: Anthony PERARD <anthony.perard@citrix.com> This patch introduces Xen specific call in piix_pci. The specific part for Xen is in write_config, set_irq and get_pirq. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> --- hw/piix_pci.c | 28 ++++++++++++++++++++++++++-- hw/xen.h | 6 ++++++ xen-all.c | 31 +++++++++++++++++++++++++++++++ xen-stub.c | 13 +++++++++++++ 4 files changed, 76 insertions(+), 2 deletions(-) diff --git a/hw/piix_pci.c b/hw/piix_pci.c index f152a0f..6d96b18 100644 --- a/hw/piix_pci.c +++ b/hw/piix_pci.c @@ -28,6 +28,7 @@ #include "pci_host.h" #include "isa.h" #include "sysbus.h" +#include "xen.h" /* * I440FX chipset data sheet. @@ -150,6 +151,13 @@ static void i440fx_write_config(PCIDevice *dev, } } +static void i440fx_write_config_xen(PCIDevice *dev, + uint32_t address, uint32_t val, int len) +{ + xen_piix_pci_write_config_client(address, val, len); + i440fx_write_config(dev, address, val, len); +} + static int i440fx_load_old(QEMUFile* f, void *opaque, int version_id) { PCII440FXState *d = opaque; @@ -229,13 +237,21 @@ PCIBus *i440fx_init(PCII440FXState **pi440fx_state, int *piix3_devfn, qemu_irq * s->bus = b; qdev_init_nofail(dev); - d = pci_create_simple(b, 0, "i440FX"); + if (xen_enabled()) { + d = pci_create_simple(b, 0, "i440FX-xen"); + } else { + d = pci_create_simple(b, 0, "i440FX"); + } *pi440fx_state = DO_UPCAST(PCII440FXState, dev, d); piix3 = DO_UPCAST(PIIX3State, dev, pci_create_simple_multifunction(b, -1, true, "PIIX3")); piix3->pic = pic; - pci_bus_irqs(b, piix3_set_irq, pci_slot_get_pirq, piix3, 4); + if (xen_enabled()) { + pci_bus_irqs(b, xen_piix3_set_irq, xen_pci_slot_get_pirq, piix3, 4); + } else { + pci_bus_irqs(b, piix3_set_irq, pci_slot_get_pirq, piix3, 4); + } (*pi440fx_state)->piix3 = piix3; *piix3_devfn = piix3->dev.devfn; @@ -350,6 +366,14 @@ static PCIDeviceInfo i440fx_info[] = { .init = i440fx_initfn, .config_write = i440fx_write_config, },{ + .qdev.name = "i440FX-xen", + .qdev.desc = "Host bridge", + .qdev.size = sizeof(PCII440FXState), + .qdev.vmsd = &vmstate_i440fx, + .qdev.no_user = 1, + .init = i440fx_initfn, + .config_write = i440fx_write_config_xen, + },{ .qdev.name = "PIIX3", .qdev.desc = "ISA bridge", .qdev.size = sizeof(PIIX3State), diff --git a/hw/xen.h b/hw/xen.h index 14bbb6e..c5189b1 100644 --- a/hw/xen.h +++ b/hw/xen.h @@ -8,6 +8,8 @@ */ #include <inttypes.h> +#include "qemu-common.h" + /* xen-machine.c */ enum xen_mode { XEN_EMULATE = 0, // xen emulation, using xenner (default) @@ -26,6 +28,10 @@ extern int xen_allowed; #define xen_enabled() (0) #endif +int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num); +void xen_piix3_set_irq(void *opaque, int irq_num, int level); +void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len); + int xen_init(int smp_cpus); #endif /* QEMU_HW_XEN_H */ diff --git a/xen-all.c b/xen-all.c index 42c93ba..f913e1f 100644 --- a/xen-all.c +++ b/xen-all.c @@ -8,9 +8,40 @@ #include "config.h" +#include "hw/pci.h" #include "hw/xen_common.h" #include "hw/xen_backend.h" +/* Xen specific function for piix pci */ + +int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num) +{ + return irq_num + ((pci_dev->devfn >> 3) << 2); +} + +void xen_piix3_set_irq(void *opaque, int irq_num, int level) +{ + xc_hvm_set_pci_intx_level(xen_xc, xen_domid, 0, 0, irq_num >> 2, + irq_num & 3, level); +} + +void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len) +{ + int i; + + /* Scan for updates to 
PCI link routes (0x60-0x63). */ + for (i = 0; i < len; i++) { + uint8_t v = (val >> (8 * i)) & 0xff; + if (v & 0x80) { + v = 0; + } + v &= 0xf; + if (((address + i) >= 0x60) && ((address + i) <= 0x63)) { + xc_hvm_set_pci_link_route(xen_xc, xen_domid, address + i - 0x60, v); + } + } +} + /* Initialise Xen */ int xen_init(int smp_cpus) diff --git a/xen-stub.c b/xen-stub.c index 0fa9c51..07e64bc 100644 --- a/xen-stub.c +++ b/xen-stub.c @@ -11,6 +11,19 @@ #include "qemu-common.h" #include "hw/xen.h" +int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num) +{ + return -1; +} + +void xen_piix3_set_irq(void *opaque, int irq_num, int level) +{ +} + +void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len) +{ +} + int xen_init(int smp_cpus) { return -ENOSYS; -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
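A worked example of the interception above (my numbers): a guest write of 0x0a to PIIX3 config offset 0x61 routes PCI link 1 to ISA IRQ 10, ending up as xc_hvm_set_pci_link_route(xen_xc, xen_domid, 1, 10); a write of 0x80 (bit 7 set, link disabled) is masked down to 0, which un-routes the link.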
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 08/14] xen: add a 8259 Interrupt Controller
From: Anthony PERARD <anthony.perard@citrix.com>

Introduce an 8259 Interrupt Controller for target-xen; every set_irq
call makes a Xen hypercall.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 hw/xen_common.h     |    2 ++
 hw/xen_machine_fv.c |    5 ++---
 xen-all.c           |   12 ++++++++++++
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/hw/xen_common.h b/hw/xen_common.h
index 4c0f97d..a24bcb3 100644
--- a/hw/xen_common.h
+++ b/hw/xen_common.h
@@ -44,4 +44,6 @@ typedef xc_interface *qemu_xc_interface;
 # define xc_fd(xen_xc) (*(int*)xen_xc)
 #endif
 
+qemu_irq *i8259_xen_init(void);
+
 #endif /* QEMU_HW_XEN_COMMON_H */
diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
index 39ee7c7..fe4491f 100644
--- a/hw/xen_machine_fv.c
+++ b/hw/xen_machine_fv.c
@@ -36,6 +36,7 @@
 #include "xen_common.h"
 #include "xen/hvm/hvm_info_table.h"
 #include "xen_platform.h"
+#include "xen_common.h"
 
 #define MAX_IDE_BUS 2
 
@@ -51,7 +52,6 @@ static void xen_init_fv(ram_addr_t ram_size,
     PCIBus *pci_bus;
     PCII440FXState *i440fx_state;
     int piix3_devfn = -1;
-    qemu_irq *cpu_irq;
     qemu_irq *isa_irq;
     qemu_irq *i8259;
     qemu_irq *cmos_s3;
@@ -75,8 +75,7 @@ static void xen_init_fv(ram_addr_t ram_size,
     env = cpu_init(cpu_model);
     env->halted = 1;
 
-    cpu_irq = pc_allocate_cpu_irq();
-    i8259 = i8259_init(cpu_irq[0]);
+    i8259 = i8259_xen_init();
     isa_irq_state = qemu_mallocz(sizeof (*isa_irq_state));
     isa_irq_state->i8259 = i8259;
 
diff --git a/xen-all.c b/xen-all.c
index f913e1f..90c03eb 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -42,6 +42,18 @@ void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
     }
 }
 
+/* i8259 */
+
+static void i8259_set_irq(void *opaque, int irq, int level)
+{
+    xc_hvm_set_isa_irq_level(xen_xc, xen_domid, irq, level);
+}
+
+qemu_irq *i8259_xen_init(void)
+{
+    return qemu_allocate_irqs(i8259_set_irq, NULL, 16);
+}
+
 /* Initialise Xen */
 
 int xen_init(int smp_cpus)
-- 
1.6.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
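With this in place, raising an ISA IRQ from an emulated device reduces to a single hypercall; roughly, for illustration:

    /* qemu_set_irq(isa_irq[14], 1)            (e.g. IDE raising IRQ 14)
     *   -> isa_irq_handler -> i8259[14]
     *   -> i8259_set_irq(NULL, 14, 1)
     *   -> xc_hvm_set_isa_irq_level(xen_xc, xen_domid, 14, 1) */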
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 09/14] xen: Introduce the Xen mapcache
From: Anthony PERARD <anthony.perard@citrix.com> The mapcache maps chucks of guest memory on demand, unmaps them when they are not needed anymore. Each call to qemu_get_ram_ptr makes a call to qemu_map_cache with the lock option, so mapcache will not unmap these ram_ptr. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> --- Makefile.target | 3 + configure | 3 + exec.c | 39 ++++++- hw/xen.h | 10 ++ hw/xen_common.h | 2 + xen-all.c | 64 +++++++++++ xen-mapcache-stub.c | 33 ++++++ xen-mapcache.c | 301 +++++++++++++++++++++++++++++++++++++++++++++++++++ xen-mapcache.h | 14 +++ xen-stub.c | 4 + 10 files changed, 469 insertions(+), 4 deletions(-) create mode 100644 xen-mapcache-stub.c create mode 100644 xen-mapcache.c create mode 100644 xen-mapcache.h diff --git a/Makefile.target b/Makefile.target index 9353627..fddce71 100644 --- a/Makefile.target +++ b/Makefile.target @@ -185,8 +185,11 @@ QEMU_CFLAGS += $(VNC_PNG_CFLAGS) obj-$(CONFIG_XEN) += xen_machine_pv.o xen_domainbuild.o # xen support +CONFIG_NO_XEN_MAPCACHE = $(if $(subst n,,$(CONFIG_XEN_MAPCACHE)),n,y) obj-$(CONFIG_XEN) += xen-all.o obj-$(CONFIG_NO_XEN) += xen-stub.o +obj-$(CONFIG_XEN_MAPCACHE) += xen-mapcache.o +obj-$(CONFIG_NO_XEN_MAPCACHE) += xen-mapcache-stub.o # xen full virtualized machine obj-i386-$(CONFIG_XEN) += xen_machine_fv.o diff --git a/configure b/configure index 18c3fa0..e915583 100755 --- a/configure +++ b/configure @@ -2812,6 +2812,9 @@ case "$target_arch2" in i386|x86_64) if test "$xen" = "yes" -a "$target_softmmu" = "yes" ; then echo "CONFIG_XEN=y" >> $config_target_mak + if test "$cpu" = "i386" -o "$cpu" = "x86_64"; then + echo "CONFIG_XEN_MAPCACHE=y" >> $config_target_mak + fi fi esac case "$target_arch2" in diff --git a/exec.c b/exec.c index 380dab5..0de9e32 100644 --- a/exec.c +++ b/exec.c @@ -39,6 +39,7 @@ #include "hw/qdev.h" #include "osdep.h" #include "kvm.h" +#include "hw/xen.h" #include "qemu-timer.h" #if defined(CONFIG_USER_ONLY) #include <qemu.h> @@ -58,8 +59,11 @@ #include <libutil.h> #endif #endif +#else /* !CONFIG_USER_ONLY */ +#include "xen-mapcache.h" #endif + //#define DEBUG_TB_INVALIDATE //#define DEBUG_FLUSH //#define DEBUG_TLB @@ -2833,6 +2837,7 @@ ram_addr_t qemu_ram_alloc_from_ptr(DeviceState *dev, const char *name, } } + new_block->offset = find_ram_offset(size); if (host) { new_block->host = host; } else { @@ -2856,15 +2861,17 @@ ram_addr_t qemu_ram_alloc_from_ptr(DeviceState *dev, const char *name, PROT_EXEC|PROT_READ|PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0); #else - new_block->host = qemu_vmalloc(size); + if (xen_mapcache_enabled()) { + xen_ram_alloc(new_block->offset, size); + } else { + new_block->host = qemu_vmalloc(size); + } #endif #ifdef MADV_MERGEABLE madvise(new_block->host, size, MADV_MERGEABLE); #endif } } - - new_block->offset = find_ram_offset(size); new_block->length = size; QLIST_INSERT_HEAD(&ram_list.blocks, new_block, next); @@ -2905,7 +2912,11 @@ void qemu_ram_free(ram_addr_t addr) #if defined(TARGET_S390X) && defined(CONFIG_KVM) munmap(block->host, block->length); #else - qemu_vfree(block->host); + if (xen_mapcache_enabled()) { + qemu_invalidate_entry(block->host); + } else { + qemu_vfree(block->host); + } #endif } qemu_free(block); @@ -2931,6 +2942,15 @@ void *qemu_get_ram_ptr(ram_addr_t addr) if (addr - block->offset < block->length) { QLIST_REMOVE(block, next); QLIST_INSERT_HEAD(&ram_list.blocks, block, next); + if (xen_mapcache_enabled()) { + /* We need to check if the requested address is in the RAM + * because we don''t want to map the entire memory in 
QEMU. + */ + if (block->offset == 0) { + return qemu_map_cache(addr, 0, 1); + } + block->host = qemu_map_cache(block->offset, block->length, 1); + } return block->host + (addr - block->offset); } } @@ -2949,11 +2969,19 @@ ram_addr_t qemu_ram_addr_from_host(void *ptr) uint8_t *host = ptr; QLIST_FOREACH(block, &ram_list.blocks, next) { + /* This case append when the block is not mapped. */ + if (block->host == NULL) { + continue; + } if (host - block->host < block->length) { return block->offset + (host - block->host); } } + if (xen_mapcache_enabled()) { + return qemu_ram_addr_from_mapcache(ptr); + } + fprintf(stderr, "Bad ram pointer %p\n", ptr); abort(); @@ -3728,6 +3756,9 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len, if (is_write) { cpu_physical_memory_write(bounce.addr, bounce.buffer, access_len); } + if (xen_enabled()) { + qemu_invalidate_entry(buffer); + } qemu_vfree(bounce.buffer); bounce.buffer = NULL; cpu_notify_map_clients(); diff --git a/hw/xen.h b/hw/xen.h index c5189b1..0261ae6 100644 --- a/hw/xen.h +++ b/hw/xen.h @@ -28,10 +28,20 @@ extern int xen_allowed; #define xen_enabled() (0) #endif +#if defined CONFIG_XEN_MAPCACHE +# define xen_mapcache_enabled() (xen_enabled()) +#else +# define xen_mapcache_enabled() (0) +#endif + int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num); void xen_piix3_set_irq(void *opaque, int irq_num, int level); void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len); int xen_init(int smp_cpus); +#if defined(NEED_CPU_H) && !defined(CONFIG_USER_ONLY) +void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size); +#endif + #endif /* QEMU_HW_XEN_H */ diff --git a/hw/xen_common.h b/hw/xen_common.h index a24bcb3..2773b45 100644 --- a/hw/xen_common.h +++ b/hw/xen_common.h @@ -36,6 +36,8 @@ typedef int qemu_xc_interface; xc_gnttab_map_grant_refs(gnt, count, domids, refs, flags) # define xc_gnttab_munmap(xc, gnt, pages, niov) xc_gnttab_munmap(gnt, pages, niov) # define xc_gnttab_close(xc, dev) xc_gnttab_close(dev) +# define xc_map_foreign_bulk(xc, domid, opts, pfns, err, size) \ + xc_map_foreign_batch(xc, domid, opts, pfns, size) #else typedef xc_interface *qemu_xc_interface; # define XC_HANDLER_INITIAL_VALUE NULL diff --git a/xen-all.c b/xen-all.c index 90c03eb..3048c4d 100644 --- a/xen-all.c +++ b/xen-all.c @@ -12,6 +12,8 @@ #include "hw/xen_common.h" #include "hw/xen_backend.h" +#include "xen-mapcache.h" + /* Xen specific function for piix pci */ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num) @@ -54,6 +56,64 @@ qemu_irq *i8259_xen_init(void) return qemu_allocate_irqs(i8259_set_irq, NULL, 16); } + +/* Memory Ops */ + +static void xen_ram_init(ram_addr_t ram_size) +{ + RAMBlock *new_block; + ram_addr_t below_4g_mem_size, above_4g_mem_size = 0; + + new_block = qemu_mallocz(sizeof (*new_block)); + pstrcpy(new_block->idstr, sizeof (new_block->idstr), "xen.ram"); + new_block->host = NULL; + new_block->offset = 0; + new_block->length = ram_size; + + QLIST_INSERT_HEAD(&ram_list.blocks, new_block, next); + + ram_list.phys_dirty = qemu_realloc(ram_list.phys_dirty, + new_block->length >> TARGET_PAGE_BITS); + memset(ram_list.phys_dirty + (new_block->offset >> TARGET_PAGE_BITS), + 0xff, new_block->length >> TARGET_PAGE_BITS); + + if (ram_size >= 0xe0000000 ) { + above_4g_mem_size = ram_size - 0xe0000000; + below_4g_mem_size = 0xe0000000; + } else { + below_4g_mem_size = ram_size; + } + + cpu_register_physical_memory(0, below_4g_mem_size, new_block->offset); +#if TARGET_PHYS_ADDR_BITS > 32 + if 
(above_4g_mem_size > 0) { + cpu_register_physical_memory(0x100000000ULL, above_4g_mem_size, + new_block->offset + below_4g_mem_size); + } +#endif +} + +void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size) +{ + unsigned long nr_pfn; + xen_pfn_t *pfn_list; + int i; + + nr_pfn = size >> TARGET_PAGE_BITS; + pfn_list = qemu_malloc(sizeof (*pfn_list) * nr_pfn); + + for (i = 0; i < nr_pfn; i++) { + pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i; + } + + if (xc_domain_memory_populate_physmap(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) { + hw_error("xen: failed to populate ram at %lx", ram_addr); + } + + qemu_free(pfn_list); +} + + /* Initialise Xen */ int xen_init(int smp_cpus) @@ -64,5 +124,9 @@ int xen_init(int smp_cpus) return -1; } + /* Init RAM management */ + qemu_map_cache_init(); + xen_ram_init(ram_size); + return 0; } diff --git a/xen-mapcache-stub.c b/xen-mapcache-stub.c new file mode 100644 index 0000000..69ce2e7 --- /dev/null +++ b/xen-mapcache-stub.c @@ -0,0 +1,33 @@ +#include "config.h" + +#include "exec-all.h" +#include "qemu-common.h" +#include "cpu-common.h" +#include "xen-mapcache.h" + +int qemu_map_cache_init(void) +{ + return 0; +} + +uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, uint8_t lock) +{ + return qemu_get_ram_ptr(phys_addr); +} + +void qemu_map_cache_unlock(void *buffer) +{ +} + +ram_addr_t qemu_ram_addr_from_mapcache(void *ptr) +{ + return -1; +} + +void qemu_invalidate_map_cache(void) +{ +} + +void qemu_invalidate_entry(uint8_t *buffer) +{ +} diff --git a/xen-mapcache.c b/xen-mapcache.c new file mode 100644 index 0000000..c7e69e6 --- /dev/null +++ b/xen-mapcache.c @@ -0,0 +1,301 @@ +#include "config.h" + +#include "hw/xen_backend.h" +#include "blockdev.h" + +#include <xen/hvm/params.h> +#include <sys/mman.h> + +#include "xen-mapcache.h" + + +//#define MAPCACHE_DEBUG + +#ifdef MAPCACHE_DEBUG +# define DPRINTF(fmt, ...) do { \ + fprintf(stderr, "xen_mapcache: " fmt, ## __VA_ARGS__); \ +} while (0) +#else +# define DPRINTF(fmt, ...) do { } while (0) +#endif + +#if defined(__i386__) +# define MAX_MCACHE_SIZE 0x40000000 /* 1GB max for x86 */ +# define MCACHE_BUCKET_SHIFT 16 +#elif defined(__x86_64__) +# define MAX_MCACHE_SIZE 0x1000000000 /* 64GB max for x86_64 */ +# define MCACHE_BUCKET_SHIFT 20 +#endif +#define MCACHE_BUCKET_SIZE (1UL << MCACHE_BUCKET_SHIFT) + +#define BITS_PER_LONG (sizeof(long) * 8) +#define BITS_TO_LONGS(bits) (((bits) + BITS_PER_LONG - 1) / BITS_PER_LONG) +#define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)] +#define test_bit(bit, map) \ + (!!((map)[(bit) / BITS_PER_LONG] & (1UL << ((bit) % BITS_PER_LONG)))) + +typedef struct MapCacheEntry { + target_phys_addr_t paddr_index; + uint8_t *vaddr_base; + DECLARE_BITMAP(valid_mapping, MCACHE_BUCKET_SIZE >> XC_PAGE_SHIFT); + uint8_t lock; + struct MapCacheEntry *next; +} MapCacheEntry; + +typedef struct MapCacheRev { + uint8_t *vaddr_req; + target_phys_addr_t paddr_index; + QTAILQ_ENTRY(MapCacheRev) next; +} MapCacheRev; + +typedef struct MapCache { + MapCacheEntry *entry; + unsigned long nr_buckets; + QTAILQ_HEAD(map_cache_head, MapCacheRev) locked_entries; + + /* For most cases (>99.9%), the page address is the same. 
*/ + target_phys_addr_t last_address_index; + uint8_t *last_address_vaddr; +} MapCache; + +static MapCache *mapcache; + + +int qemu_map_cache_init(void) +{ + unsigned long size; + + mapcache = qemu_mallocz(sizeof (MapCache)); + + QTAILQ_INIT(&mapcache->locked_entries); + mapcache->last_address_index = ~0UL; + + mapcache->nr_buckets = (((MAX_MCACHE_SIZE >> XC_PAGE_SHIFT) + + (1UL << (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT)) - 1) >> + (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT)); + + /* + * Use mmap() directly: lets us allocate a big hash table with no up-front + * cost in storage space. The OS will allocate memory only for the buckets + * that we actually use. All others will contain all zeroes. + */ + size = mapcache->nr_buckets * sizeof (MapCacheEntry); + size = (size + XC_PAGE_SIZE - 1) & ~(XC_PAGE_SIZE - 1); + DPRINTF("qemu_map_cache_init, nr_buckets = %lx size %lu\n", mapcache->nr_buckets, size); + mapcache->entry = mmap(NULL, size, PROT_READ|PROT_WRITE, + MAP_SHARED|MAP_ANON, -1, 0); + if (mapcache->entry == MAP_FAILED) { + return -1; + } + + return 0; +} + +static void qemu_remap_bucket(MapCacheEntry *entry, + target_phys_addr_t size, + target_phys_addr_t address_index) +{ + uint8_t *vaddr_base; + xen_pfn_t *pfns; + int *err; + unsigned int i, j; + target_phys_addr_t nb_pfn = size >> XC_PAGE_SHIFT; + + pfns = qemu_mallocz(nb_pfn * sizeof (xen_pfn_t)); + err = qemu_mallocz(nb_pfn * sizeof (int)); + + if (entry->vaddr_base != NULL) { + if (munmap(entry->vaddr_base, size) != 0) { + perror("unmap fails"); + exit(-1); + } + } + + for (i = 0; i < nb_pfn; i++) { + pfns[i] = (address_index << (MCACHE_BUCKET_SHIFT-XC_PAGE_SHIFT)) + i; + } + + vaddr_base = xc_map_foreign_bulk(xen_xc, xen_domid, PROT_READ|PROT_WRITE, + pfns, err, nb_pfn); + if (vaddr_base == NULL) { + perror("xc_map_foreign_bulk"); + exit(-1); + } + + entry->vaddr_base = vaddr_base; + entry->paddr_index = address_index; + + for (i = 0; i < nb_pfn; i += BITS_PER_LONG) { + unsigned long word = 0; + if ((i + BITS_PER_LONG) > nb_pfn) { + j = nb_pfn % BITS_PER_LONG; + } else { + j = BITS_PER_LONG; + } + while (j > 0) { + word = (word << 1) | !err[i + --j]; + } + entry->valid_mapping[i / BITS_PER_LONG] = word; + } + + qemu_free(pfns); + qemu_free(err); +} + +uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, uint8_t lock) +{ + MapCacheEntry *entry, *pentry = NULL; + target_phys_addr_t address_index = phys_addr >> MCACHE_BUCKET_SHIFT; + target_phys_addr_t address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1); + + if (address_index == mapcache->last_address_index && !lock) { + return mapcache->last_address_vaddr + address_offset; + } + + entry = &mapcache->entry[address_index % mapcache->nr_buckets]; + + while (entry && entry->lock && entry->paddr_index != address_index && entry->vaddr_base) { + pentry = entry; + entry = entry->next; + } + if (!entry) { + entry = qemu_mallocz(sizeof (MapCacheEntry)); + pentry->next = entry; + qemu_remap_bucket(entry, size ? : MCACHE_BUCKET_SIZE, address_index); + } else if (!entry->lock) { + if (!entry->vaddr_base || entry->paddr_index != address_index || + !test_bit(address_offset >> XC_PAGE_SHIFT, entry->valid_mapping)) { + qemu_remap_bucket(entry, size ? 
: MCACHE_BUCKET_SIZE, address_index); + } + } + + if (!test_bit(address_offset >> XC_PAGE_SHIFT, entry->valid_mapping)) { + mapcache->last_address_index = ~0UL; + return NULL; + } + + mapcache->last_address_index = address_index; + mapcache->last_address_vaddr = entry->vaddr_base; + if (lock) { + MapCacheRev *reventry = qemu_mallocz(sizeof(MapCacheRev)); + entry->lock++; + reventry->vaddr_req = mapcache->last_address_vaddr + address_offset; + reventry->paddr_index = mapcache->last_address_index; + QTAILQ_INSERT_TAIL(&mapcache->locked_entries, reventry, next); + } + + return mapcache->last_address_vaddr + address_offset; +} + +ram_addr_t qemu_ram_addr_from_mapcache(void *ptr) +{ + MapCacheRev *reventry; + target_phys_addr_t paddr_index; + int found = 0; + + QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) { + if (reventry->vaddr_req == ptr) { + paddr_index = reventry->paddr_index; + found = 1; + break; + } + } + if (!found) { + fprintf(stderr, "qemu_ram_addr_from_mapcache, could not find %p\n", ptr); + QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) { + DPRINTF(" %lx -> %p is present\n", reventry->paddr_index, + reventry->vaddr_req); + } + abort(); + return 0; + } + + return paddr_index << MCACHE_BUCKET_SHIFT; +} + +void qemu_invalidate_entry(uint8_t *buffer) +{ + MapCacheEntry *entry = NULL, *pentry = NULL; + MapCacheRev *reventry; + target_phys_addr_t paddr_index; + int found = 0; + + if (mapcache->last_address_vaddr == buffer) { + mapcache->last_address_index = ~0UL; + } + + QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) { + if (reventry->vaddr_req == buffer) { + paddr_index = reventry->paddr_index; + found = 1; + break; + } + } + if (!found) { + DPRINTF("qemu_invalidate_entry, could not find %p\n", buffer); + QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) { + DPRINTF(" %lx -> %p is present\n", reventry->paddr_index, reventry->vaddr_req); + } + return; + } + QTAILQ_REMOVE(&mapcache->locked_entries, reventry, next); + qemu_free(reventry); + + entry = &mapcache->entry[paddr_index % mapcache->nr_buckets]; + while (entry && entry->paddr_index != paddr_index) { + pentry = entry; + entry = entry->next; + } + if (!entry) { + DPRINTF("Trying to unmap address %p that is not in the mapcache!\n", buffer); + return; + } + entry->lock--; + if (entry->lock > 0 || pentry == NULL) { + return; + } + + pentry->next = entry->next; + if (munmap(entry->vaddr_base, MCACHE_BUCKET_SIZE) != 0) { + perror("unmap fails"); + exit(-1); + } + qemu_free(entry); +} + +void qemu_invalidate_map_cache(void) +{ + unsigned long i; + MapCacheRev *reventry; + + qemu_aio_flush(); + + QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) { + DPRINTF("There should be no locked mappings at this time, " + "but %lx -> %p is present\n", + reventry->paddr_index, reventry->vaddr_req); + } + + mapcache_lock(); + + for (i = 0; i < mapcache->nr_buckets; i++) { + MapCacheEntry *entry = &mapcache->entry[i]; + + if (entry->vaddr_base == NULL) { + continue; + } + + if (munmap(entry->vaddr_base, MCACHE_BUCKET_SIZE) != 0) { + perror("unmap fails"); + exit(-1); + } + + entry->paddr_index = 0; + entry->vaddr_base = NULL; + } + + mapcache->last_address_index = ~0UL; + mapcache->last_address_vaddr = NULL; + + mapcache_unlock(); +} diff --git a/xen-mapcache.h b/xen-mapcache.h new file mode 100644 index 0000000..86a017b --- /dev/null +++ b/xen-mapcache.h @@ -0,0 +1,14 @@ +#ifndef XEN_MAPCACHE_H +#define XEN_MAPCACHE_H + +int qemu_map_cache_init(void); +uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, 
target_phys_addr_t size, uint8_t lock); +void qemu_map_cache_unlock(void *phys_addr); +ram_addr_t qemu_ram_addr_from_mapcache(void *ptr); +void qemu_invalidate_entry(uint8_t *buffer); +void qemu_invalidate_map_cache(void); + +#define mapcache_lock() ((void)0) +#define mapcache_unlock() ((void)0) + +#endif /* !XEN_MAPCACHE_H */ diff --git a/xen-stub.c b/xen-stub.c index 07e64bc..c9f477d 100644 --- a/xen-stub.c +++ b/xen-stub.c @@ -24,6 +24,10 @@ void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len) { } +void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size) +{ +} + int xen_init(int smp_cpus) { return -ENOSYS; -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
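To make the bucket arithmetic concrete (my numbers, x86_64 where MCACHE_BUCKET_SHIFT is 20):

    /* qemu_map_cache(0x12345678, 0, 1):
     *   address_index  = 0x12345678 >> 20     = 0x123
     *   address_offset = 0x12345678 & 0xfffff = 0x45678
     * entry[0x123 % nr_buckets] is looked up; on a miss the whole 1 MB
     * bucket is mapped via xc_map_foreign_bulk() and the caller gets
     * vaddr_base + 0x45678, locked until qemu_invalidate_entry(). */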
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 10/14] Introduce qemu_ram_ptr_unlock.
From: Anthony PERARD <anthony.perard@citrix.com>

This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
used with Xen.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 cpu-common.h   |    1 +
 exec.c         |   32 +++++++++++++++++++++++++++++---
 xen-mapcache.c |   34 ++++++++++++++++++++++++++++++++++
 3 files changed, 64 insertions(+), 3 deletions(-)

diff --git a/cpu-common.h b/cpu-common.h
index 0426bc8..378eea8 100644
--- a/cpu-common.h
+++ b/cpu-common.h
@@ -46,6 +46,7 @@ ram_addr_t qemu_ram_alloc(DeviceState *dev, const char *name, ram_addr_t size);
 void qemu_ram_free(ram_addr_t addr);
 /* This should only be used for ram local to a device. */
 void *qemu_get_ram_ptr(ram_addr_t addr);
+void qemu_ram_ptr_unlock(void *addr);
 /* This should not be used by devices. */
 ram_addr_t qemu_ram_addr_from_host(void *ptr);

diff --git a/exec.c b/exec.c
index 0de9e32..0612ee4 100644
--- a/exec.c
+++ b/exec.c
@@ -2961,6 +2961,13 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
     return NULL;
 }

+void qemu_ram_ptr_unlock(void *addr)
+{
+    if (xen_mapcache_enabled()) {
+        qemu_map_cache_unlock(addr);
+    }
+}
+
 /* Some of the softmmu routines need to translate from a host pointer
    (typically a TLB entry) back to a ram offset. */
 ram_addr_t qemu_ram_addr_from_host(void *ptr)
@@ -3067,6 +3074,7 @@ static void notdirty_mem_writeb(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
+    void *vaddr;
     dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
@@ -3074,19 +3082,22 @@ static void notdirty_mem_writeb(void *opaque, target_phys_addr_t ram_addr,
         dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
-    stb_p(qemu_get_ram_ptr(ram_addr), val);
+    vaddr = qemu_get_ram_ptr(ram_addr);
+    stb_p(vaddr, val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
     cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
         tlb_set_dirty(cpu_single_env, cpu_single_env->mem_io_vaddr);
+    qemu_ram_ptr_unlock(vaddr);
 }

 static void notdirty_mem_writew(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
+    void *vaddr;
     dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
@@ -3094,19 +3105,22 @@ static void notdirty_mem_writew(void *opaque, target_phys_addr_t ram_addr,
         dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
-    stw_p(qemu_get_ram_ptr(ram_addr), val);
+    vaddr = qemu_get_ram_ptr(ram_addr);
+    stw_p(vaddr, val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
     cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
         tlb_set_dirty(cpu_single_env, cpu_single_env->mem_io_vaddr);
+    qemu_ram_ptr_unlock(vaddr);
 }

 static void notdirty_mem_writel(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
+    void *vaddr;
     dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
@@ -3114,13 +3128,15 @@ static void notdirty_mem_writel(void *opaque, target_phys_addr_t ram_addr,
         dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
-    stl_p(qemu_get_ram_ptr(ram_addr), val);
+    vaddr = qemu_get_ram_ptr(ram_addr);
+    stl_p(vaddr, val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
     cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
         tlb_set_dirty(cpu_single_env, cpu_single_env->mem_io_vaddr);
+    qemu_ram_ptr_unlock(vaddr);
 }

 static CPUReadMemoryFunc * const error_mem_read[3] = {
@@ -3540,6 +3556,7 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                 cpu_physical_memory_set_dirty_flags(
                     addr1, (0xff & ~CODE_DIRTY_FLAG));
             }
+            qemu_ram_ptr_unlock(ptr);
         }
     } else {
         if ((pd & ~TARGET_PAGE_MASK) > IO_MEM_ROM &&
@@ -3570,6 +3587,7 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                 ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
                     (addr & ~TARGET_PAGE_MASK);
                 memcpy(buf, ptr, l);
+                qemu_ram_ptr_unlock(ptr);
             }
         }
         len -= l;
@@ -3610,6 +3628,7 @@ void cpu_physical_memory_write_rom(target_phys_addr_t addr,
             /* ROM/RAM case */
             ptr = qemu_get_ram_ptr(addr1);
             memcpy(ptr, buf, l);
+            qemu_ram_ptr_unlock(ptr);
         }
         len -= l;
         buf += l;
@@ -3792,6 +3811,7 @@ uint32_t ldl_phys(target_phys_addr_t addr)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         val = ldl_p(ptr);
+        qemu_ram_ptr_unlock(ptr);
     }
     return val;
 }
@@ -3830,6 +3850,7 @@ uint64_t ldq_phys(target_phys_addr_t addr)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         val = ldq_p(ptr);
+        qemu_ram_ptr_unlock(ptr);
     }
     return val;
 }
@@ -3870,6 +3891,7 @@ uint32_t lduw_phys(target_phys_addr_t addr)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         val = lduw_p(ptr);
+        qemu_ram_ptr_unlock(ptr);
     }
     return val;
 }
@@ -3900,6 +3922,7 @@ void stl_phys_notdirty(target_phys_addr_t addr, uint32_t val)
         unsigned long addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
         ptr = qemu_get_ram_ptr(addr1);
         stl_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);

         if (unlikely(in_migration)) {
             if (!cpu_physical_memory_is_dirty(addr1)) {
@@ -3942,6 +3965,7 @@ void stq_phys_notdirty(target_phys_addr_t addr, uint64_t val)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         stq_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);
     }
 }

@@ -3971,6 +3995,7 @@ void stl_phys(target_phys_addr_t addr, uint32_t val)
         /* RAM case */
         ptr = qemu_get_ram_ptr(addr1);
         stl_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);
         if (!cpu_physical_memory_is_dirty(addr1)) {
             /* invalidate code */
             tb_invalidate_phys_page_range(addr1, addr1 + 4, 0);
@@ -4014,6 +4039,7 @@ void stw_phys(target_phys_addr_t addr, uint32_t val)
         /* RAM case */
         ptr = qemu_get_ram_ptr(addr1);
         stw_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);
         if (!cpu_physical_memory_is_dirty(addr1)) {
             /* invalidate code */
             tb_invalidate_phys_page_range(addr1, addr1 + 2, 0);
diff --git a/xen-mapcache.c b/xen-mapcache.c
index c7e69e6..e407949 100644
--- a/xen-mapcache.c
+++ b/xen-mapcache.c
@@ -187,6 +187,40 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, u
     return mapcache->last_address_vaddr + address_offset;
 }

+void qemu_map_cache_unlock(void *buffer)
+{
+    MapCacheEntry *entry = NULL, *pentry = NULL;
+    MapCacheRev *reventry;
+    target_phys_addr_t paddr_index;
+    int found = 0;
+
+    QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
+        if (reventry->vaddr_req == buffer) {
+            paddr_index = reventry->paddr_index;
+            found = 1;
+            break;
+        }
+    }
+    if (!found) {
+        return;
+    }
+    QTAILQ_REMOVE(&mapcache->locked_entries, reventry, next);
+    qemu_free(reventry);
+
+    entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
+    while (entry && entry->paddr_index != paddr_index) {
+        pentry = entry;
+        entry = entry->next;
+    }
+    if (!entry) {
+        return;
+    }
+    entry->lock--;
+    if (entry->lock > 0) {
+        return;
+    }
+}
+
 ram_addr_t qemu_ram_addr_from_mapcache(void *ptr)
 {
     MapCacheRev *reventry;
-- 
1.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
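For illustration, the locking discipline this patch establishes, as a hypothetical caller (a sketch only; stl_p and ram_addr_t are as used in the hunks above):

    /* Under Xen, qemu_get_ram_ptr() maps the page through the mapcache
     * and locks the bucket; qemu_ram_ptr_unlock() drops that lock so
     * the bucket may be unmapped again.  For TCG/KVM both extra steps
     * are no-ops. */
    static void write_guest_word(ram_addr_t addr, uint32_t val)
    {
        void *vaddr = qemu_get_ram_ptr(addr);  /* map + lock (Xen) */
        stl_p(vaddr, val);                     /* access while pinned */
        qemu_ram_ptr_unlock(vaddr);            /* may be unmapped now */
    }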
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 11/14] vl.c: Introduce getter for shutdown_requested and reset_requested.
From: Anthony PERARD <anthony.perard@citrix.com>

Introduce two functions, qemu_shutdown_requested_get and
qemu_reset_requested_get, to get the value of shutdown/reset_requested
without resetting it.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 sysemu.h |    2 ++
 vl.c     |   10 ++++++++++
 2 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/sysemu.h b/sysemu.h
index a1f6466..7facfae 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -51,6 +51,8 @@ void cpu_disable_ticks(void);
 void qemu_system_reset_request(void);
 void qemu_system_shutdown_request(void);
 void qemu_system_powerdown_request(void);
+int qemu_shutdown_requested_get(void);
+int qemu_reset_requested_get(void);
 int qemu_shutdown_requested(void);
 int qemu_reset_requested(void);
 int qemu_powerdown_requested(void);
diff --git a/vl.c b/vl.c
index c0c9d32..cdd9bca 100644
--- a/vl.c
+++ b/vl.c
@@ -1134,6 +1134,16 @@ static int powerdown_requested;
 int debug_requested;
 int vmstop_requested;

+int qemu_shutdown_requested_get(void)
+{
+    return shutdown_requested;
+}
+
+int qemu_reset_requested_get(void)
+{
+    return reset_requested;
+}
+
 int qemu_shutdown_requested(void)
 {
     int r = shutdown_requested;
-- 
1.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
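The practical difference from the existing accessors, shown at a hypothetical polling site (a sketch; only the two new getters come from this patch):

    static void poll_requests(void)
    {
        if (qemu_shutdown_requested_get()) {
            /* peek only: shutdown_requested stays set, so the main
             * loop in vl.c will still see and act on the request */
        }
        if (qemu_shutdown_requested()) {
            /* old-style accessor: returns and clears the flag, so
             * this caller consumes the request */
        }
    }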
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 12/14] xen: Initialize event channels and io rings
From: Anthony PERARD <anthony.perard@citrix.com> Open and bind event channels; map ioreq and buffered ioreq rings. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> --- hw/xen_common.h | 3 + xen-all.c | 395 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 398 insertions(+), 0 deletions(-) diff --git a/hw/xen_common.h b/hw/xen_common.h index 2773b45..b3d1fe3 100644 --- a/hw/xen_common.h +++ b/hw/xen_common.h @@ -26,6 +26,7 @@ /* Xen unstable */ #if CONFIG_XEN_CTRL_INTERFACE_VERSION < 410 typedef int qemu_xc_interface; +# define XC_INTERFACE_FMT "%i" # define XC_HANDLER_INITIAL_VALUE -1 # define xc_fd(xen_xc) xen_xc # define xc_interface_open(l, dl, f) xc_interface_open() @@ -40,6 +41,7 @@ typedef int qemu_xc_interface; xc_map_foreign_batch(xc, domid, opts, pfns, size) #else typedef xc_interface *qemu_xc_interface; +# define XC_INTERFACE_FMT "%p" # define XC_HANDLER_INITIAL_VALUE NULL /* FIXME The fd of xen_xc is now xen_xc->fd */ /* fd is the first field, so this works */ @@ -47,5 +49,6 @@ typedef xc_interface *qemu_xc_interface; #endif qemu_irq *i8259_xen_init(void); +void destroy_hvm_domain(void); #endif /* QEMU_HW_XEN_COMMON_H */ diff --git a/xen-all.c b/xen-all.c index 3048c4d..c33773a 100644 --- a/xen-all.c +++ b/xen-all.c @@ -8,12 +8,56 @@ #include "config.h" +#include <sys/mman.h> + #include "hw/pci.h" #include "hw/xen_common.h" #include "hw/xen_backend.h" #include "xen-mapcache.h" +#include <xen/hvm/ioreq.h> +#include <xen/hvm/params.h> + +//#define DEBUG_XEN + +#ifdef DEBUG_XEN +#define DPRINTF(fmt, ...) \ + do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0) +#else +#define DPRINTF(fmt, ...) \ + do { } while (0) +#endif + +/* Compatibility with older version */ +#if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a +# define xen_vcpu_eport(shared_page, i) \ + (shared_page->vcpu_iodata[i].vp_eport) +# define xen_vcpu_ioreq(shared_page, vcpu) \ + (shared_page->vcpu_iodata[vcpu].vp_ioreq) +# define FMT_ioreq_size PRIx64 +#else +# define xen_vcpu_eport(shared_page, i) \ + (shared_page->vcpu_ioreq[i].vp_eport) +# define xen_vcpu_ioreq(shared_page, vcpu) \ + (shared_page->vcpu_ioreq[vcpu]) +# define FMT_ioreq_size "u" +#endif + +#define BUFFER_IO_MAX_DELAY 100 + +typedef struct XenIOState { + shared_iopage_t *shared_page; + buffered_iopage_t *buffered_io_page; + QEMUTimer *buffered_io_timer; + /* the evtchn port for polling the notification, */ + evtchn_port_t *ioreq_local_port; + /* the evtchn fd for polling */ + int xce_handle; + /* which vcpu we are serving */ + int send_vcpu; +} XenIOState; + /* Xen specific function for piix pci */ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num) @@ -114,19 +158,370 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size) } +/* VCPU Operations, MMIO, IO ring ... 
*/ + +/* get the ioreq packets from share mem */ +static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu) +{ + ioreq_t *req = xen_vcpu_ioreq(&state->shared_page, vcpu); + + if (req->state != STATE_IOREQ_READY) { + DPRINTF("I/O request not ready: " + "%x, ptr: %x, port: %"PRIx64", " + "data: %"PRIx64", count: %" FMT_ioreq_size ", size: %" FMT_ioreq_size "\n", + req->state, req->data_is_ptr, req->addr, + req->data, req->count, req->size); + return NULL; + } + + xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */ + + req->state = STATE_IOREQ_INPROCESS; + return req; +} + +/* use poll to get the port notification */ +/* ioreq_vec--out,the */ +/* retval--the number of ioreq packet */ +static ioreq_t *cpu_get_ioreq(XenIOState *state) +{ + int i; + evtchn_port_t port; + + port = xc_evtchn_pending(state->xce_handle); + if (port != -1) { + for (i = 0; i < smp_cpus; i++) { + if (state->ioreq_local_port[i] == port) { + break; + } + } + + if (i == smp_cpus) { + hw_error("Fatal error while trying to get io event!\n"); + } + + /* unmask the wanted port again */ + xc_evtchn_unmask(state->xce_handle, port); + + /* get the io packet from shared memory */ + state->send_vcpu = i; + return cpu_get_ioreq_from_shared_memory(state, i); + } + + /* read error or read nothing */ + return NULL; +} + +static uint32_t do_inp(pio_addr_t addr, unsigned long size) +{ + switch (size) { + case 1: + return cpu_inb(addr); + case 2: + return cpu_inw(addr); + case 4: + return cpu_inl(addr); + default: + hw_error("inp: bad size: %04"FMT_pioaddr" %lx", addr, size); + } +} + +static void do_outp(pio_addr_t addr, + unsigned long size, uint32_t val) +{ + switch (size) { + case 1: + return cpu_outb(addr, val); + case 2: + return cpu_outw(addr, val); + case 4: + return cpu_outl(addr, val); + default: + hw_error("outp: bad size: %04"FMT_pioaddr" %lx", addr, size); + } +} + +static void cpu_ioreq_pio(ioreq_t *req) +{ + int i, sign; + + sign = req->df ? -1 : 1; + + if (req->dir == IOREQ_READ) { + if (!req->data_is_ptr) { + req->data = do_inp(req->addr, req->size); + } else { + uint32_t tmp; + + for (i = 0; i < req->count; i++) { + tmp = do_inp(req->addr, req->size); + cpu_physical_memory_write(req->data + (sign * i * req->size), + (uint8_t *) &tmp, req->size); + } + } + } else if (req->dir == IOREQ_WRITE) { + if (!req->data_is_ptr) { + do_outp(req->addr, req->size, req->data); + } else { + for (i = 0; i < req->count; i++) { + uint32_t tmp = 0; + + cpu_physical_memory_read(req->data + (sign * i * req->size), + (uint8_t*) &tmp, req->size); + do_outp(req->addr, req->size, tmp); + } + } + } +} + +static void cpu_ioreq_move(ioreq_t *req) +{ + int i, sign; + + sign = req->df ? 
-1 : 1; + + if (!req->data_is_ptr) { + if (req->dir == IOREQ_READ) { + for (i = 0; i < req->count; i++) { + cpu_physical_memory_read(req->addr + (sign * i * req->size), + (uint8_t *) &req->data, req->size); + } + } else if (req->dir == IOREQ_WRITE) { + for (i = 0; i < req->count; i++) { + cpu_physical_memory_write(req->addr + (sign * i * req->size), + (uint8_t *) &req->data, req->size); + } + } + } else { + target_ulong tmp; + + if (req->dir == IOREQ_READ) { + for (i = 0; i < req->count; i++) { + cpu_physical_memory_read(req->addr + (sign * i * req->size), + (uint8_t*) &tmp, req->size); + cpu_physical_memory_write(req->data + (sign * i * req->size), + (uint8_t*) &tmp, req->size); + } + } else if (req->dir == IOREQ_WRITE) { + for (i = 0; i < req->count; i++) { + cpu_physical_memory_read(req->data + (sign * i * req->size), + (uint8_t*) &tmp, req->size); + cpu_physical_memory_write(req->addr + (sign * i * req->size), + (uint8_t*) &tmp, req->size); + } + } + } +} + +static void handle_ioreq(ioreq_t *req) +{ + if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) && + (req->size < sizeof (target_ulong))) { + req->data &= ((target_ulong) 1 << (8 * req->size)) - 1; + } + + switch (req->type) { + case IOREQ_TYPE_PIO: + cpu_ioreq_pio(req); + break; + case IOREQ_TYPE_COPY: + cpu_ioreq_move(req); + break; + case IOREQ_TYPE_TIMEOFFSET: + break; + case IOREQ_TYPE_INVALIDATE: + qemu_invalidate_map_cache(); + break; + default: + hw_error("Invalid ioreq type 0x%x\n", req->type); + } +} + +static void handle_buffered_iopage(XenIOState *state) +{ + buf_ioreq_t *buf_req = NULL; + ioreq_t req; + int qw; + + if (!state->buffered_io_page) { + return; + } + + while (state->buffered_io_page->read_pointer != state->buffered_io_page->write_pointer) { + buf_req = &state->buffered_io_page->buf_ioreq[ + state->buffered_io_page->read_pointer % IOREQ_BUFFER_SLOT_NUM]; + req.size = 1UL << buf_req->size; + req.count = 1; + req.addr = buf_req->addr; + req.data = buf_req->data; + req.state = STATE_IOREQ_READY; + req.dir = buf_req->dir; + req.df = 1; + req.type = buf_req->type; + req.data_is_ptr = 0; + qw = (req.size == 8); + if (qw) { + buf_req = &state->buffered_io_page->buf_ioreq[ + (state->buffered_io_page->read_pointer + 1) % IOREQ_BUFFER_SLOT_NUM]; + req.data |= ((uint64_t)buf_req->data) << 32; + } + + handle_ioreq(&req); + + xen_mb(); + state->buffered_io_page->read_pointer += qw ? 2 : 1; + } +} + +static void handle_buffered_io(void *opaque) +{ + XenIOState *state = opaque; + + handle_buffered_iopage(state); + qemu_mod_timer(state->buffered_io_timer, + BUFFER_IO_MAX_DELAY + qemu_get_clock(rt_clock)); +} + +static void cpu_handle_ioreq(void *opaque) +{ + XenIOState *state = opaque; + ioreq_t *req = cpu_get_ioreq(state); + + handle_buffered_iopage(state); + if (req) { + handle_ioreq(req); + + if (req->state != STATE_IOREQ_INPROCESS) { + fprintf(stderr, "Badness in I/O request ... not in service?!: " + "%x, ptr: %x, port: %"PRIx64", " + "data: %"PRIx64", count: %" FMT_ioreq_size ", size: %" FMT_ioreq_size "\n", + req->state, req->data_is_ptr, req->addr, + req->data, req->count, req->size); + destroy_hvm_domain(); + return; + } + + xen_wmb(); /* Update ioreq contents /then/ update state. */ + + /* + * We do this before we send the response so that the tools + * have the opportunity to pick up on the reset before the + * guest resumes and does a hlt with interrupts disabled which + * causes Xen to powerdown the domain. 
+ */ + if (vm_running) { + if (qemu_shutdown_requested_get()) { + destroy_hvm_domain(); + } + if (qemu_reset_requested_get()) { + qemu_system_reset(); + } + } + + req->state = STATE_IORESP_READY; + xc_evtchn_notify(state->xce_handle, state->ioreq_local_port[state->send_vcpu]); + } +} + +static void xen_main_loop_prepare(XenIOState *state) +{ + int evtchn_fd = state->xce_handle == -1 ? -1 : xc_evtchn_fd(state->xce_handle); + + state->buffered_io_timer = qemu_new_timer(rt_clock, handle_buffered_io, + state); + qemu_mod_timer(state->buffered_io_timer, qemu_get_clock(rt_clock)); + + if (evtchn_fd != -1) { + qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state); + } +} + + /* Initialise Xen */ +static void xen_vm_change_state_handler(void *opaque, int running, int reason) +{ + XenIOState *state = opaque; + if (running) { + xen_main_loop_prepare(state); + } +} + int xen_init(int smp_cpus) { + int i, rc; + unsigned long ioreq_pfn; + XenIOState *state; + xen_xc = xc_interface_open(NULL, NULL, 0); if (xen_xc == XC_HANDLER_INITIAL_VALUE) { xen_be_printf(NULL, 0, "can''t open xen interface\n"); return -1; } + state = qemu_mallocz(sizeof (XenIOState)); + + state->xce_handle = xc_evtchn_open(); + if (state->xce_handle == -1) { + perror("open"); + return -errno; + } + + xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn); + DPRINTF("shared page at pfn %lx\n", ioreq_pfn); + state->shared_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE, + PROT_READ|PROT_WRITE, ioreq_pfn); + if (state->shared_page == NULL) { + hw_error("map shared IO page returned error %d handle=" XC_INTERFACE_FMT, + errno, xen_xc); + } + + xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &ioreq_pfn); + DPRINTF("buffered io page at pfn %lx\n", ioreq_pfn); + state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE, + PROT_READ|PROT_WRITE, ioreq_pfn); + if (state->buffered_io_page == NULL) { + hw_error("map buffered IO page returned error %d", errno); + } + + state->ioreq_local_port = qemu_mallocz(smp_cpus * sizeof (evtchn_port_t)); + + /* FIXME: how about if we overflow the page here? */ + for (i = 0; i < smp_cpus; i++) { + rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid, + xen_vcpu_eport(state->shared_page, i)); + if (rc == -1) { + fprintf(stderr, "bind interdomain ioctl error %d\n", errno); + return -1; + } + state->ioreq_local_port[i] = rc; + } + /* Init RAM management */ qemu_map_cache_init(); xen_ram_init(ram_size); + qemu_add_vm_change_state_handler(xen_vm_change_state_handler, state); + return 0; } + +void destroy_hvm_domain(void) +{ + qemu_xc_interface xc_handle; + int sts; + + xc_handle = xc_interface_open(NULL, NULL, 0); + if (xc_handle == XC_HANDLER_INITIAL_VALUE) { + fprintf(stderr, "Cannot acquire xenctrl handle\n"); + } else { + sts = xc_domain_shutdown(xc_handle, xen_domid, SHUTDOWN_poweroff); + if (sts != 0) { + fprintf(stderr, "? xc_domain_shutdown failed to issue poweroff, " + "sts %d, %s\n", sts, strerror(errno)); + } else { + fprintf(stderr, "Issued domain %d poweroff\n", xen_domid); + } + xc_interface_close(xc_handle); + } +} -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
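As a side note, the single-consumer ring walk in handle_buffered_iopage() above boils down to the following pattern (a sketch; consume_slot() is a hypothetical per-slot handler):

    static void drain_buffered_ring(buffered_iopage_t *page)
    {
        while (page->read_pointer != page->write_pointer) {
            buf_ioreq_t *slot =
                &page->buf_ioreq[page->read_pointer % IOREQ_BUFFER_SLOT_NUM];

            consume_slot(slot);  /* hypothetical; 8-byte requests span
                                    two consecutive slots, see above */
            xen_mb();            /* handle the slot /then/ publish */
            page->read_pointer++;
        }
    }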
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 13/14] xen: Set running state in xenstore.
From: Anthony PERARD <anthony.perard@citrix.com>

This tells the Xen management tool that the machine can begin to run.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen-all.c |   19 +++++++++++++++++++
 1 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index c33773a..d69ad16 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -423,6 +423,22 @@ static void cpu_handle_ioreq(void *opaque)
     }
 }

+static void xenstore_record_dm_state(const char *state)
+{
+    char *path = NULL;
+    struct xs_handle *xenstore = xs_daemon_open();
+
+    if (asprintf(&path, "/local/domain/0/device-model/%u/state", xen_domid) == -1) {
+        fprintf(stderr, "out of memory recording dm state\n");
+        exit(1);
+    }
+    if (!xs_write(xenstore, XBT_NULL, path, state, strlen(state))) {
+        fprintf(stderr, "error recording dm state\n");
+        exit(1);
+    }
+    free(path);
+}
+
 static void xen_main_loop_prepare(XenIOState *state)
 {
     int evtchn_fd = state->xce_handle == -1 ? -1 : xc_evtchn_fd(state->xce_handle);
@@ -434,6 +450,9 @@ static void xen_main_loop_prepare(XenIOState *state)
     if (evtchn_fd != -1) {
         qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
     }
+
+    /* record state running */
+    xenstore_record_dm_state("running");
 }
-- 
1.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
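For reference, the toolstack side can read the key back through the same libxenstore interface the patch writes it with (a sketch; error handling is minimal, and the <xs.h> header name is an assumption based on the Xen releases this series targets):

    #include <stdio.h>
    #include <xs.h>

    /* Returns the device model state ("running", ...) for a domain;
     * the caller must free() the result.  NULL on error. */
    static char *read_dm_state(unsigned int domid)
    {
        struct xs_handle *xsh = xs_daemon_open();
        char path[64];
        unsigned int len;
        char *val;

        if (!xsh) {
            return NULL;
        }
        snprintf(path, sizeof(path),
                 "/local/domain/0/device-model/%u/state", domid);
        val = xs_read(xsh, XBT_NULL, path, &len);
        xs_daemon_close(xsh);
        return val;
    }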
anthony.perard@citrix.com
2010-Sep-28 15:01 UTC
[Xen-devel] [PATCH RFC V4 14/14] xen: Add a Xen specific ACPI Implementation to target-xen
From: Anthony PERARD <anthony.perard@citrix.com> Xen currently uses a different BIOS (hvmloader + rombios) therefore the Qemu acpi_piix4 implementation wouldn''t work correctly with Xen. We plan on fixing this properly but at the moment we are just adding a new Xen specific acpi_piix4 implementation. This patch is optional; without it the VM boots but it cannot shutdown properly or go to S3. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> --- Makefile.target | 1 + hw/xen_acpi_piix4.c | 411 +++++++++++++++++++++++++++++++++++++++++++++++++++ hw/xen_common.h | 3 + hw/xen_machine_fv.c | 6 +- 4 files changed, 416 insertions(+), 5 deletions(-) create mode 100644 hw/xen_acpi_piix4.c diff --git a/Makefile.target b/Makefile.target index fddce71..c796566 100644 --- a/Makefile.target +++ b/Makefile.target @@ -194,6 +194,7 @@ obj-$(CONFIG_NO_XEN_MAPCACHE) += xen-mapcache-stub.o # xen full virtualized machine obj-i386-$(CONFIG_XEN) += xen_machine_fv.o obj-i386-$(CONFIG_XEN) += xen_platform.o +obj-i386-$(CONFIG_XEN) += xen_acpi_piix4.o # USB layer obj-$(CONFIG_USB_OHCI) += usb-ohci.o diff --git a/hw/xen_acpi_piix4.c b/hw/xen_acpi_piix4.c new file mode 100644 index 0000000..03f0629 --- /dev/null +++ b/hw/xen_acpi_piix4.c @@ -0,0 +1,411 @@ + /* + * PIIX4 ACPI controller emulation + * + * Winston liwen Wang, winston.l.wang@intel.com + * Copyright (c) 2006 , Intel Corporation. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ + +#include "hw.h" +#include "pc.h" +#include "pci.h" +#include "sysemu.h" +#include "acpi.h" + +#include "xen_backend.h" +#include "xen_common.h" +#include "qemu-log.h" + +#include <xen/hvm/ioreq.h> +#include <xen/hvm/params.h> + +#define PIIX4ACPI_LOG_ERROR 0 +#define PIIX4ACPI_LOG_INFO 1 +#define PIIX4ACPI_LOG_DEBUG 2 +#define PIIX4ACPI_LOGLEVEL PIIX4ACPI_LOG_INFO +#define PIIX4ACPI_LOG(level, fmt, ...) do { \ + if (level <= PIIX4ACPI_LOGLEVEL) qemu_log(fmt, ## __VA_ARGS__); \ +} while (0) + +/* Sleep state type codes as defined by the \_Sx objects in the DSDT. 
*/ +/* These must be kept in sync with the DSDT (hvmloader/acpi/dsdt.asl) */ +#define SLP_TYP_S4 (6 << 10) +#define SLP_TYP_S3 (5 << 10) +#define SLP_TYP_S5 (7 << 10) + +#define ACPI_DBG_IO_ADDR 0xb044 +#define ACPI_PHP_IO_ADDR 0x10c0 + +#define PHP_EVT_ADD 0x0 +#define PHP_EVT_REMOVE 0x3 + +/* The bit in GPE0_STS/EN to notify the pci hotplug event */ +#define ACPI_PHP_GPE_BIT 3 + +#define DEVFN_TO_PHP_SLOT_REG(devfn) (devfn >> 1) +#define PHP_SLOT_REG_TO_DEVFN(reg, hilo) ((reg << 1) | hilo) + +/* ioport to monitor cpu add/remove status */ +#define PROC_BASE 0xaf00 + +typedef struct GPEState { + /* GPE0 block */ + uint8_t gpe0_sts[ACPI_GPE0_BLK_LEN / 2]; + uint8_t gpe0_en[ACPI_GPE0_BLK_LEN / 2]; + + /* CPU bitmap */ + uint8_t cpus_sts[32]; + + /* SCI IRQ level */ + uint8_t sci_asserted; + + qemu_irq sci_irq; +} GPEState; + +typedef struct PCIACPIState { + PCIDevice dev; + uint16_t pm1_control; /* pm1a_ECNT_BLK */ + qemu_irq irq; + qemu_irq cmos_s3; + + GPEState gpe_state; +} PCIACPIState; + + +static const VMStateDescription vmstate_acpi = { + .name = "PIIX4 ACPI", + .version_id = 1, + .fields = (VMStateField []) { + VMSTATE_PCI_DEVICE(dev, PCIACPIState), + VMSTATE_UINT16(pm1_control, PCIACPIState), + VMSTATE_END_OF_LIST() + } +}; + +static void acpi_pm1_control_writeb(void *opaque, uint32_t addr, uint32_t val) +{ + PCIACPIState *s = opaque; + s->pm1_control = (s->pm1_control & 0xff00) | (val & 0xff); +} + +static uint32_t acpi_pm1_control_readb(void *opaque, uint32_t addr) +{ + PCIACPIState *s = opaque; + /* Mask out the write-only bits */ + return (uint8_t)(s->pm1_control & + ~(ACPI_BITMASK_GLOBAL_LOCK_RELEASE | ACPI_BITMASK_SLEEP_ENABLE)); +} + +static void acpi_shutdown(PCIACPIState *s, uint32_t val) +{ + if (!(val & ACPI_BITMASK_SLEEP_ENABLE)) { + return; + } + + switch (val & ACPI_BITMASK_SLEEP_TYPE) { + case SLP_TYP_S3: + qemu_system_reset(); + qemu_irq_raise(s->cmos_s3); + xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 3); + break; + case SLP_TYP_S4: + case SLP_TYP_S5: + qemu_system_shutdown_request(); + break; + default: + break; + } +} + +static void acpi_pm1_control_p1_writeb(void *opaque, uint32_t addr, uint32_t val) +{ + PCIACPIState *s = opaque; + + val <<= 8; + s->pm1_control = ((s->pm1_control & 0xff) | val) & ~ACPI_BITMASK_SLEEP_ENABLE; + + acpi_shutdown(s, val); +} + +static uint32_t acpi_pm1_control_p1_readb(void *opaque, uint32_t addr) +{ + PCIACPIState *s = opaque; + /* Mask out the write-only bits */ + return (uint8_t)((s->pm1_control & ~(ACPI_BITMASK_GLOBAL_LOCK_RELEASE | ACPI_BITMASK_SLEEP_ENABLE)) >> 8); +} + +static void acpi_pm1_control_writew(void *opaque, uint32_t addr, uint32_t val) +{ + PCIACPIState *s = opaque; + + s->pm1_control = val & ~ACPI_BITMASK_SLEEP_ENABLE; + + acpi_shutdown(s, val); +} + +static uint32_t acpi_pm1_control_readw(void *opaque, uint32_t addr) +{ + PCIACPIState *s = opaque; + /* Mask out the write-only bits */ + return s->pm1_control & ~(ACPI_BITMASK_GLOBAL_LOCK_RELEASE | ACPI_BITMASK_SLEEP_ENABLE); +} + +static void acpi_map(PCIDevice *pci_dev, int region_num, + uint32_t addr, uint32_t size, int type) +{ + PCIACPIState *d = (PCIACPIState *)pci_dev; + + /* Byte access */ + register_ioport_write(addr + 4, 1, 1, acpi_pm1_control_writeb, d); + register_ioport_read(addr + 4, 1, 1, acpi_pm1_control_readb, d); + register_ioport_write(addr + 4 + 1, 1, 1, acpi_pm1_control_p1_writeb, d); + register_ioport_read(addr + 4 + 1, 1, 1, acpi_pm1_control_p1_readb, d); + + /* Word access */ + register_ioport_write(addr + 4, 2, 2, 
acpi_pm1_control_writew, d); + register_ioport_read(addr + 4, 2, 2, acpi_pm1_control_readw, d); +} + +static inline int test_bit(uint8_t *map, int bit) +{ + return map[bit / 8] & (1 << (bit % 8)); +} + +static inline void set_bit(uint8_t *map, int bit) +{ + map[bit / 8] |= (1 << (bit % 8)); +} + +static inline void clear_bit(uint8_t *map, int bit) +{ + map[bit / 8] &= ~(1 << (bit % 8)); +} + +static void acpi_dbg_writel(void *opaque, uint32_t addr, uint32_t val) +{ + PIIX4ACPI_LOG(PIIX4ACPI_LOG_DEBUG, "ACPI: DBG: 0x%08x\n", val); + PIIX4ACPI_LOG(PIIX4ACPI_LOG_INFO, "ACPI:debug: write addr=0x%x, val=0x%x.\n", addr, val); +} + +/* GPEx_STS occupy 1st half of the block, while GPEx_EN 2nd half */ +static uint32_t gpe_sts_read(void *opaque, uint32_t addr) +{ + GPEState *s = opaque; + + return s->gpe0_sts[addr - ACPI_GPE0_BLK_ADDRESS]; +} + +/* write 1 to clear specific GPE bits */ +static void gpe_sts_write(void *opaque, uint32_t addr, uint32_t val) +{ + GPEState *s = opaque; + int hotplugged = 0; + + PIIX4ACPI_LOG(PIIX4ACPI_LOG_DEBUG, "gpe_sts_write: addr=0x%x, val=0x%x.\n", addr, val); + + hotplugged = test_bit(&s->gpe0_sts[0], ACPI_PHP_GPE_BIT); + s->gpe0_sts[addr - ACPI_GPE0_BLK_ADDRESS] &= ~val; + if ( s->sci_asserted && + hotplugged && + !test_bit(&s->gpe0_sts[0], ACPI_PHP_GPE_BIT)) { + PIIX4ACPI_LOG(PIIX4ACPI_LOG_INFO, "Clear the GPE0_STS bit for ACPI hotplug & deassert the IRQ.\n"); + qemu_irq_lower(s->sci_irq); + } +} + +static uint32_t gpe_en_read(void *opaque, uint32_t addr) +{ + GPEState *s = opaque; + + return s->gpe0_en[addr - (ACPI_GPE0_BLK_ADDRESS + ACPI_GPE0_BLK_LEN / 2)]; +} + +/* write 0 to clear en bit */ +static void gpe_en_write(void *opaque, uint32_t addr, uint32_t val) +{ + GPEState *s = opaque; + int reg_count; + + PIIX4ACPI_LOG(PIIX4ACPI_LOG_DEBUG, "gpe_en_write: addr=0x%x, val=0x%x.\n", addr, val); + reg_count = addr - (ACPI_GPE0_BLK_ADDRESS + ACPI_GPE0_BLK_LEN / 2); + s->gpe0_en[reg_count] = val; + /* If disable GPE bit right after generating SCI on it, + * need deassert the intr to avoid redundant intrs + */ + if ( s->sci_asserted && + reg_count == (ACPI_PHP_GPE_BIT / 8) && + !(val & (1 << (ACPI_PHP_GPE_BIT % 8))) ) { + PIIX4ACPI_LOG(PIIX4ACPI_LOG_INFO, "deassert due to disable GPE bit.\n"); + s->sci_asserted = 0; + qemu_irq_lower(s->sci_irq); + } +} + +static const VMStateDescription vmstate_gpe = { + .name = "gpe", + .version_id = 2, + .minimum_version_id = 2, + .minimum_version_id_old = 2, + .fields = (VMStateField []) { + VMSTATE_BUFFER(gpe0_sts, GPEState), + VMSTATE_BUFFER(gpe0_en, GPEState), + VMSTATE_UINT8(sci_asserted, GPEState), + VMSTATE_END_OF_LIST() + } +}; + +static uint32_t gpe_cpus_readb(void *opaque, uint32_t addr) +{ + uint32_t val = 0; + GPEState *g = opaque; + + switch (addr) { + case PROC_BASE ... PROC_BASE + 31: + val = g->cpus_sts[addr - PROC_BASE]; + break; + default: + break; + } + + return val; +} + +static void gpe_cpus_writeb(void *opaque, uint32_t addr, uint32_t val) +{ + /* GPEState *g = opaque; */ + + switch (addr) { + case PROC_BASE ... 
PROC_BASE + 31: + /* don''t allow to change cpus_sts from inside a guest */ + break; + default: + break; + } +} + +static void gpe_acpi_init(PCIACPIState *acpi_state) +{ + GPEState *s = &acpi_state->gpe_state; + memset(s, 0, sizeof (GPEState)); + + s->cpus_sts[0] = 1; + + register_ioport_read(PROC_BASE, 32, 1, gpe_cpus_readb, s); + register_ioport_write(PROC_BASE, 32, 1, gpe_cpus_writeb, s); + + register_ioport_read(ACPI_GPE0_BLK_ADDRESS, + ACPI_GPE0_BLK_LEN / 2, + 1, + gpe_sts_read, + s); + register_ioport_read(ACPI_GPE0_BLK_ADDRESS + ACPI_GPE0_BLK_LEN / 2, + ACPI_GPE0_BLK_LEN / 2, + 1, + gpe_en_read, + s); + + register_ioport_write(ACPI_GPE0_BLK_ADDRESS, + ACPI_GPE0_BLK_LEN / 2, + 1, + gpe_sts_write, + s); + register_ioport_write(ACPI_GPE0_BLK_ADDRESS + ACPI_GPE0_BLK_LEN / 2, + ACPI_GPE0_BLK_LEN / 2, + 1, + gpe_en_write, + s); + + vmstate_register(NULL, 0, &vmstate_gpe, s); +} + +static void piix4_pm_xen_reset(void *opaque) +{ + PCIACPIState *s = opaque; + + s->pm1_control = ACPI_BITMASK_SCI_ENABLE; +} +static int piix4_pm_xen_initfn(PCIDevice *dev) +{ + PCIACPIState *s = DO_UPCAST(PCIACPIState, dev, dev); + uint8_t *pci_conf; + + pci_conf = s->dev.config; + pci_config_set_vendor_id(pci_conf, PCI_VENDOR_ID_INTEL); + pci_config_set_device_id(pci_conf, PCI_DEVICE_ID_INTEL_82371AB_3); + pci_conf[0x08] = 0x01; /* B0 stepping */ + pci_conf[0x09] = 0x00; /* base class */ + pci_config_set_class(pci_conf, PCI_CLASS_BRIDGE_OTHER); + pci_conf[PCI_HEADER_TYPE] = PCI_HEADER_TYPE_NORMAL; /* header_type */ + pci_conf[0x3d] = 0x01; /* Hardwired to PIRQA is used */ + + /* PMBA POWER MANAGEMENT BASE ADDRESS, hardcoded to 0x1f40 + * to make shutdown work for IPF, due to IPF Guest Firmware + * will enumerate pci devices. + * + * TODO: if Guest Firmware or Guest OS will change this PMBA, + * More logic will be added. 
+ */ + pci_conf[0x40] = 0x41; /* Special device-specific BAR at 0x40 */ + pci_conf[0x41] = 0x1f; + pci_conf[0x42] = 0x00; + pci_conf[0x43] = 0x00; + + qemu_register_reset(piix4_pm_xen_reset, s); + + acpi_map((PCIDevice *)s, 0, 0x1f40, 0x10, PCI_BASE_ADDRESS_SPACE_IO); + + gpe_acpi_init(s); + + register_ioport_write(ACPI_DBG_IO_ADDR, 4, 4, acpi_dbg_writel, s); + + return 0; +} + +void piix4_pm_xen_init(PCIBus *bus, int devfn, qemu_irq sci_irq_spec, qemu_irq cmos_s3) +{ + PCIDevice *dev; + PCIACPIState *s; + + dev = pci_create(bus, devfn, "PIIX4 ACPI"); + + s = DO_UPCAST(PCIACPIState, dev, dev); + + s->irq = sci_irq_spec; + s->gpe_state.sci_irq = sci_irq_spec; + + s->cmos_s3 = cmos_s3; + + qdev_init_nofail(&dev->qdev); +} + +static PCIDeviceInfo piix4_pm_xen_info = { + .qdev.name = "PIIX4 ACPI", + .qdev.desc = "dm", + .qdev.size = sizeof(PCIACPIState), + .qdev.vmsd = &vmstate_acpi, + .init = piix4_pm_xen_initfn, +}; + +static void piix4_pm_xen_register(void) +{ + pci_qdev_register(&piix4_pm_xen_info); +} + +device_init(piix4_pm_xen_register); diff --git a/hw/xen_common.h b/hw/xen_common.h index b3d1fe3..ca7a76c 100644 --- a/hw/xen_common.h +++ b/hw/xen_common.h @@ -51,4 +51,7 @@ typedef xc_interface *qemu_xc_interface; qemu_irq *i8259_xen_init(void); void destroy_hvm_domain(void); +/* hw/xen_acpi_piix4.c */ +void piix4_pm_xen_init(PCIBus *bus, int devfn, qemu_irq sci_irq_spec, qemu_irq cmos_s3); + #endif /* QEMU_HW_XEN_COMMON_H */ diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c index fe4491f..30bcc47 100644 --- a/hw/xen_machine_fv.c +++ b/hw/xen_machine_fv.c @@ -55,7 +55,6 @@ static void xen_init_fv(ram_addr_t ram_size, qemu_irq *isa_irq; qemu_irq *i8259; qemu_irq *cmos_s3; - qemu_irq *smi_irq; IsaIrqState *isa_irq_state; DriveInfo *hd[MAX_IDE_BUS * MAX_IDE_DEVS]; FDCtrl *floppy_controller; @@ -132,10 +131,7 @@ static void xen_init_fv(ram_addr_t ram_size, if (acpi_enabled) { cmos_s3 = qemu_allocate_irqs(pc_cmos_set_s3_resume, rtc_state, 1); - smi_irq = qemu_allocate_irqs(pc_acpi_smi_interrupt, first_cpu, 1); - piix4_pm_init(pci_bus, piix3_devfn + 3, 0xb100, - isa_reserve_irq(9), *cmos_s3, *smi_irq, - 0); + piix4_pm_xen_init(pci_bus, piix3_devfn + 3, isa_reserve_irq(9), *cmos_s3); } if (i440fx_state) { -- 1.6.5 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
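To see how the sleep-state path above is reached, a guest-side sketch (hypothetical, x86 port I/O only): with PMBA hardcoded to 0x1f40 by piix4_pm_xen_initfn(), PM1a_CNT sits at 0x1f44, and a word write of SLP_TYP_S5 with the sleep-enable bit set reaches acpi_pm1_control_writew() and then acpi_shutdown():

    #include <stdint.h>

    #define PM1A_CNT    0x1f44      /* PMBA (0x1f40) + 4, see acpi_map() */
    #define SLP_EN      (1 << 13)   /* ACPI_BITMASK_SLEEP_ENABLE */
    #define SLP_TYP_S5  (7 << 10)   /* matches the DSDT \_S5 object */

    /* Hypothetical guest code: request an ACPI S5 poweroff, which the
     * device model turns into qemu_system_shutdown_request(). */
    static inline void guest_poweroff(void)
    {
        uint16_t val = SLP_TYP_S5 | SLP_EN;

        __asm__ __volatile__("outw %0, %1"
                             : : "a"(val), "Nd"((uint16_t)PM1A_CNT));
    }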
Anthony Liguori
2010-Sep-28 15:14 UTC
[Xen-devel] Re: [PATCH RFC V4 10/14] Introduce qemu_ram_ptr_unlock.
On 09/28/2010 10:01 AM, anthony.perard@citrix.com wrote:
> From: Anthony PERARD <anthony.perard@citrix.com>
>
> This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
> a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
> used with Xen.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Why isn't hooking cpu_physical_memory_{map,unmap}() enough? That's
really the intention of the API.

You only really care about guest RAM, not device memory, correct?

Regards,

Anthony Liguori
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Stefano Stabellini
2010-Sep-28 15:25 UTC
[Xen-devel] Re: [PATCH RFC V4 10/14] Introduce qemu_ram_ptr_unlock.
On Tue, 28 Sep 2010, Anthony Liguori wrote:
> On 09/28/2010 10:01 AM, anthony.perard@citrix.com wrote:
> > This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
> > a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
> > used with Xen.
>
> Why isn't hooking cpu_physical_memory_{map,unmap}() enough? That's
> really the intention of the API.
>
> You only really care about guest RAM, not device memory, correct?

Yes; however, at the moment all the callers of qemu_get_ram_ptr expect
the mapping in the QEMU address space to remain valid for an unlimited
amount of time. While we can support that, because the mapcache now
allows a mapping to be "locked", it would be nice if an explicit
qemu_ram_ptr_unlock were provided. It is not required for Xen support,
though.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Anthony Liguori
2010-Sep-28 16:01 UTC
[Xen-devel] Re: [PATCH RFC V4 10/14] Introduce qemu_ram_ptr_unlock.
On 09/28/2010 10:25 AM, Stefano Stabellini wrote:
> On Tue, 28 Sep 2010, Anthony Liguori wrote:
>> Why isn't hooking cpu_physical_memory_{map,unmap}() enough? That's
>> really the intention of the API.
>>
>> You only really care about guest RAM, not device memory, correct?
>
> Yes; however, at the moment all the callers of qemu_get_ram_ptr expect
> the mapping in the QEMU address space to remain valid for an unlimited
> amount of time.

Yes, but qemu_get_ram_ptr() is not a general purpose API. It really
should only have one use: within exec.c, to implement the
cpu_physical_memory_* functions. There are a few uses in hw/* but
they're all wrong and should be removed. Fortunately, for the purposes
of the Xen machine, almost none of them actually matter.

What I'm thinking is that RAM in Xen should not be backed by a RAMBlock
at all. Instead, the cpu_physical_memory_* functions should call an
explicit map/unmap() function that can be implemented as
qemu_get_ram_ptr() plus a no-op in the TCG/KVM case, and as explicit map
cache operations in the Xen case.

Regards,

Anthony Liguori

> While we can support that, because the mapcache now allows a mapping
> to be "locked", it would be nice if an explicit qemu_ram_ptr_unlock
> were provided. It is not required for Xen support, though.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
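For reference, a sketch of the bounded-lifetime API being advocated here, using the cpu_physical_memory_{map,unmap}() signatures of this era (the DMA-style caller is hypothetical):

    #include <string.h>
    #include "cpu-common.h"

    /* Map a guest-physical range, access it, release the mapping; under
     * Xen this pairing could translate directly into mapcache
     * lock/unlock operations. */
    static void dma_write(target_phys_addr_t addr, const uint8_t *data,
                          target_phys_addr_t size)
    {
        target_phys_addr_t len = size;
        void *buf = cpu_physical_memory_map(addr, &len, 1 /* is_write */);

        if (buf) {
            memcpy(buf, data, len);                       /* bounded access */
            cpu_physical_memory_unmap(buf, len, 1, len);  /* release */
        }
    }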
Stefano Stabellini
2010-Sep-28 18:04 UTC
[Xen-devel] Re: [PATCH RFC V4 10/14] Introduce qemu_ram_ptr_unlock.
On Tue, 28 Sep 2010, Anthony Liguori wrote:
> Yes, but qemu_get_ram_ptr() is not a general purpose API. It really
> should only have one use: within exec.c, to implement the
> cpu_physical_memory_* functions. There are a few uses in hw/* but
> they're all wrong and should be removed. Fortunately, for the purposes
> of the Xen machine, almost none of them actually matter.

If this is the case, it is even better :)
Can we replace the call to qemu_get_ram_ptr with cpu_physical_memory_map
in the vga code?

> What I'm thinking is that RAM in Xen should not be backed by a RAMBlock
> at all. Instead, the cpu_physical_memory_* functions should call an
> explicit map/unmap() function that can be implemented as
> qemu_get_ram_ptr() plus a no-op in the TCG/KVM case, and as explicit map
> cache operations in the Xen case.

Yes, we basically followed a very similar line of thought: in the
current implementation we have just one RAMBlock as a placeholder for
the guest's RAM; then we have three hooks, in qemu_ram_alloc_from_ptr,
qemu_get_ram_ptr and qemu_ram_free, for Xen-specific ways to allocate,
map and free memory, but we reuse everything else.

Take cpu_physical_memory_map for example: we completely reuse the
generic implementation, which ends up calling either qemu_get_ram_ptr
or cpu_physical_memory_rw.

In the case of qemu_get_ram_ptr, we still reuse the generic code, but
we have a Xen-specific hook to call the mapcache.

In the case of cpu_physical_memory_rw, we didn't need to change
anything to do the mapping, because it is implemented using
qemu_get_ram_ptr (see above); we just added a call to
qemu_ram_ptr_unlock to unlock the mapping at the end of the function (a
no-op for TCG/KVM).

So qemu_get_ram_ptr and qemu_ram_ptr_unlock are basically the explicit
map/unmap() functions you are referring to.

We could probably remove the single RAMBlock we added and provide a
Xen-specific implementation of qemu_ram_alloc_from_ptr and
qemu_ram_free that doesn't iterate over the RAMBlock list, if you think
that is better.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
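Concretely, the qemu_get_ram_ptr hook described above has roughly this shape (a sketch; the RAMBlock lookup is simplified, and the field names are from the RAMBlock of this era):

    /* Sketch: the Xen path maps on demand through the mapcache and
     * locks the bucket so the pointer stays valid until
     * qemu_ram_ptr_unlock(); TCG/KVM return a direct host pointer. */
    static void *get_ram_ptr_sketch(RAMBlock *block, ram_addr_t addr)
    {
        if (xen_mapcache_enabled()) {
            return qemu_map_cache(addr, 0, 1 /* lock */);
        }
        return block->host + (addr - block->offset);
    }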
Gerd Hoffmann
2010-Sep-29 07:38 UTC
[Xen-devel] Re: [Qemu-devel] Re: [PATCH RFC V4 10/14] Introduce qemu_ram_ptr_unlock.
On 09/28/10 17:14, Anthony Liguori wrote:
> On 09/28/2010 10:01 AM, anthony.perard@citrix.com wrote:
>> This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
>> a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
>> used with Xen.
>
> Why isn't hooking cpu_physical_memory_{map,unmap}() enough? That's
> really the intention of the API.

I think quite a bit of stuff would stop working then, because it
doesn't properly use the map/unmap API ...

cheers,
  Gerd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel