Andres Lagar-Cavilla
2011-Dec-05 15:11 UTC
[PATCH 0 of 5] Mem event interface improvements
In this patch series we add a series of improvements to the memory event
interface. Namely:
- Allow batched consumption of multiple outstanding responses
- Signal Xen to consume responses solely with an event channel kick
- Extra flag to tag events generated by foreign domains
- Allow placing dummy responses in the ring, to keep it in sync if a state
  transition does not require an explicit response.

All these changes are orthogonal to the outstanding proposals for enhancing
ring management.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Signed-off-by: Adin Scannell <adin@scannell.ca>
Signed-off-by: Keir Fraser <keir@xen.org>

 xen/arch/x86/mm/mem_event.c     |  6 +++
 xen/include/public/mem_event.h  |  1 +
 xen/arch/ia64/vmx/vmx_init.c    |  2 +-
 xen/arch/x86/hvm/hvm.c          |  7 ++-
 xen/arch/x86/mm/mem_event.c     |  3 +-
 xen/common/event_channel.c      | 75 +++++++++++++++++++++++++++++++---------
 xen/include/xen/event.h         |  5 ++-
 xen/include/xen/sched.h         |  2 +-
 xen/arch/x86/mm/mem_event.c     | 26 ++++++++++---
 xen/arch/x86/mm/mem_event.c     | 10 ++++-
 xen/arch/x86/mm/mem_sharing.c   | 13 +++---
 xen/arch/x86/mm/p2m.c           | 52 ++++++++++++++-------------
 xen/include/asm-x86/mem_event.h |  2 +-
 xen/arch/x86/mm/mem_sharing.c   |  2 +
 xen/arch/x86/mm/p2m.c           |  4 ++
 xen/include/public/mem_event.h  |  7 +++
 16 files changed, 154 insertions(+), 63 deletions(-)
Andres Lagar-Cavilla
2011-Dec-05 15:12 UTC
[PATCH 1 of 5] Flag events caused by foreign domains
xen/arch/x86/mm/mem_event.c | 6 ++++++
xen/include/public/mem_event.h | 1 +
2 files changed, 7 insertions(+), 0 deletions(-)
Add a new flag for mem events, as consumers might need to discriminate
foreign-domain-caused from guest-caused events. The vcpu field of an
event is bogus, from a consumer's point of view, for foreign-domain-caused
events. Also assert that we shouldn't be pausing foreign vcpus.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
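For a ring consumer, the new flag matters at dispatch time. A minimal sketch
of the check, where the two handler functions are hypothetical stand-ins
rather than part of this series:

/* Hypothetical consumer-side dispatch. Only MEM_EVENT_FLAG_FOREIGN and
 * the mem_event_request_t fields below come from this series. */
static void handle_request(mem_event_request_t *req)
{
    if ( req->flags & MEM_EVENT_FLAG_FOREIGN )
    {
        /* Caused by a foreign domain's mapping attempt: req->vcpu_id is
         * bogus, so don't index per-vcpu state with it, and no guest
         * vcpu is paused waiting on this event. */
        handle_foreign_fault(req->gfn);               /* hypothetical */
        return;
    }

    /* Guest-caused: vcpu_id is valid and the vcpu may be paused. */
    handle_guest_fault(req->vcpu_id, req->gfn);       /* hypothetical */
}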
diff -r 8ad7af68e017 -r 2a4ec2e2ae36 xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -143,6 +143,12 @@ void mem_event_put_request(struct domain
front_ring = &med->front_ring;
req_prod = front_ring->req_prod_pvt;
+ if ( current->domain != d )
+ {
+ req->flags |= MEM_EVENT_FLAG_FOREIGN;
+ ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+ }
+
/* Copy request */
memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
req_prod++;
diff -r 8ad7af68e017 -r 2a4ec2e2ae36 xen/include/public/mem_event.h
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -39,6 +39,7 @@
#define MEM_EVENT_FLAG_VCPU_PAUSED (1 << 0)
#define MEM_EVENT_FLAG_DROP_PAGE (1 << 1)
#define MEM_EVENT_FLAG_EVICT_FAIL (1 << 2)
+#define MEM_EVENT_FLAG_FOREIGN (1 << 3)
/* Reasons for the memory event request */
#define MEM_EVENT_REASON_UNKNOWN 0 /* typical reason */
Andres Lagar-Cavilla
2011-Dec-05 15:12 UTC
[PATCH 2 of 5] Create a generic callback mechanism for Xen-bound event channels
xen/arch/ia64/vmx/vmx_init.c | 2 +-
xen/arch/x86/hvm/hvm.c | 7 ++-
xen/arch/x86/mm/mem_event.c | 3 +-
xen/common/event_channel.c | 75 ++++++++++++++++++++++++++++++++++---------
xen/include/xen/event.h | 5 ++-
xen/include/xen/sched.h | 2 +-
6 files changed, 70 insertions(+), 24 deletions(-)
For event channels for which Xen is the consumer, there currently is
a single action. With this patch, we allow event channel creators to
specify a generic callback (or no callback). Because the expectation
is that there will be few callbacks, they are stored in a small table.
Signed-off-by: Adin Scannell <adin@scannell.ca>
Signed-off-by: Keir Fraser <keir@xen.org>
Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
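As an illustration of the new interface, a sketch of how a Xen-internal
subsystem might register its own notifier; my_notification() and
my_subsys_enable() are hypothetical, while alloc_unbound_xen_event_channel()
and the callback typedef are as introduced below:

/* Hypothetical subsystem using the extended allocator. Passing NULL as
 * the third argument keeps the old behaviour: wake a vcpu blocked in
 * wait_on_xen_event_channel(). */
static void my_notification(struct vcpu *v, unsigned int port)
{
    /* Runs inside Xen whenever the remote domain kicks the channel. */
    gdprintk(XENLOG_DEBUG, "kick on port %u\n", port);
}

static int my_subsys_enable(struct domain *d, domid_t consumer_domid)
{
    int port = alloc_unbound_xen_event_channel(d->vcpu[0], consumer_domid,
                                               my_notification);
    return (port < 0) ? port : 0;
}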
diff -r 2a4ec2e2ae36 -r 4c19931b40d5 xen/arch/ia64/vmx/vmx_init.c
--- a/xen/arch/ia64/vmx/vmx_init.c
+++ b/xen/arch/ia64/vmx/vmx_init.c
@@ -377,7 +377,7 @@ vmx_vcpu_initialise(struct vcpu *v)
{
struct vmx_ioreq_page *iorp = &v->domain->arch.hvm_domain.ioreq;
- int rc = alloc_unbound_xen_event_channel(v, 0);
+ int rc = alloc_unbound_xen_event_channel(v, 0, NULL);
if (rc < 0)
return rc;
v->arch.arch_vmx.xen_port = rc;
diff -r 2a4ec2e2ae36 -r 4c19931b40d5 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -979,7 +979,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
goto fail3;
/* Create ioreq event channel. */
- rc = alloc_unbound_xen_event_channel(v, 0);
+ rc = alloc_unbound_xen_event_channel(v, 0, NULL);
if ( rc < 0 )
goto fail4;
@@ -989,7 +989,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
if ( v->vcpu_id == 0 )
{
/* Create bufioreq event channel. */
- rc = alloc_unbound_xen_event_channel(v, 0);
+ rc = alloc_unbound_xen_event_channel(v, 0, NULL);
if ( rc < 0 )
goto fail2;
v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
@@ -3561,7 +3561,8 @@ long do_hvm_op(unsigned long op, XEN_GUE
for_each_vcpu ( d, v )
{
int old_port, new_port;
- new_port = alloc_unbound_xen_event_channel(v, a.value);
+ new_port = alloc_unbound_xen_event_channel(
+ v, a.value, NULL);
if ( new_port < 0 )
{
rc = new_port;
diff -r 2a4ec2e2ae36 -r 4c19931b40d5 xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -93,7 +93,8 @@ static int mem_event_enable(struct domai
/* Allocate event channel */
rc = alloc_unbound_xen_event_channel(d->vcpu[0],
- current->domain->domain_id);
+ current->domain->domain_id,
+ NULL);
if ( rc < 0 )
goto err;
diff -r 2a4ec2e2ae36 -r 4c19931b40d5 xen/common/event_channel.c
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -57,6 +57,51 @@
goto out; \
} while ( 0 )
+#define consumer_is_xen(e) (!!(e)->xen_consumer)
+
+/*
+ * The function alloc_unbound_xen_event_channel() allows an arbitrary
+ * notifier function to be specified. However, very few unique functions
+ * are specified in practice, so to prevent bloating the evtchn structure
+ * with a pointer, we stash them dynamically in a small lookup array which
+ * can be indexed by a small integer.
+ */
+static xen_event_channel_notification_t xen_consumers[8];
+
+/* Default notification action: wake up from wait_on_xen_event_channel(). */
+static void default_xen_notification_fn(struct vcpu *v, unsigned int port)
+{
+ /* Consumer needs notification only if blocked. */
+ if ( test_and_clear_bit(_VPF_blocked_in_xen, &v->pause_flags) )
+ vcpu_wake(v);
+}
+
+/*
+ * Given a notification function, return the value to stash in
+ * the evtchn->xen_consumer field.
+ */
+static uint8_t get_xen_consumer(xen_event_channel_notification_t fn)
+{
+ unsigned int i;
+
+ if ( fn == NULL )
+ fn = default_xen_notification_fn;
+
+ for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
+ {
+ if ( xen_consumers[i] == NULL )
+ xen_consumers[i] = fn;
+ if ( xen_consumers[i] == fn )
+ break;
+ }
+
+ BUG_ON(i >= ARRAY_SIZE(xen_consumers));
+ return i+1;
+}
+
+/* Get the notification function for a given Xen-bound event channel. */
+#define xen_notification_fn(e) (xen_consumers[(e)->xen_consumer-1])
+
static int evtchn_set_pending(struct vcpu *v, int port);
static int virq_is_global(int virq)
@@ -397,7 +442,7 @@ static long __evtchn_close(struct domain
chn1 = evtchn_from_port(d1, port1);
/* Guest cannot close a Xen-attached event channel. */
- if ( unlikely(chn1->consumer_is_xen) )
+ if ( unlikely(consumer_is_xen(chn1)) )
{
rc = -EINVAL;
goto out;
@@ -537,7 +582,7 @@ int evtchn_send(struct domain *d, unsign
lchn = evtchn_from_port(ld, lport);
/* Guest cannot send via a Xen-attached event channel. */
- if ( unlikely(lchn->consumer_is_xen) )
+ if ( unlikely(consumer_is_xen(lchn)) )
{
spin_unlock(&ld->event_lock);
return -EINVAL;
@@ -554,13 +599,8 @@ int evtchn_send(struct domain *d, unsign
rport = lchn->u.interdomain.remote_port;
rchn = evtchn_from_port(rd, rport);
rvcpu = rd->vcpu[rchn->notify_vcpu_id];
- if ( rchn->consumer_is_xen )
- {
- /* Xen consumers need notification only if they are blocked. */
- if ( test_and_clear_bit(_VPF_blocked_in_xen,
- &rvcpu->pause_flags) )
- vcpu_wake(rvcpu);
- }
+ if ( consumer_is_xen(rchn) )
+ (*xen_notification_fn(rchn))(rvcpu, rport);
else
{
evtchn_set_pending(rvcpu, rport);
@@ -787,7 +827,7 @@ long evtchn_bind_vcpu(unsigned int port,
chn = evtchn_from_port(d, port);
/* Guest cannot re-bind a Xen-attached event channel. */
- if ( unlikely(chn->consumer_is_xen) )
+ if ( unlikely(consumer_is_xen(chn)) )
{
rc = -EINVAL;
goto out;
@@ -998,7 +1038,8 @@ long do_event_channel_op(int cmd, XEN_GU
int alloc_unbound_xen_event_channel(
- struct vcpu *local_vcpu, domid_t remote_domid)
+ struct vcpu *local_vcpu, domid_t remote_domid,
+ xen_event_channel_notification_t notification_fn)
{
struct evtchn *chn;
struct domain *d = local_vcpu->domain;
@@ -1011,7 +1052,7 @@ int alloc_unbound_xen_event_channel(
chn = evtchn_from_port(d, port);
chn->state = ECS_UNBOUND;
- chn->consumer_is_xen = 1;
+ chn->xen_consumer = get_xen_consumer(notification_fn);
chn->notify_vcpu_id = local_vcpu->vcpu_id;
chn->u.unbound.remote_domid = remote_domid;
@@ -1038,8 +1079,8 @@ void free_xen_event_channel(
BUG_ON(!port_is_valid(d, port));
chn = evtchn_from_port(d, port);
- BUG_ON(!chn->consumer_is_xen);
- chn->consumer_is_xen = 0;
+ BUG_ON(!consumer_is_xen(chn));
+ chn->xen_consumer = 0;
spin_unlock(&d->event_lock);
@@ -1063,7 +1104,7 @@ void notify_via_xen_event_channel(struct
ASSERT(port_is_valid(ld, lport));
lchn = evtchn_from_port(ld, lport);
- ASSERT(lchn->consumer_is_xen);
+ ASSERT(consumer_is_xen(lchn));
if ( likely(lchn->state == ECS_INTERDOMAIN) )
{
@@ -1106,7 +1147,7 @@ void evtchn_destroy(struct domain *d)
/* Close all existing event channels. */
for ( i = 0; port_is_valid(d, i); i++ )
{
- evtchn_from_port(d, i)->consumer_is_xen = 0;
+ evtchn_from_port(d, i)->xen_consumer = 0;
(void)__evtchn_close(d, i);
}
@@ -1192,7 +1233,7 @@ static void domain_dump_evtchn_info(stru
printk(" v=%d", chn->u.virq);
break;
}
- printk(" x=%d\n", chn->consumer_is_xen);
+ printk(" x=%d\n", chn->xen_consumer);
}
spin_unlock(&d->event_lock);
diff -r 2a4ec2e2ae36 -r 4c19931b40d5 xen/include/xen/event.h
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -51,8 +51,11 @@ int evtchn_unmask(unsigned int port);
void evtchn_move_pirqs(struct vcpu *v);
/* Allocate/free a Xen-attached event channel port. */
+typedef void (*xen_event_channel_notification_t)(
+ struct vcpu *v, unsigned int port);
int alloc_unbound_xen_event_channel(
- struct vcpu *local_vcpu, domid_t remote_domid);
+ struct vcpu *local_vcpu, domid_t remote_domid,
+ xen_event_channel_notification_t notification_fn);
void free_xen_event_channel(
struct vcpu *local_vcpu, int port);
diff -r 2a4ec2e2ae36 -r 4c19931b40d5 xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -47,7 +47,7 @@ struct evtchn
#define ECS_VIRQ 5 /* Channel is bound to a virtual IRQ line. */
#define ECS_IPI 6 /* Channel is bound to a virtual IPI line. */
u8 state; /* ECS_* */
- u8 consumer_is_xen; /* Consumed by Xen or by guest? */
+ u8 xen_consumer; /* Consumer in Xen, if any? (0 = send to guest) */
u16 notify_vcpu_id; /* VCPU for local delivery notification */
union {
struct {
Andres Lagar-Cavilla
2011-Dec-05 15:12 UTC
[PATCH 3 of 5] Allow memevent responses to be signaled via the event channel
xen/arch/x86/mm/mem_event.c | 26 ++++++++++++++++++++------
1 files changed, 20 insertions(+), 6 deletions(-)
Don't require a separate domctl to notify the memevent interface that an
event has occurred. This domctl can be taxing, particularly when you are
scaling events and paging to many domains across a single system. Instead,
we use the existing event channel to signal when we place something in the
ring (as per normal ring operation).
Signed-off-by: Adin Scannell <adin@scannell.ca>
Signed-off-by: Keir Fraser <keir@xen.org>
Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
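On the toolstack side this means placing a response and kicking the channel
is now sufficient; no resume domctl round-trip is needed. A sketch of that
path, assuming the libxc event-channel API of this era and the mem_event
back ring used by consumers such as xenpaging:

#include <string.h>
#include <xenctrl.h>        /* xc_evtchn_notify() */
#include <xen/mem_event.h>  /* mem_event_back_ring_t, mem_event_response_t */

/* Put one response on the ring, then kick the event channel; with this
 * patch the kick alone makes Xen consume the response. */
static int put_response_and_kick(mem_event_back_ring_t *ring, xc_evtchn *xce,
                                 evtchn_port_t port, mem_event_response_t *rsp)
{
    RING_IDX rsp_prod = ring->rsp_prod_pvt;

    memcpy(RING_GET_RESPONSE(ring, rsp_prod), rsp, sizeof(*rsp));
    ring->rsp_prod_pvt = rsp_prod + 1;
    RING_PUSH_RESPONSES(ring);

    return xc_evtchn_notify(xce, port);
}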
diff -r 4c19931b40d5 -r 22b05954f754 xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -37,9 +37,11 @@
#define mem_event_ring_lock(_med) spin_lock(&(_med)->ring_lock)
#define mem_event_ring_unlock(_med) spin_unlock(&(_med)->ring_lock)
-static int mem_event_enable(struct domain *d,
- xen_domctl_mem_event_op_t *mec,
- struct mem_event_domain *med)
+static int mem_event_enable(
+ struct domain *d,
+ xen_domctl_mem_event_op_t *mec,
+ struct mem_event_domain *med,
+ xen_event_channel_notification_t notification_fn)
{
int rc;
struct domain *dom_mem_event = current->domain;
@@ -94,7 +96,7 @@ static int mem_event_enable(struct domai
/* Allocate event channel */
rc = alloc_unbound_xen_event_channel(d->vcpu[0],
current->domain->domain_id,
- NULL);
+ notification_fn);
if ( rc < 0 )
goto err;
@@ -233,6 +235,18 @@ int mem_event_check_ring(struct domain *
return ring_full;
}
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_paging_notification(struct vcpu *v, unsigned int port)
+{
+ p2m_mem_paging_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_access_notification(struct vcpu *v, unsigned int port)
+{
+ p2m_mem_access_resume(v->domain);
+}
+
int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
XEN_GUEST_HANDLE(void) u_domctl)
{
@@ -294,7 +308,7 @@ int mem_event_domctl(struct domain *d, x
if ( p2m->pod.entry_count )
break;
- rc = mem_event_enable(d, mec, med);
+ rc = mem_event_enable(d, mec, med, mem_paging_notification);
}
break;
@@ -333,7 +347,7 @@ int mem_event_domctl(struct domain *d, x
if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
break;
- rc = mem_event_enable(d, mec, med);
+ rc = mem_event_enable(d, mec, med, mem_access_notification);
}
break;
Andres Lagar-Cavilla
2011-Dec-05 15:12 UTC
[PATCH 4 of 5] Consume multiple mem event responses off the ring
xen/arch/x86/mm/mem_event.c | 10 +++++++-
xen/arch/x86/mm/mem_sharing.c | 13 +++++----
xen/arch/x86/mm/p2m.c | 52 +++++++++++++++++++++-------------------
xen/include/asm-x86/mem_event.h | 2 +-
4 files changed, 44 insertions(+), 33 deletions(-)
Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Signed-off-by: Adin Scannell <adin@scannell.ca>
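The key change below is that mem_event_get_response() now returns whether a
response was actually consumed, so the resume paths can drain the ring in a
loop instead of assuming exactly one outstanding response. A minimal sketch
of the resulting hypervisor-side pattern, mirroring the diffs:

/* Drain every pending response in one pass; the same shape the diffs
 * below give the sharing, paging and access resume paths. */
static void drain_responses(struct domain *d, struct mem_event_domain *med)
{
    mem_event_response_t rsp;

    while ( mem_event_get_response(med, &rsp) )
    {
        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
            vcpu_unpause(d->vcpu[rsp.vcpu_id]);
    }
}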
diff -r 22b05954f754 -r c71f4c5b42f0 xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -166,7 +166,7 @@ void mem_event_put_request(struct domain
notify_via_xen_event_channel(d, med->xen_port);
}
-void mem_event_get_response(struct mem_event_domain *med, mem_event_response_t *rsp)
+int mem_event_get_response(struct mem_event_domain *med, mem_event_response_t *rsp)
{
mem_event_front_ring_t *front_ring;
RING_IDX rsp_cons;
@@ -176,6 +176,12 @@ void mem_event_get_response(struct mem_e
front_ring = &med->front_ring;
rsp_cons = front_ring->rsp_cons;
+ if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
+ {
+ mem_event_ring_unlock(med);
+ return 0;
+ }
+
/* Copy response */
memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
rsp_cons++;
@@ -185,6 +191,8 @@ void mem_event_get_response(struct mem_e
front_ring->sring->rsp_event = rsp_cons + 1;
mem_event_ring_unlock(med);
+
+ return 1;
}
void mem_event_unpause_vcpus(struct domain *d)
diff -r 22b05954f754 -r c71f4c5b42f0 xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -300,12 +300,13 @@ int mem_sharing_sharing_resume(struct do
{
mem_event_response_t rsp;
- /* Get request off the ring */
- mem_event_get_response(&d->mem_event->share, &rsp);
-
- /* Unpause domain/vcpu */
- if( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
- vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+ /* Get all requests off the ring */
+ while ( mem_event_get_response(&d->mem_event->share, &rsp) )
+ {
+ /* Unpause domain/vcpu */
+ if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+ vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+ }
return 0;
}
diff -r 22b05954f754 -r c71f4c5b42f0 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1066,31 +1066,31 @@ void p2m_mem_paging_resume(struct domain
     p2m_access_t a;
     mfn_t mfn;

-    /* Pull the response off the ring */
-    mem_event_get_response(&d->mem_event->paging, &rsp);
-
-    /* Fix p2m entry if the page was not dropped */
-    if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
+    /* Pull all responses off the ring */
+    while( mem_event_get_response(&d->mem_event->paging, &rsp) )
     {
-        p2m_lock(p2m);
-        mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, p2m_query, NULL);
-        /* Allow only pages which were prepared properly, or pages which
-         * were nominated but not evicted */
-        if ( mfn_valid(mfn) &&
-             (p2mt == p2m_ram_paging_in || p2mt == p2m_ram_paging_in_start) )
+        /* Fix p2m entry if the page was not dropped */
+        if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
         {
-            set_p2m_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
-                          paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw,
-                          a);
-            set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+            p2m_lock(p2m);
+            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, p2m_query, NULL);
+            /* Allow only pages which were prepared properly, or pages which
+             * were nominated but not evicted */
+            if ( mfn_valid(mfn) &&
+                 (p2mt == p2m_ram_paging_in || p2mt == p2m_ram_paging_in_start) )
+            {
+                set_p2m_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
+                              paging_mode_log_dirty(d) ? p2m_ram_logdirty :
+                              p2m_ram_rw, a);
+                set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+            }
+            p2m_unlock(p2m);
         }
-        p2m_unlock(p2m);
+        /* Unpause domain */
+        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+            vcpu_unpause(d->vcpu[rsp.vcpu_id]);
     }

-    /* Unpause domain */
-    if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-        vcpu_unpause(d->vcpu[rsp.vcpu_id]);
-
     /* Unpause any domains that were paused because the ring was full */
     mem_event_unpause_vcpus(d);
 }
@@ -1174,11 +1174,13 @@ void p2m_mem_access_resume(struct domain
{
mem_event_response_t rsp;
- mem_event_get_response(&d->mem_event->access, &rsp);
-
- /* Unpause domain */
- if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
- vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+ /* Pull all responses off the ring */
+ while( mem_event_get_response(&d->mem_event->access, &rsp) )
+ {
+ /* Unpause domain */
+ if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+ vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+ }
/* Unpause any domains that were paused because the ring was full or no
 * listener was available */
diff -r 22b05954f754 -r c71f4c5b42f0 xen/include/asm-x86/mem_event.h
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -29,7 +29,7 @@ void mem_event_mark_and_pause(struct vcp
int mem_event_check_ring(struct domain *d, struct mem_event_domain *med);
void mem_event_put_req_producers(struct mem_event_domain *med);
void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
mem_event_request_t *req);
-void mem_event_get_response(struct mem_event_domain *med, mem_event_response_t *rsp);
+int mem_event_get_response(struct mem_event_domain *med, mem_event_response_t *rsp);
void mem_event_unpause_vcpus(struct domain *d);
int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
Andres Lagar-Cavilla
2011-Dec-05 15:12 UTC
[PATCH 5 of 5] Ignore memory events with an obviously invalid gfn
xen/arch/x86/mm/mem_sharing.c | 2 ++
xen/arch/x86/mm/p2m.c | 4 ++++
xen/include/public/mem_event.h | 7 +++++++
3 files changed, 13 insertions(+), 0 deletions(-)
Ring semantics require that for every request, a response be put. This
allows the consumer to place a dummy response if need be.
Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
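On the consumer side, keeping the ring balanced would then look like the
sketch below. It assumes the same back-ring setup as in the patch 3 example
and uses the macros as posted in this version (note the follow-up review
replaces the gfn encoding with a flag):

/* Answer a request that needs no state change with a dummy response,
 * so request/response counts on the ring stay matched. */
static void put_dummy_response(mem_event_back_ring_t *ring)
{
    DECLARE_DUMMY_MEM_EVENT_RSP(rsp);
    RING_IDX rsp_prod = ring->rsp_prod_pvt;

    memcpy(RING_GET_RESPONSE(ring, rsp_prod), &rsp, sizeof(rsp));
    ring->rsp_prod_pvt = rsp_prod + 1;
    RING_PUSH_RESPONSES(ring);
}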
diff -r c71f4c5b42f0 -r 790cd814bee8 xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -303,6 +303,8 @@ int mem_sharing_sharing_resume(struct do
/* Get all requests off the ring */
while ( mem_event_get_response(&d->mem_event->share, &rsp) )
{
+ if ( DUMMY_MEM_EVENT_RSP(&rsp) )
+ continue;
/* Unpause domain/vcpu */
if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
vcpu_unpause(d->vcpu[rsp.vcpu_id]);
diff -r c71f4c5b42f0 -r 790cd814bee8 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1069,6 +1069,8 @@ void p2m_mem_paging_resume(struct domain
/* Pull all responses off the ring */
while( mem_event_get_response(&d->mem_event->paging, &rsp) )
{
+ if ( DUMMY_MEM_EVENT_RSP(&rsp) )
+ continue;
/* Fix p2m entry if the page was not dropped */
if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
{
@@ -1177,6 +1179,8 @@ void p2m_mem_access_resume(struct domain
/* Pull all responses off the ring */
while( mem_event_get_response(&d->mem_event->access, &rsp) )
{
+ if ( DUMMY_MEM_EVENT_RSP(&rsp) )
+ continue;
/* Unpause domain */
if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
vcpu_unpause(d->vcpu[rsp.vcpu_id]);
diff -r c71f4c5b42f0 -r 790cd814bee8 xen/include/public/mem_event.h
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -76,6 +76,13 @@ typedef struct mem_event_st {
DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
+#define INVALID_RSP_GFN -1UL
+
+#define DUMMY_MEM_EVENT_RSP(_r) ((_r)->gfn == INVALID_RSP_GFN)
+#define MAKE_MEM_EVENT_RSP_DUMMY(_r) ((_r)->gfn = INVALID_RSP_GFN)
+#define DECLARE_DUMMY_MEM_EVENT_RSP(_name) \
+ mem_event_response_t _name = { .gfn = INVALID_RSP_GFN }
+
#endif
/*
Tim Deegan
2011-Dec-05 16:01 UTC
Re: [PATCH 5 of 5] Ignore memory events with an obviously invalid gfn
At 10:12 -0500 on 05 Dec (1323079924), Andres Lagar-Cavilla wrote:
> xen/arch/x86/mm/mem_sharing.c | 2 ++
> xen/arch/x86/mm/p2m.c | 4 ++++
> xen/include/public/mem_event.h | 7 +++++++
> 3 files changed, 13 insertions(+), 0 deletions(-)
>
> Ring semantics require that for every request, a response be put. This
> allows the consumer to place a dummy response if need be.
>
> diff -r c71f4c5b42f0 -r 790cd814bee8 xen/include/public/mem_event.h
> --- a/xen/include/public/mem_event.h
> +++ b/xen/include/public/mem_event.h
> @@ -76,6 +76,13 @@ typedef struct mem_event_st {
>
> DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
>
> +#define INVALID_RSP_GFN -1UL
> +

On a 32-bit system this doesn't do what you want.

Actually, can you use a flag for this, please? Wedging it into the gfn
field seems a bit of a hack, and we have plenty of flag bits left.

Tim.

> +#define DUMMY_MEM_EVENT_RSP(_r) ((_r)->gfn == INVALID_RSP_GFN)
> +#define MAKE_MEM_EVENT_RSP_DUMMY(_r) ((_r)->gfn = INVALID_RSP_GFN)
> +#define DECLARE_DUMMY_MEM_EVENT_RSP(_name) \
> +    mem_event_response_t _name = { .gfn = INVALID_RSP_GFN }
> +
> #endif
>
> /*
Andres Lagar-Cavilla
2011-Dec-05 16:20 UTC
Re: [PATCH 5 of 5] Ignore memory events with an obviously invalid gfn
> At 10:12 -0500 on 05 Dec (1323079924), Andres Lagar-Cavilla wrote:
>> xen/arch/x86/mm/mem_sharing.c | 2 ++
>> xen/arch/x86/mm/p2m.c | 4 ++++
>> xen/include/public/mem_event.h | 7 +++++++
>> 3 files changed, 13 insertions(+), 0 deletions(-)
>>
>> Ring semantics require that for every request, a response be put. This
>> allows the consumer to place a dummy response if need be.
>>
>> diff -r c71f4c5b42f0 -r 790cd814bee8 xen/include/public/mem_event.h
>> --- a/xen/include/public/mem_event.h
>> +++ b/xen/include/public/mem_event.h
>> @@ -76,6 +76,13 @@ typedef struct mem_event_st {
>>
>> DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
>>
>> +#define INVALID_RSP_GFN -1UL
>> +
>
> On a 32-bit system this doesn't do what you want.
>
> Actually, can you use a flag for this, please? Wedging it into the gfn
> field seems a bit of a hack, and we have plenty of flag bits left.
>
> Tim.

Good point, here it is:

# HG changeset patch
# Parent c71f4c5b42f0c7cc04e4ae758c8ab775305f2c6a
Ignore memory events with an obviously invalid gfn.

Ring semantics require that for every request, a response be put. This
allows the consumer to place a dummy response if need be.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff -r c71f4c5b42f0 xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -303,6 +303,8 @@ int mem_sharing_sharing_resume(struct do
     /* Get all requests off the ring */
     while ( mem_event_get_response(&d->mem_event->share, &rsp) )
     {
+        if ( DUMMY_MEM_EVENT_RSP(&rsp) )
+            continue;
         /* Unpause domain/vcpu */
         if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
             vcpu_unpause(d->vcpu[rsp.vcpu_id]);
diff -r c71f4c5b42f0 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1069,6 +1069,8 @@ void p2m_mem_paging_resume(struct domain
     /* Pull all responses off the ring */
     while( mem_event_get_response(&d->mem_event->paging, &rsp) )
     {
+        if ( DUMMY_MEM_EVENT_RSP(&rsp) )
+            continue;
         /* Fix p2m entry if the page was not dropped */
         if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
         {
@@ -1177,6 +1179,8 @@ void p2m_mem_access_resume(struct domain
     /* Pull all responses off the ring */
     while( mem_event_get_response(&d->mem_event->access, &rsp) )
     {
+        if ( DUMMY_MEM_EVENT_RSP(&rsp) )
+            continue;
         /* Unpause domain */
         if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
             vcpu_unpause(d->vcpu[rsp.vcpu_id]);
diff -r c71f4c5b42f0 xen/include/public/mem_event.h
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -40,6 +40,7 @@
 #define MEM_EVENT_FLAG_DROP_PAGE (1 << 1)
 #define MEM_EVENT_FLAG_EVICT_FAIL (1 << 2)
 #define MEM_EVENT_FLAG_FOREIGN (1 << 3)
+#define MEM_EVENT_FLAG_DUMMY (1 << 4)

 /* Reasons for the memory event request */
 #define MEM_EVENT_REASON_UNKNOWN 0 /* typical reason */
@@ -76,6 +77,13 @@ typedef struct mem_event_st {

 DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);

+#define DUMMY_MEM_EVENT_RSP(_r) \
+    ((_r)->flags & MEM_EVENT_FLAG_DUMMY)
+#define MAKE_MEM_EVENT_RSP_DUMMY(_r) \
+    ((_r)->flags |= MEM_EVENT_FLAG_DUMMY)
+#define DECLARE_DUMMY_MEM_EVENT_RSP(_name) \
+    mem_event_response_t _name = { .flags = MEM_EVENT_FLAG_DUMMY }
+
 #endif

 /*
On Mon, Dec 05, Andres Lagar-Cavilla wrote:

> In this patch series we add a series of improvements to the memory
> event interface. Namely:
> - Allow batched consumption of multiple outstanding responses
> - Signal Xen to consume responses solely with an event channel kick
> - Extra flag to tag events generated by foreign domains
> - Allow placing dummy responses in the ring, to keep it in sync if
>   a state transition does not require an explicit response.

Assuming that these patches have been tested, I'm ok with them.

Olaf
At 10:11 -0500 on 05 Dec (1323079919), Andres Lagar-Cavilla wrote:

> In this patch series we add a series of improvements to the memory
> event interface. Namely:
> - Allow batched consumption of multiple outstanding responses
> - Signal Xen to consume responses solely with an event channel kick
> - Extra flag to tag events generated by foreign domains
> - Allow placing dummy responses in the ring, to keep it in sync if
>   a state transition does not require an explicit response.

Applied, thanks. I stripped out the 'MEM_EVENT_RSP' macros from patch 5,
as they didn't seem useful now that it's just a flag, so you might have
to fix up your client code to match.

Cheers,

Tim.
Andres Lagar-Cavilla
2011-Dec-06 20:39 UTC
Re: [PATCH 0 of 5] Mem event interface improvements
> At 10:11 -0500 on 05 Dec (1323079919), Andres Lagar-Cavilla wrote:
>> In this patch series we add a series of improvements to the memory
>> event interface. Namely:
>> - Allow batched consumption of multiple outstanding responses
>> - Signal Xen to consume responses solely with an event channel kick
>> - Extra flag to tag events generated by foreign domains
>> - Allow placing dummy responses in the ring, to keep it in sync if
>>   a state transition does not require an explicit response.
>
> Applied, thanks. I stripped out the 'MEM_EVENT_RSP' macros from patch
> 5, as they didn't seem useful now that it's just a flag, so you might
> have to fix up your client code to match.
>
> Cheers,
>
> Tim.

Thanks! No biggie about the macros.

Andres