Christoph Egger
2010-Dec-20 16:05 UTC
[Xen-devel] [PATCH 04/12] Nested Virtualization: core
Dong, Eddie
2010-Dec-27 07:54 UTC
RE: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
Dong, Eddie wrote:
> # HG changeset patch
> # User cegger
> # Date 1292839432 -3600
> Nested Virtualization core implementation
>
> Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
>
> diff -r e43ab6fb0ee2 -r a9465de5a794 xen/arch/x86/hvm/Makefile
> --- a/xen/arch/x86/hvm/Makefile
> +++ b/xen/arch/x86/hvm/Makefile
> @@ -10,6 +10,7 @@ obj-y += intercept.o
>  obj-y += io.o
>  obj-y += irq.o
>  obj-y += mtrr.o
> +obj-y += nestedhvm.o
>  obj-y += pmtimer.o
>  obj-y += quirks.o
>  obj-y += rtc.o
> diff -r e43ab6fb0ee2 -r a9465de5a794 xen/arch/x86/hvm/nestedhvm.c
> --- /dev/null
> +++ b/xen/arch/x86/hvm/nestedhvm.c
> @@ -0,0 +1,198 @@
> +/*
> + * Nested HVM
> + * Copyright (c) 2010, Advanced Micro Devices, Inc.
> + * Author: Christoph Egger <Christoph.Egger@amd.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + */
> +
> +#include <asm/msr.h>
> +#include <asm/hvm/support.h> /* for HVM_DELIVER_NO_ERROR_CODE */
> +#include <asm/hvm/hvm.h>
> +#include <asm/hvm/nestedhvm.h>
> +#include <asm/event.h>  /* for local_event_delivery_(en|dis)able */
> +#include <asm/paging.h> /* for paging_mode_hap() */
> +
> +
> +/* Nested HVM on/off per domain */
> +bool_t
> +nestedhvm_enabled(struct domain *d)
> +{
> +    bool_t enabled;
> +
> +    enabled = !!(d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM]);
> +    /* sanity check */
> +    BUG_ON(enabled && !is_hvm_domain(d));
> +
> +    if (!is_hvm_domain(d))
> +        return 0;
> +
> +    return enabled;
> +}
> +
> +/* Nested VCPU */
> +bool_t
> +nestedhvm_vcpu_in_guestmode(struct vcpu *v)
> +{
> +    return vcpu_nestedhvm(v).nv_guestmode;
> +}
> +
> +void
> +nestedhvm_vcpu_reset(struct vcpu *v)
> +{
> +    struct nestedvcpu *nv = &vcpu_nestedhvm(v);
> +
> +    if (nv->nv_vmcx)
> +        hvm_unmap_guest_frame(nv->nv_vmcx);
> +    nv->nv_vmcx = NULL;
> +    nv->nv_vmcxaddr = VMCX_EADDR;
> +    nv->nv_flushp2m = 0;
> +    nv->nv_p2m = NULL;
> +
> +    nhvm_vcpu_reset(v);
> +
> +    /* vcpu is in host mode */
> +    nestedhvm_vcpu_exit_guestmode(v);
> +}
> +
> +int
> +nestedhvm_vcpu_initialise(struct vcpu *v)
> +{
> +    int rc;
> +    struct nestedvcpu *nv = &vcpu_nestedhvm(v);
> +
> +    if (!nestedhvm_enabled(v->domain))
> +        return 0;
> +
> +    memset(nv, 0x0, sizeof(struct nestedvcpu));
> +
> +    /* initialise hostsave, for example */
> +    rc = nhvm_vcpu_initialise(v);
> +    if (rc) {
> +        nhvm_vcpu_destroy(v);
> +        return rc;
> +    }
> +
> +    nestedhvm_vcpu_reset(v);
> +    return 0;
> +}
> +
> +int
> +nestedhvm_vcpu_destroy(struct vcpu *v)
> +{
> +    if (!nestedhvm_enabled(v->domain))
> +        return 0;
> +
> +    return nhvm_vcpu_destroy(v);
> +}
> +
> +void
> +nestedhvm_vcpu_enter_guestmode(struct vcpu *v)
> +{
> +    vcpu_nestedhvm(v).nv_guestmode = 1;
> +}
> +
> +void
> +nestedhvm_vcpu_exit_guestmode(struct vcpu *v)
> +{
> +    vcpu_nestedhvm(v).nv_guestmode = 0;
> +}
> +
> +/* Common shadow IO Permission bitmap */
> +
> +struct shadow_iomap {
> +    /* same format and size as hvm_io_bitmap */
> +    unsigned long iomap[3*PAGE_SIZE/BYTES_PER_LONG];
> +    int refcnt;
> +};
> +
> +/* There are four global patterns of io bitmap; each guest can
> + * choose one of them depending on interception of io port 0x80 and/or
> + * 0xED (shown in table below). Each shadow iomap pattern is
> + * implemented as a singleton to minimize memory consumption while
> + * providing a provider/consumer interface to the users.
> + * The users are in SVM/VMX specific code.
> + *
> + * bitmap          port 0x80   port 0xed
> + * hvm_io_bitmap   cleared     cleared
> + * iomap[0]        cleared     set
> + * iomap[1]        set         cleared
> + * iomap[2]        set         set
> + */
> +static struct shadow_iomap *nhvm_io_bitmap[3];
> +
> +unsigned long *
> +nestedhvm_vcpu_iomap_get(bool_t port_80, bool_t port_ed)
> +{
> +    int i;
> +    extern int hvm_port80_allowed;
> +
> +    if (!hvm_port80_allowed)
> +        port_80 = 1;
> +
> +    if (port_80 == 0) {
> +        if (port_ed == 0)
> +            return hvm_io_bitmap;
> +        i = 0;
> +    } else {
> +        if (port_ed == 0)
> +            i = 1;
> +        else
> +            i = 2;
> +    }
> +
> +    if (nhvm_io_bitmap[i] == NULL) {
> +        nhvm_io_bitmap[i] =
> +            _xmalloc(sizeof(struct shadow_iomap), PAGE_SIZE);
> +        nhvm_io_bitmap[i]->refcnt = 0;
> +        /* set all bits */
> +        memset(nhvm_io_bitmap[i]->iomap, ~0,
> +               sizeof(nhvm_io_bitmap[i]->iomap));
> +        switch (i) {
> +        case 0:
> +            __clear_bit(0x80, nhvm_io_bitmap[i]->iomap);
> +            break;
> +        case 1:
> +            __clear_bit(0xed, nhvm_io_bitmap[i]->iomap);
> +            break;
> +        case 2:
> +            break;
> +        }
> +    }
> +

This is overcomplicated. A static table would be simpler and more efficient.
Christoph Egger
2011-Jan-03 15:58 UTC
Re: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
On Monday 27 December 2010 08:54:16 Dong, Eddie wrote:
> Dong, Eddie wrote:
> > [...]
>
> This is overcomplicated. A static table would be simpler and more
> efficient.

The logic to select the right static table will still be needed. I am not
sure that removing the _xmalloc() call simplifies this part much.

I appreciate opinions from other people on this.

Christoph
Dong, Eddie
2011-Jan-06 17:33 UTC
RE: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
>> This is overcomplicated. A static table would be simpler and more
>> efficient.
>
> The logic to select the right static table will still be needed. I am not
> sure that removing the _xmalloc() call simplifies this part much.

It will be much simpler. You don't need the nestedhvm_vcpu_iomap_get/put
API, nor the refcnt.

The more important thing is policy: whether you favor memory size or
simplicity. If it is memory size, then you should only allocate two
io_bitmap pages for VMX.

> I appreciate opinions from other people on this.

Besides, ideally we should implement a per-guest io bitmap page, by reusing
the L1 guest io_bitmap plus write protection of the page table. At least for
both Xen and KVM, the io bitmap is not modified at runtime once it is
initialized. The readability can be improved and a memory page can be saved.
We only need 2 bits per L1 guest.

But if we want simplicity, I am OK too; however, the current patch doesn't
fit either goal.

thx, eddie
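[Editorial illustration: a minimal sketch of the refcnt-free selection being
discussed here. This is hypothetical code, not the posted patch; the three
shadow bitmaps are assumed to be filled in once at start of day, and
nhvm_io_bitmap is reused from the patch but with the refcnt-carrying struct
dropped. hvm_io_bitmap and hvm_port80_allowed are the existing HVM symbols
the patch itself refers to.]

/* Sketch only: three pre-built shadow io bitmaps, picked by whether
 * port 0x80 and/or port 0xED are intercepted. No get/put pairing and
 * no reference counting are needed. */
static unsigned long *nhvm_io_bitmap[3];

unsigned long *
nestedhvm_vcpu_iomap_get(bool_t port_80, bool_t port_ed)
{
    extern int hvm_port80_allowed;

    /* Port 0x80 must stay intercepted unless direct access is allowed. */
    if (!hvm_port80_allowed)
        port_80 = 1;

    if (!port_80 && !port_ed)
        return hvm_io_bitmap;      /* neither port intercepted */
    if (!port_80)
        return nhvm_io_bitmap[0];  /* only 0xED intercepted */
    if (!port_ed)
        return nhvm_io_bitmap[1];  /* only 0x80 intercepted */
    return nhvm_io_bitmap[2];      /* both ports intercepted */
}

[Callers in the SVM/VMX-specific code would then just use the returned
pointer, with no matching put call.]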
Christoph Egger
2011-Jan-07 10:24 UTC
Re: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
On Thursday 06 January 2011 18:33:56 Dong, Eddie wrote:
> >> This is overcomplicated. A static table would be simpler and more
> >> efficient.
> >
> > The logic to select the right static table will still be needed. I am not
> > sure that removing the _xmalloc() call simplifies this part much.
>
> It will be much simpler. You don't need the nestedhvm_vcpu_iomap_get/put
> API, nor the refcnt.

It is intended that the API behaves like a pool from the caller's side while
it is implemented as a singleton. The refcnt (or should I call it usagecnt?)
is needed by the singleton design pattern.

When I remove the refcnt, I have to implement the API as a real pool, which
will result in allocating an io bitmap for each vcpu of each l1 guest at
runtime.

> The more important thing is policy: whether you favor memory size or
> simplicity. If it is memory size, then you should only allocate two
> io_bitmap pages for VMX.
>
> > I appreciate opinions from other people on this.
>
> Besides, ideally we should implement a per-guest io bitmap page, by reusing
> the L1 guest io_bitmap plus write protection of the page table.

That will work fine with 4KB pages, but I guess it won't be very efficient
with 2MB and 1GB pages. With large pages, most time will be spent emulating
write accesses to the address ranges outside of the io bitmaps.

> At least for both Xen and KVM, the io bitmap is not modified at runtime
> once it is initialized.

Yep, that's why we only need to deal with four possible patterns of shadow
io bitmaps in Xen. We can't assume the l1 guest is not modifying it.

> The readability can be improved and a memory page can be saved. We only
> need 2 bits per L1 guest.
>
> But if we want simplicity, I am OK too; however, the current patch doesn't
> fit either goal.

Hmm... I think I need to move that part of the common logic into SVM to
reach consensus... pity.

Christoph
Tim Deegan
2011-Jan-07 14:12 UTC
Re: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
At 15:58 +0000 on 03 Jan (1294070335), Christoph Egger wrote:
> On Monday 27 December 2010 08:54:16 Dong, Eddie wrote:
> > This is overcomplicated. A static table would be simpler and more
> > efficient.
>
> The logic to select the right static table will still be needed. I am not
> sure that removing the _xmalloc() call simplifies this part much.
>
> I appreciate opinions from other people on this.

I think that you should allocate the three static bitmaps once at boot time
and not bother refcounting them. It's only 36KiB of overhead for the entire
host.

Otherwise you'd have to decide what to do if _xmalloc() returned NULL.

Cheers,

Tim.

--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
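[Editorial illustration: a sketch of what the boot-time setup suggested here
might look like. The function name nestedhvm_setup, the use of __initcall(),
and the error handling are assumptions for illustration only, not taken from
the actual patch; _xmalloc(), memset() and __clear_bit() are used as in the
posted code.]

/* Sketch only: build the three shadow io bitmaps once at boot
 * (3 bitmaps x 3 pages = 36KiB per host), so no refcounting and no
 * runtime allocation-failure handling is needed. */
static unsigned long *nhvm_io_bitmap[3];

static int __init nestedhvm_setup(void)
{
    unsigned int i;

    for (i = 0; i < 3; i++) {
        /* Same size and alignment as hvm_io_bitmap: 3 pages. */
        nhvm_io_bitmap[i] = _xmalloc(3 * PAGE_SIZE, PAGE_SIZE);
        if (nhvm_io_bitmap[i] == NULL)
            return -ENOMEM;  /* fatal at boot; no runtime fallback needed */

        /* Intercept all ports by default ... */
        memset(nhvm_io_bitmap[i], ~0, 3 * PAGE_SIZE);

        /* ... then clear the port the L1 guest does not intercept
         * (see the table in the original patch). */
        if (i == 0)
            __clear_bit(0x80, nhvm_io_bitmap[i]);
        if (i == 1)
            __clear_bit(0xed, nhvm_io_bitmap[i]);
    }
    return 0;
}
__initcall(nestedhvm_setup);

[With this in place, nestedhvm_vcpu_iomap_get() reduces to the table lookup
sketched earlier in the thread and the put function disappears entirely.]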
Christoph Egger
2011-Jan-07 15:56 UTC
Re: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
On Friday 07 January 2011 15:12:51 Tim Deegan wrote:
> At 15:58 +0000 on 03 Jan (1294070335), Christoph Egger wrote:
> > On Monday 27 December 2010 08:54:16 Dong, Eddie wrote:
> > > This is overcomplicated. A static table would be simpler and more
> > > efficient.
> >
> > The logic to select the right static table will still be needed. I am not
> > sure that removing the _xmalloc() call simplifies this part much.
> >
> > I appreciate opinions from other people on this.
>
> I think that you should allocate the three static bitmaps once at boot
> time and not bother refcounting them. It's only 36KiB of overhead for
> the entire host.
>
> Otherwise you'd have to decide what to do if _xmalloc() returned NULL.

I did not want to waste memory in the non-nested-virtualization case,
but OK, I will go that way then.

Christoph
Christoph Egger
2011-Jan-07 16:31 UTC
Re: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
On Thursday 06 January 2011 18:33:56 Dong, Eddie wrote:
> >> This is overcomplicated. A static table would be simpler and more
> >> efficient.
> >
> > The logic to select the right static table will still be needed. I am not
> > sure that removing the _xmalloc() call simplifies this part much.
>
> It will be much simpler. You don't need the nestedhvm_vcpu_iomap_get/put
> API, nor the refcnt.
>
> The more important thing is policy: whether you favor memory size or
> simplicity. If it is memory size, then you should only allocate two
> io_bitmap pages for VMX.
>
> > I appreciate opinions from other people on this.
>
> Besides, ideally we should implement a per-guest io bitmap page, by reusing
> the L1 guest io_bitmap plus write protection of the page table. At least
> for both Xen and KVM, the io bitmap is not modified at runtime once it is
> initialized. The readability can be improved and a memory page can be
> saved. We only need 2 bits per L1 guest.
>
> But if we want simplicity, I am OK too; however, the current patch doesn't
> fit either goal.

I think I owe an apology. Now that I have made the change in my local tree,
I have to mention that I did not realize that I could remove the put
function completely. This in fact is a simplification.

Christoph
Dong, Eddie
2011-Jan-07 20:39 UTC
RE: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
Glad to see that you eventually took our proposal. Simple is beautiful; that
is my belief. BTW, here are comments on your previous questions, in case you
are still interested in them.

>> It will be much simpler. You don't need the nestedhvm_vcpu_iomap_get/put
>> API, nor the refcnt.
>
> It is intended that the API behaves like a pool from the caller's side
> while it is implemented as a singleton. The refcnt (or should I call it
> usagecnt?) is needed by the singleton design pattern.
>
> When I remove the refcnt, I have to implement the API as a real pool,
> which will result in allocating an io bitmap for each vcpu of each l1
> guest at runtime.

You don't need the APIs with pre-allocated pages.

>> The more important thing is policy: whether you favor memory size or
>> simplicity. If it is memory size, then you should only allocate two
>> io_bitmap pages for VMX.
>>
>>> I appreciate opinions from other people on this.
>>
>> Besides, ideally we should implement a per-guest io bitmap page, by
>> reusing the L1 guest io_bitmap plus write protection of the page table.
>
> That will work fine with 4KB pages, but I guess it won't be very efficient
> with 2MB and 1GB pages. With large pages, most time will be spent
> emulating write accesses to the address ranges outside of the io bitmaps.

That is not true. Even if the guest got contiguous machine large pages, the
host should still be able to handle mixed page sizes. A typical case is that
the host may not always be able to get contiguous large pages, such as after
migration. In this case, of course, we only protect 2 * 4KB pages. That
doesn't introduce any additional issues.

>> At least for both Xen and KVM, the io bitmap is not modified at runtime
>> once it is initialized.
>
> Yep, that's why we only need to deal with four possible patterns of shadow
> io bitmaps in Xen. We can't assume the l1 guest is not modifying it.

Well, for performance we can assume that. For correctness, we need to handle
the rare situation. That is also why we would need to write-protect the
bitmap pages and create separate shadow io bitmap pages if the guest does
modify them. However, in the dominant case the host can reuse the guest io
bitmap pages for host usage, with 2 bits indicating the original guest
state. The four patterns are really not related to this topic.

>> The readability can be improved and a memory page can be saved. We only
>> need 2 bits per L1 guest.
>>
>> But if we want simplicity, I am OK too; however, the current patch
>> doesn't fit either goal.
>
> Hmm... I think I need to move that part of the common logic into SVM to
> reach consensus... pity.

My idea was given twice before you publicly posted it :(

Thx,
Eddie