Keir,
I have found a VT-d scalability issue and want some feedback.

When I assign a pass-through NIC to a Linux VM and increase the number of
VMs, the iperf throughput for each VM drops greatly. Say, start 8 VMs running
on a machine with 8 physical CPUs and start 8 iperf clients to connect to
each of them; the final result is only 60% of the 1-VM case.

Further investigation shows that vcpu migration causes a "cold" cache for the
pass-through domain. The following code in vmx_do_resume tries to invalidate
the original processor's cache on migration if the domain has a pass-through
device and there is no support for wbinvd vmexit:

    if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
    {
        int cpu = v->arch.hvm_vmx.active_cpu;
        if ( cpu != -1 )
            on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
    }

So we want to pin vcpus to free processors for domains with pass-through
devices at creation time, just like what we do for NUMA systems.

What do you think of it? Or do you have other ideas?

Thanks,

--
best rgds,
edwin
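For context, here is a sketch of how that flush path fits together. The
wbinvd_ipi helper is assumed to be a trivial IPI handler that simply executes
wbinvd on the processor it is sent to, and the rest of vmx_do_resume (VMCS
migration and so on) is elided; treat this as an illustration of the snippet
above rather than the exact tree contents:

    /* Illustrative sketch only: the cache-flush-on-migration path.
     * Assumption: wbinvd_ipi just runs wbinvd on the CPU the IPI is
     * delivered to; the VMCS reload/migration details are omitted. */
    static void wbinvd_ipi(void *info)
    {
        wbinvd();    /* write back and invalidate this CPU's caches */
    }

    void vmx_do_resume(struct vcpu *v)
    {
        if ( v->arch.hvm_vmx.active_cpu != smp_processor_id() )
        {
            /*
             * The vcpu is moving to a new physical CPU.  A pass-through
             * guest may rely on WBINVD/CLFLUSH for non-snooped DMA; if
             * the CPU cannot intercept guest WBINVD (no wbinvd exiting),
             * flush the old CPU's cache via IPI before running here.
             */
            if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
            {
                int cpu = v->arch.hvm_vmx.active_cpu;
                if ( cpu != -1 )
                    on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi,
                                     NULL, 1, 1);
            }
            /* ... migrate the VMCS to this CPU, reload timers, etc. ... */
        }
        /* ... */
    }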
> When I assign a pass-through NIC to a Linux VM and increase the number of
> VMs, the iperf throughput for each VM drops greatly. Say, start 8 VMs
> running on a machine with 8 physical CPUs and start 8 iperf clients to
> connect to each of them; the final result is only 60% of the 1-VM case.
>
> Further investigation shows that vcpu migration causes a "cold" cache for
> the pass-through domain.

Just so I understand the experiment, does each VM have a pass-through
NIC, or just one?

> The following code in vmx_do_resume tries to invalidate the original
> processor's cache on migration if the domain has a pass-through device
> and there is no support for wbinvd vmexit:
>
>     if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
>     {
>         int cpu = v->arch.hvm_vmx.active_cpu;
>         if ( cpu != -1 )
>             on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
>     }
>
> So we want to pin vcpus to free processors for domains with pass-through
> devices at creation time, just like what we do for NUMA systems.

What pinning functionality would we need beyond what's already there?

Thanks,
Ian

> What do you think of it? Or do you have other ideas?
>
> Thanks,
>
> --
> best rgds,
> edwin
On 9/9/08 10:04, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:

> The following code in vmx_do_resume tries to invalidate the original
> processor's cache on migration if the domain has a pass-through device and
> there is no support for wbinvd vmexit:
>
>     if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
>     {
>         int cpu = v->arch.hvm_vmx.active_cpu;
>         if ( cpu != -1 )
>             on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
>     }
>
> So we want to pin vcpus to free processors for domains with pass-through
> devices at creation time, just like what we do for NUMA systems.

(a) Pinning support already exists. Maybe list this as 'best practice', but I
don't see any need for xend changes, for example.

(b) Presumably your upcoming (and existing current?) processors support
wbinvd exiting anyway?

 -- Keir
On Tue, Sep 09, 2008 at 10:28:59AM +0100, Ian Pratt wrote:
> > When I assign a pass-through NIC to a Linux VM and increase the number of
> > VMs, the iperf throughput for each VM drops greatly. Say, start 8 VMs
> > running on a machine with 8 physical CPUs and start 8 iperf clients to
> > connect to each of them; the final result is only 60% of the 1-VM case.
> >
> > Further investigation shows that vcpu migration causes a "cold" cache for
> > the pass-through domain.
>
> Just so I understand the experiment, does each VM have a pass-through
> NIC, or just one?

Each VM has a pass-through device.

> > The following code in vmx_do_resume tries to invalidate the original
> > processor's cache on migration if the domain has a pass-through device
> > and there is no support for wbinvd vmexit:
> >
> >     if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
> >     {
> >         int cpu = v->arch.hvm_vmx.active_cpu;
> >         if ( cpu != -1 )
> >             on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
> >     }
> >
> > So we want to pin vcpus to free processors for domains with pass-through
> > devices at creation time, just like what we do for NUMA systems.
>
> What pinning functionality would we need beyond what's already there?

I think you mean the "cpus" option in the config file for vcpu affinity. That
requires extra effort from the end user. We just want to pin vcpus for VT-d
domains automatically in xend, like we pin vcpus to a free node on a NUMA
system.

> Thanks,
> Ian

--
best rgds,
edwin
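The change being proposed would live in xend itself; purely to illustrate the
underlying mechanism it would drive, a rough libxc-level sketch of "pick an
otherwise unused physical CPU and pin every vcpu of the new pass-through
domain to it" might look like the following. The helper name and the
one-domain-per-free-CPU policy are hypothetical, and the xc_vcpu_setaffinity()
signature shown (uint64_t CPU bitmap) is assumed from the libxc of roughly
this era, so treat it as an assumption rather than a reference:

    /* Hypothetical illustration only: pin all vcpus of a newly created
     * pass-through domain to a single free physical CPU, roughly what an
     * automatic policy in the toolstack might end up doing.  Assumes the
     * era's xc_vcpu_setaffinity() taking a uint64_t CPU bitmap. */
    #include <stdint.h>
    #include <xenctrl.h>

    int pin_passthrough_domain(int xc_handle, uint32_t domid,
                               int nr_vcpus, int free_pcpu)
    {
        uint64_t cpumap = 1ULL << free_pcpu;   /* affinity mask: one pCPU */
        int v, rc;

        for ( v = 0; v < nr_vcpus; v++ )
        {
            rc = xc_vcpu_setaffinity(xc_handle, domid, v, cpumap);
            if ( rc != 0 )
                return rc;   /* give up; remaining vcpus stay unpinned */
        }
        return 0;
    }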
On Tue, Sep 09, 2008 at 11:22:15AM +0100, Keir Fraser wrote:
> On 9/9/08 10:04, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
> > The following code in vmx_do_resume tries to invalidate the original
> > processor's cache on migration if the domain has a pass-through device
> > and there is no support for wbinvd vmexit:
> >
> >     if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
> >     {
> >         int cpu = v->arch.hvm_vmx.active_cpu;
> >         if ( cpu != -1 )
> >             on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
> >     }
> >
> > So we want to pin vcpus to free processors for domains with pass-through
> > devices at creation time, just like what we do for NUMA systems.
>
> (a) Pinning support already exists. Maybe list this as 'best practice', but
> I don't see any need for xend changes, for example.

So the end user needs to explicitly call "xm vcpu-pin" for a VT-d domain. But
where should we put this 'best practice'?

> (b) Presumably your upcoming (and existing current?) processors support
> wbinvd exiting anyway?

Yes, but existing systems have this problem.

--
best rgds,
edwin
On 10/9/08 01:09, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:

>> (b) Presumably your upcoming (and existing current?) processors support
>> wbinvd exiting anyway?
>
> Yes, but existing systems have this problem.

Is any significant number of people really using VT-d yet? I must say I'm
skeptical as to whether the existing hardware base really matters all that
much.

 -- Keir
> On 10/9/08 01:09, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
>
> >> (b) Presumably your upcoming (and existing current?) processors support
> >> wbinvd exiting anyway?
> >
> > Yes, but existing systems have this problem.
>
> Is any significant number of people really using VT-d yet? I must say I'm
> skeptical as to whether the existing hardware base really matters all that
> much.

Which of the current shipping CPUs do support wbinvd exiting? Not even
Wikipedia knows the answer to that question, and it's usually the best source
of information about Intel CPUs -- certainly more help than the Intel web
site :-)

Ian
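The capability is at least discoverable from software: WBINVD exiting is bit
6 of the secondary processor-based VM-execution controls, advertised in the
high half of the IA32_VMX_PROCBASED_CTLS2 capability MSR, and this is what
ultimately feeds Xen's cpu_has_wbinvd_exiting. A minimal detection sketch,
assuming ring-0 access on a VMX-capable CPU whose primary controls allow
activating the secondary controls (otherwise MSR 0x48b is not implemented and
reading it faults):

    /* Detection sketch: can this CPU intercept guest WBINVD? */
    #include <stdint.h>

    #define MSR_IA32_VMX_PROCBASED_CTLS2   0x48b
    #define SECONDARY_EXEC_WBINVD_EXITING  (1u << 6)   /* 0x00000040 */

    static inline uint64_t rdmsr64(uint32_t msr)
    {
        uint32_t lo, hi;
        __asm__ __volatile__ ( "rdmsr" : "=a" (lo), "=d" (hi) : "c" (msr) );
        return ((uint64_t)hi << 32) | lo;
    }

    static int cpu_supports_wbinvd_exiting(void)
    {
        /* The high 32 bits of the capability MSR are the "allowed
         * 1-settings" of the secondary execution controls. */
        uint32_t allowed1 =
            (uint32_t)(rdmsr64(MSR_IA32_VMX_PROCBASED_CTLS2) >> 32);
        return !!(allowed1 & SECONDARY_EXEC_WBINVD_EXITING);
    }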
Not regarding the other questions/objections in this thread for a moment ---
what kind of performance improvements are we talking of here if the vcpus are
pinned? Is it close to 1 VM, or is there still some performance degradation
due to IOTLB pressure?

[And talking of IOTLB pressure, why can't Intel document the IOTLB sizes in
the chipset docs? Or even better, why can't these values be queried from the
chipset?]

	eSk


[Edwin Zhai]
> Keir,
> I have found a VT-d scalability issue and want some feedback.
>
> When I assign a pass-through NIC to a Linux VM and increase the number of
> VMs, the iperf throughput for each VM drops greatly. Say, start 8 VMs
> running on a machine with 8 physical CPUs and start 8 iperf clients to
> connect to each of them; the final result is only 60% of the 1-VM case.
>
> Further investigation shows that vcpu migration causes a "cold" cache for
> the pass-through domain. The following code in vmx_do_resume tries to
> invalidate the original processor's cache on migration if the domain has a
> pass-through device and there is no support for wbinvd vmexit:
>
>     if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
>     {
>         int cpu = v->arch.hvm_vmx.active_cpu;
>         if ( cpu != -1 )
>             on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
>     }
>
> So we want to pin vcpus to free processors for domains with pass-through
> devices at creation time, just like what we do for NUMA systems.
>
> What do you think of it? Or do you have other ideas?
>
> Thanks,
>
> --
> best rgds,
> edwin
Espen Skoglund wrote:
> Not regarding the other questions/objections in this thread for a moment
> --- what kind of performance improvements are we talking of here if the
> vcpus are pinned? Is it close to 1 VM or is there still some performance
> degradation due to IOTLB pressure?

Definitely performance will degrade due to IOTLB pressure when there are many
VMs, which exhausts the IOTLB.

Randy (Weidong)

> [And talking of IOTLB pressure, why can't Intel document the IOTLB sizes in
> the chipset docs? Or even better, why can't these values be queried from
> the chipset?]
>
> 	eSk
[Weidong Han]
> Espen Skoglund wrote:
>> Not regarding the other questions/objections in this thread for a moment
>> --- what kind of performance improvements are we talking of here if the
>> vcpus are pinned? Is it close to 1 VM or is there still some performance
>> degradation due to IOTLB pressure?

> Definitely performance will degrade due to IOTLB pressure when there are
> many VMs, which exhausts the IOTLB.

But how much of the degradation is due to IOTLB pressure and how much is due
to vcpu pinning? If vcpu pinning doesn't give you much, then why add the
automatic pinning just to get a little improvement on older CPUs hooked up to
a VT-d chipset?

	eSk
Espen Skoglund wrote:
> [Weidong Han]
>> Espen Skoglund wrote:
>>> Not regarding the other questions/objections in this thread for a moment
>>> --- what kind of performance improvements are we talking of here if the
>>> vcpus are pinned? Is it close to 1 VM or is there still some performance
>>> degradation due to IOTLB pressure?
>
>> Definitely performance will degrade due to IOTLB pressure when there are
>> many VMs, which exhausts the IOTLB.
>
> But how much of the degradation is due to IOTLB pressure and how much is
> due to vcpu pinning? If vcpu pinning doesn't give you much, then why add
> the automatic pinning just to get a little improvement on older CPUs hooked
> up to a VT-d chipset?

I don't know the performance data about this. I think Edwin can answer this.

Randy (Weidong)
On Wed, Sep 10, 2008 at 10:27:12AM +0100, Espen Skoglund wrote:
> [Weidong Han]
> > Espen Skoglund wrote:
> >> Not regarding the other questions/objections in this thread for a moment
> >> --- what kind of performance improvements are we talking of here if the
> >> vcpus are pinned? Is it close to 1 VM or is there still some performance
> >> degradation due to IOTLB pressure?
>
> > Definitely performance will degrade due to IOTLB pressure when there are
> > many VMs, which exhausts the IOTLB.
>
> But how much of the degradation is due to IOTLB pressure and how much is
> due to vcpu pinning? If vcpu pinning doesn't give you much, then why add
> the automatic pinning just to get a little improvement on older CPUs hooked
> up to a VT-d chipset?

Say the throughput of 1 pass-through domain is 100%. Without vcpu pinning,
the average throughput of 8 pass-through domains is 59%. With pinning, the
average is 95%.

So you can see how much vcpu pinning contributes to the performance.

--
best rgds,
edwin
> > But how much of the degradation is due to IOTLB pressure and how much is
> > due to vcpu pinning? If vcpu pinning doesn't give you much, then why add
> > the automatic pinning just to get a little improvement on older CPUs
> > hooked up to a VT-d chipset?
>
> Say the throughput of 1 pass-through domain is 100%. Without vcpu pinning,
> the average throughput of 8 pass-through domains is 59%. With pinning, the
> average is 95%.
>
> So you can see how much vcpu pinning contributes to the performance.

For comparison, what are the results if you use a Penryn with wbinvd exit
support?

Thanks,
Ian
On Thu, Sep 11, 2008 at 09:44:58AM +0100, Ian Pratt wrote:
> > > But how much of the degradation is due to IOTLB pressure and how much
> > > is due to vcpu pinning? If vcpu pinning doesn't give you much, then why
> > > add the automatic pinning just to get a little improvement on older
> > > CPUs hooked up to a VT-d chipset?
> >
> > Say the throughput of 1 pass-through domain is 100%. Without vcpu
> > pinning, the average throughput of 8 pass-through domains is 59%. With
> > pinning, the average is 95%.
> >
> > So you can see how much vcpu pinning contributes to the performance.
>
> For comparison, what are the results if you use a Penryn with wbinvd exit
> support?

The Penryn system does not have enough processors for the scalability test,
so I have no data :(

--
best rgds,
edwin