xen.org
2011-Oct-24 19:40 UTC
[Xen-devel] [xen-unstable test] 9593: regressions - trouble: broken/fail/pass
flight 9593 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/9593/

Regressions :-(

Tests which did not succeed and are blocking:
 test-amd64-i386-pair           20 leak-check/check/src_host  fail REGR. vs. 9355
 test-amd64-i386-pair           21 leak-check/check/dst_host  fail REGR. vs. 9355
 test-amd64-amd64-pair          21 leak-check/check/dst_host  fail REGR. vs. 9355
 test-amd64-i386-xl              7 debian-install             fail REGR. vs. 9355
 test-amd64-i386-xl-multivcpu    7 debian-install             fail REGR. vs. 9355
 test-amd64-i386-xl-credit2     12 guest-saverestore.2        fail REGR. vs. 9355
 test-amd64-i386-rhel6hvm-intel  7 redhat-install             fail REGR. vs. 9355
 test-i386-i386-win              7 windows-install            fail REGR. vs. 9355
 test-amd64-amd64-xl-win         7 windows-install            fail REGR. vs. 9355
 test-amd64-i386-win             7 windows-install            fail REGR. vs. 9355

Tests which did not succeed, but are not blocking, including regressions
(tests previously passed) regarded as allowable:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start       fail never pass
 test-amd64-i386-rhel6hvm-amd     9 guest-start.2     fail never pass
 test-amd64-i386-win-vcpus1      16 leak-check/check  fail never pass
 test-i386-i386-xl-win           13 guest-stop        fail never pass
 test-amd64-i386-xl-win-vcpus1   13 guest-stop        fail never pass
 test-amd64-amd64-win            16 leak-check/check  fail never pass

version targeted for testing:
 xen                  ffe861c1d5df
baseline version:
 xen                  6c583d35d76d

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monne <roger.pau@entel.upc.edu>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                      pass
 build-i386                       pass
 build-amd64-oldkern              pass
 build-i386-oldkern               pass
 build-amd64-pvops                pass
 build-i386-pvops                 pass
 test-amd64-amd64-xl              pass
 test-amd64-i386-xl               fail
 test-i386-i386-xl                pass
 test-amd64-i386-rhel6hvm-amd     fail
 test-amd64-i386-xl-credit2       fail
 test-amd64-amd64-xl-pcipt-intel  fail
 test-amd64-i386-rhel6hvm-intel   fail
 test-amd64-i386-xl-multivcpu     fail
 test-amd64-amd64-pair            broken
 test-amd64-i386-pair             broken
 test-i386-i386-pair              pass
 test-amd64-amd64-pv              pass
 test-amd64-i386-pv               pass
 test-i386-i386-pv                pass
 test-amd64-amd64-xl-sedf         pass
 test-amd64-i386-win-vcpus1       fail
 test-amd64-i386-xl-win-vcpus1    fail
 test-amd64-amd64-win             fail
 test-amd64-i386-win              fail
 test-i386-i386-win               fail
 test-amd64-amd64-xl-win          fail
 test-i386-i386-xl-win            fail

------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
 http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
 http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

Not pushing.

------------------------------------------------------------
changeset:   23992:ffe861c1d5df
tag:         tip
user:        Tim Deegan <tim@xen.org>
date:        Mon Oct 24 11:29:08 2011 +0100

    nestedhvm: handle l2 guest MMIO access

    Hyper-V starts a root domain which is effectively an l2 guest. Hyper-V
    passes its devices through to the root domain and lets it do the MMIO
    accesses. The emulation is done by Xen (host) and Hyper-V forwards the
    interrupts to the l2 guest.
    Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>

changeset:   23991:a7ccbc79fc17
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:45:24 2011 +0200

    cpumask <=> xenctl_cpumap: allocate CPU masks and byte maps dynamically

    Generally there was a NR_CPUS-bits wide array in these functions and
    another (through a cpumask_t) on their callers' stacks, which may get a
    little large for big NR_CPUS. As the functions can fail anyway, do the
    allocation in there.

    For the x86/MCA case this required a little code restructuring: by using
    different CPU mask accessors it was possible to avoid allocating a mask
    in the broadcast case. Also, this was the only user that failed to check
    the return value of the conversion function (which could have led to
    undefined behavior).

    Also constify the input parameters of the two functions.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23990:1c8789852eaf
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:44:47 2011 +0200

    x86/hpet: allocate CPU masks dynamically

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23989:8269826353d8
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:44:03 2011 +0200

    credit: allocate CPU masks dynamically

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23988:53528bab2eb4
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:43:35 2011 +0200

    cpupools: allocate CPU masks dynamically

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23987:2682094bc243
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:42:47 2011 +0200

    x86/p2m: allocate CPU masks dynamically

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Keir Fraser <keir@xen.org>

--- 2011-10-18.orig/xen/arch/x86/hvm/nestedhvm.c      2011-10-11 17:24:46.000000000 +0200
+++ 2011-10-18/xen/arch/x86/hvm/nestedhvm.c           2011-10-18 16:45:02.000000000 +0200
@@ -114,9 +114,9 @@ nestedhvm_flushtlb_ipi(void *info)
 void
 nestedhvm_vmcx_flushtlb(struct p2m_domain *p2m)
 {
-    on_selected_cpus(&p2m->p2m_dirty_cpumask, nestedhvm_flushtlb_ipi,
+    on_selected_cpus(p2m->dirty_cpumask, nestedhvm_flushtlb_ipi,
                      p2m->domain, 1);
-    cpumask_clear(&p2m->p2m_dirty_cpumask);
+    cpumask_clear(p2m->dirty_cpumask);
 }
 
 bool_t
--- 2011-10-18.orig/xen/arch/x86/mm/hap/nested_hap.c  2011-10-21 09:24:51.000000000 +0200
+++ 2011-10-18/xen/arch/x86/mm/hap/nested_hap.c       2011-10-18 16:44:35.000000000 +0200
@@ -88,7 +88,7 @@ nestedp2m_write_p2m_entry(struct p2m_dom
     safe_write_pte(p, new);
 
     if (old_flags & _PAGE_PRESENT)
-        flush_tlb_mask(&p2m->p2m_dirty_cpumask);
+        flush_tlb_mask(p2m->dirty_cpumask);
 
     paging_unlock(d);
 }
--- 2011-10-18.orig/xen/arch/x86/mm/p2m.c             2011-10-14 09:47:46.000000000 +0200
+++ 2011-10-18/xen/arch/x86/mm/p2m.c                  2011-10-21 09:28:33.000000000 +0200
@@ -81,7 +81,6 @@ static void p2m_initialise(struct domain
     p2m->default_access = p2m_access_rwx;
 
     p2m->cr3 = CR3_EADDR;
-    cpumask_clear(&p2m->p2m_dirty_cpumask);
 
     if ( hap_enabled(d) && (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) )
         ept_p2m_init(p2m);
@@ -102,6 +101,8 @@ p2m_init_nestedp2m(struct domain *d)
         d->arch.nested_p2m[i] = p2m = xzalloc(struct p2m_domain);
         if (p2m == NULL)
             return -ENOMEM;
+        if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+            return -ENOMEM;
         p2m_initialise(d, p2m);
         p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
         list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
@@ -118,6 +119,11 @@ int p2m_init(struct domain *d)
     p2m_get_hostp2m(d) = p2m = xzalloc(struct p2m_domain);
     if ( p2m == NULL )
         return -ENOMEM;
+    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+    {
+        xfree(p2m);
+        return -ENOMEM;
+    }
     p2m_initialise(d, p2m);
 
     /* Must initialise nestedp2m unconditionally
@@ -333,6 +339,9 @@ static void p2m_teardown_nestedp2m(struc
     uint8_t i;
 
     for (i = 0; i < MAX_NESTEDP2M; i++) {
+        if ( !d->arch.nested_p2m[i] )
+            continue;
+        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
         xfree(d->arch.nested_p2m[i]);
         d->arch.nested_p2m[i] = NULL;
     }
@@ -341,8 +350,12 @@ static void p2m_teardown_nestedp2m(struc
 void p2m_final_teardown(struct domain *d)
 {
     /* Iterate over all p2m tables per domain */
-    xfree(d->arch.p2m);
-    d->arch.p2m = NULL;
+    if ( d->arch.p2m )
+    {
+        free_cpumask_var(d->arch.p2m->dirty_cpumask);
+        xfree(d->arch.p2m);
+        d->arch.p2m = NULL;
+    }
 
     /* We must teardown unconditionally because
      * we initialise them unconditionally.
@@ -1200,7 +1213,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64
         if (p2m->cr3 == CR3_EADDR)
             hvm_asid_flush_vcpu(v);
         p2m->cr3 = cr3;
-        cpu_set(v->processor, p2m->p2m_dirty_cpumask);
+        cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
         p2m_unlock(p2m);
         nestedp2m_unlock(d);
         return p2m;
@@ -1217,7 +1230,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64
     p2m->cr3 = cr3;
     nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
-    cpu_set(v->processor, p2m->p2m_dirty_cpumask);
+    cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
     p2m_unlock(p2m);
     nestedp2m_unlock(d);
 
--- 2011-10-18.orig/xen/include/asm-x86/p2m.h         2011-10-21 09:24:51.000000000 +0200
+++ 2011-10-18/xen/include/asm-x86/p2m.h              2011-10-18 16:39:34.000000000 +0200
@@ -198,7 +198,7 @@ struct p2m_domain {
      * this p2m and those physical cpus whose vcpu's are in
      * guestmode. */
-    cpumask_t          p2m_dirty_cpumask;
+    cpumask_var_t      dirty_cpumask;
 
     struct domain     *domain;   /* back pointer to domain */

changeset:   23986:253073b522f8
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:23:05 2011 +0200

    allocate CPU sibling and core maps dynamically

    ... thus reducing the per-CPU data area size back to one page even when
    building for large NR_CPUS.

    At once eliminate the old __cpu{mask,list}_scnprintf() helpers.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23985:eef4641d6726
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:22:02 2011 +0200

    x86: allocate IRQ actions' cpu_eoi_map dynamically

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23984:07d303ff2757
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:21:09 2011 +0200

    eliminate direct assignments of CPU masks

    Use cpumask_copy() instead of direct variable assignments for copying
    CPU masks. While direct assignments are not a problem when both sides
    are variables actually defined as cpumask_t (except for possibly
    copying *much* more than would actually need to be copied), they must
    not happen when the original variable is of type cpumask_var_t (which
    may have less space allocated to it than a full cpumask_t). Eliminate
    as many such assignments as possible (in several cases it's even
    possible to collapse two operations [copy then clear one bit] into one
    [cpumask_andnot()]), and thus pave the way for reducing the allocation
    size in alloc_cpumask_var().

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
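The before/after shape of that change, as a rough sketch (illustrative only:
dst, src and cpu are placeholder names, and the helpers are the cpumask
accessors named in the message above, assumed to behave as in Xen's
cpumask.h):

    /* Sketch of the pattern changeset 23984 describes; not code from the
     * series itself. */
    static void mask_copy_without_cpu(cpumask_t *dst, const cpumask_t *src,
                                      unsigned int cpu)
    {
        /* Old pattern: a direct structure assignment copies all NR_CPUS
         * bits and is unsafe once dst may be a smaller, dynamically
         * allocated cpumask_var_t:
         *
         *     *dst = *src;
         *     cpumask_clear_cpu(cpu, dst);
         */

        /* New pattern: bounded copy, with the copy-then-clear-one-bit pair
         * collapsed into a single call, i.e. dst = src with cpu's bit
         * removed. */
        cpumask_andnot(dst, src, cpumask_of(cpu));
    }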
changeset:   23983:1a4223c62ee7
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:19:44 2011 +0200

    eliminate cpumask accessors referencing NR_CPUS

    ... in favor of using the new, nr_cpumask_bits-based ones.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23982:511d5e65a302
user:        Jan Beulich <jbeulich@suse.com>
date:        Fri Oct 21 09:17:42 2011 +0200

    introduce and use nr_cpu_ids and nr_cpumask_bits

    The former is the runtime equivalent of NR_CPUS (and users of NR_CPUS,
    where necessary, get adjusted accordingly), while the latter is for the
    sole use of determining the allocation size when dynamically allocating
    CPU masks (done later in this series). Adjust accessors to use either
    of the two to bound their bitmap operations - which one gets used
    depends on whether accessing the bits in the gap between nr_cpu_ids and
    nr_cpumask_bits is benign but more efficient.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>

changeset:   23981:6c583d35d76d
user:        Tim Deegan <tim@xen.org>
date:        Thu Oct 20 15:36:01 2011 +0100

    x86/mm/p2m: don't leak state if nested-p2m init fails.

    Signed-off-by: Tim Deegan <tim@xen.org>

========================================
commit 25378e0a76b282127e9ab8933a4defbc91db3862
Author: Roger Pau Monne <roger.pau@entel.upc.edu>
Date:   Thu Oct 6 18:38:08 2011 +0100

    remove blktap when building for NetBSD

    NetBSD has no blktap support, so remove the use of blktap if the OS is
    NetBSD.

    Signed-off-by: Roger Pau Monne <roger.pau@entel.upc.edu>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
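Taken together, the cpumask changesets above replace NR_CPUS-sized masks
embedded in structures with pointers that are allocated at initialisation
time (sized from the runtime nr_cpumask_bits) and freed at teardown, with
every allocation failure handled. The following is a minimal, self-contained
model of that pattern, not Xen code: the my_*/demo_* names are simplified
stand-ins for zalloc_cpumask_var(), free_cpumask_var() and the p2m_domain
field shown in the patch above.

    /* Simplified model of dynamically allocated, runtime-sized CPU masks.
     * Stand-in names only; the real Xen helpers are zalloc_cpumask_var(),
     * free_cpumask_var() and cpumask_set_cpu(). */
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef unsigned long *cpumask_var_t;      /* stand-in for Xen's type */

    static unsigned int nr_cpumask_bits = 64;  /* runtime value, cf. 23982 */

    #define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
    #define MASK_LONGS ((nr_cpumask_bits + BITS_PER_LONG - 1) / BITS_PER_LONG)

    /* Allocate a zeroed mask sized from nr_cpumask_bits, not NR_CPUS. */
    static int my_zalloc_cpumask_var(cpumask_var_t *mask)
    {
        *mask = calloc(MASK_LONGS, sizeof(unsigned long));
        return *mask != NULL;
    }

    static void my_free_cpumask_var(cpumask_var_t mask)
    {
        free(mask);
    }

    static void my_cpumask_set_cpu(unsigned int cpu, cpumask_var_t mask)
    {
        mask[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
    }

    /* Hypothetical structure mirroring the p2m_domain change: the embedded
     * cpumask_t becomes a pointer, allocated in init, freed in teardown. */
    struct demo_domain {
        cpumask_var_t dirty_cpumask;
    };

    static int demo_init(struct demo_domain *d)
    {
        if ( !my_zalloc_cpumask_var(&d->dirty_cpumask) )
            return -1;                /* cf. the -ENOMEM paths in p2m_init() */
        return 0;
    }

    static void demo_teardown(struct demo_domain *d)
    {
        my_free_cpumask_var(d->dirty_cpumask);  /* cf. p2m_final_teardown() */
        d->dirty_cpumask = NULL;
    }

    int main(void)
    {
        struct demo_domain d;

        if ( demo_init(&d) )
            return EXIT_FAILURE;
        my_cpumask_set_cpu(3, d.dirty_cpumask);
        printf("mask word 0 = %#lx\n", d.dirty_cpumask[0]);
        demo_teardown(&d);
        return EXIT_SUCCESS;
    }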
Ian Campbell
2011-Oct-25 08:23 UTC
Re: [Xen-devel] [xen-unstable test] 9593: regressions - trouble: broken/fail/pass
This doesn't appear to be a quirk of the test system's log generation; the
patch appears to have ended up in the changelog. There is some stuff actually
applied too, so perhaps it is OK, but someone who knows what was supposed to
be in there should probably double-check!

On Mon, 2011-10-24 at 20:40 +0100, xen.org wrote:
>
> changeset:   23987:2682094bc243
> user:        Jan Beulich <jbeulich@suse.com>
> date:        Fri Oct 21 09:42:47 2011 +0200
>
>     x86/p2m: allocate CPU masks dynamically
>
>     Signed-off-by: Jan Beulich <jbeulich@suse.com>
>     Acked-by: Tim Deegan <tim@xen.org>
>     Acked-by: Keir Fraser <keir@xen.org>
>
> [... patch body snipped; quoted in full in the flight report above ...]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Jan Beulich
2011-Nov-02 10:00 UTC
Re: [Xen-devel] [xen-unstable test] 9593: regressions - trouble: broken/fail/pass
>>> On 25.10.11 at 10:23, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> This doesn't appear to be a quirk of the test system's log generation;
> the patch appears to have ended up in the changelog. There is some stuff
> actually applied too, so perhaps it is OK, but someone who knows what was
> supposed to be in there should probably double-check!

Yeah, I somehow managed to leave the patch body in the file that was to
become the commit message. I'm sorry for that, and as I realized this only
after pushing I also didn't know how to rectify it.

Jan

> On Mon, 2011-10-24 at 20:40 +0100, xen.org wrote:
>> changeset:   23987:2682094bc243
>> user:        Jan Beulich <jbeulich@suse.com>
>> date:        Fri Oct 21 09:42:47 2011 +0200
>>
>>     x86/p2m: allocate CPU masks dynamically
>>
>>     Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>     Acked-by: Tim Deegan <tim@xen.org>
>>     Acked-by: Keir Fraser <keir@xen.org>
>>
>> [... patch body snipped; quoted in full in the flight report above ...]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel