flight 18114 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/18114/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-winxpsp3-vcpus1 10 guest-saverestore.2 fail REGR. vs. 18111
Tests which did not succeed, but are not blocking:
test-amd64-amd64-xl-pcipt-intel 9 guest-start fail never pass
test-amd64-i386-xend-winxpsp3 16 leak-check/check fail never pass
test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop fail never pass
test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop fail never pass
test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop fail never pass
test-amd64-i386-xl-win7-amd64 13 guest-stop fail never pass
test-amd64-amd64-xl-win7-amd64 13 guest-stop fail never pass
test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop fail never pass
test-amd64-amd64-xl-winxpsp3 13 guest-stop fail never pass
test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop fail never pass
test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail never pass
test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail never pass
version targeted for testing:
xen 2caac1caa19bdaeb9ab14b2baf1342e00c4d0495
baseline version:
xen 61c6dfce3296da2643c4c4f90eaab6fa3c1cf8b3
------------------------------------------------------------
People who touched revisions under test:
Ian Campbell <ian.campbell@citrix.com>
Julien Grall <julien.grall@linaro.org>
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------
jobs:
build-amd64 pass
build-armhf pass
build-i386 pass
build-amd64-oldkern pass
build-i386-oldkern pass
build-amd64-pvops pass
build-i386-pvops pass
test-amd64-amd64-xl pass
test-amd64-i386-xl pass
test-amd64-i386-rhel6hvm-amd pass
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 fail
test-amd64-amd64-xl-win7-amd64 fail
test-amd64-i386-xl-win7-amd64 fail
test-amd64-i386-xl-credit2 pass
test-amd64-amd64-xl-pcipt-intel fail
test-amd64-i386-rhel6hvm-intel pass
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel pass
test-amd64-i386-xl-multivcpu pass
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-xl-sedf-pin pass
test-amd64-amd64-pv pass
test-amd64-i386-pv pass
test-amd64-amd64-xl-sedf pass
test-amd64-i386-xl-qemut-winxpsp3-vcpus1 fail
test-amd64-i386-xl-winxpsp3-vcpus1 fail
test-amd64-i386-xend-qemut-winxpsp3 fail
test-amd64-amd64-xl-qemut-winxpsp3 fail
test-amd64-amd64-xl-qemuu-winxpsp3 fail
test-amd64-i386-xend-winxpsp3 fail
test-amd64-amd64-xl-winxpsp3 fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
Not pushing.
------------------------------------------------------------
commit 2caac1caa19bdaeb9ab14b2baf1342e00c4d0495
Author: Julien Grall <julien.grall@linaro.org>
Date: Thu Jun 13 15:52:49 2013 +0100
xen/arm: Use the right GICD register to initialize IRQs routing
Currently, IRQ routing is initialized using the wrong register, which
overwrites the interrupt configuration register (ICFGRn).
Reported-by: Sander Bogaert <sander.bogaert@elis.ugent.be>
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
commit c4cf4c30a8a282ee874a0ab8aed43493cf6a928c
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue Jun 4 16:18:17 2013 +0100
xen/arm: define PAGE_HYPERVISOR as WRITEALLOC
Use stage 1 attribute indexes for PAGE_HYPERVISOR; the appropriate one
for normal memory hypervisor mappings in Xen is WRITEALLOC.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
commit 97e3e84c6dfa94f09b97d21454d4252fc3c190d8
Author: Ian Campbell <ian.campbell@citrix.com>
Date: Tue Jun 4 11:54:10 2013 +0100
xen/arm64: fix stack dump in show_trace
On aarch64 the frame pointer points to the next frame pointer and the return
address is in the previous stack slot (so below on the downward growing stack,
therefore above in memory):
|<RETURN ADDR> ^addresses grow up
FP -> |<NEXT FP> |
| |
v |
stack grows down.
This is contrary to aarch32 where the frame pointer points to the return
address and the next frame pointer is the next stack slot (so above on the
downward growing stack, below in memory):
FP -> |<RETURN ADDR> ^addresses grow up
|<NEXT FP> |
| |
v |
stack grows down.
In addition print out LR as part of the trace, since it may contain the
penultimate return address e.g. if the ultimate function is a leaf function.
Lastly nuke some unnecessary braces.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
commit 0c6781ee6a5f0df55eab6be8a92853d3154c0c7b
Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu Jun 13 16:09:03 2013 +0200
tmem: Don't use map_domain_page for long-life-time pages
When using tmem with Xen 4.3 (and debug build) we end up with:
(XEN) Xen BUG at domain_page.c:143
(XEN) ----[ Xen-4.3-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU: 3
(XEN) RIP: e008:[<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
..
(XEN) Xen call trace:
(XEN) [<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
(XEN) [<ffff82c4c01373de>] cli_get_page+0x15e/0x17b
(XEN) [<ffff82c4c01377c4>] tmh_copy_from_client+0x150/0x284
(XEN) [<ffff82c4c0135929>] do_tmem_put+0x323/0x5c4
(XEN) [<ffff82c4c0136510>] do_tmem_op+0x5a0/0xbd0
(XEN) [<ffff82c4c022391b>] syscall_enter+0xeb/0x145
(XEN)
A bit of debugging revealed that map_domain_page and unmap_domain_page
are meant for short-lifetime mappings, and that those mappings are finite.
In a 2-VCPU guest we have only 32 entries, and once we have exhausted those
we trigger the BUG_ON condition.
The two functions - tmh_persistent_pool_page_[get,put] - are used by the
xmem_pool when xmem_pool_[alloc,free] are called. These xmem_pool_* functions
are wrapped in macros and functions - the entry points are tmem_malloc
and tmem_page_alloc. In both cases the users are in the hypervisor and they
do not seem to suffer from using the hypervisor virtual addresses.
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
(qemu changes not included)