flight 12876 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/12876/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail pass in 12860
 test-amd64-i386-win 7 windows-install fail pass in 12860
 test-i386-i386-xl-win 7 windows-install fail pass in 12860
 test-amd64-i386-qemuu-rhel6hvm-amd 7 redhat-install fail in 12860 pass in 12876

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 9 guest-start.2 fail like 12858
 test-amd64-amd64-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail in 12860 like 12858

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel 9 guest-start fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 9 guest-start.2 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop fail never pass
 test-amd64-amd64-win 16 leak-check/check fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check fail never pass
 test-amd64-i386-win-vcpus1 16 leak-check/check fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check fail never pass
 test-i386-i386-xl-winxpsp3 13 guest-stop fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop fail never pass
 test-amd64-amd64-xl-win 13 guest-stop fail never pass
 test-i386-i386-win 16 leak-check/check fail never pass
 test-amd64-i386-win 16 leak-check/check fail in 12860 never pass
 test-i386-i386-xl-win 13 guest-stop fail in 12860 never pass

version targeted for testing:
 xen                  f8279258e3c9
baseline version:
 xen                  cd4dd23a831d

jobs:
 build-amd64                                          pass
 build-i386                                           pass
 build-amd64-oldkern                                  pass
 build-i386-oldkern                                   pass
 build-amd64-pvops                                    pass
 build-i386-pvops                                     pass
 test-amd64-amd64-xl                                  pass
 test-amd64-i386-xl                                   pass
 test-i386-i386-xl                                    pass
 test-amd64-i386-rhel6hvm-amd                         fail
 test-amd64-i386-qemuu-rhel6hvm-amd                   fail
 test-amd64-amd64-xl-qemuu-win7-amd64                 fail
 test-amd64-amd64-xl-win7-amd64                       fail
 test-amd64-i386-xl-win7-amd64                        fail
 test-amd64-i386-xl-credit2                           pass
 test-amd64-amd64-xl-pcipt-intel                      fail
 test-amd64-i386-rhel6hvm-intel                       fail
 test-amd64-i386-qemuu-rhel6hvm-intel                 fail
 test-amd64-i386-xl-multivcpu                         pass
 test-amd64-amd64-pair                                pass
 test-amd64-i386-pair                                 pass
 test-i386-i386-pair                                  pass
 test-amd64-amd64-xl-sedf-pin                         fail
 test-amd64-amd64-pv                                  pass
 test-amd64-i386-pv                                   pass
 test-i386-i386-pv                                    pass
 test-amd64-amd64-xl-sedf                             pass
 test-amd64-i386-win-vcpus1                           fail
 test-amd64-i386-xl-win-vcpus1                        fail
 test-amd64-i386-xl-winxpsp3-vcpus1                   fail
 test-amd64-amd64-win                                 fail
 test-amd64-i386-win                                  fail
 test-i386-i386-win                                   fail
 test-amd64-amd64-xl-win                              fail
 test-i386-i386-xl-win                                fail
 test-amd64-amd64-xl-qemuu-winxpsp3                   fail
 test-i386-i386-xl-qemuu-winxpsp3                     fail
 test-amd64-i386-xend-winxpsp3                        fail
 test-amd64-amd64-xl-winxpsp3                         fail
 test-i386-i386-xl-winxpsp3                           fail

------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc.
are available at
 http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
 http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=f8279258e3c9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e ' use Osstest; readconfigonly(); print $c{Repos} or die $!; '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable f8279258e3c9
+ branch=xen-unstable
+ revision=f8279258e3c9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e ' use Osstest; readconfigonly(); print $c{Repos} or die $!; '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r f8279258e3c9 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 8 changesets with 53 changes to 53 files
On Tue, 2012-05-15 at 08:12 +0100, xen.org wrote:
> flight 12876 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/12876/
>
> Failures :-/ but no regressions.
>
> Tests which are failing intermittently (not blocking):

(re-ordered to get the short ones over with first)

> test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail pass in 12860

Known intermittent bug in non-default scheduler. Don't care for 4.2.

> test-amd64-i386-qemuu-rhel6hvm-amd 7 redhat-install fail in 12860 pass in 12876

qemuu is still a bit flakey. It's not the default qemu for 4.2 but would
still be nice to investigate.

> test-i386-i386-xl-win 7 windows-install fail pass in 12860

The test harness seems to die ~11 mins after starting the guest. The
timeout was 7000 seconds, which is 100+ mins.

2012-05-15 03:22:39 Z executing ssh ... root@10.80.249.56 xl create /etc/xen/win.guest.osstest.cfg
WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019bca4
  TOTAL:         0000000000000000->000000001f800000
  ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000000fb
  1GB PAGES: 0x0000000000000000
libxl: error: libxl.c:3162:libxl_sched_credit_domain_set: Cpu weight out of range, valid values are within range from 1 to 65535
Parsing config file /etc/xen/win.guest.osstest.cfg
Daemon running with PID 2548
2012-05-15 03:22:40 Z guest win.guest.osstest 5a:36:0e:4c:00:1d 8936 link/ip/tcp: waiting 7000s...
2012-05-15 03:22:40 Z guest win.guest.osstest 5a:36:0e:4c:00:1d 8936 link/ip/tcp: no active lease (waiting)
...
Died at Osstest.pm line 1833, <GEN1332> line 2553.
...
+ rc=255
+ date -u '+%Y-%m-%d %H:%M:%S Z exit status 255'
2012-05-15 03:33:45 Z exit status 255

The screenshot shows that the guest was in the middle of an XP boot.
Hopefully a harness problem, but otherwise needs looking at for 4.2?

> test-amd64-i386-win 7 windows-install fail pass in 12860

This timed out after ~2 hours. The screenshot shows that the Windows
installer was still at the blue and yellow text mode stage, and a fairly
early looking one at that. It seems rather like the guest has hung, or
at least stalled. Needs investigation for 4.2?

Towards the end of the host serial log is a stack trace which suggests
that we were at least in the guest context.
May 15 01:56:45.470932 (XEN) *** Dumping CPU1 host state: ***
May 15 01:56:45.470966 (XEN) ----[ Xen-4.2-unstable  x86_64  debug=y  Not tainted ]----
May 15 01:56:45.478913 (XEN) CPU:    1
May 15 01:56:45.478937 (XEN) RIP:    e008:[<ffff82c4801cd933>] vmx_vmcs_enter+0xc5/0xc6
May 15 01:56:45.478977 (XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
May 15 01:56:45.490925 (XEN) rax: ffff8301281bff18   rbx: ffff8300cfd19000   rcx: 0000000000010093
May 15 01:56:45.498909 (XEN) rdx: 0000000000000093   rsi: 0000000000000000   rdi: ffff8300cfd19000
May 15 01:56:45.498947 (XEN) rbp: ffff8301281bf5f8   rsp: ffff8301281bf5b0   r8:  ffff82f602289d68
May 15 01:56:45.510917 (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
May 15 01:56:45.518908 (XEN) r12: 0000000000000005   r13: 000000000000ffff   r14: 0000000000000000
May 15 01:56:45.518945 (XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000000426f0
May 15 01:56:45.530913 (XEN) cr3: 000000010ce45000   cr2: 0000000000000000
May 15 01:56:45.530947 (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
May 15 01:56:45.538921 (XEN) Xen stack trace from rsp=ffff8301281bf5b0:
May 15 01:56:45.538954 (XEN)    ffff82c4801d0405 ffff8301281bf688 00000093cfd19000 ffff8301281bf5f8
May 15 01:56:45.550915 (XEN)    ffff8300cfd19000 0000000000000005 0000000000000005 0000000080010021
May 15 01:56:45.558914 (XEN)    ffff8301281bf628 ffff8301281bf6d8 ffff82c4801d227c 00000000000244e2
May 15 01:56:45.566906 (XEN)    0000000000000024 ffff8301281bf688 00000000801e0cc8 0000ffff009b2000
May 15 01:56:45.566945 (XEN)    0000000000020000 0000ffff009322f3 0000000000022f30 0000ffff009322f3
May 15 01:56:45.578924 (XEN)    0000000000022f30 0000ffff00930000 0000000000000000 0000ffff00930030
May 15 01:56:45.586968 (XEN)    0000000000000300 0000ffff00930000 0000000000000000 00000077008b0028
May 15 01:56:45.587027 (XEN)    0000000000024460 ffff8301281bf708 ffff8301281bf700 ffff8301281bfdd8
May 15 01:56:45.598935 (XEN)    0000000080000011 ffff8300cfd19000 0000000000000010 00000000001144eb
May 15 01:56:45.607211 (XEN)    0000000000000039 ffff8301281bf738 ffff82c4801b0b76 ffff830100000001
May 15 01:56:45.607248 (XEN)    ffff8300cfd19000 ffff8301281bf728 0000000000000007 ffff8301281bfdd8
May 15 01:56:45.618930 (XEN)    0000000000000000 0000000080000011 0000000000000000 0000000000000000
May 15 01:56:45.626921 (XEN)    ffff82c480265500 ffff8301281bf778 ffff82c4801ab15a ffff8301281bf778
May 15 01:56:45.626960 (XEN)    ffff8301281bfdd8 ffff8301281bf778 ffff82c480189e69 ffff8301281bfdd8
May 15 01:56:45.638934 (XEN)    0000000000000022 ffff8301281bfcd8 ffff82c480198204 0000000100000008
May 15 01:56:45.646931 (XEN)    0000000000000000 01ff82f6025a14c0 0000000000000000 ffff8301281bff00
May 15 01:56:45.646969 (XEN)    ffff8301281bff18 ffff8301281bf800 ffff82c4801a5182 01ff8300cfe6b000
May 15 01:56:45.658921 (XEN)    0000000800000000 ffff8301281bffc0 000000008016fec4 0000000100000008
May 15 01:56:45.666915 (XEN)    0001000000000000 0100000000000000 0000000000000002 ffff8301281b0000
May 15 01:56:45.666951 (XEN)    0000000000000001 0000000000000048 00000000001144d5 0000000000000002
May 15 01:56:45.678914 (XEN) Xen call trace:
May 15 01:56:45.678940 (XEN)    [<ffff82c4801cd933>] vmx_vmcs_enter+0xc5/0xc6
May 15 01:56:45.686917 (XEN)    [<ffff82c4801d227c>] vmx_update_guest_cr+0x252/0x663
May 15 01:56:45.686954 (XEN)    [<ffff82c4801b0b76>] hvm_set_cr0+0x562/0x5e8
May 15 01:56:45.698910 (XEN)    [<ffff82c4801ab15a>] hvmemul_write_cr+0x91/0xd4
May 15 01:56:45.698945 (XEN)    [<ffff82c480198204>] x86_emulate+0xdba9/0xfc89
May 15 01:56:45.706917 (XEN)    [<ffff82c4801aad74>] hvm_emulate_one+0x120/0x1af
May 15 01:56:45.706953 (XEN)    [<ffff82c4801cc796>] realmode_emulate_one+0x3b/0x21a
May 15 01:56:45.718907 (XEN)    [<ffff82c4801cca75>] vmx_realmode+0x100/0x25b
May 15 01:56:45.718942 (XEN)
May 15 01:56:45.718968 (XEN) *** Dumping CPU1 guest state (d1:v0): ***
May 15 01:56:45.726921 (XEN) ----[ Xen-4.2-unstable  x86_64  debug=y  Not tainted ]----
May 15 01:56:45.726959 (XEN) CPU:    1
May 15 01:56:45.738920 (XEN) RIP:    2000:[<0000000000000252>]
May 15 01:56:45.738950 (XEN) RFLAGS: 0000000000010086   CONTEXT: hvm guest
May 15 01:56:45.738987 (XEN) rax: 0000000080000011   rbx: 0000000000067ff2   rcx: 00000000000000b6
May 15 01:56:45.746923 (XEN) rdx: 0000000000000001   rsi: 0000000000011d68   rdi: 0000000006000000
May 15 01:56:45.758908 (XEN) rbp: 0000000000060b88   rsp: 0000000000067ff2   r8:  0000000000000000
May 15 01:56:45.758945 (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
May 15 01:56:45.766919 (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
May 15 01:56:45.778914 (XEN) r15: 0000000000000000   cr0: 0000000080000011   cr4: 0000000000000000
May 15 01:56:45.778951 (XEN) cr3: 0000000000039000   cr2: 0000000000000000
May 15 01:56:45.786916 (XEN) ds: 0060   es: 0000   fs: 0030   gs: 0000   ss: 22f3   cs: 2000
May 15 01:56:45.786952 (XEN)
>>> On 15.05.12 at 11:27, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-05-15 at 08:12 +0100, xen.org wrote:
>> test-amd64-i386-qemuu-rhel6hvm-amd 7 redhat-install fail in 12860 pass in 12876
>
> qemuu is still a bit flakey. It's not the default qemu for 4.2 but would
> still be nice to investigate.

Isn't it being made the default at least for pv guests? After having fixed
the gntdev driver in our kernels and the pvops-centric shortcomings in
both qemu-s, the qdisk backend still looks somewhat unreliable in
testing that Olaf has performed. We haven't narrowed it so far, but
a resulting question of course is whether using that backend (and/or
qemu-upstream) by default for any guests is a good idea.

Jan
On Tue, 2012-05-15 at 10:50 +0100, Jan Beulich wrote:
> >>> On 15.05.12 at 11:27, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2012-05-15 at 08:12 +0100, xen.org wrote:
> >> test-amd64-i386-qemuu-rhel6hvm-amd 7 redhat-install fail in 12860 pass in 12876
> >
> > qemuu is still a bit flakey. It's not the default qemu for 4.2 but would
> > still be nice to investigate.
>
> Isn't it being made the default at least for pv guests?

Right, yes. I should have made it clear I was talking about HVM here.

> After having fixed
> the gntdev driver in our kernels and the pvops-centric shortcomings in
> both qemu-s, the qdisk backend still looks somewhat unreliable in
> testing that Olaf has performed. We haven't narrowed it so far, but
> a resulting question of course is whether using that backend (and/or
> qemu-upstream) by default for any guests is a good idea.

CCing Stefano who made the patch to have PV guests use this guy. Please
do share details when you have them.

Ian.
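(To make the HVM/PV distinction above easier to follow: which qemu an HVM
guest gets is an explicit per-guest setting in xl, so a qemuu setup like the
failing rhel6hvm test's can be tried independently of whatever the 4.2
default ends up being. The fragment below is only an illustrative sketch,
not taken from the osstest configs; the guest name and disk path are
invented.)

    # Hypothetical xl HVM guest config fragment; names and paths are examples only.
    builder              = "hvm"
    name                 = "qemuu-repro"
    memory               = 512
    # "qemu-xen" selects the upstream QEMU device model (qemuu);
    # "qemu-xen-traditional" selects the older device model, which is the
    # HVM default being discussed in this thread.
    device_model_version = "qemu-xen"
    disk                 = [ "phy:/dev/vg0/qemuu-repro,hda,w" ]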
Stefano Stabellini
2012-May-15 10:27 UTC
Re: [xen-unstable test] 12876: tolerable FAIL - PUSHED
On Tue, 15 May 2012, Ian Campbell wrote:
> > After having fixed
> > the gntdev driver in our kernels and the pvops-centric shortcomings in
> > both qemu-s, the qdisk backend still looks somewhat unreliable in
> > testing that Olaf has performed. We haven't narrowed it so far, but
> > a resulting question of course is whether using that backend (and/or
> > qemu-upstream) by default for any guests is a good idea.
>
> CCing Stefano who made the patch to have PV guests use this guy. Please
> do share details when you have them.

I would prefer precise bug reports, and possibly patches, to "somewhat
unreliable" :-)

Please note that the userspace disk backend is basically the same in
upstream QEMU and qemu-xen-traditional, so switching back to the old
QEMU for pv guests wouldn't improve anything.
>>> On 15.05.12 at 12:27, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Tue, 15 May 2012, Ian Campbell wrote:
>> > After having fixed
>> > the gntdev driver in our kernels and the pvops-centric shortcomings in
>> > both qemu-s, the qdisk backend still looks somewhat unreliable in
>> > testing that Olaf has performed. We haven't narrowed it so far, but
>> > a resulting question of course is whether using that backend (and/or
>> > qemu-upstream) by default for any guests is a good idea.
>>
>> CCing Stefano who made the patch to have PV guests use this guy. Please
>> do share details when you have them.
>
> I would prefer precise bug reports, and possibly patches, to "somewhat
> unreliable" :-)

Of course. But we barely got past all the basic issues...

> Please note that the userspace disk backend is basically the same in
> upstream QEMU and qemu-xen-traditional,

I understand that, ...

> so switching back to the old
> QEMU for pv guests wouldn't improve anything.

... and I didn't mean to suggest that. I was rather trying to hint
towards continuing to use blkback as default backend.

Jan
Stefano Stabellini
2012-May-15 11:25 UTC
Re: [xen-unstable test] 12876: tolerable FAIL - PUSHED
On Tue, 15 May 2012, Jan Beulich wrote:
> >>> On 15.05.12 at 12:27, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Tue, 15 May 2012, Ian Campbell wrote:
> >> > After having fixed
> >> > the gntdev driver in our kernels and the pvops-centric shortcomings in
> >> > both qemu-s, the qdisk backend still looks somewhat unreliable in
> >> > testing that Olaf has performed. We haven't narrowed it so far, but
> >> > a resulting question of course is whether using that backend (and/or
> >> > qemu-upstream) by default for any guests is a good idea.
> >>
> >> CCing Stefano who made the patch to have PV guests use this guy. Please
> >> do share details when you have them.
> >
> > I would prefer precise bug reports, and possibly patches, to "somewhat
> > unreliable" :-)
>
> Of course. But we barely got past all the basic issues...
>
> > Please note that the userspace disk backend is basically the same in
> > upstream QEMU and qemu-xen-traditional,
>
> I understand that, ...
>
> > so switching back to the old
> > QEMU for pv guests wouldn't improve anything.
>
> ... and I didn't mean to suggest that. I was rather trying to hint
> towards continuing to use blkback as default backend.

blkback is still the default backend for physical partitions and LVM
volumes, but without direct_IO support in loop.c it is unsafe for files:
I wouldn't want to run my VM on a disk that is basically stored in RAM.
Also, we don't really have a choice when it comes to QCOW and QCOW2
images.
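(A concrete illustration of that backend split, expressed as xl disk
stanzas. This is a hedged sketch only; the device and image paths are
invented, and whether blkback or qdisk actually serves the disk is
libxl's backend selection, not something these lines force.)

    # Hypothetical xl disk stanzas; paths are examples only.
    #
    # Raw LVM volume: served by the in-kernel blkback backend, still the
    # default for physical block devices as described above.
    disk = [ "phy:/dev/vg0/guest-root,xvda,w" ]
    #
    # QCOW2 image: has to go through the userspace qdisk backend in QEMU,
    # since blkback only handles block devices and loop-mounting the file
    # lacks direct I/O (writes effectively cached in dom0 RAM).
    disk = [ "format=qcow2, vdev=xvda, access=rw, target=/var/lib/xen/images/guest.qcow2" ]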