changeset:   9226:7d8efd4f1ac7
tag:         tip
user:        kaf24@firebug.cl.cam.ac.uk
date:        Tue Mar 14 08:18:35 2006 +0100
summary:     Initialise blkfront_info to zeroes after allocating it.

Hardware: x460

******************** x86_32 (no PAE): ***************************

* dom0: SLES9 SP2
* dom0 boots fine
* xend starts without problem

--- Linux HVM domain status: ---

ISSUES:

* xm-test does not complete.
* Able to boot HVM domains but cannot read from the console. It
  complains about not being able to read from xenstored. The boot
  process gets stuck after recognizing the pcnet32 driver.
* Bug 560 is present.

SUMMARY:

N/A

--- Windows HVM domain status: ---

* I'll get to testing the other Windows OSes when I automate the test cases.

+--------------------------------------------------------------------+
| Category   | Test Case                    |           OS           |
|            |                              |------------------------|
|            |                              | Win2k | Win2k3 | WinXP |
+--------------------------------------------------------------------+
| Networking | Open IE                      |       |        | pass  |
|            | Ping remote sys.             |       |        | pass  |
|            | BSO Authentication           |       |        | pass  |
|            | Telnet to remote sys.        |       |        | pass  |
|            | Copy 512MB from remote sys.  |       |        | pass  |
|            | Start Media Player           |       |        | pass  |
| Graphics   | Start Solitaire              |       |        | pass  |
|            | Run commands in CMD.exe      |       |        | pass  |
| Disk I/O   | Load a large file in Notepad |       |        | pass  |
| Xen        | Concurrent Win. HVM doms.    |       |        | pass  |
|            | Dom shutdown/destruction     |       |        | pass  |
+--------------------------------------------------------------------+

NOTE: for test case details, go to
https://ltc.linux.ibm.com/wiki/XenFullVirtTestPlan
I am working on automating these test cases.

ISSUES:

N/A

SUMMARY:

N/A

********************** x86_64: *********************************

* dom0: SLES9 SP2
* dom0 boots fine
* xend starts without problem
* able to boot HVM domains.

--- Linux HVM domain status: ---

ISSUES:

N/A

SUMMARY:

Xm-test execution summary:
  PASS:  69
  FAIL:  6
  XPASS: 1
  XFAIL: 2

Details:

 FAIL: 11_create_concurrent_pos
       [1] Failed to create domain

 FAIL: 12_create_concurrent_stress_pos
       Failed to start 12_create_concurrent_stress_pos-1142373110

XFAIL: 02_network_local_ping_pos
       ping loopback failed for size 65507. ping eth0 failed for
       size 65507.

XFAIL: 11_network_domU_ping_pos
       Failed to create domain

 FAIL: 12_network_domU_tcp_pos
       Failed to create domain

 FAIL: 13_network_domU_udp_pos
       Failed to create domain

--- Windows HVM domain status: ---

* I'll get Windows testing done as soon as I can.

ISSUES:

N/A

SUMMARY:

N/A

regards,
For the xm-test results, I have reported 02_network_local_ping_pos to
Bugzilla as #572; the other failures are caused by using one image file
to start multiple VMX domains, which is basically not allowed.

I have a suggestion: could xm-test change the name "domU" in the test
case names to a more general term, since xm-test now supports both domU
and HVM?

>-----Original Message-----
>From: xen-devel-bounces@lists.xensource.com
>[mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Rick Gonzalez
>Sent: 15 March 2006 6:49
>To: xen-devel@lists.xensource.com
>Subject: [Xen-devel] Daily Xen-HVM Builds: cs9226
>
>[...]
On Wed, 2006-03-15 at 17:24 +0800, Yu, Ping Y wrote:
> For the xm-test results, I have reported 02_network_local_ping_pos to
> Bugzilla as #572; the other failures are caused by using one image
> file to start multiple VMX domains, which is basically not allowed.
> I have a suggestion: could xm-test change the name "domU" in the test
> case names to a more general term, since xm-test now supports both
> domU and HVM?

The network ping tests that fail with the specific packet size are a
known issue and not HVM-related, so I don't think we need another bug
to track it.

Using a single disk image for multiple concurrent domains currently
isn't an issue, since we aren't writing to the disk image. You are
right, however, that this needs to change if we're to support new tests
like block-attach and I/O tests, where we need to write to the image.

I think Dan Smith had a good suggestion of using device-mapper to
create disk images for xm-test HVM domains. Device-mapper will let us
create a single boot image and then attach writable private disks to
HVM domains (a rough sketch is in the P.S. below). It will also let us
replace lilo with grub and make things neater.

This is on my list of todos - although I don't know when I'll get to
it. I'm currently working on networking infrastructure and tests. If
you'd like to do the device-mapper addition, that would be great.

Thanks,

Dan
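P.S. For illustration, a minimal device-mapper setup along those lines
might look like the following. This is only a sketch under assumed
names (base.img, dom1-cow.img, /dev/loop0-1, xmtest-dom1 are all made
up), not actual xm-test code:

  # shared, read-only base image for all test domains
  losetup -r /dev/loop0 /path/to/base.img
  # small per-domain COW file that absorbs the writes
  losetup /dev/loop1 /path/to/dom1-cow.img
  # writable snapshot device backed by the shared base;
  # table format: start length snapshot <origin> <cow> <persistent> <chunksize>
  echo "0 $(blockdev --getsz /dev/loop0) snapshot /dev/loop0 /dev/loop1 p 8" \
      | dmsetup create xmtest-dom1
  # the domain config can then use phy:/dev/mapper/xmtest-dom1

Each domain would get its own COW file, so they can all write without
touching the shared base image.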
Daniel,

Currently, HVM supports multiple disks in the QEMU configuration; you
can add extra disks through the "disk" option, for example:

disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,w',
         'file:/var/images/min-el3-i386_2.img,ioemu:hdb,w' ]

Does that meet your requirement?

The current problem is that a strict check was added to VBD which
forbids using one image for multiple HVM domains, and all those test
cases in xm-test fail. See the information below:

[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Error: Device 768 (vbd) could not be connected.
File /opt/vmm/control_panel/xm-test/ramdisk/disk.img is loopback-mounted
through /dev/loop0, which is mounted in a guest domain, and so cannot be
mounted now.
Failed to create test domain because:
Using config file "/tmp/xm-test.conf".
Error: Device 768 (vbd) could not be connected.
File /opt/vmm/control_panel/xm-test/ramdisk/disk.img is loopback-mounted
through /dev/loop0, which is mounted in a guest domain, and so cannot be
mounted now.

REASON: Failed to create domain

>-----Original Message-----
>From: Daniel Stekloff [mailto:dsteklof@us.ibm.com]
>Sent: 15 March 2006 23:09
>To: Yu, Ping Y
>Cc: Rick Gonzalez; xen-devel@lists.xensource.com
>Subject: RE: [Xen-devel] Daily Xen-HVM Builds: cs9226
>
>[...]
On Thu, 2006-03-16 at 13:25 +0800, Yu, Ping Y wrote:
> Daniel,
>
> Currently, HVM supports multiple disks in the QEMU configuration; you
> can add extra disks through the "disk" option, for example:
> disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,w',
>          'file:/var/images/min-el3-i386_2.img,ioemu:hdb,w' ]
> Does that meet your requirement?

My requirement for what? I know HVM domains can support more than one
disk image; the idea is to get xm-test to automate creating disk images
for testing HVM domains. My plan is eventually to use device-mapper to
present a read-only root image that all the xm-test HVM test domains
share, and then add writable partitions to test domains as needed.

> The current problem is that a strict check was added to VBD which
> forbids using one image for multiple HVM domains, and all those test
> cases in xm-test fail. See the information below:
>
> [dom0] Running `xm create /tmp/xm-test.conf'
> Using config file "/tmp/xm-test.conf".
> Error: Device 768 (vbd) could not be connected.
> [...]
> REASON: Failed to create domain

The vbd issue wasn't that only one image could be loaded for one HVM
domain, if that's what you're saying. The issue was exceeding the
number of loopback devices on the system. Qemu-dm loads disk images
using loopback devices, so the number of disk images that can be
mounted is limited to the number of configured loopback devices.

There was a bug in 11_create_concurrent_pos.py in xm-test: it creates
as many concurrent domains as possible based on memory, with a cutoff
of 50. That is fine for paravirt, but it broke for HVM because of the
loopback device limit. I have patched the test and it should work for
you. I have run 11_create_concurrent_pos.py on my x366, where I set
the kernel option max_loop=256, and was able to load 50 disk images
all using the same image file.

Thanks,

Dan
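P.S. In case it helps, this is roughly how to check and raise the
loopback limit (a sketch; 256 is just an example value):

  # see how many loop devices exist and which are busy
  ls /dev/loop* | wc -l
  losetup -a
  # loop compiled into the kernel: pass max_loop=256 on the kernel
  # command line in the bootloader config and reboot
  # loop built as a module: reload it with more devices
  modprobe -r loop
  modprobe loop max_loop=256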
>On Thu, 2006-03-16 at 13:25 +0800, Yu, Ping Y wrote:
>> Daniel,
>>
>> Currently, HVM supports multiple disks in the QEMU configuration; you
>> can add extra disks through the "disk" option, for example:
>> disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,w',
>>          'file:/var/images/min-el3-i386_2.img,ioemu:hdb,w' ]
>> Does that meet your requirement?
>
>My requirement for what? I know HVM domains can support more than one
>disk image; the idea is to get xm-test to automate creating disk images
>for testing HVM domains. My plan is eventually to use device-mapper to
>present a read-only root image that all the xm-test HVM test domains
>share, and then add writable partitions to test domains as needed.

Maybe my idea is a little different from yours: I notice that each
image has an r/w control bit, so we can make use of it. As you said, we
can present a read-only root image and add writable partitions as
needed. :-)

>The vbd issue wasn't that only one image could be loaded for one HVM
>domain, if that's what you're saying. The issue was exceeding the
>number of loopback devices on the system. Qemu-dm loads disk images
>using loopback devices, so the number of disk images that can be
>mounted is limited to the number of configured loopback devices.
>
>There was a bug in 11_create_concurrent_pos.py in xm-test: it creates
>as many concurrent domains as possible based on memory, with a cutoff
>of 50. That is fine for paravirt, but it broke for HVM because of the
>loopback device limit. I have patched the test and it should work for
>you. I have run 11_create_concurrent_pos.py on my x366, where I set
>the kernel option max_loop=256, and was able to load 50 disk images
>all using the same image file.

Daniel, from my observation the failure is not caused by a shortage of
loop devices but by the VBD protection. If I change the control bit
from "w" to "r", xm-test works quite well, and based on that I will
send out a patch to fix xm-test's current problem. I hope you can
review it.
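For example (reusing the config syntax above; the image path is only
illustrative), the shared boot image entry would change from

  disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,w' ]

to

  disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,r' ]

so multiple domains can attach the same image read-only without
tripping the VBD sharing check.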