On PPC we have the following problem with xm-test:
http://wiki.xensource.com/xenwiki/XenFaq#head-baa7000e8fc28fd168650114dd2741b7f21da8fa

Where we are told:
  "all you need to do is disable the entire scsi subsystem"

This is undesirable since our (the XenPPC Team) goal is to have one Linux
image to run everywhere.

We could switch over all the xm-tests to use xvd (Xen Virtual Block
Device), but before we do that we'd like to know what the support for xvd
is, and what, if any, issues/opinions there are with doing this?

-JX

_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@lists.xensource.com
http://lists.xensource.com/xen-ppc-devel
Ewan, any thoughts on this issue?
-JX

On Oct 23, 2006, at 10:43 AM, Jimi Xenidis wrote:

> On PPC we have the following problem with xm-test:
> http://wiki.xensource.com/xenwiki/XenFaq#head-baa7000e8fc28fd168650114dd2741b7f21da8fa
>
> Where we are told:
>   "all you need to do is disable the entire scsi subsystem"
>
> This is undesirable since our (the XenPPC Team) goal is to have one
> Linux image to run everywhere.
>
> We could switch over all the xm-tests to use xvd (Xen Virtual Block
> Device), but before we do that we'd like to know what the support
> for xvd is, and what, if any, issues/opinions there are with doing this?
>
> -JX

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
On Tue, Oct 24, 2006 at 10:01:28AM -0400, Jimi Xenidis wrote:

> Ewan, any thoughts on this issue?
> -JX
> On Oct 23, 2006, at 10:43 AM, Jimi Xenidis wrote:
>
> > On PPC we have the following problem with xm-test:
> > http://wiki.xensource.com/xenwiki/XenFaq#head-baa7000e8fc28fd168650114dd2741b7f21da8fa
> >
> > Where we are told:
> >   "all you need to do is disable the entire scsi subsystem"
> >
> > This is undesirable since our (the XenPPC Team) goal is to have one
> > Linux image to run everywhere.
> >
> > We could switch over all the xm-tests to use xvd (Xen Virtual Block
> > Device), but before we do that we'd like to know what the support
> > for xvd is, and what, if any, issues/opinions there are with doing this?

We really should start switching over to using xvd.  We've been allocated
that major for exactly this reason -- to ensure that one Linux image can
have both the real SCSI subsystem and the Xen frontend drivers at the same
time.

If you would like to do the work to switch xm-test over to use xvd, that
would be greatly appreciated.  xvd has not been widely tested, and
extending the coverage to xm-test would be very useful indeed.

Cheers,

Ewan.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
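The change being discussed here is purely one of guest device naming: the
xm-test helpers stay the same, and only the name handed to Xen moves from
the SCSI/IDE namespace (sda1/hda1) to the xvd namespace.  A minimal sketch
of that difference, written against the XmTestLib helpers the way the patch
later in this thread uses them (the /dev/ram0 backing device is simply what
the existing tests use):

---
# Minimal sketch only: not a complete xm-test case.
from XmTestLib import *          # XmTestDomain, block_attach, as in the test cases

# Old style: "hda1"/"sda1" names can collide with a guest kernel that also
# carries the real IDE/SCSI subsystems.
# config = {"disk": "phy:/dev/ram0,hda1,w"}

# xvd style: served by the Xen blkfront driver under its own major, so the
# same kernel image can still include the real SCSI subsystem.
config = {"disk": "phy:/dev/ram0,xvda1,w"}
domain = XmTestDomain(extraConfig=config)

# Hot-plug attach is the same call, just with the xvd name:
# block_attach(domain, "phy:ram1", "xvda1")
---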
Jeremy Katz
2006-Oct-24 15:18 UTC
[XenPPC] Re: [Xen-devel] Status of xvd and problems with sd.
On Tue, 2006-10-24 at 15:28 +0100, Ewan Mellor wrote:

> If you would like to do the work to switch xm-test over to use xvd, that
> would be greatly appreciated.  xvd has not been widely tested, and
> extending the coverage to xm-test would be very useful indeed.

Not entirely true... the normal Fedora installation path has been using
xvd for quite a while now.

Jeremy

_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@lists.xensource.com
http://lists.xensource.com/xen-ppc-devel
On Tue, Oct 24, 2006 at 11:18:48AM -0400, Jeremy Katz wrote:

> On Tue, 2006-10-24 at 15:28 +0100, Ewan Mellor wrote:
> > If you would like to do the work to switch xm-test over to use xvd, that
> > would be greatly appreciated.  xvd has not been widely tested, and
> > extending the coverage to xm-test would be very useful indeed.
>
> Not entirely true... the normal Fedora installation path has been using
> xvd for quite a while now.

Excellent news!  That's your green light, Jimi ;-)

Ewan.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
On Tue, Oct 24, 2006 at 04:43:30PM +0100, Ewan Mellor wrote:

> On Tue, Oct 24, 2006 at 11:18:48AM -0400, Jeremy Katz wrote:
>
> > On Tue, 2006-10-24 at 15:28 +0100, Ewan Mellor wrote:
> > > If you would like to do the work to switch xm-test over to use xvd, that
> > > would be greatly appreciated.  xvd has not been widely tested, and
> > > extending the coverage to xm-test would be very useful indeed.
> >
> > Not entirely true... the normal Fedora installation path has been using
> > xvd for quite a while now.
>
> Excellent news!  That's your green light, Jimi ;-)

Cool.  I'll tackle this.

I expect we'll need to create some extra device nodes (/dev/xvd*) in the
initrd, so we may need to bump the xm-test version to 1.1 when this goes in
to make sure everything stays in sync.

Yours Tony

linux.conf.au        http://linux.conf.au/ || http://lca2007.linux.org.au/
Jan 15-20 2007       The Australian Linux Technical Conference!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
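For reference, the /dev/xvd* nodes Tony mentions follow the layout used by
the mknod commands later in this thread: block major 202, with 16 minors
per disk (the whole disk plus its partitions).  A rough, illustrative
sketch of generating them, not the eventual xm-test ramdisk change (must
run as root):

---
import os
import stat

XVD_MAJOR = 202            # block major allocated to the Xen virtual block device
MINORS_PER_DISK = 16       # whole disk plus up to 15 partitions

for disk in range(3):                          # xvda, xvdb, xvdc
    letter = chr(ord('a') + disk)
    for part in range(3):                      # whole disk, partition 1, partition 2
        name = "/dev/xvd%s%s" % (letter, part or "")
        minor = disk * MINORS_PER_DISK + part
        mode = stat.S_IFBLK | stat.S_IRUSR | stat.S_IWUSR
        os.mknod(name, mode, os.makedev(XVD_MAJOR, minor))
---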
Tony Breeds
2006-Oct-26 05:34 UTC
[RFC] Re: [Xen-devel] Status of xvd and problems with sd.
On Wed, Oct 25, 2006 at 02:45:36PM +1000, Tony Breeds wrote:

> Cool.  I'll tackle this.
>
> I expect we'll need to create some extra device nodes (/dev/xvd*) in
> the initrd, so we may need to bump the xm-test version to 1.1 when this
> goes in to make sure everything stays in sync.

The following patch isn't complete, but it's posted here for review.

Things to do:
 * Add generation of the xvd* device nodes into the ramdisk.
 * Bump the minor version number, as these tests will need a specific
   ramdisk.

To test, I mounted my initrd on loopback and ran:
---
sudo mknod ./xvda b 202 0
sudo mknod ./xvda1 b 202 1
sudo mknod ./xvda2 b 202 2
sudo mknod ./xvdb b 202 16
sudo mknod ./xvdb1 b 202 17
sudo mknod ./xvdb2 b 202 18
sudo mknod ./xvdc b 202 32
sudo mknod ./xvdc1 b 202 33
sudo mknod ./xvdc2 b 202 34
---
(until I get item 1 in the todo list implemented).

Running all the block-*/* tests generated no failures for me on PPC; I'm
currently testing on x86.

The patch below changes all SCSI and IDE device references in the block
tests to Xen virtual block devices, updates the device encodings, and fixes
some whitespace (which really should be a separate patch).

Feedback welcome.

----
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/01_block_attach_device_pos.py
--- a/tools/xm-test/tests/block-create/01_block_attach_device_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/01_block_attach_device_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -32,12 +32,12 @@ except ConsoleError, e:
     FAIL(str(e))
-block_attach(domain, "phy:ram1", "sdb1")
+block_attach(domain, "phy:ram1", "xvda1")
-try:
-    run = console.runCmd("cat /proc/partitions")
+try:
+    run = console.runCmd("cat /proc/partitions")
 except ConsoleError, e:
-    FAIL(str(e))
+    FAIL(str(e))
 # Close the console
 domain.closeConsole()
@@ -45,5 +45,5 @@ domain.closeConsole()
 # Stop the domain (nice shutdown)
 domain.stop()
-if not re.search("sdb1",run["output"]):
+if not re.search("xvda1",run["output"]):
     FAIL("Device is not actually connected to the domU")
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/02_block_attach_file_device_pos.py
--- a/tools/xm-test/tests/block-create/02_block_attach_file_device_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/02_block_attach_file_device_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -32,10 +32,10 @@ except ConsoleError, e:
     FAIL(str(e))
-block_attach(domain, "file:/dev/ram1", "sdb2")
+block_attach(domain, "file:/dev/ram1", "xvda1")
 try:
-    run = console.runCmd("cat /proc/partitions")
+    run = console.runCmd("cat /proc/partitions")
 except ConsoleError, e:
     FAIL(str(e))
@@ -45,5 +45,5 @@ domain.closeConsole()
 # Stop the domain (nice shutdown)
 domain.stop()
-if not re.search("sdb2",run["output"]):
-    FAIL("Device is not actually connected to the domU")
+if not re.search("xvda1",run["output"]):
+    FAIL("Device is not actually connected to the domU")
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/04_block_attach_device_repeatedly_pos.py
--- a/tools/xm-test/tests/block-create/04_block_attach_device_repeatedly_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/04_block_attach_device_repeatedly_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -30,14 +30,14 @@ except ConsoleError, e:
     FAIL(str(e))
 for i in range(10):
-    status, output = traceCommand("xm block-attach %s phy:ram1 sdb1 w" % domain.getName())
-    if i == 0 and status != 0:
-        FAIL("xm block attach returned invalid %i != 0" % status)
-    if i > 0 and status == 0:
-        FAIL("xm block-attach (repeat) returned invalid %i > 0" % status)
-    run = console.runCmd("cat /proc/partitions")
-    if not re.search("sdb1", run['output']):
-        FAIL("Device is not actually attached to domU")
+    status, output = traceCommand("xm block-attach %s phy:ram1 xvda1 w" % domain.getName())
+    if i == 0 and status != 0:
+        FAIL("xm block attach returned invalid %i != 0" % status)
+    if i > 0 and status == 0:
+        FAIL("xm block-attach (repeat) returned invalid %i > 0" % status)
+    run = console.runCmd("cat /proc/partitions")
+    if not re.search("xvda1", run['output']):
+        FAIL("Device is not actually attached to domU")
 # Close the console
 domain.closeConsole()
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/05_block_attach_and_dettach_device_repeatedly_pos.py
--- a/tools/xm-test/tests/block-create/05_block_attach_and_dettach_device_repeatedly_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/05_block_attach_and_dettach_device_repeatedly_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -32,15 +32,15 @@ except ConsoleError, e:
 for i in range(10):
-    block_attach(domain, "phy:ram1", "sdb1")
-    run = console.runCmd("cat /proc/partitions")
-    if not re.search("sdb1", run["output"]):
-        FAIL("Failed to attach block device: /proc/partitions does not show that!")
-
-    block_detach(domain, "sdb1")
-    run = console.runCmd("cat /proc/partitions")
-    if re.search("sdb1", run["output"]):
-        FAIL("Failed to dettach block device: /proc/partitions still showing that!")
+    block_attach(domain, "phy:ram1", "xvda1")
+    run = console.runCmd("cat /proc/partitions")
+    if not re.search("xvda1", run["output"]):
+        FAIL("Failed to attach block device: /proc/partitions does not show that!")
+
+    block_detach(domain, "xvda1")
+    run = console.runCmd("cat /proc/partitions")
+    if re.search("xvda1", run["output"]):
+        FAIL("Failed to dettach block device: /proc/partitions still showing that!")
 # Close the console
 domain.closeConsole()
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/06_block_attach_baddomain_neg.py
--- a/tools/xm-test/tests/block-create/06_block_attach_baddomain_neg.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/06_block_attach_baddomain_neg.py	Thu Oct 26 13:24:05 2006 +1000
@@ -8,13 +8,11 @@ if ENABLE_HVM_SUPPORT:
 if ENABLE_HVM_SUPPORT:
     SKIP("Block-attach not supported for HVM domains")
-status, output = traceCommand("xm block-attach NOT-EXIST phy:ram1 sdb1 w")
+status, output = traceCommand("xm block-attach NOT-EXIST phy:ram1 xvda1 w")
 eyecatcher = "Error"
 where = output.find(eyecatcher)
 if status == 0:
-    FAIL("xm block-attach returned bad status, expected non 0, status is: %i" % status )
+    FAIL("xm block-attach returned bad status, expected non 0, status is: %i" % status )
 elif where == -1:
-    FAIL("xm block-attach returned bad output, expected Error, output is: %s" % output )
-
-
+    FAIL("xm block-attach returned bad output, expected Error, output is: %s" % output )
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/07_block_attach_baddevice_neg.py
--- a/tools/xm-test/tests/block-create/07_block_attach_baddevice_neg.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/07_block_attach_baddevice_neg.py	Thu Oct 26 13:24:05 2006 +1000
@@ -30,18 +30,18 @@ except ConsoleError, e:
     FAIL(str(e))
-status, output = traceCommand("xm block-attach %s phy:NOT-EXIST sdb1 w" % domain.getName())
+status, output = traceCommand("xm block-attach %s phy:NOT-EXIST xvda1 w" % domain.getName())
 eyecatcher = "Error"
 where = output.find(eyecatcher)
 if status == 0:
-    FAIL("xm block-attach returned bad status, expected non 0, status is: %i" % status )
+    FAIL("xm block-attach returned bad status, expected non 0, status is: %i" % status )
 elif where == -1:
-    FAIL("xm block-attach returned bad output, expected Error, output is: %s" % output )
+    FAIL("xm block-attach returned bad output, expected Error, output is: %s" % output )
 try:
-    run = console.runCmd("cat /proc/partitions")
+    run = console.runCmd("cat /proc/partitions")
 except ConsoleError, e:
-    FAIL(str(e))
+    FAIL(str(e))
 # Close the console
 domain.closeConsole()
@@ -49,5 +49,5 @@ domain.closeConsole()
 # Stop the domain (nice shutdown)
 domain.stop()
-if re.search("sdb1",run["output"]):
-    FAIL("Non existent Device was connected to the domU")
+if re.search("xvda1",run["output"]):
+    FAIL("Non existent Device was connected to the domU")
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/08_block_attach_bad_filedevice_neg.py
--- a/tools/xm-test/tests/block-create/08_block_attach_bad_filedevice_neg.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/08_block_attach_bad_filedevice_neg.py	Thu Oct 26 13:24:05 2006 +1000
@@ -29,18 +29,18 @@ except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
-status, output = traceCommand("xm block-attach %s file:/dev/NOT-EXIST sdb1 w" % domain.getName())
+status, output = traceCommand("xm block-attach %s file:/dev/NOT-EXIST xvda1 w" % domain.getName())
 eyecatcher = "Error"
 where = output.find(eyecatcher)
 if status == 0:
-    FAIL("xm block-attach returned bad status, expected non 0, status is: %i" % status )
+    FAIL("xm block-attach returned bad status, expected non 0, status is: %i" % status )
 elif where == -1:
-    FAIL("xm block-attach returned bad output, expected Error, output is: %s" % output )
-
+    FAIL("xm block-attach returned bad output, expected Error, output is: %s" % output )
+
 try:
-    run = console.runCmd("cat /proc/partitions")
+    run = console.runCmd("cat /proc/partitions")
 except ConsoleError, e:
-    FAIL(str(e))
+    FAIL(str(e))
 # Close the console
 domain.closeConsole()
@@ -48,5 +48,5 @@ domain.closeConsole()
 # Stop the domain (nice shutdown)
 domain.stop()
-if re.search("sdb1",run["output"]):
-    FAIL("Non existent Device was connected to the domU")
+if re.search("xvda1",run["output"]):
+    FAIL("Non existent Device was connected to the domU")
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/09_block_attach_and_dettach_device_check_data_pos.py
--- a/tools/xm-test/tests/block-create/09_block_attach_and_dettach_device_check_data_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/09_block_attach_and_dettach_device_check_data_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -12,7 +12,7 @@ if ENABLE_HVM_SUPPORT:
     SKIP("Block-attach not supported for HVM domains")
 # Create a domain (default XmTestDomain, with our ramdisk)
-domain = XmTestDomain()
+domain = XmTestDomain(extraConfig={"extra":"rw"})
 try:
     console = domain.start()
@@ -35,27 +35,27 @@ if s != 0:
     FAIL("mke2fs returned %i != 0" % s)
 for i in range(10):
-    block_attach(domain, "phy:ram1", "hda1")
-    run = console.runCmd("cat /proc/partitions")
-    if not re.search("hda1", run["output"]):
-        FAIL("Failed to attach block device: /proc/partitions does not show that!")
-
-    console.runCmd("mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1")
-
-    if i:
-        run = console.runCmd("cat /mnt/hda1/myfile | grep %s" % (i-1))
-        if run['return']:
-            FAIL("File created was lost or not updated!")
-
-    console.runCmd("echo \"%s\" > /mnt/hda1/myfile" % i)
-    run = console.runCmd("cat /mnt/hda1/myfile")
-    print run['output']
-    console.runCmd("umount /mnt/hda1")
-
-    block_detach(domain, "hda1")
-    run = console.runCmd("cat /proc/partitions")
-    if re.search("hda1", run["output"]):
-        FAIL("Failed to dettach block device: /proc/partitions still showing that!")
+    block_attach(domain, "phy:ram1", "xvda1")
+    run = console.runCmd("cat /proc/partitions")
+    if not re.search("xvda1", run["output"]):
+        FAIL("Failed to attach block device: /proc/partitions does not show that!")
+
+    console.runCmd("mkdir -p /mnt/xvda1; mount /dev/xvda1 /mnt/xvda1")
+
+    if i:
+        run = console.runCmd("cat /mnt/xvda1/myfile | grep %s" % (i-1))
+        if run['return']:
+            FAIL("File created was lost or not updated!")
+
+    console.runCmd("echo \"%s\" > /mnt/xvda1/myfile" % i)
+    run = console.runCmd("cat /mnt/xvda1/myfile")
+    print run['output']
+    console.runCmd("umount /mnt/xvda1")
+
+    block_detach(domain, "xvda1")
+    run = console.runCmd("cat /proc/partitions")
+    if re.search("xvda1", run["output"]):
+        FAIL("Failed to dettach block device: /proc/partitions still showing that!")
 # Close the console
 domain.closeConsole()
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/10_block_attach_dettach_multiple_devices.py
--- a/tools/xm-test/tests/block-create/10_block_attach_dettach_multiple_devices.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/10_block_attach_dettach_multiple_devices.py	Thu Oct 26 13:24:05 2006 +1000
@@ -15,7 +15,7 @@ def availableRamdisks():
 def availableRamdisks():
     i = 0
     while os.access("/dev/ram%d" % i, os.F_OK ):
-        i += 1
+        i += 1
     return i
@@ -36,7 +36,7 @@ def detach(devname):
         return -2, "Failed to detach block device: /proc/partitions still showing that!"
     return 0, None
-
+
 if ENABLE_HVM_SUPPORT:
     SKIP("Block-attach not supported for HVM domains")
@@ -69,22 +69,22 @@ while i < ramdisks or devices:
     op = random.randint(0,1) # 1 = attach, 0 = detach
     if (not devices or op) and i < ramdisks:
         i += 1
-        devname = "/dev/hda%d" % i
-        phy = "/dev/ram%d" % i
-        print "Attaching %s to %s" % (devname, phy)
-        status, msg = attach( phy, devname )
-        if status:
-            FAIL(msg)
-        else:
-            devices.append(devname)
+        devname = "/dev/xvda%d" % i
+        phy = "/dev/ram%d" % i
+        print "Attaching %s to %s" % (devname, phy)
+        status, msg = attach( phy, devname )
+        if status:
+            FAIL(msg)
+        else:
+            devices.append(devname)
     elif devices:
         devname = random.choice(devices)
-        devices.remove(devname)
-        print "Detaching %s" % devname
-        status, msg = detach(devname)
-        if status:
-            FAIL(msg)
+        devices.remove(devname)
+        print "Detaching %s" % devname
+        status, msg = detach(devname)
+        if status:
+            FAIL(msg)
 # Close the console
 domain.closeConsole()
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/11_block_attach_shared_dom0.py
--- a/tools/xm-test/tests/block-create/11_block_attach_shared_dom0.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/11_block_attach_shared_dom0.py	Thu Oct 26 13:24:05 2006 +1000
@@ -24,7 +24,7 @@ if s != 0:
 # Now try to start a DomU with write access to /dev/ram0
-config = {"disk":"phy:/dev/ram0,hda1,w"}
+config = {"disk":"phy:/dev/ram0,xvda1,w"}
 domain = XmTestDomain(extraConfig=config);
diff -r aeaccdf5ad2f tools/xm-test/tests/block-create/12_block_attach_shared_domU.py
--- a/tools/xm-test/tests/block-create/12_block_attach_shared_domU.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-create/12_block_attach_shared_domU.py	Thu Oct 26 13:24:05 2006 +1000
@@ -8,7 +8,7 @@ if ENABLE_HVM_SUPPORT:
 if ENABLE_HVM_SUPPORT:
     SKIP("Block-attach not supported for HVM domains")
-config = {"disk":"phy:/dev/ram0,hda1,w"}
+config = {"disk":"phy:/dev/ram0,xvda1,w"}
 dom1 = XmTestDomain(extraConfig=config)
 dom2 = XmTestDomain(dom1.getName() + "-2",
diff -r aeaccdf5ad2f tools/xm-test/tests/block-destroy/01_block-destroy_btblock_pos.py
--- a/tools/xm-test/tests/block-destroy/01_block-destroy_btblock_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-destroy/01_block-destroy_btblock_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -9,7 +9,7 @@ if ENABLE_HVM_SUPPORT:
 if ENABLE_HVM_SUPPORT:
     SKIP("Block-detach not supported for HVM domains")
-config = {"disk":"phy:/dev/ram0,hda1,w"}
+config = {"disk":"phy:/dev/ram0,xvda1,w"}
 domain = XmTestDomain(extraConfig=config)
 try:
@@ -21,7 +21,7 @@ except DomainError, e:
 try:
     console.setHistorySaveCmds(value=True)
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
     run2 = console.runCmd("cat /proc/partitions")
 except ConsoleError, e:
     FAIL(str(e))
@@ -29,10 +29,10 @@ if run["return"] != 0:
 if run["return"] != 0:
     FAIL("block device isn't attached; can't detach!")
-block_detach(domain, "hda1")
+block_detach(domain, "xvda1")
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
 except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-destroy/02_block-destroy_rtblock_pos.py
--- a/tools/xm-test/tests/block-destroy/02_block-destroy_rtblock_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-destroy/02_block-destroy_rtblock_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -18,9 +18,9 @@ except DomainError, e:
     print e.extra
     FAIL("Unable to create domain")
-block_attach(domain, "phy:/dev/ram0", "hda1")
+block_attach(domain, "phy:/dev/ram0", "xvda1")
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
 except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
@@ -28,9 +28,9 @@ if run["return"] != 0:
 if run["return"] != 0:
     FAIL("Failed to verify that block dev is attached")
-block_detach(domain, "hda1")
+block_detach(domain, "xvda1")
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
 except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-destroy/04_block-destroy_nonattached_neg.py
--- a/tools/xm-test/tests/block-destroy/04_block-destroy_nonattached_neg.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-destroy/04_block-destroy_nonattached_neg.py	Thu Oct 26 13:24:05 2006 +1000
@@ -19,7 +19,7 @@ except DomainError, e:
     print e.extra
     FAIL("Unable to create domain")
-status, output = traceCommand("xm block-detach %s sda1" % domain.getId())
+status, output = traceCommand("xm block-detach %s xvda1" % domain.getId())
 eyecatcher1 = "Error:"
 eyecatcher2 = "Traceback"
diff -r aeaccdf5ad2f tools/xm-test/tests/block-destroy/05_block-destroy_byname_pos.py
--- a/tools/xm-test/tests/block-destroy/05_block-destroy_byname_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-destroy/05_block-destroy_byname_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -9,7 +9,7 @@ if ENABLE_HVM_SUPPORT:
 if ENABLE_HVM_SUPPORT:
     SKIP("Block-detach not supported for HVM domains")
-config = {"disk":"phy:/dev/ram0,hda1,w"}
+config = {"disk":"phy:/dev/ram0,xvda1,w"}
 domain = XmTestDomain(extraConfig=config)
 try:
@@ -20,7 +20,7 @@ except DomainError, e:
     FAIL("Unable to create domain")
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
     run2 = console.runCmd("cat /proc/partitions")
 except ConsoleError, e:
     FAIL(str(e))
@@ -28,10 +28,10 @@ if run["return"] != 0:
 if run["return"] != 0:
     FAIL("block device isn't attached; can't detach!")
-block_detach(domain, "hda1")
+block_detach(domain, "xvda1")
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
 except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-destroy/06_block-destroy_check_list_pos.py
--- a/tools/xm-test/tests/block-destroy/06_block-destroy_check_list_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-destroy/06_block-destroy_check_list_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -12,7 +12,7 @@ def checkXmLongList(domain):
     s, o = traceCommand("xm list --long %s" % domain.getName())
     if s != 0:
         FAIL("xm list --long <dom> failed")
-    if re.search("hda1", o):
+    if re.search("xvda1", o):
         return True
     else:
         return False
@@ -27,12 +27,12 @@ except DomainError,e:
 except DomainError,e:
     FAIL(str(e))
-block_attach(domain, "phy:/dev/ram0", "hda1")
+block_attach(domain, "phy:/dev/ram0", "xvda1")
 if not checkXmLongList(domain):
-    FAIL("xm long list does not show that hda1 was attached")
+    FAIL("xm long list does not show that xvda1 was attached")
-block_detach(domain, "hda1")
+block_detach(domain, "xvda1")
 if checkXmLongList(domain):
-    FAIL("xm long list does not show that hda1 was removed")
+    FAIL("xm long list does not show that xvda1 was removed")
diff -r aeaccdf5ad2f tools/xm-test/tests/block-integrity/01_block_device_read_verify.py
--- a/tools/xm-test/tests/block-integrity/01_block_device_read_verify.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-integrity/01_block_device_read_verify.py	Thu Oct 26 13:24:05 2006 +1000
@@ -33,10 +33,10 @@ s, o = traceCommand("md5sum /dev/ram1")
 dom0_md5sum_match = re.search(r"^[\dA-Fa-f]{32}", o, re.M)
-block_attach(domain, "phy:ram1", "hda1")
+block_attach(domain, "phy:ram1", "xvda1")
 try:
-    run = console.runCmd("md5sum /dev/hda1")
+    run = console.runCmd("md5sum /dev/xvda1")
 except ConsoleError, e:
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-integrity/02_block_device_write_verify.py
--- a/tools/xm-test/tests/block-integrity/02_block_device_write_verify.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-integrity/02_block_device_write_verify.py	Thu Oct 26 13:24:05 2006 +1000
@@ -28,12 +28,12 @@ except DomainError, e:
 console.setHistorySaveCmds(value=True)
-block_attach(domain, "phy:ram1", "hda1")
+block_attach(domain, "phy:ram1", "xvda1")
 console.setTimeout(120)
 try:
-    run = console.runCmd("dd if=/dev/urandom bs=512 count=`cat /sys/block/hda1/size` | tee /dev/hda1 | md5sum")
+    run = console.runCmd("dd if=/dev/urandom bs=512 count=`cat /sys/block/xvda1/size` | tee /dev/xvda1 | md5sum")
 except ConsoleError, e:
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-list/01_block-list_pos.py
--- a/tools/xm-test/tests/block-list/01_block-list_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-list/01_block-list_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -11,7 +11,7 @@ if ENABLE_HVM_SUPPORT:
 if ENABLE_HVM_SUPPORT:
     SKIP("Block-list not supported for HVM domains")
-config = {"disk":"phy:/dev/ram0,hda1,w"}
+config = {"disk":"phy:/dev/ram0,xvda1,w"}
 domain = XmTestDomain(extraConfig=config)
 try:
@@ -22,7 +22,7 @@ except DomainError, e:
     FAIL("Unable to create domain")
 status, output = traceCommand("xm block-list %s" % domain.getId())
-eyecatcher = "769"
+eyecatcher = "51713"
 where = output.find(eyecatcher)
 if status != 0:
     FAIL("xm block-list returned bad status, expected 0, status is %i" % status)
@@ -31,7 +31,7 @@ elif where < 0:
 #Verify the block device on DomainU
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
 except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-list/02_block-list_attachbd_pos.py
--- a/tools/xm-test/tests/block-list/02_block-list_attachbd_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-list/02_block-list_attachbd_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -22,11 +22,11 @@ except DomainError, e:
     FAIL("Unable to create domain")
 #Attach one virtual block device to domainU
-block_attach(domain, "phy:/dev/ram0", "hda1")
+block_attach(domain, "phy:/dev/ram0", "xvda1")
 #Verify block-list on Domain0
 status, output = traceCommand("xm block-list %s" % domain.getId())
-eyecatcher = "769"
+eyecatcher = "51713"
 where = output.find(eyecatcher)
 if status != 0:
     FAIL("xm block-list returned bad status, expected 0, status is %i" % status)
@@ -35,7 +35,7 @@ elif where < 0 :
 #Verify attached block device on DomainU
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1")
+    run = console.runCmd("cat /proc/partitions | grep xvda1")
 except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-list/03_block-list_anotherbd_pos.py
--- a/tools/xm-test/tests/block-list/03_block-list_anotherbd_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-list/03_block-list_anotherbd_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -11,7 +11,7 @@ if ENABLE_HVM_SUPPORT:
 if ENABLE_HVM_SUPPORT:
     SKIP("Block-list not supported for HVM domains")
-config = {"disk":"phy:/dev/ram0,hda1,w"}
+config = {"disk":"phy:/dev/ram0,xvda1,w"}
 domain = XmTestDomain(extraConfig=config)
 try:
@@ -26,14 +26,14 @@ if status != 0:
     FAIL("Fail to list block device")
 #Add another virtual block device to the domain
-status, output = traceCommand("xm block-attach %s phy:/dev/ram1 hda2 w" % domain.getId())
+status, output = traceCommand("xm block-attach %s phy:/dev/ram1 xvda2 w" % domain.getId())
 if status != 0:
     FAIL("Fail to attach block device")
 #Verify block-list on Domain0
 status, output = traceCommand("xm block-list %s" % domain.getId())
-eyecatcher1 = "769"
-eyecatcher2 = "770"
+eyecatcher1 = "51713"
+eyecatcher2 = "51714"
 where1 = output.find(eyecatcher1)
 where2 = output.find(eyecatcher2)
 if status != 0:
@@ -43,7 +43,7 @@ elif (where1 < 0) and (where2 < 0):
 #Verify attached block device on DomainU
 try:
-    run = console.runCmd("cat /proc/partitions | grep hda1;cat /proc/partitions | grep hda2")
+    run = console.runCmd("cat /proc/partitions | grep xvda1;cat /proc/partitions | grep xvda2")
 except ConsoleError, e:
     saveLog(console.getHistory())
     FAIL(str(e))
diff -r aeaccdf5ad2f tools/xm-test/tests/block-list/06_block-list_checkremove_pos.py
--- a/tools/xm-test/tests/block-list/06_block-list_checkremove_pos.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/block-list/06_block-list_checkremove_pos.py	Thu Oct 26 13:24:05 2006 +1000
@@ -22,39 +22,39 @@ if o:
 if o:
     FAIL("block-list without devices reported something!")
-block_attach(domain, "phy:/dev/ram0", "hda1")
+block_attach(domain, "phy:/dev/ram0", "xvda1")
 s, o = traceCommand("xm block-list %s" % domain.getName())
 if s != 0:
     FAIL("block-list failed")
-if o.find("769") == -1:
+if o.find("51713") == -1:
     FAIL("block-list didn't show the block device I just attached!")
-block_attach(domain, "phy:/dev/ram1", "hda2")
+block_attach(domain, "phy:/dev/ram1", "xvda2")
 s, o = traceCommand("xm block-list %s" % domain.getName())
 if s != 0:
     FAIL("block-list failed")
-if o.find("770") == -1:
+if o.find("51714") == -1:
     FAIL("block-list didn't show the other block device I just attached!")
-block_detach(domain, "hda1")
+block_detach(domain, "xvda1")
 s, o = traceCommand("xm block-list %s" % domain.getName())
 if s != 0:
     FAIL("block-list failed after detaching a device")
-if o.find("769") != -1:
-    FAIL("hda1 still shown in block-list after detach!")
-if o.find("770") == -1:
-    FAIL("hda2 not shown after detach of hda1!")
+if o.find("51713") != -1:
+    FAIL("xvda1 still shown in block-list after detach!")
+if o.find("51714") == -1:
+    FAIL("xvda2 not shown after detach of xvda1!")
-block_detach(domain, "hda2")
+block_detach(domain, "xvda2")
 s, o = traceCommand("xm block-list %s" % domain.getName())
 if s != 0:
     FAIL("block-list failed after detaching another device")
-if o.find("770") != -1:
-    FAIL("hda2 still shown in block-list after detach!")
+if o.find("51714") != -1:
+    FAIL("xvda2 still shown in block-list after detach!")
 if o:
     FAIL("block-list still shows something after all devices detached!")
diff -r aeaccdf5ad2f tools/xm-test/tests/network-attach/04_network_attach_baddomain_neg.py
--- a/tools/xm-test/tests/network-attach/04_network_attach_baddomain_neg.py	Wed Oct 25 09:59:17 2006 +1000
+++ b/tools/xm-test/tests/network-attach/04_network_attach_baddomain_neg.py	Thu Oct 26 13:24:05 2006 +1000
@@ -10,8 +10,6 @@ eyecatcher = "Error"
 eyecatcher = "Error"
 where = output.find(eyecatcher)
 if status == 0:
-    FAIL("xm block-attach returned bad status, expected non 0, status is: %i" % status )
+    FAIL("xm network-attach returned bad status, expected non 0, status is: %i" % status )
 elif where == -1:
-    FAIL("xm block-attach returned bad output, expected Error, output is: %s" % output )
-
-
+    FAIL("xm network-attach returned bad output, expected Error, output is: %s" % output )
----

Yours Tony

linux.conf.au        http://linux.conf.au/ || http://lca2007.linux.org.au/
Jan 15-20 2007       The Australian Linux Technical Conference!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
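A note on the changed eyecatchers in the block-list tests ("769" becoming
"51713" and "770" becoming "51714"): xm block-list prints the virtual device
number, which for the devices used here is simply (major << 8) | minor, so
the values shift when the tests move from hda (IDE major 3) to xvd
(major 202).  An illustrative calculation, not part of the patch:

---
# Illustrative only: where the block-list eyecatcher values come from.
def blkdev_number(major, minor):
    return (major << 8) | minor

print blkdev_number(3, 1)     # 769   -> hda1,  the old eyecatcher
print blkdev_number(202, 1)   # 51713 -> xvda1, the new eyecatcher
print blkdev_number(202, 2)   # 51714 -> xvda2
---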