I tested out end-to-end ISO provisioning tonight.

First off, the good news... I managed to get a VM created and booting the Windows XP install ISO. I even got WinXP installed on an iSCSI LUN and booting consistently. So that's a good start.

To test, I set up an NFS share (/local/isos) on physical.priv.ovirt.org and put the WinXP ISO in there. I then used the cobbler image add command to add an Image called WinXP. Then I created a VM using this image to provision. Started the VM and it booted from the ISO and started the installation process.

Now here are the issues:

Linux installs follow this sort of process:
1. Boot install DVD
2. Install from DVD
3. DVD is no longer needed and can be ejected
4. Reboot from hard disk

Windows installs follow this process:
1. Boot install CD
2. Stage 1 install from CD
3. Reboot
4. Boot from hard disk for Stage 2 install using CD in drive
5. Reboot
6. CD is no longer needed and can be ejected
7. Boot from hard disk

So Windows installs right now work because the soft reboot in step 3 does not cause the VM to shut down. If it did that, the restart of the VM would come up without the ISO in the drive (since that is a temporary association).

We've talked in the past about changing domains so that on reboot the VM is destroyed and then restarted by taskomatic. This way the boot device can be toggled between reboots. This fixes Linux PXE provisioning, which right now has a problem because after OS installation over PXE boot the domain soft reboots and tries to PXE again.

So it seems the requirements for Windows and Linux provisioning are somewhat at odds. What we may have to do is make taskomatic aware of the OS type so it sets up the semantics of domain rebooting accordingly.

Second problem...

I tried to provision a second guest using the same ISO image off of the same NFS share. This failed with the following error in taskomatic:

> libvir: Storage error : no storage vol with matching name
> start_vm
> Task action processing failed: Libvirt::RetrieveError: Call to function virStorageVolLookupByName failed
> /usr/share/ovirt-server/task-omatic/./utils.rb:124:in `lookup_volume_by_name'
> /usr/share/ovirt-server/task-omatic/./utils.rb:124:in `connect_storage_pools'
> /usr/share/ovirt-server/task-omatic/./utils.rb:81:in `each'
> /usr/share/ovirt-server/task-omatic/./utils.rb:81:in `connect_storage_pools'
> /usr/share/ovirt-server/task-omatic/./task_vm.rb:314:in `start_vm'
> /usr/share/ovirt-server/task-omatic/taskomatic.rb:99
> /usr/share/ovirt-server/task-omatic/taskomatic.rb:88:in `each'
> /usr/share/ovirt-server/task-omatic/taskomatic.rb:88
> /usr/share/ovirt-server/task-omatic/taskomatic.rb:68:in `loop'
> /usr/share/ovirt-server/task-omatic/taskomatic.rb:68

Seems that if the NFS share is previously mounted as a storage pool in libvirt, it causes taskomatic to fail. If I manually destroy the storage pool for the NFS mount on the host, undefine it, and then try again, the start_vm succeeds and recreates the storage pool.

This needs to be fixed...

Perry

-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston -=|
|=- Email: pmyers at redhat.com -=|
|=- Office: +1 412 474 3552   Mobile: +1 703 362 9622 -=|
|=- GnuPG: E65E4F3D 88F9 F1C9 C2F3 1303 01FE 817C C5D2 8B91 E65E 4F3D -=|
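A minimal sketch of the OS-aware reboot handling described above, assuming taskomatic builds the domain XML as a string before handing it to libvirt; the helper name and the os_type argument are hypothetical, not existing taskomatic code:

    # Hypothetical helper: pick libvirt lifecycle actions per guest OS.
    # Windows needs the domain to survive the Stage 1 -> Stage 2 soft
    # reboot with the install CD still attached, so on_reboot stays
    # "restart".  Linux/PXE installs want the domain torn down so
    # taskomatic can restart it booting from disk instead of the network.
    def lifecycle_xml(os_type)
      on_reboot = (os_type == "windows") ? "restart" : "destroy"
      ["  <on_poweroff>destroy</on_poweroff>",
       "  <on_reboot>#{on_reboot}</on_reboot>",
       "  <on_crash>destroy</on_crash>"].join("\n")
    end

The fragment would be spliced into whatever <domain> XML start_vm already passes to libvirt when it creates the guest.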
Perry Myers wrote:
> I tried to provision a second guest using the same ISO image off of the
> same NFS share. This failed with the following error in taskomatic:
>> libvir: Storage error : no storage vol with matching name
>> start_vm
>> Task action processing failed: Libvirt::RetrieveError: Call to
>> function virStorageVolLookupByName failed
>> /usr/share/ovirt-server/task-omatic/./utils.rb:124:in
>> `lookup_volume_by_name'
>> /usr/share/ovirt-server/task-omatic/./utils.rb:124:in
>> `connect_storage_pools'
>> /usr/share/ovirt-server/task-omatic/./utils.rb:81:in `each'
>> /usr/share/ovirt-server/task-omatic/./utils.rb:81:in
>> `connect_storage_pools'
>> /usr/share/ovirt-server/task-omatic/./task_vm.rb:314:in `start_vm'
>> /usr/share/ovirt-server/task-omatic/taskomatic.rb:99
>> /usr/share/ovirt-server/task-omatic/taskomatic.rb:88:in `each'
>> /usr/share/ovirt-server/task-omatic/taskomatic.rb:88
>> /usr/share/ovirt-server/task-omatic/taskomatic.rb:68:in `loop'
>> /usr/share/ovirt-server/task-omatic/taskomatic.rb:68
>
> Seems that if the NFS share is previously mounted as a storage pool in
> libvirt, it causes taskomatic to fail. If I manually destroy the
> storage pool for the NFS mount on the host and undefine it and then try
> again, the start_vm succeeds and recreates the storage pool.
>
> This needs to be fixed...

You shouldn't have to create a storage pool at all to provision an ISO from Cobbler. As long as the ISO image is on an NFS-exported file system and the Image record was given the hostname and export path, a storage pool need never be defined for it in oVirt.

But I'll dig into this problem and see what I find.

-- 
Darryl L. Pierce <dpierce at redhat.com> : GPG KEYID: 6C4E7F1B
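On the taskomatic side, a hedged sketch of how connect_storage_pools could tolerate a pre-existing pool for the same NFS mount instead of failing; this uses ruby-libvirt calls (lookup_storage_pool_by_name, define_storage_pool_xml, create, refresh), but the pool name and XML arguments are placeholders, not the actual oVirt logic:

    # Sketch: reuse an NFS storage pool if libvirt already has one for
    # this mount, otherwise define a fresh one, then refresh so the ISO
    # shows up as a volume.  "conn" is an open Libvirt::Connect to the
    # node; pool_name/pool_xml are whatever taskomatic derives from the
    # NFS share.
    def find_or_create_pool(conn, pool_name, pool_xml)
      begin
        pool = conn.lookup_storage_pool_by_name(pool_name)
      rescue Libvirt::RetrieveError
        # no pool with that name yet -- define and use a new one
        pool = conn.define_storage_pool_xml(pool_xml)
      end
      begin
        pool.create           # mount the share; raises if already active
      rescue Libvirt::Error
        # pool was already running, which is fine
      end
      pool.refresh            # pick up the ISO file as a storage volume
      pool
    end

With something like that in place, pool.lookup_volume_by_name on the ISO should succeed whether or not the share was already mounted as a pool before start_vm ran. If the pre-existing pool was defined under a different name than the one taskomatic expects, matching by target path rather than by name would be needed instead; the sketch only covers the name-collision case.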
On Tue, Oct 14, 2008 at 10:11:01PM -0400, Perry Myers wrote:
[snip]
> So Windows installs right now work because the soft reboot in step 3 does
> not cause the vm to shutdown. If it did that, the restart of the vm
> would come up without the ISO in the drive (since that is a temporary
> association)
>
> We've talked in the past about changing domains so that on reboot the vm
> is destroyed and then restarted by taskomatic. This way the boot device
> can be toggled between reboots. This fixes linux PXE provisioning, which
> right now has a problem because after OS installation over PXE boot the
> domain soft reboots and tries to PXE again.
>
> So it seems the requirements for Windows and Linux provisioning are
> somewhat at odds. What we may have to do is make it so taskomatic is
> aware of the OS type and sets up the semantics of domain rebooting
> accordingly.

Yes, this is exactly the same song-and-dance we went through with python-virtinst when getting Windows installs to work from virt-manager.

What we need to do is figure out a good abstraction for the various possible install modes and either a. allow the user to specify what's desired, or preferably b. figure it out ourselves. What we want is some way to say "OK, the install is finished, now change the VM to boot in the proper permanent configuration."

If we always set VMs to halt on reboot and we have a way for taskomatic to detect that a VM has halted in mid-install, we could have the UI pop up a message asking the user if the install finished? It would be difficult to make that clear but it could work, I think...

Thoughts?

(BTW great news that we can even do this at all)

--Hugh
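A rough sketch of the detection half of that idea, assuming domains are set to be destroyed on reboot during provisioning and taskomatic polls domain state through ruby-libvirt; the INSTALLING bookkeeping and the UI callback are made-up names for illustration only:

    # Sketch: with <on_reboot>destroy</on_reboot> in effect during an
    # install, a domain that is shut off while we still consider it
    # "installing" marks the end of an install phase.
    INSTALLING = {}   # hypothetical: vm name => true while provisioning

    def check_install_phase(conn, vm_name)
      dom = conn.lookup_domain_by_name(vm_name)
      if dom.info.state == Libvirt::Domain::SHUTOFF && INSTALLING[vm_name]
        # Hand off to the WUI: ask the user (or an OS-type heuristic)
        # whether the install finished, then restart the domain with the
        # permanent boot configuration.
        notify_ui_install_may_be_done(vm_name)   # hypothetical callback
      end
    end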
On Tue, 14 Oct 2008 22:11:01 -0400 Perry Myers <pmyers at redhat.com> wrote:
[snip]
> Second problem...
>
> I tried to provision a second guest using the same ISO image off of the
> same NFS share. This failed with the following error in taskomatic:
>
> > libvir: Storage error : no storage vol with matching name
> > start_vm
> > Task action processing failed: Libvirt::RetrieveError: Call to function virStorageVolLookupByName failed
> > /usr/share/ovirt-server/task-omatic/./utils.rb:124:in `lookup_volume_by_name'
> > /usr/share/ovirt-server/task-omatic/./utils.rb:124:in `connect_storage_pools'
> > /usr/share/ovirt-server/task-omatic/./utils.rb:81:in `each'
> > /usr/share/ovirt-server/task-omatic/./utils.rb:81:in `connect_storage_pools'
> > /usr/share/ovirt-server/task-omatic/./task_vm.rb:314:in `start_vm'
> > /usr/share/ovirt-server/task-omatic/taskomatic.rb:99
> > /usr/share/ovirt-server/task-omatic/taskomatic.rb:88:in `each'
> > /usr/share/ovirt-server/task-omatic/taskomatic.rb:88
> > /usr/share/ovirt-server/task-omatic/taskomatic.rb:68:in `loop'
> > /usr/share/ovirt-server/task-omatic/taskomatic.rb:68
>
> Seems that if the NFS share is previously mounted as a storage pool in
> libvirt, it causes taskomatic to fail. If I manually destroy the storage
> pool for the NFS mount on the host and undefine it and then try again, the
> start_vm succeeds and recreates the storage pool.

I've been trying to reproduce this but it seems fine. I even have a cobbler image and a disk image both being used on the same nfs share by the same vm while other vms are using that iso image too. All worked.. ?

Ian
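For comparing the two setups, a small read-only diagnostic sketch (ruby-libvirt) that lists the pools a node already has and the volumes in the active ones, to see whether a stale pool for the NFS mount exists before taskomatic runs; the connection URI is a placeholder:

    require 'libvirt'

    # Diagnostic sketch: dump storage pool state on a node so the
    # "previously mounted as a storage pool" case is easy to spot.
    conn = Libvirt::open("qemu+tcp://node.priv.ovirt.org/system")  # placeholder URI

    conn.list_storage_pools.each do |name|        # currently active pools
      pool = conn.lookup_storage_pool_by_name(name)
      pool.refresh
      puts "#{name}: #{pool.list_volumes.join(', ')}"
    end
    puts "defined but inactive: #{conn.list_defined_storage_pools.join(', ')}"
    conn.close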