search for: storagedevs

Displaying 10 results from an estimated 10 matches for "storagedevs".

2009 May 28
0
[PATCH server] Use qpid for migration and add more debugging to taskomatic.
...'name' => volume_name, 'storagePool' => pool.object_id) raise "Unable to find volume #{volume_name} attached to pool #{pool.name}." unless volume + @logger.debug "Verified volume of pool #{volume.path}" + storagedevs << volume.path end @@ -342,6 +357,8 @@ class TaskOmatic volumes = [] volumes += db_vm.storage_volumes volumes << image_volume if image_volume + + @logger.debug("Connecting volumes: #{volumes}") storagedevs = connect_storage_pools(node, volumes)...
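Context: storagedevs here is the list of verified libvirt volume paths that taskomatic hands to the domain XML builder. A minimal sketch of that collection loop, with the qpid query from the diff stubbed out as a hypothetical find_volume helper:

    # Minimal sketch of the loop shown in the diff; find_volume is a hypothetical
    # stand-in for the qpid query ('name' => volume_name, 'storagePool' => pool.object_id).
    storagedevs = []
    volumes.each do |db_volume|
      volume = find_volume(pool, db_volume.filename)
      raise "Unable to find volume #{db_volume.filename} attached to pool #{pool.name}." unless volume
      @logger.debug "Verified volume of pool #{volume.path}"
      storagedevs << volume.path   # these paths become the guest's disk devices
    end
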
2010 Aug 25
2
[PATCH] Virtio support
...net_interfaces.push({ :mac => nic.mac, :interface => net_device, :virtio => nic.virtio }) } xml = create_vm_xml(db_vm.description, db_vm.uuid, db_vm.memory_allocated, db_vm.memory_used, db_vm.num_vcpus_allocated, db_vm.boot_device, - net_interfaces, storagedevs) + db_vm.virtio, net_interfaces, storagedevs) @logger.debug("XML Domain definition: #{xml}") -- 1.7.2.1
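A hedged sketch of what the new virtio flag passed to create_vm_xml could select when building the libvirt domain XML (the bus and model names below are assumptions, not taken from the patch):

    # Hypothetical sketch: map the per-vm virtio flag onto libvirt device models.
    def device_models(virtio)
      if virtio
        { :disk_bus => "virtio", :nic_model => "virtio" }
      else
        { :disk_bus => "ide", :nic_model => "e1000" }
      end
    end

    models = device_models(true)
    # each <disk> target would then carry bus=models[:disk_bus] and each
    # <interface> a model of models[:nic_model] in the generated XML
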
2010 Sep 02
1
[PATCH 1/1] Introduce an option to always pxe-boot a vm.
..._device + end + xml = create_vm_xml(db_vm.description, db_vm.uuid, db_vm.memory_allocated, - db_vm.memory_used, db_vm.num_vcpus_allocated, db_vm.boot_device, + db_vm.memory_used, db_vm.num_vcpus_allocated, boot_device, db_vm.virtio, net_interfaces, storagedevs) @logger.debug("XML Domain definition: #{xml}") @@ -443,7 +451,8 @@ class TaskOmatic # This information is not available via the libvirt interface. db_vm.memory_used = db_vm.memory_allocated - db_vm.boot_device = Vm::BOOT_DEV_HD + # Revert to HD booting unless we...
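A rough sketch of the boot-device selection this diff implies (the always_pxe flag and BOOT_DEV_NETWORK constant are illustrative assumptions; only Vm::BOOT_DEV_HD appears in the hunk):

    # Hypothetical sketch: pxe-boot when the vm is flagged for it, otherwise
    # use the boot device recorded in the database.
    boot_device = db_vm.always_pxe ? Vm::BOOT_DEV_NETWORK : db_vm.boot_device
    # ...define the domain with boot_device...
    # later, when state is synced back: "Revert to HD booting unless we..." (truncated in the diff)
    db_vm.boot_device = Vm::BOOT_DEV_HD unless db_vm.always_pxe
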
2009 Sep 24
0
[PATCH server] Make volume finding more robust.
...e, + 'storagePool' => pool.object_id) + raise "Unable to find volume by key (#{volume_key}) or filename (#{db_volume.filename}), giving up." unless volume + end @logger.debug "Verified volume of pool #{volume.path}" storagedevs << volume.path -- 1.6.2.5
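The robustness fix boils down to a two-step lookup, by stable volume key first and by filename second, before giving up. A hedged sketch (lookup_volume is a hypothetical stand-in for the qpid queries in the diff):

    # Hypothetical sketch: prefer the libvirt volume key, fall back to the filename.
    volume = lookup_volume('key' => volume_key, 'storagePool' => pool.object_id)
    volume ||= lookup_volume('name' => db_volume.filename, 'storagePool' => pool.object_id)
    raise "Unable to find volume by key (#{volume_key}) or filename (#{db_volume.filename}), giving up." unless volume
    @logger.debug "Verified volume of pool #{volume.path}"
    storagedevs << volume.path
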
2009 Jul 24
1
permit many-to-many vms / networks relationship redux
Redux patchset permitting a vm to be associated with multiple networks and vice versa. The patchset has been updated so as to apply against the current oVirt server HEAD. These patches may be applied in any order, but they all need to be pushed together.
2009 Jul 09
2
permit many-to-many vms / networks relationship
This patchset contains changes to the oVirt server frontend, backend, and test components, permitting vms to be associated with multiple networks and vice versa. Also included are two patches required for the frontend bits: one adding collapsible sections to the vm form, which itself depends on a second patch providing default values for the cpu and memory vm table fields.
2007 Nov 13
4
sd_max_throttle
Hello, we are using a hardware array and its vendor recommends the following setting in /etc/system: set sd:sd_max_throttle = <value> or set ssd:ssd_max_throttle = <value>. Is it possible to monitor *somehow* whether this variable becomes a bottleneck, or how its value influences I/O traffic? Regards przemol
2009 Jul 13
1
[PATCH] Use volume key instead of path to identify volume.
This patch teaches taskomatic to use the volume 'key' instead of the path from libvirt to identify the volume in the database. This fixes the duplicate iscsi volume bug we were seeing. The issue was that libvirt changed the way it names storage volumes and included a local ID that changed each time the volume was attached. Note that the first run with this new patch will cause duplicate
2009 Jun 30
0
[PATCH server] permit many-to-many vms / networks relationship
...# FIXME ensure host is on all networks a vm's assigned to + # db_vm.nics.each { |nic| ignore if nic.network ! in host } possible_hosts.push(curr) end end @@ -360,31 +362,34 @@ class TaskOmatic @logger.debug("Connecting volumes: #{volumes}") storagedevs = connect_storage_pools(node, volumes) - # determine if vm has been assigned to physical or - # virtual network and assign nic / bonding accordingly - # FIXME instead of trying to find a nic or bonding here, given - # a specified host and network, we should try earlier on to find a ho...
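The FIXME in this hunk ("ensure host is on all networks a vm's assigned to") reads as a host filter; one hedged way to express it (host.networks is an assumed association, not shown in the diff):

    # Hypothetical sketch of the FIXME: keep only hosts attached to every
    # network one of the vm's nics is assigned to.
    eligible_hosts = possible_hosts.select do |host|
      db_vm.nics.all? { |nic| host.networks.include?(nic.network) }
    end
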
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set