Ian Main
2009-Jul-13 20:45 UTC
[Ovirt-devel] [PATCH] Use volume key instead of path to identify volume.
This patch teaches taskomatic to use the volume 'key' from libvirt, instead of the path, to identify the volume in the database. This fixes the duplicate iSCSI volume bug we were seeing. The issue was that libvirt changed the way it names storage volumes and included a local ID that changed each time the volume was attached.

Note that the first run with this new patch will cause duplicate volumes because of the key change. Ideally you would delete all storage pools and re-add them after applying this patch.

Signed-off-by: Ian Main <imain at redhat.com>
---
 src/task-omatic/taskomatic.rb |   13 +++++++------
 1 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/src/task-omatic/taskomatic.rb b/src/task-omatic/taskomatic.rb
index b3c0592..6c68397 100755
--- a/src/task-omatic/taskomatic.rb
+++ b/src/task-omatic/taskomatic.rb
@@ -219,7 +219,7 @@ class TaskOmatic
     @logger.debug "Pool mounted: #{pool.name}; state: #{pool.state}"
 
     volume = @session.object(:class => 'volume',
-                             'name' => volume_name,
+                             'key' => volume_name,
                              'storagePool' => pool.object_id)
     raise "Unable to find volume #{volume_name} attached to pool #{pool.name}." unless volume
     @logger.debug "Verified volume of pool #{volume.path}"
@@ -591,7 +591,7 @@ class TaskOmatic
     storage_volume.path = volume.path
     storage_volume.size = volume.capacity / 1024
     storage_volume.storage_pool_id = db_pool.id
-    storage_volume.write_attribute(storage_volume.volume_name, volume.name)
+    storage_volume.write_attribute(storage_volume.volume_name, volume.key)
     storage_volume.lv_owner_perms = owner
     storage_volume.lv_group_perms = group
     storage_volume.lv_mode_perms = mode
@@ -644,13 +644,14 @@ class TaskOmatic
       existing_vol = StorageVolume.find(:first, :conditions =>
                      ["storage_pool_id = ? AND #{storage_volume.volume_name} = ?",
-                      db_pool_phys.id, volume.name])
+                      db_pool_phys.id, volume.key])
+      puts "Existing volume is #{existing_vol}, searched for storage volume name and #{volume.key}"
 
       # Only add if it's not already there.
       if not existing_vol
         add_volume_to_db(db_pool_phys, volume);
       else
-        @logger.debug "Scanned volume #{volume.name} already exists in db.."
+        @logger.debug "Scanned volume #{volume.key} already exists in db.."
       end
 
       # Now check for an LVM pool carving up this volume.
@@ -692,11 +693,11 @@ class TaskOmatic
         lvm_storage_volume = StorageVolume.factory(lvm_db_pool.get_type_label)
         existing_vol = StorageVolume.find(:first, :conditions =>
                        ["storage_pool_id = ? AND #{lvm_storage_volume.volume_name} = ?",
-                        lvm_db_pool.id, lvm_volume.name])
+                        lvm_db_pool.id, lvm_volume.key])
         if not existing_vol
           add_volume_to_db(lvm_db_pool, lvm_volume, "0744", "0744", "0744");
         else
-          @logger.info "volume #{lvm_volume.name} already exists in db.."
+          @logger.info "volume #{lvm_volume.key} already exists in db.."
         end
       end
     end
-- 
1.6.0.6
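For context, here is a minimal Ruby sketch of the lookup pattern the patch switches to: the stable libvirt volume key, rather than the attach-specific name/path, is used to find the QMF volume object. The session, pool and db_volume arguments are stand-ins for the objects taskomatic already holds, and the helper name is hypothetical; this is illustrative only, not part of the patch.

# Illustrative sketch only -- mirrors the @session.object(...) call changed
# above; 'session', 'pool' and 'db_volume' are assumed to be set up the way
# taskomatic sets them up.
def find_volume_by_key(session, pool, db_volume)
  # With this patch the libvirt volume key is what gets stored in the
  # volume-name column, so read it back from there.  The key stays stable
  # across iSCSI re-attaches, unlike the name/path, which now embeds a
  # per-attach local ID.
  volume_key = db_volume.read_attribute(db_volume.volume_name)

  volume = session.object(:class => 'volume',
                          'key' => volume_key,
                          'storagePool' => pool.object_id)

  raise "Unable to find volume #{volume_key} attached to pool #{pool.name}." unless volume
  volume
end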
Ian Main
2009-Jul-14 20:26 UTC
[Ovirt-devel] [PATCH] Use volume key instead of path to identify volume.
This patch teaches taskomatic to use the volume 'key' from libvirt, instead of the path, to identify the volume in the database. This fixes the duplicate iSCSI volume bug we were seeing. The issue was that libvirt changed the way it names storage volumes and included a local ID that changed each time the volume was attached. This patch also adds a 'key' field to the storage_volumes table and keys off it for the various taskomatic-related activities.

Note that the first run with this new patch will cause duplicate volumes because of the key change. Also, because volumes are now looked up by 'key', any VMs using old volumes will not work, so you will have to delete all your VMs and rescan your storage volumes to make ovirt work properly again.

Signed-off-by: Ian Main <imain at redhat.com>
---
 src/db/migrate/040_add_key_to_volumes.rb |   28 ++++++++++++++++++++++++
 src/task-omatic/taskomatic.rb            |   34 ++++++++++++++++++-----------
 2 files changed, 49 insertions(+), 13 deletions(-)
 create mode 100644 src/db/migrate/040_add_key_to_volumes.rb

diff --git a/src/db/migrate/040_add_key_to_volumes.rb b/src/db/migrate/040_add_key_to_volumes.rb
new file mode 100644
index 0000000..085aba7
--- /dev/null
+++ b/src/db/migrate/040_add_key_to_volumes.rb
@@ -0,0 +1,28 @@
+#
+# Copyright (C) 2009 Red Hat, Inc.
+# Written by Ian Main <imain at redhat.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; version 2 of the License.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
+# MA  02110-1301, USA.  A copy of the GNU General Public License is
+# also available at http://www.gnu.org/copyleft/gpl.html.
+
+class AddKeyToVolumes < ActiveRecord::Migration
+  def self.up
+    add_column :storage_volumes, :key, :string, :null => true
+  end
+
+  def self.down
+    remove_column :storage_volumes, :key
+  end
+end
diff --git a/src/task-omatic/taskomatic.rb b/src/task-omatic/taskomatic.rb
index b3c0592..8088769 100755
--- a/src/task-omatic/taskomatic.rb
+++ b/src/task-omatic/taskomatic.rb
@@ -211,17 +211,17 @@ class TaskOmatic
     libvirt_pool.connect(@session, node)
 
     # OK, the pool should be all set.  The last thing we need to do is get
-    # the path based on the volume name
+    # the path based on the volume key
 
-    volume_name = db_volume.read_attribute(db_volume.volume_name)
+    volume_key = db_volume.key
 
     pool = libvirt_pool.remote_pool
 
     @logger.debug "Pool mounted: #{pool.name}; state: #{pool.state}"
 
     volume = @session.object(:class => 'volume',
-                             'name' => volume_name,
+                             'key' => volume_key,
                              'storagePool' => pool.object_id)
-    raise "Unable to find volume #{volume_name} attached to pool #{pool.name}." unless volume
+    raise "Unable to find volume #{volume_key} attached to pool #{pool.name}." unless volume
     @logger.debug "Verified volume of pool #{volume.path}"
     storagedevs << volume.path
@@ -579,7 +579,9 @@ class TaskOmatic
         @logger.info "host #{host.hostname} is disabled"
         next
       end
 
+      puts "searching for node with hostname #{host.hostname}"
       node = @session.object(:class => 'node', 'hostname' => host.hostname)
+      puts "node returned is #{node}"
       return node if node
     end
@@ -592,6 +594,7 @@ class TaskOmatic
     storage_volume.size = volume.capacity / 1024
     storage_volume.storage_pool_id = db_pool.id
     storage_volume.write_attribute(storage_volume.volume_name, volume.name)
+    storage_volume.key = volume.key
     storage_volume.lv_owner_perms = owner
     storage_volume.lv_group_perms = group
     storage_volume.lv_mode_perms = mode
@@ -643,14 +646,15 @@ class TaskOmatic
       storage_volume = StorageVolume.factory(db_pool_phys.get_type_label)
       existing_vol = StorageVolume.find(:first, :conditions =>
-                     ["storage_pool_id = ? AND #{storage_volume.volume_name} = ?",
-                      db_pool_phys.id, volume.name])
+                     ["storage_pool_id = ? AND key = ?",
+                      db_pool_phys.id, volume.key])
+      puts "Existing volume is #{existing_vol}, searched for storage volume key and #{volume.key}"
 
       # Only add if it's not already there.
       if not existing_vol
         add_volume_to_db(db_pool_phys, volume);
       else
-        @logger.debug "Scanned volume #{volume.name} already exists in db.."
+        @logger.debug "Scanned volume #{volume.key} already exists in db.."
       end
 
       # Now check for an LVM pool carving up this volume.
@@ -691,12 +695,12 @@ class TaskOmatic
         lvm_storage_volume = StorageVolume.factory(lvm_db_pool.get_type_label)
         existing_vol = StorageVolume.find(:first, :conditions =>
-                       ["storage_pool_id = ? AND #{lvm_storage_volume.volume_name} = ?",
-                        lvm_db_pool.id, lvm_volume.name])
+                       ["storage_pool_id = ? AND key = ?",
+                        lvm_db_pool.id, lvm_volume.key])
         if not existing_vol
           add_volume_to_db(lvm_db_pool, lvm_volume, "0744", "0744", "0744");
         else
-          @logger.info "volume #{lvm_volume.name} already exists in db.."
+          @logger.info "volume #{lvm_volume.key} already exists in db.."
         end
       end
     end
@@ -737,9 +741,8 @@ class TaskOmatic
         @logger.debug "  property: #{key}, #{val}"
       end
 
-      # FIXME: Should have this too I think..
-      #db_volume.key = volume.key
       db_volume.reload
+      db_volume.key = volume.key
       db_volume.path = volume.path
       db_volume.state = StorageVolume::STATE_AVAILABLE
       db_volume.save!
@@ -747,6 +750,8 @@ class TaskOmatic
       db_pool.reload
       db_pool.state = StoragePool::STATE_AVAILABLE
       db_pool.save!
+    rescue => ex
+      @logger.error "Error saving new volume: #{ex.class}: #{ex.message}"
     ensure
       libvirt_pool.shutdown
     end
@@ -795,7 +800,7 @@ class TaskOmatic
     begin
       volume = @session.object(:class => 'volume',
                                'storagePool' => libvirt_pool.remote_pool.object_id,
-                               'path' => db_volume.path)
+                               'key' => db_volume.key)
       @logger.error "Unable to find volume to delete" unless volume
 
       # FIXME: we actually probably want to zero out the whole volume
@@ -811,6 +816,8 @@ class TaskOmatic
       # it up, so just carry on here..
       volume.delete if volume
 
+      @logger.info "Volume deleted successfully."
+
       # Note: we have to nil out the task_target because when we delete the
       # volume object, that also deletes all dependent tasks (including this
       # one), which leads to accessing stale tasks.  Orphan the task, then
@@ -854,6 +861,7 @@ class TaskOmatic
       @logger.info("Reconnected, resuming task checking..") if was_disconnected
       was_disconnected = false
 
+      @session.object(:class => 'agent')
       tasks = Array.new
 
       begin
-- 
1.6.0.6
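As a rough illustration of the upgrade note above: once the 040_add_key_to_volumes migration has run, volumes recorded before this patch have no key yet and will be re-added as duplicates on the next scan. The following Rails-console sketch (Rails 2-era finder style, matching the code above) lists such rows; the exact cleanup procedure is an assumption, not part of the patch.

# Illustrative sketch only -- run from a Rails console after migrating.
# Rows created before this patch have a NULL key and will show up as
# duplicates until their pools are removed, re-added and rescanned.
stale = StorageVolume.find(:all, :conditions => "key IS NULL")
stale.each do |vol|
  puts "volume #{vol.id} (#{vol.path}) has no libvirt key yet"
end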