Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 0/8] v2v: various fixes for -o rhv-upload
This patch series fixes various issues in the rhv-upload output mode:
- properly find and use RHV resources
- cleanup orphan disks, and possibly the current disk transfer on
  failure

In reality, the first 4 patches deal with resources, and the other 4
with cleanups. The latter block can theoretically be sent alone --
I just happened to start working on it as part of my "let's fix
rhv-upload oddities" initiative.

Pino Toscano (8):
  v2v: -o rhv-upload: split vmcheck out of precheck
  v2v: -o rhv-upload: change precheck script to return JSON
  v2v: -o rhv-upload: improve lookup of specified resources (RHBZ#1612653)
  v2v: -o rhv-upload: tell whether a SD actually exists
  v2v: add output#disk_copied hook
  v2v: -o rhv-upload: collect disks UUIDs right after copy
  v2v: -o rhv-upload: remove uploaded disks on failure
  v2v: -o rhv-upload: cancel disk transfer on failure

 v2v/Makefile.am                              |  14 ++-
 v2v/output_rhv_upload.ml                     | 106 ++++++++++++++-----
 v2v/output_rhv_upload_deletedisks_source.mli |  19 ++++
 v2v/output_rhv_upload_vmcheck_source.mli     |  19 ++++
 v2v/rhv-upload-createvm.py                   |  11 +-
 v2v/rhv-upload-deletedisks.py                |  71 +++++++++++++
 v2v/rhv-upload-plugin.py                     |   2 +
 v2v/rhv-upload-precheck.py                   |  55 +++++++---
 v2v/rhv-upload-vmcheck.py                    |  73 +++++++++++++
 v2v/types.ml                                 |   1 +
 v2v/types.mli                                |   4 +
 v2v/v2v.ml                                   |   9 +-
 12 files changed, 331 insertions(+), 53 deletions(-)
 create mode 100644 v2v/output_rhv_upload_deletedisks_source.mli
 create mode 100644 v2v/output_rhv_upload_vmcheck_source.mli
 create mode 100644 v2v/rhv-upload-deletedisks.py
 create mode 100644 v2v/rhv-upload-vmcheck.py

-- 
2.21.0
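All the scripts touched by this series share one calling convention:
virt-v2v serializes the parameters to a JSON document in a temporary
file and passes that file's path as the script's single argument. A
minimal sketch of that entry-point pattern, condensed from the scripts
in the patches below (no assumptions beyond what the series itself
shows):

    import json
    import sys

    if len(sys.argv) != 2:
        raise RuntimeError("incorrect number of parameters")

    # virt-v2v writes the parameters as a JSON document.
    with open(sys.argv[1], 'r') as fp:
        params = json.load(fp)

    # The password is passed as a file name, not inline.
    with open(params['output_password'], 'r') as fp:
        output_password = fp.read().rstrip()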
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 1/8] v2v: -o rhv-upload: split vmcheck out of precheck
Split the VM existence check out of the precheck script to a new vmcheck
script, and invoke that in #prepare_targets. Invoke the precheck script
in #precheck, as now it can be run with only the values of command line
options.

This does not change which checks are performed; however, an invalid
cluster name will make virt-v2v fail way earlier (even before connecting
to the source).
---
 v2v/Makefile.am                          |  8 ++-
 v2v/output_rhv_upload.ml                 | 14 +++--
 v2v/output_rhv_upload_vmcheck_source.mli | 19 ++++++
 v2v/rhv-upload-precheck.py               | 10 ----
 v2v/rhv-upload-vmcheck.py                | 73 ++++++++++++++++++++++++
 5 files changed, 109 insertions(+), 15 deletions(-)
 create mode 100644 v2v/output_rhv_upload_vmcheck_source.mli
 create mode 100644 v2v/rhv-upload-vmcheck.py

diff --git a/v2v/Makefile.am b/v2v/Makefile.am
index 2b939368e..067e11049 100644
--- a/v2v/Makefile.am
+++ b/v2v/Makefile.am
@@ -26,7 +26,8 @@ BUILT_SOURCES = \
 	config.ml \
 	output_rhv_upload_createvm_source.ml \
 	output_rhv_upload_plugin_source.ml \
-	output_rhv_upload_precheck_source.ml
+	output_rhv_upload_precheck_source.ml \
+	output_rhv_upload_vmcheck_source.ml
 
 EXTRA_DIST = \
 	$(SOURCES_MLI) $(SOURCES_ML) $(SOURCES_C) \
@@ -36,6 +37,7 @@ EXTRA_DIST = \
 	rhv-upload-createvm.py \
 	rhv-upload-plugin.py \
 	rhv-upload-precheck.py \
+	rhv-upload-vmcheck.py \
 	var_expander_tests.ml \
 	v2v_unit_tests.ml \
 	virt-v2v.pod \
@@ -88,6 +90,7 @@ SOURCES_MLI = \
 	output_rhv_upload_createvm_source.mli \
 	output_rhv_upload_plugin_source.mli \
 	output_rhv_upload_precheck_source.mli \
+	output_rhv_upload_vmcheck_source.mli \
 	output_vdsm.mli \
 	parse_ova.mli \
 	parse_ovf_from_ova.mli \
@@ -153,6 +156,7 @@ SOURCES_ML = \
 	output_rhv_upload_createvm_source.ml \
 	output_rhv_upload_plugin_source.ml \
 	output_rhv_upload_precheck_source.ml \
+	output_rhv_upload_vmcheck_source.ml \
 	output_rhv_upload.ml \
 	output_vdsm.ml \
 	output_openstack.ml \
@@ -173,6 +177,8 @@ output_rhv_upload_plugin_source.ml: $(srcdir)/rhv-upload-plugin.py
 	$(srcdir)/embed.sh code $^ $@
 output_rhv_upload_precheck_source.ml: $(srcdir)/rhv-upload-precheck.py
 	$(srcdir)/embed.sh code $^ $@
+output_rhv_upload_vmcheck_source.ml: $(srcdir)/rhv-upload-vmcheck.py
+	$(srcdir)/embed.sh code $^ $@
 
 if HAVE_OCAML
 
diff --git a/v2v/output_rhv_upload.ml b/v2v/output_rhv_upload.ml
index 5bc6a4007..91552536a 100644
--- a/v2v/output_rhv_upload.ml
+++ b/v2v/output_rhv_upload.ml
@@ -94,10 +94,13 @@ class output_rhv_upload output_alloc output_conn
   let diskid_file_of_id id = tmpdir // sprintf "diskid.%d" id in
 
-  (* Create Python scripts for precheck, plugin and create VM. *)
+  (* Create Python scripts for precheck, vmcheck, plugin and create VM. *)
   let precheck_script =
     Python_script.create ~name:"rhv-upload-precheck.py"
                          Output_rhv_upload_precheck_source.code in
+  let vmcheck_script =
+    Python_script.create ~name:"rhv-upload-vmcheck.py"
+                         Output_rhv_upload_vmcheck_source.code in
   let plugin_script =
     Python_script.create ~name:"rhv-upload-plugin.py"
                          Output_rhv_upload_plugin_source.code in
@@ -226,6 +229,9 @@ object
     error_unless_nbdkit_working ();
     error_unless_nbdkit_python_plugin_working ();
     error_unless_output_alloc_sparse ();
+    (* Python code prechecks. *)
+    if Python_script.run_command precheck_script json_params [] <> 0 then
+      error (f_"failed server prechecks, see earlier errors");
     if have_selinux then
       error_unless_nbdkit_compiled_with_selinux ()
 
@@ -247,11 +253,11 @@ object
     let json_params =
       ("output_name", JSON.String output_name) :: json_params in
 
-    (* Python code prechecks.  These can't run in #precheck because
+    (* Check that the VM does not exist.  This can't run in #precheck because
      * we need to know the name of the virtual machine.
      *)
-    if Python_script.run_command precheck_script json_params [] <> 0 then
-      error (f_"failed server prechecks, see earlier errors");
+    if Python_script.run_command vmcheck_script json_params [] <> 0 then
+      error (f_"failed vmchecks, see earlier errors");
 
     (* Create an nbdkit instance for each disk and set the
      * target URI to point to the NBD socket.
diff --git a/v2v/output_rhv_upload_vmcheck_source.mli b/v2v/output_rhv_upload_vmcheck_source.mli
new file mode 100644
index 000000000..c1bafa15b
--- /dev/null
+++ b/v2v/output_rhv_upload_vmcheck_source.mli
@@ -0,0 +1,19 @@
+(* virt-v2v
+ * Copyright (C) 2018 Red Hat Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ *)
+
+val code : string
diff --git a/v2v/rhv-upload-precheck.py b/v2v/rhv-upload-precheck.py
index b79f91b4a..6253c26ac 100644
--- a/v2v/rhv-upload-precheck.py
+++ b/v2v/rhv-upload-precheck.py
@@ -60,16 +60,6 @@ connection = sdk.Connection(
 
 system_service = connection.system_service()
 
-# Find if a virtual machine already exists with that name.
-vms_service = system_service.vms_service()
-vms = vms_service.list(
-    search = ("name=%s" % params['output_name']),
-)
-if len(vms) > 0:
-    vm = vms[0]
-    raise RuntimeError("VM already exists with name ‘%s’, id ‘%s’" %
-                       (params['output_name'], vm.id))
-
 # Check whether the specified cluster exists.
 clusters_service = system_service.clusters_service()
 clusters = clusters_service.list(
diff --git a/v2v/rhv-upload-vmcheck.py b/v2v/rhv-upload-vmcheck.py
new file mode 100644
index 000000000..76a1a23c1
--- /dev/null
+++ b/v2v/rhv-upload-vmcheck.py
@@ -0,0 +1,73 @@
+# -*- python -*-
+# oVirt or RHV pre-upload VM checks used by ‘virt-v2v -o rhv-upload’
+# Copyright (C) 2018 Red Hat Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import json
+import logging
+import sys
+import time
+
+from http.client import HTTPSConnection
+from urllib.parse import urlparse
+
+import ovirtsdk4 as sdk
+import ovirtsdk4.types as types
+
+# Parameters are passed in via a JSON doc from the OCaml code.
+# Because this Python code ships embedded inside virt-v2v there
+# is no formal API here.
+params = None
+
+if len(sys.argv) != 2:
+    raise RuntimeError("incorrect number of parameters")
+
+# Parameters are passed in via a JSON document.
+with open(sys.argv[1], 'r') as fp:
+    params = json.load(fp)
+
+# What is passed in is a password file, read the actual password.
+with open(params['output_password'], 'r') as fp:
+    output_password = fp.read()
+output_password = output_password.rstrip()
+
+# Parse out the username from the output_conn URL.
+parsed = urlparse(params['output_conn'])
+username = parsed.username or "admin@internal"
+
+# Connect to the server.
+connection = sdk.Connection(
+    url = params['output_conn'],
+    username = username,
+    password = output_password,
+    ca_file = params['rhv_cafile'],
+    log = logging.getLogger(),
+    insecure = params['insecure'],
+)
+
+system_service = connection.system_service()
+
+# Find if a virtual machine already exists with that name.
+vms_service = system_service.vms_service()
+vms = vms_service.list(
+    search = ("name=%s" % params['output_name']),
+)
+if len(vms) > 0:
+    vm = vms[0]
+    raise RuntimeError("VM already exists with name ‘%s’, id ‘%s’" %
+                       (params['output_name'], vm.id))
+
+# Otherwise everything is OK, exit with no error.
-- 
2.21.0
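Because the vmcheck script is driven entirely by that JSON parameter
file, it can also be exercised outside virt-v2v. A hypothetical
standalone driver (the file names and values here are made up for
illustration; the key names match what the script actually reads):

    import json
    import subprocess
    import tempfile

    params = {
        "output_conn": "https://ovirt.example.com/ovirt-engine/api",
        "output_password": "/tmp/ovirt-password",  # file containing the password
        "output_name": "guest-to-import",
        "rhv_cafile": "/tmp/ca.pem",
        "insecure": False,
    }

    # Write the parameters to a temporary JSON file, as virt-v2v does.
    with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as fp:
        json.dump(params, fp)
        params_file = fp.name

    # Exits non-zero (RuntimeError) if a VM named output_name already exists.
    subprocess.check_call(["python3", "rhv-upload-vmcheck.py", params_file])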
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 2/8] v2v: -o rhv-upload: change precheck script to return JSON
This way it is possible to communicate data from the precheck script
back to virt-v2v. For now there are no results, so the resulting JSON
is discarded.
---
 v2v/output_rhv_upload.ml   | 7 ++++++-
 v2v/rhv-upload-precheck.py | 6 +++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/v2v/output_rhv_upload.ml b/v2v/output_rhv_upload.ml
index 91552536a..e74a897b9 100644
--- a/v2v/output_rhv_upload.ml
+++ b/v2v/output_rhv_upload.ml
@@ -230,8 +230,13 @@ object
     error_unless_nbdkit_python_plugin_working ();
     error_unless_output_alloc_sparse ();
     (* Python code prechecks. *)
-    if Python_script.run_command precheck_script json_params [] <> 0 then
+    let precheck_fn = tmpdir // "v2vprecheck.json" in
+    let fd = Unix.openfile precheck_fn [O_WRONLY; O_CREAT] 0o600 in
+    if Python_script.run_command ~stdout_fd:fd precheck_script json_params [] <> 0 then
       error (f_"failed server prechecks, see earlier errors");
+    let json = JSON_parser.json_parser_tree_parse_file precheck_fn in
+    debug "precheck output parsed as: %s"
+          (JSON.string_of_doc ~fmt:JSON.Indented ["", json]);
     if have_selinux then
       error_unless_nbdkit_compiled_with_selinux ()
 
diff --git a/v2v/rhv-upload-precheck.py b/v2v/rhv-upload-precheck.py
index 6253c26ac..a317d997c 100644
--- a/v2v/rhv-upload-precheck.py
+++ b/v2v/rhv-upload-precheck.py
@@ -70,4 +70,8 @@ if len(clusters) == 0:
     raise RuntimeError("The cluster ‘%s’ does not exist" %
                        (params['rhv_cluster']))
 
-# Otherwise everything is OK, exit with no error.
+# Otherwise everything is OK, print a JSON with the results.
+results = {
+}
+
+json.dump(results, sys.stdout)
-- 
2.21.0
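On the OCaml side the script's stdout is redirected to a temporary
file and parsed with JSON_parser, as the diff above shows. The same
handshake sketched in Python, for clarity only (the real caller is the
OCaml code; the params file name is illustrative):

    import json
    import subprocess

    proc = subprocess.run(
        ["python3", "rhv-upload-precheck.py", "params.json"],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        raise RuntimeError("failed server prechecks, see earlier errors")

    # An empty object for now; later patches add result keys.
    results = json.loads(proc.stdout)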
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 3/8] v2v: -o rhv-upload: improve lookup of specified resources (RHBZ#1612653)
Improve the way the precheck script checks for the specified resources:
- look directly for a data center with the specified storage domain
- get the storage domain object from the storage domains attached to
  the data center found
- similarly, look for the specified cluster among the ones attached to
  the data center found

When everything is found, return the UUIDs of the storage domain and of
the cluster back to virt-v2v, which will store them.

Similarly, rework the createvm script to directly get the requested
cluster, instead of looking for it once again. Also, since the UUID of
the storage domain is available in virt-v2v already, use it directly
instead of using a placeholder.

This should fix a number of issues:
- nonexistent/unattached storage domains are rejected outright
- the cluster is rejected if it is not part of the same data center as
  the selected storage domain
- renaming the specified storage domain during the data copying will
  not cause the conversion to fail (which will still use the specified
  storage domain, no matter the new name)
---
 v2v/output_rhv_upload.ml   | 24 +++++++++++++++++++-----
 v2v/rhv-upload-createvm.py | 11 ++++-------
 v2v/rhv-upload-precheck.py | 30 ++++++++++++++++++++++++------
 3 files changed, 47 insertions(+), 18 deletions(-)

diff --git a/v2v/output_rhv_upload.ml b/v2v/output_rhv_upload.ml
index e74a897b9..5599ef2c2 100644
--- a/v2v/output_rhv_upload.ml
+++ b/v2v/output_rhv_upload.ml
@@ -223,6 +223,11 @@ See also the virt-v2v-output-rhv(1) manual.")
 object
   inherit output
 
+  (* The storage domain UUID. *)
+  val mutable rhv_storagedomain_uuid = None
+  (* The cluster UUID. *)
+  val mutable rhv_cluster_uuid = None
+
   method precheck () =
     Python_script.error_unless_python_interpreter_found ();
     error_unless_ovirtsdk4_module_available ();
@@ -237,6 +242,10 @@ object
     let json = JSON_parser.json_parser_tree_parse_file precheck_fn in
     debug "precheck output parsed as: %s"
           (JSON.string_of_doc ~fmt:JSON.Indented ["", json]);
+    rhv_storagedomain_uuid <-
+      Some (JSON_parser.object_get_string "rhv_storagedomain_uuid" json);
+    rhv_cluster_uuid <-
+      Some (JSON_parser.object_get_string "rhv_cluster_uuid" json);
     if have_selinux then
       error_unless_nbdkit_compiled_with_selinux ()
 
@@ -383,11 +392,11 @@ If the messages above are not sufficient to diagnose the problem then add the
         diskid
     ) targets in
 
-    (* We don't have the storage domain UUID, but instead we write
-     * in a magic value which the Python code (which can get it)
-     * will substitute.
-     *)
-    let sd_uuid = "@SD_UUID@" in
+    (* The storage domain UUID. *)
+    let sd_uuid =
+      match rhv_storagedomain_uuid with
+      | None -> assert false
+      | Some uuid -> uuid in
 
     (* The volume and VM UUIDs are made up. *)
     let vol_uuids = List.map (fun _ -> uuidgen ()) targets
@@ -401,6 +410,11 @@ If the messages above are not sufficient to diagnose the problem then add the
                             OVirt in
     let ovf = DOM.doc_to_string ovf in
 
+    let json_params =
+      match rhv_cluster_uuid with
+      | None -> assert false
+      | Some uuid -> ("rhv_cluster_uuid", JSON.String uuid) :: json_params in
+
     let ovf_file = tmpdir // "vm.ovf" in
     with_open_out ovf_file (fun chan -> output_string chan ovf);
     if Python_script.run_command createvm_script json_params [ovf_file] <> 0
diff --git a/v2v/rhv-upload-createvm.py b/v2v/rhv-upload-createvm.py
index 1d0e8c95d..ed57a9b20 100644
--- a/v2v/rhv-upload-createvm.py
+++ b/v2v/rhv-upload-createvm.py
@@ -65,17 +65,14 @@ connection = sdk.Connection(
 
 system_service = connection.system_service()
 
-# Get the storage domain UUID and substitute it into the OVF doc.
-sds_service = system_service.storage_domains_service()
-sd = sds_service.list(search=("name=%s" % params['output_storage']))[0]
-sd_uuid = sd.id
-
-ovf = ovf.replace("@SD_UUID@", sd_uuid)
+# Get the cluster.
+cluster = system_service.clusters_service().cluster_service(params['rhv_cluster_uuid'])
+cluster = cluster.get()
 
 vms_service = system_service.vms_service()
 vm = vms_service.add(
     types.Vm(
-        cluster=types.Cluster(name = params['rhv_cluster']),
+        cluster=cluster,
         initialization=types.Initialization(
             configuration = types.Configuration(
                 type = types.ConfigurationType.OVA,
diff --git a/v2v/rhv-upload-precheck.py b/v2v/rhv-upload-precheck.py
index a317d997c..9ccfd1fdf 100644
--- a/v2v/rhv-upload-precheck.py
+++ b/v2v/rhv-upload-precheck.py
@@ -60,18 +60,36 @@ connection = sdk.Connection(
 
 system_service = connection.system_service()
 
-# Check whether the specified cluster exists.
-clusters_service = system_service.clusters_service()
-clusters = clusters_service.list(
-    search='name=%s' % params['rhv_cluster'],
+# Check whether there is a datacenter for the specified storage.
+data_centers = system_service.data_centers_service().list(
+    search='storage.name=%s' % params['output_storage'],
     case_sensitive=True,
 )
+if len(data_centers) == 0:
+    # The storage domain is not attached to a datacenter
+    # (shouldn't happen, would fail on disk creation).
+    raise RuntimeError("The storage domain ‘%s’ is not attached to a DC" %
+                       (params['output_storage']))
+datacenter = data_centers[0]
+
+# Get the storage domain.
+storage_domains = connection.follow_link(datacenter.storage_domains)
+storage_domain = [sd for sd in storage_domains if sd.name == params['output_storage']][0]
+
+# Get the cluster.
+clusters = connection.follow_link(datacenter.clusters)
+clusters = [cluster for cluster in clusters if cluster.name == params['rhv_cluster']]
 if len(clusters) == 0:
-    raise RuntimeError("The cluster ‘%s’ does not exist" %
-                       (params['rhv_cluster']))
+    raise RuntimeError("The cluster ‘%s’ is not part of the DC ‘%s’, "
+                       "where the storage domain ‘%s’ is" %
+                       (params['rhv_cluster'], datacenter.name,
+                        params['output_storage']))
+cluster = clusters[0]
 
 # Otherwise everything is OK, print a JSON with the results.
 results = {
+    "rhv_storagedomain_uuid": storage_domain.id,
+    "rhv_cluster_uuid": cluster.id,
 }
 
 json.dump(results, sys.stdout)
-- 
2.21.0
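The new lookup boils down to one engine search plus two link
traversals, which is also why a renamed storage domain can no longer
break the conversion midway: after the precheck, only UUIDs are used.
Condensed from the precheck above (connection as set up earlier in the
script; next() raising StopIteration stands in for the explicit error
paths):

    # Find the data center owning the storage domain, then resolve the
    # storage domain and cluster objects attached to that data center.
    data_centers = connection.system_service().data_centers_service().list(
        search='storage.name=%s' % params['output_storage'],
        case_sensitive=True,
    )
    datacenter = data_centers[0]  # the precheck errors out if empty

    storage_domain = next(
        sd for sd in connection.follow_link(datacenter.storage_domains)
        if sd.name == params['output_storage'])
    cluster = next(
        c for c in connection.follow_link(datacenter.clusters)
        if c.name == params['rhv_cluster'])

    results = {
        "rhv_storagedomain_uuid": storage_domain.id,
        "rhv_cluster_uuid": cluster.id,
    }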
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 4/8] v2v: -o rhv-upload: tell whether a SD actually exists
If there is no DC with the specified storage domain attached to it, it
can mean that the SD does not exist.
---
 v2v/rhv-upload-precheck.py | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/v2v/rhv-upload-precheck.py b/v2v/rhv-upload-precheck.py
index 9ccfd1fdf..0b8087adb 100644
--- a/v2v/rhv-upload-precheck.py
+++ b/v2v/rhv-upload-precheck.py
@@ -66,6 +66,15 @@ data_centers = system_service.data_centers_service().list(
     case_sensitive=True,
 )
 if len(data_centers) == 0:
+    storage_domains = system_service.storage_domains_service().list(
+        search='name=%s' % params['output_storage'],
+        case_sensitive=True,
+    )
+    if len(storage_domains) == 0:
+        # The storage domain does not even exist.
+        raise RuntimeError("The storage domain ‘%s’ does not exist" %
+                           (params['output_storage']))
+
     # The storage domain is not attached to a datacenter
     # (shouldn't happen, would fail on disk creation).
     raise RuntimeError("The storage domain ‘%s’ is not attached to a DC" %
-- 
2.21.0
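The two error paths rest on two different engine searches:
'storage.name=…' against data centers versus plain 'name=…' against
storage domains. Pulled together as one helper — a condensed
restatement of the script logic above, not new behaviour (the function
name is made up):

    def diagnose_storage(connection, name):
        # Return the data center owning storage domain 'name', or raise
        # an error distinguishing "missing" from "unattached".
        system_service = connection.system_service()
        dcs = system_service.data_centers_service().list(
            search='storage.name=%s' % name, case_sensitive=True)
        if dcs:
            return dcs[0]
        sds = system_service.storage_domains_service().list(
            search='name=%s' % name, case_sensitive=True)
        if not sds:
            raise RuntimeError("The storage domain ‘%s’ does not exist" % name)
        raise RuntimeError("The storage domain ‘%s’ is not attached to a DC" % name)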
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 5/8] v2v: add output#disk_copied hook
Add a simple method in the Output class to do work right after a disk
was successfully copied.
---
 v2v/types.ml  | 1 +
 v2v/types.mli | 4 ++++
 v2v/v2v.ml    | 9 ++++++++-
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/v2v/types.ml b/v2v/types.ml
index 77f879200..714b30014 100644
--- a/v2v/types.ml
+++ b/v2v/types.ml
@@ -521,6 +521,7 @@ class virtual output = object
   method override_output_format (_ : overlay) = (None : string option)
   method virtual prepare_targets : source -> (string * overlay) list -> target_buses -> guestcaps -> inspect -> target_firmware -> target_file list
   method disk_create = (open_guestfs ())#disk_create
+  method disk_copied (_ : target) (_ : int) (_ : int) = ()
   method virtual create_metadata : source -> target list -> target_buses -> guestcaps -> inspect -> target_firmware -> unit
   method keep_serial_console = true
   method install_rhev_apt = false
diff --git a/v2v/types.mli b/v2v/types.mli
index be9406100..bf573d56d 100644
--- a/v2v/types.mli
+++ b/v2v/types.mli
@@ -485,6 +485,10 @@ class virtual output : object
   (** Called in order to create disks on the target.  The method has the
       same signature as Guestfs#disk_create.  Normally you should {b not}
       define this since the default method calls Guestfs#disk_create. *)
+  method disk_copied : target -> int -> int -> unit
+  (** Called after a disk was successfully copied on the target.
+      The second parameter is the index of the copied disk (starting
+      from 0), and the third is the number of disks in total. *)
   method virtual create_metadata : source -> target list -> target_buses -> guestcaps -> inspect -> target_firmware -> unit
   (** Called after conversion and copying to create metadata and
       do any finalization. *)
diff --git a/v2v/v2v.ml b/v2v/v2v.ml
index c056aa787..4ee15663f 100644
--- a/v2v/v2v.ml
+++ b/v2v/v2v.ml
@@ -792,7 +792,14 @@ and copy_targets cmdline targets input output
             pc;
           if pc < 0. then eprintf " ! ESTIMATE TOO LOW !";
           eprintf "\n%!";
-      )
+      );
+
+      (* Let the output mode know that the disk was copied successfully,
+       * so it can perform any operations without waiting for all the
+       * other disks to be copied (i.e. before the metadata is actually
+       * created).
+       *)
+      output#disk_copied t i nr_disks
   ) targets
 
 (* Update the target_actual_size field in the target structure. *)
-- 
2.21.0
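The hook is a textbook template-method extension point: a no-op default
on the base class, overridden only by the outputs that care. An
illustrative Python analogue of the OCaml class change (names mirror
the OCaml ones; this is not actual virt-v2v code):

    class Output:
        def disk_copied(self, target, i, nr_disks):
            # Called after disk i (0-based) of nr_disks was copied;
            # default: nothing to do.
            pass

    class OutputRhvUpload(Output):
        def disk_copied(self, target, i, nr_disks):
            # e.g. pick up the disk UUID written by the nbdkit plugin
            print("disk %d/%d copied" % (i + 1, nr_disks))

    # The copy loop calls the hook right after each disk finishes:
    targets = ["sda", "sdb"]  # stand-ins for target disks
    output = OutputRhvUpload()
    for i, target in enumerate(targets):
        # ... copy the disk here ...
        output.disk_copied(target, i, len(targets))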
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 6/8] v2v: -o rhv-upload: collect disks UUIDs right after copy
Instead of waiting for the completion of the nbdkit transfers to get the
UUIDs of the disks, use the new #disk_copied hook to do that after each
disk is copied.

This causes almost no change in behaviour on rhv-upload, except for the
--no-copy mode:
- previously it used to hit the 5 minute timeout while waiting for the
  finalization of the first disk
- now it asserts on the different number of collected UUIDs vs the
  actual targets; at the moment there is nothing else that can be done,
  as this assumption is needed e.g. when creating the OVF file
---
 v2v/output_rhv_upload.ml | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/v2v/output_rhv_upload.ml b/v2v/output_rhv_upload.ml
index 5599ef2c2..0952af188 100644
--- a/v2v/output_rhv_upload.ml
+++ b/v2v/output_rhv_upload.ml
@@ -227,6 +227,8 @@ object
   val mutable rhv_storagedomain_uuid = None
   (* The cluster UUID. *)
   val mutable rhv_cluster_uuid = None
+  (* List of disk UUIDs. *)
+  val mutable disks_uuids = []
 
   method precheck () =
     Python_script.error_unless_python_interpreter_found ();
@@ -374,23 +376,21 @@ If the messages above are not sufficient to diagnose the problem then add the
       TargetURI ("json:" ^ JSON.string_of_doc json_params)
   ) overlays
 
-  method create_metadata source targets _ guestcaps inspect target_firmware =
-    (* Get the UUIDs of each disk image.  These files are written
-     * out by the nbdkit plugins on successful finalization of the
+  method disk_copied t i nr_disks =
+    (* Get the UUID of the disk image.  This file is written
+     * out by the nbdkit plugin on successful finalization of the
      * transfer.
      *)
-    let nr_disks = List.length targets in
-    let image_uuids =
-      List.mapi (
-        fun i t ->
-          let id = t.target_overlay.ov_source.s_disk_id in
-          let diskid_file = diskid_file_of_id id in
-          if not (wait_for_file diskid_file finalization_timeout) then
-            error (f_"transfer of disk %d/%d failed, see earlier error messages")
-                  (i+1) nr_disks;
-          let diskid = read_whole_file diskid_file in
-          diskid
-      ) targets in
+    let id = t.target_overlay.ov_source.s_disk_id in
+    let diskid_file = diskid_file_of_id id in
+    if not (wait_for_file diskid_file finalization_timeout) then
+      error (f_"transfer of disk %d/%d failed, see earlier error messages")
+            (i+1) nr_disks;
+    let diskid = read_whole_file diskid_file in
+    disks_uuids <- disks_uuids @ [diskid];
+
+  method create_metadata source targets _ guestcaps inspect target_firmware =
+    assert (List.length disks_uuids = List.length targets);
 
     (* The storage domain UUID. *)
     let sd_uuid =
@@ -406,7 +406,7 @@ If the messages above are not sufficient to diagnose the problem then add the
     let ovf =
       Create_ovf.create_ovf source targets guestcaps inspect
                             target_firmware output_alloc
-                            sd_uuid image_uuids vol_uuids vm_uuid
+                            sd_uuid disks_uuids vol_uuids vm_uuid
                             OVirt in
     let ovf = DOM.doc_to_string ovf in
-- 
2.21.0
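The rhv-upload implementation of the hook blocks on the diskid file
that the nbdkit plugin writes once the transfer is finalized. The
wait-for-file handshake, sketched in Python (the OCaml code uses
wait_for_file with finalization_timeout; the path below is a
hypothetical tmpdir entry):

    import os
    import time

    def wait_for_file(path, timeout):
        # Poll for the file's existence, giving up after 'timeout' seconds.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(path):
                return True
            time.sleep(1)
        return False

    diskid_file = "/tmp/v2v.ABCDEF/diskid.1"    # hypothetical path
    if not wait_for_file(diskid_file, 5 * 60):  # the 5 minute timeout above
        raise RuntimeError("transfer of disk failed, see earlier error messages")
    with open(diskid_file) as fp:
        diskid = fp.read()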
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 7/8] v2v: -o rhv-upload: remove uploaded disks on failure
In case the whole conversion fails, run a new Python script to clean up
all the uploaded (and finalized) disks.
---
 v2v/Makefile.am                              |  6 ++
 v2v/output_rhv_upload.ml                     | 29 +++++++-
 v2v/output_rhv_upload_deletedisks_source.mli | 19 ++++++
 v2v/rhv-upload-deletedisks.py                | 71 ++++++++++++++++++++
 4 files changed, 124 insertions(+), 1 deletion(-)
 create mode 100644 v2v/output_rhv_upload_deletedisks_source.mli
 create mode 100644 v2v/rhv-upload-deletedisks.py

diff --git a/v2v/Makefile.am b/v2v/Makefile.am
index 067e11049..84b56d259 100644
--- a/v2v/Makefile.am
+++ b/v2v/Makefile.am
@@ -25,6 +25,7 @@ BUILT_SOURCES = \
 	$(generator_built) \
 	config.ml \
 	output_rhv_upload_createvm_source.ml \
+	output_rhv_upload_deletedisks_source.ml \
 	output_rhv_upload_plugin_source.ml \
 	output_rhv_upload_precheck_source.ml \
 	output_rhv_upload_vmcheck_source.ml
@@ -35,6 +36,7 @@ EXTRA_DIST = \
 	copy_to_local.mli \
 	embed.sh \
 	rhv-upload-createvm.py \
+	rhv-upload-deletedisks.py \
 	rhv-upload-plugin.py \
 	rhv-upload-precheck.py \
 	rhv-upload-vmcheck.py \
@@ -88,6 +90,7 @@ SOURCES_MLI = \
 	output_rhv.mli \
 	output_rhv_upload.mli \
 	output_rhv_upload_createvm_source.mli \
+	output_rhv_upload_deletedisks_source.mli \
 	output_rhv_upload_plugin_source.mli \
 	output_rhv_upload_precheck_source.mli \
 	output_rhv_upload_vmcheck_source.mli \
@@ -154,6 +157,7 @@ SOURCES_ML = \
 	output_qemu.ml \
 	output_rhv.ml \
 	output_rhv_upload_createvm_source.ml \
+	output_rhv_upload_deletedisks_source.ml \
 	output_rhv_upload_plugin_source.ml \
 	output_rhv_upload_precheck_source.ml \
 	output_rhv_upload_vmcheck_source.ml \
@@ -173,6 +177,8 @@ SOURCES_C = \
 # These files are generated and contain *.py embedded as an OCaml string.
 output_rhv_upload_createvm_source.ml: $(srcdir)/rhv-upload-createvm.py
 	$(srcdir)/embed.sh code $^ $@
+output_rhv_upload_deletedisks_source.ml: $(srcdir)/rhv-upload-deletedisks.py
+	$(srcdir)/embed.sh code $^ $@
 output_rhv_upload_plugin_source.ml: $(srcdir)/rhv-upload-plugin.py
 	$(srcdir)/embed.sh code $^ $@
 output_rhv_upload_precheck_source.ml: $(srcdir)/rhv-upload-precheck.py
diff --git a/v2v/output_rhv_upload.ml b/v2v/output_rhv_upload.ml
index 0952af188..89d8e9612 100644
--- a/v2v/output_rhv_upload.ml
+++ b/v2v/output_rhv_upload.ml
@@ -107,6 +107,9 @@ class output_rhv_upload output_alloc output_conn
   let createvm_script =
     Python_script.create ~name:"rhv-upload-createvm.py"
                          Output_rhv_upload_createvm_source.code in
+  let deletedisks_script =
+    Python_script.create ~name:"rhv-upload-deletedisks.py"
+                         Output_rhv_upload_deletedisks_source.code in
 
   (* Check that the 'ovirtsdk4' Python module is available. *)
   let error_unless_ovirtsdk4_module_available () =
@@ -220,6 +223,18 @@ See also the virt-v2v-output-rhv(1) manual.")
       else args in
     args in
 
+  (* Delete disks.
+   *
+   * This ignores errors since the only time we are doing this is on
+   * the failure path.
+   *)
+  let delete_disks uuids =
+    let ids = List.map (fun uuid -> JSON.String uuid) uuids in
+    let json_params =
+      ("disk_uuids", JSON.List ids) :: json_params in
+    ignore (Python_script.run_command deletedisks_script json_params [])
+  in
+
 object
   inherit output
 
@@ -275,6 +290,13 @@ object
     if Python_script.run_command vmcheck_script json_params [] <> 0 then
       error (f_"failed vmchecks, see earlier errors");
 
+    (* Set up an at-exit handler so we delete the orphan disks on failure. *)
+    at_exit (
+      fun () ->
+        if disks_uuids <> [] then
+          delete_disks disks_uuids
+    );
+
     (* Create an nbdkit instance for each disk and set the
      * target URI to point to the NBD socket.
      *)
@@ -419,7 +441,12 @@ If the messages above are not sufficient to diagnose the problem then add the
     with_open_out ovf_file (fun chan -> output_string chan ovf);
     if Python_script.run_command createvm_script json_params [ovf_file] <> 0
     then
-      error (f_"failed to create virtual machine, see earlier errors")
+      error (f_"failed to create virtual machine, see earlier errors");
+
+    (* The virtual machine was created successfully, so there are no disks
+     * to remove manually.
+     *)
+    disks_uuids <- []
 
 end
 
diff --git a/v2v/output_rhv_upload_deletedisks_source.mli b/v2v/output_rhv_upload_deletedisks_source.mli
new file mode 100644
index 000000000..c1bafa15b
--- /dev/null
+++ b/v2v/output_rhv_upload_deletedisks_source.mli
@@ -0,0 +1,19 @@
+(* virt-v2v
+ * Copyright (C) 2018 Red Hat Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ *)
+
+val code : string
diff --git a/v2v/rhv-upload-deletedisks.py b/v2v/rhv-upload-deletedisks.py
new file mode 100644
index 000000000..7e6c819f8
--- /dev/null
+++ b/v2v/rhv-upload-deletedisks.py
@@ -0,0 +1,71 @@
+# -*- python -*-
+# oVirt or RHV upload create VM used by ‘virt-v2v -o rhv-upload’
+# Copyright (C) 2018 Red Hat Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import json
+import logging
+import sys
+import time
+
+from http.client import HTTPSConnection
+from urllib.parse import urlparse
+
+import ovirtsdk4 as sdk
+import ovirtsdk4.types as types
+
+# Parameters are passed in via a JSON doc from the OCaml code.
+# Because this Python code ships embedded inside virt-v2v there
+# is no formal API here.
+params = None
+
+if len(sys.argv) != 2:
+    raise RuntimeError("incorrect number of parameters")
+
+# Parameters are passed in via a JSON document.
+with open(sys.argv[1], 'r') as fp:
+    params = json.load(fp)
+
+# What is passed in is a password file, read the actual password.
+with open(params['output_password'], 'r') as fp:
+    output_password = fp.read()
+output_password = output_password.rstrip()
+
+# Parse out the username from the output_conn URL.
+parsed = urlparse(params['output_conn'])
+username = parsed.username or "admin@internal"
+
+# Connect to the server.
+connection = sdk.Connection(
+    url = params['output_conn'],
+    username = username,
+    password = output_password,
+    ca_file = params['rhv_cafile'],
+    log = logging.getLogger(),
+    insecure = params['insecure'],
+)
+
+system_service = connection.system_service()
+disks_service = system_service.disks_service()
+
+for uuid in params['disk_uuids']:
+    # Try to get and remove the disk, however do not fail
+    # if it does not exist (maybe removed in the meanwhile).
+    try:
+        disk_service = disks_service.disk_service(uuid)
+        disk_service.remove()
+    except sdk.NotFoundError:
+        pass
-- 
2.21.0
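The cleanup scheme is an at-exit handler paired with a mutable list
that is emptied once the VM has been created, so the handler only acts
on the failure path. The same shape in Python, with atexit standing in
for OCaml's at_exit (illustrative only, not virt-v2v code):

    import atexit

    disks_uuids = []

    def delete_disks(uuids):
        # Errors would be ignored here: this only runs on failure.
        for uuid in uuids:
            print("would remove disk %s" % uuid)

    # Only delete if any UUIDs are still pending at process exit.
    atexit.register(lambda: disks_uuids and delete_disks(disks_uuids))

    # ... after each successful copy: disks_uuids.append(diskid) ...
    # ... after createvm succeeds, nothing is left to clean up:
    disks_uuids.clear()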
Pino Toscano
2019-Sep-16 17:13 UTC
[Libguestfs] [PATCH 8/8] v2v: -o rhv-upload: cancel disk transfer on failure
Make sure to cancel the transfer in RHV in case of failure during the
copying of a disk: this way, the disk can actually be removed by RHV
itself.
---
 v2v/rhv-upload-plugin.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/v2v/rhv-upload-plugin.py b/v2v/rhv-upload-plugin.py
index 4d61a089b..57e90484f 100644
--- a/v2v/rhv-upload-plugin.py
+++ b/v2v/rhv-upload-plugin.py
@@ -485,6 +485,8 @@ def flush(h):
     r.read()
 
 def delete_disk_on_failure(h):
+    transfer_service = h['transfer_service']
+    transfer_service.cancel()
     disk_service = h['disk_service']
     disk_service.remove()
-- 
2.21.0
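The ordering is the point of the fix: while an image transfer is still
active, RHV keeps hold of the disk (which is why removing the disk
alone was not enough). The fixed failure path, condensed from the
plugin:

    def delete_disk_on_failure(h):
        # Cancel the active transfer first, so RHV releases the disk...
        h['transfer_service'].cancel()
        # ...then the disk itself can be removed.
        h['disk_service'].remove()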
Martin Kletzander
2019-Sep-16 19:05 UTC
Re: [Libguestfs] [PATCH 0/8] v2v: various fixes for -o rhv-upload
On Mon, Sep 16, 2019 at 07:13:43PM +0200, Pino Toscano wrote:
>This patch series fixes various issues in the rhv-upload output mode:
>- properly find and use RHV resources
>- cleanup orphan disks, and possibly the current disk transfer on
>  failure
>
>In reality, the first 4 patches deal with resources, and the other 4
>with cleanups. The latter block can theoretically be sent alone --
>I just happened to start working on it as part of my "let's fix
>rhv-upload oddities" initiative.

From the looks of it, it all makes sense to me, but I did not have a
chance to try it out. I might have some tomorrow if you want someone to
test it as well.

I've just one nitpick, and that is the headers of newly created files,
which mention (a) the wrong copyright year and/or (b) the wrong file
description. But that's just a triviality due to copy-pasting some
duplicated comments.
Richard W.M. Jones
2019-Sep-17 10:17 UTC
Re: [Libguestfs] [PATCH 1/8] v2v: -o rhv-upload: split vmcheck out of precheck
On Mon, Sep 16, 2019 at 07:13:44PM +0200, Pino Toscano wrote:
> Split the VM existence check out of the precheck script to a new vmcheck
> script, and invoke that in #prepare_targets. Invoke the precheck script
> in #precheck, as now it can be run with only the values of command line
> options.
>
> This does not change which checks are performed; however, an invalid
> cluster name will make virt-v2v fail way earlier (even before connecting
> to the source).

It's only cosmetic but you might want to change:

> +(* virt-v2v
> + * Copyright (C) 2018 Red Hat Inc.

to "2018-2019" (in a few places).

The patch makes sense, so: ACK.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
Richard W.M. Jones
2019-Sep-17 10:33 UTC
Re: [Libguestfs] [PATCH 3/8] v2v: -o rhv-upload: improve lookup of specified resources (RHBZ#1612653)
On Mon, Sep 16, 2019 at 07:13:46PM +0200, Pino Toscano wrote:
> -    (* We don't have the storage domain UUID, but instead we write
> -     * in a magic value which the Python code (which can get it)
> -     * will substitute.
> -     *)
> -    let sd_uuid = "@SD_UUID@" in
> +    (* The storage domain UUID. *)
> +    let sd_uuid =
> +      match rhv_storagedomain_uuid with
> +      | None -> assert false
> +      | Some uuid -> uuid in

Nice!

ACK patches 2 & 3.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
Richard W.M. Jones
2019-Sep-17 10:34 UTC
Re: [Libguestfs] [PATCH 4/8] v2v: -o rhv-upload: tell whether a SD actually exists
On Mon, Sep 16, 2019 at 07:13:47PM +0200, Pino Toscano wrote:
> If there is no DC with the specified storage domain attached to it, it
> can mean that the SD does not exist.
> ---
>  v2v/rhv-upload-precheck.py | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/v2v/rhv-upload-precheck.py b/v2v/rhv-upload-precheck.py
> index 9ccfd1fdf..0b8087adb 100644
> --- a/v2v/rhv-upload-precheck.py
> +++ b/v2v/rhv-upload-precheck.py
> @@ -66,6 +66,15 @@ data_centers = system_service.data_centers_service().list(
>      case_sensitive=True,
>  )
>  if len(data_centers) == 0:
> +    storage_domains = system_service.storage_domains_service().list(
> +        search='name=%s' % params['output_storage'],
> +        case_sensitive=True,
> +    )
> +    if len(storage_domains) == 0:
> +        # The storage domain does not even exist.
> +        raise RuntimeError("The storage domain ‘%s’ does not exist" %
> +                           (params['output_storage']))
> +
>      # The storage domain is not attached to a datacenter
>      # (shouldn't happen, would fail on disk creation).
>      raise RuntimeError("The storage domain ‘%s’ is not attached to a DC" %
> --
> 2.21.0

Looks good, ACK.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
Richard W.M. Jones
2019-Sep-17 10:35 UTC
Re: [Libguestfs] [PATCH 5/8] v2v: add output#disk_copied hook
On Mon, Sep 16, 2019 at 07:13:48PM +0200, Pino Toscano wrote:
> Add a simple method in the Output class to do work right after a disk
> was successfully copied.
> ---
>  v2v/types.ml  | 1 +
>  v2v/types.mli | 4 ++++
>  v2v/v2v.ml    | 9 ++++++++-
>  3 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/v2v/types.ml b/v2v/types.ml
> index 77f879200..714b30014 100644
> --- a/v2v/types.ml
> +++ b/v2v/types.ml
> @@ -521,6 +521,7 @@ class virtual output = object
>    method override_output_format (_ : overlay) = (None : string option)
>    method virtual prepare_targets : source -> (string * overlay) list -> target_buses -> guestcaps -> inspect -> target_firmware -> target_file list
>    method disk_create = (open_guestfs ())#disk_create
> +  method disk_copied (_ : target) (_ : int) (_ : int) = ()
>    method virtual create_metadata : source -> target list -> target_buses -> guestcaps -> inspect -> target_firmware -> unit
>    method keep_serial_console = true
>    method install_rhev_apt = false
> diff --git a/v2v/types.mli b/v2v/types.mli
> index be9406100..bf573d56d 100644
> --- a/v2v/types.mli
> +++ b/v2v/types.mli
> @@ -485,6 +485,10 @@ class virtual output : object
>    (** Called in order to create disks on the target.  The method has the
>        same signature as Guestfs#disk_create.  Normally you should {b not}
>        define this since the default method calls Guestfs#disk_create. *)
> +  method disk_copied : target -> int -> int -> unit
> +  (** Called after a disk was successfully copied on the target.
> +      The second parameter is the index of the copied disk (starting
> +      from 0), and the third is the number of disks in total. *)
>    method virtual create_metadata : source -> target list -> target_buses -> guestcaps -> inspect -> target_firmware -> unit
>    (** Called after conversion and copying to create metadata and
>        do any finalization. *)
> diff --git a/v2v/v2v.ml b/v2v/v2v.ml
> index c056aa787..4ee15663f 100644
> --- a/v2v/v2v.ml
> +++ b/v2v/v2v.ml
> @@ -792,7 +792,14 @@ and copy_targets cmdline targets input output
>              pc;
>            if pc < 0. then eprintf " ! ESTIMATE TOO LOW !";
>            eprintf "\n%!";
> -      )
> +      );
> +
> +      (* Let the output mode know that the disk was copied successfully,
> +       * so it can perform any operations without waiting for all the
> +       * other disks to be copied (i.e. before the metadata is actually
> +       * created).
> +       *)
> +      output#disk_copied t i nr_disks
>    ) targets
>
>  (* Update the target_actual_size field in the target structure. *)

There's a Unicode-art diagram in v2v/types.mli which also should be
updated:

https://github.com/libguestfs/libguestfs/blob/dea9636c596acd030c9955057863cf080bdd89fb/v2v/types.mli#L419

ACK if you also update this diagram.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
Richard W.M. Jones
2019-Sep-17 10:38 UTC
Re: [Libguestfs] [PATCH 6/8] v2v: -o rhv-upload: collect disks UUIDs right after copy
On Mon, Sep 16, 2019 at 07:13:49PM +0200, Pino Toscano wrote:
> Instead of waiting for the completion of the nbdkit transfers to get the
> UUIDs of the disks, use the new #disk_copied hook to do that after each
> disk is copied.
>
> This causes almost no change in behaviour on rhv-upload, except for the
> --no-copy mode:
> - previously it used to hit the 5 minute timeout while waiting for the
>   finalization of the first disk
> - now it asserts on the different number of collected UUIDs vs the
>   actual targets; at the moment there is nothing else that can be done,
>   as this assumption is needed e.g. when creating the OVF file
> ---
>  v2v/output_rhv_upload.ml | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/v2v/output_rhv_upload.ml b/v2v/output_rhv_upload.ml
> index 5599ef2c2..0952af188 100644
> --- a/v2v/output_rhv_upload.ml
> +++ b/v2v/output_rhv_upload.ml
> @@ -227,6 +227,8 @@ object
>    val mutable rhv_storagedomain_uuid = None
>    (* The cluster UUID. *)
>    val mutable rhv_cluster_uuid = None
> +  (* List of disk UUIDs. *)
> +  val mutable disks_uuids = []
>
>    method precheck () =
>      Python_script.error_unless_python_interpreter_found ();
> @@ -374,23 +376,21 @@ If the messages above are not sufficient to diagnose the problem then add the
>        TargetURI ("json:" ^ JSON.string_of_doc json_params)
>    ) overlays
>
> -  method create_metadata source targets _ guestcaps inspect target_firmware =
> -    (* Get the UUIDs of each disk image.  These files are written
> -     * out by the nbdkit plugins on successful finalization of the
> +  method disk_copied t i nr_disks =
> +    (* Get the UUID of the disk image.  This file is written
> +     * out by the nbdkit plugin on successful finalization of the
>      * transfer.
>      *)
> -    let nr_disks = List.length targets in
> -    let image_uuids =
> -      List.mapi (
> -        fun i t ->
> -          let id = t.target_overlay.ov_source.s_disk_id in
> -          let diskid_file = diskid_file_of_id id in
> -          if not (wait_for_file diskid_file finalization_timeout) then
> -            error (f_"transfer of disk %d/%d failed, see earlier error messages")
> -                  (i+1) nr_disks;
> -          let diskid = read_whole_file diskid_file in
> -          diskid
> -      ) targets in
> +    let id = t.target_overlay.ov_source.s_disk_id in
> +    let diskid_file = diskid_file_of_id id in
> +    if not (wait_for_file diskid_file finalization_timeout) then
> +      error (f_"transfer of disk %d/%d failed, see earlier error messages")
> +            (i+1) nr_disks;
> +    let diskid = read_whole_file diskid_file in
> +    disks_uuids <- disks_uuids @ [diskid];
> +
> +  method create_metadata source targets _ guestcaps inspect target_firmware =
> +    assert (List.length disks_uuids = List.length targets);
>
>      (* The storage domain UUID. *)
>      let sd_uuid =
> @@ -406,7 +406,7 @@ If the messages above are not sufficient to diagnose the problem then add the
>      let ovf =
>        Create_ovf.create_ovf source targets guestcaps inspect
>                              target_firmware output_alloc
> -                            sd_uuid image_uuids vol_uuids vm_uuid
> +                            sd_uuid disks_uuids vol_uuids vm_uuid
>                              OVirt in
>      let ovf = DOM.doc_to_string ovf in

This is certainly cleaner than the existing code, so the change in
behaviour for --no-copy isn't important (and it's still an error in
either case), therefore:

ACK

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
Richard W.M. Jones
2019-Sep-17 10:43 UTC
Re: [Libguestfs] [PATCH 7/8] v2v: -o rhv-upload: remove uploaded disks on failure
I think we should do this slightly differently for consistency with
other classes. See here for example:

https://github.com/libguestfs/libguestfs/blob/dea9636c596acd030c9955057863cf080bdd89fb/v2v/output_openstack.ml#L395-L398

(1) Declare a val mutable in the object:

  val mutable delete_disks_on_exit = true

> @@ -275,6 +290,13 @@ object
>      if Python_script.run_command vmcheck_script json_params [] <> 0 then
>        error (f_"failed vmchecks, see earlier errors");
>
> +    (* Set up an at-exit handler so we delete the orphan disks on failure. *)
> +    at_exit (
> +      fun () ->
> +        if disks_uuids <> [] then

(2) Change this check to:

  if delete_disks_on_exit then

> +          delete_disks disks_uuids
> +    );
> +
>      (* Create an nbdkit instance for each disk and set the
>       * target URI to point to the NBD socket.
>       *)
> @@ -419,7 +441,12 @@ If the messages above are not sufficient to diagnose the problem then add the
>      with_open_out ovf_file (fun chan -> output_string chan ovf);
>      if Python_script.run_command createvm_script json_params [ovf_file] <> 0
>      then
> -      error (f_"failed to create virtual machine, see earlier errors")
> +      error (f_"failed to create virtual machine, see earlier errors");
> +
> +    (* The virtual machine was created successfully, so there are no disks
> +     * to remove manually.
> +     *)
> +    disks_uuids <- []

(3) Remove this hunk.

I would be happy with this patch if it was changed in the way described
above.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
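The suggested variant replaces the "is the list non-empty" test with an
explicit boolean that is flipped off on success, matching how
output_openstack.ml guards its own cleanup. In Python terms
(illustrative sketch of the reviewer's suggestion, not virt-v2v code):

    import atexit

    state = {"delete_disks_on_exit": True, "disks_uuids": []}

    def cleanup():
        # Only act if the conversion did not complete successfully.
        if state["delete_disks_on_exit"]:
            for uuid in state["disks_uuids"]:
                print("would remove leftover disk %s" % uuid)

    atexit.register(cleanup)

    # ... append to state["disks_uuids"] after each copied disk ...
    # On success, right after the VM is created, disable the cleanup:
    state["delete_disks_on_exit"] = False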
Richard W.M. Jones
2019-Sep-17 10:43 UTC
Re: [Libguestfs] [PATCH 8/8] v2v: -o rhv-upload: cancel disk transfer on failure
On Mon, Sep 16, 2019 at 07:13:51PM +0200, Pino Toscano wrote:
> Make sure to cancel the transfer in RHV in case of failure during the
> copying of a disk: this way, the disk can actually be removed by RHV
> itself.
> ---
>  v2v/rhv-upload-plugin.py | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/v2v/rhv-upload-plugin.py b/v2v/rhv-upload-plugin.py
> index 4d61a089b..57e90484f 100644
> --- a/v2v/rhv-upload-plugin.py
> +++ b/v2v/rhv-upload-plugin.py
> @@ -485,6 +485,8 @@ def flush(h):
>      r.read()
>
>  def delete_disk_on_failure(h):
> +    transfer_service = h['transfer_service']
> +    transfer_service.cancel()
>      disk_service = h['disk_service']
>      disk_service.remove()

ACK

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org