Richard W.M. Jones
2018-Mar-06 22:18 UTC
[Libguestfs] [PATCH v4 0/3] v2v: Add -o rhv-upload output mode.
Previous versions:
 v3: https://www.redhat.com/archives/libguestfs/2018-March/msg00000.html
 v2: https://www.redhat.com/archives/libguestfs/2018-February/msg00177.html
 v1: https://www.redhat.com/archives/libguestfs/2018-February/msg00139.html

This completely rethinks the approach taken by the previous patches.

Instead of trying to involve qemu's curl driver, this uses a small
Python 3 nbdkit plugin to interface between qemu and the oVirt server.
The data path is:

  qemu-img convert -------> nbdkit -------> oVirt imageio
                      nbd             https

There are two Python scripts included.  One is the nbdkit plugin.  The
other creates the VM.  As with the previous patches, these scripts get
embedded in virt-v2v at compile time, so effectively there is no API
contract between virt-v2v & the Python code.

With this patch series I am able to (mostly) successfully convert VMs
from local disk to oVirt 4.2, with full end-to-end streaming.  There
is some room for optimization -- in particular uploads are currently
rather slow because we rely on qemu-img batching small requests into
large ones which it doesn't do well, and instead the nbdkit plugin
could batch small writes into larger ones.  Also I noticed (but only
one time) that very long transfers would cause the oVirt ticket to
expire, even though we were writing the whole time.

There are still a few unresolved issues (see patch 3/3) so this is not
quite ready to go upstream yet, but can still be reviewed.  Patches 1
& 2 are the same as posted before.

I did not yet test qcow2 uploads.  Those are "interestingly" different
because qcow2 will require us to read from the remote oVirt server as
well as just stream/write to it.  The pread method for that is written
but has not been tested.

Rich.
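For readers who have not written an nbdkit plugin before, the callback
surface a plugin must implement is small.  Below is a minimal
file-backed sketch of the same shape -- this is not the plugin from
patch 3/3, which replaces the local file I/O here with HTTPS requests
to oVirt imageio, and the 'file' parameter is invented for this sketch:

    # Minimal nbdkit Python plugin sketch: serves a local file over NBD.
    import builtins
    import os

    backing_file = None    # set via nbdkit's key=value parameters

    def config(key, value):
        # nbdkit passes each key=value command line parameter here.
        global backing_file
        if key == "file":
            backing_file = value
        else:
            raise RuntimeError("unknown parameter '%s'" % key)

    def config_complete():
        if backing_file is None:
            raise RuntimeError("file parameter is required")

    def open(readonly):
        # Return a handle; nbdkit passes it back to the I/O callbacks.
        # Plugins must use builtins.open since this function shadows it.
        mode = 'rb' if readonly else 'r+b'
        return {'fp': builtins.open(backing_file, mode)}

    def get_size(h):
        return os.fstat(h['fp'].fileno()).st_size

    def pread(h, count, offset):
        h['fp'].seek(offset)
        return h['fp'].read(count)

    def pwrite(h, buf, offset):
        h['fp'].seek(offset)
        h['fp'].write(buf)

nbdkit would load such a script with something along the lines of
‘nbdkit python3 ./plugin.py file=disk.img’.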
Richard W.M. Jones
2018-Mar-06 22:18 UTC
[Libguestfs] [PATCH v4 1/3] v2v: rhv: Fix virtio-rng and memballoon OVF fragment for RHV.
Without this extra element, oVirt will crash with a Java
NullPointerException (see https://bugzilla.redhat.com/1550123).

Fixes commit dac5fc53acdd1e51be2957c67e1e063e2132e680.
---
 v2v/create_ovf.ml | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/v2v/create_ovf.ml b/v2v/create_ovf.ml
index f5e34d79f..87245fdc8 100644
--- a/v2v/create_ovf.ml
+++ b/v2v/create_ovf.ml
@@ -444,6 +444,9 @@ let rec create_ovf source targets guestcaps inspect
             e "rasd:ResourceType" [] [PCData "0"];
             e "Type" [] [PCData "rng"];
             e "Device" [] [PCData "virtio"];
+            e "SpecParams" [] [
+              e "source" [] [PCData "urandom"]
+            ]
           ]
       );
     if guestcaps.gcaps_virtio_balloon then
@@ -454,6 +457,9 @@ let rec create_ovf source targets guestcaps inspect
             e "rasd:ResourceType" [] [PCData "0"];
             e "Type" [] [PCData "balloon"];
             e "Device" [] [PCData "memballoon"];
+            e "SpecParams" [] [
+              e "model" [] [PCData "virtio"]
+            ]
           ]
       );
-- 
2.13.2
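For context, each call ‘e name attrs children’ in create_ovf.ml emits
one XML element, so the RNG hunk above should produce an OVF fragment
shaped roughly like this (the enclosing <Item> element name and the
surrounding document are assumed here, not copied from an actual oVirt
OVF):

    <Item>
      <rasd:ResourceType>0</rasd:ResourceType>
      <Type>rng</Type>
      <Device>virtio</Device>
      <SpecParams>
        <source>urandom</source>
      </SpecParams>
    </Item>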
Richard W.M. Jones
2018-Mar-06 22:18 UTC
[Libguestfs] [PATCH v4 2/3] v2v: Add -op (output password file) option.
Currently unused, in a future commit this will allow you to pass in a
password to be used when connecting to the target hypervisor.
---
 v2v/cmdline.ml       | 18 ++++++++++++++++++
 v2v/test-v2v-docs.sh |  2 +-
 v2v/virt-v2v.pod     |  7 +++++++
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/v2v/cmdline.ml b/v2v/cmdline.ml
index 58dc72d09..d725ae022 100644
--- a/v2v/cmdline.ml
+++ b/v2v/cmdline.ml
@@ -62,6 +62,7 @@ let parse_cmdline ()
   let output_conn = ref None in
   let output_format = ref None in
   let output_name = ref None in
+  let output_password = ref None in
   let output_storage = ref None in
   let password_file = ref None in
   let vddk_config = ref None in
@@ -219,6 +220,8 @@ let parse_cmdline ()
                                     s_"Set output format";
     [ M"on" ],       Getopt.String ("name", set_string_option_once "-on" output_name),
                                     s_"Rename guest when converting";
+    [ M"op" ],       Getopt.String ("filename", set_string_option_once "-op" output_password),
+                                    s_"Use password from file to connect to output hypervisor";
     [ M"os" ],       Getopt.String ("storage", set_string_option_once "-os" output_storage),
                                     s_"Set output storage location";
     [ L"password-file" ], Getopt.String ("file", set_string_option_once "--password-file" password_file),
@@ -314,6 +317,7 @@ read the man page virt-v2v(1).
   let output_format = !output_format in
   let output_mode = !output_mode in
   let output_name = !output_name in
+  let output_password = !output_password in
   let output_storage = !output_storage in
   let password_file = !password_file in
   let print_source = !print_source in
@@ -461,6 +465,8 @@ read the man page virt-v2v(1).
     | `Glance ->
       if output_conn <> None then
         error_option_cannot_be_used_in_output_mode "glance" "-oc";
+      if output_password <> None then
+        error_option_cannot_be_used_in_output_mode "glance" "-op";
       if output_storage <> None then
         error_option_cannot_be_used_in_output_mode "glance" "-os";
       if qemu_boot then
@@ -472,6 +478,8 @@ read the man page virt-v2v(1).
 
     | `Not_set
     | `Libvirt ->
+      if output_password <> None then
+        error_option_cannot_be_used_in_output_mode "libvirt" "-op";
       let output_storage = Option.default "default" output_storage in
       if qemu_boot then
         error_option_cannot_be_used_in_output_mode "libvirt" "--qemu-boot";
@@ -481,6 +489,8 @@ read the man page virt-v2v(1).
       output_format, output_alloc
 
     | `Local ->
+      if output_password <> None then
+        error_option_cannot_be_used_in_output_mode "local" "-op";
       let os =
         match output_storage with
         | None ->
@@ -500,6 +510,8 @@ read the man page virt-v2v(1).
         error_option_cannot_be_used_in_output_mode "null" "-oc";
       if output_format <> None then
         error_option_cannot_be_used_in_output_mode "null" "-of";
+      if output_password <> None then
+        error_option_cannot_be_used_in_output_mode "null" "-op";
       if output_storage <> None then
         error_option_cannot_be_used_in_output_mode "null" "-os";
       if qemu_boot then
@@ -509,6 +521,8 @@ read the man page virt-v2v(1).
       Some "raw", Sparse
 
     | `QEmu ->
+      if output_password <> None then
+        error_option_cannot_be_used_in_output_mode "qemu" "-op";
       let os =
         match output_storage with
         | None ->
@@ -520,6 +534,8 @@ read the man page virt-v2v(1).
       output_format, output_alloc
 
     | `RHV ->
+      if output_password <> None then
+        error_option_cannot_be_used_in_output_mode "rhv" "-op";
       let os =
         match output_storage with
        | None ->
@@ -531,6 +547,8 @@ read the man page virt-v2v(1).
       output_format, output_alloc
 
     | `VDSM ->
+      if output_password <> None then
+        error_option_cannot_be_used_in_output_mode "vdsm" "-op";
       let os =
         match output_storage with
        | None ->

diff --git a/v2v/test-v2v-docs.sh b/v2v/test-v2v-docs.sh
index 5d034c465..0e3bd916a 100755
--- a/v2v/test-v2v-docs.sh
+++ b/v2v/test-v2v-docs.sh
@@ -22,4 +22,4 @@ $TEST_FUNCTIONS
 skip_if_skipped
 
 $top_srcdir/podcheck.pl virt-v2v.pod virt-v2v \
-    --ignore=--debug-overlay,--ic,--if,--it,--no-trim,--oa,--oc,--of,--on,--os,--vmtype
+    --ignore=--debug-overlay,--ic,--if,--it,--no-trim,--oa,--oc,--of,--on,--op,--os,--vmtype

diff --git a/v2v/virt-v2v.pod b/v2v/virt-v2v.pod
index c67b67e48..d51e7ed2f 100644
--- a/v2v/virt-v2v.pod
+++ b/v2v/virt-v2v.pod
@@ -569,6 +569,13 @@ If not specified, then the input format is used.
 Rename the guest when converting it.  If this option is not used
 then the output name is the same as the input name.
 
+=item B<-op> file
+
+Supply a file containing a password to be used when connecting to the
+target hypervisor.  Note the file should contain the whole password,
+B<without any trailing newline>, and for security the file should have
+mode C<0600> so that others cannot read it.
+
 =item B<-os> storage
 
 The location of the storage for the converted guest.
-- 
2.13.2
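Since a stray trailing newline in the password file is an easy mistake
to make, here is one way to create a file that satisfies the
constraints above (the path is only an example; this is equivalent to
printf(1) followed by chmod 0600):

    import os

    path = "/tmp/ovirt-admin-password"   # example path

    # os.open lets us create the file with mode 0600 atomically.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write("s3cret")                # note: no trailing newline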
Richard W.M. Jones
2018-Mar-06 22:18 UTC
[Libguestfs] [PATCH v4 3/3] v2v: Add -o rhv-upload output mode.
PROBLEMS:
 - Check if VM exists already before starting upload.
 - Target cluster defaults to "Default".
 - Using Insecure = True, is that bad?

This adds a new output mode to virt-v2v.  virt-v2v -o rhv-upload
streams images directly to an oVirt or RHV >= 4 Data Domain using the
oVirt SDK v4.  It is more efficient than -o rhv because it does not
need to go via the Export Storage Domain, and unlike -o vdsm it is
possible for humans to use.

The implementation uses the Python SDK (‘ovirtsdk4’ module).  An
nbdkit Python 3 plugin translates NBD calls from qemu into HTTPS
requests to oVirt via the SDK.
---
 .gitignore                                |   2 +
 v2v/Makefile.am                           |  34 +++-
 v2v/cmdline.ml                            |  38 ++++
 v2v/output_rhv_upload.ml                  | 320 ++++++++++++++++++++++++++++++
 v2v/output_rhv_upload.mli                 |  27 +++
 v2v/output_rhv_upload_createvm_source.mli |  19 ++
 v2v/output_rhv_upload_plugin_source.mli   |  19 ++
 v2v/rhv-upload-createvm.py                |  85 ++++++++
 v2v/rhv-upload-plugin.py                  | 250 +++++++++++++++++++++++
 v2v/virt-v2v.pod                          |  90 +++++++--
 10 files changed, 869 insertions(+), 15 deletions(-)

diff --git a/.gitignore b/.gitignore
index d72447d1d..930d2fe0c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -654,6 +654,8 @@ Makefile.in
 /utils/qemu-speed-test/qemu-speed-test
 /v2v/.depend
 /v2v/oUnit-*
+/v2v/output_rhv_upload_createvm_source.ml
+/v2v/output_rhv_upload_plugin_source.ml
 /v2v/real-*.d/
 /v2v/real-*.img
 /v2v/real-*.xml

diff --git a/v2v/Makefile.am b/v2v/Makefile.am
index c2eb31097..fd3223250 100644
--- a/v2v/Makefile.am
+++ b/v2v/Makefile.am
@@ -22,12 +22,16 @@ generator_built = \
 	uefi.mli
 
 BUILT_SOURCES = \
-	$(generator_built)
+	$(generator_built) \
+	output_rhv_upload_createvm_source.ml \
+	output_rhv_upload_plugin_source.ml
 
 EXTRA_DIST = \
 	$(SOURCES_MLI) $(SOURCES_ML) $(SOURCES_C) \
 	copy_to_local.ml \
 	copy_to_local.mli \
+	rhv-upload-createvm.py \
+	rhv-upload-plugin.py \
 	v2v_slow_unit_tests.ml \
 	v2v-slow-unit-tests.sh \
 	v2v_unit_tests.ml \
@@ -64,6 +68,9 @@ SOURCES_MLI = \
 	output_null.mli \
 	output_qemu.mli \
 	output_rhv.mli \
+	output_rhv_upload.mli \
+	output_rhv_upload_createvm_source.mli \
+	output_rhv_upload_plugin_source.mli \
 	output_vdsm.mli \
 	parse_ovf_from_ova.mli \
 	parse_libvirt_xml.mli \
@@ -116,6 +123,9 @@ SOURCES_ML = \
 	output_local.ml \
 	output_qemu.ml \
 	output_rhv.ml \
+	output_rhv_upload_createvm_source.ml \
+	output_rhv_upload_plugin_source.ml \
+	output_rhv_upload.ml \
 	output_vdsm.ml \
 	inspect_source.ml \
 	target_bus_assignment.ml \
@@ -126,6 +136,28 @@ SOURCES_C = \
 	libvirt_utils-c.c \
 	qemuopts-c.c
 
+# This file is generated and just contains rhv-upload-createvm.py
+# embedded as an OCaml string.
+output_rhv_upload_createvm_source.ml: rhv-upload-createvm.py
+	rm -f $@ $@-t
+	echo '(* Generated by v2v/Makefile.am *)' > $@-t
+	echo 'let code = "' >> $@-t
+	$(SED) -e 's/\(["\]\)/\\\1/g' < $< >> $@-t
+	echo '"' >> $@-t
+	mv $@-t $@
+	chmod -w $@
+
+# This file is generated and just contains rhv-upload-plugin.py
+# embedded as an OCaml string.
+output_rhv_upload_plugin_source.ml: rhv-upload-plugin.py
+	rm -f $@ $@-t
+	echo '(* Generated by v2v/Makefile.am *)' > $@-t
+	echo 'let code = "' >> $@-t
+	$(SED) -e 's/\(["\]\)/\\\1/g' < $< >> $@-t
+	echo '"' >> $@-t
+	mv $@-t $@
+	chmod -w $@
+
 if HAVE_OCAML
 
 bin_PROGRAMS = virt-v2v virt-v2v-copy-to-local

diff --git a/v2v/cmdline.ml b/v2v/cmdline.ml
index d725ae022..c53d1703b 100644
--- a/v2v/cmdline.ml
+++ b/v2v/cmdline.ml
@@ -65,6 +65,8 @@ let parse_cmdline ()
   let output_password = ref None in
   let output_storage = ref None in
   let password_file = ref None in
+  let rhv_cafile = ref None in
+  let rhv_direct = ref false in
   let vddk_config = ref None in
   let vddk_cookie = ref None in
   let vddk_libdir = ref None in
@@ -143,6 +145,8 @@ let parse_cmdline ()
     | "disk" | "local" -> output_mode := `Local
     | "null" -> output_mode := `Null
     | "ovirt" | "rhv" | "rhev" -> output_mode := `RHV
+    | "ovirt-upload" | "ovirt_upload" | "rhv-upload" | "rhv_upload" ->
+      output_mode := `RHV_Upload
     | "qemu" -> output_mode := `QEmu
     | "vdsm" -> output_mode := `VDSM
     | s ->
@@ -229,6 +233,9 @@ let parse_cmdline ()
     [ L"print-source" ], Getopt.Set print_source,
                                     s_"Print source and stop";
     [ L"qemu-boot" ], Getopt.Set qemu_boot, s_"Boot in qemu (-o qemu only)";
+    [ L"rhv-cafile" ], Getopt.String ("ca.pem", set_string_option_once "--rhv-cafile" rhv_cafile),
+                                    s_"For -o rhv-upload, set ‘ca.pem’ file";
+    [ L"rhv-direct" ], Getopt.Set rhv_direct, s_"Use direct transfer mode";
     [ L"root" ],     Getopt.String ("ask|... ", set_root_choice),
                                     s_"How to choose root filesystem";
     [ L"vddk-config" ], Getopt.String ("filename", set_string_option_once "--vddk-config" vddk_config),
@@ -322,6 +329,8 @@ read the man page virt-v2v(1).
   let password_file = !password_file in
   let print_source = !print_source in
   let qemu_boot = !qemu_boot in
+  let rhv_cafile = !rhv_cafile in
+  let rhv_direct = !rhv_direct in
   let root_choice = !root_choice in
   let vddk_options =
     { vddk_config = !vddk_config;
@@ -546,6 +555,35 @@ read the man page virt-v2v(1).
       Output_rhv.output_rhv os output_alloc,
       output_format, output_alloc
 
+    | `RHV_Upload ->
+      let output_conn =
+        match output_conn with
+        | None ->
+           error (f_"-o rhv-upload: use ‘-oc’ to point to the oVirt or RHV server REST API URL, which is usually https://servername/ovirt-engine/api")
+        | Some oc -> oc in
+      (* In theory we could make the password optional in future. *)
+      let output_password =
+        match output_password with
+        | None ->
+           error (f_"-o rhv-upload: output password file was not specified, use ‘-op’ to point to a file which contains the password used to connect to the oVirt or RHV server")
+        | Some op -> op in
+      let os =
+        match output_storage with
+        | None ->
+           error (f_"-o rhv-upload: output storage was not specified, use ‘-os’");
+        | Some os -> os in
+      if qemu_boot then
+        error_option_cannot_be_used_in_output_mode "rhv-upload" "--qemu-boot";
+      let rhv_cafile =
+        match rhv_cafile with
+        | None ->
+           error (f_"-o rhv-upload: must use ‘--rhv-cafile’ to supply the path to the oVirt or RHV server’s ‘ca.pem’ file")
+        | Some rhv_cafile -> rhv_cafile in
+      Output_rhv_upload.output_rhv_upload output_alloc output_conn
+                                          output_password os
+                                          rhv_cafile rhv_direct,
+      output_format, output_alloc
+
     | `VDSM ->
       if output_password <> None then
         error_option_cannot_be_used_in_output_mode "vdsm" "-op";

diff --git a/v2v/output_rhv_upload.ml b/v2v/output_rhv_upload.ml
new file mode 100644
index 000000000..99f2ece08
--- /dev/null
+++ b/v2v/output_rhv_upload.ml
@@ -0,0 +1,320 @@
+(* virt-v2v
+ * Copyright (C) 2009-2018 Red Hat Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ *)
+
+open Printf
+open Unix
+
+open Std_utils
+open Tools_utils
+open Unix_utils
+open Common_gettext.Gettext
+
+open Types
+open Utils
+
+let pidfile_timeout = 30
+let finalization_timeout = 5*60
+
+(* Wait for a file to appear until a timeout. *)
+let rec wait_for_file filename timeout =
+  if Sys.file_exists filename then true
+  else if timeout = 0 then false
+  else (
+    sleep 1;
+    wait_for_file filename (timeout-1)
+  )
+
+class output_rhv_upload output_alloc output_conn
+                        output_password output_storage
+                        rhv_cafile rhv_direct =
+  (* Create a temporary directory which will be deleted on exit. *)
+  let tmpdir =
+    let base_dir = (open_guestfs ())#get_cachedir () in
+    let t = Mkdtemp.temp_dir ~base_dir "rhvupload." in
+    rmdir_on_exit t;
+    t in
+
+  let diskid_file_of_id id = tmpdir // sprintf "diskid.%d" id in
+
+  (* Write the Python plugin and create VM to a temporary file. *)
+  let plugin =
+    let plugin = tmpdir // "rhv-upload-plugin.py" in
+    with_open_out
+      plugin
+      (fun chan -> output_string chan Output_rhv_upload_plugin_source.code);
+    plugin in
+  let createvm =
+    let createvm = tmpdir // "rhv-upload-createvm.py" in
+    with_open_out
+      createvm
+      (fun chan -> output_string chan Output_rhv_upload_createvm_source.code);
+    createvm in
+
+  (* Is SELinux enabled and enforcing on the host? *)
+  let have_selinux =
+    0 = Sys.command "getenforce 2>/dev/null | grep -isq Enforcing" in
+
+  (* Check that nbdkit is available and new enough. *)
+  let error_unless_nbdkit_working () =
+    if 0 <> Sys.command "nbdkit --version >/dev/null" then
+      error (f_"nbdkit is not installed or not working.  It is required to use ‘-o rhv-upload’.  See \"OUTPUT TO RHV\" in the virt-v2v(1) manual.");
+
+    (* Check it's a new enough version.  The latest features we
+     * require are ‘--exit-with-parent’ and ‘--selinux-label’, both
+     * added in 1.1.14.  (We use 1.1.16 as the minimum here because
+     * it also adds the selinux=yes|no flag in --dump-config).
+     *)
+    let lines = external_command "nbdkit --help" in
+    let lines = String.concat " " lines in
+    if String.find lines "exit-with-parent" == -1 ||
+       String.find lines "selinux-label" == -1 then
+      error (f_"nbdkit is not new enough, you need to upgrade to nbdkit ≥ 1.1.16")
+  in
+
+  (* Check that the python3 plugin is installed and working
+   * and can load the plugin script.
+   *)
+  let error_unless_nbdkit_python3_working () =
+    let cmd = sprintf "nbdkit python3 %s --dump-plugin >/dev/null"
+                      (quote plugin) in
+    if Sys.command cmd <> 0 then
+      error (f_"nbdkit Python 3 plugin is not installed or not working.  It is required if you want to use ‘-o rhv-upload’.
+
+See also \"OUTPUT TO RHV\" in the virt-v2v(1) manual.")
+  in
+
+  (* Check that nbdkit was compiled with SELinux support (for the
+   * --selinux-label option).
+   *)
+  let error_unless_nbdkit_compiled_with_selinux () =
+    let lines = external_command "nbdkit --dump-config" in
+    (* In nbdkit <= 1.1.15 the selinux attribute was not present
+     * at all in --dump-config output so there was no way to tell.
+     * Ignore this case because there will be an error later when
+     * we try to use the --selinux-label parameter.
+     *)
+    if List.mem "selinux=no" (List.map String.trim lines) then
+      error (f_"nbdkit was compiled without SELinux support.  You will have to recompile nbdkit with libselinux-devel installed, or else set SELinux to Permissive mode while doing the conversion.")
+  in
+
+  (* JSON parameters which are invariant between disks. *)
+  let json_params = [
+    "output_conn", JSON.String output_conn;
+    "output_password", JSON.String output_password;
+    "output_storage", JSON.String output_storage;
+    "output_sparse", JSON.Bool (match output_alloc with
+                                | Sparse -> true
+                                | Preallocated -> false);
+    "rhv_cafile", JSON.String rhv_cafile;
+    "rhv_direct", JSON.Bool rhv_direct;
+  ] in
+
+  (* nbdkit command line args which are invariant between disks. *)
+  let nbdkit_args =
+    let args = [
+      "nbdkit";
+
+      "--foreground";           (* run in foreground *)
+      "--exit-with-parent";     (* exit when virt-v2v exits *)
+      "--newstyle";             (* use newstyle NBD protocol *)
+      "--exportname"; "/";
+
+      "python3";                (* use the Python 3 plugin *)
+      plugin;                   (* Python plugin script *)
+    ] in
+    let args = if verbose () then args @ ["--verbose"] else args in
+    let args =
+      (* label the socket so qemu can open it *)
+      if have_selinux then
+        args @ ["--selinux-label"; "system_u:object_r:svirt_t:s0"]
+      else args in
+    args in
+
+object
+  inherit output
+
+  method precheck () =
+    error_unless_nbdkit_working ();
+    error_unless_nbdkit_python3_working ();
+    if have_selinux then
+      error_unless_nbdkit_compiled_with_selinux ()
+
+  method as_options =
+    "-o rhv-upload" ^
+    (match output_alloc with
+     | Sparse -> "" (* default, don't need to print it *)
+     | Preallocated -> " -oa preallocated") ^
+    sprintf " -oc %s -op %s -os %s"
+            output_conn output_password output_storage
+
+  method supported_firmware = [ TargetBIOS ]
+
+  method prepare_targets source targets =
+    (* Create an nbdkit instance for each disk and set the
+     * target URI to point to the NBD socket.
+     *)
+    List.map (
+      fun t ->
+        let id = t.target_overlay.ov_source.s_disk_id in
+        let disk_name =
+          "disk_name", JSON.String (sprintf "%s-%03d" source.s_name id) in
+
+        let disk_format =
+          "disk_format",
+          JSON.String (
+            match t.target_format with
+            | ("raw" | "qcow2") as fmt -> fmt
+            | _ ->
+              error (f_"rhv-upload: -of %s: Only output format ‘raw’ or ‘qcow2’ is supported.  If the input is in a different format then force one of these output formats by adding either ‘-of raw’ or ‘-of qcow2’ on the command line.")
+                    t.target_format
+          ) in
+
+        let disk_size =
+          "disk_size",
+          JSON.Int64 t.target_overlay.ov_virtual_size in
+
+        (* Ask the plugin to write the disk ID to a special file. *)
+        let diskid_file =
+          "diskid_file", JSON.String (diskid_file_of_id id) in
+
+        (* Write the JSON parameters to a file. *)
+        let json_params =
+          [ disk_name; disk_format; disk_size; diskid_file ]
+          @ json_params in
+        let json_param_file = tmpdir // sprintf "params%d.json" id in
+        with_open_out
+          json_param_file
+          (fun chan -> output_string chan (JSON.string_of_doc json_params));
+
+        let sock = tmpdir // sprintf "nbdkit%d.sock" id in
+        let pidfile = tmpdir // sprintf "nbdkit%d.pid" id in
+
+        (* Add common arguments to per-target arguments. *)
+        let args =
+          nbdkit_args @ [ "--pidfile"; pidfile;
+                          "--unix"; sock;
+                          sprintf "params=%s" json_param_file ] in
+
+        (* Print the full command we are about to run when debugging. *)
+        if verbose () then (
+          eprintf "running nbdkit:\n";
+          List.iter (fun arg -> eprintf " %s" (quote arg)) args;
+          prerr_newline ()
+        );
+
+        (* Start an nbdkit instance in the background.  By using
+         * --exit-with-parent we don't have to worry about clean-up.
+         *)
+        let args = Array.of_list args in
+        let pid = fork () in
+        if pid = 0 then (
+          (* Child process (nbdkit). *)
+          execvp "nbdkit" args
+        );
+
+        (* Wait for the pidfile to appear so we know that nbdkit
+         * is listening for requests.
+         *)
+        if not (wait_for_file pidfile pidfile_timeout) then (
+          if verbose () then
+            error (f_"nbdkit did not start up.  See previous debugging messages for problems.")
+          else
+            error (f_"nbdkit did not start up.  There may be errors printed by nbdkit above.
+
+If the messages above are not sufficient to diagnose the problem then add the ‘virt-v2v -v -x’ options and examine the debugging output carefully.")
+        );
+
+        if have_selinux then (
+          (* Note that Unix domain sockets have both a file label and
+           * a socket/process label.  Using --selinux-label above
+           * only set the socket label, but we must also set the file
+           * label.
+           *)
+          ignore (
+            run_command ["chcon"; "system_u:object_r:svirt_image_t:s0";
+                         sock]
+          );
+        );
+        (* ... and the regular Unix permissions, in case qemu is
+         * running as another user.
+         *)
+        chmod sock 0o777;
+
+        (* Tell ‘qemu-img convert’ to write to the nbd socket which is
+         * connected to nbdkit.
+         *)
+        let json_params = [
+          "file.driver", JSON.String "nbd";
+          "file.path", JSON.String sock;
+          "file.export", JSON.String "/";
+        ] in
+        let target_file =
+          TargetURI ("json:" ^ JSON.string_of_doc json_params) in
+        { t with target_file }
+    ) targets
+
+  method create_metadata source targets _ guestcaps inspect target_firmware =
+    (* Get the UUIDs of each disk image.  These files are written
+     * out by the nbdkit plugins on successful finalization of the
+     * transfer.
+     *)
+    let nr_disks = List.length targets in
+    let image_uuids =
+      List.map (
+        fun t ->
+          let id = t.target_overlay.ov_source.s_disk_id in
+          let diskid_file = diskid_file_of_id id in
+          if not (wait_for_file diskid_file finalization_timeout) then
+            error (f_"transfer of disk %d/%d failed, see earlier error messages")
+                  (id+1) nr_disks;
+          let diskid = read_whole_file diskid_file in
+          diskid
+      ) targets in
+
+    (* We don't have the storage domain UUID, but instead we write
+     * in a magic value which the Python code (which can get it)
+     * will substitute.
+     *)
+    let sd_uuid = "@SD_UUID@" in
+
+    (* The volume and VM UUIDs are made up. *)
+    let vol_uuids = List.map (fun _ -> uuidgen ()) targets
+    and vm_uuid = uuidgen () in
+
+    (* Create the metadata. *)
+    let ovf =
+      Create_ovf.create_ovf source targets guestcaps inspect
+                            output_alloc
+                            sd_uuid image_uuids vol_uuids vm_uuid
+                            OVirt in
+    let ovf = DOM.doc_to_string ovf in
+
+    let json_param_file = tmpdir // "params.json" in
+    with_open_out
+      json_param_file
+      (fun chan -> output_string chan (JSON.string_of_doc json_params));
+
+    let ovf_file = tmpdir // "vm.ovf" in
+    with_open_out ovf_file (fun chan -> output_string chan ovf);
+    if run_command [ "python3"; createvm; json_param_file; ovf_file ] <> 0 then
+      error (f_"failed to create virtual machine, see earlier errors")
+
+end
+
+let output_rhv_upload = new output_rhv_upload
+let () = Modules_list.register_output_module "rhv-upload"

diff --git a/v2v/output_rhv_upload.mli b/v2v/output_rhv_upload.mli
new file mode 100644
index 000000000..3e7086f85
--- /dev/null
+++ b/v2v/output_rhv_upload.mli
@@ -0,0 +1,27 @@
+(* virt-v2v
+ * Copyright (C) 2009-2018 Red Hat Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ *)
+
+(** [-o rhv-upload] target. *)
+
+val output_rhv_upload : Types.output_allocation -> string -> string ->
+                        string -> string -> bool ->
+                        Types.output
+(** [output_rhv_upload output_alloc output_conn output_password output_storage
+     rhv_cafile rhv_direct]
+    creates and returns a new {!Types.output} object specialized for writing
+    output to oVirt or RHV directly via RHV APIs. *)

diff --git a/v2v/output_rhv_upload_createvm_source.mli b/v2v/output_rhv_upload_createvm_source.mli
new file mode 100644
index 000000000..c1bafa15b
--- /dev/null
+++ b/v2v/output_rhv_upload_createvm_source.mli
@@ -0,0 +1,19 @@
+(* virt-v2v
+ * Copyright (C) 2018 Red Hat Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ *)
+
+val code : string

diff --git a/v2v/output_rhv_upload_plugin_source.mli b/v2v/output_rhv_upload_plugin_source.mli
new file mode 100644
index 000000000..c1bafa15b
--- /dev/null
+++ b/v2v/output_rhv_upload_plugin_source.mli
@@ -0,0 +1,19 @@
+(* virt-v2v
+ * Copyright (C) 2018 Red Hat Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ *)
+
+val code : string

diff --git a/v2v/rhv-upload-createvm.py b/v2v/rhv-upload-createvm.py
new file mode 100644
index 000000000..47c2574bb
--- /dev/null
+++ b/v2v/rhv-upload-createvm.py
@@ -0,0 +1,85 @@
+# -*- python -*-
+# oVirt or RHV upload create VM used by ‘virt-v2v -o rhv-upload’
+# Copyright (C) 2018 Red Hat Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import json
+import logging
+import ovirtsdk4 as sdk
+import ovirtsdk4.types as types
+import sys
+import time
+
+from http.client import HTTPSConnection
+from urllib.parse import urlparse
+
+# Parameters are passed in via a JSON doc from the OCaml code.
+# Because this Python code ships embedded inside virt-v2v there
+# is no formal API here.
+params = None
+ovf = None    # OVF file
+
+if len(sys.argv) != 3:
+    raise RuntimeError("incorrect number of parameters")
+
+# Parameters are passed in via a JSON document.
+with open(sys.argv[1], 'r') as fp:
+    params = json.load(fp)
+
+# What is passed in is a password file, read the actual password.
+with open(params['output_password'], 'r') as fp:
+    output_password = fp.read()
+output_password = output_password.rstrip()
+
+# Read the OVF document.
+with open(sys.argv[2], 'r') as fp:
+    ovf = fp.read()
+
+# Parse out the username from the output_conn URL.
+parsed = urlparse(params['output_conn'])
+username = parsed.username or "admin@internal"
+
+# Connect to the server.
+connection = sdk.Connection(
+    url = params['output_conn'],
+    username = username,
+    password = output_password,
+    ca_file = params['rhv_cafile'],
+    log = logging.getLogger(),
+    insecure = True,    # XXX?
+)
+
+system_service = connection.system_service()
+
+# Get the storage domain UUID and substitute it into the OVF doc.
+sds_service = system_service.storage_domains_service()
+sd = sds_service.list(search=("name=%s" % params['output_storage']))[0]
+sd_uuid = sd.id
+
+ovf.replace("@SD_UUID@", sd_uuid)
+
+vms_service = system_service.vms_service()
+vm = vms_service.add(
+    types.Vm(
+        cluster=types.Cluster(name = "Default"),    # XXX
+        initialization=types.Initialization(
+            configuration = types.Configuration(
+                type = types.ConfigurationType.OVA,
+                data = ovf,
+            )
+        )
+    )
+)

diff --git a/v2v/rhv-upload-plugin.py b/v2v/rhv-upload-plugin.py
new file mode 100644
index 000000000..a9cb6a727
--- /dev/null
+++ b/v2v/rhv-upload-plugin.py
@@ -0,0 +1,250 @@
+# -*- python -*-
+# oVirt or RHV upload nbdkit plugin used by ‘virt-v2v -o rhv-upload’
+# Copyright (C) 2018 Red Hat Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import builtins
+import json
+import logging
+import ovirtsdk4 as sdk
+import ovirtsdk4.types as types
+import ssl
+import sys
+import time
+
+from http.client import HTTPSConnection
+from urllib.parse import urlparse
+
+# Timeout to wait for oVirt disks to change status, or the transfer
+# object to finish initializing [seconds].
+timeout = 5*60
+
+# Parameters are passed in via a JSON doc from the OCaml code.
+# Because this Python code ships embedded inside virt-v2v there
+# is no formal API here.
+params = None
+
+def config(key, value):
+    global params
+
+    if key == "params":
+        with builtins.open(value, 'r') as fp:
+            params = json.load(fp)
+    else:
+        raise RuntimeError("unknown configuration key '%s'" % key)
+
+def config_complete():
+    global params
+
+    if params is None:
+        raise RuntimeError("missing configuration parameters")
+
+def open(readonly):
+    global params
+
+    # Parse out the username from the output_conn URL.
+    parsed = urlparse(params['output_conn'])
+    username = parsed.username or "admin@internal"
+
+    # Read the password from file.
+    with builtins.open(params['output_password'], 'r') as fp:
+        password = fp.read()
+    password = password.rstrip()
+
+    # Connect to the server.
+    connection = sdk.Connection(
+        url = params['output_conn'],
+        username = username,
+        password = password,
+        ca_file = params['rhv_cafile'],
+        log = logging.getLogger(),
+        insecure = True,    # XXX?
+    )
+
+    system_service = connection.system_service()
+
+    # Create the disk.
+    disks_service = system_service.disks_service()
+    if params['disk_format'] == "raw":
+        disk_format = types.DiskFormat.RAW
+    else:
+        disk_format = types.DiskFormat.COW
+    disk = disks_service.add(
+        disk = types.Disk(
+            name = params['disk_name'],
+            description = "Uploaded by virt-v2v",
+            format = disk_format,
+            provisioned_size = params['disk_size'],
+            sparse = params['output_sparse'],
+            storage_domains = [
+                types.StorageDomain(
+                    name = params['output_storage'],
+                )
+            ],
+        )
+    )
+
+    # Wait till the disk is up, as the transfer can't start if the
+    # disk is locked:
+    disk_service = disks_service.disk_service(disk.id)
+
+    endt = time.time() + timeout
+    while True:
+        time.sleep(5)
+        disk = disk_service.get()
+        if disk.status == types.DiskStatus.OK:
+            break
+        if time.time() > endt:
+            raise RuntimeError("timed out waiting for disk to become unlocked")
+
+    # Get a reference to the transfer service.
+    transfers_service = system_service.image_transfers_service()
+
+    # Create a new image transfer.
+    transfer = transfers_service.add(
+        types.ImageTransfer(
+            image = types.Image(
+                id = disk.id
+            )
+        )
+    )
+
+    # Get a reference to the created transfer service.
+    transfer_service = transfers_service.image_transfer_service(transfer.id)
+
+    # After adding a new transfer for the disk, the transfer's status
+    # will be INITIALIZING.  Wait until the init phase is over.  The
+    # actual transfer can start when its status is "Transferring".
+    endt = time.time() + timeout
+    while True:
+        time.sleep(5)
+        transfer = transfer_service.get()
+        if transfer.phase != types.ImageTransferPhase.INITIALIZING:
+            break
+        if time.time() > endt:
+            raise RuntimeError("timed out waiting for transfer status != INITIALIZING")
+
+    # Now we have permission to start the transfer.
+    if params['rhv_direct']:
+        if transfer.transfer_url is None:
+            raise RuntimeError("direct upload to host not supported, requires ovirt-engine >= 4.2 and only works when virt-v2v is run within the oVirt/RHV environment, eg. on an ovirt node.")
+        destination_url = urlparse(transfer.transfer_url)
+    else:
+        destination_url = urlparse(transfer.proxy_url)
+
+    context = ssl.create_default_context()
+    context.load_verify_locations(cafile = params['rhv_cafile'])
+
+    http = HTTPSConnection(
+        destination_url.hostname,
+        destination_url.port,
+        context = context
+    )
+
+    # Save everything we need to make requests in the handle.
+    return {
+        'connection': connection,
+        'disk': disk,
+        'disk_service': disk_service,
+        'failed': False,
+        'http': http,
+        'path': destination_url.path,
+        'requests': 0,
+        'transfer': transfer,
+        'transfer_service': transfer_service,
+    }
+
+def get_size(h):
+    global params
+
+    return params['disk_size']
+
+def pread(h, count, offset):
+    h['requests'] += 1
+
+    http = h['http']
+    transfer = h['transfer']
+    transfer_service = h['transfer_service']
+
+    http.putrequest("GET", h['path'])
+    http.putheader("Authorization", transfer.signed_ticket)
+    http.putheader("Range", str(offset) + "-" + str(count+offset-1))
+    http.endheaders()
+
+    r = http.getresponse()
+    if r.status != 200:
+        h['transfer_service'].pause()
+        h['failed'] = True
+        raise RuntimeError("could not read sector (%d, %d): %d: %s" %
+                           (offset, count, r.status, r.reason))
+    return http.read()
+
+def pwrite(h, buf, offset):
+    h['requests'] += 1
+    count = len(buf)
+
+    http = h['http']
+    transfer = h['transfer']
+    transfer_service = h['transfer_service']
+
+    http.putrequest("PUT", h['path'])
+    http.putheader("Authorization", transfer.signed_ticket)
+    http.putheader("Range", str(offset) + "-" + str(count+offset-1))
+    http.putheader("Content-Length", str(count))
+    http.endheaders()
+    http.send(buf)
+
+    r = http.getresponse()
+    if r.status != 200:
+        h['transfer_service'].pause()
+        h['failed'] = True
+        raise RuntimeError("could not write sector (%d, %d): %d: %s" %
+                           (offset, count, r.status, r.reason))
+
+# qemu-img convert starts by trying to zero/trim the whole device.
+# Since we've just created a new disk it's safe to ignore these
+# requests when they are the very first requests on the handle.
+# After that we must emulate them with writes.
+def zero(h, count, offset, may_trim):
+    if h['requests'] > 0:
+        buf = bytearray(count)
+        pwrite(h, buf, count)
+
+def close(h):
+    global params
+
+    http = h['http']
+    connection = h['connection']
+
+    http.close()
+
+    # If we didn't fail, then finalize the transfer.
+    if not h['failed']:
+        disk = h['disk']
+        transfer_service = h['transfer_service']
+
+        transfer_service.finalize()
+
+        # Write the disk ID file.  Only do this on successful completion.
+        with builtins.open(params['diskid_file'], 'w') as fp:
+            fp.write(disk.id)
+
+    # Otherwise if we did fail then we should delete the disk.
+    else:
+        disk_service = h['disk_service']
+        disk_service.remove()
+
+    connection.close()

diff --git a/v2v/virt-v2v.pod b/v2v/virt-v2v.pod
index d51e7ed2f..11b03d14f 100644
--- a/v2v/virt-v2v.pod
+++ b/v2v/virt-v2v.pod
@@ -6,15 +6,18 @@ virt-v2v - Convert a guest to use KVM
 
  virt-v2v -ic vpx://vcenter.example.com/Datacenter/esxi vmware_guest
 
- virt-v2v -ic vpx://vcenter.example.com/Datacenter/esxi vmware_guest \
-   -o rhv -os rhv.nfs:/export_domain --network ovirtmgmt
-
  virt-v2v -i libvirtxml guest-domain.xml -o local -os /var/tmp
 
 virt-v2v -i disk disk.img -o local -os /var/tmp
 
 virt-v2v -i disk disk.img -o glance
 
+ virt-v2v -ic vpx://vcenter.example.com/Datacenter/esxi vmware_guest \
+   -o rhv-upload -oc https://ovirt-engine.example.com/ovirt-engine/api \
+   -os ovirt-data -op /tmp/ovirt-admin-password \
+   --rhv-cafile /tmp/ca.pem --rhv-direct \
+   --network ovirtmgmt
+
 virt-v2v -ic qemu:///system qemu_guest --in-place
 
 =head1 DESCRIPTION
@@ -42,9 +45,9 @@ libguestfs E<ge> 1.28.
  Xen ───▶│ -i libvirt ──▶ │            │   │ (default)  │ ...
 ───▶│ (default)  │       │            │ ──┐
     └────────────┘       └────────────┘   │ │ ─┐└──────▶ -o glance
- -i libvirtxml ─────────▶ │            │  ┐└─────────▶ -o rhv
- -i vmx ────────────────▶ │            │  └──────────▶ -o vdsm
-                          └────────────┘
+ -i libvirtxml ─────────▶ │            │  ┐├─────────▶ -o rhv
+ -i vmx ────────────────▶ │            │  │└─────────▶ -o vdsm
+                          └────────────┘  └──────────▶ -o rhv-upload
 
 Virt-v2v has a number of possible input and output modes, selected
 using the I<-i> and I<-o> options.  Only one input and output mode can
@@ -103,20 +106,18 @@ For more information see L</INPUT FROM VMWARE VCENTER SERVER> below.
 
 =head2 Convert from VMware to RHV/oVirt
 
 This is the same as the previous example, except you want to send the
-guest to a RHV-M Export Storage Domain which is located remotely
-(over NFS) at C<rhv.nfs:/export_domain>.  If you are unclear about
-the location of the Export Storage Domain you should check the
-settings on your RHV-M management console.  Guest network
+guest to a RHV Data Domain using the RHV REST API.  Guest network
 interface(s) are connected to the target network called C<ovirtmgmt>.
 
  virt-v2v -ic vpx://vcenter.example.com/Datacenter/esxi vmware_guest \
-   -o rhv -os rhv.nfs:/export_domain --network ovirtmgmt
+   -o rhv-upload -oc https://ovirt-engine.example.com/ovirt-engine/api \
+   -os ovirt-data -op /tmp/ovirt-admin-password \
+   --rhv-cafile /tmp/ca.pem --rhv-direct \
+   --network ovirtmgmt
 
 In this case the host running virt-v2v acts as a B<conversion server>.
 
-Note that after conversion, the guest will appear in the RHV-M Export
-Storage Domain, from where you will need to import it using the RHV-M
-user interface.  (See L</OUTPUT TO RHV>).
+For more information see L</OUTPUT TO RHV> below.
 
 =head2 Convert from ESXi hypervisor over SSH to local libvirt
@@ -509,6 +510,10 @@ written.
 
 This is the same as I<-o rhv>.
 
+=item B<-o> B<ovirt-upload>
+
+This is the same as I<-o rhv-upload>.
+
 =item B<-o> B<qemu>
 
 Set the output method to I<qemu>.
@@ -533,6 +538,16 @@ I<-os> parameter must also be used to specify the location of the
 Export Storage Domain.  Note this does not actually import the guest
 into RHV.  You have to do that manually later using the UI.
 
+See L</OUTPUT TO RHV (OLD METHOD)> below.
+
+=item B<-o> B<rhv-upload>
+
+Set the output method to I<rhv-upload>.
+
+The converted guest is written directly to a RHV Data Domain.
+This is a faster method than I<-o rhv>, but requires oVirt
+or RHV E<ge> 4.2.
+
 See L</OUTPUT TO RHV> below.
 
 =item B<-o> B<vdsm>
@@ -1870,6 +1885,53 @@ Define the final guest in libvirt:
 
 =head1 OUTPUT TO RHV
 
+This new method to upload guests to oVirt or RHV directly via the REST
+API requires oVirt/RHV E<ge> 4.2.
+
+You need to specify I<-o rhv-upload> as well as the following extra
+parameters:
+
+=over 4
+
+=item I<-oc> C<https://ovirt-engine.example.com/ovirt-engine/api>
+
+The URL of the REST API which is usually the server name with
+C</ovirt-engine/api> appended, but might be different if you installed
+oVirt Engine on a different path.
+
+You can optionally add a username and port number to the URL.  If the
+username is not specified then virt-v2v defaults to using
+C<admin@internal> which is the typical superuser account for oVirt
+instances.
+
+=item I<-op> F<password-file>
+
+A file containing a password to be used when connecting to the oVirt
+engine.  Note the file should contain the whole password, B<without
+any trailing newline>, and for security the file should have mode
+C<0600> so that others cannot read it.
+
+=item I<-os> C<ovirt-data>
+
+The storage domain.
+
+=item I<--rhv-cafile> F<ca.pem>
+
+The F<ca.pem> file (Certificate Authority), copied from
+F</etc/pki/ovirt-engine/ca.pem> on the oVirt engine.
+
+=item I<--rhv-direct>
+
+If this option is given then virt-v2v will attempt to directly upload
+the disk to the oVirt node, otherwise it will proxy the upload through
+the oVirt engine.  Direct upload requires that you have network access
+to the oVirt nodes.  Non-direct upload is slightly slower but should
+work in all situations.
+
+=back
+
+=head1 OUTPUT TO RHV (OLD METHOD)
+
 This section only applies to the I<-o rhv> output mode.
 
 If you use virt-v2v from the RHV-M user interface, then behind the
 scenes the import is managed by VDSM using the I<-o vdsm> output
 mode (which end
-- 
2.13.2
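To experiment with rhv-upload-plugin.py under nbdkit without running
virt-v2v, you need a params file of the shape that output_rhv_upload.ml
writes.  The key names below are taken from the OCaml code above; every
value is a made-up example:

    # Sketch: write a params JSON file of the shape the OCaml side
    # produces for each disk (values are illustrative only).
    import json

    params = {
        "output_conn": "https://ovirt-engine.example.com/ovirt-engine/api",
        "output_password": "/tmp/ovirt-admin-password",
        "output_storage": "ovirt-data",
        "output_sparse": True,
        "rhv_cafile": "/tmp/ca.pem",
        "rhv_direct": False,
        "disk_name": "guest-000",
        "disk_format": "raw",
        "disk_size": 10737418240,            # 10 GiB
        "diskid_file": "/tmp/rhvupload.XXXXXX/diskid.0",
    }
    with open("params0.json", "w") as fp:
        json.dump(params, fp)

nbdkit can then be pointed at this file with the same ‘params=...’
parameter that the patch passes on the nbdkit command line.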
Nir Soffer
2018-Mar-08 12:13 UTC
Re: [Libguestfs] [PATCH v4 0/3] v2v: Add -o rhv-upload output mode.
On Wed, Mar 7, 2018 at 12:18 AM Richard W.M. Jones <rjones@redhat.com> wrote:

> Previous versions:
>  v3: https://www.redhat.com/archives/libguestfs/2018-March/msg00000.html
>  v2: https://www.redhat.com/archives/libguestfs/2018-February/msg00177.html
>  v1: https://www.redhat.com/archives/libguestfs/2018-February/msg00139.html
>
> This completely rethinks the approach taken by the previous patches.
>
> Instead of trying to involve qemu's curl driver, this uses a small
> Python 3 nbdkit plugin to interface between qemu and the oVirt server.
>
> The data path is:
>
>   qemu-img convert -------> nbdkit -------> oVirt imageio
>                       nbd             https
>

What is the advantage of this for raw files?  Why not:

  v2v -> ovirt imageio?

And how will qcow2 files be handled?  When I tried nbdkit a few months
ago I could not make it handle qcow2 files.  Maybe I had to write a
plugin?

We considered using this flow when we download/upload images, to
support on-the-fly image conversion:

  raw file -> qemu-img convert -> nbdkit -> qcow2 stream -> imageio ->
  http client

And the same for uploading, e.g. uploading qcow2 and writing a raw image.

If this is possible using an nbdkit plugin, can we reuse the same plugin
in different applications, or must we implement the plugin in each
application?

> There are two Python scripts included.  One is the nbdkit plugin.  The
> other creates the VM.  As with the previous patches, these scripts get
> embedded in virt-v2v at compile time, so effectively there is no API
> contract between virt-v2v & the Python code.
>
> With this patch series I am able to (mostly) successfully convert VMs
> from local disk to oVirt 4.2, with full end-to-end streaming.  There
> is some room for optimization -- in particular uploads are currently
> rather slow because we rely on qemu-img batching small requests into
> large ones which it doesn't do well, and instead the nbdkit plugin
> could batch small writes into larger ones.  Also I noticed (but only
> one time) that very long transfers would cause the oVirt ticket to
> expire, even though we were writing the whole time.
>

On the host, the ticket is extended regularly, based on the activity.

On the proxy we currently have a 3600 second timeout, and the ticket
is never extended.  I think we should have the same mechanism as
we do on the host.

Nir

> There are still a few unresolved issues (see patch 3/3) so this is not
> quite ready to go upstream yet, but can still be reviewed.  Patches 1
> & 2 are the same as posted before.
>
> I did not yet test qcow2 uploads.  Those are "interestingly" different
> because qcow2 will require us to read from the remote oVirt server as
> well as just stream/write to it.  The pread method for that is written
> but has not been tested.
>
> Rich.
>
> _______________________________________________
> Libguestfs mailing list
> Libguestfs@redhat.com
> https://www.redhat.com/mailman/listinfo/libguestfs
>
Richard W.M. Jones
2018-Mar-08 12:29 UTC
Re: [Libguestfs] [PATCH v4 0/3] v2v: Add -o rhv-upload output mode.
On Thu, Mar 08, 2018 at 12:13:01PM +0000, Nir Soffer wrote:
> On Wed, Mar 7, 2018 at 12:18 AM Richard W.M. Jones <rjones@redhat.com>
> wrote:
>
> > This completely rethinks the approach taken by the previous patches.
> >
> > Instead of trying to involve qemu's curl driver, this uses a small
> > Python 3 nbdkit plugin to interface between qemu and the oVirt server.
> >
> > The data path is:
> >
> >   qemu-img convert -------> nbdkit -------> oVirt imageio
> >                       nbd             https
>
> What is the advantage of this for raw files?  Why not:
>
>   v2v -> ovirt imageio?

Not sure I understand what you mean?  virt-v2v always runs ‘qemu-img
convert’ to do the copy (not conversion), so the question is how do we
connect qemu-img to the oVirt server.  One way would be to extend qemu
so it knows how to write on an https connection (which it does not do
now) but that has a number of disadvantages, as well as being hard to
implement.

> And how will qcow2 files be handled?

We'll add ‘-O qcow2’ to the qemu-img convert command line, and qemu
will then write out a qcow2 file.

However it's not quite so straightforward (and in fact I didn't get it
to work yet).  qemu will try to first read from the target (invoking
pread calls in the nbdkit Python plugin which will try to read from
oVirt over https).  Unfortunately it fails here for a couple of
reasons:

(1) My pread method is broken.  I saw your suggested fixes to it (and
pwrite) and will try those later.

(2) In any case it won't work because the disk at this point is empty
and full of zeroes, and it's looking for a qcow2 header.  To fix this
we'll have to write a qcow2 header to the disk first (TBD).

> When I tried nbdkit a few months ago I could not make it handle qcow2
> files.  Maybe I had to write a plugin?

NBD (the protocol) doesn't "know" about qcow2 files.  You can serve
any file you want as a range of bytes, including qcow2, but that
requires whatever is consuming those bytes to then do the qcow2
en-/decoding.  (Which means effectively the client has to be qemu,
because nothing else can parse qcow2 reliably.)

In the qemu-img convert case above this all works because qemu-img
(ie. qemu) is the client, and it does the encoding of qcow2, and we're
just shuffling a byte stream to oVirt imageio.

> We considered using this flow when we download/upload images, to
> support on-the-fly image conversion:
>
>   raw file -> qemu-img convert -> nbdkit -> qcow2 stream -> imageio ->
>   http client
>
> And the same for uploading, e.g. uploading qcow2 and writing a raw image.
>
> If this is possible using an nbdkit plugin, can we reuse the same plugin
> in different applications, or must we implement the plugin in each
> application?

This should be possible (modulo fixing issues (1) & (2) above).  The
plugin I have written is very specific to the virt-v2v task, but it
could be evolved into something which would handle this case.

nbdkit is designed around the idea that you can make small plugins in
familiar scripting languages for specific tasks.  I did think about
having a generic "oVirt plugin" which we'd ship with upstream nbdkit,
but making it generic enough to handle a useful range of cases seemed
difficult.

> > With this patch series I am able to (mostly) successfully convert VMs
> > from local disk to oVirt 4.2, with full end-to-end streaming.  There
> > is some room for optimization -- in particular uploads are currently
> > rather slow because we rely on qemu-img batching small requests into
> > large ones which it doesn't do well, and instead the nbdkit plugin
> > could batch small writes into larger ones.  Also I noticed (but only
> > one time) that very long transfers would cause the oVirt ticket to
> > expire, even though we were writing the whole time.
>
> On the host, the ticket is extended regularly, based on the activity.
>
> On the proxy we currently have a 3600 second timeout, and the ticket
> is never extended.  I think we should have the same mechanism as
> we do on the host.

I have only seen this once and never again, so hopefully it was just a
network blip causing a > 3600 second timeout.  If it happens again
I'll take a closer look.

Thanks for the review, it does seem like pwrite is rather broken.
Unfortunately my ovirt node hangs hard when I try to boot any guest
(seems like a kernel or even hardware bug) so I have never been able
to test that the transferred guest works :-(

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
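On the write-batching optimization discussed in this thread, one
possible shape for it is a small coalescing buffer in front of pwrite,
flushed whenever a non-contiguous write arrives or a size limit is
reached.  A standalone sketch, not part of the posted patches:

    # Coalesce small sequential writes into larger runs before sending.
    class WriteBuffer:
        def __init__(self, flush_cb, limit=4*1024*1024):
            self.flush_cb = flush_cb    # called as flush_cb(data, offset)
            self.limit = limit
            self.buf = bytearray()
            self.offset = None

        def write(self, buf, offset):
            # A non-contiguous write ends the current run.
            if self.offset is not None and \
               offset != self.offset + len(self.buf):
                self.flush()
            if self.offset is None:
                self.offset = offset
            self.buf += buf
            if len(self.buf) >= self.limit:
                self.flush()

        def flush(self):
            if self.buf:
                self.flush_cb(bytes(self.buf), self.offset)
            self.buf = bytearray()
            self.offset = None

    # Example: collect the flushed runs into a list.
    runs = []
    wb = WriteBuffer(lambda data, off: runs.append((off, len(data))))
    wb.write(b"a" * 512, 0)
    wb.write(b"b" * 512, 512)     # contiguous: coalesced with the first
    wb.write(b"c" * 512, 4096)    # gap: forces a flush of the first run
    wb.flush()
    print(runs)                   # [(0, 1024), (4096, 512)]

In the plugin, flush_cb would be the function issuing the HTTPS PUT,
and the NBD flush/close callbacks would need to drain the buffer.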