Denis V. Lunev
2023-Feb-24 11:55 UTC
[Libguestfs] [V2V PATCH 0/5] Bring support for virtio-scsi back to Windows
On 2/24/23 05:56, Laszlo Ersek wrote:
> On 2/23/23 12:48, Denis V. Lunev wrote:
>> On 2/23/23 11:43, Laszlo Ersek wrote:
>>> On 2/22/23 19:20, Andrey Drobyshev wrote:
>>>> Since commits b28cd1dc ("Remove requested_guestcaps / rcaps") and
>>>> f0afc439 ("Remove guestcaps_block_type Virtio_SCSI"), support for
>>>> installing the virtio-scsi driver has been missing from virt-v2v.
>>>> AFAIU, plans and demands for bringing this feature back have been out
>>>> there for a while.  E.g. I've found a corresponding issue which is
>>>> still open [1].
>>>>
>>>> The code in b28cd1dc, f0afc439 was removed along with the old
>>>> in-place support.  However, now that the new in-place support is
>>>> present, by bringing this same code (partially) back with several
>>>> additions and improvements, I'm able to successfully convert and boot
>>>> a Windows guest with a virtio-scsi disk controller.  So please
>>>> consider the following implementation of this feature.
>>>>
>>>> [1] https://github.com/libguestfs/virt-v2v/issues/12
>>>
>>> (Preamble: I'm 100% deferring to Rich on this, so take my comments for
>>> what they are worth.)
>>>
>>> In my opinion, the argument made is weak. This cover letter does not
>>> say "why" -- it does not explain why virtio-blk is insufficient for
>>> *Virtuozzo*.
>>>
>>> Second, reference [1] -- issue #12 -- doesn't sound too convincing. It
>>> writes, "opinionated qemu-based VMs that exclusively use UEFI and only
>>> virtio devices". "Opinionated" is the key word there. They're entitled
>>> to an opinion; they're not entitled to others conforming to their
>>> opinion. I happen to be opinionated as well, and I hold the opposite
>>> view.
>>>
>>> (BTW, even if they insist on UEFI + virtio, which I do sympathize
>>> with, requiring virtio-scsi exclusively is hard to sell. In
>>> particular, virtio-blk nowadays has support for trim/discard, so the
>>> main killer feature of virtio-scsi is no longer unique to virtio-scsi.
>>> Virtio-blk is also simpler code and arguably faster. Note: I don't
>>> want to convince anyone about what *they* support, just pointing out
>>> that virt-v2v outputting solely virtio-blk disks is entirely fine, as
>>> far as virt-v2v's mission is concerned -- "salvage 'pet' (not
>>> 'cattle') VMs from proprietary hypervisors, and make sure they boot".
>>> Virtio-blk is sufficient for booting; further tweaks are up to the
>>> admin (again, virt-v2v is not for mass/cattle conversions). The
>>> "Hetzner Cloud" is not a particular output module of virt-v2v, so I
>>> don't know why virt-v2v's mission should extend to making the
>>> converted VM bootable on "Hetzner Cloud".)
>>>
>>> Rich has recently added tools for working with the virtio devices in
>>> Windows guests; maybe those can be employed as extra (manual) steps
>>> before or after the conversion.
>>>
>>> Third, the last patch in the series is overreaching IMO; it switches
>>> the default. That causes a behavior change for conversions that have
>>> been working well and have been thoroughly tested. It doesn't just add
>>> a new use case; it throws away an existing use case for the new one's
>>> sake, IIUC. I don't like that.
>>>
>>> Again -- fully deferring to Rich on the final verdict (and the
>>> review).
>>>
>>> Laszlo
>>
>> OK. Let me clarify the situation a bit.
>>
>> These patches (for sure) originate from the good old year 2017, when
>> virtio-blk was completely unacceptable to us due to the missing
>> discard feature, which is now in.
>>
>> Thus you are completely right about the default: changing it (if that
>> happens at all) should go into a separate patch. Anyway, at first
>> glance it should not be needed.
>>
>> Normally, in in-place mode, which is what we mostly care about, v2v
>> should bring the guest configuration in sync with what is written in
>> domain.xml, and that does not involve any defaults.
>>
>> VirtIO SCSI should be supported, as users should have the freedom to
>> choose between VirtIO SCSI and VirtIO BLK even after the guest
>> installation.
>>
>> Does this sound acceptable?
>
> I've got zero experience with in-place conversions. I've skimmed
> <https://libguestfs.org/virt-v2v-in-place.1.html> now, but the use case
> continues to elude me.
>
> What is in-place conversion good for? If you already have a libvirt
> domain XML (i.e., one *not* output by virt-v2v as the result of a
> conversion from a foreign hypervisor), what do you need
> virt-v2v-in-place for?
>
> My understanding is that virt-v2v produces both an output disk (set)
> and a domain description (be it a QEMU cmdline, a libvirt domain XML,
> an OVF, ...), *and* that these two kinds of output belong together;
> there is not one without the other. What's the data flow with in-place
> conversion?
>
> Laszlo
>

We use v2v as a guest conversion engine and prepare the VM configuration
ourselves. This is more appropriate for us, as we have different
constraints under different conditions.

This makes sense even outside of a foreign-hypervisor scenario, as we can
change the bus of a disk and then call v2v to teach the guest to boot
from the new location. This has proven very useful for fixing some
strange issues on the customer's side.

That is it.

Den

P.S. Resent (original mail was accidentally sent off-list)
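For concreteness, the workflow Den describes might look roughly like the
sketch below. The file paths, the domain name and the exact XML edit are
made-up assumptions; only virt-v2v-in-place itself and its "-i
libvirtxml" input mode come from the manual page linked above.

    # Illustrative sketch only -- paths, names and the concrete XML edit
    # are assumptions, not something stated in the thread.

    # 1. Edit the self-maintained domain XML, e.g. move the system disk
    #    from virtio-blk to virtio-scsi:
    #      <target dev='vda' bus='virtio'/>  ->  <target dev='sda' bus='scsi'/>
    #    and add a <controller type='scsi' model='virtio-scsi'/> element.
    vi /srv/vz/conf/win2019.xml

    # 2. Let virt-v2v-in-place adjust the guest itself (install drivers,
    #    fix the boot configuration) so that it can boot from the devices
    #    the XML now describes.
    virt-v2v-in-place -i libvirtxml /srv/vz/conf/win2019.xml

    # 3. (Re)define and start the domain from the very same XML.
    virsh define /srv/vz/conf/win2019.xml
    virsh start win2019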
Laszlo Ersek
2023-Feb-24 12:02 UTC
[Libguestfs] [V2V PATCH 0/5] Bring support for virtio-scsi back to Windows
On 2/24/23 12:55, Denis V. Lunev wrote:
> On 2/24/23 05:56, Laszlo Ersek wrote:
>> I've got zero experience with in-place conversions. I've skimmed
>> <https://libguestfs.org/virt-v2v-in-place.1.html> now, but the use case
>> continues to elude me.
>>
>> What is in-place conversion good for? If you already have a libvirt
>> domain XML (i.e., one *not* output by virt-v2v as the result of a
>> conversion from a foreign hypervisor), what do you need
>> virt-v2v-in-place for?
>>
>> My understanding is that virt-v2v produces both an output disk (set)
>> and a domain description (be it a QEMU cmdline, a libvirt domain XML,
>> an OVF, ...), *and* that these two kinds of output belong together;
>> there is not one without the other. What's the data flow with in-place
>> conversion?
>>
>> Laszlo
>>
> We use v2v as a guest conversion engine and prepare the VM configuration
> ourselves. This is more appropriate for us, as we have different
> constraints under different conditions.
>
> This makes sense even outside of a foreign-hypervisor scenario, as we
> can change the bus of a disk and then call v2v to teach the guest to
> boot from the new location. This has proven very useful for fixing some
> strange issues on the customer's side.
>
> That is it.
>
> Den
>
> P.S. Resent (original mail was accidentally sent off-list)
>

So the use case is more or less "-i libvirtxml", with the domain XML
created from scratch (or liberally tweaked, starting from a "more
authentic" original domain XML).

The cover letter mentions the following commit:

commit b28cd1dcfeb40e7002e8d0b0ce9dcc4ce86beb6c
Author: Richard W.M. Jones <rjones at redhat.com>
Date:   Mon Nov 8 09:00:20 2021 +0000

    Remove requested_guestcaps / rcaps

    This was part of the old in-place support.  When we add new in-place
    support we'll do something else, but currently this is dead code so
    remove it completely.

    Note this removes the code for installing the virtio-scsi driver
    (only ever using virtio-blk).  This was also dead code in the
    current implementation of virt-v2v, but worth remembering in case we
    want to resurrect this feature in a future version.

    Acked-by: Laszlo Ersek <lersek at redhat.com>

So I guess that "something else" is what I'm missing here. Right now the
proposed use case seems to be:

- tweak the domain XML in some way (or generate it in some "bespoke" way
  from the outset)

- perform an in-place conversion with virt-v2v, making sure that
  virt-v2v prefers virtio-scsi as first priority. This will end up
  injecting the virtio-scsi guest driver into the guest.

This looks less than ideal to me. Even if we offer virtio-scsi, that
should be driven by a specific knob.

I initially thought that knob could be a new command line option. Then,
upon reading

    we can change the bus of a disk and then call v2v to teach the guest
    to boot from the new location

I thought that the contents of the tweaked domain XML would *steer* the
guest driver selection. (This seemed plausible, because we used to have
"source" domain properties, and I vaguely recalled that they once had
been relevant for in-place conversion.) But these patches don't do any
such steering, AFAICT.

Why is the "rcaps" logic from before commit b28cd1dc not being brought
back?

Laszlo
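To make the hypothetical "steering" concrete, here is a small sketch of
what an XML-driven selection could look like from the caller's side. The
domain name, disk path and XML fragment are made up for illustration; as
noted above, the posted patches do not actually derive the driver choice
from the XML this way.

    # Hypothetical XML-driven "knob" -- NOT how the posted patches behave.
    # All names and paths below are illustrative only.
    #
    # The self-maintained domain XML already describes the disk on a
    # virtio-scsi bus, e.g.:
    #
    #     <disk type='file' device='disk'>
    #       <driver name='qemu' type='qcow2' discard='unmap'/>
    #       <source file='/var/lib/libvirt/images/win2019.qcow2'/>
    #       <target dev='sda' bus='scsi'/>
    #     </disk>
    #     <controller type='scsi' model='virtio-scsi'/>
    #
    # An XML-steered in-place conversion would read that description and
    # install the virtio-scsi guest driver, instead of defaulting to
    # virtio-blk:
    virt-v2v-in-place -i libvirtxml win2019.xml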