Laszlo Ersek
2023-Feb-23 10:43 UTC
[Libguestfs] [V2V PATCH 0/5] Bring support for virtio-scsi back to Windows
On 2/22/23 19:20, Andrey Drobyshev wrote:
> Since commits b28cd1dc ("Remove requested_guestcaps / rcaps"), f0afc439
> ("Remove guestcaps_block_type Virtio_SCSI") support for installing the
> virtio-scsi driver is missing in virt-v2v. AFAIU plans and demands for
> bringing this feature back have been out there for a while. E.g. I've
> found a corresponding issue which is still open [1].
>
> The code in b28cd1dc, f0afc439 was removed due to removing the old in-place
> support. However, having the new in-place support present and bringing
> this same code (partially) back with several additions and improvements,
> I'm able to successfully convert and boot a Win guest with a virtio-scsi
> disk controller. So please consider the following implementation of
> this feature.
>
> [1] https://github.com/libguestfs/virt-v2v/issues/12

(Preamble: I'm 100% deferring to Rich on this, so take my comments for what they are worth.)

In my opinion, the argument made is weak. This cover letter does not say "why" -- it does not explain why virtio-blk is insufficient for *Virtuozzo*.

Second, reference [1] -- issue #12 -- doesn't sound too convincing. It says, "opinionated qemu-based VMs that exclusively use UEFI and only virtio devices". "Opinionated" is the key word there. They're entitled to an opinion, but they're not entitled to others conforming to it. I happen to be opinionated as well, and I hold the opposite view.

(BTW, even if they insist on UEFI + virtio, which I do sympathize with, requiring virtio-scsi exclusively is hard to sell. In particular, virtio-blk nowadays has support for trim/discard (a minimal configuration sketch follows at the end of this message), so the main killer feature of virtio-scsi is no longer unique to virtio-scsi. Virtio-blk is also simpler code and arguably faster. Note: I don't want to convince anyone about what *they* support, just pointing out that virt-v2v outputting solely virtio-blk disks is entirely fine, as far as virt-v2v's mission is concerned -- "salvage 'pet' (not 'cattle') VMs from proprietary hypervisors, and make sure they boot". Virtio-blk is sufficient for booting; further tweaks are up to the admin (again, virt-v2v is not for mass/cattle conversions). The "Hetzner Cloud" is not a particular output module of virt-v2v, so I don't know why virt-v2v's mission should extend to making the converted VM bootable on "Hetzner Cloud".)

Rich has recently added tools for working with the virtio devices in Windows guests; maybe those can be employed as extra (manual) steps before or after the conversion.

Third, the last patch in the series is overreaching IMO; it switches the default. That causes a behavior change for conversions that have been working well and have been thoroughly tested. It doesn't just add a new use case, it throws away an existing use case for the new one's sake, IIUC. I don't like that.

Again -- fully deferring to Rich on the final verdict (and the review).
Laszlo

>
> v2v:
>
> Andrey Drobyshev (2):
>   Revert "Remove guestcaps_block_type Virtio_SCSI"
>   convert_windows: add Inject_virtio_win.Virtio_SCSI as a possible block
>     type
>
>  convert/convert.ml                   |  2 +-
>  convert/convert_linux.ml             |  9 +++++++--
>  convert/convert_windows.ml           |  1 +
>  convert/target_bus_assignment.ml     |  1 +
>  lib/create_ovf.ml                    |  1 +
>  lib/types.ml                         |  3 ++-
>  lib/types.mli                        |  2 +-
>  output/openstack_image_properties.ml |  7 +++++++
>  9 files changed, 22 insertions(+), 6 deletions(-)
>
> common:
>
> Andrey Drobyshev (2):
>   inject_virtio_win: add Virtio_SCSI to block_type
>   inject_virtio_win: make virtio-scsi the default block driver
>
> Roman Kagan (1):
>   inject_virtio_win: match only vendor/device
>
>  mlcustomize/inject_virtio_win.ml  | 25 ++++++++++++++++---------
>  mlcustomize/inject_virtio_win.mli |  2 +-
>  2 files changed, 17 insertions(+), 10 deletions(-)
>
> --
> 2.31.1
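The trim/discard point above is a host-side setting plus a guest driver capability; on the host side it is requested per disk in the libvirt domain XML. A minimal sketch (the image path and format are placeholders):

  <!-- virtio-blk disk with trim/discard enabled via discard='unmap' -->
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' discard='unmap'/>
    <source file='/var/lib/libvirt/images/win.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>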
Richard W.M. Jones
2023-Feb-23 10:56 UTC
[Libguestfs] [V2V PATCH 0/5] Bring support for virtio-scsi back to Windows
I'm probably not going to get around to looking at this before next week. But just wanted to say that while switching the default probably won't be acceptable, adding support for virtio-scsi may be OK.

I'll look at the patches later!

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
Denis V. Lunev
2023-Feb-23 11:48 UTC
[Libguestfs] [V2V PATCH 0/5] Bring support for virtio-scsi back to Windows
On 2/23/23 11:43, Laszlo Ersek wrote:
> (Preamble: I'm 100% deferring to Rich on this, so take my comments for
> what they are worth.)
>
> [...]
>
> Third, the last patch in the series is overreaching IMO; it switches the
> default. That causes a behavior change for conversions that have been
> working well and have been thoroughly tested. It doesn't just add a new
> use case, it throws away an existing use case for the new one's sake,
> IIUC. I don't like that.
>
> Again -- fully deferring to Rich on the final verdict (and the review).
>
> Laszlo

OK. Let me clarify the situation a bit. These patches (for sure) originate from the good old year 2017, when VirtIO BLK was completely unacceptable to us due to the missing discard feature, which is now in. Thus you are completely right about the default: changing it (if that happens at all) should be in a separate patch. Anyway, at first glance it should not be needed.
Normally, in in-place mode, which is what we are mostly worrying about, v2v should bring the guest configuration in sync with what is written in domain.xml, and that does not involve any defaults. VirtIO SCSI should be supported, as users should have the freedom to choose between VirtIO SCSI and VirtIO BLK even after the guest installation. Does this sound acceptable?

Den
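For reference, a domain.xml requesting virtio-scsi looks roughly like the sketch below (placeholder image path); in the in-place scenario described above, v2v would have to inject the matching virtio-scsi guest driver rather than fall back to any built-in default:

  <!-- one virtio-scsi controller, with the disk attached to it -->
  <controller type='scsi' index='0' model='virtio-scsi'/>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/win.qcow2'/>
    <target dev='sda' bus='scsi'/>
  </disk>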
Daniel P. Berrangé
2023-Feb-23 12:20 UTC
[Libguestfs] [V2V PATCH 0/5] Bring support for virtio-scsi back to Windows
On Thu, Feb 23, 2023 at 11:43:38AM +0100, Laszlo Ersek wrote:
> [...]
>
> Second, reference [1] -- issue #12 -- doesn't sound too convincing. It
> says, "opinionated qemu-based VMs that exclusively use UEFI and only
> virtio devices". "Opinionated" is the key word there. They're entitled
> to an opinion, but they're not entitled to others conforming to it. I
> happen to be opinionated as well, and I hold the opposite view.

I think that issue shouldn't have used the word 'opinionated', as it gives the wrong impression that the choice is somewhat arbitrary and interchangeable. I think there are rational reasons why virtio-scsi is the better choice, which they likely evaluated.

The main tradeoffs for virtio-blk vs virtio-scsi are outlined by the QEMU maintainers here:

  https://www.qemu.org/2021/01/19/virtio-blk-scsi-configuration/

TL;DR: virtio-blk is preferred for maximum speed, while virtio-scsi is preferred if you want to be able to add lots of disks without worrying about PCI slot availability.

I can totally understand why public clouds would choose to support only virtio-scsi. The speed benefits of virtio-blk are likely not relevant / visible to their customers, because when you're overcommitting hosts to serve many VMs, the bottleneck is almost certainly somewhere other than the guest disk device choice. The ease of adding many disks is very interesting to public clouds though, especially if the VM already has many NICs taking up PCI slots. Getting into adding PCI bridges adds more complexity than using SCSI.

OpenStack maintainers have also considered preferring virtio-scsi over virtio-blk for specifically this reason in the past, and might not have used virtio-blk at all if it were not for the back-compat concerns.

So I'd say it is a reasonable desire to want to (optionally) emit VMs that are set up for use of virtio-scsi instead of virtio-blk.

With regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org          -o-           https://fstop138.berrange.com :|
|: https://entangle-photo.org   -o-   https://www.instagram.com/dberrange :|
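To make the slot-availability tradeoff above concrete: all of the disks in the sketch below share a single virtio-scsi controller, which occupies one PCI slot in total, whereas each bus='virtio' (virtio-blk) disk would be a PCI device of its own. A minimal sketch with placeholder image paths:

  <controller type='scsi' index='0' model='virtio-scsi'/>
  <!-- both disks attach to the controller above; no extra PCI slots used -->
  <disk type='file' device='disk'>
    <source file='/images/data1.qcow2'/>
    <target dev='sda' bus='scsi'/>
  </disk>
  <disk type='file' device='disk'>
    <source file='/images/data2.qcow2'/>
    <target dev='sdb' bus='scsi'/>
  </disk>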