Laszlo Ersek
2022-Feb-14 15:52 UTC
[Libguestfs] [PATCH v2] v2v/v2v.ml: Use larger request size for -o rhv-upload
On 02/14/22 14:01, Richard W.M. Jones wrote:
> On Mon, Feb 14, 2022 at 12:53:01PM +0100, Laszlo Ersek wrote:
>> On 02/14/22 10:56, Richard W.M. Jones wrote:
>>> This change slowed things down (slightly) for me, although the change
>>> is within the margin of error so it probably made no difference.
>>>
>>> Before:
>>>
>>> $ time ./run virt-v2v -i disk /var/tmp/fedora-35.qcow2 -o rhv-upload -oc https://ovirt4410/ovirt-engine/api -op /tmp/ovirt-passwd -oo rhv-direct -os ovirt-data -on test14 -of raw
>>> [   0.0] Setting up the source: -i disk /var/tmp/fedora-35.qcow2
>>> [   1.0] Opening the source
>>> [   6.5] Inspecting the source
>>> [  10.5] Checking for sufficient free disk space in the guest
>>> [  10.5] Converting Fedora Linux 35 (Thirty Five) to run on KVM
>>> virt-v2v: warning: /files/boot/grub2/device.map/hd0 references unknown
>>> device "vda".  You may have to fix this entry manually after conversion.
>>> virt-v2v: This guest has virtio drivers installed.
>>> [  57.0] Mapping filesystem data to avoid copying unused and blank areas
>>> [  59.0] Closing the overlay
>>> [  59.6] Assigning disks to buses
>>> [  59.6] Checking if the guest needs BIOS or UEFI to boot
>>> [  59.6] Setting up the destination: -o rhv-upload -oc https://ovirt4410/ovirt-engine/api -os ovirt-data
>>> [  79.3] Copying disk 1/1
>>> ? 100% [****************************************]
>>> [  89.9] Creating output metadata
>>> [  94.0] Finishing off
>>>
>>> real    1m34.213s
>>> user    0m6.585s
>>> sys     0m11.880s
>>>
>>>
>>> After:
>>>
>>> $ time ./run virt-v2v -i disk /var/tmp/fedora-35.qcow2 -o rhv-upload -oc https://ovirt4410/ovirt-engine/api -op /tmp/ovirt-passwd -oo rhv-direct -os ovirt-data -on test15 -of raw
>>> [   0.0] Setting up the source: -i disk /var/tmp/fedora-35.qcow2
>>> [   1.0] Opening the source
>>> [   7.4] Inspecting the source
>>> [  11.7] Checking for sufficient free disk space in the guest
>>> [  11.7] Converting Fedora Linux 35 (Thirty Five) to run on KVM
>>> virt-v2v: warning: /files/boot/grub2/device.map/hd0 references unknown
>>> device "vda".  You may have to fix this entry manually after conversion.
>>> virt-v2v: This guest has virtio drivers installed.
>>> [  59.6] Mapping filesystem data to avoid copying unused and blank areas
>>> [  61.5] Closing the overlay
>>> [  62.2] Assigning disks to buses
>>> [  62.2] Checking if the guest needs BIOS or UEFI to boot
>>> [  62.2] Setting up the destination: -o rhv-upload -oc https://ovirt4410/ovirt-engine/api -os ovirt-data
>>> [  81.6] Copying disk 1/1
>>> ? 100% [****************************************]
>>> [  91.3] Creating output metadata
>>> [  96.0] Finishing off
>>>
>>> real    1m36.275s
>>> user    0m4.700s
>>> sys     0m14.070s
>>
>> My ACK on Nir's v2 patch basically means that I defer to you on its
>> review -- I don't have anything against it, but I understand it's
>> (perhaps a temporary) workaround until we find a more sustainable (and
>> likely much more complex) solution.
>
> Sure, I don't mind taking this as a temporary solution.  The code
> itself is perfectly fine.  The request size here is essentially an
> optimization hint, it doesn't affect the architecture.
>
> An architectural problem that affects both nbdkit & nbdcopy is that
> NBD commands drive the nbdkit backend and the nbdcopy loop.  If we
> make the nbdcopy --request-size larger, NBD commands ask for more
> data, nbdkit-vddk-plugin makes larger VixDiskLib_ReadAsynch requests,
> which at some point breaks the VMware server.  (This is fairly easy to
> solve in nbdkit-vddk-plugin or with a filter.)
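(For illustration, the splitting that such a plugin fix or filter would do can be sketched in a few lines of C. MAX_VDDK_REQUEST and vddk_pread below are hypothetical stand-ins, not the real nbdkit-vddk-plugin interface; nbdkit's existing blocksize filter provides this kind of request bounding generically.)

  /* Sketch: split one large NBD read into chunks no bigger than what
   * the VMware endpoint tolerates.  MAX_VDDK_REQUEST and vddk_pread()
   * are hypothetical stand-ins for the plugin's real limit and read
   * primitive.
   */
  #include <stddef.h>
  #include <stdint.h>

  #define MAX_VDDK_REQUEST (4 * 1024 * 1024)   /* assumed safe cap */

  extern int vddk_pread (void *buf, uint32_t count, uint64_t offset);

  static int
  split_pread (void *buf, uint32_t count, uint64_t offset)
  {
    while (count > 0) {
      uint32_t n = count > MAX_VDDK_REQUEST ? MAX_VDDK_REQUEST : count;
      if (vddk_pread (buf, n, offset) == -1)
        return -1;               /* propagate the error unchanged */
      buf = (char *) buf + n;
      offset += n;
      count -= n;
    }
    return 0;
  }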
>
> But nbdcopy needs to be reworked to make the input and output requests
> separate, so that nbdcopy will coalesce and split blocks as it copies.
> This is difficult.
>
> Another problem I'm finding (eg
> https://bugzilla.redhat.com/show_bug.cgi?id=2039255#c9) is that
> performance of new virt-v2v is extremely specific to input and output
> mode, and hardware and network configurations.  For reasons that I
> don't fully understand.

How are the nbdcopy source and destination coupled with each other? From
work I'd done a decade ago, I remember that connecting two
network-oriented (UDP) processes with a small-buffer pipe between them
caused very bad effects. Whenever either process blocked on the network
(or on a timer, for example), the pipe immediately ran full or empty
(depending on which of the two processes had blocked), which in turn
blocked the other process almost immediately. The mitigation was to
insert a simple local app between the two network-oriented processes in
the pipeline, just to decouple them from each other and to ensure that a
write to the pipe, or a read from it, would effectively never block.
(The app-in-the-middle did have a maximum buffer size, but it was
configurable, so not a practical limitation; it could be multiple tens
of MB if needed.)

If nbdcopy does some internal queueing (perhaps implicitly, i.e. by
allowing multiple requests to be in flight at the same time), then
seeing some stats on those queues "in real time" could be enlightening.

Just guessing of course...

Thanks
Laszlo
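(The "app in the middle" Laszlo describes can be as small as the following sketch; this is not the original tool, and the 64MB buffer size is an illustrative choice, not a value from the mail.)

  /* Sketch of a decoupling relay: sit between two pipeline stages and
   * keep a large user-space ring buffer, so that a stall on one side
   * does not immediately block the other.
   */
  #include <poll.h>
  #include <stdio.h>
  #include <unistd.h>

  #define BUFSZ (64 * 1024 * 1024)   /* "multiple tens of MB" */

  int
  main (void)
  {
    static char buf[BUFSZ];
    size_t head = 0, tail = 0, used = 0;   /* ring buffer state */
    int in_open = 1;

    while (in_open || used > 0) {
      struct pollfd fds[2];
      int n = 0, in_idx = -1, out_idx = -1;

      if (in_open && used < BUFSZ) {       /* room to read more */
        fds[n] = (struct pollfd) { .fd = 0, .events = POLLIN };
        in_idx = n++;
      }
      if (used > 0) {                      /* data waiting to be flushed */
        fds[n] = (struct pollfd) { .fd = 1, .events = POLLOUT };
        out_idx = n++;
      }
      if (poll (fds, n, -1) == -1) { perror ("poll"); return 1; }

      if (in_idx >= 0 && (fds[in_idx].revents & (POLLIN | POLLHUP))) {
        size_t room = BUFSZ - used;
        if (head + room > BUFSZ) room = BUFSZ - head;  /* stay contiguous */
        ssize_t r = read (0, buf + head, room);
        if (r <= 0) in_open = 0;           /* EOF/error: drain and exit */
        else { head = (head + r) % BUFSZ; used += r; }
      }
      if (out_idx >= 0 && (fds[out_idx].revents & (POLLOUT | POLLERR))) {
        size_t avail = used;
        if (tail + avail > BUFSZ) avail = BUFSZ - tail;
        ssize_t w = write (1, buf + tail, avail);
        if (w < 0) { perror ("write"); return 1; }
        tail = (tail + w) % BUFSZ; used -= w;
      }
    }
    return 0;
  }

Dropped into a pipeline as "producer | relay | consumer", a stall on either side only propagates once the ring buffer runs completely full or completely empty.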
Richard W.M. Jones
2022-Feb-14 16:08 UTC
[Libguestfs] [PATCH v2] v2v/v2v.ml: Use larger request size for -o rhv-upload
On Mon, Feb 14, 2022 at 04:52:17PM +0100, Laszlo Ersek wrote:
> On 02/14/22 14:01, Richard W.M. Jones wrote:
> > But nbdcopy needs to be reworked to make the input and output requests
> > separate, so that nbdcopy will coalesce and split blocks as it copies.
> > This is difficult.
> >
> > Another problem I'm finding (eg
> > https://bugzilla.redhat.com/show_bug.cgi?id=2039255#c9) is that
> > performance of new virt-v2v is extremely specific to input and output
> > mode, and hardware and network configurations.  For reasons that I
> > don't fully understand.
>
> How are the nbdcopy source and destination coupled with each other? From
> work I'd done a decade ago, I remember that connecting two
> network-oriented (UDP) processes with a small-buffer pipe between them
> caused very bad effects. Whenever either process blocked on the network
> (or on a timer, for example), the pipe immediately ran full or empty
> (depending on which of the two processes had blocked), which in turn
> blocked the other process almost immediately. The mitigation was to
> insert a simple local app between the two network-oriented processes in
> the pipeline, just to decouple them from each other and to ensure that a
> write to the pipe, or a read from it, would effectively never block.
> (The app-in-the-middle did have a maximum buffer size, but it was
> configurable, so not a practical limitation; it could be multiple tens
> of MB if needed.)
>
> If nbdcopy does some internal queueing (perhaps implicitly, i.e. by
> allowing multiple requests to be in flight at the same time), then
> seeing some stats on those queues "in real time" could be enlightening.

So the way it works at the moment is that it's event driven.  Ignoring
extents to keep the description simple, we issue asynchronous read
requests (ie. nbd_aio_pread), and in the completion callbacks of those
requests, asynchronous write requests are started (ie. nbd_aio_pwrite):

https://gitlab.com/nbdkit/libnbd/-/blob/6725fa0e129f9a60d7b89707ef8604e0aeeeaf43/copy/multi-thread-copying.c#L372

There is a limit on the number of parallel requests in flight (nbdcopy
--requests, default 64).  This limits the implicit buffer to
max_requests * request_size.  That's 16MB in the default configuration
(64 requests * the 256K default request size).  Quite small actually ...

https://gitlab.com/nbdkit/libnbd/-/blob/6725fa0e129f9a60d7b89707ef8604e0aeeeaf43/copy/multi-thread-copying.c#L239

I've managed to reproduce the problem locally now, so I can try playing
with this limit.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
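(The read-completion-starts-write pattern Rich describes can be condensed to the sketch below. This is not the actual multi-thread-copying.c code: it assumes already-connected src/dst libnbd handles, a single thread, a disk size that is a multiple of the request size, and no extent or sparseness handling.)

  /* Sketch of the event-driven copy loop: each async read's completion
   * callback queues the matching async write.  REQUEST_SIZE and
   * MAX_REQUESTS mirror nbdcopy's defaults.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <stdint.h>
  #include <libnbd.h>

  #define REQUEST_SIZE (256 * 1024)   /* default --request-size */
  #define MAX_REQUESTS 64             /* default --requests */

  static struct nbd_handle *src, *dst;   /* assumed already connected */

  struct buffer {
    uint64_t offset;
    char data[REQUEST_SIZE];
  };

  static int
  write_done (void *vp, int *error)
  {
    free (vp);                  /* write finished: release the buffer */
    return 1;                   /* returning 1 retires the command */
  }

  static int
  read_done (void *vp, int *error)
  {
    struct buffer *b = vp;

    /* The read completed: start the corresponding write. */
    if (nbd_aio_pwrite (dst, b->data, sizeof b->data, b->offset,
                        (nbd_completion_callback)
                          { .callback = write_done, .user_data = b },
                        0) == -1) {
      fprintf (stderr, "%s\n", nbd_get_error ());
      exit (EXIT_FAILURE);
    }
    return 1;
  }

  static void
  copy_range (uint64_t offset, uint64_t end)
  {
    while (offset < end ||
           nbd_aio_in_flight (src) > 0 || nbd_aio_in_flight (dst) > 0) {
      /* Issue reads until the in-flight cap is reached.  This cap is
       * the implicit buffer discussed above: at most
       * MAX_REQUESTS * REQUEST_SIZE (= 16MB) of data is inside the
       * copy at any moment.
       */
      while (offset < end &&
             nbd_aio_in_flight (src) + nbd_aio_in_flight (dst)
               < MAX_REQUESTS) {
        struct buffer *b = malloc (sizeof *b);
        if (b == NULL) { perror ("malloc"); exit (EXIT_FAILURE); }
        b->offset = offset;
        if (nbd_aio_pread (src, b->data, sizeof b->data, offset,
                           (nbd_completion_callback)
                             { .callback = read_done, .user_data = b },
                           0) == -1) {
          fprintf (stderr, "%s\n", nbd_get_error ());
          exit (EXIT_FAILURE);
        }
        offset += REQUEST_SIZE;
      }
      /* Drive both connections; completion callbacks run from here. */
      if (nbd_aio_in_flight (src) > 0 && nbd_poll (src, 1) == -1)
        exit (EXIT_FAILURE);
      if (nbd_aio_in_flight (dst) > 0 && nbd_poll (dst, 1) == -1)
        exit (EXIT_FAILURE);
    }
  }

In this shape the MAX_REQUESTS cap is exactly the implicit 16MB buffer mentioned in the mail; raising either --requests or --request-size grows it.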