Richard W.M. Jones
2018-Apr-10 10:44 UTC
[Libguestfs] v2v: -o rhv-upload: Long time spent zeroing the disk
We now have true zeroing support in oVirt imageio, thanks for that.
However a problem is that ‘qemu-img convert’ issues zero requests for
the whole disk before starting the transfer. It does this using 32 MB
requests which take approx. 1 second each to execute on the oVirt side.
Two problems therefore:
(1) Zeroing the disk can take a long time (eg. 40 GB is approx.
20 minutes). Furthermore there is no progress indication while this
is happening.
Nothing bad happens: because it is making frequent requests there
is no timeout.
(2) I suspect that because we don't have trim support, this is
actually causing the disk to get fully allocated on the target.
The NBD requests are sent with may_trim=1 so we could turn these
into trim requests, but obviously cannot do that while there is no
trim support.
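
For illustration, a zero callback in an nbdkit Python plugin could
route such requests to the cheaper operation when the client permits
it.  This is only a sketch: it assumes nbdkit's Python plugin API v1
signature zero(h, count, offset, may_trim), a handle carrying an open
HTTP connection to imageio, and a hypothetical "trim" PATCH operation
which imageio does not have yet.

    import json

    def zero(h, count, offset, may_trim):
        # may_trim=1 means the client does not care whether the bytes
        # are discarded or written as zeroes, so we may send the cheap
        # advisory operation when the server supports it.
        op = "trim" if may_trim and h["can_trim"] else "zero"
        body = json.dumps({"op": op, "offset": offset, "size": count})
        h["http"].request("PATCH", h["path"], body=body,
                          headers={"Content-Type": "application/json"})
        r = h["http"].getresponse()
        r.read()
        if r.status != 200:
            raise RuntimeError("%s failed: %d %s" % (op, r.status, r.reason))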
Note that this is (sort of) a regression over previous versions of the
‘-o rhv-upload’ patch since the old code ignored some zero requests.
I'm not sure what to do about this. Possibly we could fix the
progress bar issue in qemu. It's also possible we could do something
in nbdkit such as having it signal back that "stuff is being done" and
turn that into some indication in virt-v2v.
Anyway it's something to be aware of, in case you try out the
forthcoming virt-v2v rhvpreview and it appears to sit at 0/100% "doing
nothing" for a long time.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
Nir Soffer
2018-Apr-10 13:03 UTC
Re: [Libguestfs] v2v: -o rhv-upload: Long time spent zeroing the disk
On Tue, Apr 10, 2018 at 1:44 PM Richard W.M. Jones <rjones@redhat.com> wrote:

> We now have true zeroing support in oVirt imageio, thanks for that.
>
> However a problem is that ‘qemu-img convert’ issues zero requests for
> the whole disk before starting the transfer. It does this using 32 MB
> requests which take approx. 1 second each to execute on the oVirt side.
>
> Two problems therefore:
>
> (1) Zeroing the disk can take a long time (eg. 40 GB is approx.
> 20 minutes). Furthermore there is no progress indication while this
> is happening.
>
> Nothing bad happens: because it is making frequent requests there
> is no timeout.
>
> (2) I suspect that because we don't have trim support, this is
> actually causing the disk to get fully allocated on the target.
>
> The NBD requests are sent with may_trim=1 so we could turn these
> into trim requests, but obviously cannot do that while there is no
> trim support.

It sounds like nbdkit is emulating trim with zero instead of a noop.

I'm not sure what qemu-img is trying to do; I hope the NBD maintainer on
the qemu side can explain this.

However, since you suggest that we could use "trim" requests for these,
it means that these requests are advisory (since trim is), and we can
just ignore them if the server does not support trim. This will also
solve the timeout issue you reported in private mail.

Adding Eric and qemu-block.

Nir
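
If trim really is advisory, the plugin's trim callback can simply
succeed without doing anything when the backend cannot discard.  A
minimal sketch along the lines suggested here, using the same assumed
handle fields as the zero sketch above:

    def trim(h, count, offset):
        # NBD_CMD_TRIM is advisory: after a trim the contents of the
        # range are undefined, so a backend without discard support may
        # legally return success without touching the data, rather than
        # emulating the trim with an expensive write of zeroes.
        if not h.get("can_trim"):
            return  # no-op: ignore the advisory request
        body = json.dumps({"op": "trim", "offset": offset, "size": count})
        h["http"].request("PATCH", h["path"], body=body,
                          headers={"Content-Type": "application/json"})
        r = h["http"].getresponse()
        r.read()
        if r.status != 200:
            raise RuntimeError("trim failed: %d %s" % (r.status, r.reason))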
Kevin Wolf
2018-Apr-10 13:48 UTC
Re: [Libguestfs] [Qemu-block] v2v: -o rhv-upload: Long time spent zeroing the disk
Am 10.04.2018 um 15:03 hat Nir Soffer geschrieben:
> On Tue, Apr 10, 2018 at 1:44 PM Richard W.M. Jones <rjones@redhat.com>
> wrote:
>
> > We now have true zeroing support in oVirt imageio, thanks for that.
> >
> > However a problem is that ‘qemu-img convert’ issues zero requests for
> > the whole disk before starting the transfer. It does this using 32 MB
> > requests which take approx. 1 second each to execute on the oVirt side.
> >
> > Two problems therefore:
> >
> > (1) Zeroing the disk can take a long time (eg. 40 GB is approx.
> > 20 minutes). Furthermore there is no progress indication while this
> > is happening.
> >
> > Nothing bad happens: because it is making frequent requests there
> > is no timeout.
> >
> > (2) I suspect that because we don't have trim support, this is
> > actually causing the disk to get fully allocated on the target.
> >
> > The NBD requests are sent with may_trim=1 so we could turn these
> > into trim requests, but obviously cannot do that while there is no
> > trim support.
>
> It sounds like nbdkit is emulating trim with zero instead of a noop.
>
> I'm not sure what qemu-img is trying to do; I hope the NBD maintainer on
> the qemu side can explain this.

qemu-img tries to efficiently zero out the whole device at once so that
it doesn't have to use individual small write requests for unallocated
parts of the image later on. The problem is that the NBD block driver
has max_pwrite_zeroes = 32 MB, so it's not that efficient after all.
I'm not sure if there is a real reason for this, but Eric should know.

> However, since you suggest that we could use "trim" requests for these,
> it means that these requests are advisory (since trim is), and we can
> just ignore them if the server does not support trim.

What qemu-img sends shouldn't be a NBD_CMD_TRIM request (which is
indeed advisory), but a NBD_CMD_WRITE_ZEROES request. qemu-img relies
on the image actually being zeroed after this.

Kevin
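
To see where the numbers come from: the pre-zeroing pass conceptually
splits one logical "zero the whole device" operation into requests
bounded by max_pwrite_zeroes.  A sketch (not qemu's actual code;
write_zeroes stands in for issuing NBD_CMD_WRITE_ZEROES, which, unlike
NBD_CMD_TRIM, must leave the range reading back as zeroes):

    MAX_PWRITE_ZEROES = 32 * 1024 ** 2  # the NBD driver limit above

    def zero_device(dev_size, write_zeroes):
        # Split the whole-device zero into bounded requests.
        offset = requests = 0
        while offset < dev_size:
            count = min(MAX_PWRITE_ZEROES, dev_size - offset)
            write_zeroes(offset, count)
            offset += count
            requests += 1
        return requests

    # 40 GiB / 32 MiB = 1280 requests; at ~1 second each that is
    # roughly the 20 minutes reported above.
    print(zero_device(40 * 1024 ** 3, lambda off, n: None))  # -> 1280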