search for: max_pwrite_zero

Displaying 7 results from an estimated 7 matches for "max_pwrite_zero".

2018 Apr 10
4 replies
Re: [Qemu-block] v2v: -o rhv-upload: Long time spent zeroing the disk
> ...trying to do, I hope the nbd maintainer on
> qemu side can explain this.

qemu-img tries to efficiently zero out the whole device at once so that it doesn't have to use individual small write requests for unallocated parts of the image later on. The problem is that the NBD block driver has max_pwrite_zeroes = 32 MB, so it's not that efficient after all. I'm not sure if there is a real reason for this, but Eric should know.

> However, since you suggest that we could use "trim" request for these
> requests, it means that these requests are advisory (since trim is), and
> we...
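The chunking behaviour described in this excerpt can be sketched as follows. This is a minimal illustration, not qemu's actual code; the `write_zeroes` callback and `zero_device` helper are hypothetical names standing in for whatever the client uses to issue one write-zeroes request per call:

```python
# Illustrative sketch (not qemu's implementation): when the transport caps
# write-zeroes requests (the thread mentions the NBD driver's
# max_pwrite_zeroes = 32 MB), a whole-device zero must be split into many
# capped requests instead of being sent as one.

MAX_PWRITE_ZEROES = 32 * 1024 * 1024  # 32 MB per-request cap


def zero_device(write_zeroes, disk_size, max_req=MAX_PWRITE_ZEROES):
    """Cover [0, disk_size) with write-zeroes requests of at most max_req bytes."""
    offset = 0
    requests = 0
    while offset < disk_size:
        length = min(max_req, disk_size - offset)
        write_zeroes(offset, length)  # carries no data payload on the wire
        offset += length
        requests += 1
    return requests
```

With no cap, the same region could be covered by a single request, which is why the thread questions whether the 32 MB limit is really needed.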
2018 Apr 10
4 replies
Re: [Qemu-block] v2v: -o rhv-upload: Long time spent zeroing the disk
> ...eroing since a block
> device may contain junk data (we usually get dirty empty images from our
> local
> xtremio server).

(Off topic for qemu-block but ...) We don't have enough information at our end to know about any of this.

> > The problem is that the NBD block driver has max_pwrite_zeroes = 32 MB,
> > so it's not that efficient after all. I'm not sure if there is a real
> > reason for this, but Eric should know.
>
> We support zero with unlimited size without sending any payload to oVirt,
> so
> there is no reason to limit zero request by...
2018 Apr 10
0 replies
Re: [Qemu-block] v2v: -o rhv-upload: Long time spent zeroing the disk
> ...ve enough information
> at our end to know about any of this.

Yep, see my other email about a possible NBD protocol extension to actually let the client learn up-front if the exported device is known to start in an all-zero state.

>>> The problem is that the NBD block driver has max_pwrite_zeroes = 32 MB,
>>> so it's not that efficient after all. I'm not sure if there is a real
>>> reason for this, but Eric should know.
>>
>> We support zero with unlimited size without sending any payload to oVirt,
>> so
>> there is no reason...
2018 Apr 10
0 replies
Re: [Qemu-block] v2v: -o rhv-upload: Long time spent zeroing the disk
...ince the plugin created a new image, and we know that the image is empty. When the destination is a block device we cannot avoid zeroing since a block device may contain junk data (we usually get dirty empty images from our local xtremio server).

> The problem is that the NBD block driver has max_pwrite_zeroes = 32 MB,
> so it's not that efficient after all. I'm not sure if there is a real
> reason for this, but Eric should know.

We support zero with unlimited size without sending any payload to oVirt, so there is no reason to limit zero request by max_pwrite_zeros. This limit may m...
2018 Apr 10
0 replies
Re: [Qemu-block] v2v: -o rhv-upload: Long time spent zeroing the disk
> ...any of this.

Can't we use this logic in the oVirt plugin?

file based storage -> skip initial zeroing
block based storage -> use initial zeroing

Do you think that publishing disk capabilities in the sdk will solve this?

> > > The problem is that the NBD block driver has max_pwrite_zeroes = 32 MB,
> > > so it's not that efficient after all. I'm not sure if there is a real
> > > reason for this, but Eric should know.
> >
> > We support zero with unlimited size without sending any payload to oVirt,
> > so
> > there is...
2018 Apr 10
1 reply
Re: [Qemu-block] v2v: -o rhv-upload: Long time spent zeroing the disk
> ...ur
> local
> xtremio server).

And that's why qemu-img is starting life with write zeroes requests - because it needs to guarantee that the image either already started as all zeroes, or that zeroes are written to overwrite junk data.

>> The problem is that the NBD block driver has max_pwrite_zeroes = 32 MB,
>> so it's not that efficient after all. I'm not sure if there is a real
>> reason for this, but Eric should know.

Yes, I do know. But it missed qemu 2.12; it's another NBD spec proposal where I'm also going to submit a qemu patch: https://lists.deb...
2018 Apr 10
2 replies
v2v: -o rhv-upload: Long time spent zeroing the disk
We now have true zeroing support in oVirt imageio, thanks for that. However, a problem is that ‘qemu-img convert’ issues zero requests for the whole disk before starting the transfer. It does this using 32 MB requests, each of which takes approx. 1 second to execute on the oVirt side. Two problems therefore: (1) Zeroing the disk can take a long time (e.g. 40 GB takes approx. 20 minutes).
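The 20-minute figure follows directly from the request size and the per-request latency reported above. A quick back-of-the-envelope check, assuming (as the thread reports) ~1 second per 32 MB zero request on the oVirt side:

```python
# Back-of-the-envelope check of the timing claim in the original post.
# Assumption from the thread: each 32 MB zero request takes ~1 second.
disk_size = 40 * 1024**3       # 40 GB disk
request_size = 32 * 1024**2    # 32 MB (the NBD driver's max_pwrite_zeroes)
seconds_per_request = 1.0

requests = disk_size // request_size                 # number of zero requests
total_minutes = requests * seconds_per_request / 60  # total zeroing time

print(requests, round(total_minutes, 1))  # 1280 requests, ~21.3 minutes
```

This matches the "approx. 20 minutes" observation, and shows why lifting the per-request cap (or skipping the zeroing pass entirely for known-empty images) would help.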