Nir Soffer
2021-Jan-22 22:45 UTC
[Libguestfs] [PATCH 6/6] v2v: nbdkit: Match qemu-img number of parallel coroutines
qemu-img uses 8 parallel coroutines by default. I tested up to 16
parallel coroutines and 8 gives good results.

nbdkit uses 16 threads by default. Testing nbdkit with qemu-img shows
that 8 threads also give good results.

I think that for the rhv-upload plugin, matching the number of threads
to the number of connections would be optimal. We need to improve this
later to use the optimal number for the configured input and output
plugins.

Testing rhv-upload-plugin shows a small improvement (~6%) in total
connection time. Compared with the last version using a single
connection, we are now 50% faster.

The results are not stable yet; we need to test this with bigger
images and a real environment.

[connection 1 ops, 3.561693 s]
[dispatch 550 ops, 2.808350 s]
[write 470 ops, 2.482875 s, 316.06 MiB, 127.30 MiB/s]
[zero 78 ops, 0.178174 s, 1.26 GiB, 7.05 GiB/s]
[flush 2 ops, 0.000211 s]

[connection 1 ops, 3.561724 s]
[dispatch 543 ops, 2.836738 s]
[write 472 ops, 2.503561 s, 341.62 MiB, 136.46 MiB/s]
[zero 69 ops, 0.162465 s, 1.12 GiB, 6.89 GiB/s]
[flush 2 ops, 0.000181 s]

[connection 1 ops, 3.566931 s]
[dispatch 536 ops, 2.807226 s]
[write 462 ops, 2.508345 s, 326.12 MiB, 130.02 MiB/s]
[zero 72 ops, 0.141442 s, 1.30 GiB, 9.20 GiB/s]
[flush 2 ops, 0.000158 s]

[connection 1 ops, 3.564396 s]
[dispatch 563 ops, 2.853623 s]
[write 503 ops, 2.592482 s, 361.44 MiB, 139.42 MiB/s]
[zero 58 ops, 0.113708 s, 1.01 GiB, 8.88 GiB/s]
[flush 2 ops, 0.000149 s]

Signed-off-by: Nir Soffer <nsoffer at redhat.com>
---
 v2v/nbdkit.ml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/v2v/nbdkit.ml b/v2v/nbdkit.ml
index 46b20c9d..caa76342 100644
--- a/v2v/nbdkit.ml
+++ b/v2v/nbdkit.ml
@@ -137,6 +137,9 @@ let run_unix cmd
   add_arg "--pidfile"; add_arg pidfile;
   add_arg "--unix"; add_arg sock;
 
+  (* Match qemu-img default number of parallel coroutines *)
+  add_arg "--threads"; add_arg "8";
+
   (* Reduce verbosity in nbdkit >= 1.17.4. *)
   let version = version (config ()) in
   if version >= (1, 17, 4) then (
-- 
2.26.2
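[Editorial note: as a rough sketch of the follow-up idea in the commit
message -- deriving the thread count from the configured output rather
than hard-coding 8 -- here is some hypothetical OCaml. It is not part
of this patch; the names and the helper are made up for illustration.]

  (* Hypothetical sketch, not part of this patch: pick the nbdkit
     thread count from the output configuration instead of a fixed 8,
     falling back to the qemu-img-like default of 8. *)
  let default_threads = 8  (* matches qemu-img's default coroutines *)

  let threads_for_output ~connections =
    if connections > 0 then connections else default_threads

  let () =
    (* e.g. an rhv-upload output configured with 4 connections *)
    Printf.printf "nbdkit --threads %d\n" (threads_for_output ~connections:4)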
Nir Soffer
2021-Jan-22 23:15 UTC
[Libguestfs] [PATCH 6/6] v2v: nbdkit: Match qemu-img number of parallel coroutines
On Sat, Jan 23, 2021 at 12:45 AM Nir Soffer <nirsof at gmail.com> wrote:
Here is an example of the nbdkit debug output:
hw_id = 'd8635601-ea8b-4c5c-a624-41bd72b862d6'
datacenter = ost
host.id = '9de3098d-89e8-444c-95ed-0fc2ea74e170'
disk.id = '6263dab4-e299-43c4-8f10-9a227b9bed01'
transfer.id = 'fc27844a-00bc-4035-afd6-83c9a456cae4'
using https connection
using 4 connections
using unix socket '\x00/org/ovirt/imageio'
using unix socket '\x00/org/ovirt/imageio'
using unix socket '\x00/org/ovirt/imageio'
using unix socket '\x00/org/ovirt/imageio'
imageio features: flush=True zero=True
unix_socket='\x00/org/ovirt/imageio' max_readers=8 max_writers=8
nbdkit: python[1]: debug: python: open returned handle 0x7f17840016d0
nbdkit: python[1]: debug: python: prepare readonly=0
nbdkit: python[1]: debug: python: get_size
nbdkit: python[1]: debug: python: can_write
nbdkit: python[1]: debug: python: can_zero
nbdkit: python[1]: debug: python: can_fast_zero
nbdkit: python[1]: debug: python: can_trim
nbdkit: python[1]: debug: python: can_fua
nbdkit: python[1]: debug: python: can_flush
nbdkit: python[1]: debug: python: is_rotational
nbdkit: python[1]: debug: python: can_multi_conn
nbdkit: python[1]: debug: python: can_cache
nbdkit: python[1]: debug: python: can_extents
nbdkit: python[1]: debug: newstyle negotiation: flags: export 0xcd
nbdkit: python[1]: debug: newstyle negotiation: NBD_OPT_GO: ignoring
NBD_INFO_* request 3 (NBD_INFO_BLOCK_SIZE)
nbdkit: python[1]: debug: handshake complete, processing requests with 8 threads
nbdkit: debug: starting worker thread python.1
nbdkit: debug: starting worker thread python.4
nbdkit: debug: starting worker thread python.6
nbdkit: debug: starting worker thread python.5
nbdkit: debug: starting worker thread python.0
nbdkit: debug: starting worker thread python.3
nbdkit: debug: starting worker thread python.7
nbdkit: debug: starting worker thread python.2
(100.00/100%)
nbdkit: python.1: debug: client sent NBD_CMD_DISC, closing connection
nbdkit: python.1: debug: exiting worker thread python.1
nbdkit: python.4: debug: exiting worker thread python.4
nbdkit: python.5: debug: exiting worker thread python.5
nbdkit: python.2: debug: exiting worker thread python.2
nbdkit: python.3: debug: exiting worker thread python.3
nbdkit: python.0: debug: exiting worker thread python.0
nbdkit: python.7: debug: exiting worker thread python.7
nbdkit: python.6: debug: exiting worker thread python.6
virtual copying rate: 6518.0 M bits/sec
nbdkit: python[1]: debug: python: finalize
nbdkit: python[1]: debug: python: close
finalizing transfer fc27844a-00bc-4035-afd6-83c9a456cae4
transfer fc27844a-00bc-4035-afd6-83c9a456cae4 finalized in 2.031 seconds
Example imageio log:
2021-01-23 00:44:38,239 INFO (Thread-427) [http] OPEN
connection=427 client=local
2021-01-23 00:44:38,240 INFO (Thread-427) [tickets] [local] ADD
ticket={'dirty': False, 'ops': ['write'],
'size': 6442450944,
'sparse': True, 'transfer_id':
'fc27844a-00bc-4035-afd6-83c9a456cae4',
'uuid': 'ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b',
'timeout': 300, 'url':
'nbd:unix:/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'}
2021-01-23 00:44:38,240 INFO (Thread-427) [http] CLOSE
connection=427 client=local [connection 1 ops, 0.000776 s] [dispatch 1
ops, 0.000259 s]
2021-01-23 00:44:38,375 INFO (Thread-428) [http] OPEN
connection=428 client=local
2021-01-23 00:44:38,376 INFO (Thread-428) [http] CLOSE
connection=428 client=local [connection 1 ops, 0.000708 s] [dispatch 1
ops, 0.000187 s]
2021-01-23 00:44:39,276 INFO (Thread-429) [http] OPEN
connection=429 client=::ffff:192.168.122.23
2021-01-23 00:44:39,276 INFO (Thread-429) [images]
[::ffff:192.168.122.23] OPTIONS
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:39,277 INFO (Thread-429) [backends.nbd] Open
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
export_name='' sparse=True max_connections=8
2021-01-23 00:44:39,277 INFO (Thread-429) [backends.nbd] Close
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
2021-01-23 00:44:39,278 INFO (Thread-429) [http] CLOSE
connection=429 client=::ffff:192.168.122.23 [connection 1 ops,
0.001338 s] [dispatch 1 ops, 0.000824 s]
2021-01-23 00:44:39,287 INFO (Thread-430) [http] OPEN
connection=430 client=local
2021-01-23 00:44:39,288 INFO (Thread-431) [http] OPEN
connection=431 client=local
2021-01-23 00:44:39,289 INFO (Thread-432) [http] OPEN
connection=432 client=local
2021-01-23 00:44:39,289 INFO (Thread-433) [http] OPEN
connection=433 client=local
2021-01-23 00:44:39,290 INFO (Thread-430) [backends.nbd] Open
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
export_name='' sparse=True max_connections=8
2021-01-23 00:44:39,290 INFO (Thread-431) [backends.nbd] Open
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
export_name='' sparse=True max_connections=8
2021-01-23 00:44:39,291 INFO (Thread-432) [backends.nbd] Open
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
export_name='' sparse=True max_connections=8
2021-01-23 00:44:39,294 INFO (Thread-433) [backends.nbd] Open
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
export_name='' sparse=True max_connections=8
2021-01-23 00:44:40,399 INFO (Thread-434) [http] OPEN
connection=434 client=local
2021-01-23 00:44:40,401 INFO (Thread-434) [http] CLOSE
connection=434 client=local [connection 1 ops, 0.001947 s] [dispatch 1
ops, 0.000121 s]
2021-01-23 00:44:43,154 INFO (Thread-432) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,155 INFO (Thread-430) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,156 INFO (Thread-433) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,157 INFO (Thread-431) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,158 INFO (Thread-432) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,159 INFO (Thread-430) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,160 INFO (Thread-433) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,161 INFO (Thread-431) [images] [local] FLUSH
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:43,164 INFO (Thread-431) [backends.nbd] Close
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
2021-01-23 00:44:43,164 INFO (Thread-431) [http] CLOSE
connection=431 client=local [connection 1 ops, 3.875194 s] [dispatch
554 ops, 2.995257 s] [zero 62 ops, 0.139778 s, 793.44 MiB, 5.54 GiB/s]
[zero.zero 62 ops, 0.138456 s, 793.44 MiB, 5.60 GiB/s] [write 490 ops,
2.655315 s, 328.25 MiB, 123.62 MiB/s] [write.read 490 ops, 0.570355 s,
328.25 MiB, 575.52 MiB/s] [write.write 490 ops, 2.057818 s, 328.25
MiB, 159.51 MiB/s] [flush 2 ops, 0.000371 s]
2021-01-23 00:44:43,164 INFO (Thread-433) [backends.nbd] Close
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
2021-01-23 00:44:43,165 INFO (Thread-433) [http] CLOSE
connection=433 client=local [connection 1 ops, 3.874932 s] [dispatch
554 ops, 3.111739 s] [zero 67 ops, 0.137024 s, 1.20 GiB, 8.75 GiB/s]
[zero.zero 67 ops, 0.135575 s, 1.20 GiB, 8.84 GiB/s] [write 485 ops,
2.770474 s, 337.38 MiB, 121.78 MiB/s] [write.read 485 ops, 0.537087 s,
337.38 MiB, 628.16 MiB/s] [write.write 485 ops, 2.209982 s, 337.38
MiB, 152.66 MiB/s] [flush 2 ops, 0.000434 s]
2021-01-23 00:44:43,165 INFO (Thread-430) [backends.nbd] Close
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
2021-01-23 00:44:43,166 INFO (Thread-430) [http] CLOSE
connection=430 client=local [connection 1 ops, 3.877590 s] [dispatch
542 ops, 3.105687 s] [zero 78 ops, 0.171958 s, 1.35 GiB, 7.86 GiB/s]
[zero.zero 78 ops, 0.170382 s, 1.35 GiB, 7.93 GiB/s] [write 462 ops,
2.754178 s, 338.25 MiB, 122.81 MiB/s] [write.read 462 ops, 0.565677 s,
338.25 MiB, 597.96 MiB/s] [write.write 462 ops, 2.161230 s, 338.25
MiB, 156.51 MiB/s] [flush 2 ops, 0.000213 s]
2021-01-23 00:44:43,166 INFO (Thread-432) [backends.nbd] Close
backend
address='/run/vdsm/nbd/ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b.sock'
2021-01-23 00:44:43,167 INFO (Thread-432) [http] CLOSE
connection=432 client=local [connection 1 ops, 3.876394 s] [dispatch
541 ops, 3.029115 s] [zero 70 ops, 0.139170 s, 1.36 GiB, 9.80 GiB/s]
[zero.zero 70 ops, 0.137861 s, 1.36 GiB, 9.90 GiB/s] [write 469 ops,
2.709666 s, 338.56 MiB, 124.95 MiB/s] [write.read 469 ops, 0.539536 s,
338.56 MiB, 627.51 MiB/s] [write.write 469 ops, 2.146016 s, 338.56
MiB, 157.76 MiB/s] [flush 2 ops, 0.000305 s]
2021-01-23 00:44:44,440 INFO (Thread-435) [http] OPEN
connection=435 client=local
2021-01-23 00:44:44,440 INFO (Thread-435) [tickets] [local] REMOVE
ticket=ecbcd8a1-eeca-4c1d-97d6-b907a0c3999b
2021-01-23 00:44:44,441 INFO (Thread-435) [http] CLOSE
connection=435 client=local [connection 1 ops, 0.000628 s] [dispatch 1
ops, 0.000202 s]
Richard W.M. Jones
2021-Jan-23 06:42 UTC
[Libguestfs] [PATCH 6/6] v2v: nbdkit: Match qemu-img number of parallel coroutines
On Sat, Jan 23, 2021 at 12:45:24AM +0200, Nir Soffer wrote:
[...]
> +  (* Match qemu-img default number of parallel coroutines *)
> +  add_arg "--threads"; add_arg "8";

This will affect all nbdkit instances -- virt-v2v uses nbdkit for both
the input and output sides, eg. when converting from VMware -- so it
would be better to make this configurable as a parameter to the Nbdkit
module (eg. Nbdkit.set_threads, like the way Nbdkit.set_verbose works).

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
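[Editorial note: a minimal sketch of the direction Rich suggests,
following a set_verbose-style pattern. The names, the cmd record and
the helpers are hypothetical; the real Nbdkit module API in virt-v2v
may differ.]

  (* Hypothetical sketch, not the actual virt-v2v Nbdkit module: carry
     an optional thread count on the command description and only emit
     --threads when a caller asked for it. *)
  type cmd = {
    mutable args : string list;     (* accumulated nbdkit arguments *)
    mutable threads : int option;   (* None = nbdkit's own default *)
  }

  let create () = { args = []; threads = None }

  let set_threads cmd n = cmd.threads <- Some n

  let add_arg cmd a = cmd.args <- a :: cmd.args

  let build_args cmd =
    (match cmd.threads with
     | Some n -> add_arg cmd "--threads"; add_arg cmd (string_of_int n)
     | None -> ());
    List.rev cmd.args

With something like this, only the output side that needs it (eg. the
rhv-upload instance) would call set_threads, and input-side nbdkit
instances would keep the default.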