Nir Soffer
2021-Mar-01 15:28 UTC
[Libguestfs] [PATCH libnbd 0/3] Playing with request size and libev
The first 2 patches add a --request-size option, allowing nbdcopy to be
tuned for a particular environment. Testing in our scale lab shows a
significant improvement when using a smaller request size.

The last patch adds an example of integrating libnbd with the libev
event loop, similar to the glib main loop example. It is not directly
related, but it was useful for testing --request-size.

Nir Soffer (3):
  copy: Allocate required size instead of maximum
  copy: Add --request-size option
  examples: Add example for integrating with libev

 .gitignore                  |   1 +
 configure.ac                |  19 +++
 copy/main.c                 |  18 +++
 copy/multi-thread-copying.c |  12 +-
 copy/nbdcopy.h              |   2 +
 copy/nbdcopy.pod            |   7 +-
 copy/synch-copying.c        |  17 +-
 examples/Makefile.am        |  22 +++
 examples/copy-libev.c       | 304 ++++++++++++++++++++++++++++++++++++
 9 files changed, 391 insertions(+), 11 deletions(-)
 create mode 100644 examples/copy-libev.c

-- 
2.26.2
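
As a quick usage sketch (the socket paths and the 1 MiB value below are
only illustrative, not a recommendation), with patch 2 applied the copy
can be tuned from the command line like this:

  nbdcopy --request-size=1048576 \
      nbd+unix:///?socket=/tmp/src.sock nbd+unix:///?socket=/tmp/dst.sock
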
Nir Soffer
2021-Mar-01 15:28 UTC
[Libguestfs] [PATCH libnbd 1/3] copy: Allocate required size instead of maximum
If we cannot write zeroes efficiently, consider the command length when
allocating the buffer instead of allocating MAX_REQUEST_SIZE.

Signed-off-by: Nir Soffer <nsoffer at redhat.com>
---
 copy/multi-thread-copying.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/copy/multi-thread-copying.c b/copy/multi-thread-copying.c
index f86621b..b1cc9a5 100644
--- a/copy/multi-thread-copying.c
+++ b/copy/multi-thread-copying.c
@@ -501,6 +501,7 @@ static void
 fill_dst_range_with_zeroes (struct command *command)
 {
   char *data;
+  size_t data_size;
 
   if (destination_is_zero)
     goto free_and_return;
@@ -517,7 +518,8 @@ fill_dst_range_with_zeroes (struct command *command)
   /* Fall back to loop writing zeroes.  This is going to be slow
    * anyway, so do it synchronously.  XXX
    */
-  data = calloc (1, MAX_REQUEST_SIZE);
+  data_size = MIN (MAX_REQUEST_SIZE, command->slice.len);
+  data = calloc (1, data_size);
   if (!data) {
     perror ("calloc");
     exit (EXIT_FAILURE);
@@ -525,8 +527,8 @@ fill_dst_range_with_zeroes (struct command *command)
 
   while (command->slice.len > 0) {
     size_t len = command->slice.len;
 
-    if (len > MAX_REQUEST_SIZE)
-      len = MAX_REQUEST_SIZE;
+    if (len > data_size)
+      len = data_size;
 
     dst->ops->synch_write (dst, data, len, command->offset);
     command->slice.len -= len;
-- 
2.26.2
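
To see the effect of the change in isolation, here is a minimal
standalone sketch of the fallback zero-writing loop after this patch.
write_at() is a hypothetical stand-in for dst->ops->synch_write, and
the sizes in main() are arbitrary; the point is only that the scratch
buffer is sized by the range being zeroed, capped at MAX_REQUEST_SIZE.

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define MAX_REQUEST_SIZE (32 * 1024 * 1024)
  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  /* Hypothetical stand-in for dst->ops->synch_write. */
  static void
  write_at (const char *buf, size_t len, uint64_t offset)
  {
    printf ("writing %zu zero bytes at offset %" PRIu64 "\n", len, offset);
  }

  /* Zero a range using a scratch buffer sized by the range, not by
   * MAX_REQUEST_SIZE (the point of this patch).
   */
  static void
  zero_range (uint64_t offset, uint64_t len)
  {
    size_t data_size = MIN (MAX_REQUEST_SIZE, len);
    char *data = calloc (1, data_size);

    if (data == NULL) {
      perror ("calloc");
      exit (EXIT_FAILURE);
    }

    while (len > 0) {
      size_t n = len > data_size ? data_size : len;
      write_at (data, n, offset);
      offset += n;
      len -= n;
    }

    free (data);
  }

  int
  main (void)
  {
    zero_range (0, 64 * 1024);          /* allocates 64 KiB, not 32 MiB */
    zero_range (0, 100 * 1024 * 1024);  /* capped at 32 MiB */
    return 0;
  }
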
Nir Soffer
2021-Mar-01 15:28 UTC
[Libguestfs] [PATCH libnbd 2/3] copy: Add --request-size option
Allow the user to control the maximum request size. This can improve
performance and minimize memory usage. With the new option, it is easy
to test and tune the tool for a particular environment.

I tested this on our scale lab with FC storage, copying a 100 GiB image
with 66 GiB of data from a fast local SSD (Dell Express Flash PM1725b
3.2TB SFF) to a preallocated qcow2 volume on an FC storage domain
(NETAPP,LUN C-Mode).

The source and destination images are served by qemu-nbd, using the
same configuration used in oVirt:

  qemu-nbd --persistent --shared=8 --format=qcow2 --cache=none --aio=native \
      --read-only /scratch/nsoffer-v2v.qcow2 --socket /tmp/src.sock

  qemu-nbd --persistent --shared=8 --format=qcow2 --cache=none --aio=native \
      /dev/{vg-name}/{lv-name} --socket /tmp/dst.sock

Tested with hyperfine, using 10 runs for every request size.

Benchmark #1: ./nbdcopy --request-size=262144 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     113.299 s ±  1.160 s    [User: 7.427 s, System: 23.862 s]
  Range (min … max):   112.332 s … 115.598 s    10 runs

Benchmark #2: ./nbdcopy --request-size=524288 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     107.952 s ±  0.800 s    [User: 10.085 s, System: 24.392 s]
  Range (min … max):   107.023 s … 109.368 s    10 runs

Benchmark #3: ./nbdcopy --request-size=1048576 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     105.992 s ±  0.442 s    [User: 11.809 s, System: 24.215 s]
  Range (min … max):   105.391 s … 106.853 s    10 runs

Benchmark #4: ./nbdcopy --request-size=2097152 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     107.625 s ±  1.011 s    [User: 11.767 s, System: 26.629 s]
  Range (min … max):   105.650 s … 109.466 s    10 runs

Benchmark #5: ./nbdcopy --request-size=4194304 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     111.190 s ±  0.874 s    [User: 11.160 s, System: 27.767 s]
  Range (min … max):   109.967 s … 112.442 s    10 runs

Benchmark #6: ./nbdcopy --request-size=8388608 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     117.950 s ±  1.051 s    [User: 10.570 s, System: 28.344 s]
  Range (min … max):   116.077 s … 119.758 s    10 runs

Benchmark #7: ./nbdcopy --request-size=16777216 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     125.154 s ±  2.121 s    [User: 10.213 s, System: 28.392 s]
  Range (min … max):   122.395 s … 129.108 s    10 runs

Benchmark #8: ./nbdcopy --request-size=33554432 nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     130.694 s ±  1.315 s    [User: 4.459 s, System: 38.734 s]
  Range (min … max):   128.872 s … 133.255 s    10 runs

For reference, the same copy using qemu-img convert with the maximum
number of coroutines:

Benchmark #9: qemu-img convert -n -f raw -O raw -W -m 16 \
    nbd+unix:///?socket=/tmp/src.sock nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     106.093 s ±  4.616 s    [User: 3.994 s, System: 24.768 s]
  Range (min … max):   102.407 s … 115.493 s    10 runs

We can see that the current default request size of 32 MiB is 23%
slower and uses 17% more CPU time compared with a 1 MiB request size.

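The benchmark commands above can be driven by a single parameterized
hyperfine invocation along these lines (the warmup count here is
illustrative; the commit only states 10 runs per request size):

  hyperfine --warmup 1 --runs 10 \
      --parameter-list rs 262144,524288,1048576,2097152,4194304,8388608,16777216,33554432 \
      './nbdcopy --request-size={rs} nbd+unix:///?socket=/tmp/src.sock nbd+unix:///?socket=/tmp/dst.sock'
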
Signed-off-by: Nir Soffer <nsoffer at redhat.com>
---
 copy/main.c                 | 18 ++++++++++++++++++
 copy/multi-thread-copying.c |  6 +++---
 copy/nbdcopy.h              |  2 ++
 copy/nbdcopy.pod            |  7 ++++++-
 copy/synch-copying.c        | 17 ++++++++++++-----
 5 files changed, 41 insertions(+), 9 deletions(-)

diff --git a/copy/main.c b/copy/main.c
index 55c2b53..4fe7ae4 100644
--- a/copy/main.c
+++ b/copy/main.c
@@ -50,6 +50,7 @@ bool flush;                     /* --flush flag */
 unsigned max_requests = 64;     /* --requests */
 bool progress;                  /* -p flag */
 int progress_fd = -1;           /* --progress=FD */
+unsigned request_size = MAX_REQUEST_SIZE; /* --request-size */
 unsigned sparse_size = 4096;    /* --sparse */
 bool synchronous;               /* --synchronous flag */
 unsigned threads;               /* --threads */
@@ -91,6 +92,7 @@ main (int argc, char *argv[])
     DESTINATION_IS_ZERO_OPTION,
     FLUSH_OPTION,
     NO_EXTENTS_OPTION,
+    REQUEST_SIZE_OPTION,
     SYNCHRONOUS_OPTION,
   };
   const char *short_options = "C:pR:S:T:vV";
@@ -103,6 +105,7 @@ main (int argc, char *argv[])
     { "flush",              no_argument,       NULL, FLUSH_OPTION },
     { "no-extents",         no_argument,       NULL, NO_EXTENTS_OPTION },
     { "progress",           optional_argument, NULL, 'p' },
+    { "request-size",       optional_argument, NULL, REQUEST_SIZE_OPTION },
     { "requests",           required_argument, NULL, 'R' },
     { "short-options",      no_argument,       NULL, SHORT_OPTIONS },
     { "sparse",             required_argument, NULL, 'S' },
@@ -183,6 +186,21 @@ main (int argc, char *argv[])
       }
       break;
 
+    case REQUEST_SIZE_OPTION:
+      if (sscanf (optarg, "%u", &request_size) != 1) {
+        fprintf (stderr, "%s: --request-size: could not parse: %s\n",
+                 prog, optarg);
+        exit (EXIT_FAILURE);
+      }
+      if (request_size < MIN_REQUEST_SIZE || request_size > MAX_REQUEST_SIZE ||
+          !is_power_of_2 (request_size)) {
+        fprintf (stderr,
+                 "%s: --request-size: must be a power of 2 within %d-%d\n",
+                 prog, MIN_REQUEST_SIZE, MAX_REQUEST_SIZE);
+        exit (EXIT_FAILURE);
+      }
+      break;
+
     case 'R':
       if (sscanf (optarg, "%u", &max_requests) != 1 || max_requests == 0) {
         fprintf (stderr, "%s: --requests: could not parse: %s\n",
diff --git a/copy/multi-thread-copying.c b/copy/multi-thread-copying.c
index b1cc9a5..c649d2b 100644
--- a/copy/multi-thread-copying.c
+++ b/copy/multi-thread-copying.c
@@ -183,8 +183,8 @@ worker_thread (void *indexp)
          */
         while (exts.ptr[i].length > 0) {
           len = exts.ptr[i].length;
-          if (len > MAX_REQUEST_SIZE)
-            len = MAX_REQUEST_SIZE;
+          if (len > request_size)
+            len = request_size;
           data = malloc (len);
           if (data == NULL) {
             perror ("malloc");
@@ -518,7 +518,7 @@ fill_dst_range_with_zeroes (struct command *command)
   /* Fall back to loop writing zeroes.  This is going to be slow
    * anyway, so do it synchronously.  XXX
    */
-  data_size = MIN (MAX_REQUEST_SIZE, command->slice.len);
+  data_size = MIN (request_size, command->slice.len);
   data = calloc (1, data_size);
   if (!data) {
     perror ("calloc");
diff --git a/copy/nbdcopy.h b/copy/nbdcopy.h
index e4c3d4e..e7fe1ea 100644
--- a/copy/nbdcopy.h
+++ b/copy/nbdcopy.h
@@ -27,6 +27,7 @@
 
 #include "vector.h"
 
+#define MIN_REQUEST_SIZE 4096
 #define MAX_REQUEST_SIZE (32 * 1024 * 1024)
 
 /* This must be a multiple of MAX_REQUEST_SIZE.  Larger is better up
@@ -218,6 +219,7 @@ extern bool flush;
 extern unsigned max_requests;
 extern bool progress;
 extern int progress_fd;
+extern unsigned request_size;
 extern unsigned sparse_size;
 extern bool synchronous;
 extern unsigned threads;
diff --git a/copy/nbdcopy.pod b/copy/nbdcopy.pod
index ae92547..c265550 100644
--- a/copy/nbdcopy.pod
+++ b/copy/nbdcopy.pod
@@ -7,7 +7,7 @@ nbdcopy - copy to and from an NBD server
 nbdcopy [--allocated] [-C N|--connections=N]
         [--destination-is-zero|--target-is-zero]
         [--flush] [--no-extents] [-p|--progress|--progress=FD]
-        [-R N|--requests=N] [-S N|--sparse=N]
+        [--request-size=N] [-R N|--requests=N] [-S N|--sparse=N]
         [--synchronous] [-T N|--threads=N] [-v|--verbose]
         SOURCE DESTINATION
 
@@ -152,6 +152,11 @@ following shell commands:
  nbdcopy --progress=3 ...
  exec 3>&-
 
+=item B<--request-size=>N
+
+Set the maximum request size in bytes.  The maximum value is 32 MiB,
+specified by the NBD protocol.
+
 =item B<-R> N
 
 =item B<--requests=>N
diff --git a/copy/synch-copying.c b/copy/synch-copying.c
index 17bda16..c63bd2d 100644
--- a/copy/synch-copying.c
+++ b/copy/synch-copying.c
@@ -28,12 +28,17 @@
 
 #include "nbdcopy.h"
 
-static char buf[MAX_REQUEST_SIZE];
-
 void
 synch_copying (void)
 {
   uint64_t offset = 0;
+  unsigned char *buf;
+
+  buf = malloc (request_size);
+  if (buf == NULL) {
+    perror ("malloc");
+    exit (EXIT_FAILURE);
+  }
 
   /* If the source size is unknown then we copy data and cannot use
    * extent information.
@@ -41,7 +46,7 @@ synch_copying (void)
   if (src->size == -1) {
     size_t r;
 
-    while ((r = src->ops->synch_read (src, buf, sizeof buf, offset)) > 0) {
+    while ((r = src->ops->synch_read (src, buf, request_size, offset)) > 0) {
       dst->ops->synch_write (dst, buf, r, offset);
       offset += r;
       progress_bar (offset, src->size);
@@ -57,8 +62,8 @@ synch_copying (void)
       uint64_t count = src->size - offset;
       size_t i, r;
 
-      if (count > sizeof buf)
-        count = sizeof buf;
+      if (count > request_size)
+        count = request_size;
 
       if (extents)
         src->ops->get_extents (src, 0, offset, count, &exts);
@@ -99,4 +104,6 @@ synch_copying (void)
       free (exts.ptr);
     } /* while */
   }
+
+  free (buf);
 }
-- 
2.26.2
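
The --request-size validation above relies on an is_power_of_2 helper
that already exists in the libnbd tree; for readers without the source
at hand, a minimal equivalent looks like this (the exact upstream
definition may differ):

  #include <stdbool.h>

  /* True if v is a non-zero power of 2. */
  static inline bool
  is_power_of_2 (unsigned long v)
  {
    return v != 0 && (v & (v - 1)) == 0;
  }
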
Nir Soffer
2021-Mar-01 15:28 UTC
[Libguestfs] [PATCH libnbd 3/3] examples: Add example for integrating with libev
Add an example of copying an image between NBD servers using the libev
event loop. Currently it supports only dumb copying, without using
extents or trying to detect zeroes.

The main motivation for adding this example is testing the efficiency
of the home-brew event loop in nbdcopy.

Testing this example shows performance similar to qemu-img convert.
nbdcopy performs worse, but after tweaking the request size it shows
similar performance while using more CPU time.

I tested this only with the nbdkit memory plugin, using:

  nbdkit -f -r pattern size=1G -U /tmp/src.sock
  nbdkit -f memory size=1g -U /tmp/dst.sock

I used hyperfine to run all benchmarks with --warmup=3 and --runs=10.

Benchmark #1: ./copy-libev nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     552.9 ms ±  47.4 ms    [User: 76.4 ms, System: 456.3 ms]
  Range (min … max):   533.8 ms … 687.6 ms    10 runs

qemu-img shows the same performance, using slightly less CPU time:

Benchmark #2: qemu-img convert -n -W nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     554.6 ms ±  42.4 ms    [User: 69.1 ms, System: 456.6 ms]
  Range (min … max):   535.5 ms … 674.9 ms    10 runs

nbdcopy is 78% slower and uses 290% more CPU time:

Benchmark #3: ./nbdcopy --flush nbd+unix:///?socket=/tmp/src.sock \
    nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     935.8 ms ±  37.8 ms    [User: 206.4 ms, System: 1340.8 ms]
  Range (min … max):   890.5 ms … 1017.6 ms    10 runs

Disabling extents and sparse detection does not make a difference, but
changing the request size shows similar performance:

Benchmark #4: ./nbdcopy --flush --no-extents --sparse=0 --request-size=1048576 \
    nbd+unix:///?socket=/tmp/src.sock nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     594.5 ms ±  39.2 ms    [User: 250.0 ms, System: 1197.7 ms]
  Range (min … max):   578.2 ms … 705.8 ms    10 runs

Decreasing the number of requests is a little faster and uses less CPU
time, but nbdcopy is still 5% slower and uses 240% more CPU time:

Benchmark #5: ./nbdcopy --flush --no-extents --sparse=0 --request-size=1048576 --requests=16 \
    nbd+unix:///?socket=/tmp/src.sock nbd+unix:///?socket=/tmp/dst.sock
  Time (mean ± σ):     583.0 ms ±  30.7 ms    [User: 243.9 ms, System: 1051.5 ms]
  Range (min … max):   566.6 ms … 658.3 ms    10 runs

Signed-off-by: Nir Soffer <nsoffer at redhat.com>
---
 .gitignore            |   1 +
 configure.ac          |  19 +++
 examples/Makefile.am  |  22 +++
 examples/copy-libev.c | 304 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 346 insertions(+)
 create mode 100644 examples/copy-libev.c

diff --git a/.gitignore b/.gitignore
index 4935b81..f4ce15b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -65,6 +65,7 @@ Makefile.in
 /examples/server-flags
 /examples/strict-structured-reads
 /examples/threaded-reads-and-writes
+/examples/copy-libev
 /fuse/nbdfuse
 /fuse/nbdfuse.1
 /fuzzing/libnbd-fuzz-wrapper
diff --git a/configure.ac b/configure.ac
index 6cf563a..6d9dbbe 100644
--- a/configure.ac
+++ b/configure.ac
@@ -223,6 +223,25 @@ PKG_CHECK_MODULES([GLIB], [glib-2.0], [
 ])
 AM_CONDITIONAL([HAVE_GLIB], [test "x$GLIB_LIBS" != "x"])
 
+dnl libev support for examples that interoperate with libev event loop.
+PKG_CHECK_MODULES([LIBEV], [libev], [
+    AC_SUBST([LIBEV_CFLAGS])
+    AC_SUBST([LIBEV_LIBS])
+],[
+    dnl no pkg-config for libev, searching manually:
+    AC_CHECK_HEADERS([ev.h], [
+        AC_CHECK_LIB([ev], [ev_time], [
+            AC_SUBST([LIBEV_LIBS], ["-lev"])
+        ],
+        [
+            AC_MSG_WARN([libev not found, some examples will not be compiled])
+        ])
+    ],[
+        AC_MSG_WARN([ev.h not found, some examples will not be compiled])
+    ])
+])
+AM_CONDITIONAL([HAVE_LIBEV], [test "x$LIBEV_LIBS" != "x"])
+
 dnl FUSE is optional to build the FUSE module.
 AC_ARG_ENABLE([fuse],
     AS_HELP_STRING([--disable-fuse], [disable FUSE (guestmount) support]),
diff --git a/examples/Makefile.am b/examples/Makefile.am
index b99cac1..a8286a3 100644
--- a/examples/Makefile.am
+++ b/examples/Makefile.am
@@ -39,6 +39,11 @@ noinst_PROGRAMS += \
 	glib-main-loop
 endif
 
+if HAVE_LIBEV
+noinst_PROGRAMS += \
+	copy-libev
+endif
+
 aio_connect_read_SOURCES = \
 	aio-connect-read.c \
 	$(NULL)
@@ -213,3 +218,20 @@ glib_main_loop_LDADD = \
 	$(GLIB_LIBS) \
 	$(NULL)
 endif
+
+if HAVE_LIBEV
+copy_libev_SOURCES = \
+	copy-libev.c \
+	$(NULL)
+copy_libev_CPPFLAGS = \
+	-I$(top_srcdir)/include \
+	$(NULL)
+copy_libev_CFLAGS = \
+	$(WARNINGS_CFLAGS) \
+	$(LIBEV_CFLAGS) \
+	$(NULL)
+copy_libev_LDADD = \
+	$(top_builddir)/lib/libnbd.la \
+	$(LIBEV_LIBS) \
+	$(NULL)
+endif
diff --git a/examples/copy-libev.c b/examples/copy-libev.c
new file mode 100644
index 0000000..034711a
--- /dev/null
+++ b/examples/copy-libev.c
@@ -0,0 +1,304 @@
+/* This example shows you how to make libnbd interoperate with the
+ * libev event loop.  For more information about libev see:
+ *
+ * http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod
+ *
+ * To build it you need the libev-devel package.
+ *
+ * To run it:
+ *
+ *     nbdkit -r pattern size=1G -U /tmp/src.sock
+ *     nbdkit memory size=1g -U /tmp/dst.sock
+ *     ./copy-libev nbd+unix:///?socket=/tmp/src.sock nbd+unix:///?socket=/tmp/dst.sock
+ *
+ * To debug it:
+ *
+ *     LIBNBD_DEBUG=1 ./copy-libev ...
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include <libnbd.h>
+
+#include <ev.h>
+
+/* These values depend on the environment tested.
+ *
+ * For shared storage using direct I/O:
+ *
+ *     MAX_REQUESTS 16
+ *     REQUEST_SIZE (1024 * 1024)
+ *
+ * For nbdkit memory plugin:
+ *
+ *     MAX_REQUESTS 8
+ *     REQUEST_SIZE (128 * 1024)
+ */
+#define MAX_REQUESTS 16
+#define REQUEST_SIZE (1024 * 1024)
+
+#define MIN(a,b) (a) < (b) ? (a) : (b)
+
+#define DEBUG(fmt, ...)                                                \
+    do {                                                               \
+        if (debug)                                                     \
+            fprintf (stderr, "copy-libev: " fmt "\n", ## __VA_ARGS__); \
+    } while (0)
+
+struct connection {
+    ev_io watcher;
+    struct nbd_handle *nbd;
+};
+
+struct request {
+    int64_t offset;
+    size_t length;
+    unsigned char *data;
+};
+
+static struct ev_loop *loop;
+static ev_prepare prepare;
+static struct connection src;
+static struct connection dst;
+static struct request requests[MAX_REQUESTS];
+static int64_t size;
+static int64_t offset;
+static int64_t written;
+static bool debug;
+
+static void start_read(struct request *r);
+static int read_completed(void *user_data, int *error);
+static int write_completed(void *user_data, int *error);
+
+static inline int
+get_fd(struct connection *c)
+{
+    return nbd_aio_get_fd (c->nbd);
+}
+
+static inline int
+get_events(struct connection *c)
+{
+    int events = 0;
+    unsigned dir = nbd_aio_get_direction (c->nbd);
+
+    if (dir & LIBNBD_AIO_DIRECTION_WRITE)
+        events |= EV_WRITE;
+
+    if (dir & LIBNBD_AIO_DIRECTION_READ)
+        events |= EV_READ;
+
+    return events;
+}
+
+static void
+start_read(struct request *r)
+{
+    int64_t cookie;
+
+    assert (offset < size);
+
+    r->length = MIN (REQUEST_SIZE, size - offset);
+    r->offset = offset;
+
+    DEBUG ("start read offset=%ld len=%ld", r->offset, r->length);
+
+    cookie = nbd_aio_pread (
+        src.nbd, r->data, r->length, r->offset,
+        (nbd_completion_callback) { .callback=read_completed,
+                                    .user_data=r },
+        0);
+    if (cookie == -1) {
+        fprintf (stderr, "start_read: %s", nbd_get_error ());
+        exit (EXIT_FAILURE);
+    }
+
+    offset += r->length;
+}
+
+static int
+read_completed (void *user_data, int *error)
+{
+    struct request *r = (struct request *)user_data;
+    int64_t cookie;
+
+    DEBUG ("read completed, starting write offset=%ld len=%ld",
+           r->offset, r->length);
+
+    cookie = nbd_aio_pwrite (
+        dst.nbd, r->data, r->length, r->offset,
+        (nbd_completion_callback) { .callback=write_completed,
+                                    .user_data=r },
+        0);
+    if (cookie == -1) {
+        fprintf (stderr, "read_completed: %s", nbd_get_error ());
+        exit (EXIT_FAILURE);
+    }
+
+    return 1;
+}
+
+static int
+write_completed (void *user_data, int *error)
+{
+    struct request *r = (struct request *)user_data;
+
+    written += r->length;
+
+    DEBUG ("write completed offset=%ld len=%ld", r->offset, r->length);
+
+    if (written == size) {
+        /* The last write completed.  Stop all watchers and break out
+         * from the event loop.
+         */
+        ev_io_stop (loop, &src.watcher);
+        ev_io_stop (loop, &dst.watcher);
+        ev_prepare_stop (loop, &prepare);
+        ev_break (loop, EVBREAK_ALL);
+    }
+
+    /* If we have data to read, start a new read. */
+    if (offset < size)
+        start_read(r);
+
+    return 1;
+}
+
+/* Notify libnbd about io events. */
+static void
+io_cb (struct ev_loop *loop, ev_io *w, int revents)
+{
+    struct connection *c = (struct connection *)w;
+
+    if (revents & EV_WRITE)
+        nbd_aio_notify_write (c->nbd);
+
+    if (revents & EV_READ)
+        nbd_aio_notify_read (c->nbd);
+}
+
+static inline void
+update_watcher (struct connection *c)
+{
+    int events = get_events(c);
+
+    if (events != c->watcher.events) {
+        ev_io_stop (loop, &c->watcher);
+        ev_io_set (&c->watcher, get_fd (c), events);
+        ev_io_start (loop, &c->watcher);
+    }
+}
+
+/* Update the watchers' events based on libnbd handle state.
+ */
+static void
+prepare_cb (struct ev_loop *loop, ev_prepare *w, int revents)
+{
+    update_watcher (&src);
+    update_watcher (&dst);
+}
+
+int
+main (int argc, char *argv[])
+{
+    int i;
+
+    loop = EV_DEFAULT;
+
+    if (argc != 3) {
+        fprintf (stderr, "Usage: copy-libev src-uri dst-uri\n");
+        exit (EXIT_FAILURE);
+    }
+
+    src.nbd = nbd_create ();
+    if (src.nbd == NULL) {
+        fprintf (stderr, "nbd_create: %s\n", nbd_get_error ());
+        exit (EXIT_FAILURE);
+    }
+
+
+    dst.nbd = nbd_create ();
+    if (dst.nbd == NULL) {
+        fprintf (stderr, "nbd_create: %s\n", nbd_get_error ());
+        exit (EXIT_FAILURE);
+    }
+
+    debug = nbd_get_debug (src.nbd);
+
+    /* Connecting is fast, so use the synchronous API. */
+
+    if (nbd_connect_uri (src.nbd, argv[1])) {
+        fprintf (stderr, "nbd_connect_uri: %s\n", nbd_get_error ());
+        exit (EXIT_FAILURE);
+    }
+
+    if (nbd_connect_uri (dst.nbd, argv[2])) {
+        fprintf (stderr, "nbd_connect_uri: %s\n", nbd_get_error ());
+        exit (EXIT_FAILURE);
+    }
+
+    size = nbd_get_size (src.nbd);
+
+    if (size > nbd_get_size (dst.nbd)) {
+        fprintf (stderr, "destination is not large enough\n");
+        exit (EXIT_FAILURE);
+    }
+
+    /* Start the copy "loop".  When a request completes, it starts the
+     * next request, until the entire image has been copied. */
+
+    for (i = 0; i < MAX_REQUESTS && offset < size; i++) {
+        struct request *r = &requests[i];
+
+        r->data = malloc (REQUEST_SIZE);
+        if (r->data == NULL) {
+            perror ("malloc");
+            exit (EXIT_FAILURE);
+        }
+
+        start_read(r);
+    }
+
+    /* Start watching events on src and dst handles. */
+
+    ev_io_init (&src.watcher, io_cb, get_fd (&src), get_events (&src));
+    ev_io_start (loop, &src.watcher);
+
+    ev_io_init (&dst.watcher, io_cb, get_fd (&dst), get_events (&dst));
+    ev_io_start (loop, &dst.watcher);
+
+    /* Register a prepare watcher for updating src and dst events once
+     * before the event loop waits for new events.
+     */
+
+    ev_prepare_init (&prepare, prepare_cb);
+    ev_prepare_start (loop, &prepare);
+
+    /* Run the event loop.  The call will return when the entire image
+     * has been copied.
+     */
+
+    ev_run (loop, 0);
+
+    /* Copy completed - flush data to storage. */
+
+    DEBUG("flush");
+    if (nbd_flush (dst.nbd, 0)) {
+        fprintf (stderr, "Cannot flush: %s", nbd_get_error ());
+        exit (EXIT_FAILURE);
+    }
+
+    /* We don't care about errors here since data was flushed. */
+
+    nbd_shutdown (dst.nbd, 0);
+    nbd_close (dst.nbd);
+
+    nbd_shutdown (src.nbd, 0);
+    nbd_close (src.nbd);
+
+    /* We could free the requests' data here, but it is not really needed. */
+
+    return 0;
+}
-- 
2.26.2
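
For completeness, a rough way to try the example after applying the
series (the package name and build steps are illustrative and may
differ per distro; configure only enables the example when libev is
found via pkg-config or as ev.h/-lev):

  # install the libev development package, e.g. libev-devel, then
  # rebuild libnbd so configure picks it up and builds the example
  nbdkit -f -r pattern size=1G -U /tmp/src.sock &
  nbdkit -f memory size=1g -U /tmp/dst.sock &
  ./examples/copy-libev nbd+unix:///?socket=/tmp/src.sock \
      nbd+unix:///?socket=/tmp/dst.sock
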