Nir Soffer
2019-Nov-30 21:42 UTC
[Libguestfs] [PATCH nbdkit v2 0/3] filters: stats: More useful, more friendly
- Use more friendly output with GiB and MiB/s
- Measure time per operation, providing finer grain stats
- Add total stats for understanding system throughput
- Add missing stats for flush

I hope that these changes will help to understand and improve virt-v2v
performance.

Changes since v1:
- Keep bytes values
- Increase precision to 0.001 GiB and 0.001 MiB/s
- Add total stats
- Show time before size/rate for more consistent format
- Streamline using new record_stat() and print_stat()

v1 was here:
https://www.redhat.com/archives/libguestfs/2019-November/msg00345.html

Nir Soffer (3):
  filters: stats: Add size in GiB, show rate in MiB/s
  filters: stats: Measure time per operation
  filters: stats: Add flush stats

 filters/stats/stats.c | 108 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 88 insertions(+), 20 deletions(-)

--
2.21.0
Nir Soffer
2019-Nov-30 21:42 UTC
[Libguestfs] [PATCH nbdkit v2 1/3] filters: stats: Add size in GiB, show rate in MiB/s
I find bytes and bits-per-second unhelpful and hard to parse. Also add
the size in GiB, and show the rate in MiB per second. This works well
for common disk images and storage.

Here is an example run with this change:

$ ./nbdkit --foreground \
    --unix /tmp/nbd.sock \
    --exportname '' \
    --filter stats \
    file file=/var/tmp/dst.img \
    statsfile=/dev/stderr \
    --run 'qemu-img convert -p -n -f raw -O raw -T none /var/tmp/fedora-30.img nbd:unix:/tmp/nbd.sock'
    (100.00/100%)
elapsed time: 2.055 s
write: 1271 ops, 1219244032 bytes, 1.136 GiB, 565.710 MiB/s
zero: 1027 ops, 5223206912 bytes, 4.864 GiB, 2423.486 MiB/s
extents: 1 ops, 2147483136 bytes, 2.000 GiB, 996.399 MiB/s
---
 filters/stats/stats.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/filters/stats/stats.c b/filters/stats/stats.c
index ffc367c..6bef65c 100644
--- a/filters/stats/stats.c
+++ b/filters/stats/stats.c
@@ -49,6 +49,10 @@
 #include "cleanup.h"
 #include "tvdiff.h"
 
+#define MiB 1048576.0
+#define GiB 1073741824.0
+#define USEC 1000000.0
+
 static char *filename;
 static bool append;
 static FILE *fp;
@@ -70,29 +74,35 @@ static stat extents_st = { "extents" };
 static stat cache_st = { "cache" };
 
 static inline double
-calc_bps (uint64_t bytes, int64_t usecs)
+calc_rate (uint64_t bytes, int64_t usecs)
 {
-  return 8.0 * bytes / usecs * 1000000.;
+  return bytes / MiB / usecs * USEC;
 }
 
 static inline void
 print_stat (const stat *st, int64_t usecs)
 {
   if (st->ops > 0)
-    fprintf (fp, "%s: %" PRIu64 " ops, %" PRIu64 " bytes, %g bits/s\n",
-             st->name, st->ops, st->bytes, calc_bps (st->bytes, usecs));
+    fprintf (fp, "%s: %" PRIu64 " ops, %" PRIu64 " bytes, %.3f GiB, %.3f MiB/s\n",
+             st->name,
+             st->ops,
+             st->bytes,
+             st->bytes / GiB,
+             calc_rate (st->bytes, usecs));
 }
 
 static inline void
 print_stats (int64_t usecs)
 {
-  fprintf (fp, "elapsed time: %g s\n", usecs / 1000000.);
+  fprintf (fp, "elapsed time: %.3f s\n", usecs / USEC);
+
   print_stat (&pread_st, usecs);
   print_stat (&pwrite_st, usecs);
   print_stat (&trim_st, usecs);
   print_stat (&zero_st, usecs);
   print_stat (&extents_st, usecs);
   print_stat (&cache_st, usecs);
+
+  fflush (fp);
 }
--
2.21.0
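[Editor's note: below is a minimal standalone sketch, not part of the patch,
showing the unit conversion the new calc_rate() performs; the sample numbers
are taken from the "write" line of the example run above, and the file name
"rate.c" is hypothetical.]

/* Sketch only: how bytes and microseconds scale into GiB and MiB/s
 * with the constants the patch defines. */
#include <stdio.h>
#include <stdint.h>

#define MiB 1048576.0
#define GiB 1073741824.0
#define USEC 1000000.0

static double
calc_rate (uint64_t bytes, int64_t usecs)
{
  return bytes / MiB / usecs * USEC;
}

int
main (void)
{
  uint64_t bytes = 1219244032;   /* bytes written in the example run */
  int64_t usecs = 2055000;       /* ~2.055 s elapsed time */

  printf ("%.3f GiB, %.3f MiB/s\n", bytes / GiB, calc_rate (bytes, usecs));
  /* Prints approximately: 1.136 GiB, 565.8 MiB/s */
  return 0;
}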
Nir Soffer
2019-Nov-30 21:42 UTC
[Libguestfs] [PATCH nbdkit v2 2/3] filters: stats: Measure time per operation
Previously we measured the total elapsed time and used it to calculate
the rate of the different operations. This is misleading and hides the
real throughput. Change the per-operation stats to use the time spent
in that operation.

To understand total system throughput, also show new "total" stats,
summing the ops and bytes of the other operations and using the total
elapsed time.

This works well for copying images to/from nbdkit. Here is an example
run with this change:

$ ./nbdkit --foreground \
    --unix /tmp/nbd.sock \
    --exportname '' \
    --filter stats \
    file file=/var/tmp/dst.img \
    statsfile=/dev/stderr \
    --run 'qemu-img convert -p -n -f raw -O raw -T none /var/tmp/fedora-30.img nbd:unix:/tmp/nbd.sock'
    (100.00/100%)
total: 2299 ops, 2.190 s, 6442450944 bytes, 6.000 GiB, 2804.977 MiB/s
write: 1271 ops, 0.359 s, 1219244032 bytes, 1.136 GiB, 3237.466 MiB/s
zero: 1027 ops, 0.012 s, 5223206912 bytes, 4.864 GiB, 398658.526 MiB/s
extents: 1 ops, 0.000 s, 2147483136 bytes, 2.000 GiB, 113777750.651 MiB/s

This shows that the actual time waiting for storage was only 0.4
seconds, but the elapsed time was 2.1 seconds. I think the missing time
is in flush(), which we do not measure yet.
---
 filters/stats/stats.c | 70 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 52 insertions(+), 18 deletions(-)

diff --git a/filters/stats/stats.c b/filters/stats/stats.c
index 6bef65c..2c92c65 100644
--- a/filters/stats/stats.c
+++ b/filters/stats/stats.c
@@ -62,6 +62,7 @@ typedef struct {
   const char *name;
   uint64_t ops;
   uint64_t bytes;
+  uint64_t usecs;
 } stat;
 
 /* This lock protects all the stats. */
@@ -80,28 +81,42 @@ calc_rate (uint64_t bytes, int64_t usecs)
 }
 
 static inline void
-print_stat (const stat *st, int64_t usecs)
+print_stat (const stat *st)
 {
   if (st->ops > 0)
-    fprintf (fp, "%s: %" PRIu64 " ops, %" PRIu64 " bytes, %.3f GiB, %.3f MiB/s\n",
+    fprintf (fp, "%s: %" PRIu64 " ops, %.3f s, %" PRIu64 " bytes, %.3f GiB, %.3f MiB/s\n",
             st->name,
             st->ops,
+            st->usecs / USEC,
             st->bytes,
             st->bytes / GiB,
-            calc_rate (st->bytes, usecs));
+            calc_rate (st->bytes, st->usecs));
 }
 
 static inline void
 print_stats (int64_t usecs)
 {
-  fprintf (fp, "elapsed time: %.3f s\n", usecs / USEC);
-
-  print_stat (&pread_st, usecs);
-  print_stat (&pwrite_st, usecs);
-  print_stat (&trim_st, usecs);
-  print_stat (&zero_st, usecs);
-  print_stat (&extents_st, usecs);
-  print_stat (&cache_st, usecs);
+  const stat total_st = {
+    .name = "total",
+    .ops = pread_st.ops +
+           pwrite_st.ops +
+           trim_st.ops +
+           zero_st.ops +
+           extents_st.ops,
+    .bytes = pread_st.bytes +
+             pwrite_st.bytes +
+             trim_st.bytes +
+             zero_st.bytes,
+    .usecs = usecs,
+  };
+
+  print_stat (&total_st);
+  print_stat (&pread_st);
+  print_stat (&pwrite_st);
+  print_stat (&trim_st);
+  print_stat (&zero_st);
+  print_stat (&extents_st);
+  print_stat (&cache_st);
 
   fflush (fp);
 }
@@ -187,11 +202,18 @@ stats_config_complete (nbdkit_next_config_complete *next, void *nxdata)
   "statsappend=<BOOL> True to append to the log (default false).\n"
 
 static inline void
-record_stat (stat *st, uint32_t count)
+record_stat (stat *st, uint32_t count, struct timeval *start)
 {
+  struct timeval end;
+  uint64_t usecs;
+
+  gettimeofday(&end, NULL);
+  usecs = tvdiff_usec(start, &end);
+
   ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
   st->ops++;
   st->bytes += count;
+  st->usecs += usecs;
 }
 
 /* Read. */
@@ -200,10 +222,12 @@ stats_pread (struct nbdkit_next_ops *next_ops, void *nxdata,
              void *handle, void *buf, uint32_t count, uint64_t offset,
              uint32_t flags, int *err)
 {
+  struct timeval start;
   int r;
 
+  gettimeofday (&start, NULL);
   r = next_ops->pread (nxdata, buf, count, offset, flags, err);
-  if (r == 0) record_stat (&pread_st, count);
+  if (r == 0) record_stat (&pread_st, count, &start);
   return r;
 }
 
@@ -214,10 +238,12 @@ stats_pwrite (struct nbdkit_next_ops *next_ops, void *nxdata,
              const void *buf, uint32_t count, uint64_t offset,
              uint32_t flags, int *err)
 {
+  struct timeval start;
   int r;
 
+  gettimeofday (&start, NULL);
   r = next_ops->pwrite (nxdata, buf, count, offset, flags, err);
-  if (r == 0) record_stat (&pwrite_st, count);
+  if (r == 0) record_stat (&pwrite_st, count, &start);
   return r;
 }
 
@@ -228,10 +254,12 @@ stats_trim (struct nbdkit_next_ops *next_ops, void *nxdata,
             uint32_t count, uint64_t offset, uint32_t flags, int *err)
 {
+  struct timeval start;
   int r;
 
+  gettimeofday (&start, NULL);
   r = next_ops->trim (nxdata, count, offset, flags, err);
-  if (r == 0) record_stat (&trim_st, count);
+  if (r == 0) record_stat (&trim_st, count, &start);
   return r;
 }
 
@@ -242,10 +270,12 @@ stats_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
             uint32_t count, uint64_t offset, uint32_t flags, int *err)
 {
+  struct timeval start;
   int r;
 
+  gettimeofday (&start, NULL);
   r = next_ops->zero (nxdata, count, offset, flags, err);
-  if (r == 0) record_stat (&zero_st, count);
+  if (r == 0) record_stat (&zero_st, count, &start);
   return r;
 }
 
@@ -256,14 +286,16 @@ stats_extents (struct nbdkit_next_ops *next_ops, void *nxdata,
               uint32_t count, uint64_t offset, uint32_t flags,
               struct nbdkit_extents *extents, int *err)
 {
+  struct timeval start;
   int r;
 
+  gettimeofday (&start, NULL);
   r = next_ops->extents (nxdata, count, offset, flags, extents, err);
   /* XXX There's a case for trying to determine how long the extents
    * will be that are returned to the client (instead of simply using
    * count), given the flags and the complex rules in the protocol.
    */
-  if (r == 0) record_stat (&extents_st, count);
+  if (r == 0) record_stat (&extents_st, count, &start);
   return r;
 }
 
@@ -274,10 +306,12 @@ stats_cache (struct nbdkit_next_ops *next_ops, void *nxdata,
             uint32_t count, uint64_t offset, uint32_t flags, int *err)
 {
+  struct timeval start;
   int r;
 
+  gettimeofday (&start, NULL);
   r = next_ops->cache (nxdata, count, offset, flags, err);
-  if (r == 0) record_stat (&cache_st, count);
+  if (r == 0) record_stat (&cache_st, count, &start);
   return r;
 }
--
2.21.0
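[Editor's note: below is a minimal standalone sketch, not part of the patch,
of the timing pattern used above: take a timeval before the wrapped call and
accumulate the elapsed microseconds afterwards. The real filter uses the
tvdiff_usec() helper from nbdkit's "tvdiff.h"; here an equivalent difference
is computed inline, and slow_operation() is a stand-in for next_ops->pwrite()
and friends.]

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/time.h>

static uint64_t total_usecs;

static void
slow_operation (void)
{
  usleep (1000);                  /* pretend to do 1 ms of I/O */
}

static void
timed_call (void)
{
  struct timeval start, end;
  int64_t usecs;

  gettimeofday (&start, NULL);
  slow_operation ();
  gettimeofday (&end, NULL);

  /* Same arithmetic tvdiff_usec() performs. */
  usecs = (int64_t) (end.tv_sec - start.tv_sec) * 1000000
          + (end.tv_usec - start.tv_usec);
  total_usecs += usecs;
}

int
main (void)
{
  for (int i = 0; i < 10; i++)
    timed_call ();
  printf ("time spent in operations: %.3f s\n", total_usecs / 1000000.0);
  return 0;
}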
Nir Soffer
2019-Nov-30 21:42 UTC
[Libguestfs] [PATCH nbdkit v2 3/3] filters: stats: Add flush stats
Flush is usually called only once, but it can take significant time, so
we need to include it.

Here is an example run with this change:

$ ./nbdkit --foreground \
    --unix /tmp/nbd.sock \
    --exportname '' \
    --filter stats \
    file file=/var/tmp/dst.img \
    statsfile=/dev/stderr \
    --run 'qemu-img convert -p -n -f raw -O raw -T none /var/tmp/fedora-30.img nbd:unix:/tmp/nbd.sock'
    (100.00/100%)
total: 2299 ops, 2.172 s, 6442450944 bytes, 6.000 GiB, 2829.295 MiB/s
write: 1271 ops, 0.356 s, 1219244032 bytes, 1.136 GiB, 3269.711 MiB/s
zero: 1027 ops, 0.012 s, 5223206912 bytes, 4.864 GiB, 431052.118 MiB/s
extents: 1 ops, 0.000 s, 2147483136 bytes, 2.000 GiB, 107789447.985 MiB/s
flush: 2 ops, 1.156 s
---
 filters/stats/stats.c | 36 ++++++++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 6 deletions(-)

diff --git a/filters/stats/stats.c b/filters/stats/stats.c
index 2c92c65..1d0f242 100644
--- a/filters/stats/stats.c
+++ b/filters/stats/stats.c
@@ -73,6 +73,7 @@ static stat trim_st = { "trim" };
 static stat zero_st = { "zero" };
 static stat extents_st = { "extents" };
 static stat cache_st = { "cache" };
+static stat flush_st = { "flush" };
 
 static inline double
 calc_rate (uint64_t bytes, int64_t usecs)
@@ -83,14 +84,20 @@ calc_rate (uint64_t bytes, int64_t usecs)
 static inline void
 print_stat (const stat *st)
 {
-  if (st->ops > 0)
-    fprintf (fp, "%s: %" PRIu64 " ops, %.3f s, %" PRIu64 " bytes, %.3f GiB, %.3f MiB/s\n",
+  if (st->ops > 0) {
+    fprintf (fp, "%s: %" PRIu64 " ops, %.3f s",
             st->name,
             st->ops,
-            st->usecs / USEC,
-            st->bytes,
-            st->bytes / GiB,
-            calc_rate (st->bytes, st->usecs));
+            st->usecs / USEC);
+
+    if (st->bytes > 0)
+      fprintf (fp, ", %" PRIu64 " bytes, %.3f GiB, %.3f MiB/s",
+               st->bytes,
+               st->bytes / GiB,
+               calc_rate (st->bytes, st->usecs));
+
+    fprintf (fp, "\n");
+  }
 }
 
 static inline void
@@ -117,6 +124,7 @@ print_stats (int64_t usecs)
   print_stat (&zero_st);
   print_stat (&extents_st);
   print_stat (&cache_st);
+  print_stat (&flush_st);
 
   fflush (fp);
 }
@@ -263,6 +271,21 @@ stats_trim (struct nbdkit_next_ops *next_ops, void *nxdata,
   return r;
 }
 
+/* Flush. */
+static int
+stats_flush (struct nbdkit_next_ops *next_ops, void *nxdata,
+             void *handle, uint32_t flags,
+             int *err)
+{
+  struct timeval start;
+  int r;
+
+  gettimeofday (&start, NULL);
+  r = next_ops->flush (nxdata, flags, err);
+  if (r == 0) record_stat (&flush_st, 0, &start);
+  return r;
+}
+
 /* Zero. */
 static int
 stats_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
             uint32_t count, uint64_t offset, uint32_t flags, int *err)
@@ -325,6 +348,7 @@ static struct nbdkit_filter filter = {
   .pread = stats_pread,
   .pwrite = stats_pwrite,
   .trim = stats_trim,
+  .flush = stats_flush,
   .zero = stats_zero,
   .extents = stats_extents,
   .cache = stats_cache,
--
2.21.0
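[Editor's note: below is a tiny standalone sketch, not part of the patch, doing
the arithmetic on the run above to show why measuring flush matters: with only
2 ops it still accounts for roughly half of the elapsed time.]

#include <stdio.h>

int
main (void)
{
  /* Per-operation times from the example run above, in seconds. */
  double flush_s = 1.156, write_s = 0.356, zero_s = 0.012, total_s = 2.172;

  printf ("flush share of elapsed time: %.1f%%\n", flush_s / total_s * 100);
  printf ("time accounted for: %.3f s of %.3f s\n",
          flush_s + write_s + zero_s, total_s);
  return 0;
}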
Richard W.M. Jones
2019-Dec-02 10:28 UTC
Re: [Libguestfs] [PATCH nbdkit v2 3/3] filters: stats: Add flush stats
I have pushed some parts of these patches in order to reduce the delta
between your patches and upstream. However, there are still some
problems with the series:

Patch 1: Same problem with scale as discussed before.

Patch 2: At least the documentation needs to be updated since it no
longer matches what is printed.

The idea of collecting the time taken in each operation is good on its
own, so I pushed that part of it along with small const-correctness and
whitespace fixes:

https://github.com/libguestfs/nbdkit/commit/f280530d7d042d5e8f100125ab0618515de52142

Why don't we show the total time and time / operation on each line of
output (ie. per operation), instead of synthesizing the total by adding
up reads and writes?

Patch 3: The idea of collecting flush stats is good so I pushed that
part, as well as updating the documentation:

https://github.com/libguestfs/nbdkit/commit/8adf601835aee3779e278e13cae0489231960aee

I omitted the "if (st->bytes > 0)" part of the patch, so this shows
"0 bytes", which is of course wrong, but we can fix that later.

Rich.

--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
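[Editor's note: below is a hypothetical sketch, not upstream code, of one way
to show the time per operation on each line as suggested above, instead of a
synthesized "total" line. It is a fragment meant in the context of
filters/stats/stats.c and relies on the stat struct, fp, and USEC defined
there; the output layout is only an illustration.]

static inline void
print_stat (const stat *st)
{
  if (st->ops > 0)
    /* Print the total time spent in this operation and the average
     * time per call, derived from the accumulated microseconds. */
    fprintf (fp, "%s: %" PRIu64 " ops, %.3f s total, %.3f ms/op\n",
             st->name,
             st->ops,
             st->usecs / USEC,
             st->usecs / 1000.0 / st->ops);
}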