Eric Blake
2019-Jul-26 02:24 UTC
[Libguestfs] [nbdkit PATCH] delay: Avoid numeric overflow
Attempting delay-read=1000 results in no delay whatsoever: 1000
seconds, scaled to milliseconds, stored in int, then subjected to

  .tv_nsec = (ms * 1000000) % 1000000000

results in .tv_nsec being set to 567587328 thanks to 32-bit overflow,
which in turn results in instant EINVAL failure of nanosleep().

Fix it by diagnosing failure to fit in an int during config, and by
avoiding math that overflows during delay(). Forbid negative delays
while at it.

Signed-off-by: Eric Blake <eblake@redhat.com>
---

I was trying to test what happens when a non-fatal signal is sent to
nbdkit (by adding a no-op SIGUSR1 handler). Most of the time, sending
SIGUSR1 to the process as a whole ended up cutting short the poll() in
a different thread (and not the nanosleep()), but by directing SIGUSR1
to the thread stuck in nanosleep, I confirmed that nanosleep() indeed
returns early with EINTR, and we end up with insufficient delay. But
getting to that point in my testing required that I first diagnose why
my attempt at a 1000-second sleep acted like no delay at all.

I plan on adding an nbdkit_sleep() wrapper function. It will resume
the sleep on unrelated EINTR, fixing the theoretical problem of not
sleeping long enough (theoretical since it only matters if we handle a
signal but want to continue execution, while all our currently handled
signals instead request process termination). It will also fix the
problem of keeping nbdkit alive too long, by returning an error to let
the delay filter know that it is time to exit rather than finishing
the sleep, whether the cause was a signal that interrupted a different
thread or merely detection of EOF from the client.
 filters/delay/delay.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/filters/delay/delay.c b/filters/delay/delay.c
index 11681c88..80eb33b0 100644
--- a/filters/delay/delay.c
+++ b/filters/delay/delay.c
@@ -32,6 +32,7 @@

 #include <config.h>

+#include <limits.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <stdint.h>
@@ -54,7 +55,7 @@ parse_delay (const char *key, const char *value)
   int r;

   if (len > 2 && strcmp (&value[len-2], "ms") == 0) {
-    if (sscanf (value, "%d", &r) == 1)
+    if (sscanf (value, "%d", &r) == 1 && r >= 0)
       return r;
     else {
       nbdkit_error ("cannot parse %s in milliseconds parameter: %s",
@@ -63,8 +64,14 @@ parse_delay (const char *key, const char *value)
     }
   }
   else {
-    if (sscanf (value, "%d", &r) == 1)
+    if (sscanf (value, "%d", &r) == 1 && r >= 0) {
+      if (r * 1000LL > INT_MAX) {
+        nbdkit_error ("seconds parameter %s is too large: %s",
+                      key, value);
+        return -1;
+      }
       return r * 1000;
+    }
     else {
       nbdkit_error ("cannot parse %s in seconds parameter: %s",
                     key, value);
@@ -79,7 +86,7 @@ delay (int ms)
   if (ms > 0) {
     const struct timespec ts = {
       .tv_sec = ms / 1000,
-      .tv_nsec = (ms * 1000000) % 1000000000
+      .tv_nsec = (ms % 1000) * 1000000,
     };
     nanosleep (&ts, NULL);
   }
-- 
2.20.1
Richard W.M. Jones
2019-Jul-26 08:10 UTC
Re: [Libguestfs] [nbdkit PATCH] delay: Avoid numeric overflow
On Thu, Jul 25, 2019 at 09:24:19PM -0500, Eric Blake wrote:
> Attempting delay-read=1000 results in no delay whatsoever: 1000
> seconds, scaled to milliseconds, stored in int, then subjected to
>
>   .tv_nsec = (ms * 1000000) % 1000000000
>
> results in .tv_nsec being set to 567587328 thanks to 32-bit overflow,
> but that in turn results in instant EINVAL failure of nanosleep().
[...]

Oops :-( ACK

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW