search for: maybe_sleep

Displaying 7 results from an estimated 7 matches for "maybe_sleep".

2019 Jul 31
13
[nbdkit PATCH 0/8] fd leak safety
There's enough here to warrant review; some of it probably needs backporting to stable-1.12. This will probably break tests on Haiku or other platforms that have not been as on the ball about atomic CLOEXEC; feel free to report any issues that arise, and I'll help come up with workarounds (even if we end up leaving a rare fd leak on less-capable systems). Meanwhile, I'm still working on my
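The concern in this cover letter is the window between creating a file descriptor and marking it close-on-exec. As a rough, hypothetical illustration (not code from the series), a platform with O_CLOEXEC can set the flag atomically, while the fallback leaves a small leak window:

/* Hypothetical illustration: open a file with close-on-exec set
 * atomically where O_CLOEXEC is available, falling back to a racy
 * fcntl() where it is not.  On the fallback path another thread can
 * fork+exec between open() and fcntl(), which is the kind of rare fd
 * leak the cover letter accepts on less-capable systems.
 */
#include <fcntl.h>
#include <unistd.h>

static int
open_cloexec (const char *path)
{
#ifdef O_CLOEXEC
  return open (path, O_RDONLY | O_CLOEXEC);   /* atomic, no leak window */
#else
  int fd = open (path, O_RDONLY);
  if (fd >= 0)
    fcntl (fd, F_SETFD, FD_CLOEXEC);          /* leak window before this call */
  return fd;
#endif
}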
2019 Mar 05
0
[PATCH nbdkit] Add new filter for rate-limiting connections.
...ULL);
+
+  return h;
+}
+
+/* Free up the per-connection handle. */
+static void
+rate_close (void *handle)
+{
+  struct rate_handle *h = handle;
+
+  pthread_mutex_destroy (&h->read_bucket_lock);
+  pthread_mutex_destroy (&h->write_bucket_lock);
+  free (h);
+}
+
+static inline void
+maybe_sleep (struct bucket *bucket, pthread_mutex_t *lock, uint32_t count)
+{
+  struct timespec ts;
+  uint64_t bits;
+
+  /* Count is in bytes, but we rate limit using bits.  We could
+   * multiply this by 10 to include start/stop but let's not
+   * second-guess the transport layers underneath.
+   */...
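The excerpt cuts off before the body of maybe_sleep. For orientation only, here is a minimal, self-contained sketch of the token-bucket idea it is built around; the struct fields and helper names below are assumptions for illustration, not the filter's actual code:

/* Hypothetical token-bucket sketch, not the rate filter's real code.
 * Tokens (bits) accumulate at 'rate' per second up to 'capacity'; a
 * request proceeds once it has withdrawn all the bits it needs,
 * sleeping briefly whenever the bucket runs dry.
 */
#include <stdint.h>
#include <time.h>

struct bucket {
  uint64_t rate;                /* bits per second */
  uint64_t capacity;            /* maximum burst, in bits */
  double level;                 /* bits currently in the bucket */
  struct timespec last;         /* time of the last refill */
};

static void
bucket_refill (struct bucket *b)
{
  struct timespec now;
  clock_gettime (CLOCK_MONOTONIC, &now);
  double elapsed = (now.tv_sec - b->last.tv_sec)
    + (now.tv_nsec - b->last.tv_nsec) / 1e9;
  b->last = now;
  b->level += elapsed * b->rate;
  if (b->level > b->capacity)
    b->level = b->capacity;
}

/* Withdraw 'bits' from the bucket, sleeping until enough have accrued. */
static void
bucket_withdraw (struct bucket *b, uint64_t bits)
{
  struct timespec ts = { 0, 10 * 1000 * 1000 };   /* 10 ms poll interval */

  for (;;) {
    bucket_refill (b);
    if (b->level >= bits) {
      b->level -= bits;
      return;
    }
    nanosleep (&ts, NULL);
  }
}

The real filter additionally protects each bucket with a mutex (maybe_sleep takes a pthread_mutex_t *) and sleeps for the duration the bucket computes rather than polling on a fixed interval.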
2019 Mar 05
2
[PATCH nbdkit] Add new filter for rate-limiting connections.
For virt-v2v we have been discussing how to limit network bandwidth. The initial discussion has been around how to use cgroups to do this limiting, and that is still probably what we will go with in the end. However this patch gives us another possibility for certain virt-v2v inputs, especially VDDK. We could apply a filter on top of the nbdkit plugin which limits the rate at which it copies
2019 Apr 24
0
[nbdkit PATCH 4/4] filters: Check for mutex failures
...turn;
-  pthread_mutex_lock (lock);
+  ACQUIRE_LOCK_FOR_CURRENT_SCOPE (lock);
   old_rate = bucket_adjust_rate (bucket, new_rate);
-  pthread_mutex_unlock (lock);
   if (old_rate != new_rate)
     nbdkit_debug ("rate adjusted from %" PRIu64 " to %" PRIi64,
@@ -245,9 +244,10 @@ maybe_sleep (struct bucket *bucket, pthread_mutex_t *lock, uint32_t count)
   while (bits > 0) {
     /* Run the token bucket algorithm. */
-    pthread_mutex_lock (lock);
-    bits = bucket_run (bucket, bits, &ts);
-    pthread_mutex_unlock (lock);
+    {
+      ACQUIRE_LOCK_FOR_CURRENT_SCOPE (lock);...
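ACQUIRE_LOCK_FOR_CURRENT_SCOPE releases the mutex automatically when the enclosing scope ends. A hedged sketch of how such a macro can be built on the GCC/Clang cleanup attribute follows; nbdkit's actual definition may differ in detail:

#include <pthread.h>
#include <stdlib.h>

/* Called automatically when the variable declared by the macro goes out
 * of scope; an unlock failure is treated as a fatal programming error.
 */
static void
cleanup_mutex_unlock (pthread_mutex_t **ptr)
{
  if (pthread_mutex_unlock (*ptr) != 0)
    abort ();
}

/* Simplified: one lock per scope.  A real implementation can paste
 * __LINE__ into the variable name to allow several in one scope.
 */
#define ACQUIRE_LOCK_FOR_CURRENT_SCOPE(mutex)                        \
  __attribute__ ((cleanup (cleanup_mutex_unlock)))                   \
    pthread_mutex_t *_scoped_lock = (mutex);                         \
  do {                                                               \
    if (pthread_mutex_lock (_scoped_lock) != 0)                      \
      abort ();                                                      \
  } while (0)

The nested { } block added around bucket_run in the hunk above then bounds exactly how long the lock is held, and the lock is released even on early-return paths.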
2019 Aug 03
0
[nbdkit PATCH 3/3] server: Add and use nbdkit_nanosleep
...(ms > 0 && nbdkit_nanosleep (ms / 1000, (ms % 1000) * 1000000) == -1) {
+    *err = errno;
+    return -1;
   }
   return 0;
 }
diff --git a/filters/rate/rate.c b/filters/rate/rate.c
index dbd92ad6..dca5e9fc 100644
--- a/filters/rate/rate.c
+++ b/filters/rate/rate.c
@@ -262,12 +262,10 @@ maybe_sleep (struct bucket *bucket, pthread_mutex_t *lock, uint32_t count,
     bits = bucket_run (bucket, bits, &ts);
   }
-  if (bits > 0)
-    if (nanosleep (&ts, NULL) == -1) {
-      nbdkit_error ("nanosleep: %m");
-      *err = errno;
-      return -1;
-    }
+  if...
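The point of nbdkit_nanosleep is that a plain nanosleep cannot be interrupted by server shutdown. A hedged sketch of one way to build an interruptible sleep, assuming a pipe whose read end (quit_fd here, an assumed name) becomes readable when the server is asked to shut down:

#include <errno.h>
#include <poll.h>

extern int quit_fd;   /* read end of a shutdown pipe (assumed for this sketch) */

static int
sleep_or_quit (unsigned sec, unsigned nsec)
{
  struct pollfd fd = { .fd = quit_fd, .events = POLLIN };
  int timeout_ms = sec * 1000 + nsec / 1000000;   /* ignores overflow for brevity */

  switch (poll (&fd, 1, timeout_ms)) {
  case -1:
    return -1;                 /* errno already set by poll */
  case 0:
    return 0;                  /* full sleep elapsed */
  default:
    errno = ESHUTDOWN;         /* woken early: shutting down */
    return -1;
  }
}

Sleeping this way lets a SIGINT that triggers shutdown wake the rate filter immediately instead of waiting out the remainder of the delay.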
2019 Apr 24
7
[nbdkit PATCH 0/4] More mutex sanity checking
I do have a question about whether patch 2 is right, or whether I've exposed a bigger problem in the truncate (and possibly other) filter, but the rest seem fairly straightforward.

Eric Blake (4):
  server: Check for pthread lock failures
  truncate: Factor out reading real_size under mutex
  plugins: Check for mutex failures
  filters: Check for mutex failures

 filters/cache/cache.c
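As a small illustration of the kind of check the series adds (not its actual helper), a failing pthread lock call is easiest to handle by treating it as a fatal programming error rather than continuing with unprotected data:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical wrapper: pthread_mutex_lock returns an error number
 * rather than setting errno, so report it and abort on failure.
 */
static void
lock_checked (pthread_mutex_t *lock)
{
  int r = pthread_mutex_lock (lock);
  if (r != 0) {
    fprintf (stderr, "pthread_mutex_lock: %s\n", strerror (r));
    abort ();
  }
}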
2019 Aug 03
5
[nbdkit PATCH 0/3] More responsive shutdown
We noticed while writing various libnbd tests that when the delay filter is in use, there are scenarios where we had to resort to SIGKILL to get rid of nbdkit, because it was non-responsive to SIGINT. I'm still trying to figure out the best way to add testsuite coverage of this, but already proved to myself that it works from the command line, under two scenarios that both used to cause long