Displaying 17 results from an estimated 17 matches for "cache_zero".
2019 May 11
2
[nbdkit PATCH] cache: Reduce use of bounce-buffer
...lock, err);
+ if (r != -1) {
+ memcpy (&block[blkoffs], buf, n);
+ r = blk_write (next_ops, nxdata, blknum, block, flags, err);
+ }
}
+ else
+ r = blk_write (next_ops, nxdata, blknum, buf, flags, err);
if (r == -1)
return -1;
@@ -334,6 +345,7 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
{
CLEANUP_FREE uint8_t *block = NULL;
bool need_flush = false;
+ bool clean = false;
block = malloc (blksize);
if (block == NULL) {
@@ -350,7 +362,7 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
}
while (count >...
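The hunks above all reshape the same read-modify-write loop. As a point of reference, a self-contained sketch of that pattern, using hypothetical blk_read/blk_write helpers and a fixed BLKSIZE (the real filter code also threads next_ops/nxdata/flags/err through), with the aligned-whole-block shortcut this patch introduces:

/* Sketch of the read-modify-write pattern the patch optimizes.
 * blk_read/blk_write and BLKSIZE are stand-ins for the filter's own
 * helpers, not its real interfaces.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BLKSIZE 4096

extern int blk_read (uint64_t blknum, uint8_t *block);        /* hypothetical */
extern int blk_write (uint64_t blknum, const uint8_t *block); /* hypothetical */

static int
cache_pwrite_sketch (const uint8_t *buf, uint32_t count, uint64_t offset)
{
  uint8_t *block = malloc (BLKSIZE);
  if (block == NULL)
    return -1;

  while (count > 0) {
    uint64_t blknum = offset / BLKSIZE;   /* block number */
    uint64_t blkoffs = offset % BLKSIZE;  /* offset within the block */
    uint64_t n = BLKSIZE - blkoffs;       /* bytes touching this block */
    if (n > count)
      n = count;

    int r;
    if (blkoffs == 0 && n == BLKSIZE) {
      /* Aligned, whole block: write it directly, skipping the bounce buffer. */
      r = blk_write (blknum, buf);
    }
    else {
      /* Unaligned or partial block: read it, patch it, write it back. */
      r = blk_read (blknum, block);
      if (r != -1) {
        memcpy (&block[blkoffs], buf, n);
        r = blk_write (blknum, block);
      }
    }
    if (r == -1) {
      free (block);
      return -1;
    }

    buf += n;
    count -= n;
    offset += n;
  }

  free (block);
  return 0;
}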
2018 Dec 28
0
[PATCH nbdkit 5/9] cache: Allow this filter to serve requests in parallel.
...ite (next_ops, nxdata, blknum, block, flags, err);
}
- memcpy (&block[blkoffs], buf, n);
- if (blk_write (next_ops, nxdata, blknum, block, flags, err) == -1) {
+ pthread_mutex_unlock (&lock);
+ if (r == -1) {
free (block);
return -1;
}
@@ -278,6 +297,7 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
}
while (count > 0) {
uint64_t blknum, blkoffs, n;
+ int r;
blknum = offset / BLKSIZE; /* block number */
blkoffs = offset % BLKSIZE; /* offset within the block */
@@ -285,12 +305,17 @@ cache_zero (struct nbdkit_next_ops...
2019 May 13
0
[nbdkit PATCH v2 2/2] cache, cow: Reduce use of bounce-buffer
...FOR_CURRENT_SCOPE (&lock);
+ r = blk_read (next_ops, nxdata, blknum, block, err);
+ if (r != -1) {
+ memcpy (block, buf, count);
+ r = blk_write (next_ops, nxdata, blknum, block, flags, err);
+ }
+ if (r == -1)
+ return -1;
}
if (need_flush)
@@ -333,6 +390,8 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
int *err)
{
CLEANUP_FREE uint8_t *block = NULL;
+ uint64_t blknum, blkoffs;
+ int r;
bool need_flush = false;
block = malloc (blksize);
@@ -348,15 +407,13 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
f...
2019 May 13
3
[nbdkit PATCH v2 0/2] Bounce buffer cleanups
Based on Rich's review of my v1, which touched only cache.c, I have now
tried to bring all three filters that do alignment rounding in line with
one another.
There is definitely room for future improvements once we teach nbdkit
to let filters and plugins advertise block sizes, but I'm hoping to
get NBD_CMD_CACHE implemented first.
Eric Blake (2):
blocksize: Process requests in linear order
2018 Jan 22
1
[PATCH nbdkit] filters: Add caching filter.
This adds a cache filter, which works like the COW filter in reverse.
For realistic use it needs a bit more work, especially to add limits
on the size of the cache, a more sensible cache replacement policy,
and perhaps some kind of background worker to write dirty blocks out.
Rich.
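As context for the diffs elsewhere in these results, here is a purely illustrative sketch of what such a block-cache read path looks like. All of the names (cache_fd, blk_is_cached, blk_set_cached, plugin_pread, cache_on_read) are hypothetical, and the real filter's policy differs in detail; the cache-on-read knob only appears in the later series below.

/* Illustrative block-cache read path: serve a block from the sparse
 * temporary file when it is already cached, otherwise read it from the
 * underlying plugin and, if cache-on-read is enabled, copy it into the
 * cache.  Not the filter's real identifiers or behaviour.
 */
#include <stdbool.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define BLKSIZE 4096

extern int cache_fd;                          /* sparse temporary file */
extern bool cache_on_read;                    /* populate cache on read miss? */
extern bool blk_is_cached (uint64_t blknum);
extern void blk_set_cached (uint64_t blknum);
extern int plugin_pread (uint8_t *buf, uint32_t count, uint64_t offset);

static int
blk_read_sketch (uint64_t blknum, uint8_t *block)
{
  off_t off = blknum * BLKSIZE;

  if (blk_is_cached (blknum))           /* cache hit: read the temp file */
    return pread (cache_fd, block, BLKSIZE, off) == BLKSIZE ? 0 : -1;

  /* Cache miss: read through to the plugin. */
  if (plugin_pread (block, BLKSIZE, off) == -1)
    return -1;
  if (cache_on_read) {                  /* optionally populate the cache */
    if (pwrite (cache_fd, block, BLKSIZE, off) != BLKSIZE)
      return -1;
    blk_set_cached (blknum);
  }
  return 0;
}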
2019 Apr 24
0
[nbdkit PATCH 4/4] filters: Check for mutex failures
...k);
r = blk_read (next_ops, nxdata, blknum, block, err);
if (r != -1) {
memcpy (&block[blkoffs], buf, n);
r = blk_write (next_ops, nxdata, blknum, block, flags, err);
}
- pthread_mutex_unlock (&lock);
if (r == -1)
return -1;
@@ -371,13 +370,12 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
/* Do a read-modify-write operation on the current block.
* Hold the lock over the whole operation.
*/
- pthread_mutex_lock (&lock);
+ ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
r = blk_read (next_ops, nxdata, blknum,...
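The ACQUIRE_LOCK_FOR_CURRENT_SCOPE macro seen in this hunk comes from nbdkit's cleanup helpers. A minimal sketch of how such a scope-based lock guard can be built on GCC/Clang __attribute__((cleanup)), including the failure checks this series adds, might look like the following; the real macro differs in detail.

/* Sketch of a scope-based lock guard in the style of
 * ACQUIRE_LOCK_FOR_CURRENT_SCOPE.  The lock is taken where the macro is
 * used and released automatically on every exit from the scope.
 * Use it once per scope (the guard variable has a fixed name here).
 */
#include <pthread.h>
#include <stdlib.h>

static void
cleanup_unlock (pthread_mutex_t **ptr)
{
  if (*ptr && pthread_mutex_unlock (*ptr) != 0)
    abort ();                   /* unlock failure is a programming error */
}

#define ACQUIRE_LOCK_FOR_CURRENT_SCOPE(mutex)                         \
  __attribute__((cleanup (cleanup_unlock)))                           \
  pthread_mutex_t *_scope_lock = (mutex);                             \
  if (pthread_mutex_lock (_scope_lock) != 0)                          \
    abort ()                    /* lock failure is a programming error */

/* Usage: the mutex is released on both the early and the normal return. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
extern int do_work (void);

static int
example (void)
{
  ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
  if (do_work () == -1)
    return -1;
  return 0;
}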
2018 Feb 01
0
[nbdkit PATCH v2 1/3] backend: Rework internal/filter error return semantics
...return r;
}
memcpy (&block[blkoffs], buf, n);
- if (blk_writeback (next_ops, nxdata, blknum, block) == -1) {
+ r = blk_writeback (next_ops, nxdata, blknum, block);
+ if (r) {
free (block);
- return -1;
+ return r;
}
buf += n;
@@ -421,6 +428,7 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
void *handle, uint32_t count, uint64_t offset, int may_trim)
{
uint8_t *block;
+ int r;
block = malloc (BLKSIZE);
if (block == NULL) {
@@ -437,14 +445,16 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
if...
2019 Jan 04
0
[PATCH nbdkit v5 3/3] cache: Implement cache-max-size and cache space reclaim.
...* max bytes we can read from this block */
+ blknum = offset / blksize; /* block number */
+ blkoffs = offset % blksize; /* offset within the block */
+ n = blksize - blkoffs; /* max bytes we can read from this block */
if (n > count)
n = count;
@@ -278,14 +344,14 @@ cache_zero (struct nbdkit_next_ops *next_ops, void *nxdata,
uint8_t *block;
bool need_flush = false;
- block = malloc (BLKSIZE);
+ block = malloc (blksize);
if (block == NULL) {
*err = errno;
nbdkit_error ("malloc: %m");
return -1;
}
- flags &= ~NBDKIT_FLAG_MAY_...
2019 Jan 04
5
[PATCH nbdkit v5 3/3] cache: Implement cache-max-size and cache space reclaim.
v4:
https://www.redhat.com/archives/libguestfs/2019-January/msg00032.html
v5:
- Now we set the block size at run time.
I'd like to say that I was able to test this change, but
unfortunately I couldn't find any easy way to create a filesystem
on x86-64 with a block size > 4K. Ext4 doesn't support it at all,
and XFS doesn't support block size > page size (and I
2018 Dec 28
12
[PATCH nbdkit 0/9] cache: Implement cache-max-size and method of reclaiming space from the cache.
This patch series enhances the cache filter in a few ways, primarily
adding a "cache-on-read" feature (similar to qemu's copyonread), and
adding the ability to limit the cache size, which in turn requires a
method for reclaiming cache blocks.
As the cache is stored as a sparse temporary file, reclaiming cache
blocks simply means punching holes in the temporary file.
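A minimal sketch of that reclaim step on Linux, assuming a hypothetical fd and run-time blksize for the cache's temporary file:

/* Free the disk space behind one cache block by punching a hole in the
 * sparse temporary file.  fd and blksize stand in for the cache filter's
 * own file descriptor and block size.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>

static int
reclaim_block (int fd, uint64_t blknum, uint64_t blksize)
{
  /* KEEP_SIZE so the file length (and hence the virtual disk size) is
   * unchanged; only the backing storage for this block is released.
   */
  return fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                    blknum * blksize, blksize);
}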
2019 Apr 24
7
[nbdkit PATCH 0/4] More mutex sanity checking
I do have a question about whether patch 2 is right, or whether I've
exposed a bigger problem in the truncate (and possibly other) filter,
but the rest seem fairly straightforward.
Eric Blake (4):
server: Check for pthread lock failures
truncate: Factor out reading real_size under mutex
plugins: Check for mutex failures
filters: Check for mutex failures
filters/cache/cache.c
2018 Jan 28
3
[nbdkit PATCH 0/2] RFC: tweak error handling, add log filter
Here's what I'm currently playing with; I'm not ready to commit
anything until I rebase my FUA work on top of this, as I only
want to break filter ABI once between releases.
Eric Blake (2):
backend: Rework internal/filter error return semantics
filters: Add log filter
TODO | 2 -
docs/nbdkit-filter.pod | 84 +++++++--
docs/nbdkit.pod
2018 Mar 08
19
[nbdkit PATCH v3 00/15] Add FUA support to nbdkit
After more than a month since v2 [1], I've finally got my FUA
support series polished. This is all of my outstanding patches,
even though some of them were originally posted in separate
threads from the original FUA post [2], [3]
[1] https://www.redhat.com/archives/libguestfs/2018-January/msg00113.html
[2] https://www.redhat.com/archives/libguestfs/2018-January/msg00219.html
[3]
2018 Feb 01
6
[nbdkit PATCH v2 0/3] add log, blocksize filters
Since v1: add the blocksize filter, add testsuite coverage of the
log filter, several fixes to the log filter based on what adding
tests revealed
I'm still working on FUA flag support patches on top of this;
the patches should all be committed in the same release, as we
want to minimize the number of releases that cause a filter
ABI/API bump
Eric Blake (3):
backend: Rework internal/filter
2019 Mar 28
32
[PATCH nbdkit v5 FINAL 00/19] Implement extents.
This has already been pushed upstream. I am simply posting these here
so we have a reference in the mailing list in case we find bugs later
(as I'm sure we will - it's a complex patch series).
Great thanks to Eric Blake for tireless review on this one. It also
seems to have identified a few minor bugs in qemu along the way.
Rich.
2019 Aug 23
22
cross-project patches: Add NBD Fast Zero support
This is a cover letter to a series of patches being proposed in tandem
to four different projects:
- nbd: Document a new NBD_CMD_FLAG_FAST_ZERO command flag
- qemu: Implement the flag for both clients and server
- libnbd: Implement the flag for clients
- nbdkit: Implement the flag for servers, including the nbd passthrough
client
If you want to test the patches together, I've pushed a
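On the libnbd side, a client would use the flag roughly as follows; this assumes the nbd_can_fast_zero() and LIBNBD_CMD_FLAG_FAST_ZERO API as it was eventually merged.

/* Sketch of a libnbd client requesting a fast zero. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <libnbd.h>

int
main (void)
{
  struct nbd_handle *nbd = nbd_create ();
  if (nbd == NULL || nbd_connect_uri (nbd, "nbd://localhost") == -1) {
    fprintf (stderr, "%s\n", nbd_get_error ());
    exit (EXIT_FAILURE);
  }

  uint32_t flags = 0;
  if (nbd_can_fast_zero (nbd) == 1)
    flags = LIBNBD_CMD_FLAG_FAST_ZERO;  /* fail fast rather than fall back
                                           to writing zeroes the slow way */

  /* Zero the first 1M; with the flag set, a server that cannot zero
   * quickly returns an error instead of emulating it with writes. */
  if (nbd_zero (nbd, 1024 * 1024, 0, flags) == -1)
    fprintf (stderr, "zero: %s\n", nbd_get_error ());

  nbd_shutdown (nbd, 0);
  nbd_close (nbd);
  return 0;
}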
2019 May 16
27
[nbdkit PATCH v2 00/24] implement NBD_CMD_CACHE
Since v1:
- rework .can_cache to be tri-state, with default of no advertisement
(ripple effect through other patches)
- add a lot more patches in order to round out filter support
And in the meantime, Rich pushed NBD_CMD_CACHE support into libnbd, so
in theory we now have a way to test cache commands through the entire
stack.
Eric Blake (24):
server: Internal hooks for implementing
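The tri-state .can_cache mentioned above corresponds to nbdkit's NBDKIT_CACHE_NONE / NBDKIT_CACHE_EMULATE / NBDKIT_CACHE_NATIVE values. A minimal plugin-side sketch, assuming the API as it was finally merged and omitting the other required callbacks (.open, .get_size, .pread, ...):

/* Sketch of a plugin advertising and implementing NBD_CMD_CACHE. */
#include <stdint.h>

#define NBDKIT_API_VERSION 2
#include <nbdkit-plugin.h>

static int
myplugin_can_cache (void *handle)
{
  /* NBDKIT_CACHE_NONE    - do not advertise NBD_CMD_CACHE (the default)
   * NBDKIT_CACHE_EMULATE - let nbdkit emulate it using .pread
   * NBDKIT_CACHE_NATIVE  - the plugin implements .cache itself
   */
  return NBDKIT_CACHE_NATIVE;
}

static int
myplugin_cache (void *handle, uint32_t count, uint64_t offset, uint32_t flags)
{
  /* Prefetch [offset, offset+count) into whatever cache the plugin keeps;
   * a no-op is a valid implementation. */
  return 0;
}

static struct nbdkit_plugin plugin = {
  .name      = "myplugin",
  .can_cache = myplugin_can_cache,
  .cache     = myplugin_cache,
};

NBDKIT_REGISTER_PLUGIN (plugin)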