search for: cache_flushes

Displaying 20 results from an estimated 38 matches for "cache_flushes".

2009 Mar 04
5
Oracle database on zfs
Hi, I am wondering if there is a guideline on how to configure ZFS on a server with an Oracle database? We are experiencing some slowness on writes to the ZFS filesystem. It takes about 530ms to write 2k of data. We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5 EMC EMX. This is a small database with about 18GB of storage allocated. Are there tunable parameters that we can apply to
2018 Dec 01
0
[PATCH nbdkit] common: Move shared bitmap code to a common library.
The cow and cache filters both use a bitmap that maps virtual disk blocks to a status value. The bitmap implementations are very similar because one was derived from the other when the filters were implemented. The main difference is that the cow filter uses a simple bitmap (one bit per block), whereas the cache filter uses two bits per block. This commit abstracts the bitmap
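A rough sketch of the abstraction being described, not nbdkit's actual common/bitmap API (the struct and function names below are invented): a single bitmap type parameterized by a bits-per-block count, so the same code can back the cow filter (one bit per block) and the cache filter (two bits per block). It assumes the bits-per-block value divides a byte evenly.

    /* Minimal bitmap storing a small, fixed number of status bits per block.
     * Illustrative only; assumes bpb is 1, 2 or 4 so a block never spans bytes. */
    #include <stdint.h>
    #include <stdlib.h>

    struct bitmap {
        uint8_t *bits;        /* backing store */
        size_t nr_blocks;     /* number of virtual disk blocks tracked */
        unsigned bpb;         /* bits per block: 1 (cow) or 2 (cache) */
    };

    static int
    bitmap_init (struct bitmap *bm, size_t nr_blocks, unsigned bpb)
    {
        bm->bits = calloc ((nr_blocks * bpb + 7) / 8, 1);
        if (bm->bits == NULL) return -1;
        bm->nr_blocks = nr_blocks;
        bm->bpb = bpb;
        return 0;
    }

    /* Return the status value (0 .. (1<<bpb)-1) stored for one block. */
    static unsigned
    bitmap_get (const struct bitmap *bm, size_t blk)
    {
        size_t bit = blk * bm->bpb;
        return (bm->bits[bit / 8] >> (bit % 8)) & ((1u << bm->bpb) - 1);
    }

    /* Clear the block's existing bits before setting the new value. */
    static void
    bitmap_set (struct bitmap *bm, size_t blk, unsigned value)
    {
        size_t bit = blk * bm->bpb;
        unsigned mask = ((1u << bm->bpb) - 1) << (bit % 8);
        bm->bits[bit / 8] = (bm->bits[bit / 8] & ~mask) | ((value << (bit % 8)) & mask);
    }

    static void
    bitmap_free (struct bitmap *bm)
    {
        free (bm->bits);
    }

    int
    main (void)
    {
        struct bitmap bm;
        if (bitmap_init (&bm, 100, 2) == -1) return 1;
        bitmap_set (&bm, 42, 3);           /* e.g. block 42 -> "dirty" state */
        unsigned v = bitmap_get (&bm, 42); /* v == 3 */
        bitmap_free (&bm);
        return v == 3 ? 0 : 1;
    }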
2018 Dec 02
0
[PATCH nbdkit v2] common: Move shared bitmap code to a common library.
The cow and cache filters both use a bitmap that maps virtual disk blocks to a status value. The bitmap implementations are very similar because one was derived from the other when the filters were implemented. The main difference is that the cow filter uses a simple bitmap (one bit per block), whereas the cache filter uses two bits per block. This commit abstracts the bitmap
2018 Dec 03
0
[PATCH nbdkit v3] common: Move shared bitmap code to a common library.
The cow and cache filters both use a bitmap that maps virtual disk blocks to a status value. The bitmap implementations are very similar because one was derived from the other when the filters were implemented. The main difference is that the cow filter uses a simple bitmap (one bit per block), whereas the cache filter uses two bits per block. This commit abstracts the bitmap
2018 Dec 01
2
[PATCH nbdkit] common: Move shared bitmap code to a common library.
I have some patches I'm working on to fix the cache filter. However, this is a prelude: it should be pure refactoring. All tests still pass. Rich.
2018 Dec 03
3
[PATCH nbdkit v3] common: Move shared bitmap code to a common library.
v2: https://www.redhat.com/archives/libguestfs/2018-December/msg00039.html
v2 -> v3:
- Fix all the issues raised in Eric's review.
- Precompute some numbers to make the calculations easier.
- Calculations now use bitshifts and masks in preference to division and modulo.
- Clear existing bits before setting (which fixes a bug in the cache filter).
Rich.
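The bitshift-and-mask point in that changelog only works because the block size is a power of two; a tiny self-contained illustration (the constants are made up):

    /* For a power-of-two block size, division and modulo reduce to a
     * precomputed shift and mask. */
    #include <assert.h>
    #include <stdint.h>

    #define BLKSIZE       4096            /* must be a power of two */
    #define BLKSIZE_SHIFT 12              /* log2 (BLKSIZE) */
    #define BLKSIZE_MASK  (BLKSIZE - 1)

    int
    main (void)
    {
        uint64_t offset = 123456789;
        uint64_t blknum  = offset >> BLKSIZE_SHIFT;  /* same as offset / BLKSIZE */
        uint64_t blkoffs = offset & BLKSIZE_MASK;    /* same as offset % BLKSIZE */
        assert (blknum  == offset / BLKSIZE);
        assert (blkoffs == offset % BLKSIZE);
        return 0;
    }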
2018 Dec 02
2
[PATCH nbdkit v2] common: Move shared bitmap code to a common library.
This is exactly the same as v1: https://www.redhat.com/archives/libguestfs/2018-December/msg00004.html except that it now frees the bitmap on unload (which the old code did not - there was always a memory leak). Rich.
2007 Jul 29
0
[ANNOUNCE] Release conntrack-tools 0.9.5
Hi! The netfilter project proudly presents another development release of the conntrack-tools. The conntrack-tools are: - The userspace daemon conntrackd, which covers the specific aspects of stateful Linux firewalls needed to enable high availability solutions. It can also be used as a statistics collector for the firewall. The daemon is highly configurable and easily extensible. - The
2008 Jul 07
1
ZFS and Caching - write() syscall with O_SYNC
IHAC using ZFS in production, and he's opening up some files with the O_SYNC flag. This affects subsequent write()'s by providing synchronized I/O file integrity completion. That is, each write(2) will wait for both the file data and file status to be physically updated. Because of this, he's seeing some delays on the file write()'s. This is verified with
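A minimal sketch of what the customer is doing, written against plain POSIX (the filename is made up): with O_SYNC on the descriptor, each write(2) only returns once the data, and the file status needed to retrieve it, are on stable storage, which on ZFS means a synchronous ZIL commit and usually a cache flush to the array.

    /* Open with O_SYNC so every write(2) waits for data and file status
     * to be physically updated before returning. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int
    main (void)
    {
        char buf[2048];
        memset (buf, 'x', sizeof buf);

        int fd = open ("datafile", O_WRONLY | O_CREAT | O_SYNC, 0644);
        if (fd == -1) { perror ("open"); exit (EXIT_FAILURE); }

        /* Each of these writes is a synchronous I/O; the delay the poster
         * sees accumulates here rather than at fsync time. */
        if (write (fd, buf, sizeof buf) != (ssize_t) sizeof buf) {
            perror ("write");
            exit (EXIT_FAILURE);
        }

        close (fd);
        return 0;
    }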
2008 Dec 02
1
zfs_nocacheflush, nvram, and root pools
...ould i encounter problems here? (the system is an NFS server, which means lots of synchronous writes (and therefore ZFS cache flushes), so i *really* want the performance benefit from using the nvram write cache.) - river. [1] http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
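For reference, the tunable that Evil Tuning Guide section describes is set from /etc/system on Solaris; something like the line below, which disables the cache-flush commands ZFS issues after synchronous writes, and which the guide only considers safe when the write cache is nonvolatile (battery-backed NVRAM), as in the setup above:

    set zfs:zfs_nocacheflush = 1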
2007 Oct 24
1
memory issue
Hello, I received the following question from a company I am working with: We are having issues with our early experiments with ZFS with volumes mounted from a 6130. Here is what we have and what we are seeing: T2000 (geronimo) on the fibre with a 6130. 6130 configured with UFS volumes mapped and mounted on several other hosts. It's the only host using a ZFS volume (only
2018 Dec 28
0
[PATCH nbdkit 5/9] cache: Allow this filter to serve requests in parallel.
Make the implicit lock explicit, and hold it around blk_* operations. This allows us to relax the thread model for the filter to NBDKIT_THREAD_MODEL_PARALLEL.
---
 filters/cache/blk.h   |  7 ++++++
 filters/cache/cache.c | 57 +++++++++++++++++++++++++++++++------------
 2 files changed, 49 insertions(+), 15 deletions(-)

diff --git a/filters/cache/blk.h b/filters/cache/blk.h
index 24bf6a1..ab9134e
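The shape of that change, sketched as a standalone program rather than real nbdkit filter code (blk_read here is a zero-filling stub and cache_pread_block is an invented name): one explicit mutex held around each block-layer call keeps the filter's shared state consistent even when requests arrive on many threads at once, which is what permits the parallel thread model.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for the cache filter's block-layer read. */
    static int
    blk_read (uint64_t blknum, void *buf, size_t len)
    {
        (void) blknum;
        memset (buf, 0, len);
        return 0;
    }

    /* The lock, made explicit, is held around every blk_* operation. */
    static int
    cache_pread_block (uint64_t blknum, void *buf, size_t len)
    {
        pthread_mutex_lock (&lock);
        int r = blk_read (blknum, buf, len);
        pthread_mutex_unlock (&lock);
        return r;
    }

    int
    main (void)
    {
        char buf[512];
        printf ("read returned %d\n", cache_pread_block (0, buf, sizeof buf));
        return 0;
    }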
2018 Feb 01
0
[nbdkit PATCH v2 1/3] backend: Rework internal/filter error return semantics
Previously, we let a plugin set an error in either thread-local storage (nbdkit_set_error()) or errno, then connections.c would decode which error to use. But with filters in the mix, it is very difficult for a filter to know what error was set by the plugin (particularly since nbdkit_set_error() has no public counterpart for reading the thread-local storage). What's more, if a filter does
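A self-contained illustration of the decoding problem the commit describes; set_error, get_error and failing_operation are invented stand-ins for nbdkit_set_error(), the connections.c logic, and a plugin callback respectively:

    /* The callee may record an error explicitly in thread-local storage or
     * just leave it in errno; the caller must decide which one to trust. */
    #include <errno.h>
    #include <stdio.h>

    static __thread int last_error;      /* like nbdkit_set_error()'s storage */

    static void
    set_error (int err)
    {
        last_error = err;
    }

    static int
    failing_operation (void)
    {
        set_error (ENOSPC);              /* the error the callee really means */
        errno = EIO;                     /* errno may be clobbered by later libc calls */
        return -1;
    }

    static int
    get_error (void)
    {
        /* Prefer the explicitly recorded error, fall back to errno. */
        return last_error != 0 ? last_error : errno;
    }

    int
    main (void)
    {
        if (failing_operation () == -1)
            printf ("error = %d (ENOSPC = %d)\n", get_error (), ENOSPC);
        return 0;
    }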
2018 Dec 28
0
[PATCH nbdkit 2/9] cache: Add cache-on-read mode.
The same as qemu's copyonread flag, this caches read requests.
---
 filters/cache/nbdkit-cache-filter.pod | 11 +++++
 filters/cache/cache.c                 | 37 +++++++++++++--
 tests/Makefile.am                     |  4 +-
 tests/test-cache-on-read.sh           | 66 +++++++++++++++++++++++++++
 4 files changed, 114 insertions(+), 4 deletions(-)

diff --git
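A toy, self-contained sketch of cache-on-read (none of these names are nbdkit's): a miss is served from the underlying source and then copied into the cache so the next read of the same block is a hit.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLKSIZE   4096
    #define NR_BLOCKS 16

    static char source[NR_BLOCKS][BLKSIZE];   /* stands in for the underlying plugin */
    static char cache [NR_BLOCKS][BLKSIZE];
    static bool cached[NR_BLOCKS];

    static void
    cached_read (uint64_t blk, void *buf)
    {
        if (cached[blk]) {                     /* hit: serve from the cache */
            memcpy (buf, cache[blk], BLKSIZE);
            return;
        }
        memcpy (buf, source[blk], BLKSIZE);    /* miss: read from the source ... */
        memcpy (cache[blk], buf, BLKSIZE);     /* ... and populate the cache */
        cached[blk] = true;
    }

    int
    main (void)
    {
        char buf[BLKSIZE];
        memset (source[3], 'a', BLKSIZE);
        cached_read (3, buf);                  /* miss, fills the cache */
        cached_read (3, buf);                  /* hit */
        printf ("block 3 cached: %d\n", cached[3]);
        return 0;
    }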
2018 Dec 28
12
[PATCH nbdkit 0/9] cache: Implement cache-max-size and method of reclaiming space from the cache.
This patch series enhances the cache filter in a few ways, primarily adding a "cache-on-read" feature (similar to qemu's copyonread), and adding the ability to limit the cache size, which in turn requires a method to reclaim space from the cache. As the cache is stored as a sparse temporary file, reclaiming cache blocks simply means punching holes in the temporary file.
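The hole-punching mechanism itself, as a standalone Linux-only sketch (the block size and file layout are made up): FALLOC_FL_PUNCH_HOLE with FALLOC_FL_KEEP_SIZE returns a block's space to the filesystem without changing the length of the sparse temporary file.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLKSIZE 4096

    /* "Reclaim" one cache block by turning it back into a hole. */
    static int
    reclaim_block (int fd, uint64_t blknum)
    {
        return fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          (off_t) (blknum * BLKSIZE), BLKSIZE);
    }

    int
    main (void)
    {
        char template[] = "/tmp/cacheXXXXXX";
        int fd = mkstemp (template);
        if (fd == -1) { perror ("mkstemp"); exit (EXIT_FAILURE); }
        unlink (template);                     /* anonymous temporary file */

        if (ftruncate (fd, 16 * BLKSIZE) == -1) {
            perror ("ftruncate");
            exit (EXIT_FAILURE);
        }

        if (reclaim_block (fd, 3) == -1)       /* needs a filesystem that supports it */
            perror ("fallocate");

        close (fd);
        return 0;
    }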
2018 Jan 22
1
[PATCH nbdkit] filters: Add caching filter.
This adds a cache filter, which works like the COW filter in reverse. For realistic use it needs a bit more work, especially to add limits on the size of the cache, a more sensible cache replacement policy, and perhaps some kind of background worker to write dirty blocks out. Rich.
2019 Apr 24
0
[nbdkit PATCH 4/4] filters: Check for mutex failures
Commit 975dab14 argued that for simple lock/unlock sequences, it was easier to avoid the cleanup.h macros. But since that time, we added additional sanity checking to the macros, at which point the boilerplate of open-coding that sanity checking outweighs the cost of just using the macros in more places.

Signed-off-by: Eric Blake <eblake@redhat.com>
---
 filters/cache/cache.c | 23
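Roughly the kind of macro being referred to, sketched with GCC's cleanup attribute rather than copied from nbdkit's cleanup.h (the macro name and the abort-on-failure policy are illustrative): the lock is released on every exit path, and both the lock and unlock return values are checked.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void
    cleanup_unlock (pthread_mutex_t **ptr)
    {
        if (pthread_mutex_unlock (*ptr) != 0)
            abort ();                          /* an unlock failure is a program bug */
    }

    /* Acquire the lock now, release it automatically when the scope is left. */
    #define ACQUIRE_LOCK_FOR_SCOPE(mutex)                                   \
        __attribute__ ((cleanup (cleanup_unlock)))                          \
        pthread_mutex_t *_lock = (mutex);                                   \
        do { if (pthread_mutex_lock (_lock) != 0) abort (); } while (0)

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int counter;

    static int
    increment (void)
    {
        ACQUIRE_LOCK_FOR_SCOPE (&lock);
        return ++counter;                      /* unlock runs even on this return */
    }

    int
    main (void)
    {
        printf ("%d\n", increment ());
        return 0;
    }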
2019 Jan 04
0
[PATCH nbdkit v5 3/3] cache: Implement cache-max-size and cache space reclaim.
The original plan was to have a background thread doing the reclaim. However that cannot work given the design of filters, because a background thread cannot access the next_ops struct which is only available during requests. Therefore we spread the work over the request threads. Each blk_* function checks whether there is work to do, and if there is will reclaim up to two blocks from the cache
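A toy sketch of that amortized reclaim, with invented names and a trivial clock-style victim choice: every block operation first reclaims up to two blocks if the cache is over its limit, then does its own work, so no background thread (and hence no next_ops access outside a request) is needed.

    #include <stdint.h>
    #include <stdio.h>

    #define NR_BLOCKS     16
    #define CACHE_MAX      8      /* stand-in for cache-max-size, in blocks */
    #define RECLAIM_BATCH  2      /* at most two blocks reclaimed per request */

    static int allocated[NR_BLOCKS];
    static unsigned nr_allocated;
    static unsigned clock_hand;

    static void
    reclaim_one (void)
    {
        while (!allocated[clock_hand])
            clock_hand = (clock_hand + 1) % NR_BLOCKS;
        allocated[clock_hand] = 0;    /* the real filter punches a hole here */
        nr_allocated--;
    }

    /* Called at the start of every blk_* operation. */
    static void
    maybe_reclaim (void)
    {
        for (int i = 0; i < RECLAIM_BATCH && nr_allocated >= CACHE_MAX; i++)
            reclaim_one ();
    }

    static void
    blk_cache (uint64_t blknum)
    {
        maybe_reclaim ();
        if (!allocated[blknum]) {
            allocated[blknum] = 1;
            nr_allocated++;
        }
    }

    int
    main (void)
    {
        for (uint64_t b = 0; b < NR_BLOCKS; b++)
            blk_cache (b);
        printf ("blocks in cache: %u (max %u)\n", nr_allocated, CACHE_MAX);
        return 0;
    }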
2018 Jan 28
3
[nbdkit PATCH 0/2] RFC: tweak error handling, add log filter
Here's what I'm currently playing with; I'm not ready to commit anything until I rebase my FUA work on top of this, as I only want to break filter ABI once between releases.

Eric Blake (2):
  backend: Rework internal/filter error return semantics
  filters: Add log filter

 TODO                   |  2 -
 docs/nbdkit-filter.pod | 84 +++++++--
 docs/nbdkit.pod
2019 Jan 03
2
Re: [PATCH nbdkit v2 4/4] cache: Implement cache-max-size and method of reclaiming space from the cache.
On 1/1/19 8:33 AM, Richard W.M. Jones wrote: > The original plan was to have a background thread doing the reclaim. > However that cannot work given the design of filters, because a > background thread cannot access the next_ops struct which is only > available during requests. > > Therefore we spread the work over the request threads. Each blk_* > function checks whether