search for: write_cach

Displaying 12 results from an estimated 12 matches for "write_cach".

2009 Feb 11
8
Write caches on X4540
We're using some X4540s with OpenSolaris 2008.11. According to my testing, for our specific workload we get the best performance with the write cache disabled on every disk and with zfs:zfs_nocacheflush=1 set in /etc/system. The only issue is setting the write cache permanently, or at least quickly. Right now, as it is,
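For reference, the /etc/system tunable named in that message is a single line (comment lines in /etc/system begin with an asterisk, and a reboot is needed for the setting to take effect; the per-disk write cache state still has to be changed separately, for example interactively via format -e):

    * Tell ZFS not to send cache-flush commands to the disks
    set zfs:zfs_nocacheflush=1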
2008 Feb 25
2
qemu write caching and DMA IDE writes
I've been doing some merge work between tools/ioemu and qemu upstream. I came across this commit: changeset: 11209:9bb6c1c1890a07885265bbc59f4dbb660312974e date: Sun Aug 20 23:59:34 2006 +0100 files: [...] description: [qemu] hdparm tunable IDE write cache for HVM qemu 0.8.2 has a flush callback to the storage backends, so now it is possible to implement
2007 May 05
3
Issue with adding existing EFI disks to a zpool
I spent all day yesterday moving my data off one of the Windows disks so that I could add it to the pool. Using mount-ntfs, it's a pain due to its slowness. But once I finished, I thought "Cool, let's do it". So I added the disk using the zero slice notation (c0d0s0), as suggested for performance reasons. I checked the pool status and noticed, however, that the pool size
2017 Sep 12
0
[PATCH v2 2/5] lib: qemu: Factor out common code for reading and writing cache files.
...version qemu_version; /* Parsed qemu version number. */ }; -static int test_qemu (guestfs_h *g, struct qemu_data *data); +static int test_qemu_help (guestfs_h *g, struct qemu_data *data); +static int read_cache_qemu_help (guestfs_h *g, struct qemu_data *data, const char *filename); +static int write_cache_qemu_help (guestfs_h *g, const struct qemu_data *data, const char *filename); +static int test_qemu_devices (guestfs_h *g, struct qemu_data *data); +static int read_cache_qemu_devices (guestfs_h *g, struct qemu_data *data, const char *filename); +static int write_cache_qemu_devices (guestfs_h *g,...
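The factoring described in that subject line (one shared cache reader and one shared cache writer, used by each of the qemu probes such as help and devices) can be sketched in plain C roughly as below. This is a simplified illustration with generic names, not the actual libguestfs code:

    #include <stdio.h>
    #include <stdlib.h>

    /* Read an entire cache file into a malloc'd, NUL-terminated buffer.
     * Returns NULL if the file does not exist or cannot be read, which
     * the caller treats as "no cache: run the real (slow) qemu query".
     */
    static char *
    read_cache (const char *filename)
    {
      FILE *fp = fopen (filename, "r");
      long size;
      char *buf;

      if (fp == NULL)
        return NULL;
      if (fseek (fp, 0, SEEK_END) != 0 || (size = ftell (fp)) < 0 ||
          fseek (fp, 0, SEEK_SET) != 0) {
        fclose (fp);
        return NULL;
      }
      buf = malloc (size + 1);
      if (buf == NULL) {
        fclose (fp);
        return NULL;
      }
      if (fread (buf, 1, size, fp) != (size_t) size) {
        free (buf);
        fclose (fp);
        return NULL;
      }
      buf[size] = '\0';
      fclose (fp);
      return buf;               /* caller frees */
    }

    /* Write freshly computed data to the cache file so that the next
     * handle can skip running qemu.  Returns -1 on error. */
    static int
    write_cache (const char *filename, const char *data)
    {
      FILE *fp = fopen (filename, "w");

      if (fp == NULL)
        return -1;
      if (fputs (data, fp) == EOF) {
        fclose (fp);
        return -1;
      }
      return fclose (fp) == EOF ? -1 : 0;
    }

With helpers along these lines, each per-probe function (qemu help, qemu devices) reduces to a thin wrapper that tries the reader first and only falls back to running qemu when no cache file is present.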
2017 Sep 12
0
[PATCH v3 4/6] lib: qemu: Allow parallel qemu binaries to be used with cache conflicts.
...SON tree */ }; +static char *cache_filename (guestfs_h *g, const char *cachedir, const struct stat *, const char *suffix); static int test_qemu_help (guestfs_h *g, struct qemu_data *data); static int read_cache_qemu_help (guestfs_h *g, struct qemu_data *data, const char *filename); static int write_cache_qemu_help (guestfs_h *g, const struct qemu_data *data, const char *filename); @@ -107,12 +108,12 @@ static const struct qemu_fields { }; #define NR_FIELDS (sizeof qemu_fields / sizeof qemu_fields[0]) -/* This is saved in the qemu.stat file, so if we decide to change the +/* This is saved in th...
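The cache_filename prototype added above keys the cache file name on the qemu binary's stat data, so parallel libguestfs handles using different qemu binaries do not clobber each other's caches. A small standalone sketch of the idea follows; the handle argument is dropped and the exact format string is only an assumption based on the v3 cover letter's "(size, mtime)" description:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/stat.h>

    /* Build a per-binary cache file name such as
     *   <cachedir>/qemu-<size>-<mtime>.<suffix>
     * so that two different qemu binaries never share a cache file.
     * Illustrative only; the real libguestfs code may differ. */
    static char *
    cache_filename (const char *cachedir, const struct stat *statbuf,
                    const char *suffix)
    {
      char *filename;

      if (asprintf (&filename, "%s/qemu-%ju-%ju.%s",
                    cachedir,
                    (uintmax_t) statbuf->st_size,
                    (uintmax_t) statbuf->st_mtime,
                    suffix) == -1)
        return NULL;

      return filename;          /* caller frees */
    }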
2008 Dec 02
1
zfs_nocacheflush, nvram, and root pools
Hi, I have a system connected to an external DAS (SCSI) array, using ZFS. The array has an NVRAM write cache, but it honours SCSI cache flush commands by flushing the NVRAM to disk. The array has no way to disable this behaviour. A well-known behaviour of ZFS is that it often issues cache flush commands to storage in order to ensure data
2017 Sep 12
8
[PATCH v3 0/6] launch: direct: Disable qemu locking when opening drives readonly.
v2 -> v3: - I addressed everything that Pino mentioned last time. - It's tricky to get a stable run when multiple copies of qemu are involved, because the same cache files get overwritten by parallel libguestfs. So I changed the names of the cache files to include the qemu binary key (size, mtime), which removes this conflict. This is in new patch 4/6. Rich.
2017 Sep 11
4
[PATCH 0/4] lib: qemu: Add test for mandatory locking.
The patch I posted last week to disable mandatory locking for readonly drives (https://www.redhat.com/archives/libguestfs/2017-September/msg00013.html) was wrong in a couple of respects. Firstly it didn't work, which I didn't detect because my tests were testing the wrong thing. Oops. Secondly it used a simple version number check to detect qemu binaries implementing mandatory locking.
2017 Sep 12
9
[PATCH v2 0/5] launch: direct: Disable qemu locking when opening drives readonly (RHBZ#1417306)
Patches 1-4 are almost the same as when they were previously posted here: https://www.redhat.com/archives/libguestfs/2017-September/msg00039.html Patch 5 actually uses the mandatory locking test to turn off locking in the narrow case where a drive is opened readonly, and then only for the drive being inspected. Passes ordinary tests (‘check-direct’ and ‘check-valgrind-direct’). Rich.
2018 May 23
3
[PATCH] block drivers/block: Use octal not symbolic permissions
...y_entry = { - .attr = {.name = "io_poll_delay", .mode = S_IRUGO | S_IWUSR }, + .attr = {.name = "io_poll_delay", .mode = 0644 }, .show = queue_poll_delay_show, .store = queue_poll_delay_store, }; static struct queue_sysfs_entry queue_wc_entry = { - .attr = {.name = "write_cache", .mode = S_IRUGO | S_IWUSR }, + .attr = {.name = "write_cache", .mode = 0644 }, .show = queue_wc_show, .store = queue_wc_store, }; static struct queue_sysfs_entry queue_fua_entry = { - .attr = {.name = "fua", .mode = S_IRUGO }, + .attr = {.name = "fua",...
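For anyone checking the mapping in that diff, the symbolic macros and the octal constants are bit-for-bit identical; a tiny userspace program can confirm it (S_IRUGO itself is kernel-only, so it is spelled out with the standard per-class macros here):

    #include <assert.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int
    main (void)
    {
      /* S_IRUGO in the kernel is S_IRUSR|S_IRGRP|S_IROTH, i.e. 0444. */
      assert ((S_IRUSR | S_IRGRP | S_IROTH) == 0444);
      /* Adding user write (S_IWUSR == 0200) gives 0644, the mode kept
       * for the writable write_cache and io_poll_delay attributes. */
      assert ((S_IRUSR | S_IRGRP | S_IROTH | S_IWUSR) == 0644);
      printf ("0444 and 0644 match the symbolic forms\n");
      return 0;
    }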
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me! I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time. If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can