search for: writethrough

Displaying 20 results from an estimated 131 matches for "writethrough".

2020 Aug 07
2
[PATCH nbdkit] plugins: file: More standard cache mode names
...of the page cache. When using shared storage, we may get stale data from the page cache. When writing, we flush after every write, which is inefficient and unneeded. Rename the cache modes to: - writeback - write completes when the system call returns and the data has been copied to the page cache. - writethrough - write completes when the data reaches storage. Read marks pages as not needed to minimize use of the page cache. These terms are better aligned with other software like qemu or LIO and with common caching concepts. [1] https://www.linuxjournal.com/article/7105 [2] https://www.qemu.org/docs/master/syst...
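The writeback/writethrough distinction described in this patch can be sketched in a few lines of Python (a hypothetical illustration of the semantics, not nbdkit code): a writeback-style write returns once the data is in the page cache, while a writethrough-style write flushes after every write, so it completes only when the data has reached storage.

```python
import os
import tempfile

def write_writeback(fd, data):
    # Completes as soon as the data is copied to the page cache;
    # durability is deferred until a later flush.
    os.write(fd, data)

def write_writethrough(fd, data):
    # Completes only when the data has reached storage:
    # flush after every write, as the patch describes.
    os.write(fd, data)
    os.fsync(fd)

path = os.path.join(tempfile.mkdtemp(), "disk.img")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
write_writeback(fd, b"cached ")
write_writethrough(fd, b"durable")
os.close(fd)

with open(path, "rb") as f:
    print(f.read())  # b'cached durable'
```

The trade-off is exactly the one debated in the thread: the fsync() per write buys durability at a large cost in write latency.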
2020 Aug 08
1
Re: [PATCH nbdkit] plugins: file: More standard cache mode names
..., whether that is for read or write. It's true > that it does not completely avoid the page cache (so I agree it's not > precisely like O_DIRECT), but it tries to minimize the time that pages > spend there. Yes, for this we flush after every write, and the common name for this is writethrough. > > Rename the cache modes to: > > - writeback - write complete when the system call returned, and the data > > was copied to the page cache. > > - writethrough - write completes when the data reach storage. read marks > > pages as not needed to minimize use of t...
2020 Aug 08
0
Re: [PATCH nbdkit] plugins: file: More standard cache mode names
...not completely avoid the page cache (so I agree it's not precisely like O_DIRECT), but it tries to minimize the time that pages spend there. > Rename the cache modes to: > - writeback - write complete when the system call returned, and the data > was copied to the page cache. > - writethrough - write completes when the data reach storage. read marks > pages as not needed to minimize use of the page cache. I'm not convinced that writethrough is the same as cache=none. In qemu it only applies to writes, and says nothing about reads AFAIK. Where do you get the information that w...
2013 Jun 26
3
introduce a cache options for PV disks
...Yes +Supported values: "w" (read/write), "r" (read-only) + +* cache +Description: file or device cache mode +Mandatory: No +Supported values: "none" (no cache), "writeback" (writeback + caching), "writethrough" (writethrough caching) +* device-type +Description: type of device the backend corresponds to +Mandatory: No +Supported values: "cdrom", "disk" + + + Notes on Linux as a guest ------------------------- diff --git a/docs/misc/xl-disk-configura...
2012 Jul 04
1
[PATCH] virtio-blk: allow toggling host cache between writeback and writethrough
...+ > +static const char *virtblk_cache_types[] = { > + "write through", "write back" > +}; > + I wonder whether something that lacks a space would have been better, especially for show: shells might get confused and split a string at a space. How about we change it to writethrough, writeback before it's too late? It's part of a userspace API after all, and you seem to prefer writeback in one word in your own code, so let's not inflict pain on others :) Also, it would be nice to make it discoverable what the legal values are. Another attribute valid_cache_types with a...
2011 Jan 22
2
invalid argument error when using cache parameter
Hi, I tried to add a 'cache' attribute to the 'driver' element in my domain XML for qemu/kvm. I tried adding cache="none" as well as "writethrough" and "writeback", but every time I get these errors: libvirtError: internal error process exited while connecting to monitor: char device redirected to /dev/pts/1 qemu: could not open disk image /vms/test.img: Invalid argument The full driver line looks like this: <driver nam...
2010 Mar 23
1
qemu disk cache mode
Hi all, I can't find any good discussion of this subject and would like some insights and advice on the cache side in Xen. I discovered that a dom0 power outage can lead to severe filesystem corruption of the domUs. The dom0 is a dual-disk Dell server with a PERC controller in writethrough cache mode, the disk cache is disabled, the scheduler in the dom0/domU is NOOP, and the dom0 is holding an LVM in which LVs are created to be used as physical disks (phy:) in the domU. Using xm destroy to shut down the domU is OK and the filesystem doesn't crash. Pulling the power plug from the dom
2004 Feb 04
0
Odd result of increasing journal size?
...ted the spikes we saw at regular intervals with a large amount of simultaneous reading and writing. This weekend we increased journal size from 32MB to 256MB on a group of machines. There was one with the following configuration: 2 x P3/667 256MB ServeRAID 4L (16MB cache, writethrough) 5 x 18GB 10k Ultra160 (RAID5, 8KB stripe) Red Hat 7.2 w/ kernel 2.4.20-18.7 The rest are: 2 x Xeon/2.4 1024MB ServeRAID 6i (128MB cache, writethrough) 5 x 36GB 15k Ultra320 (RAID5EE, 8KB stripe) Red Hat 7.2 w/ kernel 2.4.20-24.7 (addl path:...
2012 Feb 05
4
qcow2 performance
Greets, I have to research performance issues of a W2003 VM within KVM. Right now it's a qcow2 image file with default settings within libvirt (configured by vmm ...) My question: what caching mode should I use? writeback/writethrough/etc. What should I use for data integrity while not getting ultra-slow performance? Found https://www.linuxfoundation.jp/jp_uploads/JLS2009/jls09_hellwig.pdf Is there any other list/doc on what to use and why? Thanks, Stefan
2020 Mar 27
2
Create VM w/ cache=none on tmpfs
...isk0/disk.qcow2,format=qcow2,if=none,id=drive-ua-disk0,cache=none: Could not open backing file: Could not open '/var/run/kubevirt-private/vmi-disks/disk0/disk.img': Invalid argument')" ``` But it actually proceeds and is able to start the VM - though it seems it coerces the cache value to writeThrough. Is this the expected behavior? I.e., cache=none can't be used when the disk images are on a tmpfs file system? I know it was the case before, not sure about now (libvirt-5.6.0-7) ... [0] - https://bugs.launchpad.net/nova/+bug/959637
2017 Jan 09
1
changing to cache='none' on the fly?
As is already well documented, we find that we need cache='none' to support migration; otherwise there is the chance of a hang and/or a failure to pivot. However, we prefer the default of cache=writethrough when operating in production. Our practice is to 'shutdown' the VM completely, make the change with virsh edit, then restart. Then we have to repeat the process to revert once we migrate. Is it possible to change that setting on the fly and avoid the shutdown/start process? No...
2017 Nov 06
0
Has libvirt guest pagecache level ?
Greetings. Does libvirt have a dedicated page cache area for the guest? If not, what is the difference between cache='none' and cache='directsync'? >The optional cache attribute controls the cache mechanism, possible >values are "default", "none", "writethrough", "writeback", "directsync" >(like "writethrough", but it bypasses the host page cache) and >"unsafe" (host may cache all disk io, and sync requests from guest >are ignored). As I understand 'unsafe' - always remove flags O_SYNC|O_DIRE...
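The directsync description quoted above fits the commonly cited mapping of libvirt/qemu disk cache modes onto host open(2) flags. A minimal sketch in Python, assuming the usual qemu behaviour (the exact mapping may vary by version; consult the qemu documentation for the authoritative list):

```python
import os

# Linux open(2) flag values; os.O_DIRECT is absent on some platforms,
# so fall back to the Linux constant purely for illustration.
O_DIRECT = getattr(os, "O_DIRECT", 0o40000)
O_SYNC = getattr(os, "O_SYNC", 0o4010000)

# Sketch of how each cache mode maps onto extra host open(2) flags,
# assuming the commonly described qemu behaviour (not authoritative).
CACHE_MODE_FLAGS = {
    "writeback":    0,                  # host page cache used, no per-write sync
    "none":         O_DIRECT,           # bypass the host page cache
    "writethrough": O_SYNC,             # every write goes through to storage
    "directsync":   O_DIRECT | O_SYNC,  # like writethrough, but also bypasses the cache
    "unsafe":       0,                  # like writeback, but guest flushes are ignored
}

def host_open_flags(cache_mode: str) -> int:
    """Extra open(2) flags implied by a disk cache mode."""
    return CACHE_MODE_FLAGS[cache_mode]

print(host_open_flags("directsync") == (O_DIRECT | O_SYNC))  # True
```

Under this reading, directsync is literally writethrough (O_SYNC) plus the cache bypass of none (O_DIRECT), which is what the quoted documentation paraphrases.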
2012 Oct 15
1
Lustre + qemu/kvm issue
...duction environment using Lustre for VM disk images? I ran into the Lustre Direct I/O problem, which requires the application to read and write in 4K I/O chunks. qemu/kvm uses 512 bytes as the I/O size and refuses to start the VM. In this case cache is "none". If I change the cache mode to writethrough (the default) then the VM stored on Lustre can be started, but it has very poor write performance (100 times worse). Any hints? Best regards, Danny
2013 Mar 14
4
[PATCH] virtio-spec: add field for scsi command size
...index a8ce3f9..fea97ed 100644 --- a/virtio-spec.lyx +++ b/virtio-spec.lyx @@ -5826,6 +5826,16 @@ VIRTIO_BLK_F_TOPOLOGY (10) Device exports information on optimal I/O alignment. \change_inserted 1531152142 1341302349 VIRTIO_BLK_F_CONFIG_WCE (11) Device can toggle its cache between writeback and writethrough modes. +\change_inserted 1986246365 1363257418 + +\end_layout + +\begin_layout Description + +\change_inserted 1986246365 1363258629 +VIRTIO_BLK_F_CMD_SIZE (12) cmd_size field is valid. +\change_inserted 1531152142 1341302349 + \end_layout \end_deeper @@ -5994,6 +6004,30 @@ struct virtio_blk_co...
2013 Nov 13
2
Lots of threads and increased IO load
Hello guys, We have a lot of small machines on top of two servers running QEMU/KVM virtualisation with libvirt. In this case, most of the VMs are not doing much work; some of them barely touch any "hardware" resources. I have two problems, and they might be connected. When I do a bigger IO job on one of the VMs (like downloading files to local disk) all other VMs' IO load
2017 Jul 05
3
virt-v2v import from KVM without storage-pool ?
...;virStorageVolLookupByPath() failed', conn=self) libvirtError: Storage volume not found: no storage vol with matching path the disks in the origin VM are defined as <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writethrough'/> <source file='/dev/kvm108/kvm108_img'/> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/some/path/CentOS-7-x86_64-Minimal-1611.iso'/> is this a virt-v2v or...
2012 Aug 10
6
qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images for IDE
Hi list, Recently I was debugging an L2 guest slow-booting issue in a nested virtualization environment (both L0 and L1 hypervisors are Xen). To boot an L2 Linux guest (RHEL6u2), it needs to wait more than 3 minutes after grub is loaded. I did some profiling and saw the guest is doing disk operations via the int13 BIOS procedure. Even without considering the nested case, I saw there is a bug reporting normal VM
2013 Nov 13
1
Re: Lots of threads and increased IO load
...not cause any problems, the host is dealing with a lot of them and has no problem (CPU usage and load are incredibly low in practice). Thank you for the clarification; I see why there are more threads sometimes. > What domain XML are you using? Yes, there are different disk cache > policies (writethrough vs. none) which have definite performance vs. > risk tradeoffs according to the amount of IO latency you want the guest > to see; but again, the qemu list may be a better help in determining > which policy is best for your needs. Once you know the policy you want, > then we can help yo...