similar to: invalid argument error when using cache parameter

Displaying 20 results from an estimated 7000 matches similar to: "invalid argument error when using cache parameter"

2011 Jul 26
2
python-libvirt for 0.9.3 leaking file descriptors
I've reported this issue before, so I guess this is a regression. It looks like the Python bindings for 0.9.3 are leaking file descriptors: root at cloud1:~# python Python 2.6.6 (r266:84292, Dec 26 2010, 22:31:48) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import libvirt >>>
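
A minimal sketch, assuming python-libvirt and a reachable local libvirtd, of one way such a leak could be confirmed (URI and loop count are illustrative only): count open file descriptors before and after repeatedly opening and closing a connection.

import os
import libvirt

def open_fd_count():
    # On Linux, every entry under /proc/self/fd is an open descriptor.
    return len(os.listdir('/proc/self/fd'))

before = open_fd_count()
for _ in range(100):
    conn = libvirt.openReadOnly(None)   # read-only connection to the local hypervisor
    conn.close()                        # descriptors should be released here
after = open_fd_count()

print("fds before: %d, after: %d" % (before, after))
# A count that grows with the number of iterations points to leaked descriptors.
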
2020 Aug 07
2
[PATCH nbdkit] plugins: file: More standard cache mode names
The new cache=none mode is misleading since it does not avoid usage of the page cache. When using shared storage, we may get stale data from the page cache. When writing, we flush after every write, which is inefficient and unneeded. Rename the cache modes to: - writeback - write completes when the system call returns, and the data has been copied to the page cache. - writethrough - write completes
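
A minimal sketch of the two write semantics at the plain-file level (illustrative Python, not nbdkit plugin code; file names are examples): writethrough-style writes force data to stable storage after every write, while writeback-style writes stay in the page cache until an explicit flush.

import os

def write_writethrough(path, chunks):
    # Each write completes only after the data reaches stable storage.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        for chunk in chunks:
            os.write(fd, chunk)
            os.fsync(fd)
    finally:
        os.close(fd)

def write_writeback(path, chunks):
    # Each write completes once the data is in the page cache;
    # durability comes from a single explicit flush at the end.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        for chunk in chunks:
            os.write(fd, chunk)
        os.fsync(fd)
    finally:
        os.close(fd)

write_writethrough('/tmp/wt.img', [b'x' * 4096] * 4)
write_writeback('/tmp/wb.img', [b'x' * 4096] * 4)
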
2013 Jun 26
3
introduce a cache options for PV disks
Document a per-disk cache option in the xl config file to allow users to select the cache mode that the backend should use to open the disk file or device. Document backend options that are part of the vbd xenstore interface. The existing "mode" and "device-type" as well as the new "cache". Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2012 Jul 04
1
[PATCH] virtio-blk: allow toggling host cache between writeback and writethrough
On Tue, Jul 03, 2012 at 03:19:37PM +0200, Paolo Bonzini wrote: > This patch adds support for the new VIRTIO_BLK_F_CONFIG_WCE feature, > which exposes the cache mode in the configuration space and lets the > driver modify it. The cache mode is exposed via sysfs. > > Even if the host does not support the new feature, the cache mode is > visible (thanks to the existing
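
Inside a guest that negotiates this feature, the cache mode appears as a writable sysfs attribute. A rough sketch of reading and toggling it from Python; the sysfs path, device name, and value strings are assumptions based on the usual virtio-blk layout and may differ by kernel version.

# Assumed attribute location for the first virtio disk.
CACHE_ATTR = '/sys/block/vda/cache_type'

with open(CACHE_ATTR) as f:
    print('current cache mode:', f.read().strip())   # e.g. "write back" or "write through"

# Changing the mode typically requires root and host support for the feature.
with open(CACHE_ATTR, 'w') as f:
    f.write('write through\n')
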
2012 Jul 04
1
[PATCH] virtio-blk: allow toggling host cache between writeback and writethrough
On Tue, Jul 03, 2012 at 03:19:37PM +0200, Paolo Bonzini wrote: > This patch adds support for the new VIRTIO_BLK_F_CONFIG_WCE feature, > which exposes the cache mode in the configuration space and lets the > driver modify it. The cache mode is exposed via sysfs. > > Even if the host does not support the new feature, the cache mode is > visible (thanks to the existing
2010 Mar 23
1
qemu disk cache mode
Hi all, I can't find any good talk about this subject and would like some insights and advice on the cache side in Xen. I discovered that a dom0 power outage can lead to severe filesystem corruption of the domUs. The dom0 is a dual-disk Dell server with a PERC controller in writethrough cache mode, the disk cache is disabled, the scheduler in the dom0/domU is NOOP, the dom0 is holding
2017 Jul 05
3
virt-v2v import from KVM without storage-pool ?
Hi, I'm trying to import a VM into oVirt from a KVM host that doesn't use storage pools. This fails with the following message in /var/log/vdsm/vdsm.log: 2017-07-05 09:34:20,513+0200 ERROR (jsonrpc/5) [root] Error getting disk size (v2v:1089) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in _get_disk_info vol =
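
The underlying issue is that a disk outside any storage pool cannot be looked up as a libvirt volume, so the size query fails. A hedged sketch (not the actual vdsm code; names and paths are illustrative) of falling back to qemu-img when the pool lookup fails:

import json
import subprocess
import libvirt

def disk_capacity(conn, path):
    try:
        vol = conn.storageVolLookupByPath(path)
        return vol.info()[1]              # info() returns (type, capacity, allocation)
    except libvirt.libvirtError:
        # The disk does not belong to any storage pool; ask qemu-img instead.
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', path])
        return json.loads(out)['virtual-size']

conn = libvirt.open('qemu:///system')
print(disk_capacity(conn, '/var/lib/libvirt/images/guest.qcow2'))
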
2020 Mar 27
2
Create VM w/ cache=none on tmpfs
Hi, I've seen that in the past, libvirt couldn't start VMs when the disk image was stored on a file system that doesn't support direct I/O and the 'cache=none' configuration was used [0]. On the KubeVirt project, we have some storage tests on a particular provider which does just that - try to create / start a VM whose disk is on tmpfs and whose definition features
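
The failure comes from cache=none implying O_DIRECT, which tmpfs has historically rejected at open time with EINVAL. A small sketch for checking whether a given filesystem accepts O_DIRECT opens (paths are examples; /dev/shm is usually tmpfs):

import errno
import os

def supports_o_direct(path):
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    except OSError as e:
        if e.errno == errno.EINVAL:      # the classic "Invalid argument"
            return False
        raise
    os.close(fd)
    os.unlink(path)
    return True

print('regular filesystem:', supports_o_direct('/var/tmp/direct-test.img'))
print('tmpfs (/dev/shm):  ', supports_o_direct('/dev/shm/direct-test.img'))
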
2020 Aug 08
1
Re: [PATCH nbdkit] plugins: file: More standard cache mode names
On Sun, Aug 9, 2020 at 12:28 AM Richard W.M. Jones <rjones@redhat.com> wrote: > > On Sat, Aug 08, 2020 at 01:24:02AM +0300, Nir Soffer wrote: > > The new cache=none mode is misleading since it does not avoid usage of > > the page cache. When using shared storage, we may get stale data from > > the page cache. When writing, we flush after every write which is > >
2012 Feb 05
4
qcow2 performance
Greets, I have to research performance issues of a W2003 VM within KVM. Right now it's a qcow2 image file w/ default settings within libvirt (configured by vmm ...) My question: what caching to use? writeback/writethrough/etc ... what to use for data integrity while not getting ultra-slow performance? Found https://www.linuxfoundation.jp/jp_uploads/JLS2009/jls09_hellwig.pdf Is there
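
For orientation, a rough summary of the common qemu/libvirt disk cache modes, paraphrased from general qemu documentation rather than from this thread; exact semantics can vary between versions.

# Sketch: cache mode -> (host page cache, guest-visible write cache, note)
CACHE_MODES = {
    'writethrough': ('used',                'off', 'every write flushed to disk; safe but slow'),
    'writeback':    ('used',                'on',  'fast; safe only if the guest sends flushes'),
    'none':         ('bypassed (O_DIRECT)', 'on',  'common choice for integrity plus performance'),
    'directsync':   ('bypassed (O_DIRECT)', 'off', 'O_DIRECT and a flush for every write'),
    'unsafe':       ('used',                'on',  'flushes ignored; only for disposable guests'),
}

for mode, (page_cache, wce, note) in CACHE_MODES.items():
    print('%-12s page cache: %-22s write cache: %-4s %s' % (mode, page_cache, wce, note))
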
2013 Mar 14
4
[PATCH] virtio-spec: add field for scsi command size
Add field for guest to specify command size for virtio-blk. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- virtio-spec.lyx | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 78 insertions(+), 5 deletions(-) diff --git a/virtio-spec.lyx b/virtio-spec.lyx index a8ce3f9..fea97ed 100644 --- a/virtio-spec.lyx +++ b/virtio-spec.lyx @@ -5826,6 +5826,16 @@
2013 Mar 14
4
[PATCH] virtio-spec: add field for scsi command size
Add field for guest to specify command size for virtio-blk. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- virtio-spec.lyx | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 78 insertions(+), 5 deletions(-) diff --git a/virtio-spec.lyx b/virtio-spec.lyx index a8ce3f9..fea97ed 100644 --- a/virtio-spec.lyx +++ b/virtio-spec.lyx @@ -5826,6 +5826,16 @@
2012 Aug 10
6
qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images for IDE
Hi list, Recently I was debugging an L2 guest slow-booting issue in a nested virtualization environment (both the L0 and L1 hypervisors are Xen). To boot an L2 Linux guest (RHEL6u2), it needs to wait more than 3 minutes after grub has loaded. I did some profiling and saw the guest is doing disk operations via the int13 BIOS procedure. Even without considering the nested case, I saw there is a bug reporting normal VM
2011 Jul 29
3
issue with GlusterFS to store KVM guests
I'm having difficulty running KVM virtual machines off of a GlusterFS volume mounted using the GlusterFS client. I am running CentOS 6, 64-bit. I am using virt-install to create my images but am encountering the following error: qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img: Invalid argument (see below for a more lengthy version of the error). I have found an example of
2012 Jun 23
4
Can't run KVM Virtual Machines on a Gluster volume
I just built a 2-node (4 bricks) Distributed-Replicated volume and everything mounts fine. Each node mounts using the GlusterFS client on its hostname (mount -t glusterfs hostname:VOLUME /virtual-machines). When creating a new virtual machine using virt-manager it creates the file on the storage, but when trying to power it on, it doesn't work and gives back an error message. (See below. Yes, the folder has
2017 Jul 07
3
Re: virt-v2v import from KVM without storage-pool ?
I could reproduce the customer's problem. Packages: rhv:4.1.3-0.1.el7 vdsm-4.19.20-1.el7ev.x86_64 virt-v2v-1.36.3-6.el7.x86_64 libguestfs-1.36.3-6.el7.x86_64 Steps: 1. Prepare a guest whose disk is not listed in a storage pool # virsh dumpxml avocado-vt-vm1 .... <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu, or (probably) a combination of them. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM LVM cache writeback
2014 Aug 11
2
Behavior of disk caching with qcow2 disks
Hello, I am running several virtualization servers with QEMU 1.4.x and libvirt 1.0.2 on Ubuntu 12.04 and am working on optimizing the cache= and aio= options for the virtual machines. These VM images are mostly qcow2, and are served both from a local ext4 filesystem (with data=ordered,barrier) and from an NFS mountpoint (with sync). The local filesystem sits on top of an md software RAID of SATA
2012 May 09
2
serial console
Hi, when creating a domain using the libvirt Python API, how would I tell libvirt to use a unique serial port number for domains? In the libvirt.virConnect.defineXML(conn, domainxml) call, when I create the domainxml object I need to ensure that the part below has a unique # for target port <serial type='pty'> <target port='0'/> </serial> <console
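
One way to approach this is to generate the <serial> fragment with an explicit target port before splicing it into the XML handed to defineXML(). A minimal sketch; the template layout and port numbering scheme are assumptions for illustration.

def serial_fragment(port):
    # One <serial> device bound to the given target port.
    return ("    <serial type='pty'>\n"
            "      <target port='%d'/>\n"
            "    </serial>\n" % port)

# Within a single domain, each additional serial device needs its own port:
devices = ''.join(serial_fragment(p) for p in range(2))
print(devices)
# ... insert `devices` into the <devices> section of domainxml, then call
# conn.defineXML(domainxml) as in the question.
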
2013 Nov 13
2
Lots of threads and increased IO load
Hello guys, We have a lot of small computers on top of two servers running QEMU/KVM virtualisation with libvirt. In this case, most of the VMs are not doing much work; some of them barely touch any "hardware" resources. I have two problems, and they might be connected. When I do a bigger IO job on one of the VMs (like downloading files to local disk) all other VMs' IO load