Displaying 20 results from an estimated 23 matches for "directsync".
2019 Jul 29
0
Re: Why does librbd disallow VM live migration if the disk cache mode is not none or directsync
On 7/29/19 3:51 AM, Ming-Hung Tsai wrote:
> I'm curious why librbd sets this limitation. The rule first
> appeared in librbd.git commit d57485f73ab. Theoretically, a
> write-through cache is also safe for VM migration, if the cache
> implementation guarantees that cache invalidation and disk write are
> synchronous operations.
>
> For example, I'm using Ceph RBD
2019 Jul 29
3
Why does librbd disallow VM live migration if the disk cache mode is not none or directsync
I'm curious why librbd sets this limitation. The rule first
appeared in librbd.git commit d57485f73ab. Theoretically, a
write-through cache is also safe for VM migration, if the cache
implementation guarantees that cache invalidation and disk write are
synchronous operations.
For example, I'm using Ceph RBD images as VM storage backend. The Ceph
librbd supports synchronous
2012 Oct 17
0
cgroup blkio.weight working, but not for KVM guests
...harserial0 -device
isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc
127.0.0.1:1 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
For fun I tried a few different cache options to try to force a bypass of the
host buffer cache, including writethrough and directsync, but the number of
virtio kernel threads appeared to explode (especially for directsync) and
the throughput dropped quite low: ~50% of "none" for writethrough and ~5%
for directsync.
With cache=none, when I generate write loads inside the VMs, I do see growth
in the host's buffer...
2017 Apr 03
1
(Live) Migration safe vs. unsafe
...e. When a
migration is considered to be unsafe it is rejected unless the --unsafe
option is provided.
As a part of those checks virsh considers the cache settings for the
underlying storage resources. In this context only cache="none" is
considered to be safe.
I wonder why cache="directsync" might be harmful as it bypasses the
host page cache completely.
Regards,
Michael
2017 Nov 06
0
Does libvirt have a guest page cache level?
Greetings
Does libvirt have a dedicated page cache area for the guest?
If not, what is the difference between
cache='none' and cache='directsync'?
>The optional cache attribute controls the cache mechanism, possible values are "default", "none", "writethrough", "writeback", "directsync" (like "writethrough", but it bypasses the host page cache) and "unsafe&...
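The documentation quoted above places these modes on the disk's driver element in the domain XML. As a minimal sketch (the image path, target device, and bus are hypothetical), the only thing that changes between the two modes being asked about is the cache attribute:

```xml
<disk type='file' device='disk'>
  <!-- directsync: like writethrough (a guest write completes only once it is
       on stable storage), but the host page cache is bypassed as well -->
  <driver name='qemu' type='raw' cache='directsync'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

With cache='none' on the same driver line, the host page cache is also bypassed, but writes complete with writeback semantics, so the guest must issue its own flushes for durability.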
2016 Dec 05
2
How to get the best I/O performance for a Windows 2008 and MSSQL guest VM
...dge instead to go through to macvtap, so I'm
not sure what is the best in this case.
Regarding disk, since it's LVM I've chosen cache mode none and IO mode native. Also
here I cannot judge what's the best setup for the workload. I'm undecided whether to use IO mode threads
along with directsync cache mode.
Finally I've set "ionice -c 1 -p <qemu-pid> -n 0" and "renice -n -10 <qemu-pid>" for the interested
VM that I want to get best performance possible.
Even with the above setup, the MSSQL VM has performance similar to the old machine running
ESXi 5.5,...
2019 Jul 30
1
Researching why different cache modes result in 'some' guest filesystem corruption..
...s with LVM.
After chasing down this issue some more and attempting various
things (build the image on Fedora29, build a real XFS filesystem inside a
VM and use the generated qcow2 as a basis instead of virt-format)..
..I've noticed that the SATA disk of each of those guests was using
'directsync' (instead of 'Hypervisor Default'). As soon as I switched to
'None', the XFS issues disappeared and I've now applied several
consecutive kernel updates without issues. Also, 'directsync' or
'writethrough', while providing decent performance, both exhibited...
2017 Feb 17
2
Libvirt behavior when mixing io=native and cache=writeback
...cache=writeback without problems, to a CentOS7 host, live
migration aborts with an error:
[root@source] virsh migrate Totem --live --copy-storage-all --persistent
--verbose --unsafe qemu+ssh://root@172.31.255.11/system
error: unsupported configuration: native I/O needs either no disk cache
or directsync cache mode, QEMU will fallback to aio=threads
This error persists even if the VM config file is changed to use
"io=threads". Of course, the running image continues to have "io=native"
on its parameters list, but it does not really honor the "io=native"
directive, so...
2015 Nov 30
1
Re: libvirtd doesn't attach Sheepdog storage VDI disk correctly
Hi,
I tried two different approaches.
1.) Convert an existing Image with qemu-img
================================================
qemu-img convert -t directsync lubuntu-14.04.3-desktop-i386.iso
sheepdog:lubuntu1404.iso
=================================================
results in
====================================================
root@orion2:/var/lib/libvirt/xml# virsh vol-dumpxml --pool herd
lubuntu1404.iso
<volume type='network'>
<n...
2016 Dec 06
0
Re: How to get the best I/O performance for a Windows 2008 and MSSQL guest VM
...h to macvtap, so I'm
> not sure what is the best in this case.
>
> Regarding disk, since it's LVM I've chosen cache mode none and IO mode native. Also
> here I cannot judge what's the best setup for the workload. I'm undecided whether to use IO mode threads
> along with directsync cache mode.
>
> Finally I've set "ionice -c 1 -p <qemu-pid> -n 0" and "renice -n -10 <qemu-pid>" for the interested
> VM that I want to get best performance possible.
>
> Even with the above setup, the MSSQL VM has performance similar to the old ma...
2020 Jan 28
0
NFS and unsafe migration
...on about NFS datastore and unsafe migration.
When migrating a virtual machine having a virtio disk in writeback cache
between two hosts sharing a single NFS datastore, I get the following error:
"Unsafe migration: Migration may lead to data corruption if disks use
cache != none or cache != directsync"
I understand why libvirt alerts for unsafe migration in cases where no
coherency is enforced by the underlying system; however, is it really the
case for nfs?
From what I know (and from the man page), by default nfs has
close-to-open consistency, which seems quite right for migrating a...
2020 Sep 07
0
Re: Read-only iscsi disk? Or hot-plug?
...> I have an iSCSI disk with a backup. I want to use that backup on another
> machine to test putting back data.
>
> What I use now is this:
> ----
> <disk type='block' device='disk'>
> <driver name='qemu' type='raw' cache='directsync' io='native'/>
> <source dev='/dev/disk/by-id/scsi-36e843b6afdddf65dc4e9d4dc2dab66de'/>
> <target dev='vdh' bus='virtio'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x0c'
&g...
2017 Feb 20
0
Re: Libvirt behavior when mixing io=native and cache=writeback
...e Totem --live --copy-storage-all --persistent
>--verbose --unsafe qemu+ssh://root@172.31.255.11/system
>
How about using --xml to supply the destination XML which would have
io="threads" specified?
>error: unsupported configuration: native I/O needs either no disk cache
>or directsync cache mode, QEMU will fallback to aio=threads
>
>This error persists even if the VM config file is changed to use
>"io=threads". Of course, the running image continues to have "io=native"
>on its parameters list, but it does not really honor the "io=native"...
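The --xml workaround suggested above boils down to dumping the domain's XML, editing one attribute on the disk's driver line, and handing the edited file to virsh migrate. A sketch under the assumption that only the io attribute needs to change (the destination file name is hypothetical):

```xml
<!-- in totem-dest.xml: cache=writeback is only compatible with io='threads' -->
<driver name='qemu' type='raw' cache='writeback' io='threads'/>
```

It would then be supplied as `virsh migrate Totem --live --copy-storage-all --persistent --xml totem-dest.xml qemu+ssh://root@172.31.255.11/system`.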
2015 Nov 24
2
libvirtd doesn't attach Sheepdog storage VDI disk correctly
Hi,
I am trying to use libvirt with sheepdog.
I am using Debian 8 stable with libvirt V1.21.0
I am encountering a Problem which already has been reported.
=================================================================
See here: http://www.spinics.net/lists/virt-tools/msg08363.html
=================================================================
qemu/libvirtd is not setting the path
2013 May 24
0
Re: [Qemu-devel] use O_DIRECT to open disk images for IDE failed under xen-4.1.2 and qemu upstream
...that because Qemu use write-back flag to open disk images by default.
> so I hope to use O_DIRECT to avoid hitting that problem, but I failed under the Xen platform with Qemu upstream.
cache=none does not eliminate the need for flush. If you want to do
that, try cache=writethrough or cache=directsync.
Regarding the EFAULT you are seeing, did you check if the I/O buffer
address is valid? Try breaking in gdb and inspecting /proc/<pid>/maps
or just x <buffer-address> in gdb to see if the buffer is readable.
Stefan
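The O_DIRECT behaviour discussed above can be reproduced without qemu at all; a minimal sketch using dd on a Linux host (the file name is hypothetical, and the target filesystem must support O_DIRECT, which tmpfs, for example, does not):

```shell
# oflag=direct opens the output file with O_DIRECT, so the transfer size must
# be aligned to the device's logical block size; unaligned sizes fail with
# EINVAL, much as a bad buffer address fails with EFAULT.
dd if=/dev/zero of=direct_test.img bs=4096 count=4 oflag=direct 2>/dev/null
stat -c %s direct_test.img
```

If the writes succeed, the reported size is 4 * 4096 bytes; a failing dd here usually points at alignment or filesystem support rather than at the data itself.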
2015 Feb 10
1
host Linux - guest Win7 fast file sharing [was: getting guestfs_rsync_out to work]
Update:
Using mount (read-only) and unmount commands in a script that loops continuously, we are able to see near
instantaneous file modifications (less than 0.5 sec) performed on a Win7 live guest. We have to use the sync.exe
utility (technet.microsoft.com/en-us/sysinternals/bb897438.aspx), otherwise changes are not visible for 10-20 sec.
We tested sync.exe with our libguestfs test program
2016 Dec 06
1
Re: How to get the best I/O performance for a Windows 2008 and MSSQL guest VM
...'m
>> not sure what is the best in this case.
>>
>> Regarding disk, since it's LVM I've chosen cache mode none and IO mode native. Also
>> here I cannot judge what's the best setup for the workload. I'm undecided whether to use IO mode threads
>> along with directsync cache mode.
>>
>> Finally I've set "ionice -c 1 -p <qemu-pid> -n 0" and "renice -n -10 <qemu-pid>" for the interested
>> VM that I want to get best performance possible.
>>
>> Even with the above setup, the MSSQL VM has performance s...
2012 Jul 03
8
[PATCH 0/7 v2] Fix and workaround for qcow2 issues in qemu causing data corruption.
...hard disk to flush its cache.
libguestfs used sync(2) to force changes to disk. We didn't expect
that qemu was caching anything because we used 'cache=none' for all
writable disks, but it turns out that qemu creates a writeback cache
anyway when you do this (you need to use 'cache=directsync' when you
don't want a cache at all).
(2) qemu's qcow2 disk cache code is buggy. If there are I/Os in
flight when qemu shuts down, then qemu segfaults or assert fails.
This can result in unwritten data. Unfortunately libguestfs ignored
the result of waitpid(2) so we didn't see th...
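A sketch of the two drive definitions being contrasted above (the image name is hypothetical): per the explanation, cache=none opens the image O_DIRECT but still presents writeback semantics, so the guest must flush, while cache=directsync adds writethrough semantics on top of O_DIRECT so no cache is presented at all:

```
# qemu command-line fragments, not a complete invocation:
-drive file=scratch.qcow2,format=qcow2,if=virtio,cache=none        # O_DIRECT, writeback semantics
-drive file=scratch.qcow2,format=qcow2,if=virtio,cache=directsync  # O_DIRECT, writethrough semantics
```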
2012 Jun 11
11
KVM on top of BTRFS
What are the recommendations for running KVM images on BTRFS systems using kernel 3.4? I saw older posts on the web complaining about poor performance, but I know a lot of work has gone into btrfs since then. There also seemed to be the nocow option, but I didn't find anything that said it actually helped.
Anybody have ideas?
Thanks,
Matt
2014 Jun 15
2
Re: ERROR: Domain not found: no domain with matching name 'ubuntu'
...d file use 'file' as IDE hard disk 2/3 image
-cdrom file use 'file' as IDE cdrom image (cdrom is ide1 master)
-drive [file=file][,if=type][,bus=n][,unit=m][,media=d][,index=i]
[,cyls=c,heads=h,secs=s[,trans=t]][,snapshot=on|off]
[,cache=writethrough|writeback|none|directsync|unsafe][,format=f]
[,serial=s][,addr=A][,id=name][,aio=threads|native]
[,readonly=on|off][,copy-on-read=on|off]
[[,bps=b]|[[,bps_rd=r][,bps_wr=w]]][[,iops=i]|[[,iops_rd=r][,iops_wr=w]]
use 'file' as a drive image
-set group.id.arg=value
set <...