similar to: Windows Guest I/O performance issues (already using virtio)

Displaying 20 results from an estimated 2000 matches similar to: "Windows Guest I/O performance issues (already using virtio)"

2018 Aug 09
0
Re: Windows Guest I/O performance issues (already using virtio) (Matt Schumacher)
I think performance is not just about your XML; the host system will have a bigger impact. Maybe you can look at this link: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html-single/virtualization_tuning_and_optimization_guide/index
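For reference, a minimal sketch of the kind of disk tuning that guide covers; the image path and target name here are hypothetical examples, not from the thread:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>

cache='none' with io='native' avoids double caching on the host; whether it helps depends on the underlying storage.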
2009 Mar 31
3
Bad SWAP performance from zvol
I've upgraded my system from ufs to zfs (root pool). By default, it creates a zvol for dump and swap. It's a 4GB Ultra-45 and every late night/morning I run a job which takes around 2GB of memory. With a zvol swap, the system becomes unusable and the Sun Ray client often goes into "26B". So I removed the zvol swap and now I have a standard swap partition. The
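For readers hitting the same thing, the commonly suggested tuning keeps the zvol but matches its block size to the page size; a sketch, assuming a SPARC host (8K pages) and hypothetical pool/volume names:

    # zfs create -V 4G -b 8k -o primarycache=metadata rpool/swapvol
    # swap -a /dev/zvol/dsk/rpool/swapvol
    # swap -l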
2020 Sep 14
0
Re: [ovirt-users] Re: Testing ovirt 4.4.1 Nested KVM on Skylake-client (core i5) does not work
On Mon, Sep 14, 2020 at 8:42 AM Yedidyah Bar David <didi@redhat.com> wrote: > > On Mon, Sep 14, 2020 at 12:28 AM wodel youchi <wodel.youchi@gmail.com> wrote: > > > > Hi, > > > > Thanks for the help, I think I found the solution using this link : https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qemu-kvm-on-x86-hosts/ > > > >
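For context, nested KVM support on an Intel host can be checked and enabled roughly like this (kvm_intel shown; kvm_amd takes the same nested=1 parameter):

    $ cat /sys/module/kvm_intel/parameters/nested
    N
    # modprobe -r kvm_intel
    # modprobe kvm_intel nested=1

The linked berrange.com post then covers exposing a suitable CPU model (e.g. host-passthrough) to the L1 guest.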
2008 Oct 14
4
Change the volblocksize of a ZFS volume
Dear all, Background: I have a ZFS volume with the incorrect volume blocksize for the filesystem (NTFS) that it is supporting. This volume contains important data that is proving impossible to copy using the Windows XP Xen HVM that "owns" the data. The disparity in volume blocksize (currently set to 512 bytes!) is causing significant performance problems. Question: Is there a way to
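volblocksize is fixed at creation time, so the usual answer is to create a new zvol with the right block size and copy the data across block-for-block; a sketch with hypothetical names and sizes (NTFS defaults to 4K clusters):

    # zfs create -V 100G -o volblocksize=4K tank/ntfs-new
    # dd if=/dev/zvol/dsk/tank/ntfs-old of=/dev/zvol/dsk/tank/ntfs-new bs=1M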
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported value of a zvol volsize to be less than the product of used and compressratio? For example:

    # zfs get -p all home1/home1mm01
    NAME             PROPERTY  VALUE        SOURCE
    home1/home1mm01  type      volume       -
    home1/home1mm01  creation  1254440045   -
    home1/home1mm01  used      14902492672
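To compare the figures directly, the relevant properties can be pulled in one parsable query (dataset name taken from the post):

    # zfs get -p volsize,used,refreservation,compressratio home1/home1mm01

Note that used also counts metadata (and, on raidz, parity overhead), so used × compressratio exceeding volsize is not necessarily an accounting error.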
2020 Jun 15
1
Reintroduce modern CPU in model selection
Hi list, in virt-manager ver. 2.2.1 (fully upgraded CentOS 8.1), the CPU model list only shows ancient CPUs (the most recent being Nehalem-IBRS). On the other hand, in virt-manager 1.5.x (fully upgraded CentOS 7.8) we have a rich selection of CPUs (as recent as Icelake). Why was the list trimmed so much in the newer virt-manager? Is it possible to enlarge it? Thanks. -- Danti Gionatan Supporto
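The models virt-manager offers come from libvirt, so comparing the two lists on the host is a quick first check (both commands are stock virsh):

    $ virsh cpu-models x86_64
    $ virsh domcapabilities | grep -A 40 '<cpu>'

Newer virt-manager filters the menu down to models libvirt reports as usable on the host, which is one likely reason the list looks trimmed.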
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem to a zvol turn the block storage into swiss cheese? I am considering serving ext3 journals (and possibly swap too) off a raw, hardware-mirrored device. Before I do (and I'll write up any results) I'd like to know
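For the external-journal experiment, the standard mechanism is a dedicated journal device; a sketch with hypothetical device paths (the journal device and filesystem must use the same block size):

    # mke2fs -O journal_dev -b 4096 /dev/md0
    # mkfs.ext3 -b 4096 -J device=/dev/md0 /dev/sdb1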
2008 Jun 01
1
capacity query
Hi, My swap is on raidz1. df -k and swap -l show almost no swap usage, while zfs list and zpool list show 96% capacity. Which should I believe? Justin

    # df -hk
    Filesystem          size  used  avail  capacity  Mounted on
    /dev/dsk/c3t0d0s1   14G   4.0G  10G    28%       /
    /devices            0K    0K    0K     0%        /devices
    ctfs
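One plausible explanation: a swap zvol carries a reservation equal to its size, which zfs list and zpool list count as used even when swap -l shows nothing paged out. The properties to check (volume name hypothetical):

    # zfs get volsize,reservation,refreservation rpool/swap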
2007 Jan 26
10
UFS on zvol: volblocksize and maxcontig
Hi all! First off, if this has been discussed, please point me in that direction. I have searched high and low and really can't find much info on the subject. We have a large-ish (200gb) UFS file system on a Sun Enterprise 250 that is being shared with samba (lots of files, mostly random IO). OS is Solaris 10u3. Disk set is 7x36gb 10k scsi, 4 internal 3 external. For several
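UFS issues I/O in its 8K block size (clustered up to maxcontig blocks), so an 8K volblocksize is a commonly suggested starting point, tuning maxcontig afterwards; a sketch with hypothetical names and values:

    # zfs create -V 200g -o volblocksize=8k tank/ufsvol
    # newfs /dev/zvol/rdsk/tank/ufsvol
    # tunefs -a 128 /dev/zvol/rdsk/tank/ufsvol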
2005 Nov 30
2
Trying to understand volblocksize ?
Hi, I am trying to understand the use of volblocksize in emulated volumes. If I create a volume in a pool and I want a database engine to read and write, say, 16K blocks, should I then set volblocksize to 16K? Regards, Patrik
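Matching volblocksize to the database block size is the usual guidance; a sketch with hypothetical names and size (the property can only be set at creation):

    # zfs create -V 20G -o volblocksize=16K tank/dbvol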
2012 Aug 31
0
oops with btrfs on zvol
Hi, I'm experimenting with btrfs on top of a zvol block device (using zfsonlinux), and got an oops on a simple mount test. While I'm sure that zfsonlinux is somehow also at fault here (since the same test with zram works fine), the oops only shows things btrfs-related without any usable mention of zfs/zvol. Could anyone help me interpret the kernel logs, which btrfs-zvol interaction
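For anyone reproducing, the test amounts to something like the following (names hypothetical; zfsonlinux exposes zvols under /dev/zvol):

    # zfs create -V 10G tank/btrfstest
    # mkfs.btrfs /dev/zvol/tank/btrfstest
    # mount /dev/zvol/tank/btrfstest /mnt/test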
2014 Sep 03
2
howto force shutdown if nut-snmp Communications lost
Hello, I have one UPS with SNMP and other UPSes without SNMP. Shutdown works while the network is online, but some batteries run empty so the communication breaks early. How do I force a shutdown if NUT-SNMP communication is lost? My first thought was the DEADTIME option, but it seems to be informational only (see below). Any suggestions? The last resort would be an ugly cron job (like: ping || shutdown). regards Heiko
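The standard NUT answer is upssched: have upsmon hand COMMBAD/NOCOMM events to upssched, start a timer, and shut down if communication does not return. A sketch of the relevant directives (timer name and the 300-second delay are examples; upssched.conf also needs PIPEFN/LOCKFN set):

    # upsmon.conf
    NOTIFYCMD /usr/sbin/upssched
    NOTIFYFLAG COMMBAD SYSLOG+EXEC
    NOTIFYFLAG NOCOMM  SYSLOG+EXEC

    # upssched.conf
    CMDSCRIPT /etc/nut/upssched-cmd
    AT COMMBAD * START-TIMER commbad 300
    AT COMMOK  * CANCEL-TIMER commbad

The CMDSCRIPT then calls the actual shutdown when invoked with the "commbad" argument.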
2013 Jan 11
1
libvirt RPC error
Hi, I'm using qemu+ssh://username@hostname/system as the remote URI. Libvirt seems to communicate fine for about 2 minutes (we poll every 5 seconds) and then it throws an RPC error and many counters are wrong. But if I collect on localhost the counters seem to come through fine. Test and testvm are guests on the local machine whereas Win8 and Ubuntu* are remote URIs connected
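If the connection itself is dropping under the polling load, libvirt's keepalive settings are worth a look; a sketch of the daemon side (the values are examples, not recommendations):

    # /etc/libvirt/libvirtd.conf
    keepalive_interval = 5
    keepalive_count = 6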
2017 Jul 06
1
samba 4.5.8 @ debian 9 - wrong groups IDs for PAM authorization
Hello list. I'm using samba4 authorization on Debian 8 without any problems. But on Debian 9 the very same config causes problems - unable to change GID. Here is my smb.conf:

    [global]
    netbios name = testvm
    security = ADS
    workgroup = WRKGRP
    realm = EXAMPLE.COM
    password server = 172.24.0.253
    wins server = 172.24.0.253
    wins proxy = no
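One common cause of wrong group IDs in a setup like this is missing winbind idmap ranges in [global]; a hedged sketch of the kind of block people add (backend choice and ranges must match your domain, WRKGRP taken from the post):

    idmap config * : backend = tdb
    idmap config * : range = 3000-7999
    idmap config WRKGRP : backend = rid
    idmap config WRKGRP : range = 10000-999999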
2016 Mar 10
0
different uuids, but still "Attempt to migrate guest to same host" error
Background: ---------- I'm trying to debug a two-node pacemaker/corosync cluster where I want to be able to do live migration of KVM/qemu VMs. Storage is backed via dual-primary DRBD (yes, fencing is in place). When moving the VM between nodes via 'pcs resource move RES NODENAME', the live migration fails although pacemaker will shut down the VM and restart it on the other node.
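Note that libvirt compares host UUIDs (from SMBIOS), not domain UUIDs, for this check, so cloned nodes that share a system UUID can trigger it even when the guest UUIDs differ; worth verifying on both nodes:

    # dmidecode -s system-uuid

If they match, host_uuid can be overridden in /etc/libvirt/libvirtd.conf on one node.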
2005 May 04
0
hfc isdn cards & zaphfc in domU
Hello, After searching the archives, I've found only two messages regarding this case, but there was no solution. I have a similar problem, so I'm asking here. I have two HFC-chipset ISDN cards in my PC:

    0000:00:0d.0 Network controller: Cologne Chip Designs GmbH ISDN network controller [HFC-PCI] (rev 02)
        Subsystem: Cologne Chip Designs GmbH ISDN Board
        Flags: bus
2020 Nov 04
0
Re: Libvirt driver iothread property for virtio-scsi disks
On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote: > The docs[1] say: > > - The optional iothread attribute assigns the disk to an IOThread as defined by > the range for the domain iothreads value. Multiple disks may be assigned to > the same IOThread and are numbered from 1 to the domain iothreads value. > Available for a disk device target configured to use
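For virtio-scsi the iothread is assigned on the controller rather than on the disk; a minimal sketch of the XML involved (thread count and index are examples):

    <iothreads>2</iothreads>
    ...
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
    </controller>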
2011 Jul 29
3
issue with GlusterFS to store KVM guests
I'm having difficulty running KVM virtual machines off of a GlusterFS volume mounted using the GlusterFS client. I am running CentOS 6, 64-bit. I am using virt-install to create my images but am encountering the following error: qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img: Invalid argument (see below for a lengthier version of the error). I have found an example of
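A frequent cause of "Invalid argument" on a FUSE-mounted Gluster volume is qemu opening the image with O_DIRECT; if that is the case here, choosing a cache mode that avoids O_DIRECT is a common workaround (standard virt-install disk syntax, path taken from the post):

    # virt-install ... --disk path=/mnt/myreplicatestvolume/testvm.img,cache=writethrough ...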
2017 Jun 14
0
Re: virtual drive performance
Hi Dominik, Sure, I believe you can improve things using:

    <cpu mode='host-passthrough'> </cpu>

and adding io='native':

    <driver name='qemu' type='qcow2' cache='none' io='native'/>

After that, please try again. But I can see one other thing: for example, change the hda=IDE to virtio. Cheers! Thiago 2017-06-14 5:26 GMT-03:00 Dominik Psenner
2012 May 19
1
Migration with rbd storage backend
Hi, Seems that such migration is currently broken, at least for 0.9.11|0.9.12; with 0.9.8 all works fine: virsh migrate --live testvm qemu+tcp://towerbig/system

    ---snip---
    2012-05-17 21:22:30.250+0000: 16926: debug : qemuDriverCloseCallbackGet:605 : vm=testvm, uuid=feb7ccb6-1087-8661-9284-62e3a1e9f44a, conn=(nil)
    2012-05-17 21:22:30.250+0000: 16926: debug : qemuDriverCloseCallbackGet:611 :