Displaying 20 results from an estimated 20000 matches similar to: "HVM domU disk i/o slow after resume"
2012 Aug 12
1
tuned-adm fixed Windows VM disk write performance on CentOS 6
On a 32-bit Windows 2008 Server guest VM on a CentOS 5 host, iometer
reported a disk write speed of 37MB/s.
The same VM on a CentOS 6 host reported 0.3MB/s, i.e. the VM was unusable.
Write performance in a CentOS 6 guest VM was also much worse, but it was usable.
(See http://lists.centos.org/pipermail/centos-virt/2012-August/002961.html)
With iometer still running in the guest, I installed tuned on
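The snippet is cut off before naming the profile, but on CentOS 6 the usual sequence is to install tuned and apply one of its stock profiles. A minimal sketch, assuming the stock virtual-host profile was the relevant one:

  # on the CentOS 6 host
  yum install tuned
  tuned-adm list                   # show available profiles
  tuned-adm profile virtual-host   # stock profile for virtualisation hosts
  tuned-adm active                 # confirm which profile is active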
2018 May 28
0
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
On Mon, May 28, 2018 at 02:05:05PM +0200, Kashyap Chamarthy wrote:
> Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org),
[Sigh; now add the QEMU Block Layer e-mail list to Cc, without typos.]
> who might
> have more insights here; and wrap long lines.
>
> On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote:
> > Hi, everyone.
> >
> > Recently
2014 Dec 03
2
Problem with AIO random read
Hello list,
I set up Iometer to test AIO with 100% random reads.
If "Transfer Request Size" is greater than or equal to 256 kilobytes, the
transfer is good at the beginning.
But 3~5 seconds later, the throughput drops to zero.
Server OS:
Ubuntu Server 14.04.1 LTS
Samba:
Version 4.1.6-Ubuntu
Dialect:
SMB 2.0
AIO settings :
aio read size = 1
aio write size = 1
vfs objects =
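For reference, the AIO settings above belong in smb.conf; the vfs objects line is truncated in the original, so the module named below is only a guess. A minimal sketch:

  [global]
     # requests larger than 1 byte are handled asynchronously,
     # i.e. effectively all reads and writes
     aio read size = 1
     aio write size = 1
     # the original line is cut off; aio_pthread is one common choice
     vfs objects = aio_pthread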
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might
have more insights here; and wrap long lines.
On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote:
> Hi, everyone.
>
> Recently I am doing some tests on the VM storage+memory migration with
> KVM/QEMU/libvirt. I use the following migrate command through virsh:
> "virsh migrate --live
2012 Apr 28
1
SMB2 write performance slower than SMB1 in 10Gb network
Hi folks:
I've been testing SMB2 performance with Samba 3.6.4 these days,
and I found a surprising result: SMB2 write performance is
slower than SMB1 on a 10Gb Ethernet network.
Server
-----------------------
Linux: Redhat Enterprise 6.1 x64
Kernel: 2.6.31 x86_64
Samba: 3.6.4 (almost using the default configuration)
Network: Chelsio T4 T420-SO-CR 10GbE network adapter
RAID:
Adaptec 51645 RAID
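To A/B test the two dialects on Samba 3.6, the server's maximum protocol can be pinned in smb.conf. A minimal sketch:

  [global]
     # Samba 3.6: cap the negotiated dialect for comparison runs
     max protocol = SMB2   # or NT1 to force SMB1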
2012 Oct 11
0
samba performance downgrade with glusterfs backend
Hi folks,
We found that Samba performance degrades a lot with a glusterfs backend. Volume info is as follows:
Volume Name: vol1
Type: Distribute
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: pana53:/data/
Options Reconfigured:
auth.allow: 192.168.*
features.quota: on
nfs.disable: on
Testing write performance with dd (bs=1MB) or iozone (block=1MB) gives about 400MB/s.
#dd
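The dd command is truncated above; a representative 1MB-block sequential write test of the kind described (mount point hypothetical) would be:

  # write 1GB and flush it to disk before the timing ends
  dd if=/dev/zero of=/mnt/vol1/testfile bs=1M count=1024 conv=fdatasync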
2008 Nov 01
3
Unable to boot HVM domU with more than 1 disk
When I try adding a second disk to an HVM domU, I get this when I try to boot:
Error: Device 5632 (vbd) could not be connected. Backend device not found.
The backend device DOES exist. I'd like to use separate ZFS volumes for the boot disk and data disk on a Windows HVM so I can manage snapshots of either one independently. Is there a limitation that HVM guests can only use a single
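For what it's worth, a two-disk stanza for an HVM guest backed by separate zvols would look like this sketch in the domU config (pool and volume names hypothetical):

  # boot disk and data disk on separate ZFS volumes
  disk = [ 'phy:/dev/zvol/dsk/tank/win-boot,hda,w',
           'phy:/dev/zvol/dsk/tank/win-data,hdb,w' ]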
2013 Jun 18
3
Bug#712661: xen-utils-common: xl starts HVM domU instead of PV if disk is placed on a file
Package: xen-utils-common
Version: 4.1.4-3+deb7u1
Severity: normal
Dear Maintainer,
I changed the toolstack to xl; after that I observed that my domUs started as HVM domains.
I found the same problem described here: http://mail-index.netbsd.org/port-xen/2012/04/11/msg007216.html
When I manually set up loop devices and specify them as disks in my VM conf file, the domU starts as PV.
-- System Information:
Debian Release: 7.1
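The loop-device workaround described above, sketched as commands plus the matching config line (image path hypothetical):

  # attach the disk image to a loop device manually
  losetup /dev/loop0 /var/lib/xen/images/domu-disk.img
  # then reference the loop device in the domU config:
  #   disk = [ 'phy:/dev/loop0,xvda,w' ]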
2011 Feb 22
8
Xen 4.0.1 HVM 2008R2 Citrix PV Drivers
I have been googling for a few days now, trying to get the Citrix PV drivers to work on a Server 2008 R2 HVM guest. I get the machine up and running, then I install the Citrix PV drivers. I have tried both making and not making the registry change. Once I reboot the server, I get stuck at the boot screen.
The xen log says:
XENUTIL: WARNING: CloseFrontend: timed out in
2008 May 08
1
Restoring a DomU HVM-Domain is "slow" (Bandwidth 23MB/sec from a ramdisk, xen3.2.1)
Hi,
I'm doing some tests with restoring HVM winxp domUs.
Even if I restore a saved domU from a ramdisk, the
restore process only achieves a bandwidth of about 23MB/sec.
Here is an example restoring a 512MB HVM openSUSE 10.3 domU:
......
[2008-05-07 22:40:12 3314] DEBUG (XendCheckpoint:218) restore:shadow=0x5, _static_max=0x20000000, _static_min=0x0,
[2008-05-07 22:40:12 3314] DEBUG (balloon:132)
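A sketch of the kind of ramdisk save/restore test described (mount point and domain name hypothetical):

  mount -t tmpfs tmpfs /mnt/ram
  xm save winxp /mnt/ram/winxp.chk      # save the domU to the ramdisk
  time xm restore /mnt/ram/winxp.chk    # measure the restore bandwidth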
2008 Oct 14
2
very slow I/O performance in domU
I've got a Debian Lenny dom0 with Debian's 2.6.26 Xen paravirt_ops kernel
and a Debian Lenny domU. The xen-hypervisor is Debian's 3.2.1 package. The
dom0 has two dual-core Opteron CPUs without hardware virtualisation
support.
In the domU, I/O performance is very bad. In the dom0, I get about 46 MB/s:
# dd if=/dev/zero of=/root/zeroes bs=20M count=20
20+0 records in
20+0
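Note that dd through the page cache can overstate short tests; a variant that forces the data to disk before the timing ends (same file and block size as the original) is:

  dd if=/dev/zero of=/root/zeroes bs=20M count=20 conv=fdatasync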
2016 Feb 17
2
Number of CPUs
Quick question.
My host has two processors, each with 6 cores, and each core has two threads.
I use iometer to do some testing of hard drive performance.
I get the impression that using more cores gives me better results in iometer (whether it will improve the speed of my guest is another question...).
For a Windows 2012 R2 server guest, can I just give the guest 24 cores? Just to make
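Assuming the guest is managed through libvirt (the post does not say), vCPU count and topology are set explicitly in the domain XML; a sketch matching the 2 x 6 x 2 host described:

  <vcpu placement='static'>24</vcpu>
  <cpu>
    <topology sockets='2' cores='6' threads='2'/>
  </cpu>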
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all,
I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of
their exaggerated claims of sustained performance. I was thinking about
getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive,
one of the STEC products, etc. Of course the key question I'm trying to
answer is: is the price premium worth it?
--- What is the (average/min/max)
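Since the ZIL is only exercised by synchronous writes, a simple way to load it while watching the log device is a sync-write dd against the pool (pool name and path hypothetical; oflag=sync assumes GNU dd):

  # force synchronous 4k writes so they hit the ZIL
  dd if=/dev/zero of=/tank/fs/testfile bs=4k count=100000 oflag=sync
  # in another terminal: per-device stats, including the log device
  zpool iostat -v tank 1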
2010 Dec 05
1
Dungeon Siege slow disk I/O
Yesterday, after a lot of trial and error, I found a way to get Dungeon Siege to install. I installed it in Windows 7, then rebooted to Ubuntu 10.04 to run it. Because of the way Dungeon Siege works, this method is fine. It even asks the user to agree to the EULA.
It is currently updated to the latest 1.11 version. Besides the commonly known intro cutscene bug, there is a major issue which has
2015 Jan 28
0
Very slow disk I/O
On 01/28/2015 01:32 PM, Jatin Davey wrote:
> Hi Users
>
> I am using RHEL 6.5 on my server.
>
> From the top command I can see that the processors in my server are
> spending a lot of time waiting for I/O.
> I can see high percentages, around 30-50%, of "wa" time.
>
> Here is the df output about the disk space in my system:
>
> **********
> [root@
2015 Jan 28
0
Very slow disk I/O
On 1/28/2015 4:32 AM, Jatin Davey wrote:
> I am using RHEL 6.5 on my server.
>
> From the top command I can see that the processors in my server are
> spending a lot of time waiting for I/O.
> I can see high percentages, around 30-50%, of "wa" time.
>
> Here is the df output about the disk space in my system:
>
> **********
> [root@localhost images]# df
2015 Jan 28
0
Very slow disk I/O
On 01/28/2015 04:32 AM, Jatin Davey wrote:
> Could someone point me to how to improve the disk I/O on my server and
> reduce the I/O wait time?
Start by identifying your disk and controller. Assuming that this is a
single SATA disk:
# smartctl -a /dev/sda | egrep 'Model:|Rate:|SATA Version'
# lspci | grep SATA
Next, install and run iotop. If there's something on your
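Alongside iotop, iostat from the sysstat package shows per-device latency and utilisation, which helps pin down whether the disk itself is saturated. A minimal sketch:

  # extended stats every 5 seconds; watch the await and %util columns
  iostat -x 5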
2015 Jan 29
0
Very slow disk I/O
On 01/29/2015 05:07 AM, Jatin Davey wrote:
> Yes, it is a SATA disk. I am not sure of the speed. Can you tell me
> how to find out this information? Additionally, we are using a RAID 10
> configuration with 4 disks.
What RAID controller are you using?
# lspci | grep RAID
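If the controller turns out to be an LSI MegaRAID (as the megacli-style listing later in this thread suggests), the logical-drive cache policy and physical disk states can be checked with MegaCli; a sketch assuming the usual install path for the 64-bit binary:

  # logical drive info, including write-back vs write-through cache policy
  /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
  # physical disk states
  /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL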
2015 Feb 03
0
Very slow disk I/O
On 2/2/2015 10:32 AM, John R Pierce wrote:
> On 2/1/2015 8:25 PM, Jatin Davey wrote:
>>
>> On 2/2/2015 9:25 AM, John R Pierce wrote:
>>> On 2/1/2015 7:31 PM, Jatin Davey wrote:
>>>>
>>>> I ran your script and here is the output for it:
>>>>
>>>> Start of the Output***************************
>>>> [root@localhost
2015 Feb 03
0
Very slow disk I/O
On 2/3/2015 10:00 AM, John R Pierce wrote:
> On 2/2/2015 8:11 PM, Jatin Davey wrote:
>> disk 252:1 | 0-0-0 | 9XG7TNQVST91000640NS CC03 | Online, Spun Up
>> disk 252:2 | 0-0-1 | 9XG4M4X3ST91000640NS CC03 | Online, Spun Up
>> disk 252:3 | 0-1-1 | 9XG4LY7JST91000640NS CC03 | Online, Spun Up
>> disk 252:4 | 0-1-0 | 9XG51233ST91000640NS CC03 | Online, Spun Up
>> End of