任我飞
2011-Dec-09 09:26 UTC
Xen I/O performance, network performance is only 20-30% of the real machine?
First, sorry for my poor English!
Here is the test:
Virtualization performance comparison test
Test environment
Physical machine:
CPU: 8 cores
Memory: 8 GB
HDD: 147 GB
Xen virtual machine:
CPU: 2 cores
Memory: 4 GB
Hard drive: 30 GB
VMware virtual machine:
CPU: 2 cores
Memory: 4 GB
Hard drive: 30 GB
Optical disk array (SAN):
Size: 7.7 TB
Speed: 6 Gb/s
Tests and results
I/O performance test
Test methods
I/O performance is tested with dd; the script is as follows:
#!/bin/bash
# /mnt
echo "/mnt"
echo "dd if=/dev/zero of=/mnt/test0.date bs=100M count=50"
dd if=/dev/zero of=/mnt/test0.date bs=100M count=50
rm -rf /mnt/test0.date
echo "dd if=/dev/zero of=/mnt/test1.date bs=10M count=500"
dd if=/dev/zero of=/mnt/test1.date bs=10M count=500
rm -rf /mnt/test1.date
echo "dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000"
dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000
rm -rf /mnt/test2.date
# /
echo "/"
echo "dd if=/dev/zero of=/test0.date bs=100M count=50"
dd if=/dev/zero of=/test0.date bs=100M count=50
rm -rf /test0.date
echo "dd if=/dev/zero of=/test1.date bs=10M count=500"
dd if=/dev/zero of=/test1.date bs=10M count=500
rm -rf /test1.date
echo "dd if=/dev/zero of=/test2.date bs=1024 count=5000000"
dd if=/dev/zero of=/test2.date bs=1024 count=5000000
rm -rf /test2.date
The test is run on the real physical machine and on each virtual machine, against both the / directory and the /mnt directory (the directly mounted disk array).
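One caveat on this method: dd to a file without a sync flag largely measures the Linux page cache rather than the disk. A minimal sketch of a variant that forces data to disk before dd reports its rate (the target path is a placeholder, not from the original test):

```shell
#!/bin/sh
# Sketch: dd write test with conv=fdatasync, so dd flushes the data to
# disk before printing its throughput and the page cache does not
# inflate the result. TARGET is an assumed placeholder path.
TARGET=${TARGET:-/tmp/ddtest.date}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET"
```

Where the filesystem supports it, oflag=direct is another way to bypass the cache entirely.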
Test results
Directory   bs           count     records in   records out   time (s, xen)   MB/s (xen)   time (s, local)   MB/s (local)
/mnt        100M         50        50+0         50+0          39.7209         132          19.4492           270
/mnt        10M          500       500+0        500+0         44.5654         118          20.3288           258
/mnt        1024 bytes   5000000   5000000+0    5000000+0     43.7605         117          42.1754           121
/           100M         50        50+0         50+0          159.142         32.9         25.1047           209
/           10M          500       500+0        500+0         183.316         28.6         28.3515           185
/           1024 bytes   5000000   5000000+0    5000000+0     175.724         29.1         36.3496           141
Network Performance Testing
SCP performance test
Test methods
Network performance is tested by transferring a large file (2 GB or more, such as a RHEL ISO) via scp.
Both the real machine and the virtual machines must be connected to Gigabit Ethernet, verified with the following command:
[root@rhel-PowerEdge-1 ~]# ethtool eth0
Settings for eth0:
Supported ports: [TP]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: g
Wake-on: d
Link detected: yes
Speed: 1000Mb/s, i.e. Gigabit.
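The MB/s figures in the results below are simply bytes transferred divided by elapsed time. A minimal sketch of that calculation (a local cp stands in for the real scp invocation here, and all paths are placeholders):

```shell
#!/bin/sh
# Sketch: compute transfer speed in MB/s from file size and elapsed time.
# cp stands in for scp; substitute e.g.  scp "$SRC" user@host:/tmp/
# to reproduce the actual network test. SRC/DST are assumed paths.
SRC=/tmp/xfer-src.dat
DST=/tmp/xfer-dst.dat
dd if=/dev/zero of="$SRC" bs=1M count=32 2>/dev/null   # make a test file
start=$(date +%s)
cp "$SRC" "$DST"
end=$(date +%s)
secs=$((end - start)); [ "$secs" -eq 0 ] && secs=1     # avoid divide by zero
size_mb=$(( $(wc -c < "$SRC") / 1048576 ))
echo "$((size_mb / secs)) MB/s"
rm -f "$SRC" "$DST"
```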
Test results
Note: the VMware figures are from VMware Workstation 7.1 and are for reference only; the same below.
                           xen    real machine   VMware   xen / real ratio
scp download speed (MB/s)  11.3   33.2           27.05    34.04%
scp upload speed (MB/s)    12.1   28.3           26.2     42.76%
Netperf
Test methods
Prepare another machine A; the machines under test (B, C, and the virtual machines) connect directly to A over Gigabit Ethernet (as above).
The netperf-2.4.5-1.ky3.x86_64.rpm package is installed on A, B, and the virtual machines.
The machine under test (e.g. B, or a virtual machine) runs the server side:
[root@rhel-PowerEdge-1 ~]# netserver
Starting netserver at port 12865
Starting netserver at hostname 0.0.0.0 port 12865 and family AF_UNSPEC
On machine A, the test client script (/usr/local/sbin/netclient.sh) reads as follows:
#!/bin/sh
SERVERIP=$1
OUT=$2
if [ "$SERVERIP" = "" -o "$OUT" = "" ]; then
    echo "netclient <Server IP> <OUTPUT FILE>"
    exit 1
fi
netperf -H $SERVERIP -i 10,2 -I 99,5 -- -m 4096 -s 128K -S 128K > $OUT
netperf -H $SERVERIP -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344 >> $OUT
netperf -H $SERVERIP -t TCP_CRR -- -r 32,1024 >> $OUT
Run the test script:
[root@rhel-PowerEdge-1 ~]# sh /usr/local/sbin/netclient.sh <other host or virtual machine IP> test_results.log
Test results
                                                                xen      real machine   VMware    xen / real ratio
Network throughput 1 (packet smaller than buffer) (10^6 bit/s)  139.16   820.64         519       16.96%
Network throughput 2 (buffer larger than packet) (10^6 bit/s)   151.97   819.78         485.19    18.54%
New TCP connections per second (times/s)                        763.83   2508.85        1357.3    30.45%
Note: these are averages.
Network File System Performance Test
nfs-io test (dd test)
Test methods
Host C mounts the SAN disk array and exports it over NFS; host A (and the virtual machine) connects to C over Gigabit Ethernet.
A (or the virtual machine) mounts the export on /mnt, then runs the script dd.sh:
#!/bin/bash
# /mnt
echo "/mnt"
echo "dd if=/dev/zero of=/mnt/test0.date bs=100M count=50"
dd if=/dev/zero of=/mnt/test0.date bs=100M count=50
rm -rf /mnt/test0.date
echo "dd if=/dev/zero of=/mnt/test1.date bs=10M count=500"
dd if=/dev/zero of=/mnt/test1.date bs=10M count=500
rm -rf /mnt/test1.date
echo "dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000"
dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000
rm -rf /mnt/test2.date
# sh dd.sh &> test_nfs.log
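For reference, the NFS setup described above might look roughly like this (the hostnames and export path are assumptions, not taken from the thread):

```shell
# On host C (NFS server); /san is where the SAN array is mounted (path assumed).
# /etc/exports would contain a line like:
#   /san  hostA(rw,sync,no_subtree_check)
exportfs -ra                    # re-export everything in /etc/exports

# On host A (or the virtual machine), mount the export on /mnt:
mount -t nfs hostC:/san /mnt
```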
Test results
                          xen   real machine   xen / real ratio
Network I/O speed (MB/s)  7.5   87.6           8.56%
Network I/O speed (MB/s)  7.6   90.5           8.40%
Network I/O speed (MB/s)  7.4   86.6           8.55%
Average                   7.5   88.21          8.50%
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Mike Sievers
2011-Dec-09 09:38 UTC
Re: Xen I/O performance, network performance is only 20-30% of the real machine?
Hi!

Your post confuses me a little bit. What are you running? Xen dom0 with a Linux domU?

Please try hdparm -T -t /dev/<source device> first. (But I can tell you that the performance with Xen is nearly the same as with hardware.) I/O is sometimes a question of your scheduler (noop).

Best,
Mike

2011/12/9 任我飞 <renwofei423@gmail.com>:
> [quoted text of the original post trimmed]
Fajar A. Nugraha
2011-Dec-09 09:52 UTC
Re: Xen I/O performance, network performance is only 20-30% of the real machine?
2011/12/9 任我飞 <renwofei423@gmail.com>:
> First,sorry my poor english~

I'd be more concerned with your confusing tests than with your English.

> xen virtual machine:
> cpu 2 core
> 4G memory
> 30G hard drive

Is this PV or HVM? Which OS/distro?

> wmware virtual machine:
> cpu 2 core
> 4G memory
> 30G hard drive

So you mean you used the SAME physical server, once as Xen dom0 and once more as VMware? If yes, what versions/variants are they? e.g.:
- XCP / XenServer
- CentOS 5
- openSUSE 11.2
- VMware Server or ESXi, and which version

> Optical disk array (san)

Do you know what an optical disk is? http://en.wikipedia.org/wiki/Optical_disk
I highly doubt that's really what you mean. Also, how are you sharing the NAS? nfs? iscsi? smb?

> Test results

I have to say that without the answers to the things I asked, your test results are somewhat useless. It's not really news if an HVM domU without PV drivers is WAAAAAAAAAAAAAAAY slower than native or even a VMware VM.

--
Fajar
任我飞
2011-Dec-11 03:06 UTC
Re: Xen I/O performance, network performance is only 20-30% of the real machine?
Thank you for your tolerance of a novice; I will redeploy and test again! ^^

2011/12/10 Florian Heigl <florian.heigl@gmail.com>:
> Install PV disk / network drivers in the VM.
>
> Without these, it will be slow. Stop testing until you have the drivers right.
>
> VMware is optimized for running reasonably fast with either fake LSI / Intel drivers, or a little faster with their PV drivers.
>
> Xen is optimized to be *extremely fast* in a good configuration.
>
> Greetings,
> Flo