G.Bakalarski@icm.edu.pl
2012-Feb-23 10:50 UTC
read from FC lun under Xen 4 - very strange behavior
Dear Xen Users
This is my first post, so please forgive what may be a newbie question.
The hardware configuration is the following:
-------------------------------
Server: Dell PE R815, 192 GB RAM, 4x 600 GB SAS HDD, PERC 700
internal RAID controller, Emulex LPe12002-E 8 Gbit dual-port FC HBA
SAN: 2x Cisco MDS 9148 + OM2/OM3 cables (short, <3 m)
Storage: two arrays tested: Dell MD3620F & NetApp FAS3240, both attached over FC
Software:
---------
mainly Debian Squeeze & Debian Wheezy (testing) + the Debian Xen packages,
4.0 & 4.1
========Problem======================
LUNs are visible and accessible in Dom0. I can even configure
DM multipathing (the Dell array requires RDAC, but it works).
Write performance is fine, i.e. stable: on a Dell LUN backed by a RAID10 of
8 disks I get around 250 MBytes/s for a streaming write with a 32K block size
(dd if=/dev/zero of=lun_mount_point/file bs=32k).
Write performance is similar with and without the Xen hypervisor.
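Spelled out, the write test was essentially the following (mount point and size
are illustrative, not the exact paths used):

    # streaming write onto the filesystem mounted from the LUN; dd reports MB/s when done
    dd if=/dev/zero of=/mnt/lun/file bs=32k count=262144    # ~8 GiB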
THE PROBLEM is read performace. Without Xen hypervisor read performance
is stable and high e.g. streaming reading from Dell LUN with Xen with
speed about 420MBytes/s (under iostat I can see speed in a range
from 390MBytes/s up to 450MBytes/s)
With the Xen hypervisor, read performance is not stable and changes randomly.
Sometimes it is up to 450 MBytes/s, but sometimes as low as 16 MBytes/s.
The low performance does not depend on file size or block size. The same
file reads once at high speed; after an umount/mount it reads very slowly,
and after the next umount/mount it starts very slow, switches to very
high speed for a few seconds, then drops back to very low.
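To illustrate, one test cycle looks roughly like this (the block device behind
the mount point is illustrative; the file and block size are the same as in the
iostat trace below):

    umount /mnt/2T
    mount /dev/sde1 /mnt/2T               # or the corresponding multipath device
    dd if=/mnt/2T/b of=/dev/null bs=32k   # same file, read again from scratch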
When the high speed is in effect, the average request size (avgrq-sz in the
iostat output below) is about 256 sectors, but at low speed it drops down to
about 8 or 16 ...
Strangely, the number of IOPS seen from Linux as well as from the array side
is higher at low speed than at high speed (e.g. 4500 vs 3000 IOPS).
This is not directly related to DM multipathing. I get similar
results when accessing the /dev/sdX device with multipathing off
(multipath -F, multipath-tools stop).
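Spelled out, that test is roughly (device letter and init-script name may differ):

    multipath -F                          # flush the multipath maps
    /etc/init.d/multipath-tools stop      # stop the multipath daemon
    dd if=/dev/sdX of=/dev/null bs=32k    # read a single raw SCSI path directly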
It looks like Xen 4.1.2 with newer kernels (3.1 or 3.2) performs better
than the default Squeeze kernel (2.6.32-5-amd64-xen), but it is still not stable.
Here is an example from iostat monitoring:
dev2 #> dd if=/mnt/2T/b of=/dev/null bs=32k
Device:  rrqm/s  wrqm/s  r/s  w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sde  0.00  0.00  13.38  0.00  519.00  0.09  77.57  0.00  0.26  0.21  0.28
sde  0.00  0.00  4731.00  0.00  18924.00  0.00  8.00  0.85  0.18  0.18  84.80
sde  0.00  0.00  4680.00  0.00  18720.00  0.00  8.00  0.87  0.19  0.19  87.20
sde  0.00  0.00  4432.00  0.00  17728.00  0.00  8.00  0.80  0.19  0.18  80.40
sde  0.00  0.00  4763.00  0.00  19052.00  0.00  8.00  0.86  0.18  0.18  86.40
sde  0.00  0.00  4580.00  0.00  18320.00  0.00  8.00  0.84  0.19  0.18  83.60
sde  0.00  0.00  4732.00  0.00  18928.00  0.00  8.00  0.88  0.19  0.19  87.60
sde  0.00  0.00  4729.00  0.00  18916.00  0.00  8.00  0.87  0.19  0.18  86.80
sde  0.00  0.00  4538.00  0.00  18152.00  0.00  8.00  0.86  0.19  0.19  86.40
sde  0.00  0.00  4704.00  0.00  18816.00  0.00  8.00  0.86  0.18  0.18  86.00
sde  0.00  0.00  4700.00  0.00  18800.00  0.00  8.00  0.89  0.19  0.19  89.20
sde  0.00  0.00  4715.00  0.00  18860.00  0.00  8.00  0.90  0.19  0.19  89.60
sde  0.00  0.00  4642.00  0.00  18568.00  0.00  8.00  0.88  0.19  0.19  87.60
sde  0.00  0.00  4547.00  0.00  18188.00  0.00  8.00  0.86  0.19  0.19  86.00
sde  0.00  0.00  4608.00  0.00  18432.00  0.00  8.00  0.86  0.19  0.19  86.40
sde  0.00  0.00  4594.00  0.00  18376.00  0.00  8.00  0.83  0.18  0.18  82.80
sde  0.00  0.00  4552.00  0.00  18208.00  0.00  8.00  0.87  0.19  0.19  87.20
sde  0.00  0.00  4628.00  0.00  18512.00  0.00  8.00  0.85  0.19  0.18  84.80
sde  0.00  0.00  4462.00  0.00  17848.00  0.00  8.00  0.84  0.19  0.19  84.40
sde  0.00  0.00  4586.00  0.00  18348.00  0.00  8.00  0.83  0.18  0.18  83.20
sde  0.00  0.00  4528.71  0.00  18110.89  0.00  8.00  0.81  0.18  0.18  81.19
sde  0.00  0.00  4637.00  0.00  18548.00  0.00  8.00  0.89  0.19  0.19  88.80
sde  0.00  0.00  4619.00  0.00  18476.00  0.00  8.00  0.84  0.18  0.18  83.60
sde  0.00  0.00  4555.00  0.00  18220.00  0.00  8.00  0.88  0.19  0.19  88.00
sde  0.00  0.00  4579.00  0.00  18316.00  0.00  8.00  0.85  0.19  0.19  85.20
sde  0.00  0.00  4760.00  0.00  19040.00  0.00  8.00  0.88  0.19  0.19  88.40
sde  0.00  0.00  5342.00  0.00  21368.00  0.00  8.00  0.76  0.14  0.14  75.60
sde  0.00  0.00  5342.00  0.00  21368.00  0.00  8.00  0.72  0.14  0.13  72.00
sde  0.00  0.00  4384.00  0.00  94048.00  0.00  42.91  0.91  0.21  0.19  84.80
sde  0.00  0.00  3370.00  0.00  431360.00  0.00  256.00  1.25  0.38  0.26  86.80
sde  0.00  0.00  3434.00  0.00  439552.00  0.00  256.00  1.28  0.38  0.27  92.40
sde  0.00  0.00  3534.00  0.00  452352.00  0.00  256.00  1.30  0.37  0.27  96.40
sde  0.00  0.00  3540.00  0.00  453120.00  0.00  256.00  1.34  0.38  0.28  99.60
sde  0.00  0.00  3338.00  0.00  427264.00  0.00  256.00  1.36  0.41  0.29  95.20
sde  0.00  0.00  3405.00  0.00  435840.00  0.00  256.00  1.28  0.38  0.28  94.40
sde  0.00  0.00  3414.00  0.00  436992.00  0.00  256.00  1.36  0.40  0.28  95.60
sde  0.00  0.00  3409.00  0.00  436352.00  0.00  256.00  1.31  0.39  0.27  92.40
sde  0.00  0.00  3403.00  0.00  435584.00  0.00  256.00  1.34  0.39  0.28  93.60
sde  0.00  0.00  3238.00  0.00  414464.00  0.00  256.00  1.21  0.38  0.27  88.80
sde  0.00  0.00  3322.00  0.00  425216.00  0.00  256.00  1.29  0.39  0.26  88.00
sde  0.00  0.00  3316.00  0.00  424448.00  0.00  256.00  1.25  0.38  0.27  89.20
sde  0.00  0.00  3313.00  0.00  424064.00  0.00  256.00  1.21  0.37  0.26  86.80
sde  0.00  0.00  3318.00  0.00  424704.00  0.00  256.00  1.20  0.36  0.25  82.80
sde  0.00  0.00  3179.00  0.00  406912.00  0.00  256.00  1.24  0.39  0.27  84.80
sde  0.00  0.00  3283.00  0.00  420224.00  0.00  256.00  1.19  0.37  0.25  82.40
sde  0.00  0.00  3287.00  0.00  420736.00  0.00  256.00  1.12  0.34  0.24  80.40
sde  0.00  0.00  3308.00  0.00  423424.00  0.00  256.00  1.21  0.37  0.26  84.40
sde  0.00  0.00  3302.00  0.00  422656.00  0.00  256.00  1.27  0.39  0.26  86.40
sde  0.00  0.00  3135.00  0.00  401280.00  0.00  256.00  1.26  0.41  0.27  86.00
sde  0.00  0.00  3259.00  0.00  417152.00  0.00  256.00  1.38  0.43  0.28  91.20
sde  0.00  0.00  3250.00  0.00  416000.00  0.00  256.00  1.50  0.47  0.29  93.60
sde  0.00  0.00  3258.00  0.00  417024.00  0.00  256.00  1.35  0.41  0.28  91.60
sde  0.00  0.00  3247.00  0.00  415616.00  0.00  256.00  1.43  0.45  0.29  92.80
sde  0.00  0.00  3117.00  0.00  398976.00  0.00  256.00  1.33  0.43  0.29  89.20
sde  0.00  0.00  3217.00  0.00  411776.00  0.00  256.00  1.34  0.42  0.27  88.40
sde  0.00  0.00  3216.00  0.00  411648.00  0.00  256.00  1.20  0.38  0.27  86.00
sde  0.00  0.00  3215.00  0.00  411520.00  0.00  256.00  1.24  0.39  0.27  87.20
sde  0.00  0.00  3214.00  0.00  411392.00  0.00  256.00  1.21  0.38  0.26  84.40
sde  0.00  0.00  3092.00  0.00  395776.00  0.00  256.00  1.24  0.40  0.27  84.80
sde  0.00  0.00  3168.00  0.00  405504.00  0.00  256.00  1.22  0.39  0.28  87.60
sde  0.00  0.00  3123.00  0.00  399744.00  0.00  256.00  1.23  0.39  0.28  86.40
sde  0.00  0.00  3122.00  0.00  399616.00  0.00  256.00  1.26  0.40  0.28  88.80
sde  0.00  0.00  3112.00  0.00  398336.00  0.00  256.00  1.18  0.38  0.27  83.60
sde  0.00  0.00  2971.00  0.00  380288.00  0.00  256.00  1.24  0.42  0.29  87.60
sde  0.00  0.00  3060.00  0.00  391680.00  0.00  256.00  1.25  0.41  0.30  90.80
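(The columns above are iostat's extended statistics in kB, i.e. the output of
something along the lines of "iostat -xk 1 /dev/sde"; the exact interval and
device name are illustrative.)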
I have not tried accessing a LUN from a DomU yet.
Please suggest what might be wrong and how to proceed.
GB
Florian Heigl
2012-Feb-23 11:32 UTC
Re: read from FC lun under Xen 4 - very strange behavior
Hi,

2012/2/23 <G.Bakalarski@icm.edu.pl>:
> THE PROBLEM is read performance. Without the Xen hypervisor read
> performance is stable and high, e.g. streaming reads from the Dell LUN
> run at about 420 MBytes/s (in iostat I see speeds ranging from
> 390 MBytes/s up to 450 MBytes/s).

Can you please try the same test with a non-Debian distro to find out
if this is kernel related? E.g. the AlpineLinux dom0 ISO might be a very
easy option, although I've not used it in a SAN yet. Otherwise try Fedora 16.

If you can pinpoint this as an issue in your dom0, then I think you
should take it to -devel, since you're only seeing it in (less CPU
intensive) reads, which is *quite* odd.

Greetings,
Florian

--
the purpose of libvirt is to provide an abstraction layer hiding all
xen features added since 2006 until they were finally understood and
copied by the kvm devs.
G.Bakalarski@icm.edu.pl
2012-Feb-23 15:45 UTC
Re: read from FC lun under Xen 4 - very strange behavior
Hi,

Thanks for the input. I'll give Alpine a try later on.

Meanwhile I have discovered that this is probably not FC-specific but affects
all block devices. I ran similar read tests on a volume on the internal PERC
controller (LVM on a 4-disk RAID5), and it shows basically the same problem:
writes are stable and fine, but reads are unstable and very frequently very
slow, varying randomly from about 300 MBytes/s (good) down to 25 MBytes/s
(very bad). Always tested from dom0, with different kernels (2.6.32-5-amd64,
3.1, 3.2) and Xen 4.0 and 4.1. Without Xen, reads and writes perform
perfectly; with Xen it is very, very strange. Something basic must be getting
overlooked here, since this has to work correctly in thousands of
installations worldwide.

Kind regards,
GB

> 2012/2/23 <G.Bakalarski@icm.edu.pl>:
>> THE PROBLEM is read performance. Without the Xen hypervisor read
>> performance is stable and high, e.g. streaming reads from the Dell LUN
>> run at about 420 MBytes/s (in iostat I see speeds ranging from
>> 390 MBytes/s up to 450 MBytes/s).
>
> Can you please try the same test with a non-Debian distro to find out
> if this is kernel related? E.g. the AlpineLinux dom0 ISO might be a very
> easy option, although I've not used it in a SAN yet. Otherwise try Fedora 16.
>
> If you can pinpoint this as an issue in your dom0, then I think you
> should take it to -devel, since you're only seeing it in (less CPU
> intensive) reads, which is *quite* odd.
>
> Greetings,
> Florian
>
> --
> the purpose of libvirt is to provide an abstraction layer hiding all
> xen features added since 2006 until they were finally understood and
> copied by the kvm devs.