Hi All,

I have 5 servers, each with:

Intel Quad Core i7 2.4GHz (8 logical processors)
2 * 1TB HDD
2 * 1Gig NIC
12G RAM

I installed XCP 0.5 on each and all of them have joined a resource pool.
Then I installed 2 CentOS 5.5 VMs on each XCP host and used the following
commands to specify the number of vCPUs for each VM:

xe vm-param-set uuid=$VM_UUID VCPUs-max=4
xe vm-param-set uuid=$VM_UUID VCPUs-at-startup=4

// I assigned 8 vCPUs in total per host to the VMs.

For the RAM size I used:

xe vm-param-set uuid=$VM_UUID memory-static-max=6
xe vm-param-set uuid=$VM_UUID memory-dynamic-max=6

I knew it is wrong and I have to leave at least 512M for the host.Am I correct?

Now all my VMs are halted, but when I run xentop I see CPU(%) is high and it
sometimes reaches 100. Is that normal? If not, what is the best configuration
for 2 VMs on these hosts to get the best performance and use all the resources?

Please, can anyone suggest the best configuration for 2 VMs on an XCP host
like mine?

Sorry for my English!

Thanks in advance for any advice.

Inas
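A hedged aside for anyone reproducing this setup, not part of the original mail: before drawing conclusions from xentop, the values that actually took effect can be read back with the xe CLI. The sketch below only reuses the $VM_UUID variable from the post above.

# Read back individual parameters for one VM:
xe vm-param-get uuid=$VM_UUID param-name=VCPUs-at-startup
xe vm-param-get uuid=$VM_UUID param-name=memory-static-max

# Or list the relevant fields in one go:
xe vm-list uuid=$VM_UUID params=name-label,power-state,VCPUs-at-startup,memory-static-max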
Henrik Andersson
2011-Mar-16 07:16 UTC
[SPAM] Re: [Xen-users] [SPAM] Xen server best configurations
In my opinion, probably the best way to make sure nobody helps you is to send
the same question over and over again. You obviously need answers, and this
constant re-sending is probably related to that, but you are achieving nothing
by doing so.

Also, in my opinion your messages are spam for the same reason.

You can of course send this exact same message a fourth time, but you still
won't get any more audience or answers.

-Henrik Andersson

On 16 March 2011 07:16, Inas Ahmed <inas.ahmed@gmail.com> wrote:
[...]
Inas Ahmed
2011-Mar-16 07:19 UTC
[SPAM] Re: [Xen-users] [SPAM] Xen server best configurations
On Wed, Mar 16, 2011 at 11:16 AM, Henrik Andersson <
henrik.j.andersson@gmail.com> wrote:
[...]
Inas Ahmed
2011-Mar-16 07:28 UTC
[SPAM] Re: [Xen-users] [SPAM] Xen server best configurations
First, I am so so sorry :( , but I swear it was marked as spam from the first
time (I got [spam] in the subject). I thought it was because the subject was
"advice please" and that no one would receive my mail, so I thought it would
be better to send it again after changing the subject. But when I received it
with [SPAM] in the subject again, I asked why.

FYI, if I could find answers to my questions by searching, and if I were not
so depressed because I am so confused, I wouldn't bother you or anyone on the
list.

Anyway, thanks for telling me that my mails were sent and that some people
read them, and ISA I will not send anything to that list again and I'll try
to depend on myself.

Thanks & BR
Inas

On Wed, Mar 16, 2011 at 11:19 AM, Inas Ahmed <inas.ahmed@gmail.com> wrote:
[...]
Michelle Konzack
2011-Mar-16 18:10 UTC
[Xen-users] Re: [SPAM] Re: [SPAM] Xen server best configurations
Hello Inas Ahmed,

On 2011-03-16 11:28:38, you wrote:

> First I am so so sorry :( , but I swear it was spam from the first time (I
> got [spam] in the subject) so I thought it was because the subject was
> "advice please" and no one would receive my mail, so I thought it would be
> better to send it again after changing the subject, but when I received it
> with [SPAM] in the subject I asked why?

This line (at the end) is what triggers the SPAM filter:

> >>> I knew it is wrong and I have to leave at least 512M for the host.Am I

host.Am

If you write "host. Am" the SPAM tag is gone.

Thanks, Greetings and nice Day/Evening
Michelle Konzack
Inas,

I hadn't responded earlier because I don't run XCP, but...

On Tue, Mar 15, 2011 at 10:16 PM, Inas Ahmed <inas.ahmed@gmail.com> wrote:
[...]
> xe vm-param-set uuid=$VM_UUID memory-static-max=6
>
> xe vm-param-set uuid=$VM_UUID memory-dynamic-max=6

It looks like you're trying to set memory to 6GB, right? Are you sure this is
right and it shouldn't instead be...

xe vm-param-set uuid=$VM_UUID memory-static-max=6144
xe vm-param-set uuid=$VM_UUID memory-dynamic-max=6144

I know when I'm creating VMs, I have a _BAD_ habit of specifying "8" instead
of "8192" for 8GB in memory lines. If the memory-static-max argument is in MB
and not GB, then you may have the same problem. I had one VM where I specified
8MB; the VM started, but it looked like it froze, which could explain your
issue.

Mel
--
Melody Bliss
Usenix, SAGE and LOPSA Charter Member
Patron Member of the NRA
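A hedged follow-up to Mel's point: on at least some XCP/XenServer releases the xe memory parameters are interpreted as bytes, in which case even 6144 would be far too small. Under that assumption, setting 6 GiB explicitly in bytes would look roughly like the sketch below (static-max first, since dynamic-max may not exceed it). This is not a confirmed fix; checking the unit on the installed XCP version first is safer.

# Assumption: memory parameters are in bytes; 6 GiB = 6442450944 bytes
xe vm-param-set uuid=$VM_UUID memory-static-max=6442450944
xe vm-param-set uuid=$VM_UUID memory-dynamic-max=6442450944
xe vm-param-set uuid=$VM_UUID memory-dynamic-min=6442450944

# Sanity check against a VM that is known to work:
xe vm-param-get uuid=$VM_UUID param-name=memory-static-max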
Hi,

When I test the IO bandwidth it's much slower on the DomU:

Dom0 read : 180MB/s write : 60MB/s
DomU read : 40MB/s write : 6MB/s

The main storage is a soft RAID5 array of 5 SATA2 disks at 7200rpm.
I also tested with a physical partition on a single disk and it's about the
same (without raid and without lvm).

DomU disks are Dom0 logical volumes, I use paravirtualized guests, and the fs
type is ext4.
I already tried the ext4 options barrier=0,data=writeback; it doesn't really
change anything. I tried ext2 and ext3 too; it's the same.

Is this normal? If not, what do you think the problem is?

Thanks.

dist : debian squeeze
xen : Xen 4.0.1
kernel Dom0 & DomU : 2.6.32-5-xen-amd64
FS : ext4
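The post doesn't say which tool produced the MB/s figures. One common way to get comparable raw numbers in both dom0 and domU is dd with direct I/O, which bypasses the page cache; the sketch below is only illustrative, and the device and file names are examples rather than anything taken from the thread.

# In dom0: sequential read of the LV backing the guest (read-only, safe):
dd if=/dev/vg0/p2p of=/dev/null bs=1M count=1024 iflag=direct

# In dom0 or domU: sequential write to a scratch file on a mounted filesystem:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct conv=fsync

Running the same two commands inside the domU (against /dev/xvda and a scratch file on its filesystem) gives figures that can be compared like-for-like with dom0.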
On Wednesday 16 March 2011 23:31:31 Erwan RENIER wrote:
> Hi,
> When i test the IO bandwidth it's pretty much slower on DomU :
>
> Dom0 read : 180MB/s write : 60MB/s
> DomU read : 40MB/s write : 6MB/s

Just did the same tests on my installation (not yet on Xen4):

Dom0:
# hdparm -Tt /dev/md5
/dev/md5:
 Timing cached reads:   6790 MB in  1.99 seconds = 3403.52 MB/sec
 Timing buffered disk reads: 1294 MB in  3.00 seconds = 430.94 MB/sec

(md5 = 6-disk RAID-5 software raid)

# hdparm -Tt /dev/vg/domU_sdb1
/dev/vg/domU_sdb1:
 Timing cached reads:   6170 MB in  2.00 seconds = 3091.21 MB/sec
 Timing buffered disk reads: 1222 MB in  3.00 seconds = 407.24 MB/sec

DomU:
# hdparm -Tt /dev/sdb1
/dev/sdb1:
 Timing cached reads:   7504 MB in  1.99 seconds = 3761.93 MB/sec
 Timing buffered disk reads:  792 MB in  3.00 seconds = 263.98 MB/sec

Like you, I do see some drop in performance, but not as severe as you are
experiencing.

> DomU disks are Dom0 logical volumes, i use paravirtualized guests, the
> fs type is ext4.

How do you pass the disks to the domU?
I pass them as such:
disk = ['phy:vg/domU_sda1,sda1,w',
(rest of the partitions removed for clarity)

> I already tried the ext4 options barrier=0,data=writeback , it doesn't
> really change anything.
> I tried too with ext2 and ext3 , it's the same.

To avoid any "issues" with the filesystem, what does "hdparm -Tt <device>"
give you?

> Is this normal ?

Some drop, yes. Losing 90% of the performance isn't.

> If not what do you think the problem is?

Either you are hitting a bug or it's a configuration issue.
What is the configuration for your domU? And specifically the way you pass
the LVs to the domU.

--
Joost Roeleveld
On 17/03/2011 09:31, Joost Roeleveld wrote:
> On Wednesday 16 March 2011 23:31:31 Erwan RENIER wrote:
>> Hi,
>> When i test the IO bandwidth it's pretty much slower on DomU :
>>
>> Dom0 read : 180MB/s write : 60MB/s
>> DomU read : 40MB/s write : 6MB/s
> [...]
> Like you, I do see some drop in performance, but not as severe as you are
> experiencing.
>
>> DomU disks are Dom0 logical volumes, i use paravirtualized guests, the
>> fs type is ext4.
> How do you pass the disks to the domU?
> I pass them as such:
> disk = ['phy:vg/domU_sda1,sda1,w',
> (rest of the partitions removed for clarity)

My DomU conf is like this:

kernel = "vmlinuz-2.6.32-5-xen-amd64"
ramdisk = "initrd.img-2.6.32-5-xen-amd64"
root = "/dev/mapper/pvops-root"
memory = "512"
disk = [ 'phy:vg0/p2p,xvda,w' , 'phy:vg0/mmd,xvdb1,w', 'phy:sde3,xvdb2,w' ]
vif = [ 'bridge=eth0' ]
vfb = [ 'type=vnc,vnclisten=0.0.0.0' ]
keymap = 'fr'
serial = 'pty'
vcpus = 2
on_reboot = 'restart'
on_crash = 'restart'

> I already tried the ext4 options barrier=0,data=writeback , it doesn't
> really change anything.
> I tried too with ext2 and ext3 , it's the same.
> To avoid any "issues" with the filesystem, what does "hdparm -Tt <device>"
> give you?

Dom0:

/dev/sde (single disk):
 Timing cached reads:   6086 MB in  2.00 seconds = 3050.54 MB/sec
 Timing buffered disk reads:  270 MB in  3.01 seconds =  89.81 MB/sec

/dev/md127 (raid 5 of 5 disks):
 Timing cached reads:   6708 MB in  1.99 seconds = 3362.95 MB/sec
 Timing buffered disk reads: 1092 MB in  3.00 seconds = 363.96 MB/sec

DomU:

/dev/xvda:
 Timing cached reads:   5648 MB in  2.00 seconds = 2830.78 MB/sec
 Timing buffered disk reads:  292 MB in  3.01 seconds =  97.16 MB/sec

/dev/xvda2:
 Timing cached reads:   5542 MB in  2.00 seconds = 2777.66 MB/sec
 Timing buffered disk reads:  274 MB in  3.01 seconds =  90.94 MB/sec

/dev/xvdb1:
 Timing cached reads:   5526 MB in  2.00 seconds = 2769.20 MB/sec
 Timing buffered disk reads:  196 MB in  3.02 seconds =  64.85 MB/sec

/dev/xvdb2:
 Timing cached reads:   5334 MB in  2.00 seconds = 2672.47 MB/sec
 Timing buffered disk reads:  166 MB in  3.03 seconds =  54.70 MB/sec

> Is this normal ?
> Some drop, yes. Losing 90% performance isn't
>
> If not what do you think the problem is?
> Either you are hitting a bug or it's a configuration issue.
> What is the configuration for your domU? And specifically the way you
> pass the LVs to the domU.

As you can see:
xvda is an LV exported as a whole disk with LVM on it, so xvda2 is an LV from
a VG in an LV ( ext4 => lv => vg => pv => virtual disk => lv => vg => pv =>
raid5 => disk )
xvdb1 is an LV exported as a partition ( ext4 => virtual part => lv => vg =>
pv => raid5 => disk )
xvdb2 is a physical partition exported as a partition ( ext3 => virtual part
=> disk )

Curiously, it seems the more complicated, the better it is :/

Thanks.
On Thursday 17 March 2011 18:31:10 Erwan RENIER wrote:
> On 17/03/2011 09:31, Joost Roeleveld wrote:
> [...]
> My DomU conf is like this:
> kernel = "vmlinuz-2.6.32-5-xen-amd64"
> ramdisk = "initrd.img-2.6.32-5-xen-amd64"
> root = "/dev/mapper/pvops-root"
> memory = "512"
> disk = [ 'phy:vg0/p2p,xvda,w' , 'phy:vg0/mmd,xvdb1,w', 'phy:sde3,xvdb2,w' ]
> vif = [ 'bridge=eth0' ]
> vfb = [ 'type=vnc,vnclisten=0.0.0.0' ]
> keymap = 'fr'
> serial = 'pty'
> vcpus = 2
> on_reboot = 'restart'
> on_crash = 'restart'

Seems ok to me.
Did you pin the dom0 to a dedicated cpu-core?

> > Either you are hitting a bug or it's a configuration issue.
> > What is the configuration for your domU? And specifically the way you
> > pass the LVs to the domU.
>
> As you can see:
> xvda is an LV exported as a whole disk with LVM on it, so xvda2 is an LV
> from a VG in an LV ( ext4 => lv => vg => pv => virtual disk => lv => vg
> => pv => raid5 => disk )
> xvdb1 is an LV exported as a partition ( ext4 => virtual part => lv => vg
> => pv => raid5 => disk )
> xvdb2 is a physical partition exported as a partition ( ext3 => virtual
> part => disk )
>
> Curiously it seems the more complicated, the better it is :/

Yes, it does seem that way. Am wondering if adding more layers increases the
amount of in-memory-caching, which then leads to a higher "perceived"
performance.

One other thing, I don't use "xvd*" for the device-names, but am still using
"sd*". Wonder if that changes the way things behave internally?
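The thread doesn't show how the dom0 pinning would actually be done, so the following is only an assumption-laden sketch: one common approach is to give dom0 a single pinned vCPU on the Xen boot line and keep the guests off that CPU. The hypervisor image name and the CPU range are assumptions (Debian squeeze hypervisor, 8-core host), not details from the thread.

# Xen boot parameters in the bootloader entry (assumed image name):
#   kernel /boot/xen-4.0-amd64.gz dom0_max_vcpus=1 dom0_vcpus_pin

# Runtime alternative: pin dom0's vCPU 0 to physical CPU 0
xm vcpu-pin Domain-0 0 0

# And in the domU config, restrict the guest to the remaining CPUs,
# e.g. on an 8-core box:
# cpus = "1-7"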
On 18/03/2011 09:00, Joost Roeleveld wrote:
> On Thursday 17 March 2011 18:31:10 Erwan RENIER wrote:
> [...]
> Seems ok to me.
> Did you pin the dom0 to a dedicated cpu-core?

Nope.

> [...]
> Yes, it does seem that way. Am wondering if adding more layers increases
> the amount of in-memory-caching, which then leads to a higher "perceived"
> performance.
>
> One other thing, I don't use "xvd*" for the device-names, but am still
> using "sd*". Wonder if that changes the way things behave internally?

It doesn't change with sd*.
I noticed that the cpu io wait occurs in the domU; nothing happens in dom0.

Does someone know a way to debug this? At kernel level or in the hypervisor?
By the way, how do I get the hypervisor activity? I don't think it appears
in dom0.
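Not an answer given in the thread, but a hedged pointer for the "how do I see hypervisor-side activity" question: xentop in dom0 shows per-domain CPU and block-I/O counters as accounted by the hypervisor, and iostat (from the sysstat package) shows whether the backing devices in dom0 are actually busy while the domU is stuck in iowait.

xentop -d 1     # per-domain CPU%, VBD read/write counts, refreshed every second
iostat -x 1     # per-device utilisation and await times as seen in dom0
xm dmesg        # hypervisor boot and diagnostic messages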
I've found that my motherboard with the AMD 890GX chipset doesn't support
IOMMU virtualisation ( (XEN) I/O virtualisation disabled ).

Can you tell me if yours supports it? ( xm dmesg | grep 'I/O virtualisation' )

Thanks

On 18/03/2011 19:14, Erwan RENIER wrote:
[...]
On Monday 28 March 2011 20:10:40 you wrote:
> On 28/03/2011 11:59, Joost Roeleveld wrote:
> > One thing I do have, though, is dedicating a single core to dom0.
> > This avoids the situation that dom0 has to wait for an available core.
> >
> > The advantage is that dom0 will always have CPU-resources available and
> > this will speed up I/O activities, as it is dom0 that is involved in all
> > the disk-access.
>
> i tried with a dedicated cpu but it doesn't change

Hmm... then I'm at the end of my ideas here, I'm afraid.

When I get round to upgrading to Xen 4.x, I'll do a performance test to see
if I get the same. But I'd rather not play around with the production system.

--
Joost