Hello,

Queries:

1 - What is the best RAID level (0, 1, 5, 10, 50) for a server running many, many VM instances?

2 - And what stripe/element size should I configure (32, 64, 128, 256, etc. KB)?

3 - In the VM configuration, is there any performance difference between an image file (disk:/) and an LVM partition (phy:/)?

Any URLs or docs to read on this topic?

mmm.. that is all for the moment. Thanks.

--
Victor Hugo dos Santos
Linux Counter #224399

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On Thu, 24 Jul 2008 19:07:18 +0200, Victor Hugo dos Santos <listas.vhs@gmail.com> wrote:

> Hello,
>
> Queries:
>
> 1 - What is the best RAID level (0, 1, 5, 10, 50) for a server running
> many, many VM instances?
>
> 2 - And what stripe/element size should I configure (32, 64, 128, 256,
> etc. KB)?

I guess it depends more on the kind of disk activity of the VMs than on the number of VMs. Database server? Desktop? Media streamer?

As you may know, RAID 5 is the all-around performer, and large stripes give higher performance on large files.

> 3 - In the VM configuration, is there any performance difference between
> an image file (disk:/) and an LVM partition (phy:/)?

I guess you mean file:/ instead of disk:/.

In theory, using a partition is faster, since it avoids the filesystem overhead inherent in using a file.

Experience-wise, 'tap:aio' performs better than 'file' while doing the same job; have a look at it.
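For reference, the three attachment methods mentioned above look like this in a domU config file. This is only a sketch with made-up paths and a made-up guest name, not a recommendation for any particular setup:

```python
# Xen domU config files use Python syntax. Three ways (hypothetical
# paths) to attach the same guest disk as xvda:

# Loopback-mounted image file -- simplest, but adds filesystem plus
# loop-device overhead in Dom0:
disk_file = ['file:/var/xen/guest1.img,xvda,w']

# The same image file served through blktap with asynchronous I/O --
# generally performs better than plain 'file':
disk_tap = ['tap:aio:/var/xen/guest1.img,xvda,w']

# An LVM logical volume exported as a physical device -- no image-file
# overhead at all:
disk_phy = ['phy:/dev/vg0/guest1,xvda,w']
```

In a real config only one `disk = [...]` assignment would be present; the three variables here just show the three prefixes side by side.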
On Thu, Jul 24, 2008 at 1:39 PM, Fernando Jiménez Solano <fernandojs@alumnos.upm.es> wrote:

> On Thu, 24 Jul 2008 19:07:18 +0200, Victor Hugo dos Santos
> <listas.vhs@gmail.com> wrote:

[...]

>> 1 - What is the best RAID level (0, 1, 5, 10, 50) for a server running
>> many, many VM instances?
>>
>> 2 - And what stripe/element size should I configure (32, 64, 128, 256,
>> etc. KB)?
>
> I guess it depends more on the kind of disk activity of the VMs than on
> the number of VMs. Database server? Desktop? Media streamer?

mmmm.. but aren't all the virtual disks just one big image as far as Dom0 is concerned? In other words, is the type of data inside the virtual disks really relevant?

> As you may know, RAID 5 is the all-around performer, and large stripes
> give higher performance on large files.
>
>> 3 - In the VM configuration, is there any performance difference between
>> an image file (disk:/) and an LVM partition (phy:/)?
>
> I guess you mean file:/ instead of disk:/.
>
> In theory, using a partition is faster, since it avoids the filesystem
> overhead inherent in using a file.
>
> Experience-wise, 'tap:aio' performs better than 'file' while doing the
> same job; have a look at it.

Sorry.. the correct one is file:/, not disk:/.

Thanks.

--
Victor Hugo dos Santos
Linux Counter #224399
On Thu, Jul 24, 2008 at 1:49 PM, Victor Hugo dos Santos <listas.vhs@gmail.com> wrote:

> On Thu, Jul 24, 2008 at 1:39 PM, Fernando Jiménez Solano
> <fernandojs@alumnos.upm.es> wrote:
>> I guess it depends more on the kind of disk activity of the VMs than on
>> the number of VMs. Database server? Desktop? Media streamer?
>
> mmmm.. but aren't all the virtual disks just one big image as far as Dom0
> is concerned? In other words, is the type of data inside the virtual
> disks really relevant?

Not the "type of data", but the "type of access". IOW, if your DomUs store a lot of small files, they'll do lots of small reads and writes. Even if the image is one big file, or one partition, the accesses will still be fine-grained.

--
Javier
On Thu, 24 Jul 2008 20:49:49 +0200, Victor Hugo dos Santos <listas.vhs@gmail.com> wrote:

> On Thu, Jul 24, 2008 at 1:39 PM, Fernando Jiménez Solano
> <fernandojs@alumnos.upm.es> wrote:
>> I guess it depends more on the kind of disk activity of the VMs than on
>> the number of VMs. Database server? Desktop? Media streamer?
>
> mmmm.. but aren't all the virtual disks just one big image as far as Dom0
> is concerned? In other words, is the type of data inside the virtual
> disks really relevant?

It's not the file or data itself but how the data is accessed. Large stripes tend to perform better on large files because large files are usually read sequentially, in a non-latency-critical manner. This isn't true for images: they're accessed just like the underlying filesystem is (with some overhead). That is, rather randomly.
On Thu, Jul 24, 2008 at 19:49, Victor Hugo dos Santos <listas.vhs@gmail.com> wrote:

> mmmm.. but aren't all the virtual disks just one big image as far as Dom0
> is concerned? In other words, is the type of data inside the virtual
> disks really relevant?

The problem is that RAID5 (and by extension RAID50) is relatively poor for writes, but good for reads.

Why? Because to write to one disk you have to read from all the other disks (except one) in the RAID5 group to compute the parity, write to that first disk, and then write the parity to the last disk. This means that each write involves every disk in the group (a minimum of 3). RAID50 reduces the impact because, for the same total number of disks, each group is smaller.

RAID1 (and by extension RAID10) is good for writes, as each write only involves 2 disks: no reads, and only 2 writes. On the other hand, for the same number of disks, RAID5 (and RAID50) can be faster on sustained reads. This is because in any disk group the reads can be spread across more disks, reducing delays.

Others have covered the impact of stripe sizes.

--
Please keep list traffic on the list.

Rob MacGregor
Whoever fights monsters should see to it that in the process he doesn't become a monster. Friedrich Nietzsche
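Rob's per-write accounting can be sketched as a toy cost model. This is an illustrative simplification of the reconstruct-write scheme he describes, not how any particular controller is implemented (many real controllers instead do a read-modify-write of old data plus old parity, which costs a flat 4 I/Os):

```python
def small_write_ios(level, group_size):
    """Disk I/Os needed to service one small (sub-stripe) write."""
    if level in ("raid1", "raid10"):
        # Mirror: just write the block to both copies, no reads.
        return 2
    if level in ("raid5", "raid50"):
        # Reconstruct-write, as described above: read every other data
        # disk in the group (group_size - 2 reads), then write the data
        # disk and the parity disk (2 writes).
        return (group_size - 2) + 2
    raise ValueError("unknown RAID level: %s" % level)

# Eight disks as one RAID5 group, vs. RAID50 with two 4-disk groups,
# vs. RAID10 (mirrored pairs):
print(small_write_ios("raid5", 8))    # 8 I/Os per small write
print(small_write_ios("raid50", 4))   # 4 I/Os (smaller groups help)
print(small_write_ios("raid10", 2))   # 2 I/Os
```

The model shows why, for the same eight disks, RAID50's smaller groups soften the write penalty and RAID10 avoids it entirely.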
On Thu, Jul 24, 2008 at 3:20 PM, Fernando Jiménez Solano <fernandojs@alumnos.upm.es> wrote:

> It's not the file or data itself but how the data is accessed. Large
> stripes tend to perform better on large files because large files are
> usually read sequentially, in a non-latency-critical manner. This isn't
> true for images: they're accessed just like the underlying filesystem is
> (with some overhead). That is, rather randomly.

OK, it is very clear now. Thanks.

--
Victor Hugo dos Santos
Linux Counter #224399
> 3 - In the VM configuration, is there any performance difference between
> an image file (disk:/) and an LVM partition (phy:/)?

I'm running extensive lab-based tests of various VM platforms and configurations. It won't be done for months, but my testing of file vs. LVM disk images is nearly complete. I was expecting to see drastic differences in speed between the two, but was fairly surprised that the bonnie++ numbers were usually within 10% of each other at most. LVM is faster overall, but not by the amount I would have expected; in half the cases the 10 GB file was as fast as LVM. Disk I/O of a VM accessing an LVM volume was about another 10% below the Dom0 accessing the same volume.

Grant McWilliams
On Thursday 24 July 2008, Grant McWilliams wrote:

> I'm running extensive lab-based tests of various VM platforms and
> configurations. It won't be done for months, but my testing of file vs.
> LVM disk images is nearly complete. I was expecting to see drastic
> differences in speed between the two, but was fairly surprised that the
> bonnie++ numbers were usually within 10% of each other at most.

Does the choice of filesystem holding the image files make any difference? I guess ext3 would be among the best, but only if the data isn't journalled. It might also be nice to add to the OCFS2 vs. GFS rivalry.

--
Javier
Hello,

I have a problem with the clock on my virtual machines (domU): the time is wrong both with independent_wallclock set to 0 and set to 1.

With every reboot/shutdown of the domU, the hwclock timing always gets screwed up. :(

Any advice, anyone?

Best Regards,
Choon Kiat
On Thu, Jul 24, 2008 at 11:11 PM, Choon Kiat <choonkiat@hwzcorp.com> wrote:

> Hello,
>
> I have a problem with the clock on my virtual machines (domU): the time
> is wrong both with independent_wallclock set to 0 and set to 1.
>
> With every reboot/shutdown of the domU, the hwclock timing always gets
> screwed up. :(
>
> Any advice, anyone?

Spend some time searching xen.markmail.org. In particular, Dan Magenheimer on the xen-devel list has been looking a lot at clock skew.

Getting a better sense of the problem may help us give you more hints, and it may also help the developers. The versions of Xen and the guest kernels would also be helpful for our diagnostics.

Cheers,
Todd

--
Todd Deshane
http://todddeshane.net
check out our book: http://runningxen.com
On Thu, Jul 24, 2008 at 7:41 PM, Javier Guerra Giraldez <javier@guerrag.com> wrote:

> Does the choice of filesystem holding the image files make any
> difference? I guess ext3 would be among the best, but only if the data
> isn't journalled. It might also be nice to add to the OCFS2 vs. GFS
> rivalry.

So far, OCFS2 and GFS are dog slow under Xen. Normally you take a fairly decent hit when you use either of these instead of a local filesystem, but under Xen the performance hit is unacceptable. If anyone has gotten either to work at a decent pace under Xen, I'd like to know the methodology.

Grant

--
Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
On 7/25/08, Todd Deshane <deshantm@gmail.com> wrote:

> On Thu, Jul 24, 2008 at 11:11 PM, Choon Kiat <choonkiat@hwzcorp.com> wrote:
>> Hello,
>>
>> I have a problem with the clock on my virtual machines (domU): the time
>> is wrong both with independent_wallclock set to 0 and set to 1.
>>
>> With every reboot/shutdown of the domU, the hwclock timing always gets
>> screwed up. :(
>>
>> Any advice, anyone?

Wouldn't it make sense to use ntp here? You can configure ntp on your domUs to auto-correct the time.

- Chirag
Hi,

I am using NTP on the domU. The system time is okay; it is the hwclock that is screwing me up badly.

Choon Kiat

From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Linux Lover
Sent: Friday, July 25, 2008 6:30 PM
To: xen-users@lists.xensource.com
Subject: Re: [Xen-users] hwclock problems in domU

> Wouldn't it make sense to use ntp here? You can configure ntp on your
> domUs to auto-correct the time.
>
> - Chirag
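The distinction above (NTP disciplines the system clock, while hwclock reads the separate RTC) suggests one common workaround: push the NTP-corrected system time back into the hardware clock. The commands below are an illustrative configuration sketch for a Linux domU run as root, not a verified fix for this particular skew, and on many PV guests the "hardware clock" is itself virtual, so whether the setting sticks across a reboot depends on the hypervisor version:

```shell
# Let the domU keep its own wallclock instead of inheriting dom0's:
echo 1 > /proc/sys/xen/independent_wallclock

# Step the system clock once from an NTP server (a running ntpd keeps
# it in sync afterwards):
ntpdate pool.ntp.org

# Copy the now-correct system time into the hardware clock so it is
# sane at the next boot:
hwclock --systohc
```

Many distributions also run `hwclock --systohc` automatically from their shutdown scripts, which achieves the same effect without manual intervention.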