Hello,

I'm setting up a Xen system and I have different choices for creating the domU's partitions: raw partition, LVM, files. I've done some tests with hdparm and they all seem to perform about the same. Can anyone please share with me which is the best method?

Best regards,
Luis
> I'm setting up a Xen system and I have different choices for
> creating the domU's partitions: raw partition, LVM, files.
>
> I've done some tests with hdparm and they all seem to perform about the same.

It can look that way if you have enough CPU in dom0. The interesting question then is: how much does each of those cost in dom0 CPU? The fewer layers you have, the better it should behave. A raw partition should be the cheapest, closely followed by LVM and lastly files (which could give different results depending on the filesystem in use: ext2, ext3, reiser, xfs, etc.). I'm not sure the difference would be very significant; it probably depends on usage, number of domUs, ...

BR,

--
Sylvain COUTANT

ADVISEO
http://www.adviseo.fr/
http://www.open-sp.fr/
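For reference, the three back-ends discussed here map to different "disk =" lines in the domU config file (xm syntax, which is Python). This is only a minimal sketch; the image path, partition, volume group and device names below are invented for illustration:

    # file-backed VBD: a disk image looped through dom0's filesystem layer
    disk = [ 'file:/var/xen/domU1-root.img,sda1,w' ]

    # raw partition: the guest is handed a dom0 partition with no extra layers
    disk = [ 'phy:sda7,sda1,w' ]

    # LVM logical volume: one thin layer over the raw disk, much easier to resize and manage
    disk = [ 'phy:/dev/vg0/domU1-root,sda1,w' ]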
I have a question which may complement your question... What is the best file back-end VBD for stability and performance? If I use a SAN to store the data: NFS, GFS?

Thanks
File-backed VBDs can have pretty poor performance in some cases. Also, you *mustn't* put a ReiserFS filesystem into a file which is stored on a ReiserFS filesystem in the host. It may break when you fsck the host [1].

LVM is probably the best compromise of flexibility and performance; it should be pretty much as fast as partitions whilst being a lot more manageable. You can also snapshot guest disks using it, but be careful - snapshots don't scale that well if you use them long term, and you can run out of RAM.

Cheers,
Mark

[1] ReiserFS fsck looks for Reiser superblocks on the disk, in case they've got lost or something. If you're fsck-ing dom0's filesystem using fsck.reiserfs and it finds the superblock from a domU filesystem there, it'll assume there's corruption and then proceed to do horrible things. Don't go there.

--
Dave: Just a question. What use is a unicycle with no seat? And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!
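To illustrate the snapshot point, here is a minimal Python sketch of taking and dropping a snapshot of a guest's logical volume from dom0. The vg0/domU1-root names are hypothetical, and the snapshot size is only a guess at how much the origin will change while the snapshot exists:

    # snapshot_domu.py - hedged sketch; vg0/domU1-root is an invented LV name
    import subprocess

    def create_snapshot(vg, lv, size="1G"):
        # Create a copy-on-write snapshot of the guest's logical volume.
        # It only consumes space as the origin LV changes, so size it for
        # the write activity expected while the snapshot is kept around.
        subprocess.check_call(
            ["lvcreate", "--snapshot", "--size", size,
             "--name", lv + "-snap", "/dev/%s/%s" % (vg, lv)])

    def remove_snapshot(vg, lv):
        # Drop the snapshot once finished with it; keeping snapshots
        # long term is exactly the case warned about above.
        subprocess.check_call(["lvremove", "-f", "/dev/%s/%s-snap" % (vg, lv)])

    if __name__ == "__main__":
        create_snapshot("vg0", "domU1-root")
        # ... back up or copy /dev/vg0/domU1-root-snap here ...
        remove_snapshot("vg0", "domU1-root")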
> I have a question which may complement your question... What is the best
> file back-end VBD for stability and performance? If I use a SAN to
> store the data: NFS, GFS?

You mean what's the best way to store domU filesystems in file-backed VBDs?

It's best if you don't put file VBDs onto NFS. Both NFS and the loopback block driver use a load of RAM, and they tend to OOM when used together. Also, the performance will probably be bad.

SANs are nice ;-) There have been threads recently on using GFS to store guest disk files. That has the advantage of making live migration easier.

Better performing might be to run CLVM on the SAN so that all dom0s can see logical volumes stored there, then use logical volumes directly. Lower overhead than using the filesystem layer. I've not tried this, so it's speculation on my part ;-)

Cheers,
Mark
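To make the CLVM idea concrete, here is a sketch of what the guest's disk lines could look like (the clustered volume group and LV names are invented). Because every dom0 attached to the SAN sees the same device paths, the same config fragment works on whichever host the guest runs on, which is what makes migration straightforward:

    # identical fragment on every dom0 attached to the SAN (xm syntax)
    disk = [ 'phy:/dev/cluster_vg/domU1-root,sda1,w',
             'phy:/dev/cluster_vg/domU1-swap,sda2,w' ]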
On Tue, 25 Apr 2006, Mark Williamson wrote:
> Better performing might be to run CLVM on the SAN so that all dom0s can
> see logical volumes stored there, then use logical volumes directly.
> Lower overhead than using the filesystem layer.

That's what I've been doing - works great. :)

--
nate carlson | natecars@natecarlson.com | http://www.natecarlson.com
depriving some poor village of its idiot since 1981
Nate Carlson wrote:
> On Tue, 25 Apr 2006, Mark Williamson wrote:
>> Better performing might be to run CLVM on the SAN so that all dom0s
>> can see logical volumes stored there, then use logical volumes
>> directly. Lower overhead than using the filesystem layer.
>
> That's what I've been doing - works great. :)

Do you have the same LVs mounted by multiple VMs at the same time?

I use GFS on top of LVM and performance is splendid. It's near raw speed, although I don't really have many concurrent writes to the same files that would stress GFS very much.

--
Christopher G. Stach II
Could you explain your setup?

For example:

SAN-CORAID -> SERVER (AOE, LVM, NFS) -> Server #1 XEN
                                     -> Server #2 XEN

Thanks
Michael Lessard wrote:
> Could you explain your setup?
>
> For example:
>
> SAN-CORAID -> SERVER (AOE, LVM, NFS) -> Server #1 XEN
>                                      -> Server #2 XEN

The machine in question is a standalone box and no migration is involved (at the moment). It's a prototype test environment for our production systems, and bits of it may be used in the next iteration of the production environment. It's simply Ultra320 SCSI RAID 5 (should be RAID 10) inside a single server.

Domain0 (deadline scheduler) has its typical partitions for /boot, swap, /, /usr, /var. It also has a partition for LVM, inside of which are all of the LVs for each domU. They mostly contain ext3 filesystems, but there's one swap LV for each VM and a few raw LVs for MySQL. A few of those LVs are GFS and shared between multiple groups of VMs. If I had a SAN to use for this, it would look the same, except I'd use CLVM.

--
Christopher G. Stach II
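As a rough illustration of that layout, one guest's disk list might look something like the following. This is a sketch only; the volume group, LV and device names are invented, and 'w!' is the xm mode for forcing a writable volume to be shareable between domains:

    # hypothetical domU config fragment matching the layout described above
    disk = [ 'phy:/dev/vg0/web1-root,sda1,w',     # ext3 root LV for this VM
             'phy:/dev/vg0/web1-swap,sda2,w',     # per-VM swap LV
             'phy:/dev/vg0/web1-mysql,sdb1,w',    # raw LV handed to MySQL
             'phy:/dev/vg0/shared-gfs,sdc1,w!' ]  # GFS LV shared with other VMs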
Ok, so I will use LVM for the data.

Does anyone have any numbers for the performance of Xen with some of these choices? I need to present them to my boss.

Thanks for your help.

Luis
"Sylvain Coutant" <sco@adviseo.fr> wrote on 04/25/2006 07:47:09 AM:> > I''m setting up a Xen system since I have diferent choices to > > create the domU''s partitions: raw partition, lvm, files. > > > > I''ve done some tests with hdparm and it all seems to be the same. > > It can if you have enough CPU in dom0. The interesting point could > then be : how much each of those cost in CPU share in dom0 ? The > less layers you''ll have, the better it should behave. Raw partition > should be the cheapest, closely followed by LVM and lastly files > (which could have different results depending on fs in use: ext2, > ext3, reiser, xfs, etc.). I''m not sure if the difference would be > very significant. Probably, it depends on usage, number of domUs, ...Also be aware that the devices go through the buffer cache on dom0. I would not be surprised that the performance to a partition, LVM volume, or loop device is not that different, since they all hit the buffer cache, especially if your tests do mostly reads. Steve D. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
> Also be aware that the devices go through the buffer cache on dom0. I
> would not be surprised if the performance to a partition, LVM volume, or
> loop device is not that different, since they all hit the buffer cache,
> especially if your tests do mostly reads.

I'm pretty sure the blkback driver bypasses the buffer cache... Caching in dom0 might occur when using the loop device (because the stack includes the filesystem layer), but it shouldn't with the others.

Cheers,
Mark
Steve Dobbelstein wrote:
> Also be aware that the devices go through the buffer cache on dom0. I
> would not be surprised if the performance to a partition, LVM volume, or
> loop device is not that different, since they all hit the buffer cache,
> especially if your tests do mostly reads.

I have to disagree here. A physical partition or LVM volume gets passed directly to the domU and bypasses the dom0 buffer cache. Since a loopback image file resides on the host's filesystem, it will be cached by dom0.

Thanks,
Matt Ayres
Matt Ayres <matta@tektonic.net> wrote on 04/26/2006 09:49:58 AM:

> I have to disagree here. A physical partition or LVM volume gets passed
> directly to the domU and bypasses the dom0 buffer cache. Since a loopback
> image file resides on the host's filesystem, it will be cached by dom0.

Thanks, Matt and Mark, for the correction. What you state is true for paravirtualized domains (the open of the device is done in the kernel). HVM domains, however, open the device in user space, which means the I/O goes through the VFS cache. I have been spending most of my time with HVM domains and incorrectly assumed that all domains behave that way.

Steve D.
> Thanks, Matt and Mark, for the correction. What you state is true for
> paravirtualized domains (the open of the device is done in the kernel).
> HVM domains, however, open the device in user space, which means the I/O
> goes through the VFS cache. I have been spending most of my time with HVM
> domains and incorrectly assumed that all domains behave that way.

Good point; I spend most (well, all) of my time with paravirt domains, so I forgot about the way HVM works ;-)

When - eventually - the device models have been moved out of dom0 userspace, they'll go through the backend drivers and the behaviour will be the same as for paravirt. That doesn't look like it's going to happen for a while yet, though.

Cheers,
Mark