Which is the easiest to manage with multiple (let's say hundreds of) Xen VMs without sacrificing performance, and why? What are the pros and cons of each? From my research, iSCSI seems the way to go here, but all the SAN/NAS vendors I've spoken with live and die by *NFS*, which I've had some serious issues with in the past as far as scalability and performance go... Just thought I'd get an outside/unbiased (I hope!) opinion.
On Tue, Feb 2, 2010 at 11:57 AM, Andy Pace <APace@singlehop.com> wrote:
> Which is the easiest to manage with multiple (let's say hundreds of) Xen VMs
> without sacrificing performance, and why?

This topic has been covered several times already. Try searching the list archive; something like this: http://markmail.org/message/4f4oblgqsgqqzu5d

--
Fajar
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Andy Pace
> Sent: Monday, February 01, 2010 11:57 PM
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] iSCSI vs NFS
>
> What are the pros and cons of each? From my research, iSCSI seems the way to
> go here, but all the SAN/NAS vendors I've spoken with live and die by *NFS*,
> which I've had some serious issues with in the past as far as scalability
> and performance go...

Those don't have to be mutually exclusive. From block storage you can easily carve out some filesystems that you export over NFS. And if you really do have hundreds of VMs, I doubt you're going to do this on a single SAN appliance unless I/O is *really* light.

For what it's worth, my money's on AoE rather than iSCSI: fast, simple, and extremely easy to set up and manage. I prefer to run the block protocols in dom0 and export the block devices to the domUs (phy:). Tests show this yields better throughput.

-Jeff
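[Editor's note: for readers unfamiliar with the phy: approach Jeff describes, here is a minimal sketch of a domU configuration whose disk is a block device attached in dom0 (an AoE target, an iSCSI LUN, or an LVM volume carved from either). All names and device paths below are illustrative assumptions, not taken from the original post; xm config files use Python syntax.]

    # /etc/xen/vm01.cfg -- hypothetical example. The backing device could be
    # /dev/etherd/e0.0 (AoE), an iSCSI-attached /dev/sdX, or, as here, an LV
    # carved in dom0 out of network-attached block storage.
    name       = "vm01"
    memory     = 1024
    vcpus      = 1
    bootloader = "/usr/bin/pygrub"
    disk       = ['phy:/dev/vg_san/vm01-root,xvda,w']
    vif        = ['bridge=xenbr0']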
On Tue, Feb 2, 2010 at 6:36 AM, Jeff Sturm <jeff.sturm@eprize.com> wrote:
> Those don't have to be mutually exclusive. From block storage you can
> easily carve out some filesystems that you export over NFS. And if you
> really do have hundreds of VMs, I doubt you're going to do this on a
> single SAN appliance unless I/O is *really* light.
>
> For what it's worth, my money's on AoE rather than iSCSI: fast, simple,
> and extremely easy to set up and manage. I prefer to run the block
> protocols in dom0 and export the block devices to the domUs (phy:).
> Tests show this yields better throughput.
>
> -Jeff

"Tests show". Famous last words... I've been throwing around a lot of ideas in this same vein. I currently have 42 VMs running off the same disk in a classroom environment. Things are fine until everyone starts installing software or formatting their disks at the same time.

From the hearsay I've heard, AoE is the fastest network block storage, but it's still hard to beat NFS. The problem comes when you want more than one VM to access the same storage device and performance goes in the toilet, because the cluster filesystems are very slow. I don't make decisions based on hearsay, however, so in the coming months I'll be testing all combinations of NFS, iSCSI, and AoE with GFS, OCFS, and any other possibility I can find in common kernels. I'll be comparing these to the speed of local disk access via ext3 to see how much of a hit (or advantage?) we take by moving storage out of the box. Of course, to do fast migration the storage has to be somewhere else...

Once testing is done I'll post the numbers. It's amazing how little benchmarking takes place. I did extensive tests on LVM vs. disk files and have still not seen any other numbers on this. Oh well, I guess that will be my contribution.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
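[Editor's note: as a concrete illustration of the kind of comparison Grant describes, here is a rough sequential-write probe. It is only a sketch under assumed paths; serious runs would use bonnie++, fio, or similar, repeated against a file on each backend under test (local ext3, an NFS mount, a filesystem on an iSCSI or AoE LUN).]

    #!/usr/bin/env python
    # Rough sequential-write probe -- a sketch, not a real benchmark.
    # Point it at a file on each backend under test and compare the rates.
    import os, sys, time

    def write_test(path, size_mb=256, block_kb=1024):
        buf = os.urandom(block_kb * 1024)            # one 1 MB block of data
        start = time.time()
        with open(path, 'wb') as f:
            for _ in range(size_mb * 1024 // block_kb):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())                     # force it out to the backend
        return size_mb / (time.time() - start)       # MB/s

    if __name__ == '__main__':
        # Target path is a placeholder; pass the mount point you want to test.
        target = sys.argv[1] if len(sys.argv) > 1 else '/mnt/test/probe.bin'
        print("sequential write: %.1f MB/s" % write_test(target))
        os.remove(target)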
> "Tests show". Famous last words... I've been throwing around a lot of
> ideas in this same vein. I currently have 42 VMs running off the same
> disk in a classroom environment. Things are fine until everyone starts
> installing software or formatting their disks at the same time.
>
> [...]
>
> Once testing is done I'll post the numbers. It's amazing how little
> benchmarking takes place. I did extensive tests on LVM vs. disk files
> and have still not seen any other numbers on this. Oh well, I guess
> that will be my contribution.

Grant,

With what type of drives are you expecting to do the testing? I would be interested in any numbers, but have started to move over to solid-state drives.

Best,

Frank
On Tue, Feb 2, 2010 at 11:14 AM, Frank Pikelner <frank.pikelner@netcraftcommunications.com> wrote:
> Grant,
>
> With what type of drives are you expecting to do the testing? I would be
> interested in any numbers, but have started to move over to solid-state
> drives.
>
> Best,
>
> Frank

My tests will be relative to the network protocols; I'd assume the difference between local storage and network storage will be about the same no matter what the backend hardware is. I too am looking into RAIDs of SSDs for I/O reasons: I need access time more than I need throughput. Drives are getting so large that if I throw eight 1.5 TB drives into an array for speed reasons, I end up with 5x the storage I need for my project. I don't have a problem spending the same and getting less storage if I get more performance. We'll see.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
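[Editor's note: since Grant's concern here is access time rather than throughput, a companion probe along the same lines would time small random reads. Again this is a sketch with assumed paths; note that without O_DIRECT, or dropping the page cache between runs, repeated reads will largely be served from RAM rather than from the drives.]

    #!/usr/bin/env python
    # Rough random-read latency probe -- a sketch for comparing access times
    # (e.g. an SSD array vs. spinning disks, local vs. network-attached).
    # Assumes the test file already exists and is larger than the block size.
    import os, random, sys, time

    def random_read_test(path, reads=1000, block=4096):
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        start = time.time()
        for _ in range(reads):
            os.lseek(fd, random.randrange(0, size - block), os.SEEK_SET)
            os.read(fd, block)
        os.close(fd)
        return (time.time() - start) / reads * 1000.0   # average ms per read

    if __name__ == '__main__':
        # Placeholder path; reuse the file created by the write probe above.
        target = sys.argv[1] if len(sys.argv) > 1 else '/mnt/test/probe.bin'
        print("average random 4 KB read: %.2f ms" % random_read_test(target))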