I have some contract work with a client that is running Ubuntu under CentOS under VMware. This is all on a local box. Why no Xen? Well, I don't have an answer for that. For some reason the client would like us to move their disk-based VM instances to a NetGear 1000 NFS server.

Okay, when you are done laughing, does anyone have a consolidated list of why this is bad? I would like to give them more reasons than the ones I already have for why this is a bad idea.

Gary
On Fri, Jan 30, 2009 at 12:19 PM, Gary W. Smith <gary@primeexalia.com> wrote:
> I have some contract work with a client that is running Ubuntu under
> CentOS under VMware. This is all on a local box. Why no Xen? Well, I
> don't have an answer for that. For some reason the client would like us
> to move their disk-based VM instances to a NetGear 1000 NFS server.

Maybe they read these:
http://storagefoo.blogspot.com/2007/09/vmware-over-nfs.html
http://viroptics.pancamo.com/2007/11/why-vmware-over-netapp-nfs.html

> Okay, when you are done laughing, does anyone have a consolidated list
> of why this is bad? I would like to give them more reasons than the ones
> I already have for why this is a bad idea.

I can only think of two points:
- Performance. Local disk I/O throughput should be higher than NFS given the same disks and VM load.
- Simplicity. NFS mainly buys you simplicity when you have many host servers; from what you describe, I get the idea that there is only one, so that benefit does not apply.

Regards,

Fajar
They have multiple hosts. The problem is they are trying to do it on the cheap with a mid- to low-end NetGear appliance. We have done it in the past with some very beefy NFS servers, but their problem is that for them beefy = money, which they don't want to spend.

We have another one of these NFS servers that we do backups to (rsync), and the performance is such that we can't have more than two servers backing up to it at the same time. So I doubt putting a dozen VM servers (or even Xen servers, for that matter) on it will do very well. We are seeing about 20MB/sec throughput.

Anyway, thanks for the links. I will review them and see what I can do to come up with a compelling argument to get something better.

________________________________
From: xen-users-bounces@lists.xensource.com on behalf of Fajar A. Nugraha
Sent: Fri 1/30/2009 12:03 AM
To: Gary W. Smith
Cc: xen-users@lists.xensource.com
Subject: Re: [Xen-users] storage question
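(A side note not in the original post: a crude way to reproduce that kind of throughput number against an NFS mount; the host name, path and file size are only examples.)

    # mount the appliance somewhere temporary
    mount -t nfs netgear:/backup /mnt/nfstest

    # sequential write of 1 GB, flushed at the end so the page cache
    # does not flatter the result, then read it back; dd reports MB/s
    dd if=/dev/zero of=/mnt/nfstest/testfile bs=1M count=1024 conv=fdatasync
    dd if=/mnt/nfstest/testfile of=/dev/null bs=1M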
> We have another one of these NFS servers that we do backups to
> (rsync), and the performance is such that we can't have more than two
> servers backing up to it at the same time. So I doubt putting a dozen
> VM servers (or even Xen servers, for that matter) on it will do very
> well. We are seeing about 20MB/sec throughput.

Writes and random I/O to devices like that will always be bad, but on the other hand, you could probably make a compelling argument *to* use one, depending on the use case. If all you're doing is low-end web serving, I bet the setup would be fine, as much as the idea of serving root filesystems off of NFS irks me.

At any rate, you can likely build a better-performing NFS box out of commodity components for less money.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
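(Not from the thread: a bare-bones setup for a do-it-yourself NFS box serving VM images might look like this; the network range, host name and paths are placeholders, and sync vs. async is a durability/performance trade-off worth testing rather than a recommendation.)

    # /etc/exports on the commodity NFS server
    /srv/vmstore  192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)

    # mount on each VM host
    mount -t nfs -o rw,hard,intr,rsize=32768,wsize=32768 nfsbox:/srv/vmstore /var/lib/xen/images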
Greetings,

I am looking at iSCSI as a storage option for my VMs, but I am weighing two methods of implementation. Would appreciate feedback... :-)

Xen setup:
- Xen 3.0.3
- CentOS 5.2
- Using CLVM
- Using clustering
- Using GFS for the location of my config files

Option 1:
I present a single iSCSI LUN to the systems in the cluster and then carve it up with LVM for the VMs. The problem with this approach is that cloning a VM eats a lot of network bandwidth. If I use methods other than dd I can throttle it somewhat, but I would like to create the VMs as fast as possible. (A rough sketch of this layout is below.)

Option 2:
I present multiple iSCSI LUNs to the systems in the cluster. I still add them to LVM so I don't have to worry about labeling and such, and so things don't change with the LUN names on reboot. With this option I can use the storage layer (a NetApp-like solution) to clone LUNs, which eliminates the possibility of saturating the network interfaces when cloning VMs. However, I have to rescan the target each time I add a new VM.

I am also trying to avoid using image files, to minimize the impact on performance.

Wondering if anyone has experience with these two options and what they did to optimize. Thanks in advance!
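(Not from the thread itself, just a minimal sketch of what Option 1 might look like on one cluster node; the portal address, target IQN and volume names are invented for illustration.)

    # log in to the single shared LUN (example portal and target)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2009-01.com.example:vmstore -p 192.168.1.50 --login

    # put LVM on top of it and carve out per-VM volumes
    # (with CLVM you would create the VG clustered: vgcreate -c y ...)
    pvcreate /dev/sdb
    vgcreate vg_vms /dev/sdb
    lvcreate -L 10G -n vm01-disk vg_vms

    # cloning a VM this way is a full copy over the iSCSI link,
    # which is where the bandwidth concern comes from
    lvcreate -L 10G -n vm02-disk vg_vms
    dd if=/dev/vg_vms/vm01-disk of=/dev/vg_vms/vm02-disk bs=1M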
On Sun, Feb 1, 2009 at 5:08 AM, Ramon Moreno <rammor1@gmail.com> wrote:
> Option 1:
> I present a single iSCSI LUN to the systems in the cluster and then
> carve it up with LVM for the VMs. The problem with this approach is
> that cloning a VM eats a lot of network bandwidth.

True. But for most iSCSI servers cloning will take a massive amount of resources anyway (whether it's network, disk I/O, or both). Using ionice during the cloning process might help by giving it the lowest I/O priority.

> Option 2:
> I present multiple iSCSI LUNs to the systems in the cluster. I still
> add them to LVM so I don't have to worry about labeling and such, and
> so things don't change with the LUN names on reboot.

I think you can also use /dev/disk/by-path and by-id for that purpose.

> With this option I can use the storage layer (a NetApp-like solution)
> to clone LUNs and such.

If you clone LUNs on the storage/target side, then you can't use LVM on the initiator. The cloning process will copy any LVM label, making the cloned LUN a duplicate PV, which can't be used on the same host.

> This eliminates the possibility of saturating the network
> interfaces when cloning VMs.

How does your iSCSI server (NetApp or whatever) clone a LUN? If it copies data, then you'd still be I/O bound. An exception is a zfs-backed iSCSI server (like OpenSolaris), where cloning requires near-zero I/O thanks to zfs clone.

Note that with option 2 you can also avoid clustering altogether (by putting config files on NFS or synchronizing them manually), which eliminates the need for fencing. This would greatly reduce complexity.

Regards,

Fajar
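(Illustration only, not part of the original reply: the ionice idea and the persistent by-path naming could look roughly like this; the device names and IQN are assumptions.)

    # run the copy in the idle I/O class so running guests are not starved
    # (the idle class needs the CFQ I/O scheduler on the underlying disks)
    ionice -c3 dd if=/dev/vg_vms/vm01-disk of=/dev/vg_vms/vm02-disk bs=1M

    # referring to a LUN by its persistent path in a domU config,
    # so the device name cannot change across reboots or rescans
    disk = [ 'phy:/dev/disk/by-path/ip-192.168.1.50:3260-iscsi-iqn.2009-01.com.example:vm01-lun-0,xvda,w' ]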
On Sun, Feb 01, 2009 at 08:12:23AM +0700, Fajar A. Nugraha wrote:
> On Sun, Feb 1, 2009 at 5:08 AM, Ramon Moreno <rammor1@gmail.com> wrote:
> > Option 1:
> > I present a single iSCSI LUN to the systems in the cluster and then
> > carve it up with LVM for the VMs. The problem with this approach is
> > that cloning a VM eats a lot of network bandwidth.
>
> True. But for most iSCSI servers cloning will take a massive amount of
> resources anyway (whether it's network, disk I/O, or both). Using
> ionice during the cloning process might help by giving it the lowest
> I/O priority.

And on some storage arrays it just takes a second or two, no matter whether the volume is huge or small :) Equallogic does that, for example: the data is not really copied, since everything is virtualized.

-- Pasi
Fajar,

Thanks for the info.

I think option 2 sounds most attractive. I would like to get rid of the GFS filesystem, so going with NFS for the config files is a great idea.

As far as the clustering goes, I use the software mainly for the following reasons:

* A global view for the redistribution of resources.
* Automated failover.

Since I am looking at 20 nodes per cluster, I need a more global view of how things look, and if I become resource constrained, I would like the clustering software to make the failover decision based on available resources. The only thing I oversubscribe is CPU, by 50%, so if a host becomes unusable, I would like to fail VMs over to another node based on policy decisions.

Any thoughts on this would also be much appreciated... Thanks for your reply. NFS is an excellent idea.
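(Again an illustration rather than part of the thread: putting the domU config files on NFS instead of GFS could be as simple as the following; the server name and paths are made up.)

    # /etc/fstab entry on every dom0
    nfsbox:/srv/xen-configs  /etc/xen/cluster  nfs  rw,hard,intr  0 0

    # start a guest from the shared config directory
    xm create /etc/xen/cluster/vm01.cfg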