Hi list :)

I'm currently planning to set up the following environment:

4 - 16 physical nodes (starting with 4)
Each node is an x86 machine with 160GB of disk space.
Each node should run about 10 virtual machines.
Each virtual machine needs about a 2GB filesystem.
Live migration should be used for maintenance windows and to redistribute the VMs depending on the load they generate.

Live migration needs shared storage (e.g. a SAN). Since SAN storage is expensive, I'd like to avoid buying a dedicated solution. This is why I'm searching for a fault-tolerant solution using the hardware already available.

I have already read articles about:
* LVS
* GFS
* DRBD
* GlusterFS
* GNBD

But now I'm totally confused.

Does anyone have a good article or HOWTO on this issue?

What I'm looking for is a virtual filesystem spanning all the nodes which can cope with one node going down.

My current idea is something like this: create block devices with DRBD, using 2 nodes each. Create a stripe over all available DRBD devices using LVM and export the logical volumes to the domUs. The problems with that are that I'm "losing" half of the storage to mirroring - which would be okay - and, worse, that I would have a single point of failure (the server exporting the logical volumes).

Would Xen work with some kind of cluster filesystem?

I'm really totally confused.

Best Regards
Thorsten

--
... black holes are where god divided by zero.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
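The DRBD-plus-LVM idea described above could be sketched roughly like this. This is a configuration/command sketch only, not a tested recipe: the resource name (r0), node names, IP addresses, disk partitions, and volume group name are all invented for illustration.

```shell
# Hedged sketch: pair nodes with DRBD, then stripe the resulting DRBD
# devices with LVM on the node that exports storage to the domUs.

# /etc/drbd.conf fragment for one mirrored pair (node1/node2):
# resource r0 {
#   protocol C;
#   on node1 { device /dev/drbd0; disk /dev/sda3; address 192.168.0.1:7788; meta-disk internal; }
#   on node2 { device /dev/drbd0; disk /dev/sda3; address 192.168.0.2:7788; meta-disk internal; }
# }

drbdadm create-md r0                              # initialise metadata (run on both nodes)
drbdadm up r0                                     # attach and connect the resource
drbdadm -- --overwrite-data-of-peer primary r0    # pick the initial primary, start full sync

# On the exporting node: stripe all DRBD devices into one volume group
pvcreate /dev/drbd0 /dev/drbd1
vgcreate vg_xen /dev/drbd0 /dev/drbd1
lvcreate -i 2 -L 2G -n vm01-disk vg_xen           # -i 2 = stripe across both PVs
```

Note that this sketch reproduces exactly the single point of failure the post worries about: the node holding vg_xen and exporting the LVs.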
Marc Patino Gómez
2007-Jul-05 10:59 UTC
Re: [Xen-users] Searching for a Filesystem / Storage Solution
Hi Kyrios,

I have more or less the same problem as you. I don't want to buy an expensive commercial SAN or NAS solution. The solution to our problem would be the Google File System, but it is not open source. I found some items to add to your list:

- Lustre filesystem (CFS)
- CLVM
- Distributed RAID over iSCSI (I found some papers about it; in a few days I will test it)

I know that finding a solution to this is difficult; it's not trivial to build a system with the following properties:

- scalable
- reliable
- easy to manage
- low cost
...

Marc

Kyrios wrote:
> Hi list :)
>
> I'm currently planning to set up the following environment:
>
> 4 - 16 physical nodes (starting with 4)
> [...]
>
> What I'm looking for is a virtual filesystem spanning all the nodes
> which can cope with one node going down.
>
> Would Xen work with some kind of cluster filesystem?
>
> Best Regards
> Thorsten
Marc Patino Gómez
2007-Jul-05 13:32 UTC
Re: [Xen-users] Searching for a Filesystem / Storage Solution
Hi Kyrios,

Sorry for my English - I didn't mean to say that the Google File System can be purchased. I read some papers about Google FS; it looks great. The most similar filesystem I have found is Lustre, but it will not have "Multi-server file RAID-1 (mirroring)" until 2008 (according to the Cluster File Systems roadmap), so nowadays Lustre is not fault tolerant without shared storage. Some of the biggest clusters in the world use Lustre.

I found a quite interesting blueprint from Sun Microsystems using Lustre:
http://www.sun.com/blueprints/0507/820-2187.html

Marc

Kyrios wrote:
> Hi Marc,
>
> Many thanks for your input!
>
> I didn't know that GooFS can be purchased. I already thought that it
> would be a good solution.
>
> Also, if I find something else I will let you know.
>
> Bye
> Thorsten
>
> On 7/5/07, Marc Patino Gómez <mpatino@es.clara.net> wrote:
>
>     Hi Kyrios,
>
>     I have more or less the same problem as you. I don't want to buy an
>     expensive commercial SAN or NAS solution. [...]
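The "distributed RAID over iSCSI" item from Marc's list could be sketched like this: import one LUN from each of two independent storage nodes and mirror them with md, so either storage node can fail. A command sketch only - the portal addresses, resulting device names, and filesystem choice are assumptions, not from the thread.

```shell
# Hedged sketch: software RAID-1 across iSCSI LUNs from two storage nodes.

# Discover and log in to one target on each storage node (open-iscsi initiator)
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m discovery -t sendtargets -p 192.168.0.11
iscsiadm -m node --login      # log in to all discovered targets

# Suppose the two LUNs appear as /dev/sdb and /dev/sdc; mirror them:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# /dev/md0 now survives the loss of either storage node; hand it to LVM
# or put a filesystem on it for a domU
mkfs.ext3 /dev/md0
```

The trade-off is the same 50% storage loss as the DRBD idea, but the mirroring happens on the consumer side, so there is no single exporting server.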
John Madden
2007-Jul-05 14:01 UTC
Re: [Xen-users] Searching for a Filesystem / Storage Solution
On Wed, 2007-07-04 at 17:37 +0200, Kyrios wrote:
> 4 - 16 physical nodes (starting with 4)
> Each Node is a x86 Machine with 160GB Diskspace.

What about iSCSI?

Why not get two more nodes into your plan, reducing the amount of local storage you're allocating to your dom0 nodes and instead putting it all into two nodes that will be highly available and serve as iSCSI targets? Consider shared SCSI/FC storage for these two nodes, but otherwise use something like DRBD to keep them synced. Heartbeat fails over a virtual IP and the iSCSI target daemon. Share the LUNs over Gig-E to your dom0s. iSCSI will give you live migration on the cheap.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
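John's layout could look something like the following sketch. Everything here is illustrative: the target IQN, virtual IP, node name, and the use of iSCSI Enterprise Target with Heartbeat are assumptions about one possible implementation, not details from his mail.

```shell
# Hedged sketch: two storage nodes synced with DRBD; Heartbeat moves a
# virtual IP plus the iSCSI target daemon between them; every dom0 logs
# in to the virtual IP only, so failover is transparent to the initiators.

# /etc/ietd.conf fragment on both storage nodes (iSCSI Enterprise Target),
# exporting the DRBD device as LUN 0:
# Target iqn.2007-07.example:storage.vmdisks
#     Lun 0 Path=/dev/drbd0,Type=blockio

# /etc/ha.d/haresources fragment: node1 owns the VIP and the target init
# script by default; Heartbeat moves both on failure:
# node1 192.168.0.20 iscsi-target

# On each dom0, point the open-iscsi initiator at the virtual IP:
iscsiadm -m discovery -t sendtargets -p 192.168.0.20
iscsiadm -m node --login
```

Since every dom0 sees the same LUNs over the network, live migration works as with a real SAN.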
Javier Guerra Giraldez
2007-Jul-05 14:57 UTC
Re: [Xen-users] Searching for a Filesystem / Storage Solution
Marc Patino Gómez wrote:
> sorry for my English - I didn't mean to say that google fs can be
> purchased. I read some papers about Google fs, it looks great. The most
> similar fs that I found is Lustre fs, but until 2008 it will not have

AFAIK, Google FS isn't really an FS as most of us know it. I mean, even if you sneaked your Linux box into the Googleplex and managed to plug it in, you couldn't mount a filesystem and browse directories and so on. It's more like a peer-to-peer storage application: each box has a 'sharing' daemon, plus some libraries to access the data. If you have the key to a file, you can read it, or maybe just migrate the file to your local system and read/write it there.

On top of that, there's the 'bigtable' system, for storing huge amounts of data that you can process without actually reading it into your box; you just define the processing you want done on the data and it's applied by several boxes.

In short: not a general-purpose file server, but hugely scalable, and wonderfully adapted to their needs.

--
Javier
Tom Mornini
2007-Jul-05 16:45 UTC
Re: [Xen-users] Searching for a Filesystem / Storage Solution
How about AoE and Coraid, or Gluster?

--
-- Tom Mornini, CTO
-- Engine Yard, Ruby on Rails Hosting
-- Support, Scalability, Reliability
-- (866) 518-YARD (9273)

On Jul 5, 2007, at 7:01 AM, John Madden wrote:
> What about iSCSI?
>
> Why not get two more nodes into your plan, reducing the amount of local
> storage you're allocating to your dom0 nodes and instead putting it all
> into two nodes that will be highly available and serve as iSCSI targets?
> [...] iSCSI will give live migration on the cheap.
>
> John
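The AoE (ATA over Ethernet) alternative Tom mentions is simpler than iSCSI because it runs straight over Ethernet frames with no TCP/IP. A minimal sketch using the open-source vblade exporter from aoetools; the shelf/slot numbers, interface, and device paths are arbitrary examples.

```shell
# Hedged sketch: export a block device with AoE and discover it on a dom0.

# On the storage node: export an LV as AoE shelf 0, slot 1, on eth0
vbladed 0 1 eth0 /dev/vg_xen/vm01-disk

# On each dom0: load the AoE initiator and discover exports on the LAN
modprobe aoe
aoe-discover
ls /dev/etherd/       # the export appears as /dev/etherd/e0.1
```

Coraid's appliances speak the same protocol in hardware; with vblade you can test the approach on the existing nodes first. Note AoE is not routable, so storage nodes and dom0s must share a Layer-2 segment.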
John Madden
2007-Jul-05 17:17 UTC
Re: [Xen-users] Searching for a Filesystem / Storage Solution
On Thu, 2007-07-05 at 09:45 -0700, Tom Mornini wrote:
> How about AoE and Coraid, or gluster?

IMO, a cluster filesystem is unnecessary overhead for this situation. Unless you're setting up an active-active cluster, why bother?

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu