Hello people of the Xeniverse,

I am putting together a large Xen cluster of approximately 100 nodes, with more than one VM per node. If I am going to migrate these Xen guests around from node to node, can they share the same image that is NFS mounted, or do I need a different image for each guest? Separate images would obviously create a lot of storage constraints.

Does anyone have suggestions on efficient image management?

Thanks,

-- 
Christopher Vaughan
On 6/23/06, Chris Vaughan <supercomputer@gmail.com> wrote:
> Can these Xen sessions share the same image that is NFS mounted, or do I
> need to have a different image for each Xen session?

I don't know the answer, but what are you trying to do?

Some thoughts:

1) Have a boot / root image that is read-only, and then have filesystems that you mount read/write, one (or more) per VM. If you go this way, you could use a live CD like the one SUSE provides as your boot / root filesystem. Obviously you would want to replace the kernel with a paravirtualized one.

2) Run SSI (Single System Image) on all of the VMs in read/write mode. In theory SSI would let you do this, but I would expect any disk I/O to be slow due to locking. I think I've seen some posts on their list about using Xen with SSI clusters.

Good luck, and keep us informed. I for one have thought of having dozens of VMs spread across several machines, but I had only thought about using dedicated virtual disks, not trying to share them.

Greg

-- 
Greg Freemyer
The Norcross Group
Forensics for the 21st Century
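For concreteness, option 1 might look roughly like the following domU config. This is only a sketch: the image paths, device names, memory size and kernel path are invented placeholders, not a tested setup.

# Hypothetical domU config: one root image shared read-only by every guest,
# plus a small per-guest image mounted read/write for the data that differs.
kernel = "/boot/vmlinuz-2.6-xenU"      # paravirtualized guest kernel
memory = 256
name   = "guest01"
vif    = ['bridge=xenbr0']

disk = [
    'file:/srv/xen/common-root.img,sda1,r',   # same image for all guests, read-only
    'file:/srv/xen/guest01-data.img,sda2,w',  # per-guest writable filesystem
]

root = "/dev/sda1 ro"

The guest would then mount its /dev/sda2 somewhere like /var or /data at boot, so everything that actually changes lives on the per-guest disk.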
On Fri, Jun 23, 2006 at 03:55:11PM -0600, Chris Vaughan <supercomputer@gmail.com> wrote a message of 47 lines which said:

> I am working on putting together a large Xen cluster of approximately
> 100 nodes with more than one VM per node. Can these Xen sessions share
> the same image that is NFS mounted, or do I need to have a different
> image for each Xen session?

User-Mode Linux has a very convenient system, COW (Copy-on-Write), which lets you share an image between several virtual machines. Only the delta (which is different for each machine)

See http://www.linuxjournal.com/article/8803 or the official documentation: http://user-mode-linux.sourceforge.net/shared_fs.html
On Sun, Jun 25, 2006 at 07:17:04PM +0200, Stephane Bortzmeyer <stephane@sources.org> wrote a message of 22 lines which said:

> User-Mode Linux has a very convenient system, COW (Copy-on-Write),
> which lets you share an image between several virtual machines. Only
> the delta (which is different for each machine)

Sorry, I forgot the rest of the sentence :-( Only the delta (which is different for each machine) is actually stored.
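Xen does not have UML's COW file format, but a similar "shared base image plus per-guest delta" layout can be approximated in dom0 with writable LVM snapshots of a golden root volume. The sketch below is only an illustration; the volume group, volume names and sizes are invented, and lvcreate is the only real command involved.

# Give each Xen guest a copy-on-write snapshot of one shared base volume,
# so only the per-guest changes consume additional space.
import subprocess

BASE_LV = "/dev/vg0/guest-base"   # golden image, installed once

def make_guest_volume(name, cow_size="2G"):
    """Create a writable snapshot volume for one guest."""
    subprocess.check_call([
        "lvcreate", "--snapshot",
        "--size", cow_size,        # space reserved for this guest's deltas
        "--name", name,
        BASE_LV,
    ])
    return "/dev/vg0/" + name

if __name__ == "__main__":
    for i in range(1, 4):
        dev = make_guest_volume("guest%02d-root" % i)
        print("domU disk entry: 'phy:%s,sda1,w'" % dev)

Note that the snapshots live in dom0's local volume group, so this helps with storage efficiency but, unlike NFS or other shared storage, it does not by itself make the image visible to other nodes for migration.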
On 6/23/06, Greg Freemyer <greg.freemyer@gmail.com> wrote:
> 1) Have a boot / root image that is read-only ...
> 2) Run SSI (Single System Image) on all of the VMs in read/write mode ...

Out of curiosity I went back and looked at the SSI list to see if they had a Xen solution.

They do, at least at some level of functionality (see below). With the kernels below you should be able to have shared-root Xen VMs, thus reducing your disk storage requirements.

If you're not familiar with SSI: I believe the setups below are designed to have one of the Xen VMs be the CFS (cluster file system) master. It can directly access the virtual disk below it. The other Xen VMs in the SSI cluster make their file I/O requests through the master.

Each individual filesystem is separately assigned a master. So if you had a data filesystem per Xen VM node, the data filesystems could be assigned to the Xen VM node actually doing the work, eliminating most SSI-induced performance issues.

DRBD is used for failover. If you need it, you can assign each filesystem a backup Xen VM node as master. Then if the original master dies, the alternate takes over. Obviously you would want the alternate to be on a different physical computer than the primary master.

Even better than DRBD support would be a reliable shared storage facility on the back end. That too is supported in SSI, but I've forgotten the details.

Greg

>>>
Thanks to OpenSSI user Owen Campbell, there are now two 2.6.10 domU Xen OpenSSI kernels available for download.

== OpenSSI domU, without DRBD ==
URL: http://deb.openssi.org/contrib/vmlinuz-2.6.10-openssi-xenu
MD5: 8f26aa3f7efe3858692b3acdf3db4c21

== OpenSSI domU, with DRBD ==
URL: http://deb.openssi.org/contrib/vmlinuz-2.6.10-openssi-drbd-xenu
MD5: 25e3688ac6e51cada1baf85004636658

VERY IMPORTANT: standard disclaimer applies. Not official, OpenSSI accepts no liability, might blow up your machine, and so forth.
Cheers,

-- 
Ivan Krstic <krstic@fas.harvard.edu> | GPG: 0x147C722D
<<<

-- 
Greg Freemyer
The Norcross Group
Forensics for the 21st Century
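For illustration, booting a guest with one of those kernels would mean pointing the domU config at it, roughly as below. Only the kernel filename comes from the announcement above; the paths, sizes and device names are invented, and how the shared root is actually presented to non-master nodes depends entirely on the OpenSSI setup.

# Hypothetical domU config for an OpenSSI node; not from a tested cluster.
kernel = "/boot/vmlinuz-2.6.10-openssi-xenu"   # kernel downloaded from the URL above
memory = 256
name   = "ssi-node01"
vif    = ['bridge=xenbr0']
disk   = ['phy:/dev/vg0/ssi-root,sda1,w']      # placeholder; root handling is OpenSSI-specific
root   = "/dev/sda1"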
I have had success in the past using Xen / OpenSSI; however, you want at least two physical NICs per server. I'm imagining your goal is something like this:

http://netkinetics.net/xen-typical.pdf

What you may consider using is openQRM with the Xen plugin, www.openqrm.org, which can accomplish pretty much the same thing. Some work is going to be needed on your part.

I have openQRM installed to a 1.5 GB file-backed VBD (CentOS 4.3 guest image) that seems happy running with 128 MB of RAM; it's going into production this week to manage a farm of 50-odd blades. It does a nice job of bringing up dom0 on a blank box at boot; a second script then runs and homes in on that node's role, domU setups and configurations. Basic grid, but basic is good when you need to fix it at 3 AM.

You don't want to use NFS if you're planning on migrating frequently; I would think AoE or iSCSI would be the better route. For distributed sessions I think OpenSSI is the sanest approach, and it does most of your sanity checking for you. Also make sure the Xen interconnect has gig-E.

Just curious, how many of these hosts are also SSL hosts? Have the scripts / applications been tested OK with migrating sessions? There may also be some work to do on the scripts themselves, depending on how they deal with making temporary files and caching.

I've done this only with Xen 2.0.7. None of the domains we've set this up for have yet seen any real sustained traffic, such as the Slashdot effect. So it also really depends on how much they push (MB/sec), how you set up your bridging, and the quality of the network you're on.

You should also take into consideration the types of files being served. For instance, if you often have people on 56k connections downloading 10+ MB files, that makes a difference, especially if using any kind of accelerator.

HTH - good luck :)

Tim

On Mon, 2006-06-26 at 10:14 -0400, Greg Freemyer wrote:
> Out of curiosity I went back and looked at the SSI list to see if they
> had a Xen solution. They do, at least at some level of functionality.
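To make the "shared block storage instead of NFS" suggestion concrete, a migratable guest's disk just has to be a device that both the source and destination dom0 can see. The fragment below is a sketch only; the AoE shelf/slot number, paths and sizes are invented.

# Hypothetical domU config fragment for a guest backed by shared block storage.
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 256
name   = "webguest01"
vif    = ['bridge=xenbr0']

disk = [
    # AoE export as it appears in dom0 (the aoe driver creates /dev/etherd/eX.Y)
    'phy:/dev/etherd/e1.0,sda1,w',
    # with iSCSI it would be whatever device the initiator logs in as, e.g.:
    # 'phy:/dev/sdb1,sda1,w',
]
root = "/dev/sda1 ro"

With the disk reachable from both hosts, a live move is roughly "xm migrate --live webguest01 otherhost", since no storage has to be copied.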
Thanks for the suggestions. At this point we haven't fully written our scripts. We are still in the planning stages and have been outlining potential issues that could develop down the road, storage being one of them. Our goal is to utilize our cluster to its full potential and distribute the load over the cluster, migrating the trouble areas to less utilized nodes.

Thanks for the input.

On 6/26/06, Tim Post <tim.post@netkinetics.net> wrote:
> You don't want to use NFS if you're planning on migrating frequently;
> I would think AoE or iSCSI would be the better route.
-- 
Christopher Vaughan
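As a closing illustration of "migrating the trouble areas to less utilized nodes": the rebalancing logic can be as small as the sketch below. list-of-nodes, node_load() and guests_on() are hypothetical helpers you would have to supply (from openQRM, SNMP, or load averages gathered over ssh); only the xm migrate invocation is a real Xen command.

# Naive rebalancer sketch: move one guest from the busiest node to the idlest.
import subprocess

def rebalance(nodes, node_load, guests_on, min_gap=0.75):
    busiest = max(nodes, key=node_load)
    idlest  = min(nodes, key=node_load)
    if node_load(busiest) - node_load(idlest) < min_gap:
        return None   # load is already spread acceptably, do nothing
    guest = guests_on(busiest)[0]   # naive pick; a real policy would be smarter
    subprocess.call(["ssh", busiest,
                     "xm", "migrate", "--live", guest, idlest])
    return guest, busiest, idlest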