Hello everyone,

I need to implement multiple domains, most of them almost identical with minimal changes (same distro, same packages, different configurations).

Does anybody know if there is a way to share all the common files? That way every update would only need to be done once, and the disk space used would be much smaller.

Ghe Rivero

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
--- Ghe Rivero <ghe.rivero@gmail.com> wrote:
> i need to implement multiples domains, being most of then almost the
> same with minimal changes (same distro, same packages, different
> configurations).
>
> Anybody knows if there is a way to share all the common files? This way
> all updates will need to be done just one and the disk space will be
> much minor.

I've never done this but am thinking about doing it. Some thoughts:

* Use a Copy On Write (COW) filesystem. The idea is that you can have 2 or 3 or 10 or 1,000 servers sharing the same root, with only changes to the base being recorded. I've never used this, but it doesn't seem to live up to the promise of saving hard drive space; if you think about it, over time as you install patches and upgrades, you'll eventually end up using the same amount of disk space that you'd have used without COW. For example, imagine upgrading your CentOS installation to a new version with a new glibc -- lots and lots of packages are compiled against the new libraries, so they would all be upgraded. Over time the large majority of files will have been upgraded, and you're stuck with 2 or 3 or 10 or 1,000 individual copies of each file. Perhaps you could shrink the partitions by synchronizing them all, moving duplicate files back to the original filesystem. I'm ignorant on the use of COW FSes, so this might not even be a concern.

A COW filesystem also doesn't give you the quick-update ability you want... in other words, you'd still have to update each system one at a time. For that, you should consider...

* ...NFS-exporting a read-only copy of /usr. This is usually your largest partition and where most updates occur. Well-written programs do not require /usr to be mounted read-write, so you should be able to export at least that partition. You can do updates very quickly. This is the direction I want to go.
You could even use thin-client network boot technology so that your domains don't use *any* hard drive space.

You can test your app by doing a fresh install of Linux or Unix and giving /usr its own partition. Install the program you want to test. Edit /etc/fstab and give the /usr partition the ro flag, something like this:

LABEL=/usr    /usr    ext3    defaults,ro    1 2

Remount /usr:

mount -o remount /usr

Or do it without editing fstab (does not persist over reboots):

mount -o ro,remount /usr

Then run your app and see if it bombs. If it works, you can use a read-only NFS-mounted /usr partition.

Note: This only shares /usr. If you install an update that modifies a file under /etc, /var or /boot, you will need to copy those updates over manually. It is not wise to share /etc or /var (they are usually thought of as the place where system-specific and variable data lives), and /boot, /sbin and /lib are usually needed before NFS filesystems can be mounted. So any updates to those partitions must be done manually.

I suppose the main server could run an update and then the NFS clients could run the same update, ignoring any /usr "read-only" errors. Seems like it would work, but then that takes the same amount of time as updating individual servers. Or you could just...

* ...bite the bullet and do it the old-fashioned way. A well-tuned OS doesn't take up much room compared to swap and data. Most of my installs are a few hundred MB (I kill the documentation and only install what I need). The average Xen system probably has a dozen domains, so that's around 10GB. That's nothing with today's drives. This doesn't give you quick-update ability, but you can use something like yum or apt. I've installed both yum and apt servers; they're no big deal.

Hope that helps!

CD

You have to face a Holy God on Judgment Day. He sees lust as adultery (Matt. 5:28) and hatred as murder (1 John 3:15). Will you be guilty?
Jesus took your punishment on the cross, and rose again defeating death, to save you from Hell. Repent (Luke 13:5) and trust in Him today.

NeedGod.com
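[For anyone who wants to try the read-only /usr idea above, the two halves might look something like the sketch below. The hostname "nfsserver" and the subnet are invented for illustration; check exports(5) and nfs(5) on your distro for the exact option names.]

```shell
# On the NFS server (/etc/exports) -- export /usr read-only
# to the client subnet (a made-up example range):
/usr    192.168.1.0/24(ro,no_subtree_check)

# ...then reload the export table:
exportfs -ra

# On each client domain, mount it by hand to test:
mount -o ro nfsserver:/usr /usr

# ...or make it persistent with an /etc/fstab line:
nfsserver:/usr    /usr    nfs    ro    0 0
```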
Mark Williamson
2005-Sep-20 00:45 UTC
Re: [Xen-users] Multiple Domains Sharing Root System
> > * ...NFS exporting a read-only copy of /usr. This is usually your largest
> > partition where most updates occur. Well-written programs will not require
> > /usr be mounted read-write and you should be able to export at least that
> > partition. You can do updates very quickly. This is the direction I want
> > to go.

You could also share a block device for /usr read-only between all domains. This gives good performance and you still only have one image of the filesystem. The trouble with block-level sharing is that you can't update the filesystem while more than one domain is accessing it :-(

I used to do this; now I just have multiple complete installs.

There are a number of people here working on other solutions for filesystem sharing that may be better in some circumstances.

Cheers,
Mark
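[To make the block-device approach concrete: in a domU config file, which Xen parses as Python, the shared volume just gets the read-only flag. A sketch only; the volume group and device names are invented.]

```python
# Fragment of a domU config (e.g. /etc/xen/domu1); Xen config files
# are plain Python. LVM volume names here are made-up examples.
disk = [
    'phy:vg0/domu1-root,sda1,w',   # per-domain writable root
    'phy:vg0/usr-shared,sda2,r',   # shared /usr; trailing 'r' = read-only
]
```

Inside the guest, /dev/sda2 would then be mounted on /usr with the ro option, as in the fstab example earlier in the thread.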
Hi,

I've been having a play with this idea too:

My plan is to have an NFS root server as a domain, using multiple LVM partitions. (I could run it in dom0 if required.)

Using a UnionFS-patched kernel I hope to combine a (writable) domain-specific partition with a (read-only) generic image for each domain. This would allow each domain to have custom configs and apps.

From the NFS root server's point of view the filesystems might be something like this:

  ...............    ...............    ...............
  ' domU1 Union '    ' domU2 Union '    ' domU3 Union '
  '    640MB    '    '    640MB    '    '    640MB    '
  '''''''''''''''    '''''''''''''''    '''''''''''''''
        /|\                /|\                /|\
   ______|______      ______|______      ______|______
  |   domU1fs   |    |   domU2fs   |    |   domU3fs   |
  |  (128MB rw) |    |  (128MB rw) |    |  (128MB rw) |
  |_____________|    |_____________|    |_____________|
         |__________________|__________________|
                           /|\
                     _______|_______
                    |   domUgenfs   |
                    |  (512MB ro)   |
                    |_______________|

So in this example, total virtual filesystem usage is 1920MB with only 896MB actually used.

I've actually been trying to do this with uclibc (without success so far) to minimise the size of the NFS root server itself, but I might give up and just use a minimal Debian sarge.

Any suggestions? Sound like an idea?

Marcus.

Mark Williamson wrote:
> You could also read-only share a block device for /usr between all domains.
> This gives good performance and you still only have one image of the
> filesystem.
> The trouble is that with block-level sharing you can't update
> the filesystem whilst there are more than one domains accessing it :-(
>
> I used to do this, now I just have multiple complete installs.
>
> There are a number of people here working on other solutions for filesystem
> sharing that may be better in some circumstances.
>
> Cheers,
> Mark
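[For anyone wanting to experiment with the layout Marcus describes, a per-domain union is assembled roughly as below under the unionfs 1.x patches. The paths are invented; double-check the option syntax against the unionfs version you actually build.]

```shell
# Stack a small writable branch over the shared read-only image.
# Branches named first take precedence, so all writes land in the
# per-domain branch while unmodified files are read from the
# generic image underneath.
mount -t unionfs \
      -o dirs=/exports/domU1fs=rw:/exports/domUgenfs=ro \
      unionfs /exports/domU1union
```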
--- Marcus Brown <marcusbrutus@internode.on.net> wrote:
> My plan is to have an NFS root server as a domain, using multiple LVM
> partitions. (I could run it in dom0 if required).
> Using a UnionFS-patched kernel I hope to combine a (writable) domain-
> specific partition with a (read-only) generic image for each domain.
> This would allow each domain to have custom configs and apps.

As I was contemplating with the COW filesystems, wouldn't a UnionFS over time grow to be about the same size as if you'd done individual installs?

For instance, say you have these files in a UnionFS partition:

/usr/lib/mozilla-firefox (NFS server)
/usr/lib/samba (NFS server)
/usr/lib/glibc (NFS server)

You have two NFS clients who mount /usr. NFS client number one updates Firefox. Because file writes are copied, this doubles the file space needed:

/usr/lib/mozilla-firefox (NFS server)
/usr/lib/mozilla-firefox (NFS client one)
/usr/lib/samba (NFS server)
/usr/lib/glibc (NFS server)

NFS client number two updates Firefox. Because file writes are copied, this now triples the file space needed:

/usr/lib/mozilla-firefox (NFS server)
/usr/lib/mozilla-firefox (NFS client one)
/usr/lib/mozilla-firefox (NFS client two)
/usr/lib/samba (NFS server)
/usr/lib/glibc (NFS server)

Now you update glibc.
Samba is recompiled for that new version:

/usr/lib/mozilla-firefox (NFS server)
/usr/lib/mozilla-firefox (NFS client one)
/usr/lib/mozilla-firefox (NFS client two)
/usr/lib/samba (NFS server)
/usr/lib/samba (NFS client one)
/usr/lib/samba (NFS client two)
/usr/lib/glibc (NFS server)
/usr/lib/glibc (NFS client one)
/usr/lib/glibc (NFS client two)

Since the NFS server's copies are now redundant, this would use even more space than this simple back-to-basics configuration:

/usr/lib/mozilla-firefox (system one)
/usr/lib/samba (system one)
/usr/lib/glibc (system one)
/usr/lib/mozilla-firefox (system two)
/usr/lib/samba (system two)
/usr/lib/glibc (system two)

It seems that UnionFS and COW would only save space at first, or with files that are almost never expected to change over time (unlike OS files, which get modified every few months). Am I missing something? I've never used COW and I've only used UnionFS in Knoppix, so I could be wayyyy off...

Is there some simple method to "reconsolidate" the various identical copies in different filesystems to shrink the number of incremental changes? Is this done automatically when there are identical files?

CD

You have to face a Holy God on Judgment Day. He sees lust as adultery (Matt. 5:28) and hatred as murder (1 John 3:15). Will you be guilty?

Jesus took your punishment on the cross, and rose again defeating death, to save you from Hell. Repent (Luke 13:5) and trust in Him today.

NeedGod.com
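[On the "reconsolidate" question: it doesn't happen automatically. If the copies live on the same filesystem, though, byte-identical files can be merged back into hard links with a short script along these lines. This is a sketch only -- it ignores ownership/permission differences and cannot link across filesystems.]

```python
import hashlib
import os

def _digest(path):
    """SHA-1 of a file's contents, read in chunks."""
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

def reconsolidate(master, copy):
    """Replace files under 'copy' that are byte-identical to the
    same-named file under 'master' with hard links to master's copy.
    Returns the number of bytes reclaimed."""
    reclaimed = 0
    for dirpath, _subdirs, files in os.walk(copy):
        rel = os.path.relpath(dirpath, copy)
        for name in files:
            c_path = os.path.join(dirpath, name)
            m_path = os.path.join(master, rel, name)
            if not os.path.isfile(m_path):
                continue            # no counterpart in the master tree
            if os.path.samefile(m_path, c_path):
                continue            # already the same inode
            if os.path.getsize(m_path) != os.path.getsize(c_path):
                continue            # cheap mismatch check before hashing
            if _digest(m_path) == _digest(c_path):
                reclaimed += os.path.getsize(c_path)
                os.unlink(c_path)
                os.link(m_path, c_path)   # merge into a single inode
    return reclaimed
```

Running something like reconsolidate('/srv/base', '/srv/domu1') after an update round would fold unchanged files back together; some distros also ship a "hardlink" utility that does a similar job.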
Hi All!

I've done a bit of investigation of this and I'm heading (but haven't yet got very far) in a slightly different direction:

1) Assume disk space is cheapish but "keep things up to date" admin effort is expensive (my problem, at any rate)

2) Make completely independent filesystems for each domU

3) Use cfengine (www.cfengine.org) to propagate changes and updates ....

1 & 2 I have completed (! easy !)

Concerning 3, this might be an opportune moment to ask if anyone has experience of cfengine? On paper it looks like a rather neat solution to the problem it's trying to address.

Regards,
Nigel.
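[Since Nigel asks about cfengine: a cfengine 2 input file is divided into action sections, and a minimal file-distribution config might look roughly like the sketch below. The hostname and paths are invented, and the syntax is from memory of the cfengine 2 reference, so verify it against the real documentation before relying on it.]

```
# update.conf -- pull config files from a central master (sketch only)
control:
   actionsequence = ( copy )

copy:
   /masterfiles/etc/resolv.conf  dest=/etc/resolv.conf
                                 server=cfmaster.example.com
                                 type=checksum
```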
On 9/20/05, Nigel Head <nigel.head@gmail.com> wrote:
> 1) Assume disk space is cheapish but "keep things up to date" admin effort
> is expensive (my problem, at any rate)
>
> 2) Make completely independent filesystems for each domU

This is what I think as well, especially since in most cases 1GB is often more than sufficient for simple domains.

That said, keeping the filesystems for data independent is very useful. OCFS2 or the new Xen filesystem would probably be good here. At the moment I've mostly used NFS for this, or in some cases just moved lvm-device mount points around by changing configs and restarting. NFS is not the best solution for single-machine Xen setups; it has some performance issues I haven't been able to find the time to track down.

> 3) Use cfengine (www.cfengine.org) to propagate changes and updates ....
>
> Concerning 3 this might be an opportune moment to ask if anyone has
> experience of cfengine? On paper it looks like a rather neat solution to the
> problem it's trying to address.

I figure this as well. Of course, being able to generalise the unwritten sysadmin muscle memory in your head into some config language is another story. :)

--
Nicholas Lee
http://stateless.geek.nz
gpg 8072 4F86 EDCD 4FC1 18EF 5BDD 07B0 9597 6D58 D70C
Hi Chris,

Chris de Vidal wrote:
> As I was contemplating with the COW filesystems, wouldn't a UnionFS over time
> grow to be about the same size as if you'd have done individual installs?

I think there is a fundamental difference between the two approaches. With a COW system (e.g. LVM snapshots?) I'd imagine each system needs to be updated independently... i.e. each domain would grow away from its origin. I can't see re-merging data common to all domains being an easy task.

With my NFS suggestion, however, the generic image itself can be updated... hence the saving. i.e. master updates/upgrades are done on the generic image, and then each domain can have its specialised updates separately.

Marcus.

ps. Having sanity problems trying to compile unionfs for a Xen kernel. :)