Hi all,

I wonder if anyone on the list has written any scripts to automate the management of VMs with loopback images. Here's what I want to be able to do:

* Store existing physical machine file systems, or pristine installs, in loopback images on my Xen servers (something I'll do manually).

* Run a script that will start a VM from one of these images: automatically associate it with a loopback device, give it a name, RAM allocation and network addresses, and set various internal parameters such as hostname, routes, etc., based on a set of arguments. So something like "script <imagename> <hostname> <netconfigs> <RAM> ..." etc.

* Have the same script take another argument that causes it to clone a filesystem image first before starting the VM, so that I can use a set of images as VM templates. I intend to have a large collection of templates which my developers can use to create VMs suited to whatever project they are working on.

* After a VM has been instantiated, be able to start and stop it with simple "start hostname" and "stop hostname" kinds of commands.

* Have management tools so that I can, for example, shift a VM from one Xen server to another (shift hostname xenservername). These would also be used by load-balancing scripts to shift machines around to manage resources.

I'd like to build a web-based management system for these scripts, so that developers are free to create and control Xen VMs (though naturally with limitations based on what the servers can handle -- so my bosses will know when they need to buy me more servers :o) ).

I don't see these as particularly difficult, but if someone has done them already .... Also, I'd appreciate any thoughts you might have on automation of this kind, particularly in terms of functionality and practicalities.

Thanks for your time!

Paul

-------------------------------------------------------
This SF.Net email is sponsored by: YOU BE THE JUDGE.
Be one of 170 Project Admins to receive an Apple iPod Mini FREE for your judgement on who ports your project to Linux PPC the best. Sponsored by IBM. Deadline: Sept. 24. Go here: http://sf.net/ppc_contest.php
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/xen-devel
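The clone-and-boot wrapper described above might be sketched roughly as follows. All paths, the config-file layout, the hardcoded loop device, and the exact `xm create` arguments are assumptions for illustration, not the real Xen tool interface; set `DRYRUN=1` to just print the commands it would run.

```shell
#!/bin/sh
# run: execute a command, or only print it when DRYRUN is set.
run() { if [ -n "$DRYRUN" ]; then echo "$@"; else "$@"; fi; }

# start_vm <image> <hostname> <ip> <ram-MB> [clone]
start_vm() {
    image=$1 host=$2 ip=$3 ram=$4 mode=$5
    disk=/var/xen/images/$host.img
    if [ "$mode" = clone ]; then
        # Treat the image as a template: copy it before first boot.
        run cp "/var/xen/templates/$image" "$disk"
    else
        disk=/var/xen/images/$image
    fi
    # /dev/loop7 is a placeholder; a real script would find a free device.
    run losetup /dev/loop7 "$disk"
    # Boot from a shared config, overriding the per-VM parameters.
    run xm create /etc/xen/template-config \
        name="$host" memory="$ram" ip="$ip" disk="phy:loop7,sda1,w"
}
```

Usage would be something like `DRYRUN=1 start_vm sarge.img web1 10.0.0.5 128 clone` to preview, then the same without `DRYRUN` as root.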
Actually, I'd also be interested in any docs or information relating to LVM with COW as an alternative to loopback images. If I want to be able to shift my VM images about, can I do it with LVM? The section on this in the manual (5.2) is a little sparse, to say the least! Guess you're all too busy coding :o)

Cheers,
Paul

On Tuesday 28 September 2004 07:23 am, Paul Dorman wrote:
> Hi all,
>
> I wonder if anyone on the list has written any scripts to automate the
> management of VMs with loopback images. Here's what I want to be able to
> do: [...]
> I wonder if anyone on the list has written any scripts to automate the
> management of VMs with loopback images. Here's what I want to be able to
> do:

Managing loopback block devices (and other non-physical block devices) will get friendlier than it currently is. They'll get automatically allocated, deallocated etc. by Xend.

However, the functionality you want is much like we'd envisaged for the "cluster controller" some time in the future. The idea behind it is to simplify the management of multiple Xen machines as a single pool of resources.

We have some preliminary design documents on this but no implementation as yet. There are other people working on their own cluster management schemes (hi Brian, hi Steve ;-) but there's not a general-purpose Xen package for doing this.

If you're interested, we can post some of our design docs on this subject.

Cheers,
Mark
I would certainly appreciate this. Then I can keep my automation roughly in line with what's going to happen anyway, so I can move from my system to the official one without too much pain. I'm certain the official one will be much better anyhow :o)

Another little query about the LVM stuff: I'm using kernel 2.6, so does this rule out the LVM option? The package note for Sarge's LVM2 says that it works with 2.4 only.

Thanks Mark.

Regards,
Paul

On Tuesday 28 September 2004 09:46 am, Mark A. Williamson wrote:
> Managing loopback block devices (and other non-physical block devices) will
> get friendlier than it currently is. They'll get automatically allocated,
> deallocated etc. by Xend. [...]
> Actually, I'd also be interested in any docs or information relating to LVM
> with COW as an alternative to loopback images. If I want to be able to
> shift my VM images about, can I do it with LVM?

Shift them about between machines? The ideal thing to do for that would be to keep a copy of the base image on all machines and just copy the changes to that around your cluster when migrating. I don't know how easy this is with LVM... Anyone?

> The section on this in the
> manual (5.2) is a little sparse, to say the least! Guess
> you're all too busy coding :o)

Yeah :-) The manual might get fleshed out a bit before the 2.0 release, but I was rather hoping someone who uses LVM would help out with the LVM section ;-) I just use dedicated partitions for domains.

Cheers,
Mark
> Another little query about the LVM stuff: I'm using kernel 2.6
> so does this rule out the LVM option? The package note for
> Sarge's LVM2 says that it works with 2.4 only.

LVM2 supports snapshots as of Linux 2.6.8.

Ian
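On a 2.6.8+ kernel, the LVM-with-COW layout under discussion might look something like the following sketch: one pristine "template" LV per distro, plus a small snapshot per domain instead of a full image copy. The VG/LV names and sizes are invented, and the function only prints the commands (the real ones need root and an existing volume group).

```shell
# provision_domain <template-lv> <domain-name>
# Prints the lvcreate invocation that would give <domain-name> a
# copy-on-write view of the template LV in volume group "vg0"
# (hypothetical names; run the printed command as root to apply).
provision_domain() {
    template=$1 domain=$2
    echo "lvcreate -s -L 100M -n $domain vg0/$template"
    echo "# then point the domain config at phy:vg0/$domain"
}
```

e.g. `provision_domain sarge-base web1` prints the snapshot command for a domain "web1" backed by a "sarge-base" template.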
On Tue, Sep 28, 2004 at 09:50:06AM +1200, Paul Dorman wrote:
> Another little query about the LVM stuff: I'm using kernel 2.6 so does this
> rule out the LVM option? The package note for Sarge's LVM2 says that it works
> with 2.4 only.

I'm using LVM2 from Sarge (a few weeks out of date now) and a 2.6.8.1 kernel on Xen. Everything works just fine. (Except for the occasional out-of-memory conditions when using many snapshots...)

--Michael Vrable
Hi Michael, that's good to know. Could I ask if there's anything special I have to do to get this working with Sarge and Xen (and LVM2, of course!)? Any tips would be great. I'll be mining the mailing list for a possible 'tips and tweaks' wiki section sometime soon (as part of my contribution to Xen), so anything you can contribute will surely make it there eventually.

Cheers,
Paul

On Tuesday 28 September 2004 10:18 am, Michael Vrable wrote:
> I'm using LVM2 from Sarge (a few weeks out of date now) and a 2.6.8.1
> kernel on Xen. Everything works just fine. (Except for the occasional
> out-of-memory conditions when using many snapshots...)
My experience (LVM2 2.00.24 - Mandrake 10.0 - 2.6.8.1-xen0) has been that when lvm2 runs out of memory (particularly when creating a new snapshot of an original that already has other snapshots against it), the whole lvm system, with any xenU domains based on it, becomes unusable until the xen0 system is rebooted. Has lvm2 behaved better for you in that kind of circumstance?

Other than that, from a short acquaintance it looks as if it should be good.

-- Peri

Michael Vrable wrote:
> I'm using LVM2 from Sarge (a few weeks out of date now) and a 2.6.8.1
> kernel on Xen. Everything works just fine. (Except for the occasional
> out-of-memory conditions when using many snapshots...)
> My experience (LVM2 2.00.24 - Mandrake 10.0 - 2.6.8.1-xen0) has been
> that when lvm2 runs out of memory (particularly when creating a new
> snapshot of an original that already has other snapshots against it) the
> whole lvm system with any xenU domains based on it becomes unusable
> until the xen0 system is rebooted. Has lvm2 behaved better for you in
> that kind of circumstance?

I haven't really used multiple snapshots enough to experience this. Do you really mean that you're running out of memory, or that the volume for holding the snapshot deltas is filling up?

I presume LVM2 just stores a cache of the remapped extents table in memory, so I'm surprised that there's a significant memory overhead. Maybe it's not really that smart.

Either way, it might be interesting to see the output of /proc/slabinfo when it's low on memory.

Ian
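A quick way to act on the /proc/slabinfo suggestion is to rank the slab caches by memory held. This is a rough sketch: it assumes the columns are "name active_objs num_objs objsize ..." and the exact layout differs between kernel versions.

```python
def slab_usage(text):
    """Return (cache_name, bytes_held) pairs, largest first,
    from /proc/slabinfo-style text."""
    usage = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 4 or not parts[1].isdigit():
            continue  # skip headers and malformed lines
        name, num_objs, objsize = parts[0], int(parts[2]), int(parts[3])
        usage[name] = num_objs * objsize  # total bytes in that cache
    return sorted(usage.items(), key=lambda kv: -kv[1])

# Sample lines matching the dm-snapshot caches mentioned later in
# this thread (name active_objs num_objs objsize):
sample = """\
dm-snapshot-in 128 162 48
dm-snapshot-ex 19556 19662 16
dm_tio 14336 14464 16
dm_io 14336 14464 16"""

for name, nbytes in slab_usage(sample):
    print(name, nbytes)
```

On a real box one would feed it `open("/proc/slabinfo").read()` instead of the sample string.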
The best solution I have found so far is to have one (or more) NFS servers on a backside network that provide disk I/O to the individual XenLinux unprivileged domains. This allows me to restart a XenLinux instance on a different machine should the primary fail. It also allows me to have a more centralized filesystem that uses NFS/iSCSI on LVM on RAID-5 for reliability.

I'm far from having a working automated system right now. I still have the iSCSI death issues to deal with when combining two iSCSI target devices into a RAID-1 array in Domain-0 to be exported to the XenLinux unprivileged domain. 8-( NFS roots still periodically "lock up" momentarily (~2 to 10 seconds). So the solution isn't ready yet.

I do highly recommend that people use LVM LVs for individual exportable block devices, and forget about using COWs to "save space". Adding a 200GB disk is dirt cheap (unless you are talking about SCSI), so IMHO it's not worth the savings to multi-COW a filesystem into 2+ unprivileged domains on a single machine. The 1 to 2 GB of saved space just isn't worth it at $0.10 per GB of disk space. :) Add in the system overhead and seek times required for COWs after your first FS upgrade, and your return is worse than using separate partitions/LVs, in my experience.

It's better, performance- and maintainability-wise IMHO, to use a common read-only partition for your basic OS root fs and /usr, and use the traditional /etc, /var, /usr/local redirection to a tiny config partition and a unique /home partition for per-vhost files. This is only worth it if you can't afford the ~2GB for independent root and /usr filesystems.

For anyone using Debian Sarge for your Domain-0 (and unprivileged domains), I have a repo that's updated when I see a "stable" snapshot go by. It can be found at http://www.terrabox.com/debian. It's a simple Debian repository of binary and source packages.
I'm still working on getting the packaging refined and approved by my new-DM sponsor so that the snapshots will make it into the Debian testing distro in the next month or two. I'm also working on a first draft of a Debian-on-Xen-and-LVM howto. But like Ian and group, dev work comes first till it doesn't flake out on me. ;)

If anyone is interested in a working HA solution using Xen for their business, I can assist with design and deployment for a small donation to my living expenses fund. ;)

Brian Wolfe
TerraBox.com Inc. Linux and HA Contracting.

On Mon, 2004-09-27 at 16:53, Mark A. Williamson wrote:
> Shift them about between machines? The ideal thing to do for that would be to
> keep a copy of the base image on all machines and just copy the changes to
> that around your cluster when migrating. I don't know how easy this is with
> LVM... Anyone? [...]
It looks as if lvm2 uses quite a lot of memory analysing multiple snapshots of a single origin. I rebooted the dom0 machine and immediately did 'vgchange -ay vmspace', which makes the volume group vmspace active. This claims to run out of memory - always, so far, on one of a list of snapshots of a common origin, but not always on the same one. When it runs out of memory adding a new snapshot, it tends to mention one of the other snapshots, and as I have said, the whole caboodle then becomes unusable until after a reboot. Here is the sequence of events, with slabinfo before and after the second 'vgchange -ay vmspace':

... login after reboot (no xenU systems started) ...
[root@a4 root]# vgchange -ay vmspace
  device-mapper ioctl cmd 9 failed: Cannot allocate memory
  Couldn't load device 'vmspace-a44'.
  26 logical volume(s) in volume group "vmspace" now active
...
[root@a4 root]# vgchange -an vmspace
  0 logical volume(s) in volume group "vmspace" now active
...
[root@a4 root]# cat /proc/slabinfo >slabinfo-1
[root@a4 root]# vgchange -ay vmspace
  device-mapper ioctl cmd 9 failed: Cannot allocate memory
  Couldn't load device 'vmspace-a51'.
  25 logical volume(s) in volume group "vmspace" now active
[root@a4 root]# cat /proc/slabinfo >slabinfo-2
[root@a4 root]# lvs
  LV            VG      Attr   LSize   Origin       Snap%  Move Copy%
  a32           vmspace -wi-a-   2.00G
  a32-swap      vmspace -wi-a-  64.00M
  a33           vmspace -wi-a-   2.00G
  a37           vmspace -wi-a-   2.00G
  a38-swap      vmspace -wi-a-  64.00M
  a39           vmspace swi-a- 100.00M mdk10.0-a      0.02
  a39-swap      vmspace -wi-a-  64.00M
  a40           vmspace swi--- 100.00M mdk10.0
  a40-swap      vmspace -wi-a-  64.00M
  a41           vmspace swi--- 100.00M mdk10.0
  a41-swap      vmspace -wi-a-  64.00M
  a42           vmspace swi-s- 100.00M mdk10.0       36.77
  a42-swap      vmspace -wi-a-  64.00M
  a43           vmspace swi-s- 100.00M mdk10.0       35.14
  a43-swap      vmspace -wi-a-  64.00M
  a44           vmspace Swi-S- 100.00M mdk10.0      100.00
  a44-swap      vmspace -wi-a-  64.00M
  a45           vmspace swi-s- 100.00M mdk10.0        0.45
  a45-swap      vmspace -wi-a-  64.00M
  a46           vmspace swi-a- 100.00M mdk10.0-a     42.78
  a46-swap      vmspace -wi-a-  64.00M
  a47           vmspace swi-a- 100.00M mdk10.0-a      0.45
  a47-swap      vmspace -wi-a-  64.00M
  a48           vmspace swi-a- 100.00M mdk10.0-a      0.45
  a48-swap      vmspace -wi-a-  64.00M
  a49           vmspace swi-a- 100.00M mdk10.0-a      0.02
  a49-swap      vmspace -wi-a-  64.00M
  a50           vmspace swi-a- 100.00M mdk10.0-a      0.45
  a50-swap      vmspace -wi-a-  64.00M
  a51           vmspace Swi-I- 256.00M fedora-core2 100.00
  a51-swap      vmspace -wi-a-  64.00M
  a52           vmspace swi-a- 256.00M fedora-core2   0.01
  a52-swap      vmspace -wi-a-  64.00M
  a55           vmspace swi-a- 256.00M fedora-core2   2.53
  a55-swap      vmspace -wi-a-  64.00M
  a56           vmspace swi-a- 256.00M fedora-core2  11.80
  a56-swap      vmspace -wi-a-  64.00M
  a57           vmspace swi--- 256.00M fedora-core2
  a57-swap      vmspace -wi-a-  64.00M
  fedora-core2  vmspace owi---   2.00G
  gentoo-2004.2 vmspace -wi-a-   6.00G
  mdk10.0       vmspace owi---   6.00G
  mdk10.0-a     vmspace owi-a-   6.00G
  mdk10.0-b     vmspace -wi-a-   6.00G
... reboot and login again ...
[root@a4 root]# lvs
  LV            VG      Attr   LSize   Origin       Snap%  Move Copy%
  a32           vmspace -wi---   2.00G
  a32-swap      vmspace -wi---  64.00M
  a33           vmspace -wi---   2.00G
  a37           vmspace -wi---   2.00G
  a38-swap      vmspace -wi---  64.00M
  a39           vmspace swi--- 100.00M mdk10.0-a
  a39-swap      vmspace -wi---  64.00M
  a40           vmspace swi--- 100.00M mdk10.0
  a40-swap      vmspace -wi---  64.00M
  a41           vmspace swi--- 100.00M mdk10.0
  a41-swap      vmspace -wi---  64.00M
  a42           vmspace swi--- 100.00M mdk10.0
  a42-swap      vmspace -wi---  64.00M
  a43           vmspace swi--- 100.00M mdk10.0
  a43-swap      vmspace -wi---  64.00M
  a44           vmspace swi--- 100.00M mdk10.0
  a44-swap      vmspace -wi---  64.00M
  a45           vmspace swi--- 100.00M mdk10.0
  a45-swap      vmspace -wi---  64.00M
  a46           vmspace swi--- 100.00M mdk10.0-a
  a46-swap      vmspace -wi---  64.00M
  a47           vmspace swi--- 100.00M mdk10.0-a
  a47-swap      vmspace -wi---  64.00M
  a48           vmspace swi--- 100.00M mdk10.0-a
  a48-swap      vmspace -wi---  64.00M
  a49           vmspace swi--- 100.00M mdk10.0-a
  a49-swap      vmspace -wi---  64.00M
  a50           vmspace swi--- 100.00M mdk10.0-a
  a50-swap      vmspace -wi---  64.00M
  a51           vmspace swi--- 256.00M fedora-core2
  a51-swap      vmspace -wi---  64.00M
  a52           vmspace swi--- 256.00M fedora-core2
  a52-swap      vmspace -wi---  64.00M
  a55           vmspace swi--- 256.00M fedora-core2
  a55-swap      vmspace -wi---  64.00M
  a56           vmspace swi--- 256.00M fedora-core2
  a56-swap      vmspace -wi---  64.00M
  a57           vmspace swi--- 256.00M fedora-core2
  a57-swap      vmspace -wi---  64.00M
  fedora-core2  vmspace owi---   2.00G
  gentoo-2004.2 vmspace -wi---   6.00G
  mdk10.0       vmspace owi---   6.00G
  mdk10.0-a     vmspace owi---   6.00G
  mdk10.0-b     vmspace -wi---   6.00G
[root@a4 root]# lvs |wc
     45     201    3195
[root@a4 root]# lvchange -a y vmspace/fedora-core2
  device-mapper ioctl cmd 9 failed: Cannot allocate memory
  Couldn't load device 'vmspace-a52'.

Here it runs out of memory just making active an origin volume with 5 snapshots. So there's clearly something wrong, and as I have said, lvm2 snapshots weren't really designed for these purposes. It may be a case of DIY, as Christian suggested.
-- Peri

Ian Pratt wrote:
> I haven't really used multiple snapshots enough to experience
> this. Do you really mean that you're running out of memory, or
> that the volume for holding the snapshot deltas is filling up? [...]
> It looks as if lvm2 uses quite a lot of memory analysing multiple
> snapshots of a single origin. I rebooted the dom0 machine and
> immediately did 'vgchange -ay vmspace', which makes the volume group
> vmspace active. This claims to run out of memory - always so far on one
> of a list of snapshots of a common origin, but not always on the same
> one. When it runs out of memory adding a new snapshot, it tends to
> mention one of the other snapshots, and as I have said, the whole
> caboodle then becomes unusable until after a reboot.
>
> dm-snapshot-in   128   162 48
> dm-snapshot-ex 19556 19662 16
> dm_tio         14336 14464 16
> dm_io          14336 14464 16

20,000 dm-snapshot-ex entries sounds like a lot, but they're only 16 bytes each. From looking at the code it looks like it needs to store the entire exception table in memory rather than using it as a cache of what's on disk, which is a shame. Still, 50k x 16 bytes is less than 1MB.

There's nothing in slabinfo that looks crazy. I wonder where all your memory is gone? BTW: how big is your dom0?

It's possible that dm-io or kcopyd is chewing up pages (which won't show up in the slab allocator). I'm surprised they're not just transient, though.

Perhaps someone's going to need to take a look at device mapper. Building a version with debug printk's enabled would be a good start. I'd like to know what allocation is failing.

Ian
On Tue, Sep 28, 2004 at 04:43:25PM +0100, Ian Pratt wrote:
> There's nothing in slabinfo that looks crazy. I wonder where all
> your memory is gone? BTW: how big is your dom0?
>
> It's possible that dm-io or kcopyd is chewing up pages (which
> won't show up in the slab allocator). I'm surprised they're not
> just transient, though.

When I've run into memory trouble with snapshots, I've always seen a stack backtrace that points me at kcopyd_client_create. Following the code: when creating a snapshot, a new kcopyd client is created with 256 (SNAPSHOT_PAGES in dm-snap.c) pages (= 1 MB) dedicated to that snapshot. I think I managed to dig up the logs from one of the failures I've seen; I've attached them to this message.

The problem seems to be made worse by the fact that all 256 pages are allocated in a fairly short span of time, and (at least this is my guess) the allocation fails even if it would be possible for the kernel to free up the necessary memory with a bit more work. (I've been able to create many more snapshots before running into trouble if I try to make sure the kernel has a bit of extra free memory before each lvcreate call--using dd to create a several-megabyte file, then deleting it to free up that space in the page cache.)

As has been noted, LVM doesn't have a very graceful failure mode when this memory allocation problem is hit--I lose access to all the snapshots when that happens.

I have also found that I can use dmsetup to create the COW devices myself, which did at least (if I'm remembering correctly--this was a little bit ago) have the benefit that if one snapshot failed, the others were still available. Basically, I used the same setup that LVM normally would, except that I didn't create a snapshot-origin device layered over the original device (this is what intercepts writes to the source device and propagates a copy of the original data to each snapshot, if needed). Doing this manually isn't ideal, however.
Improvements that I think could be made:

 - Change the dm-snapshot driver in the kernel to (optionally?) allocate
   less memory for each snapshot, and fail more gracefully if unable to
   allocate the memory.
 - Adjust the LVM userspace tool to fail more gracefully if the device
   mapper driver gives an out-of-memory error.
 - Add an option to LVM for snapshots with a read-only origin (as I was
   doing manually with dmsetup).

--Michael Vrable
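For reference, the manual dmsetup approach described above comes down to feeding device mapper a snapshot table line of the form "start length snapshot <origin> <cow-device> <persistent?> <chunk-size>". A minimal sketch, with hypothetical device names and sizes (the dmsetup invocation itself is commented out, since it needs root and real devices):

```shell
# Hypothetical origin/COW devices and a 2 GB origin, measured in
# 512-byte sectors. These names are examples, not taken from the thread.
ORIGIN=/dev/vmspace/mdk10.0
COW=/dev/vmspace/a40-cow
SECTORS=4194304

# dm snapshot table: start length snapshot <origin> <cow> <persistent?> <chunksize>
TABLE="0 $SECTORS snapshot $ORIGIN $COW p 16"
echo "$TABLE"

# On a real system (as root) the snapshot device would then be created with:
#   echo "$TABLE" | dmsetup create mdk10.0-snap
# Note that, as in the setup described above, no snapshot-origin device is
# layered over $ORIGIN, so the origin must not be written to while the
# snapshot is in use.
```

This mirrors what lvcreate -s does internally, minus the snapshot-origin layer.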
On Tue, Sep 28, 2004 at 10:43:47AM +1200, Paul Dorman wrote:
> that's good to know. Could I ask if there's anything special I have to do
> to work with Sarge and Xen (and LVM2 of course!)? Any tips would be
> great. I'll be mining the mailing list for a possible 'tips and tweaks'
> wiki section sometime soon (as part of my contribution to Xen), so
> anything you can contribute will surely make it there eventually.

No special tricks, really. My basic setup:

 - Set up domain-0 using raw disk partitions, so I don't have to worry
   about starting LVM from an initrd.
 - Leave a large disk partition unused initially; it will be devoted to LVM.
 - Install the lvm2 package from Debian in domain-0.
 - Run pvcreate on the empty large partition, followed by vgcreate.
 - Use the LVM tools in domain-0 to allocate space for the guest domains,
   then export those devices to the guests. The guests don't know the
   space comes from LVM.

If necessary, I can add more space to domain-0 with LVM as well, but thus far I've been getting by just fine with the initial partitions for domain-0, and using LVM solely for space for xenU domains.

--Michael Vrable
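The steps above translate into a handful of LVM commands. A sketch, with hypothetical partition, volume group, and guest names (the script only prints the commands; on a real system you would run them by hand, as root, against a genuinely unused partition):

```shell
#!/bin/sh
# Sketch of the LVM setup described above. /dev/sda3 and the names
# "vmspace"/"guest1" are illustrative, not taken from the thread.
PART=/dev/sda3   # the large unused partition
VG=vmspace       # volume group holding guest storage

for cmd in \
    "pvcreate $PART" \
    "vgcreate $VG $PART" \
    "lvcreate -L 2G -n guest1 $VG" \
    "lvcreate -L 64M -n guest1-swap $VG"
do
    echo "$cmd"  # replace echo with eval (as root) to actually run these
done
```

The resulting /dev/vmspace/guest1 and /dev/vmspace/guest1-swap devices can then be exported to a guest domain as its disk and swap.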
> On Tue, Sep 28, 2004 at 04:43:25PM +0100, Ian Pratt wrote:
> > There's nothing in slabinfo that looks crazy. I wonder where all
> > your memory is gone? BTW: how big is your dom0?
> >
> > It's possible that dm-io or kcopyd is chewing up pages (which
> > won't show up in the slab allocator). I'm surprised they're not
> > just transient, though.
>
> When I've run into memory trouble with snapshots, I've always seen a
> stack backtrace that points me at kcopyd_client_create. Following the
> code: when creating a snapshot, a new kcopyd client is created with 256
> (SNAPSHOT_PAGES in dm-snap.c) pages (= 1 MB) dedicated to that snapshot.
> I think I managed to dig up the logs from one of the failures I've seen;
> I've attached them to this message.

It might be worth adding "| __GFP_REPEAT" to the alloc_page in drivers/md/kcopyd.c

If this fixes things, we could probably get the patch accepted into mainline Linux. (Of course, __GFP_REPEAT may be a nop under Linux's VM...)

Ian
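Concretely, the suggested tweak is a one-liner against kcopyd's page allocation (a sketch only -- the surrounding function name and context are approximate, not an exact diff against 2.6.8):

```diff
--- a/drivers/md/kcopyd.c
+++ b/drivers/md/kcopyd.c
@@ static struct page_list *alloc_pl(void)
-	pl->page = alloc_page(GFP_KERNEL);
+	pl->page = alloc_page(GFP_KERNEL | __GFP_REPEAT);
```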
On Tue, Sep 28, 2004 at 09:06:59PM +0100, Ian Pratt wrote:
> It might be worth adding "| __GFP_REPEAT" to the alloc_page in
> drivers/md/kcopyd.c

I think that __GFP_REPEAT is a no-op for single-page allocations, as in this case (though I haven't tried it). __GFP_NOFAIL might work, but that sounds like a cure worse than the disease.

--Michael Vrable
> On Tue, Sep 28, 2004 at 09:06:59PM +0100, Ian Pratt wrote:
> > It might be worth adding "| __GFP_REPEAT" to the alloc_page in
> > drivers/md/kcopyd.c
>
> I think that __GFP_REPEAT is a no-op for single-page allocations, as in
> this case (though I haven't tried it). __GFP_NOFAIL might work, but
> that sounds like a cure worse than the disease.

Yep, you're right. From mm/page_alloc.c:

	if (!(gfp_mask & __GFP_NORETRY)) {
		if ((order <= 3) || (gfp_mask & __GFP_REPEAT))
			do_retry = 1;
		if (gfp_mask & __GFP_NOFAIL)
			do_retry = 1;
	}
	if (do_retry) {
		blk_congestion_wait(WRITE, HZ/50);

I think it's worth trying GFP_NOFAIL just to see what happens. The correct fix is probably to wrap the page_alloc in a loop that retries a few times, maybe something like:

	unsigned long start = jiffies;

	while ((pl->page = alloc_page(GFP_KERNEL)) == NULL &&
	       jiffies - start < 5*HZ)
		blk_congestion_wait(WRITE, HZ/5);

Ian
> I would certainly appreciate this. Then I can keep my automation roughly
> in line with what's going to happen anyway, so I can move from my system
> to the official one without too much pain. I'm certain the official one
> will be much better anyhow :o)

OK, here are some thoughts that clarify the existing control structure and what a cluster controller ought to do. I attach an old design doc from before the current set of control tools, which I've updated to correspond better to reality.

The crux of the cluster controller is that it really consists of two parts:

 * Stuff to manage the cluster resources in a uniform way (e.g. migration,
   start / stop domains anywhere in the cluster, monitor domains... + the
   console concentrator)
 * Stuff to manage VMs (image templates, etc.) more effectively.

These are pretty independent.

We imagine all state being stored in an SQL server. Hopefully, the SQL server can provide us replication of this data, etc. On top of this, command line (and at some stage web-based) interfaces would be required to run the show.

A lot of the code shouldn't be too technically difficult, just time-consuming. The SQL database will need to be carefully designed. We would probably tend to implement daemon code in Python, like all the rest of the tools. We would love to see contributions in this area.

HTH,
Mark

> Another little query about the LVM stuff: I'm using kernel 2.6 so does
> this rule out the LVM option? The package note for Sarge's LVM2 says
> that it works with 2.4 only.
>
> Thanks Mark.
>
> Regards,
> Paul
>
> On Tuesday 28 September 2004 09:46 am, Mark A. Williamson wrote:
> > > I wonder if anyone on the list has written any scripts to automate
> > > the management of VMs with loopback images. Here's what I want to be
> > > able to do: [...]
> >
> > Managing loopback block devices (and other non-physical block devices)
> > will get friendlier than it currently is. They'll get automatically
> > allocated, deallocated etc. by Xend.
> >
> > However, the functionality you want is much like we'd envisaged for
> > the "cluster controller" some time in the future. The idea behind it
> > is to simplify the management of multiple Xen machines as a single
> > pool of resources.
> >
> > We have some preliminary design documents on this but no
> > implementation as yet. There are other people working on their own
> > cluster management schemes (hi Brian, hi Steve ;-) but there's not a
> > general-purpose Xen package for doing this.
> >
> > If you're interested, we can post some of our design docs on this
> > subject.
> >
> > Cheers,
> > Mark
> I think it's worth trying GFP_NOFAIL just to see what happens.
> The correct fix is probably to wrap the page_alloc in a loop that
> retries a few times, maybe something like:
>
>	unsigned long start = jiffies;
>
>	while ((pl->page = alloc_page(GFP_KERNEL)) == NULL &&
>	       jiffies - start < 5*HZ)
>		blk_congestion_wait(WRITE, HZ/5);

It's likely not helped by the fact that the network driver has big allocation spikes periodically when it refills the packet receive ring. I'll check in something to smooth out those allocations...

-- Keir
On Tue, Sep 28, 2004 at 09:21:12AM -0500, Brian Wolfe wrote:
> I'm far from having a working automated system right now. I still have
> the iSCSI death issues to deal with when combining 2 iSCSI target
> devices into a raid-1 array in Domain-0 to be exported to the XenLinux
> unprived domain. 8-( NFS roots still periodically "lock up" momentarily
> (~2 to 10 seconds) when using nfs root. So the solution isn't ready yet.
>
> I do highly recommend that people use LVM LVs for individual exportable
> block devices and to forget about using COWs to "save space". Adding a
> 200GB disk is dirt cheap (unless you are talking about scsi) so it's not
> worth the savings IMHO to multi-cow a filesystem into 2+ unprived
> domains on a single machine. The 1 to 2 GB of saved space just isn't
> worth it at $0.10 per GB of disk space. :) Add in the system overhead
> and seek times required for COWs after your first FS upgrade and your
> return is worse than using separate partitions/LVs in my experience.

I think the best use one could get from LVM snapshots is background cloning of filesystems, i.e. you'd use a modified snapshot driver which would slowly create a 1-to-1 copy of the whole partition but allow it to be used immediately, copying being done when the disk/machine is otherwise idle.

    christian
I like this suggestion. Instead of having a bunch of templates only, I could also have standby partitions, ready to start immediately. Storage *is* cheap, so the cost is low for this.

Paul

On Wednesday 29 September 2004 10:36 pm, Christian Limpach wrote:
> On Tue, Sep 28, 2004 at 09:21:12AM -0500, Brian Wolfe wrote:
> > I'm far from having a working automated system right now. [...]
>
> I think the best use one could get from LVM snapshots is background
> cloning of filesystems, i.e. you'd use a modified snapshot driver
> which would slowly create a 1-to-1 copy of the whole partition but
> allow it to be used immediately, copying being done when the
> disk/machine is otherwise idle.
>
> christian
> I like this suggestion. Instead of having a bunch of templates only, I
> could also have standby partitions, ready to start immediately. Storage
> *is* cheap, so the cost is low for this.

I expect it wouldn't be too hard to modify the dm-snap driver to do this. I think we'd end up with something rather more robust, as the current snapshot mechanism is very vulnerable to disk errors and other corruption.

In any event, dm-snap should probably be modified such that it shares a kcopyd memory pool among all the active snapshots rather than allocating 1MB for each.

It's a pity that the snapshot exception table is entirely memory resident, stored as a hash table. Turning it into an on-disk tree cached in memory would be better. All things for the todo list...

Ian

> Paul
>
> On Wednesday 29 September 2004 10:36 pm, Christian Limpach wrote:
> > On Tue, Sep 28, 2004 at 09:21:12AM -0500, Brian Wolfe wrote:
> > > I'm far from having a working automated system right now. [...]
> >
> > I think the best use one could get from LVM snapshots is background
> > cloning of filesystems, i.e. you'd use a modified snapshot driver
> > which would slowly create a 1-to-1 copy of the whole partition but
> > allow it to be used immediately, copying being done when the
> > disk/machine is otherwise idle.
> >
> > christian
Mark A. Williamson
2004-Sep-30 20:58 UTC
Loop and ENBD device management (was Re: [Xen-devel] Automation scripts)
Initial support for Xend to automatically bind loop and ENBD devices is now in the unstable tree.

To use a file for a domain's disk, the syntax is:

	'file:/path/to/file,target_dev,mode'

For ENBD (disclaimer: the scripts for ENBD are untested and probably won't work yet; if someone could test this on a non-production system, that'd be good):

	'enbd:host:port,target_dev,mode'

Binding and unbinding these devices to local device nodes is done by the scripts /etc/xen/{block-file,block-enbd}. You can edit those scripts to change their behaviour. To add support for a device of type "foo" you can just add a script /etc/xen/block-foo and a config item in the Xend config.

This should already be useful, at least for file-based disks. It'd be nice to have the ENBD script working and have some kind of iSCSI script at some stage.

Cheers,
Mark

On Monday 27 September 2004 19:23, Paul Dorman wrote:
> Hi all,
>
> I wonder if anyone on the list has written any scripts to automate the
> management of VMs with loopback images. [...]
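For illustration, a domain config using the new file: syntax might look something like the following. This is a hypothetical /etc/xen/guest1 fragment (Xen config files are Python); the kernel path, image paths, and names are invented for the example, not taken from this thread:

```python
# Hypothetical domain config fragment; adjust kernel, paths, and sizes.
kernel = "/boot/vmlinuz-2.6.8.1-xenU"
name = "guest1"
memory = 64

# Each entry is 'file:<image path>,<device as seen by the guest>,<mode>';
# Xend binds each image to a free loop device automatically.
disk = ["file:/var/images/guest1.img,sda1,w",
        "file:/var/images/guest1-swap.img,sda2,w"]

root = "/dev/sda1 ro"
```

A physical LV could be exported the same way by swapping the file: prefix for phy:.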
Mark A. Williamson
2004-Sep-30 21:00 UTC
Loop and ENBD device management (was Re: [Xen-devel] Automation scripts)
Initial support for Xend to automatically bind loop and ENBD devices is now in the unstable tree.

To use a file for a domain's disk, the syntax is:

	'file:/path/to/file,target_dev,mode'

For ENBD (disclaimer: the scripts for ENBD are untested and probably won't work yet; could someone please test this on a non-production system?):

	'enbd:host:port,target_dev,mode'

Binding and unbinding these devices to local device nodes is done by the scripts /etc/xen/{block-file,block-enbd}. You can edit those scripts to change their behaviour. To add support for a device of type "foo" you can just add a script /etc/xen/block-foo and an appropriate config entry.

Cheers,
Mark

On Monday 27 September 2004 19:23, Paul Dorman wrote:
> Hi all,
>
> I wonder if anyone on the list has written any scripts to automate the
> management of VMs with loopback images. [...]
I've been looking into the LVM snapshot/memory allocation troubles and will try to come up with a fix.

FYI: in doing more searching for information about the problem, I did come across this:
    http://www.redhat.com/archives/dm-devel/2004-January/msg00068.html

"The problem seems to be that dm-ioctl-v4.c sets the PF_MEMALLOC flag for the current process. Looking at the memory allocator (__alloc_page) this means that the VM will think the memory allocation is already running (and this is a recursion) so it will not try to free pages / rebalance page or whatever."

...

I'm looking into sharing memory between the snapshots instead of giving each snapshot its own private allocation of pages for I/O. (As I'd like to scale to a large number of snapshots, and don't want to need >1 MB of kernel memory per snapshot.)

--Michael Vrable