Anyone aware of any clustering package for Xen, in order to gain redundancy, etc.?

Mike
Thanks Mark, I'm just reading on it now. Sounds like it allows fail over but am not sure that it's an actual cluster, as in redundancy?

Cool lead though, thanks so much.

Mike

On Tue, 20 Jan 2009 17:39:52 +1100, Mark Walkom wrote:
> Ganeti - http://code.google.com/p/ganeti/
Mike,

Can't definitively say, sorry; it's just something I came across in my search for Xen management utilities.

Cheers,
Mark

2009/1/20 lists@grounded.net <lists@grounded.net>:
> Sounds like it allows fail over but am not sure that it's an actual cluster, as in redundancy?
I use SLES10 SP2 for my dom0, which has a few tools that make this possible:
- EVMS + Heartbeat for shared block devices
- OCFS2 for a clustered filesystem
- Heartbeat for maintaining availability.

I have a volume shared out from my SAN that's managed with EVMS on each of my Xen servers. I created an OCFS2 filesystem on this volume and have it mounted on all of them. This way I do file-based disks for all of my domUs and they are all visible to each of my hosts. I can migrate the domUs from host to host. I'm in the process of getting Heartbeat set up to manage my domUs - Heartbeat can be configured to migrate VMs or restart them if one of the hosts fails.

It isn't a "single-click" solution - it takes a little work to get everything running, but it does work.

-Nick
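To make that concrete, here is a minimal sketch of what such a setup looks like from the dom0 side. The device path, mount point, guest name, and target host below are placeholders for illustration, not details from Nick's environment, and the OCFS2 cluster stack (o2cb) is assumed to be configured already:

    # on every dom0: mount the shared EVMS/OCFS2 volume at the same path
    mount -t ocfs2 /dev/evms/xenimages /var/lib/xen/images

    # each domU config points at a file on that shared mount, e.g.
    #   disk = [ 'file:/var/lib/xen/images/web01/disk0.img,xvda,w' ]
    # so any host can start the guest, and moving it live is one command:
    xm migrate --live web01 xenhost2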
Nick,

Is your SAN an Active-Active or Active-Passive SAN? I'm looking to set something up like what you're doing, but my SAN only supports Active-Passive. We originally looked at Win2K8 with Hyper-V but fortunately that requires a SAN that supports an Active-Active configuration.

I'm using SLES 10 SP2 for dom0, and will be running SLES 10 SP2 domUs as well. I am running Xen 3.2.
On Tue, Jan 20, 2009 at 8:21 PM, Nick Couchman <Nick.Couchman@seakr.com> wrote:
> I have a volume shared out from my SAN that's managed with EVMS on each of
> my Xen servers. I created an OCFS2 filesystem on this volume and have it
> mounted on all of them.

That setup sounds like it has a lot of overhead. In particular, AFAIK a clustered filesystem (like OCFS2) has lower I/O throughput (depending on the workload) compared to a non-clustered FS. What kind of workload do you have on your domUs? Are they I/O-hungry (e.g. busy database servers)?

Also, considering that (according to Wikipedia):
- IBM stopped developing EVMS in 2006
- Novell will be moving to LVM in future products

IMHO it'd be better, performance- and support-wise, to use cLVM and put the domU filesystems on LVM-backed storage. Better yet, have your SAN give each domU its own LUN and let all dom0s see them all. domU config files should still be on a cluster FS (OCFS2 or GFS/GFS2) though.

Regards,
Fajar
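As a sketch of the two alternatives Fajar suggests (the volume group, LV, and LUN names here are made up): either carve one logical volume per guest out of a shared, clustered volume group, or hand each guest a whole SAN LUN under a device name every dom0 resolves identically.

    # option 1: cLVM - one LV per guest on a shared volume group (clvmd running)
    lvcreate -L 20G -n web01-disk0 vg_xen
    #   disk = [ 'phy:/dev/vg_xen/web01-disk0,xvda,w' ]

    # option 2: one SAN LUN per guest, referenced by a stable device name
    #   disk = [ 'phy:/dev/disk/by-id/scsi-example-web01-lun,xvda,w' ]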
On Tue, Jan 20, 2009 at 1:59 PM, lists@grounded.net <lists@grounded.net> wrote:
> Sounds like it allows fail over but am not sure that it's an actual cluster, as in redundancy?

Exactly what kind of redundancy are you looking for? Is it (for example) having several domUs serving the same web content and having a load balancer in front of them so traffic is balanced among working domUs?
My SAN is Active-Active, but you should still be able to accomplish this even with an Active-Passive SAN. This shouldn't be an issue with normal operations at all - it'll just be a matter of whether things can fail over correctly in the event of a SAN controller failure.

-Nick
> Exactly what kind of redundancy are you looking for? Is it (for
> example) having several domUs serving the same web content and having
> a load balancer in front of them so traffic is balanced among working
> domUs?

Here's a small example.

I have a GFS cluster of servers which serve up LAMP services, with two (redundant) LVS servers as a front-end load balancer. Each server has a fibre channel card attaching it to the FC network so that it can see the GFS volumes as its own. It's a pretty nicely redundant service; if one server fails, nothing goes down, things just keep on running. The one problem I haven't bothered with is that if there is a failure, the user has to reconnect because the session gets messed up. Otherwise, it's fully redundant.

With my virtualization testing, things aren't so much fun. When a server goes down, or needs to be rebooted, or anything which causes it to have to be down, all guests on that server go down with it, of course. Migrating over to another machine is pointless because it takes way too much work to migrate just to reboot a server.

Of course, what would be best would be proper redundancy, so that there are multiple containers working together as one. If one goes down, the others simply keep going and no servers go down.

I have no idea how I'll deal with win servers because they tend to need stupid re-activating when moved to different hardware, at least in the limited experience I've had so far. Either way, that's not an issue because most of the servers, if not all, are going to be Linux.

I've not done anything with this yet because I'm fairly new to virtualization, but even new, I have quickly realized that this is very badly needed. I started with VMware Server 2.0, was about to migrate over to ESX, but decided it was time to try Xen.

So, I'm looking for a way to have proper redundancy, not just migration, because that's not an option when you just need to reboot or take a server down for some reason.

I'll post another question about storage separately so that this thread doesn't become too convoluted.

Mike
Hi,

Well, if you have the VMs on a SAN like yours, you can build some form of HA. As was said before, when a heartbeat to a domU or even an entire machine fails, you mark the VMs on it as down and reboot them from another server using the cLVM-backed storage. Citrix XenServer does this magic behind a little GUI with very little work for you. Having a proper SAN is half the battle, then it's the setup. And yes, for a highly available environment you need several servers.

As for doing maintenance on a machine, yes, manually migrating all the VMs might be a bit of a pain. You should really have some kind of script in place to "evacuate" the node, which just migrates all of its VMs, live migration or otherwise (again, with the SAN architecture all you need is a reboot).

Regards,
Barry
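A rough sketch of the kind of "evacuate" script Barry describes, assuming live migration already works between the hosts; the target hostname and the simple parsing of "xm list" output are illustrative only:

    #!/bin/sh
    # live-migrate every running guest off this host to the dom0 named in $1,
    # e.g.  ./evacuate.sh xenhost2   (no error handling; skips Domain-0)
    TARGET="$1"
    xm list | awk 'NR > 1 && $1 != "Domain-0" { print $1 }' | \
    while read dom; do
        xm migrate --live "$dom" "$TARGET"
    done

Run before taking the host down for maintenance, then reboot it once the list of guests is empty.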
> As was said before, when a heartbeat to a domU or even an entire
> machine fails, you mark the VMs on it as down and reboot them from
> another server using the cLVM-backed storage.

I understand, but that's not redundancy, that's a failover.

Anyone try this?

http://mln.sourceforge.net/index.php?page=plugins

It sounds like redundancy, is it?

Mike
Hi,

That's why you combine both clustering and virtualization. You get the benefits of clustering (being able to use redundancy) and virtualization (being able to move servers around, change hardware, etc.). If done properly this will increase your uptime.

Not sure how MLN provides redundancy.

Regards,
Barry
> That's why you combine both clustering and virtualization.
> You get the benefits of clustering (being able to use redundancy) and
> virtualization (being able to move servers around, change
> hardware, etc.)
> If done properly this will increase your uptime

Right. I read an article about using Rocks, but it ends up that the idea there is to use VM containers for the Rocks cluster. In other words, I'd like to find a cluster package that would allow VM servers to be clustered.

So yes, this is what I'm hoping to find: a cluster solution that would work with VM servers. Do you have some ideas, software pointers, etc.?

Mike
> So yes, this is what I'm hoping to find: a cluster solution that would work
> with VM servers. Do you have some ideas, software pointers, etc.?

These might be of interest to you:

http://workspace.globus.org/clouds/clusters.html
http://www.opennebula.org/doku.php?id=about
"lists@grounded.net" <lists@grounded.net> writes:> this is what I''m hoping to find, a cluster solution that would work > with vm servers.Why wouldn''t any cluster solution work with vm servers? They just shouldn''t care. What''s the problem? -- Feri. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
>> this is what I'm hoping to find, a cluster solution that would work
>> with vm servers.
>
> Why wouldn't any cluster solution work with vm servers?
> They just shouldn't care. What's the problem?

Like I said from the start, that's what I'm looking for: a cluster package which would lend itself to handling this type of requirement well. So, for example, GFS, because of its sometimes problematic fencing issues, isn't a good idea for this, based on my experience using it for web clusters. I'm sure there are some good packages out there being developed for clustering virtual servers, which is why I asked for leads from those using Xen :).

Mike
lists@grounded.net wrote:
>> Exactly what kind of redundancy are you looking for?
>
> Here's a small example.
>
> I have a GFS cluster of servers which serve up LAMP services, with two (redundant) LVS servers as a front-end load balancer.

Ah, so you're familiar with GFS and LVS. From your earlier post I'm not sure whether you're a newbie or someone experienced :)

> The one problem I haven't bothered with is that if there is a failure, the user has to reconnect because the session gets messed up. Otherwise, it's fully redundant.

That is the nature of TCP. To achieve full redundancy, usually the protocol, client, and/or server implementation need to adapt. Examples:

- With HTTP, (for example) using LAMP servers with session data located on shared storage or a db, you get a "redundant" setup, as in you can connect to any server and will get the same session. There's still a possibility of failure though: the client won't retry the request if the data transfer is interrupted in the middle.

- NFS can handle server failover better than HTTP. NFS over TCP will automatically reconnect if disconnected, and retry a failed request. This setup still has a possible problem: if an NFS TCP client is moved from one server to another it will work, but when moved back again to the first server in a short time (say several minutes) it will not work. To handle this particular issue you can use NFS over UDP.

> Of course, what would be best would be proper redundancy, so that there are multiple containers working together as one. If one goes down, the others simply keep going and no servers go down.

So you want to achieve the same level of "redundancy" with VMs/domUs as you would with (from my example above) HTTP or NFS? Then the answer is it's not possible.

With HTTP or NFS, two servers can share the same data (via GFS, for example). This means that they serve the same data, and for a failover to occur the client simply needs to (re)connect (to be more accurate, be connected by the load balancer) to the other server. With generic VMs (as in Windows, Linux on ext3, or anything that uses a non-cluster fs), however, sharing the same data is not possible. You can not have two generic VMs using the same backend storage because it will lead to data corruption. An exception is when the VM is using a cluster FS like GFS, but that's another story.

What IS possible, though, is LIVE migration. For this to work:
- backend storage for the domU is on a shared block device (SAN LUN, cLVM, GFS, nfs, whatever) accessible by both dom0s
- at any given time, only one dom0 can start a particular domU
- moving domUs between dom0s can be done using live migration. This migration will be transparent to the domU (e.g. it doesn't need a reboot) and to clients connected to the domU (they will only see something like a network glitch, which the network stack can handle correctly).

BTW, that is also the basic principle with VMware ESX. They use their own cluster FS for VM backend storage, but the rest is similar.

Regards,
Fajar
> Ah, so you're familiar with GFS and LVS. From your earlier post I'm not
> sure whether you're a newbie or someone experienced :)

I'm always a newbie :). What I mean is that I take on various new technologies but never have the time to become very proficient with any one of them. I try to learn them well enough to be able to put them to use, then over time I try to learn more so that I can fine-tune, etc. I guess my newness to Xen is showing in this case also, and of course, anyone who is new to anything will need to eventually become aware of proper terminology, etc. I think that's probably the biggest giveaway.

Anyhow, yes, I had been using GFS for about 3 years now, I think. I slowly started going more towards filer-based NFS because the fencing issues were becoming rather frustrating.

> With session data located on shared storage or a db, you get a "redundant" setup, as in you can
> connect to any server and will get the same session. There's still a
> possibility of failure though: the client won't retry the request if the data
> transfer is interrupted in the middle.

Right, this is somewhat simple because we're not talking about an entire operating system needing to be redundant.

> So you want to achieve the same level of "redundancy" with VMs/domUs as
> you would with (from my example above) HTTP or NFS? Then the answer is
> it's not possible.

I would guess I'm not alone in this thinking. I think being able to create redundant virtual environments would be the ultimate in the near future; I hope this is already in the works. So until then, what we have is still pretty good - taking advantage of moving the cluster onto the virtual servers as guests is still a pretty good positive - but it would/will be nice to see full redundancy in virtualization. That's when things will be incredibly powerful in network computing.

> An exception is when the VM is using a cluster FS
> like GFS, but that's another story.

I was thinking about this but thought that it might not work out well depending on how it was set up. One could have, say, a GFS share for all of the VM servers, then each VM server could have its own local storage to cut down on network storage I/O use. The guests would run locally, though they could have network storage as well. Either way, it was this thinking that led me to hoping that there might be some way of having redundancy for the VM servers as well.

> What IS possible, though, is LIVE migration. For this to work:
> - backend storage for the domU is on a shared block device (SAN LUN,

Speaking of this, while I understand that this is not redundancy, it would be interesting to know how quickly such a migration could occur, as this sounds like the immediate solution at least.

Mike
> Have you tried CentOS?

That's what my Xen servers are running on now.

> It comes with all you need for clustered systems. Also, I wouldn't
> bother using gfs to format your filesystems. I would add an iscsi LUN

I don't use GFS for the OS, just for shared storage between nodes.

Mike
Hello,

I was wondering which benchmark tools "we" use to measure the performance of a domU. Especially, I'm interested in benchmarking a W2K3 domU.

And are there any references comparing the performance of domUs to other virtualisation techniques?

Best regards,

jeroen.
lists@grounded.net wrote:
>> What IS possible, though, is LIVE migration. For this to work:
>> - backend storage for the domU is on a shared block device (SAN LUN,
>
> Speaking of this, while I understand that this is not redundancy, it would be interesting to know how quickly such a migration could occur, as this sounds like the immediate solution at least.

During migration, the longest process is copying the domU's memory. In general it'd be as fast as your dom0s' link. For example, in my test, migrating a Linux PV domU with 500MB memory over a 100Mbps link between dom0s took about 48s. During most of this time the domU can keep on doing its job (i.e. the downtime is NOT 48s, it's much shorter). Pinging the domU from outside during the migration process shows 0% packet loss. An ssh connection to the domU stops responding for a while but resumes normally after a few seconds (mostly due to TCP retransmits, which is the same thing you'd get if you yank the network cable and plug it back in).

So if you're interested in using live migration (with Xen, VMware, or any other product) you should make sure the dom0s have a fast private network.

Regards,
Fajar
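If you want to observe that gap yourself, a simple way to measure it, assuming a guest called web01 reachable at 10.0.0.51 and a second dom0 called xenhost2 (all placeholder names):

    # on a third machine: record ping replies while the guest is moved
    # (stop it with Ctrl-C once the migration is done)
    ping -i 0.2 10.0.0.51 | tee /tmp/web01-migration-ping.log

    # meanwhile, on the source dom0: push the running guest across
    xm migrate --live web01 xenhost2

    # afterwards, look for missing icmp_seq numbers or latency spikes in
    # the log - that gap is the real downtime, typically a second or two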
FWIW, I've had much better success with OCFS2's fencing methods than with GFS. My OCFS2 nodes, when they lose connectivity to the rest of the cluster, automatically reboot themselves - I'm guessing there's some kernel code in there for that. I don't need dedicated fencing devices or anything like that - OCFS2 takes care of that for me.

Also, I'd say the overall point is that there's no "magic bullet" that's going to knock out all of your redundancy concerns at the same time. Xen will take care of some of that by allowing you to create multiple domUs that share the same data and letting you live migrate those domUs from system to system, or restart them quickly and easily if one of your nodes fails. There are plenty of good solutions for setting this functionality up quickly. You still have to deal with redundant TCP and UDP connections to the services that these domUs provide, and that's completely outside the realm of Xen or any virtualization technology - that has to be done inside the domU OS, just like on physical systems.

-Nick
jeroen groenewegen van der weyden wrote on 2009-Jan-21 15:56 UTC ([Xen-users] Re: Xen domu benchmarking):
I guess my question was not challenging enough.

I see the following performance on:
  intel q9450
  dom0: opensuse 11.0 64bit, xen 3.2.1
  domU: W2K3, phy on drbd

performance of dom0:

disk
  read 50Mb/s
  write 14,87 Mb/s

cpu
  superpi 1M in 17.9 sec
  whetstone FPU 16.300 MWIPS
  dhrystone ALU 12.919 MDIPS

memory
  integer assignment 146.883
  real assignment 166.454

net throughput (gplpv driver)
  domU -> dom0 524 mbit/s

Is this performance typical? For me the asymmetric performance of the disk is curious; anybody else seeing this?

jeroen
Hi,

This is not necessarily a bad thing; drives are often worse at writing than reading. A lot of it may also depend on the actual test you are doing.

What benchmark are you running?

--
Barry van Someren
> During migration, the longest process is copying the domU's memory. In
> general it'd be as fast as your dom0s' link. For example, in my test,
> migrating a Linux PV domU with 500MB memory over a 100Mbps link between dom0s
> took about 48s. During most of this time the domU can keep on doing its job

Most of my guests are using 1024MB of memory. A minute or two of downtime is not the worst thing - not great, but better than having to copy everything over to another server, set up the guest, fire it up from scratch, etc. That would be fine until something better comes along. Everything I have is using 1GB Ethernet so I should be able to get some good migration speeds.

So how does this work - you have a standby VM server which is already aware of the guest, and the migration transfers the memory over to that other server?

Mike
> FWIW, I've had much better success with OCFS2's fencing methods than with
> GFS. My OCFS2 nodes, when they lose connectivity to the rest of the

I've looked at ZFS and sure love the sound of that. Not sure if they have an open source version; I thought I did see one though. When I was reading up on it, it sounded a lot simpler and had a lot of additional features.

> Also, I'd say the overall point is that there's no "magic bullet" that's
> going to knock out all of your redundancy concerns at the same time. Xen

Right, I've picked that up based on the replies :). I can still fantasize, though, of a cluster of virtual servers running OSes which are totally redundant. What a perfect world that will be! And I'm SURE it's only a matter of time.
jeroen groenewegen van der weyden wrote on 2009-Jan-21 18:53 UTC (Re: [Xen-users] Re: Xen domu benchmarking):
Barry van Someren wrote:
> What benchmark are you running?

I use Fresh Diagnose - SysInfo and Benchmarks (http://www.freshdevices.com/t/t/2146/).

best regards,
jeroen
On Wed, Jan 21, 2009 at 12:20 PM, lists@grounded.net <lists@grounded.net> wrote:
> I can still fantasize, though, of a cluster of virtual servers running OSes which are totally redundant. What a perfect world that will be! And I'm SURE it's only a matter of time.

Something like this?

http://wiki.xensource.com/xenwiki/Open_Topics_For_Discussion?action=AttachFile&do=get&target=Kemari_08.pdf
Thanks for the lead, Rob!

> Something like this?
>
> http://wiki.xensource.com/xenwiki/Open_Topics_For_Discussion?action=AttachFile&do=get&target=Kemari_08.pdf

So, in essence, this is doing a sort of pre-migration, constantly updating the information that the new VM server would need in order to accept a migration command and fire up its new servers.

So because the machines are constantly syncing this information, there's no need to do a manual migration, as the data would always be there.

Nifty, and seems like something that someone would eventually have realized :).
Is that kind of what it boils down to?

Mike
That's what it looks like to me too.
Well, if that's not it, maybe that's a viable idea if someone were to look into it.

Mike

On Wed, 21 Jan 2009 17:08:33 -0600, Rob Beglinger wrote:
> That's what it looks like to me too.
On Thu, Jan 22, 2009 at 1:15 AM, lists@grounded.net <lists@grounded.net> wrote:
>> migrating a Linux PV domU with 500MB memory over a 100Mbps link between dom0s
>> took about 48s. During most of this time the domU can keep on doing its job
>
> Most of my guests are using 1024MB of memory. A minute or two of downtime is not the worst thing

I think you misunderstood. Live migration of a domU with 1G memory might take a minute or two, but the actual downtime is about a SECOND or two :D

> So how does this work - you have a standby VM server which is already aware of the guest, and the migration transfers the memory over to that other server?

Pretty much it. The actual command line is simply "xm migrate -l <domU name> <new dom0>". If you've set up xend and the dom0s properly, the domU will be migrated to the new dom0.

All dom0s should be able to access the same storage using the same path. This means that if you use iscsi (imported on dom0) for the domU backend, you should use /dev/disk/by-path or /dev/disk/by-uuid to specify the domU's disk instead of using (for example) /dev/sda.

Same thing goes for the network. If you use bridged networking, all dom0s should have bridges with the same name connected to the same network.

Regards,
Fajar
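For reference, the "set up xend properly" part mostly means enabling the relocation server on every dom0. A minimal sketch; the hostnames (xenhost1/xenhost2) and guest name (web01) are placeholders:

    # /etc/xen/xend-config.sxp (excerpt) - allow incoming live migrations
    #   (xend-relocation-server yes)
    #   (xend-relocation-port 8002)
    #   (xend-relocation-hosts-allow '^localhost$ ^xenhost1\\.example\\.com$ ^xenhost2\\.example\\.com$')

    # restart xend on each host, then the migrate command just works:
    /etc/init.d/xend restart
    xm migrate --live web01 xenhost2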
> I think you misunderstood. Live migration of a domU with 1G memory
> might take a minute or two, but the actual downtime is about a SECOND
> or two :D

I'll have to play with this. The problem is that on a failure, you never know when the server is going to go down. Makes me wonder if I could have a simple script going which constantly sends the updates to a shared drive, and some heartbeat mechanism on standby machine/s, so that if the main server goes down, another quickly takes over.

Wonder if that's the idea behind the link that Rob sent me. No need to re-invent the horse; looks like there are in fact lots of ideas out there.

Mike
>> I think you misunderstood. Live migration of a domU with 1G memory
>> might take a minute or two, but the actual downtime is about a SECOND
>> or two :D
>
> I'll have to play with this. The problem is that on a failure, you never know when the server is going to go down. Makes me wonder if I could have a simple script going which constantly sends the updates to a shared drive, and some heartbeat mechanism on standby machine/s, so that if the main server goes down, another quickly takes over.
>
> Wonder if that's the idea behind the link that Rob sent me. No need to re-invent the horse; looks like there are in fact lots of ideas out there.

You are expecting to make "an instance of a VM" fault tolerant. I haven't seen that, but I have seen storage made fault tolerant, as people have stated. So you are still concerned with the Xen server tanking (which makes redundant storage useless); you need to handle this at the application level, such as fault-tolerant services running on two Xen servers, each with their own VM: VirtualServerA on XenServer1 running some cluster-aware application that *you* need redundant, with VirtualServerB on XenServer2. Many applications and OSes have this feature - Windows Clusters, etc. You could cluster a file server this way so the backend is redundant and the file-serving application is redundant. Clients then access a virtual host that is created by the cluster software running on the two VMs...

jlc
On Thu, Jan 22, 2009 at 7:06 AM, lists@grounded.net <lists@grounded.net> wrote:
> The problem is that on a failure, you never know when the server is going to go down. Makes me wonder if I could have a simple script going which constantly sends the updates to a shared drive, and some heartbeat mechanism on standby machine/s, so that if the main server goes down, another quickly takes over.

Good luck! Please share your results if you find one that works. Though I imagine it will take a heavy hit on performance (you'd basically be syncing the domU's memory to disk).

Regards,
Fajar
> You are expecting to make "an instance of a VM" fault tolerant. I haven't
> seen that, but I have seen storage made fault tolerant, as people have stated.

Well, so far I've been talking about the entire operating system, but from all of the conversation it's not looking feasible, at least at this time.

> you need to handle this at the application level, such as fault-tolerant
> services running on two Xen servers, each with their own VM.

You're absolutely correct, of course, and that's always the first way to handle this. I was asking for leads, input, ideas; maybe something would have come up. Since I'm no developer myself, I'll take a look at the leads sent to me, but as you say, for now I'll simply try to handle things as I always have, with application redundancy.

Mike
Mike,

You might want to check out the many management frontends to Xen. I'm using the simplest one (I believe), ConVirt (formerly XenMan), for instance. When I once issued a shutdown on dom0 without first shutting down my (only) VM, its status changed to "migrating". Now, there was nowhere to migrate to, but at least it tried.

I think you can find various levels of sophistication in the frontends, some requiring SAN infrastructure and providing automatic deployment, power management, and migration, and some much simpler. Maybe you can find something that's right for you, especially if you can live with a minute or two of downtime in the event of hw failure. Migration won't help you then, but a quick or even automatic deploy will. You can find some links to various projects on the xen.org web page.

We are using VMware ESX at work and, if I understand correctly, Xen can do the same things. I can say that our ability to quickly deploy and move around live servers has drastically changed the uptime for our Windows servers, far beyond any complicated clusters we've put up previously. And it can be managed by our 24x7 operators. Our Sun/Unix guys who haven't yet managed to virtualize live in danger of becoming extinct.

Cheers,
/Chris
Hi Chris,

Thanks for all of the input.

> You might want to check out the many management frontends to Xen. I'm
> using the simplest one (I believe), ConVirt (formerly XenMan), for instance.

So far, just playing with Xen, I've only used the desktop, which I don't much care for, and command line virsh to control things. In my world, the best tool for basic management would be a CLI-based basic GUI rather than the desktop; it's just too much overkill just to manage guests. I'll take a look at the above tools too.

> even automatic deploy will. You can find some links to various projects on
> the xen.org web page.

Not sure if you read this full thread, but folks have given some really great leads to quite a number of tools, some more complex than others, but all useful.

> around live servers has drastically changed the uptime for our Windows
> servers, far beyond any complicated clusters we've put up previously.

When I think about my concerns, they are really only related to win machines. I have a handful of win servers which need to be up and reliable, and they aren't using the greatest applications, so they can't be very redundant. That's why I started the thread wondering if I could cluster VM servers redundantly, hoping to just make the whole win OSes redundant. Anyhow, it's really the win machines that are giving me the grief; the Linux machines are easy to run redundant.

Thanks for the input!

Mike
Did you have a look at:

http://www.teegee.de/index2.php?option=com_docman&task=doc_view&gid=11&Itemid=7

Looks promising...
Adding:

http://lbvm.sourceforge.net/

Which may be interesting to some.

Mike