Hi all,

We're looking at deploying a small Xen cluster to run some of our smaller applications. I'm curious to get the list's opinions and advice on what's needed.

The plan at the moment is to have two or three servers running as the Xen dom0 hosts and two servers running as storage servers. As we're trying to do this on a small scale, there is no means to hook the system into our SAN, so the storage servers do not have a shared storage subsystem.

Is it possible to run DRBD on the two storage servers and then export the block devices over the network to the Xen hosts? Ideally, the goal is to have the effect of shared storage on the Xen hosts so that domains can be migrated between them in case one server needs to go offline. Do I run GFS on top of the DRBD mirrored device, exported via GNBD to the Xen hosts; or the other way around, using GNBD to export the DRBD mirrored device and then GFS running on the Xen hosts?

Is this possible, or is there an easier/simpler/better way to do it?

Thanks,
Tom
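To make the layering concrete, here's roughly what I have in mind (hostnames, devices and addresses are invented for illustration). A DRBD resource mirroring a partition between the two storage servers:

    resource xenstore {
        protocol C;
        on store1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.1:7789;
            meta-disk internal;
        }
        on store2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.2:7789;
            meta-disk internal;
        }
    }

and then, if I understand GNBD correctly, something like this to export it from whichever server is primary and import it on each dom0:

    # on the primary storage server
    gnbd_export -d /dev/drbd0 -e xenstore

    # on each Xen dom0
    gnbd_import -i store1

with GFS then made on the imported device from the dom0s. But I'm not sure whether GFS belongs above or below the GNBD export, hence the question.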
Take a look at iSCSI for the storage servers. iSCSI Enterprise Target is what I use here and it works well for us.

You don't really need shared filesystems if you are doing direct block I/O to LVs or raw partitions, as the Xen migration will handle the hand-off, but you will if you are using flat files. Because of this I recommend using LVs or raw partitions; clustered filesystems will put a serious overhead on the Xen guest I/O.

-Ross

-----Original Message-----
From: centos-bounces at centos.org <centos-bounces at centos.org>
To: CentOS mailing list <centos at centos.org>
Sent: Wed Jan 02 17:44:19 2008
Subject: [CentOS] Xen, GFS, GNBD and DRBD?
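For illustration, a minimal /etc/ietd.conf entry exporting an LV as a raw block device might look like this (the target IQN and LV path here are made up, adjust to taste):

    Target iqn.2008-01.com.example:xenstore.domu1
        Lun 0 Path=/dev/vg0/domu1,Type=blockio

The dom0 then just sees an ordinary disk, so the domU config can reference it directly, e.g. disk = [ 'phy:/dev/sdb,xvda,w' ], and a live migration simply has the destination host attach the same target.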
_______________________________________________
CentOS mailing list
CentOS at centos.org
http://lists.centos.org/mailman/listinfo/centos
On 03/01/2008, at 9:55 AM, Ross S. W. Walker wrote:
> Take a look at iSCSI for the storage servers. iSCSI Enterprise
> Target is what I use here and it works well for us.
>
> You don't really need shared filesystems if you are doing direct
> block io to LVs or raw partitions as the Xen migration will handle
> the hand-off, but you will if you are using flat files, because of
> this I recommend using LVs or raw partitions as clustered
> filesystems will put a serious overhead on the Xen guest io.
>
> -Ross

Ross,

I can use DRBD to mirror data between the two storage servers and iSCSI to export the block devices, but how will iSCSI cope with the failure of one storage server? Can I use heartbeat and CRM to fail over the host IP and iSCSI target to the other storage server?

Regards,
Tom
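I was picturing something like a heartbeat v1 setup, e.g. one line in /etc/ha.d/haresources (the service IP and resource names here are invented):

    store1 drbddisk::xenstore IPaddr::192.168.1.10/24/eth0 iscsi-target

where drbddisk promotes the DRBD resource to primary, IPaddr moves the service IP, and iscsi-target is whatever init script starts ietd on the surviving node. What I don't know is whether the initiators on the dom0s will ride through the switch-over cleanly.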
Hi Tom,

On Wednesday 02 January 2008 23:44:19 Tom Lanyon wrote:
> We're looking at deploying a small Xen cluster to run some of our
> smaller applications. I'm curious to get the list's opinions and
> advice on what's needed.

I'm not the biggest fan of DRBD with Xen and everything, but it's for a "small" Xen cluster, isn't it ;-) . In my opinion it brings way too much complexity into a concept that should always stay as simple as possible.

> Is it possible to run DRBD on the two storage servers and then
> export the block devices over the network to the xen hosts? [...]
> Is this possible; is there an easier/simpler/better way to do it?

For DRBD as a base for GFS you might want to have a look at http://gfs.wikidev.net/DRBD_Cookbook . I didn't test it, but it might be what you are looking for.

When thinking about GNBD you could also think about iSCSI (as already stated), as it is a standard. Make it highly available, move it onto some other two nodes, and there you go. But still you'll need shared storage there.

How about extending your thoughts also onto NFS? Again you'll have to make it highly available, which results in shared storage or DRBD.

BTW: Don't make your *small* cluster too complex to manage. ;-)

Have fun,
Marc.

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
You can fail over using iSCSI multipathing. Have the initiator log in to both targets and then set up dm-multipath to do fail-over. On the target side you could use DRBD with multiple primaries, and there you have it: redundant storage with easy fail-over.

-Ross

-----Original Message-----
From: centos-bounces at centos.org <centos-bounces at centos.org>
To: CentOS mailing list <centos at centos.org>
Sent: Wed Jan 02 20:15:58 2008
Subject: Re: [CentOS] Xen, GFS, GNBD and DRBD?
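As a sketch of that recipe (IQN, portal addresses and the DRBD fragment are invented for illustration): log the initiator in to the same target through both storage servers, and let dm-multipath collapse the two sessions into one fail-over device:

    # open-iscsi on each dom0: one session per storage server
    iscsiadm -m node -T iqn.2008-01.com.example:xenstore -p 192.168.1.1 --login
    iscsiadm -m node -T iqn.2008-01.com.example:xenstore -p 192.168.1.2 --login

    # /etc/multipath.conf fragment: plain active/passive fail-over
    defaults {
        path_grouping_policy    failover
        no_path_retry           queue
    }

    # drbd.conf net section on the storage servers, so both
    # nodes can serve the device as primary at the same time
    net {
        allow-two-primaries;
    }

With the failover grouping policy only one path carries I/O at a time, which is what you want here; the second session just sits ready for when the first storage server drops.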