Hi,

Currently I'm trying to set up a number of servers with Xen 3.0. To ensure high availability, this setup should support live migration of the domUs between the hardware nodes.

On the hardware side we have a number of servers connected to a SAN via fibre channel. The problem is that I can't find any definitive requirements for the virtual block devices and filesystems presented to the domUs. The documentation doesn't say much on that subject. I have read a number of example setups ranging from ext3 on (non-cluster) LVM volumes on the SAN disk to cluster-aware solutions based on e.g. OCFS2.

As far as I can tell, no concurrent access to a domU's storage from multiple hosts takes place during live migration or otherwise. So I'm wondering whether cluster-safe technology like OCFS2, GFS or CLVM is really necessary to ensure that the domU's filesystem is not corrupted during migration. If possible I would like to avoid using cluster software, as it brings with it new points of failure.

So my questions are:
What are the actual requirements on the domUs' storage?
Could you give me a few examples of thoroughly tried and tested setups?

Thanks in advance

Björn

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Fast Jack
> Sent: 12 June 2007 17:08
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] live migration on SAN
>
> So my questions are:
> What are the actual requirements on the domUs' storage?

The obvious requirement is that the storage is available to both physical machines, so some sort of networked storage is absolutely necessary. I don't believe that it needs to be "cluster" or "multi-access" capable, although I'm not 100% sure. The reason I believe this to be superfluous is that the domain on the "new" machine isn't actually started until AFTER the domain on the "old" machine has been stopped.
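To make the "available to both machines" requirement concrete, here is a minimal sketch of what such a setup looks like. The guest name, config path and LUN device path are made-up examples, and the config must be identical on both dom0s:

```sh
# On BOTH dom0s: the guest config points at the *same* shared FC LUN.
# (Use a stable /dev/disk/by-id path so device naming can't differ per host.)
cat > /etc/xen/bjorn-guest <<'EOF'
name   = "bjorn-guest"
memory = 512
disk   = [ 'phy:/dev/disk/by-id/scsi-SAN_LUN_17,xvda,w' ]  # shared SAN LUN
EOF

# Start the guest on host A, then move it live to host B:
xm create /etc/xen/bjorn-guest          # run on host A
xm migrate --live bjorn-guest hostB     # host B's xend must accept relocation
```

Note that `xm migrate` only transfers memory and CPU state; the disk is never copied, which is exactly why the same block device has to be visible from both dom0s.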
What makes me unsure is that there is a chance that some disk read/write operation from the "old" machine is still "in flight" somewhere [actually, only writes are a problem here], and thus will arrive after some read request by the "new" machine. The bad thing about getting this wrong is of course that the problems caused by such "in flight" operations will most likely be harmless, and the result of any "cross" access will go unnoticed; but on the rare occasion where it does go wrong, according to Murphy's law, it will be something important that gets messed up. I don't really know how to figure out whether there is a possible race condition between data written by the old guest and the new guest reading the same data.

--
Mats
On Tue, Jun 12, 2007 at 06:07:37PM +0200, Fast Jack wrote:
> So my questions are:
> What are the actual requirements on the domUs' storage?
> Could you give me a few examples of thoroughly tried and tested setups?

As I see it, the simplest thing that should work is exporting whole LUNs/disks from the SAN (or via iSCSI) to the Xen dom0s. Having those, say, 8 GB LUNs accessible by the dom0s involved in the migration should be all you need. If the SAN only offers many small LUNs, you need at least LVM on top of them to be able to use larger chunks; the requirement that the dom0s accessing the one LVM system do not interfere with each other then calls for an LVM that is aware of more than one accessing node, like CLVM or EVMS.

Christian
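As a sketch of the LVM-over-LUN variant described above, assuming CLVM is installed and its locking daemon (clvmd) is running on every dom0; the LUN path, volume group and LV names are examples:

```sh
# One big SAN LUN carved into per-domU logical volumes.
pvcreate /dev/mapper/san-lun0
vgcreate --clustered y vg_san /dev/mapper/san-lun0   # cluster-aware VG (CLVM)
lvcreate -L 8G -n domu1-disk vg_san

# The domU config then uses the LV directly as its virtual disk:
#   disk = [ 'phy:/dev/vg_san/domu1-disk,xvda,w' ]
```

The point of `--clustered y` is that each dom0's kernel caches the VG metadata; with plain LVM, a change made on one node (lvcreate, lvextend) leaves the others with stale metadata, while clvmd propagates the locking and refreshes across all nodes.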
I am currently doing live migration with a SAN backend. I use OCFS2 on the dom0s to share the domUs between themselves.

Fast Jack wrote:
> So my questions are:
> What are the actual requirements on the domUs' storage?
> Could you give me a few examples of thoroughly tried and tested setups?
2007/6/12, Fast Jack <fastjack75@gmail.com>:
> As far as I can tell, no concurrent access to a domU's storage from
> multiple hosts takes place during live migration or otherwise. So I'm
> wondering whether cluster-safe technology like OCFS2, GFS or CLVM is
> really necessary to ensure that the domU's filesystem is not corrupted
> during migration. If possible I would like to avoid using
> cluster software, as it brings with it new points of failure.

Low-impact options I'd see (avoiding most of the layers that make up a cluster):
- EVMS + Cluster Segment Manager
- Red Hat CLVM (I think I'd choose that)

From experience, the risk of concurrent access to data segments, or even just to disk labels, is high and annoying: out-of-sync kernel labels, relabeling a disk that looks all unused and empty, udev configuration errors that shift device names, dozens more down to the simplest thing, someone dd'ing all over your disks. All of these usually go away using some kind of cluster stack, SCSI reservations and such. Of course these bring extra risk, and configuration woes might occur, but they haven't been invented for no reason. Personally, I found OCFS2 is the easiest way out.

> So my questions are:
> What are the actual requirements on the domUs' storage?

The storage must be reachable either by pointing at a file (file:/tap:aio) or at something under /dev (phy:), but the usual write locking (r/r!/w/w!) in block-attach only works for a single domU.
So if ever two dom0s start the same domU, the first metadata update will turn the backend storage, whatever type, whatever storage, whatever filesystem, into rubbish.

> Could you give me a few examples of thoroughly tried and tested setups?

Can't give you either of those :p I hear that heartbeat2, the Red Hat Cluster Suite and PRIMECLUSTER are quite tried and tested. Most easier setups have been tested and fail at some point.

btw:
> I don't really know how to figure out whether there is a possible race condition
> between data written by the old guest and the new guest reading the same data.

Yes, hopefully the new guest panics. Like someone waking up in the wrong house, next to the wrong wife.

Florian

--
'Sie brauchen sich um Ihre Zukunft keine Gedanken zu machen'
('You need not worry about your future')

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
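Since nothing in Xen itself stops a second dom0 from creating a domU that is already running elsewhere, a minimal (and admittedly still racy) guard is a wrapper that asks the peer first. Hostname, guest name and paths are made-up examples; a real setup should use a cluster resource manager with fencing instead:

```sh
#!/bin/sh
# start-guest.sh GUEST -- refuse to start if the peer dom0 already runs it.
GUEST="$1"
PEER="hostB"                        # example: the other dom0 in the pair

# `xm list <name>` exits non-zero when the domain does not exist there.
if ssh "$PEER" xm list "$GUEST" >/dev/null 2>&1; then
    echo "refusing: $GUEST is already running on $PEER" >&2
    exit 1
fi
xm create "/etc/xen/$GUEST"
```

The check-then-act window here is exactly why this is only a mitigation: between the ssh check and `xm create`, the peer could still start the guest. Only a proper cluster manager (heartbeat2, Red Hat Cluster Suite) with fencing closes that window, which is the point made above.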