Does anyone have any pointers with regard to using Xen and storing VBDs on a GFS volume under dom0? This sounds like it should work well, especially for migrating domains, but it looks as though GFS won't allow a read-write mount of a loop device (file), so I end up with read-only VBDs on the SAN, which are obviously useless. Maybe my approach is completely off, but it sounded pretty good up until I discovered the lock problem. FYI, the GFS error when mounting is listed below. Thanks in advance for any help or insight.

Output in /var/log/messages:

Apr 18 12:11:23 blade1 kernel: GFS: fsid=alpha:gfs1.0: warning: assertion "gfs_glock_is_locked_by_me(ip->i_gl)" failed
Apr 18 12:11:23 blade1 kernel: GFS: fsid=alpha:gfs1.0: function = gfs_prepare_write
Apr 18 12:11:23 blade1 kernel: GFS: fsid=alpha:gfs1.0: file = /usr/src/build/729060-x86_64/BUILD/xen0/src/gfs/ops_address.c, line = 329
Apr 18 12:11:23 blade1 kernel: GFS: fsid=alpha:gfs1.0: time = 1145387483
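For reference, the setup being attempted looks roughly like this; the device path, image name and size are illustrative, not the exact values in use:

# dom0 on each blade mounts the shared GFS volume from the SAN
mount -t gfs /dev/sdb1 /mnt/xen

# create a sparse disk image for a guest on the shared filesystem
dd if=/dev/zero of=/mnt/xen/vm01.img bs=1M count=1 seek=4095
mkfs.ext3 -F /mnt/xen/vm01.img

# the guest config then points at the image as a file-backed VBD,
# e.g. in /etc/xen/vm01:
#   disk = [ 'file:/mnt/xen/vm01.img,xvda,w' ]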
> Does anyone have any pointers with regard to using Xen and storing
> VBDs on a GFS volume under dom0? This sounds like it should work
> well, especially for migrating domains, but it looks as though GFS
> won't allow a read-write mount of a loop device (file), so I end up
> with read-only VBDs on the SAN, which are obviously useless. [...]

Sorry to not actually help, but what do you need GFS for if you have a SAN? You can't have multiple copies of the same domU writing to the LUN at once, so what purpose does a cluster filesystem serve?

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
The setup I have is 3 AMD_64DP server blades with 4 GB RAM each, attached to an FC SAN. The thought was that I would create a GFS volume on the SAN, mount it under Xen dom0 on all 3 blades, create all the VBDs for my VMs on the SAN, and thus be able to easily migrate VMs from one blade to another, without any intermediary mounts and unmounts on the blades. I thought it made a lot of sense, but maybe my approach is wrong.

On Apr 18, 2006, at 1:00 PM, John Madden wrote:

> Sorry to not actually help, but what do you need GFS for if you have
> a SAN? You can't have multiple copies of the same domU writing to
> the LUN at once, so what purpose does a cluster filesystem serve?
On Tuesday 18 April 2006 16:17, Jim Klein wrote:
> The setup I have is 3 AMD_64DP server blades with 4 GB RAM each,
> attached to an FC SAN. The thought was that I would create a GFS
> volume on the SAN, mount it under Xen dom0 on all 3 blades, create
> all the VBDs for my VMs on the SAN, and thus be able to easily
> migrate VMs from one blade to another, without any intermediary
> mounts and unmounts on the blades. I thought it made a lot of sense,
> but maybe my approach is wrong.

Not necessarily wrong, but perhaps just an unnecessary layer. If your intent is HA Xen, I would set it up like this:

1) Both machines connected to the SAN over FC
2) Both machines having visibility to the same SAN LUN(s)
3) Both machines running heartbeat with private interconnects
4) LVM lv's (from dom0) on the LUN(s) for carving up the storage for the domU's
5) In the event of a node failure, the failback machine starts with an "/etc/init.d/lvm start" or equivalent to prep the lv's for use. Then xend start, etc.

For migration, you'd be doing somewhat the same thing, only you'd need a separate SAN LUN (still use LVM inside dom0) for each VBD. My understanding is that writing is only done by one Xen stack at once (node 0 before migration, node 1 after migration, nothing in between), so all you have to do is make that LUN available to the other Xen instance and you should be set. A cluster filesystem should only be used when more than one node must write to the same LUN at the same time.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
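Step 5 above could look something like the following on the surviving node. This is only a rough, untested sketch; the volume group name ("xenvg") and the guest config names are placeholders, not anything from this thread:

#!/bin/sh
# Hypothetical failover helper run by heartbeat on the surviving node.
case "$1" in
  start)
    # make the LVM volumes on the shared LUN active in this dom0
    # ("/etc/init.d/lvm start" or equivalent, as described above)
    vgchange -ay xenvg
    # bring up the Xen daemon and restart the guests that lived on
    # the failed node
    /etc/init.d/xend start
    for cfg in /etc/xen/vm01 /etc/xen/vm02; do
      xm create "$cfg"
    done
    ;;
  stop)
    # on failback, shut the guests down and deactivate the volumes so
    # only one dom0 ever has them active
    xm shutdown -a -w
    vgchange -an xenvg
    ;;
esac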
I've done exactly this (with iSCSI instead of FC), but I did take the extra step to configure GFS, as I intended each cluster node to run various DomU's (3 or 4 on each). The DomU VBD's are all stored on the same iSCSI LUN, so each node can read/write to the LUN simultaneously with GFS.

It took a lot of trial and error to get everything working - I got stuck trying to figure out why the LVM2-cluster package was missing in Fedora Core 5, and finally realized that it wasn't really necessary as long as I did all of the LVM administration from one node and used the pvscan/vgscan/lvscan tools on the other nodes to refresh the metadata.

Stephen Palmer
Gearbox Software
CIO/Director of GDS

> -----Original Message-----
> From: John Madden
> Sent: Tuesday, April 18, 2006 3:31 PM
> Subject: Re: [Xen-users] Xen and GFS
>
> Not necessarily wrong, but perhaps just an unnecessary layer. If your
> intent is HA Xen, I would set it up like this: [...]
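That workflow would go roughly like this; the volume group and LV names are placeholders, not the ones from this cluster:

# On the one node used for LVM administration, make the change -
# for example, growing the LV that carries the shared GFS filesystem
lvextend -L +10G /dev/xenvg/gfs01
# (gfs_grow on the mounted filesystem would then pick up the new space)

# On every other node, with no clvmd running, re-read the on-disk
# LVM metadata by hand so the change becomes visible there too
pvscan
vgscan
lvscan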
> For migration, you'd be doing somewhat the same thing, only you'd
> need a separate SAN LUN (still use LVM inside dom0) for each VBD.

Just use the same shared LVM (see clvm).

> so all you have to do is make that LUN available to the other Xen
> instance and you should be set.

And be sure that the old node won't have data sitting in its memory cache that hasn't yet been written to disk. Will Xen take care of this? Use clvm to sync metadata and hack the Xen VBD scripts to do lvchange -aly / lvchange -aln on start/stop of a VM. That will keep everyone synced automagically and flush pending writes on shutdown.

BR,

--
Sylvain COUTANT

ADVISEO
http://www.adviseo.fr/
http://www.open-sp.fr/
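A sketch of the kind of hook being suggested here. The script itself and the way it gets wired into the Xen VBD scripts are assumptions for illustration; the lvchange flags are the ones named above (-aly activates an LV on the local node only via clvmd, -aln deactivates it there):

#!/bin/sh
# Hypothetical wrapper invoked when a VM starts or stops on this node.
# $1 = start|stop, $2 = LV backing the guest's VBD,
# e.g. /dev/xenvg/vm01-disk (placeholder name).
LV="$2"

case "$1" in
  start)
    # activate the LV on this node only; clvmd takes the lock
    lvchange -aly "$LV"
    ;;
  stop)
    # deactivate locally so pending writes are flushed and the lock
    # is released before another node activates the LV
    lvchange -aln "$LV"
    ;;
esac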
Thanks again. I was trying to avoid the mount/unmount complexity and use SAN space more efficiently by simply keeping all the xenU domains on the shared file system, but it looks as though that won't work quite like I had hoped. I'm not looking for HA per se (although this is important), more for flexibility when load balancing the VMs running on all the blades, and safer live migrations from blade to blade. I'm a little nervous about having a LUN up on two boxes at the same time, as I've got some experience with killing file systems this way (in the test lab, anyway).

On Apr 18, 2006, at 1:30 PM, John Madden wrote:

> Not necessarily wrong, but perhaps just an unnecessary layer. If your
> intent is HA Xen, I would set it up like this: [...] A cluster
> filesystem should only be used when more than one node must write to
> the same LUN at the same time.
> Thanks again. I was trying to avoid the mount/unmount complexity and
> use SAN space more efficiently by simply keeping all the xenU domains
> on the shared file system, but it looks as though that won't work
> quite like I had hoped. [...] I'm a little nervous about having a LUN
> up on two boxes at the same time, as I've got some experience with
> killing file systems this way (in the test lab, anyway).

Well, GFS really should work; I'm just suggesting that you don't need it. And the LUNs being "available" (as in, equivalent to the scsi cable being plugged into the clustered shared scsi device) on multiple hosts is how HA on a SAN is generally done. You really don't have to worry about it as long as you configure the failback client not to write to it until it's really time. :)

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
That's exactly what I want to do, and I am using FC5 as well. But when I create the VBD's (either with the xenguest-install.py script or manually creating an img file with dd and mounting -o loop) I get I/O errors and the messages in the log listed earlier. The images mount, but are not writable, presumably because of a locking problem. I found a note in the kernel archives that spoke of problems getting loop file systems to mount properly off a GFS volume, but didn't see a resolution.

On Apr 18, 2006, at 1:42 PM, Stephen Palmer wrote:

> I've done exactly this (with iSCSI instead of FC), but I did take the
> extra step to configure GFS, as I intended each cluster node to run
> various DomU's (3 or 4 on each). The DomU VBD's are all stored on the
> same iSCSI LUN, so each node can read/write to the LUN simultaneously
> with GFS. [...]
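For concreteness, the manual route described above goes roughly like this; the mount points, image name, size and source directory are illustrative:

# create an image file on the GFS-backed mount
dd if=/dev/zero of=/mnt/gfs/vm01.img bs=1M count=4096

# put a filesystem on it and mount it over a loop device to populate it
mkfs.ext3 -F /mnt/gfs/vm01.img
mount -o loop /mnt/gfs/vm01.img /mnt/vm01-root

# copying files in (or letting xenguest-install.py do the equivalent)
# is where the I/O errors and the gfs_prepare_write assertion show up
cp -a /srv/guest-skeleton/. /mnt/vm01-root/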
Oh, well, I guess the difference is that I'm not actually mounting the files as VBD's (as I inaccurately said earlier). I'm just using the syntax:

disk = [ 'file:/mnt/xen/vrserver1,xvda,w' ]

... to do file-backed storage. They're never attached as VBD's to DomU. Maybe that would work for you?

-Steve

> -----Original Message-----
> From: Jim Klein
> Sent: Tuesday, April 18, 2006 3:58 PM
> Subject: Re: [Xen-users] Xen and GFS
>
> That's exactly what I want to do, and I am using FC5 as well. But
> when I create the VBD's (either with the xenguest-install.py script
> or manually creating an img file with dd and mounting -o loop) I get
> I/O errors and the messages in the log listed earlier. [...]
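A minimal guest config along those lines might look like this; apart from the disk line quoted above, everything else (kernel path, memory, name, network) is a placeholder rather than anything from the setup described here:

# write the config and boot the guest from dom0
cat > /etc/xen/vrserver1 <<'EOF'
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 256
name   = "vrserver1"
vif    = [ 'bridge=xenbr0' ]
disk   = [ 'file:/mnt/xen/vrserver1,xvda,w' ]
root   = "/dev/xvda ro"
EOF

xm create -c vrserver1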
You know what, I think I'm in the same boat as you are. I got my test environment up and running, but now that I'm verifying everything I am actually seeing the same errors you are. The DomUs can't write to their filesystems and I'm getting the same log messages in Dom0:

Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: warning: assertion "gfs_glock_is_locked_by_me(ip->i_gl)" failed
Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: function = gfs_prepare_write
Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: file = /usr/src/build/729060-x86_64/BUILD/xen0/src/gfs/ops_address.c, line = 329
Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: time = 1145395369

Sorry I spoke too soon. So ... anyone else have a clue? :)

-Steve

> -----Original Message-----
> From: Stephen Palmer
> Sent: Tuesday, April 18, 2006 4:08 PM
> Subject: RE: [Xen-users] Xen and GFS
>
> Oh, well, I guess the difference is that I'm not actually mounting
> the files as VBD's (as I inaccurately said earlier). I'm just using
> the syntax:
>
> disk = [ 'file:/mnt/xen/vrserver1,xvda,w' ]
>
> ... to do file-backed storage. They're never attached as VBD's to
> DomU. Maybe that would work for you? [...]
On Tue, 18 Apr 2006, Jim Klein wrote:
> Does anyone have any pointers with regard to using Xen and storing
> VBDs on a GFS volume under dom0? This sounds like it should work
> well, especially for migrating domains, but it looks as though GFS
> won't allow a read-write mount of a loop device (file), so I end up
> with read-only VBDs on the SAN, which are obviously useless. [...]

It works fine for me. On my SAN, I export the shared LUN to all the Xen nodes, and run CLVM on top of it. Then, I export the LV's to the Xen domU's as a virtual drive, and, well, use 'em. Why do you need loop devices and such?

------------------------------------------------------------------------
| nate carlson | natecars@natecarlson.com | http://www.natecarlson.com |
|       depriving some poor village of its idiot since 1981            |
------------------------------------------------------------------------
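Roughly, that approach looks like this; the volume group, LV and guest names are placeholders, and the phy: line is the standard Xen syntax for handing a block device to a guest:

# clvmd runs on every node so LVM metadata stays in sync cluster-wide
service clvmd start

# carve one LV per guest out of the shared volume group
lvcreate -L 8G -n vm01-disk xenvg

# hand the LV straight to the guest as a physical device - no loop
# file and no cluster filesystem needed for the guest's own disk.
# In /etc/xen/vm01:
#   disk = [ 'phy:/dev/xenvg/vm01-disk,xvda,w' ]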
You are correct: if clvmd is not running, you can activate the LV on all the nodes without the system complaining. clvm primarily tells the system to hand the locks off to gfs, and propagates lvm updates to the other cluster nodes. It does not, however, give you a cluster-safe file system, so don't mount an lv on all the nodes at the same time or you risk corruption.

--
Jim Klein
Director Information Services & Technology
LPIC1, CNA/CNE 4-6, RHCT/RHCE 3
Saugus Union School District
http://www.saugus.k12.ca.us

>>> Nate Carlson <natecars@natecarlson.com> 04/19/06 7:23 AM >>>
On Wed, 19 Apr 2006, Stephen Palmer wrote:
> Yeah, I did rebuild it; but when I went to start the daemon, it
> locked up. I didn't spend any time troubleshooting it, really, and
> moved on without it.
>
> Right now, I'm actually using a single LVM formatted with GFS and
> shared between the nodes, so I don't see why I couldn't reconfigure
> things to match what you have done. I believe you see those errors
> because you have CLVM installed at all, and so it's expecting to
> find the daemon running on all of the nodes. I don't have it
> installed so it wouldn't know any better ...

Could very well be.

> Anyhow, it will be easy enough to test this out. I'll go back and
> figure out how to get CLVM working if it doesn't work.

Good luck! :)
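One way to make the "only one node at a time" rule harder to break, offered here as a suggestion rather than something already covered in the thread: with clvmd running, an LV can be activated exclusively, so a second node's activation attempt fails instead of quietly succeeding (LV name below is a placeholder):

# activate the guest's LV exclusively on the node that will run it
lvchange -aey /dev/xenvg/vm01-disk

# a second node trying the same activation now gets an error, so the
# non-cluster-aware filesystem inside the LV can't be mounted twice
# by accident

# release it locally before migrating the guest elsewhere
lvchange -aln /dev/xenvg/vm01-disk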