Hi,
I am trying to share a GFS2 partition across multiple xen boxes, e.g. for
the /home dir (using FC6). According to chapter 6.1 of the Xen 3.0
documentation, I should be using either NFS or a cluster FS. I do not
want to use a network FS, so GFS seems to be the sane choice. The setup:
SAN -> /dev/sdX -> [c]LVM -> vgxen -> lvbox{00-99}
                                   -> lvswap{00-99}
                                   -> lvhome
Each VM mounts a unique rootfs from its lvboxX and swap from its lvswapX
partition, and all of them share lvhome using GFS. The SAN has a multipath
FC connection to 8 blades, and the VMs run distributed over the blades.
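For illustration, a single guest's config under this layout would look
roughly like the sketch below. The LV names follow the scheme above; the
guest number (07), the virtual device names (xvda...) and the file name
are made-up examples:

# /etc/xen/box07 -- hypothetical per-guest disk config for the layout above
name   = "box07"
memory = 512
disk   = [ 'phy:/dev/vgxen/lvbox07,xvda,w',    # unique root fs for this guest
           'phy:/dev/vgxen/lvswap07,xvdb,w',   # unique swap for this guest
           'phy:/dev/vgxen/lvhome,xvdc,w' ]    # shared GFS home; with plain 'w'
                                               # only the first guest to attach
                                               # it will start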
The problem, however, is that xen does not allow me to export block
devices that are already in use. I've tried to export both the PV
(/dev/sdX) and the LV (/dev/vgxen/lvhome), but both cases fail with an
error (Device /dev/sdX is mounted in a guest domain, and so cannot be
mounted now.) Any VM started after the first fails.
Is there a way to force xen to export the same block device (either the
PV or preferably the LV)?
-- 
Greetings Bertho
Bertho Stultiens
Senior Systems Manager
Mobilethink A/S
On Fri, 17 Nov 2006, Bertho Stultiens wrote:

> The problem, however, is that xen does not allow me to export block
> devices that are already in use. I've tried to export both the PV
> (/dev/sdX) and the LV (/dev/vgxen/lvhome), but both cases fail with an
> error (Device /dev/sdX is mounted in a guest domain, and so cannot be
> mounted now.) Any VM started after the first fails.
>
> Is there a way to force xen to export the same block device (either the
> PV or preferably the LV)?

Isn't the typical setup to have GNBD as the SAN and have that exported
to the xen hosts?

Paul
Hi Bertho,

My setup for gfs inside the guests looks like this.

To export 1 OS LV and 2 shared LVs I use:

disk = ['phy:/dev/xenvg/xenRHEL4_3,ioemu:hda,w',
        'phy:/dev/xenvg/xenCLU1fs1,ioemu:hdb,w!',
        'phy:/dev/xenvg/xenCLU1fs2,ioemu:hdc,w!' ]

For sharing a block device you need "w!". On top of xenCLU1fs1 and
xenCLU1fs2 I have gfs running with clvm inside the guests.
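Applied to the layout from the original post (LV names taken from there;
guest number and device names are again only examples), each guest's disk
config would then become something like:

# sketch only -- same per-guest config, with the shared LV exported as 'w!'
disk = [ 'phy:/dev/vgxen/lvbox07,xvda,w',
         'phy:/dev/vgxen/lvswap07,xvdb,w',
         'phy:/dev/vgxen/lvhome,xvdc,w!' ]   # 'w!' allows the same block device
                                             # to be attached writable by more
                                             # than one running guest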
Thomas
Thomas von Steiger wrote:

> My setup for gfs inside the guests looks like this.
> To export 1 OS LV and 2 shared LVs I use:
> disk = ['phy:/dev/xenvg/xenRHEL4_3,ioemu:hda,w',
>         'phy:/dev/xenvg/xenCLU1fs1,ioemu:hdb,w!',
>         'phy:/dev/xenvg/xenCLU1fs2,ioemu:hdc,w!' ]
> For sharing a block device you need "w!". On top of xenCLU1fs1 and
> xenCLU1fs2 I have gfs running with clvm inside the guests.

Thank you very much. The w! does the trick.

You also have the "ioemu" flag specified. Is this required? I could not
find any documentation about that.

-- 
Greetings Bertho

Bertho Stultiens
Senior Systems Manager
Mobilethink A/S
On Sun, Nov 19, 2006 at 11:49:38AM +0100, Bertho Stultiens wrote:

> Thomas von Steiger wrote:
> > My setup for gfs inside the guests looks like this.
> > To export 1 OS LV and 2 shared LVs I use:
> > disk = ['phy:/dev/xenvg/xenRHEL4_3,ioemu:hda,w',
> >         'phy:/dev/xenvg/xenCLU1fs1,ioemu:hdb,w!',
> >         'phy:/dev/xenvg/xenCLU1fs2,ioemu:hdc,w!' ]
> > For sharing a block device you need "w!". On top of xenCLU1fs1 and
> > xenCLU1fs2 I have gfs running with clvm inside the guests.
>
> Thank you very much. The w! does the trick.
>
> You also have the "ioemu" flag specified. Is this required? I could not
> find any documentation about that.

The ioemu: prefix is obsolete as of the Xen 3.0.3 release. In prior
releases it was needed whenever setting up a disk for fully-virt/HVM
guests. In current releases it is silently ignored.

Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505   -=|
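On Xen 3.0.3 and later the same stanza can therefore be written without
the prefix; for example, Thomas's config from above (names unchanged)
reduces to:

# equivalent disk config without the obsolete ioemu: prefix (Xen >= 3.0.3)
disk = [ 'phy:/dev/xenvg/xenRHEL4_3,hda,w',
         'phy:/dev/xenvg/xenCLU1fs1,hdb,w!',
         'phy:/dev/xenvg/xenCLU1fs2,hdc,w!' ]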