Hi, folks --

I'm using xen 2.0.6 on a Debian Sarge host, and Linux 2.6.11.10-xen0.
I use Xen to host a number of QA environments.

I'm mounting filesystems for guest vms from files on disk. Ideally, I
would like to share a single 'file' between all VMs for /usr/local and
another ext3 partition, and maintain individual root and swap
partitions on a per-machine basis.

This doesn't seem to work presently. When I bring up the 'second'
machine in the cluster, I get:

qa-host:/etc/xen/auto# xm create xm-manager
Using config file "xm-manager".
Error: Error creating domain: vbd: Segment not found:
uname=file:/export/vm/vm-usrlocal

If I then shut down the machine 'holding' this partition, I can bring
up the VM as normal:

qa-host:/etc/xen/auto# xm create xm-manager
Using config file "xm-manager".
Started domain VM8, console on port 9614

Is this a limitation that I cannot work around? A limitation that an
upgrade or an alternative method of storing the filesystem will fix?

Ideally I'd like to avoid a situation where I export the partitions I
want to use on guest OSes over NFS .. but I am aware that it's an
option which will probably work...

Thanks for any ideas you guys may have ..

-a
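For context: the disk stanza from the attached config, quoted in full
further down the thread, exports every image read/write - roughly this
sketch, in which the paths and modes all come from that quoted config:

  disk = [ 'file:/export/vm/vmm-root,sda1,w',      # per-VM root
           'file:/export/vm/vmm-swap,sda2,w',      # per-VM swap
           'file:/export/vm/vm-code,sda3,w',       # meant to be shared
           'file:/export/vm/vm-usrlocal,sda4,w' ]  # meant to be shared

It is the second domain asking for access to an image another running
domain already holds read/write that draws the "Segment not found"
refusal.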
Jean-Christophe Guillain
2005-Aug-03 16:15 UTC
Re: [Xen-users] Sharing filesystems between VMs.
Andy Davidson writes:

> I'm mounting filesystems for guest vms from files on disk. Ideally, I
> would like to share a single 'file' between all VMs for /usr/local
> and another ext3 partition, and maintain individual root and swap
> partitions on a per-machine basis.
>
> This doesn't seem to work presently. When I bring up the 'second'
> machine in the cluster, I get:
>
> qa-host:/etc/xen/auto# xm create xm-manager
> Using config file "xm-manager".
> Error: Error creating domain: vbd: Segment not found:
> uname=file:/export/vm/vm-usrlocal
> [...]

Did you respect the syntax of the disk definitions in your config
files (especially the rw rights)?

"
DISK
Set the first entry in this list to calculate the offset of the
domain's root partition, based on the domain ID. Set the second to the
location of /usr if you are sharing it between domains (e.g.
disk = ['phy:your_hard_drive%d,sda1,w' % (base_partition_number + vmid),
        'phy:your_usr_partition,sda6,r' ]
"
(from the online documentation)

jC

***
Jean-Christophe Guillain
jcg@adviseo.fr - 06 61 52 20 76
http://www.adviseo.fr - http://www.open-sp.fr
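Adapted to the file-backed images used here, the documented pattern
would look roughly like this - a sketch only, with the per-VM image
names invented for illustration and vmid defined as in the stock
example config; the 'r' mode on the shared image is the key point:

  disk = [ 'file:/export/vm/vm%d-root,sda1,w' % vmid,  # per-VM, writeable
           'file:/export/vm/vm%d-swap,sda2,w' % vmid,  # per-VM, writeable
           'file:/export/vm/vm-usrlocal,sda4,r' ]      # shared, read-only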
> This doesn't seem to work presently. When I bring up the 'second'
> machine in the cluster, I get:
>
> qa-host:/etc/xen/auto# xm create xm-manager
> Using config file "xm-manager".
> Error: Error creating domain: vbd: Segment not found:
> uname=file:/export/vm/vm-usrlocal

Could you please include your config file "xm-manager", or at least
the vbd-related sections of it?

--
Stop the infinite loop, I want to get off!    http://surreal.istic.org/
Paraphernalia/Never hides your broken bones,/ And I don't know why
you'd want to try:/ It's plain to see you're on your own. -- Paul Simon
The documentation that can be written is not the true documentation.
Daniel Hulme wrote:
>> This doesn't seem to work presently. When I bring up the 'second'
>> machine in the cluster, I get:
>> qa-host:/etc/xen/auto# xm create xm-manager
>> Using config file "xm-manager".
>> Error: Error creating domain: vbd: Segment not found:
>> uname=file:/export/vm/vm-usrlocal
> Could you please include your config file "xm-manager", or at least
> the vbd-related sections of it?

Thank you for replying so quickly. Attached are the config files for
xm-manager and xm-vm1.

In case I was not clear before: I can start xm-manager when xm-vm1 is
down, but then I can no longer start xm-vm1 - it dies with the same
error as above.

Again - thanks.

-a
On Wed, Aug 03, 2005 at 05:09:01PM +0100, Andy Davidson wrote:
> This doesn't seem to work presently. When I bring up the 'second'
> machine in the cluster, I get:
>
> qa-host:/etc/xen/auto# xm create xm-manager
> Using config file "xm-manager".
> Error: Error creating domain: vbd: Segment not found:
> uname=file:/export/vm/vm-usrlocal

You can't export the same vbd twice r/w, and maybe not even r/o, I
forget. There is just no facility for doing that, and AFAIK the error
you get is saving you from seeing both domains crash.

You need a cluster file system like GFS, OCFS, etc., or just NFS.
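If NFS does end up being the fallback, the moving parts are small - a
sketch, with the export path and guest subnet invented for
illustration:

  # /etc/exports on the host doing the exporting
  /export/usrlocal   192.168.1.0/24(ro,sync)

  # /etc/fstab on each guest
  qa-host:/export/usrlocal   /usr/local   nfs   ro   0   0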
Andy Smith wrote:
> On Wed, Aug 03, 2005 at 05:09:01PM +0100, Andy Davidson wrote:
> You can't export the same vbd twice r/w, and maybe not even r/o, I
> forget. There is just no facility for doing that and AFAIK the
> error you get is saving you from seeing both domains crash.
> You need a cluster file system like GFS, OCFS, etc., or just NFS.

This was a theory I had; I tried exporting the filesystem r/w from
xm-manager and r/o from the other vms, but until I changed it to r/w
across the board from all machines, the vms refused to boot at all -
fsck would hang the bootup, complaining of filesystem inconsistencies.

I'd not considered a clustered filesystem - thank you for the
suggestion.

-a
> This was a theory I had; I tried exporting the filesystem r/w from
> xm-manager and r/o from the other vms, but until I changed it to r/w
> across the board from all machines, the vms refused to boot at all -
> fsck would hang the bootup complaining of filesystem inconsistencies.

You should export it read-only to everything. If xm-manager mounts it
read-write, the filesystem will be marked as dirty. When another
domain tries to mount it, it notices the dirty flag and fsck's it.
Fsck, of course, fails because the device is read-only.

If you really need xm-manager to be able to write to it, you could try
putting whichever mount option it is that turns off fsck'ing dirty
fses in each domain's /etc/fstab, but when xm-manager writes to the fs
they will probably break heavily.

Also, to query your config file from the other post:

>>>
# This makes the disk device depend on the vmid - assuming
# that devices sda7, sda8 etc. exist. The device is exported
# to all domains as sda1.
# All domains get sda6 read-only (to use for /usr, see below).
disk = ['file:/export/vm/vmm-root,sda1,w',
        'file:/export/vm/vmm-swap,sda2,w',
        'file:/export/vm/vm-code,sda3,w',
        'file:/export/vm/vm-usrlocal,sda4,w' ]
<<<

The comment mentions sda6-8, but these are not configured. This means
that the thing below,

>>>
# Sets runlevel 4 and the device for /usr.
extra = "4 VMID=%d usr=/dev/sda6" % vmid
<<<

probably won't do what you expect it to do. If you want all your
domains to have r/w access to the same device, you're going to have to
use a networked filesystem.

--
Stop the infinite loop, I want to get off!    http://surreal.istic.org/
Paraphernalia/Never hides your broken bones,/ And I don't know why
you'd want to try:/ It's plain to see you're on your own. -- Paul Simon
The documentation that can be written is not the true documentation.
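Daniel doesn't name the option; the usual knob is the sixth /etc/fstab
field (the fsck pass number), which skips boot-time checking entirely
when set to 0 - a sketch for the read-only domains, assuming the
device naming from the config above:

  # mount read-only and never fsck (pass number 0)
  /dev/sda4   /usr/local   ext3   ro   0   0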
Take a look at OpenAFS ... it's also pretty good for that, and easy to
set up.

Sven

Andy Davidson <andy@nosignal.org> wrote:
> This was a theory I had; I tried exporting the filesystem r/w from
> xm-manager and r/o from the other vms, but until I changed it to r/w
> across the board from all machines, the vms refused to boot at all -
> fsck would hang the bootup complaining of filesystem inconsistencies.
>
> I'd not considered a clustered filesystem - thank you for the
> suggestion.
>
> -a
Daniel Hulme wrote:
> You should export it read-only to everything. If xm-manager mounts it
> read-write, the filesystem will be marked as dirty.

Thanks for your help. I grabbed some sources for a kernel claiming to
be linux-2.6.11-xenU, and it seems that I'll need to rebuild the
kernel (judging from the .config shipped with it) to support some of
the alternative filesystems mentioned by others on the list.

> The comment mentions sda6-8, but these are not configured. This means
> that the thing below,

I incredibly lazily crafted a few changes from the default config file
which shipped with the Xen distribution. I do need to go through the
tweaking and sanity-checking route before full production use.

Thanks for the pointers, though.

-a
> You can't export the same vbd twice r/w, and maybe not even r/o, I
> forget. There is just no facility for doing that and AFAIK the
> error you get is saving you from seeing both domains crash.

Exporting twice r/o is fine. If any running domain has r/w access then
it won't let you start other domains with either r/o or r/w access.

You *can* put a ! after the 'w' permission to mean "no, really share
it writeable" to disable the check; however, you should NEVER do this
unless you have a cluster-aware filesystem. Basically, because the FS
layer usually expects only one kernel to access a filesystem, one
writer and multiple readers will cause confusion, multiple readers
will hose your filesystem pretty quickly.

> You need a cluster file system like GFS, OCFS, etc., or just NFS.

Yes, if you want write sharing. Exporting read only is fine, though.

Cheers,
Mark
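For reference, the override Mark describes is spelled with a trailing
'!' on the access mode - a sketch reusing an image path from earlier
in the thread; again, never do this without a cluster-aware
filesystem:

  disk = [ 'file:/export/vm/vmm-root,sda1,w',
           'file:/export/vm/vm-usrlocal,sda4,w!' ]  # '!' disables the check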
On Wed, Aug 03, 2005 at 07:32:14PM +0100, Mark Williamson wrote:
> > You need a cluster file system like GFS, OCFS, etc., or just NFS.
>
> Yes, if you want write sharing. Exporting read only is fine, though.

Do other domains with it mounted r/o notice changes from the write
domain immediately? If so, then I could use this to mitigate my backup
strategy problem mentioned in an earlier email, where I couldn't use
LVM snapshots.
Tom Brown pointed out a typo I'd made - thanks, Tom!

> > Basically, because the FS layer usually expects only one kernel to
> > access a filesystem, one writer and multiple readers will cause
> > confusion, multiple readers will hose your filesystem pretty
> > quickly.
>
> didn't you mean "multiple writers" there?

Yeah, sorry :-) What I meant was:

* multiple readers = fine, go for it
* one (or more) readers, one writer = bad - the readers will get
  confused pretty quick, although the writer and the underlying data
  will be happy
* multiple writers = really really bad - they'll trash the filesystem

Cheers,
Mark
> > > You need a cluster file system like GFS, OCFS, etc., or just NFS.
> >
> > Yes, if you want write sharing. Exporting read only is fine,
> > though.
>
> Do other domains with it mounted r/o notice changes from the write
> domain immediately?

With a cluster filesystem this sort of thing should be possible. I'm
not sure exactly what coherency guarantees they provide (NFS in
particular sometimes takes a while to notice changes, but it's a very
different technology to the other two).

The reason you can't do this with a "normal" filesystem (like ext,
Reiser, etc.) is that it won't expect the data to change under it -
when it does, it'll look like massive disk corruption and it will get
very unhappy.

> If so then I could use this to mitigate my backup strategy problem
> mentioned in an earlier email, where I couldn't use LVM snapshots.

Indeed!

Cheers,
Mark