Hi,

I'm currently working on a clustering howto and trying to do my work with an increasing number of Xen domUs. They run Fedora Core 6 and are intended to share a number of OCFS2 filesystems, which all reside on an EVMS volume.

I'm now looking for a _clean_ way of enabling shared rw access. I know people have already done this; the only documented way I found so far involved hacking /etc/xen/scripts/block to the point of disabling the whole check it's intended to do, which isn't 'production grade' :)

I agree that blocking shared rw access in general is a good thing [tm], but I wonder what to do in the cases where it isn't. The user guide goes like this:

"If you want read-write sharing, export the directory to other domains via NFS from domain 0 (or use a cluster file system such as GFS or ocfs2)."

So, here I am, using OCFS2, and wondering why the manual stops right there. /etc/xen/scripts/block seems to simply ignore the fact that people might need shared access. The last big thread on this seems to date back to 2005 and mostly consists of a discussion of what happens when you rw-share a volume without using a cluster-aware filesystem. (Which is, to be honest, a fun thing to watch.)

I could of course map the volumes to my fileserver and generate an iSCSI target there, but I think I have other ways of maximizing overhead :)

Any takers? If not, whom should I submit a patch for 'block' to?

Regards,
Florian

--
'Sie brauchen sich um Ihre Zukunft keine Gedanken zu machen'

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On Tue, 2007-02-06 at 02:53 +0100, Florian Heigl wrote:
> Hi,
>
> I'm currently working on a clustering howto and trying to do my
> work with an increasing number of Xen domUs. They run Fedora Core 6
> and are intended to share a number of OCFS2 filesystems, which all
> reside on an EVMS volume.
>
> I'm now looking for a _clean_ way of enabling shared rw access.

The only time this is an issue is when more than one guest using the cluster FS is on the same physical dom0 node. The "w!" flag when specifying the VBD works very well. If your domU's root file system is OCFS2, be sure to specify an appropriate initrd that does the following:

- bring up eth(x) and (if iSCSI) configure it
- provide some means of obtaining a centralized cluster.conf, if so desired
- modprobe ocfs2
- pivot_root

Then o2cb will take over the rest. I really recommend booting to a small local VBD, then arranging fstab in such a way that you facilitate your single system image, if so desired.

> I know people have already done this; the only documented way I found
> so far involved hacking /etc/xen/scripts/block to the point of
> disabling the whole check it's intended to do, which isn't
> 'production grade' :)

You did a sort of difficult search. Quite a bit of what turns up in the top 10 via any reasonable keyword search phrase will give you outdated information.

> I agree that blocking shared rw access in general is a good thing
> [tm], but I wonder what to do in the cases where it isn't.

This was "polished" quite a bit.

> I could of course map the volumes to my fileserver and generate an
> iSCSI target there, but I think I have other ways of maximizing
> overhead :)

AoE is *very* nice for this, has a very small overhead cost, and needs no TCP offload cards since it's a non-routable protocol. I recommend looking into it; migrating becomes very easy once you do.

> Any takers?
> If not, whom should I submit a patch for 'block' to?
>
> Regards,
> Florian

Hope this helps.
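For the archives, a minimal sketch of what the "w!" mode looks like in a domU config file. The device paths and filename below are made up for illustration; only the mode suffix syntax is the point.

```python
# Fragment of a hypothetical domU config, e.g. /etc/xen/node1.cfg
# (Xen config files use Python syntax). The trailing "w!" tells the
# block script: writable, and do NOT refuse just because another
# domain already has this device open read-write.
disk = [
    'phy:/dev/evms/ocfs2vol,xvdb,w!',   # shared OCFS2 volume, sharing allowed
    'phy:/dev/evms/node1root,xvda,w',   # private root, normal exclusive rw
]
```

Plain "w" keeps the usual safety check; "w!" is only sane for devices carrying a cluster-aware filesystem such as OCFS2.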
Best,
--Tim
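The initrd steps listed above could be sketched roughly as below. This is an untested illustration, not a drop-in script: the IP addresses, the iSCSI target name, the cluster.conf URL, and the device paths are all invented, and the exact tools available inside an FC6-era initrd may differ.

```shell
#!/bin/sh
# Hypothetical /init for an initrd booting a domU whose root is OCFS2.
# All names, addresses and paths here are examples.

mount -t proc proc /proc
mount -t sysfs sysfs /sys

# 1. Bring up the network and (if iSCSI) log in to the target
ip link set eth0 up
ip addr add 10.0.0.11/24 dev eth0
iscsiadm -m node -T iqn.2007-02.example:ocfs2vol -p 10.0.0.1 --login

# 2. Obtain a centralized cluster.conf, if so desired
wget -q -O /etc/ocfs2/cluster.conf http://10.0.0.1/cluster.conf

# 3. Load the filesystem driver and start the cluster stack
modprobe ocfs2
/etc/init.d/o2cb start

# 4. Mount the real root and hand over to it
mount -t ocfs2 /dev/sda1 /newroot
cd /newroot
pivot_root . initrd
exec chroot . /sbin/init </dev/console >/dev/console 2>&1
```

After pivot_root, o2cb and the usual init take over, as described above.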
For the search engines and anyone else. :)

---------- Forwarded message ----------
From: Florian Heigl <florian.heigl@gmail.com>
Date: 06.02.2007 11:52
Subject: Re: [Xen-users] vbd Sharing
To: tim.post@netkinetics.net

2007/2/6, Tim Post <tim.post@netkinetics.net>:
> > I'm now looking for a _clean_ way of enabling shared rw access.
>
> The only time this is an issue is when more than one guest using the
> cluster FS is on the same physical dom0 node.

This will be the standard scenario for now (haha).

> The "w!" flag when specifying the VBD works very well. If your domU's
> root file system is OCFS2, be sure to specify an appropriate initrd
> that does the following:

Wow! All solved, thank you. (Could someone document this?)

> You did a sort of difficult search. Quite a bit of what turns up in
> the top 10 via any reasonable keyword search phrase will give you
> outdated information.

> AoE is *very* nice for this, has a very small overhead cost, and
> needs no TCP offload cards since it's a non-routable protocol. I
> recommend looking into it; migrating becomes very easy once you do.

Right now it would still be one more service in dom0. Maybe I'll build a small storage server with the next disk upgrades. Thanks for the hint!

--
'Sie brauchen sich um Ihre Zukunft keine Gedanken zu machen'
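Again for the archives, a rough sketch of the AoE route mentioned above, using vblade from the aoetools suite. The shelf/slot numbers, interface and device path are examples, not a tested setup.

```shell
# On the storage box: export the volume as AoE shelf 0, slot 1 on eth0.
# vbladed is the daemonized form of vblade; the device path is an example.
vbladed 0 1 eth0 /dev/evms/ocfs2vol

# On each dom0 (same Ethernet segment -- AoE is layer 2, non-routable):
modprobe aoe
ls /dev/etherd/        # the export should appear as e0.1
```

The resulting /dev/etherd/e0.1 can then be handed to domUs as a VBD like any other block device.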
On Tue, 2007-02-06 at 11:53 +0100, Florian Heigl wrote:
> For the search engines and anyone else.
>
> :)

Glad it works :)

> Wow! All solved, thank you.
> (Could someone document this?)

It is documented; the problem you (and many) experienced is that the search engines are producing older, out-of-date content in the top 20 or so. The Xen Wiki is the place to look first, but even there you'll run into quite a bit of old info. It's nice if you can update things as you see them, or at least note "This document appears to be out of date" on whatever wiki you happen to be using.

I've even seen sites running MediaWiki that just plug directly into open source mailing lists and suck out the content to serve Google ads; these typically showcase pre-2.0.7 stuff that did well in Google. It's a shame because it makes things harder for everyone, but very little can be done about it.

Best,
--Tim