I was wondering what people use / recommend for a cluster fs for use with xen?

Also with this cluster fs, will the domU's be using flat files / lvm? (It should be the former as per my guess, since these flat files will be stored on the cluster fs, but then won't there be a performance hit for these domU's?)

I don't want to invest in a san / nas solution, just looking for something cost effective. Right now I have a single server which I will use for the cluster fs, and I will slowly add more servers to the cluster as and when I need more space. At least that's what I have thought till now. All ideas are welcome.

--
regards,
Anand Gupta
Harald Kubota
2006-Sep-23 16:13 UTC
[Fedora-xen] Re: [Xen-users] what do you recommend for cluster fs ??
Anand Gupta wrote:
> I was wondering what people use / recommend for a cluster fs for use
> with xen?

I'm currently using a firewire external hard disk. When used with the correct IDE-FW chipset (Oxford are the only ones working), 4 hosts can connect to one disk concurrently.

The other cheap possibility is to use iSCSI (e.g. iet). Or NFS.

Harald
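[For anyone wanting to try the iSCSI option Harald mentions, here is a minimal sketch of exporting a block device with iSCSI Enterprise Target (iet) and logging in from a Xen host with open-iscsi. The target name, IP address, and device path are illustrative assumptions, not taken from any real setup:

  # /etc/ietd.conf on the storage server (target name is a made-up example)
  Target iqn.2006-09.local.storage:xen-disks
          Lun 0 Path=/dev/sdb1,Type=fileio

  # on each Xen host, discover and log in with open-iscsi; the LUN then
  # appears as a normal SCSI block device (e.g. /dev/sdc)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node -T iqn.2006-09.local.storage:xen-disks -p 192.168.1.10 --login
]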
Rik van Riel
2006-Sep-23 17:40 UTC
Re: [Fedora-xen] what do you recommend for cluster fs ??
Anand Gupta wrote:
> I was wondering what people use / recommend for a cluster fs for use
> with xen?
>
> Also with this cluster fs, will the domU's be using flat files / lvm?
> (It should be the former as per my guess, since these flat files will
> be stored on the cluster fs, but then won't there be a performance hit
> for these domU's?)

You could also use CLVM to have your guests living on logical volumes that are visible on all cluster hosts.

--
"You don't have to be crazy to do this ... but it helps." -- Bob Ross
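[For concreteness, a domU backed by a logical volume (cluster-visible or not) simply points its disk line at the LV. A minimal sketch of the relevant config; the volume group, LV, domain name, and kernel path are hypothetical examples:

  # /etc/xen/guest1 -- illustrative domU config
  kernel = "/boot/vmlinuz-2.6-xenU"
  memory = 256
  name   = "guest1"
  disk   = [ "phy:/dev/vg_cluster/guest1-root,xvda1,w" ]
  root   = "/dev/xvda1 ro"
]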
On Sep 23, 2006, at 7:44 AM, Anand Gupta wrote:
> I was wondering what people use / recommend for a cluster fs for use
> with xen?

We use GFS.

> Also with this cluster fs, will the domU's be using flat files / lvm?
> (It should be the former as per my guess, since these flat files will
> be stored on the cluster fs, but then won't there be a performance hit
> for these domU's?)

We use CLVM.

> I don't want to invest in a san / nas solution, just looking for
> something cost effective.

You can actually get *both* a SAN and a cost-effective solution from these folks: www.coraid.com

We're very impressed with what they're offering. Works as advertised!

> Right now I have a single server which I will use for the cluster fs,
> and I will slowly add more servers to the cluster as and when I need
> more space. At least that's what I have thought till now.

You need to be careful with clustering. We're using it for high availability and scalability, but if you make the wrong decisions, you'll end up with lower reliability.

--
-- Tom Mornini
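[For reference, creating a GFS filesystem on shared storage and mounting it on each node looks roughly like this. The cluster name, filesystem name, journal count, and device path are placeholder assumptions; the real cluster name must match your cluster.conf:

  # make a GFS filesystem using the DLM lock manager; one journal per node
  gfs_mkfs -p lock_dlm -t mycluster:xenfs -j 4 /dev/vg_cluster/xenfs

  # mount it on every node that has joined the cluster
  mount -t gfs /dev/vg_cluster/xenfs /xen/images
]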
Christopher G. Stach II
2006-Sep-24 00:00 UTC
[Fedora-xen] Re: [Xen-users] what do you recommend for cluster fs ??
Harald Kubota wrote:
> Anand Gupta wrote:
>> I was wondering what people use / recommend for a cluster fs for use
>> with xen?
> I'm currently using a firewire external hard disk. When used with the
> correct IDE-FW chipset (Oxford are the only ones working), 4 hosts can
> connect to one disk concurrently.
>
> The other cheap possibility is to use iSCSI (e.g. iet). Or NFS.
>
> Harald

Doesn't really answer the "cluster FS" question, since that's just shared storage. :)

GNBD + CLVM + GFS here.

--
Christopher G. Stach II
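[A rough sketch of the GNBD layer in that stack, using the gnbd tools from Red Hat Cluster Suite; the export name, device, and server hostname here are illustrative:

  # on the storage server: start the GNBD server and export a device
  gnbd_serv
  gnbd_export -d /dev/sdb1 -e xenstore

  # on each Xen host: import it; the device appears as
  # /dev/gnbd/xenstore, and CLVM/GFS sit on top of that
  gnbd_import -i storage-server
]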
Harald Kubota
2006-Sep-24 00:17 UTC
[Fedora-xen] Re: [Xen-users] what do you recommend for cluster fs ??
Christopher G. Stach II wrote:
> Harald Kubota wrote:
>> Anand Gupta wrote:
>>> I was wondering what people use / recommend for a cluster fs for use
>>> with xen?
>> I'm currently using a firewire external hard disk. When used with the
>> correct IDE-FW chipset (Oxford are the only ones working), 4 hosts can
>> connect to one disk concurrently.
>>
>> The other cheap possibility is to use iSCSI (e.g. iet). Or NFS.
>
> Doesn't really answer the "cluster FS" question, since that's just
> shared storage. :)

Good point, and of course true. I skipped the "cluster fs" part completely. Now if the question were "What do you use for shared storage to be able to use Xen migration", then I'd be ok.

Harald
Martin Hierling
2006-Sep-24 17:03 UTC
Re: [Xen-users] what do you recommend for cluster fs ??
Hi,

anybody using or testing ocfs2?

regards Martin
I'm about to, actually. I'm going to make some rather easy to deploy cluster-in-a-box image collections and some simple scripts to set up networking to get them up and running.

I'll be using Pound [ http://www.apsis.ch/pound/ ] over a farm of skinny centos4 / lighttpd / fastcgi dom-u's on top of a conventional MySQL cluster for the db, with failover pound load balancer and mysql manager dom-u's (located on different machines), just based on the common 3.0.x para-virtualized kernel.

Its use will be easily rolling out something to handle high traffic or image / media intense web sites, forums that take off quickly, or sites prone to frequent /.'ings or diggs.

I've yet to get ocfs2 working, but it's mostly due to lack of effort on my part; I just don't have the time to work on hobby stuff anymore. If you wouldn't mind posting your experiences with it should you elect to try it, I'd appreciate the reading :) I often just default to gfs because it works and I can deploy it quickly.

It seems like a cinch .. check [ from oracle's site http://oss.oracle.com/projects/ocfs2/ ]:

>> Ubuntu: OCFS2 is included in the stock Dapper kernel. Ubuntu users
>> must also install the ocfs2-tools and ocfs2console packages, which
>> are in Dapper as well. Thanks goes to Fabio Massimo Di Nitto for
>> doing the packaging work.

Best,
-Tim

On Sun, 2006-09-24 at 19:03 +0200, Martin Hierling wrote:
> Hi,
>
> anybody using or testing ocfs2?
>
> regards Martin
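[For anyone else giving OCFS2 a try, the basic setup is a small cluster.conf, the o2cb init script, and mkfs. A minimal sketch with hypothetical node names, addresses, and mount point; the real file lives at /etc/ocfs2/cluster.conf on every node:

  cluster:
          node_count = 2
          name = xencluster

  node:
          ip_port = 7777
          ip_address = 192.168.1.11
          number = 0
          name = node1
          cluster = xencluster

  node:
          ip_port = 7777
          ip_address = 192.168.1.12
          number = 1
          name = node2
          cluster = xencluster

Then:

  # once, from a single node:
  mkfs.ocfs2 -N 4 -L xenfs /dev/sdb1

  # on every node:
  /etc/init.d/o2cb enable
  mount -t ocfs2 /dev/sdb1 /xen/images
]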
Anthony Wright
2006-Sep-24 18:39 UTC
Re: [Xen-users] what do you recommend for cluster fs ??
Tom Mornini wrote:
> On Sep 23, 2006, at 7:44 AM, Anand Gupta wrote:
>
>> I was wondering what people use / recommend for a cluster fs for use
>> with xen?
>
> We use GFS.
>
>> Also with this cluster fs, will the domU's be using flat files / lvm?
>
> We use CLVM.

I hear talk of CLVM, but I'm trying to understand what the difference is between CLVM and normal LVM, where the CLVM software is, and what I have to do to turn my LVM system into a CLVM system?

I'd also be interested to know if you can combine CLVM with a dm-raid solution, to create a clustered raid device? I understand that dm-raid0/1 has been released, and there's a dm-raid4/5 in alpha at the moment.

Thanks,

Tony Wright.
On Sep 24, 2006, at 11:39 AM, Anthony Wright wrote:
> I hear talk of CLVM, but I'm trying to understand what the difference
> is between CLVM and normal LVM, where the CLVM software is, and what I
> have to do to turn my LVM system into a CLVM system?

It's part of the Red Hat Cluster Suite.

> I'd also be interested to know if you can combine CLVM with a dm-raid
> solution, to create a clustered raid device? I understand that
> dm-raid0/1 has been released, and there's a dm-raid4/5 in alpha at the
> moment.

I'm quite certain that dm-raid doesn't work in a clustered environment yet. I have a feeling you're anxious (as are many) for a solution like this:

http://sourceware.org/cluster/ddraid/

Unfortunately there's no date on the article, and it hasn't been updated in months.

If you're looking for something useful *today*, then I'd recommend a SAN solution. The AoE SAN solution from Coraid that I mentioned earlier isn't much more expensive than plain ATA disks, so it's not prohibitive.

--
-- Tom Mornini
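[To make the LVM-to-CLVM step concrete: once the cluster suite is installed and the cluster is quorate, it is mainly a locking change plus marking volume groups clustered. A sketch, assuming the RHCS clvmd daemon; the volume group name is an example:

  # in /etc/lvm/lvm.conf, switch to cluster-aware locking via clvmd:
  locking_type = 3

  # start the cluster LVM daemon on every node:
  service clvmd start

  # mark an existing volume group as clustered:
  vgchange -c y vg_cluster
]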
Martin Hierling wrote:
> Hi,
>
> anybody using or testing ocfs2?

I've used ocfs2 a little under Gentoo. I was using evms + heartbeat to manage block devices shared via AoE. It all seems to work ok, but I have to restart heartbeat occasionally because the evms client won't connect to the cluster manager properly.

Cheers,

Brad
Do you happen to know if ocfs2 dynamically allocates inodes similar to how GFS does things? Or is it more ext3-ish in behavior?

With gfs, it's not uncommon to see 100% of your inodes in use, where with ext3 one would wonder if the world was about to end abruptly.

This can really throw network / heartbeat monitors for a loop if you aren't expecting it.

I found that out with gfs, and figured I'd save all who may try ocfs2 the 8 or 9 hours it took to figure out just why it was happening. I was looking for bug reports on xensource when it was Red Hat's tree that I should have been climbing. :)

HTH
-Tim

On Mon, 2006-09-25 at 15:07 +1000, Brad Plant wrote:
> I've used ocfs2 a little under Gentoo. I was using evms + heartbeat to
> manage block devices shared via AoE. It all seems to work ok, but I
> have to restart heartbeat occasionally because the evms client won't
> connect to the cluster manager properly.
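[The quick way to see this is df's inode mode: on gfs the IUse% column can legitimately sit at 100% while the filesystem is perfectly healthy, so monitors keying on that number need a special case. A sketch, with an assumed mount point:

  # show inode usage rather than block usage; on gfs an IUse% of 100%
  # is normal because inodes are allocated on demand
  df -i /xen/images
]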
Tim Post wrote:
> Do you happen to know if ocfs2 dynamically allocates inodes similar to
> how GFS does things? Or is it more ext3-ish in behavior?

To be honest, I have no idea. But I did find this with a quick google:

http://oss.oracle.com/projects/ocfs2/dist/documentation/fasheh.pdf

<quote>
File metadata is allocated in blocks via a sub allocation mechanism. All block allocators in OCFS2 grow dynamically. Most notably, this allows OCFS2 to grow inode allocation on demand.
</quote>

> With gfs, it's not uncommon to see 100% of your inodes in use, where
> with ext3 one would wonder if the world was about to end abruptly.
>
> This can really throw network / heartbeat monitors for a loop if you
> aren't expecting it.

Thanks for the heads up :-)

Cheers,

Brad
Martin Hierling
2006-Sep-25 15:09 UTC
Re: [Xen-users] what do you recommend for cluster fs ??
Tim,

setup is easy (about 1h), but my test scenario was: P3-1GHz with Dom0 as iSCSI target and 3 DomUs as ocfs2 "clients" mounting the same FS over open-iscsi. That works out of the box, but under heavy load the clients get kicked by the heartbeat. That obviously was because the complete network was full of iscsi traffic (bonnie). I will test that in a real environment in about 2 weeks and will report here.

regards Martin
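[One knob worth knowing about when nodes get fenced under load like this is the o2cb heartbeat dead threshold; raising it makes the cluster more tolerant of slow heartbeat writes on a saturated link. A sketch, assuming the stock o2cb configuration file:

  # /etc/sysconfig/o2cb (or /etc/default/o2cb on Debian/Ubuntu);
  # the threshold counts 2-second heartbeat iterations, so 31 is
  # roughly a 60 second timeout
  O2CB_HEARTBEAT_THRESHOLD=31

  # then restart the cluster stack on each node
  /etc/init.d/o2cb restart
]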
On Sep 25, 2006, at 8:09 AM, Martin Hierling wrote:
> setup is easy (about 1h), but my test scenario was: P3-1GHz with Dom0
> as iSCSI target and 3 DomUs as ocfs2 "clients" mounting the same FS
> over open-iscsi. That works out of the box, but under heavy load the
> clients get kicked by the heartbeat. That obviously was because the
> complete network was full of iscsi traffic (bonnie). I will test that
> in a real environment in about 2 weeks and will report here.

You might get better performance and leave this problem behind with AoE, as it's much less demanding under very high loads.

--
-- Tom Mornini
On Mon, Sep 25, 2006 at 09:52:19AM -0700, Tom Mornini wrote:
> You might get better performance and leave this problem behind with
> AoE, as it's much less demanding under very high loads.

I must admit that it sounds like you're running your frontend and backend networks on the same LAN.

It's probably wise to use a dedicated network for storage traffic, because as you've seen, it can quite easily saturate the link, causing performance issues.

It's also entirely possible that moving the storage onto a dedicated network would bring security benefits, assuming you've taken proper precautions (there's no need for it to have access to / be accessible from the internet).

--
Ceri Storey <cez@necrofish.org.uk>
'What I really want is "apt-get smite"' --Rob Partington <http://rjp.frottage.org>
On Sep 25, 2006, at 10:00 AM, Ceri Storey wrote:
> It's probably wise to use a dedicated network for storage traffic,
> because as you've seen, it can quite easily saturate the link, causing
> performance issues.
>
> It's also entirely possible that moving the storage onto a dedicated
> network would bring security benefits, assuming you've taken proper
> precautions.

These are all excellent points. We have separate IP and AoE networks.

--
-- Tom Mornini
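[With AoE specifically, pinning storage traffic to a dedicated NIC is a one-liner, since the aoe driver takes an interface list as a module parameter. A sketch, assuming the storage network lives on eth1:

  # limit AoE to the dedicated storage interface (eth1 is an assumption)
  modprobe aoe aoe_iflist="eth1"

  # or make it persistent across reboots:
  echo 'options aoe aoe_iflist="eth1"' >> /etc/modprobe.conf
]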
What advantages are there to using a FS via iSCSI instead of just using NFS? Are there some websites/articles you can point me to?

--tod
Luke Crawford
2006-Sep-25 18:17 UTC
Re: [Xen-users] what do you recommend for cluster fs ??
If you are going through the trouble of a second network, wouldn't using 1Gbit fibre make more sense? 1Gbit fibre-channel is actually cheaper than gigabit Ethernet, assuming you are buying name-brand equipment and comparing used to used, and for disk use, fibre-channel is much faster than going over the network. It also has basic disk organization / pseudo-security built in already.

I use pci-x qlogic 2200 cards (around $10 per) and Brocade SilkWorm 2800 switches (around $100 each) along with whatever fibre arrays I can find. (I have an IBM EXP-500 right now. Nice! but it was $250. You can get used dell/EMC 10 bay half-height arrays for little more than shipping, but that's 'cause they are flimsy crap; after getting one shipped with drives in it, you will have bad/flaky slots. I just ordered a Sun StorEdge A5200 for around $150, but those are low-profile, and the half-height drives are extremely cheap: you can get 10KRPM half-height 73G drives for around $10 each. That goes up to $50 or so for the low-profile drives of the same spec, and you only get a 30% density improvement.)

On Mon, 25 Sep 2006, Tom Mornini wrote:
> These are all excellent points. We have separate IP and AoE networks.
Luke Crawford
2006-Sep-25 20:46 UTC
Re: [Xen-users] what do you recommend for cluster fs ??
Well, if ATA meets your needs, that's fine; last time I used 7.5K SATA, I had long "pauses" whenever all the other computers on a particular disk/array ran their daily crontab, and at other times of even moderately high IO. On fibre, everything is pretty smooth. Under load it can get slow (I'm running on 2.0GHz Xeons w/ a 400MHz bus and pc2100 ram), but the latencies are always low; the system is always responsive.

Also, all the external SATA chassis have been anything but commodity. Last time I looked (which admittedly was a while ago), both 3ware and Adaptec had competing, incompatible "standards" for their 4-lane connectors, and as I recall, at the time the RAID cards cost almost as much as the disks in the 14 bay supermicro we used. Are there standard interconnects now, such that I can buy a disk chassis from one company and be fairly certain it will connect to a storage controller from another? (I really would like to know... I would like to try out some of those 2.5" 10KRPM SAS drives; those look cool, but I refuse to do so until there is an open standard supported by more than one vendor.) I have seen fairly standard-looking sata JBOD cases that used fibre channel interconnects; I will probably be buying some for storage shortly.

Personally, I place zero value on "support": even from the premium vendors like EMC, and even when you escalate up to Engineering, you don't get anyone that knows more than I do, and it takes days to get to that point. However, good manufacturer warranties are really nice; I'll pay a significant premium for those. I always buy corsair ram for that reason; awesome warranty. And new disks are nearly always from Seagate; Seagate has excellent warranty support. You don't even have to talk to a person; just fill out a webform (easy when you have a barcode scanner) and mail them in.

So yeah, if there are standard SAS/SATA interconnects, I'd like to know about them, as that seems to be where Seagate thinks things are going. (And I like the way SAS scales: one bus per disk, when you have as many spindles as I do, would equal some really nice throughput. That's the only thing the fibre disks lack. They are okay under heavy random load where SATA chokes, but they are also only okay under sequential load, where SATA flies. Of course, I don't see much sequential activity (sequential activity from several hosts to the same disk equals random load on the disk), so I go with the fibre. Still, good sequential performance would make full system restores and a few other things run a whole lot faster. You don't need to do a full system restore very often, but when you do, you really, really need to do it.)

> On Sep 25, 2006, at 11:17 AM, Luke Crawford wrote:
>
>> If you are going through the trouble of a second network, wouldn't
>> using 1Gbit fibre make more sense? [...]
>
> Not for me. The way I see things, with Coraid and AoE, I get to stay in
> uber-commodity land with SATA disks, GbE cards and switches. The current
> AoE drivers balance requests over multiple ports, so in a fully
> redundant configuration you get nearly 2Gbps throughput.
>
>> I use pci-x qlogic 2200 cards (around $10 per) and Brocade SilkWorm
>> 2800 switches (around $100 each) along with whatever fibre arrays I
>> can find. [...]
>
> I like the fact that the stuff I'm buying is reasonably inexpensive,
> and is brand new with manufacturer warranties.
>
> Additionally, I like the fact that it's all headed in the right
> direction for 10GE soon, when those prices drop as well.
>
> --
> -- Tom Mornini
On Sep 25, 2006, at 1:46 PM, Luke Crawford wrote:
> Well, if ATA meets your needs, that's fine; last time I used 7.5K SATA,
> I had long "pauses" whenever all the other computers on a particular
> disk/array ran their daily crontab, and at other times of even
> moderately high IO. On fibre, everything is pretty smooth.

http://www.coraid.com

So far, so good, and I'm not the only one using them.

--
-- Tom Mornini
Luke Crawford
2006-Sep-25 22:31 UTC
Re: [Xen-users] what do you recommend for cluster fs ??
On Mon, 25 Sep 2006, Tom Mornini wrote:
> http://www.coraid.com

Their SR1520 looks almost exactly like the SuperMicro I was describing having used before. Interesting stuff. Do you use AoE in dom0 and export block devices, or do you do AoE directly from within the DomUs?
On Sep 25, 2006, at 3:31 PM, Luke Crawford wrote:
> Their SR1520 looks almost exactly like the SuperMicro I was describing
> having used before. Interesting stuff. Do you use AoE in dom0 and
> export block devices, or do you do AoE directly from within the DomUs?

It *is* a SuperMicro, but with their software installed. :-)

We use AoE to create the block devices in Dom0, then export them to the DomUs, as you guessed.

If you want to work with AoE, make sure you get their drivers. They're in-kernel now, but for whatever reason, they're very far out of date in the standard kernel sources.

--
-- Tom Mornini
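[To make that concrete: the aoe driver exposes each shelf.slot target as /dev/etherd/eX.Y in Dom0, and the domU config hands that device through like any other block device. A minimal sketch; the shelf/slot numbers and device names are assumptions:

  # in Dom0: load the driver and discover targets on the storage network
  modprobe aoe
  aoe-discover                  # from the aoetools package
  ls /dev/etherd/               # e.g. e0.0, e0.1, ...

  # in the domU config: hand the AoE block device to the guest
  disk = [ "phy:/dev/etherd/e0.0,xvda,w" ]
]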
Anand Gupta
2006-Sep-27 19:53 UTC
Re: [Fedora-xen] what do you recommend for cluster fs ??
Could you please direct me to some documentation/howto where I can get a feel for it and see how to go about installing and using it? I heard about the RH Cluster Suite, however I wasn't able to find any documentation on using it.

On 9/23/06, Rik van Riel <riel@redhat.com> wrote:
> You could also use CLVM to have your guests living on logical
> volumes that are visible on all cluster hosts.

--
regards,
Anand Gupta
Daniel P. Berrange
2006-Sep-27 19:58 UTC
[Xen-users] Re: [Fedora-xen] what do you recommend for cluster fs ??
On Thu, Sep 28, 2006 at 01:23:21AM +0530, Anand Gupta wrote:
> Could you please direct me to some documentation/howto where I can get
> a feel for it and see how to go about installing and using it? I heard
> about the RH Cluster Suite, however I wasn't able to find any
> documentation on using it.

There's some docs linked off the Cluster Suite dev pages:

http://sources.redhat.com/cluster/

Regards,
Dan.

--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
Anand Gupta
2006-Sep-27 20:04 UTC
Re: [Fedora-xen] what do you recommend for cluster fs ??
Thanks for the link. I already tried looking there, however I was confused. I guess there is no other way than to find my way through those docs.

On 9/28/06, Daniel P. Berrange <berrange@redhat.com> wrote:
> There's some docs linked off the Cluster Suite dev pages:
>
> http://sources.redhat.com/cluster/

--
regards,
Anand Gupta
Andrew Cathrow
2006-Sep-27 20:32 UTC
[Xen-users] Re: [Fedora-xen] what do you recommend for cluster fs ??
All the documentation is available online:

http://www.redhat.com/docs/manuals/csgfs/

On Thu, 2006-09-28 at 01:23 +0530, Anand Gupta wrote:
> Could you please direct me to some documentation/howto where I can get
> a feel for it and see how to go about installing and using it? I heard
> about the RH Cluster Suite, however I wasn't able to find any
> documentation on using it.
Anand Gupta
2006-Sep-27 20:34 UTC
Re: [Fedora-xen] what do you recommend for cluster fs ??
On 9/28/06, Andrew Cathrow <acathrow@redhat.com> wrote:
> All the documentation is available online:
>
> http://www.redhat.com/docs/manuals/csgfs/

Thanks for the pointer Andrew.

--
regards,
Anand Gupta