I am interested in how Gluster and XCP would work together. I see that
Gluster says it supports running as a VM in Xen, but does XCP support
connecting to a Gluster-presented volume for booting and running VMs?
Any good technical details would be greatly appreciated.

TIA,

Scott

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On 06/04/2011 04:23 AM, Scott Damron wrote:
> I am interested in how Gluster and XCP would work together. I see that
> Gluster says it supports running as a VM in Xen, but does XCP support
> connecting to a Gluster-presented volume for booting and running VMs?
> Any good technical details would be greatly appreciated.
>
> TIA,
>
> Scott

I haven't used Gluster at all yet, but I've been meaning to give it a try
on XCP to see what I can do with it. Does Gluster allow you to share a
volume as NFS? If that were the case, then XCP could use the Gluster NFS
share as an SR, and you'd have Gluster underneath your XCP VMs without
XCP having any idea. Would this setup be helpful to you?

Mike
I believe it would be; I really am just trying to avoid using iSCSI due
to overhead. I also am struggling to understand XCP a bit, I guess. Is it
really just Xen with VSwitch, Vastsky and a few other magic bits thrown
in, or is it entirely something new that tools you would normally use
with Xen won't work with? Does it use Vastsky and VSwitch by default when
you set up some systems, or do you have to configure something special to
make use of the switching and storage?

Thanks,

Scott

On Sat, Jun 4, 2011 at 8:46 AM, Mike McClurg <mike.mcclurg@citrix.com> wrote:
> I haven't used Gluster at all yet, but I've been meaning to give it a try
> on XCP to see what I can do with it. Does Gluster allow you to share a
> volume as NFS? If that were the case, then XCP could use the Gluster NFS
> share as an SR, and you'd have Gluster underneath your XCP VMs without
> XCP having any idea. Would this setup be helpful to you?
>
> Mike
Hi,

I would really recommend against using Gluster or any filesystem-based
method of providing VM storage; you are much better off using a
block-based storage system. iSCSI is many times more efficient and
fault-tolerant than Gluster. My recommendation is to use the FUSE driver
within the VMs to connect to Gluster volumes if you require them.

Joseph.

On 5 June 2011 00:28, Scott Damron <sdamron@gmail.com> wrote:
> I believe it would be; I really am just trying to avoid using iSCSI
> due to overhead. [...] Does it use Vastsky and VSwitch by default when
> you set up some systems, or do you have to configure something special
> to make use of the switching and storage?

--
Kind regards,
Joseph.
Founder | Director
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846
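Joseph's suggestion of attaching Gluster inside the guests rather than at
the hypervisor layer would look roughly like the sketch below. The server
name "gluster1" and volume name "vmdata" are placeholders, and this
assumes the native GlusterFS FUSE client is installed in the VM:

```shell
# Inside the guest VM: mount a GlusterFS volume via the FUSE client.
# The client fetches the volume layout from gluster1 and then talks to
# all bricks directly, so there is no single NFS-style bottleneck.
mount -t glusterfs gluster1:/vmdata /mnt/vmdata

# Make the mount persistent; _netdev defers it until networking is up.
echo 'gluster1:/vmdata /mnt/vmdata glusterfs defaults,_netdev 0 0' >> /etc/fstab
```

This keeps XCP entirely out of the picture: the hypervisor only provides
the root disk, and the application data lives on Gluster.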
On Sun, Jun 05, 2011 at 11:56:28PM +1000, Joseph Glanville wrote:
> I would really recommend against using Gluster or any filesystem-based
> method of providing VM storage

Ditto. I would specifically recommend against Gluster. We tried it at my
company for VM storage, and it's just not suitable for large files like
VM images, especially if you are hoping to use the advanced features like
replication.

--Igor
What do you recommend for such a configuration?

Say we have 4 servers, each with 12 drives. I want to distribute and
replicate Xen images over all servers in such a manner that:
*) live migration works;
*) *any* one of the 4 servers may fail while still leaving *all* images
   accessible;
*) it will work over InfiniBand (not only using IPoIB);
*) it would be a big plus if the filesystem does not require a separate
   metadata/transaction/management/you-name-it server, or at least
   provides a mechanism for transparent failover.

I was reading the documentation of a number of network filesystems, and
GlusterFS seemed the best choice.

Igor Serebryany wrote:
> Ditto. I would specifically recommend against Gluster. We tried it at my
> company for VM storage, and it's just not suitable for large files like
> VM images, especially if you are hoping to use the advanced features
> like replication.
>
> --Igor

--
Martins Lazdans
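For reference, a replicated GlusterFS volume matching the layout Martins
describes would be created roughly as below (Gluster 3.x syntax; the
hostnames and brick paths are hypothetical). Note that with `replica 2`,
bricks are paired in the order listed, so surviving the loss of any one
server depends on each replica pair spanning two different machines:

```shell
# Run on server1 after installing glusterd on all four nodes.
gluster peer probe server2
gluster peer probe server3
gluster peer probe server4

# Distributed-replicated volume: files are spread across two replica
# pairs (server1+server2 and server3+server4), each holding two copies.
gluster volume create vmimages replica 2 \
    server1:/export/brick1 server2:/export/brick1 \
    server3:/export/brick1 server4:/export/brick1
gluster volume start vmimages
```

This meets the no-metadata-server requirement, since Gluster's hashing is
fully distributed, but it does not address the native-InfiniBand point
unless the volume transport is switched to RDMA.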
You should have a SAN and have no disks in the actual servers.

On Mon, Jun 6, 2011 at 12:45 PM, Martins Lazdans <marrtins@dqdp.net> wrote:
> What do you recommend for such a configuration?
> Say we have 4 servers, each with 12 drives. I want to distribute and
> replicate Xen images over all servers in such a manner that:
> *) live migration works;
> *) *any* one of the 4 servers may fail while still leaving *all* images
>    accessible;
> *) it will work over InfiniBand (not only using IPoIB);
> *) it would be a big plus if the filesystem does not require a separate
>    metadata/transaction/management/you-name-it server, or at least
>    provides a mechanism for transparent failover.
Are you actually thinking of XCP here when you say don't have any disks
in the servers? I don't think that is an option, as it uses local disk
when bringing up a VM; I believe this is the default behavior. Then there
is his question about InfiniBand. IPoIB may be supported in XCP, but
plain InfiniBand is not. I imagine a person could fiddle around and get
it working, but it wouldn't be stable enough from what I have seen.

Thanks,

Scott

On Mon, Jun 6, 2011 at 5:48 AM, Wouter van Eekelen <me@woet.me> wrote:
> You should have a SAN and have no disks in the actual servers.
Well, our setup is as follows:

- Hostnodes have an 8 GB USB stick or 30 GB SSD for local storage
- and are connected to a SAN through iSCSI.

You indeed need some local storage, but you can have the actual VM
storage on a SAN.

On Mon, Jun 6, 2011 at 2:54 PM, Scott Damron <sdamron@gmail.com> wrote:
> Are you actually thinking of XCP here when you say don't have any
> disks in the servers? I don't think that is an option, as it uses
> local disk when bringing up a VM; I believe this is the default
> behavior.
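In XCP terms, pointing a pool at an iSCSI SAN like this means creating a
shared LVM-over-iSCSI SR with the `xe` CLI. A sketch; the target address,
IQN and SCSI ID below are placeholders you would read off the probe
output:

```shell
# Probe the target: run without an IQN/SCSIid, sr-probe reports the
# available IQNs and LUNs so you can fill in the values below.
xe sr-probe type=lvmoiscsi device-config:target=10.0.0.10

# Create a shared SR on the chosen LUN (all values hypothetical).
xe sr-create name-label="SAN storage" shared=true type=lvmoiscsi \
    device-config:target=10.0.0.10 \
    device-config:targetIQN=iqn.2011-06.com.example:vmstore \
    device-config:SCSIid=3600a0b80000f000000000000
```

Once the SR exists and is shared, every host in the pool attaches it, so
live migration works without the VMs touching local disk.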
On Sat, Jun 4, 2011 at 4:28 PM, Scott Damron <sdamron@gmail.com> wrote:
> I believe it would be; I really am just trying to avoid using iSCSI
> due to overhead. I also am struggling to understand XCP a bit, I
> guess.

NFS has much more overhead than iSCSI. It also runs on TCP/IP (which is
why iSCSI has so much overhead), but you have an NFS server and a
"foreign" file system on top as well.

Best would be to use AoE (if you can't change to fibre optic) and then
use something like ZFS / ext4 on the hypervisors.

--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
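On the initiator side, AoE is lightweight precisely because it skips
TCP/IP: the aoe kernel module speaks raw Ethernet frames and exposes
targets as ordinary block devices. A sketch using aoetools; the
shelf/slot numbering (e0.0) is whatever the target advertises:

```shell
# Load the AoE initiator and scan the local Ethernet segment.
modprobe aoe
aoe-discover
aoe-stat                 # lists discovered targets, e.g. e0.0

# The exported LUN appears as a plain block device, so any local
# filesystem (ext4, or ZFS via a zpool) can go on top of it.
mkfs.ext4 /dev/etherd/e0.0
mount /dev/etherd/e0.0 /mnt/aoe
```

The trade-off is that AoE is not routable: initiator and target must sit
on the same layer-2 segment.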
On Sat, Jun 4, 2011 at 9:28 AM, Scott Damron <sdamron@gmail.com> wrote:
> Is it really just Xen with VSwitch, Vastsky and a few other
> magic bits thrown in

Is Vastsky there? Is it stable enough?

--
Javier
I purchased a CoRaid 12 TB SAN for AoE. We will see how it goes. I would
prefer InfiniBand for the available bandwidth, but as I am using XCP, I
figured I would go with something supported out of the box. I will let
the list know how it goes.

Scott

On Mon, Jun 6, 2011 at 8:51 AM, Rudi Ahlers <Rudi@softdux.com> wrote:
> NFS has much more overhead than iSCSI. It also runs on TCP/IP (which is
> why iSCSI has so much overhead), but you have an NFS server and a
> "foreign" file system on top as well.
>
> Best would be to use AoE (if you can't change to fibre optic) and then
> use something like ZFS / ext4 on the hypervisors.
On Saturday 04 June 2011 07:16 PM, Mike McClurg wrote:
> I haven't used Gluster at all yet, but I've been meaning to give it a try
> on XCP to see what I can do with it. Does Gluster allow you to share a
> volume as NFS? If that were the case, then XCP could use the Gluster NFS
> share as an SR, and you'd have Gluster underneath your XCP VMs without
> XCP having any idea. Would this setup be helpful to you?
>
> Mike

Yes, Gluster allows you to share a volume as NFS. I have tried it with
XCP, and the shared storage works very well.
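Assuming Gluster's built-in NFS export is enabled on the volume, the
attachment Mike describes is the stock NFS SR creation in XCP; the server
name and export path here are placeholders:

```shell
# Create a shared NFS SR backed by the Gluster NFS export.
# "gluster1" and "/vmimages" are hypothetical; Gluster serves the
# volume over NFSv3, which is what XCP's NFS SR driver speaks.
xe sr-create name-label="Gluster NFS SR" shared=true content-type=user \
    type=nfs device-config:server=gluster1 \
    device-config:serverpath=/vmimages
```

XCP then stores VHD files on the share exactly as it would on any other
NFS SR, with Gluster handling distribution underneath.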
Hi,

There are no such systems available commercially as far as I am aware. I
developed a custom solution for my company to achieve this, but at this
point in time your best bet is a SAN that supports SRP. Have a look at
the devices from Scalable Informatics; I always recommend them to others
using InfiniBand fabrics.

Joseph.

On 6 June 2011 03:45, Martins Lazdans <marrtins@dqdp.net> wrote:
> What do you recommend for such a configuration?
> Say we have 4 servers, each with 12 drives. I want to distribute and
> replicate Xen images over all servers in such a manner that:
> *) live migration works;
> *) *any* one of the 4 servers may fail while still leaving *all* images
>    accessible;
> *) it will work over InfiniBand (not only using IPoIB);
> *) it would be a big plus if the filesystem does not require a separate
>    metadata/transaction/management/you-name-it server, or at least
>    provides a mechanism for transparent failover.

--
Kind regards,
Joseph.
Founder | Director
Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846
It's been a year now since anyone posted to this thread. What was the
final verdict? Has the latest release of GlusterFS (i.e., version 3.3)
changed anyone's opinion of GlusterFS as an HA VMDI SR?

Eric Pretorious
Truckee, CA

Joseph Glanville wrote:
> There are no such systems available commercially as far as I am aware.
> I developed a custom solution for my company to achieve this, but at
> this point in time your best bet is a SAN that supports SRP. Have a
> look at the devices from Scalable Informatics; I always recommend them
> to others using InfiniBand fabrics.

--
View this message in context: http://xen.1045712.n5.nabble.com/XCP-and-Gluster-tp4453177p5709665.html
Sent from the Xen - User mailing list archive at Nabble.com.