Hi,

I have been using Xen under CentOS with Openfiler as the storage for a while now. However, with RHEL6 shipping without Xen support, and with the release of XCP 1.0, I have the impression it's time to move away from CentOS and give XCP a try.

My greatest doubt is about LVM. Openfiler takes an entire RAID10 volume and exports smaller volumes, created through LVM, via iSCSI. However, XCP also uses LVM to manage snapshots and the like.

I believe that if I start using XCP with my current storage server (Openfiler), I will end up with two layers of LVM -- first a volume on Openfiler which is exported via iSCSI, and then a new LVM layer created on top of the LUN I import via iSCSI on XCP.

How bad, performance-wise, would it be to have two layers of LVM? Would it be too complicated to maintain? (I imagine it would be as simple as dealing with the same thing twice, but XCP is also handling LVM, so I'm not sure how that would work out.)

Does anyone here have similar experiences they can share?

I know Openfiler does not have an API for integration with other appliances, so I believe XCP would not be able to create and manipulate the LVM directly on the Openfiler storage in order to avoid stacking the two layers. Does anyone know if that kind of integration is doable, or even possible?

Best regards,
Eduardo Bragatto

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Hi Eduardo,

I have been using this scenario (Openfiler + XCP) for more than a year. I have had no performance loss, but my Openfiler is patched to use Fibre Channel.

I suggest you create only one big LUN (less than 2TB, remember) and let XCP manage it.

Best regards,
FG

-----Original Message-----
From: Eduardo Bragatto
Sent: Saturday, April 2, 2011 16:46
To: xen-users
Subject: [Xen-users] XCP and Openfiler as the storage

> [full original message snipped]
Christopher J Petrolino
2011-Apr-05 16:55 UTC
Re: [Xen-users] XCP and Openfiler as the storage
That is helpful info. I am going to be doing some testing with Openfiler and XCP soon; I will let you know my results as well.

On Tue, Apr 5, 2011 at 12:24 PM, <cluster@xinet.it> wrote:
> I am using this scenario (openfiler + XCP) from more than 1 year. I have had
> no performance loss but my openfiler is patched for using fibrechannel.
>
> I suggest you to create only a big LUN (less than 2TB remember) and let XCP
> manage it.
>
> [rest of quoted thread snipped]
On Tue, Apr 5, 2011 at 12:24 PM, <cluster@xinet.it> wrote:
> I suggest you to create only a big LUN (less than 2TB remember) and let XCP
> manage it.

Why only 2GB ??

> [rest of quoted thread snipped]
Eduardo Bragatto
2011-Apr-05 17:11 UTC
Re: R: [Xen-users] XCP and Openfiler as the storage
Hi,

thanks for your answer; however, it raised another question.

I have several hypervisors, which means that if I create one big LUN, different servers will need access to the same LUN, and as far as I know iSCSI LUNs are not supposed to be shared. Obviously, within that same LUN, each hypervisor would be accessing different logical volumes, but when a new LV or a new snapshot is created, it has to allocate space from the LUN, and that could lead to a scenario where the other hypervisors do not see the new LV or the new snapshot.

Have you encountered this kind of problem, or do you have a different scenario (perhaps only one hypervisor per big LUN, for example)?

Best regards,
Eduardo.

On Apr 5, 2011, at 1:24 PM, <cluster@xinet.it> wrote:
> I am using this scenario (openfiler + XCP) from more than 1 year. I have had
> no performance loss but my openfiler is patched for using fibrechannel.
>
> I suggest you to create only a big LUN (less than 2TB remember) and let XCP
> manage it.
>
> [rest of quoted thread snipped]
On Tue, Apr 5, 2011 at 1:08 PM, Outback Dingo <outbackdingo@gmail.com> wrote:
>> I suggest you to create only a big LUN (less than 2TB remember) and let
>> XCP manage it.
>
> Why only 2GB ??

jeeez I meant why only 2 TB

> [rest of quoted thread snipped]
On Apr 5, 2011, at 2:45 PM, Outback Dingo wrote:
> Why only 2GB ??
>
> jeeez I meant why only 2 TB

AFAIK, some iSCSI initiators cannot handle LUNs larger than 2TB. This is not true on all platforms, but Windows, for example, is known for not supporting iSCSI LUNs over 2TB.

Does anyone know if that limit applies to XCP as well?

Best regards,
Eduardo.
On Tue, Apr 5, 2011 at 1:53 PM, Eduardo Bragatto <eduardo@bragatto.com> wrote:
> AFAIK, some iSCSI initiators cannot handle LUNs larger than 2TB. This is
> not true on all platforms, but Windows, for example, is known for not
> supporting iSCSI LUNs over 2TB.

I thought it was 16TB per LUN on current kernels.

> Does anyone know if that limit applies to XCP as well?
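For reference, whether a given LUN sits above or below the 2 TiB mark is easy to verify from dom0 before handing it to XCP. A minimal sketch; the device path is a placeholder, substitute the actual imported iSCSI disk:

```shell
# 2 TiB expressed in bytes -- the threshold some iSCSI initiators choke on.
two_tib=$((2 * 1024 * 1024 * 1024 * 1024))

# check_lun_size: report whether a block device exceeds 2 TiB.
# The argument (e.g. /dev/sdb) is a hypothetical example path.
check_lun_size() {
    size=$(blockdev --getsize64 "$1")
    if [ "$size" -gt "$two_tib" ]; then
        echo "$1: larger than 2 TiB -- may hit initiator limits"
    else
        echo "$1: within the 2 TiB limit"
    fi
}
```

Usage would be `check_lun_size /dev/sdb` (as root, since `blockdev` queries the kernel for the device size).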
On Tue, Apr 5, 2011 at 6:24 PM, <cluster@xinet.it> wrote:
> I am using this scenario (openfiler + XCP) from more than 1 year. I have had
> no performance loss but my openfiler is patched for using fibrechannel.
>
> I suggest you to create only a big LUN (less than 2TB remember) and let XCP
> manage it.

I'm just curious: what do you do if the Openfiler fails completely? It will fail, sooner or later. What method of standby NAS / failover do you have in place?

--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
Have a look at this:

http://www.techno360.in/forum/showthread.php/2689-Understanding-the-2-TB-Limit-in-Windows-Storage

Regards,
FG

From: Outback Dingo [mailto:outbackdingo@gmail.com]
Sent: Tuesday, April 5, 2011 19:09
To: cluster@xinet.it
Cc: Eduardo Bragatto; xen-users
Subject: Re: [Xen-users] XCP and Openfiler as the storage

> Why only 2GB ??
>
> [rest of quoted thread snipped]
On Wed, Apr 6, 2011 at 11:49 AM, <cluster@xinet.it> wrote:
> Have a look at this:
>
> http://www.techno360.in/forum/showthread.php/2689-Understanding-the-2-TB-Limit-in-Windows-Storage

That's only a limitation of:

a) Windows
b) more specifically, NTFS

So, with Windows domUs you need to create a smaller (50GB?) partition to install Windows, and then add the larger (more than 2TB) partition, formatting it as a GPT-type disk.

From a SAN / NAS / Openfiler / FreeNAS point of view, you could still export the full 8TB (if, for example, you had that much storage) to your dom0, but then sub-partition it on the dom0 for the different VMs as needed.
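The GPT step Rudi describes is a couple of commands inside the guest. A sketch, assuming the big exported disk appears in the domU as /dev/xvdb (a hypothetical device name, adjust to yours):

```shell
# Label the >2TB disk with GPT instead of MBR (MBR tops out at 2 TiB
# with 512-byte sectors), then create one partition spanning the disk.
parted -s /dev/xvdb mklabel gpt
parted -s /dev/xvdb mkpart primary 0% 100%
```

This is destructive to whatever partition table was on the disk, so it belongs only on a freshly attached data disk.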
On Tue, Apr 5, 2011 at 10:53 AM, Eduardo Bragatto <eduardo@bragatto.com> wrote:
> AFAIK, some iSCSI initiators cannot handle LUNs larger than 2TB.
>
> Does anyone know if that limit applies to XCP as well?

Sadly, it does. I believe the limit was meant to enforce a 2TB cap for VHD VDIs, since the VHD format has a 2TB limitation, but it also enforces a 2TB cap for raw VDIs. I asked on the list earlier whether this was a bug and got no response.

An ugly workaround is at http://forums.citrix.com/thread.jspa?messageID=1450572

-Dustin
Christopher J Petrolino
2011-Apr-06 16:41 UTC
Re: [Xen-users] XCP and Openfiler as the storage
On Wed, Apr 6, 2011 at 4:12 AM, Rudi Ahlers <Rudi@softdux.com> wrote:
> I'm just curious, what do you do if the OpenFiler fails completely?
> It will fail, sooner or later.
> What method of standby-NAS / failover do you have in place?

Openfiler has some great replication features built in. For our needs it was a no-brainer: the cost of two big Openfilers still came in way under one small big-name SAN.
On Wed, Apr 6, 2011 at 6:41 PM, Christopher J Petrolino <cpetrolino@gmail.com> wrote:
> Openfiler has some great replication features built in. For our
> needs it was a no-brainer: the cost of two big Openfilers still came in
> way under one small big-name SAN.

Yes, it's cheap (cost of hardware only). But how do you handle the failover? Manually? Are you using NFS or iSCSI?
On Apr 6, 2011, at 1:47 PM, Rudi Ahlers wrote:
> Yes, it's cheap (cost of hardware only). But how do you handle the
> failover? Manually? Are you using NFS or iSCSI?

Openfiler has HA support built in. It uses DRBD and Heartbeat to handle replication and failover. If you have two Openfiler boxes, you can configure them as a cluster right from the web interface -- it seems very simple, but this is not the Openfiler mailing list, so let's not lose focus :)

Best regards,
Eduardo
Eduardo Bragatto
2011-Apr-06 17:18 UTC
Re: R: [Xen-users] XCP and Openfiler as the storage
Hello FG,

sorry to bring this question up again, but I'm really curious about the fact that the LUN might be accessed by more than one hypervisor at the same time.

Did you encounter any problems like the ones I described below?

Best regards,
Eduardo

On Apr 5, 2011, at 2:11 PM, Eduardo Bragatto wrote:
> I have several hypervisors, which means if I create one big LUN, I
> will need different servers to have access to the same LUN, and as
> far as I know iSCSI LUNs are not supposed to be shared. Obviously,
> within that same LUN, each hypervisor would be accessing different
> logical volumes, but in case a new LV or a new snapshot is created,
> it would have to allocate space from the LUN, and that could lead to
> a scenario where the other hypervisors do not see the new LV or the
> new snapshot.
>
> Have you encountered this kind of problem, or do you have a
> different scenario (perhaps only one hypervisor per big LUN, for
> example)?
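On the visibility concern in particular: XCP's xe CLI exposes an explicit rescan, so a host whose view of the SR predates a newly created volume can refresh it. A sketch with a made-up SR UUID; on a real pool you would take the UUID from `xe sr-list`:

```shell
# Ask XAPI to rescan the storage repository, re-reading its metadata so
# VDIs/snapshots created via other pool members show up on this host.
# The UUID below is an example placeholder.
xe sr-scan uuid=0d2c1b6f-1111-2222-3333-444455556666
```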
On Wed, Apr 6, 2011 at 7:18 PM, Eduardo Bragatto <eduardo@bragatto.com> wrote:
> sorry to bring this question up again, but I'm really curious about the
> fact that the LUN might be accessed by more than one hypervisor at the
> same time.

How do you access the same LUN from different hypervisors without corruption?
On Apr 6, 2011, at 2:34 PM, Rudi Ahlers wrote:
> How do you access the same LUN from different hypervisors, without
> corruption?

AFAIK, you can if you use a shared filesystem on top of the LUN, like GFS.

FG was kind enough to answer me in private, but it's still not clear to me how he is doing it. If he doesn't answer here, I will update the thread later with whatever information he provides.

Best regards,
Eduardo
Hi,

according to FG, he does share the LUNs, and it is allowed. I had the same wrong impression that sharing a LUN would cause data corruption, but that's not true in all cases. According to Wikipedia (http://en.wikipedia.org/wiki/ISCSI):

"In enterprise deployments, LUNs usually represent slices of large RAID disk arrays, often allocated one per client. iSCSI imposes no rules or restrictions on multiple computers sharing individual LUNs; it leaves shared access to a single underlying filesystem as a task for the operating system."

Pay attention to the last part: it leaves shared access to a single underlying filesystem as a task for the operating system. This means that if you create an ext3 filesystem (which cannot be shared) on a LUN, then mount and use it on multiple servers, the data will become corrupted very soon. However, if you create a GFS filesystem on that same LUN, it will run fine from multiple systems. Therefore, it's clear that the LUN can be shared, as long as the layer on top of it can handle being shared.

From what I understand, XCP will treat the entire LUN as shared storage and will create a separate logical volume for each VM without any problem. I believe you would only have a problem if you decided to actually *mount* the filesystems inside those logical volumes on multiple servers -- but as long as you don't, and XCP seems to handle that, you won't have problems.

My conclusion is a bit speculative, as I'm still researching the subject, and I'm assuming XCP takes care of avoiding race conditions when mounting the filesystems from the shared LUN (i.e. not allowing two hypervisors to attempt to mount them at the same time, and properly unmounting / mounting during a live migration). If anyone can shed some light here, it would be very much appreciated.

According to FG, he has been using XCP with two big shared LUNs (both > 1TB) for over a year with no problems.

Best regards,
Eduardo Bragatto
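The setup described above boils down to one xe command in XCP: a pool-wide LVM-over-iSCSI SR, created once, with XAPI coordinating the volume group across hosts. A sketch; the target address, IQN, and SCSI ID below are made-up placeholders, the real values come from the Openfiler side and from the iSCSI discovery step:

```shell
# Create a shared lvmoiscsi SR: XCP lays one LVM volume group over the
# whole LUN, and each VM's VDI becomes a logical volume inside it.
# All device-config values are example placeholders.
xe sr-create name-label="openfiler-vg" shared=true type=lvmoiscsi \
    device-config:target=192.168.1.50 \
    device-config:targetIQN=iqn.2011-04.com.example:openfiler.lun0 \
    device-config:SCSIid=14f504e46494c450000000000000000a
```

With `shared=true` the SR is attached on every host in the pool, which is what makes live migration between hypervisors possible without copying disks.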
On 04/06/11 19:38, Eduardo Bragatto wrote:
> AFAIK, you can if you use a shared filesystem on top of the LUN, like GFS.
>
> FG was kind enough to answer me in private, but it's still not clear to
> me how he is doing it.

Just jumping in here, and I might have missed some info, but accessing the same LUN from different hypervisors is no problem whatsoever. It's like a SCSI drive that can be connected to several hosts (yes, a SCSI bus can be shared by several machines). The thing, though, is that only one hypervisor should actively use it and write to it. We do this, and access is regulated by clustering software.

Rgds,
B.
I believe that since each VM's VDI is assigned a different 'chunk' of the LUN, no two machines will ever be writing to the same chunk at the same time.

-Dustin

On Wed, Apr 6, 2011 at 10:34 AM, Rudi Ahlers <Rudi@softdux.com> wrote:
> How do you access the same LUN from different hypervisors, without
> corruption?
On Wed, Apr 6, 2011 at 8:58 PM, Dustin Marquess <dmarquess@gmail.com> wrote:
> I believe since each VM's VDI is assigned a different 'chunk' of the
> LUN, no two machines will ever be writing to the same chunk at the
> same time.

True, but what about high availability? If one hypervisor fails and all its domUs need to be restarted on another hypervisor, would that cause problems?

--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
On 2011-04-06 14:58, Dustin Marquess wrote:
> I believe since each VM's VDI is assigned a different 'chunk' of the
> LUN, no two machines will ever be writing to the same chunk at the
> same time.

With an LVMOISCSI SR (LVM over iSCSI), the same VDI cannot be shared between two VMs; this seems to be by design, even if you set the sharable flag to "true".

However, there is a way to share a VDI between two VMs with XenServer/XCP: you need to create an iSCSI SR using xe (XenCenter can only create LVMOISCSI SRs). This is a "VDI-per-LUN" approach, and when using it you will be able to attach the same VDI to two VMs, in order to use a clustered filesystem (GFS, OCFS2).

The iSCSI SR type driver requires an IP address and IQN identifier. It will initialise the iSCSI session and detect all LUNs presented from that target. All LUNs get assigned a VDI UUID and are entered into the XAPI DB with the managed flag set to false. When you call VDI.create for that SR with a virtual-size parameter, it will select the closest matching VDI from the unmanaged freelist and toggle the flag to managed. At this point you can attach VDIs via VBDs to VMs, and existing content on the LUN should be accessible to the VM over the virtual block device.

Example:

  xe sr-create name-label="SAN raw view" type=iscsi shared=true \
     device-config:target=<target IP> device-config:targetIQN=<target IQN>
  xe vdi-create sr-uuid=<SR uuid returned> name-label="shared_vdi" \
     type=user virtual-size=<your LUN size> sharable=true
  xe vbd-create vm-uuid=<vm1_uuid> vdi-uuid=<VDI UUID returned> \
     device=<requested device position>
  xe vbd-create vm-uuid=<vm2_uuid> vdi-uuid=<VDI UUID returned> \
     device=<requested device position>

Hope this helps,
Pierre
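Once the same VDI is attached to both VMs, the guests still need a cluster filesystem before they can both mount it safely. A rough sketch with OCFS2, assuming the shared VBD shows up as /dev/xvdb in each guest (a made-up device position) and that ocfs2-tools is installed with an o2cb cluster already configured on both nodes:

```shell
# Run once, from either VM: format the shared device for a 2-node cluster.
mkfs.ocfs2 -N 2 -L shared_vdi /dev/xvdb

# Run on both VMs: bring the cluster stack online and mount the same device.
service o2cb online
mkdir -p /mnt/shared
mount -t ocfs2 /dev/xvdb /mnt/shared
```

Without a cluster filesystem, mounting an ordinary ext3 filesystem from both guests at once would corrupt it almost immediately, since neither kernel knows about the other's caches.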
On Wed, Apr 6, 2011 at 7:12 PM, Eduardo Bragatto <eduardo@bragatto.com> wrote:
> On Apr 6, 2011, at 1:47 PM, Rudi Ahlers wrote:
>
>> On Wed, Apr 6, 2011 at 4:12 AM, Rudi Ahlers <Rudi@softdux.com> wrote:
>>> Openfiler has some great features built in for replication. For our
>>> needs it was a no-brainer: the cost of 2 BIG Openfilers still ended
>>> up being way under 1 small big-name SAN.
>>
>> Yes, it's cheap (cost of hardware only). But how do you handle the
>> failover? Manually? Are you using NFS or iSCSI?
>
> Openfiler has HA support built in. It uses DRBD and Heartbeat to handle
> replication and failover. If you have two Openfiler boxes, you can
> configure them as a cluster right from their web interface -- it seems
> very simple, but this is not the Openfiler mailing list, so let's not
> lose focus :)
>
> Best regards,
> Eduardo

Yes, I know it's not an OF mailing list, but I'm interested in this kind of setup using something cheap like OF or FreeNAS.
On Apr 7, 2011, at 5:22 AM, Rudi Ahlers wrote:
> On Wed, Apr 6, 2011 at 7:12 PM, Eduardo Bragatto
> <eduardo@bragatto.com> wrote:
>> Openfiler has HA support built in. It uses DRBD and Heartbeat to
>> handle replication and failover. If you have two Openfiler boxes, you
>> can configure them as a cluster right from their web interface -- it
>> seems very simple, but this is not the Openfiler mailing list, so
>> let's not lose focus :)
>>
>> Best regards,
>> Eduardo
>
> Yes, I know it's not an OF mailing list, but I'm interested in this
> kind of setup using something cheap like OF or FreeNAS

Sorry if I was unclear on what I meant.

DRBD (http://www.drbd.org/): "DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network-based RAID-1."

Heartbeat (http://www.linux-ha.org/wiki/Heartbeat): "Heartbeat is a daemon that provides cluster infrastructure (communication and membership) services to its clients. This allows clients to know about the presence (or disappearance!) of peer processes on other machines and to easily exchange messages with them. In order to be useful to users, the Heartbeat daemon needs to be combined with a cluster resource manager (CRM) which has the task of starting and stopping the services (IP addresses, web servers, etc.) that the cluster will make highly available. Pacemaker is the preferred cluster resource manager for clusters based on Heartbeat."

Basically, DRBD works by replicating everything you write to your disks, via the network, to another storage box. Heartbeat runs on both OF boxes and keeps monitoring for their presence. If one of the storage servers goes down, the other one will pick up the IP used by the box that just "disappeared", so your clients can still reach the storage server on the same IP, with the same data they had on the failed storage server.
This is all handled by OF itself; you only need to set up the two OF boxes in an HA cluster via its web interface.

Best regards,
Eduardo Bragatto.
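For reference, the replication Eduardo describes boils down to a DRBD resource definition along these lines. This is only a minimal sketch: the hostnames, backing disk, and addresses are made up, and Openfiler generates its own equivalent configuration when you set up the HA cluster through its web interface.

```
# Hypothetical /etc/drbd.conf resource for two Openfiler boxes.
resource of_data {
  protocol C;                    # synchronous replication: a write completes
                                 # only after it reaches both nodes
  on filer1 {
    device    /dev/drbd0;        # the replicated device exported over iSCSI
    disk      /dev/sdb1;         # the local backing RAID volume
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on filer2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Heartbeat then only has to manage two resources on top of this: promoting the DRBD device to primary on the surviving node, and taking over the floating IP that the XCP hosts point their iSCSI initiators at.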