Hi,

If I were to use Debian Squeeze as dom0 (3 servers) and install the Xen 4.0 hypervisor on all of them, how would I achieve HA using only local storage? Has it ever been done before?

--
Regards,

Umarzuki Mochlis
http://debmal.my
On Wed, Sep 5, 2012 at 10:57 AM, Umarzuki Mochlis <umarzuki@gmail.com> wrote:
> if I were to use Debian Squeeze as dom0 (3 servers) and install Xen
> hypervisor 4.0 on all of them, how would I achieve HA with only using
> local storage?

You can't.

> has it ever been done before?

The easiest way would be to use shared storage and some sort of cluster manager. So if it were me I'd use one node as a SAN (e.g. with OpenIndiana + napp-it, or whatever iSCSI-capable storage appliance of your choice) and the other two as compute nodes. The SAN becomes a single point of failure, though (unless you can afford two storage nodes, but that's another story).

If you don't have a SAN, then to get a "shared" common storage you can use DRBD. AFAIK you can only have two nodes in a primary-primary setup, though. See https://alteeve.ca/w/Red_Hat_Cluster_Service_3_Tutorial for an example (it uses KVM, but the concept should be similar).

--
Fajar
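[For illustration: a two-node primary/primary DRBD setup like the one suggested above hinges on the allow-two-primaries option. A minimal sketch, using DRBD 8.3-era syntax; the resource name, hostnames, disks, and addresses are placeholders:]

```
# /etc/drbd.d/vmstore.res -- illustrative dual-primary resource
resource vmstore {
  protocol C;                        # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;             # both nodes may be Primary at once
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;        # split-brain policies matter in dual-primary
  }
  startup {
    become-primary-on both;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

[Note that with both nodes Primary, whatever sits on top of /dev/drbd0 must itself be cluster-aware (a cluster filesystem, or cLVM-managed volumes), which is why a cluster manager is part of the recommendation.]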
On Tue, Sep 4, 2012 at 9:44 PM, Fajar A. Nugraha <list@fajar.net> wrote:
> [...]

You can, technically, do this with only local storage and stuff like DRBD: boot your HA host off the network and store everything in RAM, bypassing local storage entirely. The drawback there is obvious, but it is a (VERY application-dependent) option.

Also, with respect to doing it using Xen: Xen HA = Remus, and DRBD is going to be your best option for storage if you've ruled out an actual HA storage solution (e.g., a SAN). It's discussed in the original Remus whitepaper as well (http://nss.cs.ubc.ca/remus/papers/remus-nsdi08.pdf).

That being said, while you can technically make DRBD work across three hosts (mirror A to B, then B to C), I'm not sure if you could make Remus do that. And on the subject of Remus, I wouldn't trust it in production until 4.2 is released.
_______________________________________________
Xen-users mailing list
Xen-users@lists.xen.org
http://lists.xen.org/xen-users
On Tue, Sep 4, 2012 at 10:01 PM, John Sherwood <jrs@vt.edu> wrote:
> You can, technically do this with only local storage and stuff like DRBD;

Oops, meant to say "without" stuff like DRBD (kind of an important word to leave out, sorry).
Actually, look into Gluster for turning your local storage into shared storage. It works across multiple nodes and replicates data.

On Sep 5, 2012 1:05 AM, "John Sherwood" <jrs@vt.edu> wrote:
> [...]
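[For illustration: the Gluster approach boils down to building one replicated volume from each node's local disk and mounting it on every dom0. A rough outline using the Gluster 3.x CLI; hostnames, brick paths, and the volume name are placeholders:]

```
# Run once, from any node: join the three servers into a trusted pool
gluster peer probe node2
gluster peer probe node3

# replica 3 keeps a full copy of every file on each node's local brick
gluster volume create vmstore replica 3 \
    node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
gluster volume start vmstore

# On each dom0, mount the volume as the shared image store
mount -t glusterfs localhost:/vmstore /var/lib/xen/images
```

[With the images on a replicated volume, any node can start a domU whose host has failed, and live migration between dom0s becomes possible since all of them see the same storage path.]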
2012/9/5 Andrew Wells <agwells0714@gmail.com>:
> Actually look into gluster for turning your local storage into shared
> storage. Works with more than one node and replicates data.

Thanks for this useful info, and also to everyone who shared their opinion.

--
Regards,

Umarzuki Mochlis
http://debmal.my
From my experience, Gluster write performance (2 mirrored nodes) is very low, approx 40-45 MB/s. Read is pretty high and was limited by network saturation, approx 90-95 MB/s (I have a gigabit network). For HA for VMs I'd highly recommend DRBD + Pacemaker + Corosync. That solution is much more advanced and requires much more effort, but the result will repay the time spent. I have a cluster of 2 nodes with DRBD; some VM images (block devices, using Logical Volumes on top of DRBD) are up to 512G. I don't see any performance loss. Also, Pacemaker is well integrated with Xen live migration. So I think it's the best solution for the "cheap man".

2012/9/5 Andrew Wells <agwells0714@gmail.com>:
> [...]

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
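[For illustration: the "Logical Volumes on top of DRBD, managed by Pacemaker" stack described above might be set up roughly as follows. A sketch using the crm shell; volume group, domU, and file names are placeholders:]

```
# Put LVM on top of the replicated DRBD device, then carve out domU disks
pvcreate /dev/drbd0
vgcreate vg_vm /dev/drbd0
lvcreate -L 512G -n vm1-disk vg_vm

# Pacemaker resource for the domU; allow-migrate enables Xen live
# migration when the cluster moves the resource between nodes
crm configure primitive vm1 ocf:heartbeat:Xen \
    params xmfile="/etc/xen/vm1.cfg" \
    op monitor interval="30s" timeout="30s" \
    op migrate_to interval="0" timeout="180s" \
    meta allow-migrate="true"
```

[With allow-migrate set, a graceful move of the resource becomes a live migration rather than a stop/start, which is the Pacemaker/Xen integration mentioned above.]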
I'm on the latest Gluster and followed the tuning recommendations, which improved results, but I still get less than 50 MB/s. Yes, the current version of DRBD stops at 2 nodes, but the next one, 9.x, will support up to 32 (needs to be verified).

> True, but I have not heard of it being that slow before. Also, if you do
> go with gluster, use 3.3. If you are looking for ease of setup, gluster
> is easy. And DRBD stops at 2 nodes, doesn't it?
>
> On Sep 5, 2012 11:08 AM, <dan@soleks.com> wrote:
> > [...]
True, but I have not heard of it being that slow before. Also, if you do go with Gluster, use 3.3; if you are looking for ease of setup, Gluster is easy. And DRBD stops at 2 nodes, doesn't it?

On Sep 5, 2012 11:08 AM, <dan@soleks.com> wrote:
> From my experience gluster write performance (2 mirrored nodes) is very
> low [...]