hello

is it possible to set up a computer that has 2 LAN interfaces with Xen? I want to have 2 file servers that hold the same mirrored virtual disk of the OS that I run. Each read/write operation should be made to both servers, so that if one server crashes, the other server continues serving the disk image without any interruption.

thanks for your help
roland

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On Tue, 2009-11-24 at 17:34 +0100, Roland Frei wrote:
> is it possible, to setup a computer that has 2 lan interfaces with xen ?
> i want to have 2 file servers that holds the same mirrored virtual disc of
> the os that i run.
> [...]

I believe you are looking for a high-availability cluster:

http://en.wikipedia.org/wiki/High-availability_cluster

Google for "high availability storage cluster" and that should turn up some how-tos.
i want to set up an easy high-availability storage system for a Xen guest OS (1 PC that runs Xen and 2 NAS for storage).

THE HARDWARE
PC1 has 2 physical LAN interfaces. The interfaces are directly connected to 2 separate NAS. The NAS are basic file servers, no special setup.

GUEST OS
On PC1 a host OS is installed that runs Xen.

XEN
Xen provides a virtual disk for the guest OS. HERE IS THE THING: instead of this virtual disk being stored on one drive, it should be STORED ON 2 NETWORK SHARES (something like RAID1 over LAN).

IN CASE OF CRASH
server1 or server2 can crash completely.
lan1 or lan2 can crash or be unplugged.
switch1 or switch2 can have a failure.
The Xen virtual disk "driver" will continue working on the other NAS.

AFTER CRASH
If both NAS/LAN run again, the Xen virtual disk "driver" should resync the crashed system with the running disk.

THE QUESTION
Is it possible to set up Xen so that it uses the 2 network shares to store the virtual disks?

(Of course there will be many PCs that use these 2 NAS systems for storage, and/or multiple Xen hosts can run on them, so in the end nas1 and nas2 will host many disks. It would be perfect if the disk that runs the guest OS and Xen could be a CD ISO or a USB stick.)
> THE QUESTION
> is it possible to setup xen so that is uses the 2 network share to store
> the virtual discs ?
> [...]

You haven't specified a budget, so it's hard to know what to recommend. HP and others make NAS/SAN devices that will do what you want, but don't expect change out of $20K. For about 1/10th of that you could set up a few servers and run DRBD, which operates almost exactly as you described.

http://en.wikipedia.org/wiki/Drbd
http://www.drbd.org/

James
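For a rough sketch of what the DRBD approach looks like, here is a minimal DRBD 8.x resource definition that mirrors one local disk between two storage boxes over a dedicated link. The hostnames, backing devices and IP addresses below are made-up examples, not recommendations:

```
# /etc/drbd.d/r0.res -- minimal sketch; names, disks and IPs are hypothetical
resource r0 {
    protocol C;            # synchronous: a write returns only after it has
                           # reached the disk on *both* nodes
    on nas1 {
        device    /dev/drbd0;          # replicated device the upper layers use
        disk      /dev/sdb1;           # local backing disk
        address   192.168.10.1:7788;   # replication link
        meta-disk internal;
    }
    on nas2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.2:7788;
        meta-disk internal;
    }
}
```

The guest disk would then sit on /dev/drbd0 (directly, or on LVM carved out of it) rather than on a plain partition.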
On Wed, Nov 25, 2009 at 4:04 PM, Roland Frei <frei@datatrust.ch> wrote:
> i want to setup a easy high availability storage system for xen guest os. (1
> pc that run xen and 2 nas for storage)

> THE QUESTION
> is it possible to setup xen so that is uses the 2 network share to store
> the virtual discs ?

Short answer: yes, but if you're new to the HA/cluster concept (which, from your questions, I assume you are) better stay away from it. It's complicated.

Long answer: there are various approaches to this problem; none of them is a one-step, easy-to-install solution.

Here's one approach:
- each SAN exports its storage as iSCSI
- the server/PC imports the iSCSI share from both SANs
- set up RAID1 on the server using those two iSCSI imports

Caveats:
- you need to fine-tune iSCSI so that when one of the SANs is down it doesn't wait indefinitely
- you need to fine-tune/hack Linux RAID so that when the downed SAN comes back up, it will resync automatically

Here's another approach: if the SANs are actually x86 servers with lots of disk, you can set up DRBD + a failover cluster on them.

Caveats:
- you need to learn DRBD and clustering, which IMHO is more complicated than Xen
- in my test setup, sometimes when a DRBD node went down, the other one rebooted as well. Then again, that's probably because I use OCFS2 on top of DRBD. YMMV

--
Fajar
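The first approach above (two iSCSI imports mirrored with Linux RAID1) can be sketched with open-iscsi and mdadm roughly as follows. The IQNs, portal addresses and device names are hypothetical, and the /dev/sdX names in particular depend on discovery order, so treat this as a sketch rather than a recipe:

```
# Discover and log in to one LUN on each NAS (hypothetical IQNs/IPs):
iscsiadm -m discovery -t sendtargets -p 192.168.10.1
iscsiadm -m discovery -t sendtargets -p 192.168.10.2
iscsiadm -m node -T iqn.2009-11.example:nas1.lun0 -p 192.168.10.1 --login
iscsiadm -m node -T iqn.2009-11.example:nas2.lun0 -p 192.168.10.2 --login

# Mirror the two imported disks (assume they appeared as sdc and sdd):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# The mirror can then back the guest, e.g. in the domU config file:
#   disk = [ 'phy:/dev/md0,xvda,w' ]
```

If one NAS drops, md keeps running degraded on the surviving leg; re-adding the returned disk (`mdadm /dev/md0 --re-add ...`), ideally with a write-intent bitmap enabled, is what covers the resync caveat above.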
Hi,

take a look at this document:

http://communities.vmware.com/servlet/JiveServlet/previewBody/10964-102-2-9835/ha_san_how-to.pdf;jsessionid=12BC07515EE5DB61AEFA58BA9251110D

It describes building highly available SAN iSCSI storage on Ubuntu.

Br
Peter Braun

2009/11/25 Fajar A. Nugraha <fajar@fajar.net>:
> Short answer: yes, but if you're new to HA/cluster concept (which,
> from your questions, I assume you are) better stay away from it. It's
> complicated.
> [...]
On Wed, Nov 25, 2009 at 5:01 PM, Peter Braun <xenware@gmail.com> wrote:
> take a look at this document
>
> http://communities.vmware.com/servlet/JiveServlet/previewBody/10964-102-2-9835/ha_san_how-to.pdf;jsessionid=12BC07515EE5DB61AEFA58BA9251110D
>
> Its describing building highly-available SAN iSCSI storage on ubuntu.

It's great to see a how-to for this kind of setup, but here are two comments from me:
- using GFS2 with fence "manual" is, AFAIK, asking for trouble. One of the reasons I chose OCFS2 is that it can work without fencing (at least better than GFS), and it can automatically reboot itself when it detects "something wrong that can cause inconsistency".
- if you're sharing via an iSCSI target, it might be easier (and also much better for performance) to simply use LVM-backed LUNs instead of file-backed LUNs. Then you only need cLVM.

If you've already implemented this setup and have good results (e.g. it works fine when you yank some power cords or ethernet cables), please share your experience :D

--
Fajar
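To make the LVM-backed-LUN suggestion concrete, here is what it might look like with the iSCSI Enterprise Target (IET) that was common on Linux at the time; the VG, LV and IQN names are invented for illustration:

```
# On the storage box, carve one logical volume per guest out of a VG:
#   lvcreate -L 10G -n vm01 vg_san

# /etc/ietd.conf -- export the LV itself, not a file on a filesystem:
Target iqn.2009-11.example:san.vm01
    Lun 0 Path=/dev/vg_san/vm01,Type=blockio
```

Type=blockio does direct block I/O instead of going through a file, and since no filesystem sits between the LV and the LUN, no cluster FS is needed on the storage node.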
Hi,

it's not my document. Actually I've tried to install an H/A SAN according to this document, but without success.

I am looking for an open-source solution for an H/A SAN, and this is close to my goal.

The basics:

1) 2 machines with some HDD space synchronized between them with DRBD.

2) Now I am a little bit confused about what I should use above the DRBD block device:
- some cluster FS like OCFS2 or GFS?
- LVM?

3) Create files with dd on the cluster FS and export them with iSCSI? Or create LVM partitions and export them with iSCSI?

4) How to make the iSCSI target highly available?
- configure iSCSI on a virtual IP/another IP and run it as an HA service?
- configure separate iSCSI targets on both SAN hosts and connect them to the Xen server as multipath?

5) Heartbeat configuration.

VMs with iSCSI HDD space on the SAN should survive a reboot/non-availability of one SAN host without interruption, and without noticing that the SAN is degraded.

Is that even possible?

Thanks for your comments

Peter Braun

2009/11/26 Fajar A. Nugraha <fajar@fajar.net>:
> - using GFS2 with fence "manual" is, AFAIK, asking for trouble. [...]
> - if you're sharing using iscsi target, it might be easier (and also
> much better performance) to simply use LVM-backed LUNs instead of
> file-backed LUNs. Thus you only need clvm.
On Thu, Nov 26, 2009 at 4:26 AM, Peter Braun <xenware@gmail.com> wrote:
> Am looking for opensource solution of H/A SAN - and this is close to my goal.

There are lots of options, some simply bad; most are good for some things and bad for other things.

> 1) 2 machines with some HDD space synchronized between them with DRBD.

ok

> 2) Now am little bit confused what shall I use above the DRBD block device?
> - some cluster FS like OCFS2 or GFS?
> - LVM?

First question: what would be the clients, and how many?

If the client(s) would store files, you need a filesystem. If the client(s) are Xen boxes, the best would be block devices, shared via iSCSI (or AoE, or FC, or nbd...) and split with LVM. Or split with LVM and then shared.

If you want a single client, any filesystem would do, and for block devices plain LVM would be enough. If you want several clients, you need a cluster filesystem or cLVM. Or you could split with plain LVM and share each LV via iSCSI.

Pruning some branches off the decision tree, you get two main options:

1: two storage boxes, synced with DRBD, split with (plain) LVM, share each LV via iSCSI.
pros:
- easy to administer
- no 'clustering' software (apart from DRBD)
- any number of clients
cons:
- you can grow by adding more storage pairs, but a single LV can't span two pairs
- no easy way to move LVs between boxes
- if you're not careful you can get 'hot spots' of unbalanced load

2: any number of 'pairs', each synced with DRBD. No LVM on the storage boxes; share the full block device via iSCSI. Set up cLVM on the clients, using each 'pair' as a PV.
pros:
- very scalable
- lots of flexibility; the clients see a single continuous expanse of storage split into LVs
cons:
- somewhat more complex to set up well
- cLVM has some limitations: no pvmove, no snapshots (maybe to be fixed soon?)

> 3) create files with DD on cluster FS and export them with iSCSI?
> create LVM partitions and export them with iSCSI?

If you're exporting image files with iSCSI, then the only direct client of those files is iSCSI itself. No need for a cluster FS; any plain FS would do. And of course, LVM is more efficient than any FS.

> 4) how to make iSCSI target highly available?
> - configure iSCSI on virtual IP/another IP and run it as HA service
> - configure separate iSCSI targets on both SAN hosts and
> connect it to Xen server as multipath?

No experience here. I'd guess multipath is nicer, but any delay in DRBD replication would be visible as read inconsistencies. A migratable IP number might be safer.

> 5) hearbeat configuration

Yes, this can be a chore.

> VM machines with iSCSI HDD space on SAN should survive reboot/non
> availability of one SAN hosts without interruption nor noticing that
> SAN is degraded.
>
> Is that even possible?

Yes, but plan to spend lots of time to get it right.

--
Javier
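The "doesn't wait indefinitely" tuning mentioned earlier in the thread is, on the initiator side, mostly a matter of a few open-iscsi settings in /etc/iscsi/iscsid.conf. The values below are illustrative, not tested recommendations:

```
# /etc/iscsi/iscsid.conf -- illustrative values only
# Fail outstanding I/O after 15s instead of the 120s default, so the layer
# above (md RAID1 or dm-multipath) can fail over to the surviving path:
node.session.timeo.replacement_timeout = 15
# Ping the target with NOP-Out every 5s; declare the connection dead if a
# ping goes unanswered for 5s:
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
```

With the defaults, a dead target can stall I/O for two minutes, which most guests will not survive gracefully.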
Hi Javier,

thank you very much for your thoughts.

The SAN should provide HDD space for several hundred VMs on several Xen boxes over iSCSI.

This should give us:
- easy offline migration of VMs between hosts (just stop a VM on xen01 and start it on xen04, a matter of seconds)
- higher utilization of the expensive 15k hard drives
- possible live migration in the future?
- higher speed than the current local RAID1 drives
- easy expandability
- VM HDD snapshots on the fly?

Technology summary:
- 2 SAN hosts
- DRBD
- cLVM
- highly available iSCSI target
- Heartbeat

What's the clear difference between LVM and cLVM?

Br
Peter

2009/11/26 Javier Guerra <javier@guerrag.com>:
> pruning some branches off the decision tree, you get two main options:
> [...]
> 2: any number of 'pairs', each synced with DRBD. no LVM, share the
> full block via iSCSI. set cLVM on the clients, using each 'pair' as a
> PV.
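On the LVM vs cLVM question: cLVM is the same LVM tooling with cluster-wide locking of volume group metadata, so several hosts can safely activate and modify LVs in one shared VG. It is switched on in /etc/lvm/lvm.conf and needs clvmd plus a running cluster manager on every node (a sketch, not a full setup):

```
# /etc/lvm/lvm.conf on every node sharing the VG
global {
    # 1 = local file locking (plain LVM, safe for one host only)
    # 3 = clustered locking via clvmd: lvcreate/lvextend/... are
    #     coordinated across all nodes so metadata can't diverge
    locking_type = 3
}
```

With plain LVM (locking_type = 1), two hosts changing the same VG at once can silently corrupt its metadata; cLVM exists to prevent exactly that.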
On Thu, Nov 26, 2009 at 9:22 PM, Javier Guerra <javier@guerrag.com> wrote:
>> 4) how to make iSCSI target highly available?
>> - configure iSCSI on virtual IP/another IP and run it as HA service
>> - configure separate iSCSI targets on both SAN hosts and
>> connect it to Xen server as multipath?
>
> no experience here. i'd guess multipath is nicer; but any delay in
> DRBD replication would be visible as read inconsistencies. a
> migratable IP number might be safer.

DRBD in dual-primary mode (which is necessary for a cluster file system or Xen live migration) will ensure the data on both nodes stays consistent, at the cost of performance.

Note that if you use iSCSI + multipath, you need to configure iSCSI not to retry indefinitely in case one of the storage nodes goes down.

--
Fajar
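A sketch of what dual-primary looks like in a DRBD 8.x resource definition; the resource name is hypothetical, and the split-brain policies shown are examples rather than advice:

```
resource r0 {
    protocol C;                 # required for dual-primary: every write is
                                # acknowledged by both nodes' disks
    net {
        allow-two-primaries;    # both nodes may be Primary at once
        # with two primaries, explicit split-brain policies matter:
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    startup {
        become-primary-on both; # promote both nodes at boot
    }
}
```

Even with this in place, two hosts may only touch the device simultaneously through a cluster FS (OCFS2/GFS2) or cLVM on top; dual-primary DRBD by itself does not make concurrent access safe.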