Hi all.

I am currently working for a hosting provider in an environment of 100+ Linux hosts. We have HA solutions for www and mail; for storage we mainly use NFS at the moment. We are also using DRBD, Heartbeat and Corosync.

I am now gathering information to build a cluster with:
- two virtualization nodes (active master and passive slave);
- two storage nodes (holding the VM files) used by the virtualization nodes above (also active/passive).

For virtualization I am considering OpenVZ or KVM; for storage, NFS or iSCSI. Could you please share your experiences with these technologies? Which one would you use, and why? Are there any good alternatives on CentOS?

Thanks for the info,
Rafal.
> I am now gathering info to make a cluster with:
> - two virtualization nodes (active master and passive slave);
> - two storage nodes (for vm files) used by mentioned virtualization
> nodes (also active/passive).
> [...]

I mainly go with Xen as a virtualization platform, but KVM will work as well, assuming your hardware supports it.

For a storage platform, I'm assuming you are going to use servers whose disks are exported as either NFS or iSCSI. If you are going this route, I would suggest spending the money on a redundant storage array (one with redundant heads, power supplies, etc.) that serves NFS, as I have found that the easiest to deal with for migrations and everything else. If you can't do that, I would use servers with enough disks to make a decent array, set up DRBD in master/slave, and export via NFS to your virtualization hosts (a rough sketch follows below).

If money is really tight, you could set up just two servers that act as both virtualization hosts and storage, in an active/active two-node cluster using master/master DRBD + GFS. Be warned that you will lose quite a bit of performance to the overhead of the cluster versus a dedicated, purpose-built storage array... but we've been running this for a while without issue in some areas.

-Tait
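As a rough illustration of the master/slave DRBD + NFS layout described above (the hostnames, disks, addresses and export path are placeholders, not taken from the thread):

    # /etc/drbd.d/r0.res  (DRBD 8.3 syntax)
    resource r0 {
      protocol C;
      on storage1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.1:7788;
        meta-disk internal;
      }
      on storage2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.2:7788;
        meta-disk internal;
      }
    }

    # On whichever storage node is currently primary, make a filesystem on
    # /dev/drbd0, mount it (e.g. on /srv/vmstore) and export it to the
    # virtualization hosts:
    # /etc/exports
    /srv/vmstore  192.168.10.0/24(rw,sync,no_root_squash)

Heartbeat or Pacemaker would then typically move the DRBD primary role, the mount, the NFS export and a floating IP together on failover, so the virtualization hosts always mount the same address.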
On 01/10/2012 02:59 PM, Rafał Radecki wrote:
> For virtualization I am thinking to use OpenVZ or KVM. For storage NFS
> or iSCSI. Could you please share your experiences with these
> technologies? Which one would you use and why? Are there any good
> alternatives in CentOS?

If you plan to use DRBD, do you really need an external SAN? If not, this might be good:

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial

--
Digimer
E-Mail: digimer at alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin: http://nodeassassin.org
"omg my singularity battery is dead again. stupid hawking radiation." - epitron
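For a two-node design like the one in that tutorial, DRBD runs in dual-primary (master/master) mode so both KVM hosts can reach the storage at the same time. Roughly, the extra options in the resource definition look like this (DRBD 8.3 option names; this is only a sketch and would sit alongside a cluster filesystem such as GFS2 or clustered LVM, plus proper fencing):

    resource r0 {
      # the usual "on <hostname> { ... }" sections go here, as in any resource
      net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      startup {
        become-primary-on both;
      }
    }

Without working fencing (STONITH), a dual-primary setup risks split-brain and data corruption, which is why fencing is treated as mandatory in clusters like this.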
On Jan 10, 2012, at 2:59 PM, Rafał Radecki <radecki.rafal at gmail.com> wrote:
> For virtualization I am thinking to use OpenVZ or KVM. For storage NFS
> or iSCSI. Could you please share your experiences with these
> technologies? Which one would you use and why? Are there any good
> alternatives in CentOS?

For Linux virtualization on a scale greater than a couple of hosts, I'd buy VMware and get a good SAN box with redundancy, say EMC, 3Par or NetApp, or one of the middle tier like EqualLogic, LeftHand or Compellent.

Otherwise, a Xen cluster with an NFS store for the VM files (ease of management) and iSCSI for their data partitions (performance), using DRBD for fault tolerance.

-Ross
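If the iSCSI piece is built on plain CentOS servers rather than a SAN box, the stock scsi-target-utils (tgtd) package can export a DRBD-backed device as a target. A minimal sketch, with the IQN, backing device and initiator network purely as placeholders:

    # /etc/tgt/targets.conf
    <target iqn.2012-01.local.storage:vmdata>
        backing-store /dev/drbd1
        initiator-address 192.168.10.0/24
    </target>

    # reload the target definitions:
    tgt-admin --update ALL

The VM hosts would then log in with iscsiadm (from iscsi-initiator-utils) and put their data partitions on the resulting LUN.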