Has anyone implemented this successfully?

I am asking because we are implementing Xen on our test lab machines, each of
which holds up to three 3Com and Intel 10/100 Mbps NICs.

These servers are meant to replace MS messaging and intranet web servers that
handle up to 5000 hits per day and thousands of mails. The Dom0 probably could
not handle that kind of load over a single 100 Mbps link, and we cannot afford
to replace all the networking hardware with gigabit, at least not yet.

Any pointers perhaps?

Greetings from Mexico.

--
"It is human nature to think wisely and act in an absurd fashion."

"All the disorder in the world comes from professions practiced badly or
mediocrely."
on 7-15-2008 1:34 PM Victor Padro spake the following:
> Has anyone implemented this successfully?
>
> I am asking because we are implementing Xen on our test lab machines, each
> of which holds up to three 3Com and Intel 10/100 Mbps NICs.
>
> These servers are meant to replace MS messaging and intranet web servers
> that handle up to 5000 hits per day and thousands of mails. The Dom0
> probably could not handle that kind of load over a single 100 Mbps link,
> and we cannot afford to replace all the networking hardware with gigabit,
> at least not yet.
>
> Any pointers perhaps?
>
> Greetings from Mexico.

How fast is your incoming connection? If you have a data line from the
outside world that can saturate a 100 Mbit network card, you can afford new
cards.

--
MailScanner is like deodorant...
You hope everybody uses it, and you notice quickly if they don't!!!!
"Victor Padro" <vpadro at gmail.com> writes:> Does anyone has implemented this sucessfully?I have not used bonding with xen, but once you have a bonded interface in the Dom0 it should be trivial. setup your bonded interface as usual, then in /etc/xend-config.sxp where it says (network-script network-bridge) set it to (network-script 'network-bridge netdev=bond0') it should just work.> These servers are meant to replace MS messaging and intranet webservers > which holds up to 5000 hits per day and thousands of mails, and probably the > Dom0 could not handle this kind of setup with only one 100mbps link, and > could not afford changing all the networking hardware to gigabit, at least > not yet.100Mbps is a whole lot of bandwidth for a webserver unless you are serving video or large file downloads or something. 100Mbps is enough to choke a very powerful mailserver, nevermind exchange. I suspect that if you are using windows on Xen, disk and network I/O to and from the windows DomU will be a bigger problem than network speeds. Are you using the paravirtualized windows drivers? without them, network and disk IO is going to feel pretty slow in windows, no matter how fast the actual network or disk is.
On Tuesday 15 July 2008 22:34:56 Victor Padro wrote:
> Has anyone implemented this successfully?
>
> I am asking because we are implementing Xen on our test lab machines, each
> of which holds up to three 3Com and Intel 10/100 Mbps NICs.
>
> These servers are meant to replace MS messaging and intranet web servers
> that handle up to 5000 hits per day and thousands of mails. The Dom0
> probably could not handle that kind of load over a single 100 Mbps link,
> and we cannot afford to replace all the networking hardware with gigabit,
> at least not yet.
>
> Any pointers perhaps?

Just go the normal way. As long as you are not using VLANs on top of bonds,
the default bridge scripts should do just fine.

Before you use a bond in anything other than an active/backup configuration,
I urge you to read and understand bonding.txt from the kernel source:
http://www.mjmwired.net/kernel/Documentation/networking/bonding.txt
or just web-search for: bonding.txt

The problem is not configuring the bonding on a Linux machine, but getting
the network setup right (Etherchannel, LACP, one switch or multiple switches,
etc.) and knowing what to expect from which setup. And last but not least,
human communication between the network guys and the OS guys. Those are the
biggest problems with bonding in my experience.

-marc.

> Greetings from Mexico.
>
> --
> "It is human nature to think wisely and act in an absurd fashion."
>
> "All the disorder in the world comes from professions practiced badly or
> mediocrely."

--
Gruss / Regards,

Marc Grimme

http://www.atix.de/ http://www.open-sharedroot.org/
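To make the point about coordinating with the switch concrete, here is a
hedged sketch of the two most common mode choices from bonding.txt; the
switch-side requirements in the comments are assumptions about typical gear,
and bond0 is just an example device name:

    # /etc/modprobe.conf -- pick ONE mode and mirror it on the switch side
    # active-backup (mode 1): no switch configuration needed, failover only
    options bond0 mode=active-backup miimon=100
    # 802.3ad (LACP): needs a matching LACP/Etherchannel group on the
    # switch, and all slaves running at the same speed and duplex
    #options bond0 mode=802.3ad miimon=100 lacp_rate=slow

After a network restart, /proc/net/bonding/bond0 shows which mode, slaves
and (for 802.3ad) aggregator were actually negotiated, which is the quickest
way to see whether the switch side agrees with what you configured.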
On Tue, Jul 15, 2008 at 03:34:56PM -0500, Victor Padro wrote:
> Has anyone implemented this successfully?

Yes and no. :/

> I am asking because we are implementing Xen on our test lab machines, each
> of which holds up to three 3Com and Intel 10/100 Mbps NICs.
>
> These servers are meant to replace MS messaging and intranet web servers
> that handle up to 5000 hits per day and thousands of mails. The Dom0
> probably could not handle that kind of load over a single 100 Mbps link,
> and we cannot afford to replace all the networking hardware with gigabit,
> at least not yet.
>
> Any pointers perhaps?

This is not specific to CentOS, or to RHEL and its clones, but a general
Linux issue: combining bonding and bridging is broken and leads to loops on
the switch, unless the switch is intelligent enough to do trunking on its
side. The problem is that outgoing packets from the virtual Xen bridge are
seen by the other bonding members, and the learned MAC addresses on the Xen
bridge toggle between the VM and the outside interface. I had posted this
issue on the respective lists, but nothing happened; ideally the bridge code
would allow for static MACs.

This indirectly affects anything that uses a Linux bridge, including Xen and
most other virtualization solutions. If you google for bonding on any of them
you will find trouble reports all over.

So your options are:

a) use only active/backup type setups, which avoid the loops, or

b) use an intelligent switch that is able to trunk ports and therefore does
   not generate the loops. But then these servers cannot be PXE/DHCP booted
   anymore (for reinstalling them).

I had these issues with the 2x1Gb setup on the ATrpms servers and lost a lot
of hair over it. The increased throughput was there in the end, but maybe I'd
have preferred to keep my hair ...

--
Axel.Thimm at ATrpms.net
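If you go with option (a), a hedged sketch of what to check once the bond is
enslaved under the Xen bridge; the bridge and interface names are the usual
CentOS 5 defaults and may differ on your box:

    # confirm only one slave is active, so the switch never sees the same
    # MAC addresses appearing on two ports at once
    grep "Currently Active Slave" /proc/net/bonding/bond0

    # list the bridges xend created and which interfaces they enslave
    brctl show

    # watch the MAC table on the Xen bridge (xenbr0 here); DomU addresses
    # should stay pinned to their vif ports instead of flapping to the
    # bond/peth side
    brctl showmacs xenbr0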