similar to: Redundant LAN routing possible?

Displaying 20 results from an estimated 40000 matches similar to: "Redundant LAN routing possible?"

2010 Jun 25
2
Virtualization as cheap redundancy option?
I'm wondering if virtualization could be used as a cheap redundancy solution for situations that can tolerate a certain amount of downtime. The current recommendation is to run some kind of replication server such as DRBD. The problem here is cost if there is more than one server (or servers running different OSes) to be backed up. I'd basically need to tell my client they need to buy say
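For context, a minimal two-node DRBD resource of the kind usually meant by such recommendations looks roughly like this (hostnames, disks and addresses below are placeholders, not from the thread):
  # /etc/drbd.d/r0.res -- sketch of a two-node synchronously replicated volume
  resource r0 {
    protocol C;                    # synchronous replication
    on nodea {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.0.1:7789;
      meta-disk internal;
    }
    on nodeb {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.0.2:7789;
      meta-disk internal;
    }
  }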
2011 Apr 12
1
Choosing network interface to send traffic through
I've got a server that was initially connected to a static WAN connection via eth0. Now I've added a second NIC, eth1, connected to a local network switch, with the intention of using it as a backup remote-access connection via a dynamic ADSL connection. The problem now is getting the IP address of the dynamic ADSL connection. I've written a script that updates another server with the
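One common way to make replies leave via the ADSL NIC is a second routing table plus a source rule (the addresses below are invented for illustration):
  # assume eth1 picked up 192.168.1.50/24 from the ADSL router at 192.168.1.1
  echo "100 adsl" >> /etc/iproute2/rt_tables
  ip route add default via 192.168.1.1 dev eth1 table adsl
  ip rule add from 192.168.1.50/32 table adsl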
2010 Jul 01
2
Update failed on filesystem-2.4.0-3.el5.x86_64
I was doing an upgrade of an existing machine from 5.4 to 5.5. Everything else worked except for this rpm: filesystem-2.4.0-3.el5.x86_64. It seems that it's trying to unpack the file into the CentOS DVD mount point: > Error unpacking rpm package filesystem-2.4.0-3.el5.x86_64 > error: unpacking of archive failed on file /media: cpio: lsetfilecon Would it be safe to manually download the
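A workaround that commonly comes up for this class of error is to move the DVD mount off the directory the filesystem package owns before retrying, for example:
  umount /media                      # get the install DVD out of /media first
  mkdir -p /mnt/dvd && mount /dev/cdrom /mnt/dvd
  yum update filesystem              # then retry just the failed package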
2011 Sep 26
1
Is gluster suitable and production ready for email/web servers?
I've been leaning towards actually deploying gluster in one of my projects for a while, and finally a probable candidate project came up. However, researching the specific use case, it seems that gluster isn't really suitable for load profiles that deal with lots of concurrent small files, e.g. http://www.techforce.com.br/news/linux_blog/glusterfs_tuning_small_files
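For reference, a bare-bones replicated gluster volume of the kind being considered would be created along these lines (server and brick names are placeholders):
  gluster volume create mailvol replica 2 server1:/export/brick1 server2:/export/brick1
  gluster volume start mailvol
  mount -t glusterfs server1:/mailvol /var/vmail     # native FUSE mount on the client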
2011 Jun 23
4
Jumbo frames problem with Realtek NICs?
I was trying to do some performance testing of using iSCSI on the host as a disk file for a guest vs the guest using the iSCSI device directly. However, in the process of trying to establish a baseline performance figure, I started increasing the MTU settings on the PCI-Express NICs with RTL8168B chips. The first bottleneck was discovering that the max MTU allowed on these is 7K instead of 9K but
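A quick way to probe what a NIC/switch path actually accepts is to raise the MTU and send unfragmentable pings of matching size (interface name and peer address are illustrative):
  ip link set dev eth0 mtu 7000
  ping -M do -s 6972 -c 3 192.168.0.2   # 6972 bytes of ICMP payload + 28 bytes of headers = 7000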
2007 Mar 27
2
network redundancy via two nics, two routers?
Hi List, I'm trying to configure two switches to provide redundancy (i.e. in case one switch goes down), and am wondering if there is a standard way to configure a CentOS box to use different gateways in a bonded interface, depending upon which physical NIC is being used. A bit more detail might help answer the "And why do you want to do that?" questions... - Switch 1,
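For reference, a plain active-backup bond on CentOS looks like the sketch below; it keeps a single gateway, so it answers the failover part but not the per-NIC-gateway part of the question (addresses are placeholders):
  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=192.168.10.5
  NETMASK=255.255.255.0
  GATEWAY=192.168.10.1
  BOOTPROTO=none
  ONBOOT=yes
  BONDING_OPTS="mode=active-backup miimon=100"
  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes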
2011 Apr 29
1
Multiple IP Addresses for a bridge
Is it possible to assign multiple IP addresses to a bridge the same way Ethernet devices can? The purpose is to accept incoming traffic for multiple public IPs. 1 physical NIC -> br0 (accepts incoming traffic for x.x.x.2 to x.x.x.5). Then 3 different virtual interfaces are connected to this bridge:
1. eth0 (x.x.x.2)
2. eth1 (x.x.x.3)
3. eth2 (x.x.x.4)
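A bridge interface does take extra addresses the same way an ethX device does, either directly or via alias files, e.g.:
  ip addr add x.x.x.2/24 dev br0
  ip addr add x.x.x.3/24 dev br0
  ip addr add x.x.x.4/24 dev br0
  # or, persistently on CentOS, alias files such as ifcfg-br0:0, ifcfg-br0:1, ...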
2011 Jun 25
3
Jumbo Frame performance or lackof?
After successfully getting a higher MTU to work on my Realtek NICs, I started testing the impact of higher MTU on file transfers, using an NFS-exported ramdisk to ramdisk. The results were unexpected: the higher the MTU on the sending NIC, the lower the file transfer speed. I tested by using time cp to copy a 1GB file (in case compression might affect the results, I dd'd the test file from the CentOS
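A repeatable version of that test, sketched with made-up mount points, would be something like:
  mount -t tmpfs -o size=1200m tmpfs /mnt/ram               # ramdisk on the server, exported via NFS
  dd if=/dev/urandom of=/mnt/ram/test.bin bs=1M count=1024  # incompressible 1GB test file
  # on the client, with the export on /mnt/nfsram and a local ramdisk on /mnt/ram:
  time cp /mnt/nfsram/test.bin /mnt/ram/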
2010 Sep 10
5
Traffic shaping on CentOS
I've been trying to do traffic shaping on one of my public servers and, after reading up, it seems like the way to do so is via tc/htb. However, most of the documentation seems at least half a decade old, with nothing newer. Furthermore, trying to get documentation on tc filters turned up a blank: man tc refers to a tc-filters (8) page, but trying to man that gives a no such page/section
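For what it's worth, the usual htb shape of such a setup is an htb root qdisc, classes with rate/ceil, and u32 filters to steer traffic into them, roughly as below (rates and the SSH example are illustrative):
  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 100mbit prio 0
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 90mbit ceil 100mbit prio 1
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 22 0xffff flowid 1:10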
2005 Aug 26
5
OT: CentOS server with 2 GbE links to 2 GbE switches
Hi all, I am trying to come up with an architecture that has some redundancy. The idea is to hook up the two GbE LAN interfaces of a CentOS server to two Gigabit Ethernet switches. In case one switch goes down, there is a redundant path (the server is redundant too). Here is the idea: [ASCII diagram trimmed: boxes for the GbE switches and the PCs]
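An active-backup bond needs no cooperation from the switches, so it is the usual fit for two independent switches; assuming the bond is already configured as bond0 (and a bonding driver recent enough to expose sysfs controls), its state and a manual failover test look like:
  cat /proc/net/bonding/bond0                              # shows link state and which slave is active
  echo eth1 > /sys/class/net/bond0/bonding/active_slave    # force a switchover to test the standby path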
2011 Jun 08
3
High system load but low cpu usage
I'm trying to figure out what's causing an average system load of 3+ to 5+ on an Intel quad core. The server has 2 KVM guests (assigned 1 core and 2 cores) that are each lightly loaded (0.1~0.4). Both guests and host are running 64-bit CentOS 5.6. Originally I suspected maybe it's I/O, but on checking there is very little I/O wait % as well. Plenty of free disk space available on all
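Load average counts tasks in uninterruptible (D) sleep as well as runnable ones, so a quick check is whether anything is stuck waiting on I/O or a hung device:
  vmstat 2 5                                   # watch the 'b' (blocked) column
  ps -eo state,pid,comm,wchan | awk '$1=="D"'  # processes in uninterruptible sleep and what they wait on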
2013 Aug 14
12
xen 4.3 - bridge with bonding under Debian Wheezy
Hi all, I have a xen 4.3 installation and would like to have a bridge-over-bond scenario:
***
eth0  eth1
  |    |
  bond0
    |
   br0
    |
vif = [ 'bridge=br0,mac=xx:xx:xx:xx:xx:xx' ]
***
With the network script in Debian Wheezy:
***
/etc/network/interfaces
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
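A fuller Wheezy-style sketch of that layout, using the newer bond-* option spelling and a placeholder address, would be:
  # /etc/network/interfaces
  auto bond0
  iface bond0 inet manual
      bond-slaves eth0 eth1
      bond-mode active-backup
      bond-miimon 100

  auto br0
  iface br0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1
      bridge_ports bond0
      bridge_stp off
      bridge_fd 0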
2005 Feb 10
1
Redundant Links Routing
I have two LANs connected by two different pipes: [ASCII diagram trimmed: two routers, one per LAN, each attached to its LAN on eth0 and joined to the other over both a VPN link (eth1 to eth1) and a T1 link (eth2 to eth2)]
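With plain static routing, the two paths can be expressed as two routes to the remote LAN with different metrics; the kernel only fails over when it sees the preferred link go down, so a routing daemon is the more robust answer (networks and gateways below are invented):
  # prefer the VPN path, fall back to the T1
  ip route add 10.2.0.0/24 via 172.16.1.2 dev eth1 metric 10
  ip route add 10.2.0.0/24 via 172.17.1.2 dev eth2 metric 20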
2011 Jun 09
4
Possible to use multiple disk to bypass I/O wait?
I'm trying to resolve an I/O problem on a CentOS 5.6 server. The process basically scans through Maildirs, checking for space usage and quota. Because there are a hundred-odd user folders and several tens of thousands of small files, this sends the I/O wait % way up. The server hits a very high load level and stops responding to other requests until the crawl is done. I am wondering if I add
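Independent of adding disks, the crawl itself can be demoted so interactive traffic keeps getting serviced; with the CFQ scheduler something like this is the usual first step (the script name is hypothetical):
  ionice -c3 nice -n 19 ./scan_maildirs.sh    # idle I/O class: only touches the disk when nothing else wants it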
2012 Aug 03
4
Urgent help on replacing /var
In a moment of epic stupidity, having run out of space on the root partition of a server due to /var chewing up the space, I added a separate drive for the purpose of mounting it as /var. To do so, I mounted the new drive as /var2, cp -R'd the contents (in hindsight I should have rsynced to preserve attributes), deleted the original /var to free up space, edited fstab and rebooted... unsurprisingly to a fubar'd
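For the record, the attribute-preserving version of that copy, plus the matching fstab line, looks like this (the device name is a placeholder):
  rsync -aAXH /var/ /var2/        # -A ACLs, -X xattrs (incl. SELinux labels), -H hardlinks
  # /etc/fstab
  /dev/sdb1   /var   ext3   defaults   1 2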
2011 Mar 21
4
mdraid on top of mdraid
Is it possible, or will there be any problems with, using mdraid on top of mdraid? Specifically, say, mdraid 1/5 on top of mdraid multipath. E.g. 4 storage machines export iSCSI targets via two different physical network switches; multipath is then used to create md block devices, and mdraid is layered on those md block devices. The purpose being the storage array surviving a physical network switch
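Mechanically, the layering would be built with mdadm in two passes, something like the sketch below (device names invented; dm-multipath is the more commonly recommended multipath layer):
  # one multipath md device per storage node's two paths
  mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc
  mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdd /dev/sde
  # then mirror across the multipath devices
  mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/md0 /dev/md1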
2011 Jun 29
8
Anyway to ensure SSH availability?
I was having problems with the same server locking up to the point I can't even get in via SSH. I've already used HTB/tc to reserve bandwidth for my SSH port, but the problem now isn't an attack on the bandwidth. So I'm trying to figure out if there's a way to ensure that SSH is given CPU and I/O priority. However, so far my reading seems to imply that it's probably not going
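The blunt-instrument approach is to raise sshd's CPU and disk priority; whether it actually helps depends on what the box is starved of (the process match assumes a stock sshd):
  renice -n -10 -p $(pgrep -x sshd)                          # higher CPU priority for all sshd processes
  for p in $(pgrep -x sshd); do ionice -c2 -n0 -p $p; done   # best-effort I/O class, highest priority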
2016 Mar 01
10
Any experiences with newer WD Red drives?
This might be slightly OT as it isn't necessarily a CentOS-related issue. I've been using WD Reds as mdraid components, which worked pretty well for non-IOPS-intensive workloads. However, the latest C7 server I built ran into problems with them on an Intel C236 board (SuperMicro X11SSH), with tons of "ata bus error write fpdma queued". Googling on it threw up old suggestions to
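The mitigations that usually get suggested for fpdma/NCQ errors are to shrink or disable the queue, or to drop the SATA link speed, e.g.:
  echo 1 > /sys/block/sda/device/queue_depth   # effectively disables NCQ for that disk
  # or on the kernel command line: libata.force=noncq  (or libata.force=3.0Gbps to slow the link)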
2008 Jan 15
19
How do you make an MGS/OSS listen on 2 NICs?
I am running the CentOS 5 distribution without adding any updates from CentOS. I am using the lustre 1.6.4.1 kernel and software. I have two NICs that run through different switches. I have the lustre options in my modprobe.conf looking like this: options lnet networks=tcp0(eth1,eth0) My MGS seems to be listening only on the first interface, however. When I try and ping the 1st interface (eth1)
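If the intent is for the node to answer on both NICs as two separate LNET networks, the usual modprobe.conf form is two tcp networks rather than one network with two interfaces (a sketch, not verified against 1.6.4.1):
  # /etc/modprobe.conf
  options lnet networks="tcp0(eth0),tcp1(eth1)"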
2010 Jun 28
3
CentOS MD RAID 1 on Openfiler iSCSI
Has anybody tried, or does anybody know, if it is possible to create an MD RAID1 device using networked iSCSI devices like those created using OpenFiler? The idea I'm thinking of here is to use two OpenFiler servers, with physical drives in RAID 1, to create iSCSI virtual devices and run CentOS guest VMs off the MD RAID 1 device. Since, theoretically, this setup would survive both a single physical drive
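Mechanically, that would look like logging in to both OpenFiler targets and mirroring the resulting block devices (portals and device names below are placeholders):
  iscsiadm -m discovery -t sendtargets -p 192.168.0.11
  iscsiadm -m discovery -t sendtargets -p 192.168.0.12
  iscsiadm -m node --login
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc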