search for: 200mbps

Displaying 11 results from an estimated 11 matches for "200mbps".

2005 Feb 10
1
One "200Mbps" virtual link between 2 ethernet adaptators of 2 linux boxes.
Hi,

  -------                 -------
  |  B  |eth0---------eth0|  C  |
  |     |eth1---------eth1|     |
  -------                 -------

In an attempt to increase the speed and/or reliability of a link between two Linux machines (for example, in the case of a wireless connection), I read that there is more than one solution, for example the old eql driver, bonding
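A minimal sketch of the bonding approach being asked about, using modern iproute2 syntax and the interface names from the diagram (on the 2.4/2.6 kernels of that era the same thing was done with ifenslave; the 10.0.0.1 address is a hypothetical example, mirrored on the peer):

  # Load the bonding driver and create a round-robin bond; balance-rr
  # stripes packets across both links, which is how two 100Mbps NICs
  # can approach a single "200Mbps" virtual link.
  modprobe bonding
  ip link add bond0 type bond mode balance-rr
  ip link set eth0 down
  ip link set eth1 down
  ip link set eth0 master bond0
  ip link set eth1 master bond0
  ip link set bond0 up
  ip addr add 10.0.0.1/24 dev bond0   # hypothetical address; use .2 on the peer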
2007 Feb 19
0
Absolute Maximal Bandwidth
...) bandwidth usage maximum. As an example, I might have 200MBit/sec "agreed" bandwidth, and the ability to go up to 500MBit/sec if I wish. Anything past 200MBit/sec incurs a huge cost. Example tcc script (might contain typos):

  dev eth0 {
      ingress {
          $inpolicer = SLB ( cbs 100kB, cir 200Mbps );
          class (<$whatever>) if SLB_ok ($inpolicer);
          drop if 1; /* Drop the traffic exceeding the 200mbit rate */
      }
      egress {
          $egpolicer = SLB ( cbs 100kB, cir 200Mbps );
          class (<$ftp>) if (ip_dst == 10.1.1.1 && tcp_dport == 21 && SLB_ok ($egpolicer));
          class (<$web...
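For comparison, a hedged sketch of the same 200Mbps ingress cap expressed with plain tc policing rather than tcc (the rate and burst values mirror the script above; this is an illustration, not the poster's actual setup):

  # Attach an ingress qdisc and police all traffic to cir 200mbit / burst 100k,
  # dropping anything over the rate, like the "drop if 1" branch above.
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol ip u32 \
      match u32 0 0 at 0 \
      police rate 200mbit burst 100k drop flowid :1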
2008 Jan 06
1
DRBD NFS load issues
...eat setup on two servers running Active/Passive DRBD. The NFS servers themselves each have a single dual-core Opteron, 8GB of RAM, and 5TB of space on 16 drives behind a 3ware controller. They're connected to an HP ProCurve switch with bonded Ethernet. The sync rates between the two DRBD nodes seem to safely reach 200Mbps or better. The processors on the active NFS server run with a load of 0.2, so it seems mighty healthy. Until I do a serious backup. I have a few load-balanced web nodes and two database nodes as NFS clients. When I start backing up my database to a mounted NFS partition, a plain rsync drives the...
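One commonly tuned knob when DRBD resync and heavy live I/O compete is the resync rate cap; a hedged sketch against a DRBD 8.x-style drbd.conf, with a hypothetical resource name:

  # /etc/drbd.conf (DRBD 8.x syntax) -- resource name "r0" is hypothetical
  resource r0 {
    syncer {
      rate 25M;   # DRBD rates are bytes/s, so 25M = 25 MB/s, roughly 200Mbps
    }
  }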
2007 Feb 12
1
Page allocation failure
...oes some packet filtering, and balances the traffic through L1 and L2. Every interface is gigabit (Realtek NICs). I'm using IMQ on L1 and L2 to separate the traffic into 2 zones, international and local, with HTB for shaping. The system works fine for some time, but when the traffic hits 200Mbps, and occasionally bursts to 250-300Mbps, L1 and L2 behave strangely (packet loss > 30%, latency increased by 20ms); sometimes they even hang, leaving me with the only solution: rebooting them. I've checked the CPU usage; it stays around 80% during the highest traffic. I've exami...
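For readers unfamiliar with the setup being described, a rough sketch of IMQ-based ingress shaping, assuming an IMQ-patched kernel and iptables (the device, rates, and two-zone split are illustrative, not the poster's values):

  # Load the IMQ pseudo-device and redirect ingress traffic into it,
  # then shape the aggregate with HTB.
  modprobe imq numdevs=1
  ip link set imq0 up
  iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0
  tc qdisc add dev imq0 root handle 1: htb default 20
  tc class add dev imq0 parent 1: classid 1:1 htb rate 300mbit
  tc class add dev imq0 parent 1:1 classid 1:10 htb rate 200mbit ceil 300mbit  # international
  tc class add dev imq0 parent 1:1 classid 1:20 htb rate 100mbit ceil 300mbit  # local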
2007 Sep 23
2
nfe driver 6.2 stable
Hi, I installed the following driver: http://www.f.csce.kyushu-u.ac.jp/~shigeaki//software/freebsd-nfe.html Before, I had the nve driver, which was unstable both on this server and on a prior one, in both cases causing either a spontaneous reboot or a crash when under load. So far, touch wood, the nfe driver has stayed up and running, with almost 3 days of uptime and some stress along the way. I know the driver
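For context, loading a NIC driver module on FreeBSD typically looks like the sketch below; the module name if_nfe is an assumption based on the driver above, and the out-of-tree build may name it differently:

  # Load once for testing:
  kldload if_nfe
  # Persist across reboots by adding this line to /boot/loader.conf:
  if_nfe_load="YES"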
2009 Jul 28
1
mdadm RAID sync speed limitation?
While creating some arrays this evening, I noticed this line in my dmesg: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction. Why is there a 200MBps limitation on the mdadm sync speed? It certainly isn't a concern for my particular systems (15k U320 SCSI), but I can imagine there are much higher-end systems that actually could sync at a rate equal to or higher than the limit. Is this limit arbitrary? Based on detected system specs? Limitatio...
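The 200000 KB/sec figure in that dmesg line is the md driver's speed_limit_max sysctl, which is tunable at runtime; a quick sketch (the 500000 value is just an example):

  # Default ceiling is 200000 KB/s (~200MBps); raise it for faster hardware.
  cat /proc/sys/dev/raid/speed_limit_max
  echo 500000 > /proc/sys/dev/raid/speed_limit_max
  # or equivalently:
  sysctl -w dev.raid.speed_limit_max=500000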
2007 Feb 23
0
RTL8169S-32 card
Hello, I am building a DRBD cluster with CentOS 4.4. Both nodes have RTL8169S-32 cards, Pheenet branded. I'm getting no more than 185 Mbps with the driver selected at installation (r8169). Downloading Realtek's r1000 driver climbs to an amazing 200Mbps. I am measuring performance with iperf, host to host, over a direct Cat5e cable some 2.5ft long. Maybe disk I/O will be the bottleneck in my use case, but anyway I feel performance is well under par with these cards. Is something wrong in my measuring setup? Am I right in expecting more speed? Has an...
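For anyone reproducing the measurement, a sketch of an iperf run that would exercise such a link (iperf 2 syntax; <server-ip> is a placeholder, and the 30-second duration and 4 parallel streams are just suggestions):

  # On one host:
  iperf -s
  # On the other:
  iperf -c <server-ip> -t 30 -P 4   # 30-second run, 4 parallel TCP streams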
2002 Jun 22
7
bonding & vlan - kernel 2.4.18 (RHL7.3)
Hi, Hopefully this won't be too off-topic (I've seen both bonding & vlan mentioned on the list, but not really together). I've tried to get bonding (2 x 100Mb EEPro, but will want to try on 1000BaseT) and vlans to work together, but without luck. I can get them working fine individually (seemingly at least - I didn't try bursting on the bonded port).
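A hedged sketch of the era-appropriate combination (ifenslave plus vconfig, as shipped alongside 2.4 kernels); the VLAN id and address are illustrative, not taken from the post:

  # Bond the two EEPro ports in round-robin mode with link monitoring...
  modprobe bonding mode=0 miimon=100
  ifconfig bond0 up
  ifenslave bond0 eth0 eth1
  # ...then stack a VLAN on top of the bond device and address the VLAN.
  vconfig add bond0 100
  ifconfig bond0.100 192.168.100.1 netmask 255.255.255.0 up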
2007 Apr 18
4
[Bridge] MTU Question
I have a bridge that has gigabit interfaces. The machine in question has the fun job of being a bridge, firewall and SMB server. Both of the gigabit interfaces are connected to workstations directly via crossover cable (well, MDI-X to be exact). My question is: if I enable jumbo frames on the gigabit interfaces, will that make any difference in the overall transfer rate of the bridge? I was thinking it
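Whether it helps depends on the workload, but the experiment itself is cheap; a sketch of enabling jumbo frames on both ports and the bridge (interface names are assumptions):

  # All bridge ports and both workstation NICs must agree on the MTU;
  # on Linux the bridge device's effective MTU follows its smallest port.
  ifconfig eth0 mtu 9000
  ifconfig eth1 mtu 9000
  ifconfig br0 mtu 9000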
2007 Nov 19
15
Unexpected results using HTB qdisc
Hi All, I am using the script below to limit download rates and manage traffic for a certain IP address, and testing the results using iperf. The rate that iperf reports is much higher than the rate I have configured for the HTB qdisc. It's probably just some newbie trap that's messing things up, but I'm buggered if I can see it. The following script is run on the
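A minimal HTB sketch of the kind of script being described, limiting one IP's download rate on the LAN-facing interface (the interface, IP, and rate are placeholders, not the poster's values):

  # Shape on the interface the traffic leaves through: downloads are egress
  # of the LAN-facing interface. Shaping the wrong direction is a classic trap.
  tc qdisc add dev eth1 root handle 1: htb default 30
  tc class add dev eth1 parent 1: classid 1:10 htb rate 2mbit ceil 2mbit
  tc filter add dev eth1 parent 1: protocol ip u32 \
      match ip dst 192.168.1.50/32 flowid 1:10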
2005 Apr 15
16
Serial ATA hardware raid.
Hi everyone, I'm looking into setting up SATA hardware RAID, probably RAID 5, to use with CentOS 4. I chose hardware RAID over software RAID mostly because I like the fact that the RAID is transparent to the OS. Does anyone know of any SATA controllers that are well tested for this sort of usage? From what I can tell from googling, this is more or less where RHEL stands: Red Hat Enterprise Linux