Displaying 4 results from an estimated 4 matches for "nttcp".
2006 Mar 30
4
samba 3 performance issues
...512 MB of RAM and a 3ware
controller card using SATA disks in a RAID 5 configuration.
We have a gigabit network and are using Intel Gigabit ethernet cards
(e1000).
When copying large files to the Samba shares on the system, the transfer
rate maxes out near 100 Mbit/s. We tested with nttcp and were able to get
speeds of nearly 800 Mbit/s, so I think it is safe to conclude this is not
a network issue.
Various tools like top, xosview and mpstat convinced us that we are
CPU-bound. When we stop the Samba file transfer, the CPU idle time
exceeds 90%. We are convinced that our CPU...
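The poster's exact nttcp invocation isn't shown. A minimal sketch of the kind of raw-TCP baseline test described above might look like the following; the hostname `server` is a placeholder, and option names differ between nttcp versions, so check nttcp(1) on your system:

```shell
# On the receiving machine, start nttcp in listening mode
# (alternatively, register nttcp in inetd.conf):
nttcp -i &

# On the sending machine: push test buffers to the receiver and
# report throughput in Mbit/s. -n sets the number of buffers sent;
# 'server' is a placeholder for the receiver's hostname.
nttcp -n 8192 server
```

Comparing this raw-TCP figure against the Samba copy rate is what let the poster rule out the network and point at the CPU.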
2006 Apr 25
3
Freebsd Stable 6.x ipsec slower than with 4.9
Hello List,
I have two dualcore Athlon 64 4800+ systems. Initially I was running 4.9
on both of them and was able to get 54 Mbit/s through directly connected
Realtek 10/100 cards, as measured by nttcp.
I put STABLE on one of the systems and now can only get 37 Mbit/s, as
measured by nttcp, when going through an ipsec tunnel.
Eliminating the tunnel, I get 94 Mbit/s.
Ideas as to why this is happening?
Also, with 6.x I get some failure messages in dmesg:
Copyright (c) 1992-2006 The FreeBSD Project.
Cop...
2004 Feb 19
0
Gigabit Ethernet and samba network bandwidth
...ed by ordinary 100 Megabit link to the same server doing a copy
of a big file from server share.
The problem is that the copy speed in that case is very low.
It's about 4-5 Megabytes per second (while 100 Megabits is expected).
At the same time I've got about 90 Megabits data transfer with NTTcp
(a Linux network benchmark program). Ftp transfers are also going much faster.
On the other hand, when I switched the server link from gigabit to a
fastEthernet port of the same switch, I got a Samba transfer speed of about
9-10 Megabytes per second, which looks like full fastEthernet utilisation.
O...
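The poster compares Samba, NTTcp, and FTP transfer rates. One way to get a comparable Samba-only number, independent of any desktop file manager, is smbclient, which prints an average transfer rate after a `put` or `get`. This is a sketch only; `server`, `share`, `user`, and `bigfile` are placeholders:

```shell
# Create a test file of a known size (512 MB of zeroes).
dd if=/dev/zero of=bigfile bs=1M count=512

# Upload it to the share; smbclient reports the achieved rate
# (e.g. "putting file bigfile ... (NNNNN.N kb/s)") when done.
smbclient //server/share -U user -c 'put bigfile'
```

Dividing the reported kb/s by ~125 gives Mbit/s, which makes it directly comparable to the NTTcp figures quoted in these posts.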
2006 Apr 18
3
FreeBSD 4.9 losing mbufs!!!
...dialup actiontec dualpc modems. We want to use FreeBSD systems
rather than put in Cisco equipment, which is what we have done for
other large customers.
The problem:
I have been testing between an Athlon 64 3000+ (client) and an Athlon
64 X2 4800+ (server) across a dedicated 100 Mbit LAN. When I use nttcp,
which is a round-trip TCP test, across the gre/vpn, the client system
(which goes to 0 percent idle) eventually stops responding on the
network stack. In trying to track this down, I find that
net.inet.ip.intr_queue_maxlen, which is normally 50, has been reached (I
added a sysctl to be able to l...
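For anyone hitting the same queue limit: `net.inet.ip.intr_queue_maxlen` and its companion counter `net.inet.ip.intr_queue_drops` are standard FreeBSD sysctls, so the state the poster describes can be inspected and the limit raised without a custom sysctl. A sketch (the value 512 is an arbitrary example, not a recommendation):

```shell
# Current IP input queue limit (default 50 on these releases).
sysctl net.inet.ip.intr_queue_maxlen

# How many packets have been dropped because the queue was full;
# a nonzero, growing value indicates the stack is falling behind.
sysctl net.inet.ip.intr_queue_drops

# Raise the limit (example value; tune to your workload).
sysctl net.inet.ip.intr_queue_maxlen=512
```

Raising the queue length only buys headroom, though; if the client really sits at 0 percent idle, the CPU cannot drain the queue and the drops will resume at any finite limit.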