Emanuel Strobl
2004-Nov-17 14:58 UTC
serious networking (em) performance (ggate and NFS) problem
Dear best guys,

I really love 5.3 in many ways, but here are some unbelievable transfer
rates, after I went out and bought a pair of Intel GigaBit Ethernet cards
to solve my performance problem (*laugh*):

(In short, see *** below)

Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32-bit PCI
Desktop Adapter MT) connected directly without a switch/hub, and "device
polling" compiled into a custom kernel with HZ set to 256 and
kern.polling.enabled set to "1":

LOCAL:
(/samsung is ufs2 on /dev/ad4p1, a SAMSUNG SP080N2)
test3:~#7: dd if=/dev/zero of=/samsung/testfile bs=16k
^C10524+0 records in
10524+0 records out
172425216 bytes transferred in 3.284735 secs (52492882 bytes/sec)
->                                             ^^^^^^^^ ~ 52 MB/s

NFS (udp, polling):
(/samsung is nfs on test3:/samsung, via em0, x-over, polling enabled)
test2:/#21: dd if=/dev/zero of=/samsung/testfile bs=16k
^C1858+0 records in
1857+0 records out
30425088 bytes transferred in 8.758475 secs (3473788 bytes/sec)
->                                           ^^^^^^^ ~ 3.4 MB/s

This example shows that using NFS over GigaBit Ethernet cuts performance by
a factor of 15, in words: fifteen!

GGATE with MTU 16114 and polling:
test2:/dev#28: ggatec create 10.0.0.2 /dev/ad4p1
ggate0
test2:/dev#29: mount /dev/ggate0 /samsung/
test2:/dev#30: dd if=/dev/zero of=/samsung/testfile bs=16k
^C2564+0 records in
2563+0 records out
41992192 bytes transferred in 15.908581 secs (2639594 bytes/sec)
->                                            ^^^^^^^ ~ 2.6 MB/s

GGATE without polling and MTU 16114:
test2:~#12: ggatec create 10.0.0.2 /dev/ad4p1
ggate0
test2:~#13: mount /dev/ggate0 /samsung/
test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=128k
^C1282+0 records in
1281+0 records out
167903232 bytes transferred in 11.274768 secs (14891945 bytes/sec)
->                                             ^^^^^^^^ ~ 15 MB/s

.....and with 1m blocksize:
test2:~#17: dd if=/dev/zero of=/samsung/testfile bs=1m
^C61+0 records in
60+0 records out
62914560 bytes transferred in 4.608726 secs (13651182 bytes/sec)
->                                           ^^^^^^^^ ~ 13.6 MB/s

I can't imagine why there seems to be an absolute limit of 15 MB/s that can
be transferred over the network. But it's even worse; here are two excerpts
of NFS (udp) with jumbo frames (mtu=16114):

test2:~#23: mount 10.0.0.2:/samsung /samsung/
test2:~#24: dd if=/dev/zero of=/samsung/testfile bs=1m
^C89+0 records in
88+0 records out
92274688 bytes transferred in 13.294708 secs (6940708 bytes/sec)
->                                            ^^^^^^^ ~ 7 MB/s

.....and with 64k blocksize:
test2:~#25: dd if=/dev/zero of=/samsung/testfile bs=64k
^C848+0 records in
847+0 records out
55508992 bytes transferred in 8.063415 secs (6884055 bytes/sec)

And with TCP-NFS (and jumbo frames):
test2:~#30: mount_nfs -T 10.0.0.2:/samsung /samsung/
test2:~#31: dd if=/dev/zero of=/samsung/testfile bs=64k
^C1921+0 records in
1920+0 records out
125829120 bytes transferred in 7.461226 secs (16864403 bytes/sec)
->                                            ^^^^^^^^ ~ 17 MB/s

Again NFS (udp) but with MTU 1500:
test2:~#9: mount_nfs 10.0.0.2:/samsung /samsung/
test2:~#10: dd if=/dev/zero of=/samsung/testfile bs=8k
^C12020+0 records in
12019+0 records out
98459648 bytes transferred in 10.687460 secs (9212633 bytes/sec)
->                                            ^^^^^^^ ~ 10 MB/s

And TCP-NFS with MTU 1500:
test2:~#12: mount_nfs -T 10.0.0.2:/samsung /samsung/
test2:~#13: dd if=/dev/zero of=/samsung/testfile bs=8k
^C19352+0 records in
19352+0 records out
158531584 bytes transferred in 12.093529 secs (13108794 bytes/sec)
->                                             ^^^^^^^^ ~ 13 MB/s

GGATE with default MTU of 1500, polling disabled:
test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=64k
^C971+0 records in
970+0 records out
63569920 bytes transferred in 6.274578 secs (10131346 bytes/sec)
->                                           ^^^^^^^^ ~ 10 MB/s

Conclusion:

***

- It seems that GEOM_GATE is less efficient with GigaBit (em) than NFS via
  TCP is.

- em seems to have problems with MTU greater than 1500.

- UDP seems to have performance disadvantages over TCP regarding NFS, which
  should be the other way around AFAIK.

- Polling and em (GbE) with HZ=256 is definitely no good idea; even
  10Base-2 can compete.

- NFS over TCP with an MTU of 16114 gives the maximum transfer rate for
  large files over GigaBit Ethernet, at a value of 17 MB/s, a quarter of
  what I'd expect with my test equipment.

- Overall network performance (regarding large file transfers) is horrible.

Please, if anybody has the knowledge to dig into these problems, let me
know if I can do any tests to help getting ggate and NFS useful in fast
5.3-stable environments.

Best regards,

-Harry
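For anyone reproducing these numbers: the transcripts above only show the
client side. A minimal sketch of the server-side setup they assume, with
the device, path and server address taken from the transcripts and the
client address (10.0.0.1) only assumed:

    # on test3 (10.0.0.2), the ggate server:
    #   /etc/gg.exports -- allow the client read/write access to the partition
    10.0.0.1 RW /dev/ad4p1
    #   then start the daemon:
    ggated

    # on test3, the NFS server:
    #   /etc/exports
    /samsung -maproot=root 10.0.0.1
    #   /etc/rc.conf
    nfs_server_enable="YES"
    rpcbind_enable="YES"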
Sean McNeil
2004-Nov-17 15:17 UTC
serious networking (em) performance (ggate and NFS) problem
On Wed, 2004-11-17 at 23:57 +0100, Emanuel Strobl wrote:

> Dear best guys,
>
> I really love 5.3 in many ways, but here are some unbelievable transfer
> rates, after I went out and bought a pair of Intel GigaBit Ethernet cards
> to solve my performance problem (*laugh*):

[ ... ]

I am very interested in this, as I have similar issues with the re driver.
It is horrible when operating at gigE vs. 100BT.

Have you tried plugging the machines into 100BT instead?

Cheers,
Sean
Scott Long
2004-Nov-17 15:33 UTC
serious networking (em) performance (ggate and NFS) problem
Emanuel Strobl wrote:

> Dear best guys,
>
> I really love 5.3 in many ways, but here are some unbelievable transfer
> rates, after I went out and bought a pair of Intel GigaBit Ethernet cards
> to solve my performance problem (*laugh*):

[ ... ]

> - Polling and em (GbE) with HZ=256 is definitely no good idea; even
>   10Base-2 can compete.

You should be setting HZ to 1000 or higher.

[ ... ]

> Please, if anybody has the knowledge to dig into these problems, let me
> know if I can do any tests to help getting ggate and NFS useful in fast
> 5.3-stable environments.

if_em in 5.3 has a large performance penalty in the common case due to a
programming error.  I fixed it in 6-CURRENT and 5.3-STABLE.  You might want
to try updating to the RELENG_5 branch to see if you get better results.

Scott
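Following Scott's suggestion, a rough sketch of tracking RELENG_5 with
cvsup; the supfile location, cvsup host and kernel config name (CUSTOM)
are placeholders:

    # /root/stable-supfile (copied from /usr/share/examples/cvsup/):
    #   *default host=cvsup.FreeBSD.org
    #   *default release=cvs tag=RELENG_5
    cvsup -g -L 2 /root/stable-supfile

    # rebuild and install from /usr/src:
    cd /usr/src
    make buildworld
    make buildkernel KERNCONF=CUSTOM
    make installkernel KERNCONF=CUSTOM
    # reboot, then:
    make installworld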
Chuck Swiger
2004-Nov-17 16:02 UTC
serious networking (em) performance (ggate and NFS) problem
Emanuel Strobl wrote:
[ ... ]
> Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32-bit
> PCI Desktop Adapter MT) connected directly without a switch/hub

If filesharing via NFS is your primary goal, it's reasonable to test that;
however, it would be easier to make sense of your results by testing your
network hardware at a lower level.

Since you're already running portmap/RPC, consider using spray to blast
some packets rapidly and see what kind of bandwidth you max out using that.
Or use ping with -i & -s set to reasonable values depending on whether
you're using jumbo frames or not.  If the problem is that your connection
is dropping a few packets, this will show up better here.  Using "ping -f"
is also a pretty good troubleshooter.

If you can dig up a gigabit switch with management capabilities to test
with, taking a look at the per-port statistics for errors would also be
worth doing.  A dodgy network cable can still work well enough for the
cards to have a green link light, but fail to handle high traffic properly.

[ ... ]
> - em seems to have problems with MTU greater than 1500.

Have you tried using an MTU of 3K or 7K?  I also seem to recall that there
were performance problems with em in 5.3 and a fix is being tested in
-CURRENT.  [I just saw Scott's response to the list, and your answer, so
maybe nevermind this point.]

> - UDP seems to have performance disadvantages over TCP regarding NFS,
>   which should be the other way around AFAIK.

Hmm, yeah... again, this makes me wonder whether you are dropping packets.
NFS over TCP does better than UDP does in lossy network conditions.

> - Polling and em (GbE) with HZ=256 is definitely no good idea; even
>   10Base-2 can compete.

You should be setting HZ to 1000, 2000, or so when using polling, and a
higher HZ is definitely recommended when you add in jumbo frames and GB
speeds.

-- 
-Chuck

PS: followup-to set to reduce crossposting...
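Concretely, the low-level checks Chuck suggests might look like this; the
packet counts and payload sizes are only illustrative, flood ping and
sub-second ping intervals need root, and payload sizes should stay below
the MTU in use:

    # RPC-level blast from test2 to the server:
    spray -c 100000 -l 1400 10.0.0.2

    # flood ping with near-full standard frames (1472 B payload + headers):
    ping -f -s 1472 -c 10000 10.0.0.2

    # paced large pings for the jumbo-frame case, e.g. 8 KB payloads:
    ping -i 0.01 -s 8192 -c 1000 10.0.0.2

Watch the summary lines for packet loss; any loss at these rates on a
back-to-back link points at the link layer rather than NFS or ggate.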
Mike Jakubik
2004-Nov-17 17:48 UTC
serious networking (em) performance (ggate and NFS) problem
Emanuel Strobl said:

> ->                                             ^^^^^^^^ ~ 15 MB/s
> .....and with 1m blocksize:
> test2:~#17: dd if=/dev/zero of=/samsung/testfile bs=1m
> ^C61+0 records in
> 60+0 records out
> 62914560 bytes transferred in 4.608726 secs (13651182 bytes/sec)
> ->                                           ^^^^^^^^ ~ 13.6 MB/s
>
> I can't imagine why there seems to be an absolute limit of 15 MB/s that
> can be transferred over the network.

I have two PCs connected together, using the em card.  One is FreeBSD 6
from Fri Nov 5, the other is Windows XP.  I am using the default MTU of
1500, no polling, and I get ~ 21 MB/s transfer rates via FTP.  I'm sure
this would be higher with jumbo frames.  Both computers are AMD CPUs with
VIA chipsets.

Perhaps it's your hard drive that can't keep up?
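One way to test the disk hypothesis is to measure the server's local read
path alongside the write test already shown; the file name and block size
are just examples, and the file should be larger than RAM to avoid cache
effects:

    # on test3, read back the test file and discard it:
    dd if=/samsung/testfile of=/dev/null bs=64k

    # raw read from the exported partition, bypassing the filesystem:
    dd if=/dev/ad4p1 of=/dev/null bs=64k count=16384

If both reads run well above 15 MB/s, the disk is unlikely to be the
bottleneck for the network transfers.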
Andreas Braukmann
2004-Nov-18 00:02 UTC
serious networking (em) performance (ggate and NFS) problem
--On Wednesday, 17 November 2004, 20:48 -0500, Mike Jakubik
<mikej@rogers.com> wrote:

> I have two PCs connected together, using the em card.  One is FreeBSD 6
> from Fri Nov 5, the other is Windows XP.  I am using the default MTU of
> 1500, no polling, and I get ~ 21 MB/s transfer rates via FTP.  I'm sure
> this would be higher with jumbo frames.

Probably.

> Both computers are AMD CPUs with VIA chipsets.

Which VIA chipset?  VIA did some pretty bad PCI implementations in the
past.  Once I wondered about suspiciously low transfer rates in the process
of testing 3Ware and Adaptec (2120S, 2200S) RAID controllers.  The transfer
rates maxed out at ca. 30 MByte/s.  Switching the test box's mainboard from
one with a VIA chipset to one with an AMD (MP / MPX) chipset was a great
success.

-Andreas
Pawel Jakub Dawidek
2004-Nov-18 01:43 UTC
serious networking (em) performance (ggate and NFS) problem
On Wed, Nov 17, 2004 at 11:57:41PM +0100, Emanuel Strobl wrote:
+> Dear best guys,
+>
+> I really love 5.3 in many ways, but here are some unbelievable transfer
+> rates, after I went out and bought a pair of Intel GigaBit Ethernet
+> cards to solve my performance problem (*laugh*):
[...]

I did some tests in the past with ggate and PCI64/GBit NICs and got
~38 MB/s, AFAIR.  Remember that when using 32-bit PCI you can expect
transfer rates of only about 500 Mbit/s.

Please run those tests with netperf (/usr/ports/benchmarks/netperf) and
send the results.

-- 
Pawel Jakub Dawidek                       http://www.FreeBSD.org
pjd@FreeBSD.org                           http://garage.freebsd.pl
FreeBSD committer                         Am I Evil? Yes, I Am!
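A minimal netperf run along the lines Pawel requests; the server address is
taken from the earlier transcripts, and the test lengths are the defaults:

    # on both machines, build the benchmark from ports:
    cd /usr/ports/benchmarks/netperf && make install clean

    # on test3 (10.0.0.2), start the receiver:
    netserver

    # on test2, run the default TCP stream test against the server:
    netperf -H 10.0.0.2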
Robert Watson
2004-Nov-18 04:29 UTC
serious networking (em) performance (ggate and NFS) problem
On Wed, 17 Nov 2004, Emanuel Strobl wrote:

> I really love 5.3 in many ways, but here are some unbelievable transfer
> rates, after I went out and bought a pair of Intel GigaBit Ethernet cards
> to solve my performance problem (*laugh*):

I think the first thing you want to do is to try and determine whether the
problem is a link layer problem, network layer problem, or application
(file sharing) layer problem.  Here's where I'd start looking:

(1) I'd first off check that there isn't a serious interrupt problem on the
    box, which is often triggered by ACPI problems.  Get the box to be as
    idle as possible, and then use vmstat -i or systat -vmstat to see if
    anything is spewing interrupts.

(2) Confirm that your hardware is capable of the desired rates: typically
    this involves looking at whether you have a decent card (most if_em
    cards are decent), whether it's 32-bit or 64-bit PCI, and so on.  For
    unidirectional send on 32-bit PCI, be aware that it is not possible to
    achieve gigabit performance because the PCI bus isn't fast enough, for
    example.

(3) Next, I'd use a tool like netperf (see ports collection) to establish
    three characteristics: round trip latency from user space to user space
    (UDP_RR), TCP throughput (TCP_STREAM), and large packet throughput
    (UDP_STREAM).  With decent boxes on 5.3, you should have no trouble at
    all maxing out a single gig-e with if_em, assuming all is working well
    hardware-wise and there's no software problem specific to your
    configuration.

(4) Note that router latency (and even switch latency) can have a
    substantial impact on gigabit performance, even with no packet loss, in
    part due to stuff like ethernet flow control.  You may want to put the
    two boxes back-to-back for testing purposes.

(5) Next, I'd measure CPU consumption on the end box -- in particular, use
    top -S and systat -vmstat 1 to compare the idle condition of the system
    and the system under load.

If you determine there is a link layer or IP layer problem, we can start
digging into things like the error statistics in the card, negotiation
issues, etc.  If not, you want to move up the stack to try and characterize
where it is you're hitting the performance issue.

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Principal Research Scientist, McAfee Research
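Put into commands, steps (1), (3) and (5) might look roughly like this,
assuming netserver is already running on test3 (10.0.0.2); the message and
request sizes are only examples:

    # (1) interrupt rates on an otherwise idle box:
    vmstat -i
    systat -vmstat 1

    # (3) netperf characteristics, run from test2:
    netperf -H 10.0.0.2 -t TCP_STREAM             # TCP throughput
    netperf -H 10.0.0.2 -t UDP_STREAM -- -m 1472  # large-packet UDP throughput
    netperf -H 10.0.0.2 -t UDP_RR -- -r 1,1       # request/response latency

    # (5) CPU consumption while a transfer is running:
    top -S
    systat -vmstat 1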
Mike Jakubik
2004-Nov-18 08:44 UTC
serious networking (em) performance (ggate and NFS) problem
Andreas Braukmann said:

> --On Wednesday, 17 November 2004, 20:48 -0500, Mike Jakubik
> <mikej@rogers.com> wrote:
>
>> I have two PCs connected together, using the em card.  One is FreeBSD 6
>> from Fri Nov 5, the other is Windows XP.  I am using the default MTU of
>> 1500, no polling, and I get ~ 21 MB/s transfer rates via FTP.  I'm sure
>> this would be higher with jumbo frames.
>
> Probably.
>
>> Both computers are AMD CPUs with VIA chipsets.
>
> Which VIA chipset?  VIA did some pretty bad PCI implementations in the
> past.  Once I wondered about suspiciously low transfer rates in the
> process of testing 3Ware and Adaptec (2120S, 2200S) RAID controllers.
> The transfer rates maxed out at ca. 30 MByte/s.  Switching the test
> box's mainboard from one with a VIA chipset to one with an AMD (MP /
> MPX) chipset was a great success.

The FreeBSD box is KT133A and the Windows box is KT266A.  The VIA chipsets
had bandwidth or latency (I can't remember which) issues with the PCI bus.
Perhaps you are maxing out your PCI bus, or the HDs can't keep up?
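To settle the chipset question without guessing, the host bridge and the em
card can be listed from userland on the FreeBSD box; interpreting the
vendor strings is left to the reader:

    # list PCI devices with vendor/device names; the host bridge entry
    # names the chipset, and the em0 entry shows the NIC and its revision:
    pciconf -lv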
Shunsuke SHINOMIYA
2004-Nov-19 02:18 UTC
serious networking (em) performance (ggate and NFS) problem
Hi, Jeremie, how is this?  To disable Interrupt Moderation, set
sysctl hw.em?.int_throttle_valve=0.  However, because this patch was only
just written, it is not fully tested.

> > if you suppose your computer has sufficient performance, please try to
> > disable or adjust the parameters of Interrupt Moderation of em.
>
> Nice!  It would be even better if there was a boot-time sysctl to
> configure the behaviour of this feature, or something like the ifconfig
> link0 option of the fxp(4) driver.

-- 
Shunsuke SHINOMIYA <shino@fornext.org>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: if_em.diff
Type: application/octet-stream
Size: 3191 bytes
Desc: not available
Url : http://lists.freebsd.org/pipermail/freebsd-stable/attachments/20041119/36a2e46b/if_em.obj
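For completeness, the knob described above would be used like this once the
attached if_em.diff is applied; the sysctl name comes solely from
Shunsuke's description, with "?" replaced by the unit number:

    # after rebuilding the kernel/module with the patch, for em0:
    sysctl hw.em0.int_throttle_valve=0
    # then re-run the dd/netperf tests and compare against the default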