Ever since 6.1 I've seen fluctuations in the performance of
the em (Intel(R) PRO/1000 Gigabit Ethernet).

     motherboard              OBN (On Board NIC)
     ----------------         ------------------
  1- Intel SE7501WV2S         Intel 82546EB::2.1
  2- Intel SE7320VP2D2        Intel 82541
  3- Sun Fire X4100 Server    Intel(R) PRO/1000

test 1: writing to a NetApp filer via NFS/UDP
                 FreeBSD     Linux
                    MegaBytes/sec
  1- Average:      18.48     32.61
  2- Average:      15.69     35.72
  3- Average:      16.61     29.69
(interestingly, doing NFS/TCP instead of NFS/UDP shows an increase in speed
of around 60% on FreeBSD but none on Linux)

test 2: iperf using 1 as server:
                 FreeBSD(*)  Linux
                     Mbits/sec
  1-                926       905   (this machine was busy)
  2-                545       798
  3-                910       912
  *: did a 'sysctl net.inet.tcp.sendspace=65536'

So, it seems to me something is not that good in the UDP department, but
I can't find what to tweak.

Any help?

        danny
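For reference, a rough sketch of how the two tests above can be reproduced.
The filer export, mount point, and host names are placeholders (not from the
thread), and mount_nfs option spellings vary slightly between releases:

    # test 1: sequential write to the filer, first over UDP (the default
    # transport here), then over TCP (-T), both with 32KB read/write sizes
    mount_nfs -3 -r 32768 -w 32768 filer:/vol/test /mnt
    dd if=/dev/zero of=/mnt/bigfile bs=32k count=32768    # ~1GB write
    umount /mnt
    mount_nfs -3 -T -r 32768 -w 32768 filer:/vol/test /mnt
    dd if=/dev/zero of=/mnt/bigfile bs=32k count=32768

    # test 2: raw TCP throughput with iperf, machine 1 as the server
    iperf -s                                  # on the server
    sysctl net.inet.tcp.sendspace=65536       # on the FreeBSD side (the '*' note)
    iperf -c server1 -t 30                    # on the client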
> test 1: writing to a NetApp filer via NFS/UDP
>                  FreeBSD     Linux
>                     MegaBytes/sec
>   1- Average:      18.48     32.61
>   2- Average:      15.69     35.72
>   3- Average:      16.61     29.69
> (interestingly, doing NFS/TCP instead of NFS/UDP shows an increase in speed
> of around 60% on FreeBSD but none on Linux)

I've always used TCP when doing NFS between my (former) FreeBSD 5.3
fileserver and my FreeBSD 6.x webservers. I increased the read and write
packet sizes (rsize/wsize) to 32768 and got the best performance with that
size over TCP. Using UDP sometimes gave me "server not responding" errors,
which never appeared with TCP. So stick to TCP.

regards
Claus
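A minimal sketch of the mount Claus describes (NFSv3 over TCP with 32KB
read/write sizes); the server path and mount point are placeholders:

    mount_nfs -3 -T -r 32768 -w 32768 fileserver:/export /usr/local/www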
Hi,

1) Have you made sure there are no NFS rexmits reported by 'nfsstat -c'?
   (A quick example follows this message.)

2) I haven't run throughput tests lately, but when I tested NFS/UDP
   throughput a few months ago, I routinely got over 70MB/s sequential read
   throughput and over 80MB/s sequential write throughput (against filers),
   using 32KB block sizes. -current improves upon this significantly, by
   about 25%-30%.

Send me private e-mail and we can discuss NFS client tunings.

mohan

--- Danny Braniss <danny@cs.huji.ac.il> wrote:
> Ever since 6.1 I've seen fluctuations in the performance of
> the em (Intel(R) PRO/1000 Gigabit Ethernet).
> [...]
> test 1: writing to a NetApp filer via NFS/UDP
>                  FreeBSD     Linux
>                     MegaBytes/sec
>   1- Average:      18.48     32.61
>   2- Average:      15.69     35.72
>   3- Average:      16.61     29.69
> [...]
> So, it seems to me something is not that good in the UDP department, but
> I can't find what to tweak.
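Regarding the rexmit check in point 1, a quick way to look (counter names
differ a little between releases):

    nfsstat -c       # client-side RPC counters; under 'Rpc Info:' watch the
                     # Retries and TimedOut columns
    nfsstat -w 1     # short per-second summary while the write test runs

If Retries keeps climbing during the UDP run, requests (or replies) are being
dropped and every retransmission costs a full RPC round trip.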
On 8/30/06, Danny Braniss <danny@cs.huji.ac.il> wrote:
> Ever since 6.1 I've seen fluctuations in the performance of
> the em (Intel(R) PRO/1000 Gigabit Ethernet).
> [...]
> So, it seems to me something is not that good in the UDP department, but
> I can't find what to tweak.

Have discussed this some internally; the best idea I've heard is that
UDP is not giving us the interrupt rate that TCP would, so we end up
not cleaning up as often, and thus descriptors might not be as quickly
available. It's just speculation at this point.

Try this: the default is only 256 descriptors; try going for the MAX,
which is 4K.

Cheers,

Jack
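A sketch of bumping the descriptor counts. Newer em drivers expose loader
tunables for this; on older trees the ring sizes are compile-time settings in
the driver source, so treat the tunable names below as an assumption to check
against your if_em version:

    # /boot/loader.conf -- takes effect on the next boot
    hw.em.rxd="4096"
    hw.em.txd="4096"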
> > Any help?
> >
> >         danny
>
> Have discussed this some internally; the best idea I've heard is that
> UDP is not giving us the interrupt rate that TCP would, so we end up
> not cleaning up as often, and thus descriptors might not be as quickly
> available. It's just speculation at this point.
>
> Try this: the default is only 256 descriptors; try going for the MAX,
> which is 4K.

I've run some UDP performance tests with FreeBSD 6-STABLE and the results
were poor; the best results I got were with 4.11. Has anybody else run
tests like that?

--
Att.,

Marcelo Gardini
NIC .br
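For a raw UDP datapoint that takes NFS out of the picture, iperf can also be
run in UDP mode; the host name and bandwidth target below are placeholders:

    iperf -s -u                           # on the receiver
    iperf -c receiver -u -b 900M -t 30    # on the sender; check the reported
                                          # datagram loss at the end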
Jack Vogel wrote:
> On 8/30/06, Danny Braniss <danny@cs.huji.ac.il> wrote:
>> Ever since 6.1 I've seen fluctuations in the performance of
>> the em (Intel(R) PRO/1000 Gigabit Ethernet).
>> [...]
>
> Have discussed this some internally; the best idea I've heard is that
> UDP is not giving us the interrupt rate that TCP would, so we end up
> not cleaning up as often, and thus descriptors might not be as quickly
> available. It's just speculation at this point.

If a high interrupt rate is a problem and your NIC+driver supports it,
then try enabling polling(4) as well. This has helped me for bulk transfers
on slower boxes, but I have noticed problems with ALTQ/dummynet and other
highly realtime-dependent networking code. YMMV. More info in man 4 polling.

I think recent Linux kernels/drivers have this implemented so that it is
enabled dynamically under high load. However, I only skimmed the documents
and I'm not a Linux expert, so I may be wrong on that.

/Junics

> Try this: the default is only 256 descriptors; try going for the MAX,
> which is 4K.
>
> Cheers,
>
> Jack
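Regarding the polling(4) suggestion above, a sketch of what enabling it on
6.x typically involves, assuming the NIC driver supports polling (em does
when the kernel is built with the option); the interface name is a
placeholder:

    # kernel configuration (requires a rebuild)
    options DEVICE_POLLING
    options HZ=1000

    # then, per interface at runtime
    ifconfig em0 polling      # enable polling on this interface
    ifconfig em0 -polling     # and to turn it back off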
> Jack Vogel wrote:
> > On 8/30/06, Danny Braniss <danny@cs.huji.ac.il> wrote:
> >> Ever since 6.1 I've seen fluctuations in the performance of
> >> the em (Intel(R) PRO/1000 Gigabit Ethernet).
> >> [...]
> >
> > Have discussed this some internally; the best idea I've heard is that
> > UDP is not giving us the interrupt rate that TCP would, so we end up
> > not cleaning up as often, and thus descriptors might not be as quickly
> > available. It's just speculation at this point.
>
> If a high interrupt rate is a problem and your NIC+driver supports it,
> then try enabling polling(4) as well. This has helped me for bulk transfers
> on slower boxes, but I have noticed problems with ALTQ/dummynet and other
> highly realtime-dependent networking code. YMMV. More info in man 4 polling.

As far as I know, polling only works on UP machines. Besides, TCP
performance here is much better than UDP, which goes against basic
instincts: the packets arriving at the NIC get processed (interrupt) before
you can tell whether they are IP/TCP/UDP, so the interrupt latency should
be the same for all.
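One way to put the interrupt-rate theory to the test is to watch the em
interrupt counters during a UDP run and a TCP run and compare, e.g.:

    vmstat -i | grep em      # cumulative interrupt counts per device
    systat -vmstat 1         # live per-second interrupt rates during the test

If the UDP run really generates far fewer interrupts per second than the TCP
run, that would support the descriptor-cleanup speculation above; similar
rates would undercut it.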