----- Original Message -----
> Regarding Bryan Venteicher's message from 05.08.2013 02:12
> (local time):
> > Hi,
> >
> > I've ported the OpenBSD vmxnet3 ethernet driver to FreeBSD. I did a
> > lot of cleanup, bug fixes, new features, etc (+2000 new lines) along
> > the way so there is not much of a resemblance left.
> >
> > The driver is in good enough shape that I'd like additional testers. A
> > patch against -CURRENT is at [1]. Alternatively, the driver and a
> > Makefile are at [2]; this should compile at least as far back as 9.1.
> > I can look at 8-STABLE if there is interest.
> >
> > Obviously, besides reports of 'it works', I'm interested in performance
> > vs the emulated e1000 and (for those using it) the VMware Tools vmxnet3
> > driver. Hopefully it is no worse :)
>
> Hello Bryan,
>
> thanks a lot for your hard work!
>
> It seems if_vmx doesn't support jumbo frames. If I set mtu 9000, I get
> "vmx0: cannot populate Rx queue 0". I have no problems using jumbo
> frames with the VMware Tools vmxnet3 driver.
>
This could fail for two reasons: either the driver could not allocate an
mbuf cluster, or the call to bus_dmamap_load_mbuf_sg() failed. For the
former, you should check vmstat -z. For the latter, the behavior of
bus_dmamap_load_mbuf_sg() changed between 9.1 and 9.2, and I know it was
broken for a while. I don't recall exactly when I fixed it (I think
shortly after I made the original announcement). Could you retry with the
files from HEAD at [1]? Also, there are new sysctl oids
(dev.vmx.X.mbuf_load_failed and dev.vmx.X.mgetcl_failed) for these errors.
I just compiled the driver on 9.2-RC2 with the sources from HEAD and was
able to change the MTU to 9000.
[1]- http://svnweb.freebsd.org/base/head/sys/dev/vmware/vmxnet3/
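If it still fails for you with the HEAD sources, something like the
following should show which of the two cases you are hitting (assuming
the interface is vmx0):

  # ifconfig vmx0 mtu 9000 up
  # vmstat -z | grep mbuf
  # sysctl dev.vmx.0.mbuf_load_failed dev.vmx.0.mgetcl_failed

The latter oid should correspond to the cluster allocation case and the
former to the bus_dmamap_load_mbuf_sg() case.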
> I took an oldish host (4x2.8GHz Core2 [LGA775]) with recent software: ESXi
> 5.1U1 and FreeBSD-9.2-RC2.
> Two guests are connected to one MTU 9000 "VMware Software Switch".
>
I've still got a few performance things to look at. What's the sysctl
dev.vmx.X output for the if_vmx <-> if_vmx tests?
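For example (assuming the vmx unit is 0), running

  # sysctl dev.vmx.0

on both guests will dump the whole per-device stats subtree.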
> Simple iperf (standard TCP) results:
>
> vmxnet3 jumbo <-> vmxnet3 jumbo
> 5.3 Gbits/sec, load: 40-60% Sys, 0.5-2% Intr
>
> vmxnet3 <-> vmxnet3
> 1.85 Gbits/sec, load: 60-80% Sys, 0-0.8% Intr
>
>
> if_vmx <-> if_vmx
> 1.51 Gbits/sec, load: 10-45% Sys, 40-48% Intr
> !!!
> if_vmx jumbo <-> if_vmx jumbo: not possible
>
>
> if_em(e1000) <-> if_em(e1000)
> 1.23 Gbits/sec, load: 80-60% Sys, 0.5-8% Intr
>
> if_em(e1000) jumbo <-> if_em(e1000) jumbo
> 2.27 Gbits/sec, load: 40-30% Sys, 0.5-5% Intr
>
>
> if_igb(e1000e) jumbo <-> if_igb(e1000e) jumbo
> 5.03 Gbits/sec, load: 70-60% Sys, 0.5% Intr
>
> if_igb(e1000e) <-> if_igb(e1000e)
> 1.39 Gbits/sec, load: 60-80% Sys, 0.5% Intr
>
>
> if_igb(e1000e) <-> if_igb(e1000e), both hw.em.[rt]xd=4096
> 1.66 Gbits/sec, load: 65-90% Sys, 0.5% Intr
>
> if_igb(e1000e) jumbo <-> if_igb(e1000e) jumbo, both hw.em.[rt]xd=4096
> 4.81 Gbits/sec, load: 65% Sys, 0.5% Intr
>
> Conclusion:
> if_vmx performs well compared to the regular emulated NICs at standard
> MTU, but it's behind the tuned e1000e NIC emulation and can't reach
> vmxnet3 performance at the regular MTU. If one needs throughput, the
> missing jumbo frame support in if_vmx is a show stopper.
>
> e1000e is preferable over e1000, even if it is not officially selectable
> with the "FreeBSD" guest OS selection (edit the .vmx file and change
> ethernet0.virtualDev to "e1000e", and don't forget to set
> hw.em.enable_msix=0 in loader.conf, although the driver that attaches to
> the e1000e device is if_igb!)
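For reference, the two edits described above boil down to (assuming the
NIC in question is ethernet0):

  in the guest's .vmx file:
    ethernet0.virtualDev = "e1000e"

  in the guest's /boot/loader.conf:
    hw.em.enable_msix=0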
>
> Thanks,
>
> -Harry
>
>