Stan Hoeppner
2013-Jul-30 07:26 UTC
[Samba] SMB throughput inquiry, Jeremy, and James' bow tie
I went to the site to subscribe again and ended up watching some of Jeremy's Google interviews. I particularly enjoyed the interview with James and the bow tie lesson at the end. :)

So anyway, I recently upgraded my home network to end-to-end GbE. My clients are Windows XP SP3 w/hot fixes, and my Samba server is 3.5.6 atop vanilla kernel.org Linux 3.2.6 and Debian 6.0.6.

With FDX fast ethernet, steady SMB throughput was ~8.5MB/s; FTP and HTTP throughput were ~11.5MB/s. With GbE, steady SMB throughput is ~23MB/s, nearly a 3x improvement, making large file copies such as ISOs much speedier. However, ProFTPd and Lighttpd throughput are both a steady ~48MB/s, just over double the SMB throughput.

I've tweaked the various Windows TCP stack registry settings: WindowScaling ON, Timestamps OFF, 256KB TcpWindowSize, etc. Between two Windows machines SMB throughput is ~45MB/s. You can see from the remarks below the various smb.conf options I've tried. No tweaking thus far of either Windows or Samba has yielded any improvement at all. It seems that regardless of tweaking I'm stuck at ~23MB/s.

[global]
# max xmit = 65536
# socket options = TCP_NODELAY IPTOS_LOWDELAY
# read raw = yes
# large readwrite = yes
# aio read size = 8192
nt acl support = no
fstype = Samba
client signing = disabled
smb encrypt = disabled
# smb ports = 139
smb ports = 445

The Linux server has an Intel PRO/1000GT NIC; the clients have motherboard-embedded Realtek 8111/8169 chips, the latter being the reason I'm limited to ~50MB/s over the wire.

I run nmbd via the standard init script at startup, but I run smbd via inetd. This doesn't appear to affect throughput. I effect config changes with a kill -HUP of inetd and by killing smbd.

I have Wireshark installed on one of the Windows XP machines, though I'm a complete novice with it. I assume a packet trace may be necessary to figure out where the SMB request/reply latency is hiding.

~23MB/s is a marked improvement and I'm not intending to complain here.
It just seems rather low given the FTP/HTTP throughput. I'm wondering how much of that ~48MB/s I'm leaving on the table, and whether it could be coaxed out of Windows, smbd, the kernel, etc. with some tweaking.

I don't want to take up a bunch of anyone's time with this. If you can just tell me what information you need in order to point me in the right direction, I'll do my best to provide it with little fuss.

Thanks again for providing such an invaluable piece of open source software to the world.

--
Stan
Volker Lendecke
2013-Jul-30 09:25 UTC
[Samba] SMB throughput inquiry, Jeremy, and James' bow tie
On Tue, Jul 30, 2013 at 02:26:42AM -0500, Stan Hoeppner wrote:
> With GbE steady SMB throughput is ~23MB/s, nearly a 3x improvement,
> making large file copies such as ISOs much speedier. However ProFTPd
> and Lighttpd throughput are both a steady ~48MB/s, just over double
> the SMB throughput.
> [...]
> ~23MB/s is a marked improvement and I'm not intending to complain here.
> It just seems rather low given FTP/HTTP throughput. I'm wondering how
> much of that ~48MB/s I'm leaving on the table, that could be coaxed out
> of Windows or smbd, the kernel, etc with some tweaking.

The main question is -- does your client issue multiple requests in parallel? If not, you are effectively limited to a TCP window size of roughly 60k, because the higher level only issues requests of that size sequentially. If you have a properly multi-threaded or async copy program on the client, I think even XP would be able to do multi-issue.

With newer clients like Windows 7 the situation is even better: the SMB2 client is a lot better performance-wise than XP ever was.

With best regards,

Volker Lendecke

--
SerNet GmbH, Bahnhofsallee 1b, 37081 Göttingen
phone: +49-551-370000-0, fax: +49-551-370000-9
AG Göttingen, HRB 2816, GF: Dr. Johannes Loxen
http://www.sernet.de, mailto:kontakt at sernet.de
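[Editor's note: Volker's point can be checked with back-of-the-envelope arithmetic. A client that issues one ~60k read at a time and waits for each reply is capped at roughly request_size / round_trip_time, regardless of link speed. The RTT figures below are assumed values for illustration, not measurements from Stan's network.]

```python
# Sequential single-request SMB1 copy: throughput is bounded by
# request_size / round_trip_time, no matter how fast the link is.
request_size = 60 * 1024  # ~60k per request, per Volker's description

for rtt_ms in (0.5, 1.0, 2.5, 5.0):
    mb_s = request_size / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms} ms -> at most {mb_s:.0f} MB/s")
```

At an effective ~2.5 ms per request round trip (wire latency plus server disk and scheduling time), the cap works out to ~24.6 MB/s, strikingly close to Stan's observed ~23MB/s; pipelining multiple requests removes that cap.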
L.P.H. van Belle
2013-Jul-30 09:28 UTC
[Samba] SMB throughput inquiry, Jeremy, and James' bow tie
Hi, as a comparison. Running Ubuntu 12.04 LTS, kernel 3.2.0 (latest Ubuntu kernel), Samba 3.6.12 SerNet release.

1 x SSD, top speed ~400MB/s (real-life speed)
2 x 5400 RPM disks in RAID 1 (mdraid, aka software RAID)
Draytek 2850 with gigabit ports

Copy speed from server to PC: about 110-120MB/s (the speed I see in Windows) for large files, like 2+ GB files.
Copy speed from server to PC: about 40-80MB/s for files from 1-50 MB.
Copy speed from server to PC: about 1-20MB/s for lots of small files (like 1kB-2MB).

Tuning, Windows side: power scheme set to High Performance, disabled the Search Indexing service, and:

netsh interface tcp set global autotuninglevel=disabled

Tuning, Samba side: only this, other settings are default:

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

I suggest upgrading your Debian Samba to at least 3.6.6 from backports, or use the SerNet packages; I noticed an improvement in speed after this upgrade. In my office I'm running Samba 3.6.6 from backports on Debian; on Ubuntu I'm using the SerNet 3.6.12 packages.

Good luck,

Louis

>-----Original message-----
>From: stan at hardwarefreak.com [mailto:samba-bounces at lists.samba.org] On behalf of Stan Hoeppner
>Sent: Tuesday, 30 July 2013 9:27
>To: samba at lists.samba.org
>Subject: [Samba] SMB throughput inquiry, Jeremy, and James' bow tie
>
>[...]
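[Editor's note: the `SO_RCVBUF=131072 SO_SNDBUF=131072` suffixes in Louis's `socket options` line map to ordinary setsockopt() calls on smbd's TCP socket. The sketch below shows the equivalent calls in Python; it is an illustration of the mechanism, not Samba's actual code.]

```python
import socket

# Equivalent of Samba's
#   socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
# applied to a plain TCP socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 131072)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 131072)

# Note: Linux reports back roughly double the requested size (it reserves
# extra space for kernel bookkeeping), possibly clamped by net.core.rmem_max.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```

On modern kernels with TCP autotuning, fixing the buffer sizes like this can hurt as often as it helps; it is worth benchmarking with and without.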
Stan Hoeppner wrote:
> With FDX fast ethernet steady SMB throughput was ~8.5MB/s. FTP and HTTP
> throughput were ~11.5MB/s. With GbE steady SMB throughput is ~23MB/s,
> nearly a 3x improvement, making large file copies such as ISOs much
> speedier. However ProFTPd and Lighttpd throughput are both a steady
> ~48MB/s, just over double the SMB throughput.

Hi Stan,

I've done a lot of throughput testing on my home network. Now that you've made the jump to 1Gb, have you given thought to moving to jumbo packet sizes? I found that moving up to a 9000 byte packet size (9014 byte frame size) gave the single best throughput upgrade on WinXP. My best throughput rates on WinXP were in the 80-90MB/s range, while on Win7 that increased to 125MB/s max write throughput and 119MB/s max read throughput. The reads are ~6% slower due to the round-trip time it takes the requester to issue the next read. Without that you are unlikely to get more than 40-50MB/s.

My recent testing was over a 20Gb connection (an Intel X540 dual-interface card at each end, wired straight through, end-to-end, no intervening switches). I further optimized my test setup and wrote a test prog to help my testing:

/h> iotest
iotest [-h]|[BlockSize]; Using Defaults: Count 128 x BS 64M
R:128x64M: 8.0GB:18.28s:448.0MB/s
W:128x64M: 8.0GB:15.15s:540.7MB/s

I only got it a few months back, and haven't made much progress in getting it any faster -- I'm hitting Samba's single-threaded limits, based on the protocol's single-threaded server/user design.

When I say I optimized my test setup -- I separated network throughput testing from disk performance. They need to be tackled separately, and both are important. Note I am using 64MB transfer sizes for my file in the test, as that is about the largest optimal size for this setup. I sometimes get about the same performance with 32MB transfer sizes, but above and below that I start experiencing drop-offs:

/h> iotest 32M
R:256x32M: 8.0GB:18.59s:440.5MB/s
W:256x32M: 8.0GB:14.68s:557.9MB/s
/h> iotest 16M
R:512x16M: 8.0GB:26.58s:308.2MB/s
W:512x16M: 8.0GB:16.74s:489.2MB/s
/h> iotest 8M
R:1Kx8M: 8.0GB:24.75s:330.9MB/s
W:1Kx8M: 8.0GB:19.31s:424.1MB/s
/h> iotest 4M
R:2Kx4M: 8.0GB:27.13s:301.9MB/s
W:2Kx4M: 8.0GB:22.29s:367.5MB/s
/h> iotest 128M
R:64x128M: 8.0GB:21.00s:390.0MB/s
W:64x128M: 8.0GB:15.03s:544.7MB/s

Note -- I haven't tested with FTP or HTTP. My only other testing was with scp, which doesn't compete at all with SMB due to the encryption overhead. As a ballpark, a quick run over a 1Gb link gave the following (the output looks different because it is a different machine with a differently installed base HW; note this was recorded while I was logged in via Remote Desktop over the same connection):

> iotest
R:512+0 records in
512+0 records out
4294967296 bytes (4.3 GB) copied, 37.2361 s, 115 MB/s
W:512+0 records in
512+0 records out
4294967296 bytes (4.3 GB) copied, 36.1117 s, 119 MB/s

You'll find that switching to jumbo packets will give you a 3x-4x improvement, maybe higher or lower depending on your network cards and such. After that you also need to tune the TCP/IP stacks on the server and client (WinXP can benefit from tuning more than Win7); Linux has lots of knobs as well. Google is your friend, and I could say more, but this note is too long already... Hope that gives you some ideas.

Oh... to separate the network from disk testing, use cygwin on the client; on the server, create devices in your home directory for /dev/zero (as a source device) and /dev/null (as a target).

Cheers,
Linda
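[Editor's note: Linda's `iotest` source isn't shown; the dd-style output suggests timed sequential block writes and reads. The stand-in below mimics that behavior so the count x blocksize experiments above can be reproduced; the function name, parameters, and output format are guesses, not Linda's actual tool. Pointing `path` at a mounted SMB share measures the network path; pointing it at a local file measures the disks.]

```python
import os
import time

def iotest(path, block_size=64 * 1024 * 1024, count=128):
    """Write then read `count` blocks of `block_size` bytes, timing each pass.

    Returns (write_MB_s, read_MB_s). The read pass may be served from the
    page cache when run against a local file, so treat local read numbers
    with suspicion, as Linda notes with her /dev/zero and /dev/null trick.
    """
    buf = b"\0" * block_size
    total = block_size * count

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # include the flush-to-disk in the write time
    w = total / (time.perf_counter() - start) / 1e6

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    r = total / (time.perf_counter() - start) / 1e6

    mb = block_size // 2**20
    print(f"W:{count}x{mb}M: {total / 2**30:.1f}GB:{w:.1f}MB/s")
    print(f"R:{count}x{mb}M: {total / 2**30:.1f}GB:{r:.1f}MB/s")
    return w, r
```

A run such as `iotest("Z:/testfile", block_size=32 * 2**20, count=256)` would correspond to Linda's `iotest 32M` invocation, under the assumptions above.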