bjquinn@seidal.com
2006-Jun-04 23:14 UTC
[Samba] Maximum samba file transfer speed on gigabit...
Ok, so maybe someone can explain this to me. I've been banging my head against the wall on this one for several weeks now, and the powers that be are starting to get a little impatient. What we've got is an old FoxPro application, with the FoxPro .dbf files stored on a Linux fileserver using Samba (Fedora 3 currently, Fedora 5 on the new test server). We're having speed problems (don't mention the fact that we should be using a real SQL server - I know, I know). So I'm thinking what I need to do is increase the speed at which the server can distribute those .dbf files across the network.

We'd been getting somewhere between 10-20 MB/s, depending on file size, etc. We've already got a gigabit network. So I'm thinking to myself, "a gigabit is 125 MB/s, so we should be going a LOT faster." Ok, so I know it's only really about 119 MB/s (darn 1000 B = 1 KB vs. 1024 B = 1 KB marketing crap). Whatever. That's a lot faster than 10-20 MB/s.

I've got a bottleneck, I tell myself. The hard drive light on the old server is blood red all the time and top reports high (~10-40%) iowait. Must be the hard drive. So we upgrade from 2x 10K RPM SATA 1.5 Gbps drives in RAID-0 to 4x 15K RPM SAS 3.0 Gbps drives in RAID-10. That should do it. Nope. No difference, no change whatsoever (that was an expensive mistake).

Then it must be the network card that's the bottleneck. So we get PCI-E gigabit NICs, I learn all about rmem and wmem and TCP window sizes, and set a bunch of those settings (rmem and wmem to 25000000, TCP window size on Windows to 262800, as well as SO_SNDBUF, SO_RCVBUF, max xmit, and read size in smb.conf to 262800). Still no change. No change! I can run 944 Mb/s or higher in iperf. Why can't I even get a FRACTION of that transferring files through Samba?

I mean, hard drive speed shouldn't be the issue - a single one of these SAS drives is supposed to sustain 90+ MB/s, and I have four of them raided together. The NICs are testing out at nearly 1 Gb/s. Is there REALLY that much overhead for Samba? Isn't there something I can do to increase the efficiency of the file transfer speeds? It doesn't seem to matter which settings I use in Samba; the best I ever get is about 22 MB/s, and it sometimes bogs down to around 12 MB/s. Assuming nothing else is the bottleneck, that's about 100-175 Mb/s, or 10-18% of the theoretical limit of gigabit ethernet.

The Windows clients never write the data received over the network to the hard drive; the application loads it into memory, which should be fairly fast, as are all the clients - 2.8+ GHz, 800 MHz FSB, 10K RPM SATA drives, etc. Besides that, these fast SATA drives ought to be able to write more than 10-15 MB/s for a file transfer anyway.

What am I missing here? Is the overhead for Samba really that significant, or is there some setting I can change, or am I overlooking something else? Thanks for your help, and maybe you guys can spare my head any more injury from the banging it has been getting over the past few weeks.

-BJ Quinn
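For reference, here is a rough sketch of the kind of tuning described above, assuming the rmem/wmem values refer to net.core.rmem_max/wmem_max; the smb.conf lines simply mirror the settings named in the post, and the file paths are illustrative:

  # Kernel socket buffer limits (applied at runtime; add to /etc/sysctl.conf to persist)
  sysctl -w net.core.rmem_max=25000000
  sysctl -w net.core.wmem_max=25000000

  # Samba side: equivalent settings in the [global] section of smb.conf
  #   socket options = TCP_NODELAY SO_SNDBUF=262800 SO_RCVBUF=262800
  #   max xmit = 262800
  #   read size = 262800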
bjquinn@seidal.com wrote:

> What am I missing here? Is the overhead for Samba really that
> significant, or is there some setting I can change, or am I
> overlooking something else?

What version of Samba is running? Is it a kind of locking problem? Have you tried using these settings in smb.conf (in the share section)?

oplocks = No
level2 oplocks = No

OR

veto oplock files = /*.dbf/

The book "Samba-3 by Example" has the following tip for WinNT/W2K/XP clients:

Set
HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
"EnableOplocks"=dword:00000000
and
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters
"UseOpportunisticLocking"=dword:00000000

What speed does a file transfer get with FTP? Have you tested your disk throughput with bonnie (or similar tools)? What speed did you get with a Windows server?

Greetings
Thomas
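To make the placement concrete, a share section carrying those settings might look roughly like this; the share name and path are made up, and only one of the two approaches (disabling oplocks outright, or vetoing them just for .dbf files) is needed:

  cat >> /etc/samba/smb.conf <<'EOF'
  [foxpro]
      path = /srv/foxpro
      read only = no
      oplocks = No
      level2 oplocks = No
      ; alternatively, instead of the two lines above:
      ; veto oplock files = /*.dbf/
  EOF
  testparm -s     # sanity-check the resulting configuration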
Adam Nielsen
2006-Jun-05 23:49 UTC
[Samba] Maximum samba file transfer speed on gigabit...
> a single one of these SAS drives is supposed to sustain 90+ MB/s, and
> I have four of them raided together.

You should be able to do a crude test by creating a large file ("dd if=/dev/urandom of=test.dat bs=1048576 count=100" will create a 100 MB test file) and then timing how long it takes to read the file back ("time dd if=test.dat of=/dev/null"). That'll tell you if your hard drives are configured properly and reading at full speed. Use a larger file for a more accurate test.

> The NICs are testing out at nearly 1Gb/s. Is there REALLY that much
> overhead for Samba?

I wouldn't think there'd be a huge overhead, but in my own experience it's certainly noticeable (as compared to, say, FTP). Don't forget that if the PC on the other end isn't capable of receiving the data at full speed, then it doesn't matter how fast the server is. You can test this by sharing the test file you created above, making it suitably large to give you time to properly test, and then copying it onto one of the other PCs via Samba. This should theoretically max out the network connection, but if your other PC sits at close to 100% CPU usage, then the bottleneck is out there, not on the server.

If you map a network drive and do it from a command prompt, you should be able to do something like "copy test.dat nul", which under DOS at least would read in the file but not write it to disk (note only one L in 'nul').

Cheers,
Adam.
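A slightly more careful version of that local disk test might look like the following sketch; the file name and size are arbitrary, and dropping the page cache assumes a reasonably recent 2.6 kernel and root privileges:

  dd if=/dev/zero of=/tmp/test.dat bs=1048576 count=1000   # write a 1 GB test file
  sync                                                     # flush it to disk
  echo 3 > /proc/sys/vm/drop_caches                        # so the read-back doesn't come from RAM
  time dd if=/tmp/test.dat of=/dev/null bs=1048576         # timed sequential read
  rm /tmp/test.dat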
bjquinn@seidal.com
2006-Jun-06 07:01 UTC
[Samba] Maximum samba file transfer speed on gigabit...
> What version of Samba is running?

Various versions of 3.0 on multiple servers.

> Is it a kind of locking problem?

Ooh, good question, I'm not sure, and I'll try your oplocks settings. What exactly am I turning off, however, if I do that? Am I turning off file locking altogether?

> What speed does a file transfer get with FTP?
> What speed did you get with a Windows server?

Ok, well, along those lines, here's another thing that I've noticed since I first posted. I had been getting ~940 Mb/s in iperf, so I didn't think it was a network or NIC-specific issue. I was using "mount -t cifs" and "rsync -a --stats --progress" to gauge my speed, which is where I was getting the <20 MB/s speed statistics. However, copying large files through Windows Explorer from the Samba share results in 55-60 MB/s. So I don't know if there's a problem with rsync, smbfs, or cifs or whatever, but it looks like actual file transfer speeds (whether on one large file or an entire directory) are pretty good. I wouldn't mind seeing closer to 100+ MB/s, but I guess at around 60 MB/s, that's a great start.

NOW the problem is that whenever I actually OPEN a file from any of the Samba servers, it opens MUCH slower than on a comparable Windows server. A large Excel file, for example, takes 15 seconds to load instead of 6 seconds when loaded from the Windows server. A given FoxPro query takes 45-55 seconds to run over the Samba share as opposed to around 10-12 seconds over the network from the Windows server. Could this be related to the oplocks stuff you were talking about, or would this point to a completely different problem? What are the downsides to turning off these oplocks settings?

> Have you tested your disk throughput with bonnie (or similar tools)?

Yes, and I'm getting at least 50-60 MB/s (probably now my bottleneck), although I've set up an SAS RAID array that ought to get much faster than that, but doesn't - however, that's a question for another mailing list!

Thanks for your help!

-BJ Quinn
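For comparison's sake, a sketch of how the Linux-side numbers above could be reproduced; the server name, share name, mount point, credentials, and file name are all placeholders:

  mkdir -p /mnt/test
  mount -t cifs //server/share /mnt/test -o username=someuser

  # Raw sequential read of one large file over the mount:
  time dd if=/mnt/test/bigfile.dbf of=/dev/null bs=1048576

  # Same file via rsync, as used for the numbers quoted above:
  time rsync -a --stats --progress /mnt/test/bigfile.dbf /tmp/

  umount /mnt/test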
bjquinn@seidal.com
2006-Jun-14 07:07 UTC
[Samba] Maximum samba file transfer speed on gigabit...
> Oplocks tell the Windows client that it can cache the requested file on
> the local machine. Should the client change the file (or another client
> want to do so), the lock must be released by the first client, or Samba
> breaks the lock after a certain time if it doesn't get the lock back.
>
> When you put the settings in the share section with the database files,
> then those settings apply only to that share.
>
> So, did this help?

No, this setting alone didn't seem to make any difference, although (as suggested by Gerald) the following settings created about a 15% speedup (down to about 55 seconds from 65 seconds on a baseline FoxPro query that we've been using to test speed):

socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536 IPTOS_LOWDELAY
use sendfile = no
lock spin time = 15
lock spin count = 30
oplocks = no
level2 oplocks = no

That's an improvement, but still nowhere near what speeds I think I ought to be getting, and still nowhere near the 10-15 seconds the same query takes if the .dbf files reside on a Windows server with similar or worse hardware.

>> Ok well along those lines, here's another thing that I've noticed since I
>> first posted. I had been getting ~940Mb/s in iperf, so I didn't think it
>> was a network or NIC specific issue. I was using "mount -t cifs" and
>> "rsync -a --stats --progress" to gauge my speed, which is where I was
>
> Sorry, I didn't understand you. Did you mount this share from a different
> Linux workstation, or did you mount a share from the Windows workstation?

From my Linux server where I'm doing these tests, I mounted a Windows share through cifs (also tried smbfs) and copied files to it from the server's hard drive. That was surprisingly slow, never above 20 MB/s, and rarely above 15 MB/s. Although that's a bit disconcerting (and maybe it has something to do with my problem), if I don't worry about mounting a Windows share and just copy files from the server to the Windows machine through Windows Explorer on the Windows machine, I get 50-60 MB/s, which is plenty fast for now, and I think the hard drive on the server is the bottleneck at this point.

>>> Have you tested your disk throughput with bonnie (or similar tools)?
>>
>> Yes, and I'm getting at least 50-60 MB/s (probably now my bottleneck),
>> although I've set up an SAS RAID array that ought to get much faster than
>> that, but doesn't - however, that's a question for another mailing list!
>
> And without a RAID array, only a single disk?

Yes, it's a 10K RPM SATA WD Raptor drive. As single disks come, they're pretty fast.

> Maybe a problem with the RAID controller or your bus system?

Very possibly, although I think it's a problem with the aic94xx driver in the kernel. Since my RAID array is actually running slightly slower than my single disk, it's probably either the driver, or possibly the controller card itself, as you suggested.

> What kind of mainboard?

Asus P5WDG2-WS

> What bus system, PCI (PCI-X should be much better for good performance
> in a gigabit environment)?

PCI-E x4 actually, for the onboard dual gigabit network card. Iperf results in plenty fast speeds (~940 Mb/s).

> How long does "time dd count=1000000 bs=1024 if=/dev/zero
> of=/tmp/testfile" take?

Regardless of file size, this test results fairly consistently in about 55 MB/s on the single drive and 45-50 MB/s on the RAID array.

-BJ Quinn
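One caveat on that dd write test: with bs=1024 and no flush, the page cache can inflate the result somewhat. A sketch of a variant that includes the flush in the timing (the file path is arbitrary):

  # Write roughly 1 GB and count the final sync in the elapsed time,
  # so the number reflects what actually reached the disk.
  time sh -c 'dd if=/dev/zero of=/tmp/testfile bs=1048576 count=1000; sync'
  rm /tmp/testfile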