I'm using Samba and smbfs to make directories on a Linux file server
available across a switched Gigabit network. Unfortunately, when the
shares are mounted on another Linux system with smbfs, performance is
terrible.
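For reference, the mount itself is a plain smbfs mount, roughly along
these lines (the server, share, and mount point names here are just
placeholders):
mount -t smbfs //fileserver/share /mnt/share -o username=myuser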
To test the setup, I created a 100 MB and a 650 MB file and transferred
them with FTP, smbclient, and smbfs (a mounted share). Just out of
curiosity, I also sent each file with iperf. Here's what I'm seeing
(the commands are sketched below the numbers):
iperf:
100 MB - 1.7 seconds (59 MB/s)
650 MB - 10.8 seconds (60 MB/s)
FTP:
100 MB - 2.17 seconds (47 MB/s)
650 MB - 34.9 seconds (19 MB/s)
smbclient:
100 MB - 5.2 seconds (19 MB/s)
650 MB - 35.1 seconds (18.8 MB/s)
smbfs:
100 MB - 45.4 seconds (2.5 MB/s)
650 MB - 282.6 seconds (2.4 MB/s)
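In case it matters, this is roughly how I created the files and timed
each run (share name, user, and paths are placeholders):
# create the test files
dd if=/dev/zero of=test-100mb bs=1M count=100
dd if=/dev/zero of=test-650mb bs=1M count=650
# time a transfer via smbclient
time smbclient //fileserver/share -U myuser -c "put test-650mb"
# time the same transfer over the smbfs mount
time cp test-650mb /mnt/share/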
As you can see, with iperf (which has little or no overhead) the network
is capable of about 60 MB/s. I wasn't expecting anything near that
through a file transfer protocol (though I'm not entirely sure why FTP
is so much faster with the 100 MB file than with the 650 MB one), but
smbfs is roughly eight times slower than smbclient.
Both machines run Linux. The Samba server is an Ubuntu (Debian-based)
box with a 2.6.8 kernel, while the client is a Gentoo box running a
2.6.10-rc3 (nitro2) kernel.
I have made a few adjustments to the TCP settings on each system:
echo 262144 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_max
echo 163840 > /proc/sys/net/core/rmem_default
echo 163840 > /proc/sys/net/core/wmem_default
echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_wmem
echo "49152 163840 262144" > /proc/sys/net/ipv4/tcp_mem
These adjustments did help the other transfer types (FTP especially),
but smbfs wasn't really affected at all.
Does anybody have any idea why I'm seeing such a huge difference between
the smbfs and smbclient numbers? Am I missing something obvious?