Hello,

I have a small problem I can't solve with Samba on Debian Squeeze (Samba version 3.5.6). I compared the transfer speed of copying a file to RAM through a Samba mount on the same PC against copying the same file to RAM directly:

1. HDD to RAM => 105-115 MB/s
2. The same HDD directory shared with Samba and mounted locally on "/media" (mount -t smbfs): Samba to RAM => 60-67 MB/s
3. From a remote PC under Windows: ~60 MB/s

How can Samba cut the bandwidth almost in half? (105 MB/s from the HDD vs. 60 MB/s through Samba.)

I have already added the following to my Samba configuration, which raised my bandwidth from ~55 MB/s to ~60 MB/s:

**************************************
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=262144 SO_RCVBUF=262144 SO_KEEPALIVE
min receivefile size = 16348
use sendfile = true
aio read size = 643638
aio write size = 643638
aio write behind = true
dns proxy = no
**************************************

Does anyone have an idea how I can raise Samba's speed to nearly 100 MB/s? I can understand losing 10% to protocol differences, but not 40-50%. :( 10GbE is coming, and I can't even sustain 1 Gbit/s. :(

Thank you in advance for your reply!

Best regards,
Jean
On 4/8/2012 8:08 PM, Azerty Ytreza wrote:
> Hello,
>
> I have a small problem I can't solve with Samba on Debian Squeeze.
> (Samba version 3.5.6)
> I compared the transfer speed of copying a file to RAM through a Samba
> mount on the same PC against copying the same file to RAM directly.
>
> 1. HDD to RAM => 105-115 MB/s

Pure HD read speed.

> 2. The same HDD directory shared with Samba and mounted locally on
> "/media" (mount -t smbfs):
> Samba to RAM => 60-67 MB/s

This isn't a copy to RAM. You've created a loop from/to the drive through the samba server and client. Thus, if you copy from the samba share to /media, you're reading from the HD and writing to the HD. So you're getting about 120 MB/s aggregate to/from the drive. Linux buffering is likely pumping this number up a bit. Issue a sync after the copy command to flush the write to disk and you'll see a more accurate number.

> 3. From a remote PC under Windows: ~60 MB/s

Obviously GbE. This low throughput can have a number of causes, most dealing with network performance, not samba.

> How can Samba cut the bandwidth almost in half? (105 MB/s HDD and
> 60 MB/s Samba)
>
> I have already added the following, which raised my bandwidth from
> ~55 MB/s to ~60 MB/s:
> **************************************
> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=262144
> SO_RCVBUF=262144 SO_KEEPALIVE
> min receivefile size = 16348
> use sendfile = true
> aio read size = 643638
> aio write size = 643638
> aio write behind = true
> dns proxy = no
> **************************************

Are you using jumbo frames? Which NICs? Which GbE switch? Using an FTP client on the Windows PC, what is your GET transfer rate from the Samba machine? If it's less than 80 MB/s you may have a network problem. If it's over 90 MB/s you may still have some Samba tuning to do. BTW, 60 MB/s from Samba to Windows over GbE is pretty damn good.
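The sync-after-copy measurement suggested above can be made into a self-contained test. This is only a sketch: the /tmp paths and the 64 MB size are placeholders, not from the thread; point SRC/DST at the actual data disk to measure it for real.

```shell
# Time a copy *including* the flush to disk, so the number reflects
# real disk throughput rather than page-cache absorption.
SRC=/tmp/speedtest_src.bin
DST=/tmp/speedtest_dst.bin
MB=64
dd if=/dev/zero of="$SRC" bs=1M count=$MB 2>/dev/null
START=$(date +%s%N)
cp "$SRC" "$DST"
sync                      # flush dirty pages before stopping the clock
END=$(date +%s%N)
ELAPSED_MS=$(( (END - START) / 1000000 ))
echo "copied ${MB} MB in ${ELAPSED_MS} ms"
rm -f "$SRC" "$DST"
```

Without the sync, Linux will report the copy finished while most of the data is still sitting in the page cache.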
Many people can't get over 35-40 MB/s with Windows/GbE and Samba.

> Does anyone have an idea how I can raise Samba's speed to nearly
> 100 MB/s? I can understand losing 10% to protocol differences, but not
> 40-50%. :(
> 10GbE is coming, and I can't even sustain 1 Gbit/s. :(

Assuming your future 10GbE network is configured and tuned perfectly, you'll need a disk that can push over 1,000 MB/s sustained data rate to fill the 10GbE pipe. This requires either a large striped array of spinning rust (more than 14 SATA disks in RAID0) or a smaller array of fast SSDs (4 in a RAID0).

--
Stan
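Stan's RAID0 sizing can be sanity-checked with back-of-envelope arithmetic. The per-disk figure below is an assumption (roughly the sustained rate of a 2012-era SATA disk, in line with the ~90-100 MB/s Jean reports for his drives), not a number from the thread:

```shell
# Disks needed in RAID0 to keep a 10GbE pipe full.
LINK=1250        # ~10GbE payload capacity in MB/s
PER_DISK=90      # assumed sustained MB/s per SATA spindle
DISKS=$(( (LINK + PER_DISK - 1) / PER_DISK ))   # ceiling division
echo "$DISKS disks in RAID0"
```

With these assumptions the answer comes out around 14 spindles, matching Stan's estimate; slower disks (or slower inner tracks) push the count higher.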
> Pure HD read speed.

Yes, and that's what I want.

> This isn't a copy to RAM. You've created a loop from/to the drive
> through the samba server and client. Thus, if you copy from the samba
> share to /media, you're reading from the HD and writing to the HD. So
> you're getting about 120 MB/s aggregate to/from the drive. Linux
> buffering is likely pumping this number up a bit. Issue a sync after
> the copy command to flush the write to disk and you'll see a more
> accurate number.

Yes, it's a loop on the same machine, but I don't copy to the HDD, because my system runs in RAM. I have one HDD with data on "/datas", but all the other folders are in RAM, and I copied from "/media" (the Samba mount of "/datas") to "/tmp" (RAM). The transferred file is very big, more than 20 GB, and I cancel the copy after a while because I don't have enough memory to copy the whole file. Each time, I keep "iotop -o" open to watch the transfer speed from the HDD.

> Obviously GbE. This low throughput can have a number of causes, most
> dealing with network performance, not samba.

Yes, that's why I'm trying to isolate the cause. iperf gives me very good performance, but when I try with a real file I don't get the same. :(

> Are you using jumbo frames? Which NICs? Which GbE switch? Using an FTP
> client on the Windows PC, what is your GET transfer rate from the Samba
> machine? If it's less than 80 MB/s you may have a network problem. If
> it's over 90 MB/s you may still have some Samba tuning to do. BTW,
> 60 MB/s from Samba to Windows over GbE is pretty damn good. Many people
> can't get over 35-40 MB/s with Windows/GbE and Samba.

Yes, I have set jumbo frames to 4500, because that seemed to be the best value after a lot of testing. Bigger frames reduced performance in my tests.
Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
Switch: Netgear GS108T
FTP (proftpd) from the Samba server to Windows: 105-110 MB/s (from "/datas", checked with "iotop -o", not FileZilla)

Yes, 60 MB/s is not bad, but I would like to understand why I can't use the full bandwidth, because all my HDDs can sustain ~90-100 MB/s and the network should be OK.

> Assuming your future 10GbE network is configured and tuned perfectly,
> you'll need a disk that can push over 1,000 MB/s sustained data rate to
> fill the 10GbE pipe. This requires either a large striped array of
> spinning rust (more than 14 SATA disks in RAID0), or a smaller array of
> fast SSDs (4 in a RAID0).

Yes, I know that speed is almost unreachable right now, but I can't even reach the 1 Gbit/s limit, so 10 Gbit/s... :( I'm almost sure that Samba can use nearly the full gigabit speed, but how do I enable that? :(

Thank you for your help!
Jean
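Jean's FTP numbers suggest the wire itself is not the limit, and the arithmetic agrees. A quick sketch of the theoretical TCP payload rate at his MTU of 4500, assuming plain IPv4/TCP with no header options (the 38-byte figure covers the Ethernet header, FCS, preamble, and inter-frame gap):

```shell
# Theoretical TCP payload throughput on GbE at MTU 4500.
MTU=4500
IP_TCP=40                  # IPv4 header (20) + TCP header (20), no options
WIRE=$((MTU + 38))         # bytes actually consumed on the wire per frame
PAYLOAD=$((MTU - IP_TCP))  # TCP payload bytes per frame
GBE=125000000              # GbE line rate in bytes/s
MBPS=$(( GBE * PAYLOAD / WIRE / 1000000 ))
echo "theoretical TCP payload: ${MBPS} MB/s"
```

Under these assumptions the link could carry roughly 122 MB/s of payload, so the gap down to 60 MB/s has to come from somewhere above the network layer.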
On 03:06:34 wrote Stan Hoeppner:
> On 4/10/2012 9:36 AM, Volker Lendecke wrote:
> > On Tue, Apr 10, 2012 at 08:55:14AM -0500, Chris Weiss wrote:
> >> On Tue, Apr 10, 2012 at 8:53 AM, Volker Lendecke
> >> <Volker.Lendecke at sernet.de> wrote:
> >>> On Tue, Apr 10, 2012 at 08:26:48AM -0500, Chris Weiss wrote:
> >>>> that's dramatic! what needs done (from a user POV) to get this
> >>>> backported into Stable distro kernels? suggestions?
> >>>
> >>> Wait until the next major releases pick it up.
> >>
> >> that's a really crappy option. in certain cases that
> >> could be 4 years from now.
> >
> > Well, if you are an important enough RH customer you might
> > be able to apply pressure. But that's a LOT of money
> > probably. Same for SuSE. Debian will likely be very
> > resistant against that kind of bribery^Wincentive.
>
> Debian already has 3.2.6 available in the stable repo:
>
> $ aptitude search linux-image
> ...
> i linux-image-3.2.6 - Linux kernel, version 3.2.6
> ...

My Fedora is running 3.3, and performance screams with reads and writes over cifs, especially to Samba. At least SuSE and RHEL 6.2 appear to have upgraded their kernels far enough to get the really fast writes over cifs. Jeff Layton did a good job on these performance patches. It's hard to complain about 95% network utilization (and it will get even better when the SMB2 and SMB2.1 support is merged).

You will be even happier with a 3.4 kernel on the client, because then you can get even more parallelism (assuming you have a big set of disks on your server to distribute the work across). When you set much larger values for "max mux" in the server's smb.conf, you will be able to get up to 32768 requests queued to Samba in parallel. With today's networks and Samba, the server default of 50 is way too low, and with the 3.4 kernel cifs client we will be able to send even more requests in parallel if the server indicates it supports more than 50 maximum multiplexed requests.
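For reference, Steve's "max mux" suggestion would go in the [global] section of the server's smb.conf; the value below is the upper bound he mentions, not a tuned recommendation:

```ini
[global]
    # Raise the maximum number of simultaneously multiplexed SMB requests
    # (default 50) so a parallel cifs client can keep more I/O in flight.
    max mux = 32768
```

Run testparm afterwards to confirm the option was parsed, and restart smbd for it to take effect.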
Note that the Linux cifs kernel client has always supported great parallelism and would easily use most of the network bandwidth if multiple processes were doing I/O against multiple files on the same mount. But with 3.0 (for sequential writes like file copies) and later kernels for reads, cifs is VERY fast now. Prior to the 3.0 kernel, for fast file copies from Windows or Samba servers you can use smbclient (a user-space tool), which thanks to good work by Volker has had nice performance for sequential read/write for a few years.

--
Thanks,
Steve