You really meant 300 _M_bps as the upper bound according to Enterasys.
My switches are from Enterasys too. After a firmware update I get about
722 Mbps both ways. The client's disk drive (Maxtor 250 GB) can't read
or write any faster.
If you don't get transfer rates in the immediate neighbourhood of the
read/write speeds of your client's disk drives, then your network setup
(hardware or configuration) is probably wrong. It's not easy to say what
exactly is wrong.
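A quick way to see where the bottleneck sits is to measure raw disk and
raw network throughput separately and compare both with what Samba
achieves. A rough sketch only (the device name and hostname are
placeholders for your own setup, and iperf is assumed to be installed on
both machines):

  # sequential read speed of the client's disk, run as root
  # (/dev/hda is a placeholder for the actual drive)
  dd if=/dev/hda of=/dev/null bs=1M count=1024

  # raw TCP throughput between client and server
  iperf -s              # run this on the server
  iperf -c server-host  # run this on the client

If iperf reports far less than wire speed, look at the network hardware
and driver first; if dd is the slow one, no amount of Samba tuning will
help.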
> From: Doug VanLeuven <roamdad@sonic.net>
> To: samba@lists.samba.org
> Date: Fri, 22 Sep 2006 02:50:17 -0700
> Subject: Re: [Samba] Transfer rates faster than 23MBps?
> OK, I'll top post.
> I can't let this stand unanswered.
> I ran a LOT of tests with gigabit copper and windows machines. I never
> did better than 40 seconds per gig. That was with the Intel cards
> configured for maximum cpu utilization. 80-90% cpu for 40 sec per gig.
> On Windows. Uploads went half as fast. Asymmetric. Of course I only
> had 32 bit PCI, 2.5Gig processor motherboards with 45MBps drives.
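(For scale: 1 GB in 40 seconds works out to roughly 25 MB/s, or about
200 Mbit/s, well under the ~125 MB/s of payload a gigabit link can carry
and under the 45 MBps drives mentioned, which suggests the 80-90% CPU
load and the 32-bit PCI bus, not the drives or the wire, were the limit
in that setup.)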
>
> Which leads me to my point. One can't rationally compare performance of
> gigabit ethernet without talking about hardware on the platforms. I
> wouldn't think you'd have overlooked this, but one can bump up against
> the speed of the disk drive. Raid has overhead. Have you tried
> something like iostat? Serial ATA? I seem to recall the folks at
> Enterasys indicating 300Gbps as a practical upper limit on copper gig.
> Are you using fiber? 64 bit PCI? Who made which model of the network
> card? Is it a network card that's well supported in Linux? Can you
> change the interrupt utilization of the card? What's the CPU
> utilization on the Redhat machine during transfers?
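On the iostat suggestion above, a minimal sketch (assuming the sysstat
package is installed; the 2-second interval is arbitrary):

  # extended per-device statistics, refreshed every 2 seconds
  iostat -x 2

Run it during a transfer and watch %util and await for the disk behind
the share; if they stay low while the copy is slow, the disk is probably
not the limit.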
>
> I don't have specific answers for your questions, but one can't just say
> this software product is slower on gigabit than the other one without
> talking hardware at the same time.
>
> I have lots of memory. I use these configurations in sysctl.conf to up
> the performance of send/receive windows on my systems. There's articles
> out there. I don't have historical references handy.
> YMMV.
> net.core.wmem_max = 1048576
> net.core.rmem_max = 1048576
> net.ipv4.tcp_wmem = 4096 65536 1048575
> net.ipv4.tcp_rmem = 4096 524288 1048575
> net.ipv4.tcp_window_scaling = 1
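To try Doug's values without rebooting, something like this should do it
(run as root; the numbers above are his, not a recommendation):

  # load everything from /etc/sysctl.conf
  sysctl -p

  # or set a single value directly
  sysctl -w net.core.rmem_max=1048576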
>
> Regards, Doug
>
> > I wanted to follow up to my email to provide at least a partial answer
> > to my problem.
> >
> > The stock RedHat AS4-U3 Samba config has SO_SNDBUF and SO_RCVBUF set
> > to 8k. With this value, I can transfer a 1GB file in about 70-75
> > seconds, about 14MBps. If I increase those buffers to their max value
> > of 64k, that same 1GB file transfers in 45-50 seconds, about 23MBps.
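For reference, the change Mark describes would look roughly like this in
the [global] section of smb.conf (a sketch of the syntax only; the 64k
values are the ones he reports), followed by restarting smbd:

  socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536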
> >
> > That is the _ONLY_ configuration value I've found that made any
> > difference in my setup. All the other tweaks I'd done, when removed,
> > seemed to make no difference at all. I was playing with oplocks,
> > buffers, max xmit sizes, you name it. But the socket option buffers
> > was the only thing that made a difference.
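The other knobs mentioned there are all ordinary smb.conf parameters; a
sketch of the kind of values people experiment with (illustrative only,
and per Mark's report they made no measurable difference here):

  oplocks = yes
  level2 oplocks = yes
  max xmit = 65535
  read raw = yes
  write raw = yes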
> >
> > I'm still looking for more speed. I'll report if I find anything else
> > that helps.
> >
> > In response to Jeremy's suggestion of using smbclient, I ran a test
> > from a Linux client using smbclient and it reported a transfer rate of
> > 21MBps, about the same as a normal smbfs mount. I haven't tried
> > porting smbclient to Windows yet, and probably won't until we get more
> > info on what the server is doing.
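To reproduce that kind of measurement, a sketch (server, share, user and
file name are all placeholders):

  smbclient //server/share -U someuser -c 'put bigfile.bin'

smbclient prints an average transfer rate in kb/s once the put (or get)
completes, which is the figure being quoted above.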
> >
> > Thanks everyone.
> >
> > -Mark
> >
> > Mark Smith wrote:
> >> We use SMB to transfer large files (between 1GB and 5GB) from RedHat
> >> AS4 Content Storage servers to Windows clients with 6 DVD burners and
> >> robotic arms and other cool gadgets. The servers used to be Windows
> >> based, but we're migrating to RedHat for a host of reasons.
> >>
> >> Unfortunately, the RedHat Samba servers are about 2.5 times slower
> >> than the Windows servers. Windows will copy a 1GB file in about 30
> >> seconds, whereas it takes about 70 to 75 seconds to copy the same
> >> file from a RedHat Samba server.
> >>
> >> I've asked Dr. Google and gotten all kinds of suggestions, most of
> >> which have already been applied by RedHat to the stock Samba config.
> >> I've opened a ticket with RedHat. They pointed out a couple errors
> >> in my config, but fixing those didn't have any effect. Some
> >> tweaking, however, has gotten the transfer speed to about 50 seconds
> >> for that 1GB file.
> >>
> >> But I seem to have hit a brick wall; my fastest time ever was 44
> >> seconds, but typically it's around 50.
> >>
> >> I know it's not a problem with network or disk; if I use Apache and
> >> HTTP to transfer the same file from the same server, it transfers in
> >> about 15 to 20 seconds. Unfortunately, HTTP doesn't meet our other
> >> requirements for random access to the file.
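That HTTP comparison is easy to repeat; something along these lines (the
URL is a placeholder, and a second run may be served from the server's
page cache, so time the first one):

  time wget -O /dev/null http://server/path/to/1GB-file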
> >>
> >> Do you folks use Samba for large file transfers at all? Have you had
> >> any luck speeding it up past about 23MBps (the 44 second transfer
> >> speed)? Any help you may have would be fantastic. Thanks.
>