I'm running rsync via ssh on two machines connected with 100 Mbit/s Ethernet cards (at high speed) via a Linksys switch. It's all right here in a single closet. I'm sending large files (an initial transfer; nothing preexists on the destination machine) and seeing transfer rates in the 2 Mbit/s range. The CPUs on both machines are more than 90% idle, and I upped the --block-size to 256 kilobytes. Disk activity is very light. Fedora Core 1, i686.

I don't expect 100 Mbit/s by any means, but 2? Is this typical? What am I missing?

Thanks a million!

Marc Abel
Powell, Ohio
What's your rsync invocation line?

On Thu, Mar 11, 2004 at 12:04:21AM -0500, Marc Abel wrote:
> I'm running rsync via ssh on two machines connected with 100 Mbit/s
> Ethernet cards (at high speed) via a Linksys switch. It's all right
> here in a single closet. I'm sending large files (initial transfer,
> nothing preexists on the destination machine) and seeing transfer rates
> in the 2 Mbit/s range. [...]
> I don't expect 100 Mbit/s by any means, but 2? Is this typical? What
> am I missing?

-- 
Tomasz M. Ciolek
tmc at dreamcraft dot com dot au or tmc at goldweb dot com dot au
GPG Key ID: 0x41C4C2F0  Key available on www.pgp.net
On Thu, 11 Mar 2004, Marc Abel wrote:
> I don't expect 100 Mbit/s by any means, but 2? Is this typical? What
> am I missing?

Since it's on a local network segment, you probably don't need to pay the overhead that ssh adds. In my tests I was able to get 32 Mbit/s using plain rsync to the rsync daemon with basic RH installs, and I'm sure that with some time spent tweaking I could get much better than that.

-Chuck

-- 
http://www.quantumlinux.com
Quantum Linux Laboratories, LLC.
ACCELERATING Business with Open Technology

"The measure of the restoration lies in the extent to which we apply
social values more noble than mere monetary profit." - FDR
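Chuck's daemon-mode setup can be sketched roughly as follows; the module name "backup" and its path are illustrative assumptions, not details from the thread:

```shell
# On the receiving machine, a minimal /etc/rsyncd.conf
# (module name "backup" and its path are assumptions):
#
#   [backup]
#       path = /var/backup
#       read only = false
#
# then start the daemon there:
#
#   rsync --daemon
#
# On the sending machine, the double colon selects the native rsync
# protocol (no remote shell, no ssh) and names the module:
rsync -av --delete /home/me majority::backup
```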
Thanks to all three for your kindness. Invoked as:

  rsync -lave ssh --delete --delete-excluded \
      --exclude ".[!.]*" \
      /home/me me@majority:composite_bu

I did some looking around, and it seems I haven't even installed the daemons for rsh, telnet, etc. (I'll follow up on this, but I can't this morning.) Likewise, running a straight rsync-to-rsync connection is proving to need a little studying up.

I really did mean 2 Mbit/s: about 10 Mbyte/min is all I'm getting. I can understand ssh slowing things down a lot, but not without seeing it get a lot of CPU time. These are roughly 1 GHz machines, and they aren't being slowed down.

Marc Abel
Powell, Ohio
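For reference, the bundled short options in that command expand as below (-a already implies -l, so the extra -l is harmless but redundant):

```shell
# Same invocation with long option names; host and paths are the ones
# from the message above:
rsync --links --archive --verbose --rsh=ssh \
      --delete --delete-excluded \
      --exclude '.[!.]*' \
      /home/me me@majority:composite_bu
```

The exclude pattern '.[!.]*' matches dot-entries in each directory except "." and ".." themselves.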
On Thu, 11 Mar 2004, Marc Abel wrote:
> I don't expect 100 Mbit/s by any means, but 2? Is this typical? What
> am I missing?

One good diagnostic would be to measure the network bandwidth between the two systems directly. I use netperf (http://www.netperf.org/netperf/NetperfPage.html), which often reveals problems with Ethernet duplex, cabling, or switches. If netperf tests great (i.e., 90+ Mbit/s), I would next try a bare scp and see what kind of throughput you get.

It's also possible that rsync is spending too much time looking for small changes in a large file. It may be faster to use "--whole-file".

-- Steve
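Steve's checklist, as concrete commands. The hostname "majority" is from the thread; that netperf/netserver is installed on both ends, and the /tmp paths, are assumptions:

```shell
# 1. Raw TCP throughput (netserver must already be running on majority):
netperf -H majority -t TCP_STREAM

# 2. A bare scp as a second data point -- make a 100 MB test file first:
dd if=/dev/urandom of=/tmp/bigfile bs=1M count=100
time scp /tmp/bigfile majority:/tmp/

# 3. Disable rsync's delta algorithm for the initial copy:
rsync -av --whole-file -e ssh /home/me me@majority:composite_bu
```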
This seems obvious, but it's easy to forget: are all the filesystems on both ends locally attached? If they're NFS or SMBFS, rsync is about a third the speed of scp unless you use "-W" to tell it to send the whole file when the timestamp/size don't match... in most cases, anyway. With an Ethernet LAN on one side and a dialup WAN to the remote server, I'd probably go ahead and let it use the rsync algorithm.

Try not using "-z"... actually, maybe you already aren't.

Here's a strange one for you to try. I'm sure you're running wide open; try using --bwlimit= to hold it back a bit. If your network is chattering, backing off the throttle might get you down the track a hair faster.

Since everything is inside, try opening up rsh for a while (don't leave it open if you're internet-connected). On host1:

  time dd if=/dev/zero bs=1024k count=1024 | rsh host2 dd of=/dev/null

That'll give you a good idea of what your network can do (count=1024 at bs=1024k sends 1 GiB).

Try tarring up whatever you're sending by rsync and sending it straight to /dev/null. Feed it through dd for a byte count:

  time tar -cf - whatever | dd of=/dev/null bs=1024k

Some controllers don't do very good high-speed/high-volume I/O. Cache will keep things seeming fast, but when you go past the cache you see the true speed of your disk subsystem. The above test will show you some of that. If read's good, check write:

  time dd if=/dev/zero bs=1024k count=1024 of=/bigfilesystem/fileyouregoingtodelete

Tim Conway
Unix System Administration
Contractor - IBM Global Services
conway@us.ibm.com
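A scaled-down, local-only version of the disk checks above; the scratch path /tmp/ddtest.bin and the 64 MiB size are assumptions chosen to keep the run short:

```shell
# Sequential write: 64 MiB of zeros, fsync'd so the page cache
# doesn't flatter the number:
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fsync

# Sequential read of the same file back through a byte counter:
dd if=/tmp/ddtest.bin bs=1M of=/dev/null

rm -f /tmp/ddtest.bin
```

dd prints the throughput it measured on stderr when each copy finishes.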
I did some more investigating on the ssh slowdown theory. I also found one negative comment concerning my router (a Linksys BEFSR81) which might be throughput-related.

In a nutshell: ssh between these two machines can run at least 43 Mbit/s. I created a 100 Mbyte file from /dev/urandom, and ssh brought it back (ssh majority cat delete.me > file) in under 20 seconds. No compression (the data was reasonably random), and entirely accurate (I did md5sums at both ends).

So I'm still seeing cat at > 43 Mbit/s and rsync at < 2 Mbit/s, both over ssh. Round-trip packet time is under 0.5 ms, and I looked through a bunch of IP tuning material too; my buffers and settings seem fine.

I wonder what's missing here. Even if I get this working via rsh or direct to an rsync server, it's still reasonable to think it should work over ssh.

Marc Abel
Powell, Ohio
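The measurement described above can be reproduced like this. The hostname "majority" and the file name delete.me come from the message; the exact commands are a sketch:

```shell
# Make a 100 MB incompressible test file and push it to the far end:
dd if=/dev/urandom of=delete.me bs=1M count=100
scp delete.me majority:

# Pull it back through a bare ssh pipe, timed:
time ssh majority cat delete.me > delete.me.copy

# Verify integrity at both ends; the two sums should match:
md5sum delete.me delete.me.copy
```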