We have some speed/performance issues:

We have a 100M full-duplex private network set up to handle rsync
transfers to our "mirror" server with a command like:

  time rsync -e ssh -avzl --delete --rsync-path=/usr/local/bin/rsync \
      --exclude ".netscape/cache/" --delete-excluded \
      bigserver:/staff1 /mirror/bigserver

It takes about 20 minutes to check/transfer files from bigserver:/staff1
(Solaris 8) to the local machine (RH 7.2, Linux 2.4.2-2).
bigserver:/staff1 is 24G.

Is that speed fast or slow? I think it's slow for a 100M full-duplex
private network. The path goes from a private NIC on bigserver, through
the hub, to a private NIC on the local host - all 100M full duplex.

The output shows many directories being transferred, although no files
within those directories are actually transferred. Is that normal?

The hub is showing lots of collisions, although no collisions are
reported on the local box's private NIC:

localhost # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:02:B3:8B:E4:CE
          inet addr:10.0.0.10  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1601661 errors:0 dropped:0 overruns:0 frame:624536
          TX packets:1477346 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          Interrupt:5 Base address:0xb400

Does anyone know how to display the collision count on a Solaris box?

I have also noticed that running tape backups over the private network
(ssh/dd) causes both to slow to a crawl - even though the tape backup
can only do about 2M a second (DLT IV).

Any suggestions/help appreciated.

--
David Arnold
dga@csse.monash.edu.au
On Monday 03 June 2002 07:03, David Arnold wrote:
> We have some speed/performance issues:
>
> We have a 100M full-duplex private network set up to handle rsync
> transfers to our "mirror" server with a command like:
>
>   time rsync -e ssh -avzl --delete --rsync-path=/usr/local/bin/rsync \
>       --exclude ".netscape/cache/" --delete-excluded \
>       bigserver:/staff1 /mirror/bigserver

Here are your speed problems:

1- If you're really on a private network, you should run rsync as a
daemon and avoid ssh. This will save many CPU cycles and some bandwidth.

2- With 100 Mbit/s dedicated to rsync, you shouldn't use compression
(-z). This will save many CPU cycles too. :-)

Take a look, but I'm quite sure that your problems are actually on the
CPU side and not on the network side... I'm currently testing rsync on
a shared 100 Mbit switched network, and it takes less than 1 minute to
synchronize 55G in more than 250,000 files.

Olivier
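
For what it's worth, a minimal daemon setup might look something like
this (a sketch only, not tested against your setup - the module name,
paths, and hosts allow address are assumptions you'd adapt):

  # /etc/rsyncd.conf on bigserver (module name "staff1" is my choice)
  [staff1]
      path = /staff1
      read only = yes
      # assuming 10.0.0.10 is the mirror box's private NIC, per your ifconfig
      hosts allow = 10.0.0.10

  # start the daemon on bigserver
  /usr/local/bin/rsync --daemon

  # then pull from the mirror box - note the :: daemon syntax,
  # no -e ssh and no -z
  time rsync -avl --delete --exclude ".netscape/cache/" \
      --delete-excluded bigserver::staff1 /mirror/bigserver

Dropping ssh removes the encryption overhead on both ends, and dropping
-z removes the compression overhead, which on a fast LAN usually costs
more CPU than the bandwidth it saves.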