On Sat, 2006-Jun-24 20:12:56 -0700, Nikolas Britton wrote:
>Test Setup:
>250 50MB files (13068252KB)
>dd if=/dev/random of=testfile bs=1m count=50
>Ethernet mtu=6500
>Transferred files were wiped after every test with 'rm -r *'.
>
>Test:
>hostB: nc -4l port | tar xpbf n -
>hostA: date; tar cbf n - . | nc hostB port; date
>
>Test Results:
>seconds = n
>645sec. = 1024
>670sec. = 512
>546sec. = 256
>503sec. = 128
>500sec. = 128 (control)
>515sec. = 96
>508sec. = 64
>501sec. = 20 (default)
>
>Conclusions: Make your own.
I don't think that's so unexpected. tar doesn't use multiple buffers,
so filling and emptying the buffer is done serially. Once the buffer
exceeds the space available in the pipe buffer plus the local TCP send
buffer, the write from the hostA tar is delayed until the TCP buffer
can drain. At the same time, the read from the hostB tar is blocked
waiting for data from the network.
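The rough arithmetic can be sketched as below. The 64 KiB pipe capacity and 32 KiB TCP send buffer are assumed typical defaults for illustration, not measured values from the test machines:

```python
# tar's blocking factor b gives a record of b * 512 bytes, which tar
# reads and writes as a single unit.  Once that record no longer fits
# in the pipe buffer plus the TCP send buffer, the writer must stall.

TAR_BLOCK = 512              # bytes per tar block (POSIX)
PIPE_BUF_CAP = 64 * 1024     # assumed kernel pipe buffer capacity
TCP_SENDSPACE = 32 * 1024    # assumed net.inet.tcp.sendspace default

def record_size(blocking_factor):
    """Size in bytes of one tar record for 'tar -b blocking_factor'."""
    return blocking_factor * TAR_BLOCK

for b in (20, 64, 128, 256, 512, 1024):
    rec = record_size(b)
    fits = rec <= PIPE_BUF_CAP + TCP_SENDSPACE
    print(f"b={b:5d}: record={rec // 1024:4d} KiB, "
          f"fits in pipe+sendbuf: {fits}")
```

Under these assumed sizes, records fit through b=128 and stop fitting at b=256, which lines up with where the timings above start to climb.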
Optimal throughput will depend on maximally overlapping the file reads
on hostA with the network traffic and file writes on hostB. This, in
turn, means you want to be able to hold at least a full buffer of data
in the intervening processes and kernel buffers. Assuming that you
aren't network bandwidth limited, you should look at increasing
net.inet.tcp.sendspace and maybe net.inet.tcp.recvspace, or using
an intervening program on hostA that does its own re-buffering.
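A minimal sketch of both approaches (the sysctl values and the buffer size are illustrative assumptions, and mbuffer is a third-party port, not part of the base system):

```shell
# Enlarge the TCP buffers (example values, tune to taste; run as root):
sysctl net.inet.tcp.sendspace=262144
sysctl net.inet.tcp.recvspace=262144

# Or insert a re-buffering program between tar and nc on hostA,
# e.g. mbuffer from ports with a 64 MB ring buffer:
tar cbf 20 - . | mbuffer -m 64M | nc hostB port
```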
--
Peter Jeremy