Hi,
I am looking into the Linux kernel source (net/ipv4/tcp.c,
net/ipv4/tcp_output.c, net/core/dev.c, etc.) to find out how multiple
sockets are drained onto the network. Here is the picture I have so
far: if there are ''n'' sockets with data to send over TCP, they all
supposedly take equal turns draining their data onto the xmit_queue.
1. However, this ''equal turn'' (fair) behaviour seems to come more
from the nature of TCP under competing flows than from any explicit
scheduling in the kernel.
2. In practice, though, the draining of sockets looks like an FCFS
(first-come, first-served) scheduling discipline! Whichever socket has
data to send grabs a piece of its TCP sk->sndbuf (16 KB by default)
and writes it out. If the sndbuf is full, the process waits a random
amount of time (between 2 and 21 jiffies) and then retries.
3. Once inside the sndbuf, the data is passed down to the IP layer and
then to the device layer (default device xmit_queue length of 100
packets for ethernet). At every instant the routines try to push as
much data out onto the network (device xmit_queue) as possible.
Right?
If I am wrong about any of the three statements above, **please**
correct me.
socket1
|||||  ---> TCP processing --> IP processing --\
                                                \
socket2                                          \    n/w xmit_queue
||||   ---> TCP processing --> IP processing -----> |||||||--->
  .                                              /
  .                                             /
socket n                                       /
|||||  ---> TCP processing --> IP processing --/
- jaika
ps: Here is what I gleaned from the sources:
in net/ipv4/tcp_ipv4.c, sk->sndbuf is initialised to
sysctl_tcp_wmem[1], and in net/ipv4/tcp.c:
int sysctl_tcp_wmem[3] = {4*1024, 16*1024, 128*1024};
So I guess the sndbuf is initialised to 16 KB.