I am trying to make sense of the way in which OpenSSH computes window sizes, so far without much success :-(

My understanding is that when a client specifies a window size N at the beginning of a session, it is letting the server know that the server may send, on a given channel, up to N bytes worth of data that consumes window space (essentially the payload of SSH_MSG_CHANNEL_DATA and one or two other packet types). If the server sends more window-consuming data than the channel's window allows, the client is free to discard the excess silently. The server therefore has to wait for the client to send a window-adjust packet before sending more data. Again, if I understand things correctly, the server has to keep track of the number of window-consuming bytes it has sent, and stop sending data once the current window is exhausted.

Now, how does an OpenSSH client decide when to send a window-adjust packet? Looking at the OpenSSH code, the decision seems to be made in channel_check_window(), but it is not clear to me how or why. The exact meaning of the fields local_window and local_consumed is not explained anywhere (as far as I can tell), and the correspondence with what the server is sending is not obvious.

Let me give an example to make it really confusing :-) The traces below come from an OpenSSH client that has started a file transfer session against a (non-OpenSSH) server. The initial window size specified by the client is 131072 bytes. The server then sends five packets in one go, consuming 26, 32768, 32768, 32768 and 32742 bytes respectively, thus using up the whole of the current window.

I understand the meaning of the first line in the client's traces: the window starts at 131072 bytes, a packet consuming 26 bytes has been received, so the window shrinks to 131046 bytes. The remaining lines I just don't get. Obviously the second packet has been received, because the client's local window has been decremented to 98278 bytes. But where is the size (32768 bytes) of that second packet recorded? Where are those 4122, 8218, etc. bytes coming from (they seem to grow in 4096-byte steps)? The server never sent packets consuming such amounts. Then, when the local window is (according to the client) 65510 bytes, the client sends a window-adjust packet. Why at 65510 bytes? Why not when the window reaches zero? And why adjust by 36890 bytes? I understand that this is more than the maximum packet size (which seems to be 32768 bytes), but how is 36890 arrived at? Interestingly, the server receives the window-adjust packet just in time, right after it has consumed all of the previously available window space.

Can anybody throw some light on all this? I believe my confusion stems from the local_window and local_consumed fields mentioned above. Does anybody know exactly what they mean?
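For reference, here is my current reading of the client-side bookkeeping, rewritten as a standalone toy in C. The field names mirror the Channel struct in channels.c, but the threshold condition in check_window() is my own reconstruction of channel_check_window() and may well be where I am going wrong:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy version of the client-side bookkeeping; not actual OpenSSH code. */
    struct toy_channel {
        uint32_t local_window;     /* space the peer may still consume           */
        uint32_t local_window_max; /* window we advertise after each refill      */
        uint32_t local_maxpacket;  /* maximum packet size we advertised          */
        uint32_t local_consumed;   /* bytes drained locally, not yet handed back */
    };

    /* A data packet with `datalen` payload bytes arrives. */
    static void on_channel_data(struct toy_channel *c, uint32_t datalen)
    {
        c->local_window -= datalen;
    }

    /* The local consumer (e.g. sftp writing to disk) drains `nbytes`. */
    static void on_data_consumed(struct toy_channel *c, uint32_t nbytes)
    {
        c->local_consumed += nbytes;
    }

    /* My reconstruction of the adjust decision in channel_check_window(). */
    static void check_window(struct toy_channel *c)
    {
        if (c->local_consumed > 0 &&
            (c->local_window_max - c->local_window > 3 * c->local_maxpacket ||
             c->local_window < c->local_window_max / 2)) {
            printf("window %u sent adjust %u\n",
                   c->local_window, c->local_consumed);
            /* here the real code would send SSH_MSG_CHANNEL_WINDOW_ADJUST */
            c->local_window += c->local_consumed;
            c->local_consumed = 0;
        }
    }

    int main(void)
    {
        struct toy_channel c = { 131072, 131072, 32768, 0 };
        on_channel_data(&c, 26);   /* first packet from the trace below  */
        on_data_consumed(&c, 26);
        check_window(&c);          /* no adjust yet, window still 131046 */
        return 0;
    }
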
Client traces:

    local window 131046, local consumed 26, local window max 131072
    local window 98278, local consumed 4122, local window max 131072
    local window 98278, local consumed 8218, local window max 131072
    local window 98278, local consumed 12314, local window max 131072
    local window 98278, local consumed 16410, local window max 131072
    local window 98278, local consumed 20506, local window max 131072
    local window 98278, local consumed 24602, local window max 131072
    local window 98278, local consumed 28698, local window max 131072
    local window 98278, local consumed 32794, local window max 131072
    local window 98278, local consumed 32794, local window max 131072
    local window 65510, local consumed 36890, local window max 131072
    window 65510 sent adjust 36890

Server traces:

    131072 26 131046
    131046 32768 98278
    98278 32768 65510
    65510 32768 32742
    32742 32742 0
    Received 36890 adjust
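For what it is worth, here is the sender-side bookkeeping I described above, again as a toy in C rather than real server code. Each line it prints is old window, bytes consumed, new window, the same format as the server traces:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of a sender honouring the peer's window; not real server code. */
    struct toy_sender {
        uint32_t remote_window; /* window the peer has granted us           */
        uint32_t max_packet;    /* maximum payload per SSH_MSG_CHANNEL_DATA */
    };

    /* Send as much of `len` bytes as the window allows; return bytes sent. */
    static uint32_t send_data(struct toy_sender *s, uint32_t len)
    {
        uint32_t sent = 0;
        while (len > 0 && s->remote_window > 0) {
            uint32_t chunk = len;
            if (chunk > s->max_packet)
                chunk = s->max_packet;
            if (chunk > s->remote_window)
                chunk = s->remote_window;
            printf("%u %u %u\n", s->remote_window, chunk,
                   s->remote_window - chunk);
            s->remote_window -= chunk; /* every payload byte consumes window */
            len -= chunk;
            sent += chunk;
        }
        return sent; /* leftover bytes must wait for a window adjust */
    }

    /* SSH_MSG_CHANNEL_WINDOW_ADJUST arrives from the peer. */
    static void on_window_adjust(struct toy_sender *s, uint32_t bytes)
    {
        printf("Received %u adjust\n", bytes);
        s->remote_window += bytes;
    }

    int main(void)
    {
        struct toy_sender s = { 131072, 32768 };
        send_data(&s, 26);      /* the first, small packet            */
        send_data(&s, 131046);  /* the rest: exactly fills the window */
        on_window_adjust(&s, 36890);
        return 0;
    }

Fed the packet sizes from the example, this prints exactly the server trace above, so at least my model of the sending side seems to be consistent. It is the client side that I cannot reproduce.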