After some research into the current solutions on Solaris, it doesn't appear that there's a current solution; perhaps everyone here would know better. Let me lay out the problem so that it's a little more clear.

As far as I can tell, most currently available, easy-to-use solutions use queuing for traffic shaping (netfilter in Linux, what squeues in S10 appear to be, and the approach squid seems to take). Unfortunately, it's pretty tough to actually do inbound traffic shaping with this kind of tactic because the bits have already arrived. You can drop packets or queue up the incoming stuff, but that either causes massive retransmission, with a negative effect on connection quality, or gives you very spiky bandwidth because the connection rate is going all over the place. Where queues do seem to work is on outbound traffic. In that case, you have full control of what goes out and can queue it up, making sure it works.

Now, ideally, I'd like to limit incoming connections' bandwidth on a per-connection and per-port basis (Crossbow, which is an awesome project, only appears to do something with squeues on a local IP and port basis). A use case: if someone using a service, say HTTP, does something "bad" such as start a DoS, you can limit their bandwidth down to almost nothing so that they don't retry the connection and think the DoS is working, and, if you've got a false positive, the client can still use the service, if at a severely reduced level.

Pure packet dropping or queuing wouldn't work in this instance, I believe, because the bandwidth would get reclaimed to some level, but you'd still lose a lot, and in the case of a false positive the service would be completely unusable given how many packets would be dropped.

Therefore, after looking through my networking books again, I believe this problem is solved by reaching down into the guts of TCP itself. Because TCP has to handle limited connections all the time, the protocol has allowances for finding the optimal bandwidth of a connection and reshaping the incoming packets so that the pipe is filled correctly. It does this through the TCP window, the MSS header option and TCP congestion avoidance. From everything I've been able to find about the network stack in S10, there aren't any hooks that would let me edit these settings on the fly at a per-connection level.

So, the first obvious question: is this possible at all? And/or has someone already done something like this?

Assuming it's not been done before, where can I find information on how to get it done? For instance, would this need to be a network driver wrapper or just a simple kernel module that can be loaded when it's needed? I'm sure there's information out there on where to begin, but I've sure had a hard time finding it... any pointers would be greatly appreciated.
This would be a really cool idea. There's not a good way of doing this in Linux either. Frankly, there's not a really good way of doing it outside of a Packeteer. I can think of a lot of applications for per-session TCP rate control.

-J

On 2/19/07, Thomas Rampelberg <rampelbergt at digitar.com> wrote:
> After some research into the current solutions on Solaris, it doesn't
> appear that there's a current solution; perhaps everyone here would
> know better. [...]
On 2/19/07, Thomas Rampelberg <rampelbergt at digitar.com> wrote:
> Pure packet dropping or queuing wouldn't work in this instance, I
> believe, because the bandwidth would get reclaimed to some level, but
> you'd still lose a lot, and in the case of a false positive the
> service would be completely unusable given how many packets would be
> dropped. [...]
>
> So, the first obvious question: is this possible at all? And/or has
> someone already done something like this?

But that's the entire point of packet dropping. If you drop packets from a TCP connection, preferably not using tail-drop, then the sender should start to close down its window due to the retransmissions that start to occur. The window will close down until the retransmission rate drops (i.e. the receiver stops dropping packets because the bandwidth has fallen sufficiently low).

Paul

--
Paul Durrant
http://www.linkedin.com/in/pdurrant
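[A toy model of the dynamic Paul describes, with invented numbers; this is not Solaris code. Each round trip with a drop halves the sender's congestion window, each clean round trip grows it by one segment, so the rate settles around whatever threshold the receiver enforces by dropping.]

#include <stdio.h>

int main(void)
{
    double cwnd  = 64.0;  /* congestion window, in segments          */
    double limit =  8.0;  /* receiver drops whenever cwnd exceeds it */

    for (int rtt = 0; rtt < 16; rtt++) {
        if (cwnd > limit)
            cwnd /= 2.0;  /* loss seen: multiplicative decrease  */
        else
            cwnd += 1.0;  /* clean round trip: additive increase */
        printf("rtt %2d: cwnd %4.1f segments\n", rtt, cwnd);
    }
    return 0;
}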
Paul Durrant writes:
> But that's the entire point of packet dropping. If you drop packets
> from a TCP connection, preferably not using tail-drop, then the sender
> should start to close down its window due to the retransmissions that
> start to occur. The window will close down until the retransmission
> rate drops (i.e. the receiver stops dropping packets because the
> bandwidth has fallen sufficiently low).

While drops are a necessity where hard resources are contended, I think we can open the debate when the said resources are managed/controlled.

The peer will transmit as much as can fit in the smaller of the congestion window and the receiver's socket buffer size. In the absence of drops, the cwnd just grows, but it would seem to me that, by tweaking the advertised receive window and assuming the round-trip time is stable (?), we can control the incoming bandwidth. That bandwidth should never exceed (advertised socket buffer / RTT).

-r
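[A minimal sketch of the bound Roch states, with made-up numbers: to hold a sender near a target rate at a stable RTT, advertise a window of roughly rate * RTT.]

#include <stdio.h>

int main(void)
{
    double rate_bps = 100e3;  /* target rate: 100 kb/s            */
    double rtt_sec  = 0.080;  /* assume a stable 80 ms round trip */

    /* BW <= advertised window / RTT, so size the window to the target */
    double window_bytes = rate_bps / 8.0 * rtt_sec;

    printf("advertise ~%.0f bytes\n", window_bytes);  /* ~1000 */
    return 0;
}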
Roch - PAE wrote:
> In the absence of drops, the cwnd just grows, but it would seem to me
> that, by tweaking the advertised receive window and assuming the
> round-trip time is stable (?), we can control the incoming bandwidth.
> That bandwidth should never exceed (advertised socket buffer / RTT).

I'd not thought about the exact repercussions of packet dropping on actual incoming bandwidth, but from my experiments, pure packet dropping does not provide the QoS that I would like/need for this application.

Roch, you've got the idea. Of course, as I'm coming to find out, none of the hooks for this are in the kernel at the moment, and adding them would theoretically impose a performance hit on something that's extremely performance sensitive. However, what if something along the lines of DTrace probes were added? The idea being that only when the extra functionality was needed would a module be loaded and the performance hit realized. For servers handling this task specifically, the performance hit would be more than acceptable to me.

If no one's seen the similarity yet, I'm proposing something along the lines of a programmatic Packeteer interface for the kind of in-depth shaping that occurs without the QoS implications of queuing or dropping packets.
Hi Guys,

Just thought I'd inject my two cents. Proactively adjusting the TCP window and MSS size is a much better way of doing traffic shaping. The ack-floods you can get from hard-dropping packets (a la QoS) can be just as much of a headache as the bandwidth surge you're trying to quell.

Doing Packeteer-style traffic shaping in the Solaris network stack would absolutely rock! Completely unbiased here... I wouldn't happen to have an app that would benefit or anything... ;-)

-J

On 2/20/07, Thomas Rampelberg <rampelbergt at digitar.com> wrote:
> If no one's seen the similarity yet, I'm proposing something along the
> lines of a programmatic Packeteer interface for the kind of in-depth
> shaping that occurs without the QoS implications of queuing or
> dropping packets.
Darren.Reed at Sun.COM | 2007-Feb-21 01:38 UTC | [crossbow-discuss] Traffic shaping in Solaris
Jason J. W. Williams wrote:
> Just thought I'd inject my two cents. Proactively adjusting the TCP
> window and MSS size is a much better way of doing traffic shaping.

What sort of capabilities are you looking for in shaping packets?

- introducing random packet loss?
- introducing fixed/random delays?
- imposing queue restrictions (n slots, n kB, n MB)?
- imposing a bandwidth limit (n kb/s, n Mb/s)?
- changing the TCP MSS (can only be changed when the connection starts)?
- changing the TCP window size?
- others?

Darren
Hi Darren,

We're looking for TCP MSS and window size modification primarily: window size for talkers gone bad, and both for re-connecting talkers known to be bad. We'd like to have the hooks to implement/control this from an application accepting the connections.

Introducing delays to SYNs and ACKs, it would seem to me, would need to be used intelligently (and gently) in conjunction with a TCP-window method so as not to cause an ACK flood. Imposing a bandwidth limit would be a nice higher-level way to achieve the same effect, letting the OS decide how to resize the TCP window and introduce delays accordingly. Queue restrictions and random packet loss get back into the traditional QoS method, and don't necessarily achieve what we're looking for.

Best Regards,
Jason

On 2/20/07, Darren.Reed at sun.com <Darren.Reed at sun.com> wrote:
> What sort of capabilities are you looking for in shaping packets?
> [...]
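[A rough approximation of this with an existing knob, as a sketch only: on most stacks SO_RCVBUF caps the window TCP can advertise, so shrinking it on an accepted socket bounds the sender to roughly rcvbuf / RTT. Whether a mid-connection shrink takes effect is stack-dependent, and the helper name here is invented.]

#include <sys/types.h>
#include <sys/socket.h>

/* hypothetical helper: bound a peer by capping the advertised window */
static int throttle_peer(int connfd, int rcvbuf_bytes)
{
    /* e.g. 1000 bytes at an 80 ms RTT is on the order of 100 kb/s */
    return setsockopt(connfd, SOL_SOCKET, SO_RCVBUF,
                      &rcvbuf_bytes, sizeof (rcvbuf_bytes));
}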
Darren.Reed at Sun.COM wrote:
> What sort of capabilities are you looking for in shaping packets?
> [...]

Obviously, you can currently impose bandwidth limits and queue restrictions (I'm a little fuzzy on this one, but it appears to be like netfilter in Linux) and drop packets, albeit at a broad level. What I'd really like is to be able to change all of the above (except packet loss or fixed/random delays; that's easy enough to just do in server code) on a per-connection basis.

Having a way to granularly set a bandwidth limit for a specific connection would be very useful as a wrapper around changes to the TCP window size and MSS. As I understand it, the way limits are imposed in Crossbow, on a per-interface/port basis, is implemented using squeues, and it ends up introducing packet loss and queue restrictions instead of the smoother options you get with TCP header manipulation. (Someone please correct me if I'm lacking some understanding of how this happens under the hood.)

About the MSS: my TCP is a little fuzzy, but would it be possible to do a connection re-establishment to get the MSS changed?
Darren.Reed at Sun.COM | 2007-Feb-21 19:13 UTC | [crossbow-discuss] Traffic shaping in Solaris
Thomas Rampelberg wrote:
> About the MSS: my TCP is a little fuzzy, but would it be possible to
> do a connection re-establishment to get the MSS changed?

No, you can't do that. The TCP MSS option is only present/recognised in a packet with the SYN flag set.

Darren
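[Illustrating Darren's point in code: the MSS rides in the SYN, so the only chance an application has to influence it is before the handshake. On stacks that allow setting TCP_MAXSEG (on some it is read-only), the clamp has to be applied before connect() or listen(); this is a sketch under that assumption.]

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* must run before the SYN is sent; it has no effect afterwards */
static int clamp_mss(int sockfd, int mss_bytes)
{
    return setsockopt(sockfd, IPPROTO_TCP, TCP_MAXSEG,
                      &mss_bytes, sizeof (mss_bytes));
}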
Darren.Reed at Sun.COM | 2007-Feb-21 19:50 UTC | [crossbow-discuss] Traffic shaping in Solaris
Jason J. W. Williams wrote:
> We're looking for TCP MSS and window size modification primarily.

How would you wish to do this? A different setsockopt option for setting the received/sent MSS?

> Window size for talkers gone bad, and both for re-connecting talkers
> known to be bad. We'd like to have the hooks to implement/control this
> from an application accepting the connections.

I'm not sure that I understand what you're talking about here; can you expand on what problem you're having and how you would like to address it?

Darren
Hi Darren,

> A different setsockopt option for setting the received/sent MSS?

That would be a very good option.

> I'm not sure that I understand what you're talking about here;
> can you expand on what problem you're having and how you would
> like to address it?

Say, for example, you've got a heavily utilized SMTP server. 100 different clients connect that you haven't seen before. 20 of them start transmitting 20 MB files concurrently, which completely saturates your 20 Mb/s link for 106 seconds (assuming each client is limited on their end to a T1). In this case, what I'd like to be able to do is tell the OS, when the connection is built up, to only allow each such sender to consume 100 kb/s. If after 3 MB they're still sending data, I'd want to ratchet them down again, to probably 20 kb/s. The reason being, these 20 senders otherwise prevent the other 80 from getting connections... or cause them to time out.

Optimally, you'd just tell the OS the rate when a connection was accepted, and then have the ability to reset the rate on an ongoing connection by handing the OS the source IP/source port, destination IP/destination port and the new rate. Then the OS would automagically adjust the MSS (on a new connection) and the TCP window to meet the set rate, since the OS is in a better position to make those decisions rapidly.

Does this help?

-J
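[None of the following exists in S10; it is a sketch of the interface Jason describes, with every name hypothetical. Internally the stack would derive the advertised window from its own RTT estimate (window = rate * RTT) and re-derive it each time the rate is reset.]

#include <stdint.h>
#include <stdio.h>

struct tcp_flow {
    const char *src_ip;   uint16_t src_port;
    const char *dst_ip;   uint16_t dst_port;
};

/* hypothetical hook: would look up the connection by its 4-tuple and
 * re-derive its advertised window; stubbed here to report the intent */
static int tcp_set_rate(const struct tcp_flow *f, uint32_t rate_bps)
{
    printf("cap %s:%u -> %s:%u at %u b/s\n",
           f->src_ip, (unsigned)f->src_port,
           f->dst_ip, (unsigned)f->dst_port, (unsigned)rate_bps);
    return 0;
}

int main(void)
{
    struct tcp_flow talker = { "192.0.2.10", 34567, "198.51.100.5", 25 };

    tcp_set_rate(&talker, 100000);   /* new sender: 100 kb/s    */
    /* ... 3 MB later, still sending ... */
    tcp_set_rate(&talker, 20000);    /* ratchet down to 20 kb/s */
    return 0;
}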
Nicolas Williams | 2007-Feb-21 21:57 UTC | [networking-discuss] Re: [crossbow-discuss] Traffic shaping in Solaris
On Wed, Feb 21, 2007 at 02:37:34PM -0700, Jason J. W. Williams wrote:
> In this case, what I'd like to be able to do is tell the OS, when the
> connection is built up, to only allow each such sender to consume
> 100 kb/s. [...] The reason being, these 20 senders otherwise prevent
> the other 80 from getting connections... or cause them to time out.

But you don't need to muck with the MSS to make this happen; nor can you, since it's a connection-establishment-time thing.

It's important to note that you can't actually stop bad peers from sending packets at their line rate, but you can slow down well-behaved peers.

Nico
Jason J. W. Williams | 2007-Feb-21 22:29 UTC | [networking-discuss] Re: [crossbow-discuss] Traffic shaping in Solaris
Hi Nico,

90% of the bandwidth hogs we see are folks using well-behaved OSs. As for the evil-doers out there who craft a special mass-mailer that somehow bypasses their OS TCP stack to ignore the window size, there's nothing to do there but use standard QoS drops. But that's a rare occurrence. The more common issue is impolite clients that need to be elegantly shaped down.

-J

On 2/21/07, Nicolas Williams <Nicolas.Williams at sun.com> wrote:
> It's important to note that you can't actually stop bad peers from
> sending packets at their line rate, but you can slow down well-behaved
> peers.
Thomas Rampelberg writes:
> About the MSS: my TCP is a little fuzzy, but would it be possible to
> do a connection re-establishment to get the MSS changed?

I keep thinking that this thread is using MSS where it means RTT. I'm lost as to why we'd tune the MSS here!?

-r
Roch - PAE wrote:
> I keep thinking that this thread is using MSS where it means RTT. I'm
> lost as to why we'd tune the MSS here!?

Now that I've discovered you can only change the MSS at connection establishment... there doesn't appear to be any reason. =/ It looked good while reading through my networking book; does that count?
Hi Roch,

I did mean MSS... but as Tom said, now that it's apparent you can only set it on a SYN, there's not a lot of use to it.

Best Regards,
Jason

On 2/22/07, Roch - PAE <Roch.Bourbonnais at sun.com> wrote:
> I keep thinking that this thread is using MSS where it means RTT. I'm
> lost as to why we'd tune the MSS here!?