Klaas Jan Wierenga
2005-Jul-28 15:29 UTC
[Icecast-dev] icecast performance on many concurrent low-bitrate streams
Karl,

I've managed to patch up my branch of Icecast to do the batching. Checked everything with valgrind and tested it extensively. It looks good. Tcpdump now shows nicely sized frames (mostly 1400 bytes). Any reason why you're not setting the MTU to something closer to 1500?

Many thanks for your help,
KJ

-----Original message-----
From: Karl Heyes [mailto:karl@xiph.org]
Sent: Thursday 28 July 2005 17:03
To: Klaas Jan Wierenga
CC: icecast-dev
Subject: Re: [Icecast-dev] icecast performance on many concurrent low-bitrate streams

On Thu, 2005-07-28 at 15:30, Klaas Jan Wierenga wrote:
> Hi Karl,
>
> Thanks for your info. I have a standard Icecast-2.2 release with a few
> local patches. I'm a little apprehensive to apply my patches to the kh14
> branch, so I'd rather patch my branch with the changes related to batched
> reads from the kh branch.
>
> I've looked at your code to see if I could spot the changes related to
> batching reads. So far I have not been able to find where you've made
> this patch. Could you point me in the right direction?

The changes for the batching part are really just isolated to the format_mp3.[ch] files, within the two reader calls (filter_meta, no_meta). The commit diff is listed at

http://lists.xiph.org/pipermail/commits/2005-June/007469.html

There have been various changes between 2.2 and the trunk/kh code, so I'm sure that just dropping those 2 files into the 2.2.0 tree will not work right off. For one thing, the response headers are treated differently. I don't know the scope of your patches, so I can't even give you any hints on those.

karl.
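For illustration, here is a minimal sketch in C of the batching mechanism being discussed: keep appending source reads into one block and only queue it for the listeners once it is nearly a full packet. The type and function names (buffer_t, source_read, queue_block) are assumptions for this example, not the actual format_mp3.c symbols.

    #include <stddef.h>

    #define BLOCK_SIZE 1400   /* aim for roughly one full TCP payload */

    typedef struct {
        char   data[BLOCK_SIZE];
        size_t len;               /* bytes accumulated so far */
    } buffer_t;

    /* provided elsewhere: read up to 'want' bytes from the source;
     * returns bytes read, 0 if nothing is available yet, <0 on error */
    int source_read(void *src, char *buf, size_t want);

    /* provided elsewhere: hand a completed block to the listener queue */
    void queue_block(buffer_t *block);

    /* Returns 1 when a full block was queued, 0 while still batching. */
    int batched_read(void *src, buffer_t *block)
    {
        int got = source_read(src, block->data + block->len,
                              BLOCK_SIZE - block->len);
        if (got <= 0)
            return 0;             /* nothing new; keep the partial block */
        block->len += (size_t)got;
        if (block->len < BLOCK_SIZE)
            return 0;             /* not full yet; batch up some more */
        queue_block(block);       /* listeners now see ~1400-byte writes */
        block->len = 0;
        return 1;
    }

The point is simply that per-listener sends operate on near-packet-sized blocks instead of whatever small reads the source happened to deliver.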
Karl Heyes
2005-Jul-28 15:59 UTC
[Icecast-dev] icecast performance on many concurrent low-bitrate streams
On Thu, 2005-07-28 at 23:29, Klaas Jan Wierenga wrote:
> Karl,
>
> I've managed to patch up my branch of Icecast to do the batching. Checked
> everything with valgrind and tested it extensively. It looks good. Tcpdump
> now shows nicely sized frames (mostly 1400 bytes). Any reason why you're
> not setting the MTU to something closer to 1500?

It isn't setting the MTU; I'm just making sure initially that the block size is large enough to make fuller packets. Obviously it's not possible to determine the best size, because it's listener-dependent and we are batching up at the reading stage, not the sending stage.

However, as you have mentioned, the common MTU size is 1500, but that includes the TCP headers, so sending 1400 bytes was a quick, near estimate of a full packet; you can increase the block size by some more. The code currently demonstrates the mechanism more than the absolute maximum.

karl.
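As a back-of-the-envelope check of the numbers above, assuming plain IPv4 and TCP headers with no options (real connections often carry TCP options, such as timestamps, that shrink the payload a little further):

    #include <stdio.h>

    int main(void)
    {
        int mtu     = 1500;   /* common Ethernet MTU */
        int ip_hdr  = 20;     /* IPv4 header, no options */
        int tcp_hdr = 20;     /* TCP header, no options */

        printf("max payload per packet: %d bytes\n",
               mtu - ip_hdr - tcp_hdr);
        /* prints 1460, so a 1400-byte block is a slightly
         * conservative target for one full packet */
        return 0;
    }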
Klaas Jan Wierenga
2005-Jul-29 01:12 UTC
[Icecast-dev] icecast performance on many concurrent low-bitrate streams
Karl,

Sorry I keep going on about this, but I'd like to understand the issues here. I understand that the value of 1400 does not directly determine the MTU, but if a connection has an MTU of at least 1400 bytes plus the TCP/IP headers, then it turns out that (on my configuration of a linux-2.4 kernel) most packets will have a 1400-byte payload; some will have less on a link with an MTU of 1500. On a link with a smaller MTU, almost all packets will be filled to the maximum payload size for that MTU.

What would be the arguments against buffering a little more? Assuming the lowest bitrate is 16 kbit/sec = 2 kbytes/sec, you could set the batching value to 2048. This way you fill packets completely on links with an MTU of up to 2048 bytes plus the TCP/IP headers. Of course, on a real-life system the maximum payload is 1500 minus the TCP/IP headers.

KJ

Karl Heyes wrote:
> On Thu, 2005-07-28 at 23:29, Klaas Jan Wierenga wrote:
>
>> Karl,
>>
>> I've managed to patch up my branch of Icecast to do the batching. Checked
>> everything with valgrind and tested it extensively. It looks good. Tcpdump
>> now shows nicely sized frames (mostly 1400 bytes). Any reason why you're
>> not setting the MTU to something closer to 1500?
>
> It isn't setting the MTU; I'm just making sure initially that the block
> size is large enough to make fuller packets. Obviously it's not possible
> to determine the best size, because it's listener-dependent and we are
> batching up at the reading stage, not the sending stage.
>
> However, as you have mentioned, the common MTU size is 1500, but that
> includes the TCP headers, so sending 1400 bytes was a quick, near estimate
> of a full packet; you can increase the block size by some more. The code
> currently demonstrates the mechanism more than the absolute maximum.
>
> karl.
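One trade-off of a larger batch, worked through with the numbers from the mail above: at 16 kbit/s (2000 bytes/s), a 2048-byte block holds roughly a second of audio, so each listener can lag that much further behind the source before a block is even handed to the send path.

    #include <stdio.h>

    int main(void)
    {
        double byte_rate  = 16000.0 / 8.0;  /* 16 kbit/s = 2000 bytes/s */
        int    block_size = 2048;           /* proposed batching value */

        printf("buffering delay per block: %.2f s\n",
               block_size / byte_rate);
        /* prints 1.02, i.e. about one extra second of latency
         * at the lowest bitrate */
        return 0;
    }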