We're abusing Icecast in a true narrowcasting setup (a personalized stream per mountpoint). The streams themselves are created in a piece of proprietary software; Icecast merely relays them.

However, the intended endpoint is an embedded device. This device has trouble with TCP/IP packets that don't match the maximum packet size (MSS, or MSS minus header). After elaborate testing, we found that using the sockopt TCP_CORK instead of TCP_NODELAY produces far better results in the field, on reconnects etc. Also, with streaming media, TCP_CORK is more efficient than TCP_NODELAY.

Patching Icecast to use TCP_CORK is a piece of cake; it involves no more than 10 lines of code. My question is whether the maintainers would consider bringing this into the main tree. It could be implemented as a configure flag or, even better, as a setting in the config file.

I have already implemented this as a configure flag. If this is considered useful, I will submit the patch I created.

Regards,

Henri Zikken
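For readers unfamiliar with the option, here is a minimal sketch of what such a patch might boil down to on Linux; the helper name is hypothetical and this is not the actual (unposted) patch or the real Icecast code:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Hypothetical helper: enable TCP_CORK on a client socket, in place
   * of the TCP_NODELAY call. While the cork is set, the kernel holds
   * back partial frames and only sends full (MSS-sized) segments. */
  static int enable_tcp_cork (int sock)
  {
      int on = 1;
      return setsockopt (sock, IPPROTO_TCP, TCP_CORK, &on, sizeof (on));
  }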
Make sure your patch accounts for the fact that not all platforms implement TCP_CORK.

/dale

On Dec 25, 2005, at 9:04 AM, Henri Zikken wrote:
> Patching Icecast to use TCP_CORK is a piece of cake; it involves no
> more than 10 lines of code. My question is whether the maintainers
> would consider bringing this into the main tree. [...]
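A hedged sketch of the guard Dale is asking for: Linux has TCP_CORK, the BSDs have the similar TCP_NOPUSH, and other platforms may have neither, so the call should degrade gracefully. The wrapper name is hypothetical:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Hypothetical portability wrapper: cork (on = 1) or uncork (on = 0)
   * a socket, falling back to a no-op where neither option exists. */
  static int set_cork (int sock, int on)
  {
  #if defined(TCP_CORK)            /* Linux */
      return setsockopt (sock, IPPROTO_TCP, TCP_CORK, &on, sizeof (on));
  #elif defined(TCP_NOPUSH)        /* *BSD, Mac OS X */
      return setsockopt (sock, IPPROTO_TCP, TCP_NOPUSH, &on, sizeof (on));
  #else
      (void)sock; (void)on;
      return 0;
  #endif
  }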
Hi Henri and others,

Very interesting post about TCP_CORK. I would be very interested in having it applied in the next version of Icecast.

I'm using Icecast in a somewhat narrowcasting setup with large numbers of sources (> 100) and between 5 and 50 listeners per source. All streaming is done at low bitrates (16-24 kbit/s) and listeners use embedded devices connected via 56k modems. It is therefore very important to make efficient use of the available bandwidth.

For low-bitrate streams the problem originates in the fact that the stream source often produces many small packets (they should be using TCP_CORK too!), which were passed on unchanged by Icecast to each client (again as small write() calls on each socket). This problem was reduced a lot in Icecast 2.3.0, as can be read from its release notes:

  Fixes for 2.3.0
  ...
  * avoid small writes to reduce TCP overhead.

The core of this fix is in the complete_read function from format_mp3.c, which is used in mp3_get_no_meta() and mp3_get_filter_meta():

  /* This does the actual reading, making sure the read data is packaged in
   * blocks of 1400 bytes (near the common MTU size). This is because many
   * incoming streams come in small packets which could waste a lot of
   * bandwidth with many listeners due to headers and such like.
   */
  static int complete_read (source_t *source)

As far as I can see this fix has been applied only for the MP3 format. I guess the problem still remains for other formats; correct me if I am wrong. However, if I understand correctly, your TCP_CORK solution would apply to all formats, since it is applied on the client socket (irrespective of the format).

I would be very interested in having this patch and seeing it applied to Icecast.

Cheers,
KJ

p.s. For an in-depth analysis of TCP_CORK, read Christopher Baus's excellent article: http://www.baus.net/on-tcp_cork
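To make the userspace aggregation idea concrete, here is a simplified stand-in for what complete_read() does, assuming a plain file descriptor; the real Icecast function also deals with metadata and refbufs, so this is only an illustration:

  #include <unistd.h>

  #define BLOCK_SIZE 1400   /* near the common MTU size */

  /* Keep reading until roughly one MTU's worth of data is buffered,
   * so listeners get one large write instead of many small ones.
   * Returns the number of bytes accumulated (short only on EOF/error). */
  static size_t read_full_block (int fd, char *buf)
  {
      size_t filled = 0;
      while (filled < BLOCK_SIZE)
      {
          ssize_t n = read (fd, buf + filled, BLOCK_SIZE - filled);
          if (n <= 0)
              break;            /* EOF or error: flush what we have */
          filled += (size_t) n;
      }
      return filled;
  }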
> p.s. For an in-depth analysis of TCP_CORK, read Christopher Baus's
> excellent article: http://www.baus.net/on-tcp_cork

Thanks for this pointer. I'd been meaning to reply on this thread, but hadn't got around to it, primarily because I didn't really understand TCP_CORK (the Linux manpage is, as usual, fairly unclear on what exactly it does). Now I understand!

> After elaborate testing, we found that using the sockopt TCP_CORK
> instead of TCP_NODELAY produces far better results in the field, on
> reconnects etc. Also, with streaming media, TCP_CORK is more
> efficient than TCP_NODELAY.

This is pretty broken. There are really three possibilities, in order of increasing maximum delay:

1) TCP_NODELAY - what icecast uses, deliberately.
2) default (Nagle) - what we used to do in icecast.
3) TCP_CORK - what you've added.

As a streaming server, it's fairly crucial for Icecast to send out data with as low a delay as possible (many clients don't care, but some do). That's why we use TCP_NODELAY: we actually WANT to send out data as soon as we can. There are limits to how much overhead is reasonable to accept, which is why we do some aggregation in userspace; this aggregation should probably be tuned better, but it's important that Icecast controls it, not the kernel.

You want TCP_CORK, it seems, because of bugs in your target devices. Well, whilst we're willing to make some concessions to broken clients, an inability to speak TCP correctly is well outside what I consider sensible, particularly given that it will degrade Icecast's performance for working clients (you remain welcome, of course, to hack up your local copy). It's also very unportable.

Mike
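To illustrate why TCP_CORK has the largest maximum delay of the three: Linux holds back partial segments on a corked socket (for up to roughly 200 ms) until the application pulls the cork, so a server wanting bounded latency must flush explicitly. A sketch of that idiom, assuming Linux; the helper name is hypothetical, not Icecast code:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Hypothetical flush helper: pulling the cork pushes any queued
   * partial segment onto the wire; re-applying it resumes batching. */
  static void cork_flush (int sock)
  {
      int off = 0, on = 1;
      setsockopt (sock, IPPROTO_TCP, TCP_CORK, &off, sizeof (off));
      setsockopt (sock, IPPROTO_TCP, TCP_CORK, &on, sizeof (on));
  }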
Klaas Jan Wierenga wrote:
> Very interesting post about TCP_CORK. I would be very interested in
> having it applied in the next version of Icecast.

I'd be more interested in some figures showing there to be a benefit; most examples talk about HTTP servers with short-lived connections where sendfile(2) is used.

> For low-bitrate streams the problem originates in the fact that the
> stream source often produces many small packets (they should be using
> TCP_CORK too!), which were passed on unchanged by Icecast to each
> client (again as small write() calls on each socket). This problem
> was reduced a lot in Icecast 2.3.0, as can be read from its release
> notes:

This is exactly why it was implemented: a few people complained about the overhead with large numbers of listeners, not only because of the TCP overhead but also because it reduces the write syscall overhead. Will TCP_CORK (Linux) and TCP_NOPUSH (BSD) give noticeable benefits for Icecast? It might prove helpful where available, but more info is needed.

> As far as I can see this fix has been applied only for the MP3
> format. I guess the problem still remains for other formats; correct
> me if I am wrong.

It applies to all passthrough streams (i.e. MP3, AAC*, NSV); Ogg is different, as it has pages which are generally not really small.

karl