Displaying 20 results from an estimated 6000 matches similar to: "icecast performance on many concurrent low-bitrate streams"
2005 Jul 28
2
icecast performance on many concurrent low-bitrate streams
Hi Karl,
Thanks for your info. I have a standard Icecast-2.2 release with a few local patches. I'm a little apprehensive about applying my patches to the kh14 branch, so I'd rather patch my branch with the changes related to batched reads from the kh branch.
I've looked at your code to see if I could spot the changes related to batching reads. So far I have not been able to find where
2005 Feb 09
2
relation between burst- and queue-size with low bitrate streams
Hi all,
I have a question about the relationship between the "queue-size" and the
"burst-size" <limits> parameters.
Q: When a client connects is the burst-size immediately added to the
queue-size?
If this is the case then the slack for a 16kbps client goes down from 50 sec
(102400[queue-size] / 16kbps = 50 sec) to 18 seconds ((102400 -
65536[burst-size]) / 16kbps = 18 sec).
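(For reference, the figures quoted above can be reproduced with a small C sketch; it assumes both <limits> values are counted in bytes and that "16 kbps" means 16 * 1024 bit/s, which is what the quoted numbers imply.)

    #include <stdio.h>

    int main(void)
    {
        const double queue_size = 102400.0;            /* bytes, <queue-size>     */
        const double burst_size = 65536.0;             /* bytes, <burst-size>     */
        const double bytes_per_sec = 16.0 * 1024 / 8;  /* 16 kbps -> 2048 bytes/s */

        printf("full queue    : %.0f sec\n", queue_size / bytes_per_sec);                /* 50 */
        printf("queue - burst : %.0f sec\n", (queue_size - burst_size) / bytes_per_sec); /* 18 */
        return 0;
    }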
2005 Jul 28
2
icecast performance on many concurrent low-bitrate streams
Karl,
I've managed to patch up my branch of Icecast to do the batching. Checked
everything with valgrind and tested it extensively. It looks good. Tcpdump
now shows nice size frames (mostly 1400 bytes). Any reason why you're not
setting the MTU to something closer to 1500?
Many thanks for your help,
KJ
-----Original Message-----
From: Karl Heyes [mailto:karl@xiph.org]
Sent:
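(A minimal sketch of the batching idea discussed in this thread, not the actual kh-branch code: several small chunks are handed to the kernel in one writev() call so they leave as one near-MTU-sized segment. The 1400-byte target and the helper name send_batched are only illustrative.)

    #include <sys/types.h>
    #include <sys/uio.h>

    #define BATCH_TARGET 1400   /* stay just under a typical 1500-byte MTU */

    /* Coalesce up to 16 chunks (or ~1400 bytes) into a single writev() call. */
    static ssize_t send_batched(int fd, const struct iovec *chunks, int nchunks)
    {
        struct iovec batch[16];
        size_t total = 0;
        int n;

        for (n = 0; n < nchunks && n < 16; n++) {
            if (n > 0 && total + chunks[n].iov_len > BATCH_TARGET)
                break;
            batch[n] = chunks[n];
            total += chunks[n].iov_len;
        }
        return writev(fd, batch, n);   /* one syscall instead of many small sends */
    }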
2005 Feb 09
0
relation between burst- and queue-size with low bitrate streams
On Wed, 2005-02-09 at 20:44, Klaas Jan Wierenga wrote:
> Q: When a client connects is the burst-size immediately added to the
> queue-size?
no.
> If this is the case then the slack for a 16kbps client goes down from 50 sec
> (102400[queue-size] / 16kbps = 50 sec) to 18 seconds ((102400 -
> 65536[burst-size]) / 16kbps = 18 sec).
>
> I'm experiencing quite a few of the
2005 Jul 28
0
icecast performance on many concurrent low-bitrate streams
On Thu, 2005-07-28 at 15:30, Klaas Jan Wierenga wrote:
> Hi Karl,
>
> Thanks for your info. I have a standard Icecast-2.2 release with a few local patches. I'm a little apprehensive about applying my patches to the kh14 branch, so I'd rather patch my branch with the changes related to batched reads from the kh branch.
> I've looked at your code to see if I could spot the changes
2005 Mar 07
2
high CPU load for large # sources?
Hi all,
I have an icecast setup with 20+ sources. During peak times some 20 sources
will be connected with a total of some 250 listeners more-or-less equally
divided over the 20 sources. All streams are running at a measly 16 kbps.
There is enough bandwidth to/from the server. During these peak times I see
very high CPU usage for icecast (98-99%). The system I'm running is an Intel
Celeron
2005 Dec 28
2
Use of TCP_CORK instead of TCP_NODELAY
>
> p.s. For an in-depth analysis of TCP_CORK read Christopher Baus' excellent
> article: http://www.baus.net/on-tcp_cork
Thanks for this pointer. I'd been meaning to reply on this thread, but
hadn't got around to it, primarily because I didn't really understand
TCP_CORK (the linux manpage is, as usual, fairly unclear on what
exactly it does). Now I understand!
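(For anyone else who finds the man page unclear, the basic Linux-only pattern looks roughly like this; a sketch only, the helper name send_corked is made up and error handling is minimal.)

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <string.h>
    #include <unistd.h>

    /* Cork the socket, queue a small header plus the first data block,
     * then uncork: the kernel flushes everything in full-sized segments. */
    static int send_corked(int fd, const char *header, const char *data, size_t len)
    {
        int on = 1, off = 0;

        if (setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof on) < 0)
            return -1;
        if (write(fd, header, strlen(header)) < 0 || write(fd, data, len) < 0)
            return -1;
        /* Removing the cork sends any remaining partial segment immediately. */
        return setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof off);
    }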
2004 Aug 06
1
Second patch again CVS version
On Sun, Feb 24, 2002 at 09:04:03AM +0100, Ricardo Galli wrote:
> Sorry, didn't explain well.
>
> Nagle's algorithm (rfc896) buffers user data until there is no pending acks
> or it can send a full segment (rfc1122).
>
> icecast doesn't need it at all, because it already sends large buffers and
> the time to send the next buffers is relatively very long.
IMO
2023 Aug 07
2
Packet Timing and Data Leaks
On Mon, 7 Aug 2023, Chris Rapier wrote:
> > The broader issue of hiding all potential keystroke timing is not yet fixed.
>
> Could some level of obfuscation come from enabling Nagle for interactive
> sessions that have an associated TTY? Though that would be of limited
> usefulness in low RTT environments. I don't like the idea of having a steady
> drip of packets as that
2005 Mar 07
0
high CPU load for large # sources?
On Mon, 2005-03-07 at 22:01, Klaas Jan Wierenga wrote:
> Hi all,
>
> I have an icecast setup with 20+ sources. During peak times some 20 sources
> will be connected with a total of some 250 listeners more-or-less equally
> divided over the 20 sources. All streams are running at a measly 16 kbps.
> There is enough bandwidth to/from the server. During these peak times I see
>
2006 Jan 24
4
sftp performance problem, cured by TCP_NODELAY
In certain situations sftp download speed can be much less than that
of scp.
After many days of trying to find the cause finally I found it to be
the tcp nagle algorithm, which if turned off with TCP_NODELAY
eliminates the problem.
Now I see it was being discussed back in 2002, but it's still unresolved in
openssh-4.2 :(
Simple solution would be to add a NoDelay option to ssh which sftp
would set.
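(The proposed fix amounts to one setsockopt() call on the connected socket; a minimal sketch, assuming fd is a connected TCP socket.)

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle so small sftp requests are sent immediately
     * instead of waiting for outstanding ACKs. */
    static int disable_nagle(int fd)
    {
        int on = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof on);
    }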
2012 Oct 17
6
SuSE Linux Enterprise Server OpenSSH 5.1p1 nagle issue?
I have a system in place where it appears that TCP will make a massive
change in behavior mid-stream with existing SSH sessions. We noticed the
issue first with an application using an SSH forward. However, we were
able to rule that out by generating the same TCP characteristics by
having a perl script dump text out to a terminal simulating a large data
flow from the far end (ssh server) back
2005 Jul 26
2
Icecast/ices problem
Thanks for the suggestion. It turns out that debug for ices told me
nothing but debug for icecast 2.2.0 showed that it was terminating the
source at the same place in the playlist, apparently due to a lack of
trailing metadata in one particular file -- icecast saw it as end of
stream.
After checking code updates on the TRAC system, I installed the kh branch
of icecast (kh13) and this seems
2004 Aug 06
4
Second patch again CVS version
On 24/02/02 05:02, Jack Moffitt shaped the electrons to say:
> > - The server didn't check for the status of the client's socket before
> > the unblocking send(). This caused a disconnection at the slightest network
> > congestion, causing a broken pipe error (Linux 2.4 behaviour?) in the
> > network. I've just added a poll in sock.c.
> Can you send me this
2005 Dec 28
0
Use of TCP_CORK instead of TCP_NODELAY
Michael,
With regard to your comment below:
> As a streaming server, it's fairly crucial for icecast to send out
> data with as low a delay as possible (many clients don't care, but
> some do). That's why we use TCP_NODELAY - we actually WANT to send out
> data as soon as we can.
Can you explain how some clients depend on a low delay when receiving data
from icecast? How
2020 Nov 04
2
parallel PSOCK connection latency is greater on Linux?
I'm not sure the user would know ;). This is a very system-specific issue just because the Linux network stack behaves so differently from other OSes (for purely historical reasons). That makes it hard to abstract as a "feature" for the R sockets that are supposed to be platform-independent. At least TCP_NODELAY is actually part of POSIX so it is on better footing, and disabling
2004 Nov 18
0
FW: Dumping streams to a file?
Yes that is the plan. I'll have to find some time to graft the patch onto
2.1 mainline and post it here.
KJ
-----Original Message-----
From: Myke Place [mailto:mp@trans.xmission.com] On behalf of Myke Place
Sent: Thursday, 18 November 2004 22:14
To: Klaas Jan Wierenga
Subject: Re: [Icecast] Dumping streams to a file?
Is the plan to eventually move this from -trunk to the mainline
2006 Feb 03
1
RE: 5,000 concurrent calls system rollout question
There you go. "if it is doing no other work" is the key phrase. A lot of PCs can do that these days if all they have to do is re-route packets to different destinations, and guess what, if you make sure silence compression is turned on at the endpoints, you can claim even more streams can be passed through. The trick here is how * stores the mapping pair and how efficient its lookup process is.
2005 Aug 04
2
Icecast Instalation
Hello All,
I've tried to install the Icecast streaming server on my Slackware 10
without success.
Could someone help me with it? And what kind of modules can I set up?
Best Regards.
2009 Sep 28
1
is glusterfs DHT really distributed?
Hi All,
I noticed a very weird phenomenon when I'm copying data (200KB image
files) to our glusterfs storage. When I run only one client, it copies
roughly 20 files per second, and as soon as I start a second client on
another machine, the copy rate of the first client immediately degrades
to 5 files per second. When I stop the second client, the first client
will immediately speed up