similar to: Use of TCP_CORK instead of TCP_NODELAY

Displaying 20 results from an estimated 1000 matches similar to: "Use of TCP_CORK instead of TCP_NODELAY"

2005 Dec 28
0
Use of TCP_CORK instead of TCP_NODELAY
Michael,

With regard to your comment below:

> As a streaming server, it's fairly crucial for icecast to send out
> data with as low a delay as possible (many clients don't care, but
> some do). That's why we use TCP_NODELAY - we actually WANT to send out
> data as soon as we can.

Can you explain how some clients depend on a low delay when receiving data from icecast? How
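To make the mechanism concrete, here is a minimal sketch (mine, not icecast's actual code) of what TCP_NODELAY does on a Linux TCP socket:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm: queued data is sent as soon as
     * possible instead of waiting for outstanding ACKs. */
    static int set_nodelay(int fd)
    {
        int on = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    }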
2005 Dec 28
2
Use of TCP_CORK instead of TCP_NODELAY
> p.s. For an in-depth analysis of TCP_CORK read Christopher Baus' excellent
> article: http://www.baus.net/on-tcp_cork

Thanks for this pointer. I'd been meaning to reply on this thread, but hadn't got around to it, primarily because I didn't really understand TCP_CORK (the Linux manpage is, as usual, fairly unclear on what exactly it does). Now I understand!
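For readers who, like the poster, find the manpage unclear: a minimal sketch of the TCP_CORK pattern the article describes (illustrative buffers, error handling omitted). While corked, the kernel transmits only full-sized segments; partial segments are held back until the cork is removed:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void write_corked(int fd, const void *a, size_t alen,
                             const void *b, size_t blen)
    {
        int on = 1, off = 0;
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
        write(fd, a, alen);              /* queued, maybe not yet sent */
        write(fd, b, blen);
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off)); /* flush */
    }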
2005 Dec 28
0
Use of TCP_CORK instead of TCP_NODELAY
Hi Henri and others,

Very interesting post about TCP_CORK. I would be very interested in having it applied in the next version of Icecast. I'm using Icecast in a somewhat narrowcasting setup with large numbers of sources (> 100) and between 5 and 50 listeners per source. All streaming is done at low bitrates (16-24 kbit/s) and listeners use embedded devices connected by 56k modems. It
2005 Dec 25
4
Use of TCP_CORK instead of TCP_NODELAY
We're abusing icecast in a true narrowcasting setup (personalized stream per mountpoint). The streams themselves are created in a piece of proprietary software; icecast merely relays them. However, the intended endpoint is an embedded device. This device has trouble with TCP/IP packets not matching the max. packet size (MSS or MSS minus header). After elaborate testing,
2019 Jun 06
0
[nbdkit PATCH 1/2] server: Add support for corking
Any time we reply to NBD_CMD_READ or NBD_CMD_BLOCK_STATUS, we end up calling conn->send() more than once. Now that we've disabled Nagle's algorithm, the small header is pushed out immediately rather than batched with the rest of the payload, which increases the amount of actual network traffic. For interfaces that support corking (gnutls, or
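On Linux, the same grouping can also be expressed per call with the MSG_MORE flag; a hypothetical sketch (not nbdkit's actual conn->send() code) of sending a small reply header glued to its payload:

    #include <sys/socket.h>

    /* MSG_MORE tells the kernel more data follows, so the small
     * header is held back and merged into the payload's segment. */
    static ssize_t send_reply(int fd, const void *hdr, size_t hlen,
                              const void *payload, size_t plen)
    {
        if (send(fd, hdr, hlen, MSG_MORE) < 0)
            return -1;
        return send(fd, payload, plen, 0);  /* no flag: flush now */
    }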
2019 Jun 06
0
[nbdkit PATCH 2/2] server: Cork around grouped transmission send()s
As mentioned in the previous patch, sending small packets as soon as possible leads to network packet overhead. Grouping related writes under corking appears to help everything but plain unencrypted Unix sockets. I tested with appropriate combinations from:

    $ nbdkit {-p 10810,-U -} \
      {--tls=require --tls-verify-peer --tls-psk=./keys.psk,} memory size=64m \
      --run
2005 Dec 29
1
Use of TCP_CORK instead of TCP_NODELAY
Klaas Jan Wierenga wrote:

>> This is exactly why it was implemented, a few people complained about
>> the overhead with large numbers of listeners, not only because of the
>> TCP overhead but also the fact that it reduces the write syscall
>> overhead. Will TCP_CORK (Linux) and TCP_NOPUSH (BSD) give noticeable
>> benefits wrt icecast? It might prove helpful if available but
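A portability sketch of the Linux/BSD split raised in the question (assuming a plain TCP socket; the option differs only in name):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Linux calls the option TCP_CORK; the BSDs call it TCP_NOPUSH. */
    static int set_cork(int fd, int on)
    {
    #if defined(TCP_CORK)
        return setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
    #elif defined(TCP_NOPUSH)
        return setsockopt(fd, IPPROTO_TCP, TCP_NOPUSH, &on, sizeof(on));
    #else
        (void)fd; (void)on;
        return 0;  /* option not available on this platform */
    #endif
    }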
2005 Dec 28
2
Use of TCP_CORK instead of TCP_NODELAY
Klaas Jan Wierenga wrote:

> Hi Henri and others,
>
> Very interesting post about TCP_CORK. I would be very interested in having
> it applied in the next version of Icecast.

I'd be more interested in some figures showing an actual benefit; most examples talk about HTTP servers with short-lived connections where sendfile(2) is used.

> For low-bitrate streams the problem
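The sendfile(2) case those examples describe looks roughly like this sketch (illustrative names; a typical HTTP response path, not Icecast code):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Cork, write the headers, sendfile() the body, uncork to flush:
     * the headers and the first chunk of the file share one segment. */
    static void http_send(int sock, int file_fd, const char *hdrs,
                          size_t hlen, size_t flen)
    {
        int on = 1, off = 0;
        off_t offset = 0;
        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
        write(sock, hdrs, hlen);
        sendfile(sock, file_fd, &offset, flen);
        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    }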
2023 Mar 24
1
[PATCH 1/1] nbd/server: push pending frames after sending reply
On Fri, Mar 24, 2023 at 11:47:20AM +0100, Florian Westphal wrote:

> qemu-nbd doesn't set TCP_NODELAY on the tcp socket.
>
> Kernel waits for more data and avoids transmission of small packets.
> Without TLS this is barely noticeable, but with TLS this really shows.
>
> Booting a VM via qemu-nbd on localhost (with tls) takes more than
> 2 minutes on my system. tcpdump
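A plain-socket sketch of the idea in the subject line (qemu's real code goes through its own channel layer): clearing TCP_CORK after the reply makes the kernel push pending frames right away instead of waiting on Nagle:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static void send_reply_pushed(int fd, const void *buf, size_t len)
    {
        int on = 1, off = 0;
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
        send(fd, buf, len, 0);
        /* Uncorking pushes any pending frames immediately. */
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    }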
2005 Dec 29
0
Use of TCP_CORK instead of TCP_NODELAY
> This is exactly why it was implemented, a few people complained about
> the overhead with large numbers of listeners, not only because of the
> TCP overhead but also the fact that it reduces the write syscall
> overhead. Will TCP_CORK (Linux) and TCP_NOPUSH (BSD) give noticeable
> benefits wrt icecast? It might prove helpful if available but more info
> is needed.

As
2006 Jan 24
4
sftp performance problem, cured by TCP_NODELAY
In certain situations sftp download speed can be much less than that of scp. After many days of trying to find the cause, I finally found it to be the TCP Nagle algorithm, which, if turned off with TCP_NODELAY, eliminates the problem. Now I see it was being discussed back in 2002, but it is still unresolved in openssh-4.2 :( A simple solution would be to add a NoDelay option to ssh which sftp would set.
2010 Oct 10
3
pop3 TCP_CORK too late error
I was stracing a pop3 process and noticed that the TCP_CORK option isn't set soon enough:

    epoll_wait(8, {{EPOLLOUT, {u32=37481984, u64=37481984}}}, 38, 207) = 1
    write(41, "iTxPBrNlaNFao+yQzLhuO4/+tQ5cuiKSe"..., 224) = 224
    epoll_ctl(8, EPOLL_CTL_MOD, 41, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=37481984, u64=37481984}}) = 0
    pread(19,
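What the trace suggests should happen instead, as a sketch with illustrative names: cork before the first write of the response, so the 224-byte reply isn't pushed out on its own:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void send_response(int fd, const char *reply, size_t len)
    {
        int on = 1, off = 0;
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on)); /* cork first */
        write(fd, reply, len);                                  /* then write */
        /* ... any further writes belonging to the same response ... */
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    }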
2019 Jun 06
4
[nbdkit PATCH 0/2] Reduce network overhead with corking
Slightly RFC, as I need more time to investigate why Unix sockets appeared to degrade with this patch. But as TCP sockets (over loopback to localhost) and TLS sessions (regardless of underlying Unix or TCP) both showed improvements, this looks like a worthwhile series.

Eric Blake (2):
  server: Add support for corking
  server: Cork around grouped transmission send()s

 server/internal.h | 3
2004 Aug 06
1
Second patch again CVS version
On Sun, Feb 24, 2002 at 09:04:03AM +0100, Ricardo Galli wrote:

> Sorry, didn't explain well.
>
> Nagle's algorithm (rfc896) buffers user data until there are no pending ACKs
> or it can send a full segment (rfc1122).
>
> icecast doesn't need it at all, because it already sends large buffers and
> the time to send the next buffers is relatively very long.

IMO
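The quoted rule from RFC 896, restated as C-like pseudocode (my paraphrase, not kernel code):

    #include <stdbool.h>
    #include <stddef.h>

    /* A small write is delayed while earlier data is unacknowledged;
     * a full segment, or an idle connection, may send at once. */
    static bool nagle_may_send(size_t queued, size_t mss, bool unacked)
    {
        if (queued >= mss)
            return true;   /* full segment: always transmit */
        if (!unacked)
            return true;   /* nothing in flight: transmit now */
        return false;      /* otherwise wait for an ACK or more data */
    }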
2002 Jan 26
7
[PATCH] Added NoDelay config option and nodelay subsystem option
Hello again! Since there was some resistance against adding TCP_NODELAY unconditionally, I've made another patch. The new patch contains the following:

* Added a NoDelay yes/no (default no) config option to ssh and sshd
* Added -oNoDelay=yes to the ssh command line for sftp.
* Changed the sshd subsystem config option syntax from "Subsystem name path" to "Subsystem name options path"
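A hypothetical sshd_config fragment using the patch's extended syntax (the sftp-server path is illustrative):

    # Global default: Nagle stays on
    NoDelay no
    # Per-subsystem override: "Subsystem name options path"
    Subsystem sftp nodelay /usr/libexec/sftp-server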
2002 Jan 31
1
Use of TCP_NODELAY in commercial SSH
In order to test my overlapping request path for sftp on another ssh server, I downloaded ssh2 version 3.1.0 from ssh.com. Having downloaded it, I decided to study the use of TCP_NODELAY in that implementation. Here's what I found:

* Both ssh2 and sshd2 have a NoDelay config option which is false by default.
* The ssh2 client does not enable or disable NoDelay because of a channel
2006 Dec 20
1
Nagle & delayed ACK strike again
This time the problem is that the ssh server only sets TCP_NODELAY for interactive (tty) sessions or if X11 forwarding is enabled. Neither is true for the sftp subsystem, which hurts upload performance for sftp/sshfs. I'm not sure why this hasn't cropped up earlier. Were there any TCP_NODELAY-related changes in the sshd code recently? Is there a reason not to
2012 Oct 17
6
SuSE Linux Enterprise Server OpenSSH 5.1p1 nagle issue?
I have a system in place where it appears that TCP will make a massive change in behavior mid-stream with existing SSH sessions. We noticed the issue first with an application using an SSH forward. However, we were able to rule that out by generating the same TCP characteristics with a perl script dumping text to a terminal, simulating a large data flow from the far end (SSH server) back
2019 Jun 10
2
[nbdkit PATCH] crypto: Tweak handling of SEND_MORE
In the recent commit 3842a080 to add SEND_MORE support, I blindly implemented the tls code as:

    if (SEND_MORE) {
        cork
        send
    } else {
        send
        uncork
    }

because it showed improvements for my test case of aio-parallel-load from libnbd. But that test sticks to 64k I/O requests. With further investigation, I've learned that even though gnutls corking works great for smaller
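The corking half of that pseudocode maps onto real gnutls calls; a minimal sketch (buffer names are illustrative):

    #include <gnutls/gnutls.h>

    /* Queue several application writes into one TLS record, then
     * flush; GNUTLS_RECORD_WAIT retries until everything is sent. */
    static void tls_send_grouped(gnutls_session_t s,
                                 const void *hdr, size_t hlen,
                                 const void *body, size_t blen)
    {
        gnutls_record_cork(s);
        gnutls_record_send(s, hdr, hlen);
        gnutls_record_send(s, body, blen);
        gnutls_record_uncork(s, GNUTLS_RECORD_WAIT);
    }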
2009 Sep 28
1
is glusterfs DHT really distributed?
Hi All, I noticed a very weird phenomenon when I'm copying data (200KB image files) to our glusterfs storage. When I run only one client, it copies roughly 20 files per second, and as soon as I start a second client on another machine, the copy rate of the first client immediately degrades to 5 files per second. When I stop the second client, the first client will immediately speed up