Displaying 20 results from an estimated 900 matches similar to: "[nbdkit PATCH 0/2] Reduce network overhead with corking"

2019 Jun 07
4
[nbdkit PATCH v2 0/2] Reduce network overhead with MSG_MORE/corking
This time around, the numbers are indeed looking better than in v1, and I like the interface better.

Eric Blake (2):
  server: Prefer send() over write()
  server: Group related transmission send()s

 server/internal.h    |  7 +++-
 server/connections.c | 51 +++++++++++++++++++++++++---
 server/crypto.c      | 11 ++++--
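The first patch's idea in miniature: on a socket, send(fd, buf, len, 0) behaves like write(fd, buf, len) but also accepts flags such as MSG_MORE. A minimal sketch of a full-write loop built on send() follows; xsend is an illustrative name, not nbdkit's actual helper.

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Write the whole buffer with send(), retrying short sends, so that
 * callers can pass flags such as MSG_MORE. Illustrative only. */
static int
xsend (int sock, const char *buf, size_t len, int flags)
{
  while (len > 0) {
    ssize_t r = send (sock, buf, len, flags);
    if (r == -1) {
      if (errno == EINTR)
        continue;
      return -1;
    }
    buf += r;
    len -= r;
  }
  return 0;
}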
2019 Jun 06
0
[nbdkit PATCH 2/2] server: Cork around grouped transmission send()s
As mentioned in the previous patch, sending small packets as soon as possible leads to network packet overhead. Grouping related writes under corking appears to help everything but plain unencrypted Unix sockets. I tested with appropriate combinations from:

$ nbdkit {-p 10810,-U -} \
    {--tls=require --tls-verify-peer --tls-psk=./keys.psk,} memory size=64m \
    --run
2019 Jun 06
0
[nbdkit PATCH 1/2] server: Add support for corking
Any time we reply to NBD_CMD_READ or NBD_CMD_BLOCK_STATUS, we end up calling conn->send() more than once. Now that we've disabled Nagle's algorithm, this means we try harder to send the small header immediately rather than batching it with the rest of the payload, which adds overhead in the form of extra network traffic. For interfaces that support corking (gnutls, or
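For the gnutls case, corking is a pair of library calls. A sketch under assumed wrapper names (the real hooks live in nbdkit's server/crypto.c):

#include <gnutls/gnutls.h>

/* While corked, gnutls_record_send() only queues plaintext; uncorking
 * flushes the queue as one (or a few) TLS records. Wrapper names here
 * are illustrative, not nbdkit's internal API. */
static void
crypto_cork (gnutls_session_t session)
{
  gnutls_record_cork (session);
}

static int
crypto_uncork (gnutls_session_t session)
{
  /* GNUTLS_RECORD_WAIT: block until everything queued is flushed. */
  return gnutls_record_uncork (session, GNUTLS_RECORD_WAIT) < 0 ? -1 : 0;
}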
2019 Jun 10
2
[nbdkit PATCH] crypto: Tweak handling of SEND_MORE
In the recent commit 3842a080 to add SEND_MORE support, I blindly implemented the tls code as:

  if (SEND_MORE) { cork; send; } else { send; uncork; }

because it showed improvements for my test case of aio-parallel-load from libnbd. But that test sticks to 64k I/O requests. With further investigation, I've learned that even though gnutls corking works great for smaller
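Spelled out as code, the v1 logic being tweaked looks roughly like this sketch; SEND_MORE is the flag named in the commit above, while the stand-in definition and the function name are assumptions:

#include <gnutls/gnutls.h>

#define SEND_MORE 1   /* stand-in for nbdkit's internal flag bit */

static ssize_t
crypto_send (gnutls_session_t session, const void *buf, size_t len,
             int flags)
{
  if (flags & SEND_MORE) {
    gnutls_record_cork (session);          /* cork, then send */
    return gnutls_record_send (session, buf, len);
  }
  ssize_t r = gnutls_record_send (session, buf, len);  /* send, then uncork */
  if (gnutls_record_uncork (session, GNUTLS_RECORD_WAIT) < 0)
    return -1;
  return r;
}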
2019 Mar 18
3
[PATCH nbdkit 0/2] server: Split out NBD protocol code from connections code.
These are a couple of patches in preparation for the Block Status implementation. While the patches (especially the second one) are very large, they are really just elementary code motion. Rich.
2020 Feb 11
4
[PATCH nbdkit v2 0/3] server: Remove explicit connection parameter.
v1 was here: https://www.redhat.com/archives/libguestfs/2020-February/msg00081.html v2 replaces "struct connection *conn = GET_CONN;" with plain "GET_CONN;", which sets conn implicitly and asserts that it is non-NULL. If you actually want to test whether conn is non-NULL, or to behave differently, you must use threadlocal_get_conn() instead, and some existing uses do that. Rich.
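A sketch of what such a macro could look like; only threadlocal_get_conn() is named in the text above, the rest (including the declarations) is assumed for illustration:

#include <assert.h>

struct connection;                               /* opaque here */
extern struct connection *threadlocal_get_conn (void);

/* Declares conn from thread-local storage and asserts it is set. */
#define GET_CONN                                     \
  struct connection *conn = threadlocal_get_conn (); \
  assert (conn != NULL)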
2020 Feb 11
5
[PATCH nbdkit 0/3] server: Remove explicit connection parameter.
The third patch is a large but mechanical change which gets rid of passing around struct connection * entirely within the server, preferring instead to reference the connection through thread-local storage. I hope this is a gateway to simplifying other parts of the code. Rich.
2005 Sep 13
1
Solaris build failed
dovecot-v1.0-alpha failed to build on Solaris 11 (OpenSolaris Nevada). The problematic line is socket.c line 228. The fix is to change SOL_TCP to IPPROTO_TCP, which is defined in <netinet/in.h>; this change should work on all platforms. Gary
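The portable spelling of that fix, as a sketch: SOL_TCP is a Linux-only alias, IPPROTO_TCP is the standard level. TCP_NODELAY is shown only for illustration; the actual option set at socket.c line 228 may differ.

#include <netinet/in.h>   /* IPPROTO_TCP -- portable, Solaris included */
#include <netinet/tcp.h>  /* TCP_NODELAY */
#include <sys/socket.h>

static int
set_nodelay (int fd)
{
  int on = 1;
  /* IPPROTO_TCP instead of the Linux-only SOL_TCP */
  return setsockopt (fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof on);
}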
2023 Mar 24
1
[PATCH 1/1] nbd/server: push pending frames after sending reply
On Fri, Mar 24, 2023 at 11:47:20AM +0100, Florian Westphal wrote:
> qemu-nbd doesn't set TCP_NODELAY on the tcp socket.
>
> Kernel waits for more data and avoids transmission of small packets.
> Without TLS this is barely noticeable, but with TLS this really shows.
>
> Booting a VM via qemu-nbd on localhost (with tls) takes more than
> 2 minutes on my system. tcpdump
2010 Oct 10
3
pop3 TCP_CORK too late error
I was stracing a pop3 process and noticed that the TCP_CORK option isn't set soon enough:

epoll_wait(8, {{EPOLLOUT, {u32=37481984, u64=37481984}}}, 38, 207) = 1
write(41, "iTxPBrNlaNFao+yQzLhuO4/+tQ5cuiKSe"..., 224) = 224
epoll_ctl(8, EPOLL_CTL_MOD, 41, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=37481984, u64=37481984}}) = 0
pread(19,
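The fix the trace suggests, as a sketch: set TCP_CORK before the first write of the response and clear it after the last, so the kernel coalesces everything into full-sized packets. TCP_CORK is Linux-specific, and the helper name is illustrative.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void
tcp_cork (int fd, int corked)
{
  /* corked = 1: hold back partial frames; corked = 0: flush the queue. */
  setsockopt (fd, IPPROTO_TCP, TCP_CORK, &corked, sizeof corked);
}

/* usage: tcp_cork (fd, 1); write header; write body; tcp_cork (fd, 0); */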
2005 Dec 28
2
Use of TCP_CORK instead of TCP_NODELAY
> > p.s. For an in-depth analysis of TCP_CORK read Christopher Baus' excellent
> > article: http://www.baus.net/on-tcp_cork

Thanks for this pointer. I'd been meaning to reply on this thread, but hadn't got around to it, primarily because I didn't really understand TCP_CORK (the Linux manpage is, as usual, fairly unclear about what exactly it does). Now I understand!
2019 Jun 07
0
[nbdkit PATCH v2 2/2] server: Group related transmission send()s
We disabled Nagle's algorithm so that our responses reach the client with less latency; but as a side effect, it leads to more network overhead when we send a reply split across more than one write(). Take advantage of various means for grouping related writes (Linux's MSG_MORE for sockets, gnutls' corking for TLS) to send a larger packet, and adjust callers to pass in our internal
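The Linux-socket half of that grouping, as a sketch: flag the header send with MSG_MORE so the kernel holds the small header back and coalesces it with the payload. Names are illustrative, and short-send handling is omitted for brevity.

#include <sys/socket.h>

static int
send_reply (int sock, const void *hdr, size_t hdrlen,
            const void *payload, size_t paylen)
{
  /* MSG_MORE: more data follows, so delay transmission and coalesce. */
  if (send (sock, hdr, hdrlen, MSG_MORE) == -1)
    return -1;
  return send (sock, payload, paylen, 0) == -1 ? -1 : 0;
}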
2005 Dec 28
0
Use of TCP_CORK instead of TCP_NODELAY
> As a streaming server, it's fairly crucial for icecast to
> send out data with as low a delay as possible (many clients
> don't care, but some do). That's why we use TCP_NODELAY - we
> actually WANT to send out data as soon as we can.

Nagle is inherently unsuited for streams. NODELAY was (imho) meant for connections for which Nagle isn't sufficient and CORK is not
2017 Nov 17
8
[RFC nbdkit PATCH 0/6] Enable full parallel request handling
I want to make my nbd forwarding plugin fully parallel - but to do that, I first need to make nbdkit itself fully parallel ;) With this series, I was finally able to demonstrate out-of-order responses when using qemu-io (which is great at sending back-to-back requests prior to waiting for responses) coupled with the nbd file plugin (which has a great feature of rdelay and wdelay, to make it
2005 Dec 25
4
Use of TCP_CORK instead of TCP_NODELAY
We're abusing icecast in a true narrowcasting setup (personalized stream per mountpoint). The streams themselves are created in a piece of proprietary software; icecast merely relays them. However, the intended endpoint is an embedded device. This device has trouble with tcp/ip packets not matching the max. packet size (MSS, or MSS minus header). After elaborate testing,
2019 Mar 20
15
[PATCH nbdkit 0/8] Implement extents using a simpler array.
Not sure what version we're up to, but this reimplements extents using the new simpler structure described in this thread: https://www.redhat.com/archives/libguestfs/2019-March/msg00077.html I also fixed most of the things that Eric pointed out in the previous review, although I need to go back over his replies and check I've got everything. This needs a bit more testing. However the
2006 Jan 21
1
Is sip1.voipbuster.com corking reliably for others on list?
I am trying to move from IAX2 to SIP for voipbuster, moving at the same time to sip1.voipbuster.com. When I try calling out, I see that there is a SIP exchange, and in many cases also RTP data being exchanged. However, in a very large number of attempts the connection is not established. Half of the time there is no RTP; the rest of the time there *is* RTP data flowing both ways, but no ringtone is
2005 Dec 28
2
Use of TCP_CORK instead of TCP_NODELAY
Klaas Jan Wierenga wrote:
> Hi Henri and others,
>
> Very interesting post about TCP_CORK. I would be very interested in having
> it applied in the next version of Icecast.

I'd be more interested in some figures showing that there's a benefit; most examples talk about HTTP servers with short-lived connections where sendfile(2) is used.

> For low-bitrate streams the problem
2019 Jun 04
2
Re: [PATCH libnbd v2 3/4] api: Implement concurrent writer.
There are several races / deadlocks which I've thought about. Let's see if I can remember them all ... (1) This one I experienced: nbd_aio_get_fd deadlocks if there are concurrent synchronous API calls going on. A typical case is where you set up the concurrent writer thread before connecting and then call a synchronous connect function such as connect_tcp. The synchronous function grabs
2016 Apr 12
2
Slow reading of large dovecot-uidlist files
On Tue, 12 Apr 2016, Bostjan Skufca wrote:
> On 12 April 2016 at 10:23, A.L.E.C <alec at alec.pl> wrote:
>
>> I don't know dovecot's code, but I suppose it uses uidlist file to get
>> mailbox statistics that it returns as EXISTS, RECENT, UNSEEN, UIDNEXT,
>> UIDVALIDITY, etc, which are required by IMAP standard. I