Displaying 20 results from an estimated 85 matches for "msg_more".
2019 Jun 08
0
[PATCH libnbd 3/3] states: Use MSG_MORE to coalesce messages into single packets.
Since we disabled Nagle's algorithm, we may send very small packets
over the wire in situations where we call send(2) from states that
are responsible for small parts of the protocol. By setting the
MSG_MORE flag we indicate to the kernel that more data will (usually)
follow immediately, so it can append the data to the same outgoing
packet.
Although there is some variability in the test, there is a measurable
benefit. Using this test:
$ time nbdkit memory 100M --run 'examples/threaded-r...
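As a rough illustration of the idea (a minimal sketch, not the libnbd code itself; the helper name is made up): a small fixed-size header can be sent with MSG_MORE so the kernel holds it back until the payload that follows arrives, even with TCP_NODELAY set.
  #include <sys/socket.h>

  /* Send a header immediately followed by its payload.  MSG_MORE on the
   * first send() tells the kernel more data is coming, so both pieces
   * can leave in a single TCP segment. */
  static int send_request (int sock, const void *hdr, size_t hdrlen,
                           const void *payload, size_t paylen)
  {
    if (send (sock, hdr, hdrlen, MSG_MORE) != (ssize_t) hdrlen)
      return -1;                /* treat short sends as failure for brevity */
    if (send (sock, payload, paylen, 0) != (ssize_t) paylen)
      return -1;
    return 0;
  }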
2019 Jun 08
6
[PATCH libnbd 0/3] states: Use MSG_MORE to coalesce messages.
Appears to have a measurable benefit, see 3/3 for test results.
Rich.
2023 Jun 17
2
[PATCH net-next v2 17/17] net: Kill MSG_SENDPAGE_NOTLAST
Now that ->sendpage() has been removed, MSG_SENDPAGE_NOTLAST can be cleaned
up. Things were converted to use MSG_MORE instead, but the protocol
sendpage stubs still convert MSG_SENDPAGE_NOTLAST to MSG_MORE, which is now
unnecessary.
Signed-off-by: David Howells <dhowells at redhat.com>
cc: "David S. Miller" <davem at davemloft.net>
cc: Eric Dumazet <edumazet at google.com>
cc: Jakub Ki...
2019 Jun 12
3
[libnbd PATCH 0/2] More with MSG_MORE
I'm not sure if this is worth pursuing. On paper, it makes sense (if
we know we have multiple commands batched to send over the wire, AND
those commands are short in length, we might as well use MSG_MORE),
but the measurement numbers with it applied might just be in the
noise.
Eric Blake (2):
examples: Enhance access patterns of threaded-reads-and-writes
states: Another use for MSG_MORE
examples/threaded-reads-and-writes.c | 12 ++++++++----
generator/states-issue-command.c | 4 +++-
2...
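A minimal sketch of the heuristic described in that cover letter (the helper name is hypothetical; the 64K threshold mirrors the follow-up patch quoted below): only hint MSG_MORE when another command is already queued and the current payload is small enough that coalescing can pay off.
  #include <stdbool.h>
  #include <stddef.h>
  #include <sys/socket.h>

  /* Return the flag to pass to send(): MSG_MORE only if more work is
   * queued and the payload is small, otherwise 0. */
  static int msg_more_flag (bool another_cmd_queued, size_t payload_len)
  {
    return (another_cmd_queued && payload_len < 64 * 1024) ? MSG_MORE : 0;
  }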
2019 Jun 12
0
[libnbd PATCH 2/2] states: Another use for MSG_MORE
...generator/states-issue-command.c
@@ -42,7 +42,7 @@
h->request.count = htobe32 ((uint32_t) cmd->count);
h->wbuf = &h->request;
h->wlen = sizeof (h->request);
- if (cmd->type == NBD_CMD_WRITE)
+ if (cmd->type == NBD_CMD_WRITE || cmd->next)
h->wflags = MSG_MORE;
SET_NEXT_STATE (%SEND_REQUEST);
return 0;
@@ -70,6 +70,8 @@
if (cmd->type == NBD_CMD_WRITE) {
h->wbuf = cmd->data;
h->wlen = cmd->count;
+ if (cmd->next && cmd->count < 64 * 1024)
+ h->wflags = MSG_MORE;
SET_NEXT_STATE (%SEND_WRITE...
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint to the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and submit
them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement for guest pktgen over
mlx4 (noqueue) on the host:
Mpps -...
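A hedged userspace analogue of the hint described above (the real code lives in the vhost/tun drivers; the function name is made up): the sender sets MSG_MORE on sendmsg() whenever it knows more packets are already queued, so the backend may hold the packet and batch it with what follows.
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Send one packet, hinting MSG_MORE when the caller knows more
   * packets are pending so the receiver can batch them. */
  static ssize_t send_packet (int fd, void *pkt, size_t len, int more_pending)
  {
    struct iovec iov = { .iov_base = pkt, .iov_len = len };
    struct msghdr msg;

    memset (&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    return sendmsg (fd, &msg, more_pending ? MSG_MORE : 0);
  }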
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint to the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and submit
them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement for guest pktgen over
mlx4 (noqueue) on the host:
Mpps -...
2019 Jun 09
1
Re: [PATCH libnbd 2/3] states: Add handle h->wflags field.
...updated patch.
Rich.
>From 15a687b50acecebcfd3dc6222d93e6df984b83c6 Mon Sep 17 00:00:00 2001
From: "Richard W.M. Jones" <rjones@redhat.com>
Date: Sat, 8 Jun 2019 19:12:22 +0100
Subject: [PATCH] states: Add handle h->wflags field.
This field contains optimization flags (i.e. MSG_MORE) which are passed
through to the socket layer if it supports them. The flags are reset
automatically when we move to another state.
---
generator/states.c | 10 +++++++---
lib/internal.h | 1 +
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/generator/states.c b/generator/stat...
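A rough sketch of that idea (only the wflags name comes from the patch; the rest is illustrative): the handle carries per-send optimization flags, and they are cleared on every state transition so a hint like MSG_MORE never leaks into the next send.
  /* Per-handle send flags, reset whenever the state machine moves on. */
  struct handle_sketch {
    int wflags;                 /* e.g. MSG_MORE for the next send() */
    /* ... other handle fields ... */
  };

  static void enter_next_state (struct handle_sketch *h)
  {
    h->wflags = 0;              /* flags apply to one state only */
  }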
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint to the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and submit
them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement for guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
rx_bat...
2016 Dec 31
1
[PATCH net-next V3 3/3] tun: rx batching
...f (!rx_batched) {
> + local_bh_disable();
> + netif_receive_skb(skb);
> + local_bh_enable();
> + } else {
> + tun_rx_batched(tfile, skb, more);
> + }
> #else
> netif_rx_ni(skb);
> #endif
If rx_batched has been set, and we are talking to clients not using
this new MSG_MORE facility (or such clients don't have multiple TX
packets to send to you, thus MSG_MORE is often clear), you are doing a
lot more work per-packet than the existing code.
You take the queue lock, you test state, you splice into a local queue
on the stack, then you walk that local stack queue to...
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
Hi:
This series tries to implement tx batching support for vhost. This is
done by using MSG_MORE as a hint to the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and submit
them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement for guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
rx_bat...
2018 Sep 07
1
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...> > > }
> > > - vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
> > > - vq->heads[nvq->done_idx].len = 0;
> > > -
> > > total_len += len;
> > > - if (tx_can_batch(vq, total_len))
> > > - msg.msg_flags |= MSG_MORE;
> > > - else
> > > - msg.msg_flags &= ~MSG_MORE;
> > > +
> > > + /* For simplicity, TX batching is only enabled if
> > > + * sndbuf is unlimited.
> > What if sndbuf changes while this processing is going on?
>
> We will get the co...
2018 May 21
1
[RFC PATCH net-next 03/12] vhost_net: introduce vhost_has_more_pkts()
...>dev, vq) &&
> - likely(!vhost_exceeds_maxpend(net))) {
> + vhost_has_more_pkts(net, vq)) {
Yes, I know it came from here, but likely/unlikely are branch-prediction
hints, so they should encapsulate everything inside the if, unless
I'm mistaken.
> msg.msg_flags |= MSG_MORE;
> } else {
> msg.msg_flags &= ~MSG_MORE;
> @@ -605,7 +611,7 @@ static void handle_tx(struct vhost_net *net)
> else
> vhost_zerocopy_signal_used(net, vq);
> vhost_net_tx_packet(net);
> - if (unlikely(vhost_exceeds_weight(++sent_pkts, total_len))) {
> +...
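A compilable sketch of the reviewer's point (likely/unlikely are shown here as the usual __builtin_expect wrappers; the function and its arguments are made up): the prediction hint should wrap the whole condition the if tests, not just one operand of it.
  #define likely(x)   __builtin_expect (!!(x), 1)
  #define unlikely(x) __builtin_expect (!!(x), 0)

  /* The hint covers the full test, mirroring the suggestion above. */
  static int want_msg_more (int under_maxpend, int more_pkts)
  {
    if (likely (under_maxpend && more_pkts))
      return 1;                 /* would set MSG_MORE */
    return 0;                   /* would clear MSG_MORE */
  }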
2019 Jun 07
4
[nbdkit PATCH v2 0/2] Reduce network overhead with MSG_MORE/corking
This time around the numbers are indeed looking better than in v1,
and I like the interface better.
Eric Blake (2):
server: Prefer send() over write()
server: Group related transmission send()s
server/internal.h | 7 +++-
server/connections.c | 51 +++++++++++++++++++++++++---
server/crypto.c | 11 ++++--
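A brief sketch of why the first patch prefers send() over write() (the wrapper name is made up): on a socket, write(fd, buf, n) behaves like send(fd, buf, n, 0), but only send() takes a flags argument, so only send() can pass MSG_MORE when the server knows a related transmission follows immediately.
  #include <sys/socket.h>

  /* Transmit a buffer, optionally hinting that more data follows. */
  static ssize_t xmit (int sock, const void *buf, size_t len, int more)
  {
    return send (sock, buf, len, more ? MSG_MORE : 0);
  }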
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often
leads to bad cache utilization under heavy load, so this patch does
some batching on the rx side before submitting packets to the host
network stack. This is done by accepting MSG_MORE as a hint from the
sendmsg() caller: if it is set, the packet is batched temporarily in a
linked list, and the whole batch is submitted once MSG_MORE is cleared.
Tests were done with pktgen (burst=128) in the guest over mlx4 (noqueue) on the host:
Mpps -+%
rx-frames = 0 0.91...
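A hedged userspace analogue of that rx batching (the real code queues sk_buffs inside the tun driver; all names and the batch limit here are illustrative): packets are queued while the sender hints MSG_MORE, and the whole batch is delivered once the hint is cleared or the queue gets long enough.
  #include <stddef.h>

  #define BATCH_LIMIT 64

  struct pkt { struct pkt *next; /* payload omitted */ };

  static struct pkt *batch_head, *batch_tail;
  static size_t batch_len;

  /* Stand-in for handing the queued packets to the network stack. */
  static void deliver_batch (void)
  {
    for (struct pkt *p = batch_head; p != NULL; p = p->next)
      ;                         /* process p here */
    batch_head = batch_tail = NULL;
    batch_len = 0;
  }

  /* 'more' mirrors the MSG_MORE hint from the sender. */
  static void rx_one (struct pkt *p, int more)
  {
    p->next = NULL;
    if (batch_tail)
      batch_tail->next = p;
    else
      batch_head = p;
    batch_tail = p;
    batch_len++;
    if (!more || batch_len >= BATCH_LIMIT)
      deliver_batch ();
  }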