search for: pkt_size

Displaying 20 results from an estimated 26 matches for "pkt_size".

2019 Jul 30
1
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...ages sent to the transmitter
- Patches 4+5: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed

host -> guest [Gbps]
pkt_size   before opt   p 1      p 2+3    p 4+5
32         0.032        0.030    0.048    0.051
64         0.061        0.059    0.108    0.117
128        0.122        0.112    0.227    0.234
256        0.244        0.241    0.418    0.415
512        0.459        0...
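The splitting in patches 4+5 amounts to capping each transmitted packet at VIRTIO_VSOCK_MAX_PKT_BUF_SIZE and looping until the payload is consumed. A minimal sketch in C, assuming a hypothetical enqueue_pkt() in place of the real virtio-vsock transmit path (only the 64 KiB constant comes from the kernel headers):

    /* Sketch of the patches 4+5 splitting idea: cap every packet at
     * VIRTIO_VSOCK_MAX_PKT_BUF_SIZE. enqueue_pkt() is a hypothetical
     * stand-in for the real transmit path. */
    #include <stdio.h>
    #include <stddef.h>

    #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024UL * 64) /* 64 KiB, as in the kernel */

    static void enqueue_pkt(const char *buf, size_t len)
    {
            (void)buf;
            printf("queued packet of %zu bytes\n", len);
    }

    static void send_split(const char *buf, size_t len)
    {
            while (len > 0) {
                    size_t chunk = len < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE ?
                                   len : VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;

                    enqueue_pkt(buf, chunk);
                    buf += chunk;
                    len -= chunk;
            }
    }

    int main(void)
    {
            static char payload[200 * 1024]; /* 200 KiB -> 64 + 64 + 64 + 8 KiB */
            send_split(payload, sizeof(payload));
            return 0;
    }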
2019 Jul 29
0
[PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput
...e number of credit update messages sent to the transmitter
- Patches 4+5: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed

host -> guest [Gbps]
pkt_size   before opt   p 1      p 2+3    p 4+5
32         0.032        0.030    0.048    0.051
64         0.061        0.059    0.108    0.117
128        0.122        0.112    0.227    0.234
256        0.244        0.241    0.418    0.415
512        0.459        0.466    0.847    0.865
1K...
2019 Jul 30
0
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...e number of credit update messages sent to the transmitter
- Patches 4+5: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed

host -> guest [Gbps]
pkt_size   before opt   p 1      p 2+3    p 4+5
32         0.032        0.030    0.048    0.051
64         0.061        0.059    0.108    0.117
128        0.122        0.112    0.227    0.234
256        0.244        0.241    0.418    0.415
512        0.459        0.466    0.847    0.865
1K...
2019 Jul 30
7
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...ckets
- Patches 2+3: reduce the number of credit update messages sent to the transmitter
- Patches 4+5: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed

host -> guest [Gbps]
pkt_size   before opt   p 1      p 2+3    p 4+5
32         0.032        0.030    0.048    0.051
64         0.061        0.059    0.108    0.117
128        0.122        0.112    0.227    0.234
256        0.244        0.241    0.418    0.415
512        0.459        0.466    0.847    0.865
1K         0.927        0.919    1.657...
2016 May 26
3
pjsip segfault problem
Hi, after switching from 13.7 + pjproject 2.4.5 to 13.9.1 with bundled pjproject, I have a problem with a segfault (CentOS 6):

Program terminated with signal 11, Segmentation fault.
#0  0xb7665695 in check_cached_response (sess=0xafbd688c, packet=0xb07676d8,
    pkt_size=132, options=1, token=0xafecc2bc, parsed_len=0x0,
    src_addr=0xb0e47a20, src_addr_len=16) at ../src/pjnath/stun_session.c:1287
1287        if (t->msg_magic == msg->hdr.magic &&

It happened only once, after 2 days, and I don't know how to reproduce it now :( Any similar experience? --...
2019 May 13
0
[PATCH v2 0/8] vsock/virtio: optimizations to increase the throughput
...to the transmitter
- Patches 5+6: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed
- Patches 7+8: increase RX buffer size to 64 KiB

host -> guest [Gbps]
                                                         virtio-net + vhost
pkt_size   before opt   p 1+2    p 3+4    p 5+6    p 7+8          TCP_NODELAY
64         0.068        0.063    0.130    0.131    0.128   0.188  0.187
256        0.274        0.236    0.392    0.338    0.282...
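The credit-update reduction these series mention builds on virtio-vsock's credit-based flow control. A rough sketch, assuming the spec's field names (buf_alloc, fwd_cnt, tx_cnt) and an illustrative threshold heuristic rather than the exact patch logic:

    /* Illustrative sketch of virtio-vsock credit accounting. Field names
     * follow the virtio spec; the threshold check is an assumption about
     * the general idea of reducing credit-update traffic, not the patch. */
    #include <stdint.h>
    #include <stdbool.h>

    struct vsock_credit {
            uint32_t buf_alloc; /* receive buffer space advertised by the peer */
            uint32_t fwd_cnt;   /* bytes the peer has consumed so far */
            uint32_t tx_cnt;    /* bytes we have sent so far */
    };

    /* Sender side: how many bytes we may still transmit without
     * overrunning the peer's receive buffer. */
    static uint32_t peer_free_space(const struct vsock_credit *c)
    {
            return c->buf_alloc - (c->tx_cnt - c->fwd_cnt);
    }

    /* Receiver side: instead of sending a credit update after every read,
     * send one only when a significant amount of space has been freed. */
    static bool should_send_credit_update(uint32_t freed, uint32_t threshold)
    {
            return freed >= threshold;
    }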
2019 Apr 04
2
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...LAY, just to be fair, because VSOCK doesn't implement something like this. In both cases I set the MTU to the maximum allowed (65520).

                 VSOCK                                   TCP + virtio-net + vhost
           host -> guest [Gbps]                          host -> guest [Gbps]
pkt_size   before opt.  patch 1  patches 2+3  patch 4           TCP_NODELAY
64         0.060        0.102    0.102        0.096      0.16   0.15
256        0.22         0.40     0.40         0.36       0.32   0.57
512        0.42         0.82     0.85         0.74       1.2    1.2
1K         0.7...
2019 May 10
18
[PATCH v2 0/8] vsock/virtio: optimizations to increase the throughput
...update messages sent to the transmitter
- Patches 5+6: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed
- Patches 7+8: increase RX buffer size to 64 KiB

host -> guest [Gbps]
                                                         virtio-net + vhost
pkt_size   before opt   p 1+2    p 3+4    p 5+6    p 7+8          TCP_NODELAY
64         0.068        0.063    0.130    0.131    0.128   0.188  0.187
256        0.274        0.236    0.392    0.338    0.282   0.749...
2019 Apr 04
15
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...l evaluate to use virtio-net as transport, but I need to understand better how to interface with it, maybe introducing sk_buff in virtio-vsock. Any suggestions? Here are some benchmarks, step by step. I used iperf3 [2] modified with VSOCK support:

host -> guest [Gbps]
pkt_size   before opt.  patch 1  patches 2+3  patch 4
64         0.060        0.102    0.102        0.096
256        0.22         0.40     0.40         0.36
512        0.42         0.82     0.85         0.74
1K         0.7          1.6      1.6          1.5
2K         1.5...
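The "iperf3 modified with VSOCK support" boils down to opening an AF_VSOCK stream socket instead of an AF_INET one. A minimal sketch using the standard Linux vsock API; the CID and port values are placeholders:

    /* Sketch of the socket setup a VSOCK-enabled iperf3 needs: an AF_VSOCK
     * stream socket in place of AF_INET. CID/port values are placeholders. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int vsock_connect(unsigned int cid, unsigned int port)
    {
            struct sockaddr_vm addr;
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0)
                    return -1;

            memset(&addr, 0, sizeof(addr));
            addr.svm_family = AF_VSOCK;
            addr.svm_cid = cid;   /* e.g. VMADDR_CID_HOST (2) from a guest */
            addr.svm_port = port;

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }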
2019 Apr 08
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...s easier, we can do this. Another thing that may help is to implement sendpage(), which will greatly improve the performance. Thanks

> Here are some benchmarks, step by step. I used iperf3 [2] modified with VSOCK
> support:
>
> host -> guest [Gbps]
> pkt_size   before opt.  patch 1  patches 2+3  patch 4
> 64         0.060        0.102    0.102        0.096
> 256        0.22         0.40     0.40         0.36
> 512        0.42         0.82     0.85         0.74
> 1K         0.7          1.6      1.6          1....
2019 Apr 04
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...doesn't implement something like this. Why not?

> In both cases I set the MTU to the maximum allowed (65520).
>
>                  VSOCK                                  TCP + virtio-net + vhost
>            host -> guest [Gbps]                         host -> guest [Gbps]
> pkt_size   before opt.  patch 1  patches 2+3  patch 4          TCP_NODELAY
> 64         0.060        0.102    0.102        0.096     0.16   0.15
> 256        0.22         0.40     0.40         0.36      0.32   0.57
> 512        0.42         0.82     0.85         0.74      1.2    1.2 ...
2019 Apr 04
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...but I need to understand better how to
> interface with it, maybe introducing sk_buff in virtio-vsock.
>
> Any suggestions?
>
> Here are some benchmarks, step by step. I used iperf3 [2] modified with VSOCK
> support:
>
> host -> guest [Gbps]
> pkt_size   before opt.  patch 1  patches 2+3  patch 4
> 64         0.060        0.102    0.102        0.096
> 256        0.22         0.40     0.40         0.36
> 512        0.42         0.82     0.85         0.74
> 1K         0.7          1.6      1.6          1.5 ...
2019 Jul 17
22
[PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput
...ckets
- Patches 2+3: reduce the number of credit update messages sent to the transmitter
- Patches 4+5: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed

host -> guest [Gbps]
pkt_size   before opt   p 1      p 2+3    p 4+5
32         0.032        0.030    0.048    0.051
64         0.061        0.059    0.108    0.117
128        0.122        0.112    0.227    0.234
256        0.244        0.241    0.418    0.415
512        0.459        0.466    0.847    0.865
1K         0.927        0.919    1.657...
2019 May 31
7
[PATCH v3 0/5] vsock/virtio: optimizations to increase the throughput
...2+3: fix locking and reduce the number of credit update messages sent to the transmitter
- Patches 4+5: allow the host to split packets on multiple buffers and use
  VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed

host -> guest [Gbps]
pkt_size   before opt   p 1      p 2+3    p 4+5
32         0.035        0.032    0.049    0.051
64         0.069        0.061    0.123    0.126
128        0.138        0.116    0.256    0.252
256        0.247        0.254    0.434    0.444
512        0.498        0.482    0.940    0.931
1K         0.951        0.975    1.878...
2017 Sep 05
1
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...er cleaned in free_old_xmit_skbs,
>>> because it requires a start_xmit and by now the (only) socket is out of
>>> descriptors?
>>
>> Typo, sorry. I meant out of sndbuf.
>
> I mean e.g. for tun. If its sndbuf is smaller than e.g. (vq->num >> 1) *
> $pkt_size and if all packets were held by some modules, a limitation like
> vq->num >> 1 won't work since we hit sndbuf before it.

Good point.

>>> A watchdog would help somewhat. With tx-napi, this case cannot occur,
>>> either, as free_old_xmit_skbs no longer depend...
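A quick worked example of that sndbuf inequality, with assumed values for the queue size, packet size, and sndbuf (none of these numbers come from the thread):

    /* Worked example of the sndbuf point above: a 256-entry virtqueue
     * limited to vq->num >> 1 in-flight packets can hold 128 * 4 KiB =
     * 512 KiB, so a tun sndbuf of e.g. 256 KiB is exhausted long before
     * the descriptor limit ever triggers. */
    #include <stdio.h>

    int main(void)
    {
            unsigned long vq_num   = 256;        /* assumed virtqueue size */
            unsigned long pkt_size = 4096;       /* assumed packet size */
            unsigned long sndbuf   = 256 * 1024; /* assumed tun sndbuf */

            unsigned long held = (vq_num >> 1) * pkt_size;

            printf("vq->num >> 1 limit allows %lu bytes in flight\n", held);
            if (sndbuf < held)
                    printf("sndbuf (%lu) is hit first\n", sndbuf);
            return 0;
    }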