Displaying 20 results from an estimated 140 matches for "sndbuf".
2018 Sep 07
1
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...:52PM +0800, Jason Wang wrote:
> > > @@ -556,10 +667,14 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
> > > size_t len, total_len = 0;
> > > int err;
> > > int sent_pkts = 0;
> > > + bool bulking = (sock->sk->sk_sndbuf == INT_MAX);
> > What does bulking mean?
>
> The name is misleading; it means whether we can do batching. For simplicity,
> I disable batching if sndbuf is not INT_MAX.
But what does batching have to do with sndbuf?
> > > for (;;) {
> > > bool busyloop_int...
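The gate being debated above can be summarized in a small stand-alone sketch. This is not the vhost_net code: handle_tx, txq_entry, send_one and flush_batch are illustrative placeholders, and the assumed rule (batch only while sndbuf is effectively unlimited, otherwise one sendmsg() per packet) is the one stated in the 2018 Sep 06 entry further down this list.

/* Minimal user-space sketch of the batching decision, not the vhost_net API. */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define BATCH_MAX 64

struct txq_entry { int len; };

static void send_one(const struct txq_entry *e)
{
    printf("sendmsg: one packet, len %d\n", e->len);
}

static void flush_batch(const struct txq_entry *batch, int n)
{
    printf("sendmsg: batch of %d packets, first len %d\n", n, batch[0].len);
}

static void handle_tx(const struct txq_entry *pkts, int npkts, int sk_sndbuf)
{
    /* Batching is only attempted when sndbuf never throttles us mid-batch. */
    bool can_batch = (sk_sndbuf == INT_MAX);
    struct txq_entry batch[BATCH_MAX];
    int batched = 0;

    for (int i = 0; i < npkts; i++) {
        if (!can_batch) {
            send_one(&pkts[i]);   /* limited sndbuf: per-packet accounting */
            continue;
        }
        batch[batched++] = pkts[i];
        if (batched == BATCH_MAX) {
            flush_batch(batch, batched);
            batched = 0;
        }
    }
    if (batched)
        flush_batch(batch, batched);
}

int main(void)
{
    struct txq_entry pkts[3] = { {64}, {128}, {1500} };

    handle_tx(pkts, 3, INT_MAX); /* unlimited sndbuf: one batched submit */
    handle_tx(pkts, 3, 262144);  /* limited sndbuf: one sendmsg per packet */
    return 0;
}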
2017 Sep 05
1
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...mentation: export
>>> counters similar to net->tx_zcopy_err and net->tx_packets (though
>>> without reset to zero, as in vhost_net_tx_packet).
>
>
> I think it's acceptable to spend extra cycles if we detect HOL anyhow.
>
>>>
>>>> 1) sndbuf is not INT_MAX
>>>
>>> You mean the case where the device stalls, later zerocopy notifications
>>> are queued, but these are never cleaned in free_old_xmit_skbs,
>>> because it requires a start_xmit and by now the (only) socket is out of
>>> descriptors?...
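The counters mentioned in this thread (something like net->tx_zcopy_err alongside net->tx_packets) lend themselves to a small sketch. The struct, the 1/16 threshold and zerocopy_worthwhile below are illustrative assumptions, not the virtio-net fields; the only idea carried over is deciding whether zerocopy is worth requesting from a failure ratio.

/* Sketch: count zerocopy sends that had to complete as copies next to
 * total tx packets, and stop requesting zerocopy when the ratio is bad. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct zcopy_stats {
    uint64_t tx_packets;   /* all packets sent */
    uint64_t tx_zcopy_err; /* zerocopy sends that fell back to copy */
};

/* Illustrative policy: keep zerocopy only while failures stay below 1/16. */
static bool zerocopy_worthwhile(const struct zcopy_stats *s)
{
    if (s->tx_packets < 128)
        return true; /* not enough samples yet, keep trying */
    return s->tx_zcopy_err * 16 < s->tx_packets;
}

static void account_tx(struct zcopy_stats *s, bool zcopy_failed)
{
    s->tx_packets++;
    if (zcopy_failed)
        s->tx_zcopy_err++;
}

int main(void)
{
    struct zcopy_stats s = {0};

    for (int i = 0; i < 1000; i++)
        account_tx(&s, (i % 8) == 0); /* pretend 1 in 8 sends falls back */

    /* 1/8 is above the 1/16 cutoff, so this prints "no". */
    printf("zerocopy worthwhile: %s\n",
           zerocopy_worthwhile(&s) ? "yes" : "no");
    return 0;
}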
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...compared to
> the alternative drop in throughput. It probably would be good to be
> able to measure this without kernel instrumentation: export
> counters similar to net->tx_zcopy_err and net->tx_packets (though
> without reset to zero, as in vhost_net_tx_packet).
>
>> 1) sndbuf is not INT_MAX
>
> You mean the case where the device stalls, later zerocopy notifications
> are queued, but these are never cleaned in free_old_xmit_skbs,
> because it requires a start_xmit and by now the (only) socket is out of
> descriptors?
Typo, sorry. I meant out of sndbuf.
&...
2018 Sep 06
2
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...yer socket need to build skb
> and pass it to network core. The batched packet submission allows us
> to do batching like netif_receive_skb_list() in the future.
>
> This saves lots of indirect calls for better cache utilization. For
> the case that we can't do batching, e.g. when sndbuf is limited or
> the packet size is too large, we will go for the usual one packet per
> sendmsg() way.
>
> Doing testpmd on various setups gives us:
>
> Test                /+pps%
> XDP_DROP on TAP     /+44.8%
> XDP_REDIRECT on TAP /+29%
> macvtap (skb)       /+26%
>
> N...
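The indirect-call saving claimed in this commit message can be illustrated with a minimal sketch. pkt, rx_one_fn, rx_list_fn and the submit_* helpers are made-up names, standing in for the per-packet path versus a netif_receive_skb_list()-style batched path.

/* Sketch: per-packet submission costs one indirect call per packet,
 * batched submission costs one indirect call per burst. */
#include <stddef.h>
#include <stdio.h>

struct pkt { int len; };

/* per-packet hook: one indirect call per packet */
typedef void (*rx_one_fn)(struct pkt *p);
/* batched hook: one indirect call per burst */
typedef void (*rx_list_fn)(struct pkt *pkts, size_t n);

static void rx_one(struct pkt *p)            { (void)p; }
static void rx_list(struct pkt *p, size_t n) { (void)p; (void)n; }

static void submit_per_packet(rx_one_fn hook, struct pkt *pkts, size_t n)
{
    for (size_t i = 0; i < n; i++)
        hook(&pkts[i]);          /* n indirect calls */
}

static void submit_batched(rx_list_fn hook, struct pkt *pkts, size_t n)
{
    hook(pkts, n);               /* 1 indirect call for the whole burst */
}

int main(void)
{
    struct pkt burst[64] = { { 0 } };

    submit_per_packet(rx_one, burst, 64);
    submit_batched(rx_list, burst, 64);
    printf("per-packet: 64 indirect calls, batched: 1\n");
    return 0;
}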
2014 Mar 07
5
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
...utilization is 7%
After this patch:
VM1 to VM2 throughput is 9.3Mbit/s
VM1 to External throughput is 93Mbit/s
CPU utilization is 16%
Completed performance test on 40gbe shows no obvious changes in both
throughput and cpu utilization with this patch.
The patch only solves this issue with an unlimited sndbuf. We still need a
solution for a limited sndbuf.
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Qin Chuanyu <qinchuanyu at huawei.com>
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
Changes from V1:
- Remove VHOST_MAX_PEND and switch to use half of the vq size as the limit
-...
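The fallback rule in this changelog (switch to data copy once the pending DMAs reach half of the vq size) reads roughly as below. tx_queue, use_zerocopy and the 256-byte copy cutoff are illustrative assumptions, not the vhost fields.

/* Sketch: request zerocopy only while the number of outstanding DMA
 * buffers stays under half of the virtqueue size; otherwise copy. */
#include <stdbool.h>
#include <stdio.h>

struct tx_queue {
    unsigned int vq_size; /* number of descriptors in the vq */
    unsigned int upend;   /* zerocopy sends submitted */
    unsigned int done;    /* zerocopy sends completed by the NIC */
};

static bool use_zerocopy(const struct tx_queue *q, unsigned int pkt_len)
{
    unsigned int pending = q->upend - q->done;

    /* Small packets are cheaper to copy anyway (illustrative cutoff). */
    if (pkt_len < 256)
        return false;
    /* Too many DMAs still in flight: fall back to data copy. */
    return pending < q->vq_size / 2;
}

int main(void)
{
    struct tx_queue q = { .vq_size = 256, .upend = 200, .done = 90 };

    /* pending = 110 < 128, so zerocopy is still allowed */
    printf("1500-byte packet -> %s\n",
           use_zerocopy(&q, 1500) ? "zerocopy" : "copy");

    q.upend = 230; /* pending = 140 >= 128, fall back to copy */
    printf("1500-byte packet -> %s\n",
           use_zerocopy(&q, 1500) ? "zerocopy" : "copy");
    return 0;
}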
2014 Feb 25
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...VM2 throughput is 9.3Mbit/s
VM1 to External throughput is 40Mbit/s
After this patch:
VM1 to VM2 throughput is 9.3Mbit/s
VM1 to External throughput is 93Mbit/s
Simple performance test on 40gbe shows no obvious changes in
throughput after this patch.
The patch only solves this issue with an unlimited sndbuf. We still need a
solution for a limited sndbuf.
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Qin Chuanyu <qinchuanyu at huawei.com>
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/net.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)...
2014 Mar 13
3
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
...nal throughput is 93Mbit/s
>> > CPU utilization is 16%
>> >
>> > Completed performance test on 40gbe shows no obvious changes in both
>> > throughput and cpu utilization with this patch.
>> >
>> > The patch only solves this issue with an unlimited sndbuf. We still need a
>> > solution for a limited sndbuf.
>> >
>> > Cc: Michael S. Tsirkin <mst at redhat.com>
>> > Cc: Qin Chuanyu <qinchuanyu at huawei.com>
>> > Signed-off-by: Jason Wang <jasowang at redhat.com>
> I thought hard about t...
2018 Sep 07
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...build skb
>> and pass it to network core. The batched packet submission allows us
>> to do batching like netif_receive_skb_list() in the future.
>>
>> This saves lots of indirect calls for better cache utilization. For
>> the case that we can't do batching, e.g. when sndbuf is limited or
>> the packet size is too large, we will go for the usual one packet per
>> sendmsg() way.
>>
>> Doing testpmd on various setups gives us:
>>
>> Test                /+pps%
>> XDP_DROP on TAP     /+44.8%
>> XDP_REDIRECT on TAP /+29%
>> macv...
2017 Sep 04
0
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...measure this without kernel instrumentation: export
>> counters similar to net->tx_zcopy_err and net->tx_packets (though
>> without reset to zero, as in vhost_net_tx_packet).
I think it's acceptable to spend extra cycles if we detect HOL anyhow.
>>
>>> 1) sndbuf is not INT_MAX
>> You mean the case where the device stalls, later zerocopy notifications
>> are queued, but these are never cleaned in free_old_xmit_skbs,
>> because it requires a start_xmit and by now the (only) socket is out of
>> descriptors?
> Typo, sorry. I meant ou...
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
.../s
>> VM1 to External throughput is 93Mbit/s
> Would like to see CPU utilization #s as well.
>
Will measure this.
>> Simple performance test on 40gbe shows no obvious changes in
>> throughput after this patch.
>>
>> The patch only solves this issue with an unlimited sndbuf. We still need a
>> solution for a limited sndbuf.
>>
>> Cc: Michael S. Tsirkin<mst at redhat.com>
>> Cc: Qin Chuanyu<qinchuanyu at huawei.com>
>> Signed-off-by: Jason Wang<jasowang at redhat.com>
> I think this needs some thought.
>
> In parti...
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...ould like to see CPU utilization #s as well.
>>>
>>
>> Will measure this.
>>>> Simple performance test on 40gbe shows no obvious changes in
>>>> throughput after this patch.
>>>>
>>>> The patch only solves this issue with an unlimited sndbuf. We still need a
>>>> solution for a limited sndbuf.
>>>>
>>>> Cc: Michael S. Tsirkin<mst at redhat.com>
>>>> Cc: Qin Chuanyu<qinchuanyu at huawei.com>
>>>> Signed-off-by: Jason Wang<jasowang at redhat.com>
>>> I t...
2008 Nov 19
1
Assistance needed on using mount.smbfs (cifs) to authenticate to samba server with encrypt passwords = No.
...:
64
[528109.522284] fs/cifs/connect.c: CIFS VFS: in cifs_mount as Xid: 30 with uid: 0
[528109.522292] fs/cifs/connect.c: Username: tech
[528109.522295] fs/cifs/connect.c: UNC: \\172.16.0.8\tech ip: 172.16.0.8
[528109.522306] fs/cifs/connect.c: Socket created
[528109.523047] fs/cifs/connect.c: sndbuf 16384 rcvbuf 87380 rcvtimeo 0x7fffffff
[528109.523054] fs/cifs/transport.c: Sending smb of length 68
[528109.528073] fs/cifs/connect.c: Existing smb sess not found
[528109.528086] fs/cifs/cifssmb.c: secFlags 0x7
[528109.528091] fs/cifs/transport.c: For smb_command 114
[528109.528094] fs/cifs/t...