Results from an estimated 200 matches similar to: "[PATCH] vhost_net: Use fdget() and fdput()"
2023 May 05
0
[PATCH] vhost_net: Use fdget() and fdput()
On Fri, May 05, 2023 at 04:41:55PM +0800, ye xingchen wrote:
> >>
> >> From: Ye Xingchen <ye.xingchen at zte.com.cn>
> >>
> >> Convert the fget()/fput() uses to fdget()/fdput().
> >What are the advantages of this?
> >
> >Thanks
> >>
> >> Signed-off-by: Ye Xingchen <ye.xingchen at zte.com.cn>
> >> ---
2023 May 05
0
[PATCH] vhost_net: Use fdget() and fdput()
On Fri, May 5, 2023 at 2:24 PM <ye.xingchen at zte.com.cn> wrote:
>
> From: Ye Xingchen <ye.xingchen at zte.com.cn>
>
> Convert the fget()/fput() uses to fdget()/fdput().
What are the advantages of this?
Thanks
>
> Signed-off-by: Ye Xingchen <ye.xingchen at zte.com.cn>
> ---
> drivers/vhost/net.c | 10 +++++-----
> 1 file changed, 5 insertions(+),
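For context, the usual motivation for such conversions is that fdget() can skip the atomic refcount update when the fd table is not shared, and fdput() undoes exactly what fdget() did. A minimal sketch of the pattern, using a hypothetical helper rather than the actual (truncated) vhost_net hunk:

#include <linux/file.h>
#include <linux/fs.h>

static bool fd_is_nonblocking(int fd)
{
        struct fd f = fdget(fd);   /* before: struct file *file = fget(fd); */
        bool ret = false;

        if (f.file)
                ret = f.file->f_flags & O_NONBLOCK;

        /* before: fput(file); fdput() is a no-op if fdget() took no ref */
        fdput(f);
        return ret;
}

Note that fdget() avoids the reference only when the struct file is not stashed past the current call, so whether the conversion actually helps depends on how the file is used afterwards.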
2018 Mar 09
0
[PATCH net 2/3] vhost_net: keep private_data and rx_ring synced
We get the pointer ring from the exported sock; this means we should keep
rx_ring and vq->private_data synced during both vq stop and backend set,
otherwise we may see a stale rx_ring.
Fixes: c67df11f6e480 ("vhost_net: try batch dequing from skb array")
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/net.c |
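The diff is truncated, but the shape of the fix can be sketched: update the cached ring and the backend pointer at the same points, under the same lock. A hedged illustration with made-up helper names, not the actual hunk:

static void net_vq_set_backend(struct vhost_net_virtqueue *nvq,
                               struct socket *sock,
                               struct ptr_ring *ring)
{
        /* caller holds nvq->vq.mutex; updating both fields together
         * means no reader can observe a new backend with a stale ring */
        nvq->vq.private_data = sock;
        nvq->rx_ring = sock ? ring : NULL;   /* no ring while stopped */
}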
2017 Mar 21
12
[PATCH net-next 0/8] vhost-net rx batching
Hi all:
This series tries to implement rx batching for vhost-net. This is done
by batching the dequeuing from the skb_array exported by the
underlying socket and passing the skb back through msg_control to finish
the userspace copying.
Tests show at most a 19% improvement on rx pps.
Please review.
Thanks
Jason Wang (8):
ptr_ring: introduce batch dequeuing
skb_array: introduce batch dequeuing
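The batched-dequeue helpers the series introduces amortize the ring lock over many packets. A minimal hedged sketch of a consumer-side cache built on skb_array_consume_batched(); the structure and names are illustrative, loosely modeled on what vhost_net ended up doing:

#include <linux/skb_array.h>

#define RX_BATCH 64

struct rx_cache {
        struct sk_buff *queue[RX_BATCH];
        int head, tail;
};

/* refill the local cache in one locked pass instead of taking the
 * ring's lock once per packet */
static int rx_cache_refill(struct rx_cache *c, struct skb_array *a)
{
        c->head = 0;
        c->tail = skb_array_consume_batched(a, c->queue, RX_BATCH);
        return c->tail;
}

static struct sk_buff *rx_cache_pop(struct rx_cache *c)
{
        return c->head < c->tail ? c->queue[c->head++] : NULL;
}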
2017 Mar 21
0
[PATCH net-next 7/8] vhost_net: try batch dequing from skb array
We used to dequeue one skb during recvmsg() from the skb_array; this
could be inefficient because of poor cache utilization and the spinlock
being taken for each packet. This patch tries to batch them by calling
the batch dequeuing helpers explicitly on the exported skb array and
passing the skb back through msg_control for the underlying socket to
finish the userspace copying.
Tests were done by XDP1:
- small
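The msg_control hand-off described here can be sketched as follows. It is specific to the tun/tap recvmsg path (ordinary sockets treat msg_control as cmsg space), and the function name is illustrative:

static int copy_one_batched_skb(struct socket *sock, struct sk_buff *skb,
                                struct msghdr *msg, size_t len)
{
        msg->msg_control = skb;   /* pre-dequeued skb, not cmsg data */
        return sock->ops->recvmsg(sock, msg, len,
                                  MSG_DONTWAIT | MSG_TRUNC);
}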
2017 Mar 22
2
[PATCH net-next 7/8] vhost_net: try batch dequing from skb array
On Tue, Mar 21, 2017 at 12:04:46PM +0800, Jason Wang wrote:
> We used to dequeue one skb during recvmsg() from the skb_array; this
> could be inefficient because of poor cache utilization and the
> spinlock being taken for each packet. This patch tries to batch them
> by calling the batch dequeuing helpers explicitly on the exported skb
> array and passing the skb back through msg_control for
2020 Feb 13
0
vhost changes (batched) in linux-next after 12/13 trigger random crashes in KVM guests after reboot
On 13.02.20 17:29, Eugenio Pérez wrote:
> Can we try with these traces?
Does not apply on eccb852f1fe6bede630e2e4f1a121a81e34354ab; can you double-check?
>
> From b793b4106085ab1970bdedb340e49f37843ed585 Mon Sep 17 00:00:00 2001
> From: Eugenio Pérez <eperezma at redhat.com>
> Date: Thu, 13 Feb 2020 17:27:05 +0100
> Subject: [PATCH] vhost: Add debug in
2009 Nov 04
0
[PATCHv8 1/3] tun: export underlying socket
The tun device looks similar to a packet socket
in that both pass complete frames to/from userspace.
This patch fills in enough fields in the socket underlying the tun driver
to support sendmsg/recvmsg operations and the message flags
MSG_TRUNC and MSG_DONTWAIT, and exports access to this socket
to modules. Regular read/write behaviour is unchanged.
This way, code using raw sockets to inject packets
into
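What the export enables, sketched around the tun_get_socket() accessor this series adds; the surrounding function is hypothetical:

#include <linux/file.h>
#include <linux/if_tun.h>
#include <linux/net.h>

static int inject_frame(struct file *tun_file, struct kvec *vec, size_t len)
{
        struct socket *sock = tun_get_socket(tun_file);
        struct msghdr msg = { .msg_flags = MSG_DONTWAIT };

        if (IS_ERR(sock))
                return PTR_ERR(sock);   /* not a tun file */

        /* one complete frame in, exactly as with a packet socket */
        return kernel_sendmsg(sock, &msg, vec, 1, len);
}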
2009 Nov 03
1
[PATCHv7 1/3] tun: export underlying socket
The tun device looks similar to a packet socket
in that both pass complete frames to/from userspace.
This patch fills in enough fields in the socket underlying the tun driver
to support sendmsg/recvmsg operations and the message flags
MSG_TRUNC and MSG_DONTWAIT, and exports access to this socket
to modules. Regular read/write behaviour is unchanged.
This way, code using raw sockets to inject packets
into
2009 Nov 02
1
[PATCHv6 1/3] tun: export underlying socket
The tun device looks similar to a packet socket
in that both pass complete frames to/from userspace.
This patch fills in enough fields in the socket underlying the tun driver
to support sendmsg/recvmsg operations and the message flags
MSG_TRUNC and MSG_DONTWAIT, and exports access to this socket
to modules. Regular read/write behaviour is unchanged.
This way, code using raw sockets to inject packets
into
2020 Feb 14
0
vhost changes (batched) in linux-next after 12/13 trigger random crashes in KVM guests after reboot
I did
ping -c 20 -f ... ; reboot
twice.
The ping after the first reboot showed .......E
This was on the host console:
[ 55.951885] CPU: 34 PID: 1908 Comm: CPU 0/KVM Not tainted 5.5.0+ #21
[ 55.951891] Hardware name: IBM 3906 M04 704 (LPAR)
[ 55.951892] Call Trace:
[ 55.951902] [<0000001ede114132>] show_stack+0x8a/0xd0
[ 55.951906] [<0000001edeb0672a>]
2020 Feb 14
0
vhost changes (batched) in linux-next after 12/13 trigger random crashes in KVM guests after reboot
On 14.02.20 08:40, Eugenio Perez Martin wrote:
> Hi.
>
> Were the vhost and vhost_net modules loaded with dyndbg='+plt'? I miss
> all the other regular debug traces on that one.
I did
echo -n 'file drivers/vhost/vhost.c +plt' > control
and
echo -n 'file drivers/vhost/net.c +plt' > control
but apparently it did not work... me hates dynamic debug.
2013 Aug 06
6
[PATCH 0/4] btrfs: out-of-band (aka offline) dedupe v4
Hi,
The following series of patches implements an ioctl in btrfs to do
out-of-band deduplication of file extents.
To be clear, this means the file system is mounted and running, but the
dedupe is not done during file writes; it happens after the fact, when
some userspace software initiates it.
The primary patch is loosely based on one sent by Josef Bacik back
in January 2011.
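The ioctl from this line of work eventually landed as BTRFS_IOC_FILE_EXTENT_SAME and was later generalized as the VFS-level FIDEDUPERANGE. A hedged userspace sketch against the modern ioctl, with illustrative offsets and lengths:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

static int dedupe_block(int src_fd, int dst_fd, __u64 off, __u64 len)
{
        struct file_dedupe_range *arg;
        int ret;

        /* one variable-length dest entry follows the fixed header */
        arg = calloc(1, sizeof(*arg) + sizeof(struct file_dedupe_range_info));
        if (!arg)
                return -1;

        arg->src_offset = off;
        arg->src_length = len;
        arg->dest_count = 1;
        arg->info[0].dest_fd = dst_fd;
        arg->info[0].dest_offset = off;

        ret = ioctl(src_fd, FIDEDUPERANGE, arg);
        if (ret == 0 && arg->info[0].status == FILE_DEDUPE_RANGE_SAME)
                printf("deduped %llu bytes\n",
                       (unsigned long long)arg->info[0].bytes_deduped);

        free(arg);
        return ret;
}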
2010 May 23
0
error in loading vhost_net module
On Sun, May 23, 2010 at 7:10 AM, Michael S. Tsirkin <mst at redhat.com> wrote:
> On Sat, May 22, 2010 at 07:31:15PM -0400, Balachandar wrote:
>> Hi Michael,
>> Thank you for your vhost_net module. I am really excited to know that your
>> module reduces latency 4 to 5 times. I was trying on my own project to reduce
>> latency using virtio, but when I saw your