Displaying 11 results from an estimated 10000 matches similar to: "[PULL net-2.6] vhost: cleanups and fixes"
2017 May 10
0
[PULL] virtio: fixes, cleanups, performance
The following changes since commit a351e9b9fc24e982ec2f0e76379a49826036da12:
Linux 4.11 (2017-04-30 19:47:48 -0700)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
for you to fetch changes up to c8b0d7290657996a29f318b948d2397d30e70c36:
s390/virtio: change maintainership (2017-05-09 16:43:25 +0300)
2013 Nov 14
2
[PATCH] virtio-net: mergeable buffer size should include virtio-net header
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page
frag allocators") changed the mergeable receive buffer size from PAGE_SIZE
to MTU-size. However, the merge buffer size does not take into account the
size of the virtio-net header. Consequently, packets that are MTU-size
will take two buffers instead of one (to store the virtio-net header),
substantially decreasing the
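The sizing rule this patch describes can be sketched in a few lines; the macro names and constants below are illustrative assumptions, not the actual in-tree virtio-net code. The point is that the posted buffer must cover the virtio-net header as well as an MTU-sized frame, or an MTU-sized packet spills into a second buffer:

/* Illustrative only -- names and values assumed, not the driver's. */
#define ETH_DATA_LEN      1500   /* MTU payload */
#define ETH_HLEN          14     /* Ethernet header */
#define VLAN_HLEN         4      /* optional VLAN tag */
#define VNET_HDR_LEN      12     /* virtio_net_hdr_mrg_rxbuf, assumed size */

/* Buffer length that lets an MTU-sized packet plus its virtio-net
 * header land in a single mergeable receive buffer. */
#define MERGE_BUFFER_LEN  (VNET_HDR_LEN + ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)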
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
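The auto-tuning idea can be sketched as an EWMA over observed packet lengths, clamped between an MTU-sized minimum and PAGE_SIZE. This is a minimal self-contained sketch; the identifiers, the 1/64 weight, and the bounds are assumptions for illustration, not the driver's actual code:

#include <stddef.h>

#define MIN_BUF_LEN 1536   /* roughly MTU-sized, assumed */
#define MAX_BUF_LEN 4096   /* PAGE_SIZE on many systems */

struct rxq_stats {
    double avg_pkt_len;    /* EWMA of received packet lengths */
};

/* Fold a new packet length into the running average (weight assumed). */
static void rxq_update_avg(struct rxq_stats *s, size_t pkt_len)
{
    s->avg_pkt_len += ((double)pkt_len - s->avg_pkt_len) / 64.0;
}

/* Pick the next receive buffer length from the running average,
 * clamped so small-packet workloads keep small truesize and large
 * buffers stay within one page. */
static size_t rxq_buf_len(const struct rxq_stats *s)
{
    size_t len = (size_t)s->avg_pkt_len;
    if (len < MIN_BUF_LEN) len = MIN_BUF_LEN;
    if (len > MAX_BUF_LEN) len = MAX_BUF_LEN;
    return len;
}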
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
2011 Mar 06
1
[PATCH] vhost: copy_from_user -> __copy_from_user
copy_from_user is pretty high in the perf top profile;
replacing it with __copy_from_user helps.
It's also safe because we do access_ok checks during setup.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
drivers/vhost/vhost.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index ade0568..01701b8 100644
---
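The truncated diff above is the real one-line change; the fragment below only illustrates the pattern being described, with hypothetical function names. access_ok() is done once at setup, so the hot path can use __copy_from_user(), which skips the per-call re-check (VERIFY_READ is the access_ok() flag of that era):

/* Illustration only -- not the actual vhost diff; names assumed. */

/* Setup path: validate the userspace region once. */
static int setup_region(const void __user *uaddr, size_t len)
{
        if (!access_ok(VERIFY_READ, uaddr, len))
                return -EFAULT;
        return 0;
}

/* Hot path: the unchecked variant is safe here only because
 * access_ok() already ran on this region at setup time. */
static int fetch_desc(void *dst, const void __user *uaddr, size_t len)
{
        if (__copy_from_user(dst, uaddr, len))
                return -EFAULT;
        return 0;
}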
2017 Mar 29
1
[PATCH] virtio_net: fix support for small rings
When ring size is small (<32 entries), making buffers smaller means a
full ring might not be able to hold enough buffers to fit a single large
packet.
Make sure a ring full of buffers is large enough to allow at least one
packet of max size.
Fixes: 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag allocators")
Signed-off-by: Michael S. Tsirkin <mst at
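The constraint the fix enforces can be sketched as follows (a simplified illustration with assumed identifiers, not the in-tree change): with ring_size buffers posted, each buffer must be at least max_packet_len / ring_size bytes, so a completely full ring can always hold one maximum-size packet:

/* Illustrative sketch, names assumed. */
static unsigned int min_mergeable_buf_len(unsigned int ring_size,
                                          unsigned int max_packet_len,
                                          unsigned int default_len)
{
        /* Round up so ring_size * len >= max_packet_len always holds. */
        unsigned int len = (max_packet_len + ring_size - 1) / ring_size;

        return len > default_len ? len : default_len;
}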
2017 Mar 29
1
[PATCH v2] virtio_net: fix support for small rings
When ring size is small (<32 entries), making buffers smaller means a
full ring might not be able to hold enough buffers to fit a single large
packet.
Make sure a ring full of buffers is large enough to allow at least one
packet of max size.
Fixes: 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag allocators")
Signed-off-by: Michael S. Tsirkin <mst at
2013 Nov 12
12
[PATCH net-next 1/4] virtio-net: mergeable buffer size should include virtio-net header
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page
frag allocators") changed the mergeable receive buffer size from PAGE_SIZE
to MTU-size. However, the merge buffer size does not take into account the
size of the virtio-net header. Consequently, packets that are MTU-size
will take two buffers instead of one (to store the virtio-net header),
substantially decreasing the
2013 Nov 20
2
[PATCH RFC] virtio_net: fix error handling for mergeable buffers
----- Original Message -----
> On Wed, Nov 20, 2013 at 05:07:25PM +0800, Jason Wang wrote:
> > When mergeable buffers were used, we only put the first page buf and left
> > the rest of the buffers in the virt queue. This meant the driver could no
> > longer get the correct head buffer. Fix this by dropping the rest of the
> > buffers for this packet.
> >
>
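The fix under discussion amounts to draining the failed packet's remaining buffers from the virtqueue so the next descriptor seen is a genuine head buffer. A simplified sketch of such an error path follows; virtqueue_get_buf(), put_page(), and virt_to_head_page() are real kernel APIs, but the surrounding variables (num_buf, rq, buf, len) are assumed context, not the exact patch:

/* Simplified sketch: free the rest of this packet's buffers so the
 * driver stays in sync with head-buffer boundaries. */
while (--num_buf) {
        buf = virtqueue_get_buf(rq->vq, &len);
        if (!buf)
                break;  /* ring ran out of buffers for this packet */
        put_page(virt_to_head_page(buf));
}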
2016 Mar 18
0
[PATCH] gpu/drm: Use u64_to_user_pointer
Hi Joe,
[auto build test WARNING on drm/drm-next]
[also build test WARNING on next-20160318]
[cannot apply to v4.5]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Joe-Perches/gpu-drm-Use-u64_to_user_pointer/20160319-012749
base: git://people.freedesktop.org/~airlied/linux.git drm-next
config:
2019 Jul 10
2
[RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
Hi,
as Jason suggested some months ago, I took a closer look at the virtio-net driver to
understand if we can reuse some parts also in the virtio-vsock driver, since we
have similar challenges (mergeable buffers, page allocation, small
packets, etc.).
Initially, I would add skbuff support to virtio-vsock in order to re-use the
receive_*() functions.
Then I would move receive_[small, big, mergeable]() and