Displaying 13 results from an estimated 13 matches for "1705,9".
2014 Jan 16 · 0 · [PATCH net-next v4 2/6] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...*vi)
 	}
 }
+static void free_receive_page_frags(struct virtnet_info *vi)
+{
+	int i;
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		if (vi->rq[i].alloc_frag.page)
+			put_page(vi->rq[i].alloc_frag.page);
+}
+
 static void free_unused_bufs(struct virtnet_info *vi)
 {
 	void *buf;
@@ -1705,9 +1707,8 @@ free_recv_bufs:
 	unregister_netdev(dev);
 free_vqs:
 	cancel_delayed_work_sync(&vi->refill);
+	free_receive_page_frags(vi);
 	virtnet_del_vqs(vi);
-	if (vi->alloc_frag.page)
-		put_page(vi->alloc_frag.page);
 free_stats:
 	free_percpu(vi->stats);
 free:
@@ -1724,6 +172...
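
For context, the allocation side of this change pairs the per-queue frag with skb_page_frag_refill() from patch 1/6 of the same series. A minimal sketch of that shape, not the verbatim patch (the function name, MERGE_BUFFER_LEN, and the posting step are illustrative assumptions):

#include <linux/skbuff.h>	/* skb_page_frag_refill(), struct page_frag */

/* Illustrative sketch only: refill the per-receive-queue page frag and
 * carve one mergeable receive buffer out of it. */
static int example_add_recvbuf_mergeable(struct receive_queue *rq, gfp_t gfp)
{
	struct page_frag *alloc_frag = &rq->alloc_frag;
	char *buf;

	/* May allocate a higher-order page; falls back toward order 0. */
	if (unlikely(!skb_page_frag_refill(MERGE_BUFFER_LEN, alloc_frag, gfp)))
		return -ENOMEM;

	buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
	get_page(alloc_frag->page);	/* the posted buffer keeps its own ref */
	alloc_frag->offset += MERGE_BUFFER_LEN;

	/* ... post buf to the receive virtqueue ... */
	return 0;
}

With each posted buffer holding its own page reference, teardown reduces to dropping the queue's remaining reference, which is exactly what free_receive_page_frags() above does.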
2024 Dec 01 · 5 · [RFC 0/5] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com>
Based on: Provide a new two step DMA mapping API patchset
https://lore.kernel.org/kvm/20241114170247.GA5813 at lst.de/T/#t
This patch series aims to enable Peer-to-Peer (P2P) DMA access in
GPU-centric applications that utilize RDMA and private device pages. This
enhancement reduces data transfer overhead by allowing the GPU to directly
expose
2014 Jan 16 · 13 · [PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
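
The fallback described above is a descending-order allocation loop. A simplified sketch of that logic (the starting order and exact GFP flags are assumptions modeled on the mainline code of that era, not quoted from the patch):

#include <linux/gfp.h>		/* alloc_pages(), __GFP_* flags */
#include <linux/skbuff.h>	/* struct page_frag */

/* Simplified sketch, not the verbatim patch: opportunistically try a
 * high-order page first, stepping down to order 0 when the allocator
 * cannot satisfy the request. */
static bool frag_refill_sketch(struct page_frag *pfrag, gfp_t gfp)
{
	int order = 3;	/* assumed starting order (32KB with 4KB pages) */

	do {
		gfp_t mask = gfp;

		/* Keep high-order attempts cheap: compound page, no
		 * warnings, no retries, so failure falls through fast. */
		if (order)
			mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
		pfrag->page = alloc_pages(mask, order);
		if (pfrag->page) {
			pfrag->offset = 0;
			pfrag->size = PAGE_SIZE << order;
			return true;
		}
	} while (--order >= 0);

	return false;	/* even an order-0 page failed */
}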
2014 Jan 17 · 7 · [PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
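
The auto-tuning in this series sizes receive buffers from a per-receive-queue moving average of packet sizes. A simplified sketch of the idea (the helper name and clamp bounds are simplified; the EWMA API shown is the struct ewma interface from linux/average.h of that era):

#include <linux/average.h>	/* struct ewma, ewma_read(), ewma_add() */
#include <linux/kernel.h>	/* clamp_t(), ALIGN() */

/* Simplified sketch: choose the next mergeable buffer length from an
 * EWMA of recent packet sizes, clamped between an MTU-sized minimum
 * (GOOD_PACKET_LEN in the driver) and one page. */
static unsigned int pick_mergeable_buf_len(struct ewma *avg_pkt_len)
{
	unsigned int len = ewma_read(avg_pkt_len);

	return clamp_t(unsigned int, ALIGN(len, L1_CACHE_BYTES),
		       GOOD_PACKET_LEN, PAGE_SIZE);
}

/* On each completed receive, the driver would feed the consumed length
 * back into the average, e.g. ewma_add(&rq->mrg_avg_pkt_len, len); */

Small-packet workloads thus stay near MTU-sized buffers (keeping SKB truesize low), while large-packet workloads grow toward PAGE_SIZE buffers.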
2014 Jan 16 · 6 · [PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
2023 Mar 14 · 7 · [PATCH v8 0/6] evm: Do HMAC of multiple per LSM xattrs for new inodes
From: Roberto Sassu <roberto.sassu at huawei.com>
One of the major goals of LSM stacking is to run multiple LSMs side by side
without interfering with each other. The ultimate decision will depend on
each individual LSM's decision.
Several changes need to be made to the LSM infrastructure to be able to
support that. This patch set tackles one of them: it gives each LSM the
ability to specify one
2011 Aug 15 · 9 · [patch v2 0/9] btrfs: More error handling patches
Hi all -
The following 9 patches add more error handling to the btrfs code (the recurring "push up errors" conversion is sketched after the list):
- Add btrfs_panic
- Catch locking failures in {set,clear}_extent_bit
- Push up set_extent_bit errors to callers
- Push up lock_extent errors to callers
- Push up clear_extent_bit errors to callers
- Push up unlock_extent errors to callers
- Make pin_down_extent return void
- Push up btrfs_pin_extent errors to
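
The "push up ... errors to callers" items all apply the same conversion: replace an in-function BUG_ON() with a returned error code so the caller can unwind. A generic illustration of the pattern (hypothetical function names, not code from this patchset):

/* Before: the failure is "handled" on the spot by panicking. */
static void set_bits_old(struct extent_io_tree *tree, u64 start, u64 end)
{
	int err = do_set_bits(tree, start, end);	/* hypothetical helper */

	BUG_ON(err);	/* an -ENOMEM here takes down the whole machine */
}

/* After: report the error and let the caller unwind, retry, or abort. */
static int set_bits_new(struct extent_io_tree *tree, u64 start, u64 end)
{
	return do_set_bits(tree, start, end);
}

Repeated at every layer, this pushes the decision about a failure up to code that has enough context to handle it.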
2014 Jan 07 · 10 · [PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
2011 Oct 04 · 68 · [patch 00/65] Error handling patchset v3
Hi all -
Here's my current error handling patchset, against 3.1-rc8. Almost all of
this patchset is preparing for actual error handling. Before we start in
on that work, I'm trying to reduce the surface we need to worry about. It
turns out that there is a ton of code that returns an error code but never
actually reports an error.
The patchset has grown to 65 patches. 46 of them