search for: page_frag

Displaying 20 results from an estimated 112 matches for "page_frag".

2013 Dec 23
3
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...> +++ b/drivers/net/virtio_net.c > > > @@ -78,6 +78,9 @@ struct receive_queue { > > > /* Chain pages by the private ptr. */ > > > struct page *pages; > > > > > > + /* Page frag for GFP_ATOMIC packet buffer allocation. */ > > > + struct page_frag atomic_frag; > > > + > > > /* RX: fragments + linear part + virtio header */ > > > struct scatterlist sg[MAX_SKB_FRAGS + 2]; > > > > > > @@ -127,9 +130,9 @@ struct virtnet_info { > > > struct mutex config_lock; > > > > >...
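The diff in this excerpt adds a struct page_frag to each receive queue so GFP_ATOMIC buffer refills use per-queue allocator state instead of one shared frag. A rough sketch of the data structure involved (the page_frag layout shown is the common 64-bit kernel form; the second struct is illustrative, not the full receive_queue from the patch):

struct page_frag {
	struct page *page;	/* current, partially consumed page */
	__u32 offset;		/* next free byte within that page */
	__u32 size;		/* total bytes backed by the page */
};

/* Each receive queue owns its own frag, so refills done from the NAPI
 * receive path (GFP_ATOMIC) are serialized by that queue's own lock and
 * never contend with other queues or with a device-wide allocator.
 */
struct receive_queue_sketch {
	struct page_frag atomic_frag;	/* GFP_ATOMIC packet buffer allocation */
	/* ... pages chain, scatterlist, virtqueue pointer, etc. ... */
};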
2013 Dec 23
2
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...c51a988..d38d130 100644 > --- a/drivers/net/virtio_net.c > +++ b/drivers/net/virtio_net.c > @@ -78,6 +78,9 @@ struct receive_queue { > /* Chain pages by the private ptr. */ > struct page *pages; > > + /* Page frag for GFP_ATOMIC packet buffer allocation. */ > + struct page_frag atomic_frag; > + > /* RX: fragments + linear part + virtio header */ > struct scatterlist sg[MAX_SKB_FRAGS + 2]; > > @@ -127,9 +130,9 @@ struct virtnet_info { > struct mutex config_lock; > > /* Page_frag for GFP_KERNEL packet buffer allocation when we run > -...
2013 Dec 26
2
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Thu, 2013-12-26 at 13:28 -0800, Michael Dalton wrote: > On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote: > > So there isn't a conflict with respect to locking. > > > > Is it problematic to use same page_frag with both GFP_ATOMIC and with > > GFP_KERNEL? If yes why? > > I believe it is safe to use the same page_frag and I will send out a > followup patchset using just the per-receive page_frags. For future > consideration, Eric noted that disabling NAPI before GFP_KERNEL > allocs c...
2013 Dec 23
0
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
.../drivers/net/virtio_net.c > > +++ b/drivers/net/virtio_net.c > > @@ -78,6 +78,9 @@ struct receive_queue { > > /* Chain pages by the private ptr. */ > > struct page *pages; > > > > + /* Page frag for GFP_ATOMIC packet buffer allocation. */ > > + struct page_frag atomic_frag; > > + > > /* RX: fragments + linear part + virtio header */ > > struct scatterlist sg[MAX_SKB_FRAGS + 2]; > > > > @@ -127,9 +130,9 @@ struct virtnet_info { > > struct mutex config_lock; > > > > /* Page_frag for GFP_KERNEL pack...
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
...vers/vhost/net.c b/drivers/vhost/net.c index ab11b2bee273..d919284f103b 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -141,6 +141,10 @@ struct vhost_net { unsigned tx_zcopy_err; /* Flush in progress. Protected by tx vq lock. */ bool tx_flush; + /* Private page frag */ + struct page_frag page_frag; + /* Refcount bias of page frag */ + int refcnt_bias; }; static unsigned vhost_net_zcopy_mask __read_mostly; @@ -637,14 +641,53 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len) !vhost_vq_avail_empty(vq->dev, vq); } +#define SKB_FRAG_PAGE_ORDER...
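The vhost_net patch summarized here pairs a private page_frag with a refcnt_bias counter: instead of bumping the page reference count for every buffer copied out of the frag, a large batch of references is taken once per refill and paid back from the local counter. A sketch of that idea; the function name and FRAG_PAGE_ORDER constant are illustrative placeholders, while alloc_pages(), page_ref_add() and __page_frag_cache_drain() are real kernel primitives. This is not the literal patch body.

#define FRAG_PAGE_ORDER	3	/* placeholder: 32KB frags on 4KB pages */

/* Refill the private frag and pre-charge a large batch of page references.
 * Consumers then hand out buffers by decrementing refcnt_bias instead of
 * calling get_page() per packet, avoiding one atomic op per buffer.
 */
static bool frag_refill_biased(struct page_frag *pfrag, int *refcnt_bias,
			       unsigned int sz, gfp_t gfp)
{
	if (pfrag->page) {
		if (pfrag->offset + sz <= pfrag->size)
			return true;			/* still room in this page */
		/* give back the references that were never handed out */
		__page_frag_cache_drain(pfrag->page, *refcnt_bias);
	}

	pfrag->page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN |
				  __GFP_NORETRY, FRAG_PAGE_ORDER);
	if (unlikely(!pfrag->page))
		return false;

	pfrag->size = PAGE_SIZE << FRAG_PAGE_ORDER;
	pfrag->offset = 0;

	/* alloc_pages() gave one reference; take the rest of the batch now */
	*refcnt_bias = USHRT_MAX;
	page_ref_add(pfrag->page, USHRT_MAX - 1);
	return true;
}

Each buffer carved from the frag then does (*refcnt_bias)-- in place of get_page(), and the drain call above settles whatever bias remains when the page is retired.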
2013 Dec 27
1
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...> On Thu, 2013-12-26 at 13:28 -0800, Michael Dalton wrote: > >> On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote: > >>> So there isn't a conflict with respect to locking. > >>> > >>> Is it problematic to use same page_frag with both GFP_ATOMIC and with > >>> GFP_KERNEL? If yes why? > >> I believe it is safe to use the same page_frag and I will send out a > >> followup patchset using just the per-receive page_frags. For future > >> consideration, Eric noted that disabling NAPI be...
2013 Dec 17
15
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allo...
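The change described in this excerpt makes the high-order attempt unconditional: try a multi-page allocation first (kept cheap so atomic callers are not hurt), then step the order down until an order-0 page is tried. A sketch of that fallback strategy based only on the description above, with an assumed FRAG_PAGE_ORDER placeholder; this is not the literal patch body.

#define FRAG_PAGE_ORDER	3	/* placeholder: start with 32KB on 4KB pages */

/* Try progressively smaller allocations until one succeeds or order-0 fails. */
static bool frag_refill_fallback(struct page_frag *pfrag, gfp_t prio)
{
	int order = FRAG_PAGE_ORDER;

	do {
		gfp_t gfp = prio;

		if (order)
			/* high-order tries must stay cheap for GFP_ATOMIC callers */
			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;

		pfrag->page = alloc_pages(gfp, order);
		if (likely(pfrag->page)) {
			pfrag->offset = 0;
			pfrag->size = PAGE_SIZE << order;
			return true;
		}
	} while (--order >= 0);

	return false;	/* even a single page could not be allocated */
}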
2013 Dec 17
0
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...o_net.c b/drivers/net/virtio_net.c index c51a988..d38d130 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -78,6 +78,9 @@ struct receive_queue { /* Chain pages by the private ptr. */ struct page *pages; + /* Page frag for GFP_ATOMIC packet buffer allocation. */ + struct page_frag atomic_frag; + /* RX: fragments + linear part + virtio header */ struct scatterlist sg[MAX_SKB_FRAGS + 2]; @@ -127,9 +130,9 @@ struct virtnet_info { struct mutex config_lock; /* Page_frag for GFP_KERNEL packet buffer allocation when we run - * low on memory. + * low on memory. May sle...
2014 Jan 16
0
[PATCH net-next v4 2/6] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...emaining space is added to the current allocated buffer so that the remaining space can be used to store packet data. Signed-off-by: Michael Dalton <mwdalton at google.com> --- v1->v2: Use GFP_COLD for RX buffer allocations (as in netdev_alloc_frag()). Remove per-netdev GFP_KERNEL page_frag allocator. drivers/net/virtio_net.c | 69 ++++++++++++++++++++++++------------------------ 1 file changed, 35 insertions(+), 34 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 7b17240..36cbf06 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net....
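The "remaining space is added to the current allocated buffer" part of this changelog is a tail-fold detail: after carving a fixed-size mergeable buffer from the per-queue frag, any leftover too small to hold another buffer is handed to the buffer just carved rather than stranded. A sketch of that step, assuming a fixed buffer length and using the real skb_page_frag_refill(), page_address() and get_page() primitives; the helper name is illustrative, not the patch's.

/* Carve one mergeable receive buffer from the queue's page frag; fold an
 * undersized tail into it so no part of the page is wasted.
 */
static char *carve_merge_buf(struct page_frag *frag, unsigned int buflen,
			     unsigned int *lenp, gfp_t gfp)
{
	unsigned int hole;
	char *buf;

	if (unlikely(!skb_page_frag_refill(buflen, frag, gfp)))
		return NULL;			/* out of memory */

	buf = (char *)page_address(frag->page) + frag->offset;
	get_page(frag->page);			/* the buffer keeps a page ref */
	frag->offset += buflen;

	hole = frag->size - frag->offset;	/* bytes left in the frag */
	if (hole < buflen) {
		buflen += hole;			/* too small for another buffer: */
		frag->offset += hole;		/* give the tail to this one      */
	}

	*lenp = buflen;				/* length to post to the device */
	return buf;
}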
2013 Nov 12
0
[PATCH net-next 3/4] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...o_net.c b/drivers/net/virtio_net.c index 69fb225..0c93054 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -79,6 +79,9 @@ struct receive_queue { /* Chain pages by the private ptr. */ struct page *pages; + /* Page frag for GFP_ATOMIC packet buffer allocation. */ + struct page_frag atomic_frag; + /* RX: fragments + linear part + virtio header */ struct scatterlist sg[MAX_SKB_FRAGS + 2]; @@ -128,9 +131,9 @@ struct virtnet_info { struct mutex config_lock; /* Page_frag for GFP_KERNEL packet buffer allocation when we run - * low on memory. + * low on memory. May sle...
2013 Dec 26
0
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote: > So there isn't a conflict with respect to locking. > > Is it problematic to use same page_frag with both GFP_ATOMIC and with > GFP_KERNEL? If yes why? I believe it is safe to use the same page_frag and I will send out a followup patchset using just the per-receive page_frags. For future consideration, Eric noted that disabling NAPI before GFP_KERNEL allocs can potentially inhibit virtio-...
2013 Dec 27
0
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...ric Dumazet wrote: > On Thu, 2013-12-26 at 13:28 -0800, Michael Dalton wrote: >> On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote: >>> So there isn't a conflict with respect to locking. >>> >>> Is it problematic to use same page_frag with both GFP_ATOMIC and with >>> GFP_KERNEL? If yes why? >> I believe it is safe to use the same page_frag and I will send out a >> followup patchset using just the per-receive page_frags. For future >> consideration, Eric noted that disabling NAPI before GFP_KERNEL >...
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allo...
2013 Dec 26
2
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Thu, Dec 26, 2013 at 01:28:58PM -0800, Michael Dalton wrote: > On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote: > > So there isn't a conflict with respect to locking. > > > > Is it problematic to use same page_frag with both GFP_ATOMIC and with > > GFP_KERNEL? If yes why? > > I believe it is safe to use the same page_frag and I will send out a > followup patchset using just the per-receive page_frags. Seems easier to use it straight away I think. > For future > consideration, Eric note...