search for: gfp_t

Displaying 20 results from an estimated 875 matches for "gfp_t".

2016 Jun 22
2
[PATCH net-next V2] tun: introduce tx skb ring
...ichael S. Tsirkin <mst at redhat.com> --- diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h index a29b023..e576801 100644 --- a/include/linux/ptr_ring.h +++ b/include/linux/ptr_ring.h @@ -354,20 +354,14 @@ static inline int ptr_ring_init(struct ptr_ring *r, int size, int pad, gfp_t gfp return 0; } -static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp, - void (*destroy)(void *)) +static inline void **__ptr_ring_swap_queue(struct ptr_ring *r, void **queue, + int size, gfp_t gfp, + void (*destroy)(void *)) { - unsigned long flags;...
2013 May 07
2
[PATCH] Btrfs: fix passing wrong arg gfp_t to decide the correct allocation mode
If you look at the code carefully, you will see that every call to tree_mod_alloc() has to use GFP_ATOMIC. However, the original code passes the wrong gfp_t arg in some places. This doesn't cause any problems, because tree_mod_alloc() ignores its gfp_t arg and just uses GFP_ATOMIC directly, but that is not good. However, I think we should try our best not to allocate with GFP_ATOMIC, so I keep the gfp_t there in the hope we can change allocation mo...
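The complaint generalizes beyond btrfs: a helper that accepts a gfp_t but silently hardcodes GFP_ATOMIC misleads its callers. A minimal sketch of the honest version, with purely illustrative names rather than the real tree_mod_alloc():

#include <linux/slab.h>
#include <linux/types.h>

struct example_mod_elem {
	u64 logical;
	u64 seq;
};

/*
 * Honor the caller's gfp_t instead of silently forcing GFP_ATOMIC.
 * Callers that really are in atomic context pass GFP_ATOMIC themselves;
 * everyone else can pass GFP_NOFS/GFP_KERNEL and allow reclaim.
 */
static struct example_mod_elem *example_mod_alloc(gfp_t flags)
{
	return kzalloc(sizeof(struct example_mod_elem), flags);
}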
2011 Jul 29
1
[PATCH RFC net-next] virtio_net: refill buffer right after being used
...g used. Sign-off-by: Shirley Ma <xma at us.ibm.com> --- diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 0c7321c..c8201d4 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -429,6 +429,22 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, gfp_t gfp) return err; } +static bool fill_one(struct virtio_net *vi, gfp_t gfp) +{ + int err; + + if (vi->mergeable_rx_bufs) + err = add_recvbuf_mergeable(vi, gfp); + else if (vi->big_packets) + err = add_recvbuf_big(vi, gfp); + else + err = add_recvbuf_small(vi, gfp); + + if (err >= 0)...
2023 May 17
2
[PATCH vhost v9 04/12] virtio_ring: virtqueue_add() support premapped
...rivers/virtio/virtio_ring.c index 1ffab1eb40c0..e2fc50c05bec 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2135,6 +2135,7 @@ static inline int virtqueue_add(struct virtqueue *_vq, unsigned int in_sgs, void *data, void *ctx, + bool premapped, gfp_t gfp) { struct vring_virtqueue *vq = to_vvq(_vq); @@ -2176,7 +2177,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq, total_sg++; } return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs, - data, NULL, gfp); + data, NULL, false, gfp); } EXPORT_SYMBOL_GPL(virtqueue_add_sgs...
2016 Jun 28
1
[PATCH net-next V2] tun: introduce tx skb ring
...y: Michael S. Tsirkin <mst at redhat.com> -- diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h index c900708..7e01c1f 100644 --- a/include/linux/skb_array.h +++ b/include/linux/skb_array.h @@ -151,16 +151,24 @@ static inline int skb_array_init(struct skb_array *a, int size, gfp_t gfp) return ptr_ring_init(&a->ring, size, 0, gfp); } -void __skb_array_destroy_skb(void *ptr) +static void __skb_array_destroy_skb(void *ptr) { kfree_skb(ptr); } -int skb_array_resize(struct skb_array *a, int size, gfp_t gfp) +static inline int skb_array_resize(struct skb_array *a...
2018 Apr 18
7
[PATCH] net: don't use kvzalloc for DMA memory
On Wed, 18 Apr 2018, David Miller wrote: > From: Mikulas Patocka <mpatocka at redhat.com> > Date: Wed, 18 Apr 2018 12:44:25 -0400 (EDT) > > > The structure net_device is followed by arbitrary driver-specific data > > (accessible with the function netdev_priv). And for virtio-net, these > > driver-specific data must be in DMA memory. > > And we are saying
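For context, netdev_priv() simply returns the memory placed directly after struct net_device in the same allocation, so whatever allocator produced the net_device also produced the driver-private area. A rough sketch of that layout (example_priv and its field are illustrative, not virtio-net's actual private struct):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/string.h>

/* Illustrative private data; the device is assumed to DMA into it. */
struct example_priv {
	u8 rx_scratch[64];
};

static struct net_device *example_alloc_netdev(void)
{
	/* One allocation holds struct net_device plus the private area;
	 * netdev_priv() just points past the net_device header.  If this
	 * allocation were vmalloc-backed (as kvzalloc may fall back to),
	 * the private area would not be ordinary kmalloc memory, which is
	 * the DMA concern raised in this thread. */
	struct net_device *dev = alloc_etherdev(sizeof(struct example_priv));
	struct example_priv *priv;

	if (!dev)
		return NULL;

	priv = netdev_priv(dev);
	memset(priv->rx_scratch, 0, sizeof(priv->rx_scratch));
	return dev;
}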
2013 Mar 06
7
[PATCH 0/6] virtio_add_buf replacement.
OK, so I've spent a few days benchmarking. Turns out 80% of virtio_add_buf cases are uni-directional (including the always-performance-sensitive networking code), and that gets no performance penalty (though tests with real networking would be appreciated!). I'm not reposting all the "convert driver to virtio_add_outbuf()" patches: just the scsi one which I didn't have
2018 Apr 19
2
[PATCH] kvmalloc: always use vmalloc if CONFIG_DEBUG_VM
...s use vmalloc if CONFIG_DEBUG_VM is turned on. > > ... > > --- linux-2.6.orig/mm/util.c 2018-04-18 15:46:23.000000000 +0200 > +++ linux-2.6/mm/util.c 2018-04-18 16:00:43.000000000 +0200 > @@ -395,6 +395,7 @@ EXPORT_SYMBOL(vm_mmap); > */ > void *kvmalloc_node(size_t size, gfp_t flags, int node) > { > +#ifndef CONFIG_DEBUG_VM > gfp_t kmalloc_flags = flags; > void *ret; > > @@ -426,6 +427,7 @@ void *kvmalloc_node(size_t size, gfp_t f > */ > if (ret || size <= PAGE_SIZE) > return ret; > +#endif > > return __vmalloc_no...
2012 Jan 10
3
[PATCH v2 0/3] virtio_net: Better low memory handling.
The following series applies to net-next. It changes the low memory paths in virtio_net to not disable NAPI while waiting in the allocator in the slow path. It attempts to rectify some performance problems we've seen where the network performance drops significantly when memory is low. The working theory is that the disabling of NAPI while allocations are occurring in the
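The split being argued over here is the usual one for gfp_t in receive refill paths: an opportunistic GFP_ATOMIC attempt from NAPI context, and a deferred worker that can use GFP_KERNEL and sleep in the allocator. A rough sketch of that pattern, with illustrative names and a hypothetical example_try_fill() helper rather than virtio_net's real functions:

#include <linux/gfp.h>
#include <linux/workqueue.h>

/* Illustrative receive-queue state, not virtio_net's real struct. */
struct example_rx {
	struct delayed_work refill;
};

/* Hypothetical refill helper; returns true once the ring is topped up. */
static bool example_try_fill(struct example_rx *rx, gfp_t gfp);

static void example_refill_work(struct work_struct *work)
{
	struct example_rx *rx = container_of(work, struct example_rx, refill.work);

	/* Process context: GFP_KERNEL may sleep and enter direct reclaim. */
	example_try_fill(rx, GFP_KERNEL);
}

static void example_napi_refill(struct example_rx *rx)
{
	/* NAPI/softirq context: only a non-sleeping attempt is allowed; on
	 * failure, defer to the worker instead of blocking NAPI here. */
	if (!example_try_fill(rx, GFP_ATOMIC))
		schedule_delayed_work(&rx->refill, 0);
}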
2020 Jun 22
1
[PATCH 14/16] mm/thp: add THP allocation helper
...| 16 ++++++++++++++++ > 2 files changed, 26 insertions(+) > > diff --git a/include/linux/gfp.h b/include/linux/gfp.h > index 67a0774e080b..1c7d968a27d3 100644 > --- a/include/linux/gfp.h > +++ b/include/linux/gfp.h > @@ -562,6 +562,16 @@ extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order, > alloc_pages_vma(gfp_mask, 0, vma, addr, numa_node_id(), false) > #define alloc_page_vma_node(gfp_mask, vma, addr, node) \ > alloc_pages_vma(gfp_mask, 0, vma, addr, node, false) > +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION > +extern struct page *alloc_transh...
2013 Dec 10
0
[RFC] dma-mapping: dma_alloc_coherent_mask return dma_addr_t
...dae6..6357810 100644 --- a/arch/x86/include/asm/dma-mapping.h +++ b/arch/x86/include/asm/dma-mapping.h @@ -100,10 +100,10 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size, flush_write_buffers(); } -static inline unsigned long dma_alloc_coherent_mask(struct device *dev, - gfp_t gfp) +static inline dma_addr_t dma_alloc_coherent_mask(struct device *dev, + gfp_t gfp) { - unsigned long dma_mask = 0; + dma_addr_t dma_mask = 0; dma_mask = dev->coherent_dma_mask; if (!dma_mask) @@ -114,7 +114,7 @@ static inline unsigned long dma_alloc_coherent_mask(struct device...
2012 Jan 04
4
[RFC PATCH v1 0/2] virtio_net: Better low memory handling.
The following series applies to net-next. It changes the low memory paths in virtio_net to allow the driver to contribute to reclaim when memory is tight. It attempts to rectify some performance problems we've seen where the network performance drops significantly when memory is low. The working theory is that while the driver contributes to memory pressure when throughput
2014 Jan 03
2
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...C_COSTLY_ORDER exists and is 3 Hmm... it looks like I missed __GFP_NORETRY diff --git a/net/core/sock.c b/net/core/sock.c index 5393b4b719d7..5f42a4d70cb2 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio) gfp_t gfp = prio; if (order) - gfp |= __GFP_COMP | __GFP_NOWARN; + gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY; pfrag->page = alloc_pages(gfp, order); if (likely(pfrag->page)) { pfrag->offset = 0;