search for: gfp

Displaying results from an estimated 877 matches for "gfp".

2023 May 17 · 2 · [PATCH vhost v9 04/12] virtio_ring: virtqueue_add() support premapped
...rivers/virtio/virtio_ring.c
index 1ffab1eb40c0..e2fc50c05bec 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2135,6 +2135,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 				unsigned int in_sgs,
 				void *data,
 				void *ctx,
+				bool premapped,
 				gfp_t gfp)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
@@ -2176,7 +2177,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
 			total_sg++;
 	}
 	return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs,
-			     data, NULL, gfp);
+			     data, NULL, false, gfp);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_s...

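To make the flag's intent concrete, here is a hedged sketch of the branch virtqueue_add() is expected to grow. This is an illustration of the series' direction, not the patch itself; vring_map_one_sg() is virtio_ring.c's internal mapping helper, and the final shape may differ:

	dma_addr_t addr;

	if (premapped)
		/* Driver already DMA-mapped the buffer; trust sg_dma_address() */
		addr = sg_dma_address(sg);
	else
		/* Unmapped path: the virtio core maps the scatterlist entry itself */
		addr = vring_map_one_sg(vq, sg, direction);
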
2011 Jul 29 · 1 · [PATCH RFC net-next] virtio_net: refill buffer right after being used
...g used. Sign-off-by: Shirley Ma <xma at us.ibm.com>
---
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0c7321c..c8201d4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -429,6 +429,22 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, gfp_t gfp)
 	return err;
 }
+static bool fill_one(struct virtio_net *vi, gfp_t gfp)
+{
+	int err;
+
+	if (vi->mergeable_rx_bufs)
+		err = add_recvbuf_mergeable(vi, gfp);
+	else if (vi->big_packets)
+		err = add_recvbuf_big(vi, gfp);
+	else
+		err = add_recvbuf_small(vi, gfp);
+
+	if (err >=...

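The snippet cuts off at the return. A hedged reconstruction of the helper's tail, assuming the usual virtio_net refill convention (err >= 0 means a buffer was queued) and the driver's actual struct name, virtnet_info:

	static bool fill_one(struct virtnet_info *vi, gfp_t gfp)
	{
		int err;

		if (vi->mergeable_rx_bufs)
			err = add_recvbuf_mergeable(vi, gfp);
		else if (vi->big_packets)
			err = add_recvbuf_big(vi, gfp);
		else
			err = add_recvbuf_small(vi, gfp);

		/* Assumed: err >= 0 means one receive buffer was queued */
		return err >= 0;
	}
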
2005 Jan 12 · 1 · Asterisk server stopped - "0-order allocation failed" errors in the log
...in/run-crons && /usr/sbin/run-crons )
Jan 12 15:12:27 palmtrace-nuvoip tftpd[7279]: Serving ata000d28383ca9 to 10.9.8.49:6616
Jan 12 15:12:27 palmtrace-nuvoip tftpd[21012]: Serving atadefault.cfg to 10.9.8.49:6617
Jan 12 15:13:43 palmtrace-nuvoip __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Jan 12 15:13:43 palmtrace-nuvoip VM: killing process mpg123
Jan 12 15:13:48 palmtrace-nuvoip __alloc_pages: 0-order allocation failed (gfp=0x1f0/0)
Jan 12 15:13:48 palmtrace-nuvoip __alloc_pages: 0-order allocation failed (gfp=0x1f0/0)
Jan 12 15:14:17 palmtrace-nuvoip __alloc_pages: 0-...

2016 Jun 22 · 2 · [PATCH net-next V2] tun: introduce tx skb ring
...ichael S. Tsirkin <mst at redhat.com>
---
diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
index a29b023..e576801 100644
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -354,20 +354,14 @@ static inline int ptr_ring_init(struct ptr_ring *r, int size, int pad, gfp_t gfp
 	return 0;
 }
-static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
-				  void (*destroy)(void *))
+static inline void **__ptr_ring_swap_queue(struct ptr_ring *r, void **queue,
+					   int size, gfp_t gfp,
+					   void (*destroy)(void *))
 {
-	unsigned long flags;...

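The refactoring splits resize into "build and swap the queue" plus the locking around it. A simplified, self-contained sketch of the swap step; the names and layout below are illustrative, not the kernel's ptr_ring internals:

	/* Copy live entries into the caller-allocated new queue, install it,
	 * and return the old array so the caller can free it outside the lock. */
	static void **ring_swap_queue(void ***queuep, int *sizep,
				      void **new_queue, int new_size)
	{
		void **old = *queuep;
		int i, n = 0;

		for (i = 0; i < *sizep && n < new_size; i++)
			if (old[i])
				new_queue[n++] = old[i];

		*queuep = new_queue;
		*sizep = new_size;
		return old;	/* caller frees, after destroying any leftovers */
	}
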
2023 Jan 06 · 2 · [PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
On Fri, Jan 06, 2023 at 05:15:28PM +0000, Robin Murphy wrote:
> On 2023-01-06 16:42, Jason Gunthorpe wrote:
> > The internal mechanisms support this, but instead of exposting the gfp to
> > the caller it wrappers it into iommu_map() and iommu_map_atomic()
> >
> > Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.
>
> FWIW, since we *do* have two variants already, I think I'd have a mild
> preference for leaving the regular map call...

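The upshot of the series is that callers pass the allocation context explicitly. Assuming the post-series signature (iommu_map() gaining a trailing gfp_t), a sleepable caller that wants its IOPTEs cgroup-charged would look like:

	/* gfp chosen by the caller: GFP_KERNEL_ACCOUNT charges the page-table
	 * allocations to the caller's memory cgroup. */
	ret = iommu_map(domain, iova, paddr, size,
			IOMMU_READ | IOMMU_WRITE, GFP_KERNEL_ACCOUNT);
	if (ret)
		return ret;
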
2012 Jan 10 · 3 · [PATCH v2 0/3] virtio_net: Better low memory handling.
The following series applies to net-next. It changes the low memory paths in virtio_net to not disable NAPI while waiting in the allocator in the slow path, and attempts to rectify some performance problems we've seen where network performance drops significantly when memory is low. The working theory is that the disabling of NAPI while allocations are occurring in the

2012 Jan 04 · 4 · [RFC PATCH v1 0/2] virtio_net: Better low memory handling.
...process-context refill loop works, the memory subsystem is effectively polled every half second when things look bleak, leading to significant packet loss and congestion window collapse. The first patch rectifies the "not contributing to reclaim" issue by letting the driver allocate with GFP_KERNEL when run from process context. The second patch is a bit more complicated: it removes the serialization currently built around enabling and disabling NAPI polling, and replaces it by protecting the underlying virtqueue accesses with a bottom-half spinlock. As...

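A minimal sketch of the serialization change the second patch describes, using today's virtqueue API names; the lock name and surrounding code are assumptions, not the patch:

	/* A bottom-half spinlock around virtqueue accesses lets a
	 * process-context refiller and the NAPI poller safely interleave,
	 * without disabling NAPI for the duration of an allocation. */
	spin_lock_bh(&vi->rvq_lock);
	err = virtqueue_add_inbuf(vi->rvq, &sg, 1, buf, gfp);
	spin_unlock_bh(&vi->rvq_lock);
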
2020 Jun 22 · 1 · [PATCH 14/16] mm/thp: add THP allocation helper
...> Transparent huge page allocation policy is controlled by several sysfs
> variables. Rather than expose these to each device driver that needs to
> allocate THPs, provide a helper function.
>
> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
> ---
>  include/linux/gfp.h | 10 ++++++++++
>  mm/huge_memory.c    | 16 ++++++++++++++++
>  2 files changed, 26 insertions(+)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 67a0774e080b..1c7d968a27d3 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -562,6 +562,1...

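A hedged sketch of the kind of helper being proposed; the name here is hypothetical, and the real patch additionally consults the sysfs THP policy rather than allocating unconditionally:

	/* Allocate one transparent huge page: __GFP_COMP makes it a compound
	 * page, HPAGE_PMD_ORDER is the PMD-sized order (9 on x86-64). */
	static struct page *thp_alloc_page(gfp_t gfp)
	{
		return alloc_pages(gfp | __GFP_COMP, HPAGE_PMD_ORDER);
	}
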
2013 Mar 06 · 7 · [PATCH 0/6] virtio_add_buf replacement.
OK, so I've spent a few days benchmarking. Turns out 80% of virtio_add_buf cases are uni-directional (including the always-performance-sensitive networking code), and that gets no performance penalty (though tests with real networking would be appreciated!). I'm not reposting all the "convert driver to virtio_add_outbuf()" patches: just the scsi one which I didn't have

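For reference, the unidirectional API this series converts drivers to; a minimal transmit using it, where vq, buf, and len are the driver's own:

	struct scatterlist sg;
	int err;

	sg_init_one(&sg, buf, len);		/* one-entry scatterlist */
	err = virtqueue_add_outbuf(vq, &sg, 1, buf, GFP_ATOMIC);
	if (err)
		return err;	/* ring full; retry after a used-buffer interrupt */
	virtqueue_kick(vq);	/* notify the device */
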
2014 Jan 03 · 2 · [PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...0800, Eric Dumazet wrote:
>
> My suggestion is to use a recent kernel, and/or eventually backport the
> mm fixes if any.
>
> order-3 allocations should not reclaim 2GB out of 8GB.
>
> There is a reason PAGE_ALLOC_COSTLY_ORDER exists and is 3

Hmm... it looks like I missed __GFP_NORETRY

diff --git a/net/core/sock.c b/net/core/sock.c
index 5393b4b719d7..5f42a4d70cb2 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 	gfp_t gfp = prio;

 	if (order)
-		gfp |= __GFP_C...

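The snippet truncates the flag combination; assuming it matches what later landed upstream in skb_page_frag_refill(), the intended composition reads:

	gfp_t gfp = prio;

	if (order)
		/* __GFP_NORETRY: fail the high-order attempt fast instead of
		 * triggering heavy reclaim, then fall back to an order-0 page. */
		gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
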
2016 Jun 28 · 1 · [PATCH net-next V2] tun: introduce tx skb ring
...y: Michael S. Tsirkin <mst at redhat.com>
--
diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index c900708..7e01c1f 100644
--- a/include/linux/skb_array.h
+++ b/include/linux/skb_array.h
@@ -151,16 +151,24 @@ static inline int skb_array_init(struct skb_array *a, int size, gfp_t gfp)
 	return ptr_ring_init(&a->ring, size, 0, gfp);
 }
-void __skb_array_destroy_skb(void *ptr)
+static void __skb_array_destroy_skb(void *ptr)
 {
 	kfree_skb(ptr);
 }
-int skb_array_resize(struct skb_array *a, int size, gfp_t gfp)
+static inline int skb_array_resize(struct skb_array...

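For context, basic use of the wrapper the patch touches (it mainly turns resize/destroy into static inlines); a hedged fragment using the skb_array API as it exists today:

	struct skb_array a;
	struct sk_buff *skb;

	if (skb_array_init(&a, 256, GFP_KERNEL))	/* ring of 256 slots */
		return -ENOMEM;

	if (skb_array_produce(&a, skb))		/* nonzero: ring is full */
		kfree_skb(skb);

	skb = skb_array_consume(&a);		/* NULL when the ring is empty */
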
2023 Jan 23 · 11 · [PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first step in fixing that. The iommu driver contract already includes a 'gfp' argument to th...

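The pattern the series extends to IOPTEs, for reference (the variable name below is illustrative):

	/* GFP_KERNEL_ACCOUNT = GFP_KERNEL | __GFP_ACCOUNT: the allocation is
	 * charged to the current task's memory cgroup. */
	ioas = kzalloc(sizeof(*ioas), GFP_KERNEL_ACCOUNT);
	if (!ioas)
		return ERR_PTR(-ENOMEM);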