search for: gfp_atomic

Displaying 20 results from an estimated 1170 matches for "gfp_atomic".

2013 May 07
2
[PATCH] Btrfs: fix passing wrong arg gfp_t to decide the correct allocation mode
If you look at the code carefully, you will see that all the tree_mod_alloc() callers have to use GFP_ATOMIC. However, the original code passes the wrong gfp_t argument in some places. This doesn't cause any problems, because tree_mod_alloc() ignores the gfp_t argument and just uses GFP_ATOMIC directly, but that is not good. However, I think we should try our best not to allocate with GFP_ATOMIC, so I keep th...
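For readers skimming: the underlying issue is that an allocation helper which ignores its gfp_t parameter hides the atomic/sleeping distinction from its callers. A minimal sketch of the honest version, with hypothetical names rather than the actual Btrfs code:

    #include <linux/types.h>
    #include <linux/slab.h>

    struct tree_mod_elem_sketch {
            u64 logical;
            /* ... */
    };

    static struct tree_mod_elem_sketch *tree_mod_alloc_sketch(gfp_t flags)
    {
            /*
             * Honor the caller's gfp_t instead of overriding it with
             * GFP_ATOMIC: callers holding a spinlock must pass GFP_ATOMIC,
             * while process-context callers can pass GFP_KERNEL and let
             * the allocator sleep.
             */
            return kzalloc(sizeof(struct tree_mod_elem_sketch), flags);
    }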
2010 Dec 09
1
[PATCH 1/1] Properly check return values of kmalloc and vmbus_recvpacket
...l *channel = context;
 	u8 *buf;
-	u32 buflen, recvlen;
+	u32 recvlen;
 	u64 requestid;
 	u8 execute_shutdown = false;
+	int ret = 0;
 	struct shutdown_msg_data *shutdown_msg;
 	struct icmsg_hdr *icmsghdrp;
 	struct icmsg_negotiate *negop = NULL;
-	buflen = PAGE_SIZE;
-	buf = kmalloc(buflen, GFP_ATOMIC);
+	buf = kmalloc(PAGE_SIZE, GFP_ATOMIC);
-	vmbus_recvpacket(channel, buf, buflen, &recvlen, &requestid);
+	if (!buf) {
+		printk(KERN_INFO
+			"Unable to allocate memory for shutdown_onchannelcallback");
+		return;
+	}
+
+	ret = vmbus_recvpacket(channel, buf, PAGE_SIZE, &...
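Since the snippet is truncated, here is a simplified sketch of the pattern the patch introduces (condensed, not the exact driver code): check the kmalloc result before any use, and capture vmbus_recvpacket's return value.

    #include <linux/slab.h>
    #include <linux/hyperv.h>

    static void shutdown_callback_sketch(void *context)
    {
            struct vmbus_channel *channel = context;
            u32 recvlen;
            u64 requestid;
            int ret;
            u8 *buf;

            /* Channel callbacks run in atomic context, hence GFP_ATOMIC. */
            buf = kmalloc(PAGE_SIZE, GFP_ATOMIC);
            if (!buf)       /* atomic allocations can fail at any time */
                    return;

            ret = vmbus_recvpacket(channel, buf, PAGE_SIZE, &recvlen, &requestid);
            if (!ret && recvlen > 0) {
                    /* ... process the shutdown packet ... */
            }
            kfree(buf);
    }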
2010 Dec 13
3
[PATCH 1/1] hv: Use only one receive buffer per channel and kmalloc on initialize
...ntext;
-	u8 *buf;
-	u32 buflen, recvlen;
+	u32 recvlen;
 	u64 requestid;
 	u8 execute_shutdown = false;
@@ -52,24 +54,23 @@ static void shutdown_onchannelcallback(void *context)
 	struct icmsg_hdr *icmsghdrp;
 	struct icmsg_negotiate *negop = NULL;
-	buflen = PAGE_SIZE;
-	buf = kmalloc(buflen, GFP_ATOMIC);
-
-	vmbus_recvpacket(channel, buf, buflen, &recvlen, &requestid);
+	vmbus_recvpacket(channel, shut_txf_buf,
+			 PAGE_SIZE, &recvlen, &requestid);
 	if (recvlen > 0) {
 		DPRINT_DBG(VMBUS, "shutdown packet: len=%d, requestid=%lld",
 			   recvlen, requestid);
-	i...
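The shape of the change, as a sketch (simplified; shut_txf_buf is the buffer the patch introduces): do the allocation once in process context at initialization, so the hot callback path has no allocation left to fail.

    #include <linux/slab.h>

    static u8 *shut_txf_buf;    /* one receive buffer per channel, allocated once */

    static int util_init_sketch(void)
    {
            /* Module init runs in process context, so GFP_KERNEL is fine. */
            shut_txf_buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
            if (!shut_txf_buf)
                    return -ENOMEM;
            return 0;
    }

The callback then receives directly into shut_txf_buf, as the diff above shows, removing both the per-packet GFP_ATOMIC allocation and the unchecked-kmalloc problem addressed by the earlier patch.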
2011 Nov 03
1
[PATCH 2 of 5] virtio: rename virtqueue_add_buf_gfp to virtqueue_add_buf
Remove wrapper functions. This makes the allocation type explicit in all callers; I used GFP_KERNEL where it seemed obvious, left it at GFP_ATOMIC otherwise.

Signed-off-by: Rusty Russell <rusty at rustcorp.com.au>

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -171,7 +171,7 @@ static bool do_req(struct request_queue
 		}
 	}
-	if (virtqueue_a...
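To illustrate what "explicit in all callers" means in practice, a hypothetical caller after the rename (not taken from the patch):

    #include <linux/virtio.h>
    #include <linux/scatterlist.h>

    static int queue_request_sketch(struct virtqueue *vq, struct scatterlist *sg,
                                    unsigned int out, unsigned int in,
                                    void *data, bool can_sleep)
    {
            /* The caller, not a wrapper, now decides the allocation mode. */
            return virtqueue_add_buf(vq, sg, out, in, data,
                                     can_sleep ? GFP_KERNEL : GFP_ATOMIC);
    }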
2018 Aug 03
0
[PATCH] crypto: virtio: Replace GFP_ATOMIC with GFP_KERNEL in __virtio_crypto_ablkcipher_do_req()
...called in atomic context.
>
> __virtio_crypto_ablkcipher_do_req() is only called by
> virtio_crypto_ablkcipher_crypt_req(), which is only called by
> virtcrypto_find_vqs() that is never called in atomic context.
>
> __virtio_crypto_ablkcipher_do_req() calls kzalloc_node() with GFP_ATOMIC,
> which is not necessary.
> GFP_ATOMIC can be replaced with GFP_KERNEL.
>
> This is found by a static analysis tool named DCNS written by myself.
> I also manually check the kernel code before reporting it.
>
> Signed-off-by: Jia-Ju Bai <baijiaju1990 at gmail.com>

Pat...
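The change itself is small; a sketch of the substitution (illustrative names):

    #include <linux/slab.h>

    /* Called only from process context, per the call-chain analysis above,
     * so the allocation is allowed to sleep. */
    static void *alloc_req_data_sketch(size_t size, int node)
    {
            return kzalloc_node(size, GFP_KERNEL, node);   /* was GFP_ATOMIC */
    }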
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
...l capture a significant amount of the allocations.

Update the iommu_map() API to pass in the GFP argument, and fix all call
sites. Replace iommu_map_atomic().

Audit the "enterprise" iommu drivers to make sure they do the right thing.
Intel and S390 ignore the GFP argument and always use GFP_ATOMIC. This is
problematic for iommufd anyhow, so fix it. AMD and ARM SMMUv2/3 are already
correct.

A follow up series will be needed to capture the allocations made when the
iommu_domain itself is allocated, which will complete the job.

v3:
- Leave a GFP_ATOMIC in "Add a gfp parameter to iommu_m...
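After the series, iommu_map() takes a gfp_t; a sketch of a caller (the flag choice and function name below are illustrative, not from the series). An iommufd-style user can pass GFP_KERNEL_ACCOUNT so the page-table allocations made on its behalf are charged to the calling memory cgroup:

    #include <linux/iommu.h>

    static int map_pages_sketch(struct iommu_domain *domain, unsigned long iova,
                                phys_addr_t paddr, size_t size)
    {
            /* GFP_KERNEL_ACCOUNT charges the IOPTE allocations to the
             * current memory cgroup instead of using unaccounted GFP_ATOMIC. */
            return iommu_map(domain, iova, paddr, size,
                             IOMMU_READ | IOMMU_WRITE, GFP_KERNEL_ACCOUNT);
    }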
2013 Dec 23
2
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On 12/17/2013 08:16 AM, Michael Dalton wrote:
> The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC
> mergeable rx buffer allocations. This commit migrates virtio-net to use
> per-receive queue page frags for GFP_ATOMIC allocation. This change unifies
> mergeable rx buffer memory allocation, which now will use skb_refill_frag()
> for both atomic and GFP-WAIT buffer allocations.
> &...
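A sketch of the per-queue page-frag scheme being described, using the generic skb_page_frag_refill() helper for this pattern (simplified and with illustrative names, not the driver's actual code):

    #include <linux/mm.h>
    #include <linux/skbuff.h>

    struct receive_queue_sketch {
            struct page_frag alloc_frag;    /* one frag allocator per rx queue */
    };

    static char *alloc_rx_buf_sketch(struct receive_queue_sketch *rq,
                                     unsigned int len, gfp_t gfp)
    {
            struct page_frag *frag = &rq->alloc_frag;
            char *buf;

            /* Refills frag->page with a fresh page when exhausted. */
            if (unlikely(!skb_page_frag_refill(len, frag, gfp)))
                    return NULL;

            buf = page_address(frag->page) + frag->offset;
            get_page(frag->page);   /* the rx buffer keeps its own reference */
            frag->offset += len;
            return buf;
    }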
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
...l capture a significant amount of the allocations.

Update the iommu_map() API to pass in the GFP argument, and fix all call
sites. Replace iommu_map_atomic().

Audit the "enterprise" iommu drivers to make sure they do the right thing.
Intel and S390 ignore the GFP argument and always use GFP_ATOMIC. This is
problematic for iommufd anyhow, so fix it. AMD and ARM SMMUv2/3 are already
correct.

A follow up series will be needed to capture the allocations made when the
iommu_domain itself is allocated, which will complete the job.

v2:
- Prohibit bad GFP flags in the iommu wrappers
- Split out...
2013 Jan 02
0
[PATCH] virtio: use chained scatterlists
...;
@@ -112,7 +112,7 @@ static void virtblk_add_buf_wait(struct virtio_blk *vblk,
 					  TASK_UNINTERRUPTIBLE);
 		spin_lock_irq(vblk->disk->queue->queue_lock);
-		if (virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
+		if (virtqueue_add_buf(vblk->vq, out, in, vbr,
 				      GFP_ATOMIC) < 0) {
 			spin_unlock_irq(vblk->disk->queue->queue_lock);
 			io_schedule();
@@ -128,12 +128,13 @@ static void virtblk_add_buf_wait(struct virtio_blk *vblk,
 }

 static inline void virtblk_add_req(struct virtblk_req *vbr,
-				   unsigned int out, unsigned int in)
+				   struct scat...
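For context on the mechanism: a chained scatterlist links separately built segments into one logical list, which is what lets the sg arguments drop out of virtqueue_add_buf() here. A generic sketch (not from the patch):

    #include <linux/scatterlist.h>

    static void chain_sgs_sketch(struct scatterlist *hdr, unsigned int hdr_ents,
                                 struct scatterlist *data, unsigned int data_ents)
    {
            /* sg_chain() turns hdr[hdr_ents - 1] into a link entry pointing
             * at 'data', so 'hdr' must be sized with one spare slot. */
            sg_chain(hdr, hdr_ents, data);
            sg_mark_end(&data[data_ents - 1]);
    }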
2013 Dec 23
3
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Mon, Dec 23, 2013 at 09:27:07AM -0800, Eric Dumazet wrote:
> On Mon, 2013-12-23 at 16:12 +0800, Jason Wang wrote:
> > On 12/17/2013 08:16 AM, Michael Dalton wrote:
> > > The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC
> > > mergeable rx buffer allocations. This commit migrates virtio-net to use
> > > per-receive queue page frags for GFP_ATOMIC allocation. This change unifies
> > > mergeable rx buffer memory allocation, which now will use skb_refill_frag()
> > > for both atomic...
2018 Apr 03
3
[PATCH] drm/virtio: fix vq wait_event condition
...drm/virtio/virtgpu_vq.c
index 48e4f1df6e..020070d483 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -293,7 +293,7 @@ static int virtio_gpu_queue_ctrl_buffer_locked(struct virtio_gpu_device *vgdev,
 	ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, vbuf, GFP_ATOMIC);
 	if (ret == -ENOSPC) {
 		spin_unlock(&vgdev->ctrlq.qlock);
-		wait_event(vgdev->ctrlq.ack_queue, vq->num_free);
+		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= outcnt + incnt);
 		spin_lock(&vgdev->ctrlq.qlock);
 		goto retry;
 	} else {
@@ -368,7 +368,7 @@ st...
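Condensed from the hunk above (same variables as the driver, so not self-contained), the heart of the fix is that the wait condition now matches the actual descriptor requirement:

    /* Sleep until the ring has room for the whole request (outcnt + incnt
     * descriptors), not merely one free slot; waking on a truthy num_free
     * that is still too small makes the loop busy-spin on -ENOSPC. */
    retry:
            ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, vbuf, GFP_ATOMIC);
            if (ret == -ENOSPC) {
                    spin_unlock(&vgdev->ctrlq.qlock);
                    wait_event(vgdev->ctrlq.ack_queue,
                               vq->num_free >= outcnt + incnt);
                    spin_lock(&vgdev->ctrlq.qlock);
                    goto retry;
            }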
2014 Jan 08
4
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...case ?
>
> Answer : nothing special should happen, we drop incoming traffic,
> and make sure the driver recovers properly. (like not NULL deref or
> crazy things like that)
>
> Why virtio_net should be different ?

Basically yes, we could start dropping packets immediately once GFP_ATOMIC
allocations fail and repost the buffer to host, and hope memory is available
by the time we get the next interrupt. But we wanted the host to have
visibility into the fact that we are out of memory and packets are being
dropped, so we did not want to repost. If we don't repost, how do we know
memory is fi...
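The recovery scheme being debated can be sketched like this (hypothetical names; virtio-net's actual mechanism is a deferred refill work item): if the GFP_ATOMIC refill fails in the interrupt path, retry later from process context where sleeping allocations are allowed.

    #include <linux/gfp.h>
    #include <linux/workqueue.h>

    struct rx_dev_sketch {
            struct delayed_work refill_work;
            /* ... */
    };

    /* Hypothetical helper: post rx buffers, return true if any succeeded. */
    static bool refill_rx_buffers(struct rx_dev_sketch *dev, gfp_t gfp);

    static void rx_napi_sketch(struct rx_dev_sketch *dev)
    {
            /* Interrupt/NAPI context: only GFP_ATOMIC is possible here.
             * On failure, punt to process context where GFP_KERNEL can
             * sleep and succeed once memory has been reclaimed. */
            if (!refill_rx_buffers(dev, GFP_ATOMIC))
                    schedule_delayed_work(&dev->refill_work, HZ / 2);
    }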