search for: __gfp_zero

Displaying 20 results from an estimated 317 matches for "__gfp_zero".

2017 Aug 03
6
[PATCH RESEND] mm: don't zero ballooned pages
...the hypervisor and shouldn't be given to the host ksmd to scan. Therefore, it is not necessary to zero ballooned pages, which is very time consuming when the number of pages is large. The ongoing fast balloon tests show that the time to balloon 7G of pages increases from ~491ms to 2.8 seconds with __GFP_ZERO added. So, this patch removes the flag. Signed-off-by: Wei Wang <wei.w.wang at intel.com> Cc: Michal Hocko <mhocko at kernel.org> Cc: Michael S. Tsirkin <mst at redhat.com> --- mm/balloon_compaction.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/balloon...
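
The change itself is a one-flag removal. A minimal sketch of the allocation site in mm/balloon_compaction.c, reconstructed from the truncated diff above (the exact surrounding code in the tree at the time may differ):

    struct page *page = alloc_page(balloon_mapping_gfp_mask() |
                                   __GFP_NOMEMALLOC | __GFP_NORETRY);
    /* __GFP_ZERO dropped: the guest hands these pages back to the
     * hypervisor anyway, so zeroing them first is wasted work. */
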
2017 Jul 31
2
[PATCH] mm: don't zero ballooned pages
...the hypervisor and shouldn't be given to the host ksmd to scan. Therefore, it is not necessary to zero ballooned pages, which is very time consuming when the number of pages is large. The ongoing fast balloon tests show that the time to balloon 7G of pages increases from ~491ms to 2.8 seconds with __GFP_ZERO added. So, this patch removes the flag. Signed-off-by: Wei Wang <wei.w.wang at intel.com> --- mm/balloon_compaction.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c index 9075aa5..b06d9fe 100644 --- a/mm/balloon_compacti...
2007 Apr 18
1
[PATCH 3/3] Gdt hotplug
...000000000 -0700 +++ linux-2.6.14-rc1/arch/i386/kernel/smpboot.c 2005-09-28 12:54:08.000000000 -0700 @@ -898,7 +898,8 @@ static int __devinit do_boot_cpu(int api * This grunge runs the startup process for * the targeted processor. */ - cpu_gdt_descr[cpu].address = __get_free_page(GFP_KERNEL|__GFP_ZERO); + if (!cpu_gdt_descr[cpu].address) + cpu_gdt_descr[cpu].address = __get_free_page(GFP_KERNEL|__GFP_ZERO); atomic_set(&init_deasserted, 0);
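
Without the guard, each offline/online cycle of a CPU would allocate a fresh GDT page and leak the previous one. A hedged sketch of the resulting logic, with a hypothetical error check that the original hunk does not include:

    /* Allocate the per-CPU GDT page only on the first boot of this
     * CPU; re-onlining the same CPU reuses the existing page. */
    if (!cpu_gdt_descr[cpu].address)
            cpu_gdt_descr[cpu].address =
                    __get_free_page(GFP_KERNEL | __GFP_ZERO);
    if (!cpu_gdt_descr[cpu].address)
            return -ENOMEM;     /* hypothetical: the hunk has no check */
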
2019 Oct 29
2
[RFC PATCH 0/2] virtio: allow per vq DMA domain
We used to use a single parent device for all DMA operations. This tends to complicate mdev-based hardware virtio datapath offloading, which may not implement the control path over the datapath (e.g. the ctrl vq in the case of virtio-net). So this series tries to introduce a per-vq DMA domain by allowing the transport to specify the parent device for each virtqueue. Then for the case of a virtio-mdev device, it can
2011 Feb 11
1
[PATCH 1/3]: Staging: hv: Use native page allocation/free functions
...us_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size, newchannel->channel_callback_context = context; /* Allocate the ring buffer */ - out = osd_page_alloc((send_ringbuffer_size + recv_ringbuffer_size) - >> PAGE_SHIFT); + out = (void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO, + get_order(send_ringbuffer_size + recv_ringbuffer_size)); + if (!out) return -ENOMEM; @@ -300,8 +301,8 @@ Cleanup: errorout: ringbuffer_cleanup(&newchannel->outbound); ringbuffer_cleanup(&newchannel->inbound); - osd_page_free(out, (send_ringbuffer_size + recv_ringbuffer...
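
The replacement pairs __get_free_pages() with get_order(), which rounds the combined ring-buffer size up to a power-of-two number of pages; the later free must use the same order. A minimal sketch of the pairing (variable names illustrative):

    /* Zeroed, physically contiguous allocation covering both rings. */
    unsigned long ring = __get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                          get_order(send_size + recv_size));
    if (!ring)
            return -ENOMEM;

    /* ... use the ring buffers ... */

    /* Free with the same order the allocation used. */
    free_pages(ring, get_order(send_size + recv_size));
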
2019 Jun 18
2
[PATCH v2 07/12] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing
...st); if (exbuf->num_bo_handles) { - bo_handles = kvmalloc_array(exbuf->num_bo_handles, - sizeof(uint32_t), GFP_KERNEL); + sizeof(uint32_t), GFP_KERNEL); buflist = kvmalloc_array(exbuf->num_bo_handles, - sizeof(struct ttm_validate_buffer), - GFP_KERNEL | __GFP_ZERO); + sizeof(struct drm_gem_object*), + GFP_KERNEL | __GFP_ZERO); if (!bo_handles || !buflist) { ret = -ENOMEM; goto out_unused_fd; @@ -181,19 +179,15 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data, ret = -ENOENT; goto out_unused_fd; }...
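
Both arrays here come from kvmalloc_array(), which overflow-checks count * size and falls back from kmalloc to vmalloc for large allocations; with __GFP_ZERO the memory comes back zeroed on either path, and kvfree() releases it regardless of which allocator backed it. A minimal usage sketch (names illustrative, unwind paths trimmed):

    struct drm_gem_object **buflist;

    buflist = kvmalloc_array(num_bo_handles, sizeof(*buflist),
                             GFP_KERNEL | __GFP_ZERO);
    if (!buflist)
            return -ENOMEM;
    /* ... populate and use the list ... */
    kvfree(buflist);    /* correct for both kmalloc and vmalloc backing */
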
2017 Aug 03
0
[PATCH RESEND] mm: don't zero ballooned pages
...;t be given to the host ksmd to scan. I find the MADV_DONTNEED reference still quite confusing. What do you think about the following wording instead: " Zeroing balloon pages is rather time consuming, especially when a lot of pages are in flight. E.g. 7GB worth of ballooned memory takes 2.8s with __GFP_ZERO while it takes ~491ms without it. The original commit argued that zeroing will help ksmd to merge these pages on the host but this argument assumes that the host actually marks balloon pages for ksm, which is not universally true. So we pay a performance penalty for something that might not even...
2020 Nov 03
0
[PATCH v2 0/8] slab: provide and use krealloc_array()
...e Perches <joe at perches.com> wrote: > > On Mon, 2020-11-02 at 16:20 +0100, Bartosz Golaszewski wrote: > > > From: Bartosz Golaszewski <bgolaszewski at baylibre.com> > Yeah so I had this concern for devm_krealloc() and even sent a patch > that extended it to honor __GFP_ZERO before I noticed that regular > krealloc() silently ignores __GFP_ZERO. I'm not sure if this is on > purpose. Maybe we should either make krealloc() honor __GFP_ZERO or > explicitly state in its documentation that it ignores it? And my voice here is to ignore for the same reasons: res...
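
The caveat under discussion is easy to trip over: krealloc() preserves the old contents but, per this thread, silently ignores __GFP_ZERO, so a caller growing a buffer must zero the new tail itself. A minimal sketch, assuming the caller tracks old_count and new_count (illustrative names):

    /* krealloc() keeps the old elements but does not zero the added
     * space, even if __GFP_ZERO is passed. */
    tmp = krealloc(buf, new_count * sizeof(*buf), GFP_KERNEL);
    if (!tmp)
            return -ENOMEM;     /* buf is still valid on failure */
    buf = tmp;
    memset(buf + old_count, 0, (new_count - old_count) * sizeof(*buf));
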
2024 Sep 23
1
[PATCH 2/2] nouveau/dmem: Fix memory leak in `migrate_to_ram` upon copy error
...vers/gpu/drm/nouveau/nouveau_dmem.c @@ -193,7 +193,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf) if (!spage || !(src & MIGRATE_PFN_MIGRATE)) goto done; - dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address); + dpage = alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO, vmf->vma, vmf->address); if (!dpage) goto done; -- 2.34.1
2020 Sep 25
2
[PATCH 17/18] dma-iommu: implement ->alloc_noncoherent
...> + if (!gfpflags_allow_blocking(gfp)) { > + struct page *page; > + > + page = dma_common_alloc_pages(dev, size, handle, dir, gfp); > + if (!page) > + return NULL; > + return page_address(page); > + } > + > + return iommu_dma_alloc_remap(dev, size, handle, gfp | __GFP_ZERO, > + PAGE_KERNEL, 0); iommu_dma_alloc_remap() makes use of the DMA_ATTR_ALLOC_SINGLE_PAGES attribute to optimize the allocations for devices which don't care about how contiguous the backing memory is. Do you think we could add an attrs argument to this function and pass it there?...
2014 Sep 17
4
[PATCH v5 2/3] virtio_pci: Use the DMA API for virtqueues when possible
...(struct virtio_device *vdev, unsigned index, > > info->num = num; > info->msix_vector = msix_vec; > + info->use_dma_api = vp_use_dma_api(); > > - size = PAGE_ALIGN(vring_size(num, VIRTIO_PCI_VRING_ALIGN)); > - info->queue = alloc_pages_exact(size, GFP_KERNEL|__GFP_ZERO); > + size = vring_size(num, VIRTIO_PCI_VRING_ALIGN); > + if (info->use_dma_api) { > + info->queue = dma_zalloc_coherent(vdev->dev.parent, size, > + &info->queue_dma_addr, > + GFP_KERNEL); > + } else { > + info->queue = alloc_pages_exact(PAGE_...
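
The allocation becoming dual-path means the teardown must mirror it. A hedged sketch of the free side implied by the hunk (field names follow the quoted code; dma_zalloc_coherent() was the zeroing variant in 2014-era kernels, while later kernels zero in dma_alloc_coherent() itself):

    if (info->use_dma_api)
            dma_free_coherent(vdev->dev.parent, size,
                              info->queue, info->queue_dma_addr);
    else
            free_pages_exact(info->queue, PAGE_ALIGN(size));
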
2014 Sep 17
1
[PATCH v5 2/3] virtio_pci: Use the DMA API for virtqueues when possible
...(struct virtio_device *vdev, unsigned index, > > info->num = num; > info->msix_vector = msix_vec; > + info->use_dma_api = vp_use_dma_api(); > > - size = PAGE_ALIGN(vring_size(num, VIRTIO_PCI_VRING_ALIGN)); > - info->queue = alloc_pages_exact(size, GFP_KERNEL|__GFP_ZERO); > + size = vring_size(num, VIRTIO_PCI_VRING_ALIGN); > + if (info->use_dma_api) { > + info->queue = dma_zalloc_coherent(vdev->dev.parent, size, > + &info->queue_dma_addr, > + GFP_KERNEL); > + } else { > + info->queue = alloc_pages_exact(PAGE_...
2017 Jul 31
0
[PATCH] mm: don't zero ballooned pages
...where this MADV_DONTNEED is done, please? > Therefore, it is not > necessary to zero ballooned pages, which is very time consuming when > the page amount is large. The ongoing fast balloon tests show that the > time to balloon 7G pages is increased from ~491ms to 2.8 seconds with > __GFP_ZERO added. So, this patch removes the flag. Please make it obvious that this is a revert of bb01b64cfab7 ("mm/balloon_compaction.c: enqueue zero page to balloon device"). > Signed-off-by: Wei Wang <wei.w.wang at intel.com> > --- > mm/balloon_compaction.c | 2 +- > 1 file...