search for: alloc_pages_node

Displaying results from an estimated 25 matches for "alloc_pages_node".

2020 Jul 17
0
[PATCH] virtio_ring: use alloc_pages_node for NUMA-aware allocation
...on v5.8-rc5 next-20200716] [If your patch is applied to the wrong git tree, kindly drop us a note. When submitting a patch, we suggest using '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Shile-Zhang/virtio_ring-use-alloc_pages_node-for-NUMA-aware-allocation/20200717-173734 base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 8882572675c1bb1cc544f4e229a11661f1fc52e4 config: alpha-allyesconfig (attached as .config) compiler: alpha-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://...
2020 Jul 21
0
[PATCH v2] virtio_ring: use alloc_pages_node for NUMA-aware allocation
On Tue, Jul 21, 2020 at 03:00:13PM +0800, Shile Zhang wrote: > Use alloc_pages_node() to allocate memory for the vring queue with proper > NUMA affinity. > > Reported-by: kernel test robot <lkp at intel.com> > Suggested-by: Jiang Liu <liuj97 at gmail.com> > Signed-off-by: Shile Zhang <shile.zhang at linux.alibaba.com> Do you observe any performance gain...
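For context, a minimal sketch of what a NUMA-aware vring allocation looks like (illustrative only, not the actual patch; the helper name and its callers are assumptions): alloc_pages() typically allocates on the node of the CPU running the probe, while alloc_pages_node() lets the driver target the node the device is attached to via dev_to_node().

    #include <linux/device.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hedged sketch, not the real virtio_ring code: allocate the vring
     * on the NUMA node of the underlying device rather than on whatever
     * node the probing CPU happens to run on. */
    static void *vring_alloc_queue_sketch(struct device *dev, size_t size,
                                          gfp_t gfp)
    {
            struct page *page;

            page = alloc_pages_node(dev_to_node(dev), gfp | __GFP_ZERO,
                                    get_order(size));
            return page ? page_address(page) : NULL;
    }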
2020 Jul 17
0
[PATCH] virtio_ring: use alloc_pages_node for NUMA-aware allocation
...on v5.8-rc5 next-20200716] [If your patch is applied to the wrong git tree, kindly drop us a note. When submitting a patch, we suggest using '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Shile-Zhang/virtio_ring-use-alloc_pages_node-for-NUMA-aware-allocation/20200717-173734 base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 8882572675c1bb1cc544f4e229a11661f1fc52e4 config: s390-randconfig-r015-20200717 (attached as .config) compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project ed6b578040a8...
2014 Sep 17
3
blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)
...t;unsubscribe kvm" in > > the body of a message to majordomo at vger.kernel.org > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > Digging through the code, I think I found a possible cause: tags->rqs[..] is not initialized with zeroes (via alloc_pages_node in blk-mq.c:blk_mq_init_rq_map()). When a request is created: 1. __blk_mq_alloc_request() gets a free tag (thus e.g. removing it from bitmap_tags) 2. __blk_mq_alloc_request() initializes is via blk_mq_rq_ctx_init(). The struct is filled with life and rq->q is set. When blk_mq_hw_ctx_check_t...
2016 Dec 09
2
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> On 12/08/2016 08:45 PM, Li, Liang Z wrote: > > What's the conclusion of your discussion? It seems you want some > > statistics before deciding whether to rip the bitmap out of the ABI, > > am I right? > > I think Andrea and David feel pretty strongly that we should remove the > bitmap, unless we have some data to support keeping it. I don't feel as >
2014 Sep 17
0
blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)
...; the body of a message to majordomo at vger.kernel.org > > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > > > > Digging through the code, I think I found a possible cause: > > tags->rqs[..] is not initialized with zeroes (via alloc_pages_node in > blk-mq.c:blk_mq_init_rq_map()). Yes, it may cause a problem when the request is allocated for the first time: the timeout handler may come just after the allocation and before its initialization, and an oops is then triggered because of garbage data in the request. --
2016 Dec 09
0
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
...t'd be good to check the stats or the benchmark on large guests, at least one hundred gigabytes or so. Changing topic but still about the ABI features needed, so it may be relevant for this discussion: 1) vNUMA locality: i.e. allowing the host to specify which vNODEs to take memory from, using alloc_pages_node in the guest. So you can ask to take X pages from vnode A, Y pages from vnode B, in one vmenter. 2) allowing qemu to tell the guest to stop inflating the balloon and report a fragmentation limit being hit, when sync-compaction-powered allocations fail at a certain power-of-two order granularit...
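A hedged sketch of point 1 (the helper is hypothetical; no such virtio-balloon interface exists in this thread): per-node inflation reduces to calling alloc_pages_node() with the vNODE the host asked for and counting how many pages could actually be taken.

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hypothetical guest-side helper for the vNUMA-locality idea above;
     * not an existing virtio-balloon API. Takes up to nr_pages order-0
     * pages from guest node nid and returns how many were obtained. */
    static unsigned long balloon_take_from_node(int nid, unsigned long nr_pages)
    {
            unsigned long taken = 0;

            while (taken < nr_pages) {
                    struct page *page = alloc_pages_node(nid,
                                    GFP_HIGHUSER_MOVABLE | __GFP_NOWARN, 0);
                    if (!page)
                            break;  /* node exhausted; report the shortfall */
                    /* ... queue the page's PFN for the host here ... */
                    taken++;
            }
            return taken;
    }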
2014 Sep 17
2
blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)
...sage to majordomo at vger.kernel.org >>>> More majordomo info at http://vger.kernel.org/majordomo-info.html >>>> >>> >> >> Digging through the code, I think I found a possible cause: >> >> tags->rqs[..] is not initialized with zeroes (via alloc_pages_node in >> blk-mq.c:blk_mq_init_rq_map()). > > Yes, it may cause a problem when the request is allocated for the first time, > and the timeout handler may come just after the allocation and before its > initialization; then an oops is triggered because of garbage data in the request. > > --...
2020 Sep 15
0
[PATCH RFC v1 09/18] x86/hyperv: provide a bunch of helper functions
...
> +	num_allocations = 0;
> +	i = 0;
> +	order = HV_DEPOSIT_MAX_ORDER;
> +
> +	while (num_pages) {
> +		/* Find highest order we can actually allocate */
> +		desired_order = 31 - __builtin_clz(num_pages);
> +		order = MIN(desired_order, order);
> +		do {
> +			pages[i] = alloc_pages_node(node, GFP_KERNEL, order);
> +			if (!pages[i]) {
> +				if (!order) {
> +					goto err_free_allocations;
> +				}
> +				--order;
> +			}
> +		} while (!pages[i]);
> +
> +		split_page(pages[i], order);
> +		counts[i] = 1 << order;
> +		num_pages -= counts[i];
...
2014 Sep 12
3
blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)
On 09/12/2014 01:54 PM, Ming Lei wrote: > On Thu, Sep 11, 2014 at 6:26 PM, Christian Borntraeger > <borntraeger at de.ibm.com> wrote: >> Folks, >> >> we have seen the following bug with 3.16 as a KVM guest. I suspect the blk-mq rework that happened between 3.15 and 3.16, but it could be something completely different. >> > > Care to share how you reproduce
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
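For reference, a minimal sketch of the charging mechanism the series relies on (the helper is illustrative, not actual iommu driver code): GFP_KERNEL_ACCOUNT is simply GFP_KERNEL | __GFP_ACCOUNT, so routing a page-table page allocation through the caller's memory cgroup is a one-flag change.

    #include <linux/gfp.h>

    /* Hedged sketch; alloc_iopte_page() is a hypothetical helper, not an
     * iommu driver function. GFP_KERNEL_ACCOUNT (== GFP_KERNEL |
     * __GFP_ACCOUNT) charges the page to the calling task's memcg. */
    static struct page *alloc_iopte_page(int node)
    {
            return alloc_pages_node(node, GFP_KERNEL_ACCOUNT | __GFP_ZERO, 0);
    }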
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2020 Sep 15
0
[PATCH 15/18] dma-mapping: add a new dma_alloc_pages API
...IG_MMU */
 }
+
+struct page *dma_common_alloc_pages(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+	struct page *page;
+
+	page = dma_alloc_contiguous(dev, size, gfp);
+	if (!page)
+		page = alloc_pages_node(dev_to_node(dev), gfp, get_order(size));
+	if (!page)
+		return NULL;
+
+	*dma_handle = ops->map_page(dev, page, 0, size, dir,
+			DMA_ATTR_SKIP_CPU_SYNC);
+	if (*dma_handle == DMA_MAPPING_ERROR) {
+		dma_free_contiguous(dev, page, size);
+		return NULL;
+	}
+
+	memset(page_address(page), 0...
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first