Displaying 20 results from an estimated 23 matches for "gfp_kernel_account".
2023 Jan 18
4
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
Change the sg_alloc_table_from_pages() allocation that was hardwired to
GFP_KERNEL to use the gfp parameter like the other allocations in this
function.
Auditing says this is never called from an atomic context, so it is safe
as-is, but it reads wrong.
Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
 drivers/iommu/dma-iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff
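The elided hunk boils down to swapping the hard-coded flag for the function's gfp argument. A minimal sketch of that kind of change, assuming the call in __iommu_dma_alloc_noncontiguous() looks roughly like this (the error label is illustrative, not copied from the patch):

-	if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
+	if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp))
 		goto out_free_pages;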
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
their own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first step in fixing it.
The iommu driver contract already includes a 'gfp' argument to the
map_pages op,...
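For reference, GFP_KERNEL_ACCOUNT is GFP_KERNEL with __GFP_ACCOUNT set, which makes the allocation count against the calling task's memory cgroup. A minimal, hypothetical allocation showing the pattern the cover letter refers to (the struct name is made up for illustration):

	/* charged to the caller's memory cgroup, unlike plain GFP_KERNEL */
	struct iommufd_thing *obj = kzalloc(sizeof(*obj), GFP_KERNEL_ACCOUNT);
	if (!obj)
		return -ENOMEM;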
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
their own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first step in fixing it.
The iommu driver contract already includes a 'gfp' argument to the
map_pages op,...
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
their own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first step in fixing it.
The iommu driver contract already includes a 'gfp' argument to the
map_pages op,...
2023 May 31
1
[syzbot] [kvm?] [net?] [virt?] general protection fault in vhost_work_queue
...sk);
+	WARN_ON(!llist_empty(&dev->worker.work_list));
+	WRITE_ONCE(dev->worker.vtsk, NULL);
 }
 
 static int vhost_worker_create(struct vhost_dev *dev)
 {
-	struct vhost_worker *worker;
 	struct vhost_task *vtsk;
 	char name[TASK_COMM_LEN];
 	int ret;
 
-	worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
-	if (!worker)
-		return -ENOMEM;
-
-	dev->worker = worker;
-	worker->kcov_handle = kcov_common_handle();
-	init_llist_head(&worker->work_list);
+	dev->worker.kcov_handle = kcov_common_handle();
+	init_llist_head(&dev->worker.work_list);
 	snprintf(name, sizeof(name), "...
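The quoted diff replaces a separately allocated worker with one embedded in struct vhost_dev, so there is no kzalloc(..., GFP_KERNEL_ACCOUNT) to fail or track. A rough sketch of the data-structure side of that change (struct heavily abridged, not the full definition):

 struct vhost_dev {
 	...
-	struct vhost_worker *worker;	/* was kzalloc'd in vhost_worker_create() */
+	struct vhost_worker worker;	/* embedded, no separate allocation */
 	...
 };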
2023 Jun 05
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
...vhost_task_stop(dev->worker.vtsk);
+	dev->worker.kcov_handle = 0;
+	dev->worker.vtsk = NULL;
 }
 
 static int vhost_worker_create(struct vhost_dev *dev)
 {
-	struct vhost_worker *worker;
 	struct vhost_task *vtsk;
 	char name[TASK_COMM_LEN];
-	int ret;
-
-	worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
-	if (!worker)
-		return -ENOMEM;
 
-	dev->worker = worker;
-	worker->kcov_handle = kcov_common_handle();
-	init_llist_head(&worker->work_list);
+	init_llist_head(&dev->worker.work_list);
 	snprintf(name, sizeof(name), "vhost-%d", current->pid);
 
-	vtsk = vhost_t...
2023 Jan 20
0
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
...particular context is why 
we're using iommu_map_sg_atomic() further down - that seems to have been 
an oversight in 781ca2de89ba, since this particular path has never 
supported being called in atomic context.
Overall I'm starting to wonder if it might not be better to stick a "use 
GFP_KERNEL_ACCOUNT if you allocate" flag in the domain for any level of 
the API internals to pick up as appropriate, rather than propagate 
per-call gfp flags everywhere. As it stands we're still missing 
potential pagetable and other domain-related allocations by drivers in 
.attach_dev and even (in probab...
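A hedged sketch of the alternative Robin is floating here, i.e. a per-domain flag that the allocation internals consult instead of threading a gfp argument through every call; the field name and placement below are hypothetical, not an existing API:

	/* hypothetical domain flag, checked wherever pagetable memory is allocated */
	if (domain->charge_allocations)
		gfp |= __GFP_ACCOUNT;
	ptes = (void *)__get_free_pages(gfp, order);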
2023 Aug 08
0
[Bridge] [PATCH v2 11/14] networking: Update to register_net_sysctl_sz
...for the unprivileged users.
> > 
...
> >   	const char *dev_name_source;
> >   	char neigh_path[ sizeof("net//neigh/") + IFNAMSIZ + IFNAMSIZ ];
> >   	char *p_name;
> > +	size_t neigh_vars_size;
> >   	t = kmemdup(&neigh_sysctl_template, sizeof(*t), GFP_KERNEL_ACCOUNT);
> >   	if (!t)
> > @@ -3790,11 +3791,13 @@ int neigh_sysctl_register(struct net_device *dev, struct neigh_parms *p,
> >   		t->neigh_vars[i].extra2 = p;
> >   	}
> > +	neigh_vars_size = ARRAY_SIZE(t->neigh_vars);
> >   	if (dev) {
> >   		dev_name_s...
2023 Jan 06
2
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...000, Robin Murphy wrote:
> On 2023-01-06 16:42, Jason Gunthorpe wrote:
> > The internal mechanisms support this, but instead of exposing the gfp to
> > the caller it wraps it into iommu_map() and iommu_map_atomic().
> > 
> > Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.
> 
> FWIW, since we *do* have two variants already, I think I'd have a mild
> preference for leaving the regular map calls as-is (i.e. implicit
> GFP_KERNEL), and just generalising the _atomic versions for the special
> cases.
I think it is just better to follow kernel conventi...
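For context, the wrapping being discussed is roughly this shape: before the series, both public entry points funnel into an internal helper that already takes a gfp, so the flag simply is not exposed to callers (paraphrased, not a verbatim copy of the kernel source):

 int iommu_map(struct iommu_domain *domain, unsigned long iova,
 	      phys_addr_t paddr, size_t size, int prot)
 {
 	might_sleep();
 	return _iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
 }

 int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot)
 {
 	return _iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
 }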
2023 Jun 01
1
[syzbot] [kvm?] [net?] [virt?] general protection fault in vhost_work_queue
...to zero here,
but maybe we don't need to.
Thanks,
Stefano
> }
>
> static int vhost_worker_create(struct vhost_dev *dev)
> {
>-	struct vhost_worker *worker;
> 	struct vhost_task *vtsk;
> 	char name[TASK_COMM_LEN];
> 	int ret;
>
>-	worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
>-	if (!worker)
>-		return -ENOMEM;
>-
>-	dev->worker = worker;
>-	worker->kcov_handle = kcov_common_handle();
>-	init_llist_head(&worker->work_list);
>+	dev->worker.kcov_handle = kcov_common_handle();
>+	init_llist_head(&dev->worker.work_list);
>...
2023 Jun 06
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
...->worker.kcov_handle = 0;
>+	dev->worker.vtsk = NULL;
> }
>
> static int vhost_worker_create(struct vhost_dev *dev)
> {
>-	struct vhost_worker *worker;
> 	struct vhost_task *vtsk;
> 	char name[TASK_COMM_LEN];
>-	int ret;
>-
>-	worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
>-	if (!worker)
>-		return -ENOMEM;
>
>-	dev->worker = worker;
>-	worker->kcov_handle = kcov_common_handle();
>-	init_llist_head(&worker->work_list);
>+	init_llist_head(&dev->worker.work_list);
> 	snprintf(name, sizeof(name), "vhost-%d", curre...
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
The internal mechanisms support this, but instead of exposing the gfp to
the caller it wraps it into iommu_map() and iommu_map_atomic().
Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.
Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
 arch/arm/mm/dma-mapping.c                       | 11 +++++++----
 .../gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c |  3 ++-
 drivers/gpu/drm/tegra/drm.c                     |  2 +-
 drivers/gpu/host1x/cdma.c                       |  2 +...
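Judging from the subject and the thread above, the interface change amounts to exposing that gfp in the public prototype, so callers such as iommufd can pass GFP_KERNEL_ACCOUNT and have the IOPTE allocations charged to their cgroup (prototype paraphrased, not copied from the patch):

-int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot);
+int iommu_map(struct iommu_domain *domain, unsigned long iova,
+	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp);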