Displaying 17 results from an estimated 17 matches for "err_unmap".
2013 Jul 25
0
How to get the PFN of a vmalloc'ed address in a domU?
...t page * of these pages to map them in the user space:
Mapping granted pages (that's xensocket's code):
if (!(x->buffer_area = alloc_vm_area(buffer_num_pages * PAGE_SIZE, NULL))) {
	DPRINTK("error: cannot allocate %d buffer pages\n", buffer_num_pages);
	goto err_unmap;
}
x->buffer_addr = (unsigned long)x->buffer_area->addr;
grefp = &d->buffer_first_gref;
for (i = 0; i < buffer_num_pages; i++) {
	printk(KERN_INFO "Mounting GREF %d\n", *grefp);
	memset(&op, 0, sizeof(op));
	op.host_addr = x->buffer_addr + i * PAGE_SIZE;...
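A side note on the question in the subject line: a vmalloc'ed (or alloc_vm_area'd) range is only virtually contiguous, so the PFN has to be looked up one page at a time. A minimal sketch of the usual route, assuming the page is actually mapped at that address (the helper name is made up, and in a PV domU the result is still a guest PFN, not a machine frame number):

#include <linux/mm.h>	/* vmalloc_to_page(), page_to_pfn() */

/* Illustrative only: translate one vmalloc'ed address to its PFN. */
static unsigned long vaddr_to_pfn(const void *vaddr)
{
	struct page *page = vmalloc_to_page(vaddr);

	return page ? page_to_pfn(page) : 0;
}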
2019 Sep 17
0
[RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
...uf, foo->dev);
> if (IS_ERR(attach))
> 	return PTR_ERR(attach);
>
> sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
> if (IS_ERR(sgt)) {
> 	ret = PTR_ERR(sgt);
> 	goto err_detach;
> }
>
> if (sgt->nents != 1) {
> 	ret = -EINVAL;
> 	goto err_unmap;
> }
>
> *id = sg_dma_address(sgt->sgl);
I agree with Gerd, this looks pretty horrible to me.
The usual way we've done this kind of special dma-bufs is:
- They all get allocated at the same place, through some library or whatever.
- You add a dma_buf_is_virtio(dma_buf) fun...
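For reference, the quoted fragment reads as a complete function roughly like the sketch below, with the err_unmap/err_detach unwind labels it jumps to filled in. This is a hedged reconstruction using the stock dma-buf API, not the actual patch; "struct foo" and the function name stand in for the poster's elided context.

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct foo { struct device *dev; };	/* stand-in for the elided context */

static int import_buf_id(struct dma_buf *buf, struct foo *foo, dma_addr_t *id)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret;

	attach = dma_buf_attach(buf, foo->dev);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto err_detach;
	}

	/* The scheme only works if the buffer maps to a single DMA segment. */
	if (sgt->nents != 1) {
		ret = -EINVAL;
		goto err_unmap;
	}

	*id = sg_dma_address(sgt->sgl);
	return 0;

err_unmap:
	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
err_detach:
	dma_buf_detach(buf, attach);
	return ret;
}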
2019 Oct 08
0
[RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
...;
> > > if (IS_ERR(sgt)) {
> > > 	ret = PTR_ERR(sgt);
> > > 	goto err_detach;
> > > }
> > >
> > > if (sgt->nents != 1) {
> > > 	ret = -EINVAL;
> > > 	goto err_unmap;
> > > }
> > >
> > > *id = sg_dma_address(sgt->sgl);
> >
> > I agree with Gerd, this looks pretty horrible to me.
> >
> > The usual way we've done this kind of special dma-bufs is:
> >
> > - They all get allocated at...
2019 Oct 08
0
[RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
...ret = PTR_ERR(sgt);
> > > > > 	goto err_detach;
> > > > > }
> > > > >
> > > > > if (sgt->nents != 1) {
> > > > > 	ret = -EINVAL;
> > > > > 	goto err_unmap;
> > > > > }
> > > > >
> > > > > *id = sg_dma_address(sgt->sgl);
> > > >
> > > > I agree with Gerd, this looks pretty horrible to me.
> > > >
> > > > The usual way we've done this kind o...
2019 Oct 16
0
[RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
...> goto err_detach;
> > > > > > > }
> > > > > > >
> > > > > > > if (sgt->nents != 1) {
> > > > > > > 	ret = -EINVAL;
> > > > > > > 	goto err_unmap;
> > > > > > > }
> > > > > > >
> > > > > > > *id = sg_dma_address(sgt->sgl);
> > > > > >
> > > > > > I agree with Gerd, this looks pretty horrible to me.
> > > > > >
> ...
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...mu/iommufd/pages.c
@@ -456,7 +456,8 @@ static int batch_iommu_map_small(struct iommu_domain *domain,
 			size % PAGE_SIZE);
 	while (size) {
-		rc = iommu_map(domain, iova, paddr, PAGE_SIZE, prot);
+		rc = iommu_map(domain, iova, paddr, PAGE_SIZE, prot,
+			       GFP_KERNEL);
 		if (rc)
 			goto err_unmap;
 		iova += PAGE_SIZE;
@@ -500,7 +501,8 @@ static int batch_to_domain(struct pfn_batch *batch, struct iommu_domain *domain,
 	else
 		rc = iommu_map(domain, iova,
 			       PFN_PHYS(batch->pfns[cur]) + page_offset,
-			       next_iova - iova, area->iommu_prot);
+			       next_iova - i...
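The hunk itself is mechanical, but the pattern it touches is worth spelling out: map a range page by page, passing the new gfp argument, and unwind whatever was already mapped if one step fails. A hedged sketch of that shape (names are illustrative, not the iommufd code):

#include <linux/iommu.h>

static int map_range(struct iommu_domain *domain, unsigned long iova,
		     phys_addr_t paddr, size_t size, int prot)
{
	size_t mapped = 0;
	int rc;

	while (mapped < size) {
		/* Post-patch signature: the caller now chooses the GFP flags. */
		rc = iommu_map(domain, iova + mapped, paddr + mapped,
			       PAGE_SIZE, prot, GFP_KERNEL);
		if (rc)
			goto err_unmap;
		mapped += PAGE_SIZE;
	}
	return 0;

err_unmap:
	iommu_unmap(domain, iova, mapped);	/* undo only what succeeded */
	return rc;
}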
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...mu/iommufd/pages.c
@@ -456,7 +456,8 @@ static int batch_iommu_map_small(struct iommu_domain *domain,
 			size % PAGE_SIZE);
 	while (size) {
-		rc = iommu_map(domain, iova, paddr, PAGE_SIZE, prot);
+		rc = iommu_map(domain, iova, paddr, PAGE_SIZE, prot,
+			       GFP_KERNEL);
 		if (rc)
 			goto err_unmap;
 		iova += PAGE_SIZE;
@@ -500,7 +501,8 @@ static int batch_to_domain(struct pfn_batch *batch, struct iommu_domain *domain,
 	else
 		rc = iommu_map(domain, iova,
 			       PFN_PHYS(batch->pfns[cur]) + page_offset,
-			       next_iova - iova, area->iommu_prot);
+			       next_iova - i...
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...mu/iommufd/pages.c
@@ -456,7 +456,8 @@ static int batch_iommu_map_small(struct iommu_domain *domain,
 			size % PAGE_SIZE);
 	while (size) {
-		rc = iommu_map(domain, iova, paddr, PAGE_SIZE, prot);
+		rc = iommu_map(domain, iova, paddr, PAGE_SIZE, prot,
+			       GFP_KERNEL);
 		if (rc)
 			goto err_unmap;
 		iova += PAGE_SIZE;
@@ -500,7 +501,8 @@ static int batch_to_domain(struct pfn_batch *batch, struct iommu_domain *domain,
 	else
 		rc = iommu_map(domain, iova,
 			       PFN_PHYS(batch->pfns[cur]) + page_offset,
-			       next_iova - iova, area->iommu_prot);
+			       next_iova - i...
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
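For context on the charging mechanism mentioned above: GFP_KERNEL_ACCOUNT is simply GFP_KERNEL with __GFP_ACCOUNT set, so the allocation is billed to the calling task's memory cgroup and uncharged again when freed. A minimal illustration (not iommufd code):

#include <linux/slab.h>

/* Charged to the caller's memcg; kfree() uncharges it automatically. */
static void *alloc_accounted(size_t size)
{
	return kzalloc(size, GFP_KERNEL_ACCOUNT);
}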
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2011 Jun 02
0
[PATCH] pci: Use pr_<level> and pr_fmt
...r_drhd_unit *drhd)
 #ifdef CONFIG_DMAR
 	agaw = iommu_calculate_agaw(iommu);
 	if (agaw < 0) {
-		printk(KERN_ERR
-			"Cannot get a valid agaw for iommu (seq_id = %d)\n",
+		pr_err("Cannot get a valid agaw for iommu (seq_id = %d)\n",
 			iommu->seq_id);
 		goto err_unmap;
 	}
 	msagaw = iommu_calculate_max_sagaw(iommu);
 	if (msagaw < 0) {
-		printk(KERN_ERR
-			"Cannot get a valid max agaw for iommu (seq_id = %d)\n",
-			iommu->seq_id);
+		pr_err("Cannot get a valid max agaw for iommu (seq_id = %d)\n",
+			iommu->seq_id);
 		goto...
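The conversion leans on the pr_fmt() convention: define it before any includes and every pr_*() call in the file picks up the prefix automatically, which is what lets the multi-line printk(KERN_ERR ...) calls collapse into one-line pr_err()s. A hedged illustration (the prefix string here is made up, not necessarily what the patch uses):

#define pr_fmt(fmt) "DMAR: " fmt

#include <linux/printk.h>

static void report_bad_agaw(int seq_id)
{
	/* Logs e.g.: "DMAR: Cannot get a valid agaw for iommu (seq_id = 3)" */
	pr_err("Cannot get a valid agaw for iommu (seq_id = %d)\n", seq_id);
}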
2011 Jun 02
0
[PATCH] pci: Use pr_<level> and pr_fmt
...r_drhd_unit *drhd)
 #ifdef CONFIG_DMAR
 	agaw = iommu_calculate_agaw(iommu);
 	if (agaw < 0) {
-		printk(KERN_ERR
-			"Cannot get a valid agaw for iommu (seq_id = %d)\n",
+		pr_err("Cannot get a valid agaw for iommu (seq_id = %d)\n",
 			iommu->seq_id);
 		goto err_unmap;
 	}
 	msagaw = iommu_calculate_max_sagaw(iommu);
 	if (msagaw < 0) {
-		printk(KERN_ERR
-			"Cannot get a valid max agaw for iommu (seq_id = %d)\n",
-			iommu->seq_id);
+		pr_err("Cannot get a valid max agaw for iommu (seq_id = %d)\n",
+			iommu->seq_id);
 		goto...
2011 Jun 02
0
[PATCH] pci: Use pr_<level> and pr_fmt
...r_drhd_unit *drhd)
 #ifdef CONFIG_DMAR
 	agaw = iommu_calculate_agaw(iommu);
 	if (agaw < 0) {
-		printk(KERN_ERR
-			"Cannot get a valid agaw for iommu (seq_id = %d)\n",
+		pr_err("Cannot get a valid agaw for iommu (seq_id = %d)\n",
 			iommu->seq_id);
 		goto err_unmap;
 	}
 	msagaw = iommu_calculate_max_sagaw(iommu);
 	if (msagaw < 0) {
-		printk(KERN_ERR
-			"Cannot get a valid max agaw for iommu (seq_id = %d)\n",
-			iommu->seq_id);
+		pr_err("Cannot get a valid max agaw for iommu (seq_id = %d)\n",
+			iommu->seq_id);
 		goto...