Displaying 20 results from an estimated 24 matches for "ioprot".
2014 Jun 27
0
[PATCH] drm/ttm: recognize ARM arch in ioprot handler
From: Lucas Stach <dev at lynxeye.de>
Nouveau can now be used on ARM, so add an ioprot handler for this
architecture.
Signed-off-by: Lucas Stach <dev at lynxeye.de>
Signed-off-by: Alexandre Courbot <acourbot at nvidia.com>
---
drivers/gpu/drm/ttm/ttm_bo_util.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers...
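For orientation, ttm_io_prot() picks a kernel page protection per architecture, and the one-line change adds ARM to the branch that produces write-combined or uncached mappings. A simplified sketch of that branch, not the exact hunk (the preprocessor layout is approximate; the real code is in drivers/gpu/drm/ttm/ttm_bo_util.c):

pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
{
	/* x86 handling elided; non-coherent architectures must not fall
	 * through with an unchanged, cacheable protection. */
#if defined(__ia64__) || defined(__arm__)	/* "|| defined(__arm__)" is roughly what the patch adds */
	if (caching_flags & TTM_PL_FLAG_WC)
		tmp = pgprot_writecombine(tmp);
	else
		tmp = pgprot_noncached(tmp);
#endif
	return tmp;
}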
2019 Sep 08
0
[PATCH V6 4/5] iommu/dma-iommu: Use the dev->coherent_dma_mask
...) | IOMMU_MMIO,
+ dma_get_mask(dev));
}
static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
@@ -1041,7 +1042,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
if (!cpu_addr)
return NULL;
- *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot);
+ *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
+ dev->coherent_dma_mask);
if (*handle == DMA_MAPPING_ERROR) {
__iommu_dma_free(dev, size, cpu_addr);
return NULL;
--
2.20.1
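In rough terms the patch makes __iommu_dma_map() take an explicit DMA mask, so coherent allocations are bounded by dev->coherent_dma_mask instead of always by dma_get_mask(dev). A hedged sketch with alignment and error paths trimmed (signatures approximate drivers/iommu/dma-iommu.c of that era):

static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
				  size_t size, int prot, u64 dma_mask)
{
	struct iommu_domain *domain = iommu_get_dma_domain(dev);
	struct iommu_dma_cookie *cookie = domain->iova_cookie;
	dma_addr_t iova;

	/* The mask now comes from the caller and bounds the IOVA allocation. */
	iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
	if (!iova)
		return DMA_MAPPING_ERROR;

	if (iommu_map(domain, iova, phys, size, prot)) {
		iommu_dma_free_iova(cookie, iova, size);
		return DMA_MAPPING_ERROR;
	}
	return iova;
}

Streaming callers keep passing dma_get_mask(dev), as in the first hunk above, while the coherent allocation path passes dev->coherent_dma_mask, as in the second.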
2019 Dec 21
0
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
...dle, size, dir, attrs);
}
static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr)
@@ -1056,7 +1117,6 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
{
bool coherent = dev_is_dma_coherent(dev);
- int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
struct page *page = NULL;
void *cpu_addr;
@@ -1074,8 +1134,9 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
if (!cpu_addr)
return NULL;
- *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
- dev->...
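The ioprot value that this hunk stops computing locally comes from dma_info_to_prot(), which maps a DMA API direction plus coherency onto IOMMU_* prot bits. A close approximation of that helper, which may not match the current tree line for line:

static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
			    unsigned long attrs)
{
	/* Coherent devices get IOMMU_CACHE so their mappings stay cacheable. */
	int prot = coherent ? IOMMU_CACHE : 0;

	if (attrs & DMA_ATTR_PRIVILEGED)
		prot |= IOMMU_PRIV;

	switch (dir) {
	case DMA_BIDIRECTIONAL:
		return prot | IOMMU_READ | IOMMU_WRITE;
	case DMA_TO_DEVICE:
		return prot | IOMMU_READ;
	case DMA_FROM_DEVICE:
		return prot | IOMMU_WRITE;
	default:
		return 0;
	}
}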
2023 Jan 18
4
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
...822,7 +822,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
if (!iova)
goto out_free_pages;
- if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
+ if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp))
goto out_free_iova;
if (!(ioprot & IOMMU_CACHE)) {
--
2.39.0
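The hunk is small, but the rule behind it is general: a helper that accepts a gfp_t must forward it to every allocation it makes, or callers using GFP_ATOMIC or GFP_KERNEL_ACCOUNT silently lose those semantics for part of the work. A trivial illustration of the rule (build_sg_table() is a made-up wrapper, not kernel code):

static int build_sg_table(struct sg_table *sgt, struct page **pages,
			  unsigned int count, size_t size, gfp_t gfp)
{
	/* Forward the caller's gfp; hard-coding GFP_KERNEL here is the kind
	 * of bug the patch fixes in __iommu_dma_alloc_noncontiguous(). */
	return sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp);
}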
2023 Jan 20
0
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
...ma_alloc_noncontiguous(struct device *dev,
> if (!iova)
> goto out_free_pages;
>
> - if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
> + if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp))
> goto out_free_iova;
>
> if (!(ioprot & IOMMU_CACHE)) {
2020 Sep 15
0
[PATCH 17/18] dma-iommu: implement ->alloc_noncoherent
...dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+ unsigned long attrs)
{
struct iommu_domain *domain = iommu_get_dma_domain(dev);
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iova_domain *iovad = &cookie->iovad;
bool coherent = dev_is_dma_coherent(dev);
int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
- pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
struct page **pages;
struct sg_table sgt;
@@ -1030,8 +1031,10 @@ static void *iommu_dma_alloc(struct device *dev...
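The visible hunk moves the pgprot_t (and attrs) into the parameter list and drops the internal dma_pgprot() call, so the same allocator can back both the coherent ->alloc path and the new ->alloc_noncoherent path. Schematically, with function names that are illustrative rather than necessarily the upstream ones:

/* Shared allocator: the page protection is now chosen by the caller. */
static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
		unsigned long attrs);

static void *iommu_dma_alloc(struct device *dev, size_t size,
		dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
{
	/* Coherent path: derive prot from attrs, i.e. exactly what the
	 * removed line inside the helper used to do. */
	return iommu_dma_alloc_remap(dev, size, handle, gfp,
				     dma_pgprot(dev, PAGE_KERNEL, attrs),
				     attrs);
}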
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
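The charging mechanism the cover letter refers to is __GFP_ACCOUNT: an allocation carrying that flag is billed to the memory cgroup of the calling task, and GFP_KERNEL_ACCOUNT is simply GFP_KERNEL | __GFP_ACCOUNT. A minimal illustration of the pattern (the struct and function are hypothetical, not iommufd code):

/* Hypothetical object created on behalf of userspace: allocating it with
 * GFP_KERNEL_ACCOUNT charges the memory to the caller's memory cgroup, so
 * a misbehaving process is capped by its cgroup limit rather than being
 * able to pin arbitrary kernel memory. */
struct demo_ioas {
	struct list_head areas;
	unsigned long npages;
};

static struct demo_ioas *demo_ioas_alloc(void)
{
	struct demo_ioas *ioas = kzalloc(sizeof(*ioas), GFP_KERNEL_ACCOUNT);

	if (!ioas)
		return NULL;
	INIT_LIST_HEAD(&ioas->areas);
	return ioas;
}

What the series adds on top is a way to apply the same accounting to the IOPTE pages allocated inside the iommu drivers, which this flag alone does not reach from iommufd.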
2019 Sep 08
7
[PATCH v6 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V6:
-add more details to the description of patch 001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch
-rename handle_deferred_device to iommu_dma_deferred_attach
-fix double tabs in 0003-iommu-dma-iommu-Handle-deferred-devices.patch
V5:
-Rebase on
2019 Jun 13
8
[PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V4:
-Rebase on top of linux-next
-Split the removal of the unnecessary locking in the amd iommu driver into a separate patch
-refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments
v3:
-rename dma_limit to dma_mask
-exit
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2014 Jun 24
4
[PATCH v2 0/3] drm/ttm: nouveau: memory coherency for ARM
...ime, let's try to get the 3 guys below merged!
Changes since v1:
- Removed conditional compilation for Nouveau cache sync handler
- Refactored nouveau_gem_ioctl_cpu_prep() into a new function to keep buffer
cache management into nouveau_bo.c
Lucas Stach (3):
drm/ttm: recognize ARM arch in ioprot handler
drm/ttm: introduce dma cache sync helpers
drm/nouveau: hook up cache sync functions
drivers/gpu/drm/nouveau/nouveau_bo.c | 47 +++++++++++++++++++++++++++++++++++
drivers/gpu/drm/nouveau/nouveau_bo.h | 1 +
drivers/gpu/drm/nouveau/nouveau_gem.c | 10 +++-----
drivers/gpu/drm/ttm/tt...
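The "dma cache sync helpers" in the list above follow the standard streaming-DMA pattern on non-coherent ARM: flush or invalidate CPU caches for each backing page around GPU access. A hedged sketch of such a helper (names and the exact nouveau plumbing may differ):

/* Make a TTM DMA-backed buffer visible to the GPU on a non-coherent system
 * by flushing the CPU caches for every page before device access. A matching
 * *_for_cpu variant would use DMA_FROM_DEVICE after GPU writes. */
static void demo_bo_sync_for_device(struct device *dev,
				    struct ttm_dma_tt *ttm_dma)
{
	unsigned long i;

	for (i = 0; i < ttm_dma->ttm.num_pages; i++)
		dma_sync_single_for_device(dev, ttm_dma->dma_address[i],
					   PAGE_SIZE, DMA_TO_DEVICE);
}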
2014 May 19
8
[PATCH 0/4] drm/ttm: nouveau: memory coherency fixes for ARM
...rt:
http://lists.freedesktop.org/archives/nouveau/2013-August/014026.html
Another patch takes care of flushing the CPU write-buffer when writing BOs
through a non-BAR path.
Alexandre Courbot (1):
drm/nouveau: introduce CPU cache flushing macro
Lucas Stach (3):
drm/ttm: recognize ARM arch in ioprot handler
drm/ttm: introduce dma cache sync helpers
drm/nouveau: hook up cache sync functions
drivers/gpu/drm/nouveau/core/os.h | 17 +++++++++++++++
drivers/gpu/drm/nouveau/nouveau_bo.c | 40 +++++++++++++++++++++++++++++++++--
drivers/gpu/drm/nouveau/nouveau_bo.h | 20 ++++++++++++++++++...
2013 Aug 28
11
[PATCH 0/6] Nouveau on ARM fixes
...tness fixes,
a lot of optimization work remains to be done, but at least
it's enough to get accel working and let the machine survive
a piglit run.
A new BO flag is introduced to allow userspace to hint the
kernel about possible optimizations.
Lucas Stach (6):
drm/ttm: recognize ARM arch in ioprot handler
drm/ttm: introduce dma cache sync helpers
drm/nouveau: hook up cache sync functions
drm/nouveau: introduce NOUVEAU_GEM_TILE_WCUS
drm/nouveau: map IB write-combined
drm/nouveau: use MSI interrupts
drivers/gpu/drm/nouveau/core/include/subdev/mc.h | 1 +
drivers/gpu/drm/nouveau/co...
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api.
While converting the driver I exposed a bug in the intel i915 driver which causes a huge number of artifacts on the screen of my laptop. You can see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
This issue is most likely in the i915 driver and is most likely caused by the