Search results matching "protection_domain".
2019 Sep 08 · 7 · [PATCH v6 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V6:
-add more details to the description of patch 001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch
-rename handle_deferred_device to iommu_dma_deferred_attach
-fix double tabs in 0003-iommu-dma-iommu-Handle-deferred-devices.patch
V5:
-Rebase on
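The v6 item above renames handle_deferred_device to iommu_dma_deferred_attach. As a point of reference, here is a minimal sketch of what such a helper plausibly looks like, modelled on dma-iommu's deferred-attach logic; it is an assumption, not the code from the series:

#include <linux/crash_dump.h>
#include <linux/iommu.h>

static int iommu_dma_deferred_attach(struct device *dev,
				     struct iommu_domain *domain)
{
	const struct iommu_ops *ops = domain->ops;

	/* Deferred attach only matters in a kdump kernel, where devices
	 * must keep using the old kernel's mappings until re-attached. */
	if (!is_kdump_kernel())
		return 0;

	if (unlikely(ops->is_attach_deferred &&
		     ops->is_attach_deferred(domain, dev)))
		return iommu_attach_device(domain, dev);

	return 0;
}

Each dma-iommu entry point would call this before touching the domain, so a deferred device gets attached on first DMA rather than at probe time.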
2019 Jun 13 · 8 · [PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V4:
-Rebase on top of linux-next
-Split the removal of the unnecessary locking in the amd iommu driver into a separate patch
-refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments
v3:
-rename dma_limit to dma_mask
-exit
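The dma_limit to dma_mask rename mentioned for v3 touches the IOVA allocation path. A hedged sketch of the idea, simplified from dma-iommu's allocator (the field access assumes code living in dma-iommu.c; details may differ from the series):

#include <linux/iommu.h>
#include <linux/iova.h>

static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
				       size_t size, u64 dma_mask)
{
	struct iommu_dma_cookie *cookie = domain->iova_cookie;
	struct iova_domain *iovad = &cookie->iovad;
	unsigned long shift = iova_shift(iovad);
	unsigned long iova;

	/* Never hand out an IOVA the device cannot address; size is
	 * assumed to be iova-aligned already. */
	if (domain->geometry.force_aperture)
		dma_mask = min_t(u64, dma_mask, domain->geometry.aperture_end);

	iova = alloc_iova_fast(iovad, size >> shift, dma_mask >> shift, true);

	return (dma_addr_t)iova << shift;
}

Calling the parameter dma_mask rather than dma_limit makes it explicit that the bound comes from the device's DMA mask.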
2020 Aug 18 · 3 · [PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
..._iommu_map(struct iommu_domain *dom, unsigned long iova,

 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 			       size_t page_size,
-			       struct iommu_iotlb_gather *gather)
+			       struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct domain_pgtable pgtable;
@@ -2636,6 +2637,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&dom->lock, flags);
 }

+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long io...
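The added function is cut off by the excerpt. A plausible reconstruction, inferred from the neighbouring amd_iommu_flush_iotlb_all and hedged accordingly (the posted patch may differ):

static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
					unsigned long iova, size_t size)
{
	struct protection_domain *dom = to_pdomain(domain);
	unsigned long flags;

	spin_lock_irqsave(&dom->lock, flags);
	/* Flush only the affected IOVA range, not the whole domain. */
	domain_flush_pages(dom, iova, size);
	domain_flush_complete(dom);
	spin_unlock_irqrestore(&dom->lock, flags);
}

domain_flush_pages() and domain_flush_complete() are existing static helpers in amd_iommu.c, which is why a ranged flush is a small addition.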
2019 Dec 21 · 0 · [PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
..._iommu_map(struct iommu_domain *dom, unsigned long iova,

 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 			       size_t page_size,
-			       struct iommu_iotlb_gather *gather)
+			       struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	struct protection_domain *domain = to_pdomain(dom);
@@ -2668,6 +2669,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&dom->lock, flags);
 }

+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long iova, size_t size,
+					struct p...
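The point of the freelist parameter is ordering: page-table pages removed by the unmap must outlive the IOTLB flush. A hedged sketch of the intended calling pattern; the extra freelist argument and the chaining via page->freelist are assumptions about the series, and free_page_list() is defined here only for illustration:

#include <linux/iommu.h>
#include <linux/mm.h>

static void free_page_list(struct page *freelist)
{
	/* Walk the chained pages and return them to the allocator. */
	while (freelist) {
		struct page *next = freelist->freelist;

		__free_page(freelist);
		freelist = next;
	}
}

static void unmap_then_free(struct iommu_domain *domain,
			    unsigned long iova, size_t size)
{
	struct iommu_iotlb_gather gather;
	struct page *freelist = NULL;

	iommu_iotlb_gather_init(&gather);

	/* With the series applied, the unmap path would also take
	 * &freelist and queue page-table pages on it instead of freeing
	 * them immediately (assumed extra argument shown commented out). */
	iommu_unmap_fast(domain, iova, size, &gather /*, &freelist */);

	/* Flush first, so the IOMMU can no longer walk the stale pages... */
	iommu_iotlb_sync(domain, &gather);

	/* ...and only then free them. */
	free_page_list(freelist);
}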
2020 Aug 17 · 1 · [PATCH 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
..._iommu_map(struct iommu_domain *dom, unsigned long iova,

 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 			       size_t page_size,
-			       struct iommu_iotlb_gather *gather)
+			       struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct domain_pgtable pgtable;
@@ -2636,6 +2637,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&dom->lock, flags);
 }

+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long io...
2020 Aug 18 · 0 · [PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
> ...long iova,
>
>  static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
>  			       size_t page_size,
> -			       struct iommu_iotlb_gather *gather)
> +			       struct iommu_iotlb_gather *gather,
> +			       struct page **freelist)
>  {
>  	struct protection_domain *domain = to_pdomain(dom);
>  	struct domain_pgtable pgtable;
> @@ -2636,6 +2637,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
>  	spin_unlock_irqrestore(&dom->lock, flags);
>  }
>
> +static void amd_iommu_flush_iotlb_range(struct iommu_doma...
2016 Jun 02 · 52 · [RFC v3 00/45] dma-mapping: Use unsigned long for dma_attrs
Hi,
This is the third approach (complete this time) to replacing struct
dma_attrs with unsigned long.
The main patch (2/45) doing the change is split into many subpatches
for easier review (3-43). They should be squashed together when
applying.
*Important:* Patchset is *only* build tested on allyesconfigs: ARM,
ARM64, i386, x86_64 and powerpc. Please provide reviews and tests
for other
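For reference, the shape of the change in one converted prototype; this is a hedged illustration of the API before and after the series, not an excerpt from it:

/* Before: attributes carried in a struct. */
void *dma_alloc_attrs(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t gfp,
		      struct dma_attrs *attrs);

/* After: attributes are a plain bitmask, e.g.
 * DMA_ATTR_WRITE_COMBINE | DMA_ATTR_NO_KERNEL_MAPPING. */
void *dma_alloc_attrs(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t gfp,
		      unsigned long attrs);

Passing a bitmask removes the on-stack struct and its set/test helpers at every call site, which is what the 40-odd subpatches mechanically apply.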
2019 Dec 21 · 13 · [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api.
While converting the driver I exposed a bug in the intel i915 driver which causes a huge number of artifacts on the screen of my laptop. You can see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
This issue is most likely in the i915 driver and is most likely caused by the
2020 Apr 14 · 35 · [PATCH v2 00/33] iommu: Move iommu_group setup to IOMMU core code
Hi,
here is the second version of this patch-set. The first version with
some more introductory text can be found here:
https://lore.kernel.org/lkml/20200407183742.4344-1-joro@8bytes.org/
Changes v1->v2:
* Rebased to v5.7-rc1
* Re-wrote the arm-smmu changes as suggested by Robin Murphy
* Re-worked the Exynos patches to hopefully not break the
driver anymore
* Fixed a missing
2020 Apr 29 · 35 · [PATCH v3 00/34] iommu: Move iommu_group setup to IOMMU core code
Hi,
here is the third version of this patch-set. Older versions can be found
here:
v1: https://lore.kernel.org/lkml/20200407183742.4344-1-joro@8bytes.org/
(Has some more introductory text)
v2: https://lore.kernel.org/lkml/20200414131542.25608-1-joro@8bytes.org/
Changes v2 -> v3:
* Rebased v5.7-rc3
* Added a missing iommu_group_put() as reported by Lu Baolu.
* Added a
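The iommu_group_put() item is about group reference counting. A short hedged illustration of the rule the fix restores (example_check_group is a made-up name):

#include <linux/iommu.h>

static void example_check_group(struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);

	if (!group)
		return;

	/* ... use the group ... */

	iommu_group_put(group);	/* balance the reference taken above */
}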
2020 Apr 07 · 41 · [RFC PATCH 00/34] iommu: Move iommu_group setup to IOMMU core code
Hi,
here is a patch-set to remove all calls of iommu_group_get_for_dev() from
the IOMMU drivers and move the per-device group setup and default domain
allocation into the IOMMU core code.
This eliminates some ugly back and forth between IOMMU core code and the
IOMMU drivers, where the driver called iommu_group_get_for_dev() which itself
called back into the driver.
The patch-set started as a
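A hedged, simplified sketch of the round trip being eliminated (example_add_device is a made-up name; the real callbacks are longer):

#include <linux/err.h>
#include <linux/iommu.h>

/* The driver's add_device callback called into the core... */
static int example_add_device(struct device *dev)
{
	struct iommu_group *group;

	group = iommu_group_get_for_dev(dev);	/* into the core */
	if (IS_ERR(group))
		return PTR_ERR(group);

	iommu_group_put(group);
	return 0;
}

/*
 * ...while inside iommu_group_get_for_dev() the core turned around and
 * called ops->device_group(dev), i.e. right back into the same driver.
 * After this patch-set, iommu_probe_device() in the core handles group
 * setup and default-domain allocation, and drivers stop calling
 * iommu_group_get_for_dev() altogether.
 */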