Displaying 15 results from an estimated 15 matches for "1381,8".
2019 Sep 27
1
[PATCH v2 26/27] drm/dp_mst: Also print unhashed pointers for malloc/topology references
...G("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref) - 1);
> + DRM_DEBUG("mstb %p/%px (%d)\n",
> + mstb, mstb, kref_read(&mstb->malloc_kref) - 1);
> kref_put(&mstb->malloc_kref, drm_dp_free_mst_branch_device);
> }
>
> @@ -1379,7 +1381,8 @@ void
> drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port)
> {
> kref_get(&port->malloc_kref);
> - DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->malloc_kref));
> + DRM_DEBUG("port %p/%px (%d)\n",
> + port, port, kref_read(&port->malloc_kref));
2019 Sep 03
0
[PATCH v2 26/27] drm/dp_mst: Also print unhashed pointers for malloc/topology references
...st_branch *mstb)
{
- DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref) - 1);
+ DRM_DEBUG("mstb %p/%px (%d)\n",
+ mstb, mstb, kref_read(&mstb->malloc_kref) - 1);
kref_put(&mstb->malloc_kref, drm_dp_free_mst_branch_device);
}
@@ -1379,7 +1381,8 @@ void
drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port)
{
kref_get(&port->malloc_kref);
- DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->malloc_kref));
+ DRM_DEBUG("port %p/%px (%d)\n",
+ port, port, kref_read(&port->malloc_kref));
}...
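As a side note on the %p/%px distinction this patch relies on: %p prints a hashed pointer value while %px prints the raw address, so logging both lets the hashed debug output be matched against unhashed addresses elsewhere in the logs. A minimal, generic sketch of the idea (demo_obj and demo_log_ref are invented names, not the DRM MST code):
#include <linux/printk.h>
#include <linux/kref.h>

struct demo_obj {
	struct kref malloc_kref;
};

static void demo_log_ref(struct demo_obj *obj)
{
	/* same pointer twice: hashed (%p) and unhashed (%px) */
	pr_debug("obj %p/%px (%d)\n",
		 obj, obj, kref_read(&obj->malloc_kref));
}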
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...ev, struct scatterlist *sg,
prot = __dma_info_to_prot(dir, attrs);
- ret = iommu_map(mapping->domain, iova, phys, len, prot);
+ ret = iommu_map(mapping->domain, iova, phys, len, prot,
+ GFP_KERNEL);
if (ret < 0)
goto fail;
count += len >> PAGE_SHIFT;
@@ -1379,7 +1381,8 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
prot = __dma_info_to_prot(dir, attrs);
- ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, prot);
+ ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len,
+ prot, GFP_KERNEL...
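To illustrate the interface change above: after this series, iommu_map() takes an explicit gfp_t so each caller chooses the allocation context for the page-table memory. A hedged sketch of an updated call site (my_map_one_page is an invented helper, not code from the patch):
#include <linux/iommu.h>
#include <linux/gfp.h>

static int my_map_one_page(struct iommu_domain *domain,
			   unsigned long iova, phys_addr_t phys)
{
	/* callers in atomic context would pass GFP_ATOMIC instead */
	return iommu_map(domain, iova, phys, PAGE_SIZE,
			 IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
}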
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
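For background on the charging mechanism mentioned here (a generic sketch, not iommufd code): GFP_KERNEL_ACCOUNT is GFP_KERNEL plus __GFP_ACCOUNT, which makes the allocation count against the memory cgroup of the allocating task:
#include <linux/slab.h>
#include <linux/gfp.h>

static void *alloc_charged(size_t size)
{
	/* charged to the caller's memcg; plain GFP_KERNEL would not be */
	return kzalloc(size, GFP_KERNEL_ACCOUNT);
}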
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2012 Oct 08
5
[PATCH v4 0/5] Finish hotplugging support.
This rounds off hotplugging support by allowing you to add and remove
drives at any stage (before and after launch).
Rich.
2020 Apr 07
41
[RFC PATCH 00/34] iommu: Move iommu_group setup to IOMMU core code
Hi,
Here is a patch-set to remove all calls of iommu_group_get_for_dev() from
the IOMMU drivers and move the per-device group setup and default domain
allocation into the IOMMU core code.
This eliminates some ugly back and forth between IOMMU core code and the
IOMMU drivers, where the driver called iommu_group_get_for_dev() which itself
called back into the driver.
The patch-set started as a
2019 Sep 03
50
[PATCH v2 00/27] DP MST Refactors + debugging tools + suspend/resume reprobing
This is the large series for adding MST suspend/resume reprobing that
I've been working on for quite a while now. In addition, I:
- Refactored and cleaned up any code I ended up digging through in the
process of understanding how some parts of these helpers worked.
- Added some debugging tools along the way that I ended up needing to
figure out some issues in my own code.
Note that