similar to: [PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI

Displaying 20 results from an estimated 1000 matches similar to: "[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI"

2023 Feb 17
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 01:35:59PM +0800, Jason Wang wrote: > On Fri, Feb 17, 2023 at 8:15 AM Jason Gunthorpe <jgg at nvidia.com> wrote: > > > > On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote: > > > From: Rong Wang <wangrong68 at huawei.com> > > > > > > Once an iommu domain is enabled for a device, the MSI > > > translation
2023 Feb 20
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 8:43 PM Jason Gunthorpe <jgg at nvidia.com> wrote: > > On Fri, Feb 17, 2023 at 05:12:29AM -0500, Michael S. Tsirkin wrote: > > On Thu, Feb 16, 2023 at 08:14:50PM -0400, Jason Gunthorpe wrote: > > > On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote: > > > > From: Rong Wang <wangrong68 at huawei.com> > > > >
2023 Feb 16
0
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On 2023/2/7 20:08, Nanyong Sun wrote: > From: Rong Wang <wangrong68 at huawei.com> > > Once an iommu domain is enabled for a device, the MSI > translation tables have to be there for software-managed MSI. > Otherwise, a platform with software-managed MSI but without an > irq bypass function cannot get a correct memory write event > from PCIe and will not get irqs. > The solution is
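Handling software-managed MSI on an unmanaged domain is what VFIO's type1 backend does today by installing an MSI cookie, so a minimal sketch of that pattern may help picture the problem being described here. The iommu core helpers below are real; the helper name and where vhost-vdpa would call it are assumptions, not the actual patch.

#include <linux/iommu.h>
#include <linux/list.h>

/*
 * Sketch only: find the device's software-managed MSI reserved region and
 * install an MSI cookie on the unmanaged domain so MSI doorbell writes can
 * be translated.  iommu_get_resv_regions(), IOMMU_RESV_SW_MSI and
 * iommu_get_msi_cookie() are real iommu core APIs; vdpa_setup_sw_msi() and
 * its call site are invented for illustration.
 */
static int vdpa_setup_sw_msi(struct device *dma_dev, struct iommu_domain *domain)
{
        struct iommu_resv_region *region;
        dma_addr_t msi_base = 0;
        bool has_sw_msi = false;
        LIST_HEAD(resv_regions);
        int ret = 0;

        iommu_get_resv_regions(dma_dev, &resv_regions);
        list_for_each_entry(region, &resv_regions, list) {
                if (region->type == IOMMU_RESV_SW_MSI) {
                        msi_base = region->start;
                        has_sw_msi = true;
                        break;
                }
        }
        iommu_put_resv_regions(dma_dev, &resv_regions);

        /* Reserve IOVA space for the MSI doorbell mappings. */
        if (has_sw_msi)
                ret = iommu_get_msi_cookie(domain, msi_base);

        return ret;
}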
2023 Nov 07
1
[PATCH v2 0/6] IOMMUFD: Deliver IO page faults to user space
> From: Jason Gunthorpe <jgg at ziepe.ca> > Sent: Thursday, November 2, 2023 8:48 PM > > On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote: > > Hi folks, > > > > This series implements the functionality of delivering IO page faults to > > user space through the IOMMUFD framework for nested translation. > Nested > > translation is a hardware
2023 Nov 02
1
[PATCH v2 0/6] IOMMUFD: Deliver IO page faults to user space
On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote: > Hi folks, > > This series implements the functionality of delivering IO page faults to > user space through the IOMMUFD framework for nested translation. Nested > translation is a hardware feature that supports two-stage translation > tables for IOMMU. The second-stage translation table is managed by the > host VMM,
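As a quick mental model of the two stages the cover letter describes: stage 1 (guest-owned) translates IOVA to GPA, stage 2 (host-owned) translates GPA to HPA, and a miss in the guest-owned stage is the kind of fault this series wants to hand to user space. A toy walk follows, with every name invented and no relation to the real iommufd code.

#include <stdbool.h>
#include <stdint.h>

/* Toy two-stage walk: all names are invented for illustration. */
struct toy_table {
        bool (*lookup)(uint64_t in, uint64_t *out);
};

enum toy_walk_result { TOY_WALK_OK, TOY_WALK_S1_FAULT, TOY_WALK_S2_FAULT };

static enum toy_walk_result toy_nested_walk(const struct toy_table *s1, /* guest-managed */
                                            const struct toy_table *s2, /* host-managed */
                                            uint64_t iova, uint64_t *hpa)
{
        uint64_t gpa;

        if (!s1->lookup(iova, &gpa))
                return TOY_WALK_S1_FAULT; /* reported to user space, VMM injects into guest */
        if (!s2->lookup(gpa, hpa))
                return TOY_WALK_S2_FAULT; /* host-side problem, handled in the kernel */
        return TOY_WALK_OK;
}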
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote: > > Hi All > This code provides the iommufd support for vdpa device > This code fixes the bugs from the last version and also adds the asid support, rebased on kernel > v6.6-rc3 > Tests passed on the physical device (vp_vdpa), but there are still some problems with the emulated device (vdpa_sim_net), What kind of problems?
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 11:48:48AM -0400, Jason Gunthorpe wrote: > On Tue, Nov 07, 2023 at 09:55:26AM -0500, Michael S. Tsirkin wrote: > > On Tue, Nov 07, 2023 at 08:49:02AM -0400, Jason Gunthorpe wrote: > > > IMHO, this patch series needs to spend more time internally to Red Hat > > > before it is presented to the community. > > > > Just to add an example
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote: > Tests passed on the physical device (vp_vdpa), but there are still some problems with the emulated device (vdpa_sim_net), I'm not sure there's even value in bothering with iommufd for the simulator. Just find a way to disable it and fail gracefully. -- MST
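One concrete way to "disable it and fail gracefully" would be to refuse iommufd binding when the vdpa parent has no IOMMU backing at all, which is the vdpa_sim case. Sketch only: iommu_group_get()/iommu_group_put() are real core helpers, while the function name and the exact hook point in vhost-vdpa are assumptions.

#include <linux/iommu.h>
#include <linux/errno.h>

/* Illustrative check: reject iommufd attachment for IOMMU-less parents. */
static int vdpa_demo_iommufd_supported(struct device *dma_dev)
{
        struct iommu_group *group = iommu_group_get(dma_dev);

        if (!group)
                return -EOPNOTSUPP; /* e.g. vdpa_sim_net: no IOMMU behind it */

        iommu_group_put(group);
        return 0;
}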
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 08:49:02AM -0400, Jason Gunthorpe wrote: > On Tue, Nov 07, 2023 at 02:30:34AM -0500, Michael S. Tsirkin wrote: > > On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote: > > > > > > Hi All > > > This code provides the iommufd support for vdpa device > > > This code fixes the bugs from the last version and also adds the asid
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 10:12:37AM -0400, Jason Gunthorpe wrote: > Big company's should take the responsibility to train and provide > skill development for their own staff. That would result in a beautiful cathedral of a patch. I know this is how some companies work. We are doing more of a bazaar thing here, though. In a bunch of subsystems it seems that you don't get the necessary
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 08:49:02AM -0400, Jason Gunthorpe wrote: > IMHO, this patch series needs to spend more time internally to Red Hat > before it is presented to the community. Just to add an example why I think this "internal review" is a bad idea I seem to recall that someone internal to nvidia at some point attempted to implement this already. The only output from that work
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
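For context on the mechanism the cover letter leans on: GFP_KERNEL_ACCOUNT is GFP_KERNEL plus __GFP_ACCOUNT, which charges the allocation to the calling task's memory cgroup. A minimal sketch of the pattern, with an invented struct and helper name standing in for the iommufd internals.

#include <linux/slab.h>

/* Invented example type standing in for an iommufd-internal structure. */
struct demo_ioas_entry {
        unsigned long iova;
        unsigned long length;
};

static struct demo_ioas_entry *demo_alloc_entry(void)
{
        /* Charged to the caller's memcg, unlike a plain GFP_KERNEL allocation. */
        return kzalloc(sizeof(struct demo_ioas_entry), GFP_KERNEL_ACCOUNT);
}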
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jun 16
1
[RFC PATCHES 00/17] IOMMUFD: Deliver IO page faults to user space
Hi Baolu, On Tue, May 30, 2023 at 01:37:07PM +0800, Lu Baolu wrote: > - The timeout value for the pending page fault messages. Ideally we > should determine the timeout value from the device configuration, but > I failed to find any statement in the PCI specification (version 6.x). > A default 100 milliseconds is selected in the implementation, but it > leaves room for
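To make the 100 ms default concrete, one way such a timeout could be wired up is a delayed work armed when a fault is handed to user space and cancelled when the response arrives. Everything named iopf_demo_* below is invented; only the workqueue and jiffies helpers are real kernel APIs.

#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define IOPF_DEMO_TIMEOUT_MS 100

struct iopf_demo_fault {
        struct delayed_work timeout_work;
        /* ... the fault message forwarded to user space ... */
};

static void iopf_demo_timeout(struct work_struct *work)
{
        struct iopf_demo_fault *fault =
                container_of(to_delayed_work(work), struct iopf_demo_fault,
                             timeout_work);

        /*
         * No response from user space within the deadline: complete the
         * fault with a failure response here, then free @fault.
         */
        (void)fault;
}

static void iopf_demo_deliver(struct iopf_demo_fault *fault)
{
        INIT_DELAYED_WORK(&fault->timeout_work, iopf_demo_timeout);
        schedule_delayed_work(&fault->timeout_work,
                              msecs_to_jiffies(IOPF_DEMO_TIMEOUT_MS));
        /* ... queue the fault message for user space to read ... */
}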
2023 Jan 18
4
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
Change the sg_alloc_table_from_pages() allocation that was hardwired to GFP_KERNEL to use the gfp parameter like the other allocations in this function. Auditing says this is never called from an atomic context, so it is safe as is, but reads wrong. Signed-off-by: Jason Gunthorpe <jgg at nvidia.com> --- drivers/iommu/dma-iommu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff
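The diffstat here is cut off before the hunk itself; the pattern the one-line change applies is simply to honour the gfp the caller already passes down instead of a hardwired GFP_KERNEL. An illustrative helper showing the shape of it (not the actual hunk from the patch):

#include <linux/scatterlist.h>

/*
 * Illustrative helper: the caller's gfp is threaded through, so flags such
 * as __GFP_ACCOUNT set higher up the call chain take effect here too.
 */
static int demo_build_sgt(struct sg_table *sgt, struct page **pages,
                          unsigned int count, unsigned long size, gfp_t gfp)
{
        /* Previously this call site used GFP_KERNEL unconditionally. */
        return sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp);
}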