similar to: [RFC v1 0/8] vhost-vdpa: add support for iommufd

2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote: > > Hi All > This code provides the iommufd support for vdpa device > This code fixes the bugs from the last version and also adds the asid support, rebased on kernel > v6.6-rc3 > Test passed in the physical device (vp_vdpa), but there are still some problems in the emulated device (vdpa_sim_net), What kind of problems?
2023 Feb 20
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 8:43 PM Jason Gunthorpe <jgg at nvidia.com> wrote: > > On Fri, Feb 17, 2023 at 05:12:29AM -0500, Michael S. Tsirkin wrote: > > On Thu, Feb 16, 2023 at 08:14:50PM -0400, Jason Gunthorpe wrote: > > > On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote: > > > > From: Rong Wang <wangrong68 at huawei.com> > > > >
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 08:49:02AM -0400, Jason Gunthorpe wrote: > On Tue, Nov 07, 2023 at 02:30:34AM -0500, Michael S. Tsirkin wrote: > > On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote: > > > > > > Hi All > > > This code provides the iommufd support for vdpa device > > > This code fixes the bugs from the last version and also adds the asid
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 08:49:02AM -0400, Jason Gunthorpe wrote: > IMHO, this patch series needs to spend more time internally to Red Hat > before it is presented to the community. Just to add an example of why I think this "internal review" is a bad idea: I seem to recall that someone internal to nvidia at some point attempted to implement this already. The only output from that work
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote: > Test passed in the physical device (vp_vdpa), but there are still some problems in the emulated device (vdpa_sim_net), I'm not sure there's even value in bothering with iommufd for the simulator. Just find a way to disable it and fail gracefully. -- MST
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 10:12:37AM -0400, Jason Gunthorpe wrote: > Big companies should take the responsibility to train and provide > skill development for their own staff. That would result in a beautiful cathedral of a patch. I know this is how some companies work. We are doing more of a bazaar thing here, though. In a bunch of subsystems it seems that you don't get the necessary
2020 Sep 29
0
[PATCH V1 vhost-next] vdpa/mlx5: Make vdpa core driver a distinct module
On Tue, Sep 29, 2020 at 09:20:26AM +0300, Eli Cohen wrote: > On Mon, Sep 28, 2020 at 03:55:09PM -0400, Michael S. Tsirkin wrote: > > On Thu, Sep 24, 2020 at 05:32:31PM +0300, Eli Cohen wrote: > > > Change core vdpa functionality into a loadable module such that upcoming > > > block implementation will be able to use it. > > > > > > Signed-off-by: Eli
2020 Sep 29
0
[PATCH V1 vhost-next] vdpa/mlx5: Make vdpa core driver a distinct module
On Tue, Sep 29, 2020 at 09:34:33AM +0300, Eli Cohen wrote: > On Tue, Sep 29, 2020 at 02:26:44AM -0400, Michael S. Tsirkin wrote: > > On Tue, Sep 29, 2020 at 09:20:26AM +0300, Eli Cohen wrote: > > > On Mon, Sep 28, 2020 at 03:55:09PM -0400, Michael S. Tsirkin wrote: > > > > On Thu, Sep 24, 2020 at 05:32:31PM +0300, Eli Cohen wrote: > > > > > Change core
2020 Sep 29
0
[PATCH V1 vhost-next] vdpa/mlx5: Make vdpa core driver a distinct module
On Tue, Sep 29, 2020 at 09:57:44AM +0300, Eli Cohen wrote: > On Tue, Sep 29, 2020 at 02:51:12AM -0400, Michael S. Tsirkin wrote: > > On Tue, Sep 29, 2020 at 09:34:33AM +0300, Eli Cohen wrote: > > > On Tue, Sep 29, 2020 at 02:26:44AM -0400, Michael S. Tsirkin wrote: > > > > On Tue, Sep 29, 2020 at 09:20:26AM +0300, Eli Cohen wrote: > > > > > On Mon, Sep
2023 Nov 07
1
[PATCH v2 0/6] IOMMUFD: Deliver IO page faults to user space
> From: Jason Gunthorpe <jgg at ziepe.ca> > Sent: Thursday, November 2, 2023 8:48 PM > > On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote: > > Hi folks, > > > > This series implements the functionality of delivering IO page faults to > > user space through the IOMMUFD framework for nested translation. > > Nested translation is a hardware
2023 Feb 17
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 8:15 AM Jason Gunthorpe <jgg at nvidia.com> wrote: > > On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote: > > From: Rong Wang <wangrong68 at huawei.com> > > > > Once an iommu domain is enabled for one device, the MSI > > translation tables have to be there for software-managed MSI. > > Otherwise, platform with
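
(For context on the "MSI translation tables" discussed in this thread, here is a rough sketch, modeled on what VFIO does, of how a driver that attaches a device to its own iommu_domain reserves an IOVA window for software-managed MSI. iommu_get_resv_regions(), IOMMU_RESV_SW_MSI and iommu_get_msi_cookie() are existing kernel interfaces; the wrapper function itself is only illustrative and is not code from this patch.)

#include <linux/iommu.h>
#include <linux/list.h>

static int example_setup_sw_msi(struct device *dev, struct iommu_domain *domain)
{
        struct iommu_resv_region *region;
        phys_addr_t msi_base = 0;
        bool sw_msi = false;
        LIST_HEAD(resv_regions);

        /* Ask the IOMMU driver which regions must stay reserved for this device. */
        iommu_get_resv_regions(dev, &resv_regions);
        list_for_each_entry(region, &resv_regions, list) {
                if (region->type == IOMMU_RESV_SW_MSI) {
                        /* The platform needs a software-managed MSI window. */
                        msi_base = region->start;
                        sw_msi = true;
                        break;
                }
        }
        iommu_put_resv_regions(dev, &resv_regions);

        if (!sw_msi)
                return 0;       /* Hardware-managed MSI: nothing to set up. */

        /*
         * Install an MSI cookie so the IRQ layer can map MSI doorbell pages
         * (the "MSI translation tables" above) into this domain at IOVAs
         * carved out of the reserved window.
         */
        return iommu_get_msi_cookie(domain, msi_base);
}
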
2020 Sep 28
0
[PATCH V1 vhost-next] vdpa/mlx5: Make vdpa core driver a distinct module
On Thu, Sep 24, 2020 at 05:32:31PM +0300, Eli Cohen wrote: > Change core vdpa functionality into a loadable module such that upcoming > block implementation will be able to use it. > > Signed-off-by: Eli Cohen <elic at nvidia.com> Why don't we merge this patch together with the block module? > --- > V0 --> V1: > Removed "default n" for config options
2023 Feb 17
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 01:35:59PM +0800, Jason Wang wrote: > On Fri, Feb 17, 2023 at 8:15 AM Jason Gunthorpe <jgg at nvidia.com> wrote: > > > > On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote: > > > From: Rong Wang <wangrong68 at huawei.com> > > > > > > Once an iommu domain is enabled for one device, the MSI > > > translation
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
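
(For context, a minimal sketch of the GFP_KERNEL_ACCOUNT mechanism this cover letter builds on: allocations made with that flag are charged to the calling task's memory cgroup, which is the accounting the series extends to the IOPTE/page-table allocations. The struct and helper below are purely illustrative and not code from the series.)

#include <linux/slab.h>
#include <linux/types.h>

/* Illustrative node, standing in for any per-FD bookkeeping structure. */
struct example_iopt_node {
        u64 iova;
        u64 length;
};

static struct example_iopt_node *example_alloc_node(void)
{
        struct example_iopt_node *node;

        /*
         * GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT: the allocation
         * is charged to the current task's memory cgroup, so a misbehaving
         * user hits its cgroup limit instead of exhausting kernel memory.
         */
        node = kzalloc(sizeof(*node), GFP_KERNEL_ACCOUNT);
        return node;
}
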
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Nov 02
1
[PATCH v2 0/6] IOMMUFD: Deliver IO page faults to user space
On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote: > Hi folks, > > This series implements the functionality of delivering IO page faults to > user space through the IOMMUFD framework for nested translation. Nested > translation is a hardware feature that supports two-stage translation > tables for IOMMU. The second-stage translation table is managed by the > host VMM,
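
(For context, a toy model of the two-stage walk the cover letter describes: stage 1 maps guest IOVA to guest PA and is owned by the guest, stage 2 maps guest PA to host PA and is owned by the host VMM. Everything below is illustrative pseudo-hardware, not iommufd or IOMMU driver code.)

#include <stdint.h>

#define EX_PAGE_SHIFT 12
#define EX_TABLE_ENTRIES 512

/* Toy single-level tables indexed by page frame number. */
static uint64_t stage1_table[EX_TABLE_ENTRIES];  /* guest-managed: gIOVA -> gPA */
static uint64_t stage2_table[EX_TABLE_ENTRIES];  /* host-managed:  gPA  -> hPA */

static uint64_t ex_translate(const uint64_t *table, uint64_t addr)
{
        uint64_t pfn = (addr >> EX_PAGE_SHIFT) % EX_TABLE_ENTRIES;
        uint64_t off = addr & ((1ULL << EX_PAGE_SHIFT) - 1);

        /*
         * A real walk raises an IO page fault on a missing or invalid entry;
         * this series is about routing stage-1 faults like that to user
         * space so the VMM can inject them into the guest.
         */
        return (table[pfn] << EX_PAGE_SHIFT) | off;
}

/* The hardware nested walk: guest IOVA through stage 1, then stage 2. */
static uint64_t ex_nested_translate(uint64_t guest_iova)
{
        uint64_t guest_pa = ex_translate(stage1_table, guest_iova);

        return ex_translate(stage2_table, guest_pa);
}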