2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 11:48:48AM -0400, Jason Gunthorpe wrote:
> On Tue, Nov 07, 2023 at 09:55:26AM -0500, Michael S. Tsirkin wrote:
> > On Tue, Nov 07, 2023 at 08:49:02AM -0400, Jason Gunthorpe wrote:
> > > IMHO, this patch series needs to spend more time internally to Red Hat
> > > before it is presented to the community.
> >
> > Just to add an example
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 08:49:02AM -0400, Jason Gunthorpe wrote:
> On Tue, Nov 07, 2023 at 02:30:34AM -0500, Michael S. Tsirkin wrote:
> > On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote:
> > >
> > > Hi All
> This code provides iommufd support for vdpa devices.
> It fixes the bugs from the last version and also adds the asid
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote:
>
> Hi All
> This code provides iommufd support for vdpa devices.
> It fixes the bugs from the last version and also adds asid support; rebased on kernel
> v6.6-rc3.
> Tests passed on the physical device (vp_vdpa), but there are still some problems with the emulated device (vdpa_sim_net),
What kind of problems?
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Tue, Nov 07, 2023 at 10:12:37AM -0400, Jason Gunthorpe wrote:
> Big company's should take the responsibility to train and provide
> skill development for their own staff.
That would result in a beautiful cathedral of a patch. I know this is
how some companies work. We are doing more of a bazaar thing here,
though. In a bunch of subsystems it seems that you don't get the
necessary
2023 Nov 07
0
[RFC v1 0/8] vhost-vdpa: add support for iommufd
On Sat, Nov 04, 2023 at 01:16:33AM +0800, Cindy Lu wrote:
> Tests passed on the physical device (vp_vdpa), but there are still some problems with the emulated device (vdpa_sim_net),
I'm not sure there's even value in bothering with iommufd for the
simulator. Just find a way to disable it and fail gracefully.
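A minimal sketch of what that graceful failure could look like, assuming a per-driver attach hook; every name below is hypothetical:

    /* Hypothetical: a parent driver with no real IOMMU simply rejects
     * the iommufd attach path, so vhost-vdpa can fall back to the
     * classic map/unmap interface instead of faking translation. */
    static int vdpa_sim_attach_iommufd(struct vdpa_device *vdpa,
                                       struct iommufd_ctx *ictx)
    {
            return -EOPNOTSUPP; /* simulator: no IOMMU to program */
    }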
--
MST
2023 Feb 20
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 8:43 PM Jason Gunthorpe <jgg at nvidia.com> wrote:
>
> On Fri, Feb 17, 2023 at 05:12:29AM -0500, Michael S. Tsirkin wrote:
> > On Thu, Feb 16, 2023 at 08:14:50PM -0400, Jason Gunthorpe wrote:
> > > On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote:
> > > > From: Rong Wang <wangrong68 at huawei.com>
> > > >
2023 Feb 17
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 8:15 AM Jason Gunthorpe <jgg at nvidia.com> wrote:
>
> On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote:
> > From: Rong Wang <wangrong68 at huawei.com>
> >
> > Once an iommu domain is enabled for a device, the MSI
> > translation tables have to be there for software-managed MSI.
> > Otherwise, platforms with
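For context, the usual way to get those translation tables on such platforms is to attach an MSI cookie to the domain before attaching the device. A rough sketch, assuming a recent kernel where iommu_get_msi_cookie() is declared in <linux/iommu.h> (older kernels had it in <linux/dma-iommu.h>); MSI_IOVA_BASE is a driver-chosen constant, not part of the API:

    #include <linux/iommu.h>

    /* Reserve an IOVA window for the MSI doorbell pages so that
     * software-managed MSI keeps working behind the new domain. */
    domain = iommu_domain_alloc(dev->bus);
    if (!domain)
            return -ENOMEM;

    ret = iommu_get_msi_cookie(domain, MSI_IOVA_BASE);
    if (ret)
            goto err_free_domain;

    ret = iommu_attach_device(domain, dev);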
2023 Feb 17
1
[PATCH v2] vhost/vdpa: Add MSI translation tables to iommu for software-managed MSI
On Fri, Feb 17, 2023 at 01:35:59PM +0800, Jason Wang wrote:
> On Fri, Feb 17, 2023 at 8:15 AM Jason Gunthorpe <jgg at nvidia.com> wrote:
> >
> > On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote:
> > > From: Rong Wang <wangrong68 at huawei.com>
> > >
> > > Once an iommu domain is enabled for a device, the MSI
> > > translation
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first
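The accounting mechanism itself is just an allocation flag; a one-line illustration (the struct name is made up):

    /* Anything allocated with GFP_KERNEL_ACCOUNT is charged to the
     * memory cgroup of the calling task; this is how iommufd's own
     * data structures are already accounted. */
    struct ioas_entry *entry = kzalloc(sizeof(*entry), GFP_KERNEL_ACCOUNT);
    if (!entry)
            return -ENOMEM;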
2020 Feb 13
4
[PATCH V2 3/5] vDPA: introduce vDPA bus
On Thu, Feb 13, 2020 at 10:41:06AM -0500, Michael S. Tsirkin wrote:
> On Thu, Feb 13, 2020 at 11:05:42AM -0400, Jason Gunthorpe wrote:
> > On Thu, Feb 13, 2020 at 10:58:44PM +0800, Jason Wang wrote:
> > >
> > > > On 2020/2/13 9:41, Jason Gunthorpe wrote:
> > > > On Thu, Feb 13, 2020 at 11:34:10AM +0800, Jason Wang wrote:
> > > >
> > >
2020 Feb 05
0
[PATCH] vhost: introduce vDPA based backend
On Wed, Feb 05, 2020 at 08:56:48AM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 05, 2020 at 03:50:14PM +0800, Jason Wang wrote:
> > > Would it be better for the map/unmap logic to happen inside each device?
> > > Devices that need the IOMMU will call iommu APIs from inside the driver callback.
> >
> > Technically, this can work. But if it can be done by
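As a rough sketch of the alternative being discussed, a per-device map callback where the driver programs the IOMMU itself; every name below is hypothetical, and note that iommu_map() only gained its gfp_t argument in later kernels:

    /* Hypothetical driver-side map callback: a device that needs the
     * platform IOMMU calls the iommu API directly instead of relying
     * on common vhost-vdpa code. */
    static int my_vdpa_dma_map(struct my_vdpa *dev, u64 iova,
                               u64 size, u64 pa, int prot)
    {
            return iommu_map(dev->domain, iova, pa, size, prot,
                             GFP_KERNEL);
    }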
2020 Feb 14
0
[PATCH V2 3/5] vDPA: introduce vDPA bus
On 2020/2/14 12:24, Jason Gunthorpe wrote:
> On Thu, Feb 13, 2020 at 10:56:00AM -0500, Michael S. Tsirkin wrote:
>> On Thu, Feb 13, 2020 at 11:51:54AM -0400, Jason Gunthorpe wrote:
>>>> That bus is exactly what Greg KH proposed. There are other ways
>>>> to solve this I guess but this bikeshedding is getting tiring.
>>> This discussion was for a
2020 Feb 19
0
[PATCH] vhost: introduce vDPA based backend
On Tue, Feb 18, 2020 at 09:53:59AM -0400, Jason Gunthorpe wrote:
> On Fri, Jan 31, 2020 at 11:36:51AM +0800, Tiwei Bie wrote:
>
> > +static int vhost_vdpa_alloc_minor(struct vhost_vdpa *v)
> > +{
> > + return idr_alloc(&vhost_vdpa.idr, v, 0, MINORMASK + 1,
> > + GFP_KERNEL);
> > +}
>
> Please don't use idr in new code, use xarray directly
>
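For reference, an xarray version of the helper quoted above might look like this; the xarray name is an assumption, and XA_LIMIT(0, MINORMASK) mirrors the idr_alloc() range:

    static DEFINE_XARRAY_ALLOC(vhost_vdpa_minors);

    static int vhost_vdpa_alloc_minor(struct vhost_vdpa *v)
    {
            u32 minor;
            int err;

            /* Stores v and picks a free id in [0, MINORMASK]. */
            err = xa_alloc(&vhost_vdpa_minors, &minor, v,
                           XA_LIMIT(0, MINORMASK), GFP_KERNEL);

            return err ? err : minor;
    }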
2023 Nov 07
1
[PATCH v2 0/6] IOMMUFD: Deliver IO page faults to user space
> From: Jason Gunthorpe <jgg at ziepe.ca>
> Sent: Thursday, November 2, 2023 8:48 PM
>
> On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote:
> > Hi folks,
> >
> > This series implements the functionality of delivering IO page faults to
> > user space through the IOMMUFD framework for nested translation.
> > Nested
> > translation is a hardware
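The userspace half of such a scheme is a small read/handle/respond loop on the fault file descriptor; the record layouts below are purely illustrative stand-ins for whatever uAPI the series defines:

    /* Hypothetical fault loop: both structs stand in for the real uAPI. */
    #include <stdint.h>
    #include <unistd.h>

    struct fault_record   { uint64_t iova; uint32_t pasid; uint32_t flags; };
    struct fault_response { uint32_t code; uint32_t pasid; };

    static void handle_faults(int fault_fd)
    {
            struct fault_record rec;

            while (read(fault_fd, &rec, sizeof(rec)) == sizeof(rec)) {
                    struct fault_response rsp = {
                            .code  = 0, /* e.g. "resolved, retry" */
                            .pasid = rec.pasid,
                    };
                    /* ...fix up the guest I/O page table here... */
                    write(fault_fd, &rsp, sizeof(rsp));
            }
    }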
2020 Sep 29
0
[PATCH V1 vhost-next] vdpa/mlx5: Make vdpa core driver a distinct module
On Tue, Sep 29, 2020 at 09:57:44AM +0300, Eli Cohen wrote:
> On Tue, Sep 29, 2020 at 02:51:12AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Sep 29, 2020 at 09:34:33AM +0300, Eli Cohen wrote:
> > > On Tue, Sep 29, 2020 at 02:26:44AM -0400, Michael S. Tsirkin wrote:
> > > > On Tue, Sep 29, 2020 at 09:20:26AM +0300, Eli Cohen wrote:
> > > > > On Mon, Sep
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first