search for: vpda

Displaying 17 results from an estimated 23 matches for "vpda".

Did you mean: vdpa
2020 Feb 05
2
[PATCH] vhost: introduce vDPA based backend
...Feb 05, 2020 at 03:50:14PM +0800, Jason Wang wrote: > > Would it be better for the map/unmap logic to happen inside each device ? > > Devices that need the IOMMU will call iommu APIs from inside the driver callback. > > Technically, this can work. But if it can be done by vhost-vpda it will make > the vDPA driver more compact and easier to implement. Generally speaking, in the kernel, it is normal not to hoist code out of drivers into subsystems until 2-3 drivers are duplicating that code. It helps ensure the right design is used. Jason
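For the incremental option discussed above, a device without an on-chip IOMMU would call the kernel IOMMU API from its own driver callback. A minimal sketch, assuming a hypothetical my_vdpa_dev driver; signatures follow mainline iommu_map()/iommu_unmap() from around the time of this thread, not code from the patch:

/*
 * Hypothetical sketch of the "map/unmap inside the driver" option:
 * a device without an on-chip IOMMU implements the incremental
 * callback by calling the kernel IOMMU API directly.
 */
#include <linux/iommu.h>
#include <linux/vdpa.h>
#include <linux/vhost_types.h>	/* VHOST_ACCESS_* */

struct my_vdpa_dev {
	struct vdpa_device vdpa;
	struct iommu_domain *domain;	/* attached at probe time */
};

static int my_vdpa_dma_map(struct vdpa_device *vdpa, u64 iova,
			   u64 size, u64 pa, u32 perm)
{
	struct my_vdpa_dev *dev = container_of(vdpa, struct my_vdpa_dev, vdpa);
	int prot = 0;

	if (perm & VHOST_ACCESS_RO)
		prot |= IOMMU_READ;
	if (perm & VHOST_ACCESS_WO)
		prot |= IOMMU_WRITE;

	/* Incremental: one IOVA range per call. */
	return iommu_map(dev->domain, iova, pa, size, prot);
}

static int my_vdpa_dma_unmap(struct vdpa_device *vdpa, u64 iova, u64 size)
{
	struct my_vdpa_dev *dev = container_of(vdpa, struct my_vdpa_dev, vdpa);

	iommu_unmap(dev->domain, iova, size);
	return 0;
}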
2020 Feb 05
1
[PATCH] vhost: introduce vDPA based backend
...mething like the iommu_device. > > > > > Would it be better for the map/unmap logic to happen inside each device ? > > Devices that need the IOMMU will call iommu APIs from inside the driver callback. > > > Technically, this can work. But if it can be done by vhost-vpda it will make > the vDPA driver more compact and easier to implement. > > > > Devices that have other ways to do the DMA mapping will call the proprietary APIs. > > > To confirm, do you prefer: > > 1) map/unmap > > or > > 2) pass all maps at one t...
2020 Feb 05
2
[PATCH] vhost: introduce vDPA based backend
...1) device without on-chip IOMMU, DMA was done via the IOMMU API, which only supports incremental map/unmap 2) device with on-chip IOMMU, DMA could be done by the device driver itself, and we could choose to pass the whole mapping to the driver at one time through a vDPA bus operation (set_map) For vhost-vpda, there are two types of memory mapping: a) memory table, set up by userspace through VHOST_SET_MEM_TABLE; the whole mapping is updated in this way b) IOTLB API, incrementally done by userspace through vhost messages (IOTLB_UPDATE/IOTLB_INVALIDATE) The current design is: - Reuse VHOST_SET_MEM_...
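The two device types map onto two alternative sets of vDPA bus callbacks. A sketch of the relevant part of the config ops, following mainline include/linux/vdpa.h from around this time; the patch under review may differ:

/*
 * The two DMA-mapping styles as vDPA bus operations.  A driver
 * implements either set_map (on-chip IOMMU: whole mapping passed
 * at once) or dma_map/dma_unmap (no on-chip IOMMU: incremental),
 * not both.
 */
struct vdpa_config_ops {
	/* ... device and virtqueue ops elided ... */

	/* 2) on-chip IOMMU: receive the complete mapping in one call */
	int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);

	/* 1) no on-chip IOMMU: map/unmap one IOVA range at a time */
	int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
		       u64 pa, u32 perm);
	int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);
};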
2020 Feb 05
1
[PATCH] vhost: introduce vDPA based backend
...d to wait for all the DMA to be set up to let the guest work. > >> 2) device with on-chip IOMMU, DMA could be done by the device driver itself, and >> we could choose to pass the whole mapping to the driver at one time through >> a vDPA bus operation (set_map) >> >> For vhost-vpda, there are two types of memory mapping: >> >> a) memory table, set up by userspace through VHOST_SET_MEM_TABLE; the whole >> mapping is updated in this way >> b) IOTLB API, incrementally done by userspace through vhost messages >> (IOTLB_UPDATE/IOTLB_INVALIDATE) >...
2020 Feb 05
0
[PATCH] vhost: introduce vDPA based backend
...We may also need to introduce something like the iommu_device. >> > Would it be better for the map/unmap logic to happen inside each device ? > Devices that need the IOMMU will call iommu APIs from inside the driver callback. Technically, this can work. But if it can be done by vhost-vpda it will make the vDPA driver more compact and easier to implement. > Devices that have other ways to do the DMA mapping will call the proprietary APIs. To confirm, do you prefer: 1) map/unmap or 2) pass all maps at one time? Thanks
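Option 2 ("pass all maps at one time") would look roughly like the sketch below: the driver receives the whole mapping as a vhost_iotlb in set_map() and walks it. my_hw_reset_mappings() and my_hw_program_range() are hypothetical vendor hooks for the on-chip IOMMU; the iterator API is from mainline include/linux/vhost_iotlb.h:

/*
 * Sketch of a set_map() implementation for a device with an
 * on-chip IOMMU: translate each range of the complete mapping
 * into a (hypothetical) vendor-specific programming call.
 */
#include <linux/limits.h>
#include <linux/vhost_iotlb.h>

static int my_vdpa_set_map(struct vdpa_device *vdev,
			   struct vhost_iotlb *iotlb)
{
	struct vhost_iotlb_map *map;
	int ret;

	my_hw_reset_mappings(vdev);	/* hypothetical: drop the old mapping */

	for (map = vhost_iotlb_itree_first(iotlb, 0, ULLONG_MAX); map;
	     map = vhost_iotlb_itree_next(map, 0, ULLONG_MAX)) {
		ret = my_hw_program_range(vdev, map->start,
					  map->last - map->start + 1,
					  map->addr, map->perm);
		if (ret)
			return ret;
	}
	return 0;
}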
2020 Feb 05
0
[PATCH] vhost: introduce vDPA based backend
...50:14PM +0800, Jason Wang wrote: > > > Would it be better for the map/unmap logic to happen inside each device ? > > > Devices that need the IOMMU will call iommu APIs from inside the driver callback. > > > > Technically, this can work. But if it can be done by vhost-vpda it will make > > the vDPA driver more compact and easier to implement. > > Generally speaking, in the kernel, it is normal not to hoist code out of > drivers into subsystems until 2-3 drivers are duplicating that > code. It helps ensure the right design is used > > Ja...
2020 May 13
0
[PATCH V2] ifcvf: move IRQ request/free to status change handlers
...ooks good to me, but with this patch ping cannot work on my >> machine. (It works without this patch.) >> >> Thanks > This is strange; it works on my machines. Let's check offline. > > Thanks, > BR > Zhu Lingshan Note that I tested the patch with vhost-vpda. Thanks.
2020 May 13
0
[PATCH V2] ifcvf: move IRQ request/free to status change handlers
...> Patch looks good to me, but with this patch ping cannot work on my >> machine. (It works without this patch.) >> >> Thanks > This is strange; it works on my machines. Let's check offline. > > Thanks, > BR > Zhu Lingshan I gave it a try with virtio-vpda and a tiny userspace. Both work. So it could be an issue in the qemu code. Let's wait for Cindy to test if it really works. Thanks
2020 May 19
0
[PATCH V2] ifcvf: move IRQ request/free to status change handlers
...>> machine. (It works without this patch.) >>>> >>>> Thanks >>> This is strange; it works on my machines. Let's check offline. >>> >>> Thanks, >>> BR >>> Zhu Lingshan >> >> I gave it a try with virtio-vpda and a tiny userspace. Both work. >> >> So it could be an issue in the qemu code. >> >> Let's wait for Cindy to test if it really works. >> >> Thanks >> >>
2020 Feb 04
10
[PATCH] vhost: introduce vDPA based backend
On 2020/1/31 11:36, Tiwei Bie wrote: > This patch introduces a vDPA based vhost backend. This > backend is built on top of the same interface defined > in virtio-vDPA and provides a generic vhost interface > for userspace to accelerate the virtio devices in guest. > > This backend is implemented as a vDPA device driver on > top of the same ops used in virtio-vDPA. It will
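The layering described here makes vhost-vdpa and virtio-vdpa sibling drivers on the vDPA bus, each probing the same vdpa_device through the same config ops. A trimmed sketch of the bus-driver side, using the mainline vdpa_driver API; probe/remove bodies are elided:

/*
 * Sketch: vhost-vdpa registered as one more driver on the vDPA
 * bus, alongside virtio-vdpa.
 */
#include <linux/module.h>
#include <linux/vdpa.h>

static int vhost_vdpa_probe(struct vdpa_device *vdpa)
{
	/* create a vhost char device driven by vdpa's config ops ... */
	return 0;
}

static void vhost_vdpa_remove(struct vdpa_device *vdpa)
{
	/* tear the char device down ... */
}

static struct vdpa_driver vhost_vdpa_driver = {
	.driver.name	= "vhost_vdpa",
	.probe		= vhost_vdpa_probe,
	.remove		= vhost_vdpa_remove,
};

module_vdpa_driver(vhost_vdpa_driver);
MODULE_LICENSE("GPL");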
2020 Feb 05
0
[PATCH] vhost: introduce vDPA based backend
... >> > > > Would it be better for the map/unmap logic to happen inside each device ? > > > Devices that need the IOMMU will call iommu APIs from inside the driver > > callback. > > > > > > Technically, this can work. But if it can be done by vhost-vpda it will make the > > vDPA driver more compact and easier to implement. > > Need to see the layering of such a proposal, but I am not sure. > Vhost-vdpa is a generic framework, while the DMA mapping is vendor specific. > Maybe vhost-vdpa can have some shared code needed to operate...
2020 Jul 28
0
[PATCH V4 4/6] vhost_vdpa: implement IRQ offloading in vhost_vdpa
...xt >> vhost_vdpa_set_status() and other vDPA bus drivers' set_status(). If >> this is true, there's even no need to introduce any new config ops >> but just let set_status() return the irqs used for the device. Or >> if we want this to be more generic, we need vpda's own irq manager >> (which should be similar to the irq bypass manager). That is: > I think there is no need for a driver to free / re-request its irqs after DRIVER_OK, though it can do so. If a driver changes the irq of a vq after DRIVER_OK, the vq is still operational but will lose irq...
2020 Feb 06
0
[PATCH] vhost: introduce vDPA based backend
...ice. >>>> >>> Would it be better for the map/unmap logic to happen inside each device ? >>> Devices that need the IOMMU will call iommu APIs from inside the driver >> callback. >> >> >> Technically, this can work. But if it can be done by vhost-vpda it will make the >> vDPA driver more compact and easier to implement. > Need to see the layering of such a proposal, but I am not sure. > Vhost-vdpa is a generic framework, while the DMA mapping is vendor specific. > Maybe vhost-vdpa can have some shared code needed to operate on iommu...
2020 Feb 05
0
[PATCH] vhost: introduce vDPA based backend
...tters but we are better off emulating hardware, not specific guest behaviour. > 2) device with on-chip IOMMU, DMA could be done by the device driver itself, and > we could choose to pass the whole mapping to the driver at one time through > a vDPA bus operation (set_map) > > For vhost-vpda, there are two types of memory mapping: > > a) memory table, set up by userspace through VHOST_SET_MEM_TABLE; the whole > mapping is updated in this way > b) IOTLB API, incrementally done by userspace through vhost messages > (IOTLB_UPDATE/IOTLB_INVALIDATE) > > The current d...
2020 Feb 17
0
[PATCH V2 3/5] vDPA: introduce vDPA bus
...hing creating many char devs should have a > class. That makes the sysfs work as expected > > I suppose this is vhost user? Actually not. Vhost-user is the vhost protocol that is used for a userspace vhost backend (usually through a UNIX domain socket). What's being done in vhost-vpda is a new type of vhost in the kernel. > I admit I don't really see how this > vhost stuff works, all I see are global misc devices? Very unusual for > a new subsystem to be using global misc devices.. Vhost is not a subsystem right now; e.g. for its net implementation, it was...
2020 Jul 28
0
[PATCH V4 4/6] vhost_vdpa: implement IRQ offloading in vhost_vdpa
...e usage of get_vq_irq() in the context of vhost_vdpa_set_status() and other vDPA bus drivers' set_status(). If this is true, there's even no need to introduce any new config ops but just let set_status() return the irqs used for the device. Or if we want this to be more generic, we need vpda's own irq manager (which should be similar to the irq bypass manager). That is: - the bus driver can register itself as a consumer - the vDPA device driver can register itself as a producer - matching via queue index - deal with registering/unregistering of consumer/producer So there's no need to care w...
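The producer/consumer split sketched in this message is essentially the existing irq bypass manager: the vDPA side registers the vq irq as a producer keyed by the call eventfd, and a consumer such as KVM's irqfd matches on that token. A sketch of the producer side, based on how this later landed in mainline drivers/vhost/vdpa.c; details may differ from the V4 patch:

/*
 * Sketch of the producer side of IRQ offloading.  The vq's call
 * eventfd is the matching token; a consumer registering the same
 * token gets wired directly to the device irq.
 */
#include <linux/irqbypass.h>

static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
{
	struct vhost_virtqueue *vq = &v->vqs[qid];
	const struct vdpa_config_ops *ops = v->vdpa->config;
	int irq;

	if (!ops->get_vq_irq)
		return;

	irq = ops->get_vq_irq(v->vdpa, qid);

	/* Drop any stale producer before (re)registering. */
	irq_bypass_unregister_producer(&vq->call_ctx.producer);
	if (!vq->call_ctx.ctx || irq < 0)
		return;

	vq->call_ctx.producer.token = vq->call_ctx.ctx;
	vq->call_ctx.producer.irq = irq;
	irq_bypass_register_producer(&vq->call_ctx.producer);
}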
2020 Feb 05
2
[PATCH] vhost: introduce vDPA based backend
On 2020/2/5 1:31, Michael S. Tsirkin wrote: > On Wed, Feb 05, 2020 at 11:12:21AM +0800, Jason Wang wrote: >> On 2020/2/5 10:05, Tiwei Bie wrote: >>> On Tue, Feb 04, 2020 at 02:46:16PM +0800, Jason Wang wrote: >>>> On 2020/2/4 2:01, Michael S. Tsirkin wrote: >>>>> On Tue, Feb 04, 2020 at 11:30:11AM +0800, Jason Wang wrote: >>>>>>