search for: vitio

Displaying 20 results from an estimated 21 matches for "vitio".

Did you mean: virtio
2018 Jan 17
2
[virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
...copies of each packet. I think we want to use only 1 interface to send out any packet. In case of broadcast/multicasts it would be an optimization to send them via virtio and this patch series adds that optimization. In the receive path, the broadcasts should only go the PF and reach the VM via vitio so that the VM doesn't see duplicate broadcasts. > > To me the east/west scenario looks like you want something > more similar to a bridge on top of the virtio/PT pair. > > So I suspect that use-case will need a separate configuration bit, > and possibly that's when you...
2018 Dec 29
0
[RFC PATCH V3 1/5] vhost: generalize adding used elem
Use one generic vhost_copy_to_user() instead of two dedicated accessors. This will simplify the conversion to fine grain accessors. About 2% improvement of PPS was seen during vitio-user txonly test. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/vhost.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 55e5aa662ad5..f179b5ee14c4 100644 --- a/drivers/vhost/vhost.c ++...
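A rough sketch of what "one generic accessor instead of two dedicated ones" means in drivers/vhost/vhost.c; this is a kernel-context fragment reconstructed from the description above, not the literal hunk, so treat the exact code as an assumption:

    /* Sketch: write the used elements back to the guest ring in one call.
     * Before the patch the count == 1 case went through two dedicated
     * vhost_put_user() calls (one for id, one for len); afterwards
     * everything uses the generic vhost_copy_to_user(). */
    start = vq->last_used_idx & (vq->num - 1);
    used = vq->used->ring + start;
    if (vhost_copy_to_user(vq, used, heads, count * sizeof(*heads))) {
            vq_err(vq, "Failed to write used");
            return -EFAULT;
    }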
2019 Jan 07
0
[RFC PATCH V3 1/5] vhost: generalize adding used elem
...Tsirkin wrote: >> On Sat, Dec 29, 2018 at 08:46:52PM +0800, Jason Wang wrote: >>> Use one generic vhost_copy_to_user() instead of two dedicated >>> accessors. This will simplify the conversion to fine grain >>> accessors. About 2% improvement of PPS was seen during vitio-user >>> txonly test. >>> >>> Signed-off-by: Jason Wang <jasowang at redhat.com> >> I don't have a problem with this patch but do you have >> any idea how come removing what's supposed to be >> an optimization speeds things up? > With SMA...
2018 Jan 17
0
[virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
...icast group does not have any VMs on same host. I'd rather we just sent everything out on the PT if that's there. The reason we have virtio in the picture is just so we can migrate without downtime. > In the receive path, the broadcasts should only go the PF and reach the VM > via vitio so that the VM doesn't see duplicate broadcasts. > > > > > > To me the east/west scenario looks like you want something > > more similar to a bridge on top of the virtio/PT pair. > > > > So I suspect that use-case will need a separate configuration bit, ...
2018 Jan 17
2
[virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
...ding solution ends up being a way to resolve that so that they could just have it take care of picking the right Tx queue based on the NUMA affinity and fall back to the virtio/netvsc when those fail. >> In the receive path, the broadcasts should only go the PF and reach the VM >> via vitio so that the VM doesn't see duplicate broadcasts. >> >> >> > >> > To me the east/west scenario looks like you want something >> > more similar to a bridge on top of the virtio/PT pair. >> > >> > So I suspect that use-case will need a separ...
2018 Jan 17
2
[virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
On Thu, Jan 11, 2018 at 9:58 PM, Sridhar Samudrala <sridhar.samudrala at intel.com> wrote: > This feature bit can be used by the hypervisor to indicate that the virtio_net device should > act as a backup for another device with the same MAC address. > > Signed-off-by: Sridhar Samudrala <sridhar.samudrala at intel.com> > --- > drivers/net/virtio_net.c | 2 +- >
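For orientation, a minimal user-space sketch of what negotiating such a feature bit amounts to; the bit number and the helper below are illustrative assumptions taken from the RFC discussion, not the patch itself:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bit number as proposed in the RFC; treat it as an assumption here. */
    #define VIRTIO_NET_F_BACKUP 62

    /* Illustrative stand-in for the kernel's virtio_has_feature() check. */
    static bool has_feature(uint64_t features, unsigned int bit)
    {
            return features & (1ULL << bit);
    }

    int main(void)
    {
            /* Device offers BACKUP: the virtio_net instance should act as a
             * standby for the passthrough device with the same MAC address. */
            uint64_t device_features = 1ULL << VIRTIO_NET_F_BACKUP;

            if (has_feature(device_features, VIRTIO_NET_F_BACKUP))
                    printf("virtio_net acts as backup for the passthrough device\n");
            return 0;
    }

The point of the bit is purely a hint to the guest driver; the actual failover/teaming logic lives elsewhere in the series.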
2018 Jan 23
0
[virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
...use a PV interface for it. > > When we do this, we'll need to have another > > feature bit, and we can call it SIDE_CHANNEL or whatever. > > > > > >>>> In the receive path, the broadcasts should only go the PF and reach the VM > >>>> via vitio so that the VM doesn't see duplicate broadcasts. > >>>> > >>>> > >>>>> To me the east/west scenario looks like you want something > >>>>> more similar to a bridge on top of the virtio/PT pair. > >>>>> > ...
2018 Jan 22
0
[virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
...of extra information host to guest, and we'd have to use a PV interface for it. When we do this, we'll need to have another feature bit, and we can call it SIDE_CHANNEL or whatever. > >> In the receive path, the broadcasts should only go the PF and reach the VM > >> via vitio so that the VM doesn't see duplicate broadcasts. > >> > >> > >> > > >> > To me the east/west scenario looks like you want something > >> > more similar to a bridge on top of the virtio/PT pair. > >> > > >> > So I susp...
2018 Jan 22
5
[virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
...t to guest, and we'd have to use a PV interface for it. > When we do this, we'll need to have another > feature bit, and we can call it SIDE_CHANNEL or whatever. > > >>>> In the receive path, the broadcasts should only go the PF and reach the VM >>>> via vitio so that the VM doesn't see duplicate broadcasts. >>>> >>>> >>>>> To me the east/west scenario looks like you want something >>>>> more similar to a bridge on top of the virtio/PT pair. >>>>> >>>>> So I suspect t...
2018 Dec 29
12
[RFC PATCH V3 0/5] Hi:
This series tries to access virtqueue metadata through kernel virtual address instead of copy_user() friends since they had too much overhead like checks, spec barriers or even hardware feature toggling. Test shows about 24% improvement on TX PPS. It should benefit other cases as well. Changes from V2: - fix buggy range overlapping check - tear down MMU notifier during vhost ioctl to make sure
2018 Dec 28
4
[RFC PATCH V2 0/3] vhost: accelerate metadata access through vmap()
Hi: This series tries to access virtqueue metadata through kernel virtual address instead of copy_user() friends since they had too much overhead like checks, spec barriers or even hardware feature toggling. Test shows about 24% improvement on TX PPS. It should benefit other cases as well. Changes from V1: - instead of pinning pages, use MMU notifier to invalidate vmaps and remap during
2019 Apr 23
7
[RFC PATCH V3 0/6] vhost: accelerate metadata access
This series tries to access virtqueue metadata through kernel virtual address instead of copy_user() friends since they had too much overhead like checks, spec barriers or even hardware feature toggling. This is done through setting up kernel address through direct mapping and co-operating VM management with MMU notifiers. Test shows about 23% improvement on TX PPS. TCP_STREAM doesn't see obvious
2019 May 24
10
[PATCH net-next 0/6] vhost: accelerate metadata access
Hi: This series tries to access virtqueue metadata through kernel virtual address instead of copy_user() friends since they had too much overhead like checks, spec barriers or even hardware feature toggling like SMAP. This is done through setting up kernel address through direct mapping and co-operating VM management with MMU notifiers. Test shows about 23% improvement on TX PPS. TCP_STREAM
2019 Mar 06
12
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
This series tries to access virtqueue metadata through kernel virtual address instead of copy_user() friends since they had too much overhead like checks, spec barriers or even hardware feature toggling. This is done through setting up kernel address through vmap() and registering an MMU notifier for invalidation. Test shows about 24% improvement on TX PPS. TCP_STREAM doesn't see obvious improvement.
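The cover letters above all describe the same idea: map the virtqueue metadata into the kernel once and invalidate that mapping from an MMU notifier, instead of paying copy_user() costs (checks, speculation barriers, SMAP toggling) on every access. A conceptual kernel-context sketch; the structure and function names here are illustrative assumptions, not code from the series:

    /* Conceptual sketch only: keep a vmap()ed view of the virtqueue
     * metadata and tear it down when the guest memory layout changes,
     * falling back to copy_user() until the mapping is rebuilt. */
    struct vq_meta_map {
            struct page **pages;        /* pages backing the metadata */
            void *addr;                 /* kernel virtual address from vmap() */
            struct mmu_notifier mn;     /* registered against the owner mm */
    };

    static void vq_meta_invalidate(struct vq_meta_map *map)
    {
            /* Called from the MMU notifier path: the mapping may be stale. */
            if (map->addr) {
                    vunmap(map->addr);
                    map->addr = NULL;
            }
    }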