Results from matches similar to: "[PATCHv3 3/4] qemu-kvm: vhost-net implementation"
2009 Aug 10
0
[PATCH 3/3] qemu-kvm: vhost-net implementation
This adds support for vhost-net virtio kernel backend.
To enable (assuming device eth2):
1. enable promisc mode or program the guest MAC in device eth2
2. disable tso, gso and lro on the card
3. add vhost=eth2 to the -net flag
4. run with CAP_NET_ADMIN privilege (e.g. as root)
This patch is an RFC, but it works without issues for me.
It still needs to be split up, tested and benchmarked properly,
but posting it
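A minimal shell sketch of the four steps above, assuming eth2 is the host NIC; exactly which -net option carries vhost= is not shown in this RFC-era excerpt, so its placement on the nic option here is an assumption:

    # 1. put the card in promisc mode (alternatively, program the guest MAC)
    ip link set eth2 promisc on
    # 2. disable tso, gso and lro offloads on the card
    ethtool -K eth2 tso off gso off lro off
    # 3+4. run qemu as root (CAP_NET_ADMIN); memory/disk values are placeholders
    qemu-system-x86_64 -m 1024 -hda guest.img \
        -net nic,model=virtio,vhost=eth2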
2009 Aug 13
0
[PATCHv2 3/3] qemu-kvm: vhost-net implementation
This adds support for vhost-net virtio kernel backend.
To enable (assuming device eth2):
1. enable promisc mode or program the guest MAC in device eth2
2. disable tso, gso and lro on the card
3. add vhost=eth2 to the -net flag
4. run with CAP_NET_ADMIN privilege (e.g. as root)
This patch is an RFC, but it works without issues for me.
It still needs to be split up, tested and benchmarked properly,
but posting it
2009 Nov 02
2
[PATCHv4 6/6] qemu-kvm: vhost-net implementation
This adds support for vhost-net virtio kernel backend.
This patch is not intended to be merged yet.
I'm posting it for the benefit of people testing
the backend.
Usage instructions:
vhost currently requires MSI-X support in guest virtio.
This means the guest kernel version should be >= 2.6.31.
To enable vhost, simply add the ",vhost" flag to the nic options.
Example with tap backend:
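The example itself is truncated in this excerpt. A hedged sketch of what a tap-backend invocation with the ",vhost" nic flag might look like, with all other options as placeholders:

    # guest kernel should be >= 2.6.31 so guest virtio has MSI-X support
    qemu-system-x86_64 -m 1024 -hda guest.img \
        -net nic,model=virtio,vhost \
        -net tap,ifname=tap0,script=no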
2012 Jun 25
4
[RFC V2 PATCH 0/4] Multiqueue support for tap and virtio-net/vhost
Hello all:
This series is an update of the last version of multiqueue support, adding
multiqueue capability to both tap and virtio-net.
Some kinds of tap backend already have (macvtap in Linux) or will (tap) support
multiqueue. In this kind of tap backend, each file descriptor of a tap is a
queue, and ioctls are provided to attach an existing tap file descriptor to the
tun/tap device. So the patch lets qemu
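For illustration, a hedged sketch of driving a multiqueue tap; the multi_queue flag is standard iproute2 syntax, while the queues=/mq= options shown are the syntax that eventually landed in qemu and may not match this RFC exactly:

    # create a multiqueue-capable tap device
    ip tuntap add dev tap0 mode tap multi_queue
    # hypothetical invocation: 4 tap queues, multiqueue virtio-net
    # (vectors = 2 * queues + 2 for config/control interrupts)
    qemu-system-x86_64 -m 1024 -hda guest.img \
        -netdev tap,id=hn0,ifname=tap0,queues=4 \
        -device virtio-net-pci,netdev=hn0,mq=on,vectors=10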
2012 Jul 06
5
[RFC V3 0/5] Multiqueue support for tap and virtio-net/vhost
Hello all:
This series is an update of the last version of multiqueue support, adding
multiqueue capability to both tap and virtio-net.
Some kinds of tap backend already have (macvtap in Linux) or will (tap) support
multiqueue. In this kind of tap backend, each file descriptor of a tap is a
queue, and ioctls are provided to attach an existing tap file descriptor to the
tun/tap device. So the patch lets qemu
2011 May 04
4
[PATCH 0/3] virtio-net: 64 bit features, event index
OK, here's a patch that implements the virtio spec update that I
sent earlier. It supersedes the PUBLISH_USED_IDX patches
I sent out earlier.
Support is added in both userspace and vhost-net.
I see nice performance improvements: e.g. from 12 to 18 Gbit/s host
to guest with netperf, but did not spend a lot of time testing
performance. I hope others will try this out and report.
Note: there
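A hedged sketch of the kind of host-to-guest netperf run mentioned above; the guest address and duration are placeholders:

    # inside the guest: start the netperf server
    netserver
    # on the host: 60-second TCP stream test towards the guest
    netperf -H 192.168.122.100 -t TCP_STREAM -l 60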
2011 May 19
2
[PATCHv2 0/2] virtio-net: 64 bit features, event index
OK, here's a patch that implements the virtio spec update that I
sent earlier. It supersedes the PUBLISH_USED_IDX patches
I sent out earlier.
Support is added in both userspace and vhost-net.
If you see issues or are just curious, you can
turn the new feature off. For example:
-global virtio-net-pci.event_idx=on
-global virtio-blk-pci.event_idx=off
Also, it's possible to try both
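Combining the flags above, a hedged sketch of one invocation that disables the new event index feature for both device types; all other options are placeholders:

    qemu-system-x86_64 -m 1024 -hda guest.img \
        -global virtio-net-pci.event_idx=off \
        -global virtio-blk-pci.event_idx=off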
2012 Jul 23
2
[PATCH V2] qemu-xen-traditional, Fix dirty logging during migration.
This moves the xen_modified_memory call from cpu_physical_memory_map to
cpu_physical_memory_unmap because the memory could be migrated before the
device model has written to it.
But because we need to know the guest address and to avoid writing a new
function, the call is moved to qemu_invalidate_entry. So the latter has two new
parameters: the length of the mapping and whether it was a write.
2012 Mar 19
1
[PATCHv2] virtio-pci: add MMIO property
Currently virtio-pci is specified so that configuration of the device is
done through a PCI IO space (via BAR 0 of the virtual PCI device).
However, Linux guests happen to use ioread/iowrite/iomap primitives
for access, and these work uniformly across memory/io BARs.
While PCI IO accesses are faster than MMIO on x86 kvm,
MMIO might be helpful on other systems:
for example IBM pSeries machines not
2009 Jun 18
0
[PATCHv5 09/13] qemu: virtio support for many interrupt vectors
Extend virtio to support many interrupt vectors, and rearrange code in
preparation for multi-vector support (mostly move reset out to bindings,
because we will have to reset the vectors in transport-specific code).
Actual bindings in pci, and use in net, to follow.
Load and save are not connected to bindings yet, so they are left
stubbed out for now.
Signed-off-by: Michael S. Tsirkin <mst at
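The excerpt does not show the user-facing knob, but the MSI-X vector count of a virtio PCI device is configurable; a hedged sketch using the vectors property as exposed in later qemu (syntax at the time of this series may have differed):

    # request 4 MSI-X vectors for a virtio-net device (values are examples)
    qemu-system-x86_64 -m 1024 -hda guest.img \
        -netdev tap,id=hn0 \
        -device virtio-net-pci,netdev=hn0,vectors=4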
2009 Jun 10
0
[PATCHv4 09/13] qemu: virtio support for many interrupt vectors
Extend virtio to support many interrupt vectors, and rearrange code in
preparation for multi-vector support (mostly move reset out to bindings,
because we will have to reset the vectors in transport-specific code).
Actual bindings in pci, and use in net, to follow.
Load and save are not connected to bindings yet, so they are left
stubbed out for now.
Signed-off-by: Michael S. Tsirkin <mst at