2020 May 19
1
Re: macvtap direct
On Thu, May 14, 2020 at 1:32 PM Laine Stump <laine@redhat.com> wrote:
> On 5/13/20 12:52 AM, Subhendu Ghosh wrote:
> > Hi
> >
> > Couple of questions around macvtap direct usage:
> >
> > 1) is the document here current?
> > https://libvirt.org/formatnetwork.html#examplesDirect
>
> Yes. None of that has changed in any major way in many years.
>
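For context, a minimal sketch of the direct-mode configuration that page describes, attached with virsh; the guest name guest1 and the host device eth0 are assumptions, not from the thread:

cat > direct.xml <<'EOF'
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
EOF
virsh attach-device guest1 direct.xml --config   # persist in the domain config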
2020 May 14
0
Re: macvtap direct
On 5/13/20 12:52 AM, Subhendu Ghosh wrote:
> Hi
>
> Couple of questions around macvtap direct usage:
>
> 1) is the document here current?
> https://libvirt.org/formatnetwork.html#examplesDirect
Yes. None of that has changed in any major way in many years.
>
> I have been able to get host-to-guest network traffic without any
> special configuration or switch since
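The commonly cited caveat behind this question: with macvtap in bridge mode, the host itself usually cannot reach the guest over the same physical device, because frames are not hairpinned back up the host's stack. A sketch of the usual workaround, a host-side macvlan on the same device (interface name and address are assumptions):

ip link add link eth0 name macvlan0 type macvlan mode bridge   # host-side endpoint
ip addr add 192.168.122.50/24 dev macvlan0                     # host address on the segment
ip link set macvlan0 up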
2015 Mar 20
2
getting oriented/networking
I've been using virt-manager and KVM with a disk image (as in the raw bits) from a physical Windows 7 machine. Initial performance was dreadful, but improved as I switched to virtio and SPICE. I've been running Linux VMs somewhat longer (much longer if you count KVM without libvirt).
There are lots of choices exposed by virt-manager. How do I find out what the choices mean, and
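One way to decode those choices, as a sketch: dump the XML that virt-manager generates and read it against the libvirt documentation (the domain name win7 is an assumption):

virsh dumpxml win7 > win7.xml   # the XML behind virt-manager's settings
# cross-reference the elements with https://libvirt.org/formatdomain.html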
2015 Mar 20
1
Re: getting oriented/networking [some success]
I seem to have run into https://bugzilla.redhat.com/show_bug.cgi?id=855640, because when I tried the fix/work-around at the end (comment 11), ethtool -K eth0 gro off, my download speed as measured by speedtest went from undetectable to ~150 Mb/s. However, it could not connect for the upload test, so something may still be off. Non-virtual machines can do the upload test, so it's not just a
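For reference, a sketch of applying and verifying that workaround (eth0 as in the report):

ethtool -k eth0 | grep generic-receive-offload   # show the current GRO state
ethtool -K eth0 gro off                          # the work-around from comment 11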
2009 Dec 03
3
[RFC 0/2] macvtap, second try
I did not get this ready for the merge window, but people asked what
the status of this is so I'm posting it now to solicit feedback.
The first patch just adds some hooks into macvlan.c and is less
invasive than the previous version. That part should be fine
and I'd like this to get merged into macvlan for 2.6.33 if people
agree that the approach is right.
The second patch adds the
2015 Apr 30
3
Limitations of macvtap devices?
I am running OpenStack inside a libvirt guest that is connected to the
local network via a macvtap interface. My experience so far suggests
that a macvtap interface will not pass traffic with a source MAC
address other than the MAC address of the interface itself... for
example, if, inside the guest, eth0 is attached to a bridge.
Is that correct, or is there some setting that will make that work?
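The mode usually cited for this case is passthrough, which dedicates the host device to the guest and lifts the single-MAC restriction of the bridge/vepa/private modes. A sketch, with eth0 and guest1 as assumptions:

cat > passthru.xml <<'EOF'
<interface type='direct'>
  <source dev='eth0' mode='passthrough'/>
  <model type='virtio'/>
</interface>
EOF
virsh attach-device guest1 passthru.xml --config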
2009 Aug 07
3
[Bridge] [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support
Paul,
I also think that the bridge may not be the right place for VEPA, but rather a simpler sw/hw mux, although the VEPA support may reside in multiple places (i.e. also in the bridge).
As Arnd pointed out, Or has already added an extension to qemu that allows direct mapping of a guest's virtual NIC to an interface device (vs. using tap); this was done specifically to address VEPA, and results in much faster
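For context, a sketch of how this direct mapping later surfaced in libvirt's interface XML, with the macvtap endpoint in VEPA mode so all guest frames go out to the adjacent bridge (the device and guest names are assumptions):

cat > vepa.xml <<'EOF'
<interface type='direct'>
  <source dev='eth0' mode='vepa'/>
</interface>
EOF
virsh attach-device guest1 vepa.xml --config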
2019 Mar 13
2
Re: KVM-Docker-Networking using TAP and MACVLAN
On 3/13/19 2:26 PM, Martin Kletzander wrote:
> IIUC, you are using the tap0 device, but it is not plugged anywhere.
> By that I mean there is one end that you created and passed through
> into the VM, but there is no other end of that. I can think of some
> complicated ways to do what you are trying to do, but hopefully the
> above explanation will move you forward
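A sketch of what plugging in the other end usually means here: enslaving the tap to a host bridge (br0 is an assumption; tap0 as in the thread):

ip link add br0 type bridge    # create the missing "other end"
ip link set tap0 master br0    # plug the tap into the bridge
ip link set br0 up
ip link set tap0 up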
2015 Sep 17
3
Guest agent is not responding
Hello,
in my Windows VM I installed qemu-guest-agent and rebooted the VM.
In the settings for the VM I set, via virt-manager, a new channel: "unix
socket" "org.qemu.guest_agent.0" "virtio".
When I try to do a snapshot via shell I get:
virsh snapshot-create-as --domain win7new win7new-snap1 --disk-only --atomic --quiesce
error: Guest agent is not responding:
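A quick way to test whether that channel is actually connected, as a sketch (domain name taken from the post):

virsh qemu-agent-command win7new '{"execute":"guest-ping"}'
# a healthy agent answers: {"return":{}}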
2017 Oct 26
5
Re: Need to increase the rx and tx buffer size of my interface
Hi Ashish,
I have tested with your xml from the first mail, and it works for
rx_queue_size (see below).
Multiqueue needs to work with the vhost backend driver, and when you set
"queues=1" it will be ignored.
Please check your qemu-kvm-rhev package; it should be newer than
qemu-kvm-rhev-2.9.0-16.el7_4.2.
And the logs?
tx_queue_size='512' will not work in the guest with a direct type interface,
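For reference, a sketch of where these settings live in the interface XML; the value 512 is from the thread, while guest1 and eth0 are assumptions:

cat > rxq.xml <<'EOF'
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
  <driver name='vhost' rx_queue_size='512'/>
</interface>
EOF
virsh attach-device guest1 rxq.xml --config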
2017 Oct 26
2
Re: Need to increase the rx and tx buffer size of my interface
Hi Ashish,
IMO, yes: there is no way to increase tx_queue_size for a direct type interface
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
On Thu, Oct 26, 2017 at 3:38 PM, Ashish Kurian <ashishbnv@gmail.com> wrote:
> Hi Yalan,
>
> In the previous email you mentioned "tx_queue_size='512' will not work in
> the guest with direct type
2009 Dec 08
3
Guest bridge setup variations
As promised, here is my small writeup on which setups I feel
are important in the long run for server-type guests. This
does not cover -net user, which is really for desktop kinds
of applications where you do not want to connect into the
guest from another IP address.
I can see four separate setups that we may or may not want to
support, the main difference being how the forwarding between
guests
2012 Jun 21
1
Cannot create macvlan devices on this platform
Hi,
libvirt (0.9.11) refuses to start KVM-based virtual machines on my
system when changing the network connection from "host bridge" to
"direct" (macvtap/macvlan), in either "bridge" or "vepa" mode:
"Cannot create macvlan devices on this platform"
That's astonishing because I can easily set up working macvlan devices
using the
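For comparison, the kind of manual setup the poster presumably means, as a sketch (interface names are assumptions):

ip link add link eth0 name macvlan0 type macvlan mode vepa   # works by hand...
ip link set macvlan0 up                                      # ...yet libvirt refuses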
2009 Jun 15
1
[Bridge] [PATCH][RFC] net/bridge: add basic VEPA support
This patch adds basic Virtual Ethernet Port Aggregator (VEPA)
capabilities to the Linux kernel Ethernet bridging code.
A Virtual Ethernet Port Aggregator (VEPA) is a capability within
a physical end station that collaborates with an adjacent, external
bridge to provide distributed bridging support between multiple
virtual end stations and external networks. The VEPA collaborates
by forwarding all
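VEPA relies on the adjacent bridge doing reflective relay, i.e. sending frames back out the port they arrived on. In today's iproute2 that is the per-port hairpin flag; a sketch, with the bridge port name assumed:

bridge link set dev eth0 hairpin on   # enable reflective relay on the bridge port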