Displaying 20 results from an estimated 259 matches for "vm2".
2011 Aug 29
1
with heavy VM IO, clocksource causes random dom0 reboots
...in
that file begin with capital letters:
GRUB_TIMEOUT=5
GRUB_CMDLINE_XEN="com1=9600,8n1 console=com1,vga noreboot"
GRUB_CMDLINE_LINUX="console=tty0 console=hvc0"
2. Is a package necessary for pit?
3. Should clocksource=pit be set on domUs as well?
Aug 29 06:28:53 vm2 kernel: [53400.204119] updatedb.mloc D 0000000000000000     0  3605   3601 0x00000000
Aug 29 06:28:53 vm2 kernel: [53400.204125]  ffff8802ed071530 0000000000000286 0000000000000000 ffff88009babd260
Aug 29 06:28:53 vm2 kernel: [53400.204132]  ffff8802e95f0000 0000000000000088 000000000000f9e0 fff...
2014 Nov 12
3
Put virbr0 in promiscuous
Hi ,
I have two virtual machines VM1 and VM2. Then I have added eth0 of my VM
to 'default' network.
Use case :-
I want to monitor all traffic on virbr0('default' network).
Steps followed :-
1. Add VM1 eth0 to virbr0
2. Add VM2 eth1 to virbr0
3. brctl setageing ovsbr0 0 (to put the bridge in promiscuous mode)
Now I am running tcpd...
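The steps above can be collected into a short sketch. Note the original post mixes bridge names (virbr0 in the use case, ovsbr0 in the command); this sketch assumes virbr0, the stock libvirt 'default' network bridge, throughout:

```shell
# Monitoring sketch for the setup described above (requires root).
# Setting the ageing time to 0 makes the bridge forget learned MAC
# addresses immediately, so it floods every frame to all ports and
# behaves like a hub -- any port then sees all bridge traffic.
brctl setageing virbr0 0

# Capture everything crossing the bridge; -nn skips name/port resolution.
tcpdump -i virbr0 -nn
```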
2013 Dec 03
0
cputune shares with multiple cpu and pinning
Hi,
I have found the cpu time partitioning based on cpu shares weights not
very intuitive.
On RHEL64, I deployed two qemu/kvm VMs
VM1 with 1 vcpu and 512 cpu shares
VM2 with 2 vcpus and 1024 cpu shares
I pinned their vcpus to specific host pcpus:
VM1 vcpu 0 to host pcpu1
VM2 vcpu 0 to host pcpu1, VM2 vcpu 1 to host pcpu2
I executed inside the VMs a simple process that consumes all available CPU,
e.g.
# cat /dev/zero > /dev/null
on the host, using 'top...
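The shares and pinning described above can be reproduced with virsh (domain names VM1/VM2 as in the post; a sketch, not the poster's exact commands). These correspond to the shares and vcpupin elements of the domain XML's cputune section:

```shell
# Give VM1 half the scheduling weight of VM2 (512 vs 1024 cpu_shares).
virsh schedinfo VM1 --set cpu_shares=512
virsh schedinfo VM2 --set cpu_shares=1024

# Pin vcpus as described: both domains' vcpu 0 contend for host pcpu 1,
# while VM2's vcpu 1 gets host pcpu 2 to itself.
virsh vcpupin VM1 0 1
virsh vcpupin VM2 0 1
virsh vcpupin VM2 1 2
```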
2015 Apr 27
5
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
...networking is an exitless I/O path. In
other words, packets can be transferred between VMs without any
vmexits (this requires a polling driver).
Here is how it works. QEMU gets "-device vhost-user" so that a VM can
act as the vhost-user server:
VM1 (virtio-net guest driver) <-> VM2 (vhost-user device)
VM1 has a regular virtio-net PCI device. VM2 has a vhost-user device
and plays the host role instead of the normal virtio-net guest driver
role.
The ugly thing about this is that VM2 needs to map all of VM1's guest
RAM so it can access the vrings and packet data. The sol...
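For context, a conventional vhost-user setup (backend on the host side) is invoked roughly as follows; the proposal above would instead let a guest-visible device play the server role. Socket path and sizes are placeholders:

```shell
# Standard vhost-user client wiring: guest RAM must be file-backed and
# shareable so the vhost-user backend process can map it.
qemu-system-x86_64 \
  -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr0,path=/tmp/vhost-user0.sock \
  -netdev vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0
```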
2014 Aug 06
2
Re: [libvirt] libvirt external disk-only snapshot will pause the VM?
...snapshot for multiple VMs at the same time
>
> >
> > Is there only way that I turn to freeze/thaw functions?
>
> Guest freeze/thaw (virDomainFSFreeze) only works on a live guest. So
> what you will have to do is:
>
> virDomainFSFreeze(vm1, ...)
> virDomainFSFreeze(vm2, ...)
> virDomainSuspend(vm1)
> virDomainSuspend(vm2)
> virDomainSnapshotCreateXML(vm1, ...)
> virDomainSnapshotCreateXML(vm2, ...)
> virDomainResume(vm1)
> virDomainResume(vm2)
> virDomainFSThaw(vm1, ...)
> virDomainFSThaw(vm2, ...)
>
I see, thanks.
>
> Howev...
2015 Apr 22
5
Zerocopy VM-to-VM networking using virtio-net
...covered informally
throughout the text, this is not a VIRTIO specification change proposal.
The VM-to-VM capable virtio-net PCI adapter has an additional MMIO BAR
called the Shared Buffers BAR. The Shared Buffers BAR is a shared
memory region on the host so that the virtio-net devices in VM1 and VM2
both access the same region of memory.
The vring is still allocated in guest RAM as usual but data buffers must
be located in the Shared Buffers BAR in order to take advantage of
zero-copy.
When VM1 places a packet into the tx queue and the buffers are located
in the Shared Buffers BAR, the host...
2010 Apr 08
2
Multiple cdrom file-based drives in windows xp hvm?
In my vm2 config I have:
disk = [
    'phy:/dev/volumes/vm2-disk,hda,w',
    'phy:/dev/volumes/vm2-swap,hdb,w',
    'phy:/dev/volumes/vm2-data,hdc,w',
    'file:/xen/images/office2007basic.iso,hdd,r',
    'file:/xen/images/prin...
2014 Mar 07
5
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
...eeds VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too aggressive in some cases, since
any delay or blocking of a single packet may delay or block the guest
transmission. Consider the following setup:
+-----+        +-----+
| VM1 |        | VM2 |
+--+--+        +--+--+
   |              |
+--+--+        +--+--+
| tap0|        | tap1|
+--+--+        +--+--+
   |              |
pfifo_fast   htb(10Mbit/s)
   |              |
+--+--------------+---+
|       bridge        |
+--+------------------+...
2015 Apr 22
0
Zerocopy VM-to-VM networking using virtio-net
...throughout the text, this is not a VIRTIO specification change proposal.
>
> The VM-to-VM capable virtio-net PCI adapter has an additional MMIO BAR
> called the Shared Buffers BAR. The Shared Buffers BAR is a shared
> memory region on the host so that the virtio-net devices in VM1 and VM2
> both access the same region of memory.
>
> The vring is still allocated in guest RAM as usual but data buffers must
> be located in the Shared Buffers BAR in order to take advantage of
> zero-copy.
>
> When VM1 places a packet into the tx queue and the buffers are located
&...
2015 Apr 27
4
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
...rred between VMs without any
>>> vmexits (this requires a polling driver).
>>>
>>> Here is how it works. QEMU gets "-device vhost-user" so that a VM can
>>> act as the vhost-user server:
>>>
>>> VM1 (virtio-net guest driver) <-> VM2 (vhost-user device)
>>>
>>> VM1 has a regular virtio-net PCI device. VM2 has a vhost-user device
>>> and plays the host role instead of the normal virtio-net guest driver
>>> role.
>>>
>>> The ugly thing about this is that VM2 needs to map all...
2015 Sep 01
2
rfc: vhost user enhancements for vm2vm communication
...data movement between VMs, if using polling, this means that 1 host CPU
> needs to be sacrificed for this task.
>
> This is easiest to understand when one of the VMs is
> used with VF pass-through. This can be schematically shown below:
>
> +-- VM1 -----------+            +---VM2------------+
> | virtio-pci       +-vhost-user-+ virtio-pci -- VF | -- VFIO -- IOMMU -- NIC
> +------------------+            +------------------+
>
>
> With ivshmem in theory communication can happen directly, with two VMs
> polling the shared memory region.
>
>
>...
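The ivshmem alternative mentioned above maps one host memory region into both VMs. Launching each VM with something like the following (path and size are placeholders) gives the guests a common region they can poll directly:

```shell
# Each VM attaches the same file-backed region as an ivshmem PCI device;
# the guests can then poll the shared memory with no vmexits involved.
qemu-system-x86_64 \
  -object memory-backend-file,id=shm0,size=64M,mem-path=/dev/shm/vm2vm,share=on \
  -device ivshmem-plain,memdev=shm0
```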
2015 Apr 27
1
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
.... In
> other words, packets can be transferred between VMs without any
> vmexits (this requires a polling driver).
>
> Here is how it works. QEMU gets "-device vhost-user" so that a VM can
> act as the vhost-user server:
>
> VM1 (virtio-net guest driver) <-> VM2 (vhost-user device)
>
> VM1 has a regular virtio-net PCI device. VM2 has a vhost-user device
> and plays the host role instead of the normal virtio-net guest driver
> role.
>
> The ugly thing about this is that VM2 needs to map all of VM1's guest
> RAM so it can access t...
2014 Feb 25
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...eeds VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too aggressive in some cases, since
any delay or blocking of a single packet may delay or block the guest
transmission. Consider the following setup:
+-----+        +-----+
| VM1 |        | VM2 |
+--+--+        +--+--+
   |              |
+--+--+        +--+--+
| tap0|        | tap1|
+--+--+        +--+--+
   |              |
pfifo_fast   htb(10Mbit/s)
   |              |
+--+--------------+---+
|       bridge        |
+--+------------------+...
2014 Mar 13
3
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
...oth host and guest. But it was too aggressive in some cases, since
>> > any delay or blocking of a single packet may delay or block the guest
>> > transmission. Consider the following setup:
>> >
>> > +-----+        +-----+
>> > | VM1 |        | VM2 |
>> > +--+--+        +--+--+
>> >    |              |
>> > +--+--+        +--+--+
>> > | tap0|        | tap1|
>> > +--+--+        +--+--+
>> >    |              |
>> > pfifo_fast   htb(10Mbit/s)
>> >...