Displaying 18 results from an estimated 7000 matches similar to: "Backup suspended/paused VM ?"
2006 Jul 09
0
[PATCH] Fix error message of xm pause/unpause command
Hi,
When I tested the xm pause/unpause commands, I found that the error messages were poor.
The following are the error messages:
# xm list
Name                ID   Mem(MiB) VCPUs      State   Time(s)
Domain-0             0        492     1     r-----     394.6
vm1                  1        512     1     r-----       3.0
# xm pause 0
Error: (22, 'Invalid argument')
# xm pause
2014 Aug 06
2
Re: [libvirt] libvirt external disk-only snapshot will pause the VM?
On Wed, Aug 6, 2014 at 12:27 PM, Eric Blake <eblake@redhat.com> wrote:
> On 08/06/2014 10:06 AM, Yuanzhen Gu wrote:
> > yes, I got your point, thanks very much Eric.
>
> not entirely, because you still top-posted.
>
> got it entirely this time, not top-posted.
> >
> > If I want to take a distributed snapshot, which needs to pause all the VMs and
2014 Aug 06
0
Re: [libvirt] libvirt external disk-only snapshot will pause the VM?
On 08/06/2014 10:06 AM, Yuanzhen Gu wrote:
> yes, I got your point, thanks very much Eric.
not entirely, because you still top-posted.
>
> If I want to take a distributed snapshot, which needs to pause all the VMs
> and then take the snapshot, how can I control the pause for all the VMs?
You mean, you have multiple VMs, and want to take a snapshot of all
their storage at the same point in
2014 Aug 06
0
Re: [libvirt] libvirt external disk-only snapshot will pause the VM?
On 08/06/2014 11:17 AM, Yuanzhen Gu wrote:
>> Guest freeze/thaw (virDomainFSFreeze) only works on a live guest. So
>> what you will have to do is:
>>
>> virDomainFSFreeze(vm1, ...)
>> virDomainFSFreeze(vm2, ...)
>> virDomainSuspend(vm1)
>> virDomainSuspend(vm2)
>> virDomainSnapshotCreateXML(vm1, ...)
>> virDomainSnapshotCreateXML(vm2, ...)
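A minimal C sketch of the quoted sequence, assuming a qemu:///system connection and two domains named vm1 and vm2; the resume/thaw tail, the snapshot XML and the absence of error handling are illustrative assumptions, not part of the thread:

/* Sketch: quiesce, pause, snapshot, resume, thaw two domains.
 * Assumes libvirt >= 1.2.5 and a QEMU guest agent in each guest;
 * domain names and snapshot XML are placeholders. */
#include <libvirt/libvirt.h>

static const char *snap_xml =
    "<domainsnapshot>"
    "  <name>coordinated-snap</name>"
    "</domainsnapshot>";

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    const char *names[] = { "vm1", "vm2" };   /* placeholder names */
    virDomainPtr dom[2];
    virDomainSnapshotPtr snap[2];
    int i;

    for (i = 0; i < 2; i++)
        dom[i] = virDomainLookupByName(conn, names[i]);

    for (i = 0; i < 2; i++)          /* quiesce filesystems (live guests) */
        virDomainFSFreeze(dom[i], NULL, 0, 0);
    for (i = 0; i < 2; i++)          /* stop vCPUs so no further I/O lands */
        virDomainSuspend(dom[i]);
    for (i = 0; i < 2; i++)          /* external disk-only snapshots */
        snap[i] = virDomainSnapshotCreateXML(dom[i], snap_xml,
                      VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY);
    for (i = 0; i < 2; i++)          /* resume first, then thaw */
        virDomainResume(dom[i]);
    for (i = 0; i < 2; i++)
        virDomainFSThaw(dom[i], NULL, 0, 0);

    for (i = 0; i < 2; i++) {
        if (snap[i])
            virDomainSnapshotFree(snap[i]);
        virDomainFree(dom[i]);
    }
    virConnectClose(conn);
    return 0;
}

Resuming before thawing matters here: like the freeze, virDomainFSThaw goes through the guest agent, which can only respond while the vCPUs are running.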
2013 Dec 03
0
cputune shares with multiple cpu and pinning
Hi,
I have found that CPU time partitioning based on cpu shares weights is not
very intuitive.
On RHEL64, I deployed two qemu/kvm VMs
VM1 with 1 vcpu and 512 cpu shares
VM2 with 2 vcpus and 1024 cpu shares
I pinned their vcpus to specific host pcpus:
VM1 vcpu 0 to host pcpu1
VM2 vcpu 0 to host pcpu1, VM2 vcpu 1 to host pcpu2
I executed inside the VMs a simple process that consumes all
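For reference, the shares and pinning described above can also be applied to a running domain through the libvirt C API. A rough sketch for VM2 (the 1024 shares and the pCPU numbers come from the description above; the connection URI, domain name and omitted error handling are assumptions):

/* Sketch: set 1024 cpu shares on a running domain and pin its two
 * vCPUs to host pCPU1 and pCPU2, as in the VM2 setup described above. */
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = virDomainLookupByName(conn, "VM2"); /* placeholder */

    /* cpu_shares is a per-domain cgroup weight, not a per-vCPU one. */
    virTypedParameterPtr params = NULL;
    int nparams = 0, maxparams = 0;
    virTypedParamsAddULLong(&params, &nparams, &maxparams,
                            VIR_DOMAIN_SCHEDULER_CPU_SHARES, 1024);
    virDomainSetSchedulerParametersFlags(dom, params, nparams,
                                         VIR_DOMAIN_AFFECT_LIVE);
    virTypedParamsFree(params, nparams);

    /* One-byte CPU bitmaps: bit 1 = host pCPU1, bit 2 = host pCPU2. */
    unsigned char pcpu1 = 1 << 1, pcpu2 = 1 << 2;
    virDomainPinVcpu(dom, 0, &pcpu1, 1);   /* vCPU0 -> pCPU1 */
    virDomainPinVcpu(dom, 1, &pcpu2, 1);   /* vCPU1 -> pCPU2 */

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}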
2015 Apr 22
0
Zerocopy VM-to-VM networking using virtio-net
On Wed, 22 Apr 2015 18:01:38 +0100
Stefan Hajnoczi <stefanha at redhat.com> wrote:
> [It may be necessary to remove virtio-dev at lists.oasis-open.org from CC
> if you are a non-TC member.]
>
> Hi,
> Some modern networking applications bypass the kernel network stack so
> that rx/tx rings and DMA buffers can be directly mapped. This is
> typical in DPDK applications
2012 Mar 22
1
Does libvirt check MCS labels during hot-add disk image ?
Libvirt does not check security during hot-add of disk images. It even
accepts the addition of disk images belonging to another guest running on the host.
Steps followed to create this scenario:
Started two VMs with the following security configurations:
vm1:
<seclabel type='dynamic' model='selinux' relabel='yes'>
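For what it's worth, the SELinux/MCS label that dynamic labeling assigns to each guest can be read back through the libvirt C API, which is one way to confirm the two guests received different MCS categories. A small sketch (the domain name is a placeholder):

/* Sketch: print the SELinux label (including the MCS categories)
 * assigned to a running domain by libvirt's dynamic labeling. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virDomainPtr dom = virDomainLookupByName(conn, "vm1"); /* placeholder */
    if (dom) {
        virSecurityLabel seclabel;
        if (virDomainGetSecurityLabel(dom, &seclabel) == 0)
            printf("%s (enforcing: %d)\n", seclabel.label, seclabel.enforcing);
        virDomainFree(dom);
    }
    virConnectClose(conn);
    return 0;
}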
2014 Feb 25
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
> We used to stop the handling of tx when the number of pending DMAs
> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
> of both host and guest. But it was too aggressive in some cases, since
> any delay or blocking of a single packet may delay or block the guest
> transmission. Consider the following
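The behaviour the commit message describes (keep handling tx, and fall back to an ordinary data copy once too many zero-copy DMAs are outstanding) can be illustrated with a standalone sketch; this is not the real vhost-net code, and the names and the limit value are made up:

/* Standalone illustration of the policy above: never stop tx handling;
 * instead choose copy vs. zero-copy per packet based on how many
 * zero-copy completions are still outstanding. Not the real vhost code. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PEND      128   /* stand-in for VHOST_MAX_PEND */
#define GOODCOPY_LEN  256   /* short packets are cheaper to just copy */

struct txq {
    unsigned int upend_idx;  /* zero-copy buffers handed out so far */
    unsigned int done_idx;   /* zero-copy completions seen so far */
};

static unsigned int pending_dmas(const struct txq *q)
{
    return q->upend_idx - q->done_idx;
}

/* Old behaviour: stop tx once pending_dmas() hits MAX_PEND.
 * Sketched new behaviour: keep going, but only use zero-copy when the
 * packet is large enough and the pending count is below the limit. */
static bool use_zerocopy(const struct txq *q, size_t pkt_len)
{
    return pkt_len >= GOODCOPY_LEN && pending_dmas(q) < MAX_PEND;
}

int main(void)
{
    struct txq q = { .upend_idx = 140, .done_idx = 10 };  /* 130 pending */
    printf("1500-byte packet: %s\n",
           use_zerocopy(&q, 1500) ? "zero-copy" : "data copy");

    q.done_idx = 100;                                     /* 40 pending */
    printf("1500-byte packet: %s\n",
           use_zerocopy(&q, 1500) ? "zero-copy" : "data copy");
    return 0;
}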
2015 Apr 22
5
Zerocopy VM-to-VM networking using virtio-net
[It may be necessary to remove virtio-dev at lists.oasis-open.org from CC
if you are a non-TC member.]
Hi,
Some modern networking applications bypass the kernel network stack so
that rx/tx rings and DMA buffers can be directly mapped. This is
typical in DPDK applications where virtio-net currently is one of
several NIC choices.
Existing virtio-net implementations are not optimized for VM-to-VM
2015 Sep 01
0
rfc: vhost user enhancements for vm2vm communication
On Tue, Sep 01, 2015 at 09:35:21AM +0200, Jan Kiszka wrote:
> On 2015-08-31 16:11, Michael S. Tsirkin wrote:
> > Hello!
> > During the KVM forum, we discussed supporting virtio on top
> > of ivshmem.
>
> No, not on top of ivshmem. On top of shared memory. Our model is
> different from the simplistic ivshmem.
>
> > I have considered it, and came up with an
2014 Nov 12
3
Put virbr0 in promiscuous
Hi,
I have two virtual machines, VM1 and VM2, and I have added eth0 of my VM
to the 'default' network.
Use case:
I want to monitor all traffic on virbr0 (the 'default' network).
Steps followed:
1. Add VM1 eth0 to virbr0
2. Add VM2 eth1 to virbr0
3. brctl setageing ovsbr0 0 (to put the bridge in promiscuous mode)
Now I am running tcpdump on eth1 of VM2 and trying to ping
2014 Mar 17
0
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 03/13/2014 09:28 AM, Jason Wang wrote:
> On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
>> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>>>> We used to stop the handling of tx when the number of pending DMAs
>>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>>>> of both host and guest. But it was too
2015 Apr 27
0
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On 2015-04-27 at 15:01, Stefan Hajnoczi wrote:
> On Mon, Apr 27, 2015 at 1:55 PM, Jan Kiszka <jan.kiszka at siemens.com> wrote:
>> On 2015-04-27 at 14:35, Jan Kiszka wrote:
>>> On 2015-04-27 at 12:17, Stefan Hajnoczi wrote:
>>>> On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote:
>>>>> On 24 April 2015 at 15:22, Stefan
2015 Apr 22
1
Zerocopy VM-to-VM networking using virtio-net
On Wed, Apr 22, 2015 at 6:46 PM, Cornelia Huck <cornelia.huck at de.ibm.com> wrote:
> On Wed, 22 Apr 2015 18:01:38 +0100
> Stefan Hajnoczi <stefanha at redhat.com> wrote:
>
>> [It may be necessary to remove virtio-dev at lists.oasis-open.org from CC
>> if you are a non-TC member.]
>>
>> Hi,
>> Some modern networking applications bypass the kernel
2014 Feb 26
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On Wed, Feb 26, 2014 at 03:11:21PM +0800, Jason Wang wrote:
> On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
> >On 2014/2/26 13:53, Jason Wang wrote:
> >>On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
> >>>On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
> >>>>We used to stop the handling of tx when the number of pending DMAs
>
2015 Apr 27
1
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On 2015-04-27 at 12:17, Stefan Hajnoczi wrote:
> On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote:
>> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote:
>>>
>>> The motivation for making VM-to-VM fast is that while software
>>> switches on the host are efficient today (thanks to vhost-user), there
2014 Mar 10
0
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
> We used to stop the handling of tx when the number of pending DMAs
> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
> of both host and guest. But it was too aggressive in some cases, since
> any delay or blocking of a single packet may delay or block the guest
> transmission. Consider the following
2014 Feb 26
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 2014/2/26 13:53, Jason Wang wrote:
> On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
>> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
>>> We used to stop the handling of tx when the number of pending DMAs
>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>>> of both host and guest. But it was too aggressive in some