search for: postcopi

Displaying 17 results from an estimated 29 matches for "postcopi".

Did you mean: postcopy
2016 Dec 19
2
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
Hello, On Wed, Nov 02, 2016 at 05:08:35AM -0400, Pan Xinhui wrote: > Support the vcpu_is_preempted() functionality under KVM. This will > enhance lock performance on overcommitted hosts (more runnable vcpus > than physical cpus in the system) as doing busy waits for preempted > vcpus will hurt system performance far worse than early yielding. > > Use one field of struct
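
For context, a minimal sketch of the guest-side check this patch describes: the host flags a vCPU as preempted in the shared steal-time area, and the guest reads that flag instead of busy waiting on a lock whose holder is scheduled out. Kernel-style C; the field name follows the thread, and the exact layout of the merged code may differ.

    struct kvm_steal_time {
        /* ... steal-time accounting fields ... */
        __u8 preempted;   /* set by the host while this vCPU is scheduled out */
    };

    static DEFINE_PER_CPU(struct kvm_steal_time, steal_time);

    static bool kvm_vcpu_is_preempted(int cpu)
    {
        struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

        /* a plain memory read: no hypercall, cheap enough for lock slow paths */
        return !!READ_ONCE(src->preempted);
    }
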
2016 Dec 19
0
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
Hi Andrea, thanks for your reply. :) On 2016/12/19 19:42, Andrea Arcangeli wrote: > Hello, > > On Wed, Nov 02, 2016 at 05:08:35AM -0400, Pan Xinhui wrote: >> Support the vcpu_is_preempted() functionality under KVM. This will >> enhance lock performance on overcommitted hosts (more runnable vcpus >> than physical cpus in the system) as doing busy waits for preempted
2017 Jan 07
2
Regarding Migration Code
Greetings, I was trying to understand the flow of the migration code in libvirt and have a few doubts: 1) libvirt talks to QEMU/KVM guests via the QEMU API. So overall, in order to manage QEMU/KVM guests I can use either libvirt (or tools based on libvirt, like virsh) or the QEMU monitor. Is that so? 2) Since libvirt is hypervisor neutral, the actual migration algorithm (precopy or postcopy) is present in the
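
On that second point: the migration algorithm itself runs inside QEMU, but libvirt drives it, and an application selects precopy versus postcopy purely through flags on libvirt's migration API. A minimal sketch using the libvirt C API (the destination URI is made up for illustration):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int migrate_example(virDomainPtr dom)
    {
        /* Precopy is the default. VIR_MIGRATE_POSTCOPY only permits switching
         * to postcopy later, via virDomainMigrateStartPostCopy(). */
        unsigned long flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER |
                              VIR_MIGRATE_POSTCOPY;

        if (virDomainMigrateToURI(dom, "qemu+ssh://dst.example.com/system",
                                  flags, NULL, 0) < 0) {
            fprintf(stderr, "migration failed\n");
            return -1;
        }
        return 0;
    }
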
2020 Jan 21
2
How to detect completion of a paused VM migration on the destination?
Hi, when a normally running VM is migrated, libvirt sends VIR_DOMAIN_EVENT_RESUMED_MIGRATED event on the destination once the migration completes. I can see that when a paused VM is migrated, libvirt sends VIR_DOMAIN_EVENT_SUSPENDED_PAUSED instead. Since there seems to be nothing migration specific about VIR_DOMAIN_EVENT_SUSPENDED_PAUSED event, my question is: Is it safe to assume on the
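
A sketch of the kind of listener involved, using libvirt's lifecycle event API; the constants are the real libvirt ones, and whether SUSPENDED_PAUSED alone is a reliable completion signal is exactly the open question above:

    #include <libvirt/libvirt.h>

    static int lifecycle_cb(virConnectPtr conn, virDomainPtr dom,
                            int event, int detail, void *opaque)
    {
        if (event == VIR_DOMAIN_EVENT_RESUMED &&
            detail == VIR_DOMAIN_EVENT_RESUMED_MIGRATED) {
            /* running VM: migration on the destination has completed */
        } else if (event == VIR_DOMAIN_EVENT_SUSPENDED &&
                   detail == VIR_DOMAIN_EVENT_SUSPENDED_PAUSED) {
            /* paused VM: fires on migration completion, but not only then */
        }
        return 0;
    }

    static void watch_destination(virConnectPtr conn)
    {
        virConnectDomainEventRegisterAny(conn, NULL /* any domain */,
                                         VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                         VIR_DOMAIN_EVENT_CALLBACK(lifecycle_cb),
                                         NULL, NULL);
    }
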
2013 Sep 24
0
Bug#710650: Bug#718767: transition: ocaml 4.00.1
On 24/09/2013 19:00, Thomas Goirand wrote: > Please don't think this way. > > I work daily on 79 packages to maintain OpenStack in Debian: > > http://qa.debian.org/developer.php?login=openstack-devel at lists.alioth.debian.org If you go that way, I could say that you're blocking my work on 214 packages in Debian:
2013 Sep 24
2
Bug#710650: Bug#718767: transition: ocaml 4.00.1
On 09/24/2013 10:04 PM, Stéphane Glondu wrote: > On 24/09/2013 15:48, Stéphane Glondu wrote: >> If I remove all binary packages of xen-api from testing, the following >> new packages are broken: xcp-guest-templates, nova-xcp-plugins, >> nova-compute-xen. >> >> xcp-guest-templates is built by guest-templates, which seems to be a leaf >> package and could be
2020 Jan 22
0
Re: How to detect completion of a paused VM migration on the destination?
On 1/21/20 3:28 PM, Milan Zamazal wrote: > Hi, > > when a normally running VM is migrated, libvirt sends > VIR_DOMAIN_EVENT_RESUMED_MIGRATED event on the destination once the > migration completes. I can see that when a paused VM is migrated, > libvirt sends VIR_DOMAIN_EVENT_SUSPENDED_PAUSED instead. > > Since there seems to be nothing migration specific about >
2013 Sep 06
5
Bug#710650: Bug#718767: transition: ocaml 4.00.1
On 05/09/2013 23:18, Julien Cristau wrote: > tracker adjusted. xen-api is currently broken though, so you'll need to > get that fixed before starting. I've just fixed a blocking bug (#713349) which was due to the renaming of an OCaml library (type-conv -> type_conv). Now, xen-api FTBFS because of what looks like an API change in some (C) dependency: > [...] > + gcc -g
2016 Mar 03
0
[RFC qemu 0/4] A PV solution for live migration optimization
* Liang Li (liang.z.li at intel.com) wrote: > The current QEMU live migration implementation marks all the > guest's RAM pages as dirtied in the ram bulk stage; all these pages > will be processed, and that takes quite a lot of CPU cycles. > > From the guest's point of view, it doesn't care about the content in free > pages. We can make use of this fact and skip
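
To put rough numbers on that claim: an idle 16 GiB guest has about 4 million 4 KiB pages, so if, say, three quarters of them are free, skipping them shrinks the bulk-stage page scan and the bytes transferred by the same fraction (illustrative figures, not measurements from the thread).
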
2017 Jun 23
2
qemu-kvm-ev-2.6.0-28.el7_3.10.1 now available
Hi, qemu-kvm-ev-2.6.0-28.el7_3.10.1 <https://cbs.centos.org/koji/buildinfo?buildID=17495> has been tagged for release and will soon be available on CentOS mirrors. This release addresses a security issue (CVE-2017-7718) with a security impact rated important. See https://www.redhat.com/archives/rhsa-announce/2017-June/msg00014.html for more details on this update. Here's the
2015 Mar 27
1
Re: Point-in-time snapshots
On 03/27/2015 11:21 AM, Richard W.M. Jones wrote: > > AIUI: > > We'd issue a drive-backup monitor command with an nbd:... target. > > The custom NBD server receives a stream of blocks (as writes). > > On the other side of this, libguestfs is also talking to the custom > NBD server. Libguestfs (which is really a qemu process) is issuing > random reads.
2016 Nov 02
13
[PATCH v7 00/11] implement vcpu preempted check
change from v6: fix typos and remove unnecessary comments. change from v5: split x86/kvm patch into guest/host part. introduce kvm_write_guest_offset_cached. fix some typos. rebase patch onto 4.9.2 change from v4: split x86 kvm vcpu preempted check into two patches. add documentation patch. add x86 vcpu preempted check patch under xen add s390 vcpu preempted check patch change from v3:
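
A sketch of the host-side half that changelog entry refers to: when KVM schedules a vCPU out, it records the fact in the guest's steal-time area through the offset-based cached write the series introduces. The shape follows the thread; details may differ from v7.

    static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
    {
        vcpu->arch.st.steal.preempted = 1;

        /* write just the one field into the cached guest mapping */
        kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
                &vcpu->arch.st.steal.preempted,
                offsetof(struct kvm_steal_time, preempted),
                sizeof(vcpu->arch.st.steal.preempted));
    }
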
2016 Mar 03
2
[RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
Get the free page information through virtio and filter out the free pages in the ram bulk stage. This can significantly reduce the total live migration time as well as network traffic. Signed-off-by: Liang Li <liang.z.li at intel.com> --- migration/ram.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 46 insertions(+), 6 deletions(-) diff --git
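
The mechanics, roughly: the guest hands QEMU a free-page bitmap over virtio, and before the bulk stage walks the migration bitmap, the bits for those pages are cleared so they are neither scanned nor sent. A minimal sketch with illustrative names (the real patch modifies migration/ram.c):

    static void filter_out_guest_free_pages(unsigned long *migration_bitmap,
                                            const unsigned long *free_page_bitmap,
                                            unsigned long nr_pages)
    {
        unsigned long i;

        /* free pages need not be transferred: drop them from the dirty set */
        for (i = 0; i < nr_pages / BITS_PER_LONG; i++)
            migration_bitmap[i] &= ~free_page_bitmap[i];
    }
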
2016 Mar 04
2
[RFC qemu 0/4] A PV solution for live migration optimization
> Subject: Re: [RFC qemu 0/4] A PV solution for live migration optimization > > * Liang Li (liang.z.li at intel.com) wrote: > > The current QEMU live migration implementation marks all the > > guest's RAM pages as dirtied in the ram bulk stage; all these pages > > will be processed, and that takes quite a lot of CPU cycles. > > > > From the guest's
2019 Jun 21
0
Intermittent live migration hang with ceph RBD attached volume
Software in use: Source hypervisor: Qemu: stable-2.12 branch; Libvirt: v3.2-maint branch; OS: CentOS 6. Destination hypervisor: Qemu: stable-2.12 branch; Libvirt: v4.9-maint branch; OS: CentOS 7. I'm experiencing an intermittent live migration hang of a virtual machine (KVM) with a ceph RBD volume attached. At a high level, what I see is that when this does happen, the virtual
2015 Mar 27
0
Re: Point-in-time snapshots
On Fri, Mar 27, 2015 at 10:37:44AM -0600, Eric Blake wrote: > On 03/27/2015 09:35 AM, Richard W.M. Jones wrote: > > But libguestfs doesn't want to do a backup, nor get a copy of the > > whole disk, it just wants to access a scattering of blocks (maybe a > > few hundred) but at a single point in time, in as lightweight a manner > > as possible. > > If you KNOW
2016 Mar 04
2
[RFC qemu 0/4] A PV solution for live migration optimization
> > > > * Liang Li (liang.z.li at intel.com) wrote: > > > The current QEMU live migration implementation marks all the > > > guest's RAM pages as dirtied in the ram bulk stage; all these pages > > > will be processed, and that takes quite a lot of CPU cycles. > > > > > > From the guest's point of view, it doesn't care about the