Displaying 18 results from an estimated 20000 matches similar to: "Live Migration and how exactly it works"
2014 Mar 20
0
Re: Live migration process in src/qemu_driver.c
On 03/20/2014 10:05 AM, Faizul Bari wrote:
> Hello,
>
> I have been trying to track different phases of a live migration process. I
> am using libvirt with qemu-kvm. I am issuing migration commands using
> virsh.
>
> Now, I want to measure the time spent in each phase of live migration,
> e.g., pre-copy and stop-copy. I stumbled upon the file qemu_driver.c. It
> has
2014 Mar 20
1
Re: Live migration process in src/qemu_driver.c
Thanks Eric.
So, I need to look at QEMU. Do you know which files/functions I should look
at?
--
Faiz
On Thu, Mar 20, 2014 at 12:41 PM, Eric Blake <eblake@redhat.com> wrote:
> On 03/20/2014 10:05 AM, Faizul Bari wrote:
> > Hello,
> >
> > I have been trying to track different phases of a live migration
> process. I
> > am using libvirt with qemu-kvm. I am
2014 Mar 20
2
Live migration process in src/qemu_driver.c
Hello,
I have been trying to track different phases of a live migration process. I
am using libvirt with qemu-kvm. I am issuing migration commands using
virsh.
Now, I want to measure the time spent in each phase of live migration,
e.g., pre-copy and stop-copy. I stumbled upon the file qemu_driver.c. It
has functions like
qemudDomainMigratePrepare2
qemudDomainMigratePerform
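Rather than patching qemu_driver.c, the phase timing can be watched from
outside through libvirt's job-info API (the same data virsh domjobinfo
prints). A minimal sketch, assuming a QEMU/KVM source host and the domain
name passed as argv[1]; it tracks pre-copy progress as memRemaining
shrinks, though it cannot isolate the stop-copy instant on its own:

/* A minimal sketch (not the poster's code): poll libvirt's job-info
 * API from the migration source while "virsh migrate --live ..." runs.
 * Build with: gcc watch_migration.c -lvirt */
#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <domain-name>\n", argv[0]);
        return 1;
    }

    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    if (!conn)
        return 1;

    virDomainPtr dom = virDomainLookupByName(conn, argv[1]);
    if (!dom)
        return 1;

    virDomainJobInfo info;

    /* A live migration appears as an "unbounded" job. memRemaining
     * shrinks across pre-copy iterations; the stop-copy phase is the
     * short window just before the job disappears. */
    while (virDomainGetJobInfo(dom, &info) == 0 &&
           info.type == VIR_DOMAIN_JOB_UNBOUNDED) {
        printf("elapsed=%llu ms, memory remaining=%llu KiB\n",
               info.timeElapsed, info.memRemaining / 1024);
        usleep(100 * 1000);  /* sample every 100 ms */
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}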
2014 Dec 16
1
[LLVMdev] 3.5.1 Testing Phase Begins
Yeah, I was just puzzled :D No worries.
Cheers,
Sebastian
2014-12-16 11:09 GMT+01:00 Daniel Sanders <Daniel.Sanders at imgtec.com>:
> It looks like I missed that email completely; I started out replying to
> Nikola and Hans, which is why I was talking about Windows rather than OS X.
>
>
>
> I've just found your first email in my llvmdev folder and I think I see what
2014 Dec 16
1
[LLVMdev] 3.5.1 Testing Phase Begins
It looks like I missed that email completely; I started out replying to Nikola and Hans, which is why I was talking about Windows rather than OS X.
I've just found your first email in my llvmdev folder, and I think I see what happened. My mail rules deliver llvmdev messages to a folder named llvmdev unless I'm directly addressed, in which case they deliver to my inbox. It looks like you
2009 Apr 29
0
Live migration fails depending on on the kernel Version of DomU
Hi,
I'm running a two-node Xen cluster with CentOS 5.3. I'm using
gfs2 for the cluster filesystem and kernel 2.6.18-128.1.6.el5xen #1 SMP
for Dom0 (Xen 3.0.3).
Live migration works fine as long as the DomU guest is running CentOS 5.3
or SLC 4.7 with kernel version 2.6.9-55.EL.cern. But if the SLC 4.7 guest
is running the current 2.6.9-78.0.17.EL.cern kernel, the migration
2016 Mar 15
0
[RFC qemu 0/4] A PV solution for live migration optimization
> On Mon, Mar 14, 2016 at 05:03:34PM +0000, Dr. David Alan Gilbert wrote:
> > * Li, Liang Z (liang.z.li at intel.com) wrote:
> > > >
> > > > Hi,
> > > > I'm just catching back up on this thread; so without reference
> > > > to any particular previous mail in the thread.
> > > >
> > > > 1) How many of the free
2016 Mar 04
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> On Thu, Mar 03, 2016 at 05:46:15PM +0000, Dr. David Alan Gilbert wrote:
> > * Liang Li (liang.z.li at intel.com) wrote:
> > > The current QEMU live migration implementation marks all the
> > > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > > will be processed, and that takes quite a lot of CPU cycles.
> > >
> > >
2016 Mar 04
0
[RFC qemu 0/4] A PV solution for live migration optimization
> > > * Liang Li (liang.z.li at intel.com) wrote:
> > > > The current QEMU live migration implementation marks all the
> > > > guest's RAM pages as dirtied in the ram bulk stage; all these
> > > > pages will be processed, and that takes quite a lot of CPU cycles.
> > > >
> > > > From the guest's point of view, it doesn't
2016 Mar 08
0
[RFC qemu 0/4] A PV solution for live migration optimization
On (Thu) 03 Mar 2016 [18:44:24], Liang Li wrote:
> The current QEMU live migration implementation marks all the
> guest's RAM pages as dirtied in the ram bulk stage; all these pages
> will be processed, and that takes quite a lot of CPU cycles.
>
> From the guest's point of view, it doesn't care about the content of
> free pages. We can make use of this fact and skip
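The idea is straightforward to sketch. Assuming the guest can hand QEMU a
bitmap of its currently free pages (free_page_bitmap, migration_bitmap,
and skip_free_pages below are illustrative names, not the patch's), the
bulk stage would clear those pages from the set to be sent instead of
copying memory nobody will read:

#include <stdint.h>
#include <stdbool.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static bool test_bit(const unsigned long *map, uint64_t nr)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

static void clear_bit(unsigned long *map, uint64_t nr)
{
    map[nr / BITS_PER_LONG] &= ~(1UL << (nr % BITS_PER_LONG));
}

/* Drop guest-reported free pages from the bulk-stage working set;
 * the guest never reads them again, so their content is irrelevant. */
void skip_free_pages(unsigned long *migration_bitmap,
                     const unsigned long *free_page_bitmap,
                     uint64_t nr_pages)
{
    uint64_t i;

    for (i = 0; i < nr_pages; i++) {
        if (test_bit(free_page_bitmap, i))
            clear_bit(migration_bitmap, i);
    }
}

The catch, which the thread keeps returning to, is that a page can stop
being free between the hint and the send, so normal dirty tracking still
has to catch later writes.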
2016 Mar 03
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
On Thu, Mar 03, 2016 at 06:44:24PM +0800, Liang Li wrote:
> The current QEMU live migration implementation marks all the
> guest's RAM pages as dirtied in the ram bulk stage; all these pages
> will be processed, and that takes quite a lot of CPU cycles.
>
> From the guest's point of view, it doesn't care about the content of
> free pages. We can make use of this fact
2016 Mar 15
1
[RFC qemu 0/4] A PV solution for live migration optimization
* Li, Liang Z (liang.z.li at intel.com) wrote:
> > On Mon, Mar 14, 2016 at 05:03:34PM +0000, Dr. David Alan Gilbert wrote:
> > > * Li, Liang Z (liang.z.li at intel.com) wrote:
> > > > >
> > > > > Hi,
> > > > > I'm just catching back up on this thread; so without reference
> > > > > to any particular previous mail in
2016 Mar 04
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
* Roman Kagan (rkagan at virtuozzo.com) wrote:
> On Fri, Mar 04, 2016 at 08:23:09AM +0000, Li, Liang Z wrote:
> > > On Thu, Mar 03, 2016 at 05:46:15PM +0000, Dr. David Alan Gilbert wrote:
> > > > * Liang Li (liang.z.li at intel.com) wrote:
> > > > > The current QEMU live migration implementation marks all the
> > > > > guest's RAM pages
2016 Mar 03
0
[RFC qemu 0/4] A PV solution for live migration optimization
* Liang Li (liang.z.li at intel.com) wrote:
> The current QEMU live migration implementation marks all the
> guest's RAM pages as dirtied in the ram bulk stage; all these pages
> will be processed, and that takes quite a lot of CPU cycles.
>
> From the guest's point of view, it doesn't care about the content of
> free pages. We can make use of this fact and skip
2016 Mar 14
0
[RFC qemu 0/4] A PV solution for live migration optimization
* Li, Liang Z (liang.z.li at intel.com) wrote:
> >
> > Hi,
> > I'm just catching back up on this thread; so without reference to any
> > particular previous mail in the thread.
> >
> > 1) How many of the free pages do we tell the host about?
> > Your main change is telling the host about all the
> > free pages.
>
> Yes, all
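One plausible answer to "how many free pages do we tell the host about",
sketched here rather than taken from the series: report only contiguous
free runs above a size threshold, so the hinting traffic stays cheap
relative to the pages it saves. MIN_RUN, hint_fn, and
report_large_free_runs are all assumed names for illustration:

#include <stdint.h>
#include <stdbool.h>

#define MIN_RUN 64  /* 256 KiB in 4 KiB pages; an assumed threshold */

/* Assumed callback type for delivering one hint to the host. */
typedef void (*hint_fn)(uint64_t start_pfn, uint64_t npages);

void report_large_free_runs(const bool *page_is_free, uint64_t nr_pages,
                            hint_fn hint)
{
    uint64_t run_start = 0, run_len = 0, pfn;

    for (pfn = 0; pfn < nr_pages; pfn++) {
        if (page_is_free[pfn]) {
            if (run_len == 0)
                run_start = pfn;
            run_len++;
        } else {
            if (run_len >= MIN_RUN)
                hint(run_start, run_len);
            run_len = 0;
        }
    }
    if (run_len >= MIN_RUN)  /* flush a run ending at the last pfn */
        hint(run_start, run_len);
}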
2013 Aug 29
0
Re: Is fallback vhost_net to qemu for live migrate available?
Hi Qin,
On Mon, Aug 26, 2013 at 10:32 PM, Qin Chuanyu <qinchuanyu@huawei.com> wrote:
> Hi all
>
> I am participating in a project which tries to port vhost_net to Xen.
Neat!
> By changing the memory copy and notify mechanism, virtio-net with
> vhost_net currently runs on Xen with good performance.
I think the key in doing this would be to implement a proper
ioeventfd
2013 Sep 02
0
Re: Is fallback vhost_net to qemu for live migrate available?
On 08/31/2013 12:45 PM, Qin Chuanyu wrote:
> On 2013/8/30 0:08, Anthony Liguori wrote:
>> Hi Qin,
>
>>> By changing the memory copy and notify mechanism, virtio-net with
>>> vhost_net currently runs on Xen with good performance.
>>
>> I think the key in doing this would be to implement a proper
>> ioeventfd and irqfd interface in
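For background, KVM's ioeventfd and irqfd are thin wrappers around the
kernel's eventfd primitive: a guest kick lands on a file descriptor the
vhost worker waits on, and interrupts travel the other way by the same
mechanism. A minimal userspace sketch of that kick/wakeup pattern
(illustrative only; the Xen port under discussion would need equivalent
plumbing of its own):

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    /* One fd stands in for an ioeventfd shared between the guest's
     * notify path and the vhost worker. */
    int efd = eventfd(0, 0);
    if (efd < 0)
        return 1;

    uint64_t kick = 1;
    write(efd, &kick, sizeof(kick));   /* "guest" side: kick */

    uint64_t count = 0;
    read(efd, &count, sizeof(count));  /* "vhost" side: wake and drain */
    printf("woken with %llu pending kick(s)\n",
           (unsigned long long)count);

    close(efd);
    return 0;
}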
2016 Mar 04
2
[RFC qemu 0/4] A PV solution for live migration optimization
> >
> > * Liang Li (liang.z.li at intel.com) wrote:
> > > The current QEMU live migration implementation marks all the
> > > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > > will be processed, and that takes quite a lot of CPU cycles.
> > >
> > > From the guest's point of view, it doesn't care about the