Displaying 20 results from an estimated 100000 matches similar to: "Help with live Migration"
2016 Jun 01
2
Migration problem - takes 5 minutes to start moving the memory
Hi,
I'm facing a strange issue while migrating from one hypervisor to another: the migration takes forever to start moving the memory.
The VM has no workload whatsoever, just a basic Ubuntu image. The versions on the hypervisors are libvirt 1.2.21 and qemu 1.2.3.
Command used to launch the migration:
virsh migrate --verbose --live --abort-on-error --tunnelled --p2p --auto-converge
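While it sits in that state, the job statistics on the source side show whether any memory is actually moving yet. A diagnostic sketch ("vm1" is a placeholder domain name):

# Poll the migration job on the source hypervisor once per second
watch -n 1 virsh domjobinfo vm1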
2015 May 12
1
Live Migration failure: An error occurred, but the cause is unknown
Hi everyone,
I'm testing the new OpenStack Kilo on Ubuntu 15.04, and the hypervisor is KVM.
I can create instances successfully, but live migration always fails. The error report looks like this (from nova-compute.log on a compute node):
2015-05-12 18:11:12.753 3641 INFO nova.virt.libvirt.driver [-] [instance: cee4965c-b298-4b5b-8669-bee9ac72c720] Migration running for 0 secs, memory 0% remaining; (bytes
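When nova only reports an unknown cause, the underlying libvirt error often shows up only with debug logging enabled. A sketch of turning it on in /etc/libvirt/libvirtd.conf on both compute nodes (standard libvirtd settings; restart libvirtd afterwards):

# /etc/libvirt/libvirtd.conf
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"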
2014 Mar 20
1
Re: Live migration process in src/qemu_driver.c
Thanks Eric.
So, I need to look at QEMU. Do you know which files/functions I should look
at?
--
Faiz
On Thu, Mar 20, 2014 at 12:41 PM, Eric Blake <eblake@redhat.com> wrote:
> On 03/20/2014 10:05 AM, Faizul Bari wrote:
> > Hello,
> >
> > I have been trying to track different phases of a live migration
> process. I
> > am using libvirt with qemu-kvm. I am
2014 Mar 20
0
Re: Live migration process in src/qemu_driver.c
On 03/20/2014 10:05 AM, Faizul Bari wrote:
> Hello,
>
> I have been trying to track different phases of a live migration process. I
> am using libvirt with qemu-kvm. I am issuing migration commands using
> virsh.
>
> Now, I want to measure the time spent in each phase of live migration,
> e.g., pre-copy and stop-copy. I stumbled upon the file qemu_driver.c. It
> has
2014 Mar 20
2
Live migration process in src/qemu_driver.c
Hello,
I have been trying to track different phases of a live migration process. I
am using libvirt with qemu-kvm. I am issuing migration commands using
virsh.
Now, I want to measure the time spent in each phase of live migration,
e.g., pre-copy and stop-copy. I stumbled upon the file qemu_driver.c. It
has functions like
qemudDomainMigratePrepare2
qemudDomainMigratePerform
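Short of instrumenting those functions, the phase boundaries can be approximated from outside via the job statistics. A rough sketch, assuming a domain named "vm1" (resolution is limited by the polling interval; stop it with Ctrl-C once the migration finishes):

# Timestamped snapshots of the migration job: "Memory remaining" shrinks
# during pre-copy, and the last snapshot before completion brackets stop-copy.
while sleep 0.2; do
    date +%s.%N
    virsh domjobinfo vm1
done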
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe,

I just did a mount like this (added the bold):

mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

Results:

root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000
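For what it's worth, the same options can be made persistent in /etc/fstab (a sketch based on the mount line above):

192.168.140.41:/www /var/www glusterfs attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log,_netdev 0 0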
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com>
wrote:
> Hello Joe,
>
> I just did a mount like this (added the bold):
>
> mount -t glusterfs -o *attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache*,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
Hello,

While there are probably other interesting parameters and options in gluster itself, for us the largest difference in this speed test, and also for our website (real-world performance), came from the negative-timeout value during mount. Even a value of just 1 seems to solve so many problems; does anyone knowledgeable know why this is the case?

This would be a better default, I suppose ...

I'm still
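As far as I understand it, negative-timeout controls how long the FUSE client caches the fact that a path does not exist, so workloads that repeatedly look up missing files (PHP include paths, for instance) avoid a network round trip per lookup. The effect can be isolated by toggling only that option; a sketch using the volume from earlier in the thread:

mount -t glusterfs -o negative-timeout=1 192.168.140.41:/www /var/www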
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
Hello Vijay,

What do you mean exactly? What info is missing?

PS: I already found out that for this particular test all the difference is made by negative-timeout=600; when I remove it, it's much, much slower again.

Regards
Jo
-----Original message-----
From:Vijay Bellur <vbellur at redhat.com>
Sent:Tue 11-07-2017 18:16
Subject:Re: [Gluster-users] Gluster native mount is
2011 May 23
1
Live Migration UDP implementation
Hi all,
For my academic project I am analysing the performance of transport protocols in
live migration of virtual machines. I have configured live migration
using TCP, and the live migration succeeded.
Now, for my remaining project work, I need to do the same thing for
a UDP implementation. When I changed the command in virsh (my final command
for TCP is virsh migrate --live ubuntu21
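For reference, the TCP transport is usually pinned down with an explicit migration URI; a sketch with placeholder host names:

virsh migrate --live ubuntu21 qemu+ssh://dest-host/system tcp://dest-host/

Note that qemu's built-in migration transports are tcp, unix, exec and fd, so a UDP path would presumably have to be tunnelled through the exec transport with an external tool rather than selected directly.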
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled:
[root@dell-per730-03 ~]# gluster v info
Volume Name: vmstore
Type: Replicate
Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.50.1:/rhgs/brick1/vmstore
Brick2:
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
My standard response to someone needing filesystem performance for www
traffic is generally, "you're doing it wrong".
https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
That said, you might also look at these mount options:
attribute-timeout, entry-timeout, negative-timeout (set to some large
amount of time), and fopen-keep-cache.
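A sketch of a mount line combining those options (the timeout values and the volume path are illustrative):

mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache server:/www /var/www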
On 07/11/2017 07:48 AM, Jo
2016 Mar 09
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> On Mon, Mar 07, 2016 at 01:40:06PM +0200, Michael S. Tsirkin wrote:
> > On Mon, Mar 07, 2016 at 06:49:19AM +0000, Li, Liang Z wrote:
> > > > > No. And it's exactly what I mean. The ballooned memory is still
> > > > > processed during live migration without skipping. The live
> > > > > migration code is
> > > > in
2016 Mar 09
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
On Wed, Mar 09, 2016 at 05:28:54PM +0300, Roman Kagan wrote:
> On Mon, Mar 07, 2016 at 01:40:06PM +0200, Michael S. Tsirkin wrote:
> > On Mon, Mar 07, 2016 at 06:49:19AM +0000, Li, Liang Z wrote:
> > > > > No. And it's exactly what I mean. The ballooned memory is still
> > > > > processed during live migration without skipping. The live migration code is
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On 07/11/2017 08:14 AM, Jo Goossens wrote:
> RE: [Gluster-users] Gluster native mount is really slow compared to nfs
>
> Hello Joe,
>
> I really appreciate your feedback, but I already tried the opcache
> stuff (to not validate at all). It improves of course then, but not
> completely somehow. Still quite slow.
>
> I did not try the mount options yet, but I will now!
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
Hello,

Here is a speed test with a new setup we just made with gluster 3.10; there are no other differences, except glusterfs versus nfs. The nfs is about 80 times faster:

root@app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
root@app1:~/smallfile-master# ./smallfile_cli.py --top
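For the NFS side of the comparison, the mount was presumably along these lines (a sketch; the NFS version and export path are assumptions):

root@app1:~/smallfile-master# mount -t nfs -o vers=3 192.168.140.41:/www /var/www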
2016 Mar 03
0
[RFC qemu 0/4] A PV solution for live migration optimization
* Liang Li (liang.z.li at intel.com) wrote:
> The current QEMU live migration implementation marks all of the
> guest's RAM pages as dirty in the RAM bulk stage; all these pages
> will be processed, and that takes quite a lot of CPU cycles.
>
> From guest's point of view, it doesn't care about the content in free
> pages. We can make use of this fact and skip
2012 Apr 25
1
Regarding persistence of VM's after live migration (virDomainMigrateToURI() problem)
Hello
I am working with 3 host machines each running xen with shared NFS storage.
I am working on automatic load balancing when one host is over-utilized and
another is under-utilized, measuring the utilization with xentop. I am
facing a problem after migration of a VM. I am setting the flags (1 | 8 | 16)
in order to do a live migration, persist the VM on the destination, and undefine
the domain on the source. After
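Those values correspond to VIR_MIGRATE_LIVE (1), VIR_MIGRATE_PERSIST_DEST (8) and VIR_MIGRATE_UNDEFINE_SOURCE (16). The virsh equivalent would be a sketch like this (domain and host names are placeholders):

virsh migrate --live --persistent --undefinesource vm1 xen+ssh://dest-host/system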
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello,

Here is the volume info as requested by Soumya:

# gluster volume info www
Volume Name: www
Type: Replicate
Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.140.41:/gluster/www
Brick2: 192.168.140.42:/gluster/www
Brick3: 192.168.140.43:/gluster/www
Options Reconfigured:
2019 Nov 30
0
Re: [PATCH nbdkit 2/3] filters: stats: Measure time per operation
On Sat, Nov 30, 2019 at 9:13 AM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> On Sat, Nov 30, 2019 at 02:17:06AM +0200, Nir Soffer wrote:
> > Previously we measured the total time and used it to calculate the rate
> > of different operations. This is incorrect and hides the real
> > throughput. A more useful way is to measure the time we spent in each
> >
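For context, the stats filter is activated with a statsfile parameter; a minimal sketch exercising it against the memory plugin (plugin choice and sizes are arbitrary):

nbdkit --filter=stats memory size=1G statsfile=/tmp/nbdkit-stats.txt \
  --run 'qemu-img convert "$nbd" /tmp/out.img'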