similar to: XCP


2010 May 08
7
Problem with restore/migration with Xen 4.0.0 and Jeremy kernel (2.6.32.12)
Hi all, I am using Xen 4.0.0 on top of Ubuntu Lucid (amd64), with the Jeremy kernel taken from git (xen/stable-2.6.32.x branch, 2.6.32.12 as I write this email). This kernel is also used in my domU. I can save a domU without any problem, but restoring it can take from 2 to 5 minutes from a 1G checkpoint file (the domU has 1GB RAM). There are also errors in /var/log/xen/xend.log:
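For reference, a minimal sketch of the save/restore cycle being described (the domU name "guest" and the checkpoint path are placeholders):

    # save the running domU to a checkpoint file (the fast step in the report above)
    xm save guest /var/lib/xen/save/guest.chk
    # restore it and time the operation (the slow step in the report above)
    time xm restore /var/lib/xen/save/guest.chk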
2007 Apr 10
7
PV domain save/restore break
I encountered a PV domain restore failure on r14770. Are you guys aware of this?
========================================================================
[2007-04-10 09:57:24 4664] DEBUG (balloon:113) Balloon: 754076 KiB free; need 65536; done.
[2007-04-10 09:57:24 4664] DEBUG (XendCheckpoint:220) [xc_restore]: /usr/lib/xen/bin/xc_restore 24 4 1 2 0 0 0
[2007-04-10 09:57:24 4664] INFO
2010 Sep 07
2
remus failure -xen 4.0.1: xc_domain_restore cannot pin page tables
Hardware: Dell PowerEdge R510 (32G RAM, 8-CPU Xeon), 64-bit. Xen 4.0.1 stable 64-bit, 2.6.32.18 dom0 (.config attached) running Ubuntu 10.04. 32-bit 2.6.18.8 domU (.config attached) running Ubuntu 8.04. The domU has 3 tap2 disks on LVM snapshots, 2G mem, and 2 VCPUs. Workload on the domU: ssh + top running, then destroy the domain -- this works. But if I run a heavier workload, say a postgres db (just
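For context, Remus is typically started from dom0 along these lines (a sketch; the domU name "guest", the 100 ms checkpoint interval, and the host "backup-host" are placeholders, and option names varied across 4.0.x builds, so check remus --help):

    # continuously checkpoint "guest" to backup-host every 100 ms
    remus -i 100 guest backup-host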
2009 Feb 02
4
HVM Live Migration Troubles - Xen 3.3.1
I'm having issues with HVM live migration on every Xen version I've tried - 3.0.1, 3.1.x, 3.3.0 and 3.3.1. Migrations fail intermittently with messages like this on the receiving hypervisor:
[2009-02-02 11:35:19 12629] DEBUG (XendCheckpoint:264) [xc_restore]: /usr/lib64/xen/bin/xc_restore 4 4 2 3 1 1 1
[2009-02-02 11:35:19 12629] INFO (XendCheckpoint:403)
2008 May 08
1
Restoring a DomU HVM-Domain is "slow" (Bandwidth 23MB/sec from a ramdisk, xen3.2.1)
Hi, I am doing some tests with restoring HVM WinXP domUs. Even when I restore a saved domU from a ramdisk, the restore process only reaches a bandwidth of about 23MB/sec. Here is an example restoring a 512MB HVM openSUSE 10.3 domU ...
[2008-05-07 22:40:12 3314] DEBUG (XendCheckpoint:218) restore:shadow=0x5, _static_max=0x20000000, _static_min=0x0,
[2008-05-07 22:40:12 3314] DEBUG (balloon:132)
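One way to separate raw ramdisk read speed from restore speed (a sketch; the save-file path is a placeholder):

    # raw read bandwidth of the checkpoint file from the ramdisk
    dd if=/mnt/ramdisk/domU.save of=/dev/null bs=1M
    # wall-clock time of the actual restore, for comparison
    time xm restore /mnt/ramdisk/domU.save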
2011 Sep 20
9
XL: pv guests dont reboot after migration (xen4.1.2-rc2-pre)
A PV guest will not reboot after migration. The guest itself does everything right, including the shutdown, but xl does not recreate the guest; it just shuts it down. This happens with 2.6.39 and 3.0.4 guest kernels; I haven't tried different ones. I also haven't tried different Xen versions. I don't know if this would affect HVM, probably not, since qemu leaves the guest running and does a
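The relevant knob on the guest side is the reboot behaviour in its config file (a sketch, assuming a config at /etc/xen/guest.cfg; note that after a migration the receiving xl may not hold the config it needs to recreate the domain, which matches the symptom above):

    # in /etc/xen/guest.cfg -- ask the toolstack to recreate the domain on reboot:
    #   on_reboot = "restart"
    # recreate the guest from its config on the current host:
    xl create /etc/xen/guest.cfg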
2011 Feb 07
4
XEN live migration: cannot console or ssh to the migrated guest VM (domU)
I am now testing Xen live migration on two physical hosts with Xen 4.0.1 pvops on Ubuntu 10.10 (2.6.35-22). Host A also acts as an NFS server, and Host B acts as an NFS client. When I migrate a guest domain from B to A, the ssh connection experiences a downtime of about 1 minute (the terminal does not react to keyboard input until about 1 minute later). However, the "sudo xm console" does
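For xend-based setups like this one, live migration depends on the relocation server being enabled on the target host; a sketch of the usual settings (the path and values are the stock defaults, adjust to taste):

    # in /etc/xen/xend-config.sxp on the migration target:
    #   (xend-relocation-server yes)
    #   (xend-relocation-port 8002)
    #   (xend-relocation-hosts-allow '')
    # then restart xend so the settings take effect:
    /etc/init.d/xend restart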
2010 Jul 02
10
Do systems have to be IDENTICAL for live migration?
I have several systems running 64-bit SLES11 SP1. I'm trying to live migrate between a couple of them and it's not working, although the same VM will run on each one when started manually. The systems are not identical. Would one expect migration to work between these two systems? See below. Thanks, James. The first system is a ProLiant DL360 G6: Proc 1: 2533 MHz Execution
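A quick way to compare the two hosts' CPU feature sets, which is the usual compatibility constraint for live migration (a sketch; run the first command on each host, then diff the collected files on one of them):

    # capture the CPU feature flags on each host
    grep '^flags' /proc/cpuinfo | sort -u > /tmp/cpuflags.$(hostname)
    # after copying the files to one host, compare them; migration generally
    # requires the target CPU to offer at least the features the guest started with
    diff /tmp/cpuflags.hostA /tmp/cpuflags.hostB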
2011 Jul 06
7
Xen 4.0 - prerequisites for successful live migration?
Hi, I have three Xen hosts running Xen 4.0.2 (openSUSE 11.4 based). I also have one 'NFS' server with an NFS export holding VM images and configuration files. Each host has a dedicated LAN link directly to the NFS server. I have another separate NFS export for VM locking (but I had the same issues before). When I attempt to live migrate a VM, it *looks* like
2009 Jun 21
1
Xen LVM DRBD live migration
Hi guys, I have a few problems with live migration and I need some professional help :) I have 2 Xen servers running CentOS 5.3 and I want to have a high-availability cluster. Now let's begin... xen0:
[root@xen0 ~]# fdisk -l
Disk /dev/sda: 218.2 GB, 218238025728 bytes
255 heads, 63 sectors/track, 26532 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start
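For live migration over DRBD, the backing resource normally has to allow dual-primary mode, since both hosts briefly have the disk open during the handover; a sketch assuming a DRBD resource named r0:

    # in /etc/drbd.conf, the resource's net section needs:
    #   net { allow-two-primaries; }
    # apply the config change, then promote the resource on the migration
    # target before running "xm migrate --live":
    drbdadm adjust r0
    drbdadm primary r0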
2009 Nov 30
5
Live migration and DRBD
Hi folks, I deployed two Dell PowerEdge T300s to test virtualization with kvm+drbd+heartbeat. KVM, DRBD, and heartbeat work properly. However, I have a doubt: when the primary node goes down, the secondary node starts the VM that was originally running on the primary node, so this requires a full stop of the whole system. This is not what we wish here... Is there some way to live migrate a VM from
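On the KVM side, the live-migration counterpart to heartbeat's stop-and-restart failover is usually driven through libvirt; a sketch assuming a VM named "vm1" and a peer node "node2":

    # live-migrate vm1 to node2 over ssh; the disk must be visible on both
    # nodes (e.g. a dual-primary DRBD device) for this to work
    virsh migrate --live vm1 qemu+ssh://node2/system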
2007 Jan 26
5
HVM restore broken?
I got the latest (13601) yesterday evening. Restore doesn't seem to work (at least with the Windows test image that I've been using for testing previously). The VM restores reasonably OK, but it jumps to an invalid address shortly after restoring, giving a D1 blue-screen error (DRIVER_IRQL_NOT_LESS_OR_EQUAL), which turns out to be "page-fault in driver" after I
2006 Aug 10
2
Works...?! All blocked
Hello again, now my xen works. But a strange thing:
Name         ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0      0      2514      4  r-----     76.2
vx1           1       256      1  -b----      6.1
vx2           2       256      1  -b----      5.8
vx3           3       256      1  -b----      5.3
vx4
2011 Dec 14
18
[PATCH 0 of 3] Support for VM generation ID save/restore and migrate
This patch series adds support for preservation of the VM generation ID buffer address in xenstore across save/restore and migrate, and also code to increment the value in all cases except for migration. The first patch modifies creation of the hvmloader key in xenstore and adds creation of a new read/write hvmloader/generation-id-addr key. The second patch changes hvmloader to use the new key (as
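The key described here can be inspected from dom0 with the standard xenstore tools (a sketch; the per-domain path prefix is the usual convention and the domid 4 is a placeholder):

    # read the generation ID buffer address recorded by hvmloader for domid 4
    xenstore-read /local/domain/4/hvmloader/generation-id-addr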
2008 Sep 18
4
Migration stalls with 2.6.26.5 kernel
Hello, I have been struggling through the task of moving our infrastructure over to Xen VMs. We were initially using Ubuntu packages for both dom0 and our domUs, but experienced extreme instability, so we moved to CentOS, which has been much more reliable for dom0. Since we already had a bunch of Ubuntu VMs, we left them using the Ubuntu 2.6.24-19-xen kernel, but this has turned out to be
2011 Dec 16
13
[PATCH 0 of 4] Support for VM generation ID save/restore and migrate
This patch series adds support for preservation of the VM generation ID buffer address in xenstore across save/restore and migrate, and also code to increment the value in all cases except for migration. Patch 1 modifies the guest ro and rw node creation to an open coding style and cleans up some extraneous node creation. Patch 2 modifies creation of the hvmloader key in xenstore and adds
2008 Mar 03
1
Live migration problem with xen 3.2
Hi, live migration was working well for me with 3.1. Recently I moved to 3.2, and now live migration does not work anymore. Here is what I get on the source node:
---- xm migrate --live webdav node2
Error: /usr/lib/xen-3.2-1/bin/xc_save 26 6 0 0 1 failed
Usage: xm migrate <Domain> <Host> [...] ----
And in xend.log (still on the source node):
---- [2008-03-03 10:12:52 15205] INFO
2009 Oct 23
11
soft lockups during live migrate..
Trying to migrate a 64-bit PV guest with 64GB running a medium to heavy load on Xen 3.4.0, I am seeing a lot of soft lockups. The soft lockups cause the cluster FS to reboot dom0. The hardware has 256GB and 32 CPUs. Looking into the hypervisor through kdb, I see one CPU in sh_resync_all() while all other 31 appear to be spinning on the shadow_lock. I vaguely remember seeing some thread on this while
2011 Dec 14
9
[PATCH 0 of 2] Support for VM generation ID save/restore and migrate
This patch series adds support for preservation of the VM generation ID buffer address in xenstore across save/restore and migrate, and also code to increment the value in all cases except for migration. The vast majority of the code is in the second patch. The first patch merely changes the xenstore key name used by hvmloader to store the buffer address.