Vern Burke
2010-Jul-07 18:39 UTC
[Xen-users] XCP 0.5.0 release, upgrade bugs summary 0.1.1 to 0.5.0
Ok, finally got the cloud straightened out and flying again (whew). Here's my summary of upgrade bugs and caveats:

Attempting to list all VMs from xsconsole produces "This feature is not available in pools with more than 100 virtual machines." when there are fewer than 100 VMs, even with a fully homogeneous 0.5.0 pool. Xen Cloud Control System is not affected by this bug.

VMs with 0.1.1 PV drivers show that the PV drivers are up to date from the 0.5.0 pool master. Obviously incorrect. XCCS is affected by this bug; a workaround will be in place for the XCCS 0.5 release.

Upgrading the guest utilities to 0.5.0 by installing the rpm directly with rpm requires using --force to override errors that complain about conflicts with the 0.1.1 guest utilities. This does not occur if you use the install.sh script to install them.

With the pool in a partially upgraded state (0.5 pool master and 0.1.1 slaves), you can't migrate a VM to the pool master to clear a slave for upgrading. Any attempt to migrate off a slave, shut the VMs down, or even shut the slave down produces "Failed: Internal error: Not_found". The fix was to dump the slaves the hard way and restart the VMs on the upgraded pool master.

--
Vern Burke

SwiftWater Telecom
http://www.swiftwatertel.com
ISP/CLEC Engineering Services
Data Center Services
Remote Backup Services

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
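A minimal sketch of the two guest-utilities upgrade paths described above; the rpm filename is an assumption (use whatever package ships on the 0.5.0 guest tools media), and install.sh is the bundled installer script mentioned in the report:

```sh
# Preferred path: the bundled install.sh handles the 0.1.1 -> 0.5.0
# package conflict itself, no --force needed.
./install.sh

# Direct rpm path: the old 0.1.1 guest utilities conflict with the new
# package, so --force is required to override the conflict errors.
# (Package filename below is hypothetical.)
rpm -Uvh --force xe-guest-utilities-0.5.0.rpm
```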
Dave Scott
2010-Jul-07 19:29 UTC
[Xen-users] RE: XCP 0.5.0 release, upgrade bugs summary 0.1.1 to 0.5.0
Hi Vern,

Thanks for the summary. Could you put these into the bugzilla? The worst one sounds like the Not_found one at the end -- I don't suppose you still have the logs lying around? I'd quite like to understand that one in more detail. The rest I can probably work out without logs.

Cheers,
Dave

> -----Original Message-----
> From: Vern Burke [mailto:vburke@skow.net]
> Sent: 07 July 2010 18:39
> To: xen-users@lists.xensource.com
> Cc: Dave Scott
> Subject: XCP 0.5.0 release, upgrade bugs summary 0.1.1 to 0.5.0
>
> Ok, finally got the cloud straightened out and flying again (whew).
> Here's my summary of upgrade bugs and caveats:
>
> Attempting to list all VMs from xsconsole produces "This feature is not
> available in pools with more than 100 virtual machines." when there are
> fewer than 100 VMs, even with a fully homogeneous 0.5.0 pool. Xen Cloud
> Control System is not affected by this bug.
>
> VMs with 0.1.1 PV drivers show that the PV drivers are up to date from
> the 0.5.0 pool master. Obviously incorrect. XCCS is affected by this
> bug; a workaround will be in place for the XCCS 0.5 release.
>
> Upgrading the guest utilities to 0.5.0 by installing the rpm directly
> with rpm requires using --force to override errors that complain about
> conflicts with the 0.1.1 guest utilities. This does not occur if you
> use the install.sh script to install them.
>
> With the pool in a partially upgraded state, 0.5 pool master and 0.1.1
> slaves, you can't migrate a VM to the pool master to clear a slave for
> upgrading. Any attempt to migrate off a slave, shut the VMs down, or
> even shut the slave down produces "Failed: Internal error: Not_found".
> The fix was to dump the slaves the hard way and restart the VMs on the
> upgraded pool master.
>
> --
> Vern Burke
>
> SwiftWater Telecom
> http://www.swiftwatertel.com
> ISP/CLEC Engineering Services
> Data Center Services
> Remote Backup Services
Bug exists at:

XCP 0.1.1
XCP 0.5 RC2, RC3
XCP 0.5

Bug description:

memory-actual of a PV guest machine cannot surpass the value that memory-dynamic-max had at machine start time, even if memory-dynamic-max has been raised after boot.

Steps to reproduce (on any PV guest VM):

xe vm-shutdown vm=test
xe vm-memory-static-range-set min=100MiB max=2GiB vm=test
xe vm-memory-dynamic-range-set min=100MiB max=500MiB vm=test
xe vm-start vm=test

(after booting)

name-label ( RW)        : test
memory-actual ( RO)     : 524288000
memory-static-max ( RW) : 2147483648
memory-dynamic-max ( RW): 524288000  (* let's call it boot_max)
memory-dynamic-min ( RW): 104857600
memory-static-min ( RW) : 104857600

xe vm-memory-dynamic-range-set max=200MiB min=200MiB vm=test

(here all is ok)

power-state ( RO)       : running
memory-actual ( RO)     : 209715200
memory-static-max ( RW) : 2147483648
memory-dynamic-max ( RW): 209715200
memory-dynamic-min ( RW): 209715200
memory-static-min ( RW) : 104857600

And here is the bug, after raising the dynamic range to 800MiB:

name-label ( RW)        : xv-acc1
power-state ( RO)       : running
memory-actual ( RO)     : 524288000
memory-static-max ( RW) : 2147483648
memory-dynamic-max ( RW): 838860800
memory-dynamic-min ( RW): 838860800
memory-static-min ( RW) : 104857600

We changed memory-dynamic-max up to 800MiB, but memory-actual does not go past the boot_max value mentioned above. I checked the situation with dynamic-min and it works as expected: if I move dynamic-min and dynamic-max below the boot-time value of dynamic-min, memory-actual moves below the boot-time dynamic-min.
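For reference, the byte values in the xe output above are exact MiB multiples, and the reported behaviour amounts to memory-actual being clamped at the boot-time dynamic-max. A quick sketch checking the arithmetic; the clamp model is my reading of the report, not confirmed XCP internals:

```python
MiB = 1024 * 1024

# The byte values reported by xe, as powers-of-two multiples:
assert 500 * MiB == 524288000        # boot-time memory-dynamic-max ("boot_max")
assert 2048 * MiB == 2147483648      # memory-static-max (2 GiB)
assert 100 * MiB == 104857600        # memory-static-min / initial dynamic-min
assert 200 * MiB == 209715200        # lowered dynamic range (works)
assert 800 * MiB == 838860800        # raised dynamic range (does not take effect)

# The observed behaviour, modelled as a clamp (an assumption for
# illustration, not taken from XCP source): memory-actual follows
# dynamic-max but never exceeds its boot-time value.
def observed_actual(requested_dynamic_max, boot_max):
    return min(requested_dynamic_max, boot_max)

boot_max = 500 * MiB
print(observed_actual(200 * MiB, boot_max))  # 209715200 - lowering works
print(observed_actual(800 * MiB, boot_max))  # 524288000 - raising is capped
```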