Hi,

Are there any ACPI-related changes for HVM? My HVM Windows 2008 guest has been running stably on versions before 3.4.0. I just upgraded Xen to 3.4.0 and hit the following, which crashes my HVM Windows 2008 guest :( The lockup occurs very quickly after the HVM guest boots.

domid: 2
qemu: the number of cpus is 6
config qemu network with xen bridge for tap2.0 eth0
config qemu network with xen bridge for tap2.1 eth1
Watching /local/domain/0/device-model/2/logdirty/next-active
Watching /local/domain/0/device-model/2/command
qemu_map_cache_init nr_buckets = 4000 size 327680
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = 27d0ed0f-7130-2aa1-13df-03b3b9b4e5b3
Time offset set 0
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/2/xen_extended_power_mgmt): read error
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
  [the "I/O request not ready" line above appears six times in a row]
cirrus vga map change while on lfb mode
mapping vram to f0000000 - f0400000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
gpe_sts_write: addr=0x1f68, val=0x0.
gpe_sts_write: addr=0x1f69, val=0x0.
gpe_sts_write: addr=0x1f6a, val=0x0.
gpe_sts_write: addr=0x1f6b, val=0x0.
gpe_en_write: addr=0x1f6c, val=0x0.
gpe_en_write: addr=0x1f6d, val=0x0.
gpe_en_write: addr=0x1f6e, val=0x0.
gpe_en_write: addr=0x1f6f, val=0x0.
gpe_en_write: addr=0x1f6c, val=0x0.
gpe_en_write: addr=0x1f6d, val=0x0.
gpe_en_write: addr=0x1f6e, val=0x0.
gpe_en_write: addr=0x1f6f, val=0x0.
gpe_sts_write: addr=0x1f68, val=0x0.
gpe_sts_write: addr=0x1f69, val=0x0.
gpe_sts_write: addr=0x1f6a, val=0x0.
gpe_sts_write: addr=0x1f6b, val=0x0.
gpe_en_write: addr=0x1f6c, val=0x8.
gpe_en_write: addr=0x1f6d, val=0x0.
gpe_en_write: addr=0x1f6e, val=0x0.
gpe_en_write: addr=0x1f6f, val=0x0.
ACPI PCI hotplug: read addr=0x10c2, val=0x0.
ACPI PCI hotplug: read addr=0x10c3, val=0x0.
ACPI PCI hotplug: read addr=0x10c4, val=0x0.
ACPI PCI hotplug: read addr=0x10c5, val=0x0.
ACPI PCI hotplug: read addr=0x10c6, val=0x0.
ACPI PCI hotplug: read addr=0x10c7, val=0x0.
ACPI PCI hotplug: read addr=0x10c8, val=0x0.
ACPI PCI hotplug: read addr=0x10c9, val=0x0.
ACPI PCI hotplug: read addr=0x10ca, val=0x0.
ACPI PCI hotplug: read addr=0x10cb, val=0x0.
ACPI PCI hotplug: read addr=0x10cc, val=0x0.
ACPI PCI hotplug: read addr=0x10cd, val=0x0.
ACPI PCI hotplug: read addr=0x10ce, val=0x0.
ACPI PCI hotplug: read addr=0x10cf, val=0x0.
ACPI PCI hotplug: read addr=0x10d0, val=0x0.
ACPI PCI hotplug: read addr=0x10d1, val=0x0.
ACPI PCI hotplug: read addr=0x10d2, val=0x0.
ACPI PCI hotplug: read addr=0x10d3, val=0x0.
ACPI PCI hotplug: read addr=0x10d4, val=0x0.
ACPI PCI hotplug: read addr=0x10d5, val=0x0.
ACPI PCI hotplug: read addr=0x10d6, val=0x0.
ACPI PCI hotplug: read addr=0x10d7, val=0x0.
ACPI PCI hotplug: read addr=0x10d8, val=0x0.
ACPI PCI hotplug: read addr=0x10d9, val=0x0.
ACPI PCI hotplug: read addr=0x10da, val=0x0.
ACPI PCI hotplug: read addr=0x10db, val=0x0.
ACPI PCI hotplug: read addr=0x10dc, val=0x0.
ACPI PCI hotplug: read addr=0x10dd, val=0x0.
ACPI PCI hotplug: read addr=0x10de, val=0x0.
ACPI PCI hotplug: read addr=0x10df, val=0x0.
ACPI PCI hotplug: read addr=0x10e0, val=0x0.
ACPI PCI hotplug: read addr=0x10e1, val=0x0.
  [the block of 32 "ACPI PCI hotplug" reads above, addresses 0x10c2 through
   0x10e1 with val=0x0, then repeats five more times back to back]
gpe_en_write: addr=0x1f6c, val=0x8.
gpe_en_write: addr=0x1f6d, val=0x0.
gpe_en_write: addr=0x1f6e, val=0x0.
gpe_en_write: addr=0x1f6f, val=0x0.

A snapshot of the HVM Windows 2008 guest's vncviewer output is attached. I am going to revert Xen to the previous stable version until this bug is fixed. Hopefully it can be fixed soon.

Thanks.

Kindest regards,
Giam Teck Choon
On 23/05/2009 10:42, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
> Are there any ACPI-related changes for HVM? My HVM Windows 2008 guest has
> been running stably on versions before 3.4.0. I just upgraded Xen to 3.4.0
> and hit the following, which crashes my HVM Windows 2008 guest :( The
> lockup occurs very quickly after the HVM guest boots.

Something timing/timer related must have changed in 3.4 to make this more likely. However, it is not impossible that you could see that bluescreen on 3.3 too! And there is a workaround on 3.4 which does not exist on 3.3 -- add viridian=1 to your domain config file. This will tell Windows it is running on a hypervisor and thus to relax its timer checks.

Please give this a go rather than reverting.

 -- Keir
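(For reference, a minimal sketch of an xm-style HVM config with the viridian option set. The guest name, memory, disk path and other values below are illustrative placeholders, not taken from the configuration discussed in this thread:)

    kernel       = "/usr/lib/xen/boot/hvmloader"
    builder      = "hvm"
    device_model = "/usr/lib/xen/bin/qemu-dm"
    name         = "win2008-example"   # placeholder guest name
    memory       = 2048
    vcpus        = 4
    viridian     = 1   # expose Viridian (Hyper-V) enlightenments so Windows relaxes its timer checks
    disk         = [ "phy:/dev/vg0/win2008,hda,w" ]   # placeholder disk
    vnc          = 1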
Keir Fraser wrote:
> [...] there is a workaround on 3.4 which does not exist on 3.3 -- add
> viridian=1 to your domain config file. This will tell Windows it is running
> on a hypervisor and thus to relax its timer checks.

Are the GPLPV drivers working with viridian yet? I remember seeing problems reported by Andrew Lyon, and James' answer was to not use the viridian option with GPLPV...

Best regards,
Christian
> Something timing/timer related must have changed in 3.4 to make this more
> likely. However, it is not impossible that you could see that bluescreen on
> 3.3 too! And there is a workaround on 3.4 which does not exist on 3.3 -- add
> viridian=1 to your domain config file. This will tell Windows it is running
> on a hypervisor and thus to relax its timer checks.

I actually tried that before reverting, when I initially hit this lockup issue. I then searched the mailing list and ended up with this:
http://lists.xensource.com/archives/html/xen-devel/2009-04/msg01050.html

I tried with viridian=1, with viridian=0, and with the option left out of my HVM config file entirely. Besides the lockup issue, the problem I am facing is very poor performance when using HVM on 3.4.0.

> Please give this a go rather than reverting.

I have already reverted to xen 3.3.2 changeset 18591 with linux-2.6.18-xen.hg changeset 797. Since this is a production server, I cannot test the setting until late at night (after midnight), when everyone else is heading to sleep except me :p I will give xen 3.4.0 changeset 19607 with linux-2.6.18-xen.hg changeset 876 another go later, after midnight, and then report back the result.

The blue screen I encountered on xen 3.4.0 is easily triggered by running Windows Backup in the HVM Windows 2008 guest: within minutes it hits the blue screen, whereas on xen 3.3.2 I have never encountered it, even now as a Windows backup runs (and completes) in the HVM guest. The HVM Windows 2008 guest has been rock solid on xen 3.3.2 for months, since the last upgrade from xen 3.3.1 to 3.3.2.

Thanks.

Kindest regards,
Giam Teck Choon
On 23/05/2009 12:09, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
>> Something timing/timer related must have changed in 3.4 to make this more
>> likely. However, it is not impossible that you could see that bluescreen on
>> 3.3 too! And there is a workaround on 3.4 which does not exist on 3.3 -- add
>> viridian=1 to your domain config file. This will tell Windows it is running
>> on a hypervisor and thus to relax its timer checks.
>
> I actually tried that before reverting, when I initially hit this
> lockup issue. I then searched the mailing list and ended up with this:
> http://lists.xensource.com/archives/html/xen-devel/2009-04/msg01050.html

Tim Deegan posted a patch at the end of that thread. Could you try it? I didn't apply it for 3.4 since the patch didn't get any responses in that email thread.

 -- Keir
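(If it helps, the usual way to try such a patch against a 3.4 source tree is roughly the following; the patch file name is a placeholder and the -p1 strip level assumes an hg-style diff:)

    cd xen-3.4.0
    patch -p1 --dry-run < hvm-timer.patch   # first check that it applies cleanly
    patch -p1 < hvm-timer.patch
    make xen tools
    make install-xen install-tools          # then reboot into the rebuilt hypervisor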
On 23/05/2009 12:09, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
> I tried with viridian=1, with viridian=0, and with the option left out of
> my HVM config file entirely. Besides the lockup issue, the problem I am
> facing is very poor performance when using HVM on 3.4.0.

How slow is 'slow'? Is that across more than one type of guest OS? Do you run the GPLPV drivers?

 -- Keir
On Sat, May 23, 2009 at 7:17 PM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 23/05/2009 12:09, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
>
>> I tried with viridian=1, with viridian=0, and with the option left out of
>> my HVM config file entirely. Besides the lockup issue, the problem I am
>> facing is very poor performance when using HVM on 3.4.0.
>
> How slow is 'slow'? Is that across more than one type of guest OS? Do you
> run the GPLPV drivers?

I do not run any GPLPV drivers. The performance difference between 3.3.2 and 3.4.0, in terms of speed when performing tasks over remote desktop in the HVM Windows 2008 guest, is very noticeable. Lag can be more than double just for remote desktop logins, with no change other than the Xen version (3.3.2 versus 3.4.0).

I may have jumped to the wrong conclusion too soon, though, and will test again later and report back the result. If you have any test tools for me to run and compare between the two versions, that would be great ;)

Thanks.

Kindest regards,
Giam Teck Choon
>> I actually tried that before reverting, when I initially hit this
>> lockup issue. I then searched the mailing list and ended up with this:
>> http://lists.xensource.com/archives/html/xen-devel/2009-04/msg01050.html
>
> Tim Deegan posted a patch at the end of that thread. Could you try it? I
> didn't apply it for 3.4 since the patch didn't get any responses in that
> email thread.

I intend to try it if my next round of tests on xen 3.4.0 with various HVM config settings still leads to the same issue, in short, if the problem persists.

Thanks.

Kindest regards,
Giam Teck Choon
On Sat, May 23, 2009 at 12:08 PM, Christian Tramnitz <chris.ace@gmx.net> wrote:
> Keir Fraser wrote:
>>
>> [...] there is a workaround on 3.4 which does not exist on 3.3 -- add
>> viridian=1 to your domain config file. This will tell Windows it is
>> running on a hypervisor and thus to relax its timer checks.
>
> Are the GPLPV drivers working with viridian yet? I remember seeing problems
> reported by Andrew Lyon, and James' answer was to not use the viridian
> option with GPLPV...
>
> Best regards,
> Christian

Yes, they are. James made a small change and the drivers now work perfectly with viridian=1. I have it set on all my Vista and 2008 HVMs, as they have at least 4 CPUs assigned to them and would regularly bugcheck 101 without it.

Andy
On Sat, May 23, 2009 at 12:15 PM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 23/05/2009 12:09, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
>
>>> Something timing/timer related must have changed in 3.4 to make this more
>>> likely. However, it is not impossible that you could see that bluescreen on
>>> 3.3 too! And there is a workaround on 3.4 which does not exist on 3.3 -- add
>>> viridian=1 to your domain config file. This will tell Windows it is running
>>> on a hypervisor and thus to relax its timer checks.
>>
>> I actually tried that before reverting, when I initially hit this
>> lockup issue. I then searched the mailing list and ended up with this:
>> http://lists.xensource.com/archives/html/xen-devel/2009-04/msg01050.html
>
> Tim Deegan posted a patch at the end of that thread. Could you try it? I
> didn't apply it for 3.4 since the patch didn't get any responses in that
> email thread.
>
> -- Keir

I've tested the patch and it fixes the problem of Windows 7 locking up on boot. I've also replied in the original thread; sorry I didn't test it at the time...

Andy
On 23/05/2009 12:24, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
>> How slow is 'slow'? Is that across more than one type of guest OS? Do you
>> run the GPLPV drivers?
>
> I do not run any GPLPV drivers. The performance difference between 3.3.2
> and 3.4.0, in terms of speed when performing tasks over remote desktop in
> the HVM Windows 2008 guest, is very noticeable. Lag can be more than
> double just for remote desktop logins, with no change other than the Xen
> version (3.3.2 versus 3.4.0).
>
> I may have jumped to the wrong conclusion too soon, though, and will test
> again later and report back the result. If you have any test tools for me
> to run and compare between the two versions, that would be great ;)

Could be a number of things. Changes in qemu are quite a possibility, but that's not certain. By remote desktop, do you mean you use RDP in the Windows guest rather than the qemu vncserver?

 -- Keir
On Sat, May 23, 2009 at 9:23 PM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 23/05/2009 12:24, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
>
>>> How slow is 'slow'? Is that across more than one type of guest OS? Do you
>>> run the GPLPV drivers?
>>
>> I do not run any GPLPV drivers. The performance difference between 3.3.2
>> and 3.4.0, in terms of speed when performing tasks over remote desktop in
>> the HVM Windows 2008 guest, is very noticeable. Lag can be more than
>> double just for remote desktop logins, with no change other than the Xen
>> version (3.3.2 versus 3.4.0).
>
> Could be a number of things. Changes in qemu are quite a possibility, but
> that's not certain. By remote desktop, do you mean you use RDP in the
> Windows guest rather than the qemu vncserver?

Yes, I am using RDP, mainly from various OSes on laptops and PCs (FreeBSD, Linux, and Windows XP/Vista). Unless there are problems, I don't use VNC, other than to grab the snapshot.

From Andrew's reply it looks like the patch does fix the issue, so do you want me to apply the patch, test, and report back, or are you going to apply it to the 3.4 tree?

Thanks.

Kindest regards,
Giam Teck Choon

P.S. Reposting to the list; apologies to Keir for emailing you directly in my previous reply.
> No need for you to test it if that's your only reason for switching back to
> 3.4.

OK, noted ;) I will apply that patch and test it myself if the xen 3.4.0 lockup problem with the HVM Windows 2008 guest persists after I have tried various config settings, as I would like all my Xen servers running the same version/release to keep maintenance simple (I compile my own RPMs and push updates through yum repos).

Thanks for all the prompt replies and support :)

Kindest regards,
Giam Teck Choon
On 23/05/2009 15:48, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
>> No need for you to test it if that's your only reason for switching back to
>> 3.4.
>
> OK, noted ;) I will apply that patch and test it myself if the xen 3.4.0
> lockup problem with the HVM Windows 2008 guest persists after I have tried
> various config settings, as I would like all my Xen servers running the
> same version/release to keep maintenance simple (I compile my own RPMs and
> push updates through yum repos).
>
> Thanks for all the prompt replies and support :)

Oh, by the way, it would be interesting to see whether your Windows slowness is affected by putting hap=0 in the domain config file. Are you running on recent Intel processors which support EPT, or on AMD processors supporting NPT? The former can be checked via xm dmesg: look for EPT in the output. If not, then hap=0 will actually have no effect.

 -- Keir
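(A quick sketch of that check and the corresponding config change; nothing below is output from the machine discussed in this thread:)

    xm dmesg | grep -i ept    # any EPT mention means hardware-assisted paging is available

and, in the domain config file, to force shadow paging instead of EPT/NPT:

    hap = 0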
> Oh, by the way, it would be interesting to see whether your Windows slowness
> is affected by putting hap=0 in the domain config file. Are you running on
> recent Intel processors which support EPT, or on AMD processors supporting
> NPT? The former can be checked via xm dmesg: look for EPT in the output.
> If not, then hap=0 will actually have no effect.

OK, found the problem... Setting vcpus higher than the number of CPUs available causes the lockup for HVM Windows Vista/2008 guests (maybe others too), and it even crashed my dom0 once (a hard reboot was needed)! I forgot about it because most of my servers have 4 VCPUs :p This does not happen on Xen versions before 3.4.0.

Thanks a lot to all of you who replied, especially Keir, whose hint about processor types pointed me in the right direction ;) My careless oversight of the vcpus value caused me quite a headache...

Kindest regards,
Giam Teck Choon
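(For anyone else hitting this, a simple sanity check; the vcpus value below is only an example:)

    xm info | grep nr_cpus    # number of physical CPUs visible to Xen

    # keep the guest at or below that number in its config file:
    vcpus = 4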
On 24/05/2009 21:16, "Teck Choon Giam" <giamteckchoon@gmail.com> wrote:
>> Oh, by the way, it would be interesting to see whether your Windows slowness
>> is affected by putting hap=0 in the domain config file. Are you running on
>> recent Intel processors which support EPT, or on AMD processors supporting
>> NPT? The former can be checked via xm dmesg: look for EPT in the output.
>> If not, then hap=0 will actually have no effect.
>
> OK, found the problem... Setting vcpus higher than the number of CPUs
> available causes the lockup for HVM Windows Vista/2008 guests (maybe others
> too), and it even crashed my dom0 once (a hard reboot was needed)! I forgot
> about it because most of my servers have 4 VCPUs :p This does not happen on
> Xen versions before 3.4.0.
>
> Thanks a lot to all of you who replied, especially Keir, whose hint about
> processor types pointed me in the right direction ;) My careless oversight
> of the vcpus value caused me quite a headache...

You mean you set the number of vcpus greater than the number of cpus in your system? That will indeed make the guest run slowly, but it shouldn't lock up dom0!

 -- Keir
> You mean you set the number of vcpus greater than the number of cpus in your
> system? That will indeed make the guest run slowly, but it shouldn't lock up
> dom0!

Yes, it does lock up dom0, and a cold reboot is needed. It doesn't happen on Xen version 3.3.2 and below...

Thanks.

Kindest regards,
Giam Teck Choon
Teck Choon Giam writes ("Re: [Xen-devel] Xen 3.4.0 - Windows 2008 Lockup"):
> Yes, it does lock up dom0, and a cold reboot is needed. It doesn't
> happen on Xen version 3.3.2 and below...

Do you have a serial console on this machine? We have been seeing some hard-to-reproduce soft dom0/xen lockups, but IME the dom0 can be kicked back into life by using the Xen '0' dom0 register dump debug key.

Ian.
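(A sketch of how that key is normally sent, assuming the default serial console setup, where pressing Ctrl-A three times toggles input between dom0 and the Xen console:)

    # On the serial console: press Ctrl-A three times to switch input to Xen,
    # press '0' to dump dom0 vcpu registers, then Ctrl-A three times to switch back.
    #
    # If dom0 is still partly responsive, the same key can be injected from dom0:
    xm debug-keys 0
    xm dmesg | tail -n 50    # the register dump appears in the Xen console log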