Ke, Liping
2008-May-20 06:59 UTC
[Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, all

According to the feedback over the past few days, we revised and resent the HVM virtual S3 patch. The changes include:
1) Merged part of the original S3 suspend and resume paths; the domain is paused on s3 suspend and unpaused on s3 resume.
2) Added an "xm trigger <domid> s3resume" interface for triggering the s3_resume operation on the suspended domain.
3) Added a flag to mark an s3-suspended domain.
4) Made an s3-suspended domain able to be saved/restored.

We tested the following four patches, based on cs17655, in these environments:

HVM guest FC8-32e, X-window mode, VT-d enabled
HVM guest FC8-32e, X-window mode, no VT-d, with PV driver vif
HVM guest FC6-32p, text mode, VT-d enabled
HVM guest FC6-32p, text mode, no VT-d, with PV driver vif

We also tested the s3_suspend -> save -> restore -> s3_resume operation sequence for the above four scenarios. All work.
Windows HVM guests were not tested, since the VGA drivers we currently have in QEMU do not support this.

Thanks & Regards,
Criping


[PATCH 0/4] HVM Virtual S3

This set of patches is our prototype for HVM virtual ACPI S3 support:
 - patch 1/4: Xen interface for HVM S3
 - patch 2/4: QEMU interface for HVM S3
 - patch 3/4: rombios interface for HVM S3
 - patch 4/4: xend interface for HVM S3

The main idea is:
 - emulate the ACPI PM1A control register in QEMU to capture the guest's S3 request
 - when QEMU captures the guest's S3 request, it makes a hypercall to trap into Xen
 - the HVM suspend operation now includes these steps:
   1. reset all vcpus and timers
   2. prepare the resume by setting the HVM vcpu EIP to 0xfff0 and CS base to 0xf0000, and setting the other related registers/MSRs to the correct real-mode values/attributes, so that on resume the vcpu starts directly from the rombios POST entry code in real mode
   3. the rombios POST code starts the S3 resume by jumping to the wakeup vector set by the guest OS
   4. pause the domain
 - on resume, "xm trigger <domid> s3resume" makes a hypercall to trap into Xen

How to use it:
 - apply this patch to changeset 17655:2ada81810ddb
 - create and boot an HVM domain
 - in the HVM guest, enter the S3 state
   * for Linux, "echo mem > /sys/power/state"
   * for Windows, use Standby
 - to resume the HVM domain, "xm trigger <domid> s3resume"

Kevin/Ke/Liping
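[For illustration only: a minimal sketch, in C, of the PM1A capture step described above. It is not the actual patch 2/4 code; the SLP_EN/SLP_TYP bit positions follow the ACPI specification, while the SLP_TYP value taken to mean S3 and the notification stub are assumptions (the real interface traps into Xen via a hypercall, and the thread later mentions HVM_PARAM_ACPI_S_STATE).]

/* Hypothetical sketch, not the actual patch: decode a PM1a_CNT write and
 * detect a guest S3 (suspend-to-RAM) request. */
#include <stdint.h>
#include <stdio.h>

#define PM1A_SLP_EN      (1u << 13)            /* SLP_EN: "go to sleep" bit */
#define PM1A_SLP_TYP(v)  (((v) >> 10) & 0x7)   /* SLP_TYP: requested sleep type */
#define SLP_TYP_S3       1                      /* assumed value from the virtual platform's \_S3 object */

/* Stand-in for the hypercall the real patch makes to trap into Xen. */
static void notify_xen_s3_request(void)
{
    printf("guest wrote SLP_EN with SLP_TYP=S3: requesting virtual S3 suspend\n");
}

static void pm1a_cnt_write(uint16_t val)
{
    if ( (val & PM1A_SLP_EN) && PM1A_SLP_TYP(val) == SLP_TYP_S3 )
        notify_xen_s3_request();
}

int main(void)
{
    pm1a_cnt_write((SLP_TYP_S3 << 10) | PM1A_SLP_EN);   /* simulate the guest's sleep write */
    return 0;
}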
Keir Fraser
2008-May-20 13:52 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Mostly checked in, but:

(1) I made some big changes to the Xen interface and implementation.
(2) I removed some extra ACPI objects you added to the DSDT which seemed to serve no purpose. Perhaps they were old debugging aids?
(3) I did not take most of the xend changes. I'm not sure exposing this through dominfo and into the VM power state mechanisms is the right thing to do. At least we should have a reason to do it. Also, the code in XendCheckpoint.py around save/restore and S3 looked a bit dodgy to me. I might consider it in a separate, clearly explained patch.

 -- Keir
Ke, Liping
2008-May-20 14:39 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, Keir

Thanks for the refactoring of the Xen interface! It's fine. And I think deleting \_PTS and \_WAK will not affect virtual S3 when "mem" is available in /sys/power/state.

For 3): so currently save/restore cannot be performed on an s3-suspended Machine, since the domain is not running? I will try it tomorrow.

Thanks a lot!
Criping
Keir Fraser
2008-May-20 14:42 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
On 20/5/08 15:39, "Ke, Liping" <liping.ke@intel.com> wrote:

> For 3): so currently save/restore cannot be performed on an s3-suspended
> Machine, since the domain is not running? I will try it tomorrow.

I think you could save/restore an s3-suspended domain, but it would automatically s3-resume on restore! That may not always be what we want?

 -- Keir
Ke, Liping
2008-May-20 14:48 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Ke, Liping wrote:

> For 3): so currently save/restore cannot be performed on an s3-suspended
> Machine, since the domain is not running? I will try it tomorrow.

Not "Machine", I mean "domain" here. Sorry for the typo.
Keir Fraser
2008-May-20 14:50 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
On 20/5/08 15:48, "Ke, Liping" <liping.ke@intel.com> wrote:

>> For 3): so currently save/restore cannot be performed on an s3-suspended
>> Machine, since the domain is not running? I will try it tomorrow.
> Not "Machine", I mean "domain" here. Sorry for the typo.

Actually, it won't work if the domain has PV drivers installed, because then we expect it to run far enough to suspend itself. But if it has no PV drivers then it should not matter that it is s3-suspended. However, it will magically s3-resume when we restore the guest.

 -- Keir
Ke, Liping
2008-May-20 15:19 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
> Actually, it won't work if the domain has PV drivers installed, because
> then we expect it to run far enough to suspend itself. But if it has no
> PV drivers then it should not matter that it is s3-suspended. However,
> it will magically s3-resume when we restore the guest.

OK. Another thing: when debugging, we found that on s3_sleep the guest generates an acpi_ioport write operation which is not cleared, so v->defer_shutdown is set at s3_suspend. That prevents the save process from doing domain_shutdown, so the save process hangs. So we clear the defer_shutdown flag for each vcpu when doing s3_suspend.

After the refactoring, since we no longer clear the flag, I am not sure whether this problem still exists. I will try it tomorrow.

Thanks a lot!
Criping
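[As a rough, self-contained illustration of the workaround described above: clearing the shutdown deferral on every vcpu at s3_suspend so that a later save is not blocked in domain_shutdown. The structures are simplified stand-ins for Xen's; only the defer_shutdown flag itself comes from the discussion.]

/* Simplified model, not actual Xen code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct vcpu {
    bool defer_shutdown;        /* set by the in-flight ACPI ioport write */
    struct vcpu *next;
};

struct domain {
    struct vcpu *vcpu_list;
};

static void s3_suspend_clear_deferral(struct domain *d)
{
    for ( struct vcpu *v = d->vcpu_list; v != NULL; v = v->next )
        v->defer_shutdown = false;   /* the real code could use vcpu_end_shutdown_deferral(v) */
}

int main(void)
{
    struct vcpu v1 = { true, NULL };
    struct vcpu v0 = { true, &v1 };
    struct domain d = { &v0 };

    s3_suspend_clear_deferral(&d);
    printf("defer_shutdown after s3_suspend: v0=%d v1=%d\n",
           v0.defer_shutdown, v1.defer_shutdown);
    return 0;
}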
Keir Fraser
2008-May-20 15:21 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
On 20/5/08 16:19, "Ke, Liping" <liping.ke@intel.com> wrote:

> OK. Another thing: when debugging, we found that on s3_sleep the guest
> generates an acpi_ioport write operation which is not cleared, so
> v->defer_shutdown is set at s3_suspend. That prevents the save process
> from doing domain_shutdown, so the save process hangs. So we clear the
> defer_shutdown flag for each vcpu when doing s3_suspend.
>
> After the refactoring, since we no longer clear the flag, I am not sure
> whether this problem still exists. I will try it tomorrow.

Oh yes, I forgot I removed that. It looked a bit aggressive, so if the problem remains then perhaps we can find a nicer way round it.

 -- Keir
Ke, Liping
2008-May-21 06:31 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, Keir

We did some testing today and found several small points:

1. After cpu_reset, it seems we need to call vcpu_initialise to reconstruct the VMCS; otherwise the domain cannot be resumed and Xen then becomes very slow to respond.
2. Yes, we may need to find a way to clear that io_port write; otherwise save will hang. I added back the clearing (defer_shutdown = 0) as a test, and then it works just fine.
3. In python, it reports:
   a. global name "TRIGGER_S3RESUME" is not defined
   b. global name "HVM_PARAM_ACPI_S_STATE" is not defined

After solving those problems, I found it works fine, even with PV drivers installed. The vif is OK after resuming back.

Will you help us solve these, or should we do it ourselves?

Thanks a lot for your help!
Criping
Keir Fraser
2008-May-21 07:32 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
On 21/5/08 07:31, "Ke, Liping" <liping.ke@intel.com> wrote:

> 1. After cpu_reset, it seems we need to call vcpu_initialise to reconstruct
> the VMCS; otherwise the domain cannot be resumed and Xen then becomes very
> slow to respond.

I don't like that. We should work out what bit of architectural state needs to be reset and do that from hvm_vcpu_reset_state().

> 2. Yes, we may need to find a way to clear that io_port write; otherwise
> save will hang. I added back the clearing (defer_shutdown = 0) as a test,
> and then it works just fine.

I now agree this is the correct approach, but perhaps we should call vcpu_end_shutdown_deferral() in vcpu_reset().

> 3. In python, it reports:
>    a. global name "TRIGGER_S3RESUME" is not defined
>    b. global name "HVM_PARAM_ACPI_S_STATE" is not defined

Silly mistakes. I can help with these a bit. Perhaps I can get myself a test setup today.

 -- Keir
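[For reference, a small self-contained sketch of the real-mode wakeup entry state being discussed: CS base 0xf0000 with IP 0xfff0 lands at physical address 0xffff0, the rombios POST entry described earlier in the thread. The structure names are stand-ins, not Xen's actual hvm_vcpu_reset_state() implementation.]

/* Illustrative model of the S3 wakeup entry state; not Xen code. */
#include <stdint.h>
#include <stdio.h>

struct segment { uint16_t sel; uint32_t base; };
struct regs    { struct segment cs; uint16_t ip; };

static void s3_wakeup_reset_state(struct regs *r)
{
    r->cs.sel  = 0xf000;
    r->cs.base = 0xf0000;   /* real-mode rule: base = selector << 4 */
    r->ip      = 0xfff0;
}

int main(void)
{
    struct regs r;
    s3_wakeup_reset_state(&r);
    printf("wakeup entry at physical %#x (rombios POST entry)\n", r.cs.base + r.ip);
    return 0;
}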
Keir Fraser
2008-May-23 10:48 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
I think all these issues are fixed as of c/s 17713. However, when I s3resume a Linux guest I find it is unresponsive and the VGA display is corrupted by re-printing of BIOS start-of-day messages. Perhaps the BIOS is taking an incorrect path on S3 resume? It would be good if you can look into this now -- I think the hypervisor issues at least are now resolved and this is probably something in the higher-level rombios or ioemu logic.

 -- Keir
Ke, Liping
2008-May-23 23:40 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Sure, I will try it on Monday.

Thanks a lot!
Criping
Ke, Liping
2008-May-24 00:02 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, Keir

I had a rough look, and just one thing: I noticed that in arch_vcpu_reset, if the vcpu is HVM, we don't do vcpu_destroy_pagetables. Maybe that will cause some problems. Since S3 sleep happens in protected mode yet wakeup starts in real mode, the cr3 used in protected mode when sleeping is never freed. That leaves the domain heap page use count != 0, so domain_destroy cannot complete fully and some resources are not freed.

We found the problem when trying to create an HVM guest with a VT-d device assigned, destroy it, and then create it again: the second create fails saying the VT-d device is already assigned. We tried logging this cr3 and doing put_page on it, and that works, so we need vcpu_destroy_pagetables.

I am not sure whether the problem still exists after your restructuring; I will try further next Monday :)

Thanks & Regards,
Criping

void arch_vcpu_reset(struct vcpu *v)
{
-    destroy_gdt(v);
-    vcpu_destroy_pagetables(v);
+    if ( !is_hvm_vcpu(v) )
+    {
+        destroy_gdt(v);
+        vcpu_destroy_pagetables(v);
+    }
+    else
+    {
+        vcpu_end_shutdown_deferral(v);
+    }
}
Keir Fraser
2008-May-24 07:37 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
On 24/5/08 01:02, "Ke, Liping" <liping.ke@intel.com> wrote:

> I had a rough look, and just one thing: I noticed that in arch_vcpu_reset,
> if the vcpu is HVM, we don't do vcpu_destroy_pagetables.

I think destroy_pagetables() is a bad thing to do when paging is on! We re-adjust pagetable state when update_paging_modes() is called in hvm_vcpu_reset_state(). To my knowledge that is the correct way to go about this. Certainly it's the only way that didn't crash for me in some circumstances. :-)

> Maybe that will cause some problems. Since S3 sleep happens in protected
> mode yet wakeup starts in real mode, the cr3 used in protected mode when
> sleeping is never freed. That leaves the domain heap page use count != 0,
> so domain_destroy cannot complete fully and some resources are not freed.

I hope not, and I'm pretty sure not. Do keep an eye out for it though.

> We found the problem when trying to create an HVM guest with a VT-d device
> assigned, destroy it, and then create it again: the second create fails
> saying the VT-d device is already assigned. We tried logging this cr3 and
> doing put_page on it, and that works, so we need vcpu_destroy_pagetables.
>
> I am not sure whether the problem still exists after your restructuring;
> I will try further next Monday :)

Yes please!

 -- Keir
Ke, Liping
2008-May-26 06:11 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, Keir

I did not hit the problem you mentioned. I just tested cs17724 in the environments below:

1) fc6_32p, no PV drivers. It works fine. Also works fine with save/restore combined.
2) fc8_32e, no PV drivers, X-window mode. It works fine with save/restore combined.
3) fc6_32p, vif PV drivers, X-window mode. It works fine with save/restore combined.
4) fc8_32e, VT-d NIC assigned. It works fine. Yet I still find the problem below: domain_destroy does not complete, so the VT-d resources are not freed totally. So when you destroy this domain and recreate it, the create process will fail.

I also verified that even with update_paging_modes, the cr3 missing domain_page error still exists. I remember I tracked this problem before: in update_paging_modes, if we change cr3, it will put_page(old cr3 page) and get_page(new cr3 page), so the reference count keeps its balance. But in this S3 case, since the guest sleeps in protected mode and wakes up starting in real mode, the cr3 used in protected mode is never put?

So this is the only remaining problem I could find :)

Thanks & regards,
Criping
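[To make the reference-counting argument above concrete, here is a toy, self-contained model (illustrative names only, not actual Xen code) of how a CR3 switch keeps get_page/put_page balanced, and why a wakeup that restarts the vcpu in real mode leaves the last protected-mode CR3 reference undropped.]

/* Toy model of the imbalance described above; real Xen uses get_page()/put_page()
 * on the page backing the guest CR3. */
#include <stdio.h>
#include <stddef.h>

struct page { int refcount; };

struct vcpu_state {
    struct page *guest_cr3;     /* page currently referenced as the vcpu's CR3 */
};

static void cr3_switch(struct vcpu_state *v, struct page *next)
{
    next->refcount++;               /* get_page() on the new CR3 page */
    if ( v->guest_cr3 )
        v->guest_cr3->refcount--;   /* put_page() on the old CR3 page: balance kept */
    v->guest_cr3 = next;
}

static void s3_wakeup_in_real_mode(struct vcpu_state *v)
{
    /* The problem described above: the vcpu restarts in real mode, so no
     * cr3_switch() ever drops the last protected-mode reference.  Unless the
     * reset path puts it explicitly, the refcount stays non-zero and domain
     * destruction cannot complete. */
    v->guest_cr3 = NULL;            /* reference leaked */
}

int main(void)
{
    struct page pt = { 0 };
    struct vcpu_state v = { NULL };

    cr3_switch(&v, &pt);            /* guest runs in protected mode */
    s3_wakeup_in_real_mode(&v);     /* virtual S3: wake up at the BIOS entry */
    printf("leaked refcount on old CR3 page: %d\n", pt.refcount);
    return 0;
}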
Keir Fraser
2008-May-26 07:21 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
On 26/5/08 07:11, "Ke, Liping" <liping.ke@intel.com> wrote:

> 4) fc8_32e, VT-d NIC assigned. It works fine. Yet I still find the problem
> below: domain_destroy does not complete, so the VT-d resources are not
> freed totally. So when you destroy this domain and recreate it, the create
> process will fail.
>
> I also verified that even with update_paging_modes, the cr3 missing
> domain_page error still exists. I remember I tracked this problem before:
> in update_paging_modes, if we change cr3, it will put_page(old cr3 page)
> and get_page(new cr3 page), so the reference count keeps its balance. But
> in this S3 case, since the guest sleeps in protected mode and wakes up
> starting in real mode, the cr3 used in protected mode is never put?

Do you see this only in case 4 (64-bit guest with VT-d assignment)?

 -- Keir
Ke, Liping
2008-May-26 07:45 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, Keir

Yes, it was with case 4 that I found the problem, because a guest with a VT-d device assigned first checks whether the PCI device is available. I have not verified it on a 32-bit guest yet.

Regards,
Criping
Keir Fraser
2008-May-26 07:48 UTC
Re: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hopefully it is fixed by c/s 17730. I did not see this with a Fedora kernel, perhaps because I tested a single-processor guest and perhaps that VCPU was dropped into real mode before triggering S3. I'm not sure. Anyhow, S3 definitely didn't work properly, so I suspect there are still one or two bugs which some wider testing coverage would pick out.

 -- Keir
Ke, Liping
2008-May-26 08:20 UTC
RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, Keir

It works fine; I tested with vcpu=1 and vcpu=2 on FC6_32p and FC8_32e. So there are no other problems I can see now :-).

Thanks a lot!!

Regards,
Criping