all,

there are 2 issues with HVM save/restore:

1. restore causes a type mismatch exception, which can be fixed by the
attached patch.

2. a saved SDL guest changes to VNC after restore, because "console_refs"
is not empty and "devices" has a newly added "vfb" device, which is
different from the create process. can anybody help to fix this without
breaking vfb?

thanks a lot.

-- 
best rgds,
edwin
At 15:48 +0800 on 15 Mar (1173973722), Zhai, Edwin wrote:
> all,
> there are 2 issues with HVM save/restore:
> 1. restore causes a type mismatch exception, which can be fixed by the
> attached patch.

I can't reproduce this exception.

>      if is_hvm:
>          hvm  = dominfo.info['memory_static_min']
> -        apic = dominfo.info['platform'].get('apic', 0)
> -        pae  = dominfo.info['platform'].get('pae', 0)
> +        apic = int(dominfo.info['platform'].get('apic', 0))
> +        pae  = int(dominfo.info['platform'].get('pae', 0))

AFAICS, 'apic' and 'pae' are mapped through str() when they're used, so I
don't see that it helps to cast them to integers now.

Is there a reason you don't pass 'hvm' through int() as well?

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@xensource.com>, XenSource UK Limited
Registered office c/o EC2Y 5EB, UK; company number 05334508
On Thu, Mar 15, 2007 at 10:02:44AM +0000, Tim Deegan wrote:
> At 15:48 +0800 on 15 Mar (1173973722), Zhai, Edwin wrote:
> > all,
> > there are 2 issues with HVM save/restore:
> > 1. restore causes a type mismatch exception, which can be fixed by the
> > attached patch.
>
> I can't reproduce this exception.

i can still reproduce it on staging tree r14401. trace log:

===================================================
[2007-03-16 13:32:22 5071] ERROR (XendDomain:1034) Restore failed
Traceback (most recent call last):
  File "/usr/lib64/python/xen/xend/XendDomain.py", line 1029, in domain_restore_fd
    return XendCheckpoint.restore(self, fd, paused=paused)
  File "/usr/lib64/python/xen/xend/XendCheckpoint.py", line 199, in restore
    dominfo.domid, hvm, apic, pae)
  File "/usr/lib64/python2.3/logging/__init__.py", line 893, in info
    apply(self._log, (INFO, msg, args), kwargs)
  File "/usr/lib64/python2.3/logging/__init__.py", line 994, in _log
    self.handle(record)
  File "/usr/lib64/python2.3/logging/__init__.py", line 1004, in handle
    self.callHandlers(record)
  File "/usr/lib64/python2.3/logging/__init__.py", line 1037, in callHandlers
    hdlr.handle(record)
  File "/usr/lib64/python2.3/logging/__init__.py", line 592, in handle
    self.emit(record)
  File "/usr/lib64/python2.3/logging/handlers.py", line 102, in emit
    msg = "%s\n" % self.format(record)
  File "/usr/lib64/python2.3/logging/__init__.py", line 567, in format
    return fmt.format(record)
  File "/usr/lib64/python2.3/logging/__init__.py", line 362, in format
    record.message = record.getMessage()
  File "/usr/lib64/python2.3/logging/__init__.py", line 233, in getMessage
    msg = msg % self.args
TypeError: int argument required

> >      if is_hvm:
> >          hvm  = dominfo.info['memory_static_min']
> > -        apic = dominfo.info['platform'].get('apic', 0)
> > -        pae  = dominfo.info['platform'].get('pae', 0)
> > +        apic = int(dominfo.info['platform'].get('apic', 0))
> > +        pae  = int(dominfo.info['platform'].get('pae', 0))
>
> AFAICS, 'apic' and 'pae' are mapped through str() when they're used, so I
> don't see that it helps to cast them to integers now.
> Is there a reason you don't pass 'hvm' through int() as well?

changing the "apic=%d" to "%s" is also okay. hvm is already an int, as it
comes from dominfo.info['memory_static_min'].

thanks,

-- 
best rgds,
edwin
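[The failure mode can be reproduced outside xend. A minimal standalone
sketch, with an illustrative format string and values rather than the exact
xend log call: logging's getMessage() applies old-style % formatting to the
stored args, so a "%d" placeholder raises TypeError when the platform flags
arrive as strings, and either of the two fixes discussed above avoids it.]

    # Standalone sketch of the TypeError in the trace above; the message
    # and values are illustrative, not the exact xend log call.
    msg = "restore hvm domain: apic=%d, pae=%d"
    args = ("1", "1")   # platform flags come out of the config as strings

    try:
        msg % args      # what logging's getMessage() does with record.args
    except TypeError as e:
        print("same error as the xend trace:", e)

    # Fix 1 (Edwin's patch): cast the arguments to int before logging.
    print(msg % tuple(int(a) for a in args))

    # Fix 2 (suggested above): use "%s", which formats either type.
    print("restore hvm domain: apic=%s, pae=%s" % args)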
At 13:45 +0800 on 16 Mar (1174052714), Zhai, Edwin wrote:
> On Thu, Mar 15, 2007 at 10:02:44AM +0000, Tim Deegan wrote:
> > I can't reproduce this exception.
>
> i can still reproduce it on staging tree r14401.

My apologies - I hadn't got the logging level turned up high enough to
trigger that line (mutter mutter compile-time type-checking mutter).

Tim.

-- 
Tim Deegan <Tim.Deegan@xensource.com>, XenSource UK Limited
Registered office c/o EC2Y 5EB, UK; company number 05334508
> At 13:45 +0800 on 16 Mar (1174052714), Zhai, Edwin wrote:
> > On Thu, Mar 15, 2007 at 10:02:44AM +0000, Tim Deegan wrote:
> > > I can't reproduce this exception.
> >
> > i can still reproduce it on staging tree r14401.
>
> My apologies - I hadn't got the logging level turned up high enough to
> trigger that line (mutter mutter compile-time type-checking mutter).

Aren't we running pylint on the code? It seems to catch the kinds of
errors you'd usually want a compiler to catch.

If it's anything like C lint, there might need to be some filtering of the
output though ;-)

Cheers,
Mark

-- 
Dave: Just a question. What use is a unicycle with no seat? And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!
On Fri, Mar 16, 2007 at 04:11:31PM +0000, Mark Williamson wrote:
> > At 13:45 +0800 on 16 Mar (1174052714), Zhai, Edwin wrote:
> > > On Thu, Mar 15, 2007 at 10:02:44AM +0000, Tim Deegan wrote:
> > > > I can't reproduce this exception.
> > >
> > > i can still reproduce it on staging tree r14401.
> >
> > My apologies - I hadn't got the logging level turned up high enough to
> > trigger that line (mutter mutter compile-time type-checking mutter).
>
> Aren't we running pylint on the code? It seems to catch the kinds of
> errors you'd usually want a compiler to catch.
>
> If it's anything like C lint, there might need to be some filtering of
> the output though ;-)

There is a pylintrc in tools/python, but the output is far too noisy to be
useful as an automated check.  I pick through it every now and again.

Ewan.
latest HVM save/restore breaks again :(

i use the memsize (the number in xmexample.hvm), deduced from
'memory_static_min', to calculate some HVM PFNs on restore. but now
'memory_static_min' becomes 0 since r14425, and memsize is not recorded in
the saved configuration (memory_static/dynamic_min/max...) any more.

do we have any reason to change the guest configuration so frequently?
do you have any other suggestion for getting the memsize from the
configuration?

thanks,

-- 
best rgds,
edwin
On 20/3/07 08:12, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
> latest HVM save/restore breaks again :(
>
> i use the memsize (the number in xmexample.hvm), deduced from
> 'memory_static_min', to calculate some HVM PFNs on restore.

Out of interest: why would you do this? I glanced upon the code you are
referring to in xc_hvm_restore.c yesterday, and it struck me as
particularly gross. All three PFNs (ioreq, bufioreq, xenstore) could be
saved in the store after building the domain and then saved/restored as
part of the Python-saved data. The situation is easier than for a PV guest
because PFNs do not change across save/restore.

The more assumptions about memory layout we bake into xc_hvm_{save,restore}
now, the more we have to unbake when the HVM memory map becomes more
dynamic (balloon driver support, in particular). Making these assumptions
to some extent for now is okay, but we should avoid it where possible.

 -- Keir

> but now 'memory_static_min' becomes 0 since r14425, and memsize is not
> recorded in the saved configuration (memory_static/dynamic_min/max...)
> any more.
>
> do we have any reason to change the guest configuration so frequently?
> do you have any other suggestion for getting the memsize from the
> configuration?
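[A sketch of the plumbing Keir suggests, with an illustrative record layout
and hypothetical function names -- the real xc_hvm_save image format is not
this. The three magic PFNs are written into the save stream once, and read
back verbatim on restore instead of being recomputed from memsize.]

    import io
    import struct

    # Illustrative record layout: three little-endian 64-bit PFNs.
    # Function names and the layout are hypothetical.
    MAGIC_PFN_FMT = "<3Q"

    def save_magic_pfns(fd, ioreq_pfn, bufioreq_pfn, xenstore_pfn):
        # The PFNs were recorded when the domain was built; serialise
        # them instead of recomputing them from the memory size.
        fd.write(struct.pack(MAGIC_PFN_FMT,
                             ioreq_pfn, bufioreq_pfn, xenstore_pfn))

    def restore_magic_pfns(fd):
        # HVM PFNs do not change across save/restore, so the saved
        # values can be used directly in the new domain.
        data = fd.read(struct.calcsize(MAGIC_PFN_FMT))
        return struct.unpack(MAGIC_PFN_FMT, data)

    stream = io.BytesIO()
    save_magic_pfns(stream, 0xfffff, 0xffffe, 0xffffd)  # made-up PFNs
    stream.seek(0)
    print(restore_magic_pfns(stream))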
On Tue, Mar 20, 2007 at 08:29:30AM +0000, Keir Fraser wrote:
> On 20/3/07 08:12, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
> > latest HVM save/restore breaks again :(
> >
> > i use the memsize (the number in xmexample.hvm), deduced from
> > 'memory_static_min', to calculate some HVM PFNs on restore.
>
> Out of interest: why would you do this? I glanced upon the code you are
> referring to in xc_hvm_restore.c yesterday, and it struck me as
> particularly gross. All three PFNs (ioreq, bufioreq, xenstore) could be
> saved in the store after building the domain and then saved/restored as
> part of the Python-saved data. The situation is easier than for a PV
> guest because PFNs

saving all PFNs directly is a good idea. i have this code to keep the
create and restore processes similar.

i'd like to directly save/restore all PFNs in xc_hvm_{save,restore}. is
this what you want?

> do not change across save/restore.
>
> The more assumptions about memory layout we bake into
> xc_hvm_{save,restore} now, the more we have to unbake when the HVM
> memory map becomes more dynamic (balloon driver support, in particular).
> Making these assumptions to some extent for now is okay, but we should
> avoid it where possible.
>
> -- Keir

-- 
best rgds,
edwin
On 20/3/07 08:46, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
> saving all PFNs directly is a good idea. i have this code to keep the
> create and restore processes similar.
> i'd like to directly save/restore all PFNs in xc_hvm_{save,restore}. is
> this what you want?

Actually yes, that would be fine now I think about it. Probably less code
than plumbing it into Python.

 -- Keir
On 20/3/07 08:46, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
>> Out of interest: why would you do this? I glanced upon the code you are
>> referring to in xc_hvm_restore.c yesterday, and it struck me as
>> particularly gross. All three PFNs (ioreq, bufioreq, xenstore) could be
>> saved in the store after building the domain and then saved/restored as
>> part of the Python-saved data. The situation is easier than for a PV
>> guest because PFNs
>
> saving all PFNs directly is a good idea. i have this code to keep the
> create and restore processes similar.
> i'd like to directly save/restore all PFNs in xc_hvm_{save,restore}. is
> this what you want?

Other thoughts on xc_hvm_restore as it is right now, and its use/abuse of
the 'store_mfn' parameter to pass in memory_static_min. I think this can
reasonably be got rid of:

 1. Do the setmaxmem hypercall in Python. There's no reason to be doing it
    in xc_hvm_save().

 2. Instead of preallocating the HVM memory, populate the physmap on
    demand as we do now in xc_linux_restore. I'd do this by having an
    'allocated' bitmap, indexed by guest pfn, where a '1' means that page
    is already populated. Alternatively we might choose to avoid needing
    the bitmap by always doing populate_physmap() whenever we see a pfn,
    and have Xen guarantee that to be a no-op if RAM is already allocated
    at that pfn.

If we go the bitmap route I'd just make it big enough for a 4GB guest up
front (only 128kB required) and then realloc() it to be twice as big
whenever we go off the end of the current bitmap.

 -- Keir
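[A sketch of the bitmap route, in Python for brevity -- the real code would
be C in xc_hvm_restore.c, growing the bitmap with realloc(). The populate
argument is a stand-in callback for the actual populate_physmap operation,
not a real libxc call.]

    PAGE_SHIFT = 12
    INITIAL_PFNS = (4 << 30) >> PAGE_SHIFT   # 4GB guest: 2^20 pfns

    class PopulatedMap:
        def __init__(self):
            # One bit per pfn; 2^20 bits == 128kB, matching the figure
            # quoted above.
            self.bits = bytearray(INITIAL_PFNS // 8)

        def _grow_to(self, pfn):
            # realloc()-style doubling when a pfn falls off the end.
            size = len(self.bits)
            while pfn // 8 >= size:
                size *= 2
            if size > len(self.bits):
                self.bits.extend(bytearray(size - len(self.bits)))

        def ensure_populated(self, pfn, populate):
            self._grow_to(pfn)
            byte, bit = pfn // 8, 1 << (pfn % 8)
            if not self.bits[byte] & bit:
                populate(pfn)              # populate each pfn only once
                self.bits[byte] |= bit

    # Usage: call ensure_populated() for every pfn seen in the image
    # stream; the last pfn below forces the bitmap to double in size.
    m = PopulatedMap()
    for pfn in (0, 1, 0x100000 + 5):
        m.ensure_populated(pfn,
                           lambda p: print("populate_physmap pfn 0x%x" % p))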
At 08:29 +0000 on 20 Mar (1174379370), Keir Fraser wrote:
> On 20/3/07 08:12, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
> > latest HVM save/restore breaks again :(
> >
> > i use the memsize (the number in xmexample.hvm), deduced from
> > 'memory_static_min', to calculate some HVM PFNs on restore.
>
> Out of interest: why would you do this? I glanced upon the code you are
> referring to in xc_hvm_restore.c yesterday, and it struck me as
> particularly gross. All three PFNs (ioreq, bufioreq, xenstore) could be
> saved in the store after building the domain and then saved/restored as
> part of the Python-saved data. The situation is easier than for a PV
> guest because PFNs do not change across save/restore.

It's probably safe just to remove the special handling for those PFNs -- I
expanded it a bit when I was fixing up the case where ioreqs were in flight
during the save, but that shouldn't be necessary (and they will have been
set to the same magic values by the hvm domain builder on restore.)

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@xensource.com>, XenSource UK Limited
Registered office c/o EC2Y 5EB, UK; company number 05334508
On Tue, Mar 20, 2007 at 10:01:48AM +0000, Keir Fraser wrote:
> On 20/3/07 08:46, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:
> >> Out of interest: why would you do this? I glanced upon the code you
> >> are referring to in xc_hvm_restore.c yesterday, and it struck me as
> >> particularly gross. All three PFNs (ioreq, bufioreq, xenstore) could
> >> be saved in the store after building the domain and then
> >> saved/restored as part of the Python-saved data. The situation is
> >> easier than for a PV guest because PFNs
> >
> > saving all PFNs directly is a good idea. i have this code to keep the
> > create and restore processes similar.
> > i'd like to directly save/restore all PFNs in xc_hvm_{save,restore}.
> > is this what you want?
>
> Other thoughts on xc_hvm_restore as it is right now, and its use/abuse
> of the 'store_mfn' parameter to pass in memory_static_min. I think this
> can reasonably be got rid of:
> 1. Do the setmaxmem hypercall in Python. There's no reason to be doing
>    it in xc_hvm_save().

1. xc_linux_save also has setmaxmem.
2. even if we do it in Python, we still need the memsize for setmaxmem.

> 2. Instead of preallocating the HVM memory, populate the physmap on
>    demand as we do now in xc_linux_restore. I'd do this by having an
>    'allocated' bitmap, indexed by guest pfn, where a '1' means that page
>    is already populated. Alternatively we might choose to avoid needing
>    the bitmap by always doing populate_physmap() whenever we see a pfn,
>    and have Xen guarantee that to be a no-op if RAM is already allocated
>    at that pfn.

the current hvm restore just tries to create the memory layout (the same
as at create time) first, then shapes it gradually, so it needs the
memsize when creating the guest. it seems you want another method: save
the memory layout in xc_hvm_save and populate the same one on restore,
right? that's okay for me. BTW, i prefer the bitmap way if we can make it
efficient.

> If we go the bitmap route I'd just make it big enough for a 4GB guest up
> front (only 128kB required) and then realloc() it to be twice as big
> whenever we go off the end of the current bitmap.
>
> -- Keir

-- 
best rgds,
edwin