As you know, HVM save/restore broke recently because the restored config is
missing the guest memsize, which xc_hvm_restore uses to locate some PFNs.
After discussion, we decided to remove the PFN deduction logic from the
restore side by adding a general memory layout, and I have a patch for it.
But then qemu broke, because it also requires the memsize to locate the
shared page. We can't use the previous method, as it would require a lot of
changes in qemu.

The memsize in the xmconfig file is only used at the beginning of create;
it is then lost during running and restore. We have
memory_{dynamic,static}_{max,min} for keeping the memory config, but none
of them helps in this case. I have witnessed the fluctuation of the memsize
config: first as 'memory', then 'memory_static_min', and now it has
disappeared.

Guest memsize is an important parameter, so it should be kept permanently
in a fixed config entry, just like the others (vcpus, ...). Am I right?

-- 
best rgds,
edwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
On 24/3/07 11:37, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:

> But then qemu broke, because it also requires the memsize to locate the
> shared page. We can't use the previous method, as it would require a lot
> of changes in qemu.

Doesn't your new 'general layout' patch put the PFNs of xenstore, ioreq,
and buffered_ioreq in the saved image, and restore them in xc_hvm_restore?
Qemu-dm should obtain the addresses via HVMOP_get_param. You do not need
the memsize parameter.

 -- Keir
On Sat, Mar 24, 2007 at 02:18:44PM +0000, Keir Fraser wrote:
> Doesn't your new 'general layout' patch put the PFNs of xenstore, ioreq,
> and buffered_ioreq in the saved image, and restore them in xc_hvm_restore?

Yes.

> Qemu-dm should obtain the addresses via HVMOP_get_param.
>
> You do not need the memsize parameter.

I don't think so. Besides locating PFNs, memsize is also used in QEMU for
other purposes, such as bitmap allocation, device init, and map_foreign*.
So memsize is a must for qemu init.

See the following code in xc_hvm_build:

    if ( v_end > HVM_BELOW_4G_RAM_END )
        shared_page_nr = (HVM_BELOW_4G_RAM_END >> PAGE_SHIFT) - 1;
    else
        shared_page_nr = (v_end >> PAGE_SHIFT) - 1;

So it's impossible to recover memsize from the saved PFNs when restoring a
big-memory guest.

-- 
best rgds,
edwin
On 26/3/07 04:13, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:

> I don't think so. Besides locating PFNs, memsize is also used in QEMU for
> other purposes, such as bitmap allocation, device init, and map_foreign*.
> So memsize is a must for qemu init.
>
> See the following code in xc_hvm_build:
>     if ( v_end > HVM_BELOW_4G_RAM_END )
>         shared_page_nr = (HVM_BELOW_4G_RAM_END >> PAGE_SHIFT) - 1;
>     else
>         shared_page_nr = (v_end >> PAGE_SHIFT) - 1;
>
> So it's impossible to recover memsize from the saved PFNs when restoring
> a big-memory guest.

It can use the new XENMEM_maximum_gpfn hypercall for bitmap allocation. I'm
not sure what memsize would have to do with device init. The map_foreign*
calls are hidden behind the mapcache, which shouldn't need to know memsize
(although if it's an issue of sizing buckets, I suppose it can use
XENMEM_maximum_gpfn too).

 -- Keir
On Mon, Mar 26, 2007 at 07:31:33PM +0100, Keir Fraser wrote:
> It can use the new XENMEM_maximum_gpfn hypercall for bitmap allocation.

Two concerns:

1. xc_hvm_build uses SCRATCH_PFN (0xFFFFF) to map shared_info, which would
   overwrite the true max_gpfn. So shall we add a check in set_p2m_entry
   for this?

2. If qemu gets the memsize from XENMEM_maximum_gpfn on restore, it's
   better to do the same thing on create, i.e. remove the '-m' qemu
   command line option.

> I'm not sure what memsize would have to do with device init. The
> map_foreign* calls are hidden behind the mapcache, which shouldn't need
> to know memsize (although if it's an issue of sizing buckets, I suppose
> it can use XENMEM_maximum_gpfn too).

-- 
best rgds,
edwin
On 27/3/07 16:42, "Zhai, Edwin" <edwin.zhai@intel.com> wrote:

> Two concerns:
> 1. xc_hvm_build uses SCRATCH_PFN (0xFFFFF) to map shared_info, which
>    would overwrite the true max_gpfn. So shall we add a check in
>    set_p2m_entry for this?

It'll mean the minimum bitmap size is 128kB. Big deal. If we find places
where this *does* matter, I think we should add a better hypercall to
actually indicate which chunks of the physmap space are in use (e.g.,
return a bitmap with one bit per megabyte of pseudophys space -- bit set
if any page in that megabyte chunk is populated or has ever been
populated).

> 2. If qemu gets the memsize from XENMEM_maximum_gpfn on restore, it's
>    better to do the same thing on create, i.e. remove the '-m' qemu
>    command line option.

I fully agree!

 -- Keir