butine@zju.edu.cn
2013-Sep-15 04:56 UTC
about memory migration between NUMA nodes based on HVM
Hello Dario,

When will the new version of memory migration between NUMA nodes based on HVM be released?

Based on your initial version, if I start a PV domain, an error occurs when I call this code:
for ( i = 0; i < minfo.p2m_size; i++ )
{
    /* Only frames marked as pinned page tables need unpinning. */
    if ( (minfo.pfn_type[i] & XEN_DOMCTL_PFINFO_LPINTAB) == 0 )
        continue;

    pin[nr_pins].cmd = MMUEXT_UNPIN_TABLE;
    pin[nr_pins].arg1.mfn = minfo.p2m_table[i];
    nr_pins++;

    /* Issue the hypercall once a full batch has been collected. */
    if ( nr_pins == MAX_PIN_BATCH )
    {
        if ( xc_mmuext_op(xch, pin, nr_pins, domid) < 0 )
        {
            PERROR("Failed to unpin a batch of %d MFNs", nr_pins);
            goto out;
        }
        else
            DBGPRINTF("Unpinned a batch of %d MFNs", nr_pins);
        nr_pins = 0;
    }
}
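The loop is followed by the usual flush of the final, partially filled batch; I am paraphrasing that part from memory, so please treat it as a sketch rather than the exact code:

/* Flush whatever is left in the last partial batch. */
if ( (nr_pins != 0) && (xc_mmuext_op(xch, pin, nr_pins, domid) < 0) )
{
    PERROR("Failed to unpin the final batch of %d MFNs", nr_pins);
    goto out;
}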
Through gdb, I find that minfo.p2m_table[i] can reference invalid memory. Some values of minfo.p2m_table[i] are very large, like 0xffffffffffffffff. When minfo.p2m_size is 135424, this only happens at around indexes 133000 - 134000. I don't know what is going wrong. minfo.p2m_size and minfo.p2m_table[i] are obtained from the function xc_core_arch_map_p2m_rw():
dinfo->p2m_size = nr_gpfns(xch, info->domid);
...
*live_p2m = xc_map_foreign_pages(xch, dom,
                                 rw ? (PROT_READ | PROT_WRITE) : PROT_READ,
                                 p2m_frame_list,
                                 P2M_FL_ENTRIES);
Is this correct?
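As a workaround, I am thinking of skipping the all-ones entries before building the batch, along these lines (just a sketch: treating an all-ones MFN as "unpopulated" is my own assumption, not something I have confirmed in the code):

xen_pfn_t mfn = minfo.p2m_table[i];

/* Skip slots whose MFN is all ones; my assumption is that these are
 * unpopulated p2m entries near the end of the table, not real
 * page-table frames that need unpinning. */
if ( mfn == (xen_pfn_t)-1 )
    continue;

pin[nr_pins].cmd = MMUEXT_UNPIN_TABLE;
pin[nr_pins].arg1.mfn = mfn;
nr_pins++;

My guess is that not every GPFN below the maximum is actually populated, so the corresponding p2m entries are invalid, but I am not sure whether that explains the 0xffffffffffffffff values.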
Thanks
Regards,
Butine Huang
2013-09-15
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel