Hi all,

these patches introduce NUMA support for HVM guests. A new config option "guestnodes" specifies the number of NUMA nodes the guest should see (an example config fragment is appended at the end of this mail). Memory will be allocated from different host nodes, CPU affinity will be set accordingly, and the guest will learn about the topology via an SRAT ACPI table.

This allows guests that are larger than one host node, in terms of either the number of VCPUs or the total amount of memory. Without it, guests on AMD Opteron platforms may end up with non-optimal memory accesses (from remote nodes), which effectively limits the number of VCPUs to the number of cores in one socket (2 or 4). Another issue solved by this is "fragmented" memory, where the total amount of free memory would be enough for a guest, but cannot be allocated from a single node. Overcommitting the number of nodes is currently not possible, so you need a NUMA machine to use this.

I have seen kernbench performance penalties of 7-12% on Opterons for guests with remote memory (numa=off or explicitly wrongly pinned).

Explicitly pinning a guest with cpus="x-y", omitting the guestnodes option, or specifying guestnodes=0 turns off the new code and reverts to the current behavior (automatic placement).

It would be nice if this could still find its way into 3.3.

Please apply the following four patches in order; the tree should compile and run after each patch. More details are in the respective patch mails.

Signed-off-by: Andre Przywara <andre.przywara@amd.com>

Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
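
For illustration, a guest config fragment using the new option might look like the sketch below. Only "guestnodes" comes from this series; the other option names are standard xm HVM settings, and the concrete values (hvmloader path, memory size, VCPU count) are placeholders, not taken from the patches.

    # HVM guest spanning two host NUMA nodes (values are examples only)
    kernel     = "/usr/lib/xen/boot/hvmloader"
    builder    = "hvm"
    memory     = 8192      # MB, spread across the host nodes requested below
    vcpus      = 8
    guestnodes = 2         # new option: expose two NUMA nodes to the guest via SRAT
    # cpus = "0-3"         # explicit pinning like this would turn the new
                           # NUMA placement off again

Setting guestnodes=0, or leaving the option out entirely, keeps today's automatic placement behavior.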