Dante Cinco
2010-Oct-27 20:58 UTC
[Xen-devel] Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed that the NUMA info as shown by the Xen 'u' debug-key is different. More specifically, the CPU to node mapping is alternating for 4.0.2 and grouped sequentially for 4.1. This difference affects the allocation (wrt node/socket) of pinned VCPUs to the guest domain. For example, if I'm allocating physical CPUs 0 - 3 to my guest domain, in 4.0.2 the 4 VCPUs will be split between the 2 nodes but in 4.1 the 4 VCPUs will all be in node 0.

CPU-to-node mapping for Xen 4.0.2-rc1-pre (xen_changeset: Fri Sep 17 17:06:57 2010 +0100 21350:6e0ffcd2d9e0):

(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to DOM0)
(XEN) 'u' pressed -> dumping numa info (now-0x4B:40CB2A11)
(XEN) idx0 -> NODE0 start->0 size->1703936
(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
(XEN) idx1 -> NODE1 start->1703936 size->1572863
(XEN) phys_to_nid(00000001a0001000) -> 1 should be 1
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE1
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE1
(XEN) CPU4 -> NODE0
(XEN) CPU5 -> NODE1
(XEN) CPU6 -> NODE0
(XEN) CPU7 -> NODE1
(XEN) CPU8 -> NODE0
(XEN) CPU9 -> NODE1
(XEN) CPU10 -> NODE0
(XEN) CPU11 -> NODE1
(XEN) CPU12 -> NODE0
(XEN) CPU13 -> NODE1
(XEN) CPU14 -> NODE0
(XEN) CPU15 -> NODE1

CPU-to-node mapping for Xen 4.1-unstable (xen_changeset: Mon Oct 18 17:40:08 2010 +0100 22262:c0a39dbc624d):

(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to DOM0)
(XEN) 'u' pressed -> dumping numa info (now-0x7:C195D56F)
(XEN) idx0 -> NODE0 start->0 size->1703936
(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
(XEN) idx1 -> NODE1 start->1703936 size->1572863
(XEN) phys_to_nid(00000001a0001000) -> 1 should be 1
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE0
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE0
(XEN) CPU4 -> NODE0
(XEN) CPU5 -> NODE0
(XEN) CPU6 -> NODE0
(XEN) CPU7 -> NODE0
(XEN) CPU8 -> NODE1
(XEN) CPU9 -> NODE1
(XEN) CPU10 -> NODE1
(XEN) CPU11 -> NODE1
(XEN) CPU12 -> NODE1
(XEN) CPU13 -> NODE1
(XEN) CPU14 -> NODE1
(XEN) CPU15 -> NODE1

Dante
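A minimal sketch, for illustration only (not part of the original mail): the two node tables below are transcribed from the 'u' dumps above, and the program simply shows which node each CPU in a hard-coded pin set of 0-3 lands on under each mapping.

/* Illustration only: the node tables are copied from the debug-key 'u'
 * output above; nothing here is Xen code. */
#include <stdio.h>

#define NR_CPUS 16

/* Xen 4.0.2-rc1-pre: even-numbered CPUs on node 0, odd-numbered on node 1. */
static const int node_402[NR_CPUS] = { 0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1 };
/* Xen 4.1-unstable: CPUs 0-7 on node 0, CPUs 8-15 on node 1. */
static const int node_41[NR_CPUS]  = { 0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1 };

int main(void)
{
    const int pinned[] = { 0, 1, 2, 3 };   /* the hard-coded pin set from the example */

    for (size_t i = 0; i < sizeof(pinned) / sizeof(pinned[0]); i++)
        printf("CPU%d -> node %d under 4.0.2, node %d under 4.1\n",
               pinned[i], node_402[pinned[i]], node_41[pinned[i]]);
    return 0;
}

With the 4.0.2 table CPUs 0-3 straddle both nodes; with the 4.1 table they all fall on node 0, matching what the mail describes.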
Jan Beulich
2010-Oct-28 07:51 UTC
Re: [Xen-devel] Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
>>> On 27.10.10 at 22:58, Dante Cinco <dantecinco@gmail.com> wrote:
> My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When
> switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed
> that the NUMA info as shown by the Xen 'u' debug-key is different.
> More specifically, the CPU to node mapping is alternating for 4.0.2
> and grouped sequentially for 4.1. This difference affects the
> allocation (wrt node/socket) of pinned VCPUs to the guest domain. For
> example, if I'm allocating physical CPUs 0 - 3 to my guest domain, in
> 4.0.2 the 4 VCPUs will be split between the 2 nodes but in 4.1 the 4
> VCPUs will all be in node 0.

This is apparently a result of the introduction of normalise_cpu_order().

Use of pinning to pre-determined, hard coded numbers is quite obviously dependent on hypervisor internal behavior (i.e. will yield different results if the implementation changes).

Jan
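A rough sketch of the alternative Jan is hinting at, assuming nothing beyond a cpu-to-node table obtained at runtime; the array name and helper below are invented for illustration, not an existing toolstack API. The idea is to choose the CPUs to pin from the observed topology rather than from fixed numbers.

/* Sketch only: pick up to "count" CPUs that all sit on "node", given a
 * cpu-to-node table filled in at runtime (e.g. from whatever topology
 * information the toolstack exposes), instead of hard-coding CPU numbers. */
#include <stddef.h>

static size_t pick_cpus_on_node(const int *cpu_to_node, size_t nr_cpus,
                                int node, int *out, size_t count)
{
    size_t found = 0;

    for (size_t cpu = 0; cpu < nr_cpus && found < count; cpu++)
        if (cpu_to_node[cpu] == node)
            out[found++] = (int)cpu;

    return found;   /* may be less than "count" if the node has fewer CPUs */
}

A pin set built this way yields the same node placement on both hypervisor versions, because it follows whatever numbering the running hypervisor actually reports.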
Tim Deegan
2010-Oct-28 09:25 UTC
Re: [Xen-devel] Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
At 21:58 +0100 on 27 Oct (1288216729), Dante Cinco wrote:
> My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When
> switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed
> that the NUMA info as shown by the Xen 'u' debug-key is different.
> More specifically, the CPU to node mapping is alternating for 4.0.2
> and grouped sequentially for 4.1. This difference affects the
> allocation (wrt node/socket) of pinned VCPUs to the guest domain.

Yes; this change was deliberate. The mapping of pcpu numbers to core/socket/node used to depend on the BIOS of the particular machine; now it's a consistent order on all machines.

http://xenbits.xen.org/xen-unstable.hg/rev/2f4a89ad2528

Tim.

--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
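A minimal sketch of the idea only, not the actual normalise_cpu_order() implementation from the changeset above (the struct fields and function name are invented for illustration): renumber CPUs by topology so that siblings end up with contiguous logical IDs regardless of how the BIOS enumerated them.

/* Sketch of the idea, not Xen's code: sort CPUs by (socket, core, thread)
 * so CPUs sharing a socket get contiguous logical numbers. */
#include <stdlib.h>

struct cpu_topo {
    int socket, core, thread;   /* hypothetical per-CPU topology fields */
};

static int cmp_cpu(const void *a, const void *b)
{
    const struct cpu_topo *x = a, *y = b;

    if (x->socket != y->socket)
        return x->socket - y->socket;
    if (x->core != y->core)
        return x->core - y->core;
    return x->thread - y->thread;
}

static void normalise_order(struct cpu_topo *cpus, size_t nr)
{
    /* After sorting, array index i becomes the new logical CPU number. */
    qsort(cpus, nr, sizeof(*cpus), cmp_cpu);
}

On a two-socket box like the one above, an ordering of this kind is what produces the grouped 0-7 / 8-15 layout seen in the 4.1 dump.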