search for: vpus

Displaying 6 results for "vpus".

2008 May 29
4
pass multicore cpu to domU?
...I've read, it bases this restriction on the number of sockets used, rather than cores. So if I installed XP on the bare-metal server, I would have full access to all the CPUs. Naturally, I would really prefer to avoid using Windows XP as a host OS. Is there some way to group the vpus into virtual sockets, so that Windows XP will make use of more than just two vpus? The only other option would be to purchase Windows Enterprise Server, which is prohibitively expensive. Ilsa...
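The usual way to express this is CPU topology: present the same number of vCPUs as fewer sockets with more cores each, so that XP's two-socket license limit is satisfied. The xm configs of that era had no simple knob for it, but the idea is easy to show with QEMU's real -smp topology flags (a sketch; the values are illustrative, and wiring this through Xen's HVM device model may need a different mechanism):

    # 8 vCPUs exposed as 2 sockets x 4 cores, within XP's 2-socket limit
    qemu-system-x86_64 -smp 8,sockets=2,cores=4,threads=1 ...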
2020 Aug 03
1
Re: Libvirt qemu-system-x86_64 on ppc64le no multi threading
On Mon, Aug 03, 2020 at 01:45:45PM +0000, Kim-Norman Sahm wrote: > hi, > > I'm running Debian 10 on POWER9 and would like to spawn x86_64 emulated VMs. > The virtual machine is configured to run with 8 vpus, but it's very slow. > On the host you can see that the qemu-system-x86_64 process is using just one core! > > ppc64le guests use multiple cores, so it looks like a config problem or a software bug in the x86 emulator. snip > Does anybody know about this problem? You've not menti...
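The truncated reply points at the likely cause: a cross-architecture guest runs under TCG emulation, and multi-threaded TCG (MTTCG) is only enabled by default when the guest's memory ordering can be faithfully reproduced on the host. x86 is more strongly ordered than ppc64le, so QEMU falls back to a single vCPU thread. The thread property of the TCG accelerator is a real QEMU option; forcing it, as sketched below, is a workaround and not a recommendation, since it can break guest memory ordering:

    # request multi-threaded TCG explicitly (unsafe for x86-on-ppc64le)
    qemu-system-x86_64 -accel tcg,thread=multi -smp 8 ...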
2007 Oct 03
1
CPU/VCPU sharing
...ciency of Xen (i have always been a UML man). I would however, like to run a single domU domain, with a single VCPU, but get the power of say, 7 cores, leaving 1 to the dom0 instance. First off, I would like to know if this is possible. In my config file I have specified cpus = "1-7" and vpus=1 in the hope that the 1 vcpu would be an almighty powerful one, but this isn't the case. There are no errors booting it up, and ['cpus', '0-7'] appears in the xend logfile. Here is a copy of the config file. name = "test1" builder = "hvm" memory = "...
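For what it's worth, a vCPU is scheduled on at most one physical core at a time, so a single vCPU can never exceed one core's worth of compute; getting the power of seven cores requires seven vCPUs. A minimal sketch of the relevant xm/xl config lines (the options are real; note the spelling is vcpus, not vpus):

    # seven vCPUs, restricted to physical cores 1-7; core 0 stays with dom0
    vcpus = 7
    cpus  = "1-7"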
2010 May 21
10
What's the difference between "dom0_max_vcpus=4 dom0_vcpus_pin" and "dom0_max_vcpus=4"?
Hi experts, Q1: What's the difference between "dom0_max_vcpus=4 dom0_vcpus_pin" and "dom0_max_vcpus=4"? Which will get better performance? Q2: Does dom0_max_vcpus=4 mean "cores 0-3 will be used only by dom0", or "4 cores (not dedicated cores) will be used by dom0, e.g. cores 2-5 or 3-6"? Q3: What does "nosmp" mean? Will xen, dom0, and domU just use one
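Both settings are Xen hypervisor boot parameters, passed on the Xen command line. dom0_max_vcpus=4 caps dom0 at four vCPUs; adding dom0_vcpus_pin additionally pins each dom0 vCPU to the matching physical CPU (vCPU0 to pCPU0, and so on), which helps cache locality at the cost of scheduling flexibility. A sketch of where they go on a Debian-style system (GRUB_CMDLINE_XEN_DEFAULT is the standard variable there; paths may differ by distro):

    # /etc/default/grub, then run update-grub and reboot
    GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=4 dom0_vcpus_pin"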
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
...is the performance gain worth it? Thanks. --- 1: Pause Loop Exiting is almost certain to vmexit in that case: we default to 4096 TSC cycles on KVM, and the pending loop is longer than 4 (4096/PSPIN_THRESHOLD). We would also vmexit if the critical section was longer than 4k. 2: In this example, vpus 1 and 2 use the lock while 3 never gets there.

    VCPU:        1           2           3
               lock()                          // we are the holder
                           pend()              // we have pending bit
                           vmexit              // while in PSPIN_THRESHOLD loop
               unlock()
                           vmentry
    ...
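For readers skimming the thread: the pending bit lets the first contender spin on the lock word itself instead of joining the MCS queue. A minimal C sketch of that idea, assuming a 32-bit lock word with bit 0 = locked and bit 8 = pending as in qspinlock; the names are illustrative, and the real kernel code also handles the queued-waiter tail:

    /* Sketch of the qspinlock "pending bit" fast path, 1-waiter case only. */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define Q_LOCKED   (1u << 0)   /* lock held */
    #define Q_PENDING  (1u << 8)   /* first waiter spinning in place */

    static bool pending_acquire(atomic_uint *lock)
    {
        unsigned int val = Q_LOCKED;  /* expect: locked, no pending, no tail */

        /* Become the single in-place waiter: locked -> locked|pending. */
        if (!atomic_compare_exchange_strong(lock, &val, Q_LOCKED | Q_PENDING))
            return false;             /* contended beyond one waiter: must queue */

        /* Spin until the holder drops the locked bit ... */
        while (atomic_load(lock) & Q_LOCKED)
            ;                         /* in a guest, PLE may vmexit here */

        /* ... then take the lock: pending -> locked. */
        atomic_store(lock, Q_LOCKED);
        return true;
    }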
2014 May 07
32
[PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
v9->v10: - Make some minor changes to qspinlock.c to accommodate review feedback. - Change author to PeterZ for 2 of the patches. - Include Raghavendra KT's test results in patch 18. v8->v9: - Integrate PeterZ's version of the queue spinlock patch with some modification: http://lkml.kernel.org/r/20140310154236.038181843@infradead.org - Break the more complex