Displaying 20 results from an estimated 79 matches for "pcpus".
2013 Mar 15 · 2 · strange phenomenon on CPU affinity
...4 1,3,5,7
VM6 4 0,2,4,6
VM7 4 0,2,4,6
VM8 4 0,2,4,6
I do not set the CPU affinity in the configuration file, and I cannot find
where the hypervisor sets the CPU affinity in the source code. In this
situation, the 4 VCPUs of each VM are bound to 4 PCPUs permanently: 5 VMs
run on one set of PCPUs, and the others run on the other set. It is
unfair to these VMs.
--
Like Zhou
2008 Mar 02 · 4 · xennet windows pv performance
I've just pushed some changes to hg that seem to speed things up a bit
for me.
On one machine with 2 x dual-core 2.4GHz AMD CPUs, testing with iperf
from DomU to Dom0 I get:
Windows (vcpus = 2) - ~500MBits/sec - ~66% cpu utilization (e.g. 100% on
one, and 33% on the second cpu)
Linux (vcpus = 1) - ~2200MBits/sec
On another machine with 1 x dual-core 1.8GHz AMD cpu, testing as
2012 Jan 25 · 0 · Re: [PATCHv2 1 of 2] libxl: extend pCPUs specification for vcpu-pin.
On Mon, 2012-01-23 at 18:21 +0000, Dario Faggioli wrote:
> Allow for "^<cpuid>" syntax while specifying the pCPUs list
> during a vcpu-pin. This enables doing the following:
>
> xl vcpu-pin 1 1 0-4,^2
>
> and achieving:
>
> xl vcpu-list
> Name ID VCPU CPU State Time(s) CPU Affinity
> ...
> Squeeze_pv 1 1...
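The "0-4,^2" style of list (ranges plus "^" exclusions) is easy to mirror in a
few lines. Below is a minimal, hypothetical Python sketch of such a parser; it
is not libxl's actual implementation, just an illustration of the semantics
described in the patch:

def parse_pcpu_list(spec, max_cpu):
    """Parse a pCPU list like '0-4,^2' into a set of pCPU ids.

    Illustrative only: mirrors the syntax described above,
    not libxl's real parser.
    """
    included, excluded = set(), set()
    for tok in spec.split(","):
        tok = tok.strip()
        target = excluded if tok.startswith("^") else included
        tok = tok.lstrip("^")
        if "-" in tok:
            lo, hi = (int(x) for x in tok.split("-"))
            target.update(range(lo, hi + 1))
        else:
            target.add(int(tok))
    return included - excluded

# 'xl vcpu-pin 1 1 0-4,^2' would pin to pCPUs {0, 1, 3, 4}:
print(parse_pcpu_list("0-4,^2", max_cpu=7))  # {0, 1, 3, 4}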
2012 Jan 25 · 0 · Re: [PATCHv3 1 of 2] libxl: extend pCPUs specification for vcpu-pin.
On Wed, 2012-01-25 at 14:35 +0000, Dario Faggioli wrote:
> Allow for "^<cpuid>" syntax while specifying the pCPUs list
> during a vcpu-pin. This enables doing the following:
>
> xl vcpu-pin 1 1 0-4,^2
>
> and achieving:
>
> xl vcpu-list
> Name ID VCPU CPU State Time(s) CPU Affinity
> ...
> Squeeze_pv 1 1...
2013 Jul 19 · 2 · pinVcpu not working
Hi all,
I am working with libvirt and I am trying to set CPU affinity. Now I can always use
virsh vcpupin <domain_name> <vcpu> <pcpu>
to pin vcpus to pcpus. I want to do it using the Python API. Now, there is a function pinVcpu which is supposed to do that, but it is not working. For example I gave
dom.pinVcpu(0,1)
but my vcpu affinity is still for all the pcpus. The function returns 0 (success).
Any idea what I am doing wrong?
Thanks.
~Peeyush...
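A plausible cause, for what it's worth: libvirt's Python pinVcpu takes a
cpumap, a tuple of booleans with one entry per host pCPU, not a pCPU index,
so pinVcpu(0, 1) does not mean "pin vCPU 0 to pCPU 1". A minimal sketch,
assuming a 4-pCPU host and a hypothetical domain name:

import libvirt

conn = libvirt.open("qemu:///system")    # hypothetical connection URI
dom = conn.lookupByName("mydomain")      # hypothetical domain name

# cpumap is one boolean per host pCPU; True = vCPU allowed on that pCPU.
# On a 4-pCPU host, pin vCPU 0 to pCPU 1 only:
dom.pinVcpu(0, (False, True, False, False))

# Verify: returns one cpumap tuple per vCPU.
print(dom.vcpuPinInfo(0))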
2020 Sep 18 · 0 · Re: [PATCH v2v] v2v: Set the number of vCPUs to same as host number of pCPUs.
On Fri, Sep 18, 2020 at 10:44:03AM +0100, Richard W.M. Jones wrote:
> So it didn't make any noticeable difference in my test. I wonder if
> the test guest I'm using (Fedora 32 using dracut) doesn't use parallel
> compression?
Do you do anything special to optimize storage? If the thing using
parallel CPUs in the guest is doing I/O you'd likely want to tune
storage at
2020 Sep 18 · 0 · [PATCH v2v] v2v: Set the number of vCPUs to same as host number of pCPUs.
This helps mkinitrd, which runs pigz (parallel gzip).
Thanks to Jean-Louis Dupond for suggesting this change.
---
common | 2 +-
v2v/v2v.ml | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/common b/common
index c840f2e39..ea5278bba 160000
--- a/common
+++ b/common
@@ -1 +1 @@
-Subproject commit c840f2e39d0bb637a98b224c89f6df011e1d4414
+Subproject commit
2020 Sep 18 · 1 · Re: [PATCH v2v] v2v: Set the number of vCPUs to same as host number of pCPUs.
On Friday, 18 September 2020 11:44:04 CEST Richard W.M. Jones wrote:
> let g = open_guestfs ~identifier:"v2v" () in
> g#set_memsize (g#get_memsize () * 14 / 5);
> + (* Setting the number of vCPUs allows parallel mkinitrd. *)
> + g#set_smp (Sysconf.nr_processors_online ());
IMHO this is not a good idea, for a few reasons:
a) it unconditionally uses all the available
2020 Sep 18 · 1 · Re: [PATCH v2v] v2v: Set the number of vCPUs to same as host number of pCPUs.
On Fri, Sep 18, 2020 at 10:52:58AM +0100, Daniel P. Berrangé wrote:
> On Fri, Sep 18, 2020 at 10:44:03AM +0100, Richard W.M. Jones wrote:
> > So it didn't make any noticeable difference in my test. I wonder if
> > the test guest I'm using (Fedora 32 using dracut) doesn't use parallel
> > compression?
>
> Do you do anything special to optimize storage? If
2020 Sep 18 · 4 · [PATCH v2v] v2v: Set the number of vCPUs to same as host number of pCPUs.
So it didn't make any noticeable difference in my test. I wonder if
the test guest I'm using (Fedora 32 using dracut) doesn't use parallel
compression?
However, I don't think it can cause a problem, and it seems obvious that
it could benefit some cases.
Rich.
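The patch itself uses OCaml's Sysconf.nr_processors_online to pick the vCPU
count. As a rough illustration of the "unconditionally uses all the available
CPUs" objection in the reply above, a Python sketch (names illustrative) of
the difference between online host CPUs and CPUs actually available to the
process:

import os

# All CPUs the host has online (what the patch effectively requests).
online = os.cpu_count()

# CPUs this process is actually allowed to run on (affinity/cgroup aware;
# Linux-only call).
available = len(os.sched_getaffinity(0))

# A more conservative choice would respect the current affinity mask:
guest_vcpus = min(online, available)
print(f"online={online} available={available} -> vCPUs={guest_vcpus}")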
2013 Jul 10 · 3 · Performance of Xen VCPU Scheduling
Hello,
I observed that a configuration where dom0 vcpus were pinned to a set of
pcpus in the host using dom0_vcpus_pin, and the guests were prevented
from running on those dom0 pcpus (here called "exclusively-pinned dom0
vcpus", or xpin), caused the general performance of the guests in a host
with around 24 pcpus or more to increase during bootstorms and high
density of gu...
2013 Sep 06 · 21 · [PATCH v2 0/5] xl: allow for node-wise specification of vcpu pinning
Hi all,
This is the second take of a patch that I submitted some time ago for allowing
vcpu pinning to be specified taking NUMA nodes into account. IOW, something like
this:
* "nodes:0-3": all pCPUs of nodes 0,1,2,3;
* "nodes:0-3,^node:2": all pCPUs of nodes 0,1,3;
* "1,nodes:1-2,^6": pCPU 1 plus all pCPUs of nodes 1,2, but not pCPU 6;
v1 was a single patch; this is a small series. It became necessary to do that
while coping with the review comments I got from IanJ. Wh...
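As with the "^" syntax earlier, the node-wise form is easy to model. A minimal
Python sketch, assuming a hypothetical node_to_pcpus topology mapping (the real
parsing lives in xl/libxl):

def parse_pinning(spec, node_to_pcpus):
    """Expand a node-aware pinning spec such as '1,nodes:1-2,^6'.

    node_to_pcpus: hypothetical dict mapping node id -> set of pCPU ids.
    Illustrative only; not xl's actual parser.
    """
    def expand(tok):
        if tok.startswith("nodes:") or tok.startswith("node:"):
            body = tok.split(":", 1)[1]
            if "-" in body:
                lo, hi = (int(x) for x in body.split("-"))
                nodes = range(lo, hi + 1)
            else:
                nodes = [int(body)]
            return set().union(*(node_to_pcpus[n] for n in nodes))
        if "-" in tok:
            lo, hi = (int(x) for x in tok.split("-"))
            return set(range(lo, hi + 1))
        return {int(tok)}

    included, excluded = set(), set()
    for tok in spec.split(","):
        tok = tok.strip()
        if tok.startswith("^"):
            excluded |= expand(tok[1:])
        else:
            included |= expand(tok)
    return included - excluded

# Two nodes with 4 pCPUs each (hypothetical topology):
topo = {0: {0, 1, 2, 3}, 1: {4, 5, 6, 7}}
print(parse_pinning("1,nodes:1,^6", topo))  # {1, 4, 5, 7}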
2013 Jul 27 · 0 · Re: pinVcpu not working
...]
On Fri, Jul 19, 2013 at 11:45 AM, Peeyush Gupta <gpeeyush@ymail.com> wrote:
> Hi all,
>
> I am working with libvirt and I am trying to set cpu affinity. Now I can
> always use
>
> virsh vcpupin <domain_name> <vcpu> <pcpu>
>
> to pin vcpus to pcpus. I want to do it using Python API. Now, there is
> function pinVcpu which is supposed to do that. But this is not working. For
> example I gave
>
> dom.pinVcpu(0,1)
>
> but still my vcpu affinity is for all the pcpus. The function returns 0
> (success).
> Any idea what a...
2012 Dec 12 · 7 · [PATCH V5] x86/kexec: Change NMI and MCE handling on kexec path
...pu is
never planning to execute a sysret back to a pv vcpu, the update is
safe from a security point of view.
* Swap the NMI trap handlers.
The crashing pcpu gets the nop handler, to prevent it getting stuck in
an NMI context, causing a hang instead of a crash. The non-crashing
pcpus all get the nmi_crash handler, which is designed never to
return.
do_nmi_crash() will:
* Save the crash notes and shut the pcpu down.
There is now an extra per-cpu variable to prevent us from executing
this multiple times. In the case where we reenter midway through,
attempt the...
2012 Dec 03 · 17 · [PATCH 0 of 3] xen: sched_credit: fix tickling and add some tracing
Hello,
This small series deals with some weirdness in the mechanism with which the
credit scheduler chooses which PCPU to tickle upon a VCPU wake-up. Details are
available in the changelog of the first patch.
The new approach has been extensively benchmarked and proved itself either
beneficial or harmless. That means it does not introduce any significant amount
of overhead and/or performances
2007 Oct 03 · 1 · CPU/VCPU sharing
Hi All,
I am new to Xen, and would like to know if anyone can help with a problem I have.
I have a dual quad-core Intel 5535 VT 2.66 server, with 24G RAM running CentOS5 dom0 and domU (both 64-bit). Everything works great and I am SUPER impressed with the efficiency of Xen (I have always been a UML man). I would, however, like to run a single domU domain, with a single VCPU, but get the power
2012 Mar 26 · 2 · [PATCH DOCDAY] docs: wrap misc/xen-command-line.markdown to 80 columns
...ombination with the `low_crashinfo` command line option.
### crashkernel
### credit2\_balance\_over
@@ -211,7 +262,8 @@ Specify the bit width of the DMA heap.
### dom0\_max\_vcpus
> `= <integer>`
-Specifiy the maximum number of vcpus to give to dom0. This defaults to the number of pcpus on the host.
+Specifiy the maximum number of vcpus to give to dom0. This defaults
+to the number of pcpus on the host.
### dom0\_mem (ia64)
> `= <size>`
@@ -260,7 +312,8 @@ Pin dom0 vcpus to their respective pcpus
> Default: `guest_loglvl=none/warning`
-Set the logging level f...
2008 Apr 20 · 6 · creating domU's consumes 100% of system resources
Hello,
I have noticed that when I use xen-create-image to generate a domU, the
whole server (dom0 and domU's) basically hangs until it is finished. This
happens primarily during the creation of the ext3 filesystem on an LVM
partition.
This creation of the file system can take up to 4 or 5 minutes, at which
point any other domU's are basically paused... tcp connections time
2016 Oct 28 · 0 · [PATCH v6 02/11] locking/osq: Drop the overload of osq_lock()
An over-committed guest with more vCPUs than pCPUs has a heavy overload in
osq_lock().
This is because vCPU A holds the osq lock and yields out, while vCPU B waits
for the per-CPU node->locked to be set. IOW, vCPU B waits for vCPU A to run
and unlock the osq lock.
The kernel has an interface, bool vcpu_is_preempted(int cpu), to see if a vCPU
is currently running or not....
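The actual fix is kernel C, but the idea is language-independent: stop
spinning once the lock holder's vCPU is known to be preempted. A conceptual
Python sketch, with vcpu_is_preempted stubbed out as a hypothetical predicate:

import time

def vcpu_is_preempted(cpu: int) -> bool:
    """Hypothetical stand-in for the kernel's vcpu_is_preempted(cpu)."""
    return False  # a hypervisor-backed implementation would go here

def osq_style_wait(node, holder_cpu, timeout=1.0):
    """Spin on node['locked'], but bail out if the holder's vCPU is
    preempted: spinning then only burns pCPU time the holder needs."""
    deadline = time.monotonic() + timeout
    while not node["locked"]:
        if vcpu_is_preempted(holder_cpu):
            return False   # give up the spin, reschedule instead
        if time.monotonic() > deadline:
            return False
    return True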
2013 Jun 14 · 0 · can virsh set the cpuset attribute of <vcpu ..> (CPU Allocation)?
...</vcpu>
...
</domain>
I have seen that virsh vcpupin and virsh emulatorpin can be used to query and
set the cpusets of the <vcpupin> and <emulatorpin> child elements of
<cputune>, which override the cpuset of <vcpu>.
If I did not have to pin to different PCPUs, I would execute just one command
for the whole domain rather than one command per VCPU plus one for the
emulator threads.
Thanks!
--------------------------------------------------
regards,
Edoardo Comar
WebSphere Application Service Platform for Networks (ASPN)
ecomar@uk.ibm.com
IBM UK Ltd, Hursl...
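There is no single virsh command that rewrites the <vcpu> cpuset itself, but
the per-vCPU and emulator pinnings can at least be scripted in one pass. A
hedged sketch using libvirt-python's pinVcpuFlags and pinEmulator (the URI,
domain name, and cpumap are made up for illustration):

import libvirt

conn = libvirt.open("qemu:///system")   # hypothetical URI
dom = conn.lookupByName("mydomain")     # hypothetical domain name

# Allow pCPUs 0-3 on an 8-pCPU host (one boolean per pCPU).
cpumap = (True, True, True, True, False, False, False, False)
flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG

# One pass instead of one virsh command per vCPU plus one for the emulator.
for vcpu in range(dom.maxVcpus()):
    dom.pinVcpuFlags(vcpu, cpumap, flags)
dom.pinEmulator(cpumap, flags)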