Displaying 20 results from an estimated 4976 matches for "cpus's".
2011 Oct 20
0
[PATCH 07/12] cpufreq: allocate CPU masks dynamically
struct cpufreq_policy, including a cpumask_t member, gets copied in
cpufreq_limit_change(), cpufreq_add_cpu(), set_cpufreq_gov(), and
set_cpufreq_para(). Make the member a cpumask_var_t, thus reducing the
amount of data needing copying (particularly with large NR_CPUS).
Signed-off-by: Jan Beulich <jbeulich@suse.com>
--- 2011-09-20.orig/xen/arch/x86/acpi/cpufreq/cpufreq.c 2011-10-12 08:35:12.000000000 +0200
+++ 2011-09-20/xen/arch/x86/acpi/cpufreq/cpufreq.c 2011-10-14 14:55:07.000000000 +0200
@@ -446,7 +446,7 @@ static int acpi_cpufreq_target(struct cp...
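A minimal sketch of the pattern this description names, using Linux-style cpumask APIs for illustration (the actual patch touches Xen's own copy of these structures):
/*
 * An embedded cpumask_t makes every struct copy O(NR_CPUS) bits, while a
 * cpumask_var_t costs only a pointer copy on large-NR_CPUS configurations.
 */
#include <linux/cpumask.h>
#include <linux/slab.h>

struct policy_before {
    cpumask_t cpus;            /* NR_CPUS bits copied on every assignment */
};

struct policy_after {
    cpumask_var_t cpus;        /* pointer-sized with CONFIG_CPUMASK_OFFSTACK */
};

static int policy_init(struct policy_after *p)
{
    if (!alloc_cpumask_var(&p->cpus, GFP_KERNEL))
        return -ENOMEM;
    return 0;
}

static void policy_exit(struct policy_after *p)
{
    free_cpumask_var(p->cpus); /* harmless no-op when masks are on-stack */
}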
2013 Jun 20
3
[PATCH V2 1/2] cpufreq, xenpm: fix cpufreq and xenpm mismatch
Currently cpufreq and xenpm are out of sync. Fix cpufreq's reporting of
whether turbo mode is enabled, and fix xenpm to decode the value as a
boolean rather than a tristate.
Signed-off-by: Jacob Shin <jacob.shin@amd.com>
---
tools/misc/xenpm.c | 14 +++-----------
xen/drivers/cpufreq/utility.c | 2 +-
2 files changed, 4 insertions(+), 12 deletions(-)
diff --git a/tools/misc/xenpm.c
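A hypothetical sketch of the reporting change the summary describes (illustrative identifiers, not the actual xenpm code): decode the turbo field as a plain boolean when printing.
#include <stdio.h>

/* Illustrative only: print turbo state as enabled/disabled instead of
 * interpreting the field as a tristate. */
static void print_turbo(int turbo_enabled)
{
    printf("Turbo Mode: %s\n", turbo_enabled ? "enabled" : "disabled");
}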
2004 Dec 15
8
SMP guest support in unstable tree.
The unstable tree now includes support for SMP guests, i.e.
domains which run on multiple cpus. SMP guests can use between
1 and 32 virtual cpus, even if the machine has fewer physical cpus.
The code is highly experimental and performance will improve over
time.
To use SMP guests:
- enable option CONFIG_SMP in the Linux 2.6 kernel config
- dom0 will boot with up to the number of physical cp...
2008 May 17
4
vcpus higher than real cpus possible?
Hello, I have just migrated a machine to a Xen server where only 2 CPUs are
available. I copied the config from the machine where the domU was located
before. That machine had 4 CPUs (quad-core); the current machine has 2 CPUs
(1 Xeon, 2 cores).
I forgot to change vcpus from 4 to 2 in the config, but... what really
surprised me... the machine started. How is it possible? It was real...
2006 Sep 15
11
Supported #of CPUs/VMs per CPUs
I can't find documentation on how many CPUs Xen can support and how many
virtual machines per CPU are allowed. Can someone please supply this info?
2013 Jan 23
1
VMs fail to start with NUMA configuration
...r the documentation to:
<vcpu placement='auto'>2</vcpu>
<numatune>
<memory tune='strict' placement='auto'/>
</numatune>
However, the VMs won't start, and the system is not low on memory.
# numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 16374 MB
node 0 free: 11899 MB
node 1 cpus: 32 36 40 44 48 52 56 60
node 1 size: 16384 MB
node 1 free: 15318 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 16384 MB
node 2 free: 15766 MB
node 3 cpus: 34 38 42 46 50 54 58 62
node 3 size: 16384 MB
node 3 free: 1...
2013 Nov 03
2
[LLVMdev] [PATCH] Do not generate nopl instruction on CPUs that don't support it.
Hi
This patch fixes a code generation bug: 586-class CPUs don't support the
nopl instruction, and some 686-class CPUs don't support it either.
I created bug 17792 for that.
BTW, I think you should also optimize padding on these CPUs - instead of a
stream of 0x90 nops, you should generate variants of the "lea (%esi),%esi"
instruction like g...
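For illustration, here are byte encodings of the lea-based padding sequences being suggested (standard ia32 encodings; the exact variants an assembler emits may differ):
/* Multi-byte padding that is safe on CPUs lacking nopl. */
static const unsigned char pad2[] = { 0x8d, 0x36 };             /* lea (%esi),%esi */
static const unsigned char pad3[] = { 0x8d, 0x76, 0x00 };       /* lea 0x0(%esi),%esi */
static const unsigned char pad4[] = { 0x8d, 0x74, 0x26, 0x00 }; /* lea 0x0(%esi,%eiz,1),%esi */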
2013 Sep 17
1
[PATCH v2] xen: sched_credit: filter node-affinity mask against online cpus
...sk() )
and online mask (as retrieved by cpupool_scheduler_cpumask() )
having an empty intersection.
Therefore, when attempting a node-affinity load balancing step
and running this:
...
/* Pick an online CPU from the proper affinity mask */
csched_balance_cpumask(vc, balance_step, &cpus);
cpumask_and(&cpus, &cpus, online);
...
we end up with an empty cpumask (in cpus). At this point, in
the following code:
....
/* If present, prefer vc's current processor */
cpu = cpumask_test_cpu(vc->processor, &cpus)
? vc->processor...
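A simplified sketch of the fix the subject line describes (identifiers follow the excerpt, but this is not the literal patch):
/* Filter the affinity mask against the online mask and bail out of the
 * balancing step when the intersection is empty. */
static int pick_cpu_filtered(struct vcpu *vc, int balance_step,
                             const cpumask_t *online)
{
    cpumask_t cpus;

    csched_balance_cpumask(vc, balance_step, &cpus);
    cpumask_and(&cpus, &cpus, online);

    if ( cpumask_empty(&cpus) )
        return -1;                      /* skip this balance step */

    /* If present, prefer vc's current processor. */
    return cpumask_test_cpu(vc->processor, &cpus)
           ? vc->processor : cpumask_first(&cpus);
}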
2007 Apr 18
2
[PATCH] Simplify smp_call_function*() by using common implementation
...c struct call_data_struct *call_data;
-static void __smp_call_function(void (*func) (void *info), void *info,
- int nonatomic, int wait)
+
+static int __smp_call_function_mask(cpumask_t mask,
+ void (*func)(void *), void *info,
+ int wait)
{
struct call_data_struct data;
- int cpus = num_online_cpus() - 1;
+ cpumask_t allbutself;
+ int cpus;
+
+ /* Can deadlock when called with interrupts disabled */
+ WARN_ON(irqs_disabled());
+
+ allbutself = cpu_online_map;
+ cpu_clear(smp_processor_id(), allbutself);
+
+ cpus_and(mask, mask, allbutself);
+ cpus = cpus_weight(mask);
if...
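To illustrate the "common implementation" idea, a hypothetical wrapper showing how a legacy entry point can delegate to the mask-based variant (not the literal patch):
/* Broadcast to all online CPUs via the common mask routine; as the diff
 * shows, __smp_call_function_mask() itself strips the calling CPU. */
static int smp_call_function_all(void (*func)(void *), void *info, int wait)
{
    return __smp_call_function_mask(cpu_online_map, func, info, wait);
}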
2012 Jun 03
1
need to load uhci_hcd with acpi=off
...cess to the
usb-ip-kvm-keyboard.
What I tried:
- thousands of BIOS IRQ/PnP parameters => no luck
- noacpi => kernel panic
- irq=biosirq => no change
Any more ideas on how to get the IRQ with the acpi=off parameter set?
Best Regards
Werner
complete dmesg:
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 2.6.32-5-xen-amd64 (Debian 2.6.32-44)
(dannf@debian.org) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Sat May
5 04:18:09 UTC 2012
[ 0.000000] Command line: placeholder
root=UUID=8622e268-8845-4cd5-82b3-e7fd0568c602 r...
2006 Jan 24
5
domU machines hang when Hyperthreading enabled in BIOS
Hi,
I can't get rid of an annoying issue where domU machines hang with
Hyperthreading enabled in my BIOS.
Hardware config: IBM x325 series
2 * CPUs Intel Xeon 3.06 GHz
Xen version: 3.0.0
Kernel version: 2.6.12.6
Linux distrib: Debian Sarge 3.1 r1
I run 1 domU machine on my dom0, here is my Xen config:
. xen0:
/etc/xen/xend-config.sxp : (dom0-num-cpus 0)
. xenU:
/etc/xen/myxenU.cfg : vcpus = 2
- First case: . BIOS Hyp...
2009 Jul 18
26
network misbehaviour with gplpv and 2.6.30
With GPLPV under 2.6.30, GPLPV gets the following from the ring:
ring slot n (first buffer):
status (length) = 54 bytes
offset = 0
flags = NETRXF_extra_info (possibly csum too but not relevant)
ring slot n + 1 (extra info)
gso.size (mss) = 1460
Because NETRXF_extra_info is not set, that's all I get for that packet.
In the IP header though, the total length is 1544 (which in itself
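For context, a sketch of how a frontend consumes the extra-info slot when NETRXF_extra_info is set on a response (simplified from the Xen netif ring protocol; error handling omitted):
/* After a response flagged NETRXF_extra_info, the next ring slot holds a
 * struct netif_extra_info (e.g. the GSO size) instead of packet data. */
static uint16_t read_gso_size(struct netif_rx_front_ring *ring, RING_IDX *cons)
{
    struct netif_rx_response *rsp = RING_GET_RESPONSE(ring, (*cons)++);
    uint16_t mss = 0;

    if (rsp->flags & NETRXF_extra_info) {
        struct netif_extra_info *extra =
            (struct netif_extra_info *)RING_GET_RESPONSE(ring, (*cons)++);
        if (extra->type == XEN_NETIF_EXTRA_TYPE_GSO)
            mss = extra->u.gso.size;
    }
    return mss;
}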
2013 Jun 10
3
[LLVMdev] [PATCH] Add host feature detection for Qualcomm CPUs
Hi,
I would like to add host feature detection for Qualcomm CPUs. The
implementation is modeled on the feature detection for ARM CPUs. Is this OK
to commit?
Thanks,
Tobi
An HTML attachment was scrubbed...
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20130610/caeabd25/attachment.html>
2013 Nov 05
0
[LLVMdev] [PATCH] Do not generate nopl instruction on CPUs that don't support it.
Please include a testcase with the patch.
gas uses " nopl 0x0(%eax)" for k6_2. Are you sure it is a gas bug?
On 3 November 2013 13:50, Mikulas Patocka
<mikulas at artax.karlin.mff.cuni.cz> wrote:
> Hi
>
> This patch fixes a code generation bug: 586-class CPUs don't support the
> nopl instruction, and some 686-class CPUs don't support it either.
>
> I created bug 17792 for that.
>
>
> BTW, I think you should also optimize padding on these CPUs - instead of a
> stream of 0x90 nops, you should generate variants of the "lea (%esi),...
2014 Sep 18
2
win2k8 guest and (strange) number of cpus task manager sees
hi everybody
a qemu-kvm guest gets 16 cpus and Windows in "Device
Manager" sees all sixteen,
but "Task Manager" shows only 4, and the "System"
properties likewise say "(4 processors)".
I'd like to learn a bit about this - is it some sort of
"resource management" on the libvirt/qemu side causing t...
2012 Nov 01
0
numa topology within domain XML
Hello all,
I'm trying to set up a NUMA topology identical to that of the machine which
hosts the qemu-kvm virtual machine.
numactl -H on the host:
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5
node 0 size: 8189 MB
node 0 free: 7581 MB
node 1 cpus: 6 7 8 9 10 11
node 1 size: 8192 MB
node 1 free: 7061 MB
node 2 cpus: 12 13 14 15 16 17
node 2 size: 8192 MB
node 2 free: 6644 MB
node 3 cpus: 18 19 20 21 22 23
node 3 size: 8192 MB
node 3 free: 7747 MB
node 4 cpus: 24 25 26 27 28 2...
2010 Nov 18
1
what scheduling algorithm does KVM use?
This may not be the best place to ask, but I was prompted by a question about
guest cores on KVM.
We currently use VMWare Server (v1.0) on CentOS5.
It supports up to two virtual CPUs, but not very well, as I understand it.
VMware Server 2.0 might do better at supporting the same maximum of 2 CPUs,
but if my research is correct, they both use what is called "strict
co-scheduling", which means that if a two-virtual-CPU VM is waiting for a
time slice on the physical host, t...
2019 Jul 02
0
[PATCH v2 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index e65d7fe6489f..1177f863e4cd 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -50,8 +50,8 @@ static inline int fill_gva_list(u64 gva_list[], int offset,
return gva_n - offset;
}
-static void hyperv_flush_tlb_others(const struct cpumask *cpus,
- const struct flush_tlb_info *info)
+static void hyperv_flush_tlb_multi(const struct cpumask *cpus,
+ const struct flush_tlb_info *info)
{
int cpu, vcpu, gva_n, max_gvas;
struct hv_tlb_flush **flush_pcpu;
@@ -59,7 +59,7 @@ static void hyperv_flush_tlb_others(const struct cpumask...
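As a sketch of the semantic shift the rename implies (an assumption drawn from the series title, not the full patch): the _multi variant may be handed a mask that includes the current CPU, so the handler must also cover the local flush.
/* Assumed semantics, simplified: flush_tlb_others() was never called with
 * the local CPU in the mask; a *_multi handler can be. */
static void example_flush_tlb_multi(const struct cpumask *cpus,
                                    const struct flush_tlb_info *info)
{
    if (cpumask_test_cpu(smp_processor_id(), cpus))
        local_flush(info);              /* hypothetical local helper */

    /* ... then ask the hypervisor to flush the remaining remote CPUs ... */
}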