search for: for_each_cpu

Displaying 20 results from an estimated 80 matches for "for_each_cpu".

2009 Jul 15
0
[PATCH] rename for_each_cpu() to for_each_possible_cpu()
...<jbeulich@novell.com> --- 2009-07-10.orig/xen/arch/ia64/linux-xen/perfmon.c 2009-05-27 13:54:05.000000000 +0200 +++ 2009-07-10/xen/arch/ia64/linux-xen/perfmon.c 2009-07-15 10:02:08.000000000 +0200 @@ -7313,7 +7313,7 @@ xenpfm_context_create(XEN_GUEST_HANDLE(p goto out; /* XXX fmt */ - for_each_cpu(cpu) { + for_each_possible_cpu(cpu) { ctx[cpu] = pfm_context_create(&kreq); if (ctx[cpu] == NULL) { error = -ENOMEM; @@ -7325,20 +7325,20 @@ xenpfm_context_create(XEN_GUEST_HANDLE(p BUG_ON(in_irq()); spin_lock(&xenpfm_context_lock); - for_each_cpu(cpu) { + for_each_possible_...
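
For reference, a minimal sketch of the iteration pattern the patch converts to: walk every possible CPU (not just the online ones) when allocating per-CPU contexts, and roll back on failure. The wrapper function and the perfmon helpers below are illustrative stand-ins, not the actual Xen/ia64 code.

```c
#include <linux/cpumask.h>
#include <linux/errno.h>

/* Stand-ins for the perfmon context type and helpers named in the diff. */
struct pfm_context;
struct pfm_context *pfm_context_create(void *kreq);
void pfm_context_free(struct pfm_context *ctx);

static struct pfm_context *ctx[NR_CPUS];

static int xenpfm_contexts_alloc(void *kreq)	/* hypothetical wrapper */
{
	unsigned int cpu;

	/*
	 * Iterate over every possible CPU, not just the online ones, so a
	 * CPU that comes online later still has a context.  This is the loop
	 * the patch renames from the old single-argument for_each_cpu(cpu).
	 */
	for_each_possible_cpu(cpu) {
		ctx[cpu] = pfm_context_create(kreq);
		if (!ctx[cpu])
			goto undo;
	}
	return 0;

undo:
	/* Free whatever was allocated before the failure. */
	for_each_possible_cpu(cpu) {
		if (!ctx[cpu])
			break;
		pfm_context_free(ctx[cpu]);
		ctx[cpu] = NULL;
	}
	return -ENOMEM;
}
```
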
2012 Aug 16
5
[PATCH] AMD, powernow: Update P-state directly when _PSD's CoordType is DOMAIN_COORD_TYPE_HW_ALL
..._policy_cpus, transition_pstate, + &next_perf_state, 1); + else + transition_pstate(&next_perf_state); - cmd.val = next_perf_state; - cmd.turbo = policy->turbo; - - on_selected_cpus(cmd.mask, transition_pstate, &cmd, 1); - - for_each_cpu(j, &online_policy_cpus) - cpufreq_statistic_update(j, perf->state, next_perf_state); + for_each_cpu(j, &online_policy_cpus) + cpufreq_statistic_update(j, perf->state, next_perf_state); + } perf->state = next_perf_state; - policy->cur = f...
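
The point of the patch is visible even in the truncated diff: when _PSD reports DOMAIN_COORD_TYPE_HW_ALL, the hardware coordinates the P-state across the domain, so a single write suffices, while per-CPU statistics are still updated for every online policy CPU. Below is a rough, Linux-style sketch of that control flow with hypothetical stand-ins for the powernow helpers, not the Xen functions themselves.

```c
#include <linux/cpumask.h>

/* Hypothetical stand-ins for the powernow/cpufreq helpers in the diff. */
void transition_pstate(const unsigned int *next_perf_state);
void statistic_update(unsigned int cpu, unsigned int from, unsigned int to);

static void switch_pstate(const struct cpumask *online_policy_cpus,
			  unsigned int cur_state, unsigned int next_state,
			  bool hw_all)
{
	unsigned int j;

	if (hw_all) {
		/* Hardware-coordinated domain: one P-state write is enough. */
		transition_pstate(&next_state);
	} else {
		/*
		 * Otherwise every CPU in the policy must be updated; the real
		 * Xen code uses on_selected_cpus() so the write happens on
		 * each target CPU, not locally as sketched here.
		 */
		for_each_cpu(j, online_policy_cpus)
			transition_pstate(&next_state);
	}

	/* Statistics are still accounted per CPU in either case. */
	for_each_cpu(j, online_policy_cpus)
		statistic_update(j, cur_state, next_state);
}
```
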
2019 May 27
3
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...-static void kvm_flush_tlb_others(const struct cpumask *cpumask, +static void kvm_flush_tlb_multi(const struct cpumask *cpumask, const struct flush_tlb_info *info) { u8 state; @@ -594,6 +594,9 @@ static void kvm_flush_tlb_others(const s * queue flush_on_enter for pre-empted vCPUs */ for_each_cpu(cpu, flushmask) { + if (cpu == smp_processor_id()) + continue; + src = &per_cpu(steal_time, cpu); state = READ_ONCE(src->preempted); if ((state & KVM_VCPU_PREEMPTED)) { @@ -603,7 +606,7 @@ static void kvm_flush_tlb_others(const s } } - native_flush_tlb_others(flushmask...
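
The two added lines implement a simple rule for the renamed kvm_flush_tlb_multi(): the local CPU is flushed by the caller, so the loop over the IPI mask skips smp_processor_id(), and preempted vCPUs can be dropped from the mask because the hypervisor will flush them when they next run. A simplified, self-contained sketch of that loop follows; the steal-time structure and flag are stand-ins, and the real patch sets KVM_VCPU_FLUSH_TLB with a cmpxchg rather than just clearing the bit.

```c
#include <linux/compiler.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/types.h>

#define VCPU_PREEMPTED	(1 << 0)	/* stand-in for KVM_VCPU_PREEMPTED */

/* Stand-in for the per-CPU steal-time area shared with the hypervisor. */
struct steal_time_stub {
	u8 preempted;
};
static DEFINE_PER_CPU(struct steal_time_stub, steal_time_stub);

static void prune_preempted_vcpus(struct cpumask *flushmask)
{
	unsigned int cpu;

	for_each_cpu(cpu, flushmask) {
		/*
		 * In the "multi" scheme the caller flushes the local CPU
		 * directly, so never consider it here.
		 */
		if (cpu == smp_processor_id())
			continue;

		/*
		 * A preempted vCPU needs no IPI: the hypervisor can flush its
		 * TLB before it runs again, so drop it from the mask.
		 */
		if (READ_ONCE(per_cpu(steal_time_stub, cpu).preempted) & VCPU_PREEMPTED)
			cpumask_clear_cpu(cpu, flushmask);
	}
}
```
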
2019 May 27
3
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...-static void kvm_flush_tlb_others(const struct cpumask *cpumask, +static void kvm_flush_tlb_multi(const struct cpumask *cpumask, const struct flush_tlb_info *info) { u8 state; @@ -594,6 +594,9 @@ static void kvm_flush_tlb_others(const s * queue flush_on_enter for pre-empted vCPUs */ for_each_cpu(cpu, flushmask) { + if (cpu == smp_processor_id()) + continue; + src = &per_cpu(steal_time, cpu); state = READ_ONCE(src->preempted); if ((state & KVM_VCPU_PREEMPTED)) { @@ -603,7 +606,7 @@ static void kvm_flush_tlb_others(const s } } - native_flush_tlb_others(flushmask...
2020 Apr 08
5
[PATCH] x86: mmiotrace: Use cpumask_available for cpumask_var_t variables
..."); goto out; @@ -402,7 +402,7 @@ static void leave_uniprocessor(void) int cpu; int err; - if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0) + if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0) return; pr_notice("Re-enabling CPUs...\n"); for_each_cpu(cpu, downed_cpus) { base-commit: ae46d2aa6a7fbe8ca0946f24b061b6ccdc6c3f25 -- 2.26.0
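
The reason the NULL test is wrong: when CONFIG_CPUMASK_OFFSTACK is not set, cpumask_var_t is an embedded array rather than a pointer, so comparing it against NULL is meaningless and triggers compiler warnings; cpumask_available() handles both representations. A minimal sketch of the corrected check, using a file-scope mask like mmiotrace's downed_cpus (the CPU-onlining step is elided):

```c
#include <linux/cpumask.h>

/*
 * Depending on CONFIG_CPUMASK_OFFSTACK this is either a pointer filled in
 * by alloc_cpumask_var() or an embedded bitmap -- exactly why a plain NULL
 * comparison cannot work for both cases.
 */
static cpumask_var_t downed_cpus;

static void reenable_downed_cpus(void)
{
	unsigned int cpu;

	/* True once the mask is usable (always true in the embedded case). */
	if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0)
		return;

	for_each_cpu(cpu, downed_cpus) {
		/* Bring the CPU back online here, as leave_uniprocessor() does. */
	}
}
```
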
2013 Jun 20
3
[PATCH V2 1/2] cpufreq, xenpm: fix cpufreq and xenpm mismatch
Currently cpufreq and xenpm are out of sync. Fix cpufreq reporting of whether turbo mode is enabled or not. Fix xenpm to decode a boolean rather than a tristate. Signed-off-by: Jacob Shin <jacob.shin@amd.com> --- tools/misc/xenpm.c | 14 +++----------- xen/drivers/cpufreq/utility.c | 2 +- 2 files changed, 4 insertions(+), 12 deletions(-) diff --git a/tools/misc/xenpm.c
2019 May 27
1
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...tic void kvm_flush_tlb_multi(const struct cpumask *cpumask, > > const struct flush_tlb_info *info) > > { > > u8 state; > > @@ -594,6 +594,9 @@ static void kvm_flush_tlb_others(const s > > * queue flush_on_enter for pre-empted vCPUs > > */ > > for_each_cpu(cpu, flushmask) { > > + if (cpu == smp_processor_id()) > > + continue; > > + > > Even this would be just an optimization; the vCPU you're running on > cannot be preempted. You can just change others to multi. Yeah, I know, but it felt weird so I added the explic...
2020 May 18
2
[PATCH] x86: mmiotrace: Use cpumask_available for cpumask_var_t variables
...int err; > > > > - if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0) > > + if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0) > > return; > > pr_notice("Re-enabling CPUs...\n"); > > for_each_cpu(cpu, downed_cpus) { > > > > base-commit: ae46d2aa6a7fbe8ca0946f24b061b6ccdc6c3f25 > > -- > > 2.26.0 > > > > Gentle ping for acceptance, I am not sure who should take this. Looks like Steven or Ingo are the listed maintainers for MMIOTRACE? -- Thanks, ~Nick Des...
2006 May 15
20
[PATCH 0/3] xenoprof fixes
These patches address issues in the kernel part of xenoprof: * Ill-advised use of on_each_cpu() can lead to sleep with interrupts disabled. * Race conditions in active_domains code. * Cleanup of active_domains code. Comments welcome.
2007 Mar 27
0
[PATCH] make all performance counter per-cpu
...[i].perfc_addr; - for (j = 0; j < PRIVOP_COUNT_NADDRS; j++) - atomic_set(&v[j], privop_addr_counter[i].addr[j]); - - v = privop_addr_counter[i].perfc_count; - for (j = 0; j < PRIVOP_COUNT_NADDRS; j++) - atomic_set(&v[j], privop_addr_counter[i].count[j]); + unsigned int cpu; + + for_each_cpu ( cpu ) { + perfc_t *perfcounters = per_cpu(perfcounters, cpu); + struct privop_addr_count *s = per_cpu(privop_addr_counter, cpu); + int i, j; + + for (i = 0; i < PRIVOP_COUNT_NINSTS; i++, s++) { + perfc_t *d; + + /* Note: addresses are truncated! */ + d = perfcounters + privop_addr_i...
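
The shape of the conversion: each CPU gets its own counter array, the hot path touches only the local copy without atomics, and reporting folds all copies together, which is where the for_each_cpu() loop in the diff comes from. A reduced, Linux-style sketch of that split; the names and the array size are illustrative stand-ins, not the Xen/ia64 code.

```c
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/string.h>

#define NADDRS 4	/* stand-in for PRIVOP_COUNT_NADDRS */

/* One private counter array per CPU instead of one global atomic array. */
struct privop_counters {
	unsigned long count[NADDRS];
};
static DEFINE_PER_CPU(struct privop_counters, privop_counters);

/* Hot path: only the local copy is touched, so no atomics are needed
 * (the caller is assumed to run with preemption disabled). */
static inline void privop_hit(unsigned int idx)
{
	this_cpu_ptr(&privop_counters)->count[idx]++;
}

/* Reporting path: fold every CPU's private counters into one snapshot,
 * the same role as the for_each_cpu() loop added by the patch. */
static void privop_snapshot(unsigned long out[NADDRS])
{
	unsigned int cpu, i;

	memset(out, 0, NADDRS * sizeof(out[0]));
	for_each_possible_cpu(cpu) {
		struct privop_counters *c = per_cpu_ptr(&privop_counters, cpu);

		for (i = 0; i < NADDRS; i++)
			out[i] += c->count[i];
	}
}
```
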
2012 Dec 03
17
[PATCH 0 of 3] xen: sched_credit: fix tickling and add some tracing
Hello, This small series deals with some weirdness in the mechanism with which the credit scheduler chooses what PCPU to tickle upon a VCPU wake-up. Details are available in the changelog of the first patch. The new approach has been extensively benchmarked and proved itself either beneficial or harmless. That means it does not introduce any significant amount of overhead and/or performances
2020 Apr 08
0
[PATCH] x86: mmiotrace: Use cpumask_available for cpumask_var_t variables
...int cpu; > int err; > > - if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0) > + if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0) > return; > pr_notice("Re-enabling CPUs...\n"); > for_each_cpu(cpu, downed_cpus) { > > base-commit: ae46d2aa6a7fbe8ca0946f24b061b6ccdc6c3f25 > -- > 2.26.0 >
2020 Apr 08
1
[PATCH] x86: mmiotrace: Use cpumask_available for cpumask_var_t variables
...rr; > > > > - if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0) > > + if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0) > > return; > > pr_notice("Re-enabling CPUs...\n"); > > for_each_cpu(cpu, downed_cpus) { > > > > base-commit: ae46d2aa6a7fbe8ca0946f24b061b6ccdc6c3f25 > > -- > > 2.26.0 > >
2020 May 18
0
[PATCH] x86: mmiotrace: Use cpumask_available for cpumask_var_t variables
...static void leave_uniprocessor(void) > int cpu; > int err; > > - if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0) > + if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0) > return; > pr_notice("Re-enabling CPUs...\n"); > for_each_cpu(cpu, downed_cpus) { > > base-commit: ae46d2aa6a7fbe8ca0946f24b061b6ccdc6c3f25 > -- > 2.26.0 > Gentle ping for acceptance, I am not sure who should take this. Cheers, Nathan
2020 May 18
0
[PATCH] x86: mmiotrace: Use cpumask_available for cpumask_var_t variables
...> > > - if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0) > > > + if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0) > > > return; > > > pr_notice("Re-enabling CPUs...\n"); > > > for_each_cpu(cpu, downed_cpus) { > > > > > > base-commit: ae46d2aa6a7fbe8ca0946f24b061b6ccdc6c3f25 > > > -- > > > 2.26.0 > > > > > > > Gentle ping for acceptance, I am not sure who should take this. > > Looks like Steven or Ingo are the listed...
2019 May 27
0
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...struct cpumask *cpumask, > +static void kvm_flush_tlb_multi(const struct cpumask *cpumask, > const struct flush_tlb_info *info) > { > u8 state; > @@ -594,6 +594,9 @@ static void kvm_flush_tlb_others(const s > * queue flush_on_enter for pre-empted vCPUs > */ > for_each_cpu(cpu, flushmask) { > + if (cpu == smp_processor_id()) > + continue; > + Even this would be just an optimization; the vCPU you're running on cannot be preempted. You can just change others to multi. Paolo > src = &per_cpu(steal_time, cpu); > state = READ_ONCE(src->...
2019 May 25
3
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...ocating a new one. + * + * This works under the assumption that there are no nested TLB + * flushes, an assumption that is already made in + * flush_tlb_mm_range(). + */ + struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask); + int cpu; + + cpumask_clear(cond_cpumask); + + for_each_cpu(cpu, cpumask) { + if (tlb_is_not_lazy(cpu)) + __cpumask_set_cpu(cpu, cond_cpumask); + } + __smp_call_function_many(cond_cpumask, flush_tlb_func_remote, + flush_tlb_func_local, (void *)info, 1); + } +} + +void native_flush_tlb_others(const struct cpumask *cpumask, + const struct f...
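
The core of the concurrent-flush path shown above is building a second mask on the fly: start from the caller's mask, keep only the CPUs that are not in lazy-TLB mode, and hand the result to a single smp call that runs the local and remote flush functions concurrently (the patch introduces __smp_call_function_many() for that last step). A reduced sketch of the filtering part; tlb_is_not_lazy() and the final call are stand-ins here.

```c
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/types.h>

/*
 * Per-CPU scratch mask so the filter never allocates, mirroring the
 * flush_tlb_mask variable in the patch.  This assumes TLB flushes do not
 * nest, the same assumption the patch's comment calls out.
 */
static DEFINE_PER_CPU(cpumask_t, flush_scratch_mask);

bool tlb_is_not_lazy(unsigned int cpu);		/* stand-in predicate */
void issue_flush(const struct cpumask *mask);	/* stand-in for the smp call */

static void flush_non_lazy(const struct cpumask *cpumask)
{
	struct cpumask *cond = this_cpu_ptr(&flush_scratch_mask);
	unsigned int cpu;

	cpumask_clear(cond);

	/*
	 * Keep only CPUs that actually have the mm loaded; lazy-TLB CPUs
	 * will reconcile their TLB when they switch back to the mm.
	 */
	for_each_cpu(cpu, cpumask) {
		if (tlb_is_not_lazy(cpu))
			__cpumask_set_cpu(cpu, cond);
	}

	issue_flush(cond);
}
```
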
2019 May 25
3
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...ocating a new one. + * + * This works under the assumption that there are no nested TLB + * flushes, an assumption that is already made in + * flush_tlb_mm_range(). + */ + struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask); + int cpu; + + cpumask_clear(cond_cpumask); + + for_each_cpu(cpu, cpumask) { + if (tlb_is_not_lazy(cpu)) + __cpumask_set_cpu(cpu, cond_cpumask); + } + __smp_call_function_many(cond_cpumask, flush_tlb_func_remote, + flush_tlb_func_local, (void *)info, 1); + } +} + +void native_flush_tlb_others(const struct cpumask *cpumask, + const struct f...
2019 Jun 13
4
[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...ocating a new one. + * + * This works under the assumption that there are no nested TLB + * flushes, an assumption that is already made in + * flush_tlb_mm_range(). + */ + struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask); + int cpu; + + cpumask_clear(cond_cpumask); + + for_each_cpu(cpu, cpumask) { + if (tlb_is_not_lazy(cpu)) + __cpumask_set_cpu(cpu, cond_cpumask); + } + __smp_call_function_many(cond_cpumask, flush_tlb_func_remote, + flush_tlb_func_local, (void *)info, 1); + } +} + +void native_flush_tlb_others(const struct cpumask *cpumask, + const struct f...
2019 Jun 13
4
[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...ocating a new one. + * + * This works under the assumption that there are no nested TLB + * flushes, an assumption that is already made in + * flush_tlb_mm_range(). + */ + struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask); + int cpu; + + cpumask_clear(cond_cpumask); + + for_each_cpu(cpu, cpumask) { + if (tlb_is_not_lazy(cpu)) + __cpumask_set_cpu(cpu, cond_cpumask); + } + __smp_call_function_many(cond_cpumask, flush_tlb_func_remote, + flush_tlb_func_local, (void *)info, 1); + } +} + +void native_flush_tlb_others(const struct cpumask *cpumask, + const struct f...