search for: native_smp_prepare_boot_cpu

Displaying 20 results from an estimated 68 matches for "native_smp_prepare_boot_cpu".

2007 Apr 18
1
[PATCH] Add smp_ops interface
...end_stop(void) +void native_smp_send_stop(void) { /* Don't deadlock on the call lock in panic */ int nolock = !spin_trylock(&call_lock); @@ -733,3 +733,14 @@ int safe_smp_processor_id(void) return cpuid >= 0 ? cpuid : 0; } + +struct smp_ops smp_ops = { + .smp_prepare_boot_cpu = native_smp_prepare_boot_cpu, + .smp_prepare_cpus = native_smp_prepare_cpus, + .cpu_up = native_cpu_up, + .smp_cpus_done = native_smp_cpus_done, + + .smp_send_stop = native_smp_send_stop, + .smp_send_reschedule = native_smp_send_reschedule, + .smp_call_function_mask = native_smp_call_function_mask, +}; ========================...
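For context, the smp_ops structure shown above turns the x86 SMP entry points into indirect calls, so a paravirtualized platform can substitute its own implementations while bare metal keeps the native_* functions. A minimal sketch of how the indirection is consumed (fields abridged to the ones visible in the diff; the inline wrappers follow the usual asm/smp.h pattern and are illustrative, not copied from the patch):

/* ops table filled with the native_* implementations by default */
struct smp_ops {
	void (*smp_prepare_boot_cpu)(void);
	void (*smp_prepare_cpus)(unsigned int max_cpus);
	int  (*cpu_up)(unsigned int cpu);
	void (*smp_cpus_done)(unsigned int max_cpus);
	void (*smp_send_stop)(void);
	void (*smp_send_reschedule)(int cpu);
};

extern struct smp_ops smp_ops;

/* generic code keeps calling smp_send_reschedule(); only the function
 * pointer behind it changes when a hypervisor port overrides the ops */
static inline void smp_send_reschedule(int cpu)
{
	smp_ops.smp_send_reschedule(cpu);
}

static inline void smp_prepare_boot_cpu(void)
{
	smp_ops.smp_prepare_boot_cpu();
}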
2007 Oct 31
3
[PATCH 0/7] (Re-)introducing pvops for x86_64 - Consolidation part
Hi folks, Here is the result of the latest work on the pvops front, after the x86 arch merge. From the functionality point of view, almost nothing was changed, except for proper vsmp support - which was discussed, but not implemented before - and the introduction of smp_ops in x86_64, which eased the merging of the smp header. Speaking of the merge, a significant part (although not the majority) of
2017 Sep 06
4
[PATCH v2 0/2] guard virt_spin_lock() with a static key
With virt_spin_lock() being guarded by a static key, the bare metal case can be optimized by patching the call away completely. When a kernel is running as a guest it can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. V2: - use static key instead of making virt_spin_lock() a pvops function Juergen Gross
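The mechanism under discussion: virt_spin_lock() is guarded by a static key that defaults to true, and native code switches the key off when no hypervisor is detected, so on bare metal the guest fallback path is patched out entirely. A simplified sketch of that shape, using the virt_spin_lock_key name that also appears in the boot-crash report further down (details may differ slightly from the posted patches):

/* paravirt side (relies on <linux/jump_label.h>): default-true key,
 * disabled on bare metal before any qspinlock is taken */
DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);

void __init native_pv_lock_init(void)
{
	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
		static_branch_disable(&virt_spin_lock_key);
}

/* asm/qspinlock.h side: guests fall back to a simple test-and-set lock
 * unless pv spinlocks take over; bare metal returns false immediately */
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	if (!static_branch_likely(&virt_spin_lock_key))
		return false;

	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}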
2017 Sep 05
1
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...virt_spin_lock(struct qspinlock *lock) > +{ > + return native_virt_spin_lock(lock); > } > +#endif /* CONFIG_PARAVIRT_SPINLOCKS */ > #endif /* CONFIG_PARAVIRT */ Because I think the above only ever uses native_virt_spin_lock() when PARAVIRT. > @@ -1381,6 +1382,7 @@ void __init native_smp_prepare_boot_cpu(void) > /* already set me in cpu_online_mask in boot_cpu_init() */ > cpumask_set_cpu(me, cpu_callout_mask); > cpu_set_state_online(me); > + native_pv_lock_init(); > } Aah, this is where that goes.. OK that works too.
2017 Sep 06
0
[PATCH v2 1/2] paravirt/locks: use new static key for controlling call of virt_spin_lock()
...6/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -77,6 +77,7 @@ #include <asm/i8259.h> #include <asm/realmode.h> #include <asm/misc.h> +#include <asm/qspinlock.h> /* Number of siblings per CPU package */ int smp_num_siblings = 1; @@ -1381,6 +1382,7 @@ void __init native_smp_prepare_boot_cpu(void) /* already set me in cpu_online_mask in boot_cpu_init() */ cpumask_set_cpu(me, cpu_callout_mask); cpu_set_state_online(me); + native_pv_lock_init(); } void __init native_smp_cpus_done(unsigned int max_cpus) diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c index 2...
2017 Sep 06
2
[PATCH v2 1/2] paravirt/locks: use new static key for controlling call of virt_spin_lock()
...oot.c > @@ -77,6 +77,7 @@ > #include <asm/i8259.h> > #include <asm/realmode.h> > #include <asm/misc.h> > +#include <asm/qspinlock.h> > > /* Number of siblings per CPU package */ > int smp_num_siblings = 1; > @@ -1381,6 +1382,7 @@ void __init native_smp_prepare_boot_cpu(void) > /* already set me in cpu_online_mask in boot_cpu_init() */ > cpumask_set_cpu(me, cpu_callout_mask); > cpu_set_state_online(me); > + native_pv_lock_init(); > } > > void __init native_smp_cpus_done(unsigned int max_cpus) > diff --git a/kernel/locking/qspinloc...
2017 Sep 05
0
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...6/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -77,6 +77,7 @@ #include <asm/i8259.h> #include <asm/realmode.h> #include <asm/misc.h> +#include <asm/qspinlock.h> /* Number of siblings per CPU package */ int smp_num_siblings = 1; @@ -1381,6 +1382,7 @@ void __init native_smp_prepare_boot_cpu(void) /* already set me in cpu_online_mask in boot_cpu_init() */ cpumask_set_cpu(me, cpu_callout_mask); cpu_set_state_online(me); + native_pv_lock_init(); } void __init native_smp_cpus_done(unsigned int max_cpus) -- 2.12.3
2007 Apr 18
4
paravirt repo rebased to 2.6.21-rc6-mm1
Seems to work OK for native and Xen. I had to play a bit with the paravirt-sched-clock patch to deal with the VMI changes. Zach, can you check that it still works? Thanks, J
2017 Oct 30
0
[locking/paravirt] static_key_disable_cpuslocked(): static key 'virt_spin_lock_key+0x0/0x20' used before call to jump_label_init()
...000] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [    0.000000] CR2: ffff88207f4e0000 CR3: 000000207ee09000 CR4: > 00000000000606b0 > [    0.000000] Call Trace: > [    0.000000]  static_key_disable+0x1a/0x30 > [    0.000000]  native_pv_lock_init+0x1b/0x1e > [    0.000000]  native_smp_prepare_boot_cpu+0x32/0x35 > [    0.000000]  start_kernel+0x14f/0x421 > [    0.000000]  x86_64_start_reservations+0x2a/0x2c > [    0.000000]  x86_64_start_kernel+0x72/0x75 > [    0.000000]  secondary_startup_64+0xa5/0xb0 > [    0.000000] Code: 85 c0 75 2f 48 c7 c7 20 5e f0 81 e8 df 2a 7b 00 5b > 4...
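What the splat means: native_smp_prepare_boot_cpu() (and therefore native_pv_lock_init() and static_branch_disable()) runs early in start_kernel(), before jump_label_init() has marked the static-key machinery as initialized, and the jump-label code warns about exactly that. Paraphrased from kernel/jump_label.c (the exact message format varies by kernel version):

extern bool static_key_initialized;	/* set once jump_label_init() has run */

#define STATIC_KEY_CHECK_USE(key)						\
	WARN(!static_key_initialized,						\
	     "%s(): static key '%pS' used before call to jump_label_init()",	\
	     __func__, (key))

void static_key_disable(struct static_key *key)
{
	STATIC_KEY_CHECK_USE(key);
	/* ... the actual enable-count update and code patching follow ... */
}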
2017 Sep 05
7
[PATCH 0/4] make virt_spin_lock() a pvops function
With virt_spin_lock() being a pvops function, the bare metal case can be optimized by patching the call away completely. When a kernel is running as a guest it can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. Juergen Gross (4): paravirt: add generic _paravirt_false() function paravirt: switch
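This is the earlier form of the same idea, routed through pv_lock_ops rather than a static key: bare metal would point the op at a generic always-false stub (the _paravirt_false() from the first patch in the series), which the paravirt patching machinery can then eliminate. A sketch of the proposed shape only; this version was not merged, and the field wiring below is an assumption based on the cover letter:

/* hypothetical extension of pv_lock_ops with a virt_spin_lock callback */
struct pv_lock_ops {
	/* ... existing queued-spinlock ops ... */
	bool (*virt_spin_lock)(struct qspinlock *lock);
};

extern struct pv_lock_ops pv_lock_ops;

static inline bool virt_spin_lock(struct qspinlock *lock)
{
	/* bare metal installs a "return false" stub here; guests install
	 * the test-and-set fallback, or nothing when pv spinlocks take
	 * over the slowpath instead */
	return pv_lock_ops.virt_spin_lock(lock);
}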
2017 Sep 06
5
[PATCH v3 0/2] guard virt_spin_lock() with a static key
With virt_spin_lock() being guarded by a static key, the bare metal case can be optimized by patching the call away completely. When a kernel is running as a guest it can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. V3: - remove test for hypervisor environment from virt_spin_lock() as suggested by
2020 Feb 12
5
[PATCH 0/5] x86/vmware: Steal time accounting support
Hello, This patchset introduces steal time accounting support for the VMware guest. The idea and implementation of guest steal time support are similar to KVM's and are based on a steal clock. The steal clock is a per-CPU structure in memory shared between hypervisor and guest, initialized by each CPU through a hypercall. The steal clock is updated by the hypervisor and read by the guest. The
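The scheme described is the usual steal-clock layout: a per-CPU record in memory shared with the hypervisor, registered by each CPU through a hypercall during bring-up, written by the hypervisor and read from the guest's steal-time accounting hook. A minimal sketch of that shape; the field names and the version-based retry loop are modeled on KVM's steal-time record and are assumptions here, not the VMware structure from the patchset:

#include <linux/percpu.h>

/* hypothetical per-CPU record shared with the hypervisor */
struct steal_time_record {
	u32 version;	/* odd while the hypervisor is updating the record */
	u32 reserved;
	u64 steal_ns;	/* total time this vCPU was ready but not running */
};

static DEFINE_PER_CPU(struct steal_time_record, steal_record) __aligned(64);

/* registration with the hypervisor (a hypercall per CPU) is omitted;
 * the read side retries until it sees a consistent, even version */
static u64 read_steal_clock(int cpu)
{
	struct steal_time_record *rec = &per_cpu(steal_record, cpu);
	u32 version;
	u64 steal;

	do {
		version = READ_ONCE(rec->version);
		smp_rmb();
		steal = READ_ONCE(rec->steal_ns);
		smp_rmb();
	} while ((version & 1) || version != READ_ONCE(rec->version));

	return steal;
}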