Displaying 20 results from an estimated 63 matches for "__per_cpu_offset".
2017 Feb 13
4
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...gt;> +"__raw_callee_save___kvm_vcpu_is_preempted:"
> >> +FRAME_BEGIN
> >> +"push %rdi;"
> >> +"push %rdx;"
> >> +"movslq %edi, %rdi;"
> >> +"movq $steal_time+16, %rax;"
> >> +"movq __per_cpu_offset(,%rdi,8), %rdx;"
> >> +"cmpb $0, (%rdx,%rax);"
Could we not put the $steal_time+16 displacement as an immediate in the
cmpb and save a whole register here?
That way we'd end up with something like:
asm("
push %rdi;
movslq %edi, %rdi;
movq __per_cpu_offset(,%rd...
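For orientation, a minimal C-level sketch of the check the quoted thunk performs. The struct layout and function name below are assumptions, not the kernel's actual definitions: the $steal_time+16 operand in the quoted asm only implies a one-byte preempted field 16 bytes into a per-cpu steal_time area.

#include <linux/percpu.h>
#include <linux/types.h>

/*
 * Sketch only: hypothetical per-cpu layout matching the $steal_time+16
 * operand in the quoted asm (preempted byte at offset 16).
 */
struct steal_time_sketch {
	char pad[16];			/* fields before the flag */
	unsigned char preempted;	/* nonzero while this vCPU is preempted */
};

DECLARE_PER_CPU(struct steal_time_sketch, steal_time);

static bool vcpu_is_preempted_sketch(int cpu)
{
	/* per_cpu() adds __per_cpu_offset[cpu] to the symbol's address */
	return per_cpu(steal_time, cpu).preempted != 0;
}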
2017 Feb 13
5
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...Peter Zijlstra wrote:
> >> On Mon, Feb 13, 2017 at 11:47:16AM +0100, Peter Zijlstra wrote:
> >>> That way we'd end up with something like:
> >>>
> >>> asm("
> >>> push %rdi;
> >>> movslq %edi, %rdi;
> >>> movq __per_cpu_offset(,%rdi,8), %rax;
> >>> cmpb $0, %[offset](%rax);
> >>> setne %al;
> >>> pop %rdi;
> >>> " : : [offset] "i" (((unsigned long)&steal_time) + offsetof(struct steal_time, preempted)));
> >>>
> >>> And if we could...
2017 Feb 13
2
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
On 02/13/2017 05:53 AM, Peter Zijlstra wrote:
> On Mon, Feb 13, 2017 at 11:47:16AM +0100, Peter Zijlstra wrote:
>> That way we'd end up with something like:
>>
>> asm("
>> push %rdi;
>> movslq %edi, %rdi;
>> movq __per_cpu_offset(,%rdi,8), %rax;
>> cmpb $0, %[offset](%rax);
>> setne %al;
>> pop %rdi;
>> " : : [offset] "i" (((unsigned long)&steal_time) + offsetof(struct steal_time, preempted)));
>>
>> And if we could get rid of the sign extend on edi we could avoid all t...
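To illustrate the follow-up point about the sign extension, a sketch under that assumption (this is not code from the thread): if the cpu argument in %edi could be treated as already zero-extended, the thunk would not need to modify %rdi at all, so the push/pop pairs could go away entirely.

/* Hypothetical streamlined thunk, assuming %edi needs no sign extension. */
asm(
".pushsection .text;"
".global __raw_callee_save___kvm_vcpu_is_preempted;"
".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
"__raw_callee_save___kvm_vcpu_is_preempted:"
"movq __per_cpu_offset(,%rdi,8), %rax;"	/* per-cpu base for this cpu */
"cmpb $0, steal_time+16(%rax);"		/* test the steal_time.preempted byte */
"setne %al;"				/* %al carries the return value */
"ret;"
".popsection");

Only %rax, the return register, is touched, so nothing needs saving and restoring and the callee-save wrapper becomes nearly free.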
2017 Feb 13
3
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...ruary 13, 2017 2:53:43 AM PST, Peter Zijlstra <peterz at infradead.org> wrote:
>On Mon, Feb 13, 2017 at 11:47:16AM +0100, Peter Zijlstra wrote:
>> That way we'd end up with something like:
>>
>> asm("
>> push %rdi;
>> movslq %edi, %rdi;
>> movq __per_cpu_offset(,%rdi,8), %rax;
>> cmpb $0, %[offset](%rax);
>> setne %al;
>> pop %rdi;
>> " : : [offset] "i" (((unsigned long)&steal_time) + offsetof(struct
>steal_time, preempted)));
>>
>> And if we could get rid of the sign extend on edi we could avoid...
2012 Aug 10
0
[PATCH v2 3/6] x86/xen: Read variables from dynamically allocated per_cpu data
...r.c crash-6.0.8/xen_hyper.c
--- crash-6.0.8.orig/xen_hyper.c 2012-07-05 15:47:09.000000000 +0200
+++ crash-6.0.8/xen_hyper.c 2012-07-05 15:50:19.000000000 +0200
@@ -64,7 +64,6 @@ xen_hyper_init(void)
machdep->get_smp_cpus();
machdep->memory_size();
-#ifdef IA64
if (symbol_exists("__per_cpu_offset")) {
xht->flags |= XEN_HYPER_SMP;
if((xht->__per_cpu_offset = malloc(sizeof(ulong) * XEN_HYPER_MAX_CPUS())) == NULL) {
@@ -76,7 +75,6 @@ xen_hyper_init(void)
error(FATAL, "cannot read __per_cpu_offset.\n");
}
}
-#endif
#if defined(X86) || defined(X86_64)
if...
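The arithmetic the tool relies on is simple enough to show in a standalone hedged sketch (plain user-space C, not crash's actual code): once the __per_cpu_offset[] table has been read out of the dump, a per-cpu symbol's instance for cpu N typically lives at the symbol's address plus __per_cpu_offset[N]. The addresses below are made-up placeholders.

#include <stdio.h>

static unsigned long per_cpu_addr(const unsigned long *per_cpu_offset,
				  unsigned long symbol_addr, int cpu)
{
	return symbol_addr + per_cpu_offset[cpu];
}

int main(void)
{
	unsigned long offsets[2] = { 0x11000UL, 0x31000UL };	/* fake __per_cpu_offset[] */
	unsigned long steal_time_sym = 0x1a40UL;		/* fake per-cpu symbol value */

	printf("cpu1 copy lives at %#lx\n",
	       per_cpu_addr(offsets, steal_time_sym, 1));
	return 0;
}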
2012 Mar 09
10
[PATCH 0 of 9] (v2) arm: SMP boot
This patch series implements SMP boot for arch/arm, as far as getting
all CPUs up and running the idle loop.
Changes from v1:
- moved barriers out of loop in udelay()
- dropped broken GIC change in favour of explanatory comment
- made the increment of ready_cpus atomic (I couldn't move the
increment to before signalling the next CPU because the PT
switch has to happen between
2007 Apr 18
5
[patch 0/5] i386-gdt-pda i386 gdt and pda updates
Hi Andrew,
This patch series adds to the end of the existing i386-gdt-cleanups patches:
allow-per-cpu-variables-to-be-page-aligned.patch
i386-gdt-cleanups-use-per-cpu-variables-for-gdt-pda.patch
i386-gdt-cleanups-use-per-cpu-gdt-immediately-upon-boot.patch
i386-gdt-cleanups-use-per-cpu-gdt-immediately-upon-boot-fix.patch
i386-gdt-cleanups-clean-up-cpu_init.patch
2017 Feb 10
2
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...aw_callee_save___kvm_vcpu_is_preempted, @function;"
> +"__raw_callee_save___kvm_vcpu_is_preempted:"
> +FRAME_BEGIN
> +"push %rdi;"
> +"push %rdx;"
> +"movslq %edi, %rdi;"
> +"movq $steal_time+16, %rax;"
> +"movq __per_cpu_offset(,%rdi,8), %rdx;"
> +"cmpb $0, (%rdx,%rax);"
> +"setne %al;"
> +"pop %rdx;"
> +"pop %rdi;"
> +FRAME_END
> +"ret;"
> +".popsection");
> +
> +#endif
> +
> /*
> * Setup pv_lock_ops to exploit KV...
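For reference, the quoted thunk restated with editorial comments. The comments are not in the posted patch, the section/type directives truncated from the excerpt are elided here too, and FRAME_BEGIN/FRAME_END are assumed to be the kernel's usual asm/frame.h macros.

asm(
/* ... .pushsection/.global/.type preamble as in the patch ... */
"__raw_callee_save___kvm_vcpu_is_preempted:"
FRAME_BEGIN					/* objtool frame annotation */
"push %rdi;"					/* callee-save thunk: preserve %rdi */
"push %rdx;"					/* and the %rdx scratch register */
"movslq %edi, %rdi;"				/* sign-extend the int cpu argument */
"movq $steal_time+16, %rax;"			/* per-cpu offset of steal_time.preempted */
"movq __per_cpu_offset(,%rdi,8), %rdx;"		/* per-cpu base address for that cpu */
"cmpb $0, (%rdx,%rax);"				/* test the preempted byte */
"setne %al;"					/* return 1 if set, 0 otherwise */
"pop %rdx;"
"pop %rdi;"
FRAME_END
"ret;"
".popsection");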
2007 Apr 18
8
[patch 0/6] i386 gdt and percpu cleanups
Hi Andi,
This is a series of patches based on your latest queue (as of the
other day, at least).
It includes:
- the most recent patch to compute the appropriate amount of percpu
space to allocate, using a separate reservation for modules where
needed.
- make the percpu sections page-aligned, so that percpu variables can
be page aligned if needed (which is used by gdt_page)
-
2013 Feb 14
2
[PATCH] x86/xen: don't assume %ds is usable in xen_iret for 32-bit PVOPS.
...n/xen-asm_32.S b/arch/x86/xen/xen-asm_32.S
> index f9643fc..33ca6e4 100644
> --- a/arch/x86/xen/xen-asm_32.S
> +++ b/arch/x86/xen/xen-asm_32.S
> @@ -89,11 +89,11 @@ ENTRY(xen_iret)
> */
> #ifdef CONFIG_SMP
> GET_THREAD_INFO(%eax)
> - movl TI_cpu(%eax), %eax
> - movl __per_cpu_offset(,%eax,4), %eax
> - mov xen_vcpu(%eax), %eax
> + movl %ss:TI_cpu(%eax), %eax
> + movl %ss:__per_cpu_offset(,%eax,4), %eax
> + mov %ss:xen_vcpu(%eax), %eax
> #else
> - movl xen_vcpu, %eax
> + movl %ss:xen_vcpu, %eax
> #endif
>
> /* check IF state we're restorin...
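At the C level the SMP branch in the diff is just a per-cpu pointer load; a hedged restatement follows (illustrative only, since the real lookup has to stay in assembly on the iret path, which is exactly why the patch adds %ss: overrides instead of relying on %ds). The declaration of xen_vcpu is assumed to match the usual per-cpu pointer in the Xen enlighten code.

#include <linux/percpu.h>

struct vcpu_info;			/* from the Xen interface headers */
DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);

/* Sketch of what GET_THREAD_INFO + TI_cpu + __per_cpu_offset compute. */
static struct vcpu_info *xen_iret_vcpu_sketch(int cpu)
{
	/* __per_cpu_offset[cpu] + &xen_vcpu, then one pointer load */
	return per_cpu(xen_vcpu, cpu);
}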
2017 Feb 13
0
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
On Mon, Feb 13, 2017 at 11:47:16AM +0100, Peter Zijlstra wrote:
> That way we'd end up with something like:
>
> asm("
> push %rdi;
> movslq %edi, %rdi;
> movq __per_cpu_offset(,%rdi,8), %rax;
> cmpb $0, %[offset](%rax);
> setne %al;
> pop %rdi;
> " : : [offset] "i" (((unsigned long)&steal_time) + offsetof(struct steal_time, preempted)));
>
> And if we could get rid of the sign extend on edi we could avoid all the
> push-pop nonsen...
2017 Feb 13
0
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...rote:
> On 02/13/2017 05:53 AM, Peter Zijlstra wrote:
>> On Mon, Feb 13, 2017 at 11:47:16AM +0100, Peter Zijlstra wrote:
>>> That way we'd end up with something like:
>>>
>>> asm("
>>> push %rdi;
>>> movslq %edi, %rdi;
>>> movq __per_cpu_offset(,%rdi,8), %rax;
>>> cmpb $0, %[offset](%rax);
>>> setne %al;
>>> pop %rdi;
>>> " : : [offset] "i" (((unsigned long)&steal_time) + offsetof(struct steal_time, preempted)));
>>>
>>> And if we could get rid of the sign extend on...