search for: per_cpu_var

Displaying 20 results from an estimated 46 matches for "per_cpu_var".

2009 Jan 31
14
[PATCH 2/3] xen: make direct versions of irq_enable/disable/save/restore to common code
....h>
+
+#include "xen-asm.h"
+
+/*
+ Enable events. This clears the event mask and tests the pending
+ event status with one and operation. If there are pending
+ events, then enter the hypervisor to get them handled.
+ */
+ENTRY(xen_irq_enable_direct)
+	/* Unmask events */
+	movb $0, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+
+	/* Preempt here doesn't matter because that will deal with
+	 any pending interrupts. The pending check may end up being
+	 run on the wrong CPU, but that doesn't hurt. */
+
+	/* Test for pending */
+	testb $0xff, PER_CPU_VAR(xen_vcpu_i...
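For context: PER_CPU_VAR is the assembly-side accessor for per-CPU variables, addressing them relative to the per-CPU segment register. A minimal sketch of its definition, paraphrasing arch/x86/include/asm/percpu.h rather than quoting any particular kernel version:

	/* Sketch, not a verbatim quote: on SMP, per-CPU data is addressed
	 * relative to a segment register (%gs on 64-bit, %fs on 32-bit). */
	#ifdef CONFIG_SMP
	#define PER_CPU_VAR(var)	%__percpu_seg:var
	#else
	#define PER_CPU_VAR(var)	var
	#endif

So the movb above clears the current CPU's event-mask byte with a single segment-relative store; XEN_vcpu_info_mask is presumably the asm-offsets constant for the mask byte's offset within the shared vcpu_info structure.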
2015 Nov 18
0
[PATCH 3/3] x86: usergs_sysret32 pv op is no longer needed
...m_64.S
+++ b/arch/x86/xen/xen-asm_64.S
@@ -68,25 +68,6 @@ ENTRY(xen_sysret64)
 ENDPATCH(xen_sysret64)
 RELOC(xen_sysret64, 1b+1)
-ENTRY(xen_sysret32)
-	/*
-	 * We're already on the usermode stack at this point, but
-	 * still with the kernel gs, so we can easily switch back
-	 */
-	movq %rsp, PER_CPU_VAR(rsp_scratch)
-	movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	pushq $__USER32_DS
-	pushq PER_CPU_VAR(rsp_scratch)
-	pushq %r11
-	pushq $__USER32_CS
-	pushq %rcx
-
-	pushq $0
-1:	jmp hypercall_iret
-ENDPATCH(xen_sysret32)
-RELOC(xen_sysret32, 1b+1)
-
 /*
  * Xen handles syscall callbacks much...
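The deleted xen_sysret32 was hand-building the return frame that hypercall_iret consumes. A sketch of that frame, reconstructed from the pushq sequence above (lowest address, i.e. last push, first; field names are illustrative, by analogy with struct pt_regs, and reading the trailing pushq $0 as the hypercall's extra argument word is an assumption):

	/* Sketch of the stack frame built by the deleted code. */
	struct xen_sysret32_frame_sketch {
		unsigned long hypercall_word;	/* pushq $0: extra word for hypercall_iret (assumed) */
		unsigned long ip;		/* pushq %rcx: user RIP saved by SYSCALL */
		unsigned long cs;		/* pushq $__USER32_CS */
		unsigned long flags;		/* pushq %r11: RFLAGS saved by SYSCALL */
		unsigned long sp;		/* pushq PER_CPU_VAR(rsp_scratch): saved user stack */
		unsigned long ss;		/* pushq $__USER32_DS */
	};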
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v2:
  - Adapt patch to work post KPTI and compiler changes
  - Redo all performance testing with latest configs and compilers
  - Simplify mov macro on PIE (MOVABS now)
  - Reduce GOT footprint
- patch v1:
  - Simplify ftrace implementation.
  - Use gcc -mstack-protector-guard-reg=%gs with PIE when possible.
- rfc v3:
  - Use --emit-relocs instead of -pie to reduce
2018 May 23
33
[PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v3:
  - Update commit message to describe the longer-term PIE goal.
  - Minor change on the ftrace if condition.
  - Changed code using xchgq.
- patch v2:
  - Adapt patch to work post KPTI and compiler changes
  - Redo all performance testing with latest configs and compilers
  - Simplify mov macro on PIE (MOVABS now)
  - Reduce GOT footprint
- patch v1:
  - Simplify ftrace
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as a Position Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below the top 2G of the virtual address space, which allows the KASLR randomization range to be optionally extended from 1G to 3G. Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler changes, PIE support and KASLR in general. Thanks to
2007 Jun 06
0
[PATCH UPDATE] xen: use iret directly where possible
...irtual NMI, which we don't implement yet */
+#define XEN_EFLAGS_NMI	0x80000000

 /*
  Enable events. This clears the event mask and tests the pending
@@ -81,13 +87,12 @@ ENDPATCH(xen_save_fl_direct)
  */
 ENTRY(xen_restore_fl_direct)
 	testb $X86_EFLAGS_IF>>8, %ah
-	setz %al
-	movb %al, PER_CPU_VAR(xen_vcpu_info)+XEN_vcpu_info_mask
+	setz PER_CPU_VAR(xen_vcpu_info)+XEN_vcpu_info_mask

 	/* Preempt here doesn't matter because that will deal with
 	 any pending interrupts. The pending check may end up being
 	 run on the wrong CPU, but that doesn't hurt. */

-	/* check for pending...
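Rendered as C, the logic this "direct" asm implements looks roughly like the sketch below. The struct layout, constant, and helper are assumptions inferred from the Xen interface names visible in the diff, not the kernel's actual C pv-op:

	#define X86_EFLAGS_IF 0x200	/* interrupt-enable flag, bit 9 */

	/* Assumed layout: adjacent bytes of the Xen-shared vcpu_info. */
	struct vcpu_info_sketch {
		unsigned char evtchn_upcall_pending;
		unsigned char evtchn_upcall_mask;
	};

	extern void xen_force_evtchn_callback(void);	/* hypercall wrapper, assumed */

	static void xen_restore_fl_sketch(struct vcpu_info_sketch *vcpu,
					  unsigned long flags)
	{
		/* testb $X86_EFLAGS_IF>>8, %ah; setz: mask events iff IF is clear */
		vcpu->evtchn_upcall_mask = !(flags & X86_EFLAGS_IF);

		/* If we just unmasked and events are pending, have the
		 * hypervisor deliver them now. */
		if ((flags & X86_EFLAGS_IF) && vcpu->evtchn_upcall_pending)
			xen_force_evtchn_callback();
	}

The diff itself is a micro-optimization: setz can write the %gs-relative byte directly, dropping the intermediate store through %al.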
2007 Jun 04
1
[PATCH] xen: use iret directly where possible
...irtual NMI, which we don't implement yet */
+#define XEN_EFLAGS_NMI	0x80000000

 /*
  Enable events. This clears the event mask and tests the pending
@@ -81,13 +87,12 @@ ENDPATCH(xen_save_fl_direct)
  */
 ENTRY(xen_restore_fl_direct)
 	testb $X86_EFLAGS_IF>>8, %ah
-	setz %al
-	movb %al, PER_CPU_VAR(xen_vcpu_info)+XEN_vcpu_info_mask
+	setz PER_CPU_VAR(xen_vcpu_info)+XEN_vcpu_info_mask

 	/* Preempt here doesn't matter because that will deal with
 	 any pending interrupts. The pending check may end up being
 	 run on the wrong CPU, but that doesn't hurt. */

-	/* check for pending...
2017 Oct 11
32
[PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v1:
  - Simplify ftrace implementation.
  - Use gcc -mstack-protector-guard-reg=%gs with PIE when possible.
- rfc v3:
  - Use --emit-relocs instead of -pie to reduce dynamic relocation space on
    mapped memory. It also simplifies the relocation process.
  - Move the start of the module section next to the kernel. Remove the need
    for -mcmodel=large on modules. Extends
2020 Aug 24
0
[PATCH v6 48/76] x86/entry/64: Add entry code for #VC handler
...changed, 175 insertions(+)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 26fc9b42fadc..cc054568ad3f 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -101,6 +101,8 @@ SYM_CODE_START(entry_SYSCALL_64)
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+SYM_INNER_LABEL(entry_SYSCALL_64_safe_stack, SYM_L_GLOBAL)
+
 	/* Construct struct pt_regs on stack */
 	pushq	$__USER_DS				/* pt_regs->ss */
 	pushq	PER_CPU_VAR(cpu_tss_rw + TSS_sp2)	/* pt_regs->sp */
@@ -446,6 +448,82 @@ _ASM_NOKPROBE(\asmsym)
 SYM_CODE_E...
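The hunk shows entry_SYSCALL_64 hand-building struct pt_regs right after switching to the kernel stack; the new entry_SYSCALL_64_safe_stack label marks the point from which the stack can be trusted. A sketch of the iret-shaped tail of pt_regs that those pushes create (lowest address, i.e. last push, first; only the ss and sp pushes are visible in the excerpt, the rest is elided there):

	/* Sketch of the tail of struct pt_regs as built by the pushes above. */
	struct pt_regs_tail_sketch {
		unsigned long ip;	/* user RIP, saved by SYSCALL in %rcx */
		unsigned long cs;
		unsigned long flags;	/* user RFLAGS, saved by SYSCALL in %r11 */
		unsigned long sp;	/* pushq PER_CPU_VAR(cpu_tss_rw + TSS_sp2) */
		unsigned long ss;	/* pushq $__USER_DS */
	};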
2018 Nov 02
2
[PULL] vhost: cleanups and fixes
...\
	likely(!__range_not_ok(addr, size, user_addr_max()));	\
})

and

#define user_addr_max() (current->thread.addr_limit.seg)

it seems that it depends on current, not on the active mm. get_user and friends are similar:

ENTRY(__get_user_1)
	mov PER_CPU_VAR(current_task), %_ASM_DX
	cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
	jae bad_get_user
	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
	and %_ASM_DX, %_ASM_AX
	ASM_STAC
1:	movzbl (%_ASM_AX),%edx
	xor %eax,%eax
	ASM_CLAC
	ret
E...
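A conceptual C rendering of the __get_user_1 asm quoted above, underlining the point that the limit comes from current rather than the active mm. This is a sketch only: the real code must stay in asm so the user load at label 1: can get an exception-table fixup, and it assumes the usual kernel helpers (current, array_index_mask_nospec(), stac()/clac()) are in scope:

	/* Sketch of __get_user_1 in C (illustrative, not standalone). */
	static long get_user_1_sketch(unsigned long uaddr, unsigned char *out)
	{
		/* mov PER_CPU_VAR(current_task); cmp TASK_addr_limit(...):
		 * the bound is per-task state, not a property of the mm. */
		unsigned long limit = current->thread.addr_limit.seg;

		if (uaddr >= limit)		/* jae bad_get_user */
			return -EFAULT;

		/* sbb/and pair: clamp the address to 0 if the branch above
		 * is speculatively bypassed (array_index_mask_nospec()). */
		uaddr &= array_index_mask_nospec(uaddr, limit);

		stac();				/* ASM_STAC: open user-access window */
		*out = *(const unsigned char *)uaddr;	/* 1: movzbl (%_ASM_AX),%edx */
		clac();				/* ASM_CLAC: close it again */
		return 0;			/* xor %eax,%eax */
	}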
2015 Nov 18
8
[PATCH 0/3] Fix and cleanup for 32-bit PV sysexit
The first patch fixes a Xen PV regression introduced by the 32-bit rewrite. Unlike the earlier version it uses the ALTERNATIVE instruction and avoids using the xen_sysexit (and, in compat mode, sysret32) pv ops, as suggested by Andy. (I ended up patching TEST with XOR to avoid extra NOPs, even though I said yesterday it would be wrong. It's not wrong.) As a result of this patch, irq_enable_sysexit and
2015 Nov 19
7
[PATCH v2 0/3] Fix and cleanup for 32-bit PV sysexit
The first patch fixes a Xen PV regression introduced by the 32-bit rewrite. Unlike the earlier version it uses the ALTERNATIVE instruction and avoids using the xen_sysexit (and, in compat mode, sysret32) pv ops, as suggested by Andy. As a result of this patch, the irq_enable_sysexit and usergs_sysret32 pv ops are no longer used by anyone and so can be removed.

v2:
* patch both TEST and JZ instructions with a