Displaying 5 results for "stub_ptregs_64".
2017 Oct 11
1
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
...+ b/arch/x86/entry/entry_64.S
@@ -194,12 +194,15 @@ entry_SYSCALL_64_fastpath:
ja 1f /* return -ENOSYS (already in pt_regs->ax) */
movq %r10, %rcx
+ /* Ensures the call is position independent */
+ leaq sys_call_table(%rip), %r11
+
/*
* This call instruction is handled specially in stub_ptregs_64.
* It might end up jumping to the slow path. If it jumps, RAX
* and all argument registers are clobbered.
*/
- call *sys_call_table(, %rax, 8)
+ call *(%r11, %rax, 8)
.Lentry_SYSCALL_64_after_fastpath_call:
movq %rax, RAX(%rsp)
@@ -334,7 +337,8 @@ ENTRY(stub_ptregs_64)
* RAX store...
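
The hunk above is the core of the fast-path conversion: call *sys_call_table(, %rax, 8) encodes the table's absolute address as a sign-extended 32-bit displacement, which only resolves while the kernel is linked into the top 2G (-mcmodel=kernel). A minimal standalone sketch of the before/after, where sys_call_table is a one-entry stand-in rather than the real kernel table:

        .text
        .globl  dispatch_sketch
dispatch_sketch:
        /*
         * Non-PIE form (kept as a comment): the absolute address of
         * sys_call_table is encoded as a sign-extended 32-bit
         * displacement (R_X86_64_32S), which only resolves when the
         * image is linked into the top 2G:
         *
         *      call    *sys_call_table(, %rax, 8)
         */

        /* PIE form: take the table base RIP-relatively, then index
         * through %r11, a scratch register the SYSCALL ABI already
         * treats as clobbered. */
        leaq    sys_call_table(%rip), %r11
        call    *(%r11, %rax, 8)
        ret

        .data
sys_call_table:
        .quad   dispatch_sketch         /* single stand-in entry */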
2017 Oct 20
0
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
...+194,15 @@ entry_SYSCALL_64_fastpath:
> ja 1f /* return -ENOSYS (already in pt_regs->ax) */
> movq %r10, %rcx
>
> + /* Ensures the call is position independent */
> + leaq sys_call_table(%rip), %r11
> +
> /*
> * This call instruction is handled specially in stub_ptregs_64.
> * It might end up jumping to the slow path. If it jumps, RAX
> * and all argument registers are clobbered.
> */
> - call *sys_call_table(, %rax, 8)
> + call *(%r11, %rax, 8)
> .Lentry_SYSCALL_64_after_fastpath_call:
>
> movq %rax, RAX(%rsp)
> @@ -334,7 +...
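
The hunk truncated above touches ENTRY(stub_ptregs_64), the stub the fast-path comment refers to: it compares the return address saved on the stack against the fast-path label to decide whether the slow path is needed. The exact hunk is elided here, but the same leaq transformation applies, since a label address can no longer be a sign-extended immediate operand under PIE. A hedged sketch of that pattern, with placeholder labels rather than the real kernel ones:

        .text
        .globl  stub_sketch
stub_sketch:
        /*
         * Non-PIE form (comment): the label address is an absolute
         * sign-extended immediate operand:
         *
         *      cmpq    $.Lfastpath_ret, (%rsp)
         */

        /* PIE form: compute the label address RIP-relatively first. */
        leaq    .Lfastpath_ret(%rip), %r11
        cmpq    %r11, (%rsp)            /* called from the fast path? */
        jne     .Lslow                  /* no: placeholder slow path */
        ret
.Lslow:
        ret
.Lfastpath_ret:                         /* placeholder return label */
        ret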
2017 Oct 20
3
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
.../* return -ENOSYS (already in pt_regs->ax) */
>> movq %r10, %rcx
>>
>> + /* Ensures the call is position independent */
>> + leaq sys_call_table(%rip), %r11
>> +
>> /*
>> * This call instruction is handled specially in stub_ptregs_64.
>> * It might end up jumping to the slow path. If it jumps, RAX
>> * and all argument registers are clobbered.
>> */
>> - call *sys_call_table(, %rax, 8)
>> + call *(%r11, %rax, 8)
>> .Lentry_SYSCALL_64_after_fastpath_call:
...
2017 Oct 11
32
[PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v1:
- Simplify ftrace implementation.
- Use gcc mstack-protector-guard-reg=%gs with PIE when possible (sketch below).
- rfc v3:
- Use --emit-relocs instead of -pie to reduce dynamic relocation space on
mapped memory. It also simplifies the relocation process.
- Move the start of the module section next to the kernel. Remove the need for
-mcmodel=large on modules. Extends
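
The mstack-protector-guard-reg=%gs item deserves a note: with a %gs-based guard the compiler reads the stack canary through a segment override instead of an absolute symbol reference, which keeps function prologues position independent. A sketch of the resulting access, with a placeholder offset rather than the kernel's actual per-cpu layout:

        .text
        .globl  canary_sketch
canary_sketch:
        /*
         * What -mstack-protector-guard-reg=%gs changes: the canary is
         * read through the %gs segment rather than via an absolute
         * symbol, so no absolute relocation is emitted. The offset 40
         * is a placeholder, not the kernel's actual layout.
         */
        movq    %gs:40, %rax
        ret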
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as Position
Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below
the top 2G of the virtual address space. It allows optionally extending the
KASLR randomization range from 1G to 3G.
Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler
changes, PIE support and KASLR in general. Thanks to
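
The 2G limit mentioned above comes from the default -mcmodel=kernel code model, in which symbol references are sign-extended 32-bit values, so the image must sit in the top 2G of the address space. PIE replaces those references with RIP-relative ones, which is what allows the wider randomization range. A minimal sketch of the two reference styles, with some_var as a placeholder symbol:

        .text
        .globl  addr_sketch
addr_sketch:
        /*
         * -mcmodel=kernel (comment): an absolute reference, encodable
         * only while the symbol lives in the top 2G:
         *
         *      movq    $some_var, %rax
         */

        /* PIE: RIP-relative, valid at any load address, which is what
         * lets KASLR randomize over the wider 1G-to-3G range above. */
        leaq    some_var(%rip), %rax
        ret

        .data
some_var:
        .quad   42                      /* placeholder */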