search for: wakeup_long64

Displaying 14 results from an estimated 14 matches for "wakeup_long64".

2018 May 23
0
[PATCH v3 09/27] x86/acpi: Adapt assembly for PIE support
...--git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S index 50b8ed0317a3..472659c0f811 100644 --- a/arch/x86/kernel/acpi/wakeup_64.S +++ b/arch/x86/kernel/acpi/wakeup_64.S @@ -14,7 +14,7 @@ * Hooray, we are in Long 64-bit mode (but still running in low memory) */ ENTRY(wakeup_long64) - movq saved_magic, %rax + movq saved_magic(%rip), %rax movq $0x123456789abcdef0, %rdx cmpq %rdx, %rax jne bogus_64_magic @@ -25,14 +25,14 @@ ENTRY(wakeup_long64) movw %ax, %es movw %ax, %fs movw %ax, %gs - movq saved_rsp, %rsp + movq saved_rsp(%rip), %rsp - movq saved_rbx, %rbx - mo...
2018 May 24
2
[PATCH v3 09/27] x86/acpi: Adapt assembly for PIE support
...p_64.S b/arch/x86/kernel/acpi/wakeup_64.S > index 50b8ed0317a3..472659c0f811 100644 > --- a/arch/x86/kernel/acpi/wakeup_64.S > +++ b/arch/x86/kernel/acpi/wakeup_64.S > @@ -14,7 +14,7 @@ > * Hooray, we are in Long 64-bit mode (but still running in low memory) > */ > ENTRY(wakeup_long64) > - movq saved_magic, %rax > + movq saved_magic(%rip), %rax > movq $0x123456789abcdef0, %rdx > cmpq %rdx, %rax > jne bogus_64_magic Because, as comment says, this is rather tricky code. Pavel -- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) ht...
2018 May 25
2
[PATCH v3 09/27] x86/acpi: Adapt assembly for PIE support
...c0f811 100644 > > > --- a/arch/x86/kernel/acpi/wakeup_64.S > > > +++ b/arch/x86/kernel/acpi/wakeup_64.S > > > @@ -14,7 +14,7 @@ > > > * Hooray, we are in Long 64-bit mode (but still running in low > memory) > > > */ > > > ENTRY(wakeup_long64) > > > - movq saved_magic, %rax > > > + movq saved_magic(%rip), %rax > > > movq $0x123456789abcdef0, %rdx > > > cmpq %rdx, %rax > > > jne bogus_64_magic > > > Because, as comment says, this is rather tr...
2018 May 24
0
[PATCH v3 09/27] x86/acpi: Adapt assembly for PIE support
...> > index 50b8ed0317a3..472659c0f811 100644 > > --- a/arch/x86/kernel/acpi/wakeup_64.S > > +++ b/arch/x86/kernel/acpi/wakeup_64.S > > @@ -14,7 +14,7 @@ > > * Hooray, we are in Long 64-bit mode (but still running in low memory) > > */ > > ENTRY(wakeup_long64) > > - movq saved_magic, %rax > > + movq saved_magic(%rip), %rax > > movq $0x123456789abcdef0, %rdx > > cmpq %rdx, %rax > > jne bogus_64_magic > Because, as comment says, this is rather tricky code. I agree, I think mainta...
2018 May 25
0
[PATCH v3 09/27] x86/acpi: Adapt assembly for PIE support
...-- a/arch/x86/kernel/acpi/wakeup_64.S > > > > +++ b/arch/x86/kernel/acpi/wakeup_64.S > > > > @@ -14,7 +14,7 @@ > > > > * Hooray, we are in Long 64-bit mode (but still running in low > > memory) > > > > */ > > > > ENTRY(wakeup_long64) > > > > - movq saved_magic, %rax > > > > + movq saved_magic(%rip), %rax > > > > movq $0x123456789abcdef0, %rdx > > > > cmpq %rdx, %rax > > > > jne bogus_64_magic > > > > > Because,...
2018 May 23
33
[PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes: - patch v3: - Update on message to describe longer term PIE goal. - Minor change on ftrace if condition. - Changed code using xchgq. - patch v2: - Adapt patch to work post KPTI and compiler changes - Redo all performance testing with latest configs and compilers - Simplify mov macro on PIE (MOVABS now) - Reduce GOT footprint - patch v1: - Simplify ftrace
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes: - patch v2: - Adapt patch to work post KPTI and compiler changes - Redo all performance testing with latest configs and compilers - Simplify mov macro on PIE (MOVABS now) - Reduce GOT footprint - patch v1: - Simplify ftrace implementation. - Use gcc mstack-protector-guard-reg=%gs with PIE when possible. - rfc v3: - Use --emit-relocs instead of -pie to reduce
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as a Position Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below the top 2G of the virtual address space, which allows optionally extending the KASLR randomization range from 1G to 3G. Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler changes, PIE support and KASLR in general. Thanks to
2017 Oct 11
32
[PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes: - patch v1: - Simplify ftrace implementation. - Use gcc mstack-protector-guard-reg=%gs with PIE when possible. - rfc v3: - Use --emit-relocs instead of -pie to reduce dynamic relocation space on mapped memory. It also simplifies the relocation process. - Move the start of the module section next to the kernel. Remove the need for -mcmodel=large on modules. Extends