search for: leaq

Displaying 20 results from an estimated 184 matches for "leaq".

2008 Feb 11
2
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
I'm seeing the following failures with "make check" (x86-32 linux):
FAIL: test/CodeGen/X86/fold-mul-lohi.ll
Failed with exit(1) at line 2
while running: llvm-as < test/CodeGen/X86/fold-mul-lohi.ll | llc -march=x86-64 | not grep lea
        leaq B, %rsi
        leaq A, %r8
        leaq P, %rsi
child process exited abnormally
FAIL: test/CodeGen/X86/stride-nine-with-base-reg.ll
Failed with exit(1) at line 2
while running: llvm-as < test/CodeGen/X86/stride-nine-with-base-reg.ll | llc -march=x86-64 | not grep lea
        leaq B,...
2008 Feb 12
2
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
Hi Evan,

In -relocation-model=static mode, those tests are now getting code like this

        leaq A, %rsi
        movss %xmm0, (%rsi,%rdx,4)

instead of this:

        movss %xmm0, A(,%rdx,4)

This is specifically what these tests were written to catch :-). Running them with -relocation-model=pic is hiding the real bug.

Dan

On Feb 11, 2008, at 11:22 PM, Evan Cheng wrote:
> Fixed. T...
2008 Feb 12
0
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
...rote:
> I'm seeing the following failures with "make check" (x86-32 linux):
>
> FAIL: test/CodeGen/X86/fold-mul-lohi.ll
> Failed with exit(1) at line 2
> while running: llvm-as < test/CodeGen/X86/fold-mul-lohi.ll | llc -march=x86-64 | not grep lea
>         leaq B, %rsi
>         leaq A, %r8
>         leaq P, %rsi
> child process exited abnormally
> FAIL: test/CodeGen/X86/stride-nine-with-base-reg.ll
> Failed with exit(1) at line 2
> while running: llvm-as < test/CodeGen/X86/stride-nine-with-base-reg.ll | llc -march=x86-6...
2008 Feb 12
0
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
...be used even in this case (see page 38). You know more about Linux addressing mode than I do. Please check.

Thanks,

Evan

On Feb 12, 2008, at 10:10 AM, Dan Gohman wrote:
> Hi Evan,
>
> In -relocation-model=static mode, those tests are now getting
> code like this
>
>         leaq A, %rsi
>         movss %xmm0, (%rsi,%rdx,4)
>
> instead of this:
>
>         movss %xmm0, A(,%rdx,4)
>
> This is specifically what these tests were written to catch :-).
> Running them with -relocation-model=pic is hiding the real bug.
>
> Dan
>
> On Feb 11...
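
[Editor's note] The distinction this thread turns on is whether the global's address gets folded into the store's memory operand. A minimal hand-written sketch of the two forms (the label A and the register choices are illustrative, not taken from the tests):

        # Folded form the tests expect with -relocation-model=static:
        # the absolute address of A is encoded directly in the displacement.
        movss   %xmm0, A(,%rdx,4)

        # Unfolded form that the "not grep lea" check rejects: materialize the
        # address with leaq first, then store through it (an extra instruction
        # and an extra register).
        leaq    A, %rsi
        movss   %xmm0, (%rsi,%rdx,4)
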
2017 Oct 11
1
[PATCH v1 01/27] x86/crypto: Adapt assembly for PIE support
...s-x86_64-asm_64.S
+++ b/arch/x86/crypto/aes-x86_64-asm_64.S
@@ -48,8 +48,12 @@
 #define R10	%r10
 #define R11	%r11
+/* Hold global for PIE suport */
+#define RBASE	%r12
+
 #define prologue(FUNC,KEY,B128,B192,r1,r2,r5,r6,r7,r8,r9,r10,r11) \
 	ENTRY(FUNC); \
+	pushq	RBASE; \
 	movq	r1,r2; \
 	leaq	KEY+48(r8),r9; \
 	movq	r10,r11; \
@@ -74,54 +78,63 @@
 	movl	r6 ## E,4(r9); \
 	movl	r7 ## E,8(r9); \
 	movl	r8 ## E,12(r9); \
+	popq	RBASE; \
 	ret; \
 ENDPROC(FUNC);
+#define round_mov(tab_off, reg_i, reg_o) \
+	leaq	tab_off(%rip), RBASE; \
+	movl	(RBASE,reg_i,4), reg_o;
+
+#define...
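
[Editor's note] The pattern the patch introduces is a two-step, RIP-relative table access through the reserved RBASE register instead of an absolute-addressed load. A hand-written sketch of the idea (the name lookup_table and the registers here are hypothetical, not symbols from the patch):

        # Absolute form: the table's 32-bit absolute address is encoded in the
        # instruction, which requires the code to be linked at a fixed address.
        movl    lookup_table(,%rdi,4), %eax

        # PIE-friendly form, as in the round_mov macro above: compute the table
        # base relative to the instruction pointer, then index through RBASE.
        leaq    lookup_table(%rip), %r12
        movl    (%r12,%rdi,4), %eax
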
2017 Oct 11
1
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
...ntry/entry_64.S
index 49167258d587..15bd5530d2ae 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -194,12 +194,15 @@ entry_SYSCALL_64_fastpath:
 	ja	1f	/* return -ENOSYS (already in pt_regs->ax) */
 	movq	%r10, %rcx
+	/* Ensures the call is position independent */
+	leaq	sys_call_table(%rip), %r11
+
 	/*
 	 * This call instruction is handled specially in stub_ptregs_64.
 	 * It might end up jumping to the slow path. If it jumps, RAX
 	 * and all argument registers are clobbered.
 	 */
-	call	*sys_call_table(, %rax, 8)
+	call	*(%r11, %rax, 8)
 .Lentry_SYSCALL_64_af...
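
[Editor's note] For reference on why the original form has to change: "call *sys_call_table(, %rax, 8)" encodes the table's absolute address as a 32-bit displacement, which only works if the kernel is linked at a fixed address within the range a 32-bit field can reach; the added leaq computes the same base RIP-relatively at run time. A standalone sketch of the two encodings (illustrative, not part of the patch):

        # Absolute base: needs a 32-bit absolute relocation for sys_call_table.
        call    *sys_call_table(, %rax, 8)

        # Position-independent: base computed relative to the current RIP,
        # then an indexed indirect call through a scratch register.
        leaq    sys_call_table(%rip), %r11
        call    *(%r11, %rax, 8)
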
2018 Jan 18
1
LEAQ instruction path
Hi,

I've been trying to teach LLVM that pointers are 128 bits long, which segfaults with a seemingly unrelated stack trace when I try to take the address of a variable. Since stack saving and loading seem to work fine, I dare to assume the instruction causing problems there is leaq. Now I've done a search for leaq across the entire LLVM codebase with no success, and I'd like to know which DAG nodes, and eventually which instructions, the last store in the following piece of code gets translated to before it is emitted.

  %1 = alloca i32, align 4
  %2 = alloca i32*, align 8...
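
[Editor's note] For context on what stock x86-64 codegen does with this pattern: taking the address of a stack slot is normally selected as a frame-relative lea (X86's LEA64r of the frame index), and storing that address is a plain movq. A hand-written sketch of the expected output, assuming the elided store writes the address of %1 into %2 and pointers are the usual 64 bits (offsets and registers are illustrative):

        # %1 = alloca i32    -> a 4-byte stack slot, e.g. at -12(%rsp)
        # %2 = alloca i32*   -> an 8-byte stack slot, e.g. at -8(%rsp)
        # storing the address of %1 into %2:
        leaq    -12(%rsp), %rax
        movq    %rax, -8(%rsp)
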
2017 Oct 20
3
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
.../entry/entry_64.S
>> @@ -194,12 +194,15 @@ entry_SYSCALL_64_fastpath:
>> 	ja	1f	/* return -ENOSYS (already in pt_regs->ax) */
>> 	movq	%r10, %rcx
>>
>> +	/* Ensures the call is position independent */
>> +	leaq	sys_call_table(%rip), %r11
>> +
>> 	/*
>> 	 * This call instruction is handled specially in stub_ptregs_64.
>> 	 * It might end up jumping to the slow path. If it jumps, RAX
>> 	 * and all argument registers are clobbered.
>> 	 */
>...
2017 Oct 20
0
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
...5530d2ae 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -194,12 +194,15 @@ entry_SYSCALL_64_fastpath:
> 	ja	1f	/* return -ENOSYS (already in pt_regs->ax) */
> 	movq	%r10, %rcx
>
> +	/* Ensures the call is position independent */
> +	leaq	sys_call_table(%rip), %r11
> +
> 	/*
> 	 * This call instruction is handled specially in stub_ptregs_64.
> 	 * It might end up jumping to the slow path. If it jumps, RAX
> 	 * and all argument registers are clobbered.
> 	 */
> -	call	*sys_call_table(, %rax, 8)
> +	call...
2013 Feb 01
2
[LLVMdev] Question about compilation result - taking address of input array member
...rayidx = getelementptr inbounds i32* %0, i64 2
  ret i32* %arrayidx
}

$ llc -O3 takeaddr.ll -o -
	.file	"takeaddr.ll"
	.text
	.globl	bar
	.align	16, 0x90
	.type	bar,@function
bar:                                    # @bar
# BB#0:                                 # %entry
	movq	%rdi, -8(%rsp)
	leaq	8(%rdi), %rax
	ret
.Ltmp0:
	.size	bar, .Ltmp0-bar

	.section	".note.GNU-stack","",@progbits

The first instruction in "bar" is not clear. Why is it needed? It seems harmless, but does it serve any purpose? Alignment? ISTM that the leaq suffices to perform the actual t...
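
[Editor's note] For comparison, the minimal code one would expect for returning the address of element 2 is just the address arithmetic; the extra movq in the quoted output is the puzzle. A hand-written sketch, not compiler output:

bar:                                    # i32* bar(i32* %p) { return &p[2]; }
	leaq	8(%rdi), %rax           # %rax = %rdi + 2 * sizeof(i32)
	ret
	# The quoted output additionally spills the incoming pointer to the
	# red zone with "movq %rdi, -8(%rsp)", which is the instruction the
	# question is asking about.
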
2013 Feb 01
0
[LLVMdev] Question about compilation result - taking address of input array member
...arrayidx
> }
>
> $ llc -O3 takeaddr.ll -o -
> 	.file	"takeaddr.ll"
> 	.text
> 	.globl	bar
> 	.align	16, 0x90
> 	.type	bar,@function
> bar:                                    # @bar
> # BB#0:                                 # %entry
> 	movq	%rdi, -8(%rsp)
> 	leaq	8(%rdi), %rax
> 	ret
> .Ltmp0:
> 	.size	bar, .Ltmp0-bar
>
>
> 	.section	".note.GNU-stack","",@progbits
>
> The first instruction in "bar" is not clear. Why is it needed? It
> seems harmless, but does it serve any purpose? Alignment? ISTM that...
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
 - patch v2:
   - Adapt patch to work post KPTI and compiler changes
   - Redo all performance testing with latest configs and compilers
   - Simplify mov macro on PIE (MOVABS now)
   - Reduce GOT footprint
 - patch v1:
   - Simplify ftrace implementation.
   - Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
 - rfc v3:
   - Use --emit-relocs instead of -pie to reduce
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as a Position Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below the top 2G of the virtual address space. It allows the KASLR randomization range to be optionally extended from 1G to 3G. Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler changes, PIE support and KASLR in general. Thanks to
2017 Oct 11
32
[PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
 - patch v1:
   - Simplify ftrace implementation.
   - Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
 - rfc v3:
   - Use --emit-relocs instead of -pie to reduce dynamic relocation space on mapped memory. It also simplifies the relocation process.
   - Move the start of the module section next to the kernel. Remove the need for -mcmodel=large on modules. Extends
2015 Oct 27
4
How can I tell llvm, that a branch is preferred ?
...nothing in the specs for "branch" or "switch". And __builtin_expect does nothing, of that I am sure. Unfortunately llvm has this knack for ordering my one most crucial part of code exactly the opposite of what I want; it does (x86_64):

	cmpq	%r15, (%rax,%rdx)
	jne	LBB0_3
Ltmp18:
	leaq	8(%rax,%rdx), %rcx
	jmp	LBB0_4
LBB0_3:
	addq	$8, %rcx
LBB0_4:

when I want:

	cmpq	%r15, (%rax,%rdx)
	jeq	LBB0_3
	addq	$8, %rcx
	jmp	LBB0_4
LBB0_3:
	leaq	8(%rax,%rdx), %rcx
LBB0_4:

since that saves me executing a jump 99.9% of the time. Is there anything I can do?

Ciao
Nat!
2014 Jul 23
4
[LLVMdev] the clang 3.5 loop optimizer seems to jump in unintentional for simple loops
...the_func(array);
  delete[] array;
  return dummy;
}
----
compiled with gcc 4.9.1 and clang 3.5

with clang 3.5 + #define ITER
the_func contains masses of code
the code in main is also sometimes different (not just inlined) to the_func

clang -DITER -O2
clang -DITER -O3

gives:

the_func:
	leaq	12(%rdi), %rcx
	leaq	4(%rdi), %rax
	cmpq	%rax, %rcx
	cmovaq	%rcx, %rax
	movq	%rdi, %rsi
	notq	%rsi
	addq	%rax, %rsi
	shrq	$2, %rsi
	incq	%rsi
	xorl	%edx, %edx
	movabsq	$9223372036854775800, %rax      # imm = 0x7FFFFFFFFFF...