search for: cfi_restore

Displaying 13 results from an estimated 13 matches for "cfi_restore".

2009 Aug 18
0
[LLVMdev] Build issues on Solaris
Hello, Nathan > or if it should be a configure test, which might be safer. Are there > any x86 platforms (other than apple) that don't need PLT-indirect calls? Yes, mingw. However, just tweaking the define is not enough - we're not loading the address of the GOT into %ebx before the call (on 32-bit ABIs), so the call will go nowhere. -- With best regards, Anton Korobeynikov Faculty of
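For reference, a minimal sketch of the missing piece Anton describes: on 32-bit ELF, a call through the PLT expects %ebx to hold the GOT address, so a stub has to materialize it first. ExampleStub, example_get_pc_thunk_bx and ExampleCallee are hypothetical names (GCC's real thunk is __x86.get_pc_thunk.bx), written in the inline-asm string style of X86JITInfo.cpp:

    asm(
        ".text\n"
        "example_get_pc_thunk_bx:\n"               /* hypothetical PC thunk */
        "    movl (%esp), %ebx\n"                  /* %ebx = return address */
        "    ret\n"
        "ExampleStub:\n"                           /* hypothetical i386 PIC stub */
        "    pushl %ebx\n"                         /* %ebx is callee-saved */
        "    call  example_get_pc_thunk_bx\n"
        "    addl  $_GLOBAL_OFFSET_TABLE_, %ebx\n" /* %ebx = GOT base */
        "    call  ExampleCallee@PLT\n"            /* PLT entry indexes the GOT via %ebx */
        "    popl  %ebx\n"
        "    ret\n"
    );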
2009 Aug 25
2
[LLVMdev] Build issues on Solaris
...n" + "movl %ebp, %esp\n" // Restore ESP + CFI(".cfi_def_cfa_register %esp\n") + "subl $16, %esp\n" + CFI(".cfi_adjust_cfa_offset 16\n") + "popl %ebx\n" + CFI(".cfi_adjust_cfa_offset -4\n") + CFI(".cfi_restore %ebx\n") + "popl %ecx\n" + CFI(".cfi_adjust_cfa_offset -4\n") + CFI(".cfi_restore %ecx\n") + "popl %edx\n" + CFI(".cfi_adjust_cfa_offset -4\n") + CFI(".cfi_restore %edx\n") + "popl %eax\n" +...
2009 Aug 11
6
[LLVMdev] Build issues on Solaris
Hi all, I've encountered a couple of minor build issues on Solaris that have crept in since 2.5, fixes below: 1. In lib/Target/X86/X86JITInfo.cpp, there is: // Check if building with -fPIC #if defined(__PIC__) && __PIC__ && defined(__linux__) #define ASMCALLSUFFIX "@PLT" #else #define ASMCALLSUFFIX #endif Which causes a link failure due to the non-PLT
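One possible direction, sketched as a compile-time check rather than the configure test discussed in the thread; the exact set of platforms that need PLT-indirect calls is the open question, and this does not address the GOT/%ebx issue raised in the Aug 18 reply:

    /* Hypothetical broader guard: any ELF target built with -fPIC,
       instead of only Linux. */
    #if defined(__PIC__) && __PIC__ && defined(__ELF__)
    # define ASMCALLSUFFIX "@PLT"
    #else
    # define ASMCALLSUFFIX
    #endif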
2017 Oct 06
2
CFI directives for callee saved registers
...e changes to the prologue to not spill callee-saved GPRs to the stack but rather spill them to unused vector registers. I'm not sure how to handle this in the CFI directives. Originally, we would use cfi_offset to give the offset at which the register is saved on the stack. I tried to instead use the cfi_restore directive. As the docs say, ".cfi_restore says that the rule for Register is now the same as it was at the beginning of the function, after all initial instructions added by .cfi_startproc were executed." To use this, I need to add new instructions that move the value from the vector b...
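For reference, a minimal x86-64 stack-spill example of the two directives being compared (cfi_example is a hypothetical function; the vector-register scheme from the thread is not shown): .cfi_offset records where a callee-saved register was spilled relative to the CFA, while .cfi_restore says its unwind rule is again whatever it was right after .cfi_startproc.

    asm(
        ".text\n"
        "cfi_example:\n"
        "    .cfi_startproc\n"
        "    pushq %rbx\n"
        "    .cfi_adjust_cfa_offset 8\n"
        "    .cfi_offset %rbx, -16\n"          /* old %rbx saved at CFA-16 */
        "    movq  %rdi, %rbx\n"               /* body clobbers %rbx */
        "    movq  %rbx, %rax\n"
        "    popq  %rbx\n"
        "    .cfi_adjust_cfa_offset -8\n"
        "    .cfi_restore %rbx\n"              /* back to the entry-time rule for %rbx */
        "    ret\n"
        "    .cfi_endproc\n"
    );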
2014 Aug 08
4
[LLVMdev] Efficient Pattern matching in Instruction Combine
....cfi_def_cfa_register 5 andl $-16, %esp subl $32, %esp leal 28(%esp), %eax movl %eax, 8(%esp) leal 24(%esp), %eax movl %eax, 4(%esp) movl $.LC0, (%esp) call __isoc99_scanf movl 24(%esp), %eax * orl 28(%esp), %eax* leave .cfi_restore 5 .cfi_def_cfa 4, 4 ret .cfi_endproc GCC also did the optimization. Now we just *slightly flip* the test case : *1.c Test case:*#include<stdio.h> int cal(int a, int b) { *return ((b & ~a) | a);* } int main(){ int a, b; scanf("%d %d", &a, &b); return cal...
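The reason GCC's output above ends in a single orl: for any bit pattern, (b & ~a) | a equals a | b, so cal() reduces to a plain bitwise OR. A small standalone check (hypothetical driver, not from the thread):

    #include <assert.h>

    static int cal(int a, int b) { return (b & ~a) | a; }

    int main(void) {
        for (int a = -16; a < 16; ++a)
            for (int b = -16; b < 16; ++b)
                assert(cal(a, b) == (a | b));  /* (b & ~a) | a == a | b */
        return 0;
    }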
2007 May 21
2
changing definition of paravirt_ops.iret
...he usermode %fs + * from the stack. * * For DISABLE_INTERRUPTS/ENABLE_INTERRUPTS (aka "cli"/"sti"), you must * specify what registers can be overwritten (CLBR_NONE, CLBR_EAX/EDX/ECX/ANY). @@ -166,21 +169,15 @@ 2: popl %es; \ 2: popl %es; \ CFI_ADJUST_CFA_OFFSET -4;\ /*CFI_RESTORE es;*/\ -3: popl %fs; \ - CFI_ADJUST_CFA_OFFSET -4;\ - /*CFI_RESTORE fs;*/\ .pushsection .fixup,"ax"; \ 4: movl $0,(%esp); \ jmp 1b; \ 5: movl $0,(%esp); \ jmp 2b; \ -6: movl $0,(%esp); \ - jmp 3b; \ .section __ex_table,"a";\ .align 4; \ .long 1b,4b; \ .long 2b,5...
2007 May 21
2
changing definition of paravirt_ops.iret
...he usermode %fs + * from the stack. * * For DISABLE_INTERRUPTS/ENABLE_INTERRUPTS (aka "cli"/"sti"), you must * specify what registers can be overwritten (CLBR_NONE, CLBR_EAX/EDX/ECX/ANY). @@ -166,21 +169,15 @@ 2: popl %es; \ 2: popl %es; \ CFI_ADJUST_CFA_OFFSET -4;\ /*CFI_RESTORE es;*/\ -3: popl %fs; \ - CFI_ADJUST_CFA_OFFSET -4;\ - /*CFI_RESTORE fs;*/\ .pushsection .fixup,"ax"; \ 4: movl $0,(%esp); \ jmp 1b; \ 5: movl $0,(%esp); \ jmp 2b; \ -6: movl $0,(%esp); \ - jmp 3b; \ .section __ex_table,"a";\ .align 4; \ .long 1b,4b; \ .long 2b,5...
2014 Aug 13
2
[LLVMdev] Efficient Pattern matching in Instruction Combine
...> leal 24(%esp), %eax >>>> movl %eax, 4(%esp) >>>> movl $.LC0, (%esp) >>>> call __isoc99_scanf >>>> movl 24(%esp), %eax >>>> * orl 28(%esp), %eax* >>>> leave >>>> .cfi_restore 5 >>>> .cfi_def_cfa 4, 4 >>>> ret >>>> .cfi_endproc >>>> >>>> GCC also did the optimization. >>>> >>>> Now we just *slightly flip* the test case : >>>> >>>> >>>> >&...
2014 Aug 07
4
[LLVMdev] Efficient Pattern matching in Instruction Combine
Hi all, Duncan, Rafael, David, Nick. This is regarding pattern matching in the InstructionCombine pass. We use 'match' functions many times, but they don't do the pattern matching effectively. E.g., let's take the patterns: (A ^ B) | ((B ^ C) ^ A) -> (A ^ B) | C and (B ^ A) | ((B ^ C) ^ A) -> (A ^ B) | C. Both patterns above are the same, since ^ is commutative in Op0. But,
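The rewrite quoted above is sound: with X = A ^ B, the pattern (A ^ B) | ((B ^ C) ^ A) is X | (X ^ C), and X | (X ^ C) == X | C bitwise (a 1 bit in X forces both sides to 1; a 0 bit leaves the C bit on both sides). A small brute-force check of both forms (hypothetical driver, not from the thread):

    #include <assert.h>

    int main(void) {
        for (unsigned a = 0; a < 16; ++a)
            for (unsigned b = 0; b < 16; ++b)
                for (unsigned c = 0; c < 16; ++c) {
                    assert(((a ^ b) | ((b ^ c) ^ a)) == ((a ^ b) | c));  /* first form */
                    assert(((b ^ a) | ((b ^ c) ^ a)) == ((a ^ b) | c));  /* commuted form */
                }
        return 0;
    }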
2007 Apr 18
1
Patch: use .pushsection/.popsection
...... -------------- next part -------------- diff -r e698e6ee2fa1 arch/i386/kernel/entry.S --- a/arch/i386/kernel/entry.S Tue Aug 08 10:18:34 2006 -0700 +++ b/arch/i386/kernel/entry.S Tue Aug 08 10:36:17 2006 -0700 @@ -162,17 +162,17 @@ 2: popl %es; \ 2: popl %es; \ CFI_ADJUST_CFA_OFFSET -4;\ /*CFI_RESTORE es;*/\ -.section .fixup,"ax"; \ +.pushsection .fixup,"ax"; \ 3: movl $0,(%esp); \ jmp 1b; \ 4: movl $0,(%esp); \ jmp 2b; \ -.previous; \ -.section __ex_table,"a";\ +.popsection \ +.pushsection __ex_table,"a";\ .align 4; \ .long 1b,3b; \ .long...
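For reference, a minimal sketch of the idiom the patch switches to, with a hypothetical fixup_example label: .pushsection saves the current section on a stack and .popsection returns to it, so the macro no longer relies on .previous pointing at the section it expects. The exception-table machinery itself is kernel-specific; this only illustrates the section bookkeeping:

    asm(
        ".text\n"
        "fixup_example:\n"
        "1:  movl (%eax), %edx\n"              /* load that may fault */
        "2:  ret\n"
        ".pushsection .fixup, \"ax\"\n"        /* recovery code, out of line */
        "3:  movl $0, %edx\n"
        "    jmp  2b\n"
        ".popsection\n"                        /* back to the previous section */
        ".pushsection __ex_table, \"a\"\n"     /* fault at 1b resumes at 3b */
        "    .align 4\n"
        "    .long 1b, 3b\n"
        ".popsection\n"
    );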
2007 Apr 18
1
Patch: use .pushsection/.popsection
...... -------------- next part -------------- diff -r e698e6ee2fa1 arch/i386/kernel/entry.S --- a/arch/i386/kernel/entry.S Tue Aug 08 10:18:34 2006 -0700 +++ b/arch/i386/kernel/entry.S Tue Aug 08 10:36:17 2006 -0700 @@ -162,17 +162,17 @@ 2: popl %es; \ 2: popl %es; \ CFI_ADJUST_CFA_OFFSET -4;\ /*CFI_RESTORE es;*/\ -.section .fixup,"ax"; \ +.pushsection .fixup,"ax"; \ 3: movl $0,(%esp); \ jmp 1b; \ 4: movl $0,(%esp); \ jmp 2b; \ -.previous; \ -.section __ex_table,"a";\ +.popsection \ +.pushsection __ex_table,"a";\ .align 4; \ .long 1b,3b; \ .long...
2012 Oct 19
0
[PATCHv3] xen/x86: don't corrupt %eip when returning from a signal handler
...= -1 => not a system call */ SAVE_ALL jmp ret_from_exception CFI_ENDPROC diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S index cdc790c..430b1fc 100644 --- a/arch/x86/kernel/entry_64.S +++ b/arch/x86/kernel/entry_64.S @@ -1451,7 +1451,7 @@ ENTRY(xen_failsafe_callback) CFI_RESTORE r11 addq $0x30,%rsp CFI_ADJUST_CFA_OFFSET -0x30 - pushq_cfi $0 + pushq_cfi $-1 /* orig_ax = -1 => not a system call */ SAVE_ALL jmp error_exit CFI_ENDPROC -- 1.7.2.5
2014 Aug 13
2
[LLVMdev] Efficient Pattern matching in Instruction Combine
...t;> leal 24(%esp), %eax >>>> movl %eax, 4(%esp) >>>> movl $.LC0, (%esp) >>>> call __isoc99_scanf >>>> movl 24(%esp), %eax >>>> orl 28(%esp), %eax >>>> leave >>>> .cfi_restore 5 >>>> .cfi_def_cfa 4, 4 >>>> ret >>>> .cfi_endproc >>>> >>>> GCC also did the optimization. >>>> >>>> Now we just slightly flip the test case : >>>> >>>> 1.c Test case: >>&...