search for: andl

Displaying 20 results from an estimated 131 matches for "andl".

2014 Jan 18
2
[LLVMdev] Scheduling quirks
...t_registeri, .Ltmp0-_Z13test_registeri .cfi_endproc .globl _Z14test_scheduleri .align 16, 0x90 .type _Z14test_scheduleri,@function _Z14test_scheduleri: # @_Z14test_scheduleri .cfi_startproc # BB#0: # %entry movl %edi, %eax shrl $2, %eax andl $15, %eax shrl $3, %edi andl $31, %edi xorl %eax, %edi movl %edi, %eax retq .Ltmp1: .size _Z14test_scheduleri, .Ltmp1-_Z14test_scheduleri .cfi_endproc .ident "clang version 3.5 (trunk 199507)" .section ".note.GNU-stack","",@progbits <=== Now once more...
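For reference, the scheduled body above is just two masked shift-fields of the argument XORed together; a C sketch reconstructed from the assembly (the post does not show the original source, so the function name is mine):

```c
#include <stdint.h>

/* C equivalent of the emitted _Z14test_scheduleri body. */
static uint32_t test_scheduler(uint32_t x) {
    uint32_t a = (x >> 2) & 15;  /* shrl $2, %eax ; andl $15, %eax */
    uint32_t b = (x >> 3) & 31;  /* shrl $3, %edi ; andl $31, %edi */
    return a ^ b;                /* xorl %eax, %edi */
}
```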
2008 Mar 26
2
[LLVMdev] Checked arithmetic
...y) { %xx = zext i32 %x to i33 %yy = zext i32 %y to i33 %s = add i33 %xx, %yy %tmp = lshr i33 %s, 32 %b = trunc i33 %tmp to i1 ret i1 %b } codegens (on x86-32) to cc: xorl %eax, %eax movl 4(%esp), %ecx addl 8(%esp), %ecx adcl $0, %eax andl $1, %eax ret which uses the condition code as desired. Pity about the redundant andl $1, %eax! Ciao, Duncan.
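The i33 widening trick in this excerpt maps directly onto C by widening both operands to 64 bits so the 32-bit carry lands in bit 32; a sketch (the function name is mine):

```c
#include <stdint.h>

/* Does the 32-bit add x + y carry out?  Mirrors the IR above:
 * zext to a wider type, add without wraparound, shift the carry
 * bit down, truncate to a single bit. */
static int add_carries(uint32_t x, uint32_t y) {
    uint64_t s = (uint64_t)x + (uint64_t)y;  /* zext + add */
    return (int)((s >> 32) & 1);             /* lshr 32 ; trunc to i1 */
}
```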
2016 Feb 11
3
Expected constant simplification not happening
Hi the appended IR code does not optimize to my liking :) this is the interesting part in x86_64, that got produced via clang -Os: --- movq -16(%r12), %rax movl -4(%rax), %ecx andl $2298949, %ecx ## imm = 0x231445 cmpq $2298949, (%rax,%rcx) ## imm = 0x231445 leaq 8(%rax,%rcx), %rax cmovneq %r15, %rax movl $2298949, %esi ## imm = 0x231445 movq %r12, %rdi movq %r14, %rdx callq *(%rax) --- and clang -O3: --- movq -16(%r12), %rax movl -4(%rax), %...
2007 Apr 18
2
[patch 3/8] Allow a kernel to not be in ring 0.
...d, 26 insertions(+), 15 deletions(-) --- 2.6.18-rc3-32.orig/arch/i386/kernel/entry.S +++ 2.6.18-rc3-32/arch/i386/kernel/entry.S @@ -229,8 +229,9 @@ ret_from_intr: check_userspace: movl EFLAGS(%esp), %eax # mix EFLAGS and CS movb CS(%esp), %al - testl $(VM_MASK | 3), %eax - jz resume_kernel + andl $(VM_MASK | SEGMENT_RPL_MASK), %eax + cmpl $USER_RPL, %eax + jb resume_kernel # not returning to v8086 or userspace ENTRY(resume_userspace) cli # make sure we don't miss an interrupt # setting need_resched or sigpending @@ -367,8 +368,8 @@ restore_all: # See comments in process....
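A hedged C rendering of the replacement check in this hunk. The constant values are assumptions based on i386 conventions (the VM flag is EFLAGS bit 17, the RPL is the low two bits of a selector, and user code runs at RPL 3); the point of the patch is that `jb` lets a paravirtualized kernel run at any ring below the user RPL:

```c
#include <stdint.h>

/* Assumed i386 constants -- not taken from the patch itself. */
#define VM_MASK          0x00020000u  /* EFLAGS.VM (v8086 mode) */
#define SEGMENT_RPL_MASK 0x3u
#define USER_RPL         0x3u

/* Are we returning to the kernel (any ring below user)? */
static int returning_to_kernel(uint32_t eflags, uint32_t cs) {
    /* movl EFLAGS(%esp), %eax ; movb CS(%esp), %al
     * -> low byte of EFLAGS replaced by the saved CS selector */
    uint32_t mixed = (eflags & ~0xFFu) | (cs & 0xFFu);
    mixed &= VM_MASK | SEGMENT_RPL_MASK;   /* andl */
    return mixed < USER_RPL;               /* cmpl ; jb resume_kernel */
}
```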
2010 Oct 07
2
[LLVMdev] [Q] x86 peephole deficiency
Hi all, I am slowly working on a SwitchInst optimizer (http://llvm.org/PR8125) and now I am running into a deficiency of the x86 peephole optimizer (or jump-threader?). Here is what I get: andl $3, %edi je .LBB0_4 # BB#2: # %nz # in Loop: Header=BB0_1 Depth=1 cmpl $2, %edi je .LBB0_6 # BB#3: # %nz.non-middle...
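The quoted assembly reduces the switch value to its low two bits and then branches; a C sketch of the same control-flow shape (block labels are taken from the listing, return values are invented for illustration):

```c
/* Shape of the emitted code: note that the first je needs no cmp,
 * because andl itself sets ZF. */
static int classify(unsigned x) {
    unsigned low = x & 3;     /* andl $3, %edi -- sets ZF */
    if (low == 0) return 4;   /* je .LBB0_4 reuses the flags from andl */
    if (low == 2) return 6;   /* cmpl $2, %edi ; je .LBB0_6 */
    return 3;                 /* falls through to %nz.non-middle */
}
```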
2014 Aug 08
4
[LLVMdev] Efficient Pattern matching in Instruction Combine
...also treated these > the same: > > (B ^ A) | ((A ^ B) ^ C) -> (A ^ B) | C > (B ^ A) | ((B ^ C) ^ A) -> (A ^ B) | C > (B ^ A) | ((C ^ A) ^ B) -> (A ^ B) | C > > I.e., `^` is also associative. Agree with Duncan on including associative operation too. > Can we handle this by just having a canonical ordering? Or is that too difficult to maintain through various instcombines? Yes, it's the easiest way to do that. If I am not wrong, what Sean is suggesting is that if we get something like (B ^ A) | ((B ^ C) ^ A) -> (A ^ B) | C and we have written pass for p...
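The rewrites being discussed are easy to sanity-check: with `^` commutative and associative, all three left-hand sides collapse to `(A ^ B) | C` (bitwise, whenever a bit of `A ^ B` is 0 the `^ C` term reduces to `C`, and whenever it is 1 the `|` absorbs it). A brute-force check in C:

```c
/* Verify that all three patterns from the thread equal (A ^ B) | C. */
static int identities_hold(unsigned a, unsigned b, unsigned c) {
    unsigned rhs = (a ^ b) | c;
    return ((b ^ a) | ((a ^ b) ^ c)) == rhs
        && ((b ^ a) | ((b ^ c) ^ a)) == rhs
        && ((b ^ a) | ((c ^ a) ^ b)) == rhs;
}
```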
2016 Dec 07
1
Expected constant simplification not happening
...;> wrote: > > Hi > > the appended IR code does not optimize to my liking :) > > this is the interesting part in x86_64, that got produced via clang -Os: > --- > movq -16(%r12), %rax > movl -4(%rax), %ecx > andl $2298949, %ecx ## imm = 0x231445 > cmpq $2298949, (%rax,%rcx) ## imm = 0x231445 > leaq 8(%rax,%rcx), %rax > cmovneq %r15, %rax > movl $2298949, %esi ## imm = 0x231445 > movq %r12, %rdi...
2017 Sep 25
0
What should a truncating store do?
...itpacked vectors should probably be avoided? This also reminded me of the following test case that is in trunk: test/CodeGen/X86/pr20011.ll %destTy = type { i2, i2 } define void @crash(i64 %x0, i64 %y0, %destTy* nocapture %dest) nounwind { ; X64-LABEL: crash: ; X64: # BB#0: ; X64-NEXT: andl $3, %esi ; X64-NEXT: movb %sil, (%rdx) ; X64-NEXT: andl $3, %edi ; X64-NEXT: movb %dil, (%rdx) ; X64-NEXT: retq %x1 = trunc i64 %x0 to i2 %y1 = trunc i64 %y0 to i2 %1 = bitcast %destTy* %dest to <2 x i2>* %2 = insertelement <2 x i2> undef, i2 %x1, i32 0 %3 = insert...
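One plausible in-memory form for the `<2 x i2>` store in pr20011 is both 2-bit lanes packed into the low nibble of a single byte; this layout is an assumption, since the thread's point is that the LangRef does not pin it down, and the emitted code above writes both lanes to the same byte offset instead:

```c
#include <stdint.h>

/* Hypothetical well-defined packing of <2 x i2>:
 * lane 0 in bits 0-1, lane 1 in bits 2-3. */
static uint8_t pack2x2(uint64_t x0, uint64_t y0) {
    uint8_t x1 = x0 & 3;               /* trunc i64 -> i2 (the andl $3) */
    uint8_t y1 = y0 & 3;
    return (uint8_t)(x1 | (y1 << 2));
}
```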
2007 Apr 18
1
[PATCH] Slight cleanups for x86 ring macros (against rc3-mm2)
...y@rustcorp.com.au> diff -r d8064f9b5964 arch/i386/kernel/entry.S --- a/arch/i386/kernel/entry.S Mon Aug 07 13:30:17 2006 +1000 +++ b/arch/i386/kernel/entry.S Mon Aug 07 14:32:11 2006 +1000 @@ -237,7 +237,7 @@ check_userspace: movl EFLAGS(%esp), %eax # mix EFLAGS and CS movb CS(%esp), %al andl $(VM_MASK | SEGMENT_RPL_MASK), %eax - cmpl $SEGMENT_RPL_MASK, %eax + cmpl $USER_RPL, %eax jb resume_kernel # not returning to v8086 or userspace ENTRY(resume_userspace) DISABLE_INTERRUPTS # make sure we don't miss an interrupt @@ -374,8 +374,8 @@ restore_all: # See comments in process...
2017 Sep 25
3
What should a truncating store do?
...e+load is the right definition of bitcast.  And in fact, the backend will lower a bitcast to a store+load to a stack temporary in cases where there isn't some other lowering specified. The end result is probably going to be pretty inefficient unless your target has a special instruction to handle it (x86 has pmovmskb for i1 vector bitcasts, but otherwise you probably end up with some terrible lowering involving a lot of shifts). > This also reminded me of the following test case that is in trunk: >  test/CodeGen/X86/pr20011.ll > > %destTy = type { i2, i2 } > > define...
2014 Aug 13
2
[LLVMdev] Efficient Pattern matching in Instruction Combine
...) -> (A ^ B) | C >>>> > (B ^ A) | ((C ^ A) ^ B) -> (A ^ B) | C >>>> > >>>> > I.e., `^` is also associative. >>>> >>>> Agree with Duncan on including associative operation too. >>>> >>>> > Can we handle this by just having a canonical ordering? Or is that >>>> too difficult to maintain through various instcombines? >>>> >>>> Yes, its the easiest way to do that. If i am not wrong, what Sean is >>>> suggesting is that if we get >>>> >&g...
2010 Oct 07
0
[LLVMdev] [Q] x86 peephole deficiency
On Oct 6, 2010, at 6:16 PM, Gabor Greif wrote: > Hi all, > > I am slowly working on a SwitchInst optimizer (http://llvm.org/PR8125) > and now I am running into a deficiency of the x86 > peephole optimizer (or jump-threader?). Here is what I get: > > > andl $3, %edi > je .LBB0_4 > # BB#2: # %nz > # in Loop: Header=BB0_1 > Depth=1 > cmpl $2, %edi > je .LBB0_6 > # BB#3: # %nz.non-middle...
2007 Apr 18
0
[PATCH 17/21] i386 Ldt cleanups 1
...ach-work.orig/arch/i386/kernel/entry.S 2005-10-27 17:02:08.000000000 -0700 +++ linux-2.6.14-zach-work/arch/i386/kernel/entry.S 2005-11-04 18:22:07.000000000 -0800 @@ -250,8 +250,8 @@ restore_all: # See comments in process.c:copy_thread() for details. movb OLDSS(%esp), %ah movb CS(%esp), %al - andl $(VM_MASK | (4 << 8) | 3), %eax - cmpl $((4 << 8) | 3), %eax + andl $(VM_MASK | (LDT_SEGMENT << 8) | 3), %eax + cmpl $((LDT_SEGMENT << 8) | 3), %eax je ldt_ss # returning to user-space with LDT SS restore_nocheck: RESTORE_REGS Index: linux-2.6.14-zach-work/arch/i386/k...
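The magic numbers this cleanup replaces decode as follows: once `movb OLDSS(%esp), %ah` places the saved SS selector in bits 8-15, `(4 << 8)` selects SS's TI (table-indicator) bit and `3` selects CS's RPL, so the test fires only when returning to ring 3 with an LDT-based stack segment. A sketch in C (the EFLAGS load into %eax earlier in restore_all is assumed, as it is outside the excerpt):

```c
#include <stdint.h>

#define VM_MASK 0x00020000u  /* assumed i386 EFLAGS.VM bit */

/* Returning to user space with SS in the LDT? */
static int returning_to_ldt_ss(uint32_t eflags, uint8_t ss, uint8_t cs) {
    /* movb OLDSS(%esp), %ah ; movb CS(%esp), %al */
    uint32_t eax = (eflags & ~0xFFFFu) | ((uint32_t)ss << 8) | cs;
    eax &= VM_MASK | (4u << 8) | 3u;   /* andl: SS.TI + CS.RPL + VM  */
    return eax == ((4u << 8) | 3u);    /* cmpl ; je ldt_ss           */
}
```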
2011 May 20
1
No logging
Dear folks, strace shows that snmp-ups driver writes to stderr: [pid 21825] write(2, "No log handling enabled - turnin"..., 52) = 52 | 00000 4e 6f 20 6c 6f 67 20 68 61 6e 64 6c 69 6e 67 20 No log handling | | 00010 65 6e 61 62 6c 65 64 20 2d 20 74 75 72 6e 69 6e enabled - turnin | | 00020 67 20 6f 6e 20 73 74 64 65 72 72 20 6c 6f 67 67 g on stderr logg | | 00030 69 6e 67 0...
2008 Mar 26
0
[LLVMdev] Checked arithmetic
On Wed, 26 Mar 2008, Jonathan S. Shapiro wrote: > I want to background process this for a bit, but it would be helpful to > discuss some approaches first. > > There would appear to be three approaches: > > 1. Introduce a CC register class into the IR. This seems to be a > fairly major overhaul. > > 2. Introduce a set of scalar and fp computation quasi-instructions
2008 Mar 26
0
[LLVMdev] Checked arithmetic
...> %s = add i33 %xx, %yy > %tmp = lshr i33 %s, 32 > %b = trunc i33 %tmp to i1 > ret i1 %b > } > > codegens (on x86-32) to > > cc: > xorl %eax, %eax > movl 4(%esp), %ecx > addl 8(%esp), %ecx > adcl $0, %eax > andl $1, %eax > ret > > which uses the condition code as desired. Pity about the > redundant andl $1, %eax! > > Ciao, > > Duncan. > -Chris -- http://nondot.org/sabre/ http://llvm.org/
2017 Aug 02
3
[InstCombine] Simplification sometimes only transforms but doesn't simplify instruction, causing side effect in other pass
...preds = %if.else, %if.then %ret = phi i32 [ %r1, %if.then ], [ %r2, %if.else ] ret i32 %ret } *** asm code without instcombine: *** ~/workarea/llvm-r309240/rbuild1/bin/llc < a.ll # BB#0: # %entry movzwl (%rdi), %ecx movzbl %cl, %eax andl $1792, %ecx # imm = 0x700 addq a(%rip), %rcx cmpq $1, %rcx jne .LBB0_3 *** asm code with instcombine: *** ~/workarea/llvm-r309240/rbuild1/bin/llc < b.ll # BB#0: # %entry movzwl (%rdi), %eax movzbl...