Displaying 3 results from an estimated 3 matches for "notq".
2011 Jul 12, 0 replies: [LLVMdev] GCC Atomic NAND implementation
Hey Guys,
I have a newbie question about supporting the GNU atomic
builtin __sync_fetch_and_nand. It appears that LLVM 2.9 produces x86
assembly like the GCC versions before v4.4, i.e. NOT, then AND:
        notq %rax
        movq 48(%rsp), %rcx
        andq %rcx, %rax
I'm looking to produce x86 assembly like GCC v4.4 and later, i.e.
AND, then NOT (a true NAND):
        movq 48(%rsp), %rcx
        andq %rcx, %rax
        notq %rax
I currently have custom code to make the switch between impleme...
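For reference, GCC's release notes document the semantic change behind the two orderings: before 4.4, __sync_fetch_and_nand stored ~*ptr & value; from 4.4 on it stores ~(*ptr & value). A minimal, non-atomic sketch of just the two result computations (the labels and register use are illustrative, not from the thread):

# Pre-4.4 semantics: result = ~old & val
nand_pre44:                     # %rdi = old, %rsi = val
        movq    %rdi, %rax
        notq    %rax            # %rax = ~old
        andq    %rsi, %rax      # %rax = ~old & val
        retq
# GCC 4.4+ semantics: result = ~(old & val)
nand_44plus:
        movq    %rdi, %rax
        andq    %rsi, %rax      # %rax = old & val
        notq    %rax            # %rax = ~(old & val)
        retq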
2014 Jul 23, 4 replies: [LLVMdev] the clang 3.5 loop optimizer seems to jump in unintentional for simple loops
..._func contains masses of code
the code in main is also sometimes different from the_func (not just an inlined copy)
clang -DITER -O2
clang -DITER -O3
both give:
the_func:
        leaq 12(%rdi), %rcx
        leaq 4(%rdi), %rax
        cmpq %rax, %rcx
        cmovaq %rcx, %rax
        movq %rdi, %rsi
        notq %rsi
        addq %rax, %rsi
        shrq $2, %rsi
        incq %rsi
        xorl %edx, %edx
        movabsq $9223372036854775800, %rax # imm = 0x7FFFFFFFFFFFFFF8
        andq %rsi, %rax
        pxor %xmm0, %xmm0
        je .LBB0_1
# BB#2: # %vector.body.p...
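The source loop is cut off in this snippet, but the notq/addq/shrq/incq run above is recognizably the vectorizer's trip-count computation for 4-byte elements, relying on the two's-complement identity ~begin + end == end - begin - 1. A hypothetical standalone rendering of that arithmetic (label and register assignments are illustrative, not from the thread):

trip_count:                     # %rdi = begin, %rsi = end (byte addresses)
        movq    %rdi, %rax
        notq    %rax            # %rax = ~begin = -begin - 1
        addq    %rsi, %rax      # %rax = end - begin - 1
        shrq    $2, %rax        # divide by the 4-byte element size
        incq    %rax            # +1 -> number of elements in [begin, end)
        retq

For example, with begin = 4 and end = 16 this yields ((16 - 4 - 1) >> 2) + 1 = 3, the three ints at byte offsets 4, 8, and 12; the movabsq/andq pair in the listing then rounds such a count down to a multiple of the vector width for the vectorized part of the loop.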
2012 Oct 02, 18 replies: [PATCH 0/3] x86: adjust entry frame generation
This set of patches converts the way frames get created from
using PUSHes/POPs to using MOVs, thus allowing (in certain
cases) the saving/restoring of part of the register set to be
avoided. While the place the (small) win comes from varies
between CPUs, the net effect is a 1 to 2% reduction in the
combined cost of interrupt entry and exit when the full state
save can be avoided.
1: use MOV
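The mechanics behind the description above: a PUSH-based prologue cannot skip a register without shifting the offset of every later slot, whereas MOV stores into a frame allocated up front leave each slot at a fixed offset, so unneeded saves can simply be dropped. A conceptual sketch (not the actual patch; the frame size, offsets, and register choice are illustrative):

        # PUSH-based entry: every slot must be written, or the
        # frame layout shifts for all later pushes
        pushq   %r11
        pushq   %r10
        # MOV-based entry: allocate the whole frame once, then
        # store selectively; the %r10 slot can stay unwritten
        # when that register need not be preserved
        subq    $16, %rsp
        movq    %r11, 8(%rsp)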