Displaying 13 results from an estimated 13 matches for "cmovel".
2018 Apr 12 · 3 · [RFC] __builtin_constant_p() Improvements
...mux() {
  if (__builtin_constant_p(37))
    return 927;
  return 0;
}

int bar(int a) {
  if (a)
    return foo(42);
  else
    return mux();
}
Now outputs this code at -O1:
bar:
	.cfi_startproc
# %bb.0:                        # %entry
	testl   %edi, %edi
	movl    $927, %ecx          # imm = 0x39F
	movl    $1, %eax
	cmovel  %ecx, %eax
	retq
And this code at -O0:
bar:                            # @bar
	.cfi_startproc
# %bb.0:                        # %entry
	pushq   %rbp
	.cfi_def_cfa_offset 16
	.cfi_offset %rbp, -16
	movq    %rsp, %rbp
	.cfi_def_cfa_register %rbp
	movl    %edi, -16(%rbp)
	cmpl    $0, -16(%rbp)
	je      .LBB0_...
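Aside: the -O1/-O0 difference above comes down to when __builtin_constant_p() is folded. A minimal standalone sketch of the same effect (my own example, not code from the RFC):

	/* Build and run twice: cc -O0 pick.c  versus  cc -O1 pick.c */
	#include <stdio.h>

	static int pick(int x) {
		/* At -O1, pick() is inlined into main() and constant
		 * propagation proves x == 42, so the builtin folds to 1;
		 * at -O0 nothing is propagated and it folds to 0. */
		if (__builtin_constant_p(x))
			return 927;
		return 0;
	}

	int main(void) {
		printf("%d\n", pick(42)); /* typically 927 at -O1, 0 at -O0 */
		return 0;
	}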
2018 Apr 13 · 0 · [RFC] __builtin_constant_p() Improvements
...nt a) {
>   if (a)
>     return foo(42);
>   else
>     return mux();
> }
>
> Now outputs this code at -O1:
>
> bar:
> 	.cfi_startproc
> # %bb.0:                        # %entry
> 	testl   %edi, %edi
> 	movl    $927, %ecx          # imm = 0x39F
> 	movl    $1, %eax
> 	cmovel  %ecx, %eax
> 	retq
>
> And this code at -O0:
>
> bar:                            # @bar
> 	.cfi_startproc
> # %bb.0:                        # %entry
> 	pushq   %rbp
> 	.cfi_def_cfa_offset 16
> 	.cfi_offset %rbp, -16
> 	movq    %rsp, %rbp
> 	.cfi_def_cfa_regis...
2017 Oct 11 · 1 · [PATCH v1 01/27] x86/crypto: Adapt assembly for PIE support
..._mask(%rip), BSWAP_MASK
	movaps IV, CTR
	PSHUFB_XMM BSWAP_MASK CTR
	mov $1, TCTR_LOW
@@ -2850,12 +2852,12 @@ ENTRY(aesni_xts_crypt8)
 	cmpb $0, %cl
 	movl $0, %ecx
 	movl $240, %r10d
-	leaq _aesni_enc4, %r11
-	leaq _aesni_dec4, %rax
+	leaq _aesni_enc4(%rip), %r11
+	leaq _aesni_dec4(%rip), %rax
 	cmovel %r10d, %ecx
 	cmoveq %rax, %r11
-	movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK
+	movdqa .Lgf128mul_x_ble_mask(%rip), GF128MUL_MASK
 	movups (IVP), IV
 	mov 480(KEYP), KLEN
diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
index faecb1518bf8..488605b...
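Context for the (%rip) changes above: a bare "leaq _aesni_enc4, %r11" embeds the symbol's absolute link-time address and breaks once the image is loaded elsewhere, while the RIP-relative form encodes only the distance from the current instruction and so survives relocation. A hypothetical C sketch (identifiers are mine, not from the patch) showing the compiler make the same choice:

	/* Compare: cc -S -mcmodel=kernel pic.c  (absolute references,
	 * the code model the kernel builds with) versus
	 *          cc -S -fpie pic.c           (RIP-relative references). */
	static int table[4] = {1, 2, 3, 4};

	int *table_addr(void) {
		/* -mcmodel=kernel typically materializes the address as
		 *     movq $table, %rax        (absolute, sign-extended)
		 * while -fpie forces
		 *     leaq table(%rip), %rax   (relocatable),
		 * which is the rewrite the patch performs by hand. */
		return table;
	}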
2007 Aug 08 · 2 · [PATCH] x86-64: syscall/sysenter support for 32-bit apps
...(TBF_EXCEPTION|TBF_EXCEPTION_ERRCODE|TBF_INTERRUPT),%cl
+        movl  $0,TRAPBOUNCE_error_code(%rdx)
+        jmp   1b
+
+ENTRY(compat_sysenter)
+        cmpl  $TRAP_gp_fault,UREGS_entry_vector(%rsp)
+        movzwl VCPU_sysenter_sel(%rbx),%eax
+        movzwl VCPU_gp_fault_sel(%rbx),%ecx
+        cmovel %ecx,%eax
+        testl $~3,%eax
+        movl  $FLAT_COMPAT_USER_SS,UREGS_ss(%rsp)
+        cmovzl %ecx,%eax
+        movw  %ax,TRAPBOUNCE_cs(%rdx)
+        call  compat_create_bounce_frame
+        jmp   compat_test_all_events
+
ENTRY(compat_int80_direct_trap)
        call  compat_create_bounc...
2012 Oct 02 · 18 · [PATCH 0/3] x86: adjust entry frame generation
This set of patches converts the way entry frames get created from
using PUSHes/POPs to using MOVes, which (in certain cases) makes it
possible to avoid saving and restoring part of the register set.
While the place the (small) win comes from varies between CPUs, the
net effect is a 1 to 2% reduction in combined interrupt entry and
exit cost when the full state save can be avoided (a rough analogue
is sketched after this entry).
1: use MOV
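As a rough analogue of that selective save/restore (my illustration, not code from the series): compilers already spill only the callee-saved registers a given function actually clobbers, and building the frame with MOVes gives hand-written entry code the same per-path freedom.

	/* Hypothetical example; compare the two functions' -S output. */
	long ext(long);

	long leaf(long x) {
		/* Clobbers nothing callee-saved: no frame, no spills,
		 * typically just  leaq 1(%rdi), %rax ; ret  */
		return x + 1;
	}

	long heavy(long x) {
		/* The call forces x to survive in a callee-saved
		 * register, so a save/restore (e.g. push/pop of %rbx)
		 * is emitted for this function only. */
		return ext(x) + x;
	}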
2018 Mar 13 · 32 · [PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
 - patch v2:
   - Adapt patch to work post KPTI and compiler changes
   - Redo all performance testing with latest configs and compilers
   - Simplify mov macro on PIE (MOVABS now)
   - Reduce GOT footprint
 - patch v1:
   - Simplify ftrace implementation.
   - Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
 - rfc v3:
   - Use --emit-relocs instead of -pie to reduce
2017 Oct 04 · 28 · x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as a Position
Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below
the top 2G of the virtual address space, which optionally allows extending
the KASLR randomization range from 1G to 3G.
Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler
changes, PIE support and KASLR in general. Thanks to
2018 May 23 · 33 · [PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
 - patch v3:
   - Update the cover message to describe the longer-term PIE goal.
   - Minor change on ftrace if condition.
   - Changed code using xchgq.
 - patch v2:
   - Adapt patch to work post KPTI and compiler changes
   - Redo all performance testing with latest configs and compilers
   - Simplify mov macro on PIE (MOVABS now)
   - Reduce GOT footprint
 - patch v1:
   - Simplify ftrace
2017 Oct 11 · 32 · [PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
 - patch v1:
   - Simplify ftrace implementation.
   - Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
 - rfc v3:
   - Use --emit-relocs instead of -pie to reduce dynamic relocation space on
     mapped memory. It also simplifies the relocation process.
   - Move the start of the module section next to the kernel. Remove the need
     for -mcmodel=large on modules. Extends
2013 Oct 15 · 0 · [LLVMdev] [llvm-commits] r192750 - Enable MI Sched for x86.
...42, %ecx
>> -; CHECK-NEXT: movl 4(%esp), %eax
>> -; CHECK-NEXT: andl $15, %eax
>> -; CHECK-NEXT: cmovnel %ecx, %eax
>> +; CHECK: movl 4(%esp), %ecx
>> +; CHECK-NEXT: andl $15, %ecx
>> +; CHECK-NEXT: movl $42, %eax
>> +; CHECK-NEXT: cmovel %ecx, %eax
>> ; CHECK-NEXT: ret
>> ;
>> ; We don't want:
>> @@ -39,4 +39,3 @@ entry:
>> %retval = select i1 %tmp4, i32 %tmp2, i32 42 ; <i32> [#uses=1]
>> ret i32 %retval
>> }
>> -
>>
>> Modified: llvm/...