Displaying 20 results from an estimated 1000 matches similar to: "Optimizing jumps to identical code blocks"
2015 Sep 01
2
[RFC] New pass: LoopExitValues
On Mon, Aug 31, 2015 at 5:52 PM, Jake VanAdrighem
<jvanadrighem at gmail.com> wrote:
> Do you have some specific performance measurements?
Averaging 4 runs of 10000 iterations each of Coremark on my X86_64
desktop showed:
-O2 performance: +2.9% faster with the L.E.V. pass
-Os size: 1.5% smaller with the L.E.V. pass
In the case of Coremark, the benefit comes mainly from the matrix
2019 Jun 30
6
[hexagon][PowerPC] code regression (sub-optimal code) on LLVM 9 when generating hardware loops, and the "llvm.uadd" intrinsic.
Hi All,
The following code :
void hexagon2( int *a, int *res )
{
    int i = 100;
    while ( i-- ) {
        *res++ = *a++;
    }
}
gets compiled as a sub-optimal software loop on LLVM 9.0 instead of a hardware loop, whereas it was compiled as a hardware loop on LLVM 7.0.
This is the final assembly code generated by LLVM 9.0:
    .text
    .file "main.c"
    .globl hexagon2 // --
2015 Mar 03
2
[LLVMdev] Need a clue to improve the optimization of some C code
Hi
I have some inline-function C code that LLVM could be optimizing better.
Since I am new to this, I wonder if someone could give me a few pointers on how to approach this in LLVM.
Should I try to change the IR code somehow to get the code generator to generate better code, or should I rather go to the code generator and try to add an optimization pass?
Thanks for any feedback.
Ciao
2017 Nov 20
2
Scalar Evolution's Problem Nowadays.
The Problem?
Nowadays, SCEV ("Scalar Evolution") only evolves instructions that have
predictable, constant-based operands, i.e. operands that can be evaluated
to a constant. Otherwise we cannot evolve the instruction into a proper
SCEV node, and it is classified as SCEVUnknown.
The important thing to remember is that we do not use SCEV only for Loop
Deletion,
which isn't really needed on natural loops
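For illustration, a minimal source-level sketch of this distinction (the function names are assumptions, not from the thread): SCEV can model the first variable as the add recurrence {0,+,1}, but the second update depends on an opaque call, so its SCEV is just SCEVUnknown.

extern int opaque(int);

int example(int n) {
    int a = 0, b = 0;
    for (int i = 0; i < n; ++i) {
        a += 1;        /* analyzable: evolves as {0,+,1}<loop> */
        b = opaque(b); /* not analyzable: classified as SCEVUnknown */
    }
    return a + b;
}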
2015 Aug 31
2
[RFC] New pass: LoopExitValues
Hello LLVM,
This is a proposal for a new pass that improves performance and code
size in some nested loop situations. The pass is target independent.
From the description in the file header:
This optimization finds loop exit values that are reevaluated after the
loop execution and replaces them with the corresponding exit values if
they are available. Such sequences can arise after the
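The description is truncated here; as a minimal sketch of the kind of redundancy the pass targets (this example is illustrative, not taken from the patch), the end pointer computed after the loop duplicates the exit value of the induction variable:

int *copy_and_return_end(int *dst, const int *src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i];
    /* dst + n recomputes the final value of &dst[i]; reusing the loop's
       exit value instead is the kind of rewrite the pass performs */
    return dst + n;
}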
2018 Nov 06
4
Rather poor code optimisation of current clang/LLVM targeting Intel x86 (both -64 and -32)
Hi @ll,
while clang/LLVM recognizes common bit-twiddling idioms/expressions
like
unsigned int rotate(unsigned int x, unsigned int n)
{
    return (x << n) | (x >> (32 - n));
}
and typically generates "rotate" machine instructions for this
expression, it fails to recognize other also common bit-twiddling
idioms/expressions.
The standard IEEE CRC-32 for "big
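The message is truncated here; for context, a minimal sketch of the bit-wise IEEE CRC-32 update step under discussion (reflected polynomial 0xEDB88320; this snippet is an assumption about the elided code, not a quote from the post):

unsigned int crc32_update(unsigned int crc, unsigned char byte) {
    crc ^= byte;
    for (int k = 0; k < 8; ++k)
        crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    return crc;
}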
2018 Nov 27
2
Rather poor code optimisation of current clang/LLVM targeting Intel x86 (both -64 and -32)
"Sanjay Patel" <spatel at rotateright.com> wrote:
> IIUC, you want to use x86-specific bit-hacks (sbb masking) in cases like
> this:
> unsigned int foo(unsigned int crc) {
>     if (crc & 0x80000000)
>         crc <<= 1, crc ^= 0xEDB88320;
>     else
>         crc <<= 1;
>     return crc;
> }
To document this for x86 too: rewrite the function
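The message is truncated here; the branchless rewrite being alluded to is presumably along these lines (a sketch under that assumption, not the author's actual code):

unsigned int foo_branchless(unsigned int crc) {
    /* all-ones mask if the top bit is set, zero otherwise; this is the
       kind of mask an x86 sbb-based sequence materializes */
    unsigned int mask = 0u - (crc >> 31);
    return (crc << 1) ^ (mask & 0xEDB88320u);
}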
2014 Sep 02
3
[LLVMdev] LICM promoting memory to scalar
All,
If we can speculatively execute a load instruction, why isn't it safe to hoist it out by promoting it to a scalar in the LICM pass?
There is a comment in the LICM pass that if a load/store is conditional then it is not safe to promote, because doing so would break the LLVM concurrency model (see commit 73bfa4a).
It has an IR test for checking this in test/Transforms/LICM/scalar-promote-memmodel.ll
However, I have
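The message is truncated here; a minimal illustration of the concern (the function is illustrative, not from the thread): if n == 0 the original program never touches *p, but promoting *p to a scalar would introduce an unconditional load before the loop and store after it, which another thread could observe.

void sum_into(int *p, const int *a, int n) {
    for (int i = 0; i < n; ++i)
        *p += a[i]; /* *p is accessed only when the loop body runs */
}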
2014 Jul 23
4
[LLVMdev] the clang 3.5 loop optimizer seems to jump in unintentional for simple loops
the very simple example
----
const int SIZE = 3;

int the_func(int* p_array)
{
    int dummy = 0;
#if defined(ITER)
    for(int* p = &p_array[0]; p < &p_array[SIZE]; ++p) dummy += *p;
#else
    for(int i = 0; i < SIZE; ++i) dummy += p_array[i];
#endif
    return dummy;
}

int main(int argc, char** argv)
{
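The post is truncated inside main; presumably the expectation is that a three-iteration loop simply folds to straight-line code, roughly (an assumed reconstruction, not from the thread):

int the_func_folded(int* p_array) {
    return p_array[0] + p_array[1] + p_array[2];
}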
2014 Sep 02
2
[LLVMdev] LICM promoting memory to scalar
I think gcc is right.
It inserted a branch for n == 0 (the cbz at the top), so that's not a problem.
In all other regards, this is safe: if you examine the sequence of loads and stores, it eliminated all but the first load and all but the last store. How's that unsafe?
If I had to guess, the bug here is that LLVM doesn't want to hoist the load over the condition (which it is right
2014 Sep 03
3
[LLVMdev] LICM promoting memory to scalar
Thanks for the background on the concurrent memory model.
So, is it sufficient that the loop entry is guarded by the condition (cbz
at the top) to prevent the race?
The loop entry will be guarded by the condition if the loop has been
rotated by the loop-rotate pass.
Since LICM runs after loop rotate, we can use
ScalarEvolution::isLoopEntryGuardedByCond to check whether we can
speculatively execute the load without
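The message is truncated here; a hedged sketch of such a check (the helper name and exact predicate are assumptions, not from the thread): before promoting a load in loop L, verify that entry is guarded by n > 0, so the access is only speculated when the original program would have executed it.

#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Returns true if entry to L is guarded by N > 0.
static bool entryGuardedByPositiveCount(ScalarEvolution &SE, const Loop *L,
                                        Value *N) {
    const SCEV *Count = SE.getSCEV(N);
    const SCEV *Zero = SE.getZero(Count->getType());
    return SE.isLoopEntryGuardedByCond(L, ICmpInst::ICMP_SGT, Count, Zero);
}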
2019 Jul 01
0
[hexagon][PowerPC] code regression (sub-optimal code) on LLVM 9 when generating hardware loops, and the "llvm.uadd" intrinsic.
The Hexagon part is fixed in r364790.
--
Krzysztof Parzyszek <kparzysz at quicinc.com>, LLVM compiler development
From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Joan Lluch via llvm-dev
Sent: Sunday, June 30, 2019 2:04 PM
To: llvm-dev <llvm-dev at lists.llvm.org>
Subject: [EXT] [llvm-dev] [hexagon][PowerPC] code regression
2017 Dec 19
4
A code layout related side-effect introduced by rL318299
Hi,
Recently a 10% performance regression on an important benchmark showed up
after we integrated https://reviews.llvm.org/rL318299. The analysis showed
that rL318299 triggered loop rotation on a multi-exit loop, and the loop
rotation introduced a code-layout issue. The performance regression is a
side-effect of rL318299. I got two testcases a.ll and b.ll attached to
illustrate the problem. a.ll
2011 Feb 18
0
[LLVMdev] Adding "S" suffixed ARM/Thumb2 instructions
On Feb 17, 2011, at 10:35 PM, Вадим Марковцев wrote:
> Hello everyone,
>
> I've added the "S" suffixed versions of ARM and Thumb2 instructions to tablegen. Those are, for example, "movs" or "muls".
> Of course, some instructions already have their twins, such as add/adds, and I left them untouched.
Adding separate "s" instructions is
2017 Dec 19
2
A code layout related side-effect introduced by rL318299
On Mon, Dec 18, 2017 at 5:46 PM Xinliang David Li <davidxl at google.com>
wrote:
> The introduction of the cleanup.cond block in b.ll without loop rotation
> already makes the layout worse than a.ll.
>
>
> Without introducing the cleanup.cond block, the layout is
>
> entry -> while.cond -> while.body -> ret
>
> All the arrows are hot fall-through edges, which is
2011 Feb 18
2
[LLVMdev] Adding "S" suffixed ARM/Thumb2 instructions
Hello everyone,
I've added the "S" suffixed versions of ARM and Thumb2 instructions to
tablegen. Those are, for example, "movs" or "muls".
Of course, some instructions already have their twins, such as add/adds,
and I left them untouched.
Besides, I propose a codegen optimization based on them, which removes the
redundant comparison in patterns like
orr
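The post is truncated here, but the pattern meant is presumably a flag-setting instruction making a following compare-with-zero redundant; a hedged C-level illustration (assumed, not from the post):

bool any_bit_set(unsigned a, unsigned b) {
    /* the backend may emit "orr r0, r0, r1" followed by "cmp r0, #0";
       a flag-setting "orrs r0, r0, r1" makes the cmp redundant */
    return (a | b) != 0;
}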
2015 Mar 03
2
[LLVMdev] Need a clue to improve the optimization of some C code
On 03.03.2015 at 19:49, Philip Reames <listmail at philipreames.com> wrote:
Hi Philip
first, thanks for your response.
> You'll need to provide a bit more information to get any useful response. Questions:
> 1) What's your use case? Are you using clang to compile C code? Are you manually generating LLVM IR?
Yes, the "inline function C code" will be compiled
2013 Aug 06
1
[LLVMdev] Patching jump tables at run-time
I am looking for guidance on how to:
1.
2011 Oct 19
0
[LLVMdev] Question regarding basic-block placement optimization
On Wed, Oct 19, 2011 at 3:24 AM, Chandler Carruth <chandlerc at google.com> wrote:
> On Tue, Oct 18, 2011 at 6:58 PM, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote:
>
>>
>> On Oct 18, 2011, at 5:22 PM, Chandler Carruth wrote:
>>
>> As for why it should be an IR pass, mostly because once the selection
>>> dag runs through the code, we can never
2017 Jul 01
2
KNL Assembly Code for Matrix Multiplication
Thank you,
It means that vmovdqa64 zmm22, zmmword ptr [rip + .LCPI0_0] # zmm22 =
[8,9,10,11,12,13,14,15] loads 64-bit constant values into zmm22; these are
indexes here (zmm22 = 8, 9, 10, 11, 12, 13, 14, 15), not the values loaded
from those locations. And zmm2 contains the constant 4000. So
vpmuludq zmm14, zmm10, zmm2 will multiply the index values by 4000,
since for array b the stride is 4000.
zmm14=
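As a scalar illustration of the address arithmetic described above (array and variable names are assumptions): the vector multiply forms index * stride byte offsets into b for the subsequent loads.

#include <stdint.h>

void compute_offsets(uint64_t offsets[8]) {
    const uint64_t stride = 4000; /* row stride of b, in bytes */
    for (int i = 0; i < 8; ++i)
        offsets[i] = (uint64_t)(8 + i) * stride; /* indexes 8..15, as in zmm22 */
}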