Messages similar to: "Nowaday Scalar Evolution's Problem."

Displaying 20 results from an estimated 600 matches similar to: "Nowaday Scalar Evolution's Problem."

2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
Hello. If I run clang on the following code:

    void func(unsigned n) {
      for (unsigned long x = 1; x < n; ++x)
        dummy(x);
    }

I get the following LLVM IR:

    define void @func(i32 %n) {
    entry:
      %conv = zext i32 %n to i64
      %cmp5 = icmp ugt i32 %n, 1
      br i1 %cmp5, label %for.body, label %for.cond.cleanup
    for.cond.cleanup:
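For reference, the trip-count arithmetic behind the question, written out as comments on the same loop (a sketch of the reasoning only, not the exact SCEV output; dummy is assumed to be an external function):

    extern void dummy(unsigned long x);

    void func(unsigned n) {
        /* The guard %cmp5 (n > 1) must hold for the body to run at all.     */
        /* The body executes for x = 1 .. n-1, i.e. n-1 iterations, so the   */
        /* backedge is taken n-2 times: backedge-taken count = n - 2.        */
        /* With the guard n > 1 (so n >= 2), n - 2 cannot wrap as an         */
        /* unsigned value, which is why one might expect <nuw> on the count. */
        for (unsigned long x = 1; x < n; ++x)
            dummy(x);
    }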
2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
Is that why we do not deduce +<nsw> from "add nsw" either? Is that an intrinsic limitation of creating a context-invariant expression from a Value*, or is it a limitation of our implementation (our unification not considering the nsw flags)? On Wed, Aug 15, 2018 at 12:39 PM Friedman, Eli <efriedma at codeaurora.org> wrote: > On 8/15/2018 12:21 PM, Alexandre Isoard via
2018 Aug 16
3
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
OK. To go back to the original issue, would it be meaningful to add a SCEVUMax(0, BTC) on the final BTC computed by SCEV, so that it does not use "negative values"? On Wed, Aug 15, 2018 at 2:40 PM Friedman, Eli <efriedma at codeaurora.org> wrote: > On 8/15/2018 2:27 PM, Alexandre Isoard wrote: > > I'm not sure I understand the poison/undef/UB distinctions. >
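As a plain-C illustration of what clamping the count at zero would mean, using the loop from the first result above as the running example (hypothetical sketch of the intent; SCEV's actual expression language is not shown here):

    /* For that loop the raw count is n - 2. For n < 2 the subtraction wraps,
     * producing the "negative values" mentioned above; clamping keeps the
     * count at 0 for inputs that never execute the backedge. */
    unsigned long clamped_backedge_count(unsigned n) {
        return (n >= 2) ? (unsigned long)n - 2 : 0;
    }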
2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
I'm not sure I understand the poison/undef/UB distinctions. But on this example:

    define i32 @func(i1 zeroext %b, i32 %x, i32 %y) {
    entry:
      %adds = add nsw i32 %x, %y
      %addu = add nuw i32 %x, %y
      %cond = select i1 %b, i32 %adds, i32 %addu
      ret i32 %cond
    }

it is important not to propagate the nsw/nuw between the two SCEV expressions (which unification would
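A concrete instantiation of why the two flags must stay separate, with worked numbers (nothing here is specific to the SCEV implementation):

    #include <limits.h>
    #include <stdio.h>

    /* Take %x = INT_MAX (0x7fffffff) and %y = 1.
     * The signed add overflows, so the "add nsw" result is poison; the
     * unsigned add does not wrap, so the "add nuw" result is the well-defined
     * value 0x80000000. A single unified SCEV expression carrying both <nsw>
     * and <nuw> would wrongly treat the %addu arm as poison-producing too. */
    int main(void) {
        unsigned x = (unsigned)INT_MAX, y = 1;
        printf("unsigned sum: 0x%x\n", x + y);   /* prints 0x80000000 */
        return 0;
    }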
2015 Mar 03
2
[LLVMdev] Need a clue to improve the optimization of some C code
Hi, I have some inline-function C code that LLVM could be optimizing better. Since I am new to this, I wonder if someone could give me a few pointers on how to approach this in LLVM. Should I try to change the IR code somehow to get the code generator to generate better code, or should I rather go to the code generator and try to add an optimization pass? Thanks for any feedback. Ciao
2018 Nov 06
4
Rather poor code optimisation of current clang/LLVM targeting Intel x86 (both -64 and -32)
Hi @ll, while clang/LLVM recognizes common bit-twiddling idioms/expressions like

    unsigned int rotate(unsigned int x, unsigned int n)
    {
        return (x << n) | (x >> (32 - n));
    }

and typically generates "rotate" machine instructions for this expression, it fails to recognize other, equally common, bit-twiddling idioms/expressions. The standard IEEE CRC-32 for "big
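As an aside on the quoted rotate: that form has undefined behaviour for n == 0 (a shift by 32 on a 32-bit type). The variant below is a commonly used rewrite that stays well defined and that compilers commonly still map to a single rotate instruction (a sketch, assuming a 32-bit unsigned int and n < 32):

    unsigned int rotate_safe(unsigned int x, unsigned int n) {
        /* For n == 0, (-n & 31) is 0 and x >> 0 is x, so no shift-by-32 occurs; */
        /* for n in 1..31, (-n & 31) equals 32 - n, matching the original.       */
        return (x << n) | (x >> (-n & 31));
    }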
2017 Jul 24
2
LazyValueInfo vs ScalarEvolution
Hi, Both LazyValueInfo and ScalarEvolution can calculate a constant range for an LLVM Value. I found that sometimes they do not agree; maybe I interpreted them incorrectly. For example, in the following IR:

    bb85:                               ; preds = %bb85, %bb73
      %tmp86 = phi i32 [ 1, %bb73 ], [ %tmp95, %bb85 ]
      %tmp95 = add nsw i32 %tmp86, 1
      %tmp96 = icmp slt i32 %tmp95,
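For orientation, a C loop of roughly the shape that produces the IR above (a hypothetical reconstruction; names and the bound are illustrative only):

    void count_up(int bound) {
        int i = 1;
        do {
            i = i + 1;        /* the "add nsw" feeding the phi            */
        } while (i < bound);  /* the "icmp slt" controlling the backedge  */
        /* ScalarEvolution derives a range from the {1,+,1}<nsw> add
         * recurrence, while LazyValueInfo derives one from dominating
         * branch conditions, so the two ranges need not coincide. */
    }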
2015 Mar 03
2
[LLVMdev] Need a clue to improve the optimization of some C code
On 03.03.2015 at 19:49, Philip Reames <listmail at philipreames.com> wrote:
Hi Philip, first of all, thanks for your response.
> You'll need to provide a bit more information to get any useful response. Questions:
> 1) What's your use case? Are you using clang to compile C code? Are you manually generating LLVM IR?
Yes, the "inline function C code" will be compiled
2014 Sep 02
3
[LLVMdev] LICM promoting memory to scalar
All, if we can speculatively execute a load instruction, why isn't it safe to hoist it out by promoting it to a scalar in the LICM pass? There is a comment in the LICM pass that if a load/store is conditional then it is not safe, because it would break the LLVM concurrency model (see commit 73bfa4a). There is an IR test checking this in test/Transforms/LICM/scalar-promote-memmodel.ll. However, I have
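The classic shape of the problem, sketched in C (a minimal illustration; the names are made up and this is not the test from scalar-promote-memmodel.ll):

    int g;

    void accumulate(int n, int flag) {
        for (int i = 0; i < n; ++i)
            if (flag)
                g += 1;   /* g is loaded and stored only when flag != 0 */
        /* Promoting g to a register for the whole loop and writing it back
         * unconditionally after the loop would introduce a store to g on the
         * flag == 0 path, a store the original program never performs.
         * Another thread updating g concurrently (legal when flag == 0, since
         * this thread then never touches g) could have its update clobbered,
         * which is why the conditional case is rejected under the LLVM
         * concurrency model. */
    }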
2018 Nov 27
2
Rather poor code optimisation of current clang/LLVM targeting Intel x86 (both -64 and -32)
"Sanjay Patel" <spatel at rotateright.com> wrote: > IIUC, you want to use x86-specific bit-hacks (sbb masking) in cases like > this: > unsigned int foo(unsigned int crc) { > if (crc & 0x80000000) > crc <<= 1, crc ^= 0xEDB88320; > else > crc <<= 1; > return crc; > } To document this for x86 too: rewrite the function
2011 Feb 18
0
[LLVMdev] Adding "S" suffixed ARM/Thumb2 instructions
On Feb 17, 2011, at 10:35 PM, Вадим Марковцев wrote:
> Hello everyone,
>
> I've added the "S" suffixed versions of ARM and Thumb2 instructions to tablegen. Those are, for example, "movs" or "muls".
> Of course, some instructions already had their twins, such as add/adds, and I left them untouched.
Adding separate "s" instructions is
2011 Feb 18
2
[LLVMdev] Adding "S" suffixed ARM/Thumb2 instructions
Hello everyone, I've added the "S" suffixed versions of ARM and Thumb2 instructions to tablegen. Those are, for example, "movs" or "muls". Of course, some instructions already had their twins, such as add/adds, and I left them untouched. Besides, I propose a codegen optimization based on them, which removes the redundant comparison in patterns like orr
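A source-level sketch of the kind of pattern such an optimization targets (hypothetical example; the actual change works on the ARM/Thumb2 instruction patterns in tablegen, not on C):

    int any_bit_set(unsigned a, unsigned b) {
        unsigned t = a | b;
        /* If the backend selects a flag-setting "orrs" for the OR, the zero
         * flag already reflects whether t is zero, so the separate compare
         * against 0 that would otherwise precede the branch or select
         * becomes redundant. */
        return t != 0;
    }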
2014 Sep 02
2
[LLVMdev] LICM promoting memory to scalar
I think gcc is right. It inserted a branch for n == 0 (the cbz at the top), so that's not a problem. In all other regards, this is safe: if you examine the sequence of loads and stores, it eliminated all but the first load and all but the last store. How's that unsafe? If I had to guess, the bug here is that LLVM doesn't want to hoist the load over the condition (which it is right
2017 Dec 19
4
A code layout related side-effect introduced by rL318299
Hi, recently a 10% performance regression on an important benchmark showed up after we integrated https://reviews.llvm.org/rL318299. The analysis showed that rL318299 triggered loop rotation on a multi-exit loop, and the loop rotation introduced a code layout issue. The performance regression is a side-effect of rL318299. I have attached two testcases, a.ll and b.ll, to illustrate the problem. a.ll
2017 Dec 01
2
Using Scalar Evolution to Identify Expressions Evolving in terms of Loop induction variables
Hi, I am using Scalar Evolution to extract access expressions (for load and store instructions) in terms of the loop induction variables. I observe that the Scalar Evolution analysis returns more expressions than I expect, including ones that are not defined in terms of the loop induction variable. For instance, in the following code: for (unsigned long int bid = 0; bid < no_of_queries;
2017 Dec 19
2
A code layout related side-effect introduced by rL318299
On Mon, Dec 18, 2017 at 5:46 PM Xinliang David Li <davidxl at google.com> wrote:
> The introduction of the cleanup.cond block in b.ll without loop rotation already makes the layout worse than a.ll.
>
> Without introducing the cleanup.cond block, the layout is
>
>     entry -> while.cond -> while.body -> ret
>
> All the arrows are hot fall-through edges, which is
2014 Sep 03
3
[LLVMdev] LICM promoting memory to scalar
Thanks for the background on the concurrent memory model. So, is it sufficient that the loop entry is guarded by a condition (the cbz at the top) to prevent the race? The loop entry will be guarded by a condition if the loop has been rotated by the loop-rotate pass. Since LICM runs after loop rotate, we can use ScalarEvolution::isLoopEntryGuardedByCond to check whether we can speculatively execute the load without
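A small C sketch of the guarded, rotated shape being discussed (illustrative only; whether LICM actually promotes here depends on aliasing and the rest of the pipeline):

    void add_upto(int *p, int n) {
        if (n > 0) {                 /* guard produced by loop rotation */
            for (int i = 0; i < n; ++i)
                *p += i;             /* load and store of *p every iteration */
            /* Because the guard guarantees the body executes at least once,
             * *p is accessed on every path through this region, so keeping it
             * in a register inside the "if" and storing it back once after the
             * loop introduces no access the original program did not already
             * perform. */
        }
    }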
2013 Aug 06
1
[LLVMdev] Patching jump tables at run-time
I am looking for guidance on how to: 1.
2017 Dec 01
0
Using Scalar Evolution to Identify Expressions Evolving in terms of Loop induction variables
Hi Hashim, Scalar Evolution determines the evolution of a scalar in terms of the expression chain driving it. Try dumping the detailed output using opt -analyze -scalar-evolution <.ll> -S, and look at the LoopDispositions corresponding to the different expressions, which show the variance characteristics of a particular expression w.r.t. the loop, i.e. [computable/variant/invariant]. Thanks. On Fri, Dec 1, 2017 at
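A minimal C example of the distinction the reply describes (names are illustrative; the exact -analyze output format varies across LLVM versions):

    void gather(long *A, long *B, long n, long k) {
        for (long i = 0; i < n; ++i)
            A[i] = B[k];
        /* Under -analyze -scalar-evolution, the address of A[i] is an add
         * recurrence of the loop (it varies with the induction variable),
         * while the address of B[k] is loop-invariant. Both still show up as
         * SCEV expressions, which is why more expressions appear than just
         * the IV-dependent ones. */
    }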
2016 Aug 01
2
LLVM Loop vectorizer - 2 vector.body blocks appear
Hello Mikhail, with the more recent version of the LoopVectorize.cpp code (retrieved at the beginning of July 2016) I ran the following piece of C code:

    void foo(long *A, long *B, long *C, long N) {
        for (long i = 0; i < N; ++i) {
            C[i] = A[i] + B[i];
        }
    }

The vectorized LLVM program I obtain contains 2 vector.body blocks - one named