similar to: gep and strength reduction

Displaying 20 results from an estimated 10000 matches similar to: "gep and strength reduction"

2019 Aug 26
2
SCEV related question
Here is the original C code:

    void topup(int a[], unsigned long i) {
        for (; i < 16; i++) {
            a[i] = 1;
        }
    }

Here is the IR before the pass where I expect SCEV to return the trip-count value:

    ; Function Attrs: nofree norecurse nounwind uwtable writeonly
    define dso_local void @topup(i32* nocapture %a, i64 %i) local_unnamed_addr #0 {
    entry:
      %cmp3 = icmp ult i64 %i, 16
      br i1
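For reference, the count being asked for is plain arithmetic: once the guard %cmp3 establishes i < 16, the body runs for i, i+1, ..., 15, giving a trip count of 16 - i and a backedge-taken count of 15 - i; e.g. i = 13 means 3 iterations and a backedge-taken count of 2.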
2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
Is that why we do not deduce +<nsw> from "add nsw" either? Is that an intrinsic limitation of creating context-invariant expressions from a Value*, or is it a limitation of our implementation (our unification not considering the nsw flags)? On Wed, Aug 15, 2018 at 12:39 PM Friedman, Eli <efriedma at codeaurora.org> wrote: > On 8/15/2018 12:21 PM, Alexandre Isoard via
2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
I'm not sure I understand the poison/undef/UB distinctions. But on this example:

    define i32 @func(i1 zeroext %b, i32 %x, i32 %y) {
    entry:
      %adds = add nsw i32 %x, %y
      %addu = add nuw i32 %x, %y
      %cond = select i1 %b, i32 %adds, i32 %addu
      ret i32 %cond
    }

It is important to not propagate the nsw/nuw between the two SCEV expressions (which unification would
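A minimal C illustration of why unification must drop those flags (hypothetical values, not from the thread): the same operand bits can overflow in the signed sense but not the unsigned one, and vice versa, so a single SCEV add standing for both %adds and %addu could not soundly carry nsw and nuw at once.

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned a = INT_MAX, b = 1;   /* signed INT_MAX + 1 overflows (the nsw case), */
                                       /* but the unsigned sum 0x80000000 is fine      */
        unsigned c = UINT_MAX, d = 1;  /* unsigned sum wraps to 0 (the nuw case),      */
                                       /* but as signed values -1 + 1 = 0 is fine      */
        printf("%u %u\n", a + b, c + d);   /* prints: 2147483648 0 */
        return 0;
    }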
2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
Hello, If I run clang on the following code:

    void func(unsigned n) {
        for (unsigned long x = 1; x < n; ++x)
            dummy(x);
    }

I get the following LLVM IR:

    define void @func(i32 %n) {
    entry:
      %conv = zext i32 %n to i64
      %cmp5 = icmp ugt i32 %n, 1
      br i1 %cmp5, label %for.body, label %for.cond.cleanup
    for.cond.cleanup:
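To make the counts concrete (plain arithmetic, not quoted from the thread): with n = 5 the body runs for x = 1, 2, 3, 4, so the trip count is n - 1 = 4 and the backedge-taken count is n - 2 = 3. The guard %cmp5 (n > 1) is what keeps the loop from being entered when that n - 2 expression would be "negative", which is the wrinkle the rest of the thread turns on.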
2018 Aug 16
3
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
Ok. To go back to the original issue: would it be meaningful to add a SCEVUMax(0, BTC) on the final BTC computed by SCEV, so that it does not use "negative values"? On Wed, Aug 15, 2018 at 2:40 PM Friedman, Eli <efriedma at codeaurora.org> wrote: > On 8/15/2018 2:27 PM, Alexandre Isoard wrote: > > I'm not sure I understand the poison/undef/UB distinctions. >
2019 Aug 26
2
missing simplification in ScalarEvolution?
Hi Sanjoy, Thanks for the reply! Your approach sounds good to me! I think 1) is legal, as address wraparound in the unsigned range doesn't make sense given a positive offset, but I am not sure. I think the umax will not be added if we can prove the predicate is known. I am not sure whether the umax will get simplified if we add nuw to the expressions. -Pankaj -----Original Message----- From: Sanjoy
2019 Aug 20
2
missing simplification in ScalarEvolution?
Hi, I have this small test case:

    %struct1 = type { i32, i32 }
    @glob_const = internal constant [4 x %struct1]
      [%struct1 { i32 4, i32 5 }, %struct1 { i32 8, i32 9 },
       %struct1 { i32 16, i32 0 }, %struct1 { i32 32, i32 10 }], align 16

    define void @foo() {
    entry:
      br label %loop

    loop:                             ; preds = %loop, %entry
      %iv = phi %struct1* [ getelementptr
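A rough C analogue of that IR, reconstructed as an assumption (the preview shows only the IR; use() and the loop body are placeholders):

    struct struct1 { int a, b; };

    static const struct struct1 glob_const[4] =
        { {4, 5}, {8, 9}, {16, 0}, {32, 10} };

    void use(int v);  /* placeholder for whatever the real loop body does */

    /* Assumed source shape: walk the constant array through a pointer,
       which shows up as the %iv phi over %struct1* in the IR above. */
    void foo(void) {
        for (const struct struct1 *p = glob_const; p < glob_const + 4; ++p)
            use(p->a);
    }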
2013 Aug 22
2
[LLVMdev] scev questions
Hi, I'm trying to get the following loop to vectorize (simple reduction):

    unsigned int sum2(unsigned int *a, int len) {
        unsigned int s = 0;
        for (int i = 0; i < len; i += 4)
            s += *a++;
        return s;
    }

The loop fails to vectorize because SCEV could not compute the loop exit count. It appears SCEV cannot handle the non-unit increment of the loop counter. Is this a known limitation of
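In case it helps to see the shape SCEV can handle, one untested rewrite sketch (mine, not from the thread) keeps the semantics but gives the loop a unit-stride counter; the original performs ceil(len/4) iterations while reading a sequentially:

    unsigned int sum2_unit(unsigned int *a, int len) {
        unsigned int s = 0;
        int n = len > 0 ? (len - 1) / 4 + 1 : 0;  /* = ceil(len/4), the original trip count */
        for (int i = 0; i < n; i++)               /* unit stride: SCEV can compute the exit count */
            s += a[i];
        return s;
    }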
2020 Jul 16
2
LLVM 11 and trunk selecting 4 wide instead of 8 wide loop vectorization for AVX-enabled target
Hey list, I've recently done the first test run of bumping our Burst compiler from LLVM 10 -> 11 now that the branch has been cut, and have noticed an apparent loop vectorization codegen regression for X86 with AVX or AVX2 enabled. The following IR example is vectorized 4 wide with LLVM 11 and trunk, whereas LLVM 10 (correctly, as per what we want) vectorized it 8 wide, matching the
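For anyone hitting the same regression from C/C++ source (an assumption; the report itself is pure IR), the width can at least be pinned while investigating with clang's loop pragma; this overrides the cost model rather than fixing it:

    /* Hypothetical reduced example, not the reporter's kernel. */
    void scale(float *restrict dst, const float *restrict src, int n) {
    #pragma clang loop vectorize_width(8)
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * 2.0f;
    }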
2020 Jun 09
2
LoopStrengthReduction generates false code
Hi. In my backend I get incorrect code after using LoopStrengthReduction. In the generated code the loop index variable is multiplied by 8 (correct, everything is 64 bit aligned) to get an address offset, but the index variable is also incremented by 1*8, which is not correct: it should be incremented by 1 only. The factor 8 appears again. I compared the debug output
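For orientation, a sketch of what LSR is supposed to do with such a loop (illustrative C, assuming 8-byte long as described; the reported bug is the +8 step leaking onto the index itself rather than staying on the address):

    void consume(long v);  /* placeholder sink */

    /* Before LSR, conceptually: the address is recomputed as base + i*8. */
    void before(long *base, long n) {
        for (long i = 0; i < n; i++)
            consume(base[i]);
    }

    /* After LSR, conceptually: the multiply becomes a running +8 byte step
       on the pointer, while the index i itself still steps by 1. */
    void after(long *base, long n) {
        const char *p = (const char *)base;
        for (long i = 0; i < n; i++, p += 8)
            consume(*(const long *)p);
    }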
2013 Aug 22
0
[LLVMdev] scev questions
On 22 August 2013 13:24, Redmond, Paul <paul.redmond at intel.com> wrote:
> Hi,
>
> I'm trying to get the following loop to vectorize (simple reduction):
>
>     unsigned int sum2(unsigned int *a, int len) {
>         unsigned int s = 0;
>         for (int i = 0; i < len; i += 4)
>             s += *a++;
>         return s;
>     }
>
> The loop fails to vectorize because SCEV
2020 Jun 10
2
LoopStrengthReduction generates false code
The IR after LSR is:

    *** IR Dump After Loop Strength Reduction ***
    ; Preheader:
    entry:
      tail call void @fill_array(i32* getelementptr inbounds ([10 x i32], [10 x i32]* @buffer, i32 0, i32 0)) #2
      br label %while.body

    ; Loop:
    while.body:                       ; preds = %while.body, %entry
      %lsr.iv = phi i32 [ %lsr.iv.next, %while.body ], [ 0, %entry ]
      %uglygep = getelementptr
2019 Aug 21
2
missing simplification in ScalarEvolution?
Thanks for the suggestion, but the datalayout info did not solve the problem! -Pankaj -----Original Message----- From: Philip Reames <listmail at philipreames.com> Sent: Tuesday, August 20, 2019 5:26 PM To: Chawla, Pankaj <pankaj.chawla at intel.com>; llvm-dev at lists.llvm.org Subject: Re: [llvm-dev] missing simplification in ScalarEvolution? Try adding a datalayout with pointer size
2020 Jun 09
2
LoopStrengthReduction generates false code
Hm, no. I expect byte addresses everywhere; the compiler should not know that the arch needs word addresses. During lowering, LOAD and STORE get explicit conversion operations for the memory address. Even if my arch were byte-addressed, the code would still be wrong/illegal. Boris > Am 09.06.2020 um 19:36 schrieb Eli Friedman <efriedma at quicinc.com>: > > Blindly guessing here,
2018 Feb 27
0
Question about instcombine pass.
Hello, everyone. I have a question about LLVM's "Combine redundant instructions" (instcombine) pass. I have tested the instcombine pass by writing the following three test cases, but CASE3 is not optimized as I expected. Is this behavior expected? The version of llvm is: clang version 5.0.1 (tags/RELEASE_501/final 325232). Option of clang command is: clang -O1 a.c -S -emit-llvm
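The three test cases are truncated out of this preview, so here is a generic stand-in (not the poster's CASE3) for the kind of redundancy the pass removes:

    /* At -O1, instcombine folds the add/sub pair away; the emitted IR
       body becomes just: ret i32 %x */
    int fold_me(int x) {
        return (x + 2) - 2;
    }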
2015 Jul 16
2
[LLVMdev] Improving loop vectorizer support for loops with a volatile iteration variable
----- Original Message -----
> From: "Chandler Carruth" <chandlerc at google.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: "Hyojin Sung" <hsung at us.ibm.com>, llvmdev at cs.uiuc.edu
> Sent: Thursday, July 16, 2015 1:06:03 AM
> Subject: Re: [LLVMdev] Improving loop vectorizer support for loops
> with a volatile iteration
2020 Jul 16
2
LLVM 11 and trunk selecting 4 wide instead of 8 wide loop vectorization for AVX-enabled target
Tried a bunch of them there (x86-64, haswell, znver2) and they all defaulted to 4-wide; haswell additionally caused some extra loop unrolling, but still with 8-wide pows. Cheers, -Neil. On Thu, Jul 16, 2020 at 2:39 PM Roman Lebedev <lebedev.ri at gmail.com> wrote: > Did you specify the target CPU the code should be optimized for? > For clang that is -march=native/znver2/... /
2015 Aug 13
2
[LLVMdev] Improving loop vectorizer support for loops with a volatile iteration variable
Hi Gerolf, I think we have several (perhaps separable) issues here:

1. Do we have a canonical form for loops, preserved through the optimizer, that allows naturally-constructed loop nests to remain separable?
2. Do we forbid non-lowering transformations that turn vectorizable loops into non-vectorizable loops?
3. How do we detect cases where transformations cause a negative answer to either
2016 Aug 17
2
Loop vectorization with the loop containing bitcast
Hi, The following loop fails to be vectorized because the load of c[i] is cast to i64 while the store to c[i] is a double. Loop access analysis gives up since the two accesses have different types. Since these two memory operations have the same size, I believe loop access analysis should return a forward dependence, and thus the loop could be vectorized. Any comments? Thanks, Jin #define N 1000 double
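A guess at the C pattern behind that IR (an assumption; the preview shows only the symptom): a same-size type-punned access where c[i] is read through an i64 view but written back as a double:

    #include <stdint.h>
    #define N 1000
    double c[N];

    void twiddle(void) {
        for (int i = 0; i < N; i++) {
            /* i64-typed load of c[i]'s bits (the cast violates strict
               aliasing; shown only to reproduce the IR shape) */
            uint64_t bits = *(uint64_t *)&c[i];
            /* double-typed store to the same location */
            c[i] = (double)(bits >> 52);
        }
    }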
2018 Jan 26
0
Late setting of SCEV NoWrap flags does bad with cache
Hi Max, On Wed, Jan 24, 2018 at 10:03 PM, Maxim Kazantsev via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> I want to raise a discussion about the reasonability of late setting of
> nsw/nuw/nw flags on SCEV AddRecs through the setNoWrapFlags method. A discussion
> about this already happened in August last year; there was a concern
> about different no-wrap flags that come from