search for: arrayidx12

Displaying 20 results from an estimated 26 matches for "arrayidx12".

2013 Oct 30 · 3 replies · [LLVMdev] loop vectorizer
...> %1 = load float* %arrayidx, align 4 > %arrayidx9 = getelementptr inbounds float* %b, i64 %add2 > %2 = load float* %arrayidx9, align 4 > %add10 = fadd float %1, %2 > %arrayidx11 = getelementptr inbounds float* %c, i64 %add2 > store float %add10, float* %arrayidx11, align 4 > %arrayidx12 = getelementptr inbounds float* %a, i64 %add8 > %3 = load float* %arrayidx12, align 4 > %arrayidx13 = getelementptr inbounds float* %b, i64 %add8 > %4 = load float* %arrayidx13, align 4 > %add14 = fadd float %3, %4 > %arrayidx15 = getelementptr inbounds float* %c, i64 %add8 > stor...
2017 May 19 · 4 replies · memcmp code fragment
...el %if.end, label %if.then if.then: ; preds = %entry %cmp7 = icmp ugt i8 %0, %1 br label %return if.end: ; preds = %entry %inc = add i32 %i1, 1 %inc10 = add i32 %i2, 1 %idxprom11 = zext i32 %inc to i64 %arrayidx12 = getelementptr inbounds i8, i8* %block, i64 %idxprom11 %2 = load i8, i8* %arrayidx12, align 1 %idxprom13 = zext i32 %inc10 to i64 %arrayidx14 = getelementptr inbounds i8, i8* %block, i64 %idxprom13 %3 = load i8, i8* %arrayidx14, align 1 %cmp17 = icmp eq i8 %2, %3 br i1 %cmp17, label %i...
2013 Oct 30 · 0 replies · [LLVMdev] loop vectorizer
...%a, i64 %add2 %0 = load float* %arrayidx, align 4 %arrayidx9 = getelementptr inbounds float* %b, i64 %add2 %1 = load float* %arrayidx9, align 4 %add10 = fadd float %0, %1 %arrayidx11 = getelementptr inbounds float* %c, i64 %add2 store float %add10, float* %arrayidx11, align 4 %arrayidx12 = getelementptr inbounds float* %a, i64 %add8 %2 = load float* %arrayidx12, align 4 %arrayidx13 = getelementptr inbounds float* %b, i64 %add8 %3 = load float* %arrayidx13, align 4 %add14 = fadd float %2, %3 %arrayidx15 = getelementptr inbounds float* %c, i64 %add8 store float %add...
2013 Oct 30 · 2 replies · [LLVMdev] loop vectorizer
...%0 = load float* %arrayidx, align 4 > %arrayidx9 = getelementptr inbounds float* %b, i64 %add2 > %1 = load float* %arrayidx9, align 4 > %add10 = fadd float %0, %1 > %arrayidx11 = getelementptr inbounds float* %c, i64 %add2 > store float %add10, float* %arrayidx11, align 4 > %arrayidx12 = getelementptr inbounds float* %a, i64 %add8 > %2 = load float* %arrayidx12, align 4 > %arrayidx13 = getelementptr inbounds float* %b, i64 %add8 > %3 = load float* %arrayidx13, align 4 > %add14 = fadd float %2, %3 > %arrayidx15 = getelementptr inbounds float* %c, i64 %add8 >...
2013 Oct 30 · 0 replies · [LLVMdev] loop vectorizer
...%a, i64 %add2 %1 = load float* %arrayidx, align 4 %arrayidx9 = getelementptr inbounds float* %b, i64 %add2 %2 = load float* %arrayidx9, align 4 %add10 = fadd float %1, %2 %arrayidx11 = getelementptr inbounds float* %c, i64 %add2 store float %add10, float* %arrayidx11, align 4 %arrayidx12 = getelementptr inbounds float* %a, i64 %add8 %3 = load float* %arrayidx12, align 4 %arrayidx13 = getelementptr inbounds float* %b, i64 %add8 %4 = load float* %arrayidx13, align 4 %add14 = fadd float %3, %4 %arrayidx15 = getelementptr inbounds float* %c, i64 %add8 store float %add...
2013 Nov 13 · 2 replies · [LLVMdev] SCEV getMulExpr() not propagating Wrap flags
...4, !tbaa !1 %add = add nsw i32 %1, %I %arrayidx3 = getelementptr inbounds i32* %a, i64 %0 store i32 %add, i32* %arrayidx3, align 4, !tbaa !1 %2 = or i64 %0, 1 %arrayidx7 = getelementptr inbounds i32* %b, i64 %2 %3 = load i32* %arrayidx7, align 4, !tbaa !1 %add8 = add nsw i32 %3, %I %arrayidx12 = getelementptr inbounds i32* %a, i64 %2 store i32 %add8, i32* %arrayidx12, align 4, !tbaa !1 %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1 %exitcond = icmp eq i64 %indvars.iv.next, 512 br i1 %exitcond, label %for.end, label %for.body And, when going through the GEP: %0 = shl nsw i6...
2017 Dec 19 · 4 replies · MemorySSA question
...emoryPhi({for.body8.lr.ph <http://for.body8.lr.ph>,1},{for.body8,2})* %indvars.iv = phi i64 [ 0, %for.body8.lr.ph ], [ %indvars.iv.next, %for.body8 ] %arrayidx10 = getelementptr inbounds i32, i32* %a, i64 %indvars.iv *; MemoryUse(4)* %5 = load i32, i32* %arrayidx10, align 4, !tbaa !2 %arrayidx12 = getelementptr inbounds i32, i32* %d, i64 %indvars.iv ; MemoryUse(4) %6 = load i32, i32* %arrayidx12, align 4, !tbaa !2 %mul = mul nsw i32 %6, %5 %arrayidx14 = getelementptr inbounds i32, i32* %e, i64 %indvars.iv *; 2 = MemoryDef(4)* store i32 %mul, i32* %arrayidx14, align 4, !tbaa !2 %i...
2014 Sep 18 · 2 replies · [LLVMdev] [Vectorization] Mis match in code generated
...nsw i32 %add5, %4 %arrayidx8 = getelementptr inbounds i32* %a, > i32 5 %5 = load i32* %arrayidx8, align 4, !tbaa !1 %add9 = add nsw i32 > %add7, %5 %arrayidx10 = getelementptr inbounds i32* %a, i32 6 %6 = load > i32* %arrayidx10, align 4, !tbaa !1 %add11 = add nsw i32 %add9, %6 > %arrayidx12 = getelementptr inbounds i32* %a, i32 7 %7 = load i32* > %arrayidx12, align 4, !tbaa !1 %add13 = add nsw i32 %add11, %7 > %arrayidx14 = getelementptr inbounds i32* %a, i32 8 %8 = load i32* > %arrayidx14, align 4, !tbaa !1 %add15 = add nsw i32 %add13, %8 > %arrayidx16 = getelementptr...
2013 Oct 30 · 3 replies · [LLVMdev] loop vectorizer
On 30 October 2013 09:25, Nadav Rotem <nrotem at apple.com> wrote:
> The access pattern to arrays a and b is non-linear. Unrolled loops are
> usually handled by the SLP-vectorizer. Are ir0 and ir1 consecutive for all
> values for i ?
Based on his list of values, it seems that the induction stride is linear within each block of 4 iterations, but it's not a clear...
2014 Sep 19 · 3 replies · [LLVMdev] [Vectorization] Mis match in code generated
...%add7 = add nsw i32 %add5, %4 %arrayidx8 = getelementptr inbounds i32* %a, i32 5 %5 = load i32* %arrayidx8, align 4, !tbaa !1 %add9 = add nsw i32 %add7, %5 %arrayidx10 = getelementptr inbounds i32* %a, i32 6 %6 = load i32* %arrayidx10, align 4, !tbaa !1 %add11 = add nsw i32 %add9, %6 %arrayidx12 = getelementptr inbounds i32* %a, i32 7 %7 = load i32* %arrayidx12, align 4, !tbaa !1 %add13 = add nsw i32 %add11, %7 %arrayidx14 = getelementptr inbounds i32* %a, i32 8 %8 = load i32* %arrayidx14, align 4, !tbaa !1 %add15 = add nsw i32 %add13, %8 %arrayidx16 = getelementptr inbounds i3...
2013 Oct 30 · 2 replies · [LLVMdev] loop vectorizer
...%a, i64 %add2 %0 = load float* %arrayidx, align 4 %arrayidx9 = getelementptr inbounds float* %b, i64 %add2 %1 = load float* %arrayidx9, align 4 %add10 = fadd float %0, %1 %arrayidx11 = getelementptr inbounds float* %c, i64 %add2 store float %add10, float* %arrayidx11, align 4 %arrayidx12 = getelementptr inbounds float* %a, i64 %add8 %2 = load float* %arrayidx12, align 4 %arrayidx13 = getelementptr inbounds float* %b, i64 %add8 %3 = load float* %arrayidx13, align 4 %add14 = fadd float %2, %3 %arrayidx15 = getelementptr inbounds float* %c, i64 %add8 store float %add...
2016 Jul 11 · 2 replies · extra loads in nested for-loop
...o compile, this is the .ll that results (just showing 2 iterations of the unrolled inner loop): define void @f1([8 x double]* nocapture %c, [8 x double]* nocapture readonly %a, [8 x double]* nocapture readonly %b) #0 { entry: %arrayidx8 = getelementptr inbounds [8 x double]* %c, i64 0, i64 0 %arrayidx12 = getelementptr inbounds [8 x double]* %a, i64 0, i64 0 %0 = load double* %arrayidx8, align 8, !tbaa !1 %1 = load double* %arrayidx12, align 8, !tbaa !1 %arrayidx16 = getelementptr inbounds [8 x double]* %b, i64 0, i64 0 %2 = load double* %arrayidx16, align 8, !tbaa !1 %mul = fmul double...
2014 Sep 18 · 2 replies · [LLVMdev] [Vectorization] Mis match in code generated
...tbaa !1 %add7 = add nsw i32 %add5, %4 %arrayidx8 = getelementptr inbounds i32* %a, i32 5 %5 = load i32* %arrayidx8, align 4, !tbaa !1 %add9 = add nsw i32 %add7, %5 %arrayidx10 = getelementptr inbounds i32* %a, i32 6 %6 = load i32* %arrayidx10, align 4, !tbaa !1 %add11 = add nsw i32 %add9, %6 %arrayidx12 = getelementptr inbounds i32* %a, i32 7 %7 = load i32* %arrayidx12, align 4, !tbaa !1 %add13 = add nsw i32 %add11, %7 %arrayidx14 = getelementptr inbounds i32* %a, i32 8 %8 = load i32* %arrayidx14, align 4, !tbaa !1 %add15 = add nsw i32 %add13, %8 %arrayidx16 = getelementptr inbounds i32* %a, i...
2013 Nov 08 · 1 reply · [LLVMdev] loop vectorizer and storing to uniform addresses
...; preds = %for.cond %arrayidx9 = getelementptr inbounds [4 x float]* %sum, i32 0, i64 0 %13 = load float* %arrayidx9, align 4 %arrayidx10 = getelementptr inbounds [4 x float]* %sum, i32 0, i64 1 %14 = load float* %arrayidx10, align 4 %add11 = fadd float %13, %14 %arrayidx12 = getelementptr inbounds [4 x float]* %sum, i32 0, i64 2 %15 = load float* %arrayidx12, align 4 %add13 = fadd float %add11, %15 %arrayidx14 = getelementptr inbounds [4 x float]* %sum, i32 0, i64 3 %16 = load float* %arrayidx14, align 4 %add15 = fadd float %add13, %16 ret float %ad...
2013 Oct 30 · 0 replies · [LLVMdev] loop vectorizer
...ign 4 >> %arrayidx9 = getelementptr inbounds float* %b, i64 %add2 >> %1 = load float* %arrayidx9, align 4 >> %add10 = fadd float %0, %1 >> %arrayidx11 = getelementptr inbounds float* %c, i64 %add2 >> store float %add10, float* %arrayidx11, align 4 >> %arrayidx12 = getelementptr inbounds float* %a, i64 %add8 >> %2 = load float* %arrayidx12, align 4 >> %arrayidx13 = getelementptr inbounds float* %b, i64 %add8 >> %3 = load float* %arrayidx13, align 4 >> %add14 = fadd float %2, %3 >> %arrayidx15 = getelementptr inbounds...
2014 Nov 10 · 2 replies · [LLVMdev] [Vectorization] Mis match in code generated
...t; > %5 = load i32* %arrayidx8, align 4, !tbaa !1 > > > %add9 = add nsw i32 %add7, %5 > > > %arrayidx10 = getelementptr inbounds i32* %a, i32 6 > > > %6 = load i32* %arrayidx10, align 4, !tbaa !1 > > > %add11 = add nsw i32 %add9, %6 > > > %arrayidx12 = getelementptr inbounds i32* %a, i32 7 > > > %7 = load i32* %arrayidx12, align 4, !tbaa !1 > > > %add13 = add nsw i32 %add11, %7 > > > %arrayidx14 = getelementptr inbounds i32* %a, i32 8 > > > %8 = load i32* %arrayidx14, align 4, !tbaa !1 > > >...
2017 Dec 19 · 2 replies · MemorySSA question
...t;,1},{for.body8,2})* >> %indvars.iv = phi i64 [ 0, %for.body8.lr.ph ], [ %indvars.iv.next, >> %for.body8 ] >> %arrayidx10 = getelementptr inbounds i32, i32* %a, i64 %indvars.iv >> *; MemoryUse(4)* >> %5 = load i32, i32* %arrayidx10, align 4, !tbaa !2 >> %arrayidx12 = getelementptr inbounds i32, i32* %d, i64 %indvars.iv >> ; MemoryUse(4) >> %6 = load i32, i32* %arrayidx12, align 4, !tbaa !2 >> %mul = mul nsw i32 %6, %5 >> %arrayidx14 = getelementptr inbounds i32, i32* %e, i64 %indvars.iv >> *; 2 = MemoryDef(4)* >> st...
2018 Feb 27 · 0 replies · Question about instcombine pass.
...y ], [ %mul10, %for.body ] %indvars.iv = phi i64 [ 1, %entry ], [ %indvars.iv.next, %for.body ] %arrayidx4 = getelementptr inbounds [10 x i32], [10 x i32]* @a, i64 0, i64 %indvars.iv %0 = load i32, i32* %arrayidx4, align 4, !tbaa !2 %mul10 = mul nsw i32 %store_forwarded, %store_forwarded %arrayidx12 = getelementptr inbounds [10 x i32], [10 x i32]* @X, i64 0, i64 %indvars.iv store i32 %mul10, i32* %arrayidx12, align 4, !tbaa !2 store i32 %0, i32* %arrayidx4, align 4, !tbaa !2 %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1 %exitcond = icmp eq i64 %indvars.iv.next, 10 br i1 %exitcond...
2013 Nov 16 · 0 replies · [LLVMdev] SCEV getMulExpr() not propagating Wrap flags
..., %I > %arrayidx3 = getelementptr inbounds i32* %a, i64 %0 > store i32 %add, i32* %arrayidx3, align 4, !tbaa !1 > %2 = or i64 %0, 1 > %arrayidx7 = getelementptr inbounds i32* %b, i64 %2 > %3 = load i32* %arrayidx7, align 4, !tbaa !1 > %add8 = add nsw i32 %3, %I > %arrayidx12 = getelementptr inbounds i32* %a, i64 %2 > store i32 %add8, i32* %arrayidx12, align 4, !tbaa !1 > %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1 > %exitcond = icmp eq i64 %indvars.iv.next, 512 > br i1 %exitcond, label %for.end, label %for.body > > And, when going throu...
2013 Oct 30 · 0 replies · [LLVMdev] loop vectorizer
...%0 = load float* %arrayidx, align 4 > %arrayidx9 = getelementptr inbounds float* %b, i64 %add2 > %1 = load float* %arrayidx9, align 4 > %add10 = fadd float %0, %1 > %arrayidx11 = getelementptr inbounds float* %c, i64 %add2 > store float %add10, float* %arrayidx11, align 4 > %arrayidx12 = getelementptr inbounds float* %a, i64 %add8 > %2 = load float* %arrayidx12, align 4 > %arrayidx13 = getelementptr inbounds float* %b, i64 %add8 > %3 = load float* %arrayidx13, align 4 > %add14 = fadd float %2, %3 > %arrayidx15 = getelementptr inbounds float* %c, i64 %add8 >...