Displaying 20 results from an estimated 1000 matches similar to: "[LLVMdev] [Polly] Assert in Scope construction"
2013 Jul 05
0
[LLVMdev] [Polly] Assert in Scope construction
Hi Sergei,
On Thu, Jul 4, 2013 at 1:36 AM, Sergei Larin <slarin at codeaurora.org> wrote:
> Should have changed the subject line...
>
> ---
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted
> by
> The Linux Foundation
>
>
> > -----Original Message-----
> > From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at
2013 Jul 02
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On 07/01/2013 09:41 AM, Renato Golin wrote:
> On 1 July 2013 02:02, Chris Matthews <chris.matthews at apple.com> wrote:
>
>> One thing that LNT is doing to help “smooth” the results for you is by
>> presenting the min of the data at a particular revision, which (hopefully)
>> is approximating the actual runtime without noise.
>>
>
> That's an
2013 Jul 01
2
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On 1 July 2013 02:02, Chris Matthews <chris.matthews at apple.com> wrote:
> One thing that LNT is doing to help “smooth” the results for you is by
> presenting the min of the data at a particular revision, which (hopefully)
> is approximating the actual runtime without noise.
>
That's an interesting idea, as you said, if you run multiple times on every
revision.
On ARM,
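As a side note for readers, here is a minimal C sketch of the min-of-samples idea quoted above (purely illustrative: LNT itself is written in Python, and the function name is invented):

#include <stddef.h>

/* Return the smallest of the samples taken at one revision; the minimum is
   used as an approximation of the noise-free runtime, on the assumption
   that measurement noise only ever adds time. */
double min_sample(const double *samples, size_t n)
{
    double best = samples[0];
    for (size_t i = 1; i < n; i++)
        if (samples[i] < best)
            best = samples[i];
    return best;
}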
2013 Aug 16
2
[LLVMdev] [Polly] Analysis of extra compile-time overhead for simple nested loops
Hi Sebpop,
Thanks for your explanation.
I noticed that Polly eventually runs the SROA pass to transform these load/store instructions back into scalar operations. Is it possible to run such a pass before the polly-dependence analysis?
Star Tan
At 2013-08-15 21:12:53, "Sebastian Pop" <sebpop at gmail.com> wrote:
>Codeprepare and independent blocks are introducing these loads and
2013 Aug 15
0
[LLVMdev] [Polly] Analysis of extra compile-time overhead for simple nested loops
Codeprepare and independent blocks are introducing these loads and stores.
These are prepasses that polly runs prior to building the dependence graph
to transform scalar dependences into data dependences.
Ether was working on eliminating the rewrite of scalar dependences.
On Thu, Aug 15, 2013 at 5:32 AM, Star Tan <tanmx_star at yeah.net> wrote:
> Hi all,
>
> I have investigated the
2013 Aug 16
0
[LLVMdev] [Polly] Analysis of extra compile-time overhead for simple nested loops
I do not think that running SROA before polly is a good idea:
it would defeat the purpose of the code preparation passes that
polly intentionally schedules for the data dependence analysis.
If you remove the data references before polly runs, you would
miss them in the dependence graph: that could lead to incorrect
transforms.
On Thu, Aug 15, 2013 at 7:28 PM, Star Tan <tanmx_star at
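As an illustration of the scalar-dependence point above, here is a hand-written C sketch (the function is invented, not taken from the benchmark; whether a given scalar is actually demoted depends on the prepare passes' own rules):

/* 'last' is defined inside the loop and used after it, i.e. a scalar
   dependence that crosses basic blocks.  The point made above is that
   Polly's prepasses rewrite such values through memory (a store at the
   definition, a load at the use) so the dependence appears in the
   data-dependence graph, and that running SROA first would promote those
   slots back to SSA values and hide the dependence from the analysis. */
int last_element(const int *A, int n)
{
    int last = 0;
    for (int i = 0; i < n; i++)
        last = A[i];
    return last;
}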
2013 Aug 15
4
[LLVMdev] [Polly] Analysis of extra compile-time overhead for simple nested loops
Hi all,
I have investigated the 6X extra compile-time overhead when Polly compiles the simple nestedloop benchmark in the LLVM test-suite (http://188.40.87.11:8000/db_default/v4/nts/31?compare_to=28&baseline=28). Preliminary results show that this compile-time overhead is caused by the complicated polly-dependence analysis. However, the key seems to be the polly-prepare pass, which introduces
2016 Jul 27
2
Remove zext-unfolding from InstCombine
Hi Sanjay,
thank you a lot for your answer. I understand that in your examples it is desirable that `foo` and `goo` are canonicalized to the same IR, i.e., something like `@goo`. However, I still have a few open questions, but please correct me in case I'm thinking in the wrong direction.
> Am 21.07.2016 um 18:51 schrieb Sanjay Patel <spatel at rotateright.com>:
>
> I've
2013 Aug 16
0
[LLVMdev] [Polly] Analysis of extra compile-time overhead for simple nested loops
On 08/15/2013 03:32 AM, Star Tan wrote:
> Hi all,
Hi,
I tried to reproduce your findings, but could not do so.
> I have investigated the 6X extra compile-time overhead when Polly compiles the simple nestedloop benchmark in the LLVM test-suite (http://188.40.87.11:8000/db_default/v4/nts/31?compare_to=28&baseline=28). Preliminary results show that this compile-time overhead is caused by
2016 Jul 21
2
Remove zext-unfolding from InstCombine
Hi all,
I have a question regarding a transformation that is carried out in InstCombine, which was introduced by r48715. It unfolds expressions of the form `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` to expose pairs of `zext(icmp)`. In a subsequent iteration these `zext(icmp)` pairs could then (possibly) be optimized by another optimization (which has already been there before
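For illustration, here is a hypothetical C function whose optimized IR typically takes this zext-of-or-of-icmps shape (the function name is invented):

/* With optimization, the IR for the return expression typically ends up as
   a zext of an or of two icmps; the unfolding discussed in this thread
   would instead produce or(zext(icmp), zext(icmp)). */
int either_zero(int a, int b)
{
    return (a == 0) | (b == 0);
}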
2013 Feb 07
2
[LLVMdev] Rotated loop identification
Thanks for your reply.
Maybe it wasn't so clear, but the optimization I'm writing is target-dependent,
so it's declared inside the target backend and runs after the target-independent
optimizer. So I cannot run my pass just after LoopRotate and before InstCombine.
I can still use ScalarEvolution, but the instruction combiner can in some cases
simplify the entry guard condition.
A small
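For context, here is a hand-written C sketch of a rotated loop with an entry guard (illustrative only, not the poster's code; `work` is an invented helper):

void work(int i);   /* invented helper, stands in for the loop body */

/* LoopRotate turns a top-tested loop into a bottom-tested (do-while) loop
   guarded by an entry condition; the point above is that InstCombine may
   later simplify or fold that guard, making the rotated shape harder to
   recognize from a late, target-specific pass. */
void rotated(int n)
{
    if (n > 0) {              /* entry guard produced by rotation */
        int i = 0;
        do {
            work(i);
            ++i;
        } while (i < n);
    }
}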
2017 Dec 19
4
A code layout related side-effect introduced by rL318299
Hi,
Recently a 10% performance regression on an important benchmark showed up
after we integrated https://reviews.llvm.org/rL318299. The analysis showed
that rL318299 triggered loop rotation on a multi-exit loop, and the loop
rotation introduced a code layout issue. The performance regression is a
side-effect of rL318299. I have two testcases, a.ll and b.ll, attached to
illustrate the problem. a.ll
2017 Dec 19
2
A code layout related side-effect introduced by rL318299
On Mon, Dec 18, 2017 at 5:46 PM Xinliang David Li <davidxl at google.com>
wrote:
> The introduction of the cleanup.cond block in b.ll without loop rotation
> already makes the layout worse than a.ll.
>
>
> Without introducing the cleanup.cond block, the layout is
>
> entry->while.cond -> while.body->ret
>
> All the arrows are hot fall through edges which is
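A rough, hand-written C sketch of a loop whose CFG corresponds to the entry/while.cond/while.body/ret shape quoted above (this is not a.ll or b.ll):

/* A plain while loop.  Laid out in the order entry, while.cond, while.body,
   ret, the edges entry -> while.cond and while.cond -> while.body are hot
   fall-throughs, which is the layout property described in the quote. */
int drain(const int *p)
{
    int sum = 0;
    while (*p != 0) {     /* while.cond: header and exit test */
        sum += *p;        /* while.body */
        ++p;
    }
    return sum;           /* return block */
}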
2020 Apr 04
4
Legality of transformation
Please consider the following C code:
#include <assert.h>

#define SZ 2048

int main(void) {
  int A[SZ];
  int B[SZ];
  int i, tmp;
  for (i = 0; i < SZ; i++) {
    tmp = A[i];
    B[i] = tmp;
  }
  assert(A[SZ/2] == B[SZ/2]);
}
On running -O1 followed by -reg2mem I get the following IR:
define dso_local i32 @main() local_unnamed_addr #0 {
entry:
  %A = alloca [2048
2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
Hello,
If I run clang on the following code:
void func(unsigned n) {
  for (unsigned long x = 1; x < n; ++x)
    dummy(x);
}
I get the following llvm ir:
define void @func(i32 %n) {
entry:
  %conv = zext i32 %n to i64
  %cmp5 = icmp ugt i32 %n, 1
  br i1 %cmp5, label %for.body, label %for.cond.cleanup

for.cond.cleanup:
2010 Nov 29
1
cross tabulate variables by subject id
Dear list,
I have data like this:
dat1 <- data.frame(subject=rep(1:10,2),
                   cond1=rep(c("A","B"),each=5),
                   cond2=rep(c("C","D"),each=10),
                   choice=sample(0:1,10,replace=TRUE))
I would like to compare subjects' "choice" for (cond1=="A" &
cond2=="C") vs
2007 Dec 04
2
Multiple stacked barplots on the same graph?
Dear R-Users,
I would like to know whether it is possible to draw several
stacked barplots (i.e. side by side on the same sheet)...
my data look like :
                 Cond1   Cond1'   Cond2   Cond2'
Compartment 1    11,81    2,05    12,49    0,70
Compartment 2    10,51    1,98    13,56    0,85
Compartment 3     1,95    0,63     2,81    0,22
Compartment 4     2,08    0,17
2012 Nov 26
2
[LLVMdev] RFC: change BoundsChecking.cpp to use address-based tests
I am investigating changing BoundsChecking to use address-based rather
than size- & offset-based tests.
To explain, here is a short code sample cribbed from one of the tests:
%mem = tail call i8* @calloc(i64 1, i64 %elements)
%memobj = bitcast i8* %mem to i64*
%ptr = getelementptr inbounds i64* %memobj, i64 %index
%4 = load i64* %ptr, align 8
Currently, the IR for bounds checking
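To make the distinction concrete, here is a hedged C sketch of the two check shapes (variable names are invented; this is not the pass's actual codegen):

#include <stdbool.h>
#include <stdint.h>

/* Size- & offset-based form: check the access's offset within the object. */
bool offset_based(uint64_t obj_size, uint64_t offset, uint64_t access_size)
{
    /* a "negative" offset wraps to a huge value and fails the first test */
    return offset <= obj_size && obj_size - offset >= access_size;
}

/* Address-based form: compare the access range against [base, base+obj_size). */
bool address_based(uint64_t base, uint64_t obj_size,
                   uint64_t addr, uint64_t access_size)
{
    uint64_t end = base + obj_size;  /* assumes base + obj_size does not wrap */
    return addr >= base && addr <= end && end - addr >= access_size;
}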
2018 Aug 15
2
[SCEV] Why is backedge-taken count <nsw> instead of <nuw>?
Is that why we do not deduce +<nsw> from "add nsw" either?
Is that an intrinsic limitation of creating a context-invariant expression
from a Value*, or is that a limitation of our implementation (our
unification not considering the nsw flags)?
On Wed, Aug 15, 2018 at 12:39 PM Friedman, Eli <efriedma at codeaurora.org>
wrote:
> On 8/15/2018 12:21 PM, Alexandre Isoard via
2015 Jul 16
3
[LLVMdev] why LoopUnswitch pass does not constant fold conditional branch and merge blocks
Hi,
I have a general question about the LoopUnswitch pass.
Consider the following IR snippet:
define i32 @test(i1 %cond) {
br label %loop_begin
loop_begin:
br i1 %cond, label %loop_body, label %loop_exit
loop_body:
br label %do_something
do_something:
call void @some_func() noreturn nounwind
br label %loop_begin
loop_exit:
ret i32 0
}
declare void @some_func() noreturn
After running