similar to: [LLVMdev] Speculative Loop Parallelization on LLVM IR

Displaying 20 results from an estimated 3000 matches similar to: "[LLVMdev] Speculative Loop Parallelization on LLVM IR"

2010 Jun 21
0
[LLVMdev] Speculative Loop Parallelization on LLVM IR
Hi Tobias: Thanks for replying. So if I understand correctly, in LLVM currently the polyhedral model is being built (LLVM IR -> Poly Model -> LLVM IR). This is for compile-time optimization of loop nests [e.g. loop transformations to expose parallelism or improve locality, etc.]. Yes, that's great for optimizing loop nests. Additionally, since the real value of LLVM
2010 Jun 18
4
[LLVMdev] Speculative Loop Parallelization on LLVM IR
Hi Javed, On 06/18/10 14:07, Javed Absar wrote: > Hi: > I worked on loop-optimization techniques previously using ORC. > Currently I see lots of research on speculative parallelization of > loops ... especially because multicores [for embedded systems] are > becoming popular. In other words, because you have > multiple cores, you can start some loops [Fast-Track] as if there is
2010 Jun 21
2
[LLVMdev] Speculative Loop Parallelization on LLVM IR
On Mon, Jun 21, 2010 at 1:12 AM, Javed Absar <javed.absar at gmail.com> wrote: > Hi Tobias: > > Thanks for replying. So if I understand correctly, in LLVM currently, the > Polyhedral model is being built (LLVM IR -> Poly Model -> LLVM IR). > This is for compile-time optimizations of loop-nests [e.g. > loop-transformations to expose parallelism
2010 Jun 21
0
[LLVMdev] Speculative Loop Parallelization on LLVM IR
On Mon, Jun 21, 2010 at 10:27 AM, Daniel Berlin <dberlin at dberlin.org> wrote: > On Mon, Jun 21, 2010 at 1:12 AM, Javed Absar <javed.absar at gmail.com> wrote: >> Hi Tobias: >> >> Thanks for replying. So if I understand correctly, in LLVM currently, the >> Polyhedral model is being built (LLVM IR -> Poly Model -> >> LLVM IR).
2010 Jun 21
0
[LLVMdev] Speculative Loop Parallelization on LLVM IR
On 06/21/10 07:12, Javed Absar wrote: > Hi Tobias: > Thanks for replying. So if I understand correctly, in LLVM currently, > the Polyhedral model is being built (LLVM IR -> Poly Model > -> LLVM IR). > This is for compile-time optimizations of loop-nests [e.g. > loop-transformations to expose parallelism or improve locality, etc.]. > Yes, that's great for
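As a rough, hedged illustration of the compile-time use discussed above (not code from the thread; the array name, element type, and size N are made up): a polyhedral framework such as Polly models loop nests like the one below and can, for example, interchange the loops to improve locality when it can prove the transformation legal.

    // Original nest: walks A column by column, so consecutive iterations
    // touch addresses N doubles apart (poor locality for row-major C arrays).
    constexpr int N = 1024;
    void scale_cols(double A[N][N]) {
      for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
          A[i][j] *= 2.0;
    }

    // After loop interchange, consecutive iterations touch adjacent elements;
    // this is the kind of loop-nest transformation the Poly Model step derives.
    void scale_rows(double A[N][N]) {
      for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
          A[i][j] *= 2.0;
    }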
2010 Jul 28
0
[LLVMdev] LLVM meta-data for run-time optimization
Javed Absar wrote: > Hi > > I read on the LLVM blog that meta-data has been implemented to convey debug > information to the run-time system. > Can one use meta-data to convey developer-specific hints to the run-time > system (e.g. a JIT compiler)? > Keen to know your thoughts on this. I don't see why not. I've used LLVM metadata to record type-inference information and to
2010 Jul 28
2
[LLVMdev] LLVM meta-data for run-time optimization
Hi, I read on the LLVM blog that meta-data has been implemented to convey debug information to the run-time system. Can one use meta-data to convey developer-specific hints to the run-time system (e.g. a JIT compiler)? Keen to know your thoughts on this. Thanks Javed -- my homepage: http://www.javedabsar.com
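A minimal sketch of what "developer-specific hints as metadata" could look like through the C++ API, assuming modern llvm/IR/ header paths; the metadata kind "runtime.hint" is an invented convention for illustration, not an existing LLVM feature, and a cooperating JIT would have to know to look for it.

    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Metadata.h"

    using namespace llvm;

    // Attach a free-form string hint to an instruction. A run-time system can
    // read it back with I.getMetadata("runtime.hint"). Unknown metadata kinds
    // may be dropped by other passes, so hints must never affect correctness.
    void attachRuntimeHint(Instruction &I, StringRef Hint) {
      LLVMContext &Ctx = I.getContext();
      MDNode *Node = MDNode::get(Ctx, MDString::get(Ctx, Hint));
      I.setMetadata("runtime.hint", Node);
    }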
2012 Oct 08
0
[LLVMdev] LLVM Loop Vectorizer (Nadav Rotem)
It would be great to get "accurate" dependence analysis from a polyhedral framework. Anyone working on making Polly into an analysis + transforms framework? -Prashantha -----Original Message----- From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of Sahasrabuddhe, Sameer Sent: Monday, October 08, 2012 9:03 AM To: Hal Finkel; Javed Absar Cc: llvmdev at
2012 Oct 08
0
[LLVMdev] LLVM Loop Vectorizer (Nadav Rotem)
Hi Javed, Developing a good loop vectorizer takes several years. The work on the GCC vectorizer began in 2004, and they spent several years improving and optimizing their vectorizer. They started by vectorizing simple loops, and added features that they needed in order to vectorize additional loops that were important for them. They started with single-block loops, and later they added
2012 Oct 07
0
[LLVMdev] LLVM Loop Vectorizer (Nadav Rotem)
Javed, I'd like to add that, mostly through Tobi's efforts, we were able to have isl (the integer set library), on which Polly depends, relicensed such that it is now distributed under the MIT license, and thus Polly should be eligible for inclusion among LLVM's core analysis and transformation passes. -Hal ----- Original Message ----- > From: "Javed Absar"
2012 Oct 08
3
[LLVMdev] LLVM Loop Vectorizer (Nadav Rotem)
> -----Original Message----- > From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On > Behalf Of Hal Finkel > Sent: Monday, October 08, 2012 1:35 AM > > I'd like to add that, mostly through Tobi's efforts, we were able to have isl (the > integer set library) on which Polly depends relicensed such that it is now > distributed under the MIT
2011 Jul 19
3
[LLVMdev] speculative parallelization in LLVM
Hi Renato, No, I cannot, but in case it is, I want to take advantage of this. In case it is not, the instrumentation code will detect this at runtime and simply roll back to the original version. I will always keep an original version available, in addition to the ones I modify with Polly. However, initially I will speculate that it is allocated contiguously. Thanks, Alexandra
2017 Mar 14
2
[cfe-dev] proposal - pragma section directive in clang
Hi Reid, Unfortunately yes, it is. > If we do go with approach 3, I'd recommend adding a single metadata attachment that controls all sections a global could possibly live in (text, data, rdata, bss). I agree with this, although I think using metadata here wouldn't be right - don't we need to use attributes when dropping metadata would cause miscompiles? I was considering adding
2011 Jul 19
0
[LLVMdev] speculative parallelization in LLVM
On 07/18/2011 07:03 PM, Jimborean Alexandra wrote: > Hi, > > I plan to do some speculative parallelization in LLVM using Polly and I > target loops that contain pointers and indirect references. As far as I > know, Polly generates optimized code starting from the SCoPs, therefore > I plan to replace all pointer accesses with array accesses, such that > Polly will accept the
2011 Jul 19
4
[LLVMdev] speculative parallelization in LLVM
Hi Tobi, Thank you for your reply :). I know that array accesses are handled as pointers in LLVM, but as I understand it, Polly is focused on statically analysable code. As you mentioned: proving that pointer accesses actually represent virtual array accesses. In the case of a linked list, for example, traversed with a pointer p = p->next, I expect that Polly will not handle this code. So I
2011 Jul 18
3
[LLVMdev] speculative parallelization in LLVM
Hi, I plan to do some speculative parallelization in LLVM using Polly and I target loops that contain pointers and indirect references. As far as I know, Polly generates optimized code starting from the SCoPs, therefore I plan to replace all pointer accesses with array accesses, such that Polly will accept the code. Each array access should use a linear function of the enclosing loops' indices.
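A hypothetical sketch of that rewrite (struct and function names are invented; this is not code from the thread): the pointer-chasing loop is opaque to static analysis, whereas the speculative array form makes every access a linear function of the loop index, which Polly can model; instrumentation must check the contiguity assumption at runtime and roll back if it does not hold.

    struct Node { int val; Node *next; };

    // Original form: each address depends on runtime data (p->next), so a
    // static polyhedral analysis cannot describe the access pattern.
    void bump_list(Node *head) {
      for (Node *p = head; p != nullptr; p = p->next)
        p->val += 1;
    }

    // Speculative form: assume the n nodes sit contiguously in pool[].
    // pool[i].val is an affine access in i, which Polly can analyse.
    void bump_pool(Node *pool, int n) {
      for (int i = 0; i < n; ++i)
        pool[i].val += 1;
    }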
2017 Mar 14
2
[cfe-dev] proposal - pragma section directive in clang
Thanks Reid/Jonathon for your replies. Reid, An important case against module-level flags is that they won't allow changing or resetting section names, e.g.: int a; #pragma clang section bss = "xyz" int b; In the case above, users would like to see only 'b' placed in 'xyz', and not 'a' as well. The link pointed to by Jonathon seems to require the same behavior.
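A small sketch of the behaviour being asked for, using the spelling proposed in the thread (the section name "xyz" is just an example, and the empty string is assumed to reset the section back to the default):

    int a;                                // stays in the default .bss
    #pragma clang section bss = "xyz"
    int b;                                // zero-initialised global, placed in "xyz"
    #pragma clang section bss = ""
    int c;                                // reset: back in the default .bss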
2018 Jan 11
0
How to get started with instruction scheduling? Advice needed.
Hi Phil, > I've been watching this presentation from a 2014 LLVM dev meeting Thanks for sharing! I am reviewing: * Chapter 10 (Instruction-Level Parallelism) and Chapter 11 (Optimizing for Parallelism and Locality) of Compiler Principles [1] * Adding and Optimizing a Subtarget for MIScheduler [2] by Dave Estes * Scheduler for in-order processors - what's present and
2011 Jul 19
2
[LLVMdev] speculative parallelization in LLVM
This is exactly what I need to achieve with Polly, actually. I think a good idea would be to define intrinsics / metadata, as you mentioned, to tell Polly that even though it cannot analyse these accesses, it should ignore them and perform the code transformations. We can go even further and maybe describe these accesses with some parametric linear functions. For instance: while (cond1){
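One way to read "describe these accesses with some parametric linear functions" (the loop below is invented for illustration): even when the source loop is a while over a moving pointer, iteration i touches address base + i*stride, and that (base, stride) pair is exactly the parametric affine description a polyhedral tool could consume.

    // A while (cond1) { ... } traversal whose accesses can be summarised by
    // the parametric linear function  addr(i) = base + i*stride  (in elements).
    void walk(int *base, long stride, long n) {
      int *p = base;
      long i = 0;
      while (i < n) {          // stands in for "while (cond1)"
        *p += 1;               // access address: base + i*stride
        p += stride;
        ++i;
      }
    }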
2011 Jul 19
0
[LLVMdev] speculative parallelization in LLVM
On 19 July 2011 10:12, Jimborean Alexandra <xinfinity_a at yahoo.com> wrote: > %curr_array = alloca [10 x %struct.linked], align 8 > > while.. >  %tmp16 = getelementptr inbounds [10 x %struct.linked]* %curr_array, i32 0, > i32 1 Hi Alexandra, Can you guarantee that the linked list will be allocated in contiguous memory? cheers, --renato