Displaying 20 results from an estimated 20000 matches similar to: "[LLVMdev] no differnce in the execution time between seq. and parallel programs"

2012 Jun 05 · 0 · [LLVMdev] no differnce in the execution time between seq. and parallel programs
Hi esraa,
> I am using LLVM to execute two programs that are essentially identical, except that the first is a sequential program with three functions, while in the second I give each function to a thread (three threads working in parallel). But when I calculate the execution time, I find the execution time for …

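For reference, a minimal sketch of the experiment being described (not the poster's actual program; f1/f2/f3, the dummy workload, and the loop bound are all made up): run three independent functions once sequentially and once on three pthreads, timing both with a wall clock. Built with something like cc -O2 -pthread, the parallel half should run close to three times faster on a multicore machine, unless the functions do too little work to amortize thread startup.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    /* Three independent, CPU-bound dummy workloads. The volatile
       accumulator keeps the optimizer from deleting the loop. */
    static void *f1(void *p) {
        volatile double s = 0;
        for (long i = 0; i < 100000000L; i++) s += i;
        return p;
    }
    static void *f2(void *p) { return f1(p); }
    static void *f3(void *p) { return f1(p); }

    static double now(void) {            /* wall time, not CPU time */
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        double t0 = now();
        f1(0); f2(0); f3(0);             /* sequential version */
        double t1 = now();

        pthread_t a, b, c;               /* parallel version */
        pthread_create(&a, 0, f1, 0);
        pthread_create(&b, 0, f2, 0);
        pthread_create(&c, 0, f3, 0);
        pthread_join(a, 0);
        pthread_join(b, 0);
        pthread_join(c, 0);
        double t2 = now();

        printf("sequential: %.3fs  parallel: %.3fs\n", t1 - t0, t2 - t1);
        return 0;
    }
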
2012 Jun 06 · 2 · [LLVMdev] no differnce in the execution time between seq. and parallel programs
Duncan Sands <baldrick <at> free.fr> writes:
> Hi esraa,
>> I am using LLVM to execute two programs that are essentially identical, except that the first is a sequential program with three functions. In the second I give each function to a thread (three threads working in parallel) …

2012 Jun 07 · 2 · [LLVMdev] no differnce in the execution time between seq. and parallel programs
Duncan Sands <baldrick <at> free.fr> writes:
>> I would be happy if you could give me an insight into what could have caused the difference.
> No idea. Maybe you forgot to turn optimizations on when compiling? Otherwise you are going to have to send in your programs along with an explanation of exactly how you compiled them (exact sequence of …

2012 Jun 05 · 0 · [LLVMdev] no differnce in the execution time between seq. and parallel programs
Hi all, please can anyone help me find the reason? I am using LLVM to execute two programs that are essentially identical, except that the first is a sequential program with three functions. In the second I give each function to a thread (three threads working in parallel). But when I calculate the execution time, I find …

2012 Jun 06 · 0 · [LLVMdev] no differnce in the execution time between seq. and parallel programs
> I would be happy if you could give me an insight into what could have caused the difference.
No idea. Maybe you forgot to turn optimizations on when compiling? Otherwise you are going to have to send in your programs along with an explanation of exactly how you compiled them (exact sequence of commands), both for gcc and LLVM. Ciao, Duncan.

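Duncan's optimization-flags guess aside, another classic way to get identical timings from this kind of experiment (a general pitfall, not something established in this thread) is to measure with clock(), which returns CPU time; on Linux that is summed over all threads, so three threads doing the same total work report the same clock() delta as the sequential run. A sketch of measuring both quantities, assuming both versions are built with -O2 -pthread under gcc and clang:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        clock_t c0 = clock();                 /* CPU time: sums across threads */
        struct timespec w0;
        clock_gettime(CLOCK_MONOTONIC, &w0);  /* wall time: what speedup means */

        /* ... run the workload under test here ... */

        clock_t c1 = clock();
        struct timespec w1;
        clock_gettime(CLOCK_MONOTONIC, &w1);

        printf("cpu:  %.3fs\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
        printf("wall: %.3fs\n",
               (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9);
        return 0;
    }
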
2013 Feb 04 · 2 · [LLVMdev] RFC: [PATCH] parallel loop metadata
Hello all, thanks for the comments. Attached is a new version with Tobias' and Sebastian's (final?) comments addressed. Any further comments are appreciated. Nadav suggested a request for comments on llvmdev before committing it. To describe the current idea of the parallel loop metadata, I think it is easiest to copy-paste the documentation I wrote for this patch, so …

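The copy-pasted documentation is cut off above. As a rough illustration of what the metadata in this patch (llvm.loop plus llvm.mem.parallel_loop_access) asserts, here is a C loop whose iterations a front end or Polly might promise to be dependence-free; the pragma shown is today's clang spelling of that promise, an assumption on my part and not part of the 2013 patch itself:

    /* The assume_safety pragma tells the vectorizer to trust that there
       are no loop-carried memory dependences here, which is the same
       claim the parallel loop metadata encodes at the IR level. */
    void saxpy(float *a, const float *b, float k, int n) {
        #pragma clang loop vectorize(assume_safety)
        for (int i = 0; i < n; i++)
            a[i] += k * b[i];
    }
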
2011 Jan 07 · 1 · [LLVMdev] Proposal: Generic auto-vectorization and parallelization approach for LLVM and Polly
Hi Tobi,
>> 2. Allow some generic parallelism information to live outside the specific autopar framework, so this information can benefit more passes in LLVM. For example, the X86 and PTX backends could use this information to perform target-specific auto-vectorization.
> What other types of parallelism are you expecting? We currently support thread-level …

2015 Mar 09 · 4 · [LLVMdev] LLVM Parallel IR
On 9 March 2015 at 17:30, Tobias Grosser <tgrosser at inf.ethz.ch> wrote:
> If my memory is right, one of the critical issues (besides other engineering considerations) was that parallelism metadata in LLVM is optional and can always be dropped. However, for OpenMP it is sometimes incorrect to execute sequentially a loop that has been marked parallel in the source …

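A small sketch of why serializing can be incorrect rather than merely slow (my own example, not one from the thread): the two OpenMP threads below hand a value to each other, so they make progress only if they really run concurrently, and an execution that drops the parallelism and runs the region on one thread spins forever.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        volatile int flag = 0;      /* volatile keeps the re-reads; real
                                       code would use atomics */
        #pragma omp parallel num_threads(2)
        {
            if (omp_get_thread_num() == 0) {
                while (!flag) { }   /* thread 0 waits for thread 1 ... */
                printf("done\n");
            } else {
                flag = 1;           /* ... which never runs if the region
                                       is executed by a single thread */
            }
        }
        return 0;
    }
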
2011 Jan 06 · 0 · [LLVMdev] Proposal: Generic auto-vectorization and parallelization approach for LLVM and Polly
On 01/06/2011 03:38 AM, ether zhhb wrote:
> Hi,
> I just had a detailed look at the code of Polly[1]; it seems that Polly is starting to support some basic auto-parallelization.
This is true, though it is still work in progress. I hope we can soon show some interesting results.
> I have some ideas to improve the current auto-vectorization and parallelization approach in …

2011 Jan 06 · 3 · [LLVMdev] Proposal: Generic auto-vectorization and parallelization approach for LLVM and Polly
Hi, I just had a detailed look at the code of Polly[1]; it seems that Polly is starting to support some basic auto-parallelization. I have some ideas to improve the current auto-vectorization and parallelization approach in Polly. The main idea is to separate the transform passes from the codegen passes for auto-parallelization and vectorization (Graphite[2] for gcc seems to be taking a similar approach) …

2015 Mar 09 · 5 · [LLVMdev] LLVM Parallel IR
I'm part of a research group at MIT looking to create an extension of LLVM that inherently allows one to nicely code a parallel loop. Most parallel frameworks tend to take the body of a parallel loop and stick it inside a function for the parallel runtime to call when appropriate. However, this makes optimizations significantly more difficult, as most compiler optimizations tend to be …

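A sketch of the outlining the poster refers to, under made-up names (body, parallel_for): the loop body moves into its own function behind a void* argument block, GOMP/kmpc style, and the runtime call in the middle is opaque to the optimizer, which is why constant propagation, LICM, and similar passes stop at the region boundary.

    /* Outlined loop body: captured variables travel through a struct. */
    struct body_args { float *a; float c; };

    static void body(void *p, long i) {
        struct body_args *args = p;
        args->a[i] = args->c * args->a[i];
    }

    /* Stand-in for the parallel runtime's entry point; a real runtime
       would distribute [0, n) over worker threads. */
    static void parallel_for(void (*f)(void *, long), void *p, long n) {
        for (long i = 0; i < n; i++) f(p, i);
    }

    void scale(float *a, float c, long n) {
        struct body_args args = { a, c };
        /* was: for (long i = 0; i < n; i++) a[i] = c * a[i]; */
        parallel_for(body, &args, n);
    }
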
2018 Jun 07 · 2 · [RFC] Abstract Parallel IR Optimizations
This is an RFC to add analyses and transformation passes to LLVM that optimize programs based on an abstract notion of a parallel region. == this is _not_ a proposal to add a new encoding of parallelism == We currently perform poorly when it comes to optimizations for parallel code. In fact, parallelizing your loops might actually prevent various optimizations that would otherwise have been applied …

2008 Dec 25 · 2 · [LLVMdev] Questions on Parallelism and Data Dependence Analysis
Hi, I have two questions about LLVM and very much look forward to your reply. 1. Does LLVM have any plan to support thread-level parallelism using OpenMP, MPI, pthreads, or LLVM-defined directives? If automatic parallelization is very hard, what is the key problem? Is it that we can't get precise data-dependence information at compile time? 2. Can I use the functions provided by LLVM to …

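On the data-dependence point, the compile-time question looks like this (a generic illustration, not from the original mail): the first loop has no loop-carried dependence and could run its iterations in parallel, provided the compiler can also prove a and b do not alias, which is exactly where precise dependence information gets hard; the second loop carries a distance-1 dependence and must stay serial.

    /* Parallelizable if a and b are known not to alias. */
    void independent(float *a, const float *b, int n) {
        for (int i = 0; i < n; i++)
            a[i] = b[i] + 1.0f;    /* iterations touch disjoint elements */
    }

    /* Not parallelizable: each iteration reads the previous one's write. */
    void carried(float *a, int n) {
        for (int i = 1; i < n; i++)
            a[i] = a[i - 1] + 1.0f;
    }
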
2017 Jan 28 · 3 · [RFC][PIR] Parallel LLVM IR -- Stage 0 -- IR extension
Dear all, this RFC proposes three new LLVM IR instructions to express high-level parallel constructs in a simple, low-level fashion. For this first stage we have prepared two commits that add the proposed instructions and a pass to lower them to sequential IR. Both patches have been uploaded for review [1, 2]. The latter patch is very simple, and the former consists almost entirely of mechanical …

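The snippet does not name the three instructions, so here is only a C-level sketch of the fork-join semantics such an extension encodes and of what "lower them to obtain sequential IR" means: a forked task may overlap its continuation, and the lowering simply runs the task at the fork point (work is a made-up task body, not anything from the patches).

    #include <pthread.h>

    static void *work(void *p) { /* ... task body ... */ return p; }

    void parallel_form(void) {
        pthread_t t;
        pthread_create(&t, 0, work, 0);  /* fork: task may overlap ... */
        /* ... the continuation executing here ... */
        pthread_join(t, 0);              /* join: wait for the task */
    }

    void lowered_form(void) {
        work(0);                         /* sequential lowering: run the
                                            task at the fork point */
        /* ... continuation ... */
    }
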
2017 Mar 16 · 2 · [GSoC] Project Proposal: Parallel extensions for llvm analysis and transform framework
Hello, below is a proposal for a GSoC project that I would like to work on this year. Your input and feedback are much appreciated. Background: My name is Kareem Ergawy and I currently work as part of the PIR project. PIR is an extension of the IR to support fork-join parallelism and is currently under review [1, 2, 3, 4]. Goals: As a GSoC project, I propose here an …

2008 Apr 18 · 2 · naive question regarding running parallel C code from R
Hi, I have only the vaguest notion of what parallel programming is, but I think I have a situation where it might be of use to me, or at least provide an opportunity to learn more about it. Before I invest in figuring out the nuts and bolts, can anyone confirm that this is a sane approach, or suggest alternatives I could pursue? I'm running stochastic simulations, with the actual …

2013 Jan 29 · 3 · [LLVMdev] [PATCH] parallel loop awareness to the LoopVectorizer
On Jan 29, 2013, at 12:51 AM, Tobias Grosser <tobias at grosser.es> wrote:
> # ignore assumed dependences.
> for (i = 0; i < 4; i++) {
>   tmp1 = A[3*i+1];
>   tmp2 = A[3*i+2];
>   tmp3 = tmp1 + tmp2;
>   A[3*i] = tmp3;
> }
>
> Now I apply, for whatever reason, a partial reg2mem transformation:
>
> float tmp3[1];
>
> # ignore assumed …

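The message is truncated right as the transformed loop begins; a hypothetical continuation (my reconstruction, not Tobias' actual text) makes the hazard concrete: the store and load through tmp3 are new memory accesses that carry no parallel-access metadata, so a loop that was correctly marked parallel now looks sequential even though nothing real changed.

    /* Hypothetical continuation: tmp3 demoted from a register to memory
       by partial reg2mem, using the variables from the loop above. */
    float tmp3[1];
    for (i = 0; i < 4; i++) {
        tmp1 = A[3*i+1];
        tmp2 = A[3*i+2];
        tmp3[0] = tmp1 + tmp2;   /* new store: no parallel metadata */
        A[3*i] = tmp3[0];        /* new load feeding the store to A */
    }
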
2017 Mar 08 · 2 · [RFC][PIR] Parallel LLVM IR -- Stage 0 --
> On Mar 8, 2017, at 11:50 AM, Hal Finkel <hfinkel at anl.gov> wrote:
> On 03/08/2017 01:24 PM, Tian, Xinmin wrote:
>> I assume the case being referred to is something like the one below, right?
>>
>> #pragma omp parallel num_threads(n)
>> {
>>   #pragma omp critical
>>   {
>>     x = x + 1;
>>   }
>> }

2017 Mar 08 · 3 · [RFC][PIR] Parallel LLVM IR -- Stage 0 --
I assume the case being referred to is something like the one below, right?

#pragma omp parallel num_threads(n)
{
  #pragma omp critical
  {
    x = x + 1;
  }
}

If that is the case, the programmer is already writing code that is not "serial equivalent". Our representation for the parallelizer is %t = @llvm.region.entry()["omp.parallel"(), …

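Why this is not "serial equivalent": the critical section only makes the updates atomic; each of the n threads still increments x once, so the result depends on the thread count, and collapsing the region to one thread changes the answer. A compilable version of the example (the values in the comment assume the runtime grants all four requested threads):

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int x = 0;
        #pragma omp parallel num_threads(4)
        {
            #pragma omp critical
            { x = x + 1; }
        }
        printf("%d\n", x);  /* prints 4; a one-thread serialization
                               would print 1 */
        return 0;
    }
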
2013 Jan 30 · 0 · [LLVMdev] [PATCH] parallel loop awareness to the LoopVectorizer
On 01/29/2013 07:58 PM, Nadav Rotem wrote:
> On Jan 29, 2013, at 12:51 AM, Tobias Grosser <tobias at grosser.es> wrote:
>> # ignore assumed dependences.
>> for (i = 0; i < 4; i++) {
>>   tmp1 = A[3*i+1];
>>   tmp2 = A[3*i+2];
>>   tmp3 = tmp1 + tmp2;
>>   A[3*i] = tmp3;
>> }
>> …