Hi Duncan Sands,

As I understand it, GOMP and OpenMP provide support for parallelizing a program at the source level. But I am working at the IR level; that is, I am trying to parallelize the IR code. This is the case of automatic parallelization: the programmer writing the code has no idea of the parallelization going on under the hood.

So my question is: instead of support at the source level, is there any support at the LLVM IR level to parallelize things?

Regards,
Gokul Ramaswamy H.C

On Tue, Mar 22, 2011 at 10:08 PM, Duncan Sands <baldrick at free.fr> wrote:
> Hi Gokul Ramaswamy,
>
> > I am new to LLVM, so please help me out. Here is what I am trying to
> > achieve:
> >
> > If there are two statements in a source program -
> >   S1;
> >   S2;
> > and I know there is no data or control dependency between them and
> > both take a large amount of time to execute, then I want to execute
> > them in parallel.
> >
> > So as S1 starts executing, I want to launch another thread and
> > execute S2 in parallel.
> >
> > I need help on how to launch a new thread and schedule some specific
> > code on this new thread. I searched for it but did not get
> > satisfactory results. Please help me out, LLVM developers.
>
> llvm-gcc and dragonegg support GOMP (GNU OpenMP). The way it works is
> that the front-end lowers parallel constructs into library calls,
> extra functions and so on.
>
> Ciao, Duncan.
>
> > Regards,
> > Gokul Ramaswamy H.C
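[To make the lowering Duncan describes concrete, here is a rough sketch in plain C of the shape of code a GOMP-based front end produces for a parallel region: the region body is outlined into its own function, and libgomp entry points are called around it. The dispatch logic and data structure below are simplified illustrations, not the exact code dragonegg emits.]

```c
void S1(void);
void S2(void);

/* Prototypes for the libgomp entry points the lowered code calls.
 * Normally these come from the OpenMP runtime, not user code. */
extern void GOMP_parallel_start(void (*fn)(void *), void *data,
                                unsigned num_threads);
extern void GOMP_parallel_end(void);

struct region_data { int next_section; };

/* The parallel region body, outlined into a separate function by the
 * front end.  Each thread grabs one of the two independent statements.
 * A real lowering would use GOMP_sections_start/GOMP_sections_next to
 * hand out sections; this is only meant to show the overall shape. */
static void outlined_region(void *arg)
{
    struct region_data *d = arg;
    int section = __sync_fetch_and_add(&d->next_section, 1);
    if (section == 0)
        S1();
    else if (section == 1)
        S2();
}

int main(void)
{
    struct region_data d = { 0 };

    /* Spawn the team, run the outlined body on every thread
     * (the master participates too), then join. */
    GOMP_parallel_start(outlined_region, &d, 2);
    outlined_region(&d);
    GOMP_parallel_end();
    return 0;
}
```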
On Tue, Mar 22, 2011 at 1:36 PM, Gokul Ramaswamy <gokulhcramaswamy at gmail.com> wrote:
> Hi Duncan Sands,
>
> As I understand it, GOMP and OpenMP provide support for parallelizing a
> program at the source level. But I am working at the IR level; that is,
> I am trying to parallelize the IR code. This is the case of automatic
> parallelization: the programmer writing the code has no idea of the
> parallelization going on under the hood.
>
> So my question is: instead of support at the source level, is there any
> support at the LLVM IR level to parallelize things?

No, you have to insert calls to things like pthreads or GOMP or OpenMP or
whatever threading runtime you choose.

Reid
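[For the two-statement case from the original question, the calls a pass would have to emit correspond roughly to the following C, shown at the source level for readability; an LLVM pass would emit the equivalent `call` instructions to the pthreads functions. S1 and S2 are placeholders for the independent statements.]

```c
#include <pthread.h>

void S1(void);
void S2(void);

/* pthread_create expects a void *(*)(void *) entry point, so S2 is
 * wrapped in a small trampoline function. */
static void *run_S2(void *unused)
{
    (void)unused;
    S2();
    return NULL;
}

int main(void)
{
    pthread_t t;

    /* Launch S2 on a new thread... */
    if (pthread_create(&t, NULL, run_S2, NULL) != 0)
        return 1;

    /* ...while this thread executes S1 concurrently. */
    S1();

    /* Wait for S2 to finish before anything that depends on it runs. */
    pthread_join(t, NULL);
    return 0;
}
```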
On 03/22/2011 01:56 PM, Reid Kleckner wrote:
> On Tue, Mar 22, 2011 at 1:36 PM, Gokul Ramaswamy
> <gokulhcramaswamy at gmail.com> wrote:
>
>     Hi Duncan Sands,
>
>     As I understand it, GOMP and OpenMP provide support for parallelizing
>     a program at the source level. But I am working at the IR level; that
>     is, I am trying to parallelize the IR code. This is the case of
>     automatic parallelization: the programmer writing the code has no
>     idea of the parallelization going on under the hood.
>
>     So my question is: instead of support at the source level, is there
>     any support at the LLVM IR level to parallelize things?
>
> No, you have to insert calls to things like pthreads or GOMP or OpenMP
> or whatever threading runtime you choose.

Which is what we also do in Polly.

In case you just have the simple case of two statements you want to execute
in parallel, I propose writing this as OpenMP-annotated C code, compiling it
with dragonegg to LLVM-IR, and having a look at what code is generated. You
will need to create similar code and similar function calls if you want to
do it at the LLVM-IR level.

One thing that might simplify the code is to specify in OpenMP that you want
to be able to select choices at runtime. A common construct is:

  SCHEDULE(runtime)

This will stop dragonegg from inlining some OpenMP runtime calls, which
could complicate the code unnecessarily.

Cheers
Tobi

P.S.: In the case of directly inserting OpenMP function calls, it would be
nice to have support for a set of LLVM intrinsics that are automatically
lowered to the relevant OpenMP/mpc.sf.net function calls. Let me know if you
think about working on such a thing.
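[As a starting point for the experiment Tobi suggests, the two-statement case can be written with OpenMP sections, and a simple loop shows the schedule(runtime) clause he mentions. Compiling something like this with dragonegg and -fopenmp (see the dragonegg documentation for the exact flags to emit LLVM-IR) and reading the resulting IR shows the GOMP calls and outlined functions a pass would need to generate. S1 and S2 are placeholders.]

```c
#include <stdio.h>

void S1(void);
void S2(void);

int main(void)
{
    double sum = 0.0;
    int i;

    /* Two independent statements, one per section: this is the construct
     * the front end lowers into an outlined function plus GOMP_* calls. */
    #pragma omp parallel sections
    {
        #pragma omp section
        S1();

        #pragma omp section
        S2();
    }

    /* A loop using the schedule(runtime) clause: deferring the schedule
     * choice to run time keeps the front end from specializing (and
     * inlining) some runtime calls, so the emitted IR is easier to read. */
    #pragma omp parallel for schedule(runtime) reduction(+:sum)
    for (i = 0; i < 1000; ++i)
        sum += i * 0.5;

    printf("sum = %f\n", sum);
    return 0;
}
```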