On 03/22/2011 01:56 PM, Reid Kleckner wrote:
> On Tue, Mar 22, 2011 at 1:36 PM, Gokul Ramaswamy
> <gokulhcramaswamy at gmail.com> wrote:
>
>     Hi Duncan Sands,
>
>     As I understand it, GOMP and OpenMP provide support for
>     parallelizing programs at the source level. But I am working at the
>     IR level; that is, I am trying to parallelize the IR code. This is
>     the case of automatic parallelization: the programmer writing the
>     code has no idea of the parallelization going on under the hood.
>
>     So my question is: instead of support at the source level, is there
>     any support at the LLVM IR level to parallelize things?
>
> No, you have to insert calls to things like pthreads or GOMP or OpenMP
> or whatever threading runtime you choose.

Which is what we also do in Polly.

In case you just have the simple case of two statements you want to
execute in parallel, I propose writing this as OpenMP-annotated C code,
compiling the code with dragonegg to LLVM-IR, and having a look at what
code is generated. You will need to create similar code and similar
function calls if you want to do it at the LLVM-IR level.

One thing that might simplify the code is to specify in OpenMP that you
want to be able to select scheduling choices at runtime. A common
construct is:

  SCHEDULE(runtime)

This will stop dragonegg from inlining some OpenMP runtime calls, which
could otherwise complicate the code unnecessarily.

Cheers
Tobi

P.S.: In case of directly inserting OpenMP function calls, it would be
nice to have support for a set of LLVM intrinsics that are automatically
lowered to the relevant OpenMP/mpc.sf.net function calls. Let me know if
you think about working on such a thing.
Hi,

I am looking into something similar as well for auto-parallelization,
i.e. some sort of low-level support at the IR level for parallelization.
I'd be interested in collaborating with anyone who is working on the
same.

From a brief look at the architectural overview of Polly, it seems as
if the parallel code generation is being done at the IR level, since
the input file is an LLVM IR file? Would it be possible to re-utilize
that functionality for building something to this end?

Thanks
Nipun

On Tue, Mar 22, 2011 at 3:28 PM, Tobias Grosser
<grosser at fim.uni-passau.de> wrote:
> [snip]
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
On Wed, Mar 23, 2011 at 5:26 AM, Nipun Arora <nipun2512 at gmail.com> wrote:
> Hi,
> I am looking into something similar as well for auto-parallelization,
> i.e. some sort of low-level support at the IR level for parallelization.
> I'd be interested in collaborating with anyone who is working on the same.
> From a brief look at the architectural overview of Polly, it seems as if
> the parallel code generation is being done at the IR level since the
> input file is an LLVM IR file?
> Would it be possible to re-utilize that functionality for building
> something to this end?

Adding to Tobias' comments, the following is what Polly with OpenMP
support does. If Polly detects that two statements (preferably for
loops) can be parallelized, it will generate the required GOMP calls
automatically. As of now the interface is not designed in such a way
that it can be reused. If we find that designing such OpenMP intrinsics
is useful for people, we can think about that.

Regards,

--
Raghesh
II MTECH
Room No: 0xFF
Mahanadhi Hostel
IIT Madras