Hi,

We would like to inform the community that we're releasing a version of our research compiler, "AESOP", developed at UMD using LLVM. AESOP is a distance-vector-based autoparallelizing compiler for shared-memory machines. The source code and some further information are available at

   http://aesop.ece.umd.edu

The main components of the released implementation are loop memory dependence analysis and parallel code generation using calls to POSIX threads. Since we currently have only a 2-person development team, we are still on LLVM 3.0, and some of the code could use some cleanup. Still, we hope that the work will be of interest to some.

We would welcome any feedback, comments, or questions!

Thanks,
Tim Creech
On 03/03/2013 02:09 PM, Timothy Mattausch Creech wrote:
> Hi,
> We would like to inform the community that we're releasing a version of our research compiler, "AESOP", developed at UMD using LLVM. AESOP is a distance-vector-based autoparallelizing compiler for shared-memory machines. The source code and some further information is available at
>
>    http://aesop.ece.umd.edu
>
> The main components of the released implementation are loop memory dependence analysis and parallel code generation using calls to POSIX threads.

Interesting! I happen to have just finished the initial TileGX backend support; TileGX is a many-core processor. I am looking forward to testing AESOP on TileGX silicon.

> [...]
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev

--
Regards,
Jiong Wang
Tilera Corporation
Hi,

On 03/03/2013 07:09 AM, Timothy Mattausch Creech wrote:
> [...]
> The main components of the released implementation are loop memory
> dependence analysis and parallel code generation using calls to POSIX
> threads.

The loop memory dependence analysis sounds very interesting to me. Could you provide some more information regarding its capabilities?

Cheers,
Sebastian

--
Mit freundlichen Grüßen / Kind regards

Sebastian Dreßler

Zuse Institute Berlin (ZIB)
Takustraße 7
D-14195 Berlin-Dahlem
Germany

dressler at zib.de
Phone: +49 30 84185-261

http://www.zib.de/
Hi Jiong,

I actually work day-to-day with Tilera processors, and I was very pleased to see your recent mail about the TileGx patch! I have access to a Tile-Gx 8036 myself and am certainly planning to add native TileGx support to AESOP in the near future. (Shouldn't be hard: mostly it will require us to finally upgrade from LLVM 3.0 and compile our runtime dependencies for it.) I expect that we will use Tilera's own barrier implementations (in libtmc) directly in our codegen.

-Tim

On Sun, Mar 03, 2013 at 03:01:23PM +0800, Jiong Wang wrote:
> [...]
> Interesting! I happen to finish the initial TileGX backend
> support, which is a many core processor. I am looking forward to
> testing AESOP on TileGX silicon.
Hi Sebastian,

Sure! The bulk of the LMDA was written by Aparna Kotha (CC'd). It computes dependences between all instructions, computes the resulting direction vectors in the function, and then associates them all with loops.

At a high level, the dependence analysis consults AliasAnalysis and ScalarEvolution before resorting to attempting to understand the effective affine expressions and performing dependence tests (e.g., Banerjee). If it cannot rule out a dependence, it will additionally consult an ArrayPrivatization analysis to see whether an involved memory object can be made thread-private.

It is probably also worth mentioning that the LMDA has been written to function well not only with IR from source code, but also with low-level IR from a binary-to-IR translator in a separate project. This has required new techniques specific to that problem. Aparna can provide more information on the techniques used in our LMDA.

-Tim

On Sun, Mar 03, 2013 at 09:18:47AM +0100, Sebastian Dreßler wrote:
> [...]
> The loop memory dependence analysis sounds very interesting to me. Could
> you provide some more information regarding its capabilities?
Hi Timothy,

> We would like to inform the community that we're releasing a version of our research compiler, "AESOP", developed at UMD using LLVM. AESOP is a distance-vector-based autoparallelizing compiler for shared-memory machines.
> [...]
> The main components of the released implementation are loop memory dependence analysis and parallel code generation using calls to POSIX threads.

Do you have data showing us how much parallelization AESOP can extract from benchmarks? :)

Regards,
chenwj

--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel: 886-2-2788-3799 #1667
Homepage: http://people.cs.nctu.edu.tw/~chenwj
On Mon, Mar 04, 2013 at 03:01:15PM +0800, 陳韋任 (Wei-Ren Chen) wrote:
> [...]
> Do you have data show us that how much parallelization the AESOP can
> extract from those benchmarks? :)

Hi Wei-Ren,

Sorry for the slow response. We're working on a short tech report, which will be up on the website in April. This will contain a "results" section, including results from the SPEC benchmarks, which we can't include in the source distribution.

Briefly, I can say that we get good speedups on some of the NAS and SPEC benchmarks, such as a 3.6x+ speedup on 4 cores on the serial version of NAS "CG" (Fortran), and on "lbm" (C) from CPU2006. (These are of course among our best results.)

-Tim