William Moses via llvm-dev
2021-Jan-12 22:13 UTC
[llvm-dev] RFC: Enzyme, Automatic Differentiation for LLVM as an LLVM Incubator Project
Integration into the monorepo is an interesting logistical question. Right now, given the absence of AD in prior LLVM versions, Enzyme aims to support LLVM 7 onwards for users that require a specific LLVM (e.g. Julia currently requires LLVM 9). Obviously integration into the monorepo itself would fix this for subsequent versions, so it's somewhat of a chicken-and-egg issue. I'd love to hear any thoughts from folks on how something like this might eventually be handled. I'd also like to see upstream users (for example to differentiate MLIR -- ideally with nice integration for reductions; see the comment below regarding parallelism).

Outside of logistics, development velocity is somewhat high right now as we explore efficient extensions to Enzyme for parallelism (CPU, GPU, MPI, etc.). In essence, the additional complexity from parallelism stems from the fact that a benign read race in the forward pass becomes a write race in the reverse pass. Ideally this is handled with efficient reductions for performance (we currently support a specific subset of parallel codes and fall back to using atomics); see the sketch after this message for a minimal illustration.

My hope would be to graduate to the monorepo or similar after settling these questions and having more folks battle-test the system.

On Tue, Jan 12, 2021 at 4:33 PM Mehdi AMINI <joker.eph at gmail.com> wrote:

> Hi,
>
> Since the project has already been going for some time, what is the
> current level of maturity?
> You're proposing to come in as an incubator project; how far is it from
> the point where it would be integrated into the monorepo? How do you see
> the roadmap on this?
>
> Thanks,
>
> --
> Mehdi
>
>
> On Tue, Jan 12, 2021 at 12:58 PM William Moses via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>> Hi all,
>>
>> Automatic differentiation (AD) is a key component in algorithms used in
>> machine learning, scientific computing, and elsewhere.
>>
>> For the last year and a half, the `Enzyme` group has been looking at the
>> practical possibility of doing automatic differentiation as part of the
>> LLVM optimization pipeline. Performing automatic differentiation in LLVM
>> is quite beneficial, as it allows all of the languages that lower to LLVM
>> to incorporate automatic differentiation without much additional work. It
>> also allows for automatic differentiation across languages, which is
>> similarly beneficial.
>>
>> One unexpected benefit we found of doing AD at the LLVM level is that
>> there is a significant performance benefit (4.2x in our tests) to be
>> gained by performing AD after LLVM's optimization passes [1].
>>
>> After several months of testing with various users including the Rust
>> [4, 5], C/C++, Julia [6], Fortran, and machine learning communities, we'd
>> like to share LLVM-based automatic differentiation more widely and ask to
>> be considered as an LLVM incubator project.
>>
>> Our code is available here
>> (https://github.com/wsmoses/Enzyme/tree/master/enzyme) as a plugin for
>> LLVM versions 7 through master. We've had weekly meetings for the past
>> several months with folks from MIT, Argonne, Princeton, Google, NVIDIA,
>> and Facebook, and we welcome anyone who wants to join. Documentation and
>> install instructions for Enzyme are available here:
>> https://enzyme.mit.edu.
>> We have our charter available here:
>> https://docs.google.com/document/d/10IK2EgZa-4WF0lOSlkND1_cX3IQLAxEVSOWqbQzNpcs/edit#
>>
>> Performing automatic differentiation inside of LLVM presents several
>> interesting technical questions, which we've explored with the community
>> in a poster and SRC talk at the 2020 US LLVM Dev Meeting [2, 3].
>>
>> The Enzyme team
>>
>> [1] https://proceedings.neurips.cc/paper/2020/file/9332c513ef44b682e9347822c2e457ac-Paper.pdf
>> [2] https://c.wsmoses.com/posters/Enzyme-llvmdev.pdf
>> [3] https://www.youtube.com/watch?v=auQNFDlaXdM,
>>     https://c.wsmoses.com/presentations/enzyme-llvmdev-reduced.pdf
>> [4] https://github.com/tiberiusferreira/oxide-enzyme,
>>     https://github.com/bytesnake/oxide-enzyme
>> [5] https://internals.rust-lang.org/t/automatic-differentiation-differential-programming-via-llvm/13188
>> [6] https://github.com/wsmoses/Enzyme.jl
>>
>> _______________________________________________
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
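William's point about a benign read race in the forward pass becoming a write race in the reverse pass can be made concrete with a minimal hand-written sketch in C with OpenMP. This is not Enzyme output and every name in it is purely illustrative: a scalar that every loop iteration merely reads in the forward pass becomes a scalar that every iteration must accumulate into in the reverse pass, which is efficient with a reduction and correct (but slower) with atomics.

#include <omp.h>

/* Forward pass: every iteration reads the shared scalar `a`.
 * Concurrent reads are benign, so the loop parallelizes trivially. */
void forward(double a, const double *x, double *y, int n) {
  #pragma omp parallel for
  for (int i = 0; i < n; i++)
    y[i] = a * x[i];
}

/* Hand-written reverse pass: the adjoint of `a` now receives a
 * contribution from every iteration, i.e. the benign read race has
 * become a write race on `*d_a`. A reduction keeps per-thread partial
 * sums and combines them once, which is the efficient option. */
void reverse(double a, const double *x, double *d_x,
             const double *d_y, double *d_a, int n) {
  double acc = 0.0;
  #pragma omp parallel for reduction(+ : acc)
  for (int i = 0; i < n; i++) {
    d_x[i] += d_y[i] * a;    /* distinct i per iteration: no race        */
    acc    += d_y[i] * x[i]; /* d(a*x[i])/da = x[i]: would race on *d_a  */
  }
  *d_a += acc;

  /* Fallback when no reduction pattern applies: accumulate directly
   * into *d_a inside the loop under `#pragma omp atomic`. */
}

Choosing between generating a reduction like the one above and falling back to atomics is exactly the tradeoff the message above describes.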
Renato Golin via llvm-dev
2021-Jan-15 17:53 UTC
[llvm-dev] RFC: Enzyme, Automatic Differentiation for LLVM as an LLVM Incubator Project
Hi William,

I think this is a really cool project and worthy of being in LLVM.

For now it's a plugin pass, which goes well with incubator projects, but it
could very well become a standard IR pass that is enabled by flags, etc. We
have similar examples for OpenCL, OpenMP, etc., which need integration on
both the Clang and LLVM sides. It shouldn't be too messy.

On Tue, 12 Jan 2021 at 22:14, William Moses via llvm-dev <
llvm-dev at lists.llvm.org> wrote:

> Obviously integration into the monorepo itself would fix this for
> subsequent versions, so it's somewhat of a chicken-and-egg issue. I'd
> love to hear any thoughts from folks on how something like this might
> eventually be handled. I'd also like to see upstream users (for example
> to differentiate MLIR -- ideally with nice integration for reductions;
> see the comment below regarding parallelism).

I think in the long term you should split support for older versions from
the trunk version. Once (and if) it gets merged into the monorepo, we can
keep the incubator repo for the previous versions only, pinned to official
releases, mostly static and kept for historical value. People should be
encouraged to use the newer versions that have native support.

Differentiating MLIR would probably require separate infrastructure; I'm
not sure the same libraries could be reused. I haven't dug in too much, but
I expect some of the analyses to be tailored to LLVM's operations and
types, which have fixed semantics and are very different from MLIR's custom
dialects, operations, and types. But it would be really nice if we had that
at the MLIR level, at least for the standard upstream dialects, as it would
be a major boost for ML compilers to start using MLIR more aggressively.

cheers,
--renato
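For anyone who wants to try the plugin workflow Renato refers to, the sketch below follows the general shape of the usage documented at https://enzyme.mit.edu. The exact shared-object name, pass flag, and compile flags depend on your LLVM version and how Enzyme was built, so treat them as illustrative assumptions rather than authoritative instructions.

#include <stdio.h>

double square(double x) { return x * x; }

/* Declared but not defined: Enzyme's pass replaces calls to this
 * symbol with a generated derivative of the function passed in. */
double __enzyme_autodiff(void *, double);

int main(void) {
  double x = 3.14;
  double dx = __enzyme_autodiff((void *)square, x); /* d/dx x*x = 2x */
  printf("square(%f) = %f, square'(%f) = %f\n", x, x * x, x, dx);
  return 0;
}

/* Illustrative out-of-tree build, e.g. against LLVM 7 (library name and
 * flags are assumptions based on the Enzyme docs, not verified here):
 *
 *   clang test.c -S -emit-llvm -o input.ll -O2 -fno-vectorize -fno-unroll-loops
 *   opt input.ll -load=./Enzyme/LLVMEnzyme-7.so -enzyme -S -o output.ll
 *   clang output.ll -O3 -o test
 *
 * If Enzyme became a standard in-tree IR pass, the separate `opt -load`
 * step would collapse into an ordinary pass enabled by a flag, which is
 * the kind of integration discussed above. */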