Renato Golin via llvm-dev
2019-Sep-09 22:39 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
On Mon, 9 Sep 2019 at 22:22, Chris Lattner <clattner at google.com> wrote:
> Including a bunch of content, eg a full langref doc:
> https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md

Thanks Chris, that looks awesome!

This one could perhaps be improved with time:
https://github.com/tensorflow/mlir/blob/master/g3doc/ConversionToLLVMDialect.md

Which I think was Hal's point. If we had a front-end already using it
in tree, we could be a bit more relaxed with the conversion
specification.

I remember when I did the EDG bridge to LLVM, I mostly repeated
whatever Clang was doing, "bug-for-bug". :)

A cheeky request, perhaps, for the Flang people: they could help improve
that document with what they have learned using MLIR as a front-end into
LLVM IR. That way we get some common patterns written down, and we also
get to review their assumptions earlier, making sure that both Flang and
MLIR co-evolve into something simpler.

cheers,
--renato
Mehdi Amini via llvm-dev
2019-Sep-09 22:57 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
On Mon, Sep 9, 2019 at 12:30 PM Renato Golin <rengolin at gmail.com> wrote:
> Overall, I think it will be a good move.
>
> Maintenance wise, I'm expecting the existing community to move into
> LLVM (if not all in already), so I don't foresee any additional costs.
>
> Though, Hal's points are spot on...
>
> On Mon, 9 Sep 2019 at 18:47, Finkel, Hal J. via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> > 3. As a specific example of the above, the current development of the
> > new Flang compiler depends on MLIR.
>
> Who knows, one day, Clang can, too! :)
>
> > 5. As a community, we have been moving toward increasing support for
> > heterogeneous computing and accelerators (and given industry trends, I
> > expect this to continue), and MLIR can facilitate that support in many
> > cases (although I expect we'll see further enhancements in the core LLVM
> > libraries as well).
>
> Yes, and yes! MLIR can become a simpler entry point into LLVM, from
> other languages, frameworks and optimisation plugins. A more abstract
> representation and a more stable IR generation from it could make
> maintenance of external projects much easier than the direct connections
> of today. This could benefit research as much as enterprise, and by
> consequence, the LLVM project.

Thanks for the great summary, this is exactly my view as well!

> > That all having been said, I think that it's going to be very important
> > to develop some documentation on how a frontend author looking to use LLVM
> > backend technology, and a developer looking to implement different kinds of
> > functionality, might reasonably choose whether to target or enhance MLIR
> > components, LLVM components, or both. I expect that this kind of advice
> > will evolve over time, but I'm sure we'll need it sooner rather than later.
>
> Right, I'm also worried that it's too broad in respect to what it can
> do on paper, versus what LLVM can handle in code.
>
> With MLIR as a separate project, that point is interesting, at most.
> When it becomes part of the LLVM umbrella, then we need to make sure
> that MLIR and LLVM IR interact within known boundaries and expected
> behaviour.
>
> I'm not saying MLIR can't be used for anything else after the move,
> just saying that, by being inside the repo, and maintained by our
> community, LLVM IR would end up as the *primary* target, and there
> will be minimum stability/functionality requirements.

I fully agree with everything you wrote! :) I really hope that MLIR can
succeed as an enabler for users to plug into the LLVM ecosystem.

An example of something that MLIR tries to solve elegantly on top of
LLVM is heterogeneous computing. Today, a compiler framework that wants
to support a device accelerator (like a GPU) has to manage, outside of /
above LLVM, how to split the host and device computation. MLIR allows
both to live in the same module, and provides convenient facilities for
the "codegen" and the integration with LLVM.

This is still a work in progress, but if you look at this IR:

https://github.com/tensorflow/mlir/blob/master/test/mlir-cuda-runner/gpu-to-cubin.mlir#L6-L11

the lines I highlighted define a GPU kernel, wrapped in a "gpu.launch"
operation. The `mlir-cuda-runner` is a command-line tool used by the
tests: it runs passes that separate the GPU kernel code from the host
code, and emits the LLVM IR in two separate LLVM modules: one for the
GPU kernel (using the NVPTX backend) and another one for the host.
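To give a flavor of what that looks like, here is a rough sketch of such
a kernel (illustrative and written from memory rather than copied from
the test, and the gpu dialect syntax is still evolving, so details may
differ from the linked file):

  func @host_func(%arg0 : f32, %arg1 : memref<?xf32>) {
    %c1 = constant 1 : index
    // Everything inside the gpu.launch region is device code; everything
    // around it is host code. Both live in the same MLIR module until a
    // kernel-outlining pass splits them into separate modules.
    gpu.launch blocks(%bx, %by, %bz) in (%gx = %c1, %gy = %c1, %gz = %c1)
               threads(%tx, %ty, %tz) in (%sx = %c1, %sy = %c1, %sz = %c1)
               args(%karg0 = %arg0, %karg1 = %arg1) : f32, memref<?xf32> {
      // Each thread stores the scalar argument at its thread index.
      store %karg0, %karg1[%tx] : memref<?xf32>
      gpu.return  // terminator spelling may differ across versions
    }
    return
  }

The block/thread identifiers and the captured arguments are arguments of
the launch region, so the kernel body is isolated from the surrounding
host code and can be outlined mechanically.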
Then everything is run through a JIT (assuming you have CUDA and a
compatible GPU installed). In the example above, LLVM is used directly
for both the host and the kernel, but there is also a Vulkan/SPIR-V
emitter (instead of NVPTX) in the works. In that case, LLVM would provide
the JIT environment and compile the host module, but not the kernel (at
least not unless LLVM gains a SPIR-V backend).

Fundamentally, MLIR is very extensible: it lets users define their own
abstractions and compose them on top of whatever the community will want
to propose in the core. We proposed a tutorial for the US Dev Meeting in
which we planned to show in detail how these layers compose with LLVM,
but there are already so many great tutorial sessions in the schedule
that we couldn't get a slot. In the meantime, we are revamping our online
tutorial over the coming weeks
(https://github.com/tensorflow/mlir/blob/master/g3doc/Tutorials/Toy/Ch-1.md)
to make it more representative.

Hope this helps.

-- Mehdi
Chris Lattner via llvm-dev
2019-Sep-10 05:49 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
> On Sep 9, 2019, at 3:39 PM, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> On Mon, 9 Sep 2019 at 22:22, Chris Lattner <clattner at google.com> wrote:
>> Including a bunch of content, eg a full langref doc:
>> https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md
>
> Thanks Chris, that looks awesome!
>
> This one could perhaps be improved with time:
> https://github.com/tensorflow/mlir/blob/master/g3doc/ConversionToLLVMDialect.md
>
> Which I think was Hal's point. If we had a front-end already using it
> in tree, we could be a bit more relaxed with the conversion
> specification.

Don't worry, Flang is coming soon :-). In all seriousness, if you didn't
notice, the Flang team is planning to give a talk at LLVMDev in a month
or so about Flang + MLIR. I'd also love to see a round table or other
discussion about MLIR integration at the event.

The topic of Clang generating MLIR is more sensitive, and I think it is
best broached as a separate conversation, one motivated by data. I think
that Clang generating MLIR can be a hugely positive thing (witness the
explosion of recent proposals for LLVM IR extensions that are easily
handled with MLIR), but it seems more conservative and logical to upgrade
the existing Clang "CFG" representation to use MLIR first. This brings
simple and measurable improvements to the reliability, accuracy, and
generality of the data flow analyses and the Clang Static Analyzer,
without introducing a new step that could cause compile-time regressions.
Iff that goes well, we could consider the use of MLIR in the main
compilation flow.

In any case, I hope that "Clang adoption" is not considered to be a
blocker for MLIR to be adopted as part of the LLVM project. This hasn't
been a formal or historical requirement for new LLVM subprojects, and I'd
like to make sure we don't put undue adoption pressure on Clang - it is
important that we are deliberate about each step and do the right
(data-driven) thing for the (huge) Clang community.

-Chris

> I remember when I did the EDG bridge to LLVM, I mostly repeated
> whatever Clang was doing, "bug-for-bug". :)
>
> A cheeky request, perhaps, for the Flang people: they could help improve
> that document with what they have learned using MLIR as a front-end into
> LLVM IR. That way we get some common patterns written down, and we also
> get to review their assumptions earlier, making sure that both Flang and
> MLIR co-evolve into something simpler.
>
> cheers,
> --renato
Renato Golin via llvm-dev
2019-Sep-10 09:12 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
On Tue, 10 Sep 2019 at 06:49, Chris Lattner <clattner at google.com> wrote:
> In all seriousness, if you didn't notice, the Flang team is planning to
> give a talk at LLVMDev in a month or so about Flang + MLIR. I'd also love
> to see a round table or other discussion about MLIR integration at the
> event.

Ah, the title was just "Flang update", I didn't check the abstract.
Looking forward to it.

> The topic of Clang generating MLIR is more sensitive, and I think it is
> best broached as a separate conversation, one motivated by data. I think
> that Clang generating MLIR can be a hugely positive thing (witness the
> explosion of recent proposals for LLVM IR extensions that are easily
> handled with MLIR), but it seems more conservative and logical to upgrade
> the existing Clang "CFG" representation to use MLIR first. This brings
> simple and measurable improvements to the reliability, accuracy, and
> generality of the data flow analyses and the Clang Static Analyzer,
> without introducing a new step that could cause compile-time regressions.
> Iff that goes well, we could consider the use of MLIR in the main
> compilation flow.

Totally agreed!

> In any case, I hope that "Clang adoption" is not considered to be a
> blocker for MLIR to be adopted as part of the LLVM project. This hasn't
> been a formal or historical requirement for new LLVM subprojects, and I'd
> like to make sure we don't put undue adoption pressure on Clang - it is
> important that we are deliberate about each step and do the right
> (data-driven) thing for the (huge) Clang community.

Absolutely. It doesn't make sense to impose artificial, orthogonal
constraints when we know the implementation would raise more questions
than answers and could take years to get right.

I'm hoping that, by adding MLIR first, we'd have a pretty solid use case,
and an eventual move by Clang, if any, would be smoother and more robust.

I agree with this proposal being the first step. I'm also personally
happy with the current level of the docs and the progress of Flang.

LGTM, thanks! :D

--renato