Renato Golin via llvm-dev
2019-Sep-09 19:29 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Overall, I think it will be a good move.

Maintenance-wise, I'm expecting the existing community to move into LLVM (if not all in already), so I don't foresee any additional costs. Though, Hal's points are spot on...

On Mon, 9 Sep 2019 at 18:47, Finkel, Hal J. via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 3. As a specific example of the above, the current development of the new Flang compiler depends on MLIR.

Who knows, one day, Clang can, too! :)

> 5. As a community, we have been moving toward increasing support for heterogeneous computing and accelerators (and given industry trends, I expect this to continue), and MLIR can facilitate that support in many cases (although I expect we'll see further enhancements in the core LLVM libraries as well).

Yes, and yes! MLIR can become a simpler entry point into LLVM from other languages, frameworks and optimisation plugins. A more abstract representation, and more stable IR generation from it, could make maintaining external projects much easier than today's direct connections. This could benefit research as much as enterprise, and by consequence the LLVM project.

> That all having been said, I think that it's going to be very important to develop some documentation on how a frontend author looking to use LLVM backend technology, and a developer looking to implement different kinds of functionality, might reasonably choose whether to target or enhance MLIR components, LLVM components, or both. I expect that this kind of advice will evolve over time, but I'm sure we'll need it sooner rather than later.

Right, I'm also worried that it's too broad with respect to what it can do on paper versus what LLVM can handle in code.

With MLIR as a separate project, that point is interesting, at most. When it becomes part of the LLVM umbrella, we need to make sure that MLIR and LLVM IR interact within known boundaries and with expected behaviour.

I'm not saying MLIR can't be used for anything else after the move; just that, by being inside the repo and maintained by our community, LLVM IR would end up as the *primary* target, and there will be minimum stability/functionality requirements.

But perhaps more important, as Hal states clearly, is the need for an official specification, similar to the one for LLVM IR, as well as a formal document with the expected semantics of the conversion into LLVM IR. Sooner, indeed.

cheers,
--renato
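[Editorial note: for readers unfamiliar with the "more abstract representation" being discussed, here is a minimal, illustrative MLIR function in the circa-2019 standard-dialect textual syntax. It is not taken from the thread, and the exact syntax is a sketch, not normative.]

```mlir
// A tiny function in MLIR's standard dialect (circa-2019 syntax).
// The same infrastructure hosts higher-level dialects (affine loops,
// TensorFlow graphs) alongside LLVM-like operations, which is what
// makes MLIR a candidate "entry point" into LLVM for other frontends.
func @scale(%x: f32, %factor: f32) -> f32 {
  %0 = mulf %x, %factor : f32   // standard-dialect float multiply
  return %0 : f32
}
```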
Chris Lattner via llvm-dev
2019-Sep-09 21:22 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Hi Renato,

Thank you for your kind words. If you are interested, the documentation for MLIR is located here:
https://github.com/tensorflow/mlir/blob/master/g3doc/

It includes a bunch of content, e.g. a full langref doc:
https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md

-Chris
Sjoerd Meijer via llvm-dev
2019-Sep-09 22:32 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
FWIW: +1 from me. Personally, I am very excited about this. I cannot speak on behalf of Arm, but I haven't heard about any concerns either.
Renato Golin via llvm-dev
2019-Sep-09 22:39 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
On Mon, 9 Sep 2019 at 22:22, Chris Lattner <clattner at google.com> wrote:
> Including a bunch of content, eg a full langref doc:
> https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md

Thanks Chris, that looks awesome!

This one could perhaps be improved with time:
https://github.com/tensorflow/mlir/blob/master/g3doc/ConversionToLLVMDialect.md

Which I think was Hal's point. If we had a front-end already using it in tree, we could be a bit more relaxed about the conversion specification. I remember when I did the EDG bridge to LLVM, I mostly repeated whatever Clang was doing, "bug-for-bug". :)

A cheeky request, perhaps, for the Flang people: they could help with that document by writing down what they have learned using MLIR as a front-end into LLVM IR. We'd get some common patterns written down, but we'd also get to review their assumptions earlier, and make sure that both Flang and MLIR co-evolve into something simpler.

cheers,
--renato
David Greene via llvm-dev
2019-Sep-10 20:39 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> writes:
> But perhaps more importantly, as Hal states clearly, is the need for an official specification, similar to the one for LLVM IR, as well as a formal document with the expected semantics into LLVM IR. Sooner, indeed.

+1. There are all kinds of scattered documents on the TensorFlow site talking about MLIR, the affine dialect, etc., but nothing of the quality and approachability of LLVM's language reference. I find it difficult to pull all the pieces together.

Of course, by its nature, MLIR doesn't lend itself to concrete semantic descriptions, though I would expect the affine dialect (and others) to have documentation on par with the LLVM IR's. For MLIR itself, I would want documentation somewhat less dense than the current BNF-style specification.

Does the current proposal only cover adding the base MLIR to the LLVM project, or also the affine dialect and possibly others? The affine dialect could certainly be quite useful for many projects.

-David
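[Editorial note: for context on the affine dialect being discussed, here is a small, illustrative loop nest in the circa-2019 affine-dialect syntax. It is not from the thread, and the exact syntax is a sketch under that assumption, not a normative example.]

```mlir
// Illustrative affine-dialect kernel (circa-2019 syntax): y = a*x + y.
// affine.for expresses loop bounds as affine expressions, and
// affine.load/affine.store restrict subscripts to affine maps of the
// loop induction variables — the property that makes polyhedral-style
// analyses and transformations tractable on this dialect.
func @saxpy(%a: f32, %X: memref<256xf32>, %Y: memref<256xf32>) {
  affine.for %i = 0 to 256 {
    %x = affine.load %X[%i] : memref<256xf32>
    %y = affine.load %Y[%i] : memref<256xf32>
    %ax = mulf %a, %x : f32
    %0 = addf %ax, %y : f32
    affine.store %0, %Y[%i] : memref<256xf32>
  }
  return
}
```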
Mehdi AMINI via llvm-dev
2019-Sep-10 22:51 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
On Tue, Sep 10, 2019 at 1:40 PM David Greene via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> +1. There are all kinds of scattered documents on the TensorFlow site talking about MLIR, the affine dialect, etc. but nothing of the quality and approachability of LLVM's language reference. I find it difficult to pull all the pieces together.

One of the main reasons we haven't invested in a proper website and documentation was the anticipation of a possible integration into LLVM, so we didn't prioritize what I saw as throw-away work. We're looking forward to having a space on llvm.org for MLIR and building great online docs there!

> Of course by its nature, MLIR doesn't lend itself to concrete semantic descriptions, though I would expect the affine dialect (and others) to have documentation on par with the LLVM IR.

Just last week I had to scout through the affine dialect "LangRef" (https://github.com/tensorflow/mlir/blob/master/g3doc/Dialects/Affine.md) for something, and I also felt that it is due for a refresh! It seemed a bit more than just BNF, though; do you have examples of what you would like to see expanded there? And to be clear: the ambition should be that the dialects included in-tree (in MLIR/LLVM) get documentation on par with the LLVM LangRef.

> For MLIR itself, I would want documentation somewhat less dense than the current BNF-style specification.
>
> Does the current proposal only cover adding the base MLIR to the LLVM project, or also the affine dialect and possibly others? The affine dialect could certainly be quite useful for many projects.

The current proposal includes all the content of https://github.com/tensorflow/mlir/ as-is. It does *not* include the TensorFlow-specific dialects and other pieces here:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/

Best,

-- Mehdi