Chris Lattner via llvm-dev
2019-Sep-09 15:30 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Hi all,

The TensorFlow team at Google has been leading the charge to build a new set of compiler infrastructure, known as the MLIR project <https://github.com/tensorflow/mlir>. The initial focus has been on machine learning infrastructure, high-performance accelerators, heterogeneous compute, and HPC-style computations. That said, the implementation and design of this infrastructure is state of the art, is not specific to these applications, and is already being adopted, e.g., by the Flang compiler <https://llvm.org/devmtg/2019-10/talk-abstracts.html#tech19>. If you are interested in learning more about MLIR and the technical design, I’d encourage you to look at the MLIR Keynote and Tutorial from the last LLVM Developer Meeting <http://llvm.org/devmtg/2019-04/>.

MLIR is already open source on GitHub <https://medium.com/tensorflow/mlir-a-new-intermediate-representation-and-compiler-framework-beba999ed18d>, and includes a significant amount of code in two repositories. “MLIR Core” is located at github/tensorflow/mlir <https://github.com/tensorflow/mlir> and includes an application-independent IR, code generation infrastructure, common graph transformation infrastructure, declarative operation definition and rewrite infrastructure, polyhedral transformations, etc. The primary TensorFlow repository at github/tensorflow/tensorflow <https://github.com/tensorflow/tensorflow/> contains TensorFlow-specific functionality built using the MLIR Core infrastructure.

In discussions with a large number of industry partners <https://blog.google/technology/ai/mlir-accelerating-ai-open-source-infrastructure/>, we’ve reached consensus that it would be best to build a shared ML compiler infrastructure under a common umbrella with well-known, neutral governance. As such, we’d like to propose that MLIR Core join the non-profit LLVM Foundation as a new subproject!
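[Editor's note: to make the description above concrete, here is a small illustrative fragment in the style of 2019-era MLIR textual syntax; it is a hypothetical sketch, not code from the proposal, showing an affine loop with standard-dialect arithmetic of the kind the polyhedral infrastructure operates on.]

```mlir
// Hypothetical sketch: scale a buffer of 128 floats in place,
// using the affine and standard dialects as they existed circa 2019.
func @scale(%buf: memref<128xf32>, %factor: f32) {
  affine.for %i = 0 to 128 {
    %v = affine.load %buf[%i] : memref<128xf32>
    %s = mulf %v, %factor : f32
    affine.store %s, %buf[%i] : memref<128xf32>
  }
  return
}
```

Because the loop bounds and subscripts are affine, passes such as loop tiling or fusion can analyze and rewrite this code directly at this level, before any lowering to LLVM IR.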
We plan to follow the LLVM Developer Policy <http://llvm.org/docs/DeveloperPolicy.html>, and have been following an LLVM-style development process from the beginning, including all relevant coding and testing styles, and we build on core LLVM infrastructure pervasively. We think that MLIR is a nice complement to existing LLVM functionality: it provides common infrastructure for higher-level optimization and transformation problems, and it dovetails naturally with LLVM IR optimizations and code generation.

Please let us know if you have any thoughts, questions, or concerns!

-Chris
Finkel, Hal J. via llvm-dev
2019-Sep-09 17:46 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Hi, Chris, et al.,

I support adding MLIR as an LLVM subproject. Here are my thoughts:

1. MLIR uses LLVM: LLVM IR is one of the MLIR dialects, MLIR is compiler infrastructure, and it fits as a natural part of our ecosystem.

2. As a community, we have a lot of different LLVM frontends, many of which have their own IRs on which higher-level transformations are performed. We don't currently offer much, in terms of infrastructure, to support the development of these pre-LLVM transformations. MLIR provides a base on which many of these kinds of implementations can be constructed, and I believe that will add value to the overall ecosystem.

3. As a specific example of the above, the current development of the new Flang compiler depends on MLIR. Flang is becoming a subproject of LLVM, and MLIR should be part of LLVM.

4. The MLIR project has developed capabilities, such as the analysis of multidimensional loops, that can be moved into LLVM and used by both LLVM- and MLIR-level transformations. As we work to improve LLVM's capabilities in loop optimizations, leveraging continuing work on MLIR's loop capabilities within LLVM as well will benefit many of us.

5. As a community, we have been moving toward increasing support for heterogeneous computing and accelerators (and given industry trends, I expect this to continue), and MLIR can facilitate that support in many cases (although I expect we'll see further enhancements in the core LLVM libraries as well).

That all having been said, I think it's going to be very important to develop documentation on how a frontend author looking to use LLVM backend technology, or a developer looking to implement different kinds of functionality, might reasonably choose whether to target or enhance MLIR components, LLVM components, or both. I expect that this kind of advice will evolve over time, but I'm sure we'll need it sooner rather than later.
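[Editor's note: Hal's first point, that LLVM IR appears within MLIR as just another dialect, can be illustrated with a hypothetical sketch. The 'llvm' dialect mirrors LLVM IR constructs op-for-op; the exact type syntax has changed across MLIR versions, so treat this as approximate.]

```mlir
// Hypothetical sketch: a 64-bit integer addition expressed in the
// MLIR 'llvm' dialect, which mirrors LLVM IR one op at a time.
llvm.func @add(%a: !llvm.i64, %b: !llvm.i64) -> !llvm.i64 {
  %0 = llvm.add %a, %b : !llvm.i64
  llvm.return %0 : !llvm.i64
}
```

A frontend can therefore lower its own dialect progressively down to the 'llvm' dialect inside MLIR, and only translate to genuine LLVM IR at the very end of the pipeline.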
Thanks again,
Hal

On 9/9/19 10:30 AM, Chris Lattner via llvm-dev wrote:
> [quoted text of the original proposal elided]

--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
Renato Golin via llvm-dev
2019-Sep-09 19:29 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Overall, I think it will be a good move. Maintenance-wise, I'm expecting the existing community to move into LLVM (if not all in already), so I don't foresee any additional costs. Though, Hal's points are spot on...

On Mon, 9 Sep 2019 at 18:47, Finkel, Hal J. via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 3. As a specific example of the above, the current development of the new Flang compiler depends on MLIR.

Who knows, one day, Clang can, too! :)

> 5. As a community, we have been moving toward increasing support for heterogeneous computing and accelerators (and given industry trends, I expect this to continue), and MLIR can facilitate that support in many cases (although I expect we'll see further enhancements in the core LLVM libraries as well).

Yes, and yes! MLIR can become a simpler entry point into LLVM from other languages, frameworks and optimisation plugins. A more abstract representation, and more stable IR generation from it, could make maintenance of external projects much easier than today's direct connections. This could benefit research as much as enterprise, and by consequence, the LLVM project.

> That all having been said, I think that it's going to be very important to develop some documentation on how a frontend author looking to use LLVM backend technology, and a developer looking to implement different kinds of functionality, might reasonably choose whether to target or enhance MLIR components, LLVM components, or both. I expect that this kind of advice will evolve over time, but I'm sure we'll need it sooner rather than later.

Right, I'm also worried that it's too broad in what it can do on paper versus what LLVM can handle in code. With MLIR as a separate project, that point is interesting, at most. When it becomes part of the LLVM umbrella, we need to make sure that MLIR and LLVM IR interact within known boundaries and with expected behaviour.

I'm not saying MLIR can't be used for anything else after the move; just that, by being inside the repo and maintained by our community, LLVM IR would end up as the *primary* target, and there will be minimum stability/functionality requirements.

But perhaps more important, as Hal states clearly, is the need for an official specification, similar to the one for LLVM IR, as well as a formal document describing the expected semantics of lowering to LLVM IR. Sooner, indeed.

cheers,
--renato
Tanya Lattner via llvm-dev
2019-Oct-07 08:17 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
On behalf of the LLVM Foundation board of directors, we accept MLIR as a project into LLVM. This is based upon the responses showing that the community is supportive and in favor of this. We will provide services and support on our side.

Welcome MLIR!

Thanks,
Tanya Lattner
President, LLVM Foundation

> On Sep 9, 2019, at 8:30 AM, Chris Lattner via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [quoted text of the original proposal elided]
Chris Lattner via llvm-dev
2019-Oct-07 22:55 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Fantastic, thank you!

-Chris

> On Oct 7, 2019, at 1:17 AM, Tanya Lattner via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [quoted text of the acceptance and the original proposal elided]
Tatiana Shpeisman via llvm-dev
2019-Oct-08 06:13 UTC
[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation
Fantastic news, indeed! Thank you for accepting MLIR as an LLVM project!

Tatiana

On Mon, Oct 7, 2019 at 9:16 PM Reid Tatge <tatge at google.com> wrote:
> This is great news! Congratulations everyone!
>
> On Mon, Oct 7, 2019 at 9:14 PM Tatiana Shpeisman <shpeisman at google.com> wrote:
>> Congratulations, everybody!
>>
>> On Mon, Oct 7, 2019 at 3:55 PM Chris Lattner <clattner at google.com> wrote:
>>> Fantastic, thank you!
>>>
>>> -Chris
>>>
>>> [quoted text of the acceptance and the original proposal elided]