Saleem Abdulrasool via llvm-dev
2018-May-04 01:14 UTC
[llvm-dev] Thank you from the Glow Developers
Hello LLVM community,

We have been working hard on a new domain-specific optimizing compiler, and
we are pleased to announce that we have recently open-sourced the project! We
would like to introduce you to Glow, an optimizing compiler for neural
networks!

This new compiler is built on the hard work of this community, and we would
like to thank all of the contributors to the LLVM project. The project would
not have been possible without your work, and we hope it will be beneficial
to others as well.

You can find the sources at http://github.com/pytorch/glow and read up on the
work in the associated paper at https://arxiv.org/pdf/1805.00907.

Thank you all!

The Glow Developers
Sean Silva via llvm-dev
2018-May-05 20:23 UTC
[llvm-dev] Thank you from the Glow Developers
Very cool! The first thing that jumps out at me is how tidy and modular the
code structure is. The code feels very familiar (stylistically,
organizationally, etc.) to me as an LLVM developer.

One thing that wasn't at all clear to me is how this is different from or
similar to TensorFlow XLA (previously mentioned on this list). Can you
briefly compare and contrast this with TensorFlow XLA?

-- Sean Silva

On Thu, May 3, 2018, 6:14 PM Saleem Abdulrasool via llvm-dev
<llvm-dev at lists.llvm.org> wrote:

> Hello LLVM community,
>
> We have been working hard on a new domain-specific optimizing compiler, and
> we are pleased to announce that we have recently open-sourced the project!
> We would like to introduce you to Glow, an optimizing compiler for neural
> networks!
> [...]
Saleem Abdulrasool via llvm-dev
2018-May-11 15:36 UTC
[llvm-dev] Thank you from the Glow Developers
Hi Sean,

Sorry for the delay.

On Sat, May 5, 2018 at 1:23 PM Sean Silva <chisophugis at gmail.com> wrote:

> Very cool! The first thing that jumps out at me is how tidy and modular
> the code structure is. The code feels very familiar (stylistically,
> organizationally, etc.) to me as an LLVM developer.

Thanks! We absolutely took inspiration from the wonderful work in LLVM :).
I'm glad that you found it familiar and tidy.

> One thing that wasn't at all clear to me is how this is different from or
> similar to TensorFlow XLA (previously mentioned on this list). Can you
> briefly compare and contrast this with TensorFlow XLA?

That is a very keen observation. There are many similarities between the two
projects. Both are interested in performing cross-node optimizations to
address memory usage and execution time, and to accomplish that, both have
their own IR and optimization passes.

There are some differences as well. In the case of Glow, we have focused on
Caffe2 models and also support the ONNX format. We have been trying to
provide a more target-independent model and have been considering some
heterogeneous execution models as well. XLA is definitely a more mature
compiler compared to Glow.

I think the projects are sufficiently similar and different that there are
ample opportunities for collaboration as both grow further.

--
Saleem Abdulrasool
compnerd (at) compnerd (dot) org
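To make the point about cross-node optimizations concrete, below is a
minimal, self-contained C++ sketch of the general technique. It is not
Glow's or XLA's actual API; every type and function name in it is
hypothetical, invented only for illustration. It builds a tiny operator
graph and fuses a relu into the convolution that feeds it, the kind of
cross-node rewrite that avoids materializing an intermediate tensor in
memory between the two ops.

// Toy node-level graph IR plus one cross-node optimization pass.
// Hypothetical names throughout; NOT Glow's or XLA's real API.
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// A node in a tiny dataflow graph: an opcode plus its input nodes.
struct Node {
  std::string op;              // e.g. "conv", "relu", "conv_relu"
  std::vector<Node *> inputs;  // the producers this node consumes
};

// The "module": owns all nodes in creation order.
struct Graph {
  std::vector<std::unique_ptr<Node>> nodes;
  Node *create(std::string op, std::vector<Node *> inputs = {}) {
    nodes.push_back(
        std::make_unique<Node>(Node{std::move(op), std::move(inputs)}));
    return nodes.back().get();
  }
};

// A cross-node optimization: rewrite relu(conv(x)) into one fused
// "conv_relu" node, so the intermediate tensor between conv and relu
// never has to be written out to memory.
void fuseConvRelu(Graph &g) {
  for (auto &n : g.nodes) {
    if (n->op == "relu" && n->inputs.size() == 1 &&
        n->inputs[0]->op == "conv") {
      Node *conv = n->inputs[0];
      n->op = "conv_relu";       // absorb the conv into the relu node...
      n->inputs = conv->inputs;  // ...and take over the conv's inputs.
      // A real compiler would follow this with dead-code elimination to
      // drop the now-unused conv node; we leave it in place for brevity.
    }
  }
}

int main() {
  Graph g;
  Node *x = g.create("input");
  Node *c = g.create("conv", {x});
  Node *r = g.create("relu", {c});
  (void)c;
  fuseConvRelu(g);
  std::cout << r->op << "\n";  // prints "conv_relu"
}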