I've been asked how LTO in LLVM compares to equivalent capabilities in GCC. How do the two compare in terms of scalability? And robustness for large applications?

Also, are there any ongoing efforts or plans to improve LTO in LLVM in the near future?

Any information would be much appreciated. Thanks,

--Vikram S. Adve
Visiting Professor, Computer Science, EPFL
Professor, Department of Computer Science
University of Illinois at Urbana-Champaign
vadve at illinois.edu
http://llvm.org
On Fri Dec 12 2014 at 1:00:55 PM Adve, Vikram Sadanand <vadve at illinois.edu> wrote:

> I've been asked how LTO in LLVM compares to equivalent capabilities in
> GCC. How do the two compare in terms of scalability? And robustness for
> large applications?

It depends on which scheme is being used, but in general full LTO falls over past a certain application size due to memory use, etc. (Debug info is also a problem, but see below.) I'm not sure what distinction you're drawing between scalability and robustness, though. If you have some more specific questions I can probably give you a rundown of some of the differences.

> Also, are there any ongoing efforts or plans to improve LTO in LLVM in the
> near future?

Yes. Many ongoing efforts. See the metadata rewrite, the FDO/PGO work, etc. It's an area of active development.

-eric

> Any information would be much appreciated. Thanks,
>
> --Vikram S. Adve
> Visiting Professor, Computer Science, EPFL
> Professor, Department of Computer Science
> University of Illinois at Urbana-Champaign
> vadve at illinois.edu
> http://llvm.org
>
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
On 12/12/14 15:56, Adve, Vikram Sadanand wrote:

> I've been asked how LTO in LLVM compares to equivalent capabilities
> in GCC. How do the two compare in terms of scalability? And
> robustness for large applications?

Neither GCC nor LLVM can handle our (Google) large applications. They're just too massive for the kind of linking done by LTO.

When we built GCC's LTO, we tried to address this by creating a partitioned model, in which the analysis phase and the codegen phase are split so that the compiler can work on partial callgraphs (see http://gcc.gnu.org/wiki/LinkTimeOptimization for details).

This allows us to split and parallelize the initial bytecode generation and the final optimization/codegen. However, the analysis is still implemented as a single process, and we found that we cannot even load summaries, types, and symbols in an efficient way.

It does allow programs like Firefox to be handled. So, if by "big" you mean something of that size, this model can do it. With LLVM, I can't even load the IR for one of our large programs on a box with 64 GB of RAM.

> Also, are there any ongoing efforts or plans to improve LTO in LLVM
> in the near future?

Yes. We are going to be investing in this area very soon. David and Teresa (CC'd) will have details.

Diego.
On Fri, Dec 12, 2014 at 1:59 PM, Diego Novillo <dnovillo at google.com> wrote:

> On 12/12/14 15:56, Adve, Vikram Sadanand wrote:
>> I've been asked how LTO in LLVM compares to equivalent capabilities
>> in GCC. How do the two compare in terms of scalability? And
>> robustness for large applications?
>
> Neither GCC nor LLVM can handle our (Google) large applications. They're
> just too massive for the kind of linking done by LTO.
>
> When we built GCC's LTO, we were trying to address this by creating a
> partitioned model, where the analysis phase and the codegen phase are split
> to allow working on partial callgraphs
> (http://gcc.gnu.org/wiki/LinkTimeOptimization for details).
>
> This allows us to split and parallelize the initial bytecode generation and
> the final optimization/codegen. However, the analysis is still implemented
> as a single process. We found that we cannot even load summaries, types,
> and symbols in an efficient way.
>
> It does allow programs like Firefox to be handled. So, if by "big" you
> mean something of that size, this model can do it.
>
> With LLVM, I can't even load the IR for one of our large programs on a box
> with 64 GB of RAM.
>
>> Also, are there any ongoing efforts or plans to improve LTO in LLVM
>> in the near future?
>
> Yes. We are going to be investing in this area very soon. David and Teresa
> (CC'd) will have details.

Still working out the details, but we are investigating a solution that is scalable to very large programs. We'll share the design in the near future when we have more details worked out, so that we can get feedback.

Thanks!
Teresa

> Diego.

--
Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413