On Fri, Sep 16, 2016 at 7:40 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
> On Sep 16, 2016, at 7:37 PM, Teresa Johnson <tejohnson at google.com> wrote:
>> On Fri, Sep 16, 2016 at 6:17 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>> On Sep 16, 2016, at 6:13 PM, Carsten Mattner <carstenmattner at gmail.com> wrote:
>>>> On Sat, Sep 17, 2016 at 2:07 AM, Teresa Johnson via llvm-dev
>>>> <llvm-dev at lists.llvm.org> wrote:
>>>>> Yes, and to add on - the ThinLTO backend by default will kick off
>>>>> std::thread::hardware_concurrency # of threads, which I'm finding is
>>>>> too much for machines with hyperthreading. If that ends up being an
>>>>> issue I can give you a workaround (I've been struggling to find a way
>>>>> that works on various OSes and arches to compute the max number of
>>>>> physical cores to fix this in the source).
>>>>
>>>> Is it just me, or does that sound not very intuitive, or at least a
>>>> little unexpected? It's good that it uses the resources eagerly, but
>>>> in terms of build systems this is certainly surprising if there's no
>>>> control of that parameter via make/ninja/xcode.
>>>
>>> You can control the parallelism used by the linker, but the option is
>>> linker dependent (on macOS: -Wl,-mllvm,-threads=1).
>>
>> Wait - this is to control the ThinLTO backend parallelism, right? In
>> which case you wouldn't want to use 1, but rather the number of
>> physical cores.
>
> Well, it depends what behavior you want :)
> I should have used N to match ninja -jN.
>
>> When using gold, the option is -Wl,-plugin-opt,jobs=N, where N is the
>> number of parallel ThinLTO backend jobs that will be issued. So you
>> could try with the default, but if you have HT on then you might want
>> to try with the number of physical cores instead.
>
> How does it affect parallel LTO backends? (I hope it doesn't)

In regular LTO mode, the option will also affect parallel LTO codegen,
which is off by default. Is that what you meant?

> -- 
> Mehdi
>
>>>> I've been using ninja -jN so far. I suppose when building with
>>>> ThinLTO I should run ninja -j1. Would that
>>>>
>>>> What's the workaround?
>>>
>>> Seems like you missed my previous email: cmake -DLLVM_PARALLEL_LINK_JOBS=1
>>> Also, ninja is parallel by default, so no need to pass -j.
>>>
>>> This way you get nice parallelism during the compile phase, and ninja
>>> will issue only one link job at a time.
>>>
>>> -- 
>>> Mehdi

-- 
Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413
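[Editor's note: collecting the knobs mentioned in this thread into one place. This is a sketch, not authoritative: exact option spellings depend on your linker and LLVM version, N is a placeholder for whatever job count you choose, and -fuse-ld=gold is added here only to select the gold linker (it is not spelled out in the thread).]

```shell
# ThinLTO backend parallelism is linker-dependent (N = parallel backend
# jobs; the physical core count is a reasonable starting point on
# hyperthreaded machines).

# macOS ld64:
clang -flto=thin -Wl,-mllvm,-threads=N main.o util.o -o app

# Linux with the gold plugin:
clang -flto=thin -fuse-ld=gold -Wl,-plugin-opt,jobs=N main.o util.o -o app

# When building LLVM itself, serialize link steps so only one LTO/ThinLTO
# link runs at a time; ninja is parallel by default, so no -j is needed:
cmake -G Ninja -DLLVM_PARALLEL_LINK_JOBS=1 ../llvm
ninja
```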
> On Sep 16, 2016, at 7:48 PM, Teresa Johnson <tejohnson at google.com> wrote:
>
> [...]
>
>> How does it affect parallel LTO backends? (I hope it doesn't)
>
> In regular LTO mode, the option will also affect parallel LTO codegen,
> which is off by default. Is that what you meant?

Yes. I'm sad that it is the same option: parallel LTO codegen changes the
final binary, which is really not great in my opinion. In ThinLTO, the
parallelism level has the important property that the codegen is
unchanged!

-- 
Mehdi
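[Editor's note: Teresa mentions struggling to compute the maximum number of physical cores portably. As an illustration only, here is one best-effort approach in shell; the helper and its name (physical_cores) are hypothetical and not LLVM code, and the topology heuristics will not cover every OS or arch.]

```shell
#!/bin/sh
# Best-effort count of physical (non-hyperthreaded) cores.
physical_cores() {
  if [ -r /proc/cpuinfo ] && grep -q '^core id' /proc/cpuinfo; then
    # Linux: count unique (socket, core) pairs to ignore SMT siblings.
    awk -F: '/^physical id/ {p=$2} /^core id/ {print p "," $2}' /proc/cpuinfo \
      | sort -u | wc -l
  elif sysctl -n hw.physicalcpu >/dev/null 2>&1; then
    # macOS / BSD expose this directly.
    sysctl -n hw.physicalcpu
  else
    # Fall back to logical CPUs when topology info is unavailable.
    nproc
  fi
}

physical_cores
```

The result could then be fed to the flags discussed above, e.g. `-Wl,-plugin-opt,jobs=$(physical_cores)`.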