> On Sep 16, 2016, at 4:46 PM, Carsten Mattner <carstenmattner at gmail.com> wrote:
>
> On Sat, Sep 17, 2016 at 12:48 AM, Teresa Johnson <tejohnson at google.com> wrote:
>>
>> On Fri, Sep 16, 2016 at 2:54 PM, Carsten Mattner via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>>
>>> On Fri, Sep 16, 2016 at 11:28 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>>
>>>> You probably missed -DLLVM_BINUTILS_INCDIR.
>>>>
>>>> See: http://llvm.org/docs/GoldPlugin.html
>>>
>>> plugin-api.h is in /usr/include, so I'd expect it to be found, but I
>>> can explicitly set BINUTILS_INCDIR and re-bootstrap with gcc 6.2.1.
>>>
>>> I have ld.gold, but I'm not sure if /usr/bin/ld uses it, though I'd expect
>>> it to, since it's been in binutils for a couple of releases now.
>>>
>>> $ ld -v
>>> GNU ld (GNU Binutils) 2.27
>>> $ ld.bfd -v
>>> GNU ld (GNU Binutils) 2.27
>>> $ ld.gold -v
>>> GNU gold (GNU Binutils 2.27) 1.12
>>
>> Looks like your default ld is GNU ld.bfd, not ld.gold. You can either change
>> /usr/bin/ld (which is probably a link to /usr/bin/ld.bfd) to point to
>> /usr/bin/ld.gold instead, or, if you prefer, set your PATH before the stage1
>> compile to a location that has ld linked to ld.gold.
>
> I can look into why Arch Linux has it configured like that.
>
> In the meantime, Mehdi's suggestion to explicitly pass BINUTILS_INCDIR
> restored the previous configure behavior, and the new llvm build has
> lib/LLVMgold.so. Thanks to both of you for pointing out the missing cmake
> flag.
>
> I've checked the configure step and it didn't fail as it did before, but before
> I try to build in ThinLTO mode: since the configure step checks for the gold
> plugin, is it safe to assume that I don't have to change the default system
> ld to gold for ThinLTO to work, or is that a build requirement for
> bootstrapping llvm in ThinLTO mode?

Try to build llvm-tblgen, you'll know quite quickly :)

--
Mehdi
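For reference, a minimal sketch of the PATH approach Teresa mentions above, assuming ld.gold is installed as /usr/bin/ld.gold; the ~/gold-bin directory name is just an example, not from this thread:

$ mkdir -p ~/gold-bin
$ ln -s /usr/bin/ld.gold ~/gold-bin/ld    # a local "ld" that is really gold
$ export PATH=~/gold-bin:$PATH
$ ld -v                                   # should now report "GNU gold"

This avoids touching the system-wide /usr/bin/ld symlink that the distribution manages.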
> On Sep 16, 2016, at 4:59 PM, Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>> On Sep 16, 2016, at 4:46 PM, Carsten Mattner <carstenmattner at gmail.com> wrote:
>>
>> I've checked the configure step and it didn't fail as it did before, but before
>> I try to build in ThinLTO mode: since the configure step checks for the gold
>> plugin, is it safe to assume that I don't have to change the default system
>> ld to gold for ThinLTO to work, or is that a build requirement for
>> bootstrapping llvm in ThinLTO mode?
>
> Try to build llvm-tblgen, you'll know quite quickly :)

Also, you should limit the number of parallel link jobs: cmake -DLLVM_PARALLEL_LINK_JOBS=1
And use ninja if you don't already: cmake -GNinja

--
Mehdi
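Putting the flags from this thread together, a stage1 configure might look roughly like the sketch below; the source/build directory layout and the /usr/include value for the binutils headers are assumptions about the local setup, not something stated in the thread:

$ mkdir build && cd build
$ cmake -GNinja \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_BINUTILS_INCDIR=/usr/include \
    -DLLVM_PARALLEL_LINK_JOBS=1 \
    ../llvm
$ ninja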
On Fri, Sep 16, 2016 at 5:00 PM, Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>> On Sep 16, 2016, at 4:46 PM, Carsten Mattner <carstenmattner at gmail.com> wrote:
>>
>> I've checked the configure step and it didn't fail as it did before, but before
>> I try to build in ThinLTO mode: since the configure step checks for the gold
>> plugin, is it safe to assume that I don't have to change the default system
>> ld to gold for ThinLTO to work, or is that a build requirement for
>> bootstrapping llvm in ThinLTO mode?

Yeah, perhaps this is working somehow anyway.

>> Try to build llvm-tblgen, you'll know quite quickly :)
>
> Also, you should limit the number of parallel link jobs: cmake -DLLVM_PARALLEL_LINK_JOBS=1
> And use ninja if you don't already: cmake -GNinja

Yes and to add on - the ThinLTO backend by default will kick off
std::thread::hardware_concurrency # of threads, which I'm finding is too much
for machines with hyperthreading. If that ends up being an issue I can give you
a workaround (I've been struggling to find a way that works on various OS and
arches to compute the max number of physical cores to fix this in the source).

Teresa

--
Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413
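A quick way to compare the two counts on Linux, as a rough sketch (the exact lscpu field names can vary between versions, so treat the grep pattern as an assumption):

$ nproc                                     # logical CPUs, roughly what std::thread::hardware_concurrency() returns
$ lscpu | grep -E '^(Thread|Core|Socket)'   # physical cores = Core(s) per socket x Socket(s)

With hyperthreading enabled, nproc will typically report twice the physical core count.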
> On Sep 16, 2016, at 7:02 PM, Carsten Mattner <carstenmattner at gmail.com> wrote:
>
>> On Sat, Sep 17, 2016 at 3:17 AM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>
>>> On Sep 16, 2016, at 6:13 PM, Carsten Mattner <carstenmattner at gmail.com> wrote:
>>>
>>> On Sat, Sep 17, 2016 at 2:07 AM, Teresa Johnson via llvm-dev
>>> <llvm-dev at lists.llvm.org> wrote:
>>>
>>>> Yes and to add on - the ThinLTO backend by default will
>>>> kick off std::thread::hardware_concurrency # of threads, which I'm finding is
>>>
>>> Is it just me, or does that sound not very intuitive, or at least a
>>> little unexpected?
>>> It's good that it uses the resources eagerly, but in terms of build systems this
>>> is certainly surprising if there's no control of that parameter via
>>> make/ninja/xcode.
>>
>> You can control the parallelism used by the linker, but the option is linker dependent
>> (on macOS: -Wl,-mllvm,-threads=1).
>
> That's what I meant. Maybe lld can gain support for that and allow us to use
> the same ld pass-through via the compile driver so that it works on Linux and
> BSD too.
>
>>>> too much for machines with hyperthreading. If that ends up being an issue I can
>>>> give you a workaround (I've been struggling to find a way that works on various
>>>> OS and arches to compute the max number of physical cores to fix this in the source).
>>>
>>> I've been using ninja -jN so far. I suppose when building with ThinLTO I should
>>> run ninja -j1. Would that
>>>
>>> What's the workaround?
>>
>> Seems like you missed my previous email: cmake -DLLVM_PARALLEL_LINK_JOBS=1
>
> I didn't miss that, and it will hopefully help with limiting LTO link-phase
> resource use, but I do wonder if it means it's either linking one binary
> or compiling objects, and not both in parallel.

I believe it only limits the number of concurrent links, without interacting with
the compile phase, but I'd have to check to be sure.

>> Also, ninja is parallel by default, so no need to pass -j.
>>
>> This way you get nice parallelism during the compile phase,
>> and ninja will issue only one link job at a time.
>
> I know, but I use ninja for C++ projects because those are the most
> frequent CMake users, and compiling C++ often requires limiting it
> to less than NUM_CORES.

ThinLTO is quite lean on the memory side. What's your bottleneck to throttle your -j?

--
Mehdi
> On Sep 16, 2016, at 7:37 PM, Teresa Johnson <tejohnson at google.com> wrote:
>
>> On Fri, Sep 16, 2016 at 6:17 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>
>> You can control the parallelism used by the linker, but the option is linker dependent
>> (on macOS: -Wl,-mllvm,-threads=1).
>
> Wait - this is to control the ThinLTO backend parallelism, right? In which case
> you wouldn't want to use 1, but rather the number of physical cores.

Well, it depends what behavior you want :)
I should have used N to match ninja -jN.

> When using gold the option is -Wl,-plugin-opt,jobs=N, where N is the number of
> ThinLTO backend jobs that will be issued in parallel. So you could try with the
> default, but if you have HT on then you might want to try with the number of
> physical cores instead.

How does it affect parallel LTO backends?
(I hope it doesn't)

--
Mehdi
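As a concrete sketch of the gold form on Linux, the flag would be passed through the clang driver at link time; the file names and the jobs value below are placeholders, not from this thread:

$ clang -flto=thin -O2 -c foo.c bar.c
$ clang -flto=thin -fuse-ld=gold -Wl,-plugin-opt,jobs=4 foo.o bar.o -o prog

The jobs value caps only the ThinLTO backend threads spawned during the link; ninja's -j still governs the regular compile jobs.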
On Fri, Sep 16, 2016 at 7:40 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>
>> When using gold the option is -Wl,-plugin-opt,jobs=N, where N is the number of
>> ThinLTO backend jobs that will be issued in parallel. So you could try with the
>> default, but if you have HT on then you might want to try with the number of
>> physical cores instead.
>
> How does it affect parallel LTO backends?
> (I hope it doesn't)

In regular LTO mode, the option will also affect parallel LTO codegen, which is
off by default. Is that what you meant?

Teresa

--
Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413
On Sat, Sep 17, 2016 at 4:13 AM, Mehdi Amini <mehdi.amini at apple.com> wrote:
> ThinLTO is quite lean on the memory side. What's your bottleneck to throttle your -j?

It's a habit acquired from earlier builds of projects that make heavy use of C++
templates or other C++ features that blow up memory use, where I had to be careful
not to swap. I usually try -jNUM_VIRT_CORES, but could of course also run plain -j.
gcc's memory overhead was optimized significantly for OpenOffice and Mozilla a
couple of releases back, so it's probably less of an issue these days.