Sedat Dilek via llvm-dev
2016-Mar-03 07:09 UTC
[llvm-dev] Building with LLVM_PARALLEL_XXX_JOBS
I have only had a quick look at the blog posts so far.
It might be that a Clang built with LTO/PGO speeds up the build.
Can you confirm this?

Can you confirm that binutils-gold speeds up the build?

Does LLVM have its own linker?
Can it be used here? Does it speed up the build?

Last night I looked through the available CMake/LLVM variables...

### GOLD
# CMAKE_LINKER:FILEPATH=/usr/bin/ld
# GOLD_EXECUTABLE:FILEPATH=/usr/bin/ld.gold
# LLVM_TOOL_GOLD_BUILD:BOOL=ON

### OPTLEVEL
# CMAKE_ASM_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
# CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
# CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG

### LTO
# LLVM_TOOL_LLVM_LTO_BUILD:BOOL=ON
# LLVM_TOOL_LTO_BUILD:BOOL=ON

### PGO
# LLVM_USE_OPROFILE:BOOL=OFF

### TABLEGEN
# LLVM_OPTIMIZED_TABLEGEN:BOOL=OFF

So '-O3' is the default for a RELEASE build.
Not sure which of the LTO variables is suitable, maybe both.
PGO? Is that the correct variable?
The blog posts mentioned using optimized TableGen.
Good? Bad? Ugly?

(A configure-line sketch wiring these variables together follows at the end of this mail.)

Thanks in advance for answering my questions.

Best regards,
- Sedat -

On 3/3/16, fariborz jahanian <fjahanian2016 at gmail.com> wrote:
> Building a binary with ‘LTO’, -O3, etc. will slow down the build. But the
> built binary could run much faster.
> I am not sure what the intention is here.
>
> - Fariborz
>
>> On Mar 2, 2016, at 4:42 PM, Mehdi Amini via llvm-dev
>> <llvm-dev at lists.llvm.org> wrote:
>>
>>> On Mar 2, 2016, at 4:22 PM, Sedat Dilek <sedat.dilek at gmail.com> wrote:
>>>
>>> I got some more inspiration on how to speed up my build and integrated
>>> the URLs into my scripts (attached).
>>>
>>> For example, to use GOLD as the linker or the '-O3' OptLevel, maybe in
>>> combination with LTO and PGO (using '-O3 -flto -fprofile-use').
>>
>> LTO *will* dramatically slow down the build.
>>
>> --
>> Mehdi
>>
>>> Let's see when the v3.8.0 FINAL is released.
>>>
>>> - Sedat -
>>>
>>> On 3/2/16, Fabio Pagani <pagabuc at gmail.com> wrote:
>>>> Hey Chris,
>>>>
>>>> Sedat was asking for a way "to speed up my build", and those blog posts
>>>> were really helpful to me.
>>>> Anyway, LLVM_DISTRIBUTION_COMPONENTS sounds very cool - hope you will
>>>> push your code soon!
>>>>
>>>> On Tue, Mar 1, 2016 at 11:32 PM, Chris Bieneman <cbieneman at apple.com>
>>>> wrote:
>>>>
>>>>> Fabio, the work I was mentioning here is an extension beyond those
>>>>> blog posts.
>>>>>
>>>>> Some details:
>>>>>
>>>>> * The “almost 40%” number I referred to is for a multi-stage clang
>>>>> build. That means we build a host-capable compiler, then build the
>>>>> actual compiler we want to ship.
>>>>> * I’m at Apple, so points 1 and 2 are already covered (we only use
>>>>> clang, and ld64 is a fast linker).
>>>>> * Our system compiler is PGO+LTO’d, but our stage1 isn’t. Stage1 isn’t
>>>>> because the performance improvement of PGO+LTO is less than the time
>>>>> it takes to build, and stage1 is basically a throwaway.
>>>>> * We are using Ninja and CMake, but this configuration isn’t really
>>>>> significantly faster than autoconf/make, and actually “ninja install”
>>>>> is slower in my tests than the old autoconf “make install”. The
>>>>> slowdown is almost entirely due to Ninja’s “all” target being a lot
>>>>> bigger.
>>>>> * This performance is for clean builds, not incremental ones, so
>>>>> ccache or shared libraries would not be a valid optimization.
>>>>> * We do use optimized tablegen.
>>>>> * “Build Less” is exactly what LLVM_DISTRIBUTION_COMPONENTS enables,
>>>>> just in a friendly wrapper target.
>>>>>
>>>>> -Chris
>>>>>
>>>>> On Mar 1, 2016, at 1:12 PM, Fabio Pagani <pagabuc at gmail.com> wrote:
>>>>>
>>>>> For faster builds and rebuilds you should definitely read:
>>>>>
>>>>> https://blogs.s-osg.org/an-introduction-to-accelerating-your-build-with-clang/
>>>>> https://blogs.s-osg.org/a-conclusion-to-accelerating-your-build-with-clang/
>>>>>
>>>>> Hope this helps!
>>>>>
>>>>> On Tue, Mar 1, 2016 at 9:17 PM, Chris Bieneman via llvm-dev <
>>>>> llvm-dev at lists.llvm.org> wrote:
>>>>>
>>>>>>> On Mar 1, 2016, at 10:01 AM, Mehdi Amini <mehdi.amini at apple.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Mar 1, 2016, at 9:57 AM, Chris Bieneman <cbieneman at apple.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> There are a few notes I'd like to add to this thread.
>>>>>>>>
>>>>>>>> (1) We have a number of places throughout our CMake build where we
>>>>>>>> use features from newer CMake releases gated by version checks.
>>>>>>>> Most of these features are performance or usability related; none
>>>>>>>> of them affect correctness. Using the latest CMake release will
>>>>>>>> often result in faster builds, so I encourage it.
>>>>>>>>
>>>>>>>> (2) CMake's "install" target will pretty much always be slower
>>>>>>>> from clean than the old autoconf/make "install" target. This is
>>>>>>>> because in CMake "install" depends on "all", and our CMake builds
>>>>>>>> more stuff in "all" than autoconf did. To help with this, our
>>>>>>>> CMake system has lots of convenient "install-${name}" targets that
>>>>>>>> support component-based installation. Not every component has one
>>>>>>>> of these rules, but if one you need is missing, let me know. I
>>>>>>>> also recently (r261681) added a new option
>>>>>>>> (LLVM_DISTRIBUTION_COMPONENTS) that allows you to specify a list
>>>>>>>> of components that have custom install targets. It then creates a
>>>>>>>> new "install-distribution" target that wraps just the components
>>>>>>>> you want. For Apple this is almost a 40% speedup over "ninja
>>>>>>>> install".
>>>>>>>
>>>>>>> That sounds great, I want to use it!
>>>>>>> It would be even more awesome with a description/example in
>>>>>>> docs/CMake.rst :)
>>>>>>
>>>>>> Once I get the last of the kinks worked out for our internal
>>>>>> adoption, I'm going to open-source our config files that use it.
>>>>>>
>>>>>> I've also made a note to remind myself to document it in
>>>>>> docs/CMake.rst. I need to do a pass updating that with a bunch of
>>>>>> the cool new things we're doing with CMake. Thanks for the reminder.
>>>>>>
>>>>>> -Chris
>>>>>>
>>>>>>> --
>>>>>>> Mehdi
>>>>>>>>
>>>>>>>> On Feb 25, 2016, at 11:08 AM, Sedat Dilek via llvm-dev <
>>>>>>>> llvm-dev at lists.llvm.org> wrote:
>>>>>>>>
>>>>>>>>>> Which combination of cmake/ninja versions are you using (latest
>>>>>>>>>> are v3.4.3 and v1.6.0)?
>>>>>>>>>
>>>>>>>>> With this combination I could reduce the build time from approx.
>>>>>>>>> 3h down to 01h20m.
>>>>>>>>>
>>>>>>>>> $ egrep -i 'jobs|ninja' llvm-build/CMakeCache.txt
>>>>>>>>> //Program used to build from build.ninja files.
>>>>>>>>> CMAKE_MAKE_PROGRAM:FILEPATH=/opt/cmake/bin/ninja
>>>>>>>>> //Define the maximum number of concurrent compilation jobs.
>>>>>>>>> LLVM_PARALLEL_COMPILE_JOBS:STRING=3
>>>>>>>>> //Define the maximum number of concurrent link jobs.
>>>>>>>>> LLVM_PARALLEL_LINK_JOBS:STRING=1
>>>>>>>>> CMAKE_GENERATOR:INTERNAL=Ninja
>>>>>>>>>
>>>>>>>>> $ LC_ALL=C ls -alt logs/3.8.0rc3_clang-3-8-0-rc3_cmake-3-4-3_ninja-1-6-0/
>>>>>>>>> total 360
>>>>>>>>> drwxr-xr-x 2 wearefam wearefam   4096 Feb 25 19:58 .
>>>>>>>>> drwxr-xr-x 6 wearefam wearefam   4096 Feb 25 19:58 ..
>>>>>>>>> -rw-r--r-- 1 wearefam wearefam 130196 Feb 25 19:54 install-log_llvm-toolchain-3.8.0rc3.txt
>>>>>>>>> -rw-r--r-- 1 wearefam wearefam 205762 Feb 25 19:51 build-log_llvm-toolchain-3.8.0rc3.txt
>>>>>>>>> -rw-r--r-- 1 wearefam wearefam  14331 Feb 25 18:30 configure-log_llvm-toolchain-3.8.0rc3.txt
>>>>>>>>>
>>>>>>>>> $ LC_ALL=C du -s -m llvm* /opt/llvm-toolchain-3.8.0rc3
>>>>>>>>> 315 llvm
>>>>>>>>> 941 llvm-build
>>>>>>>>> 609 /opt/llvm-toolchain-3.8.0rc3
>>>>>>>>>
>>>>>>>>> - Sedat -
>>>>>>>>>
>>>>>>>>> [1] https://cmake.org/files/v3.5/cmake-3.5.0-rc3-Linux-x86_64.tar.gz
>>>
>>> <build_llvm-toolchain_clang-cmake-ninja.sh><install_llvm-toolchain_clang-cmake-ninja.sh>
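As mentioned above, a sketch of a configure line wiring these variables together (the job counts, the ld.gold path, and turning on LLVM_OPTIMIZED_TABLEGEN are illustrative assumptions, not measured recommendations; note that LLVM_PARALLEL_COMPILE_JOBS/LLVM_PARALLEL_LINK_JOBS only take effect with the Ninja generator):

# Hedged sketch of a Release configure line using the cache variables above
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=ON \
  -DLLVM_TARGETS_TO_BUILD=X86 \
  -DLLVM_PARALLEL_COMPILE_JOBS=3 \
  -DLLVM_PARALLEL_LINK_JOBS=1 \
  -DCMAKE_LINKER=/usr/bin/ld.gold \
  -DLLVM_OPTIMIZED_TABLEGEN=ON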
Tilmann Scheller via llvm-dev
2016-Mar-04 10:28 UTC
[llvm-dev] Building with LLVM_PARALLEL_XXX_JOBS
Hi Sedat,

On 03/03/2016 08:09 AM, Sedat Dilek via llvm-dev wrote:
> It might be that a Clang built with LTO/PGO speeds up the build.
> Can you confirm this?

Yes, a Clang host compiler built with LTO or PGO is generally faster than an -O3 build.

Some things to keep in mind when building the Clang host compiler:

GCC:
- GCC 4.9 gives good results with PGO enabled (1.16x speedup over the -O3 build), not so much with LTO (LTO actually regresses performance relative to the -O3 build; the same holds for PGO vs. PGO+LTO)
- GCC 5.1/5.2/5.3 can't build Clang with LTO enabled (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66027); that's supposed to be fixed in GCC 5.4

Clang:
- PGO works and gives a good 1.12x speedup over the -O3 build (it produced about 270GB of profiling data when I tried this in December last year; this should be addressed soon once the in-process profiling data merging lands)
- LTO provides a 1.03x speedup over the -O3 build
- I have not tried LTO+PGO with full Clang bootstrap profiling data, but I would expect it to increase performance even further

(A sketch of the manual PGO cycle follows at the end of this mail.)

> Can you confirm that binutils-gold speeds up the build?

Yes, gold is definitely faster than ld when building Clang/LLVM.

> Does LLVM have its own linker?
> Can it be used here? Does it speed up the build?

I haven't tried it, but lld can definitely link Clang/LLVM on x86-64 Linux.

> The blog posts mentioned using optimized TableGen.
> Good? Bad? Ugly?

Good, it helps to speed up debug builds.

Regards,

Tilmann
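A rough sketch of that manual instrumentation-based PGO cycle with Clang, using -fprofile-instr-generate/-fprofile-instr-use and llvm-profdata (the paths and the training input are placeholder assumptions, not a tested recipe):

# 1) Build an instrumented Clang with an existing host Clang.
CC=clang CXX=clang++ cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_FLAGS="-fprofile-instr-generate" \
  -DCMAKE_CXX_FLAGS="-fprofile-instr-generate"
ninja clang

# 2) Train: compile something representative; every compiler process
#    writes its own raw profile (hence the huge amount of profile data).
LLVM_PROFILE_FILE="clang-%p.profraw" ./bin/clang++ -O2 -c training-input.cpp

# 3) Merge the raw profiles into a single profile.
llvm-profdata merge -output=clang.profdata clang-*.profraw

# 4) Rebuild Clang with the merged profile applied.
CC=clang CXX=clang++ cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_FLAGS="-fprofile-instr-use=clang.profdata" \
  -DCMAKE_CXX_FLAGS="-fprofile-instr-use=clang.profdata"
ninja clang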
Sedat Dilek via llvm-dev
2016-Mar-12 11:45 UTC
[llvm-dev] Building with LLVM_PARALLEL_XXX_JOBS
On Fri, Mar 4, 2016 at 11:28 AM, Tilmann Scheller <tilmann at osg.samsung.com> wrote:
> Hi Sedat,
>
> On 03/03/2016 08:09 AM, Sedat Dilek via llvm-dev wrote:
>>
>> It might be that a Clang built with LTO/PGO speeds up the build.
>> Can you confirm this?
>
> Yes, a Clang host compiler built with LTO or PGO is generally faster than
> an -O3 build.
>
> Some things to keep in mind when building the Clang host compiler:
>
> GCC:
> - GCC 4.9 gives good results with PGO enabled (1.16x speedup over the -O3
> build), not so much with LTO (LTO actually regresses performance relative
> to the -O3 build; the same holds for PGO vs. PGO+LTO)
> - GCC 5.1/5.2/5.3 can't build Clang with LTO enabled
> (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66027); that's supposed to
> be fixed in GCC 5.4
>
> Clang:
> - PGO works and gives a good 1.12x speedup over the -O3 build (it produced
> about 270GB of profiling data when I tried this in December last year;
> this should be addressed soon once the in-process profiling data merging
> lands)
> - LTO provides a 1.03x speedup over the -O3 build
> - I have not tried LTO+PGO with full Clang bootstrap profiling data, but I
> would expect it to increase performance even further
>
>> Can you confirm that binutils-gold speeds up the build?
>
> Yes, gold is definitely faster than ld when building Clang/LLVM.
>
>> Does LLVM have its own linker?
>> Can it be used here? Does it speed up the build?
>
> I haven't tried it, but lld can definitely link Clang/LLVM on x86-64
> Linux.
>
>> The blog posts mentioned using optimized TableGen.
>> Good? Bad? Ugly?
>
> Good, it helps to speed up debug builds.
>

[ CCed all folks who answered me ]

Hi,

I have built my llvm-toolchain v3.8.0 (FINAL) with binutils-gold v1.11 in a 1st run.
When building with cmake/ninja there are 150 "Linking" lines...

$ grep Linking logs/3.8.0_clang-3-8-0_cmake-3-4-3_ninja-1-6-0_gold-1-11_compile-jobs-2_link-jobs-1/build-log_llvm-toolchain-3.8.0.txt | wc -l
150

These are the cmake options I used...

*** SNIP ***
### CMAKE OPTIONS
# NOTE #1: CMake version 2.8.8 is the minimum required (Ubuntu/precise ships v2.8.7 officially)
# NOTE #2: For fast builds use the recommended CMake >= v3.2 (used: v3.4.3) and Ninja (used: v1.6.0)
# NOTE #3: How to check the available cmake options?
# EXAMPLE #3: cd $BUILD_DIR ; cmake ../llvm -LA | egrep $CMAKE_OPTS
#
# CMake binary
CMAKE="cmake"

# CMake compiler options
COMPILERS_CMAKE_OPTS="-DCMAKE_C_COMPILER=$COMPILER_CC -DCMAKE_CXX_COMPILER=$COMPILER_CXX"

# NOTE-1: cmake/ninja: Use the LLVM_PARALLEL_COMPILE_JOBS and LLVM_PARALLEL_LINK_JOBS options
# NOTE-2: For fast builds use the number of available (online) CPUs +1, or set values explicitly
# NOTE-3: For fast and safe linking use binutils-gold and LINK_JOBS="1"
COMPILE_JOBS="2"
##COMPILE_JOBS=$(($(getconf _NPROCESSORS_ONLN)+1))
LINK_JOBS="1"
JOBS_CMAKE_OPTS="-DLLVM_PARALLEL_COMPILE_JOBS=$COMPILE_JOBS -DLLVM_PARALLEL_LINK_JOBS=$LINK_JOBS"

# CMake linker options (here: use binutils-gold to speed up the build)
LINKER="/usr/bin/ld.gold"
CMAKE_LINKER="$LINKER"
CMAKE_LINKER_OPTS="-DCMAKE_LINKER=$CMAKE_LINKER"

# CMake generators
CMAKE_GENERATORS="Ninja"
GENERATORS_CMAKE_OPTS="-G $CMAKE_GENERATORS"

# CMake configure settings
PREFIX_CMAKE_OPTS="-DCMAKE_INSTALL_PREFIX=$PREFIX"
OPTIMIZED_CMAKE_OPTS="-DCMAKE_BUILD_TYPE=RELEASE"
ASSERTIONS_CMAKE_OPTS="-DLLVM_ENABLE_ASSERTIONS=ON"
TARGETS_CMAKE_OPTS="-DLLVM_TARGETS_TO_BUILD=X86"
CONFIGURE_CMAKE_OPTS="$PREFIX_CMAKE_OPTS $OPTIMIZED_CMAKE_OPTS $ASSERTIONS_CMAKE_OPTS $TARGETS_CMAKE_OPTS"

# All CMake options
CMAKE_OPTS="$COMPILERS_CMAKE_OPTS $JOBS_CMAKE_OPTS $CMAKE_LINKER_OPTS $GENERATORS_CMAKE_OPTS $CONFIGURE_CMAKE_OPTS"
*** SNAP ***

Would LINK_JOBS="2" speed up the build?
One of you told me to use "1" to be on the safe side - that is my default.
Personally, I do not find single-job linking very efficient - though it is still faster than linking with the binutils-bfd linker.

Using LLVM's own linker, lld, (in a 2nd build) - good|bad|ugly?

[ TODOs: before doing a 2nd build (and, in a 3rd run, using more optimized binaries) ]

How do I enable LTO via CMake?
How do I enable PGO via CMake?
(One way to inject '-flto' by hand is sketched at the end of this mail.)

Grepping for 'lto' or 'pgo' gives no help in [1].
Searching there for '-fprofile' shows...

LLVM_PROFDATA_FILE:PATH
    Path to a profdata file to pass into clang's -fprofile-instr-use flag.
    This can only be specified if you're building with clang.

Unsure what to use!

From my build-script (attached)...

##### BEGIN *** SECTION WILL BE DELETED ***
#
# XXX: TRYOUT #1: Use GOLD as linker
# XXX: TRYOUT #2: Use '-O3' OptLevel
# XXX: TRYOUT #3: Use #2 in combination with LTO and PGO ('-O3 -flto -fprofile-use')
# XXX: TRYOUT #4: Use optimized tablegen
#
### TRYOUT #1: GOLD <--- DONE
# CMAKE_LINKER:FILEPATH=/usr/bin/ld
# GOLD_EXECUTABLE:FILEPATH=/usr/bin/ld.gold
# LLVM_TOOL_GOLD_BUILD:BOOL=ON
#
### TRYOUT #2: OPTLEVEL '-O3' <--- NOP
# CMAKE_ASM_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
# CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
# CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
#
### TRYOUT #3: LTO AND PGO <--- UNSURE
# LLVM_TOOL_LLVM_LTO_BUILD:BOOL=ON
# LLVM_TOOL_LTO_BUILD:BOOL=ON
# LLVM_USE_OPROFILE:BOOL=OFF
#
### TRYOUT #4: TABLEGEN
# LLVM_OPTIMIZED_TABLEGEN:BOOL=OFF
#
##### END *** SECTION WILL BE DELETED ***

Thanks for any help and ideas.

Regards,
- Sedat -

[1] http://llvm.org/releases/3.8.0/docs/CMake.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: build_llvm-toolchain_clang-cmake-ninja.sh
Type: application/x-sh
Size: 4638 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20160312/6249bc5d/attachment.sh>
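Lacking a dedicated CMake option in 3.8, one (untested) way to try LTO is to inject '-flto' into the stock compile and link flag variables - a minimal sketch, assuming the host toolchain can LTO-link something as large as Clang at all:

# Hedged sketch: hand-rolled LTO via the generic CMake flag variables
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_FLAGS="-flto" \
  -DCMAKE_CXX_FLAGS="-flto" \
  -DCMAKE_EXE_LINKER_FLAGS="-flto" \
  -DCMAKE_SHARED_LINKER_FLAGS="-flto"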
Sedat Dilek via llvm-dev
2016-Jul-17 16:52 UTC
[llvm-dev] Building with LLVM_PARALLEL_XXX_JOBS
On Fri, Mar 4, 2016 at 11:28 AM, Tilmann Scheller <tilmann at osg.samsung.com> wrote:
> Hi Sedat,
>
> On 03/03/2016 08:09 AM, Sedat Dilek via llvm-dev wrote:
>>
>> It might be that a Clang built with LTO/PGO speeds up the build.
>> Can you confirm this?
>
> Yes, a Clang host compiler built with LTO or PGO is generally faster than
> an -O3 build.
>
> Some things to keep in mind when building the Clang host compiler:
>
> GCC:
> - GCC 4.9 gives good results with PGO enabled (1.16x speedup over the -O3
> build), not so much with LTO (LTO actually regresses performance relative
> to the -O3 build; the same holds for PGO vs. PGO+LTO)
> - GCC 5.1/5.2/5.3 can't build Clang with LTO enabled
> (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66027); that's supposed to
> be fixed in GCC 5.4
>
> Clang:
> - PGO works and gives a good 1.12x speedup over the -O3 build (it produced
> about 270GB of profiling data when I tried this in December last year;
> this should be addressed soon once the in-process profiling data merging
> lands)
> - LTO provides a 1.03x speedup over the -O3 build
> - I have not tried LTO+PGO with full Clang bootstrap profiling data, but I
> would expect it to increase performance even further
>

I have jumped over to LLVM v3.8.1 and am trying your speedup-build hints again - using GCC v4.9.2 and GNU binutils v2.22 on an Ubuntu/precise AMD64 system.

$ uname -r
3.13.0-92-generic

My build-type is "release"...

BUILDTYPE_CMAKE_OPTS="-DCMAKE_BUILD_TYPE=RELEASE"

...for a "debug" build I do not have enough disk space.

Unfortunately, when I enable the backported LTO flag...

LTO_CMAKE_OPTS="-DLLVM_ENABLE_LTO=ON"

...my build breaks.

Is LTO only available/useful for debug builds?
How do you pass "-flto" in your build-script (see my attached build-script)?
(See also the usage note after the attached patch.)
Can you have a look at the attached build-log file?
For now, I am building with none of the speedup options.

>> Can you confirm that binutils-gold speeds up the build?
>
> Yes, gold is definitely faster than ld when building Clang/LLVM.
>

The build breaks the same way with GNU gold and the LTO flag enabled.

>> Does LLVM have its own linker?
>> Can it be used here? Does it speed up the build?
>
> I haven't tried it, but lld can definitely link Clang/LLVM on x86-64
> Linux.
>

Not tried yet.

>> The blog posts mentioned using optimized TableGen.
>> Good? Bad? Ugly?
>
> Good, it helps to speed up debug builds.
>

And for release builds?

- Sedat -
-------------- next part --------------
--- llvm.orig/cmake/modules/HandleLLVMOptions.cmake
+++ llvm/cmake/modules/HandleLLVMOptions.cmake
@@ -635,6 +635,13 @@ append_if(LLVM_BUILD_INSTRUMENTED "-fpro
   CMAKE_EXE_LINKER_FLAGS
   CMAKE_SHARED_LINKER_FLAGS)
 
+option(LLVM_ENABLE_LTO "Enable link-time optimization" OFF)
+append_if(LLVM_ENABLE_LTO "-flto"
+  CMAKE_CXX_FLAGS
+  CMAKE_C_FLAGS
+  CMAKE_EXE_LINKER_FLAGS
+  CMAKE_SHARED_LINKER_FLAGS)
+
 # Plugin support
 # FIXME: Make this configurable.
 if(WIN32 OR CYGWIN)
--- llvm.orig/docs/CMake.rst
+++ llvm/docs/CMake.rst
@@ -347,6 +347,10 @@ LLVM-specific variables
   are ``Address``, ``Memory``, ``MemoryWithOrigins``, ``Undefined``, ``Thread``,
   and ``Address;Undefined``. Defaults to empty string.
 
+**LLVM_ENABLE_LTO**:BOOL
+  Add the ``-flto`` flag to the compile and link command lines,
+  enabling link-time optimization. Defaults to OFF.
+
 **LLVM_PARALLEL_COMPILE_JOBS**:STRING
   Define the maximum number of concurrent compilation jobs.
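With the backport above applied, enabling LTO should then just be:

cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=RELEASE -DLLVM_ENABLE_LTO=ON

Per the HandleLLVMOptions.cmake hunk, this appends '-flto' to CMAKE_C_FLAGS, CMAKE_CXX_FLAGS, CMAKE_EXE_LINKER_FLAGS, and CMAKE_SHARED_LINKER_FLAGS - the same effect as passing those four variables by hand.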
-------------- next part --------------
A non-text attachment was scrubbed...
Name: build_llvm-toolchain.sh
Type: application/x-sh
Size: 5448 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20160717/297182e7/attachment.sh>
-------------- next part --------------
[Inlined CMake configure log, abridged to its key lines:]
-- The C compiler identification is GNU 4.9.2
-- The CXX compiler identification is GNU 4.9.2
-- Target triple: x86_64-unknown-linux-gnu
-- Native target architecture is X86
-- Found PythonInterp: /usr/bin/python2.7 (found version "2.7.12")
-- Targeting X86
-- Compiler-RT supported architectures: x86_64
-- Clang version: 3.8.1
-- Configuring done
-- Generating done
-- Build files have been written to: /home/wearefam/src/llvm-toolchain/llvm-build
-------------- next part --------------
A non-text attachment was scrubbed...
Name: build-log_llvm-toolchain-3.8.1.txt.gz
Type: application/x-gzip
Size: 41076 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20160717/297182e7/attachment.bin>