similar to: Memory overflow during cmake/ninja build

Displaying 20 results from an estimated 1000 matches similar to: "Memory overflow during cmake/ninja build"

2016 Feb 11
5
issues with split llvm libraries and llvmpipe and failing to load library
Hey, So in Fedora rawhide we are now building llvm 3.7.1 into the lots of little shared libraries format. However I'm running into a major problem with the fact that sometimes dlclose isn't dropping all the LLVM libraries from the address space of the process. We have a sequence like this: a) X server asks mesa gbm library to init, it loads the kms_swrast_dri.so with
2017 Sep 27
1
Build error
Hello, I am building LLVM with ninja on a Linux environment and I keep hitting the error below. I suspect the cause is that my PC does not have enough RAM. I extended my swap space with a 90GB swap file, but that didn't solve the problem. Should I add more physical RAM to my PC, or is there a software-based solution I can try first? Thank you and
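For context, the usual low-memory configuration for this kind of failure is to cap concurrent link jobs, since linking is the memory-hungry step. A minimal sketch, assuming a Ninja build and an out-of-tree build directory (paths and the job count are examples, not taken from the thread):

  # Configure with Ninja; LLVM_PARALLEL_LINK_JOBS uses a Ninja job pool
  # so compiles stay parallel but only one link runs at a time.
  cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_PARALLEL_LINK_JOBS=1
  ninja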
2019 Jun 06
2
clang: error: linker command failed due to signal (use -v to see invocation)
~/Documents/llvm-project/build$ ninja After over 2,000 files compiled. [25/1074] Linking CXX executable bin/llvm-lto FAILED: bin/llvm-lto : && /usr/bin/clang++  -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long
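When an individual link is killed for lack of memory, temporarily dropping to a single job is often enough to get past the big binaries; a hedged sketch, where the target path follows the FAILED line above:

  # Retry with a single job so only one linker instance is resident at a time,
  # then return to full parallelism once llvm-lto, clang, etc. have linked.
  ninja -j1 bin/llvm-lto
  ninja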
2016 Jul 23
3
[llvm-toolchain v3.8.1] LTO: Linking clang hangs with ld.gold and LLVMgold.so plugin
> On Jul 23, 2016, at 1:53 PM, Sedat Dilek via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > On Sat, Jul 23, 2016 at 7:48 PM, Piotr Padlewski <prazek at google.com> wrote: >> How big is your project? >> LTO eats RAM even faster than chrome. For example linking clang with LTO >>
2016 Jul 23
2
[llvm-toolchain v3.8.1] LTO: Linking clang hangs with ld.gold and LLVMgold.so plugin
How big is your project? LTO eats RAM even faster than Chrome. For example, linking clang with LTO could take 16 GB of RAM. Have you tried using LTO on your project on that machine, or is it your first time? Piotr On Sat, Jul 23, 2016 at 2:42 AM, Sedat Dilek via llvm-dev < llvm-dev at lists.llvm.org> wrote: > On Thu, Jul 21, 2016 at 12:01 PM, Sedat Dilek <sedat.dilek at
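If monolithic LTO is too memory-hungry for a self-hosted clang build, ThinLTO is the usual lower-memory alternative; a sketch of the relevant CMake switches, assuming a linker with ThinLTO support (ld.gold with the LLVM plugin, or lld) and placeholder paths:

  # ThinLTO keeps per-link memory much lower than full LTO;
  # still cap link concurrency, since each link can itself use many threads.
  cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_LTO=Thin \
    -DLLVM_PARALLEL_LINK_JOBS=2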
2015 Aug 24
4
Error building llvm
Trying to run make to build llvm, I faced the following error: Linking CXX shared library ../../lib/libLTO.so collect2: error: ld returned 1 exit status make[2]: *** [lib/libLTO.so.3.8.0svn] Error 1 make[1]: *** [tools/lto/CMakeFiles/LTO.dir/all] Error 2 make: *** [all] Error 2 So, what's the problem here? Regards, Marwa Yusuf Teaching Assistant - Computer Engineering Department
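"collect2: error: ld returned 1 exit status" hides the linker's own diagnostic; re-running just the failing target verbosely and single-job usually reveals whether it is an out-of-memory kill or a real symbol error. A sketch, where "LTO" is the CMake target name suggested by the error output above:

  # Show the full link command and the underlying ld error for libLTO.so
  make LTO VERBOSE=1 -j1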
2020 Mar 28
3
LLD issue on a massively parallel build machine
Alex : Can you please try `numactl` or `taskset` after https://github.com/llvm/llvm-project/commit/09158252f777c2e2f06a86b154c44abcbcf9bb74 ? There was a tiny bug in how sched_getaffinity() was used, see: https://reviews.llvm.org/D75153#1942336 From: Alex Brachet-Mialot <alexbrachetmialot at gmail.com> Sent: March 28, 2020 12:11 PM To: Itaru Kitayama <itaru.kitayama at gmail.com>
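For reference, a sketch of how the two suggested commands would be applied to the build (core and NUMA-node numbers are examples only):

  # Pin the whole build, and every linker it spawns, to cores 0-15
  taskset -c 0-15 ninja -j16
  # Or bind CPUs and memory to a single NUMA node
  numactl --cpunodebind=0 --membind=0 ninja -j16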
2020 Mar 28
2
LLD issue on a massively parallel build machine
Hi, On a 1296-core Intel machine with 376 GB, setting -DLLVM_PARALLEL_LINK_JOB=1 does not help (switching back to ld scales) see: [5085/5201] Linking CXX executable bin/clang-11 FAILED: bin/clang-11 : && /home/usr4/c74014i/opt/clang/current/bin/clang++ -stdlib=libc++ -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra
2017 Aug 26
2
building release_50 with gcc7.2.0 on MacOS: duplicate symbol llvm::DominatorTreeBase
This is release_50 branch of git, sha1: f1d5723be3f9456a6b16cdf687847ac2918846de Using gcc 7.2.0 from homebrew. $ CC=/usr/local/opt/gcc/bin/x86_64-apple-darwin16.7.0-gcc-7 CXX=/usr/local/opt/gcc/bin/x86_64-apple-darwin16.7.0-g++-7 cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/Users/andy/local/llvm5 -DCMAKE_PREFIX_PATH=/Users/andy/local/llvm5 $ make VERBOSE=1 [ 92%] Linking CXX
2020 Mar 28
3
LLD issue on a massively parallel build machine
$ free -g
              total   used   free   shared   buff/cache   available
Mem:            376    149     20        1          207         225
Swap:             3      0      3
Too small a swap size for a 72-core login machine? On Sun, Mar 29, 2020 at 4:28 AM Alex Brachet-Mialot < alexbrachetmialot at gmail.com> wrote: > Enable threads is for building llvm with
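If swap size really is the limiting factor, a swap file can be added without repartitioning; a minimal sketch, assuming root access and a filesystem where fallocate works (size and path are examples):

  sudo fallocate -l 64G /swapfile   # create a 64 GiB file
  sudo chmod 600 /swapfile
  sudo mkswap /swapfile
  sudo swapon /swapfile
  free -g                           # verify the enlarged swap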
2020 Mar 28
2
LLD issue on a massively parallel build machine
Hi, My configuration is below: cmake -GNinja -DLLVM_ENABLE_LLD=ON -DLLVM_ENABLE_THREADS=OFF -DLLVM_ENABLE_LIBCXX=ON -DCMAKE_BUILD_TYPE=Release -DGCC_INSTALL_PREFIX=/home/usr4/c74014i/opt/gcc-7.5.0/ -DLIBOMPTARGET_ENABLE_DEBUG=ON -DCMAKE_INSTALL_PREFIX=$HOME/opt/clang/${today} -DCMAKE_C_COMPILER=clang -DCMAKE_C_FLAGS="" -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_CXX_FLAGS=""
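Note that this configuration puts no cap on link parallelism; LLVM_ENABLE_THREADS=OFF only affects the LLVM being built, not the host lld doing the linking. A hedged variant of the invocation above with link concurrency limited (most of the original flags omitted for brevity, and 4 is an example value):

  cmake -GNinja -DLLVM_ENABLE_LLD=ON -DLLVM_PARALLEL_LINK_JOBS=4 \
    -DCMAKE_BUILD_TYPE=Release ../llvm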
2020 Mar 28
2
LLD issue on a massively parallel build machine
That is visibly slowing down the build; for speed I should stick with ld at the moment. On Sun, Mar 29, 2020 at 4:42 AM Alexandre Ganea <alexandre.ganea at ubisoft.com> wrote: > Would `taskset -c 0-3 ninja check-all -j4` work? > > > > *From:* Itaru Kitayama <itaru.kitayama at gmail.com> > *Sent:* March 28, 2020 3:37 PM > *To:* Alex Brachet-Mialot
2020 Apr 01
4
LLD issue on a massively parallel build machine
On 2020-03-29, Nemanja Ivanovic via llvm-dev wrote: > Glad you got it working. My suggestion about LLVM_ENABLE_THREADS didn't work because you didn't apply it when building the build linker. When you don't have the ability to rebuild the build compiler, this doesn't apply. In those cases, I end up doing a dirty hack where I use a wrapper script with
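The message is cut off, but a wrapper script of the kind described generally just interposes on the linker to bound its CPU (and thus thread) usage; a purely illustrative sketch, not the poster's actual script, and how it gets wired in (PATH, -fuse-ld=, or CMAKE_LINKER) is not shown here:

  #!/bin/sh
  # Hypothetical ld.lld wrapper: confine every link to four cores
  exec taskset -c 0-3 /usr/bin/ld.lld "$@"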
2020 Mar 28
2
LLD issue on a massively parallel build machine
Good news: I was able to use up to 37 cores for the LLVM build with LLD. The build speed, though I did not measure it precisely, is comparable to the GNU ld build. Thank you all for your help! On Sun, Mar 29, 2020 at 5:04 AM Alex Brachet-Mialot < alexbrachetmialot at gmail.com> wrote: > Yes unfortunately that would limit you to 4 cores. > > There’s no easy way to use lld with
2017 Jan 19
2
undefined symbols during linking LLDB 4.0 RC1
Hello, I updated my build scripts to build LLVM 4.0 RC1 (with clang, lldb, libc++, libc++abi, lld) on CentOS 6 and I got a lot of undefined symbols while linking LLDB. I'm using clang-3.9 and this configuration: -DLLVM_TARGETS_TO_BUILD="X86" -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=/usr/bin/clang -DCMAKE_CXX_COMPILER=/usr/bin/clang++
2020 Apr 01
2
LLD issue on a massively parallel build machine
On 04/01/2020 01:47 PM, Itaru Kitayama via llvm-dev wrote: > Thanks for the heads up. The supercomputer is down for maintenance this week. I'll give it a try when it gets back. This doesn't look to me like it's necessarily an lld issue. Trying to build all the sub-projects without limiting the number of ninja jobs is almost guaranteed to run out of memory on a machine
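Ninja defaults to roughly one job per hardware thread, so on a machine this wide an unbounded build means hundreds of concurrent compiles and links; capping the job count, optionally with a load-average limit, is the usual remedy. A sketch with example numbers, not a recommendation for this specific machine:

  # At most 32 jobs, and hold back new ones while the load average exceeds 64
  ninja -j32 -l64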
2015 Nov 30
3
difference with autotools, cmake and ninja building methods
2015-11-30 12:58 GMT+08:00 Chris Bieneman <beanz at apple.com>: > The autotools build system is officially deprecated and will be removed in a future release. CMake is the recommended configuration system, but it is only a configuration system. It generates build files for multiple different build systems. One of the most popular build systems is Ninja. You cannot
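In concrete terms, CMake only generates the build files and Ninja (or make) consumes them; a minimal sketch of the recommended flow, with placeholder source and build directories:

  mkdir build && cd build
  # CMake configures the tree and emits build.ninja ...
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm
  # ... and Ninja actually runs the compile and link jobs
  ninja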
2020 Feb 02
3
lld out of memory
Hi, I am seeing an LLVM build failure with recent LLD on x86 like: [...] lib/libLLVMCodeGen.a lib/libLLVMBitWriter.a lib/libLLVMScalarOpts.a lib/libLLVMAggressiveInstCombine.a lib/libLLVMInstCombine.a lib/libLLVMTransformUtils.a lib/libLLVMDebugInfoDWARF.a lib/libLLVMMCDisassembler.a lib/libLLVMExecutionEngine.a lib/libLLVMTarget.a lib/libLLVMAnalysis.a lib/libLLVMProfileData.a
2016 Dec 20
6
(Thin)LTO llvm build
Hi again, Teresa. Looks like I had forgotten to report back with success when finally building 3.9.0 in ThinLTO linker mode back in October. Sorry about that and thanks for helping me out. I know how important it is to get success reports as well, as a developer myself, so sorry again :(. While that worked back then, last weekend I tried to build 3.9.1 using 3.9.0 as installed from Arch Linux
2020 Apr 01
2
LLD issue on a massively parallel build machine
On another login node, which is 256 (GB)/48 (nodes), JURECA at JSC, I never had an LLD issue without setting -j when executing ninja in the past few weeks. On Thu, Apr 2, 2020 at 7:17 AM Itaru Kitayama <itaru.kitayama at gmail.com> wrote: > Tom, > Then what ratio do you think is minimal? > > On Thu, Apr 2, 2020 at 6:11 Tom Stellard <tstellar at redhat.com> wrote: >