
Displaying 20 results from an estimated 5000 matches similar to: "Ninja build (on Windows anyway) may be doing redundant work"

2018 Nov 19
2
Ninja build (on Windows anyway) may be doing redundant work
Do you still see this if you use lld-link for linking? The "corrupt obj file" is something we saw on chrome's bots every now and then before we switched to lld. On Mon, Nov 19, 2018 at 5:27 PM Zachary Turner <zturner at google.com> wrote: > +Nico Weber <thakis at google.com> > > On Mon, Nov 19, 2018 at 12:25 PM via llvm-dev <llvm-dev at lists.llvm.org>
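For anyone wanting to try that suggestion, switching the LLVM self-host build over to lld is a configure-time change. A minimal sketch, assuming clang-cl is the host compiler and using the standard LLVM_ENABLE_LLD cache option (the directory layout here is illustrative, not taken from the thread):

    # Sketch: make the LLVM build link with lld instead of link.exe.
    # LLVM_ENABLE_LLD is a standard LLVM CMake option; paths are illustrative.
    cmake -G Ninja \
          -DCMAKE_C_COMPILER=clang-cl \
          -DCMAKE_CXX_COMPILER=clang-cl \
          -DLLVM_ENABLE_LLD=ON \
          ../llvm
    ninja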
2018 Nov 20
2
Ninja build (on Windows anyway) may be doing redundant work
Since there's no "[2663/3121] " line between the two messages, the two lines are from the same link.exe invocation. I don't know why link.exe thinks it needs to print this line twice; ninja doesn't have anything to do with it. On Mon, Nov 19, 2018 at 6:57 PM <paul.robinson at sony.com> wrote: > I'm more concerned about seeing the message come out twice, which
2018 Nov 20
2
Ninja build (on Windows anyway) may be doing redundant work
(resend to the list) And of course, just as I say that, my next ninja build shows the line only once. On reflection I am less sure that the lack of a [N/M] line means they are from the same invocation. Surely ninja could spawn two links, which then independently report "Creating library" after ninja emits the [N/M] lines. --paulr From: llvm-dev [mailto:llvm-dev-bounces at
2013 Feb 06
1
[LLVMdev] [cfe-dev] Using CMake/Ninja on buildbots
On Wed, Feb 6, 2013 at 3:01 PM, Sean Silva <silvas at purdue.edu> wrote: > IMO, any functional/correctness difference between an incremental and > clean build should be considered a build system bug, If your (c)makefile underspecifies dependencies, there's nothing the build system can do. > especially for > C++ projects where incremental vs. clean can mean 10 second vs 30
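A concrete toy illustration of the underspecified-dependency point (made up for this note, not taken from the thread): the rule below never mentions config.h, so an incremental build after editing the header silently reuses the stale binary, while a forced rebuild picks the change up.

    # Toy makefile whose rule depends only on main.c, not on config.h.
    printf '#define GREETING "hello"\n' > config.h
    printf '#include <stdio.h>\n#include "config.h"\nint main(void){puts(GREETING);return 0;}\n' > main.c
    printf 'demo: main.c\n\tcc -o demo main.c\n' > Makefile

    make && ./demo        # prints "hello"
    printf '#define GREETING "goodbye"\n' > config.h
    make && ./demo        # incremental build: "demo is up to date", still "hello"
    make -B && ./demo     # forced clean rebuild: now prints "goodbye"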
2017 Jul 20
3
FYI: Ninja-build user may use CMake-3.9
This is useful for developers who build on machines with many cores. https://cmake.org/cmake/help/v3.9/release/3.9.html#other-changes - The Ninja <https://cmake.org/cmake/help/v3.9/generator/Ninja.html#generator:Ninja> generator has loosened the dependencies of object compilation. Object compilation now depends only on custom targets and custom commands associated with libraries on
2017 Jul 20
2
FYI: Ninja-build user may use CMake-3.9
On Fri, Jul 21, 2017 at 1:16 AM Reid Kleckner <rnk at google.com> wrote: > This is great news! Do we know who contributed the changes to cut the > extra library dependencies? > > Do you think we should remove ENABLE_OBJLIB to simplify our CMake files in > the near future? It seems to me that anyone who cares about highly parallel > build throughput can upgrade CMake to get
2019 Jul 31
2
buildbot failure in LLVM on sanitizer-x86_64-linux-gn
vitalybuka, sanitizer-x86_64-linux-gn is _still_ on http://lab.llvm.org:8011/console . Can we please get it removed? On Wed, Jul 3, 2019 at 7:07 AM Nico Weber <thakis at chromium.org> wrote: > https://reviews.llvm.org/D63909 landed. Maybe it needs a master restart > to have an effect? > > On Wed, Jul 3, 2019 at 1:03 PM Roman Lebedev <lebedev.ri at gmail.com> wrote: >
2020 Sep 01
2
[cfe-dev] Can we remove llvmbb from IRC?
On Tue, Sep 1, 2020 at 3:57 PM David Blaikie <dblaikie at gmail.com> wrote: > > > On Tue, Sep 1, 2020 at 12:42 PM Nico Weber <thakis at chromium.org> wrote: > >> On Tue, Sep 1, 2020 at 3:32 PM David Blaikie <dblaikie at gmail.com> wrote: >> >>> On Tue, Sep 1, 2020 at 12:07 PM Nico Weber via cfe-dev < >>> cfe-dev at lists.llvm.org>
2019 May 24
2
Prevent ninja from rerunning cmake in a new build directory
Just posted this fix on ninja's github page, but figured I'd share it with a larger audience. Every time I run cmake && ninja in a new build directory, ninja will rerun cmake because the entry for build.ninja in .ninja_log is older than the timestamp on CMakeCache.txt, even if the timestamp on the actual file isn't older. The following patch fixes the problem, i.e.,
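The snippet cuts off before the patch itself; for diagnosing this class of problem in general, ninja's "explain" debug mode reports why it considers an edge out of date, including the CMake re-generation step. A generic sketch (not the poster's fix):

    mkdir build && cd build
    cmake -G Ninja ..            # writes CMakeCache.txt and build.ninja
    ninja                        # builds; records completion times in .ninja_log
    ninja -d explain -n 2>&1 | head
    # "-n" is a dry run; "-d explain" prints why each dirty edge is considered
    # dirty, e.g. whether build.ninja looks older than CMakeCache.txt.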
2015 Sep 04
3
Running tests on OS X 10.10 vs "Killed: 9"
Hi, building 'check-all' on any of my machines running OS X 10.10 usually fails because a few tests fail due to some processes being killed by the kernel (there's always "Killed: 9" somewhere in lit's error output). Everything's fine on 10.9. How do folks deal with this? Don't use 10.10 for building llvm? Is there some tweakable to tell the kernel "please
2020 Sep 01
2
[cfe-dev] Can we remove llvmbb from IRC?
On Tue, Sep 1, 2020 at 3:32 PM David Blaikie <dblaikie at gmail.com> wrote: > On Tue, Sep 1, 2020 at 12:07 PM Nico Weber via cfe-dev < > cfe-dev at lists.llvm.org> wrote: > >> Hi, >> >> llvmbb's job is to inform people of build breaks. However, it seems to >> trigger for a big list of bots, and at least one of them seems to always be >>
2020 Sep 03
2
Flakey failure on clang-ppc64le-linux-multistage
I think that was maybe the discussion on https://reviews.llvm.org/D78245 On Thu, Sep 3, 2020 at 6:22 PM Robinson, Paul <paul.robinson at sony.com> wrote: > I have a vague memory that libcxx wanted it for something, and claimed it > would be hard to work around not having it. > > Anyone else remember that? I can’t dredge up the details, sorry… > > In any event, a separate
2020 Sep 03
2
Flakey failure on clang-ppc64le-linux-multistage
Sure. I didn't use lit or ninja. I simply copied the script produced by lit (/home/buildbots/ppc64le-clang-multistage-test/clang-ppc64le-multistage/stage1/tools/clang/test/Driver/Output/target-override.c.script) into a temporary directory (along with a deep copy of the build directory). I modified the paths in the script to point to the temporary directory. Then I ran the script in a loop. For
2020 Sep 02
2
Flakey failure on clang-ppc64le-linux-multistage
Well, I am at my wit's end. I have copied over the script and directories for this test case and run it a few million times. First I was running one at a time, then I switched to kicking off 1000 at a time. All the while, the bots continued to run on the same machine. The script never failed even once. I am not sure if this has something to do with Python as part of llvm-lit or what is going
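For reference, the kind of loop being described in these two messages looks roughly like this (the script path is hypothetical; it stands in for the copied, path-adjusted lit script, kicked off in batches of 1000 as above):

    # Rough sketch of the reproduction loop; SCRIPT is a stand-in path.
    SCRIPT=/tmp/flake-repro/target-override.c.script
    for batch in $(seq 1 1000); do
      for i in $(seq 1 1000); do
        sh "$SCRIPT" >/dev/null 2>&1 || echo "failure in batch $batch, run $i" &
      done
      wait                       # 1000 concurrent runs per batch
    done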
2019 Jul 03
2
buildbot failure in LLVM on sanitizer-x86_64-linux-gn
Why does the GN bot still send mails? I thought it got fixed? On Wed, Jul 3, 2019 at 1:44 PM <llvm.buildmaster at lab.llvm.org> wrote: > > The Buildbot has detected a new failure on builder sanitizer-x86_64-linux-gn while building llvm. > Full details are available at: > http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-gn/builds/1820 > > Buildbot URL:
2019 Jun 06
4
Adding llvm-undname to the llvm-cov bot
On Wed, Jun 5, 2019 at 1:33 PM <vsk at apple.com> wrote: > > > On Jun 4, 2019, at 4:41 PM, Nico Weber <thakis at chromium.org> wrote: > > On Mon, Jun 3, 2019 at 2:06 PM <vsk at apple.com> wrote: > >> Hi Nico, >> >> Sorry for the delay, I've been OOO. The llvm-cov bot should produce >> reports for llvm-undname starting today. >>
2015 Sep 04
2
[cfe-dev] Running tests on OS X 10.10 vs "Killed: 9"
On Fri, Sep 4, 2015 at 12:46 PM, Sean Silva <chisophugis at gmail.com> wrote: > > > On Fri, Sep 4, 2015 at 10:27 AM, Nico Weber via cfe-dev < > cfe-dev at lists.llvm.org> wrote: > >> Hi, >> >> building 'check-all' on any of my machines running OS X 10.10 usually >> fails because a few tests fail due to some processes being killed by the
2020 Sep 03
3
Flakey failure on clang-ppc64le-linux-multistage
Should be fixed by https://reviews.llvm.org/D87103. Shall we consider deprecating (emitting a warning) or removing %T from lit? lldb, lld/COFF, and clang-tools-extra are the three major users of %T. There are a few other %T uses in other places, but not too many. We will also investigate whether other projects using lit are using %T. On Thu, Sep 3, 2020 at 11:25 AM David Blaikie <dblaikie at
2020 Sep 03
2
Flakey failure on clang-ppc64le-linux-multistage
This is likely due to a race condition (%T is a shared parent directory). I'll put up a patch to fix it. On Thu, Sep 3, 2020 at 10:00 AM David Blaikie via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Is the machine running any jobs in parallel? Would it be worth trying running lit in the loop, rather than the script? (perhaps lit's doing something interesting) or maybe the
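The race is easy to simulate outside of lit. In the sketch below (purely illustrative, unrelated to the actual failing test), two "tests" share one scratch directory the way %T is shared; each one's cleanup can delete the other's output, which is exactly what a unique per-test path such as %t avoids:

    # Standalone simulation of a shared-scratch-directory race.
    SHARED=$(mktemp -d)              # plays the role of the shared %T
    fake_test() {
      rm -rf "$SHARED/out" && mkdir -p "$SHARED/out"
      echo "$1" > "$SHARED/out/result"
      sleep 0.2                      # window for the other test to interfere
      grep -qx "$1" "$SHARED/out/result" 2>/dev/null \
        || echo "test $1: output clobbered"
    }
    fake_test A & fake_test B & wait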
2019 Jun 10
2
Adding llvm-undname to the llvm-cov bot
On Mon, Jun 10, 2019 at 2:11 PM <vsk at apple.com> wrote: > > > On Jun 6, 2019, at 9:56 AM, Nico Weber <thakis at chromium.org> wrote: > > On Wed, Jun 5, 2019 at 1:33 PM <vsk at apple.com> wrote: > >> >> >> On Jun 4, 2019, at 4:41 PM, Nico Weber <thakis at chromium.org> wrote: >> >> On Mon, Jun 3, 2019 at 2:06 PM <vsk at