Displaying 20 results from an estimated 1000 matches similar to: "Fwd: buildbot failure in LLVM on clang-cmake-mips"
2015 Oct 01
2
buildbot failure in LLVM on clang-cmake-mips
On Thu, Oct 1, 2015 at 12:08 PM, Daniel Sanders <Daniel.Sanders at imgtec.com>
wrote:
> I do. I'll take a look.
>
> Is there a way for owners to get emails for long-lasting failures?
>
I'm not sure what the generic setup is, but at least for the builder/slave
I admin, it emails me on every failure. So I get a lot of mail,
continuously, if there's a consistent
2015 Oct 02
3
buildbot failure in LLVM on clang-cmake-mips
I've just noticed that this is a new test, added in r248325, that has never passed on this builder. Added the author of the test (Evgeniy).
From: llvm-dev [mailto:llvm-dev-bounces at lists.llvm.org] On Behalf Of Daniel Sanders via llvm-dev
Sent: 01 October 2015 20:34
To: David Blaikie
Cc: llvm-dev
Subject: Re: [llvm-dev] buildbot failure in LLVM on clang-cmake-mips
> > I do. I'll take
2015 Oct 02
2
buildbot failure in LLVM on clang-cmake-mips
Thanks. From the debugging I've done so far, it looks like it could be another 32-bit big-endian-specific bug. It seems to be segfaulting in the memset() in allocate_stack.c (from glibc) because the given stack pointer is null. I'm guessing this is because it read the wrong half of a 64-bit value somewhere, but I haven't identified where it goes wrong.
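To illustrate the class of bug described above (a made-up example, not the
actual failing code): on a 32-bit big-endian target, reading only the first
word of a 64-bit slot picks up the high half, which is zero whenever the
stored value fits in 32 bits, so a pointer recovered that way comes out null.

    // Hypothetical illustration; the value and layout are invented.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
      // A 32-bit stack address stored, incorrectly, in a 64-bit slot:
      // the high 32 bits are zero.
      uint64_t slot = 0x7fff1234;

      // Buggy read: take the first 4 bytes as if they held the pointer.
      // Little-endian targets happen to get the low half; big-endian
      // targets get the HIGH half, i.e. zero, hence the null stack pointer.
      uint32_t recovered;
      std::memcpy(&recovered, &slot, sizeof(recovered));

      std::printf("recovered pointer bits: 0x%x\n", recovered); // 0 on big-endian
      return 0;
    }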
2015 Oct 01
3
Fwd: buildbot failure in LLVM on sanitizer-x86_64-linux-bootstrap
This buildbot seems to have been failing for a while (though it's hard for
me to identify the root cause in the logs, as I mentioned in another
thread, so it's hard to say if it's the same failure, or if the failure is
consistent, etc.) - anyone watching it/caring about it?
---------- Forwarded message ----------
From: <llvm.buildmaster at lab.llvm.org>
Date: Wed, Sep 30, 2015 at
2015 Oct 01
2
Fwd: buildbot failure in LLVM on llvm-mips-linux
This buildbot seems to have been failing continuously for a couple of weeks
now ( http://lab.llvm.org:8011/builders/llvm-mips-linux/builds/14658 ) - is
anyone watching it/caring about it?
---------- Forwarded message ----------
From: <llvm.buildmaster at lab.llvm.org>
Date: Wed, Sep 30, 2015 at 11:34 PM
Subject: buildbot failure in LLVM on llvm-mips-linux
To: Ahmed Bougacha
2015 Oct 01
2
Fwd: buildbot failure in LLVM on llvm-mips-linux
The failure is a bit odd. LLVM is ignoring $PWD because it doesn't have the same inode as '.'. This causes the test failure because DW_AT_comp_dir and $PWD differ. However, $PWD and '.' should have the same inode, since $PWD is a symlink to the current directory and stat() looks through symlinks.
> Since this latest board only has two cores, it will run slower and it will need
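For reference, the check described above boils down to comparing device and
inode numbers; a minimal POSIX sketch that mirrors the description in the
thread (not necessarily LLVM's exact code):

    #include <sys/stat.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
      struct stat pwd_st, dot_st;
      const char *pwd = std::getenv("PWD");

      // stat() resolves symlinks, so even if $PWD names a symlink to the
      // current directory, both calls should report the same inode.
      if (pwd && stat(pwd, &pwd_st) == 0 && stat(".", &dot_st) == 0 &&
          pwd_st.st_dev == dot_st.st_dev && pwd_st.st_ino == dot_st.st_ino) {
        std::printf("$PWD is trustworthy: %s\n", pwd);
      } else {
        std::printf("$PWD and '.' differ; falling back to getcwd()\n");
      }
      return 0;
    }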
2017 Jun 12
2
Enable vectorizer-maximize-bandwidth by default?
Guys, just to clarify: with the current fix in SLM, there is no need to wait for the other issues to be fixed (they are minor).
So you can move on with your patch.
From: Agabaria, Mohammed
Sent: Wednesday, June 07, 2017 15:24
To: Zaks, Ayal <ayal.zaks at intel.com>; Chandler Carruth <chandlerc at gmail.com>; Flamedoge <code.kchoi at gmail.com>; Dehao Chen <dehao at google.com>
2017 Feb 10
4
(RFC) Adjusting default loop fully unroll threshold
On 02/10/2017 05:21 PM, Dehao Chen wrote:
> Thanks everyone for the comments.
>
> Do we have a decision here?
You're good to go as far as I'm concerned.
-Hal
>
> Dehao
>
> On Tue, Feb 7, 2017 at 10:24 PM, Hal Finkel <hfinkel at anl.gov
> <mailto:hfinkel at anl.gov>> wrote:
>
>
> On 02/07/2017 05:29 PM, Sanjay Patel via llvm-dev wrote:
2017 Feb 08
2
(RFC) Adjusting default loop fully unroll threshold
On 02/07/2017 05:29 PM, Sanjay Patel via llvm-dev wrote:
> Sorry if I missed it, but what machine/CPU are you using to collect
> the perf numbers?
>
> I am concerned that what may be a win on a CPU that keeps a couple of
> hundred instructions in-flight and has many MB of caches will not hold
> for a small core.
In my experience, unrolling tends to help weaker cores even more
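To make the trade-off concrete, a toy sketch of what full unrolling does (the
function here is invented for illustration; the threshold under discussion
caps how large the unrolled body may grow):

    // A loop with a compile-time trip count...
    int sum4(const int *a) {
      int s = 0;
      for (int i = 0; i < 4; ++i)
        s += a[i];
      return s;
    }

    // ...can be fully unrolled into straight-line code: no branch, no
    // induction variable, but more instructions. Bigger unrolled bodies
    // cost code size, which matters more on small cores with small caches.
    int sum4_unrolled(const int *a) {
      return a[0] + a[1] + a[2] + a[3];
    }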
2017 Feb 13
5
(RFC) Adjusting default loop fully unroll threshold
FWIW, I'm good with the updated data, but I'd really like at least someone
from Apple and someone from ARM to chime in here... CC-ing random people in
the hope it helps...
On Mon, Feb 13, 2017 at 8:30 AM Dehao Chen via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Thanks for the comment. The performance experiments were performed on
> Intel Sandybridge. Updated this info to
2017 Feb 07
2
(RFC) Adjusting default loop fully unroll threshold
Ping... with the updated code size impact data, any more comments? Any more
data that would be interesting to collect?
Thanks,
Dehao
On Thu, Feb 2, 2017 at 2:07 PM, Dehao Chen <dehao at google.com> wrote:
> Here is the code size impact for clang, chrome and 24 google internal
> benchmarks (names omitted; 14, 15 and 16 are encoding/decoding benchmarks
> similar to h264). There are 2
2017 Feb 02
2
(RFC) Adjusting default loop fully unroll threshold
> On Feb 1, 2017, at 4:57 PM, Xinliang David Li via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> clang, chrome, and some internal large apps are good candidates for size metrics.
I'd also add the standard LLVM testsuite just because it's the suite everyone in the community can use.
Michael
>
> David
>
> On Wed, Feb 1, 2017 at 4:47 PM, Chandler Carruth via
2017 Feb 02
2
(RFC) Adjusting default loop fully unroll threshold
I had suggested having size metrics from somewhat larger applications such
as Chrome, Webkit, or Firefox; clang itself; and maybe some of our internal
binaries with rough size brackets?
On Wed, Feb 1, 2017 at 4:33 PM Dehao Chen <dehao at google.com> wrote:
> With the new data points, any comments on whether this can justify setting
> the full unroll threshold to 300 (or any other
2016 Nov 21
4
(RFC) Encoding code duplication factor in discriminator
In many cases, the line-table fussing to improve autoFDO/sample-PGO would also likely help the debugging experience for optimized code, certainly in cases where line attribution is inherently ambiguous. In those cases, I have no problem with Just Doing It.
Something likely to pad the line table to benefit profiling without similarly benefiting debugging… that's probably worth inventing a
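For context, the idea under discussion is to pack a per-copy duplication
factor into the discriminator field of a debug location, so sample profiles
can scale counts from unrolled or vectorized copies back to source-level
iterations. A rough sketch; the bit layout here is invented, not the RFC's
actual encoding:

    #include <cstdint>

    // Hypothetical layout: low bits keep the base discriminator, the
    // remaining bits carry the duplication factor.
    constexpr uint32_t kBaseBits = 12;
    constexpr uint32_t kBaseMask = (1u << kBaseBits) - 1;

    uint32_t encode(uint32_t base, uint32_t dupFactor) {
      return (dupFactor << kBaseBits) | (base & kBaseMask);
    }
    uint32_t baseDiscriminator(uint32_t d) { return d & kBaseMask; }
    uint32_t duplicationFactor(uint32_t d) { return d >> kBaseBits; }

    // E.g. a body vectorized by 4 and unrolled by 2 could carry a factor
    // of 8, so one sampled hit counts as 8 source-level iterations.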
2016 Nov 04
2
(RFC) Encoding code duplication factor in discriminator
Discussed with Hal, Adrian and Paul offline at the llvm dev meeting today.
* Trip count is not enough for vectorization: there is a runtime check that
might evaluate to false, which can be reflected in the profile and which we
may want to preserve.
* Simply recording these context profiles may cause problems for
iterative sample-pgo, i.e. when you find that a loop's vectorized version is not
executed (due to runtime
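To illustrate the first point, a hand-written sketch of the shape of code the
vectorizer emits (invented, not actual vectorizer output): even with a known
trip count, a runtime guard decides whether the vector body executes, and a
profile can record how often the guard fails.

    #include <cstdint>

    void saxpy(float *y, const float *x, float a, int n) {
      // Runtime overlap check: if x and y may alias, take the scalar path.
      uintptr_t xa = reinterpret_cast<uintptr_t>(x);
      uintptr_t ya = reinterpret_cast<uintptr_t>(y);
      bool noAlias =
          xa + n * sizeof(float) <= ya || ya + n * sizeof(float) <= xa;

      if (noAlias) {
        for (int i = 0; i < n; ++i) // "vector body" (conceptually SIMD)
          y[i] = a * x[i] + y[i];
      } else {
        for (int i = 0; i < n; ++i) // scalar fallback
          y[i] = a * x[i] + y[i];
      }
    }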
2016 Oct 27
0
(RFC) Encoding code duplication factor in discriminator
The large percentages are from those tiny benchmarks. If you look at
omnetpp (0.52%) and xalanc (1.46%), the increase is small. To get a better
average increase, you can sum up the total debug_line size before and after
and compute the percentage accordingly.
David
On Thu, Oct 27, 2016 at 1:11 PM, Dehao Chen <dehao at google.com> wrote:
> The impact to debug_line is actually not small. I only
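A tiny sketch of the aggregation suggested above, with made-up numbers: one
percentage computed from total sizes, so tiny benchmarks with large relative
growth don't dominate the average.

    #include <cstdio>

    int main() {
      // {debug_line size before, size after} in bytes; values are invented.
      long sizes[][2] = {{1000, 1236}, {80000, 80416}, {500000, 507300}};

      long before = 0, after = 0;
      for (auto &s : sizes) {
        before += s[0];
        after += s[1];
      }
      // One aggregate percentage, weighted by size, instead of a mean of
      // per-benchmark percentages.
      std::printf("aggregate increase: %.2f%%\n",
                  100.0 * (after - before) / before);
      return 0;
    }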
2016 Nov 02
2
(RFC) Encoding code duplication factor in discriminator
----- Original Message -----
> From: "Dehao Chen" <dehao at google.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: "llvm-dev" <llvm-dev at lists.llvm.org>, "Xinliang David Li"
> <davidxl at google.com>
> Sent: Tuesday, November 1, 2016 8:24:30 PM
> Subject: Re: [llvm-dev] (RFC) Encoding code duplication factor in
2016 Nov 01
2
(RFC) Encoding code duplication factor in discriminator
----- Original Message -----
> From: "Hal Finkel via llvm-dev" <llvm-dev at lists.llvm.org>
> To: "Dehao Chen" <dehao at google.com>
> Cc: "llvm-dev" <llvm-dev at lists.llvm.org>, "Xinliang David Li"
> <davidxl at google.com>
> Sent: Tuesday, November 1, 2016 4:26:17 PM
> Subject: Re: [llvm-dev] (RFC) Encoding code
2016 Nov 01
2
(RFC) Encoding code duplication factor in discriminator
damn... my English is not readable at all when I try to write fast...
trying to make some clarifications below, hopefully that makes it more
readable...
On Tue, Nov 1, 2016 at 2:07 PM, Dehao Chen <dehao at google.com> wrote:
> Oops... pressed the wrong button and sent out early...
>
> On Tue, Nov 1, 2016 at 2:01 PM, Dehao Chen <dehao at google.com> wrote:
>
>> If
2016 Oct 27
2
(RFC) Encoding code duplication factor in discriminator
The impact to debug_line is actually not small. I have only implemented part
1 (encoding the duplication factor) for loop unrolling and loop vectorization.
The debug_line size overhead for the "-O2 -g1" binaries of the SPEC CPU C/C++
benchmarks:
433.milc 23.59%
444.namd 6.25%
447.dealII 8.43%
450.soplex 2.41%
453.povray 5.40%
470.lbm 0.00%
482.sphinx3 7.10%
400.perlbench 2.77%
401.bzip2 9.62%
403.gcc