Displaying 20 results from an estimated 10000 matches similar to: "Clang -O0 performs optimizations that undermine dynamic bug-finding tools"
2016 Jan 30
4
Sulong
Hi everyone,
We started a new open-source project, Sulong:
https://github.com/graalvm/sulong.
Sulong is an LLVM IR interpreter with JIT compilation that runs on top of
the JVM.
By using the Truffle framework, it implements speculative optimizations
such as inlining of function pointer calls through AST rewriting.
One area of our research is to provide alternative ways of executing LLVM
bitcode that
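Purely as an illustration (not code from the Sulong repository), the kind of indirect call that such speculative inlining targets looks like this in C++:

// Illustrative only: an indirect call through a function pointer.
typedef int (*binop)(int, int);

static int add(int a, int b) { return a + b; }

int apply(binop op, int a, int b) {
  // A speculative optimizer that has only ever observed op == add can
  // rewrite this call site to a direct (and inlinable) call to add,
  // keeping a guard that deoptimizes if op later changes.
  return op(a, b);
}

int main() { return apply(add, 2, 3); }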
2019 Nov 13
2
Difference between clang -O1 -Xclang -disable-O0-optnone and clang -O0 -Xclang -disable-O0-optnone in LLVM 9
Hello,
I'm trying to test individual O3 optimizations / replicate O3 behavior on
IR. I took unoptimized IR (O0) and used disable-O0-optnone via clang -O0
-Xclang -disable-O0-optnone. I read somewhere about clang -O1 -Xclang
-disable-O0-optnone, so I also tested on that initial IR.
I have observed that, when applying individual optimizations, the performance (i.e.
time) is better when the base/initial
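As a sketch of the two starting points being compared, assuming a small kernel like the hypothetical square_sum below is the test input (the file names and the opt pass selection in the comments are likewise illustrative, not from the original message):

// Hypothetical test input, illustrative only.
//   clang -O0 -Xclang -disable-O0-optnone -emit-llvm -S t.cpp -o t.O0.ll
//   clang -O1 -Xclang -disable-O0-optnone -emit-llvm -S t.cpp -o t.O1.ll
// t.O0.ll is genuinely unoptimized IR (just without the optnone
// attribute), while t.O1.ll has already been through clang's -O1
// pipeline, so individual passes run afterwards, e.g.
//   opt -S -mem2reg -instcombine t.O0.ll -o t.opt.ll
// start from quite different IR in the two cases.
int square_sum(const int *a, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    s += a[i] * a[i];
  return s;
}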
2007 May 14
2
Uploading entire directories
Did google for a while, but didn't find a simple, clean solution for uploading
whole directories with Rails. Is there any?
Rigger
--
: : i'm a climber : :
2015 Dec 03
2
bitcode versioning
Is there going to be a formal interface/API for this version-block information? I have had to "extend" the IR and bitcode representations several times to address absences/limitations in the handling of various vector types, in particular FP16 vector types; and it would be really useful if I had a "standard" way of doing this, and identifying that my dialect was different.
2015 Feb 09
2
[LLVMdev] Is "clang -O1" the same as "clang -O0 + opt -O1"?
Hello,
I encountered a bug that showed up when running "clang -O1". However, the
bug cannot be reproduced by using "clang -O0 + opt -O1". It seems that
"clang -O1" is not the same as "clang -O0 + opt -O1". Because the
generated LLVM IR files are large, I would like to use bugpoint with "clang -O1"
directly instead of using "clang -O0"
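For context, one plausible reason the two differ is that clang's frontend already emits different IR at -O0 and -O1 before any opt pass runs; a hypothetical reduction such as the one below (illustrative only, with the usual IR-dumping flags in the comments) can be used to see this:

// Hypothetical reduction, illustrative only. Dumping the IR that clang
// itself produces at each level, e.g.
//   clang -O0 -emit-llvm -S t.cpp -o t.O0.ll
//   clang -O1 -emit-llvm -S t.cpp -o t.O1.ll
// and diffing the two files shows frontend-level differences (function
// attributes, metadata) before any opt pass runs, which is why
// "clang -O0" followed by "opt -O1" need not reproduce a bug that only
// appears with "clang -O1".
int saturate(int x) {
  return x > 255 ? 255 : (x < 0 ? 0 : x);
}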
2009 Nov 22
3
[LLVMdev] -O0 compile time speed (was: Go)
On Nov 21, 2009, at 1:00 PM, Arnt Gulbrandsen wrote:
> Chris Lattner writes:
>> I'm still really interested in making Clang (and thus LLVM) faster at -O0 (while still preserving debuggability of course).
>
> Why?
I want the compiler to build things quickly in all modes, but -O0 in particular is important for a fast compile/debug/edit cycle. Are you asking why fast compilers
2012 Nov 06
1
[LLVMdev] which Register allocator to use with llc -O0
Hi,
We were using the "linearscan" register allocator with llc's -O0 option. As per the LLVM blog, this has been replaced with greedy register allocation.
http://blog.llvm.org/2011/09/greedy-register-allocation-in-llvm-30.html
But I think these register allocators (i.e. 'greedy' and 'basic') are blocked when used with llc's -O0 option. Only the 'fast' register allocator option can be used
2017 Sep 18
2
Clang/LLVM 5.0 optnone attribute with -O0
Hi,
We have a research LLVM-based domain-specific code generator that we
want to upgrade from LLVM 4.0 to 5.0. The code generator is written as
an out-of-tree loadable module for opt.
Up to Clang 4.0 we were compiling the front-end code (annotated C++)
using -O0. The generated bitcode was further processed using opt with
our module loaded. In Clang 5.0 we see that using -O0 adds the optnone
2009 Nov 22
0
[LLVMdev] -O0 compile time speed (was: Go)
Chris Lattner writes:
> On Nov 21, 2009, at 1:00 PM, Arnt Gulbrandsen wrote:
>
>> Chris Lattner writes:
>>> I'm still really interested in making Clang (and thus LLVM) faster
>>> at -O0 (while still preserving debuggability of course).
>>
>> Why?
>
> I want the compiler to build things quickly in all modes, but -O0 in
> particular is
2015 Dec 11
2
bitcode versioning
Hi Mehdi and my apologies for the delay in responding - the day job got in the way :-)
Our target is still out-of-tree so my reasons for extending the IR would be eliminated if we were a proper part of LLVM, which I would like to do when the time is right for us.
My extensions are quite simple really, and I expect that they will be wanted in the TRUNK sometime anyway.
At the moment I only have
2016 Dec 23
0
struct bitfield regression between 3.6 and 3.9 (using -O0)
On 12/22/2016 5:45 PM, Phil Tomson wrote:
> Given that this is compiled with -O0, would there be a way to skip the
> Optimization of the Type-legalized selection DAG? It's fine until it
> optimizes the Type-legalized selection DAG into the Optimized
> Type-legalized selection DAG.
Umm, I wouldn't really suggest shoving the problem under the rug... I
mean, turning off the
2017 May 23
2
[GlobalISel][AArch64] Toward flipping the switch for O0: Please give it a try!
Great!
I thought I had to look at our pipeline at O0 to make sure optimized regalloc was supported (https://bugs.llvm.org/show_bug.cgi?id=33022 in mind). Glad I was wrong, it saves me some time.
> On May 22, 2017, at 12:51 AM, Kristof Beyls <kristof.beyls at arm.com> wrote:
>
>
>> On 22 May 2017, at 09:09, Diana Picus
2017 Sep 18
1
Clang/LLVM 5.0 optnone attribute with -O0
You can also add the -Xclang -disable-O0-optnone flag to your command line. This will disable the implicit optnone when compiling with -O0.
Cheers,
Michael
On Mon, Sep 18, 2017 at 7:27 AM +0200, "Craig Topper via llvm-dev" <llvm-dev at lists.llvm.org> wrote:
To prevent optnone from being added you can replace -O0 with "-O1 -Xclang
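Another possibility, sketched below on the assumption that the out-of-tree module uses the legacy pass manager (the StripOptNone class and the "strip-optnone" pass name are hypothetical), is to drop the attribute inside the module itself before the domain-specific passes run:

#include "llvm/IR/Function.h"
#include "llvm/Pass.h"

using namespace llvm;

namespace {
// Illustrative sketch only: strip the optnone attribute that clang 5.0
// adds at -O0, instead of (or in addition to) passing
// -Xclang -disable-O0-optnone on the command line.
struct StripOptNone : FunctionPass {
  static char ID;
  StripOptNone() : FunctionPass(ID) {}
  bool runOnFunction(Function &F) override {
    if (!F.hasFnAttribute(Attribute::OptimizeNone))
      return false;
    F.removeFnAttr(Attribute::OptimizeNone);
    return true;  // the function was modified
  }
};
}
char StripOptNone::ID = 0;
static RegisterPass<StripOptNone> X("strip-optnone",
                                    "Strip optnone added at -O0 (sketch)");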
2017 Dec 30
3
Issues with omp simd
I even tried the following:
int main(int argc, char **argv)
{
  const int size = 1000000;
  float a[size], b[size], c[size];
#pragma omp simd
  for (int i = 0; i < size; ++i)
  {
    a[i] = 2; b[i] = 3; c[i] = 4;
    c[i] = a[i] + b[i];
  }
  return 0;
}
but the output with and without OpenMP simd is the same. Why is that so?
On Sun, Dec 31, 2017 at 12:01
2009 Nov 21
2
[LLVMdev] -O0 compile time speed (was: Go)
On Nov 19, 2009, at 1:04 PM, Bob Wilson wrote:
>> I've tested it and LLVM is indeed 2x slower to compile, although it
>> generates
>> code that is 2x faster to run...
>>
>>> Compared to a compiler in the same category as PCC, whose pinnacle of
>>> optimization is doing register allocation? I'm not surprised at all.
>>
>> What else
2017 Dec 30
2
Issues with omp simd
Hello,
I am trying to optimize an omp simd loop as follows:
int main(int argc, char **argv)
{
  const int size = 1000000;
  float a[size], b[size], c[size];
#pragma omp simd
  for (int i = 0; i < size; ++i)
  {
    c[i] = a[i] + b[i];
  }
  return 0;
}
I run it using the following command:
g++ -O0 --std=c++14 -fopenmp-simd lab.cpp -Iinclude -S -o lab.s
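For context, at -O0 the loop vectorizer is normally not run at all, so the pragma is not expected to change the generated assembly. Below is a sketch (not the original poster's code; the flags are the common spellings and should be treated as an assumption) of a variant where it can take effect, with the arrays made static purely to avoid a roughly 12 MB stack frame:

// Sketch only, illustrative.
// Compile at an optimization level where the vectorizer runs, e.g.
//   g++ -O2 -fopenmp-simd --std=c++14 lab.cpp -S -o lab.s
// and compare the .s output with and without -fopenmp-simd.
int main()
{
  const int size = 1000000;
  static float a[size], b[size], c[size];  // static: ~12 MB would overflow the stack
  for (int i = 0; i < size; ++i) { a[i] = 2; b[i] = 3; }
#pragma omp simd
  for (int i = 0; i < size; ++i)
    c[i] = a[i] + b[i];
  return (int)c[0];  // keep the result observable
}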
2017 Mar 30
3
[GlobalISel][AArch64] Toward flipping the switch for O0: Please give it a try!
Hi Renato,
If Kristof is busy I can make runs on AArch64 Linux (Cortex-A53 and Cortex-A57).
Thanks,
Evgeny Astigeevich
Senior Compiler Engineer
Compilation Tools
ARM
> -----Original Message-----
> From: llvm-dev [mailto:llvm-dev-bounces at lists.llvm.org] On Behalf Of
> Renato Golin via llvm-dev
> Sent: Thursday, March 30, 2017 9:54 AM
> To: Quentin Colombet
> Cc: llvm-dev;
2017 May 24
2
[GlobalISel][AArch64] Toward flipping the switch for O0: Please give it a try!
Hi Kristof,
Thanks for the measurements.
> On May 24, 2017, at 6:00 AM, Kristof Beyls <kristof.beyls at arm.com> wrote:
>
>>
>> On 23 May 2017, at 21:48, Quentin Colombet <qcolombet at apple.com> wrote:
>>
>> Great!
>> I thought I had to look at our pipeline at O0 to make sure optimized regalloc was
2016 May 13
3
[RFC] Disabling DAG combines in /O0
Hi all,
The DAGCombiner pass actually runs even if the optimization level is set to None. This can result in incorrect debug information or an unexpected stepping/debugging experience. Not to mention that having a good stepping/debugging experience is the major reason to compile at /O0.
I recently suggested a patch to disable one specific DAG combine at /O0 that broke stepping on a particular case
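A minimal sketch of the gating pattern such a patch would use (illustrative only, not the actual change; combineSomething is a hypothetical helper, and the enum spelling is the one in use around that release):

#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/Target/TargetMachine.h"

using namespace llvm;

// Illustrative sketch: skip a target DAG combine entirely when
// compiling at -O0 so the DAG (and hence debug locations and the
// stepping experience) is left untouched.
static SDValue combineSomething(SDNode *N, SelectionDAG &DAG) {
  if (DAG.getTarget().getOptLevel() == CodeGenOpt::None)
    return SDValue();  // no combine at -O0
  // ... the normal combine logic would go here ...
  return SDValue();
}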
2009 Nov 22
0
[LLVMdev] -O0 compile time speed
>>
>> Sort of. Why you think more speed than LLVM currently provides is a
>> significant benefit.
>
> My compiler supports LLVM as a backend. The language heavily relies on
> compile-time environment-dependent code generation, so it needs the
> JIT. One of the things that is holding back LLVM on production systems
> is that it needs minutes to JIT a medium-sized