Displaying 20 results from an estimated 1200 matches similar to: "[LLVMdev] Why code doesn't speed up much with optimization level increase?"
2010 Aug 04
1
[LLVMdev] JITing code with indirect branch in LLVM 2.7
I am trying to JIT some code containing an indirect branch (and the
corresponding store i8* blockaddress(@label)). I am using LLVM 2.7
code base. I build the ExecutionEngine using EngineBuilder, and call
engine->getPointerToFunction(func). When I use
setOptLevel(llvm::CodeGenOpt::None), the JITing fails with the
following message:
JIT.h:131: virtual void*
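For context, the setup being described looks roughly like the following. This is a minimal sketch assuming the LLVM 2.7-era legacy-JIT API named in the message; TheModule and Func are placeholders, not names from the original post.

// A minimal sketch (not code from the message) of the 2.7-era legacy-JIT
// setup being described; TheModule and Func are placeholders.
#include "llvm/Module.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"          // pulls in the legacy JIT
#include "llvm/Target/TargetSelect.h"          // 2.x header location
#include <string>

void *jitFunction(llvm::Module *TheModule, llvm::Function *Func) {
  llvm::InitializeNativeTarget();
  std::string Err;
  llvm::ExecutionEngine *Engine =
      llvm::EngineBuilder(TheModule)            // the 2.x builder takes a Module*
          .setErrorStr(&Err)
          .setEngineKind(llvm::EngineKind::JIT)
          .setOptLevel(llvm::CodeGenOpt::None)  // the level reported to fail
          .create();
  if (!Engine)
    return 0;
  return Engine->getPointerToFunction(Func);    // JIT-compile Func, indirectbr included
}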
2010 Feb 11
2
[LLVMdev] LLVM memory usage?
Hi,
I'm seeing rather high memory usage from LLVM and I'd like to track down
what I'm doing to cause it. My application is a simple web application
server that compiles web pages with embedded script to bitcode and compiles
them with the JIT on demand. I've taken tools/lli.cpp as a starting point
and extended it to load additional modules.
However, if I load successive pages and
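The snippet is cut off above. For context, loading one more page's bitcode into an already-running engine in the 2.x-era API looks roughly like the sketch below; "page.bc" is a placeholder file name, error handling is elided, and it assumes the bitcode parser does not take ownership of the buffer.

// Rough sketch (not from the thread) of adding a page module to a running
// 2.x-era ExecutionEngine on demand.
#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
#include "llvm/Bitcode/ReaderWriter.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include <string>

llvm::Module *loadPage(llvm::ExecutionEngine *Engine, llvm::LLVMContext &Ctx) {
  std::string Err;
  llvm::MemoryBuffer *Buf = llvm::MemoryBuffer::getFile("page.bc", &Err);
  if (!Buf)
    return 0;
  llvm::Module *M = llvm::ParseBitcodeFile(Buf, Ctx, &Err);
  delete Buf;                 // assumes the parser copied what it needs
  if (!M)
    return 0;
  Engine->addModule(M);       // the engine now owns M and can JIT its functions
  return M;
}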
2010 Feb 12
0
[LLVMdev] LLVM memory usage?
On Thu, Feb 11, 2010 at 6:53 PM, James Williams <junk at giantblob.com> wrote:
> Hi,
>
> I'm seeing rather high memory usage from LLVM and I'd like to track down
> what I'm doing to cause it. My application is a simple web application
> server that compiles web pages with embedded script to bitcode and compiles
> them with the JIT on demand. I've taken
2010 Mar 04
4
[LLVMdev] Last chance to get anything into llvm-c and ocaml bindings
I've pretty much finished exposing all I wanted to llvm-c and the
ocaml bindings for the soon to be released 2.7. Does anyone need any
other functions exposed before the code freeze on the 7th?
2016 Jun 23
2
AVX512 instruction generated when JIT compiling for an avx2 architecture
With LLVM 3.8 the JIT compiler engine generates an AVX512 instruction
although I target an 'avx2' CPU (Intel Core i7).
I just downloaded the most recent 3.8 and still it happens.
It happens with this input module:
target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
define void @module_cFFEMJ(i64 %lo, i64 %hi, i64 %myId, i1 %ordered, i64
%start, i32* noalias align 32
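The module dump is cut off above. One way to keep MCJIT from selecting AVX-512 is to state the CPU and feature set explicitly on the EngineBuilder instead of letting it fall back to the detected host CPU name; a hedged sketch against the 3.8-era API, with illustrative CPU and feature strings:

// Sketch (3.8-era MCJIT): constrain the JIT target explicitly instead of
// relying on the detected host CPU name; strings are illustrative only.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/TargetSelect.h"
#include <memory>
#include <string>
#include <vector>

llvm::ExecutionEngine *createAvx2Engine(std::unique_ptr<llvm::Module> Owner,
                                        std::string &ErrStr) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();
  std::vector<std::string> Attrs;
  Attrs.push_back("+avx2");
  Attrs.push_back("-avx512f");           // explicitly subtract AVX-512
  return llvm::EngineBuilder(std::move(Owner))
      .setErrorStr(&ErrStr)
      .setMCPU("core-avx2")              // a Haswell-class CPU, not "skylake"
      .setMAttrs(Attrs)
      .create();
}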
2010 Mar 06
1
[LLVMdev] Last chance to get anything into llvm-c and ocaml bindings
On Fri, Mar 5, 2010 at 5:53 AM, George Giorgidze <giorgidze at gmail.com> wrote:
>
> Hi Erick,
>
> Can you make the following functions available in llvm-c.
>
> createStandardFunctionPasses
> createStandardModulePasses
> createStandardLTOPasses
>
> Thanks in advance, George
This is a little tricky, so I need some advice from the community.
First off, I'm
2010 Mar 05
0
[LLVMdev] Last chance to get anything into llvm-c and ocaml bindings
Erick Tryzelaar <erick.tryzelaar <at> gmail.com> writes:
>
> I've pretty much finished exposing all I wanted to llvm-c and the
> ocaml bindings for the soon to be released 2.7. Does anyone need any
> other functions exposed before the code freeze on the 7th?
>
Hi Erick,
Can you make the following functions available in llvm-c.
createStandardFunctionPasses
2010 Nov 03
4
[LLVMdev] Fw: Forcing the Interpreter segfaults
[I assume you meant to send this to the list as well, not just me.]
Begin forwarded message:
Date: Wed, 3 Nov 2010 14:43:54 +0000
From: Salomon Brys <salomon.brys at gmail.com>
To: Török Edwin <edwintorok at gmail.com>
Subject: Re: [LLVMdev] Forcing the Interpreter segfaults
I have built LLVM in debug mode. Here is the information about the
segfault: memcpy() at 0x7ffff6f6581e
2016 Jun 23
2
AVX512 instruction generated when JIT compiling for an avx2 architecture
On 06/23/2016 12:56 PM, Craig Topper wrote:
> Can you check what value "getHostCPUName" returned?
getHostCPUName() = skylake
>
> On Thu, Jun 23, 2016 at 9:53 AM, Frank Winter via llvm-dev
> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>
> With LLVM 3.8 the JIT compiler engine generates an AVX512
> instruction although I
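For reference, the host-detection values the JIT falls back on can be dumped directly; a small sketch against the 3.8-era Support API (not code from the thread):

// Sketch: print what LLVM detects for the host, which is what MCJIT uses
// when no CPU or feature strings are set on the EngineBuilder.
#include "llvm/ADT/StringMap.h"
#include "llvm/Support/Host.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::outs() << "host cpu: " << llvm::sys::getHostCPUName() << "\n";
  llvm::StringMap<bool> Features;
  if (llvm::sys::getHostCPUFeatures(Features))
    for (const auto &F : Features)
      llvm::outs() << (F.getValue() ? "+" : "-") << F.getKey() << "\n";
  return 0;
}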
2010 Nov 03
0
[LLVMdev] Fw: Forcing the Interpreter segfaults
Hi Salomon, please don't forget to reply to the list too (I've CC'd the list).
> I don't think my code is doing anything wrong...
No, it looks fine to me, and the interpreter certainly supports this. That
suggests that the value of %str is not being passed to the function correctly.
If it is getting the wrong pointer value, that would explain why it barfs.
Ciao,
Duncan.
2016 Feb 05
4
MCJit Runtine Performance
Hi Morten,
Something else just occurred to me: can you share your EngineBuilder configuration lines? (http://llvm.org/docs/doxygen/html/classllvm_1_1EngineBuilder.html)
In particular - are you explicitly setting the optimization level? The old JIT may have had a different default.
- Lang.
Sent from my iPad
> On Feb 4, 2016, at 10:54 PM, Jim Grosbach via llvm-dev <llvm-dev at
2012 Oct 12
3
[LLVMdev] Dynamically loading native code generated from LLVM IR
On 12 Oct 2012, at 20:00, Jim Grosbach wrote:
>
> On Oct 12, 2012, at 7:07 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:
>
>> Dear Tim,
>>
>>>
>>> The JIT sounds like it does almost exactly what you want. LLVM's JIT
>>> isn't a classical lightweight, dynamic one like you'd see for
>>> JavaScript or Java.
2013 Jul 18
2
[LLVMdev] LLVM 3.3 JIT code speed
Hi,
The LLVM IR emitted by our DSL (already optimized with -O3-style IR ==> IR passes) runs slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason?
I tried to play with TargetOptions without any success…
Here is the kind of code we use to allocate the JIT:
EngineBuilder builder(fResult->fModule);
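The allocation code is cut off above. For comparison, a rough 3.3-era allocation that makes the knobs discussed in this thread explicit (optimization level, TargetOptions, target CPU) might look like the sketch below; the names and chosen values are placeholders rather than the poster's actual configuration.

// Rough 3.3-era sketch with the knobs this thread discusses made explicit.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/Support/Host.h"
#include "llvm/Target/TargetOptions.h"
#include <string>

llvm::ExecutionEngine *allocateJIT(llvm::Module *module, std::string &err) {
  llvm::TargetOptions opts;
  opts.UnsafeFPMath = false;                          // example knob only

  llvm::EngineBuilder builder(module);
  builder.setErrorStr(&err);
  builder.setEngineKind(llvm::EngineKind::JIT);
  builder.setOptLevel(llvm::CodeGenOpt::Aggressive);  // make the level explicit
  builder.setTargetOptions(opts);
  builder.setMCPU(llvm::sys::getHostCPUName());       // codegen for the actual host
  return builder.create();
}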
2016 Sep 14
4
setDataLayout segfault
I get a segfault with this code when setting the data layout:
int main(int argc, char** argv)
{
    llvm::InitializeNativeTarget();
    llvm::LLVMContext TheContext;
    unique_ptr<Module> Mod(new Module("A",TheContext));
    llvm::EngineBuilder engineBuilder(std::move(Mod));
    std::string mcjit_error;
    engineBuilder.setMCPU(llvm::sys::getHostCPUName());
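The snippet is cut off above. One likely pitfall here (a guess, since the rest of the code is truncated) is touching the Module through the unique_ptr after it has been moved into the EngineBuilder. A hedged sketch of one safe ordering, assuming the 3.8/3.9-era MCJIT API, with names mirroring the snippet above: keep a raw pointer, select the target, then set the layout.

#include "llvm/ADT/STLExtras.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Host.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Target/TargetMachine.h"
#include <string>

int main() {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();
  llvm::LLVMContext TheContext;
  auto Mod = llvm::make_unique<llvm::Module>("A", TheContext);
  llvm::Module *ModPtr = Mod.get();              // Mod is null after the move below

  std::string mcjit_error;
  llvm::EngineBuilder engineBuilder(std::move(Mod));
  engineBuilder.setErrorStr(&mcjit_error);
  engineBuilder.setMCPU(llvm::sys::getHostCPUName());

  llvm::TargetMachine *TM = engineBuilder.selectTarget();
  ModPtr->setDataLayout(TM->createDataLayout()); // not Mod->..., which would crash
  llvm::ExecutionEngine *EE = engineBuilder.create(TM);
  delete EE;
  return 0;
}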
2010 Nov 20
3
[LLVMdev] Poor floating point optimizations?
On Nov 20, 2010, at 2:41 PM, Sdadsda Sdasdaas wrote:
> And also the resulting assembly code is very poor:
>
> 00460013 movss xmm0,dword ptr [esp+8]
> 00460019 movaps xmm1,xmm0
> 0046001C addss xmm1,xmm1
> 00460020 pxor xmm2,xmm2
> 00460024 addss xmm2,xmm1
> 00460028 addss xmm2,xmm0
> 0046002C movss dword ptr
2011 Mar 22
2
[LLVMdev] LLVM optimization passes crash when running on second thread
Hello,
I am trying to modify my LLVM-based compiler to perform an initial, no-optimization compilation synchronously on startup and then perform an asynchronous, optimized recompilation in the background, and I am getting a crash in one of the optimization passes.
- I am using the official release of LLVM 2.8
- I have compiled LLVM with threading enabled; I am running llvm::llvm_start_multithreaded() on
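The message is cut off above. The constraint usually involved here, sketched against the 2.8-era API the poster names, is that multithreaded mode must be switched on once up front and that each compiling thread should own its LLVMContext and Module; sharing a context or module across threads without locking is a common way to crash the passes. A rough sketch:

// Sketch of the 2.8-era setup described in the message.
#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
#include "llvm/Support/Threading.h"

// Call once from the main thread before spawning the background compile
// (2.8-era API; later releases removed it because threading is always on).
void initLLVMThreading() {
  llvm::llvm_start_multithreaded();
}

// Run on the background thread: a private context and module, so the
// optimizing passes never touch IR owned by the foreground compile.
void backgroundRecompile() {
  llvm::LLVMContext Ctx;
  llvm::Module M("background", Ctx);
  // ... build IR into M and run the optimization passes here ...
}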
2011 Nov 03
1
[LLVMdev] Whither /Support/StandardPasses.h?
> Date: Wed, 26 Oct 2011 11:52:50 -0700
> From: Tanya Lattner <lattner at apple.com>
> Subject: [LLVMdev] Release Notes: Volunteers needed
> We need some volunteers to help with the 3.0 release notes. Traditionally, Chris has been the one to go
> through all the commits (6 months worth!) and come up with a concrete list of things that have changed in 3.0.
> Ideally,
2010 Nov 21
0
[LLVMdev] Poor floating point optimizations?
Thanks for replying so fast. This UnsafeFPMath trick in fact solves the "pxor adds"
case, but the resulting code is still not as good as I expected from LLVM.
For example, expressions like "1+x+1+x+1+x+1+x" (basically adding a lot of
constants and variables) are compiled to a long series of <add>s both in the IR
and the assembly code.
Both GCC and MSVC generate C1*x + C2 (mov +
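The comparison is cut off above. For reference, a hedged sketch of the 2.8-era knobs this exchange revolves around: the codegen-level unsafe-FP flag (presumably the "UnsafeFPMath trick" mentioned), plus the IR passes that would have to be allowed to reassociate the adds before C1*x + C2 can be formed; strict IEEE semantics forbid that reassociation, which is why the long chain of adds survives.

// Hedged sketch of the 2.8-era knobs around this discussion; the global
// flag relaxes FP only at code generation, while folding 1+x+1+x+... into
// C1*x + C2 also needs the IR-level passes to treat FP adds as associative.
#include "llvm/Target/TargetOptions.h"
#include "llvm/PassManager.h"
#include "llvm/Transforms/Scalar.h"

void relaxFPAndReassociate(llvm::Module &M) {
  llvm::UnsafeFPMath = true;                    // codegen-level flag (2.x era)

  llvm::PassManager PM;
  PM.add(llvm::createReassociatePass());        // reassociate expression trees
  PM.add(llvm::createInstructionCombiningPass());
  PM.run(M);
}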
2011 Mar 22
0
[LLVMdev] LLVM optimization passes crash when running on second thread
On Tue, Mar 22, 2011 at 11:51 AM, Peter Zion
<peter.zion at fabric-engine.com> wrote:
> Hello,
>
> I am trying to modify my LLVM-based compiler to perform an initial, no-optimization compilation synchronously on startup and then perform an asynchronous, optimized recompilation in the background, and I am getting a crash in one of the optimization passes.
>
> - I am using the official
2010 May 28
4
[LLVMdev] Combining Branch Statements - Missing Optimization Pass?
I have some LLVM IR after the optimization passes defined in createStandardModulePasses with the optimization level set to 3. It contains what appears to me to be an easily optimizable branch statement.
In particular, note in the code below that at the end of the "loop" BasicBlock that there is a conditional branch where in the false case, it branches to the label
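The IR itself is cut off above. The passes that normally fold a conditional branch into an already-known successor are SimplifyCFG and JumpThreading; a minimal sketch of running just those two over a module with the legacy pass manager of that era (2.7/2.8), separate from createStandardModulePasses:

// Sketch: rerun the two passes that usually clean up a conditional branch
// whose outcome or target is already known.
#include "llvm/PassManager.h"
#include "llvm/Transforms/Scalar.h"

void cleanupBranches(llvm::Module &M) {
  llvm::PassManager PM;
  PM.add(llvm::createCFGSimplificationPass());  // merges and folds trivial branches
  PM.add(llvm::createJumpThreadingPass());      // threads branches over known conditions
  PM.add(llvm::createCFGSimplificationPass());  // clean up again afterwards
  PM.run(M);
}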