search for: fresult

Displaying 6 results from an estimated 6 matches for "fresult".

2013 Jul 18
2
[LLVMdev] LLVM 3.3 JIT code speed
...ptimized with -O3 kind of IR ==> IR passes) runs slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason? I tried to play with TargetOptions without any success… Here is the kind of code we use to allocate the JIT: EngineBuilder builder(fResult->fModule); builder.setOptLevel(CodeGenOpt::Aggressive); builder.setEngineKind(EngineKind::JIT); builder.setUseMCJIT(true); builder.setCodeModel(CodeModel::JITDefault); builder.setMCPU(llvm::sys::getHostCPUName()); TargetOptions targetOptions; targetOptions.NoFram...
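The excerpt above is cut off mid-statement, so for reference here is a minimal, self-contained sketch of that LLVM 3.3-era MCJIT setup. It is an assumption-laden reconstruction, not the poster's actual code: createJIT is a hypothetical wrapper, the module corresponds to the poster's fResult->fModule, and NoFramePointerElim is only a guess at the field behind the truncated "targetOptions.NoFram...".

#include <string>

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Host.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Target/TargetOptions.h"

// Sketch of the MCJIT setup quoted above (LLVM 3.3-era API).
// createJIT is a hypothetical helper; the caller owns the returned engine.
llvm::ExecutionEngine* createJIT(llvm::Module* module, std::string& error) {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    llvm::EngineBuilder builder(module);               // e.g. fResult->fModule
    builder.setErrorStr(&error);
    builder.setOptLevel(llvm::CodeGenOpt::Aggressive);
    builder.setEngineKind(llvm::EngineKind::JIT);
    builder.setUseMCJIT(true);
    builder.setCodeModel(llvm::CodeModel::JITDefault);
    builder.setMCPU(llvm::sys::getHostCPUName());

    llvm::TargetOptions targetOptions;
    // The excerpt is truncated at "targetOptions.NoFram..."; assuming the
    // field being set is NoFramePointerElim.
    targetOptions.NoFramePointerElim = true;
    builder.setTargetOptions(targetOptions);

    return builder.create();                           // NULL on failure, see `error`
}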
2013 Jul 18
0
[LLVMdev] LLVM 3.3 JIT code speed
...IR ==> IR passes) runs slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason? > > I tried to play with TargetOptions without any success… > > Here is the kind of code we use to allocate the JIT: > > EngineBuilder builder(fResult->fModule); > builder.setOptLevel(CodeGenOpt::Aggressive); > builder.setEngineKind(EngineKind::JIT); > builder.setUseMCJIT(true); > builder.setCodeModel(CodeModel::JITDefault); > builder.setMCPU(llvm::sys::getHostCPUName()); > > TargetOptions targetOpt...
2013 Jul 18
2
[LLVMdev] LLVM 3.3 JIT code speed
...slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason? >> >> I tried to play with TargetOptions without any success… >> >> Here is the kind of code we use to allocate the JIT: >> >> EngineBuilder builder(fResult->fModule); >> builder.setOptLevel(CodeGenOpt::Aggressive); >> builder.setEngineKind(EngineKind::JIT); >> builder.setUseMCJIT(true); >> builder.setCodeModel(CodeModel::JITDefault); >> builder.setMCPU(llvm::sys::getHostCPUName()); >> >>...
2013 Jul 18
0
[LLVMdev] LLVM 3.3 JIT code speed
...slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason? >> >> I tried to play with TargetOptions without any success. >> >> Here is the kind of code we use to allocate the JIT: >> >> EngineBuilder builder(fResult->fModule); >> builder.setOptLevel(CodeGenOpt::Aggressive); >> builder.setEngineKind(EngineKind::JIT); >> builder.setUseMCJIT(true); >> builder.setCodeModel(CodeModel::JITDefault); >> builder.setMCPU(llvm::sys::getHostCPUName()); >> >>...
2013 Jul 16
0
[LLVMdev] General strategy to optimize LLVM IR
On Tue, Jul 16, 2013 at 8:16 AM, Stéphane Letz <letz at grame.fr> wrote: > Hi, > > Our DSL emits sub-optimal LLVM IR that we optimize later on (LLVM IR ==> LLVM IR) before dynamically compiling it with the JIT. We would like to simply follow what clang/clang++ does when compiling with -O1/-O2/-O3 options. Our strategy up to now was to look at the opt.cpp code and take part of it
2013 Jul 16
4
[LLVMdev] General strategy to optimize LLVM IR
Hi, Our DSL emits sub-optimal LLVM IR that we optimize later on (LLVM IR ==> LLVM IR) before dynamically compiling it with the JIT. We would like to simply follow what clang/clang++ does when compiling with -O1/-O2/-O3 options. Our strategy up to now was to look at the opt.cpp code and take part of it in order to implement our optimization code. It appears to be rather difficult to follow
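For this second thread: rather than copying opt.cpp wholesale, the usual way in the LLVM 3.3 era to approximate the clang/opt -O1/-O2/-O3 IR pipeline is PassManagerBuilder. The following is a rough sketch under that assumption; optimizeModule is a hypothetical helper name, and the inliner threshold mirrors opt.cpp's defaults (225 at -O2, 275 at -O3), not necessarily what the posters ended up using.

#include "llvm/IR/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

// Sketch: build roughly the clang/opt -O1/-O2/-O3 IR pipeline with
// PassManagerBuilder (LLVM 3.3-era API) and run it on a module.
void optimizeModule(llvm::Module* module, unsigned optLevel) {
    llvm::PassManagerBuilder builder;
    builder.OptLevel = optLevel;                    // 1, 2 or 3

    // Inliner thresholds copied from opt.cpp; at -O1 opt uses
    // createAlwaysInlinerPass() instead.
    unsigned threshold = (optLevel > 2) ? 275 : 225;
    builder.Inliner = llvm::createFunctionInliningPass(threshold);

    llvm::FunctionPassManager fpm(module);
    builder.populateFunctionPassManager(fpm);

    llvm::PassManager mpm;
    builder.populateModulePassManager(mpm);

    // Run the per-function ("early") passes first, then the module
    // pipeline, mirroring what opt does for -O1/-O2/-O3.
    fpm.doInitialization();
    for (llvm::Module::iterator f = module->begin(), e = module->end(); f != e; ++f)
        fpm.run(*f);
    fpm.doFinalization();

    mpm.run(*module);
}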