Hi Lang,
Just to check, am I enabling FastISel correctly?
llvm::TargetOptions TO;
TO.EnableFastISel = true; // intent: force FastISel on

std::string ErrStr;
engine = std::unique_ptr<llvm::ExecutionEngine>(
    llvm::EngineBuilder(std::move(sysModuleOwner))
        .setErrorStr(&ErrStr)
        .setVerifyModules(false)
        .setMCPU(llvm::sys::getHostCPUName())
        .setOptLevel(useOptimization ? llvm::CodeGenOpt::Default
                                     : llvm::CodeGenOpt::None)
        .setTargetOptions(TO)
        .setMCJITMemoryManager(llvm::make_unique<llvm::SectionMemoryManager>())
        .create());
I'm targeting x64 on Windows. The last time I profiled, a lot of time was
spent in SelectionDAG. Which IR constructs would prevent FastISel from doing
its magic? I might be able to rewrite that part of the IR.
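To see how often FastISel bails out, I'll probably just turn on LLVM's
statistics output. A rough sketch (this assumes an asserts-enabled LLVM
build, since the SelectionDAGISel fast-isel failure counters are compiled
out of release builds, and the argv strings below are placeholders):

#include "llvm/Support/CommandLine.h"

// Sketch: enable -stats so the FastISel failure counters collected in
// SelectionDAGISel are printed when llvm_shutdown() runs.
static const char *DiagArgs[] = {"myjit", "-stats"};
llvm::cl::ParseCommandLineOptions(2, DiagArgs);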
The object cache is already in place for the recurring code (about 80% of the
code), but there is quite a lot of non-recurring code that still needs to be
JITted on the fly.
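For reference, a minimal sketch of the kind of ObjectCache I mean (not my
exact implementation; the in-memory StringMap keyed on the module identifier
is just an assumption for illustration):

#include "llvm/ADT/StringMap.h"
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"

// Sketch of an in-memory ObjectCache keyed on the module identifier.
class SimpleObjectCache : public llvm::ObjectCache {
public:
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    // Keep a private copy of the emitted object file.
    Cache[M->getModuleIdentifier()] = llvm::MemoryBuffer::getMemBufferCopy(
        Obj.getBuffer(), Obj.getBufferIdentifier());
  }

  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    auto I = Cache.find(M->getModuleIdentifier());
    if (I == Cache.end())
      return nullptr; // cache miss: MCJIT compiles the module as usual
    // Hand back a copy so the cache retains ownership of its buffer.
    return llvm::MemoryBuffer::getMemBufferCopy(
        I->second->getBuffer(), I->second->getBufferIdentifier());
  }

private:
  llvm::StringMap<std::unique_ptr<llvm::MemoryBuffer>> Cache;
};

It gets attached via engine->setObjectCache(&MyCache) before anything is
compiled, so it only pays off for the recurring 80%.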
I will also experiment with ORC, to see how it affects memory usage.
Cheers,
On Sat, Aug 13, 2016 at 4:01 AM, Lang Hames <lhames at gmail.com> wrote:
> Hi Koffie,
>
> I'm surprised to hear that FastISel doesn't help - what architecture are
> you compiling for? Is it falling back to SelectionDAG often?
>
> You can use an object cache if you're not already doing so and your
> use-case allows for it.
>
> If you switch to ORC you can also use lazy compilation to defer
> compilation of functions until they're first executed. This can improve
> startup times, and reduce overall compile times if not all functions are
> executed.
>
> Cheers,
> Lang.
>
> On Thu, Aug 11, 2016 at 11:47 PM, koffie drinker <gekkekoe at gmail.com>
> wrote:
>
>> Hi,
>>
>> What other options do I have to reduce JIT time for a large amount of
>> code?
>> - setting optimization level to none helps a lot
>> - enabling FastISel doesn't seem to help much
>>
>> Thanks!
>>
>
>