search for: jittargetmachinebuilder

Displaying 5 results from an estimated 5 matches for "jittargetmachinebuilder".

2019 Aug 13
4
ORC v2 question
...IR optimization. Well, for ORC v2 there is no change before and after. I also get this message: JIT session error: Symbols not found: { raise_error } Yes, raise_error and all other extern functions are explicitly added as global symbols. > > CodeGen optimization seems a more likely culprit: JITTargetMachineBuilder and ExecutionEngineBuilder have different defaults for their CodeGen opt-level. JITTargetMachineBuilder defaults to CodeGenOpt::None, and ExecutionEngineBuilder defaults to CodeGenOpt::Default. > > What happens if you make the following modification to your setup? > > auto JTMB = llvm::o...
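
The quoted suggestion is cut off by the search snippet; based on the defaults it describes, the kind of change it points at likely raises the CodeGen opt level on the JITTargetMachineBuilder. A minimal sketch, assuming LLVM 9-era ORC v2 APIs (an illustration, not the original poster's exact code):

    #include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
    #include "llvm/Support/CodeGen.h"
    #include "llvm/Support/Error.h"

    // Build a host JITTargetMachineBuilder whose CodeGen opt level matches
    // ExecutionEngineBuilder's default instead of CodeGenOpt::None.
    llvm::Expected<llvm::orc::JITTargetMachineBuilder> makeHostJTMB() {
      auto JTMB = llvm::orc::JITTargetMachineBuilder::detectHost();
      if (!JTMB)
        return JTMB.takeError();
      JTMB->setCodeGenOptLevel(llvm::CodeGenOpt::Default);
      return std::move(*JTMB);
    }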
2019 Aug 13
3
ORC v2 question
Hi Lang, On Tue, 13 Aug 2019 at 20:47, Lang Hames <lhames at gmail.com> wrote: > > Sorry for the delayed reply. Looks like you have figured out how to solve your issue already. Out of interest, what did you need to do? Do you have anything that you would like to see added to http://llvm.org/docs/ORCv2.html ? > Sorry my post was misleading. I figured out below which was part of the
2019 Aug 14
3
ORC v2 question
...-front using the absoluteSymbols function. > > I would be inclined to do the latter: it's more explicit, and easier to limit searches to exactly the symbols you want. > Okay, I will look into this. Thank you for all the help. > > CodeGen optimization seems a more likely culprit: JITTargetMachineBuilder and ExecutionEngineBuilder have different defaults for their CodeGen opt-level. JITTargetMachineBuilder defaults to CodeGenOpt::None, and ExecutionEngineBuilder defaults to CodeGenOpt::Default. > > > > What happens if you make the following modification to your setup? > > > >...
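
For reference, a sketch of what defining such a symbol up-front with absoluteSymbols can look like, assuming LLVM 9-era ORC v2 APIs. The raise_error name comes from the thread; the helper itself is hypothetical, and header locations vary between LLVM versions:

    #include "llvm/ExecutionEngine/JITSymbol.h"
    #include "llvm/ExecutionEngine/Orc/Core.h"

    extern "C" void raise_error(const char *Msg); // host function called from JIT'd code

    static llvm::Error registerHostSymbols(llvm::orc::JITDylib &JD,
                                           llvm::orc::MangleAndInterner &Mangle) {
      // Define the host function's address up-front so lookups never fail
      // with "Symbols not found: { raise_error }".
      return JD.define(llvm::orc::absoluteSymbols(
          {{Mangle("raise_error"),
            llvm::JITEvaluatedSymbol(
                reinterpret_cast<llvm::JITTargetAddress>(&raise_error),
                llvm::JITSymbolFlags::Exported)}}));
    }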
2019 Sep 19
3
"corrupted size vs. prev_size" when calling ExecutionSession::lookup()
...er();
llvm::InitializeNativeTargetAsmParser();

// create jit
llvm::orc::ExecutionSession ES;
llvm::orc::RTDyldObjectLinkingLayer ObjectLayer(
    ES, []() { return std::make_unique<llvm::SectionMemoryManager>(); });
auto JTMB = llvm::orc::JITTargetMachineBuilder::detectHost();
auto DL = JTMB->getDefaultDataLayoutForTarget();
llvm::orc::IRCompileLayer CompileLayer(
    ES, ObjectLayer, llvm::orc::ConcurrentIRCompiler(std::move(*JTMB)));
llvm::orc::MangleAndInterner Mangle(ES, *DL);
ES.getMainJITDylib().setGenerator(
    llvm::cantFail(llvm::orc::DynamicL...
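
One thing worth ruling out in a setup like this (a sketch under the same LLVM 9-era APIs, not a diagnosis of the crash): detectHost() and getDefaultDataLayoutForTarget() both return Expected values, and dereferencing them without checking is undefined behavior if either call failed.

    #include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
    #include "llvm/IR/DataLayout.h"
    #include "llvm/Support/Error.h"

    llvm::Expected<llvm::DataLayout> getHostDataLayout() {
      auto JTMB = llvm::orc::JITTargetMachineBuilder::detectHost();
      if (!JTMB)
        return JTMB.takeError(); // don't dereference a failed Expected
      return JTMB->getDefaultDataLayoutForTarget();
    }

Alternatively, wrapping both calls in llvm::cantFail(...) keeps the original one-liner structure in code where a host-detection failure is considered impossible.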
2020 Sep 04
2
Performance of JIT execution
Hello, I recently noticed a performance issue with JIT execution vs. native code for the following simple logic, which computes the Fibonacci sequence:

uint64_t fib(int n) {
  if (n <= 2) {
    return 1;
  } else {
    return fib(n-1) + fib(n-2);
  }
}

When compiled natively using clang++ with -O3, it took 0.17s to compute fib(40). However, when executing using LLJIT, fed with the IR output of "clang++
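
Given the JITTargetMachineBuilder default noted in the 2019 thread above, one setting worth checking in a comparison like this is the CodeGen opt level of the LLJIT instance. A sketch, assuming LLJIT-era APIs (the helper name is hypothetical):

    #include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
    #include "llvm/ExecutionEngine/Orc/LLJIT.h"
    #include "llvm/Support/CodeGen.h"
    #include "llvm/Support/Error.h"
    #include <memory>

    llvm::Expected<std::unique_ptr<llvm::orc::LLJIT>> makeOptimizingJIT() {
      auto JTMB = llvm::orc::JITTargetMachineBuilder::detectHost();
      if (!JTMB)
        return JTMB.takeError();
      // Roughly comparable to the -O3 codegen used for the native binary.
      JTMB->setCodeGenOptLevel(llvm::CodeGenOpt::Aggressive);

      return llvm::orc::LLJITBuilder()
          .setJITTargetMachineBuilder(std::move(*JTMB))
          .create();
    }

Note also that LLJIT only runs the back-end CodeGen pipeline; mid-level IR optimization has to happen before the module is handed to it (or via an IR transform layer), so the optimization level of the IR fed in matters as well.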