Displaying 20 results from an estimated 1000 matches similar to: "Orc JIT and lazily-loaded modules"
2016 Apr 21
4
Lazily Loaded Modules and Linker::LinkOnlyNeeded
Hey all,
For LinkModules, dest is a fully materialized module and src is a
lazily loaded module.
From what I understood, getLinkedToGlobal() finds the function in
src that matches some function declaration in dest, and given
that src is lazily loaded it could be un-materialized.
The functions I need brought in from src into dest are
always declarations in
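If the issue is that a needed definition in the lazily loaded src never gets materialized, a minimal sketch of forcing materialization by hand (SrcM and Name are placeholder identifiers, not taken from the thread; written against a recent LLVM where materialize() returns llvm::Error) could look like:
----
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"

// Force one function's body to be read in before the link runs.
llvm::Error materializeIfNeeded(llvm::Module &SrcM, llvm::StringRef Name) {
  if (llvm::Function *F = SrcM.getFunction(Name))
    if (F->isMaterializable())
      return F->materialize(); // pulls the body out of the bitcode reader
  return llvm::Error::success();
}
----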
2016 Apr 20
2
Lazily Loaded Modules and Linker::LinkOnlyNeeded
>
>
> I understood from his description that he reversed the destination and
> source so that destination is the user code.
> I assumed it was not lazy loaded, but that would explain the question then
> :)
>
> Neil: can you clarify? If Teresa is right, why aren't you materializing
> the destination module entirely?
>
>
I don't think it has ever been tried
2016 Apr 20
2
Lazily Loaded Modules and Linker::LinkOnlyNeeded
TL;DR - when linking from a lazily loaded module and using
Linker::LinkOnlyNeeded, bodies of used functions aren't being copied
during linking.
Previously on one of our products, we would lazily load our runtime
module (around 9000 functions), and link some user module into this
(which is in all practical use cases much smaller). Then, post linking,
we have a pass that runs over the
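A minimal sketch of the setup described in this message, assuming a recent LLVM and an illustrative file name ("runtime.bc"): lazily load the runtime module and link it into the user module with the LinkOnlyNeeded flag.
----
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Linker/Linker.h"
#include "llvm/Support/SourceMgr.h"
#include <memory>

// Returns true on success; note Linker::linkModules returns true on error.
bool linkRuntimeInto(llvm::Module &UserM, llvm::LLVMContext &Ctx) {
  llvm::SMDiagnostic Err;
  // Lazily load the (large) runtime module; bodies are materialized on demand.
  std::unique_ptr<llvm::Module> RuntimeM =
      llvm::getLazyIRFileModule("runtime.bc", Err, Ctx);
  if (!RuntimeM)
    return false;
  // Copy only the definitions that satisfy declarations in UserM.
  return !llvm::Linker::linkModules(UserM, std::move(RuntimeM),
                                    llvm::Linker::Flags::LinkOnlyNeeded);
}
----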
2016 Apr 20
2
Lazily Loaded Modules and Linker::LinkOnlyNeeded
+cc Artem, who added the LinkOnlyNeeded flag.
On Wed, Apr 20, 2016 at 9:18 AM, Mehdi Amini via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Hi Neil,
>
> On Apr 20, 2016, at 5:20 AM, Neil Henning via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
> TL;DR - when linking from a lazily loaded module and using
> Linker::LinkOnlyNeeded, bodies of used functions
2019 Mar 26
2
ORC JIT fails with standard math library
Hi,
I still can't get IR functions to JIT compile with the ORC JIT when they
contain a call to the standard math library. Attached is a minimal reproducer.
The program uses the KaleidoscopeJIT.h that ships with LLVM 8 (except
that I had to expose the DataLayout). It reads from the filesystem an IR
file (filename "func_works.ll" or "func_cos_fails.ll") and asks the ORC
JIT
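One commonly suggested workaround for this symptom (a sketch only, not necessarily the fix for this exact report) is to make the host process's own symbols, including libm, visible to the JIT's fallback resolver before any lookup happens:
----
#include "llvm/Support/DynamicLibrary.h"
#include <cmath>

void exposeProcessSymbolsToJIT() {
  // Make every symbol already linked into the host process (including libm,
  // if anything references it) visible to the JIT's fallback resolver.
  llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);
  // Referencing cos keeps libm from being dropped at static link time.
  volatile double Keep = std::cos(0.0);
  (void)Keep;
}
----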
2014 Jan 20
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
I'm having a problem with MCJIT (in LLVM 3.3 and 3.4), in which it's not resolving symbol mangling in a precompiled bitcode in the same way as the old JIT. It's possible that it's just my misunderstanding. Maybe somebody can spot my problem, or identify it as an MCJIT bug.
Here's my situation, in a nutshell:
* I am assembling IR and JITing in my app. The IR may potentially make
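For the lazy-loading half of the question, a sketch of fully materializing a lazily loaded bitcode module before handing it to MCJIT (written against a recent LLVM; the 3.3/3.4 reader signatures differ):
----
#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <memory>

llvm::Expected<std::unique_ptr<llvm::Module>>
loadForMCJIT(llvm::MemoryBufferRef Buf, llvm::LLVMContext &Ctx) {
  auto ModOrErr = llvm::getLazyBitcodeModule(Buf, Ctx);
  if (!ModOrErr)
    return ModOrErr.takeError();
  // Pull in every function body up front so MCJIT never sees an
  // unmaterialized definition.
  if (llvm::Error E = (*ModOrErr)->materializeAll())
    return std::move(E);
  return std::move(*ModOrErr);
}
----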
2019 Aug 10
3
ORC v2 question
Hi,
I am trying out ORC v2 and facing some problems.
I am using LLVM 8.0.1.
I updated my ORC v1 implementation from 6.0 to 8.0 based on the
Kaleidoscope example (i.e. using the Legacy classes) and that works fine.
Now I am trying out the ORC v2 APIs, based on
https://github.com/llvm-mirror/llvm/blob/master/examples/Kaleidoscope/BuildingAJIT/Chapter2/KaleidoscopeJIT.h.
I have got it to compile and build.
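For comparison, a minimal ORC v2 sketch built on LLJIT rather than the Kaleidoscope class from the thread (assumes an LLVM release roughly in the 9-14 range, where lookup returns a JITEvaluatedSymbol; the IR file path and the "main" entry point are illustrative):
----
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"
#include <memory>

llvm::Expected<int> runMainFromIRFile(const char *IRPath) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  auto Ctx = std::make_unique<llvm::LLVMContext>();
  llvm::SMDiagnostic Err;
  std::unique_ptr<llvm::Module> M = llvm::parseIRFile(IRPath, Err, *Ctx);
  if (!M)
    return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                   "failed to parse IR file");

  auto J = llvm::orc::LLJITBuilder().create();
  if (!J)
    return J.takeError();
  if (llvm::Error E = (*J)->addIRModule(
          llvm::orc::ThreadSafeModule(std::move(M), std::move(Ctx))))
    return std::move(E);

  auto MainSym = (*J)->lookup("main"); // mangles the name for the target
  if (!MainSym)
    return MainSym.takeError();
  auto *MainFn = (int (*)())(intptr_t)MainSym->getAddress();
  return MainFn(); // JIT'd code is released when *J goes out of scope
}
----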
2017 Sep 25
1
Some questions regarding ORC JIT apis
Hi Dibyendu
> On Windows 10 64-bit, with
> dynamic linking - I found one unexpected behaviour - the findSymbol()
> is unable to locate the JIT compiled function in the module if the search
> for "exported" symbols only is true ... even though the function is defined as
> having ExternalLinkage. I have to test on Linux / Mac OSX to see if
> the behaviour is different there.
2014 Jan 10
4
[LLVMdev] Bitcode parsing performance
Hi all, I'm trying to reduce the startup time for my JIT, but I'm running
into the problem that the majority of the time is spent loading the bitcode
for my standard library, and I suspect it's due to debug info. My stdlib
is currently about 2kloc in a number of C++ files; I compile them with
clang -g -emit-llvm, then link them together with llvm-link, call opt -O3
on it, and arrive
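If the debug info really is the culprit and is not needed at JIT time, one option (a sketch, assuming a recent LLVM; the output path is illustrative) is to strip it and re-emit the bitcode that the JIT loads:
----
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/DebugInfo.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/raw_ostream.h"

bool stripDebugInfoAndRewrite(llvm::Module &M, llvm::StringRef OutPath) {
  // Drops !dbg attachments, llvm.dbg.* intrinsics and the debug metadata graph.
  llvm::StripDebugInfo(M);
  std::error_code EC;
  llvm::raw_fd_ostream OS(OutPath, EC, llvm::sys::fs::OF_None);
  if (EC)
    return false;
  llvm::WriteBitcodeToFile(M, OS);
  return true;
}
----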
2019 Aug 10
2
ORC v2 question
Hi Praveen,
On Sat, 10 Aug 2019 at 21:05, Praveen Velliengiri
<praveenvelliengiri at gmail.com> wrote:
>
> Could you please send me your unoptimized and expected optimized code? The default implementation only contains some transformations. It would be helpful to know what you are actually trying to do.
> Optimize Module is just a function object.
>
You can view the code here:
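For context, the "Optimize Module" function object being discussed is, in the BuildingAJIT Chapter 2 example this thread is based on, roughly the following sketch (the exact pass selection is illustrative):
----
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/InstCombine/InstCombine.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Scalar/GVN.h"
#include <memory>

std::unique_ptr<llvm::Module>
optimizeModule(std::unique_ptr<llvm::Module> M) {
  auto FPM = std::make_unique<llvm::legacy::FunctionPassManager>(M.get());
  // Function-level cleanup passes, as in the tutorial.
  FPM->add(llvm::createInstructionCombiningPass());
  FPM->add(llvm::createReassociatePass());
  FPM->add(llvm::createGVNPass());
  FPM->add(llvm::createCFGSimplificationPass());
  FPM->doInitialization();
  for (llvm::Function &F : *M)
    FPM->run(F);
  return M;
}
----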
2014 Jan 10
3
[LLVMdev] Bitcode parsing performance
That was likely type information and should mostly be fixed up. It's still
not lazily loaded, but is going to be ridiculously smaller now.
-eric
On Fri Jan 10 2014 at 12:11:52 AM, Sean Silva <chisophugis at gmail.com> wrote:
> This Summer I was working on LTO and Rafael mentioned to me that debug
> info is not lazy loaded, which was the cause for the insane resource usage
> I
2014 Jan 21
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
This is sounding rather like getLazyBitcodeModule is simply incompatible with MCJIT. Can anybody confirm that this is definitely the case? Is it by design, or by omission, or bug?
Re your option #1 and #2 -- sorry for the newbie questions, but can you point me to docs or code examples for how the linking or object caching should be achieved? If I do either of these rather than seeding my
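For the object-caching part of the question, a minimal ObjectCache skeleton looks roughly like this (written against a recent LLVM; the 3.3/3.4 interface used raw MemoryBuffer pointers). The in-memory map is illustrative only.
----
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <map>
#include <memory>
#include <string>

class SimpleObjectCache : public llvm::ObjectCache {
public:
  // Called by MCJIT after it compiles a module to a relocatable object.
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    Cache[M->getModuleIdentifier()] = llvm::MemoryBuffer::getMemBufferCopy(
        Obj.getBuffer(), Obj.getBufferIdentifier());
  }

  // Called before compilation; returning a buffer skips codegen entirely.
  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    auto I = Cache.find(M->getModuleIdentifier());
    if (I == Cache.end())
      return nullptr;
    return llvm::MemoryBuffer::getMemBufferCopy(
        I->second->getBuffer(), I->second->getBufferIdentifier());
  }

private:
  std::map<std::string, std::unique_ptr<llvm::MemoryBuffer>> Cache;
};
// Installed on the engine via ExecutionEngine::setObjectCache(&MyCache).
----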
2014 Jan 23
2
[LLVMdev] Bitcode parsing performance
Adrian may have handled this recently?
On Jan 13, 2014 3:34 PM, "Manman Ren" <manman.ren at gmail.com> wrote:
> I briefly looked at the bit code files and some types are not uniqued,
> here is one example:
> !3903 = metadata !{i32 786454, metadata !3904, null, metadata !"int64_t",
> i32 198, i64 0, i64 0, i64 0, i32 0, metadata !2258} ; [ DW_TAG_typedef ]
2011 Feb 24
2
[LLVMdev] Valgrind memcheck errors in llvm
I ran the process using libLLVM-2.9.so (rev.126022) under valgrind
memcheck and got several errors:
==24227== Invalid read of size 1
==24227== at 0x40274C9: memcpy (mc_replace_strmem.c:497)
==24227== by 0x40D5B84: char* std::string::_S_construct<char
const*>(char const*, char const*, std::allocator<char> const&,
std::forward_iterator_tag) (in
2011 Jul 01
0
[LLVMdev] Bug in Inliner w/ lazy bitcode
Hi everyone,
In debugging an LLVM based system with a runtime module loaded from bitcode, I ran into a strange error when trying to use getLazyBitcodeModule instead of just ParseBitcodeFile (when loading lazily I get an "Invalid CALL" during bitcode deserialization). I can't decide if this is a "bug" or just a "you shouldn't use Module/Inliner like this".
2014 Mar 19
2
[LLVMdev] load bytecode from string for JITing problem
all of:
----
// cout << "lsr: " << lsr << "\n";
llvm::MemoryBuffer* mbjit =
llvm::MemoryBuffer::getMemBufferCopy (sr);
------
string lsr = sr.str();
// cout << "lsr: " << lsr << "\n";
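A sketch of how loading a module from bitcode held in a std::string usually looks with a recent LLVM; the key point, as the neighbouring messages note, is that the buffer handed to the reader must still begin with the 'B' 'C' 0xc0 0xde magic, so the raw bytes should be copied untouched:
----
#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <memory>
#include <string>

llvm::Expected<std::unique_ptr<llvm::Module>>
moduleFromString(const std::string &Bitcode, llvm::LLVMContext &Ctx) {
  // Copy the raw bytes unmodified into a MemoryBuffer the reader can own.
  std::unique_ptr<llvm::MemoryBuffer> Buf =
      llvm::MemoryBuffer::getMemBufferCopy(Bitcode, "in-memory bitcode");
  return llvm::parseBitcodeFile(Buf->getMemBufferRef(), Ctx);
}
----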
2017 Nov 06
3
ORC JIT and multithreading
2014 Mar 20
2
[LLVMdev] load bytecode from string for JITing problem
Hello Willy,
Here is the dump from one of my bitcode files:
0000000 42 43 c0 de 21 0c 00 00 25 05 00 00 0b 82 20 00
As expected, 0x42 (= B), 0x43 (= C), 0xc0 and 0xde are in correct order. In
your case, the first byte is read as 37 (= 0x25). I wonder why? When you
check the bytes yourself, you get expected results. When the same bytes are
read from the Stream object, you get a different result (maybe
2014 Mar 20
2
[LLVMdev] load bytecode from string for JITing problem
This segfault occurs only under valgrind;
when run from the shell or under gdb I get:
Invalid bitcode signature
simple_scev_dynamic_array: /home/willy/apollo/llvm/include/llvm/Support/ErrorOr.h:258: storage_type *llvm::ErrorOr<llvm::Module *>::getStorage() [T = llvm::Module *]: Assertion `!HasError && "Cannot get value when an error exists!"' failed.
Command terminated by
2019 Sep 23
4
"Freeing" functions generated with SimpleORC for JIT use-case
Hi all,
I am using LLVM for a JIT use case, compiling functions on the fly. I want
to "free" the modules after some time and reclaim any memory associated
with it. I am using the SimpleORC API
<https://llvm.org/docs/tutorial/BuildingAJIT1.html> now.
Is there an API to "free" all the memory associated with the module? I use
one "compiler" instance (think similar
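With the newer ORC APIs (ResourceTracker, added in LLVM releases well after the SimpleORC tutorial this thread uses), a sketch of freeing a module's memory looks roughly like this:
----
#include "llvm/ExecutionEngine/Orc/Core.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/Support/Error.h"

llvm::Error addRunAndFree(llvm::orc::LLJIT &J,
                          llvm::orc::ThreadSafeModule TSM) {
  // Everything added under this tracker can later be removed as a unit.
  llvm::orc::ResourceTrackerSP RT =
      J.getMainJITDylib().createResourceTracker();
  if (llvm::Error E = J.addIRModule(RT, std::move(TSM)))
    return E;

  // ... look up and run the JIT'd functions here ...

  // Release the compiled code and any other resources owned by the tracker.
  return RT->remove();
}
----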