similar to: [LLVMdev] Bitcode parsing performance

Displaying 20 results from an estimated 2000 matches similar to: "[LLVMdev] Bitcode parsing performance"

2014 Jan 10
3
[LLVMdev] Bitcode parsing performance
That was likely type information and should mostly be fixed up. It's still not lazily loaded, but is going to be ridiculously smaller now. -eric On Fri Jan 10 2014 at 12:11:52 AM, Sean Silva <chisophugis at gmail.com> wrote: > This Summer I was working on LTO and Rafael mentioned to me that debug > info is not lazy loaded, which was the cause for the insane resource usage > I
2014 Jan 23
2
[LLVMdev] Bitcode parsing performance
Adrian may have handled this recently? On Jan 13, 2014 3:34 PM, "Manman Ren" <manman.ren at gmail.com> wrote: > I briefly looked at the bitcode files and some types are not uniqued, > here is one example: !3903 = metadata !{i32 786454, metadata !3904, null, metadata !"int64_t", > i32 198, i64 0, i64 0, i64 0, i32 0, metadata !2258} ; [ DW_TAG_typedef ]
2014 Jan 20
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
I'm having a problem with MCJIT (in LLVM 3.3 and 3.4), in which it's not resolving symbol mangling in a precompiled bitcode in the same way as the old JIT. It's possible that it's just my misunderstanding. Maybe somebody can spot my problem, or identify it as an MCJIT bug. Here's my situation, in a nutshell: * I am assembling IR and JITing in my app. The IR may potentially make
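A minimal sketch of the setup being described, against the LLVM 3.3/3.4-era API this thread uses (the function name my_precompiled_fn is a placeholder). Declaring the precompiled entry points extern "C" sidesteps C++ name mangling entirely, so the same lookup string works under MCJIT and the old JIT alike.
----
// Build an MCJIT engine explicitly (the old JIT was still the default then).
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"   // links the MCJIT implementation in
#include "llvm/Support/TargetSelect.h"

llvm::ExecutionEngine *makeEngine(llvm::Module *M, std::string &Err) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();
  return llvm::EngineBuilder(M)
      .setErrorStr(&Err)
      .setUseMCJIT(true)
      .create();
}

// Usage, assuming the precompiled bitcode declared:  extern "C" int my_precompiled_fn();
//   EE->finalizeObject();                        // MCJIT needs this; the old JIT did not
//   uint64_t Addr = EE->getFunctionAddress("my_precompiled_fn");
----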
2011 Feb 24
2
[LLVMdev] Valgrind memcheck errors in llvm
I ran the process using libLLVM-2.9.so (rev. 126022) under Valgrind memcheck and got several errors: ==24227== Invalid read of size 1 ==24227== at 0x40274C9: memcpy (mc_replace_strmem.c:497) ==24227== by 0x40D5B84: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) (in
2014 Mar 19
2
[LLVMdev] load bytecode from string for jiting problem
all of: ---- // cout << "lsr: " << lsr << "\n"; llvm::MemoryBuffer* mbjit = llvm::MemoryBuffer::getMemBufferCopy (sr); ------ string lsr = sr.str(); // cout << "lsr: " << lsr << "\n";
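For reference, a hedged sketch of the path being debugged here, written against the LLVM 3.4-era reader (newer trees return ErrorOr/Expected values instead of a null Module plus an error string): copy the string into a MemoryBuffer and hand that buffer to the bitcode reader.
----
#include "llvm/Bitcode/ReaderWriter.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"

llvm::Module *moduleFromString(const std::string &BitcodeBytes,
                               llvm::LLVMContext &Ctx, std::string &Err) {
  // getMemBufferCopy copies the data, so the source string may go away.
  llvm::MemoryBuffer *Buf =
      llvm::MemoryBuffer::getMemBufferCopy(BitcodeBytes, "<in-memory bitcode>");
  // ParseBitcodeFile never takes ownership of the buffer; the caller frees it.
  llvm::Module *M = llvm::ParseBitcodeFile(Buf, Ctx, &Err);
  delete Buf;
  return M;
}
----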
2014 Jan 21
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
This is sounding rather like getLazyBitcodeModule is simply incompatible with MCJIT. Can anybody confirm that this is definitely the case? Is it by design, or by omission, or bug? Re your option #1 and #2 -- sorry for the newbie questions, but can you point me to docs or code examples for how the linking or object caching should be achieved? If I do either of these rather than seeding my
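A rough sketch of the linking option mentioned here, against the LLVM 3.3/3.4-era Linker API (the header and the signature moved in later releases): link the precompiled bitcode module into the module about to be JIT'd, so the calls become ordinary intra-module calls that MCJIT resolves on its own.
----
#include "llvm/Linker.h"          // llvm/Linker/Linker.h in newer trees
#include "llvm/IR/Module.h"

bool linkPrecompiled(llvm::Module *JitModule, llvm::Module *Precompiled,
                     std::string &Err) {
  // Returns true on error.  DestroySource lets the linker reuse the source
  // module's contents instead of cloning everything into the destination.
  return llvm::Linker::LinkModules(JitModule, Precompiled,
                                   llvm::Linker::DestroySource, &Err);
}
----
Once everything lives in one module, the optimizer can see the callee bodies, which is the main attraction of this option over object caching.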
2014 Mar 20
2
[LLVMdev] load bytecode from string for jiting problem
This segfault occurs only under valgrind; when run from the shell or under gdb I get Invalid bitcode signature simple_scev_dynamic_array: /home/willy/apollo/llvm/include/llvm/Support/ErrorOr.h:258: storage_type *llvm::ErrorOr<llvm::Module *>::getStorage() [T = llvm::Module *]: Assertion `!HasError && "Cannot get value when an error exists!"' failed. Command terminated by
2014 Jan 26
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
Hi Gael, I tried converting to your approach but I had some issues making sure that all symbols accessed by the JIT modules have entries in the dynamic symbol table. To be specific, my current approach is to use MCJIT (with an ObjectCache) to JIT the runtime module and then let MCJIT handle linking any references from the JIT'd modules; I just experimented with what I think you're doing,
2014 Mar 20
2
[LLVMdev] load bytecode from string for jiting problem
Hello Willy, Here is the dump from one of my bitcode files: 0000000 42 43 c0 de 21 0c 00 00 25 05 00 00 0b 82 20 00 As expected, 0x42 (= B), 0x43 (= C), 0xc0 and 0xde are in the correct order. In your case, the first byte is read as 37 (= 0x25). I wonder why? When you check the bytes yourself, you get the expected results. When the same bytes are read from the Stream object, you get a different result (maybe
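A small sketch of the check being discussed: the first four bytes of a raw bitcode stream are 'B', 'C', 0xc0, 0xde, so printing them immediately before the reader sees the buffer shows whether the Stream object hands back the same bytes the file contains.
----
#include <cstdio>

// Debug helper: dump and test the magic bytes of a candidate bitcode buffer.
bool looksLikeRawBitcode(const unsigned char *Buf, size_t Size) {
  if (Size < 4)
    return false;
  std::printf("first bytes: %02x %02x %02x %02x\n",
              Buf[0], Buf[1], Buf[2], Buf[3]);
  return Buf[0] == 'B' && Buf[1] == 'C' && Buf[2] == 0xc0 && Buf[3] == 0xde;
}
----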
2014 Mar 19
2
[LLVMdev] load bytecode from string for jiting problem
I made the change and still have the problem. I investigated the LLVM source code further. First, I changed the isRawBitcode function to print the contents of its parameters, like this: original: http://llvm.org/docs/doxygen/html/ReaderWriter_8h_source.html#l00081 inline bool isRawBitcode(const unsigned char *BufPtr, const unsigned char *BufEnd) { // These bytes sort
2014 Jan 21
4
[LLVMdev] MCJIT versus getLazyBitcodeModule?
Thanks for the pointers. Am I correct in assuming that putting the precompiled bitcode into a second module and linking (or using the object caches) would result in ordinary function calls, but would not be able to inline the functions? -- lg On Jan 21, 2014, at 11:55 AM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote: > I would say that the incompatibility is by design. Not
2014 Jan 10
2
[LLVMdev] Bitcode parsing performance
On 10 January 2014 03:09, Sean Silva <chisophugis at gmail.com> wrote: > This Summer I was working on LTO and Rafael mentioned to me that debug info > is not lazy loaded, which was the cause for the insane resource usage I was > seeing when doing LTO with debug info. This is likely the reason that the > lazy loading was so ineffective for your debug build. > > Rafael, am I
2013 Dec 10
0
[LLVMdev] [RFC] MCJIT usage models
On Dec 9, 2013, at 3:59 PM, Kevin Modzelewski <kmod at dropbox.com> wrote: > About lazy compilation, I'm still of the opinion that that's better handled > outside of MCJIT. For the people asking for it, would it be enough to have > a wrapper around MCJIT that automatically splits modules and adds stubs to > do lazy compilation? I think that would be sufficient for me.
2011 Jul 01
0
[LLVMdev] Bug in Inliner w/ lazy bitcode
Hi everyone, In debugging an LLVM based system with a runtime module loaded from bitcode, I ran into a strange error when trying to use getLazyBitcodeModule instead of just ParseBitcodeFile (when loading lazily I get an "Invalid CALL" during bitcode deserialization). I can't decide if this is a "bug" or just a "you shouldn't use Module/Inliner like this".
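A hedged sketch of the workaround this implies, written against the circa-LLVM 2.9/3.0 API in use here (the materialization methods were renamed in later releases): if a pass such as the inliner needs to see function bodies from a lazily loaded module, materialize them up front rather than relying on on-demand deserialization.
----
#include "llvm/Bitcode/ReaderWriter.h"
#include "llvm/Module.h"                 // llvm/IR/Module.h in newer trees
#include "llvm/Support/MemoryBuffer.h"

llvm::Module *loadRuntime(llvm::MemoryBuffer *Buf, llvm::LLVMContext &Ctx,
                          std::string &Err) {
  // Defers reading function bodies; on success the module owns the buffer.
  llvm::Module *M = llvm::getLazyBitcodeModule(Buf, Ctx, &Err);
  if (!M)
    return 0;
  // Force the bodies in before handing the module to the inliner.
  if (M->MaterializeAll(&Err))
    return 0;
  return M;
}
----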
2011 Jul 27
2
[LLVMdev] Linking opaque types
On Jul 26, 2011, at 8:11 AM, Talin wrote: >> >> If that's true, then it means that we're back to the case where every type has to be fully defined down to the leaf level. > > I'm not sure what you mean. LLVM is perfectly fine with opaque structs so long as you don't "dereference" them, GEP into them, need their size, etc. > > Let me try with
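A minimal sketch of what "perfectly fine with opaque structs" means in the LLVM 3.0-era type system this thread is about: the type can be named and pointed to without a body, and a body can be supplied later (for example by the module that actually defines it).
----
#include "llvm/DerivedTypes.h"           // llvm/IR/DerivedTypes.h in newer trees
#include "llvm/LLVMContext.h"

void opaqueExample(llvm::LLVMContext &Ctx) {
  // An opaque named struct: no body, no size known.
  llvm::StructType *Node = llvm::StructType::create(Ctx, "Node");
  // Pointers to it are fine; GEPs into it or size queries are not.
  llvm::PointerType *NodePtr = llvm::PointerType::getUnqual(Node);
  // Later, whoever knows the full definition can fill it in:
  llvm::Type *Elts[] = { llvm::Type::getInt32Ty(Ctx), NodePtr };
  Node->setBody(Elts);
}
----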
2013 Apr 23
3
[LLVMdev] LLVM JIT Questions
On Tue, Apr 23, 2013 at 10:39 AM, Kaylor, Andrew <andrew.kaylor at intel.com>wrote: > Hi Dmitri, > > Regarding your first question, if you can use the MCJIT engine a caching > mechanism will be available very soon. I'm preparing to commit a patch > today to add this capability. I'm not sure what it would take to get > something similar working with the older JIT
2001 Jan 27
4
ogg123 oss plugin plays garbage
I tried to use the current CVS version of ogg123 with OSS output and the Ogg file just sounds like static. I wanted to document it on the list in case anyone else is having the problem. I can make ogg123 write wav files fine. Also Vakor does not have any trouble playing oggs with ogg123, so I am not certain what the problem is. I have tried compiling all of ogg vorbis (ao,ogg,vorbis,vorbis-tools)
2015 Aug 13
2
Rationale for the object cache design?
Hello, I am a bit curious about the rationale for the current callback-based object cache API. For Numba, it would be easier if there were a simple procedural API: - one method to get a module's compiled object code - one method to load/instantiate a module from a given piece of object code I managed to work around the callback-based API to do what I want, but it's a bit weird to work
2013 Apr 23
0
[LLVMdev] LLVM JIT Questions
Yes, exactly. My patch adds a new ObjectCache class which can be registered with MCJIT. MCJIT will then call this component before attempting to generate code to see if it has a cached object image for a given module. If the ObjectCache has a cached object, MCJIT will skip the code generation step and just perform linking and loading. If the ObjectCache does not have a cached version MCJIT
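A hedged sketch of the mechanism described above. The exact virtual signatures have changed between releases (the original patch era traded in ObjectBuffer, current trees in MemoryBuffer), so treat this as the shape of the API rather than a drop-in implementation: MCJIT calls getObject() before code generation and notifyObjectCompiled() after it.
----
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <map>
#include <memory>
#include <string>

// Keeps compiled objects in memory, keyed by module identifier.
class InMemoryCache : public llvm::ObjectCache {
  std::map<std::string, std::string> Cached;

public:
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    Cached[M->getModuleIdentifier()] =
        std::string(Obj.getBufferStart(), Obj.getBufferEnd());
  }

  std::unique_ptr<llvm::MemoryBuffer> getObject(const llvm::Module *M) override {
    auto I = Cached.find(M->getModuleIdentifier());
    if (I == Cached.end())
      return nullptr;          // cache miss: MCJIT runs codegen as usual
    return llvm::MemoryBuffer::getMemBufferCopy(I->second,
                                                M->getModuleIdentifier());
  }
};
// Registered with the engine via EE->setObjectCache(&MyCache) before finalizing.
----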
2016 Dec 13
0
Orc JIT and lazily-loaded modules
Hi, I'm trying to port some code from the original JIT (a project I haven't had a chance to work on for quite a while) to the new Orc JIT. I thought I'd try to use the Kaleidoscope tutorial as a starting point for getting acquainted with the new JIT and so I first tried to add the ability to load an existing bitcode file, then make calls to functions from that file. That was easy to
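A sketch of the same use case with the LLJIT convenience wrapper from later Orc releases (it postdates this 2016 thread, whose raw layer stack looked different); the file name runtime.bc and the symbol my_func are placeholders.
----
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"
#include <memory>

llvm::Error runFromBitcode() {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  auto Ctx = std::make_unique<llvm::LLVMContext>();
  llvm::SMDiagnostic Diag;
  // parseIRFile accepts both textual IR and bitcode files.
  std::unique_ptr<llvm::Module> M = llvm::parseIRFile("runtime.bc", Diag, *Ctx);
  if (!M)
    return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                   Diag.getMessage());

  auto JIT = llvm::orc::LLJITBuilder().create();
  if (!JIT)
    return JIT.takeError();
  if (auto Err = (*JIT)->addIRModule(
          llvm::orc::ThreadSafeModule(std::move(M), std::move(Ctx))))
    return Err;

  // lookup()'s return type has itself changed across releases; with current
  // trees it yields an ExecutorAddr that converts straight to a pointer.
  auto Sym = (*JIT)->lookup("my_func");
  if (!Sym)
    return Sym.takeError();
  auto *Fn = Sym->toPtr<int (*)()>();
  (void)Fn();                            // call into the JIT'd bitcode
  return llvm::Error::success();
}
----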