search for: objectcaching

Displaying 20 results from an estimated 34 matches for "objectcaching".

2013 Apr 23
0
[LLVMdev] LLVM JIT Questions
Yes, exactly. My patch adds a new ObjectCache class which can be registered with MCJIT. MCJIT will then call this component before attempting to generate code to see if it has a cached object image for a given module. If the ObjectCache has a cached object, MCJIT will skip the code generation step and just perform linking and loading. If the ObjectCache does not have a cached version MCJIT
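
For context, the interface described above is small: MCJIT queries the cache via getObject before code generation and reports freshly compiled objects via notifyObjectCompiled. A minimal sketch of wiring it up, assuming the current ObjectCache signatures; NullCache and createEngineWithCache are illustrative names, not LLVM API:

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include <memory>

// Illustrative no-op cache: never returns a cached object, so MCJIT
// always falls back to normal code generation.
class NullCache : public llvm::ObjectCache {
public:
  // MCJIT calls this after compiling a module; a real cache would store Obj.
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {}

  // MCJIT calls this before code generation; returning nullptr means
  // "no cached object, please compile".
  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    return nullptr;
  }
};

llvm::ExecutionEngine *createEngineWithCache(std::unique_ptr<llvm::Module> M,
                                             llvm::ObjectCache *Cache) {
  llvm::ExecutionEngine *EE = llvm::EngineBuilder(std::move(M)).create();
  if (EE)
    EE->setObjectCache(Cache); // register before the first compilation
  return EE;
}
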
2013 Apr 23
3
[LLVMdev] LLVM JIT Questions
On Tue, Apr 23, 2013 at 10:39 AM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote: > Hi Dmitri, > > Regarding your first question, if you can use the MCJIT engine a caching > mechanism will be available very soon. I'm preparing to commit a patch > today to add this capability. I'm not sure what it would take to get > something similar working with the older JIT
2015 Aug 13
2
Rationale for the object cache design?
Hello, I am a bit curious about the rationale for the current callback-based object cache API. For Numba, it would be easier if there were a simple procedural API: - one method to get a module's compiled object code - one method to load/instantiate a module from a given piece of object code I managed to get around the callback-based API to do what I want, but it's a bit weird to work
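
One way to get the procedural feel described above is to wrap the two callbacks behind a small adapter. A hedged sketch, keyed per engine rather than per module for brevity; ProceduralCache, getCompiledObject and setObjectCode are hypothetical names, not part of LLVM or Numba:

#include "llvm/ADT/StringRef.h"
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <memory>
#include <string>

// Hypothetical adapter: turns the callback-style ObjectCache into the two
// procedural operations described above.
class ProceduralCache : public llvm::ObjectCache {
  std::string LastObject;                      // object emitted by MCJIT
  std::unique_ptr<llvm::MemoryBuffer> Preload; // object supplied by the user

public:
  // Callback #1: capture whatever MCJIT just compiled.
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    LastObject = Obj.getBuffer().str();
  }

  // Callback #2: hand a previously saved object back to MCJIT, if any.
  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    if (!Preload)
      return nullptr;
    return llvm::MemoryBuffer::getMemBufferCopy(Preload->getBuffer());
  }

  // "Procedural" face: get the last module's compiled object code...
  const std::string &getCompiledObject() const { return LastObject; }
  // ...or supply object code to be used for the next compilation.
  void setObjectCode(llvm::StringRef Code) {
    Preload = llvm::MemoryBuffer::getMemBufferCopy(Code);
  }
};
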
2013 Apr 23
1
[LLVMdev] LLVM JIT Questions
This sounds great, but will it work on Windows as well? I considered MCJIT for my work, but was not sure if there are any features supported by the old JIT but missing in MCJIT. Thanks, Dmitri On 23.04.2013 at 20:26, "Kaylor, Andrew" <andrew.kaylor at intel.com> wrote: > Yes, exactly. My patch adds a new ObjectCache class which can be > registered with MCJIT. MCJIT will
2018 Nov 05
2
ORC JIT api, object files and stackmaps
Hi Christian, Your use case seems to have similar requirements to remote JITing in ORC. So far I haven't used that part myself, and I am sure Lang can tell you much more about it. However, this comment on the RemoteObjectClientLayer class sounds promising for your questions (1) and (2): /// Sending relocatable objects to the server (rather than fully relocated /// bits) allows JIT'd code
2014 Sep 18
2
[LLVMdev] How to cache MCJIT compiled object into memory?
Hi, All I'm not sure if this question has been asked or not. I'd like to cache the MCJIT-compiled object into a memory buffer so it can be reused later. I followed Andy Kaylor's example, wrote a class inherited from ObjectCache, and used raw_fd_ostream to save the cache to and load it from a file. I checked raw_ostream and its subclasses; maybe I am wrong, but I don't see one that is fit to
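
For what it's worth, there are raw_ostream subclasses that target memory (raw_svector_ostream, raw_string_ostream), and with the ObjectCache API you can also skip the stream entirely and copy the MemoryBufferRef that notifyObjectCompiled receives. A small sketch of both options; the helper names are illustrative:

#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>

// Simplest: copy the buffer MCJIT hands to notifyObjectCompiled.
std::unique_ptr<llvm::MemoryBuffer>
copyObjectToMemory(llvm::MemoryBufferRef Obj) {
  return llvm::MemoryBuffer::getMemBufferCopy(Obj.getBuffer(),
                                              Obj.getBufferIdentifier());
}

// Or stream into a SmallVector via raw_svector_ostream, the in-memory
// counterpart of raw_fd_ostream.
std::unique_ptr<llvm::MemoryBuffer>
streamObjectToMemory(llvm::MemoryBufferRef Obj) {
  llvm::SmallVector<char, 0> Storage;
  llvm::raw_svector_ostream OS(Storage);
  OS.write(Obj.getBufferStart(), Obj.getBufferSize());
  return llvm::MemoryBuffer::getMemBufferCopy(
      llvm::StringRef(Storage.data(), Storage.size()));
}
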
2014 Sep 11
2
[LLVMdev] Fail to load a pointer to a function inside MCJIT-ed code when it is reloaded from ObjectCache
Hi, All I have a problem reusing MCJIT-jitted code loaded from an ObjectCache backed by a file. In the first run, I use MCJIT to generate object code for the function JittedOpExpr as follows, and it runs OK: 0x7fe4801fa1f8 at instruction 0x00007fe4cc6c2014 points to 0x69382E, which is the beginning of the ExecEvalVar function. Then I save the object code into a file after implementing the notifyObjectCompiled method.
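
A hedged sketch of the save-to-file / load-from-file round trip the post describes, assuming the cache file is named after the module identifier (the actual scheme used in the post is not shown); FileCache is an illustrative name:

#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>
#include <string>

// Illustrative file-backed cache: one object file per module identifier.
class FileCache : public llvm::ObjectCache {
  std::string cacheFileFor(const llvm::Module *M) const {
    return M->getModuleIdentifier() + ".o"; // naming scheme is arbitrary
  }

public:
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    std::error_code EC;
    // OF_None on current LLVM; older releases spell this sys::fs::F_None.
    llvm::raw_fd_ostream OS(cacheFileFor(M), EC, llvm::sys::fs::OF_None);
    if (!EC)
      OS.write(Obj.getBufferStart(), Obj.getBufferSize());
  }

  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    auto Buf = llvm::MemoryBuffer::getFile(cacheFileFor(M));
    if (!Buf)
      return nullptr; // no cache file yet: MCJIT will compile
    return std::move(*Buf);
  }
};
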
2016 Jul 07
2
ObjectCache and getFunctionAddress issue
Hi all, I'm trying to add a pre-compiled object cache to my runtime. I've implemented the object cache as follows: class EngineObjectCache : public llvm::ObjectCache { private: std::unordered_map<std::string, std::unique_ptr<llvm::MemoryBuffer>> CachedObjs; public: virtual void notifyObjectCompiled(const llvm::Module *M, llvm::MemoryBufferRef Obj) { auto id =
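
The snippet above is cut off by the archive; the following is a hedged guess at how such a cache might be completed, assuming the truncated "auto id =" line derives the key from the module identifier (not confirmed by the post):

#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <memory>
#include <string>
#include <unordered_map>

class EngineObjectCache : public llvm::ObjectCache {
  std::unordered_map<std::string, std::unique_ptr<llvm::MemoryBuffer>>
      CachedObjs;

public:
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    auto id = M->getModuleIdentifier(); // assumed key; original is truncated
    CachedObjs[id] = llvm::MemoryBuffer::getMemBufferCopy(
        Obj.getBuffer(), Obj.getBufferIdentifier());
  }

  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    auto It = CachedObjs.find(M->getModuleIdentifier());
    if (It == CachedObjs.end())
      return nullptr;
    // Hand MCJIT a copy so the cached buffer stays valid for later lookups.
    return llvm::MemoryBuffer::getMemBufferCopy(It->second->getBuffer());
  }
};
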
2014 Jan 26
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
Hi Gael, I tried converting to your approach but I had some issues making sure that all symbols accessed by the jit modules have entries in the dynamic symbol table. To be specific, my current approach is to use MCJIT (using an objectcache) to JIT the runtime module and then let MCJIT handle linking any references from the jit'd modules; I just experimented with what I think you're doing,
2014 Sep 11
2
[LLVMdev] Fail to load a pointer to a function inside MCJIT-ed code when it is reloaded from ObjectCache
Thank you, Lang. I attached the ELF object file here for your reference. Here is the IR dump of the JittedOpExpr LLVM function. The IrExprGetValue1 LLVM function calls the external function expr->evalfunc(expr, econtext, isNull, isDone), which should be pointed to by 0x7fe4801fa1f8. However, the MCJIT-generated object points to expr->evalfunc only the first time; the second time, when the program loads from the object
2013 Nov 18
2
[LLVMdev] (Very) small patch for the jit event listener
Hi Gaël, I would guess that MCJIT is probably attempting to load and link the shared library you return from the ObjectCache in the way it would load and link generated code, which would be wrong for a shared library. I know it seems like it should be easier to handle a shared library than a raw relocatable object (and it probably is) but MCJIT doesn't handle that case at the moment. The
2013 Nov 19
0
[LLVMdev] (Very) small patch for the jit event listener
Hi Andrew, Thank you very much for all your help! So, I have tested without my shared library (with a relocatable object and without), and still my code is not executable. I was testing my code with multiple modules, and I don't know if using multiple modules is fully functional. Anyway, I'm now allocating an MCJIT instance for each function to be sure. But now, I have a new problem that comes
2013 Nov 19
1
[LLVMdev] (Very) small patch for the jit event listener
Hi Gaël, Multiple module support should be fully functional. However, there are some oddities in how MCJIT gets memory ready to execute, particularly if you are using the deprecated getPointerToFunction or runFunction methods. If you use these methods you'll need to call finalizeObject before you execute the code. I've heard reports that there's a bug doing that after adding
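
A minimal sketch of the ordering the reply describes, using the deprecated getPointerToFunction path; runJitted and the void() signature are assumptions for illustration:

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"

// Hypothetical helper: look up and run a JIT'd function with MCJIT.
// The important detail from the reply above: finalizeObject() must run
// before the code is executed when using the older entry points.
void runJitted(llvm::ExecutionEngine &EE, llvm::Module &M, const char *Name) {
  EE.finalizeObject();                       // apply relocations and mark
                                             // the code executable
  if (llvm::Function *F = M.getFunction(Name)) {
    void *Addr = EE.getPointerToFunction(F); // deprecated, per the thread
    reinterpret_cast<void (*)()>(Addr)();    // assumes a void() signature
  }
}
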
2013 Nov 16
0
[LLVMdev] (Very) small patch for the jit event listener
Humph, I think that I have the solution, but I have a new problem (a more serious one). The solution is almost stupidly simple: I have just loaded the shared library in ObjectCache::getObject directly into a MemoryBuffer :) As the linker understands a .o, it understands a .so. Now I'm able to compile a module (I call finalizeObject()) and I'm able to find my first generated function pointer, but I'm
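
A hedged sketch of the workaround described above: getObject simply returns the shared library's bytes as a MemoryBuffer (note that, per the reply elsewhere in this thread, MCJIT does not officially support linking shared libraries this way). SharedLibraryCache and the fixed .so path are placeholders:

#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <memory>

// Illustrative cache that feeds MCJIT a pre-built shared library image.
class SharedLibraryCache : public llvm::ObjectCache {
public:
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    // Nothing to do: the object comes from an ahead-of-time build.
  }

  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    // Placeholder path; real code would map the module to its library.
    auto Buf = llvm::MemoryBuffer::getFile("precompiled.so");
    if (!Buf)
      return nullptr; // fall back to normal code generation
    return std::move(*Buf);
  }
};
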
2010 Aug 23
0
EVE Online crashes at splash screen
...tarted 20:25:26 Starting services Replacing service 'machoNet' with 'eveMachoNet' Service machoNet: 0.001s Replacing service 'dataconfig' with 'eveDataconfig' Service dataconfig: 0.008s Replacing service 'photo' with 'evePhoto' Replacing service 'objectCaching' with 'eveObjectCaching' Service objectCaching: 0.000s Replacing service 'browserHostManager' with 'eveBrowserHostManager' Replacing service 'calendar' with 'eveCalendar' Service addressbook: 0.000s Service counter: 0.000s Service clientStatsSvc: 0.000s S...
2014 Jan 10
4
[LLVMdev] Bitcode parsing performance
Hi all, I'm trying to reduce the startup time for my JIT, but I'm running into the problem that the majority of the time is spent loading the bitcode for my standard library, and I suspect it's due to debug info. My stdlib is currently about 2kloc in a number of C++ files; I compile them with clang -g -emit-llvm, then link them together with llvm-link, call opt -O3 on it, and arrive
2013 Nov 16
2
[LLVMdev] (Very) small patch for the jit event listener
Hi Andrew (hi all :)), I perfectly understand the problem of relocation, and it's really not a problem in my case. I'm still trying to make MCJIT run, but I face a small problem. I have to insert callbacks to the runtime for functions provided by VMKit (for example, a gcmalloc function to allocate memory from the heap). With the old JIT, VMKit simply loads a large bc file that contains all
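
One common way to let MCJIT-compiled code call back into host runtime functions is to register those symbols with the process-level symbol search that the default memory manager consults. A hedged sketch; the gcmalloc body here is a stand-in, not VMKit's real allocator:

#include "llvm/Support/DynamicLibrary.h"
#include <cstddef>
#include <cstdlib>

// Placeholder for the runtime's allocator; VMKit's real gcmalloc differs.
extern "C" void *gcmalloc(std::size_t Size) { return std::malloc(Size); }

// Make the symbol visible to MCJIT's default symbol resolution, so calls
// to gcmalloc in JIT'd code resolve to the host function.
void exposeRuntimeToJit() {
  llvm::sys::DynamicLibrary::AddSymbol("gcmalloc",
                                       reinterpret_cast<void *>(&gcmalloc));
}
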
2019 Aug 16
2
[ORC] [mlir] Dump assembly from OrcJit
+ MLIR dev mailing list, since that's where the OrcJit I'm using is. Thanks for all the details, Lang! What you described is exactly what I'm looking for! Please, MLIR dev, let me know if this debug feature and the solution that Lang describes below are interesting for MLIR. I'll dig more into the details then, but it doesn't seem too complicated. Thanks, Diego From: Lang Hames [mailto:lhames at
2010 Aug 25
1
eve online crashing shortly after takeoff
...started 9:43:37 Starting services Replacing service 'machoNet' with 'eveMachoNet' Service machoNet: 0.001s Replacing service 'dataconfig' with 'eveDataconfig' Service dataconfig: 0.015s Replacing service 'photo' with 'evePhoto' Replacing service 'objectCaching' with 'eveObjectCaching' Service objectCaching: 0.000s Replacing service 'browserHostManager' with 'eveBrowserHostManager' Replacing service 'calendar' with 'eveCalendar' Service addressbook: 0.000s Service counter: 0.000s Service clientStatsSvc: 0.000s S...
2014 Jan 10
3
[LLVMdev] Bitcode parsing performance
That was likely type information and should mostly be fixed up. It's still not lazily loaded, but is going to be ridiculously smaller now. -eric On Fri Jan 10 2014 at 12:11:52 AM, Sean Silva <chisophugis at gmail.com> wrote: > This Summer I was working on LTO and Rafael mentioned to me that debug > info is not lazy loaded, which was the cause for the insane resource usage > I