similar to: Reducing JIT time

Displaying 20 results from an estimated 400 matches similar to: "Reducing JIT time"

2016 Feb 26
2
Heap problems with 3.8.0rc2 in combination with vs2015 sp1
It turns out LLVM initializes parts of the object's memory before the constructor is invoked. Visual Studio has /sdl on by default, and __autoclassinit2 zeroes the memory before the constructor is reached, so everything set up beforehand is reset to zero. That clears the HasHungOffUses flag and sends the delete down the wrong path. When /sdl is enabled, the compiler generates code to perform these checks at run time: —
2016 Feb 25
0
Heap problems with 3.8.0rc2 in combination with vs2015 sp1
I found the root cause, but I don't know what the best approach to fix it is. On Windows, 64-bit, when a function is created, the void *User::operator new(size_t Size) operator allocates space for the object plus a Use*. HasHungOffUses is set to true, and the pointer just past the Use* slot is returned as the new object. This pointer is NOT the pointer that was allocated by the system. For that pointer you need ptr - word
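The layout described above is easier to see with a small standalone illustration. The sketch below is not LLVM's actual User::operator new; it is a hypothetical, simplified version of the same "prefixed allocation" pattern, showing why the pointer handed to the caller is one word past the pointer returned by the system allocator, and why the matching operator delete has to subtract that word again.

#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical illustration of a prefixed allocation, similar in spirit to
// the hung-off-uses case: one extra pointer-sized word sits in front of the
// object, and the caller receives a pointer *past* that word.
struct PrefixAllocated {
  static void *operator new(std::size_t Size) {
    // Raw allocation layout: [ prefix word ][ object ... ]
    void *Raw = std::malloc(sizeof(void *) + Size);
    if (!Raw)
      throw std::bad_alloc();
    *static_cast<void **>(Raw) = nullptr;              // prefix slot (e.g. a Use*)
    return static_cast<char *>(Raw) + sizeof(void *);  // object starts here
  }

  static void operator delete(void *Ptr) {
    // The system pointer is one word *before* the object pointer.
    std::free(static_cast<char *>(Ptr) - sizeof(void *));
  }

  int Payload = 0;
};

int main() {
  auto *P = new PrefixAllocated(); // P != the pointer malloc returned
  delete P;                        // must free P - sizeof(void *)
}

If the flag that records whether such a prefix exists is zeroed before the constructor runs (as described for /sdl above), the delete path ends up handing the object pointer, rather than the real allocation, to the heap, which is exactly the invalid-address report seen in this thread.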
2016 Dec 20
4
thread safety ExecutionEngine::getFunctionAddress
Hi, I'm trying to speed up the JIT time with llvm (3.9.1). So far I've implemented the object cache, used FastISel, and disabled optimizations. JIT time is still too slow for my purpose (I have a lot of code to JIT). http://llvm.org/docs/ProgrammersManual.html#threads-and-the-jit states that we can invoke ExecutionEngine::getPointerToFunction() concurrently. This function was replaced
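For reference, a minimal sketch of the kind of MCJIT setup described above (object cache installed, FastISel requested, optimization level turned down). The Cache argument is assumed to be a user-defined llvm::ObjectCache subclass and M a parsed module; the EngineBuilder calls are the 3.9-era API as I recall it, so treat this as a sketch rather than a drop-in.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/TargetSelect.h"

llvm::ExecutionEngine *buildFastJIT(std::unique_ptr<llvm::Module> M,
                                    llvm::ObjectCache *Cache) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  llvm::TargetOptions Opts;
  Opts.EnableFastISel = true; // prefer FastISel over SelectionDAG where possible

  std::string Err;
  llvm::ExecutionEngine *EE =
      llvm::EngineBuilder(std::move(M))
          .setErrorStr(&Err)
          .setEngineKind(llvm::EngineKind::JIT)
          .setOptLevel(llvm::CodeGenOpt::None) // skip most codegen optimization
          .setTargetOptions(Opts)
          .create();
  if (EE)
    EE->setObjectCache(Cache); // reuse precompiled objects when available
  return EE;
}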
2016 Feb 25
2
Heap problems with 3.8.0rc2 in combination with vs2015 sp1
I made the llvm::Function() constructor public (for testing purposes) and used the non-overloaded new: auto func = ::new llvm::Function(...); if (func) func->eraseFromParent(); And the heap corruption is gone! Did something change in llvm::User::operator new between 3.7.1 and 3.8.0? Have I found a bug in LLVM? On Thu, Feb 25, 2016 at 12:10 PM, koffie drinker <gekkekoe at gmail.com> wrote: > I
2016 Oct 28
4
MCJit and remove module memory leak?
I'm on llvm 3.8.1 and was wondering if there's a memory leak in the removeModule implementation of MCJIT. In the tutorial http://llvm.org/releases/3.8.1/docs/tutorial/LangImpl4.html a module is removed from the JIT by invoking removeModule. According to the tutorial: "Its API is very simple: addModule adds an LLVM IR module to the JIT, making its functions available for execution;
2016 Nov 16
2
MCJit and remove module memory leak?
Hi Kevin, Koffie, > We will start migrating to ORC for next release, but for now, this release > invoke delete after remove right? MCJIT's removeModule method does not delete the module. You'll need to do that manually. OrcMCJITReplacement is a bug-for-bug compatible implementation of MCJIT using ORC components, so it does not free the memory either. Does this mean MCJIT is
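To make the ownership rule above concrete, here is a minimal sketch of what "delete it manually" looks like with MCJIT's API: addModule takes ownership via unique_ptr, removeModule releases the module back to the caller as a raw pointer. The symbol name and helper are placeholders.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/IR/Module.h"

// Assumes EE is a valid MCJIT ExecutionEngine and M was created elsewhere.
void addRunRemove(llvm::ExecutionEngine &EE, std::unique_ptr<llvm::Module> M) {
  llvm::Module *Raw = M.get();        // keep a handle before ownership moves
  EE.addModule(std::move(M));         // the JIT now owns the module
  EE.finalizeObject();

  uint64_t Addr = EE.getFunctionAddress("foo"); // "foo" is a placeholder name
  (void)Addr;                                   // ... call it, etc.

  if (EE.removeModule(Raw))           // releases ownership back to the caller
    delete Raw;                       // MCJIT does NOT delete it for you
}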
2016 Feb 24
2
Heap problems with 3.8.0rc2 in combination with vs2015 sp1
I recently upgraded from llvm 3.7.1 to a pre-release of llvm (3.8.0rc2) in order to test some issues regarding bug 24233. After upgrading I started to see heap corruption messages in VS 2015 SP1 when my program exits: "HEAP[ConsoleEngine.exe]: Invalid address specified to RtlValidateHeap( 0000000000290000, 0000000000318698 )" Initially I only got it in Release build. Debug build seems
2016 Feb 25
0
Heap problems with 3.8.0rc2 in combination with vs2015 sp1
I downloaded 3.8.0rc3 and the problem is also present there. I set a data-access breakpoint on the first function pointer that causes the invalid heap, which lets me break whenever something touches that address. It did not show any double deletes during debugging. Furthermore, I managed to narrow it down to two function calls: // stupid code, but it's just for triggering the heap error auto func =
2016 Jul 29
2
Memory usage with MCJit
Hi Koffie, I'd highly recommend switching to ORC from MCJIT. It is much more flexible when it comes to memory management. > 1. I took the approach of 1 execution engine with multiple modules (I'm not removing modules once they have been added). During profiling, I noticed that the memory usage is high with a lot of code. How can I reduce the memory usage? Is one execution
2016 Mar 02
3
What is the status of clang++ and LLVM on Windows
Hi, I am wondering what the status of Clang++ and LLVM is on the Windows platform. When I last looked at the state of things, proper linking was not available, and more recently I heard that Structured Exception Handling was not working. The status page seems somewhat out of date? Many thanks in advance, Aaron
2016 Jul 25
0
Memory usage with MCJit
+Lang for JIT things. On Sun, Jul 24, 2016 at 8:50 AM koffie drinker via llvm-dev <llvm-dev at lists.llvm.org> wrote: > Hi all, > > I'm building a runtime that can JIT and execute code. I've followed the > kaleidoscope tutorial and had a couple of questions. Basically I have a > pre-compiler that compiles the code to cache objects. These cached objects > are
2016 Jul 24
2
Memory usage with MCJit
Hi all, I'm building a runtime that can JIT and execute code. I've followed the kaleidoscope tutorial and had a couple of questions. Basically I have a pre-compiler that compiles the code to cache objects. These cached objects are then persisted and used to reduce JIT compile time. 1. I took the approach of one execution engine with multiple modules (I'm not removing modules once
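As a reference point for the "one execution engine, multiple modules" approach mentioned above, a minimal sketch; module sources and the symbol name are placeholders.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/IR/Module.h"
#include <vector>

// One ExecutionEngine owning several modules; symbols from any added module
// can be resolved through the same engine. Module contents are assumed to
// come from the user's pre-compiler.
void addAll(llvm::ExecutionEngine &EE,
            std::vector<std::unique_ptr<llvm::Module>> Mods) {
  for (auto &M : Mods)
    EE.addModule(std::move(M)); // the engine keeps every module (and its memory) alive
  EE.finalizeObject();

  // Every added module stays resident, which is where the memory growth
  // reported in this thread comes from.
  uint64_t F = EE.getFunctionAddress("entry_point"); // placeholder symbol
  (void)F;
}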
2016 Jan 21
2
Propagation of foreign c++ exceptions (msvc, x64, llvm 3.7.1, MCJIT) through IR code
Hi all, I have the following code: [use LLVM to generate ir_func()]. Inside ir_func() there's a call to a native C++ function that throws an exception. (Just imagine changing the fibonacci example and calling a native C++ function that throws inside the fibonacci body.) I can't seem to catch "foreign" exceptions, or any exception at all, using the following pseudo code: try { // cast
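For context, a minimal sketch of the calling side described above: cast the JITed address to a function pointer and wrap the call in try/catch. Whether the exception actually propagates depends on the unwind info MCJIT registers on Win64, which is exactly what this thread is about; the names are placeholders.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include <cstdint>
#include <exception>
#include <iostream>

// ir_func is assumed to call a native C++ function that throws.
void callJittedFunction(llvm::ExecutionEngine &EE) {
  using FnTy = int (*)();
  auto Fn = reinterpret_cast<FnTy>(EE.getFunctionAddress("ir_func"));
  try {
    int Result = Fn(); // the exception must unwind through JITed frames here
    std::cout << "result: " << Result << "\n";
  } catch (const std::exception &E) {
    std::cout << "caught: " << E.what() << "\n";
  } catch (...) {
    std::cout << "caught foreign/unknown exception\n";
  }
}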
2015 Aug 25
2
Problem with local context getType() == global context getType()
Hi, I'm experiencing a weird problem with llvm 3.7 (rc2/rc3) that did not occur in llvm 3.6.2. I created a bug for it: https://llvm.org/bugs/show_bug.cgi?id=24521 I'm building an app where multiple code generation can happen in parallel. The documentation states that I need to use separate contexts, and each thread has its own context. When code-generating a constant number I use the *InContext()
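A minimal sketch of the per-thread-context pattern described above: each thread owns its own LLVMContext, and every type and constant for that thread is created against it (the same effect the *InContext()-style calls give you). The struct and names are illustrative, not the poster's code.

#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <memory>

// One of these per code-generating thread.
struct ThreadCodegen {
  llvm::LLVMContext Ctx;                       // this thread's own context
  std::unique_ptr<llvm::Module> M =
      std::make_unique<llvm::Module>("per_thread_module", Ctx);

  llvm::Constant *makeInt(int V) {
    // The type is looked up in Ctx, so the constant is uniqued inside this
    // thread's context only; never mix types from different contexts.
    return llvm::ConstantInt::get(llvm::Type::getInt32Ty(Ctx), V);
  }
};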
2016 Jul 07
2
ObjectCache and getFunctionAddress issue
Hi all, I'm trying to add a pre-compiled object cache to my runtime. I've implemented the object cache as follows: class EngineObjectCache : public llvm::ObjectCache { private: std::unordered_map<std::string, std::unique_ptr<llvm::MemoryBuffer>> CachedObjs; public: virtual void notifyObjectCompiled(const llvm::Module *M, llvm::MemoryBufferRef Obj) { auto id =
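The snippet above is cut off by the archive; below is a minimal, self-contained version of the same kind of ObjectCache subclass (in-memory only, keyed on the module identifier). The class name and map come from the snippet; the getObject side is a reconstruction of the usual pattern, not necessarily the poster's code.

#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <string>
#include <unordered_map>

// In-memory object cache keyed by module identifier. MCJIT calls
// notifyObjectCompiled after codegen and getObject before codegen.
class EngineObjectCache : public llvm::ObjectCache {
  std::unordered_map<std::string, std::unique_ptr<llvm::MemoryBuffer>> CachedObjs;

public:
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    // Copy the finished object file so it outlives the JIT's buffer.
    CachedObjs[M->getModuleIdentifier()] =
        llvm::MemoryBuffer::getMemBufferCopy(Obj.getBuffer(),
                                             Obj.getBufferIdentifier());
  }

  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    auto It = CachedObjs.find(M->getModuleIdentifier());
    if (It == CachedObjs.end())
      return nullptr; // not cached: MCJIT will compile the module
    // Hand back a copy; the JIT takes ownership of what we return.
    return llvm::MemoryBuffer::getMemBufferCopy(It->second->getBuffer(),
                                                It->second->getBufferIdentifier());
  }
};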
2016 Dec 20
0
thread safety ExecutionEngine::getFunctionAddress
> On Dec 20, 2016, at 9:13 AM, koffie drinker via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Hi, > > I'm trying to speed up the JIT time with llvm (3.9.1). > So far I've implemented the object cache, used FastISel and disabled optimizations. I can’t help with the rest, but just wanted to mention that totally disabling the IR optimizations is not necessarily
2016 Mar 09
3
Where is opt spending its time?
I am trying to improve my application's compile-time performance. On a given workload, I take 68 seconds to compile some code. If I disable the LLVM code generation (i.e. I will generate IR instructions, but skip the LLVM optimization and instruction selection steps) then my compile time drops to 3 seconds. If I write out the LLVM IR (just to prove that I am generating it) then my compile
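A minimal sketch (not from this thread) of one way to answer the question above: time the IR optimization pipeline and the codegen/emission step separately. Which passes go into each pass manager is application-specific and elided here; running opt or llc with -time-passes gives a per-pass breakdown without writing any code, which may be the quicker answer.

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include <chrono>
#include <iostream>

// Measures the two phases separately; the pass managers are assumed to be
// populated elsewhere with the application's optimization and codegen passes.
void timePhases(llvm::Module &M, llvm::legacy::PassManager &OptPM,
                llvm::legacy::PassManager &CodeGenPM) {
  using Clock = std::chrono::steady_clock;

  auto T0 = Clock::now();
  OptPM.run(M);      // IR-level optimization pipeline
  auto T1 = Clock::now();
  CodeGenPM.run(M);  // instruction selection / emission pipeline
  auto T2 = Clock::now();

  auto MS = [](auto D) {
    return std::chrono::duration_cast<std::chrono::milliseconds>(D).count();
  };
  std::cout << "opt: " << MS(T1 - T0) << " ms, "
            << "codegen: " << MS(T2 - T1) << " ms\n";
}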
2016 Dec 22
0
thread safety ExecutionEngine::getFunctionAddress
So I've written code to invoke getFunctionAddress() in parallel. I verified that the code was sound by substituting getFunctionAddress() with a bunch of bogus computations. It seems that the code with getFunctionAddress() is being serialized. Is there a giant lock somewhere per ExecutionEngine? I have one execution engine that holds all the modules. Going through the llvm-dev list archives,
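For reference, a minimal sketch of the kind of parallel lookup test described above: several threads resolving symbols through the same shared ExecutionEngine. Symbol names are placeholders; whether the calls actually run concurrently is the open question in this thread.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include <cstdint>
#include <string>
#include <thread>
#include <vector>

// Launch one thread per symbol, all going through the single shared engine.
void lookupInParallel(llvm::ExecutionEngine &EE,
                      const std::vector<std::string> &Symbols) {
  std::vector<std::thread> Workers;
  std::vector<uint64_t> Addrs(Symbols.size());

  for (size_t I = 0; I != Symbols.size(); ++I)
    Workers.emplace_back([&EE, &Addrs, &Symbols, I] {
      Addrs[I] = EE.getFunctionAddress(Symbols[I]); // may serialize internally
    });

  for (auto &W : Workers)
    W.join();
}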
2011 May 13
2
L'abbe plot
I cannot seem to get a L'abbe plot to work in R. I do not understand what the X coordinates, or alternatively an object of class metabin, are supposed to mean. What is an object of class metabin? Institute of Behavioral Genetics University of Colorado, Boulder Whitney.Melroy at Colorado.EDU
2017 Oct 23
2
EnableFastISel
Hi, In SelectionDAGISel::SelectAllBasicBlocks there is: if (TM.Options.EnableFastISel) FastIS = TLI->createFastISel(*FuncInfo, LibInfo); followed by: if (!FastIS) { LowerArguments(Fn); } else { The above implies that implementing FastIS is optional. In contrast to that, testing whether FastIS is actually being used is done by testing whether TM.Options.EnableFastISel is set. For example